Beyond the Model: Maximizing Trading Utility with Limited Capital

Author: Denis Avetisyan


This review explores how to achieve optimal trading strategies for derivative contracts when faced with real-world capital constraints and uncertainty in valuation models.

The analysis demonstrates that, under the established parameters, the utility function reaches its optimum when <span class="katex-eq" data-katex-display="false">\lambda \approx 3.1</span>, indicating a specific value maximizes the defined benefit.

The paper investigates utility maximization under model-independent constraints, focusing on admissible strategies, robust pricing, and dynamic hedging techniques.

Traditional utility maximization problems assume complete market alignment, yet practical portfolios often involve derivative contracts valued with inherent model risk. This paper, ‘Utility Maximisation with Model-independent Constraints’, addresses this discrepancy by formulating an optimal trading strategy subject to pathwise constraints derived from model-independent valuation bounds. We demonstrate that, under complete markets, the optimal terminal wealth can be expressed using max-plus decompositions, and provide explicit solutions for the Black-Scholes-Merton model, enabling numerical analysis of robust hedging strategies. How can these findings be extended to incomplete markets and more complex derivative structures to further refine dynamic portfolio allocation?


The Illusion of Precision: Deconstructing Derivative Pricing

Established derivative pricing models, prominently including the Black-Scholes framework, frequently operate under assumptions that diverge significantly from actual market conditions, creating potential for both mispricing and heightened risk. These models often presume constant volatility, efficient markets, and normally distributed asset returns – characteristics rarely observed in practice. The simplification, while mathematically convenient, neglects the realities of volatility clustering, jump diffusion, and the presence of transaction costs. Consequently, reliance on these models can lead to an underestimation of potential losses, particularly during periods of market stress, and ultimately compromise the accuracy of portfolio valuations and risk assessments. The inherent disconnect between model assumptions and market behavior underscores the necessity for more sophisticated pricing techniques and robust risk management strategies.
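
The constant-volatility assumption is easy to see in code. Below is a minimal sketch of the standard textbook Black-Scholes call formula (an illustration, not code from the paper): the entire future of the market is compressed into a single constant `sigma`, which is precisely the simplification the critique above targets.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price. Every assumption criticized in
    the text is visible here: one constant sigma, lognormal returns,
    no transaction costs, no jumps."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2), 2))  # 10.45
```

A model this compact is convenient, but any volatility clustering or jump behavior in the data has nowhere to go except into mispricing.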

Traditional derivative pricing frameworks often falter when confronted with complex financial instruments whose value depends not just on the final price of an asset, but on the path it takes to get there. Instruments like one-touch options – which pay out only if an asset reaches a certain price at any point during the option’s life – present a significant challenge, as their valuation requires simulating numerous potential price trajectories. Compounding this issue is the inherent uncertainty in the models themselves; the assumption that a single model perfectly captures market dynamics is demonstrably false. This model uncertainty – the risk that the chosen model is simply incorrect – is rarely incorporated into pricing calculations, leading to potentially significant underestimation of risk. Consequently, a demand exists for more sophisticated methodologies, such as Monte Carlo simulation and robust optimization, capable of handling path dependency and explicitly acknowledging the limitations of any single pricing model.
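
Because a one-touch payoff depends on the whole trajectory, terminal-price formulas do not suffice. A Monte Carlo sketch (illustrative parameter values, assuming geometric Brownian motion; not the paper's method) must monitor every step of each simulated path:

```python
import random
from math import exp, sqrt

def one_touch_mc(S0, barrier, T, r, sigma, n_paths=10_000, n_steps=100, seed=42):
    """Monte Carlo value of a one-touch option paying 1 at expiry if a
    GBM path ever reaches the upper barrier. Path dependency forces us
    to simulate whole trajectories, not just terminal prices."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        s = S0
        for _ in range(n_steps):
            s *= exp(drift + vol * rng.gauss(0.0, 1.0))
            if s >= barrier:   # the path-dependent event
                hits += 1
                break
    return exp(-r * T) * hits / n_paths

price = one_touch_mc(S0=100.0, barrier=120.0, T=1.0, r=0.05, sigma=0.2)
print(round(price, 3))
```

Note that even this estimate inherits model risk: it is only as good as the GBM dynamics assumed inside the loop.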

The reliance on simplified models in derivative pricing introduces vulnerabilities that can translate directly into substantial financial repercussions for investors. While mathematically elegant, these approaches often fail to capture the complexities of real-world market dynamics, leading to mispriced assets and an underestimation of inherent risk. This disconnect isn’t merely theoretical; it manifests as unexpected losses during periods of market stress or when dealing with exotic instruments. Consequently, effective risk management becomes significantly hampered, as standard metrics derived from flawed models offer a misleading picture of potential downsides. The inability to accurately assess exposure can lead to inadequate hedging strategies and ultimately, the erosion of capital, highlighting a critical need for more sophisticated and robust valuation techniques.

In the Black-Scholes-Merton model, the function ρ represents the hedging cost when <span class="katex-eq" data-katex-display="false">S_{t}^{D} = K^{D}</span>, while φ denotes the process value along that line.

Defining True Value: A Foundation of Lower Bounds

The conventional approach to derivative pricing centers on identifying a single, theoretically ‘fair’ price; however, this methodology is susceptible to model risk. Instead, we define intrinsic value not as a target price, but as a lower bound on valuation. This lower bound is designed to prevent losses arising from any plausible model specification. By establishing this constraint, the intrinsic value effectively creates a safety threshold; any model predicting a value below this threshold is considered unacceptable. This approach prioritizes risk mitigation by ensuring that, regardless of the model used, the derivative’s value will not fall below the defined intrinsic value, thereby protecting the investor from adverse outcomes.

The intrinsic value, as defined within this framework, functions as a limiting factor on permissible trading strategies by establishing a ‘no-arbitrage’ boundary within the model space. This boundary is mathematically represented by the condition <span class="katex-eq" data-katex-display="false">W_T^{\pi,C} \geq -D_T^{-1}\alpha</span>, where <span class="katex-eq" data-katex-display="false">W_T^{\pi,C}</span> represents the wealth at time <span class="katex-eq" data-katex-display="false">T</span> of the trading strategy <span class="katex-eq" data-katex-display="false">(\pi, C)</span>, <span class="katex-eq" data-katex-display="false">D_T</span> is the discount factor at time <span class="katex-eq" data-katex-display="false">T</span>, and <span class="katex-eq" data-katex-display="false">\alpha</span> represents a risk aversion parameter. Strategies violating this inequality are deemed unacceptable as they imply potential losses based on model parameters, effectively creating a region where arbitrage opportunities would exist. Consequently, all valid strategies are constrained to operate within the defined region, ensuring a minimum level of financial protection and stability.

Establishing a constraint on trading strategies, rather than relying on a singular model for valuation, provides a guaranteed minimum level of investor protection. This is achieved through the definition of fair valuation as a càdlàg ℚ-ℱ-submartingale, specifically <span class="katex-eq" data-katex-display="false">D_T \mathfrak{I}_t(C_T)</span>, where <span class="katex-eq" data-katex-display="false">D_T</span> represents the discounting factor, <span class="katex-eq" data-katex-display="false">\mathfrak{I}_t</span> denotes the valuation given the information available at time <span class="katex-eq" data-katex-display="false">t</span>, and <span class="katex-eq" data-katex-display="false">C_T</span> signifies the contingent claim. The càdlàg property ensures right-continuity and finite limits, preventing abrupt valuation shifts, while the submartingale condition guarantees that the conditional expectation of future value, given current information, is greater than or equal to the current value, thus avoiding potential losses and establishing a robust valuation framework independent of specific model assumptions.

The optimal consumption rate <span class="katex-eq" data-katex-display="false">r^{*}(M(\lambda), \lambda)</span> decreases with increasing λ and is zero for small values of λ when the intermediate wealth constraint is not binding, consistent with the parameters from Figure 3.

Admissible Strategies and Superreplication: Ensuring Portfolio Integrity

An admissible strategy in financial derivative pricing is defined by its capacity to maintain a portfolio value consistently above the intrinsic value of the derivative throughout the life of the contract. This characteristic ensures that the replicating portfolio generates a return sufficient to cover the cost of the derivative, thereby eliminating the possibility of a negative payoff, even under the most unfavorable market scenarios. Formally, if <span class="katex-eq" data-katex-display="false">V(t)</span> represents the portfolio value at time <span class="katex-eq" data-katex-display="false">t</span> and <span class="katex-eq" data-katex-display="false">H(t)</span> denotes the intrinsic value of the derivative at time <span class="katex-eq" data-katex-display="false">t</span>, an admissible strategy satisfies the condition <span class="katex-eq" data-katex-display="false">V(t) \ge H(t)</span> for all <span class="katex-eq" data-katex-display="false">t</span> within the contract’s duration. This non-negativity constraint is fundamental to risk-neutral valuation and ensures the absence of arbitrage opportunities.
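
The admissibility condition is simple to state in code. A minimal sketch (hypothetical values, discrete monitoring dates) checks the pathwise domination condition V(t) ≥ H(t):

```python
def is_admissible(portfolio_values, intrinsic_values):
    """Pathwise admissibility check: the hedge portfolio must dominate
    the derivative's intrinsic value at every monitoring date."""
    return all(v >= h for v, h in zip(portfolio_values, intrinsic_values))

V = [10.0, 11.2, 9.5, 12.0]   # hypothetical portfolio values V(t)
H = [9.0, 10.0, 9.5, 11.0]    # hypothetical intrinsic values H(t)
print(is_admissible(V, H))    # True: V(t) >= H(t) holds throughout
```

In continuous time the check applies at every instant, but the discrete version conveys the idea: a single violation anywhere along the path disqualifies the strategy.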

Superreplication involves constructing a self-financing portfolio composed of the underlying asset and a risk-free asset, designed to replicate the payoff of a derivative contract. This method guarantees a return that is always greater than or equal to the derivative’s payoff, irrespective of the actual price path of the underlying asset. The principle relies on identifying a range of possible future values for the underlying and creating a portfolio that can achieve at least the minimum required payoff in all scenarios within that range. Mathematically, superreplication seeks a portfolio strategy <span class="katex-eq" data-katex-display="false">\Pi</span> whose terminal wealth satisfies <span class="katex-eq" data-katex-display="false">W_{\Pi}(T) \geq V(T)</span> in every scenario, where <span class="katex-eq" data-katex-display="false">V(T)</span> represents the derivative’s payoff at maturity <span class="katex-eq" data-katex-display="false">T</span>. This approach does not necessarily minimize the replication cost, but prioritizes ensuring a payoff that meets or exceeds the target derivative’s value under any market condition.
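
In a one-step binomial model the idea reduces to a small linear problem: find the cheapest stock-and-bond portfolio that dominates the payoff in both states. A sketch with illustrative numbers (in this complete-market toy case, the cheapest dominating portfolio happens to replicate the payoff exactly):

```python
def one_step_superreplication(S0, Su, Sd, payoff_u, payoff_d, r=0.0):
    """Cheapest stock/bond portfolio whose time-T value dominates the
    payoff in both states of a one-step binomial model. With two states
    and two instruments, the cheapest dominating portfolio replicates
    the payoff exactly."""
    shares = (payoff_u - payoff_d) / (Su - Sd)   # hedge ratio (delta)
    bond_at_T = payoff_u - shares * Su           # riskless position at T
    cost = shares * S0 + bond_at_T / (1.0 + r)   # setup cost today
    return shares, cost

# Call struck at 100 in a 100 -> {120, 80} one-step tree (illustrative)
shares, cost = one_step_superreplication(100.0, 120.0, 80.0, 20.0, 0.0)
print(shares, cost)  # 0.5 10.0
```

With more states than instruments (an incomplete market), domination in every state generally costs more than replication in expectation, which is exactly the superreplication premium.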

Both static hedging and dynamic trading strategies represent viable approaches to constructing admissible solutions for derivative replication. Static hedging involves determining a fixed position in the underlying asset at the outset, ensuring the portfolio value consistently meets or exceeds the intrinsic value of the derivative throughout its lifespan. Conversely, dynamic trading strategies necessitate continuous adjustments to the portfolio based on evolving market conditions and the underlying asset’s price, aiming to maintain a value equal to or greater than the derivative’s intrinsic value at all times. These strategies differ in complexity and cost – static hedges are simpler to implement but may require larger initial capital, while dynamic strategies incur transaction costs but potentially offer more efficient replication. The choice between them depends on factors like market liquidity, transaction costs, and the specific characteristics of the derivative being hedged.
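
A dynamic strategy can be sketched as discrete delta hedging under Black-Scholes assumptions (a standard textbook illustration, not the paper's construction): the hedge ratio is recomputed at each step, and the cumulative self-financing cost of the rebalanced portfolio approximates the model price of the option.

```python
import random
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_delta(S, K, tau, r, sigma):
    """Black-Scholes call delta: the dynamic hedge ratio."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)

def replication_cost(S0=100.0, K=100.0, T=1.0, r=0.0, sigma=0.2,
                     n_steps=252, seed=7):
    """Dynamically delta-hedge a call along one simulated GBM path.
    The self-financing cost of running the hedge (payoff minus final
    portfolio value) should approximate the model price, up to
    discrete-hedging error."""
    rng = random.Random(seed)
    dt = T / n_steps
    s = S0
    delta = bs_delta(s, K, T, r, sigma)
    cash = -delta * s                      # borrow to buy the initial hedge
    for i in range(1, n_steps):
        s *= exp((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.gauss(0.0, 1.0))
        new_delta = bs_delta(s, K, T - i * dt, r, sigma)
        cash -= (new_delta - delta) * s    # self-financing rebalance
        delta = new_delta
    s *= exp((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.gauss(0.0, 1.0))
    payoff = max(s - K, 0.0)
    return payoff - (delta * s + cash)
```

The residual gap between this cost and the theoretical price illustrates the transaction-frequency trade-off mentioned above: more rebalancing dates shrink the hedging error but, in practice, raise transaction costs.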

The certainty equivalent of a trader employing a semi-static hedge demonstrates that the minimal-cost, Hobson-optimal strike <span class="katex-eq" data-katex-display="false">K</span> maximizes profit, starting from an initial wealth of 0.1.

Calculating the Constraint: Snell Envelopes and Local Time – A Rigorous Determination

The Snell envelope, a core concept in stochastic control, provides a method for determining the intrinsic value of a derivative by identifying the largest submartingale that bounds its potential payoff. This submartingale represents the maximum amount an agent can guarantee receiving at any future time, given the current state of the asset. Mathematically, the Snell envelope <span class="katex-eq" data-katex-display="false">S(x)</span> is defined as the solution to the dynamic programming equation <span class="katex-eq" data-katex-display="false">S(x) = \max_{a} \mathbb{E}\left[R(x,a) + \beta S(X(t+1)) \mid X(t)=x\right]</span>, where <span class="katex-eq" data-katex-display="false">R</span> is the immediate reward, <span class="katex-eq" data-katex-display="false">\beta</span> is a discount factor, and <span class="katex-eq" data-katex-display="false">X</span> represents the asset’s state. By constructing this envelope, one effectively establishes an upper bound on the rational payoff, which is critical for optimal decision-making under uncertainty and the formulation of admissible trading strategies.
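
On a recombining binomial tree the dynamic programming equation above reduces to a short backward induction. The sketch below is the standard textbook construction (with `p_up` and `discount` as illustrative parameters, not values from the paper): at each node the envelope is the larger of stopping now and the discounted expected value of continuing.

```python
def snell_envelope(payoff, p_up, discount):
    """Snell envelope by backward induction on a recombining binomial
    tree. payoff[t][j] is the immediate payoff at step t after j
    up-moves; the recursion is env = max(stop now, discounted
    expectation of continuing)."""
    env = [row[:] for row in payoff]
    for t in range(len(payoff) - 2, -1, -1):
        for j in range(t + 1):
            cont = discount * (p_up * env[t + 1][j + 1]
                               + (1.0 - p_up) * env[t + 1][j])
            env[t][j] = max(payoff[t][j], cont)
    return env

# Put-like payoffs on a two-step tree (illustrative numbers)
payoff = [[1.0], [2.0, 0.0], [3.0, 1.0, 0.0]]
env = snell_envelope(payoff, p_up=0.5, discount=1.0)
print(env[0][0])  # 1.25: continuing is worth more than stopping for 1.0
```

The root value of the envelope is exactly the bound on the rational payoff described above.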

Local time, denoted as <span class="katex-eq" data-katex-display="false">L_t(x)</span>, quantifies the cumulative duration that a diffusion process, such as an asset price, spends at a specific price level <span class="katex-eq" data-katex-display="false">x</span> up to time <span class="katex-eq" data-katex-display="false">t</span>. Its calculation isn’t a simple integral of time spent at <span class="katex-eq" data-katex-display="false">x</span> due to the continuous nature of the process; instead, it’s formally defined as the limit of the accumulated time the process remains within an infinitesimal neighborhood of <span class="katex-eq" data-katex-display="false">x</span>. This value is essential for determining the reflection boundary in the Snell envelope calculation because it directly impacts the cumulative reward accrued by the trading strategy at that level. Accurate computation of local time is critical for correctly defining the intrinsic budget constraint, as it dictates the precise point at which the strategy’s cumulative gains are maximized while remaining within the bounds of admissibility.
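
Local time can be approximated from a simulated path via its occupation-time characterization: the time spent in an ε-neighbourhood of the level, scaled by 1/(2ε). A sketch for standard Brownian motion (illustrative discretization, not the paper's method):

```python
import random
from math import sqrt

def brownian_path(T=1.0, n=100_000, seed=1):
    """Simulate a standard Brownian path on [0, T] with n steps."""
    rng = random.Random(seed)
    dt = T / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(w)
    return path, dt

def local_time_estimate(path, dt, level, eps):
    """Occupation-time approximation of local time at `level`:
    L_T(x) ~ (1 / (2*eps)) * time spent with |W_u - x| < eps."""
    time_near = dt * sum(1 for w in path if abs(w - level) < eps)
    return time_near / (2.0 * eps)

path, dt = brownian_path()
print(round(local_time_estimate(path, dt, level=0.0, eps=0.05), 3))
```

Shrinking `eps` (while refining the time grid) tightens the approximation toward the true local time, mirroring the limiting definition in the text.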

The intrinsic budget constraint is precisely defined as the largest submartingale, calculated via the Snell envelope, which bounds the derivative’s payoff and represents the maximum amount of value an agent can reliably extract. This calculation incorporates local time, quantifying the duration an asset’s price remains at specific levels, to accurately determine the boundary of this constraint. Admissible strategies, those that do not violate this boundary, are thereby ensured to be self-financing and avoid arbitrage opportunities, guaranteeing their practical effectiveness within the defined financial model. Failure to adhere to the constraint, determined by the Snell envelope and local time, would indicate a strategy reliant on unsustainable price movements or an overestimation of potential gains.

Beyond Pricing: A Foundation for Robust Risk Management – The True Objective

Traditional financial modeling often prioritizes pinpoint accuracy in pricing assets, yet overlooks the inherent uncertainty within those models themselves. This framework represents a fundamental shift, prioritizing robust risk management over precise valuation. Rather than striving for a single ‘correct’ price, it establishes a guaranteed lower bound on potential losses, safeguarding against model misspecification or unforeseen market events. This approach doesn’t eliminate risk, but rather defines an acceptable level of protection, allowing investors to confidently navigate complex financial landscapes. By focusing on downside protection, the framework empowers decision-makers to build resilient portfolios and ensure long-term stability, ultimately determining the optimal terminal wealth as defined by <span class="katex-eq" data-katex-display="false">W_T^{\pi} = \xi_T\left[\left(\sup_{0\leq u\leq T} J_u^{\zeta}\right)\vee M\right]</span>.

A core principle of effective financial strategy lies in understanding and limiting potential downsides. Rather than solely focusing on maximizing potential gains, this approach prioritizes establishing a guaranteed minimum level of performance, even under unfavorable market circumstances. By rigorously defining a lower bound on possible losses, investors gain the capacity to make substantially more informed decisions, moving beyond speculative projections to a position of quantifiable risk management. This allows for the construction of portfolios that are not only designed for growth but are also demonstrably resilient against adverse events, fostering long-term stability and preserving capital during periods of volatility. The ability to confidently assess worst-case scenarios empowers stakeholders to allocate resources more effectively and navigate complex financial landscapes with increased assurance.

This framework transcends the limitations of traditional derivative pricing, offering a comprehensive approach to managing the intricacies of modern financial portfolios and bolstering long-term stability. Rather than solely focusing on pinpoint accuracy in pricing, it establishes a mechanism for guaranteeing a minimum level of protection against the inherent uncertainties of financial modeling. This allows for a proactive assessment of potential downside risks and a more informed allocation of capital. The culmination of this risk management strategy is the determination of optimal terminal wealth, mathematically represented as <span class="katex-eq" data-katex-display="false">W_T^{\pi} = \xi_T\left[\left(\sup_{0\leq u\leq T} J_u^{\zeta}\right)\vee M\right]</span>, where <span class="katex-eq" data-katex-display="false">W_T^{\pi}</span> signifies terminal wealth, <span class="katex-eq" data-katex-display="false">\xi_T</span> represents a scaling factor, <span class="katex-eq" data-katex-display="false">J_u^{\zeta}</span> encapsulates the investment strategy, and <span class="katex-eq" data-katex-display="false">M</span> defines a minimum acceptable wealth level; the maximum operator <span class="katex-eq" data-katex-display="false">\vee</span> ensures the investor achieves at least this minimum, even under adverse conditions, thereby fundamentally reshaping portfolio construction and risk mitigation practices.

The expected utility of a trader increases with initial wealth <span class="katex-eq" data-katex-display="false">w_0</span> and is maximized by effectively hedging against vanilla call options with varying strikes <span class="katex-eq" data-katex-display="false">K</span>.

The pursuit of utility maximization, as detailed in the paper, necessitates a rigorous examination of constraints – not merely those dictated by a specific model, but those inherent to the very nature of valuation. This echoes the sentiment expressed by Confucius: “To know what you know and what you do not know, that is true knowledge.” The paper’s focus on model-independent constraints – seeking strategies robust against model misspecification – exemplifies this principle. One must first understand the limitations of any assumed framework before attempting to optimize within it. The concept of superreplication, ensuring a risk-free hedge, requires precisely this awareness of what remains invariant as market conditions shift – a form of applied epistemological humility. Let N approach infinity – what remains invariant is the fundamental need for a strategy grounded in intrinsic value, independent of transient model assumptions.

Beyond the Hedged Bet

The pursuit of utility maximization, even under constraints seemingly well-defined by intrinsic value, reveals a persistent unease. The work demonstrates, with mathematical rigor, what can be achieved when one abandons reliance on specific stochastic models. However, it simultaneously highlights the inherent difficulty in fully escaping model dependence. Constraints, even those framed as ‘model-independent,’ are, at their core, mathematical abstractions – and all abstractions involve a degree of simplification, a deliberate neglect of reality’s infinite complexity. The question is not whether one can eliminate model risk, but whether one can achieve a level of robustness that justifies the computational cost of approaching it.

Future research must address the limits of tractability. The elegant frameworks developed here, while theoretically sound, quickly become computationally burdensome as dimensionality increases. A fruitful direction lies in exploring approximation techniques that preserve the mathematical purity of the approach, rather than sacrificing it for the sake of speed. The current emphasis on superreplication, while valuable, may obscure the possibility of more nuanced, probabilistic solutions. Perhaps, the true goal isn’t to eliminate all risk, but to understand it, and to price it accordingly – a challenge requiring not merely computational power, but a deeper appreciation for the fundamental limitations of any predictive model.

Ultimately, this work serves as a reminder: the search for an optimal trading strategy is not merely an engineering problem, but a philosophical one. It forces a confrontation with the fundamental ambiguity of value, and the impossibility of truly knowing the future. The algorithms may become more sophisticated, the computations faster, but the underlying uncertainty will remain – a constant, irreducible element in the equation.


Original article: https://arxiv.org/pdf/2512.24371.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2026-01-03 22:30