Author: Denis Avetisyan
New research suggests that trade isn’t solely driven by information gaps, but can emerge from the computational constraints and strategic choices of even highly capable artificial intelligence.

This paper demonstrates that trade can arise even with common knowledge and powerful agents, by computationally inverting the No-Trade Theorem and exploring the behavior of almost-rational agents in unfolding games.
Classic economic models posit that trade arises from differences in beliefs, yet this paper, ‘Will AI Trade? A Computational Inversion of the No-Trade Theorem’, challenges this notion by demonstrating that trade can emerge even with common knowledge and rational agents. Through an unfolding game framework modeling bounded computational rationality, we find a paradoxical result: stable outcomes require slight disparities in agents’ computational power, while identical power can lead to persistent strategic adjustments resembling trade. This instability is amplified when agents strategically underutilize resources, precluding equilibrium even in simple scenarios. Does this suggest that the computational limitations of artificial intelligence may foster a more dynamic, and potentially unpredictable, economic landscape than previously understood?
The Illusion of Rationality: Why Models Fail
Conventional game theory operates on the premise of complete rationality – that individuals consistently make optimal decisions given their information and preferences – and common knowledge, meaning all players understand this rationality and each other’s understanding of it. However, this foundation frequently clashes with observed human behavior, as real-world actors are susceptible to cognitive biases, emotional influences, and limited computational abilities. The resulting disconnect manifests in scenarios where individuals deviate from predicted “rational” choices, opting instead for heuristics or satisficing strategies. Consequently, models built upon these assumptions often struggle to accurately predict outcomes in complex social or economic interactions, highlighting the need for approaches that incorporate the psychological realities of decision-making and acknowledge the inherent bounds on cognitive processing.
The No-Trade Theorem captures a surprising result in economic modeling: when rational agents share common knowledge and common prior beliefs, purely speculative trade should never occur, because a counterparty's willingness to trade is itself evidence that the deal is unfavorable. The computational inversion explored here relocates the obstacle: the interesting failures are not failures of information but consequences of computational complexity, since determining whether a proposed trade is genuinely advantageous can be an extremely hard problem, loosely akin to the traveling salesperson problem. As the number of agents or assets increases, the burden of that calculation grows exponentially, and trades that are theoretically advantageous become too expensive to discover, let alone execute. Consequently, markets can appear inefficient, failing to reach the equilibria predicted by standard theory, not because of irrationality but because of the inherent limits of calculation and cognitive processing.
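To get a feel for the scale involved, here is a deliberately crude back-of-envelope sketch; the setup is illustrative and not taken from the paper. If each of k indivisible assets can end up with any of n agents, a brute-force search over final allocations already faces n^k possibilities.

```python
# Illustrative only: counts the allocations a brute-force trade search would
# face if each of k indivisible assets can go to any of n agents.
# The n**k figure is a toy bound, not a quantity from the paper.

def allocation_count(n_agents: int, n_assets: int) -> int:
    """Number of ways to assign every asset to exactly one agent."""
    return n_agents ** n_assets

for n_assets in (5, 10, 20, 40):
    print(f"4 agents, {n_assets} assets -> {allocation_count(4, n_assets):,} allocations")
```

Even with only four agents, forty assets already yield on the order of 10^24 candidate allocations, which is the sense in which exhaustive optimization stops being a realistic description of what any participant can do.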
The predictive power of traditional game theory often falters not because individuals are irrational, but because real-world decision-making is fundamentally constrained by cognitive limitations and finite computational power. Standard models assume agents can flawlessly process information and calculate optimal strategies, a capacity rarely, if ever, met in practice. Faced with scenarios of even moderate complexity – consider a chess game with dozens of potential moves, or an economic negotiation with countless variables – individuals rely on heuristics, approximations, and simplified models to make choices within a reasonable timeframe. This reliance introduces deviations from the perfectly rational behavior predicted by theory, as agents effectively trade off optimal outcomes for computational feasibility. Consequently, departures from the No-Trade Theorem's predictions and similar anomalies aren't necessarily evidence of irrationality, but rather a demonstration that bounded rationality – the explicit acknowledgement of these cognitive and computational limits – is crucial for accurately modeling economic and strategic interactions.
Accounting for the Limits of Thought
Computational rationality departs from traditional rational choice theory by acknowledging and integrating the constraints of limited computational resources. Standard economic models often assume agents can perfectly optimize decisions, requiring unlimited processing power and memory; however, real-world agents operate with finite cognitive abilities. This framework recognizes that the complexity of a strategy, including its memory requirements and processing steps, directly impacts an agent’s ability to implement it. Consequently, computational rationality models agents as having bounded rationality, where their decisions are rational given their computational limitations, rather than seeking globally optimal solutions that may be computationally intractable. This approach allows for the development of more realistic and predictive models of behavior in complex environments by explicitly accounting for the trade-offs between optimality and computational cost.
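As a minimal illustration of that trade-off, the sketch below assumes a hypothetical menu of strategies, each with an expected payoff and a complexity measured in automaton states, plus a made-up linear cost per state; none of these numbers come from the paper. The computationally rational agent maximizes payoff net of cost rather than payoff alone.

```python
# Minimal sketch of computationally rational strategy selection.
# The strategies, payoffs, and cost model below are hypothetical; the point is
# only that the chosen strategy maximizes payoff net of computational cost.

strategies = {
    # name: (expected_payoff, complexity, e.g. automaton states required)
    "always_defect":  (0.50, 1),
    "tit_for_tat":    (0.62, 2),
    "deep_lookahead": (0.70, 64),
}

COST_PER_STATE = 0.005  # assumed price of one unit of strategic complexity

def net_value(payoff: float, complexity: int) -> float:
    return payoff - COST_PER_STATE * complexity

best = max(strategies.items(), key=lambda kv: net_value(*kv[1]))
print("computationally rational choice:", best[0])
# With these made-up numbers, tit_for_tat beats deep_lookahead once the cost
# of maintaining 64 states is charged against its higher raw payoff.
```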
The Unfolding Game formalism analyzes strategic interactions not as static, single-move decisions, but as potentially infinite sequences of actions and observations. This transforms a game into a tree-like structure where each node represents a decision point and branches represent possible actions. By explicitly representing the temporal dimension, the computational cost of a strategy can be determined by assessing the depth and branching factor of the tree required to implement it. Strategies that require exploring deeper or wider portions of the unfolding game tree incur higher computational costs, as the agent must maintain and process information about a larger number of potential future states. This allows for a quantifiable assessment of strategy complexity, moving beyond purely qualitative descriptions and enabling comparative analysis of different approaches within a given game.
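A small sketch of what "unfolding" means in practice, under assumed parameters (a two-action stage game and short horizons): the stage game is expanded into the tree of joint-action histories, and the number of histories a fully history-dependent strategy must distinguish grows geometrically with the horizon.

```python
# Sketch of an "unfolded" game: a repeated stage game becomes a tree of
# histories, and a strategy's cost is tied to how much of that tree it must
# distinguish. The branching factor and horizons are illustrative.

from itertools import product

ACTIONS = ("H", "T")          # stage-game actions (Matching Pennies style)

def histories(depth: int):
    """All joint-action histories of exactly this length."""
    return list(product(product(ACTIONS, repeat=2), repeat=depth))

def nodes_up_to(depth: int) -> int:
    """Decision nodes a fully history-dependent strategy must distinguish."""
    b = len(ACTIONS) ** 2     # joint actions per stage
    return sum(b ** d for d in range(depth + 1))

assert len(histories(3)) == 4 ** 3   # sanity check on the branching factor

for depth in (1, 3, 6, 10):
    print(f"horizon {depth}: {nodes_up_to(depth):,} histories to distinguish")
```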
Finite Automata (FA) are employed as a computational model of boundedly rational agents due to their defined state space and transition functions, which limit strategic complexity. An FA consists of a finite set of states, a set of input symbols, a transition function mapping states and inputs to new states, an initial state, and a set of accepting states. In the context of game theory, each state represents an agent’s belief about the game’s history, and transitions are determined by observed actions and the agent’s strategy. The number of states in the FA directly corresponds to the computational resources required to implement a given strategy; strategies requiring more states are computationally more expensive. Consequently, FA provide a quantifiable measure of complexity, allowing researchers to analyze the trade-offs between strategic sophistication and computational cost, and to model agents that choose strategies based on their available resources.
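For concreteness, here is a minimal finite-automaton strategy in this spirit. The two-state "copy the opponent's last move" machine is an illustrative construction, not one taken from the paper, but it shows how the state count serves as the complexity measure.

```python
# Minimal finite-automaton strategy: the number of states is the complexity
# measure discussed above. The specific automaton is illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class AutomatonStrategy:
    states: tuple            # finite state set
    start: str               # initial state
    action: dict             # state -> action played in that state
    transition: dict         # (state, observed opponent action) -> next state

    @property
    def complexity(self) -> int:
        return len(self.states)

# Two-state automaton: play whatever the opponent played last.
copycat = AutomatonStrategy(
    states=("SAW_H", "SAW_T"),
    start="SAW_H",
    action={"SAW_H": "H", "SAW_T": "T"},
    transition={("SAW_H", "H"): "SAW_H", ("SAW_H", "T"): "SAW_T",
                ("SAW_T", "H"): "SAW_H", ("SAW_T", "T"): "SAW_T"},
)

state = copycat.start
for opponent_move in ["T", "T", "H"]:
    print("automaton plays", copycat.action[state])
    state = copycat.transition[(state, opponent_move)]

print("complexity (states):", copycat.complexity)
```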
When Simplicity Isn’t Stupidity
Strategic Under-utilization describes the observed behavior of agents deliberately selecting strategies that are computationally simpler than their maximum capability allows. This is not necessarily indicative of incompetence, but rather a calculated choice based on the specific game being played and the anticipated actions of other agents. While maximizing computational effort is often assumed to be optimal, there are situations in which a less complex strategy yields a more favorable outcome, for instance through predictability or reduced signaling of intent. The phenomenon is distinct from simply being limited in computational capacity: the reduction in complexity is intentional, a strategic choice made within a broader strategy space.
In the Matching Pennies game, a player’s optimal strategy isn’t always to maximize immediate payoff by employing the computationally intensive mixed-strategy Nash Equilibrium. Instead, simplifying to a deterministic strategy – always choosing heads, or always tails – can be rational when facing an opponent with limited computational resources or predictable behavior. This simplification yields a lower expected payoff than the Nash Equilibrium and trades randomness for predictability, yet that predictability can become an asset the agent itself exploits against a suitably limited opponent. The payoff deviation from the Nash Equilibrium under such a simplified strategy is bounded at 1/3, meaning the agent sacrifices no more than one-third of the potential gain from perfect play in order to under-utilize strategically and potentially exploit opponent weaknesses.
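The payoff arithmetic behind this trade-off is easy to make concrete. The sketch below computes the matcher's expected payoff in Matching Pennies under the mixed equilibrium and under an "always heads" strategy against two kinds of opponents; it is purely illustrative, and the 1/3 bound cited above is the paper's own result, not something derived here.

```python
# Expected payoffs in Matching Pennies for the "matcher" (wins +1 on a match,
# -1 on a mismatch). Purely illustrative arithmetic; the 1/3 bound mentioned
# above comes from the paper's specific setup and is not derived here.

def matcher_payoff(p_heads_self: float, p_heads_opp: float) -> float:
    """Expected payoff when both players independently randomize."""
    p_match = p_heads_self * p_heads_opp + (1 - p_heads_self) * (1 - p_heads_opp)
    return p_match * 1 + (1 - p_match) * (-1)

# Mixed-strategy Nash equilibrium: both randomize 50/50.
print("NE payoff:", matcher_payoff(0.5, 0.5))                 # 0.0

# Deterministic "always heads" against an opponent who best-responds (always tails).
print("vs best response:", matcher_payoff(1.0, 0.0))          # -1.0

# ...but against a predictable opponent who also always plays heads,
# the same simple strategy does strictly better than the equilibrium.
print("vs predictable opponent:", matcher_payoff(1.0, 1.0))   # 1.0
```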
A Flexible Strategy Space is a requirement for agents exhibiting Strategic Under-utilization, as it enables the dynamic allocation of computational resources. This means agents must possess the capacity to select from a range of strategies differing in complexity, rather than being limited to a single, fixed approach. The granularity of this space – the number and distinctness of the available strategies – directly impacts an agent’s ability to optimize for scenarios where reduced computation yields a beneficial outcome. Without such flexibility, an agent cannot rationally choose to under-utilize its capabilities, even when faced with opponents or environments where a simpler strategy is demonstrably more effective. The design of this space is therefore crucial for implementing and observing Strategic Under-utilization in multi-agent systems and game-theoretic models.
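A toy illustration of why granularity matters, with entirely hypothetical menus and budget: under the same complexity budget, a coarse strategy space leaves the agent only the trivial option, while a finer-grained space lets it choose how much of its capacity to deploy.

```python
# Toy "flexible strategy space": whether an agent can usefully under-utilize
# its capacity depends on how finely the space is graded. Menus and the
# budget are hypothetical, not taken from the paper.

budget = 8   # maximum automaton states the agent is willing to spend

coarse_menu = {"trivial": 1, "full_power": 64}
fine_menu   = {"trivial": 1, "small": 2, "medium": 8, "large": 32, "full_power": 64}

def affordable(menu: dict, budget: int) -> list:
    """Strategies whose state count fits within the complexity budget."""
    return [name for name, states in menu.items() if states <= budget]

print("coarse space:", affordable(coarse_menu, budget))   # only the trivial strategy fits
print("fine space:  ", affordable(fine_menu, budget))     # graded options up to the budget
```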
The Fragility of Equilibrium, and the Rise of the Almost Rational
Research into ‘Almost Identical Rationality’ reveals a surprising sensitivity of economic equilibria to even the smallest disparities in computational ability. While classical models often assume agents possess equivalent processing power, this work demonstrates that minute differences – perhaps a negligible advantage in evaluating complex options – can dramatically reshape market outcomes. The direction of the effect is the counter-intuitive part: a slight disparity in computational power is what stabilizes the interaction, whereas exact parity leaves predictable equilibrium states out of reach, with prices and trading patterns fluctuating as agents continually adjust to one another. This challenges the notion of a stable, predictable market driven by purely rational actors, suggesting that the capacity for rationality, not just rationality itself, is a crucial determinant of economic behavior and can explain outcomes otherwise inexplicable under standard assumptions. The implication is that tiny differences in computational capacity, however insignificant they seem, can be decisive for market dynamics, determining whether agents settle or keep trading with counterparties who would otherwise agree with them on a fair price.
The emergence of ‘Almost Rational Agents’ – entities possessing computational capabilities far exceeding those traditionally modeled – necessitates a re-evaluation of economic and game-theoretic frameworks. This concept moves beyond simple rationality, acknowledging that advanced artificial intelligence can process information and anticipate outcomes with a speed and complexity unattainable by humans or simpler algorithms. Consequently, these agents don’t merely react to market conditions; they actively shape them, potentially identifying and exploiting subtle inefficiencies invisible to others. This dynamic introduces novel equilibrium outcomes, as the sheer processing power of these agents allows for the calculation of optimal strategies in scenarios previously considered intractable, and challenges the assumptions of homogeneity underlying many established economic models. The implications extend to predicting market behavior, designing effective algorithms, and understanding the potential consequences of increasingly sophisticated AI in complex systems.
Conventional economic models often rely on the concept of Nash Equilibrium, predicting stable states when agents possess complete and symmetrical information. However, recent research demonstrates a surprising dynamic: when agents share identical computational abilities, stable equilibria can actually dissolve, paradoxically encouraging trade. This contrasts sharply with the classic no-trade theorem, which holds that trade occurs only when beliefs or valuations differ. The absence of equilibrium arises because even minor discrepancies in how agents process information, amplified by computational homogeneity, create persistent incentives to exchange assets, continually reshaping valuations and preventing a final, settled state. This suggests that computational similarity, rather than difference, can be a potent driver of market activity, challenging long-held assumptions about the foundations of economic stability and potentially offering insights into the behavior of complex systems where agents operate with comparable processing power.
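A standard textbook dynamic gives a flavour of this non-settling behaviour, though it is an analogy rather than the paper's own construction: if two equally limited agents in Matching Pennies are restricted to deterministic strategies and repeatedly best-respond to each other, the joint strategy profile cycles forever and never comes to rest.

```python
# Best-response dynamics in Matching Pennies when both agents are restricted to
# deterministic (pure) strategies -- a classic illustration of why equally
# limited agents may never settle. Offered as an analogy for the "persistent
# strategic adjustment" described above, not as the paper's own model.

def best_response_matcher(opp_move: str) -> str:
    return opp_move                          # matcher wants to copy the opponent

def best_response_mismatcher(opp_move: str) -> str:
    return "T" if opp_move == "H" else "H"   # mismatcher wants to differ

matcher, mismatcher = "H", "H"
seen = set()
for step in range(8):
    profile = (matcher, mismatcher)
    print(f"step {step}: matcher={matcher} mismatcher={mismatcher}")
    if profile in seen:
        print("profile repeats: the dynamics cycle, no pure-strategy rest point")
        break
    seen.add(profile)
    # each agent best-responds to the other's current pure strategy
    matcher, mismatcher = best_response_matcher(mismatcher), best_response_mismatcher(matcher)
```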
The pursuit of perfectly rational agents, as modeled in traditional economics, feels increasingly… quaint. This paper’s demonstration of trade emerging from computational limitations isn’t a surprise to anyone who’s spent a late night debugging a production system. It’s a pragmatic acknowledgement that ‘almost rational’ behavior – agents constrained by finite automata and strategic under-utilization – is the only rationality that truly exists. As Georg Cantor observed, “The essence of mathematics is its freedom.” This freedom, ironically, allows for imperfection, and within those imperfections trade blossoms – not because of information gaps, but because even immense computational power has its limits. The elegance of the theory doesn’t matter; production will always find a way to introduce the necessary chaos.
The Road Ahead
The demonstration that trade can arise from computational constraints, rather than informational deficiencies, shifts the burden of explanation. It is a subtle, but critical, divergence from decades of assumption. The next iteration of this work will inevitably involve scaling these ‘almost rational’ agents. Tests, of course, are a form of faith, not certainty. Simulating true market complexity – the sheer volume of strategically under-utilized information – will expose the fragility of these models. The question isn’t whether the system can trade, but whether it will trade in a manner distinguishable from randomness once confronted with adversarial agents designed to exploit the inherent limitations.
A persistent challenge lies in defining ‘computational limitation’ itself. The use of finite automata provides a tractable starting point, but real-world agents are not so neatly bounded. The transition to more complex, albeit still imperfect, models of computation will require a re-evaluation of what constitutes ‘rationality’ and a grudging acceptance that optimal solutions are often computationally unattainable. It’s a comforting thought, in a way – imperfection as a fundamental driver of economic activity.
Ultimately, the field will likely move towards hybrid models – agents possessing both informational asymmetries and computational constraints. The interaction between these two forces will be messy, unpredictable, and almost certainly resist elegant mathematical formulation. But it will, at least, more closely resemble the markets that invariably find a way to break even the most carefully constructed theories.
Original article: https://arxiv.org/pdf/2512.17952.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/