Author: Denis Avetisyan
A new review examines whether foundation models, initially underwhelming in finance, can gain predictive power with targeted financial data training.

Domain-specific pre-training significantly improves the performance of time series foundation models for asset return predictability and financial forecasting.
Despite advances in financial modeling, accurate time series forecasting remains a persistent challenge due to inherent data complexities. This is addressed in ‘Re(Visiting) Time Series Foundation Models in Finance’, a comprehensive empirical study evaluating the potential of large-scale temporal representation learning for global financial markets. The research demonstrates that while pre-trained, generic time series foundation models initially underperform traditional methods, substantial gains in forecasting accuracy and economic value are achievable through domain-specific pre-training on financial data. Does this suggest that successful application of foundation models in finance hinges not on architectural novelty, but on carefully curated, relevant data?
The Illusion of Predictability in Financial Markets
The ability to accurately predict financial time series is paramount to effective risk management and the development of successful investment strategies; however, this endeavor consistently presents a formidable challenge. Financial institutions and individual investors alike depend on forecasting to assess potential losses, optimize portfolio allocation, and make informed trading decisions. Despite decades of research and the implementation of sophisticated analytical tools, consistently achieving high accuracy remains elusive due to the inherent volatility and complexity of financial markets. External factors – geopolitical events, macroeconomic shifts, and even investor sentiment – introduce unpredictable noise, while internal dynamics, such as feedback loops and cascading effects, contribute to non-linear behaviors that traditional models struggle to capture. Consequently, imperfect forecasts can lead to significant financial repercussions, underscoring the critical need for continuous innovation in forecasting methodologies and a realistic assessment of their limitations.
Foundational econometric models, such as Autoregressive Integrated Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH), historically served as cornerstones for financial forecasting. However, these methods frequently demonstrate limited efficacy when confronted with the intricate, non-linear behaviors characteristic of modern financial markets. Recent evaluations reveal that off-the-shelf, generically pre-trained Time Series Foundation Models (TSFMs) fare little better, yielding remarkably poor predictive power with out-of-sample R-squared values ranging from -0.47% to -1.37%. This indicates that these models not only fail to explain a substantial portion of the variance in financial time series but, in many instances, actively perform worse than simply predicting the mean, highlighting a critical need for more sophisticated techniques capable of capturing the underlying complexities of financial data.
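The negative R-squared figures follow a standard convention in return-prediction studies: forecast errors are benchmarked against the trivial strategy of always predicting the historical mean return, so a negative value means the model is worse than that baseline. A minimal sketch, using hypothetical returns and deliberately poor forecasts:

```python
# Sketch: out-of-sample R^2 against a historical-mean benchmark.
# A negative value means the model forecasts worse than simply
# predicting the mean return. All data below are hypothetical.

def r2_oos(actual, predicted, benchmark_mean):
    ss_model = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_bench = sum((a - benchmark_mean) ** 2 for a in actual)
    return 1.0 - ss_model / ss_bench

# Toy daily returns and a forecast that systematically gets the sign wrong.
actual = [0.01, -0.02, 0.005, 0.015, -0.01]
predicted = [-0.01, 0.02, -0.005, -0.015, 0.01]
bench = sum(actual) / len(actual)

print(round(r2_oos(actual, predicted, bench), 3))
```

Here the wrong-signed forecasts produce an R-squared of -3.0; the reported values of -0.47% to -1.37% are far milder but carry the same interpretation.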
Classical financial forecasting models frequently depend on the assumption of normally distributed data, a condition rarely met in volatile markets. This reliance introduces significant limitations because financial time series often exhibit characteristics like skewness and kurtosis – fat tails and asymmetrical distributions – which deviate sharply from the bell curve. Consequently, predictions generated under these strict assumptions can be unreliable, underestimating the probability of extreme events – such as market crashes or unexpected surges. The resulting suboptimal performance isn’t a flaw in the models themselves, but rather a consequence of applying tools designed for predictable systems to the inherently unpredictable nature of financial data, necessitating more robust and adaptable methodologies.
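Skewness and excess kurtosis can be measured directly from a return series; a normal distribution has both equal to zero. The sketch below uses a short hypothetical series containing one crash-like outlier (all figures are illustrative, not from the paper):

```python
# Sketch: sample skewness and excess kurtosis of a return series.
# A normal distribution scores zero on both; financial returns
# typically do not. Data below are hypothetical.
import math

def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0  # excess kurtosis
    return skew, kurt

# A series with one crash-like day produces fat-tailed statistics.
returns = [0.002, 0.001, -0.001, 0.003, 0.000, -0.002, 0.001, -0.08]
skew, kurt = moments(returns)
print(round(skew, 2), round(kurt, 2))
```

A single large loss is enough to drive skewness sharply negative and excess kurtosis positive: the fat-tailed, asymmetric signature that undermines normality-based models.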

From Single Trees to Robust Forecasts: The Power of Ensemble Learning
Tree-based ensemble models represent a progression from single decision trees by combining multiple trees to create a more accurate and stable predictive system. Individual decision trees, while interpretable, are prone to high variance and overfitting. Ensemble methods, such as Random Forests and Gradient Boosting, mitigate these issues by aggregating the predictions of numerous trees, each trained on slightly different subsets of the data or with varying weighting schemes. This aggregation process reduces variance and improves generalization performance, leading to more robust predictions, particularly when dealing with complex datasets and non-linear relationships. The increased robustness stems from the error reduction achieved through averaging or weighted combination of individual tree outputs, diminishing the impact of outliers or noisy data points.
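The variance-reduction mechanism can be illustrated without fitting any real trees: if each tree is an unbiased but noisy estimator, averaging many of them shrinks the scatter of the combined prediction. A toy sketch, where `noisy_tree_prediction` is a stand-in stub rather than an actual fitted tree:

```python
# Sketch: variance reduction from averaging many noisy predictors,
# the core mechanism behind bagged tree ensembles. Each "tree" is
# stubbed as an unbiased but noisy estimate of the same target.
import random
import statistics

random.seed(0)
TARGET = 1.0

def noisy_tree_prediction():
    # Stand-in for one decision tree trained on a bootstrap sample.
    return TARGET + random.gauss(0, 0.5)

# Compare a lone tree against an average of 100 trees, over 2000 trials.
single = [noisy_tree_prediction() for _ in range(2000)]
ensemble = [statistics.mean(noisy_tree_prediction() for _ in range(100))
            for _ in range(2000)]

# The ensemble's predictions scatter far less around the target.
print(statistics.stdev(single), statistics.stdev(ensemble))
```

With independent errors the standard deviation falls by roughly the square root of the ensemble size; real trees are correlated, so the practical reduction is smaller, but the direction of the effect is the same.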
XGBoost, LightGBM, and CatBoost represent optimized implementations of tree-based ensemble methods designed for scalability and performance with large datasets. These algorithms incorporate features such as gradient boosting, regularization, and efficient tree-building techniques to improve predictive accuracy and reduce overfitting. Specifically, CatBoost, when tested using a 252-day lookback window, demonstrated a Sharpe Ratio of 6.79 and an annualized return of 46.50%, indicating its potential for generating substantial risk-adjusted returns in financial forecasting applications.
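For context, the two reported metrics can be computed from a daily strategy return series as follows. The returns below are hypothetical placeholders, not the paper's data, and a zero risk-free rate is assumed:

```python
# Sketch: annualized return and Sharpe ratio from daily strategy
# returns, the two metrics reported for the CatBoost strategy.
# The daily series below is hypothetical.
import math
import statistics

daily = [0.003, -0.001, 0.002, 0.004, -0.002, 0.001, 0.003, 0.000]

mean_d = statistics.mean(daily)
sd_d = statistics.stdev(daily)

annual_return = (1 + mean_d) ** 252 - 1    # compounded over 252 trading days
sharpe = mean_d / sd_d * math.sqrt(252)    # zero risk-free rate assumed

print(round(annual_return, 3), round(sharpe, 2))
```

The 252 factor is the conventional count of trading days per year; other conventions (e.g. simple rather than compounded annualization) shift the numbers slightly but not the comparison between models.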
Traditional statistical forecasting methods often assume linear relationships within financial data; however, financial time series frequently exhibit non-linear patterns and complex interactions between variables. Ensemble methods, particularly those utilizing decision trees, effectively model these complexities without requiring explicit specification of functional forms. By combining multiple decision trees, these models can approximate highly non-linear functions and capture interaction effects between features, leading to improved predictive performance compared to linear models. This adaptability is crucial in financial markets where relationships are constantly evolving and subject to unforeseen events, allowing ensemble methods to adjust to changing conditions and maintain forecasting accuracy.
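A concrete example of an interaction effect no linear model can represent is an XOR-style payoff, which two nested decision-tree splits capture exactly. A toy sketch with hypothetical binary features:

```python
# Sketch: an XOR-style interaction that a linear model cannot fit,
# but two nested axis-aligned tree splits reproduce exactly.
# Features and labels are toy values.

def tree_predict(x1, x2):
    if x1 <= 0.5:
        return 1.0 if x2 > 0.5 else 0.0
    return 0.0 if x2 > 0.5 else 1.0

data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
print(all(tree_predict(x1, x2) == y for (x1, x2), y in data))
```

No choice of weights in a linear form a*x1 + b*x2 + c reproduces this pattern, which is why tree ensembles handle such interactions without manual feature engineering.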

Beyond Rationality: Accounting for Human Behavior in Financial Models
Traditional economic models often assume investors act rationally, seeking to maximize utility. However, empirical evidence demonstrates that psychological biases systematically influence investment decisions and, consequently, asset pricing. These biases include, but are not limited to, loss aversion, where the pain of a loss is felt more acutely than the pleasure of an equivalent gain; confirmation bias, the tendency to favor information confirming existing beliefs; and herding behavior, where individuals mimic the actions of a larger group. The prevalence of these biases results in market anomalies and deviations from efficient market hypotheses, creating opportunities for strategies that exploit predictable irrationalities in investor behavior. These biases are not random noise, but rather consistent patterns of behavior that contribute to observable trends in financial markets.
The momentum effect describes the tendency of assets exhibiting high past returns to continue generating positive returns over a specific period, while the reversal effect indicates that assets with consistently poor performance are likely to experience a rebound. These phenomena deviate from the efficient market hypothesis, which posits that asset prices fully reflect all available information. Observed instances of both effects suggest investor overreaction or underreaction to news and data, leading to price trends that persist beyond what rational valuation models would predict. Specifically, momentum strategies capitalize on the continuation of existing trends, while reversal strategies attempt to profit from the correction of perceived mispricing, both relying on identifiable behavioral biases within market participants.
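A minimal cross-sectional version of these strategies ranks assets by trailing return: a momentum portfolio buys the top of the ranking and shorts the bottom, while a reversal portfolio does the opposite. The asset names and returns below are hypothetical:

```python
# Sketch: a toy cross-sectional momentum signal. Rank assets by
# trailing return, go long the winners and short the losers.
# Tickers and returns are hypothetical.

trailing_returns = {"A": 0.12, "B": -0.05, "C": 0.03, "D": -0.10}

ranked = sorted(trailing_returns, key=trailing_returns.get, reverse=True)
longs, shorts = ranked[:2], ranked[-2:]   # a reversal strategy swaps these
print(longs, shorts)
```

Real implementations add a formation window (e.g. trailing 12 months), a holding period, and transaction-cost handling; the ranking step shown here is the common core.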
Incorporating behavioral finance principles into forecasting models demonstrates potential for improved performance compared to traditional methods. Recent evaluations indicate that Pre-trained Time Series Foundation Models (TSFMs), when utilizing data scaling and augmentation techniques, achieved a directional accuracy of 51.74%. This figure surpasses the 51.16% directional accuracy attained by the CatBoost algorithm under the same conditions, suggesting that TSFMs are more effective at capturing and leveraging the nuances of market behavior influenced by cognitive biases and emotional factors. This enhancement in accuracy also contributes to improved model interpretability by explicitly accounting for non-rational influences on asset pricing.
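Directional accuracy, the metric behind the 51.74% versus 51.16% comparison, is simply the fraction of periods in which the forecast gets the sign of the return right. A sketch with illustrative values:

```python
# Sketch: directional accuracy = share of periods where the forecast
# and the realized return agree in sign. Values are illustrative.

def directional_accuracy(actual, predicted):
    hits = sum((a > 0) == (p > 0) for a, p in zip(actual, predicted))
    return hits / len(actual)

actual = [0.01, -0.02, 0.005, -0.01]
predicted = [0.02, -0.01, -0.003, -0.02]
print(directional_accuracy(actual, predicted))
```

Near-coin-flip figures like those reported are typical for daily return prediction; even a sub-1% edge in sign accuracy can translate into meaningful economic value when applied at scale.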

The study reveals a curious truth about forecasting models: initial promise doesn’t guarantee ultimate success. Generic models, lacking the nuance of financial data, predictably falter. Yet adaptation, through pre-training on domain-specific information, unlocks substantial improvement. This echoes a broader principle: rationality is a rare burst of clarity in an ocean of bias. As Carl Sagan observed, “Somewhere, something incredible is waiting to be known.” The researchers demonstrate that incredible potential lies not within the model’s architecture itself, but within the data used to shape it, revealing the market is, at its core, a barometer of collective mood and learned patterns.
What Lies Ahead?
The apparent need for domain-specific pre-training in Time Series Foundation Models (TSFMs) isn’t surprising. The expectation that a general-purpose algorithm could simply discover financial predictability feels increasingly naive. Markets aren’t governed by pristine mathematical relationships; they are the emergent behavior of countless individuals, each operating with incomplete information and deeply flawed heuristics. The models perform better when ‘taught’ the language of finance, not when expected to deduce it from raw data. This suggests the primary challenge isn’t architectural innovation, but the painstaking task of encoding human biases – fear, greed, and the endless search for patterns where none reliably exist – into a trainable form.
Future work will likely focus on increasingly sophisticated methods for capturing this ‘financial psychology’. Perhaps embedding behavioral economic principles directly into the model architecture, or developing pre-training datasets designed to expose the model to realistic, irrational market conditions. But it’s worth remembering that even the most accurate forecast is merely a fleeting prediction of a fundamentally unpredictable system. The illusion of control is powerful, and easily mistaken for genuine insight.
Ultimately, all behavior is a negotiation between fear and hope. Psychology explains more than equations ever will. The true limit of these models isn’t computational power, but the inherent unknowability of human motivation.
Original article: https://arxiv.org/pdf/2511.18578.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/
2025-11-25 12:32