Mapping the Flow: Reconstructing Traffic from Sparse Data

A new machine learning approach accurately estimates traffic density using only limited probe vehicle data, offering a powerful solution for real-time traffic monitoring.

A new framework reveals systematic biases in time series forecasting, demonstrating that even sophisticated models can struggle with complex data dynamics.

A new framework optimizes the performance of AI agents by allowing them to dynamically switch between complex and streamlined models, significantly reducing computational costs.

Researchers have developed a novel adversarial imitation learning framework to refine flow matching models, offering a compelling alternative to reinforcement learning-based approaches.

New research explores how online learning algorithms can effectively optimize pricing in platforms connecting distinct user groups.

New research explores how artificial intelligence can automatically extract key relationships from complex financial documents.
![The study adopts the United Nations’ Sustainable Development Goals [25] as a globally recognized framework for addressing interconnected challenges, while noting that even these ambitious targets face practical limitations in real-world implementation.](https://arxiv.org/html/2602.11168v1/pics/UN_SDG.png)
A new approach to analyzing text related to the UN’s Sustainable Development Goals leverages the power of combined machine learning models to achieve greater accuracy.
Deep learning models are proving increasingly effective at identifying tea leaf diseases, but ensuring their reliability requires more than just accuracy.
![Optimal policies computed by brute force under the constraints of Sequential Approximate Programming and Message Scheduling with Full Revelation follow a predictable pattern: when the sequence length $K$ does not evenly accommodate increasing pool sizes, the construction repeats, whereas $K=19$ yields a perfect fit. The algorithm learns policies that fully reveal the middle state via message mixing.](https://arxiv.org/html/2602.12035v1/graphics/plot_lbpol_K19.png)
New research demonstrates that reinforcement learning algorithms can develop surprisingly rich and robust communication strategies, even in complex strategic environments.
![Optimal policies tested in a noiseless environment with an inhomogeneous mean volatility of $2.3$: the generated path is acutely sensitive to the risk-aversion level $\beta$, showing how the tuned aversion governs trajectory selection.](https://arxiv.org/html/2602.12030v1/meanvar.png)
A new approach to risk-averse reinforcement learning dynamically adjusts for reward timing to optimize performance in complex financial applications.