Safeguarding Finance: A New Approach to Fraud Detection

A novel federated learning framework enhances anti-money laundering efforts by prioritizing data privacy and minimizing the risk of information leakage.

This review examines how deep learning models are leveraging the inherent patterns of autocorrelation to achieve increasingly accurate time-series predictions.
Researchers have developed an automated system that uses artificial intelligence to design attacks that reveal whether a data point was used to train a machine learning model.

A new benchmark reveals the challenges large language models face when generating reliable and consistent financial reports.
A new approach leverages reinforcement learning to fine-tune pre-trained time-series models, improving prediction performance and enabling knowledge transfer.

A new deep learning framework offers a more accurate and efficient approach to modeling financial markets for improved portfolio construction.
New research explores whether large language models can reliably predict stock performance, and reveals the critical role of human oversight.

A new framework empowers text-to-image models to refine their creations in real-time, dramatically improving spatial accuracy and compositional understanding.
![The PowerFlow framework dynamically adjusts the distribution of logical reasoning, sharpening it with $\alpha > 1$ to enhance performance or flattening it with $\alpha < 1$ to encourage creative exploration, resulting in a Pareto improvement over existing approaches to directional capability elicitation.](https://arxiv.org/html/2603.18363v1/x1.png)
A new approach unlocks directional control over language model capabilities, boosting reasoning skills or restoring creative flair without relying on traditional reward-based reinforcement learning.
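The sharpening/flattening behavior described in the figure caption can be illustrated with simple power scaling of a probability distribution. The sketch below is a minimal, hypothetical illustration of that idea, not the framework's actual method: it assumes an exponent α is applied elementwise to the probabilities and the result renormalized, so that α > 1 concentrates mass on likely outcomes while α < 1 spreads it out.

```python
import numpy as np

def power_scale(probs, alpha):
    """Raise a probability distribution to the power alpha and renormalize.

    alpha > 1 sharpens the distribution (mass concentrates on likely outcomes);
    alpha < 1 flattens it (mass spreads toward unlikely outcomes).
    This is an illustrative sketch, not PowerFlow's published algorithm.
    """
    scaled = np.asarray(probs, dtype=float) ** alpha
    return scaled / scaled.sum()

p = np.array([0.6, 0.3, 0.1])
sharp = power_scale(p, 2.0)  # top probability grows above 0.6
flat = power_scale(p, 0.5)   # top probability shrinks below 0.6
```

Both outputs remain valid distributions; only the concentration of probability mass changes with α.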

New research demonstrates a method for large language models to refine their own generation process during inference, leading to higher quality outputs without requiring retraining.