The Art of the Bluff: AI Agents Learn to Lie

New research shows that artificial intelligence, when pitted against itself, rapidly develops sophisticated strategies of deception to gain an advantage.

A new framework, SCAN, offers a powerful method for understanding how deep learning models arrive at their decisions through high-fidelity visual explanations.

A new benchmark reveals the limitations of current data management systems when faced with the complexity and dynamism of real-world graph data.
Researchers have developed a deep learning framework that combines network analysis of stock relationships with insights from investor sentiment to improve prediction accuracy.

A new analysis reveals that applying differential privacy techniques to machine learning can inadvertently reduce fairness and robustness in neural networks.
A new framework leverages multimodal AI to identify duplicate patient records while safeguarding privacy, moving beyond reliance on traditional identifiers.
![Deepfake detection systems exhibit varying efficacy, measured as Equal Error Rate ([latex]EER[/latex]), across different audio generation techniques in both Track 1 and Track 2 evaluations, highlighting the sensitivity of these systems to the specific origin of the manipulated audio.](https://arxiv.org/html/2603.04865v1/2603.04865v1/x3.png)
A new challenge reveals the growing threat of AI-generated environmental sounds and the surprisingly effective techniques for spotting them.
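The Equal Error Rate cited in the figure is the operating point where a detector's false-acceptance rate (fakes passed as real) equals its false-rejection rate (real audio flagged as fake). A minimal sketch of how it can be computed from detector scores, using made-up scores and labels for illustration:

```python
def equal_error_rate(scores, labels):
    """Approximate the EER: the threshold at which the false-acceptance
    rate (fake samples scored as real) equals the false-rejection rate
    (real samples scored as fake). Higher scores mean 'more real';
    labels are 1 for real audio, 0 for generated audio."""
    fakes = [s for s, y in zip(scores, labels) if y == 0]
    reals = [s for s, y in zip(scores, labels) if y == 1]
    best_gap, eer = float("inf"), None
    # Sweep every distinct score as a candidate decision threshold.
    for t in sorted(set(scores)):
        far = sum(s >= t for s in fakes) / len(fakes)  # fakes accepted
        frr = sum(s < t for s in reals) / len(reals)   # reals rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example: four real and four generated samples.
scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3, 0.6, 0.05]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(equal_error_rate(scores, labels))  # 0.25
```

A lower EER means a better detector; the figure uses it to show how detection quality varies with the generator that produced the audio.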

New research shows deep learning models can accurately assess forest biomass and carbon storage using data generated from simulations, offering a cost-effective alternative to traditional field measurements.
![The study demonstrates that mitigating hallucination in image captioning can be achieved through adaptive attention mechanisms, as evidenced by AdaIAT’s layer-wise thresholding and attention head-specific modulation [latex]\mathcal{M}^{(l,h)}[/latex], which effectively addresses the limitations of fixed-attention approaches like PAI (prone to repetitive language) and of greedy methods that generate hallucinatory objects, such as incorrectly identifying “cars”.](https://arxiv.org/html/2603.04908v1/2603.04908v1/x2.png)
New research tackles the problem of ‘hallucinations’ in large AI models that process both images and text, improving their reliability and trustworthiness.
![Attention mechanisms demonstrably align with linguistic structure, as evidenced by a correspondence between part-of-speech tags and attention weights: attention concentrates on nouns and verbs, suggesting the model prioritizes content words during processing, which in turn implies a hierarchical understanding of sentence construction.](https://arxiv.org/html/2603.04805v1/2603.04805v1/agf-image/pos.png)
Researchers are finding that the way language models focus on words isn’t random, but follows a predictable pattern reminiscent of gravity.