Uncovering Hidden Weaknesses in AI’s Reasoning

A new technique efficiently pinpoints the specific data patterns that cause large language models to stumble, offering a path toward more reliable AI.

A new system efficiently analyzes footage from multiple cameras to deliver faster, more comprehensive understanding of traffic patterns.

Researchers are leveraging generative AI to forecast the progression of Alzheimer’s disease by predicting future brain scans and key indicators of cognitive decline.

A new framework accurately extracts opinions from Bangla product reviews, offering valuable business intelligence for a rapidly growing online market.

New research reveals that as language models learn, the relationship between how often a word is used and how many meanings it acquires isn’t straightforward, defying a long-held linguistic principle.

New research explores whether large language models can move beyond simple fact-checking to identify the specific evidence supporting or refuting a statement.

A new approach to text-to-image generation bypasses the need for dedicated prior networks by directly optimizing image embeddings within diffusion models.

Researchers have developed a novel method that combines expert guidance with adversarial learning to infer reward functions and optimize policies more effectively.

New research shows that language models can be trained to reliably identify concepts they’ve been taught, opening a path toward more transparent and controllable artificial intelligence.

A new technique allows aggressively compressed neural networks to regain lost performance by generating synthetic data and transferring knowledge, offering a path to efficient and privacy-preserving AI.