Finding the Core: Pruning for Efficient Transformer Networks

A new approach efficiently identifies the most crucial connections within transformer models, drastically reducing computational demands without sacrificing performance.
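
The teaser above does not say which pruning criterion the approach uses, so as a purely illustrative sketch of the general idea, here is magnitude-based weight pruning, a common baseline in which the smallest-magnitude connections are zeroed out. The function name and the 90% sparsity level are assumptions for the example, not details from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one transformer weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
kept = np.count_nonzero(pruned) / pruned.size
print(f"fraction of weights kept: {kept:.2f}")
```

In practice, pruned networks are usually fine-tuned afterwards to recover any accuracy lost when connections are removed; the sketch only shows the selection step.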

Researchers have demonstrated a novel attack that exploits the creative reasoning of large language models to generate prompts that bypass safety filters in text-to-image systems.

A new index reveals a concerning trend: despite growing power, major AI companies are becoming less transparent about the models shaping our future.

A new pipeline offers a comprehensive approach to detecting and mitigating harmful biases embedded in the textual data used to train large language models.

A new benchmark reveals that current AI-generated video detection systems heavily rely on watermarks and can be easily bypassed when those patterns are removed or manipulated.

A new machine learning framework moves beyond tackles and interceptions to comprehensively assess the true impact of defensive players.

A new technique boosts the reasoning abilities of AI agents by strategically introducing challenging examples during reinforcement learning.

Researchers have developed a novel framework and dataset to automatically map drivable paths in challenging off-road environments, moving beyond traditional endpoint-based methods.

A new empirical study reveals the surprising effectiveness of a classic optimization technique, the Frank-Wolfe method, for crafting powerful adversarial attacks against deep learning models.

A new framework, THeGAU, boosts the performance of graph neural networks on complex data by intelligently incorporating node and edge types and strategically augmenting the graph structure.