
usdaed


Science

Privacy’s Price: How Data Protection Can Undermine Neural Network Performance

09.03.2026 by qfx

A delicate balance exists between data privacy and practical utility: excessive protection, though intended to safeguard information, can render data unusable, marking a phase transition from benign safeguarding to harmful restriction.

A new analysis reveals that applying differential privacy techniques to machine learning can inadvertently reduce fairness and robustness in neural networks.
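The mechanism at the heart of this trade-off can be sketched as per-example gradient clipping plus calibrated Gaussian noise, the standard DP-SGD recipe. A minimal sketch follows; the `clip_norm` and `noise_multiplier` parameter names are illustrative, not taken from the paper:

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0):
    """Clip a per-example gradient to clip_norm, then add Gaussian noise.

    This is the core DP-SGD step: the noise that provides the privacy
    guarantee is also what degrades accuracy, and it can do so unevenly
    across subgroups, which is one route to the fairness loss described.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [g * scale for g in grad]
    sigma = noise_multiplier * clip_norm
    return [g + random.gauss(0.0, sigma) for g in clipped]

# A gradient of norm 5 is clipped to norm 1 before noise is added.
noisy = privatize_gradient([3.0, 4.0], clip_norm=1.0, noise_multiplier=0.5)
```

Larger `noise_multiplier` values buy stronger privacy guarantees at the cost of noisier, less informative updates.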


Beyond Identifiers: AI-Powered Data Deduplication for Healthcare

09.03.2026 by qfx

A new framework leverages multimodal AI to identify duplicate patient records while safeguarding privacy, moving beyond reliance on traditional identifiers.
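The framework's internals aren't given here, but deduplication over learned record representations can be sketched as pairwise similarity search; the two-dimensional embeddings and the 0.95 threshold below are illustrative assumptions, not the paper's values:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def find_duplicates(embeddings, threshold=0.95):
    """Flag record pairs whose embedding similarity exceeds a threshold.

    The embeddings stand in for multimodal record representations; no
    patient identifiers are compared directly.
    """
    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

At scale, the quadratic pairwise loop would be replaced by approximate nearest-neighbor search, but the matching criterion is the same.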


Can You Hear the Lie? Benchmarking Deepfake Audio Detection

08.03.2026 by qfx

Deepfake detection systems exhibit varying efficacy, measured as Equal Error Rate ([latex]EER[/latex]), across different audio generation techniques in both Track 1 and Track 2 evaluations, highlighting the sensitivity of these systems to the specific origins of manipulated audio.

A new challenge reveals the growing threat of AI-generated environmental sounds and the surprisingly effective techniques for spotting them.
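Equal Error Rate, the metric used in these evaluations, is the operating point where a detector's false acceptance rate equals its false rejection rate. A minimal brute-force computation over candidate thresholds:

```python
def equal_error_rate(genuine, spoof):
    """Return the approximate EER, assuming higher scores mean 'more genuine'."""
    best_gap, eer = float("inf"), None
    for t in sorted(set(genuine) | set(spoof)):
        far = sum(s >= t for s in spoof) / len(spoof)     # fakes accepted
        frr = sum(s < t for s in genuine) / len(genuine)  # genuine rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

In practice EER is read off an ROC curve over many scores, but the definition is exactly this crossing point: a lower EER means the score distributions of real and generated audio are better separated.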


Seeing the Forest for the Trees: AI Estimates Carbon from Simulated Lidar

08.03.2026 by qfx

The study demonstrates how estimates of wood volume across simulated plots diverge depending on the modeling approach, specifically when models are trained on synthetic data reduced via either random sampling or farthest point sampling.

New research shows deep learning models can accurately assess forest biomass and carbon storage using data generated from simulations, offering a cost-effective alternative to traditional field measurements.
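Farthest point sampling, one of the two point-cloud reduction schemes compared, greedily keeps points that maximize coverage of the cloud, unlike random sampling. A minimal sketch (starting from index 0, an arbitrary choice):

```python
import math

def farthest_point_sampling(points, k):
    """Greedily pick k indices, each maximizing distance to those already chosen.

    dist[i] tracks each point's distance to its nearest selected point;
    the next pick is the point farthest from the current selection.
    """
    chosen = [0]
    dist = [math.dist(p, points[0]) for p in points]
    while len(chosen) < k:
        idx = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(idx)
        for i, p in enumerate(points):
            dist[i] = min(dist[i], math.dist(p, points[idx]))
    return chosen
```

For lidar-style clouds this tends to preserve crown and stem extremities that random sampling can miss, which is presumably why the two reductions lead to diverging volume estimates.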


Seeing is Believing: Reducing Falsehoods in Vision-Language AI

08.03.2026 by qfx

The study demonstrates that hallucination in image captioning can be mitigated through adaptive attention mechanisms, as evidenced by AdaIAT’s layer-wise thresholding and attention head-specific modulation [latex]\mathcal{M}^{(l,h)}[/latex]. This addresses the limitations of fixed-attention approaches like PAI, which are prone to repetitive language, and of greedy methods, which generate hallucinatory objects such as incorrectly identified “cars”.

New research tackles the problem of ‘hallucinations’ in large AI models that process both images and text, improving their reliability and trustworthiness.
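AdaIAT's exact thresholding rule is not reproduced here, but the general idea of suppressing weak attention weights and renormalizing, with a threshold that could be set per layer or head, can be sketched as follows (the fallback behavior when nothing survives is an assumption):

```python
def threshold_attention(weights, tau):
    """Zero out attention weights below tau, then renormalize to sum to 1.

    A generic sketch of layer-wise attention thresholding: weakly attended
    tokens are dropped so decoding cannot latch onto spurious image regions.
    """
    kept = [w if w >= tau else 0.0 for w in weights]
    total = sum(kept)
    if total == 0.0:  # nothing survives: fall back to the original distribution
        return weights
    return [w / total for w in kept]
```

A fixed `tau` for every layer is exactly the fixed-attention limitation the paper criticizes; the adaptive variant would choose `tau` per layer and head.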


How Attention Decays: A New Law of Language

08.03.2026 by qfx

Attention mechanisms demonstrably align with linguistic structure, as evidenced by a correspondence between part-of-speech tags and attention weights: attention concentrates on nouns and verbs, suggesting the model prioritizes content words during processing and implying a hierarchical understanding of sentence construction.

Researchers are finding that the way language models focus on words isn’t random, but follows a predictable pattern reminiscent of gravity.
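The noun/verb concentration claim is simple to test empirically: sum attention mass per part-of-speech tag. The weights and tags below are a toy illustration; a real analysis would use model attention matrices and a POS tagger:

```python
from collections import defaultdict

def attention_mass_by_pos(weights, pos_tags):
    """Aggregate attention weight per part-of-speech tag.

    If attention tracks linguistic structure, content-word tags (NOUN, VERB)
    should accumulate more mass than function-word tags (DET, ADP, ...).
    """
    mass = defaultdict(float)
    for w, tag in zip(weights, pos_tags):
        mass[tag] += w
    return dict(mass)
```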


Teaching Machines to Learn from Our Words

08.03.2026 by qfx

A new reinforcement learning framework uses collective human feedback expressed in natural language to dramatically improve the training of large AI models.


Seeing the Whole Picture: AI Learns to Read CT Scans Like a Radiologist

08.03.2026 by qfx

A computed tomography scan and its associated report illustrate a highly structured approach to textual description, linking visual data with a detailed, corresponding narrative.

A new approach focuses on anatomical structures to generate more accurate and detailed reports from computed tomography scans using the power of artificial intelligence.


Reading Between the Lines: Can AI Truly Understand Human Values?

08.03.2026 by qfx

Value distributions elicited from large language models vary with prompting technique in how closely they align with those of human experts.

New research explores whether large language models can accurately identify and interpret the complex values embedded within qualitative interview data.


The ReLU Effect: How Gradient Descent Shapes Neural Network Solutions

08.03.2026 by qfx

New research reveals how the popular ReLU activation function subtly influences the solutions found by gradient descent in high-dimensional neural networks.

© 2026 usdaed • Built with GeneratePress