
usdaed


Science

Deep Reinforcement Learning Gets a Bayesian Boost

27.12.2025 by qfx

A new method combines the power of deep learning with Bayesian principles to tackle complex reinforcement learning tasks with improved efficiency and accuracy.

Categories Science

Why Won’t It Just *Do* What You Ask? Unpacking the Quirks of AI Language

27.12.2025 by qfx

New research reveals large language models often prioritize ease over accuracy, but surprisingly excel at remembering details over extended conversations.

Categories Science

Faster Aerodynamic Design with Graph Networks and Smart Data

27.12.2025 by qfx

The study demonstrates that test Mean Squared Error (testMSE) scales predictably with training set size [latex]D_D[/latex], where [latex]D_D[/latex] represents the number of unique geometry-flow snapshots used as graph-based data. This scaling behavior differs significantly across models of varying sizes.

Researchers have created a new dataset and scaling laws to accelerate aerodynamic simulations using graph neural networks, enabling efficient design even with limited data.
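The scaling claim above can be illustrated with a minimal power-law fit. The snapshot counts and error values below are hypothetical, not the paper's data; the sketch only shows how a testMSE-versus-dataset-size law can be fitted and used to extrapolate data needs.

```python
import numpy as np

# Hypothetical snapshot counts D and test errors (illustrative only,
# not the paper's data), assumed to follow testMSE ≈ a * D**slope.
D = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
mse = np.array([0.080, 0.046, 0.027, 0.016, 0.009])

# Fit the power law by linear regression in log-log space.
slope, log_a = np.polyfit(np.log(D), np.log(mse), 1)
a = np.exp(log_a)

# Extrapolate: how many snapshots would reach a target error level.
target = 0.005
D_needed = (target / a) ** (1.0 / slope)
```

A fit like this is what makes scaling laws practically useful for design with limited data: the exponent tells you how quickly extra snapshots pay off.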

Categories Science

When Conversations Get Confused: A New Test for Chatbot Clarity

27.12.2025 by qfx

Effective communication between users and large language models hinges on clarifying ambiguous or contradictory input, as demonstrated by the ability of follow-up questioning to resolve initial uncertainties and ensure alignment with user intent.

Researchers have created a benchmark and framework to help conversational AI better navigate ambiguity and ask clarifying questions during extended dialogues.

Categories Science

Beyond the Network Boundary: Adapting Traffic Analysis to New Environments

27.12.2025 by qfx

Domain characteristics vary considerably across network environments, with the Campus domain exhibiting the highest proportion of elephant flows (15.0%), while the UNSW-NB15 dataset provides the most extensive data for analysis, comprising 82,332 flows.

Detecting unusually large network traffic flows, known as ‘elephant flows’, becomes significantly harder when models are moved between different network setups; this research tackles that challenge.

Categories Science

When Memories Fade: Understanding Forgetting in AI

27.12.2025 by qfx

New research reveals how the depth of knowledge representation impacts a model’s ability to retain information when learning new tasks, offering a path towards more robust artificial intelligence.

Categories Science

What Large Language Models Still Don’t Know

26.12.2025 by qfx

The Competency Gap method decomposes large language model evaluation into interpretable benchmark and model gaps by leveraging a concept dictionary learned through sparse autoencoding. It quantifies how much each benchmark activates individual concepts and projects model performance into concept space, yielding per-concept scores across benchmarks and evaluation suites.

A new method reveals critical weaknesses in today’s most powerful AI systems and highlights shortcomings in how we measure their abilities.
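The projection step described above can be sketched in a few lines. Everything here is a stand-in: the random "concept dictionary", item features, and correctness labels are hypothetical, and the activation-weighted accuracy is one plausible reading of "projecting model performance into concept space", not the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a concept dictionary (e.g., learned by a sparse
# autoencoder) maps hidden features to interpretable concept directions.
n_items, n_features, n_concepts = 200, 64, 8
features = rng.normal(size=(n_items, n_features))       # benchmark item representations
dictionary = rng.normal(size=(n_concepts, n_features))  # concept directions
correct = (rng.random(n_items) > 0.4).astype(float)     # model right/wrong per item

# How strongly each benchmark item activates each concept (ReLU keeps
# activations non-negative, as in sparse-coding conventions).
activation = np.maximum(features @ dictionary.T, 0.0)   # (n_items, n_concepts)

# Per-concept score: activation-weighted accuracy, i.e. model performance
# projected into concept space.
weights = activation / activation.sum(axis=0, keepdims=True)
per_concept_score = weights.T @ correct                 # (n_concepts,)
```

Low scores on a concept then point at a specific competency gap rather than a single aggregate benchmark number.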

Categories Science

Squeezing Value from Spot Instances for Large Language Model Training

26.12.2025 by qfx

Time series analysis using the ARIMA model effectively forecasts both spot availability and price fluctuations.

New research details a smart scheduling framework that minimizes costs and meets deadlines when fine-tuning massive AI models using fluctuating cloud GPU pricing.
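The ARIMA forecasting idea can be sketched with its autoregressive core. The price series below is synthetic and mean-reverting, a hypothetical stand-in for real spot pricing; a production setup would use a full ARIMA implementation (e.g., statsmodels) rather than this hand-rolled AR(1) least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly spot prices from a mean-reverting AR(1) process
# (illustrative stand-in for real cloud GPU pricing data).
n = 500
price = np.empty(n)
price[0] = 1.0
for t in range(1, n):
    price[t] = 0.3 + 0.7 * price[t - 1] + 0.05 * rng.normal()

# Fit p_t ≈ c + phi * p_{t-1} by least squares: the AR(1) core of
# an ARIMA(1, 0, 0) model.
X = np.column_stack([np.ones(n - 1), price[:-1]])
c, phi = np.linalg.lstsq(X, price[1:], rcond=None)[0]

# One-step-ahead forecast for the next hour's spot price.
forecast = c + phi * price[-1]
```

A scheduler can feed such forecasts of price and availability into its decision of when to grab spot capacity and when to fall back to on-demand instances.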

Categories Science

Strength in Numbers: A New Defense Against AI Attacks

26.12.2025 by qfx

The architecture decomposes complex functions into specialized sub-networks (the ‘experts’), and a gating mechanism dynamically routes inputs to these experts. This allows the system to adapt its capacity and maintain performance even as demands shift and decay over time, a strategy mirroring the graceful degradation observed in resilient systems.

A novel system leveraging a mixture of experts significantly improves the robustness of machine learning models against carefully crafted adversarial inputs.
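The expert-plus-gating mechanism can be sketched minimally. The dimensions, linear experts, and softmax gate below are hypothetical choices for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical mixture-of-experts layer: each expert is a small linear map,
# and a gating network produces per-input weights over the experts.
d_in, d_out, n_experts = 4, 3, 5
experts = rng.normal(size=(n_experts, d_in, d_out))
gate_w = rng.normal(size=(d_in, n_experts))

def moe_forward(x):
    gates = softmax(x @ gate_w)                  # (batch, n_experts), rows sum to 1
    outs = np.einsum('bi,eio->beo', x, experts)  # every expert's output per input
    return np.einsum('be,beo->bo', gates, outs)  # gate-weighted combination

x = rng.normal(size=(8, d_in))
y = moe_forward(x)  # (8, d_out)
```

The redundancy is the point: an adversarial input that fools one expert is down-weighted by the gate, so no single sub-network is a single point of failure.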

Categories Science

The Deep Learning Scaling Puzzle: Why Bigger Isn’t Always Better

26.12.2025 by qfx

Internal feature learning in deep residual networks collapses with increasing depth, at a rate of [latex]1/\sqrt{L}[/latex]. This degradation is rectified by a depth-aware learning rate, [latex]\eta_1 = \eta_c n \sqrt{L}[/latex], which restores active learning across layers and enables consistent hyperparameter transfer and improved performance, as demonstrated by lower training and testing losses and higher accuracy across varying network depths and widths.

New research reveals how the dynamics of feature learning in deep neural networks explain both the successes and limitations of simply scaling up model size.
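The depth-aware rule above is a one-line computation. The helper below just transcribes the excerpt's formula; the interpretation of [latex]\eta_c[/latex] as a base rate, [latex]n[/latex] as width, and [latex]L[/latex] as depth follows the excerpt's notation and is assumed here.

```python
import math

def depth_aware_lr(eta_c, width_n, depth_L):
    # eta_1 = eta_c * n * sqrt(L): scaling the base rate by width and
    # sqrt(depth) counteracts the 1/sqrt(L) collapse of feature learning.
    return eta_c * width_n * math.sqrt(depth_L)

# Doubling depth from L to 4L doubles the prescribed learning rate,
# which is what keeps per-layer feature updates at a constant scale.
lr_shallow = depth_aware_lr(0.001, 256, 16)
lr_deep = depth_aware_lr(0.001, 256, 64)
```

Under this rule, hyperparameters tuned on a shallow, narrow proxy model transfer to deeper and wider ones without re-tuning.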

Categories Science
© 2026 usdaed • Built with GeneratePress