
Science

Taming the Gradient Spike: A New Approach to Training Large Language Models

15.02.2026 by qfx

Spectra demonstrates lower convergence loss and improved downstream performance across a range of learning rates, [latex] \eta \in \{8 \times 10^{-4}, 1 \times 10^{-3}, 5 \times 10^{-3}, 1 \times 10^{-2}\} [/latex], establishing its advantage across optimization regimes.

Researchers have discovered a key pattern in the gradients of large language models and developed a new optimizer designed to exploit this structure for faster, more efficient training.

Categories Science

Hidden in Plain Sight: Shielding Graph Communities from AI Detection

15.02.2026 by qfx

A defender subtly alters a network’s connections to frustrate an adversary’s graph neural network (deployed to identify community structure and, crucially, a hidden target community), demonstrating that even slight modifications can effectively obscure information and compromise inference accuracy.

A new technique fortifies the privacy of network communities by subtly altering both the connections and characteristics of nodes, making them harder for machine learning models to identify.


Building Blocks of AI: Generating 3D Assets with Transformers

15.02.2026 by qfx

AssetFormer establishes a framework in which modular assets are rendered and queried with GPT-4o to generate cleaned, pre-filled captions; these captions then drive an autoregressive model that produces new modular assets, ready for integration into industrial applications through model-based enhancement and deployment.

Researchers have developed a new framework, AssetFormer, that uses the power of autoregressive transformers to create customizable 3D models from text prompts.


When AI Makes Things Up: Tracking the Roots of False Information

15.02.2026 by qfx

The expert model classifies textual tokens as either factual content or hallucination, assigning a binary label of [latex]0[/latex] to indicate factual accuracy and [latex]1[/latex] to denote fabricated content.

New research sheds light on why large language models confidently generate incorrect statements, and how internal analysis can reveal patterns in these ‘hallucinations’.
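The binary labeling scheme described above can be sketched in a few lines. This is a toy illustration only: the research uses a trained expert model to classify tokens, whereas the stand-in heuristic here (a token counts as hallucinated if it is absent from a trusted reference text) is purely an assumption for demonstration.

```python
# Toy sketch of token-level hallucination labeling: 0 = factual, 1 = fabricated.
# The heuristic below is a stand-in for the learned expert classifier.

def label_tokens(generated: str, reference: str) -> list[tuple[str, int]]:
    """Assign 0 (factual) or 1 (hallucinated) to each generated token."""
    reference_vocab = set(reference.lower().split())
    labels = []
    for token in generated.split():
        # Tokens missing from the trusted reference are flagged as fabricated.
        label = 0 if token.lower() in reference_vocab else 1
        labels.append((token, label))
    return labels
```

For example, labeling the generation "Paris is the capital of Germany" against the reference "Paris is the capital of France" flags only "Germany" with a 1.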


Building Models Piece by Piece: A New Approach to Feature Interaction

15.02.2026 by qfx

The Neural Additive Expert framework models complex relationships by dynamically weighting the contributions of multiple expert predictors for each feature, then summing these weighted per-feature contributions into a final prediction while retaining inherent interpretability: the output emerges from an additive decomposition rather than an opaque, monolithic function.

Researchers have developed a novel framework that combines the benefits of additive models with the power of expert systems to achieve both high accuracy and clear interpretability.
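The additive-mixture idea from the excerpt can be sketched as follows. The softmax gate, shapes, and function names here are illustrative assumptions, not the paper's exact parameterization: for each feature, a gate weights several expert predictors, and the final output is the sum of the per-feature mixtures.

```python
import numpy as np

def predict(x, experts, gates):
    """Additive mixture-of-experts sketch.

    x: sequence of d scalar features.
    experts[j]: list of K callables, each mapping feature j to a prediction.
    gates[j]: length-K unnormalized gate scores for feature j.
    """
    total = 0.0
    for j, xj in enumerate(x):
        scores = np.asarray(gates[j], dtype=float)
        w = np.exp(scores - scores.max())
        w /= w.sum()  # softmax gate over the feature's experts
        # Mixture of expert predictions for this single feature.
        contrib = sum(wk * f(xj) for wk, f in zip(w, experts[j]))
        total += contrib  # additive across features, hence interpretable
    return total
```

Because each feature's contribution is computed in isolation before the final sum, the per-feature terms can be inspected directly, which is the interpretability property the excerpt emphasizes.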


The Trust Trap: When We Rely Too Much on AI Chatbots

15.02.2026 by qfx

Reliance on large language models manifests in distinct behavioral patterns across tasks: users exhibiting high overreliance frequently copy unedited text, repeatedly reference LLM responses, employ rough locating strategies, and hesitate before prompting, whereas users with low overreliance edit cautiously, read initial output closely, edit precisely, and complete tasks independently.

New research pinpoints how users behave when they become overly dependent on conversational AI, offering critical insights for building more responsible interfaces.


Skewed Data, Flawed Detection: The Vulnerability Blind Spot

15.02.2026 by qfx

The distribution of vulnerability types across the training, validation, and test sets reveals the composition of each split, with specific vulnerability IDs (detailed in Table 2) determining the frequency of each type within each set.

A new study reveals that common data imbalance issues significantly hinder the performance of deep learning models used to identify software vulnerabilities.


Thinking for Itself: How Language Models Can Learn to Reason Without Human Help

14.02.2026 by qfx

A system trains itself to reason by internally rewarding lines of thought that bolster its confidence in a correct answer, circumventing the limitations of external verification. The model eschews externally defined rewards and pre-authored reasoning traces, instead generating reasoning [latex]z[/latex] based solely on a question [latex]x[/latex] and a reference answer [latex]y^{\star}[/latex].

New research explores a method for training large language models to develop robust reasoning skills by rewarding internally consistent thought processes.
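The selection step implied above (reward each reasoning trace [latex]z[/latex] by the confidence it induces in [latex]y^{\star}[/latex]) can be sketched with a toy surrogate. In the real method, confidence comes from the language model's own probability of the reference answer; the word-overlap scoring function below, and the names `answer_logprob` and `best_trace`, are hypothetical stand-ins for illustration.

```python
import math

def answer_logprob(x: str, z: str, y_star: str) -> float:
    """Toy surrogate for model confidence in y_star given (x, z):
    confidence grows with word overlap between the trace and the answer."""
    overlap = len(set(z.lower().split()) & set(y_star.lower().split()))
    return math.log(1 + overlap)

def best_trace(x: str, y_star: str, traces: list[str]) -> str:
    """Select the sampled reasoning trace with the highest internal reward."""
    rewards = {z: answer_logprob(x, z, y_star) for z in traces}
    return max(rewards, key=rewards.get)
```

The point of the sketch is the shape of the loop: sample traces, score each by the confidence it lends to the reference answer, and reinforce the best, with no external verifier in sight.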


Adapting to the Unknown: AI Learns to Spot Anomalies in Any Graph

14.02.2026 by qfx

EvoFG’s performance is demonstrably sensitive to variations in its soft routing frequency, as evidenced by heatmaps detailing the impact of ablating specific components.

A new framework uses the power of large language models to create adaptable anomaly detection systems that generalize across diverse and unseen graph datasets.


Steering the Market: How Shaping Agent Behavior Can Unlock Climate Investment

14.02.2026 by qfx

The study demonstrates that a firm’s commitment to cooperation (defined as allocating 0.5% of capital to mitigation) directly impacts overall market wealth, with gains visualized as increases in ‘cooperator’ presence (green) and losses indicated by ‘defector’ dominance (red) across varying policy landscapes.

New research shows that influencing the learning of investment agents can overcome common hurdles in climate-focused financial modeling and lead to more effective sustainability outcomes.

© 2026 usdaed • Built with GeneratePress