Faster Aerodynamic Design with Graph Networks and Smart Data
![The study demonstrates that test Mean Squared Error (test MSE) scales predictably with training set size ([latex]D_D[/latex]), where [latex]D_D[/latex] denotes the number of unique geometry-flow snapshots used as graph-based training data, and this scaling behavior differs significantly across models of varying sizes.](https://arxiv.org/html/2512.20941v1/figs/scaling/mean-error-all-models.png)
Researchers have created a new dataset and scaling laws to accelerate aerodynamic simulations using graph neural networks, enabling efficient design even with limited data.
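Scaling curves like the one above are typically summarized by fitting a power law, test MSE ≈ a · D^(−α), in log-log space. A minimal sketch of that fit, using synthetic measurements (the exponent and prefactor here are illustrative, not the paper's reported values):

```python
import numpy as np

# Synthetic scaling data assuming test MSE ~ a * D^(-alpha);
# alpha = 0.3 and a = 2.0 are illustrative placeholders.
D = np.array([1e2, 1e3, 1e4, 1e5])   # number of training snapshots
mse = 2.0 * D ** -0.3                # synthetic test-MSE measurements

# Fit log(mse) = log(a) - alpha * log(D) by least squares.
slope, intercept = np.polyfit(np.log(D), np.log(mse), 1)
alpha, a = -slope, np.exp(intercept)
print(f"alpha ≈ {alpha:.2f}, a ≈ {a:.2f}")  # recovers alpha ≈ 0.30, a ≈ 2.0
```

Once fitted on small training runs, such a law lets one extrapolate how much data a target error level would require before committing to expensive simulation campaigns.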

Researchers have created a benchmark and framework to help conversational AI better navigate ambiguity and ask clarifying questions during extended dialogues.

Detecting unusually large network traffic flows – ‘elephant flows’ – becomes significantly harder when models are moved between different network setups, and this research tackles that challenge.
New research reveals how the depth of knowledge representation impacts a model’s ability to retain information when learning new tasks, offering a path towards more robust artificial intelligence.

A new method reveals critical weaknesses in today’s most powerful AI systems and highlights shortcomings in how we measure their abilities.

New research details a smart scheduling framework that minimizes costs and meets deadlines when fine-tuning massive AI models using fluctuating cloud GPU pricing.

A novel system leveraging a mixture of experts significantly improves the robustness of machine learning models against carefully crafted adversarial inputs.
![Internal feature learning in deep residual networks collapses with increasing depth, at a rate of [latex] 1/\sqrt{L} [/latex], but this degradation is rectified by a depth-aware learning rate, [latex] \eta_1 = \eta_c n \sqrt{L} [/latex], which restores active learning across layers and enables consistent hyperparameter transfer and improved performance, as demonstrated by lower training and test losses and higher accuracy across varying network depths and widths.](https://arxiv.org/html/2512.21075v1/figures/Vanish_resnet_performence_acc_loss.png)
New research reveals how the dynamics of feature learning in deep neural networks explain both the successes and limitations of simply scaling up model size.
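The caption's prescription is a learning rate that grows with width [latex]n[/latex] and the square root of depth [latex]L[/latex], compensating for the [latex]1/\sqrt{L}[/latex] collapse of per-layer feature updates. A minimal sketch of that scaling rule (the base rate value is a hypothetical placeholder, and [latex]\eta_c[/latex] would be tuned in practice):

```python
import math

def depth_aware_lr(eta_c: float, width_n: int, depth_L: int) -> float:
    """Depth-aware learning rate from the caption: eta_1 = eta_c * n * sqrt(L).

    eta_c is a tuned base constant (the 1e-4 used below is illustrative);
    n is the network width and L the residual depth.
    """
    return eta_c * width_n * math.sqrt(depth_L)

# With a fixed rate, per-layer feature updates shrink ~1/sqrt(L) as depth
# grows; scaling the rate by sqrt(L) keeps update magnitudes comparable.
for L in (8, 32, 128):
    print(L, depth_aware_lr(1e-4, 256, L))
```

The practical appeal is hyperparameter transfer: a base rate tuned on a shallow, narrow proxy network carries over to deeper and wider ones via this rescaling.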
![The system iteratively refines node descriptions within a closed loop, leveraging a graph neural network (GNN) to provide task feedback and a model-conditioned memory to retrieve relevant in-graph exemplars, guiding a large language model (LLM) to update node semantics before these are fed back into the GNN for continuous improvement.](https://arxiv.org/html/2512.21106v1/x2.png)
A new approach leverages the power of large language models to refine the semantic understanding of nodes within graph structures, leading to improved performance and adaptability.
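The closed loop in the figure can be sketched as follows. All three helpers here are hypothetical toy stubs standing in for the paper's actual GNN, memory, and LLM components; only the control flow reflects the described system:

```python
from typing import Dict, List, Tuple

def run_gnn(graph, texts: Dict[int, str]) -> Tuple[dict, Dict[int, str]]:
    # Toy "GNN": flags nodes whose description is still the bare default.
    feedback = {i: "needs detail" for i, t in texts.items() if t == "node"}
    return {}, feedback

def retrieve_exemplars(memory: Dict[int, List[str]], node_id: int) -> List[str]:
    # Toy model-conditioned memory: returns stored in-graph exemplars.
    return memory.get(node_id, [])

def llm_rewrite(text: str, signal: str, exemplars: List[str]) -> str:
    # Toy "LLM": concatenates context instead of calling a real model.
    return text + " | " + signal + " | " + ", ".join(exemplars)

def refine_loop(graph, texts, memory, rounds=2):
    for _ in range(rounds):
        _, feedback = run_gnn(graph, texts)           # GNN task feedback
        for node_id, signal in feedback.items():
            ex = retrieve_exemplars(memory, node_id)  # in-graph exemplars
            texts[node_id] = llm_rewrite(texts[node_id], signal, ex)
    return texts

texts = refine_loop(None, {0: "node", 1: "hub node"}, {0: ["exemplar A"]})
print(texts[0])
```

The point of the loop structure is that refined descriptions are fed back into the GNN, so feedback in later rounds reflects the updated semantics rather than the original ones.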
![Combinatorial optimization problems defined on graph structures encompass a diverse range of challenges, fundamentally categorized by constraints on node and edge variables (such as maximizing flow through a network [latex] G = (V, E) [/latex], minimizing the cost of traversing a graph, or satisfying complex relationships between interconnected elements), ultimately requiring algorithms to navigate this landscape of possibilities and identify provably optimal solutions.](https://arxiv.org/html/2512.20915v1/Classification_of_graph_COPs.png)
Researchers have developed a framework to predict how challenging a graph-based problem will be, offering insights into its inherent complexity.