Untangling Neural Network Dynamics

Feature learning performance in a linear network with a single hidden layer shows strong agreement between theory and simulation: predictions from the Neural Network Gaussian Process (NNGP) and from the Li & Sompolinsky approach both match direct Langevin-dynamics simulation, yielding comparable mean discrepancies [latex]\langle\Delta\alpha\rangle[/latex]. The comparison uses [latex]P=80[/latex] training examples, hidden width [latex]N=100[/latex], input dimension [latex]d=200[/latex], and an Ising task with [latex]p=0.1[/latex] and regularizer [latex]\kappa=0.01[/latex]; the Langevin dynamics runs for one million training steps, from which twenty thousand samples are drawn.
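To make the setup concrete, here is a minimal sketch of Langevin sampling for a single-hidden-layer linear network under a Gibbs posterior. Only [latex]P[/latex], [latex]N[/latex], [latex]d[/latex], and [latex]\kappa[/latex] are taken from the text; the Ising-style input generator, the teacher, the mean-field scaling, the step size, and the (much shorter) run length are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

# Minimal sketch (not the study's code): Langevin sampling of a
# single-hidden-layer linear network under the Gibbs posterior
# exp(-[squared error/(2*kappa) + Gaussian prior on weights]).
# P, N, d, kappa mirror the text; everything else is assumed.
# The role of p=0.1 in the Ising task is not reconstructed here.
rng = np.random.default_rng(0)

P, N, d = 80, 100, 200        # training examples, hidden width, input dim
kappa = 0.01                  # regularizer from the text
eta, n_steps = 1e-4, 10_000   # step size / steps (far fewer than the 10^6 used)

X = rng.choice([-1.0, 1.0], size=(P, d))     # Ising-like +/-1 inputs (assumed)
y = X @ rng.standard_normal(d) / np.sqrt(d)  # hypothetical linear teacher

W = rng.standard_normal((N, d))  # input-to-hidden weights, unit Gaussian prior
a = rng.standard_normal(N)       # hidden-to-output weights, same prior
scale = np.sqrt(N * d)           # assumed mean-field scaling of the readout

def forward(W, a):
    """Linear network output f(x) = a^T W x / sqrt(N*d) on the train set."""
    return (X @ W.T) @ a / scale

preds = []
for step in range(n_steps):
    err = forward(W, a) - y                             # shape (P,)
    # Gradients of the Gibbs energy wrt a and W (the prior contributes
    # the identity term, i.e. plain weight decay).
    grad_a = (X @ W.T).T @ err / (kappa * scale) + a
    grad_W = np.outer(a, err @ X) / (kappa * scale) + W
    # Langevin update: gradient step plus sqrt(2*eta) Gaussian noise.
    a += -eta * grad_a + np.sqrt(2 * eta) * rng.standard_normal(a.shape)
    W += -eta * grad_W + np.sqrt(2 * eta) * rng.standard_normal(W.shape)
    if step % 50 == 0:
        preds.append(forward(W, a))                     # posterior samples of f

posterior_mean = np.mean(preds, axis=0)
print("mean train discrepancy:", np.mean(np.abs(posterior_mean - y)))
```

Averaging the sampled outputs approximates the posterior mean of the network function; comparing that mean against a theoretical prediction would give a discrepancy analogous to [latex]\langle\Delta\alpha\rangle[/latex], though the sketch above only prints the train-set gap as a sanity check.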

New research bridges the gap between Gaussian processes and finite-width neural networks to reveal the underlying principles governing deep learning.