Guiding Giants: Eliciting Hidden Potential from Large Language Models
![The PowerFlow framework dynamically adjusts the model's output distribution, sharpening it with [latex]\alpha > 1[/latex] to enhance logical reasoning or flattening it with [latex]\alpha < 1[/latex] to encourage creative exploration, resulting in a Pareto improvement over existing approaches to directional capability elicitation.](https://arxiv.org/html/2603.18363v1/x1.png)
A new approach unlocks directional control over language model capabilities, boosting reasoning skills or restoring creative flair without relying on traditional reward-based reinforcement learning.
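
For intuition, here is a minimal sketch of the kind of power transform the figure describes, assuming (as the caption suggests) that PowerFlow raises each next-token probability to an exponent [latex]\alpha[/latex] and renormalizes; the function name and toy distribution are illustrative, not the paper's API, and the actual method may operate on logits or another quantity.

```python
import numpy as np

def power_transform(probs: np.ndarray, alpha: float) -> np.ndarray:
    """Raise a probability distribution to the power alpha and renormalize.

    alpha > 1 sharpens the distribution (mass concentrates on likely tokens);
    alpha < 1 flattens it (mass spreads toward unlikely tokens).
    Illustrative sketch only; not the paper's implementation.
    """
    p = np.power(probs, alpha)
    return p / p.sum()

# Toy next-token distribution over a 5-token vocabulary.
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])

sharpened = power_transform(probs, alpha=2.0)  # more peaked: deterministic, reasoning-style
flattened = power_transform(probs, alpha=0.5)  # closer to uniform: exploratory, creative

print(sharpened)
print(flattened)
```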

![A novel Meta-Bayesian Federated Learning with Federated Fine-tuning (Meta-BayFLFL) algorithm enables personalized learning across [latex]K[/latex] clients connected to a global server: each client optimizes a local model (built using Binarized Neural Networks) by adaptively selecting from a range of temporary learning rates, and the server aggregates these refined local models to distribute an updated global model, facilitating efficient and customized learning at scale.](https://arxiv.org/html/2603.18083v1/x2.png)
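
A schematic sketch of the client/server loop the caption describes, under loud simplifying assumptions: the Binarized Neural Network local model is replaced by a plain least-squares model, a grid search over candidate learning rates stands in for the adaptive selection, and the server uses a FedAvg-style average; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, candidate_lrs, steps=20):
    """One client: try each candidate (temporary) learning rate on a local
    least-squares objective and keep the refinement with the lowest loss.
    The grid search is an illustrative stand-in for adaptive selection."""
    best_w, best_loss = None, np.inf
    for lr in candidate_lrs:
        w = global_w.copy()
        for _ in range(steps):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * MSE
            w -= lr * grad
        loss = 0.5 * np.mean((X @ w - y) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# K clients with heterogeneous local data (the personalization setting).
K, d = 4, 5
true_w = rng.normal(size=d)
clients = []
for _ in range(K):
    X = rng.normal(size=(50, d))
    y = X @ (true_w + 0.1 * rng.normal(size=d))  # client-specific shift
    clients.append((X, y))

global_w = np.zeros(d)
for _ in range(10):
    # Each client refines the broadcast global model locally...
    local_models = [local_update(global_w, X, y, candidate_lrs=[0.01, 0.05, 0.1])
                    for X, y in clients]
    # ...and the server aggregates and redistributes (FedAvg-style average).
    global_w = np.mean(local_models, axis=0)

print("distance to true weights:", np.linalg.norm(global_w - true_w))
```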

![Transformer-based in-context learning demonstrates robustness across noise distributions, including non-Gaussian scenarios such as Bernoulli, exponential, Gamma, and Poisson, where the [latex]\ell_{1}[/latex] objective aligns with maximum-likelihood estimation. It even extends to heavy-tailed distributions such as the Student-t (with [latex]\nu = 2[/latex]) that fall outside traditional finite-variance statistical frameworks, offering performance comparable to, or exceeding, classical estimators such as least squares, Ridge regression, and [latex]\ell_{1}[/latex] solvers (LP and ADMM).](https://arxiv.org/html/2603.18564v1/figs/500k_steps_student_noise_df_2_noise_std_3_with_data_gamma_and_w_gaussian.png)
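
To see why an [latex]\ell_{1}[/latex] objective helps under heavy tails, here is a minimal sketch comparing two classical baselines named in the caption, least squares and an [latex]\ell_{1}[/latex] fit via LP, on Student-t ([latex]\nu = 2[/latex]) noise; the problem sizes and noise scale are illustrative assumptions, and the in-context transformer itself is omitted.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
# Student-t noise with nu = 2: heavy-tailed, infinite variance.
y = X @ w_true + 3.0 * rng.standard_t(df=2, size=n)

# Ordinary least squares (a classical baseline from the caption).
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# l1 regression as a linear program: minimize sum(t) s.t. -t <= Xw - y <= t,
# with decision variables [w, t].
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w_l1 = res.x[:d]

print("least-squares error:", np.linalg.norm(w_ls - w_true))
print("l1 (LP) error:     ", np.linalg.norm(w_l1 - w_true))  # typically smaller here
```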


