Learning to Optimize with Limited Data

A three-stage amortized optimization framework (approximate label collection, supervised pretraining, and self-supervised training) achieves up to a 59× reduction in offline computational cost while improving accuracy, optimality, and feasibility over established baselines.

The framework leverages inexpensive approximate labels and self-supervision to make surrogate-based optimization for complex problems more robust and efficient.
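To make the three stages concrete, here is a minimal sketch on an invented toy problem family (minimizing a scalar quadratic g(x; a, b) = a·x² + b·x). This is not the paper's implementation: the problem, the linear surrogate, and all names are assumptions for illustration. Stage 1 produces cheap, deliberately crude labels by running only a few gradient steps; stage 2 pretrains the surrogate on those labels by least squares; stage 3 fine-tunes without any labels by descending the objective value itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem family: minimize g(x; a, b) = a*x^2 + b*x over scalar x.
# The exact optimum is x* = -b / (2a); the surrogate must learn this map.
N = 200
a = rng.uniform(0.5, 2.0, N)
b = rng.uniform(-1.0, 1.0, N)
feat = b / a  # hand-chosen feature: the true solution map is x* = -0.5 * feat

def grad_g(x, a, b):
    # Gradient of the objective g with respect to the decision variable x.
    return 2 * a * x + b

# Stage 1: approximate label collection -- a few truncated gradient steps
# per problem instance, far cheaper than solving each instance to optimality.
x_hat = np.zeros(N)
for _ in range(5):
    x_hat -= 0.1 * grad_g(x_hat, a, b)

# Stage 2: supervised pretraining -- least-squares fit of a linear
# surrogate x = w*feat + c to the crude labels from stage 1.
A = np.stack([feat, np.ones(N)], axis=1)
(w, c), *_ = np.linalg.lstsq(A, x_hat, rcond=None)

# Stage 3: self-supervised training -- descend the mean objective
# g(model(a, b); a, b) directly, with no labels at all.
for _ in range(500):
    x = w * feat + c
    dg = grad_g(x, a, b)            # dg/dx at the surrogate's prediction
    w -= 0.1 * np.mean(dg * feat)   # chain rule: dx/dw = feat
    c -= 0.1 * np.mean(dg)          # chain rule: dx/dc = 1

print(w, c)  # approaches the exact map x* = -0.5*(b/a), i.e. w ≈ -0.5, c ≈ 0
```

The key point the sketch shows is the division of labor: the cheap labels only need to be good enough to initialize the surrogate, because the self-supervised stage can refine it using the objective value alone.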