Solve Bi-Level Optimization Model for Meta-Learning Using Method of Lagrangian Multipliers


Abstract

Optimization-based meta-learning has emerged as a powerful framework for improving model generalization, especially in domains with diverse and heterogeneous data distributions. In this work, we propose a bilevel optimization model for meta-learning, explicitly framed through an optimal control perspective. Our approach formulates the meta-training process as a constrained optimization problem in which the lower level updates task-specific models using a learnable unrolling network and the upper level adjusts hyperparameters to minimize validation losses across tasks. By applying the Method of Lagrangian Multipliers (MLM), we model both the primal reconstruction variables and the dual multipliers, ensuring that updates respect the dynamic constraints of the optimization process. We prove the theoretical equivalence between direct loss minimization and Lagrangian-based optimization and develop an efficient algorithm for network training. Experiments motivated by magnetic resonance imaging (MRI) reconstruction suggest that our framework offers scalable and principled solutions, with potential for broader impact on general inverse problems and meta-learning scenarios.
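
A minimal sketch may help fix ideas about the constrained formulation described above. The notation here (tasks i = 1, ..., N, unrolled iterates x_i^k over a horizon K, a learnable update operator Phi_theta, and multipliers lambda_i^k) is illustrative and not taken from the paper; it only shows the generic shape of a bilevel problem whose lower-level unrolled updates are treated as equality constraints and absorbed into a Lagrangian.

```latex
% Upper level: choose hyperparameters \theta to minimize validation losses across tasks,
% subject to lower-level task-specific updates produced by a learnable unrolling network \Phi_\theta.
\min_{\theta} \; \sum_{i=1}^{N} \mathcal{L}^{\mathrm{val}}_i\!\big(x_i^{K}\big)
\quad \text{s.t.} \quad
x_i^{k+1} = \Phi_\theta\big(x_i^{k}\big), \qquad k = 0, \dots, K-1, \;\; i = 1, \dots, N.

% Method-of-multipliers view: attach dual multipliers \lambda_i^{k+1} to each unrolled
% update constraint and form the Lagrangian over primal iterates and duals.
\mathcal{L}\big(\theta, \{x_i^k\}, \{\lambda_i^k\}\big)
= \sum_{i=1}^{N} \mathcal{L}^{\mathrm{val}}_i\!\big(x_i^{K}\big)
+ \sum_{i=1}^{N} \sum_{k=0}^{K-1}
\big\langle \lambda_i^{k+1},\, x_i^{k+1} - \Phi_\theta\big(x_i^{k}\big) \big\rangle.
```

In this generic sketch, stationarity with respect to the multipliers recovers the forward (primal) unrolled updates, while stationarity with respect to the iterates yields a backward (adjoint) recursion for the lambda_i^k terms, which is the optimal-control reading of the constrained meta-training problem that the abstract alludes to.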
