Alternating Method of Successive Approximations

Abstract

In this work, we propose a principled deep learning framework for solving inverse problems by casting them as optimal control problems. Building on variational models, we formulate the reconstruction task as the minimization of an energy functional that combines a data fidelity term with a learnable regularizer parameterized by deep neural networks. To solve the resulting nonconvex and nonsmooth optimization problem, we employ a gradient flow approach, which yields a continuous-time dynamical system. Learning the network parameters is then posed as an optimal control problem in which the parameters act as controls that minimize the sum of a terminal cost and an integrated running cost. We adopt the Method of Successive Approximations (MSA), a theoretically grounded algorithm inspired by the Pontryagin Maximum Principle, to solve the control problem iteratively. Each iteration alternates between solving a forward state equation and a backward adjoint equation, and then updates the parameters via Hamiltonian maximization. We show that when gradient ascent is used for the Hamiltonian step, the MSA framework recovers classical back-propagation. Moreover, we discuss the computational challenges associated with MSA, in particular memory that grows linearly with the number of time-discretization steps, and outline strategies for memory-efficient implementation. Numerical results on sparse-view CT and accelerated MRI reconstruction demonstrate the effectiveness and robustness of the proposed method, which offers a theoretically interpretable and practically scalable alternative to conventional deep learning-based reconstruction techniques.
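
To make the pipeline the abstract describes concrete, the display below sketches the objects involved. The notation is an illustrative assumption, not the paper's own: A denotes the forward operator, y the measurements, R_theta the learned regularizer, Phi and L the terminal and running costs, and p the adjoint state.

```latex
% Illustrative notation only, assumed for this sketch rather than
% taken from the paper: A is the forward operator, y the measurements,
% R_theta the learned regularizer, Phi/L the terminal/running costs.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Variational energy: data fidelity plus a learnable regularizer
\[ E_\theta(x) = \tfrac{1}{2}\,\lVert Ax - y\rVert_2^2 + R_\theta(x) \]
% Gradient flow of the energy: a continuous-time dynamical system
\[ \dot{x}(t) = -\nabla_x E_{\theta(t)}\bigl(x(t)\bigr), \qquad x(0) = x_0 \]
% Learning as optimal control over the parameter trajectory theta(t)
\[ \min_{\theta(\cdot)} \; \Phi\bigl(x(T)\bigr)
   + \int_0^T L\bigl(x(t), \theta(t)\bigr)\, dt \]
% Hamiltonian and adjoint equation behind MSA (Pontryagin Maximum Principle)
\[ H(x, p, \theta) = p^{\top}\bigl(-\nabla_x E_\theta(x)\bigr) - L(x, \theta),
   \qquad \dot{p}(t) = -\nabla_x H, \quad p(T) = -\nabla\Phi\bigl(x(T)\bigr) \]
\end{document}
```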
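For intuition, here is a minimal sketch of one time-discretized MSA sweep: a forward state solve, a backward adjoint solve, then Hamiltonian maximization by gradient ascent. All names and interfaces (msa_step, f, df_dx, df_dth, dPhi_dx) are hypothetical placeholders, not the authors' implementation, and the running cost is omitted for brevity.

```python
# Minimal sketch of one MSA sweep for a time-discretized control problem.
# Hypothetical interfaces; with the running cost omitted, the Hamiltonian
# reduces to H(x, p, theta) = p . f(x, theta).
import numpy as np

def msa_step(x0, thetas, f, df_dx, df_dth, dPhi_dx, lr=0.1, ascent_steps=5):
    """One Method-of-Successive-Approximations iteration.

    f(x, th)       discrete dynamics: x_{k+1} = f(x_k, th_k)
    df_dx, df_dth  Jacobians of f w.r.t. state and parameters
    dPhi_dx        gradient of the terminal cost Phi
    """
    T = len(thetas)

    # 1) Forward pass: solve the state equation. Storing the whole
    #    trajectory is the linear memory growth the abstract discusses.
    xs = [x0]
    for k in range(T):
        xs.append(f(xs[k], thetas[k]))

    # 2) Backward pass: solve the adjoint equation with terminal
    #    condition p_T = -grad Phi(x_T).
    ps = [None] * (T + 1)
    ps[T] = -dPhi_dx(xs[T])
    for k in reversed(range(T)):
        ps[k] = df_dx(xs[k], thetas[k]).T @ ps[k + 1]

    # 3) Hamiltonian maximization by gradient ascent on
    #    H = p . f(x, theta); a single ascent step is exactly a
    #    back-propagation-style update of theta_k.
    new_thetas = []
    for k in range(T):
        th = thetas[k]
        for _ in range(ascent_steps):
            th = th + lr * df_dth(xs[k], th).T @ ps[k + 1]
        new_thetas.append(th)
    return new_thetas

if __name__ == "__main__":
    # Toy linear dynamics x_{k+1} = x_k + h * (theta_k - x_k),
    # terminal cost Phi(x) = 0.5 * ||x - 1||^2.
    h, T, n = 0.1, 10, 4
    f = lambda x, th: x + h * (th - x)
    df_dx = lambda x, th: (1.0 - h) * np.eye(n)
    df_dth = lambda x, th: h * np.eye(n)
    dPhi_dx = lambda x: x - np.ones(n)
    thetas = [np.zeros(n) for _ in range(T)]
    thetas = msa_step(np.zeros(n), thetas, f, df_dx, df_dth, dPhi_dx)
    print(thetas[0])
```

With ascent_steps=1 the parameter update coincides with a gradient-descent step on the terminal cost, which is the back-propagation equivalence the abstract mentions.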
