CLSP: Linear Algebra Foundations of a Modular Two-Step Convex Optimization-Based Estimator for Ill-Posed Problems

Abstract

This paper develops the linear-algebraic foundations of the Convex Least Squares Programming (CLSP) estimator and constructs its modular two-step convex-optimization framework for ill-posed and underdetermined problems. After a problem is reformulated in the canonical form A^(r) z^(r) = b, Step 1 yields an iterated (if r > 1) minimum-norm least-squares estimate ẑ^(r) = (A Z^(r))† b on a constrained subspace defined by a symmetric idempotent matrix Z (reducing to the Moore–Penrose pseudoinverse solution when Z = I). The optional Step 2 corrects ẑ^(r) by solving a convex program that penalizes deviations through a lasso/ridge/elastic-net scheme parameterized by α ∈ [0, 1], yielding z*. This second step guarantees a unique solution for α ∈ (0, 1] and coincides with the Minimum-Norm BLUE (MNBLUE) when α = 1. The paper also analyzes numerical stability and proposes CLSP-specific goodness-of-fit statistics: partial R², normalized RMSE (NRMSE), Monte Carlo t-tests for the mean NRMSE, and condition-number-based confidence bands. Three special CLSP problem cases are then tested in a 50,000-iteration Monte Carlo experiment and on simulated numerical examples. The estimator has a wide range of applications, including the interpolation of input-output tables and structural matrices.
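The Step-1 estimate described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the matrix sizes, the random data, and the choice of projector Z are assumptions made here purely for demonstration, and the iterated case r > 1 and the Step-2 convex correction are omitted.

```python
import numpy as np

# Hypothetical underdetermined system A z = b with m < n (data invented here).
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Case 1: Z = I, so the Step-1 estimate reduces to the plain
# Moore-Penrose minimum-norm least-squares solution pinv(A) @ b.
Z = np.eye(n)
z_hat = np.linalg.pinv(A @ Z) @ b     # z_hat = (A Z)^+ b

# Case 2: a nontrivial symmetric idempotent Z (Z @ Z == Z == Z.T)
# restricting the estimate to the first three coordinates; the
# resulting z_hat2 has zero entries outside range(Z2).
Z2 = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
z_hat2 = np.linalg.pinv(A @ Z2) @ b
```

With a generic full-row-rank A, both estimates reproduce b exactly, and z_hat coincides with the minimum-norm solution returned by np.linalg.lstsq(A, b).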