Differentially Private Lasso: An ISTA Framework with Finite-Iteration Guarantees


Abstract

Differential privacy (DP) offers a principled way to protect individual records, but in high-dimensional sparse regression it introduces a delicate accuracy-privacy tradeoff. In this paper, we develop an ISTA-based framework for DP estimation in the high-dimensional sparse linear model, instantiated for the Lasso objective. Our main contribution is a set of finite-iteration, high-probability ℓ2 guarantees for the returned iterates. Across the considered DP mechanisms, the bounds admit an interpretable form: a nonprivate baseline term, a privacy-induced term determined by the effective noise level of the DP mechanism and its accounting, and an optimization residual that vanishes as the iteration budget increases. To enable stable implementations and principled Gaussian calibration, our algorithms incorporate clipping and an ℓ2 projection step. Simulation studies and real-data experiments under matched privacy budgets support the theoretical predictions and demonstrate competitive accuracy in high-dimensional regimes.
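The abstract outlines the algorithmic ingredients: proximal gradient (ISTA) steps for the Lasso objective, per-sample gradient clipping, Gaussian noise injection, and an ℓ2 projection of the iterate. The following is a minimal illustrative sketch of such a noisy ISTA loop, not the authors' implementation; all parameter names (`clip`, `radius`, `sigma`, `eta`, `lam`, `T`) are hypothetical, and the noise scale `sigma` is assumed to have been calibrated separately by the DP accounting discussed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the ℓ1 penalty (the Lasso "shrinkage" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dp_ista_lasso(X, y, lam, eta, T, clip, radius, sigma, seed=None):
    """Illustrative noisy ISTA for the Lasso (hypothetical interface).

    Each iteration: clip per-sample gradients to norm `clip`, average,
    add Gaussian noise with std sigma * clip / n (sigma assumed to come
    from a separate DP accounting step), take a proximal gradient step,
    then project the iterate onto the ℓ2 ball of radius `radius`.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(T):
        residual = X @ beta - y
        grads = X * residual[:, None]                 # per-sample gradients, (n, p)
        norms = np.linalg.norm(grads, axis=1)
        scale = np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        g = (grads * scale[:, None]).sum(axis=0) / n  # clipped, averaged gradient
        g += rng.normal(0.0, sigma * clip / n, size=p)  # Gaussian mechanism noise
        beta = soft_threshold(beta - eta * g, eta * lam)  # ISTA / prox step
        b_norm = np.linalg.norm(beta)
        if b_norm > radius:                           # ℓ2 projection step
            beta *= radius / b_norm
    return beta
```

With `sigma = 0` the loop reduces to ordinary clipped ISTA, which makes the "optimization residual" term in the bounds easy to probe in isolation before turning privacy noise on.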