Secure and Lossless Federated Matrix Learning for Recommender Systems


Abstract

Privacy concerns within recommender systems have emerged as a pivotal challenge, garnering significant scrutiny from both academic and industrial sectors. While federated matrix factorization has been proposed to enhance privacy via local differential privacy, its applicability is often hindered by two fundamental limitations: the inherent utility-privacy trade-off and suboptimal training convergence. To address these challenges, this paper introduces a Secure and Lossless Federated Matrix Factorization (SLFedMF) framework tailored for diverse deployment environments. Specifically, we integrate perturbation with a mask mechanism to provide robust, multi-dimensional privacy guarantees. To eliminate the accuracy degradation typical of perturbation-based methods, we employ a denoising mechanism run by a trusted party authority, enabling clients to reconstruct noise-free global gradients locally. Furthermore, the Barzilai-Borwein method is leveraged to adaptively optimize learning rates, significantly accelerating model convergence. We present two variants: SLFedMF-Full for synchronous full-device participation and SLFedMF-Part to mitigate the straggler effect in partial-participation scenarios. Extensive experiments on four real-world datasets demonstrate that SLFedMF outperforms state-of-the-art methods, achieving recommendation accuracy equivalent to that of non-private models while maintaining stringent privacy standards.
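The abstract's adaptive learning rate relies on the Barzilai-Borwein (BB) method, which sets the step size from the last two iterates and gradients rather than a fixed schedule. The sketch below is not the paper's federated algorithm; it is a minimal, centralized illustration of the standard BB1 step size, `alpha_k = (s^T s) / (s^T y)` with `s = x_k - x_{k-1}` and `y = g_k - g_{k-1}`, applied to a toy quadratic (the function, names, and parameters are illustrative assumptions, not from the paper):

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=0.01, iters=50):
    """Gradient descent with Barzilai-Borwein (BB1) adaptive step sizes.

    grad: callable returning the gradient at a point.
    x0:   starting point (1-D ndarray).
    alpha0: fallback/initial step size before BB information exists.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev          # plain first step
    for _ in range(iters):
        g = grad(x)
        s = x - x_prev                    # iterate difference
        y = g - g_prev                    # gradient difference
        denom = s @ y
        # BB1 step: (s^T s) / (s^T y); fall back if the curvature estimate degenerates
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Demo: minimize f(x) = 0.5 * x^T A x - b^T x, whose minimizer solves A x = b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
```

Because the BB step approximates the inverse curvature along the most recent direction, it typically converges far faster than a fixed step on ill-conditioned problems, which is the acceleration effect the abstract attributes to this choice.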
