Federated Recommendation with Dual Additive Decoupling

Abstract

Federated learning provides a privacy-preserving paradigm for building recommender systems. Most existing federated recommendation methods adopt a partial sharing design: item embeddings are shared globally via a server, while user embeddings are kept strictly local. However, this conventional split may fail to fully extract and leverage common preference patterns among users, limiting the model's ability to generalize, especially for users with sparse data. To bridge this gap, this paper proposes FedDAD, a federated recommendation algorithm with dual additive decoupling. FedDAD simultaneously decouples both user and item embeddings into private and shared components. This dual structure facilitates comprehensive personalization while enabling collaborative knowledge sharing for both users and items. Furthermore, to enhance communication efficiency, we impose L1 sparsity constraints on the global parameters exchanged between clients and the server. Extensive experiments on five real-world datasets demonstrate that FedDAD consistently and significantly outperforms state-of-the-art federated recommendation baselines in key ranking metrics (HR@10 and NDCG@10). These results validate its effectiveness in achieving superior recommendation performance while maintaining communication efficiency.
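To make the abstract's mechanism concrete, here is a minimal sketch of the dual additive decoupling idea, assuming a PyTorch implementation. The class, dimensions, and loss below are illustrative assumptions, not the paper's actual code: each embedding is the sum of a private table kept on the client and a shared table synchronized with the server, and an L1 penalty on the shared parameters sparsifies the communicated update.

```python
import torch
import torch.nn as nn

class DualAdditiveEmbedding(nn.Module):
    """Additively decoupled embedding: e = e_private + e_shared.

    The private table stays on the client; only the shared table
    (pushed toward sparsity by an L1 penalty) is exchanged with
    the server. Names and sizes here are hypothetical.
    """

    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        self.private = nn.Embedding(num_entities, dim)  # kept local
        self.shared = nn.Embedding(num_entities, dim)   # sent to server

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # The effective representation is the sum of both components.
        return self.private(ids) + self.shared(ids)

    def l1_penalty(self) -> torch.Tensor:
        # Sparsity constraint on the globally exchanged parameters,
        # encouraging compact client-server updates.
        return self.shared.weight.abs().sum()


# Hypothetical client-side step: both user and item embeddings
# are decoupled, mirroring the abstract's "dual" structure.
users = DualAdditiveEmbedding(num_entities=1000, dim=32)
items = DualAdditiveEmbedding(num_entities=5000, dim=32)

u = users(torch.tensor([3]))    # (1, 32) user representation
v = items(torch.tensor([42]))   # (1, 32) item representation
score = (u * v).sum(dim=-1)     # dot-product preference score

lam = 1e-4  # illustrative L1 weight
loss = -torch.log(torch.sigmoid(score)).mean() \
       + lam * (users.l1_penalty() + items.l1_penalty())
loss.backward()  # only shared.weight gradients would be uploaded
```

In this reading, a federated round would upload only the (sparse) shared tables, while the private tables absorb client-specific deviations; how FedDAD aggregates the shared components on the server is detailed in the full article.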
