Personalized Conformity Disentanglement for Debiased Recommendations

Abstract

Traditional recommender systems often learn spurious correlations between user/item profiles and interaction predictions while ignoring hidden confounders such as user conformity, which affects both an item's popularity and its ratings. Item popularity, in turn, introduces popularity bias into observed interactions, degrading recommendation quality. Existing methods typically address this bias from a causal perspective: within the causal inference framework, item popularity corresponds to the treatment and ratings to the outcome. However, we argue that popularity bias is personalized, necessitating a personalized approach to debiasing. In this work, we propose PCDR, a personalized causal disentanglement model for debiased recommendation. We analyze the causality behind user-item interactions and employ multiple encoders to learn distinct representations for each user. To further enhance this distinction, we apply contrastive learning to separate users' conformity levels across items of different popularity. To validate the effectiveness of PCDR, we conducted experiments on three real-world datasets (ML-1M, Netflix, and Amazon). Our results demonstrate significant improvements over state-of-the-art models on three evaluation metrics: Recall (34.59% improvement), NDCG (21.00% improvement), and HR (27.50% improvement). The source code is available at: https://anonymous.4open.science/r/PCDR.
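The contrastive step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes an InfoNCE-style objective in which a user's conformity embedding for one interaction (the anchor) is pulled toward an embedding from an item of similar popularity (the positive) and pushed away from embeddings tied to items of very different popularity (the negatives). All names and the toy embeddings are hypothetical.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized embeddings:
    pull the anchor toward the positive, push it from the negatives."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    pos_sim = np.exp(a @ p / temperature)        # scalar: anchor-positive
    neg_sim = np.exp(n @ a / temperature).sum()  # summed over negatives
    return -np.log(pos_sim / (pos_sim + neg_sim))

rng = np.random.default_rng(0)
dim = 16
# Hypothetical conformity embeddings: anchor and positive come from
# interactions with items of similar popularity; negatives come from
# items with very different popularity levels.
anchor = rng.normal(size=dim)
positive = anchor + 0.05 * rng.normal(size=dim)
negatives = rng.normal(size=(8, dim))
loss = info_nce(anchor, positive, negatives)
print(float(loss))
```

Minimizing such a loss over many (anchor, positive, negatives) triples separates conformity representations by item popularity, which is the disentanglement effect the abstract refers to.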
