ResCLG: Improving Recommendation via Contrastive Alignment and Residual Propagation in Graph Networks

Abstract

To address data sparsity and the difficulty of deeply modeling user-item interactions in recommender systems, this work proposes ResCLG, a graph-neural-network-based recommendation model that combines contrastive learning with residual connections. The model employs a contrastive learning module to improve the alignment quality of user-item representations, while residual connections mitigate the over-smoothing problem in deep graph convolution and strengthen the model's representational capacity. Experiments on three benchmark datasets show that ResCLG outperforms mainstream baselines such as LightGCN and SGL in terms of Recall and NDCG; on the Amazon-Book dataset, the improvement exceeds 22%. Notably, ResCLG remains strong when the temperature parameter τ in contrastive learning is set to a relatively low value, and its performance degrades far less than that of SGL. This suggests that the latent representations produced by ResCLG are of higher quality, less susceptible to noise, and better able to exploit strict contrastive signals. The study offers a new route toward building efficient and robust graph recommendation models, with both theoretical and practical significance.
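The abstract names two mechanisms: contrastive alignment governed by a temperature parameter τ and residual connections in deep graph convolution. Since the full paper is not reproduced here, the sketch below is only an illustration under assumptions: the function names, the layer-0 residual term, and the LightGCN-style layer averaging are hypothetical choices, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def residual_propagation(embeddings, norm_adj, num_layers=3):
    """LightGCN-style message passing with a residual (skip) connection per layer.

    `norm_adj` is assumed to be a sparse, symmetrically normalized user-item
    adjacency matrix. Adding the layer-0 embeddings back at every layer is one
    common way to ease over-smoothing in deep graph convolution.
    """
    h = embeddings
    layer_outputs = [h]
    for _ in range(num_layers):
        h = torch.sparse.mm(norm_adj, h) + embeddings  # residual to initial features
        layer_outputs.append(h)
    # Average the layer outputs, as LightGCN does (an assumption here).
    return torch.stack(layer_outputs, dim=0).mean(dim=0)


def info_nce(z1, z2, tau=0.2):
    """InfoNCE contrastive loss aligning two views of the same nodes.

    A lower tau sharpens the softmax, i.e. a stricter contrast signal,
    which is the regime the abstract highlights.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)           # positives on the diagonal
```

In this reading, the propagated embeddings from two augmented graph views would be fed to `info_nce` as an auxiliary loss alongside the usual ranking objective; the exact augmentation and loss weighting are details only the full paper can confirm.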
