Comparative Analysis of Privacy-Preserving Collaborative Learning Approaches: Security, Efficiency, and Convergence

Abstract

This study explores the comparative strengths of five distributed learning models: Federated Learning (FL), Blind Federated Learning (BFL), Blended Blind Federated Learning (BBFL), Split Learning (SL), and Decentralized Learning (DL), evaluating their performance metrics, convergence rates, and security features. Distributed learning models aim to leverage data from multiple clients while maintaining privacy. Each model uses different architectural and security mechanisms to achieve this, resulting in distinct strengths and limitations in scalability, resilience to attacks, and data integrity. By comparing convergence speed, accuracy, and resilience to specific threats, this paper provides insights into each model's effectiveness for privacy-sensitive applications, particularly in handling model inversion, gradient leakage, and data poisoning. Results suggest that while DL exhibits superior performance and security, BBFL and SL provide competitive alternatives for structured environments with moderate security needs. This analysis offers a framework to guide distributed learning model selection based on application requirements and security priorities.
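To make the FL baseline concrete, the following is a minimal sketch of the Federated Averaging (FedAvg) aggregation loop that underlies Federated Learning: each client trains on its private data locally and only model parameters, never raw data, are sent to the server for averaging. The toy model (a one-parameter linear fit), the client datasets, and the learning rate are all invented for illustration and are not drawn from the study.

```python
# Hypothetical FedAvg illustration: scalar linear model y = w * x,
# three clients, weighted averaging of locally trained weights.
import random

def local_update(weight, data, lr=0.01, epochs=5):
    """One client's local SGD on its private data; raw data never leaves the client."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x  # d/dw of the squared error (w*x - y)^2
            weight -= lr * grad
    return weight

def fedavg_round(global_weight, client_datasets):
    """Server broadcasts the global weight; clients train locally;
    server returns the dataset-size-weighted average of the updates."""
    updates = [(local_update(global_weight, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

random.seed(0)
# Synthetic private data per client, following y = 3x plus small noise.
clients = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(3)]

w = 0.0
for _ in range(10):
    w = fedavg_round(w, clients)
print(w)  # converges near the true slope, 3
```

The privacy-relevant point of the sketch is that `fedavg_round` sees only weights, not data; the attacks the abstract mentions (model inversion, gradient leakage) target exactly these exchanged parameters, which is what motivates the hardened variants (BFL, BBFL, SL, DL) compared in the paper.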