SplitML: A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments
Abstract
While Federated Learning (FL) and Split Learning (SL) aim to uphold data confidentiality through localized training, they remain susceptible to adversarial threats such as model poisoning and sophisticated inference attacks. To mitigate these vulnerabilities, we propose SplitML, a secure and privacy-preserving framework for Federated Split Learning (FSL). By integrating IND-CPA^D-secure Fully Homomorphic Encryption (FHE) with Differential Privacy (DP), SplitML establishes a defense-in-depth strategy that minimizes information leakage and thwarts reconstruction-based inference attempts. The framework accommodates heterogeneous model architectures by allowing clients to collaboratively train only the common top layers while keeping their bottom layers exclusive to each participant; this partitioning ensures that the layers closest to the sensitive input data are never exposed to the centralized server. During training, participants use multi-key CKKS FHE for secure weight aggregation, ensuring that no single entity can access individual updates in plaintext. For collaborative inference, clients exchange activations protected by single-key CKKS FHE to reach a consensus derived from Total Labels (TL) or Total Predictions (TP). This consensus mechanism improves decision reliability by aggregating decentralized insights while obfuscating the soft-label confidence scores that attackers could otherwise exploit. Our empirical evaluation demonstrates that SplitML provides substantial defense against Membership Inference (MI) attacks, reduces training time compared to standard encrypted FL, and improves inference precision via its consensus mechanism, all while adding only negligible federation overhead.
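To make the training-phase mechanism concrete, the sketch below illustrates the general pattern of DP-noised, homomorphically aggregated weight updates. It is a minimal illustration rather than the authors' implementation: it uses single-key CKKS via the TenSEAL library as a stand-in for the multi-key CKKS scheme described above, and the noise scale, weight vectors, and helper names (e.g. `client_update`) are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (not SplitML's actual code): each client adds Gaussian noise
# (differential privacy) to its top-layer weight update, encrypts it under
# CKKS, and the server sums ciphertexts without seeing any update in plaintext.

import numpy as np
import tenseal as ts

# Shared CKKS context. In the multi-key setting described in the abstract,
# no single party would hold the full secret key; here a single key is used
# purely for illustration.
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

def client_update(top_layer_weights: np.ndarray, noise_scale: float = 0.01):
    """DP-perturb a local top-layer update, then encrypt it (illustrative)."""
    noisy = top_layer_weights + np.random.normal(0.0, noise_scale,
                                                 size=top_layer_weights.shape)
    return ts.ckks_vector(ctx, noisy.tolist())

# Three clients with toy updates for the shared top layers.
updates = [np.array([0.10, -0.20, 0.05]),
           np.array([0.12, -0.18, 0.07]),
           np.array([0.08, -0.22, 0.03])]
encrypted = [client_update(u) for u in updates]

# Server side: homomorphic sum of ciphertexts, no plaintext access.
agg = encrypted[0]
for c in encrypted[1:]:
    agg = agg + c

# Only the aggregate is decrypted (in SplitML this step would require the
# cooperation implied by the multi-key scheme, not a lone server).
avg_update = np.array(agg.decrypt()) / len(encrypted)
print("Averaged, DP-noised top-layer update:", avg_update)
```

The inference-phase consensus over Total Labels or Total Predictions follows the same pattern, with single-key CKKS-encrypted activations exchanged among clients in place of weight updates.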