Federated Reinforcement Learning Framework for Privacy-Preserving Few-Shot Learning
Abstract
This study introduces a federated reinforcement learning framework for few-shot learning (FRL-FSL) that addresses the dual challenges of data scarcity and privacy preservation in distributed environments. The proposed framework integrates policy gradient optimization with secure aggregation and introduces validator nodes to ensure the authenticity of both data and model updates. Experiments were conducted on the Omniglot and FC100 datasets under 1-shot and 5-shot conditions, with comparisons against FedAvg, FedFSL, and traditional supervised baselines. Results demonstrate that FRL-FSL achieved an average accuracy of 87.3% on Omniglot (5-shot), improving on FedAvg by 25.9% and on FedFSL by 13.8%, while maintaining 72.6% accuracy in 1-shot tasks. On FC100, FRL-FSL reached 59.8% accuracy in 5-shot learning, outperforming FedAvg by 18.6% and FedFSL by 7.1%, and achieved 46.3% in 1-shot learning. The framework also reduced the privacy risk index by 37% relative to FedAvg and accelerated convergence by nearly 30% compared to the baselines. These findings confirm that FRL-FSL strikes a practical balance among accuracy, convergence, and privacy, offering a promising solution for real-world, privacy-sensitive applications.
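The abstract does not specify how policy-gradient updates are combined under secure aggregation, so the following is only a minimal illustrative sketch of the general idea: clients upload additively masked policy-gradient updates whose pairwise masks cancel when the server sums them, so the server recovers the average update without seeing any individual client's gradient. All names (local_policy_gradient, pairwise_masks, the client count and update dimension) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation): secure-aggregation-style
# averaging of client policy-gradient updates via pairwise additive masks.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS = 4   # hypothetical number of federated clients
DIM = 8           # hypothetical size of the flattened policy-gradient update


def local_policy_gradient(client_id: int) -> np.ndarray:
    """Stand-in for a client's few-shot policy-gradient update."""
    return rng.normal(size=DIM)


def pairwise_masks(num_clients: int, dim: int, seed: int = 42) -> list[np.ndarray]:
    """Generate additive masks that cancel exactly when all clients are summed."""
    mask_rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            shared = mask_rng.normal(size=dim)
            masks[i] += shared  # client i adds the shared mask
            masks[j] -= shared  # client j subtracts it, so the pair cancels
    return masks


# Each client uploads only its masked update; the server never sees a raw gradient.
updates = [local_policy_gradient(c) for c in range(NUM_CLIENTS)]
masks = pairwise_masks(NUM_CLIENTS, DIM)
masked_uploads = [u + m for u, m in zip(updates, masks)]

# Server-side aggregation: the masks cancel in the sum, recovering the mean update.
aggregate = sum(masked_uploads) / NUM_CLIENTS
assert np.allclose(aggregate, np.mean(updates, axis=0))
print("aggregated policy-gradient update:", aggregate.round(3))
```

In a full system the masks would be derived from pairwise key agreement and the aggregate applied as a global policy update; the validator nodes described in the abstract would additionally check the provenance of the uploads before aggregation.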