Efficient Federated Learning Based On Domain Adaptation and Knowledge Distillation Losses
Abstract
Numerous devices nowadays generate vast amounts of data that could be used for learning. Traditional centralized learning requires transmitting all data to a central site, where the model is trained. However, much of this data may be sensitive, leading clients to refuse to share it. Federated Learning (FL) addresses this dilemma with a distributed learning framework in which multiple local clients collaboratively train a shared model under the coordination of a central server. Nevertheless, reducing communication costs relative to computational costs and efficiently handling non-independent and identically distributed (non-IID) data remain significant challenges. We therefore propose an efficient FL method that uses domain adaptation and knowledge distillation losses to address these issues. Experimental results on the MNIST, CIFAR-10, and CIFAR-100 datasets demonstrate that our method achieves nearly the same accuracy as other well-known FL methods while using fewer communication rounds, particularly in non-IID settings.
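The abstract does not specify the exact formulation of the combined objective. The following is a minimal PyTorch-style sketch of one plausible client-side loss, assuming the local objective combines a supervised term, a knowledge-distillation term against the global (server) model, and a simple feature-alignment penalty standing in for the domain-adaptation loss; all function names, weights, and the temperature value here are hypothetical illustrations rather than the authors' actual method.

```python
import torch
import torch.nn.functional as F

def client_loss(student_logits, teacher_logits, labels,
                local_feats, global_feats,
                temperature=2.0, kd_weight=0.5, da_weight=0.1):
    """Hypothetical client-side objective with three terms:
    (1) supervised cross-entropy on the client's local labels,
    (2) knowledge distillation against the global model's softened outputs,
    (3) a feature-alignment penalty as a stand-in domain-adaptation term
        intended to reduce drift under non-IID data.
    """
    # (1) standard supervised loss on local data
    ce = F.cross_entropy(student_logits, labels)

    # (2) KL divergence between softened local and global predictions,
    #     scaled by T^2 as is common in distillation
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # (3) align the mean feature vectors of the local and global models
    da = torch.norm(local_feats.mean(dim=0) - global_feats.mean(dim=0), p=2)

    return ce + kd_weight * kd + da_weight * da
```

In such a setup, each client would evaluate both its local model and a frozen copy of the received global model on the same mini-batch, compute this combined loss, and send only the updated local weights back to the server for aggregation.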