MMVO-SHFL: A Fair and Efficient Hierarchical Federated Learning Framework



Abstract

Federated learning (FL) enables collaborative model training without centralizing data. However, the traditional cloud-based FL framework suffers from high communication latency. The edge-based FL framework, although it reduces communication latency by leveraging edge servers, suffers from degraded model accuracy because each edge server can access only a limited portion of the data. To overcome these limitations, this work introduces a novel hierarchical federated learning framework named MMVO-SHFL. It incorporates an LSTM-based bandwidth prediction module, a MAB-driven dynamic client selection strategy, and an MVO-guided model parameter optimization mechanism. Extensive experiments show that MMVO-SHFL significantly improves model convergence speed while also enhancing model accuracy. Compared with traditional methods, MMVO-SHFL not only reduces energy consumption but also significantly improves the fairness of client participation. It outperforms existing methods across various configurations, highlighting its potential for large-scale heterogeneous federated learning. Moreover, a grid search over the hyperparameters G and β identifies the optimal combination (G = 0.8, β = 0.1) that maximizes performance. MMVO-SHFL thus provides a more efficient, energy-saving, and fair solution for large-scale heterogeneous FL scenarios.
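The abstract does not spell out the multi-armed bandit (MAB) formulation behind the client selection strategy. As a point of reference, the sketch below shows a common UCB1-style bandit loop for picking k clients per round; the reward signal (here a random proxy for, say, per-client accuracy gain) and the use of UCB1 itself are assumptions for illustration, not the paper's stated method.

```python
import numpy as np

def select_clients(num_clients, num_rounds, k, rng=None):
    """Illustrative UCB1-style bandit selection of k clients per round.

    Assumption: the reward for choosing a client (e.g. the accuracy
    gain contributed by its update) is modeled here as a random
    placeholder; a real system would measure it after aggregation.
    """
    rng = rng or np.random.default_rng(0)
    counts = np.zeros(num_clients)   # times each client has been chosen
    means = np.zeros(num_clients)    # running mean reward per client
    history = []

    for t in range(1, num_rounds + 1):
        # UCB score: exploit high mean reward, explore rarely-chosen
        # clients via the confidence bonus. Unvisited clients get +inf
        # so every client is tried at least once (aids fairness).
        bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
        ucb = np.where(counts == 0, np.inf, means + bonus)
        chosen = np.argsort(ucb)[-k:]

        for c in chosen:
            r = rng.uniform(0.0, 1.0)  # placeholder reward signal
            counts[c] += 1
            means[c] += (r - means[c]) / counts[c]  # incremental mean
        history.append(chosen)
    return history

# Example: 100 clients, 50 rounds, 10 clients selected per round.
rounds = select_clients(num_clients=100, num_rounds=50, k=10)
```

The exploration bonus is one plausible way such a scheme could improve participation fairness: clients that have been selected rarely accumulate a large bonus and are eventually picked, rather than the selector repeatedly favoring a few high-reward clients.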
