FedNolowe: A Normalized Loss-Based Weighted Aggregation Strategy for Robust Federated Learning in Heterogeneous Environments
Abstract
Federated Learning (FL) enables collaborative model training across distributed clients while keeping sensitive data decentralized. However, non-independent and identically distributed (non-IID) data poses challenges such as unstable convergence and client drift. We propose Federated Normalized Loss-based Weighted Aggregation (FedNolowe), a new method that weights client contributions by their normalized training losses, favoring clients with lower losses to improve global model stability. Unlike prior methods that weight by dataset size or rely on resource-intensive techniques, FedNolowe employs a two-stage Manhattan normalization, reducing floating-point operations by 40% while matching state-of-the-art performance. A detailed sensitivity analysis shows that our two-stage weighting maintains stability in heterogeneous settings by mitigating the impact of extreme losses, while remaining effective in independent and identically distributed (IID) scenarios.
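As a rough illustration only, not code from the paper, the sketch below shows one way loss-based aggregation weights with a two-stage Manhattan (L1) normalization might be computed. The inversion step that favors lower losses (1 minus the normalized loss) is our assumption, since the abstract does not specify it, and the names fednolowe_weights, aggregate, and eps are hypothetical.

```python
import numpy as np

def fednolowe_weights(client_losses, eps=1e-12):
    """Sketch: aggregation weights from client training losses via
    two-stage Manhattan (L1) normalization. The inversion step that
    favors lower losses is an assumption, not the paper's definition."""
    losses = np.asarray(client_losses, dtype=float)
    # Stage 1: Manhattan (L1) normalization of the raw training losses.
    norm_losses = losses / (np.abs(losses).sum() + eps)
    # Assumed inversion so that lower-loss clients get higher weight.
    inverted = 1.0 - norm_losses
    # Stage 2: L1-normalize again so the weights sum to one.
    return inverted / (np.abs(inverted).sum() + eps)

def aggregate(client_updates, weights):
    """Weighted average of client parameter lists (one list of
    NumPy arrays per client)."""
    return [sum(w * u[i] for w, u in zip(weights, client_updates))
            for i in range(len(client_updates[0]))]

# Example: the client with the lowest loss receives the largest weight.
print(fednolowe_weights([0.9, 0.4, 0.2]))
```

Under these assumptions, the second normalization pass is what bounds the influence of any single client with an extreme loss, which is consistent with the stability claim in the abstract.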