Self-Paced Online Multi-Task Learning via Alternating Direction Method of Multipliers
Abstract
Online multi-task learning (OMTL) enhances streaming-data processing by exploiting relationships across tasks, typically formulated as an optimization problem with a shared loss function. However, conventional gradient-based methods often suffer from vanishing gradients and poor conditioning. Moreover, their centralized nature hampers online parallel optimization, which is crucial for big data. Drawing on the cognitive principle of learning from simple to complex, this study introduces a Self-Paced Online Multi-Task Learning (SPOMTL) framework based on the Alternating Direction Method of Multipliers (ADMM). Task relationships are modeled dynamically to adapt to online changes, while the self-paced mechanism prioritizes easier tasks and instances and gradually introduces harder ones. In a distributed architecture with a central server, the proposed SPOMTL-ADMM outperforms SGD-based methods in accuracy and efficiency. To mitigate server bottlenecks on large data, we also develop a decentralized variant in which nodes operate through local neighbor interactions. Experiments on synthetic and real-world datasets demonstrate the efficiency of our approach.
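To make the interplay of self-paced weighting and ADMM consensus concrete, the following is a minimal, hypothetical sketch of one centralized round. Everything here is an illustrative assumption rather than the paper's exact formulation: the squared loss, the hard-threshold self-paced rule, and all names (`self_paced_weights`, `local_update`, `admm_round`, `rho`, `lam`) are invented for exposition.

```python
import numpy as np

# Illustrative sketch only: a generic consensus-ADMM round with binary
# self-paced instance weights. Not the paper's exact SPOMTL-ADMM updates.
# Each task t keeps a local weight vector w_t; a consensus variable z
# couples the tasks via the scaled penalty (rho/2) * ||w_t - z + u_t||^2.

def self_paced_weights(X, y, w, lam):
    """Hard-threshold self-paced rule: keep instances whose loss is below lam."""
    losses = 0.5 * (X @ w - y) ** 2      # per-instance squared loss
    return (losses < lam).astype(float)  # 1 = "easy" instance, 0 = skipped

def local_update(X, y, z, u, rho, lam):
    """Task-local w-update: a weighted ridge-like solve around the consensus z."""
    v = self_paced_weights(X, y, z, lam)  # pace instances using current consensus
    V = np.diag(v)
    d = X.shape[1]
    A = X.T @ V @ X + rho * np.eye(d)     # normal equations of the w-subproblem
    b = X.T @ V @ y + rho * (z - u)
    return np.linalg.solve(A, b)

def admm_round(tasks, z, us, rho, lam):
    """One synchronous round on a central server (the centralized variant)."""
    ws = [local_update(X, y, z, u, rho, lam) for (X, y), u in zip(tasks, us)]
    z_new = np.mean([w + u for w, u in zip(ws, us)], axis=0)  # consensus step
    us = [u + w - z_new for w, u in zip(ws, us)]              # scaled dual ascent
    return ws, z_new, us
```

Under this reading, the easy-to-hard schedule corresponds to gradually increasing `lam` across rounds so that harder instances are admitted over time, and the decentralized variant would replace the server's averaging in `admm_round` with averaging over each node's local neighbors.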