Multi-Device Task Offloading Optimization in Edge Computing Systems with Reinforcement Learning: A Case Study on Video Object Tracking
Abstract
This paper presents a novel framework for multi-device task offloading optimization in edge computing systems using reinforcement learning, demonstrated through a case study on video object tracking. Distributing computational tasks efficiently between resource-constrained devices and edge servers remains difficult when dealing with heterogeneous client devices. We address this problem by proposing a system in which multiple devices independently optimize their offloading decisions to a shared edge server using device-specific Deep Q-Networks (DQNs). Our framework incorporates comprehensive energy measurement methodologies for quantifying communication energy consumption using external hardware monitoring and statistical modeling. Experiments conducted across three devices with distinct computational profiles demonstrate substantial performance improvements: up to a 95.94% reduction in client energy consumption and a 92.60% reduction in processing latency for resource-constrained devices, with benefits diminishing as local computational power increases. This work bridges the gap between theoretical offloading models and practical implementations, providing a case study and analysis for developing adaptive edge computing systems capable of operating with heterogeneous devices under realistic conditions.
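The per-device decision structure described above can be sketched in simplified form. The paper uses device-specific DQNs; the snippet below substitutes a single-state tabular Q-learning analogue to illustrate the same idea: each device independently learns whether to execute locally or offload, and the learned policy depends on its computational profile. The `cost` model and all numeric values are illustrative assumptions, not the paper's measured energy or latency figures.

```python
import random

ACTIONS = ["local", "offload"]

def cost(action, device_power):
    # Assumed cost model (illustrative only): weaker devices pay more
    # energy/latency to execute locally; offloading incurs a fixed
    # communication-plus-server cost.
    if action == "local":
        return 10.0 / device_power
    return 2.0

def train(device_power, episodes=2000, alpha=0.1, epsilon=0.1):
    # Single-state Q-table per device, standing in for the paper's
    # device-specific DQN.
    q = {a: 0.0 for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        a = rng.choice(ACTIONS) if rng.random() < epsilon else max(q, key=q.get)
        # One-step task: reward is negative cost, so no bootstrapping term.
        q[a] += alpha * (-cost(a, device_power) - q[a])
    # Learned policy: the action with the highest Q-value (lowest cost).
    return max(q, key=q.get)

# A resource-constrained device learns to offload, while a
# computationally capable device keeps tasks local.
print(train(device_power=0.5))   # prints "offload"
print(train(device_power=10.0))  # prints "local"
```

The divergent policies mirror the abstract's finding that offloading benefits are largest for resource-constrained devices and shrink as local computational power grows.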