Dynamic Task Offloading in Vehicular Networks Using Large Language Models: A Novel Edge Intelligence Framework for Adaptive, Low-Latency, and Energy-Aware Decision Making
Abstract
Task offloading in vehicular environments is essential for efficient computation and resource utilization among connected vehicles. Traditional approaches, such as reinforcement learning and federated learning, often struggle with dynamic adaptation, high communication costs, and scalability. This paper introduces a novel approach that leverages a Large Language Model (LLM) deployed at an edge node to optimize task offloading decisions. Unlike conventional machine learning models, the LLM is trained on a structured dataset designed specifically for vehicular task offloading, allowing it to process real-time updates, including speed, direction, CPU availability, and battery level, without extensive manual feature engineering. By analyzing multi-dimensional data holistically, the LLM dynamically selects the most suitable node for task execution, significantly improving decision accuracy and adaptability. The proposed system outperforms traditional methods in scalability and responsiveness, particularly in large-scale vehicular networks. However, challenges such as computational overhead, latency, and energy efficiency must be addressed before real-world deployment. This work highlights the transformative potential of LLMs in intelligent vehicular decision making, paving the way for more efficient and adaptive task offloading strategies.
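The decision pipeline the abstract describes can be sketched roughly as follows: per-node telemetry (speed, CPU availability, battery level) is serialized into a structured prompt for the edge-hosted LLM, which returns the identifier of the node best suited to execute the task. This is a minimal illustrative sketch, not the paper's implementation; all names and fields are assumptions, and since the actual model, prompt format, and selection criteria are not given in the abstract, a simple heuristic stands in for the LLM call.

```python
# Hypothetical sketch of the offloading pipeline described in the abstract.
# NodeState fields, the prompt layout, and the scoring heuristic are all
# illustrative assumptions, not the paper's actual design.
from dataclasses import dataclass

@dataclass
class NodeState:
    node_id: str
    speed_kmh: float   # current vehicle speed (0 for a roadside unit)
    cpu_free: float    # fraction of CPU available, 0..1
    battery: float     # battery level, 0..1

def build_prompt(task_desc: str, nodes: list[NodeState]) -> str:
    """Serialize real-time node states into a prompt for the edge LLM."""
    lines = [f"Task: {task_desc}", "Candidate nodes:"]
    for n in nodes:
        lines.append(f"- {n.node_id}: speed={n.speed_kmh} km/h, "
                     f"cpu_free={n.cpu_free:.2f}, battery={n.battery:.2f}")
    lines.append("Reply with the single best node_id for offloading.")
    return "\n".join(lines)

def select_node(nodes: list[NodeState]) -> str:
    """Stand-in for the LLM decision: favor idle CPU and charge, and
    penalize fast-moving nodes that may leave communication range."""
    return max(nodes,
               key=lambda n: n.cpu_free + n.battery - n.speed_kmh / 200).node_id

nodes = [NodeState("rsu-1", 0.0, 0.8, 1.0),
         NodeState("veh-7", 90.0, 0.5, 0.6)]
prompt = build_prompt("object detection, 50 ms deadline", nodes)
print(select_node(nodes))  # → rsu-1 (stationary, idle, fully charged)
```

In a deployed system, `select_node` would be replaced by an inference call to the fine-tuned LLM with `prompt` as input, followed by parsing the returned `node_id`.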