Intelligent Driving Task Scheduling Service in Vehicle-Edge Collaborative Networks Based on Deep Reinforcement Learning

Abstract

With the evolution of 6G technology, mobile edge computing is rapidly advancing as a crucial application scenario. This research presents an innovative method for vehicle-edge task offloading decision-making, leveraging real-time inputs such as channel conditions, image entropy, and detector confidence levels. We propose a collaborative task processing framework for vehicle-edge computing that effectively combines lightweight and heavyweight models to meet varying demands, ensuring efficient task execution. In addition, the study introduces a custom-designed reinforcement learning algorithm aimed explicitly at optimizing offloading scheduling. This algorithm improves decision-making accuracy and efficiency and features a comprehensive reward system that achieves a balanced trade-off between detection performance and latency. The framework's efficacy is thoroughly evaluated in complex driving scenarios using the SODA10M dataset. Our results show that the framework achieves convergence, enhances precision, ensures stability, and maintains lightweight operation, underscoring its suitability for real-world deployment. This work provides practical and efficient strategies for intelligent driving task scheduling to meet the requirements of contemporary dynamic environments.
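The abstract describes an agent that observes channel conditions, image entropy, and detector confidence, then decides between a lightweight on-vehicle model and a heavyweight edge model, with a reward balancing detection performance against latency. The sketch below is not the authors' implementation; the state fields, thresholds, weights (alpha, beta), and the tabular epsilon-greedy policy are illustrative assumptions standing in for the paper's deep reinforcement learning components.

```python
# Minimal sketch of the vehicle-edge offloading decision loop, assuming a
# simplified state, reward, and policy; all names and values are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class State:
    channel_quality: float      # e.g. normalized SNR in [0, 1]
    image_entropy: float        # scene-complexity proxy in [0, 1]
    detector_confidence: float  # lightweight detector's confidence in [0, 1]

LOCAL, OFFLOAD = 0, 1  # action space: run lightweight model locally or offload to edge

def reward(detection_quality: float, latency_ms: float,
           alpha: float = 1.0, beta: float = 0.01) -> float:
    """Hypothetical reward trading off detection performance against latency,
    mirroring the balanced reward design mentioned in the abstract."""
    return alpha * detection_quality - beta * latency_ms

def epsilon_greedy_policy(q_values: dict, state_key: tuple, epsilon: float = 0.1) -> int:
    """Pick an action from a tabular Q estimate; a stand-in for the paper's DRL policy."""
    if random.random() < epsilon:
        return random.choice([LOCAL, OFFLOAD])
    local_q = q_values.get((state_key, LOCAL), 0.0)
    offload_q = q_values.get((state_key, OFFLOAD), 0.0)
    return LOCAL if local_q >= offload_q else OFFLOAD

if __name__ == "__main__":
    s = State(channel_quality=0.8, image_entropy=0.6, detector_confidence=0.4)
    key = (round(s.channel_quality, 1), round(s.image_entropy, 1), round(s.detector_confidence, 1))
    action = epsilon_greedy_policy({}, key)
    # Assumed example numbers: offloading yields better detections but higher latency.
    r = reward(detection_quality=0.9 if action == OFFLOAD else 0.7,
               latency_ms=120.0 if action == OFFLOAD else 30.0)
    print(action, r)
```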
