
Distributed Deep Reinforcement Learning Via Split Computing For Connected Autonomous Vehicles

By: Robert Rauch and Juraj Gazda
Open Access | Jun 2025

Abstract

This paper proposes the application of split computing paradigms to deep reinforcement learning by distributing computation between Connected Autonomous Vehicles (CAVs) and edge servers. While this approach has been explored in computer vision, it remains largely unexplored for reinforcement learning scenarios. We introduce a novel autoencoder trained directly through Deep Q-Network (DQN) rewards: the autoencoder layers are optimized using the DQN reward function while all other layers remain frozen. Our experimental results demonstrate that the proposed approach outperforms baseline methods, reducing data offloading to the edge server by up to 98.7%. This methodology not only decreases the data transmission burden but also achieves rewards comparable to the baselines, and in certain configurations even improves performance by up to 9.65%. The primary objective of this research is to reduce latency in deep reinforcement learning tasks for autonomous vehicles; in this regard, the proposed approach improves latency by up to 66.5% compared to baseline methods. These findings indicate that partial offloading through split computing offers significant benefits over both full offloading and complete on-device computation strategies for CAVs.
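The core idea of the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: a linear Q-network is split at a bottleneck autoencoder, the encoder runs on the vehicle, and the decoder plus a frozen Q-head run on the edge server. Only the encoder/decoder weights receive gradients from the squared TD error; all dimensions, weight scales, and the learning rate are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical sketch of DQN-reward-driven autoencoder training with a
# frozen Q-head. A linear network is split at a bottleneck: the encoder
# (We) runs on the vehicle, the decoder (Wd) and frozen Q-head (Wq) run
# on the edge server. Only We and Wd are updated from the TD error.

rng = np.random.default_rng(0)
n_state, n_bottleneck, n_actions = 16, 2, 4  # 2/16 floats sent over the air

We = rng.normal(scale=0.1, size=(n_bottleneck, n_state))  # encoder (vehicle)
Wd = rng.normal(scale=0.1, size=(n_state, n_bottleneck))  # decoder (edge)
Wq = rng.normal(scale=0.1, size=(n_actions, n_state))     # frozen Q-head (edge)

def td_loss_and_grads(s, a, y):
    """Squared TD error and its gradients w.r.t. the autoencoder only."""
    z = We @ s        # compressed representation transmitted to the edge
    s_hat = Wd @ z    # reconstruction on the edge server
    q = Wq @ s_hat    # Q-values from the frozen head
    err = q[a] - y    # TD error against the bootstrap target y
    g_shat = 2.0 * err * Wq[a]         # dL/d(s_hat)
    g_Wd = np.outer(g_shat, z)         # dL/dWd
    g_We = np.outer(Wd.T @ g_shat, s)  # dL/dWe (chain rule through decoder)
    return err ** 2, g_We, g_Wd

# One frozen-head update step on a synthetic transition.
s = rng.normal(size=n_state)
a, y = 1, 0.5  # action taken and TD target (illustrative values)
loss0, g_We, g_Wd = td_loss_and_grads(s, a, y)
lr = 0.05
We -= lr * g_We
Wd -= lr * g_Wd
loss1, _, _ = td_loss_and_grads(s, a, y)
print(loss1 < loss0)  # the autoencoder alone can reduce the TD error
```

Because gradients stop at the autoencoder, the vehicle never transmits the full state, only the `n_bottleneck`-dimensional code, which is where the data-offloading reduction reported in the abstract comes from.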

DOI: https://doi.org/10.2478/aei-2025-0008 | Journal eISSN: 1338-3957 | Journal ISSN: 1335-8243
Language: English
Page range: 21 - 29
Submitted on: Apr 8, 2025
Accepted on: May 19, 2025
Published on: Jun 4, 2025
Published by: Technical University of Košice
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Robert Rauch, Juraj Gazda, published by Technical University of Košice
This work is licensed under the Creative Commons Attribution 4.0 License.