Hybrid PPO–DQN-Based Edge Computing Framework for AI-Driven Task Offloading and Energy Optimization in 5G Networks
Keywords:
5G, task offloading, energy optimization, reinforcement learning, edge computing
Abstract
The rapid deployment of 5G networks and the growing number of IoT devices have greatly accelerated the demand for computation at the network edge, where ultra-low latency and high reliability are required. In this paper, we propose an AI-based intelligent edge computing framework that employs a hybrid deep reinforcement learning (PPO–DQN) approach for multi-objective optimization of task offloading and energy management in heterogeneous wireless systems. The framework adaptively trades off delay, throughput, and power consumption via network slicing to fulfill the distinct demands of URLLC and mMTC services. Simulation results in MATLAB show that the proposed model outperforms existing schemes, lowering energy consumption by 26%, reducing latency by 4%, and increasing overall system throughput by up to 35%. The findings demonstrate the potential of hybrid AI-based optimization for efficient and sustainable 5G edge deployments.
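The multi-objective trade-off among delay, throughput, and power consumption described above is typically expressed as a single scalar reward for the reinforcement learning agent. The sketch below illustrates one common form of such a reward; the weights, normalization bounds, and function name are illustrative assumptions, not taken from the paper.

```python
def offloading_reward(delay_ms, throughput_mbps, energy_mj,
                      w_d=0.4, w_t=0.3, w_e=0.3,
                      max_delay=100.0, max_tput=1000.0, max_energy=500.0):
    """Combine delay, throughput, and energy into one scalar reward.

    Delay and energy are penalized (lower is better); throughput is
    rewarded (higher is better). Each term is clipped to [0, 1] via
    min-max normalization against assumed upper bounds. The weights
    w_d, w_t, w_e reflect slice priorities (e.g., a URLLC slice would
    raise w_d; an mMTC slice would raise w_e).
    """
    d = min(delay_ms / max_delay, 1.0)       # normalized delay penalty
    t = min(throughput_mbps / max_tput, 1.0) # normalized throughput gain
    e = min(energy_mj / max_energy, 1.0)     # normalized energy penalty
    return w_t * t - w_d * d - w_e * e
```

Under this form, an offloading decision that lowers delay and energy while holding throughput steady strictly increases the reward, which is the behavior the hybrid PPO–DQN agent is trained to maximize.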
