Hybrid PPO–DQN-Based Edge Computing Framework for AI-Driven Task Offloading and Energy Optimization in 5G Networks

Authors

  • Ahmed Elstail, Department of Computer Technology, Higher Institute of Science and Technology, Tamzawa, Ashati, Libya
  • Talal Mohammed, Department of Electric and Electronic Technology, Higher Institute of Science and Technology, Tamzawa, Ashati, Libya

Keywords:

5G, task offloading, energy optimization, reinforcement learning, edge computing

Abstract

The rapid deployment of 5G networks and the growing number of IoT devices have greatly accelerated the demand for computation at the network edge, where ultra-low latency and high reliability are required. In this paper, we propose an AI-based intelligent edge computing framework that employs a hybrid deep reinforcement learning (PPO–DQN) approach for multi-objective optimization of task offloading and energy management in heterogeneous wireless systems. The framework adaptively trades off delay, throughput, and power consumption through network slicing to meet the distinct requirements of URLLC and mMTC services. Simulation results in MATLAB show that the proposed model outperforms existing schemes, lowering energy consumption by 26%, reducing latency by 4%, and increasing overall system throughput by up to 35%. The findings demonstrate the potential of hybrid AI-based optimization for efficient and sustainable 5G edge deployments.
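The abstract does not include an implementation, but the delay–throughput–energy trade-off it describes can be illustrated with a small sketch. The reward function below is a hypothetical example of how an offloading agent might fold the three objectives into a single scalar, with separate weighting profiles for URLLC and mMTC slices; the function name, weights, and normalization constants are illustrative assumptions, not taken from the paper.

```python
# Hypothetical multi-objective reward for a task-offloading agent.
# All weights and normalization bounds below are illustrative assumptions.

def offloading_reward(delay_s, throughput_mbps, energy_j, slice_type="URLLC",
                      d_max=0.01, t_max=100.0, e_max=1.0):
    """Combine delay, throughput, and energy into one scalar reward.

    slice_type selects a weighting profile: the URLLC profile emphasizes
    latency, the mMTC profile emphasizes energy efficiency (assumed
    profiles, not drawn from the paper).
    """
    # Normalize each objective to the [0, 1] range.
    delay_term = min(delay_s / d_max, 1.0)
    throughput_term = min(throughput_mbps / t_max, 1.0)
    energy_term = min(energy_j / e_max, 1.0)

    if slice_type == "URLLC":
        w_delay, w_tput, w_energy = 0.6, 0.2, 0.2
    else:  # mMTC
        w_delay, w_tput, w_energy = 0.2, 0.3, 0.5

    # Reward higher throughput; penalize delay and energy consumption.
    return w_tput * throughput_term - w_delay * delay_term - w_energy * energy_term


# Example: score a candidate offloading decision for a URLLC task.
print(offloading_reward(delay_s=0.004, throughput_mbps=80.0, energy_j=0.3))
```

In a hybrid PPO–DQN setup of the kind the abstract names, a reward of this shape would typically be shared by the discrete offloading policy and the continuous resource-control policy, so both components optimize the same slice-aware objective.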

Published

2025-11-24

Issue

Vol. 3 No. 2 (2025)

Section

Branch of Applied and Natural Sciences

How to Cite

Ahmed Elstail, & Talal Mohammed. (2025). Hybrid PPO–DQN-Based Edge Computing Framework for AI-Driven Task Offloading and Energy Optimization in 5G Networks. Libyan Journal of Contemporary Academic Studies, 3(2), 93-100. https://ljcas.ly/index.php/ljcas/article/view/224