Exponential Moving Average Q-learning Algorithm-Based PID Control

Published: 2025-05-19

Abstract

Proportional-Integral-Derivative (PID) controllers remain ubiquitous in industrial control systems due to their simplicity and robustness. However, their performance depends strongly on parameter tuning, which is challenging in dynamic, nonlinear environments. Recent advances in reinforcement learning (RL) offer promising solutions for adaptive PID tuning. This paper presents a novel approach that employs the Exponential Moving Average Q-Learning (EMAQL) algorithm, an RL method with decaying learning rates and adaptive mechanisms, for dynamic PID gain tuning. EMAQL uses a Win-or-Learn-Fast/Win-or-Learn-Slow (WoLF/WoLS) strategy to balance policy updates, enabling rapid adaptation to system uncertainties. By integrating EMAQL with PID control, we obtain an adaptive tuning mechanism that adjusts the proportional, integral, and derivative gains in real time based on system states and reward feedback. Simulation experiments on water tank systems demonstrate that the EMAQL-based PID controller achieves improved transient response, reduced overshoot, and enhanced robustness compared with a traditionally tuned PID controller.
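To make the abstract's mechanism concrete, the following is a minimal sketch of an EMAQL-style tuner that adjusts PID gains via discrete increment actions. The state discretization, action set, reward shape, decay schedule, and the winning/losing test are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class EMAQLPIDTuner:
    """Sketch of EMA Q-learning for PID gain tuning (assumed details)."""

    def __init__(self, n_states, actions, alpha=0.5, decay=0.999,
                 gamma=0.9, lr_win=0.05, lr_lose=0.2):
        self.actions = actions                # list of (dKp, dKi, dKd) increments (assumed)
        self.q = np.zeros((n_states, len(actions)))
        self.avg_q = np.zeros((n_states, len(actions)))  # running baseline for WoLF test
        self.alpha = alpha                    # overall learning rate, decayed each update
        self.decay = decay
        self.gamma = gamma
        # WoLF/WoLS idea: learn slowly when "winning", fast when "losing"
        self.lr_win, self.lr_lose = lr_win, lr_lose

    def select(self, state, eps=0.1, rng=None):
        """Epsilon-greedy action selection over the gain-increment actions."""
        rng = rng if rng is not None else np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(len(self.actions)))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        """Exponential-moving-average Q update with a WoLF-style variable rate."""
        target = reward + self.gamma * np.max(self.q[next_state])
        # Assumed winning criterion: current estimate beats its own running average
        winning = self.q[state, action] >= self.avg_q[state, action]
        lr = (self.lr_win if winning else self.lr_lose) * self.alpha
        # EMA update: blend old estimate with the bootstrapped target
        self.q[state, action] = (1 - lr) * self.q[state, action] + lr * target
        self.avg_q[state, action] = (0.99 * self.avg_q[state, action]
                                     + 0.01 * self.q[state, action])
        self.alpha *= self.decay              # decaying learning rate
```

In a closed loop, the controller would map the tracking error (e.g. of the tank level) to a discrete state, apply the selected `(dKp, dKi, dKd)` increment to the PID gains, and feed a reward such as the negative squared error back into `update`.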

Keywords: PID; Reinforcement Learning; EMAQL Algorithm

How to Cite

Mostafa D. Awheda, & Abdallah M. Faraj. (2025). Exponential Moving Average Q-learning Algorithm-Based PID Control. Libyan Journal of Contemporary Academic Studies, 3(1), 136-141. https://ljcas.ly/index.php/ljcas/article/view/67

Section

Branch of Applied and Natural Sciences

License

This work is licensed under a Creative Commons Attribution 4.0 International License.