Adaptive PID-Based Deep Reinforcement Learning for Load Frequency Control in Islanded Microgrids with Heterogeneous Resources and Energy Storage


Abstract

Load frequency control (LFC) is essential for maintaining stability in islanded microgrids that integrate heterogeneous generation sources and energy storage systems. Conventional PID controllers are frequently limited by their static parameters, which cannot accommodate fluctuating loads, uncertain system parameters, and varying generation conditions. This paper introduces a reinforcement learning (RL)-based adaptive PID tuning methodology using the Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithms. The proposed RL-PID controllers are trained to reduce frequency deviations and control effort across a variety of operating scenarios, yielding an adaptive LFC controller. Simulations of an islanded microgrid with heterogeneous sources demonstrate that both the DDPG- and TD3-based controllers outperform a conventional PID controller in dynamic response, settling time, and robustness to disturbances. Moreover, the TD3-PID controller exhibits better stability and fewer oscillations than the DDPG-PID controller, which can be attributed to TD3's improved policy update mechanism. The results underline the value of RL for adaptive and robust load frequency control in modern microgrids.
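To make the control setup concrete, the sketch below shows one common way to frame RL-based PID tuning for LFC: the agent's action is the PID gain vector (Kp, Ki, Kd), and the reward penalizes both frequency deviation and control effort. This is an illustrative toy environment with assumed inertia and damping values, not the paper's microgrid model or its DDPG/TD3 training code; a real implementation would wrap such an environment in an off-policy RL library.

```python
import numpy as np

class MicrogridLFCEnv:
    """Toy islanded-microgrid frequency model (illustrative assumptions only).
    Swing dynamics: d(df)/dt = (dPm - dPL - D*df) / (2H), with df the
    frequency deviation and dPm the PID-commanded power change."""

    def __init__(self, H=0.1, D=0.015, dt=0.01, horizon=500):
        self.H, self.D, self.dt, self.horizon = H, D, dt, horizon

    def reset(self, load_step=0.1):
        self.df = 0.0          # frequency deviation (p.u.)
        self.dPL = load_step   # step load disturbance (p.u.)
        self.integ = 0.0       # integral of the tracking error
        self.prev_err = 0.0
        self.k = 0
        return self.df

    def step(self, gains, lam=0.05):
        """One control interval. The RL action is the gain vector (Kp, Ki, Kd);
        the reward penalizes |df| plus lam-weighted control effort |u|."""
        Kp, Ki, Kd = gains
        err = -self.df                         # drive df toward zero
        self.integ += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = Kp * err + Ki * self.integ + Kd * deriv     # power command dPm
        self.df += self.dt * (u - self.dPL - self.D * self.df) / (2 * self.H)
        self.k += 1
        reward = -(abs(self.df) + lam * abs(u))
        done = self.k >= self.horizon
        return self.df, reward, done

# Roll out one episode with a fixed (hand-picked) gain vector; an RL agent
# would instead output these gains and learn them from the reward signal.
env = MicrogridLFCEnv()
df = env.reset(load_step=0.1)
total = 0.0
while True:
    df, r, done = env.step(gains=(2.0, 5.0, 0.02))
    total += r
    if done:
        break
print(round(abs(df), 4))  # residual frequency deviation after the episode
```

With these illustrative gains the closed loop is well damped and the integral action removes the steady-state offset caused by the load step; an RL-tuned controller would adapt the gains as operating conditions change.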
