Adaptive Multi-Objective Optimization of Microgrid Energy Management Using Deep Reinforcement Learning Considering Battery Degradation and Renewable Uncertainty
Abstract
Microgrids offer enhanced resilience and efficiency but require sophisticated energy management systems (EMS) to balance conflicting objectives such as cost minimization, renewable energy utilization, and component longevity, especially under uncertainty. Traditional optimization methods often rely on precise forecasts and may struggle with real-time adaptation and with factors such as battery degradation. This research aimed to develop a Deep Reinforcement Learning (DRL) based EMS that optimizes microgrid operation while accounting for operational cost, battery degradation, and renewable generation uncertainty. A Deep Q-Network (DQN) agent was trained to manage energy flows within a simulated microgrid comprising solar PV, battery storage, controllable loads, and a grid connection. The reward function incorporated operational cost, battery degradation, and renewable utilization objectives, with the agent learning control policies through interaction with the environment. The DRL-based EMS demonstrated effective adaptive control, achieving a 12.01% reduction in overall operational costs compared to a Model Predictive Control benchmark. The DRL agent implicitly learned strategies that reduced battery degradation by 8.19% while increasing renewable energy utilization by 10.39%. Most notably, the approach maintained robust performance under uncertainty, with only an 8.9% cost increase under severe forecast errors, compared to 21.5% for conventional methods. This study demonstrates the efficacy of DRL for adaptive multi-objective microgrid energy management, successfully balancing economic operation, battery health preservation, and renewable energy integration under uncertainty.
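The abstract describes a reward function that combines operational cost, battery degradation, and renewable utilization into a single training signal for the DQN agent. The paper does not give its exact form; the sketch below shows one common way such a multi-objective reward is assembled, where the weights, the throughput-based degradation proxy, and all function and parameter names are illustrative assumptions, not the authors' implementation.

```python
def microgrid_reward(grid_cost, battery_throughput_kwh,
                     pv_used_kwh, pv_available_kwh,
                     w_cost=1.0, w_degradation=0.5, w_renewable=0.3,
                     degradation_cost_per_kwh=0.05):
    """Hypothetical scalar reward for a microgrid EMS agent.

    Combines three objectives: electricity cost (penalized), battery
    wear (penalized via a simple energy-throughput proxy), and
    renewable utilization (rewarded as the fraction of available PV
    energy actually consumed). All weights are assumed values.
    """
    # Degradation proxy: cost proportional to energy cycled through the battery.
    degradation_penalty = degradation_cost_per_kwh * battery_throughput_kwh
    # Renewable utilization: share of available PV generation that was used.
    utilization = pv_used_kwh / pv_available_kwh if pv_available_kwh > 0 else 0.0
    return (-w_cost * grid_cost
            - w_degradation * degradation_penalty
            + w_renewable * utilization)
```

Under this shaping, actions that cycle the battery less or absorb more PV output receive a higher reward, which is consistent with the degradation and utilization improvements the agent is reported to have learned implicitly.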