Hierarchical Coordination of Networked-Microgrids Toward Decentralized Operation: A Safe Deep Reinforcement Learning Method
Multiple individual microgrids can be integrated into a networked microgrid (NMG) system for enhanced technical and economic performance. In this paper, a two-stage data-driven method is proposed to hierarchically coordinate individual microgrids toward decentralized operation in an NMG system. The first stage schedules the active power outputs of micro-turbines and energy storage systems (ESSs) on an hourly basis for energy balancing and cost minimization, where the ESSs are controlled by a local P/SoC droop scheme. In the second stage, the reactive power outputs of PV inverters are dispatched every three minutes via a Q/V droop controller, aiming to reduce network power losses and regulate voltages under real-time uncertainties. In the offline training stage, a multi-agent deep reinforcement learning model is trained to learn an optimal coordination policy, enhanced by a safety model framework. For online application, the trained agents can operate locally in a decentralized manner without information exchange, and the safety model can also be applied to monitor and guide online actions for safety compliance. Numerical test results validate the effectiveness and advantages of the proposed method.
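As a rough illustration of the two droop forms named in the abstract (not the paper's actual controllers; all gains, references, and limits below are assumed placeholders), a minimal Python sketch of a P/SoC droop rule for an ESS and a Q/V droop rule for a PV inverter:

import numpy as np

# Illustrative sketch only: droop gains, references, and power limits are
# placeholder assumptions, not parameters from the paper.

def p_soc_droop(soc, soc_ref=0.5, k_p=2.0, p_max=50.0):
    """P/SoC droop for an ESS: active power (kW) proportional to the
    deviation of the state of charge from its reference."""
    p = k_p * p_max * (soc - soc_ref)   # discharge when SoC is above reference
    return float(np.clip(p, -p_max, p_max))

def q_v_droop(v_pu, v_ref=1.0, k_q=20.0, q_max=30.0):
    """Q/V droop for a PV inverter: reactive power (kvar) proportional to
    the local voltage deviation from its reference (per unit)."""
    q = -k_q * q_max * (v_pu - v_ref)   # absorb reactive power when voltage is high
    return float(np.clip(q, -q_max, q_max))

# Example: in the hierarchical scheme described above, the learned agents would
# adjust droop set-points/gains rather than command power outputs directly.
print(p_soc_droop(0.65))   # ESS discharges because SoC exceeds its reference
print(q_v_droop(1.03))     # PV inverter absorbs reactive power at high voltage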