Distribution Network Path Planning Method and System Based on Artificial Intelligence Optimization Algorithm


Abstract

Sensible transmission network design is the foundation of dependable, safe, and economically viable power system operation, and it becomes essential as power grids grow ever larger. Traditional mathematical optimization approaches struggle with the transmission network route planning problem because it is large-scale, non-linear, and high-dimensional. To resolve the transmission network planning problem, this research employs the ant colony algorithm, a highly intelligent bionic optimization tool. An enhanced ant colony method is proposed to search for routes and reduce the coupling among parameters. Under the assumption of accurate convergence, the ant colony method significantly improves the computation speed of transmission network design. The paper's modified ant colony algorithm outperforms the state of the art in processing time and in efficiency when searching for the optimal transmission line route. The planning model is a computerized representation of the electrical system that includes power cables and nodes (such as transformers and substations), together with the capacity, impedance, and position of each component.

The route planning problem is a fundamental topic with applications across many domains. Scholarly interest in solving the route optimization problem with deep reinforcement learning has grown in recent years, making it a popular avenue for path planning research. This research examines a power distribution route optimization approach, applies deep reinforcement learning to the continuous route planning problem, and conducts experiments in a Miniworld maze. A reward-shaping DDPG algorithm is proposed in which the reward function is represented by a neural network and optimized dynamically.
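The reward-shaping idea sketched above can be illustrated as follows. This is a minimal sketch, not the paper's implementation: it assumes potential-based shaping, where a small neural network Φ(s) (here a hypothetical two-layer MLP with random NumPy weights standing in for a trained network) supplies the extra term r' = r + γΦ(s') − Φ(s) added to the environment reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP potential function Phi(s); in a real
# reward-shaping DDPG setup these weights would be learned, here they
# are random placeholders (state dim 4 -> hidden 16 -> scalar).
W1 = rng.normal(0, 0.1, (4, 16))
b1 = np.zeros(16)
w2 = rng.normal(0, 0.1, 16)

def phi(s):
    """Scalar potential over states (assumed architecture)."""
    h = np.tanh(s @ W1 + b1)
    return float(h @ w2)

def shaped_reward(r, s, s_next, gamma=0.99, done=False):
    """Potential-based shaping: r' = r + gamma*Phi(s') - Phi(s).
    Terminal states use Phi(s') = 0, which keeps the optimal policy
    of the underlying task unchanged."""
    phi_next = 0.0 if done else phi(s_next)
    return r + gamma * phi_next - phi(s)

s = np.array([0.1, -0.2, 0.0, 0.3])
s_next = np.array([0.2, -0.1, 0.1, 0.3])
print(shaped_reward(-1.0, s, s_next))
```

The shaped reward would replace the raw reward in the DDPG critic update; with γ = 1 the shaping terms telescope along a trajectory, so only the potential at the endpoints matters.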
In a study comparing DDPG with a genetic algorithm, binary particle swarm optimization, and the historical average approach, the historical average approach was found to be easy to compute but to have a low, less-than-ideal accuracy rate that changed little as the data grew. The genetic algorithm's accuracy hovered around 70% and degraded as the training set grew. By contrast, the forecasting accuracy of the deep reinforcement learning model increased as the training set expanded, eventually stabilizing at about 83%.
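For concreteness, the ant colony route search the paper applies to transmission line planning can be sketched on a toy network. This is a generic ant colony optimization loop under assumed parameters (pheromone weight alpha, heuristic weight beta, evaporation rate rho), not the paper's enhanced variant; nodes stand for substations and edge weights for line-cost estimates.

```python
import random

# Toy network: nodes are substations, edge weights are line-cost estimates.
GRAPH = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0, "D": 4.0},
    "C": {"A": 5.0, "B": 1.0, "D": 1.0},
    "D": {"B": 4.0, "C": 1.0},
}

def aco_route(src, dst, n_ants=20, n_iters=30,
              alpha=1.0, beta=2.0, rho=0.5, seed=1):
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}  # pheromone trails
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            path, node = [src], src
            while node != dst:
                choices = [v for v in GRAPH[node] if v not in path]
                if not choices:
                    break  # dead end; abandon this ant
                # Transition probability ~ pheromone^alpha * (1/cost)^beta
                weights = [tau[(node, v)] ** alpha * (1.0 / GRAPH[node][v]) ** beta
                           for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
            if path[-1] == dst:
                cost = sum(GRAPH[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate, then deposit pheromone proportional to tour quality.
        tau = {e: (1 - rho) * t for e, t in tau.items()}
        for path, cost in tours:
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += 1.0 / cost
    return best_path, best_cost

print(aco_route("A", "D"))  # best route found on the toy graph
```

The paper's enhanced variant would modify this baseline, e.g. in how parameters are decoupled during the search, but the evaporate-and-deposit loop above is the core mechanism.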
