A Memory-Efficient Grey Wolf Optimization Algorithm and Its Performance Evaluation in High-Dimensional Problems

Abstract

The Grey Wolf Optimization (GWO) algorithm is widely applied to continuous optimization problems due to its simple structure and strong global search capability. However, in high-dimensional scenarios it often suffers from high memory consumption and reduced convergence efficiency. This paper proposes a memory-efficient GWO variant that lowers the algorithm's space complexity by minimizing intermediate state storage during individual updates and simplifying the leader-update strategy. To validate its effectiveness, comprehensive experiments were conducted on over 20 classic continuous benchmark functions at 30 to 100 dimensions. The results show that the improved algorithm converges 20%–35% faster while reducing peak memory consumption by approximately 30%. Stability assessments reveal a consistent reduction in solution variance on high-dimensional, complex landscapes, and a parameter sensitivity analysis confirms the algorithm's robustness to changes in the contraction and weighting factors. A real-world antenna array synthesis task further validates its practical performance, yielding notable improvements in pattern stability and memory efficiency. These findings indicate that the proposed method not only enhances search effectiveness but also delivers substantial memory savings, making it well suited to resource-constrained or large-scale optimization tasks in real engineering contexts.
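To make the update step concrete, the following is a minimal sketch of a standard GWO iteration that accumulates the three leader-guided pulls into a single reusable buffer instead of materializing separate intermediate arrays per wolf. This is only an illustration of the low-memory idea described in the abstract; the paper's exact simplification of the leader-update strategy is not reproduced here, and all names (`gwo_step`, `sphere`, the population sizes) are hypothetical.

```python
import numpy as np

def gwo_step(pop, fitness, a, rng):
    """One GWO iteration, updating positions in place.

    Instead of storing X1, X2, X3 (one array per leader, per wolf),
    the three leader pulls are summed into one buffer of size `d`,
    which is the memory-saving idea sketched here (an assumption,
    not the paper's exact scheme).
    """
    order = np.argsort(fitness)
    leaders = pop[order[:3]].copy()   # alpha, beta, delta (snapshot)
    n, d = pop.shape
    buf = np.empty(d)                 # single reusable work buffer
    for i in range(n):
        buf[:] = 0.0
        for lead in leaders:
            A = a * (2.0 * rng.random(d) - 1.0)   # coefficient in [-a, a]
            C = 2.0 * rng.random(d)
            buf += lead - A * np.abs(C * lead - pop[i])
        pop[i] = buf / 3.0            # overwrite wolf i in place
    return pop

def sphere(x):
    """Classic continuous benchmark: f(x) = sum(x_i^2), minimum 0 at origin."""
    return float(np.sum(x * x))

rng = np.random.default_rng(0)
dim, n_wolves, iters = 30, 20, 200
pop = rng.uniform(-5.0, 5.0, size=(n_wolves, dim))
for t in range(iters):
    fit = np.array([sphere(w) for w in pop])
    a = 2.0 * (1.0 - t / iters)       # contraction factor decays 2 -> 0
    gwo_step(pop, fit, a, rng)
best = min(sphere(w) for w in pop)
```

Peak memory per iteration is dominated by the population matrix itself (`n_wolves × dim`) plus one length-`dim` buffer, rather than three extra candidate arrays per wolf.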