An Adaptive Multi-Objective Memetic Algorithm (AMOMA) for the Hyperparameter Tuning of LightGBM

Abstract

Hyperparameter optimization (HPO) is vital for achieving optimal performance in gradient boosting frameworks such as the Light Gradient Boosting Machine (LightGBM). Traditional optimization methods such as grid search, random search, and Bayesian optimization typically focus on maximizing predictive accuracy while overlooking computational efficiency and resource constraints. This study presents an Adaptive Multi-Objective Memetic Algorithm (AMOMA) that jointly optimizes classification performance, training time, and memory usage through the integration of global evolutionary search, adaptive local refinement, and dynamic objective weighting. Across three benchmark datasets, Adult Income, Yeast, and Give Me Some Credit, AMOMA achieved consistent and significant improvements over baseline optimizers. On the Adult Income dataset, AMOMA recorded the highest accuracy (0.877) and Jaccard index (0.708). On the Yeast dataset, AMOMA achieved an F1 score of 0.720 and a recall of 0.662. On the highly imbalanced Give Me Some Credit dataset, it improved recall by 2% while maintaining competitive precision. Pareto front analysis further demonstrated superior convergence and diversity, highlighting AMOMA's ability to discover balanced solutions across conflicting objectives. These findings establish AMOMA as a robust, adaptive, and resource-efficient framework for multi-objective hyperparameter optimization in LightGBM.
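To make the three-objective setup concrete, the sketch below shows one way a memetic loop could score LightGBM configurations on classification error, training time, and peak memory, combine them with a (here, hand-tuned and gradually shifting) weight vector, and alternate global mutation with small-scale local refinement of the elite. This is a minimal illustration under assumptions, not the authors' AMOMA implementation: the search space `SPACE`, the `evaluate`/`mutate`/`memetic_search` helpers, the weight-update rule, and the use of `tracemalloc` for memory measurement are all choices made for this example.

```python
# Minimal illustrative sketch of multi-objective memetic HPO for LightGBM.
# NOT the authors' AMOMA: the search space, mutation scheme, local-search
# step, and dynamic-weight rule below are simplified assumptions.
import random
import time
import tracemalloc

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

SPACE = {  # illustrative subset of LightGBM hyperparameters
    "num_leaves": (15, 255),
    "learning_rate": (0.01, 0.3),
    "n_estimators": (50, 400),
}

def sample():
    """Draw one random configuration from the bounds in SPACE."""
    return {
        "num_leaves": random.randint(*SPACE["num_leaves"]),
        "learning_rate": random.uniform(*SPACE["learning_rate"]),
        "n_estimators": random.randint(*SPACE["n_estimators"]),
    }

def evaluate(params):
    """Return the three objectives: test error, training time (s), peak memory (MB)."""
    tracemalloc.start()
    start = time.perf_counter()
    model = lgb.LGBMClassifier(**params, verbose=-1)
    model.fit(X_tr, y_tr)
    train_time = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    error = 1.0 - accuracy_score(y_te, model.predict(X_te))
    return error, train_time, peak / 1e6

def mutate(params, scale=0.3):
    """Perturb one hyperparameter within its bounds; small scale = local refinement."""
    child = dict(params)
    key = random.choice(list(SPACE))
    lo, hi = SPACE[key]
    val = params[key] + random.uniform(-1, 1) * (hi - lo) * scale
    val = min(max(val, lo), hi)
    child[key] = int(round(val)) if isinstance(lo, int) else val
    return child

def memetic_search(generations=5, pop_size=6, weights=(0.7, 0.2, 0.1)):
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(sum(w * o for w, o in zip(weights, evaluate(p))), p) for p in pop]
        scored.sort(key=lambda t: t[0])
        best = scored[0][1]
        # Elite kept as-is, a locally refined copy added, and the remaining
        # slots filled with larger global mutations to preserve diversity.
        pop = [best, mutate(best, scale=0.05)] + [mutate(p) for _, p in scored[: pop_size - 2]]
        # Crude stand-in for dynamic objective weighting: gradually shift
        # emphasis toward accuracy as the search progresses.
        weights = (min(0.9, weights[0] + 0.05), weights[1] * 0.9, weights[2] * 0.9)
    return best

if __name__ == "__main__":
    print("Best configuration found:", memetic_search())
```

The scalarized score here is only a stand-in for the paper's multi-objective selection; a faithful reproduction would retain the full Pareto front rather than collapsing the objectives into a single weighted sum.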
