Stacking Ensemble Learning: Combining XGBoost, LightGBM, CatBoost, and AdaBoost with a Random Forest Meta-Model

Abstract

Ensemble learning has become a powerful approach in machine learning, particularly for improving prediction accuracy and generalization. This work proposes a stacking ensemble framework that integrates four popular boosting algorithms (XGBoost, LightGBM, CatBoost, and AdaBoost) as base learners, with Random Forest employed as the meta-learner. The design leverages the complementary strengths of boosting algorithms in handling tabular data, categorical variables, and imbalanced datasets, while Random Forest ensures robust decision-making at the meta-level. The results show that the proposed stacking ensemble consistently outperforms the individual models, achieving greater stability and reducing both variance and bias. This highlights the effectiveness of combining multiple boosting algorithms with a Random Forest meta-model to build a hybrid system that is accurate, generalizable, and applicable across different machine learning domains.
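A minimal sketch of the described architecture, using scikit-learn's StackingClassifier as the stacking mechanism. The abstract does not report implementation details, so the hyperparameters, the 5-fold out-of-fold scheme, and the synthetic dataset below are illustrative assumptions rather than the authors' settings.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (
        AdaBoostClassifier,
        RandomForestClassifier,
        StackingClassifier,
    )
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier
    from lightgbm import LGBMClassifier
    from catboost import CatBoostClassifier

    # Base learners: the four boosting algorithms named in the abstract.
    base_learners = [
        ("xgb", XGBClassifier(n_estimators=200, random_state=42)),
        ("lgbm", LGBMClassifier(n_estimators=200, random_state=42)),
        ("cat", CatBoostClassifier(iterations=200, verbose=0, random_state=42)),
        ("ada", AdaBoostClassifier(n_estimators=200, random_state=42)),
    ]

    # Meta-learner: a Random Forest trained on the base learners'
    # out-of-fold predictions (cv=5 avoids label leakage at the meta-level).
    stack = StackingClassifier(
        estimators=base_learners,
        final_estimator=RandomForestClassifier(n_estimators=300, random_state=42),
        cv=5,
        n_jobs=-1,
    )

    # Synthetic data stands in for the paper's (unspecified) benchmark datasets.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    stack.fit(X_train, y_train)
    print(f"Stacking accuracy: {accuracy_score(y_test, stack.predict(X_test)):.3f}")

Training the Random Forest meta-model on cross-validated (out-of-fold) base-learner predictions is the standard way to keep training labels from leaking into the meta-level, which underpins the variance and bias reduction that the abstract attributes to the stacked design.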
