Evaluating the Efficacy of Bayesian Optimization for Class-Imbalanced Data: Jointly Optimizing Classifier Hyperparameters and Sampling Strategies
Abstract
Training machine learning models on class-imbalanced datasets is a critical challenge for real-world applications of artificial intelligence in fields such as fraud detection and medical diagnosis. In these datasets, the extreme disparity between majority- and minority-class populations often leads to poor minority-class recall when models are trained with traditional methods. Most conventional approaches tune sampling strategies or classifier hyperparameters independently, failing to account for the interdependence between these factors. This study presents an alternative to these underperforming heuristics: the first published Bayesian optimization framework that jointly optimizes sampling ratios and classifier hyperparameters while accounting for their interactions. Evaluated on the IEEE-CIS Fraud Detection dataset, which exhibits an extreme 3.5% class imbalance, the framework achieved an average minority-class recall of 91.2% across 5 trials, roughly double that of traditional methods such as inverse class frequency weighting and SMOTE. T-tests confirmed the statistical significance of these results (p < 0.0001), supporting the approach's potential as a scalable framework for addressing class-imbalanced problems in the real world.
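The abstract's core idea of jointly searching over a sampling ratio and classifier hyperparameters can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it uses a synthetic dataset in place of IEEE-CIS, simple random oversampling in place of the paper's sampling strategies, a random forest with one tuned hyperparameter (`max_depth`), and a hand-rolled Bayesian optimization loop (Gaussian-process surrogate with an upper-confidence-bound acquisition) using only scikit-learn. All function names, ranges, and constants here are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for an imbalanced fraud dataset (~3.5% minority class).
X, y = make_classification(n_samples=4000, n_features=10,
                           weights=[0.965], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=0)

def oversample(X, y, ratio, rng):
    """Randomly oversample the minority class up to `ratio` * majority count."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_target = int(ratio * len(majority))
    extra = rng.choice(minority, size=max(n_target - len(minority), 0))
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

def objective(params):
    """Minority-class recall for a (sampling ratio, max_depth) pair."""
    ratio, depth = params
    Xs, ys = oversample(X_tr, y_tr, ratio, rng)
    clf = RandomForestClassifier(n_estimators=50, max_depth=int(depth),
                                 random_state=0).fit(Xs, ys)
    return recall_score(y_va, clf.predict(X_va))

# Joint search space: sampling ratio in [0.1, 1.0], tree depth in [2, 12].
bounds = np.array([[0.1, 1.0], [2.0, 12.0]])

# Bayesian optimization loop: 5 random warm-up trials, then 10 guided trials.
trials = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2)).tolist()
scores = [objective(p) for p in trials]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(np.array(trials), np.array(scores))
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = cand[np.argmax(mu + 1.5 * sigma)]  # upper-confidence-bound pick
    trials.append(best.tolist())
    scores.append(objective(best))

best_ratio, best_depth = trials[int(np.argmax(scores))]
print(f"best recall={max(scores):.3f} "
      f"ratio={best_ratio:.2f} depth={int(best_depth)}")
```

Because the surrogate models both dimensions at once, the acquisition step can exploit interactions, e.g. preferring deeper trees only when the sampling ratio is high, which is exactly the interdependence the abstract argues independent tuning misses.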