Unlocking the Black Box beyond Bayesian Global Optimization for Materials Design using Reinforcement Learning

Abstract

Materials design often becomes an expensive black-box optimization problem due to limitations in balancing exploration-exploitation trade-offs in high-dimensional spaces. We propose a reinforcement learning (RL) framework that effectively navigates complex design spaces through two complementary approaches: a model-based strategy that uses surrogate models for sample-efficient exploration, and an on-the-fly strategy for settings where direct experimental feedback is available. In high-dimensional spaces (D ≥ 6), the approach outperforms Bayesian optimization (BO) with the Expected Improvement (EI) acquisition function, owing to more dispersed sampling patterns and better landscape-learning capabilities. Furthermore, we observe a synergistic effect when combining BO's early-stage exploration with RL's adaptive learning. Evaluations on both standard benchmark functions (Ackley, Rastrigin) and real-world high-entropy alloy data demonstrate statistically significant improvements (p < 0.01) over traditional BO with EI, particularly in complex, high-dimensional scenarios. This work addresses the limitations of existing methods and provides practical tools for guiding experiments.
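To make the baseline in this comparison concrete, the sketch below shows a minimal BO loop with the EI acquisition on the D-dimensional Ackley benchmark, in the D ≥ 6 regime the abstract highlights. It is an illustrative sketch built from standard NumPy/SciPy/scikit-learn calls, not the authors' implementation; helper names such as `expected_improvement` and `propose_next` are hypothetical, and the random-candidate search stands in for a proper inner optimizer of the acquisition function.

```python
# Minimal sketch (assumptions noted above): GP surrogate + Expected Improvement
# on the D-dimensional Ackley benchmark, the BO-with-EI baseline the abstract
# compares the RL framework against.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def ackley(x):
    """Ackley test function (global minimum 0 at the origin)."""
    x = np.atleast_2d(x)
    d = x.shape[1]
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    s1 = np.sqrt((x ** 2).sum(axis=1) / d)
    s2 = np.cos(c * x).sum(axis=1) / d
    return -a * np.exp(-b * s1) - np.exp(s2) + a + np.e

def expected_improvement(X_cand, gp, f_best, xi=0.01):
    """EI for minimization: E[max(f_best - f(x) - xi, 0)] under the GP posterior."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive std
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_next(gp, f_best, bounds, rng, n_cand=2048):
    """Pick the random candidate with the highest EI (cheap stand-in for an inner optimizer)."""
    X_cand = rng.uniform(bounds[0], bounds[1], size=(n_cand, bounds.shape[1]))
    return X_cand[np.argmax(expected_improvement(X_cand, gp, f_best))]

rng = np.random.default_rng(0)
D = 6                                                # the regime (D >= 6) discussed above
bounds = np.array([[-5.0] * D, [5.0] * D])
X = rng.uniform(bounds[0], bounds[1], size=(5, D))   # small initial design
y = ackley(X)

for _ in range(40):                                  # sequential BO loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    x_next = propose_next(gp, y.min(), bounds, rng)
    X = np.vstack([X, x_next])
    y = np.append(y, ackley(x_next))

print(f"best Ackley value after {len(y)} evaluations: {y.min():.3f}")
```

An RL alternative of the kind the abstract describes would replace `propose_next` with a learned policy, either trained against the GP surrogate (the model-based strategy) or updated directly from experimental feedback (the on-the-fly strategy); the specifics of that policy are detailed in the full article.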