Soft Actor-Critic Reinforcement Learning Improves Distillation Column Internals Design Optimization

Abstract

Despite advances in the computer-based chemical process modeling and simulation packages used commercially to accelerate chemical process design and analysis, certain design-optimization tasks, such as distillation column internals design, remain bottlenecks due to inherent limitations of those packages. This work demonstrates the use of soft actor-critic (SAC) reinforcement learning (RL) to automate the determination of optimal designs for trayed multistage distillation columns. The design environment was created in the AspenPlus® software (version 12, Aspen Technology Inc., Bedford, Massachusetts, USA), using its RadFrac module for the required rigorous modeling of the column internals. The RL computations were carried out through a Python package developed to interface with AspenPlus® and through an implementation in the Gymnasium module (version 1.0.0, Farama Foundation) of the learning space for the state and action variables. The results show that (1) SAC RL works as an automation approach for the design of distillation column internals, (2) the reward scheme significantly affects SAC performance, (3) column diameter is a significant constraint on meeting the flooding specifications of the column internals, and (4) SAC hyperparameters have varying effects on SAC performance. SAC RL can be implemented as a one-shot learning model that significantly improves the design of multistage distillation column internals by automating the optimization process.
