Control of a Bi-Stable Genetic System via Parallelized Reinforcement Learning

Abstract

Achieving real-time control of genetic systems is critical for improving the reliability, efficiency, and reproducibility of biological research and engineering. Yet the intrinsic stochasticity of these systems makes this goal difficult. Prior efforts have faced three recurring challenges: (a) predictive models of gene expression dynamics are often inaccurate or unavailable, (b) nonlinear dynamics and feedback in genetic circuits frequently lead to multi-stability, limiting the effectiveness of deterministic control strategies, and (c) slow biological response times make data collection for learning-based methods prohibitively time-consuming. Recent experimental advances now allow the parallel observation and manipulation of over a million individual cells, opening the door to model-free, data-driven control strategies. Here we investigate the use of Parallelized Q-Networks (PQN), a recently developed reinforcement learning algorithm, to learn control policies for a simulated bi-stable gene regulatory network. We show that PQN not only controls this self-activating system more accurately than other model-free and model-based control methods previously used in the field, but also converges efficiently enough to be practical for experimental application. Our results suggest that parallelized experiments, coupled with advances in reinforcement learning, provide a viable path toward real-time, model-free control of complex, multi-stable biological systems.
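
To make the setup concrete, the sketch below illustrates the core idea in the spirit of PQN: many simulated cells stepped in parallel, with a single Q-network trained on their pooled transitions, without a replay buffer or a separate target network. This is not the authors' implementation; the toy circuit dynamics, parameter values, network size, and one-step TD update are all assumptions chosen for illustration (PQN itself additionally uses Q(λ) returns and network normalization, omitted here).

```python
# Illustrative sketch only -- not the paper's code. A toy stochastic
# self-activating gene circuit is simulated in thousands of parallel "cells",
# and a small Q-network is trained on their pooled transitions.
import jax
import jax.numpy as jnp

N_CELLS = 4096        # parallel simulated cells
N_ACTIONS = 2         # inducer off / inducer on
TARGET = 40.0         # setpoint between the circuit's low and high stable states

def step_cells(key, x, action):
    """One Euler-Maruyama step: basal production + Hill-type self-activation
    + inducer drive - first-order degradation, with multiplicative noise."""
    hill = 20.0 * x**4 / (30.0**4 + x**4)   # positive feedback -> bi-stability
    drift = 0.5 + hill + 8.0 * action - 0.3 * x
    noise = jax.random.normal(key, x.shape) * jnp.sqrt(jnp.maximum(x, 1e-3))
    return jnp.maximum(x + drift + noise, 0.0)

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (1, 32)) * 0.3, "b1": jnp.zeros(32),
            "w2": jax.random.normal(k2, (32, N_ACTIONS)) * 0.3,
            "b2": jnp.zeros(N_ACTIONS)}

def q_values(params, x):
    """Tiny MLP mapping a cell's scaled expression level to action values."""
    h = jnp.tanh((x[:, None] / 50.0) @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def td_loss(params, x, a, r, x_next):
    """One-step TD error, bootstrapping from the online network (PQN drops
    the separate target network that classic deep Q-learning requires)."""
    q = q_values(params, x)[jnp.arange(x.shape[0]), a]
    target = r + 0.95 * jnp.max(q_values(params, x_next), axis=1)
    return jnp.mean((q - jax.lax.stop_gradient(target)) ** 2)

@jax.jit
def train_step(params, key, x, eps):
    k_act, k_exp, k_env = jax.random.split(key, 3)
    greedy = jnp.argmax(q_values(params, x), axis=1)
    random_a = jax.random.randint(k_act, (N_CELLS,), 0, N_ACTIONS)
    a = jnp.where(jax.random.uniform(k_exp, (N_CELLS,)) < eps, random_a, greedy)
    x_next = step_cells(k_env, x, a.astype(jnp.float32))
    r = -jnp.abs(x_next - TARGET)            # reward: proximity to the setpoint
    loss, grads = jax.value_and_grad(td_loss)(params, x, a, r, x_next)
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
    return params, x_next, loss

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jnp.full((N_CELLS,), 2.0)                # start every cell in the low state
for t in range(500):
    key, sub = jax.random.split(key)
    eps = max(0.05, 1.0 - t / 300.0)         # decaying epsilon-greedy exploration
    params, x, _ = train_step(params, sub, x, eps)
print(f"mean expression after training: {x.mean():.1f} (target {TARGET})")
```

The parallelism is what addresses challenge (c) above: each slow biological time step yields thousands of transitions at once, so a sample-hungry Q-learning update becomes practical in wall-clock time even though any single cell provides data only slowly.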
