LMI4Boltz: Optimising VRAM utilisation to predict large macromolecular complexes with consumer grade hardware

Abstract

AlphaFold2 has revolutionised structural biology by enabling the prediction of protein structures approaching experimental quality. AlphaFold3 extends this framework to support modelling of broad biomolecular classes while also reducing the computational cost of prediction. However, AlphaFold3 is distributed under licence conditions that restrict general-purpose use. Boltz is a permissive, open-source re-implementation of AlphaFold3, but its increased VRAM requirements mean that high-end GPU hardware is needed to model large molecular systems. Here we introduce Low Memory Inference for Boltz (LMI4Boltz), which reduces VRAM requirements through in-place updates, offloading of tensors to host memory, careful management of functional scope, and aggressive chunking of key operations. Using these strategies, LMI4Boltz increases the token size limit of Boltz-2 by 66.7% without sacrificing prediction accuracy. These optimisations make Boltz accessible on consumer-grade hardware and unlock the ability to model large molecular systems. LMI4Boltz is available at https://github.com/tlitfin/lmi4boltz.
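The abstract names several generic VRAM-saving strategies. The PyTorch sketch below is only an illustration of what those strategies look like in principle; it is not code from LMI4Boltz or Boltz, and all tensor names, sizes, and the `chunked_row_max` helper are hypothetical.

```python
# Illustrative sketch (not the LMI4Boltz implementation) of generic
# VRAM-saving strategies: in-place updates, host-memory offload,
# limiting tensor lifetimes, and chunking a memory-heavy operation.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. In-place update: add a residual without allocating a second output tensor.
x = torch.randn(4096, 512, device=device)
residual = torch.randn(4096, 512, device=device)
x.add_(residual)  # modifies x in place instead of x = x + residual

# 2. Host-memory offload and scope management: keep a CPU copy, drop the GPU
#    reference while it is not needed, and restore it just before use.
cpu_cache = x.to("cpu")   # park the tensor in host memory
del x                     # release the GPU reference so its VRAM can be reclaimed
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # optionally return freed blocks to the driver
# ... other GPU-heavy work would run here ...
x = cpu_cache.to(device)  # bring the tensor back to the GPU when required

# 3. Chunking: compute a pairwise statistic in row blocks so the full
#    N x N score matrix is never materialised at once.
def chunked_row_max(a: torch.Tensor, b: torch.Tensor, chunk: int = 1024) -> torch.Tensor:
    """Row-wise maximum similarity without holding the full N x N matrix."""
    out = torch.empty(a.shape[0], device=a.device)
    for start in range(0, a.shape[0], chunk):
        end = min(start + chunk, a.shape[0])
        block = a[start:end] @ b.T           # only a (chunk x N) slice is live
        out[start:end] = block.max(dim=1).values
    return out

row_max = chunked_row_max(x, x)
print(row_max.shape)  # torch.Size([4096])
```

The trade-off illustrated here is the general one: offloading and chunking exchange some extra data movement and recomputation overhead for a lower peak VRAM footprint.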
