Pretrained protein language models choose between sequence novelty and structural completeness
Abstract
Protein language models (PLMs) have gained increasing acceptance in tasks ranging from variant effect prediction in disease to the optimization and de novo design of proteins with improved stability, target-binding affinity, and catalytic performance. Despite encouraging performance in such applications, little is understood about the degree to which PLM-generated sequences (putative novel protein outputs) recapitulate the broad biophysical rules and the diversity of sequence, structure, and function that define natural protein space, knowledge that is vital for extending the design capacity of PLMs to ever-more-complex systems. Towards this end, we computationally profile and characterize the sequence and structure statistics and properties of hundreds of thousands of potential small proteins proposed through free, unconstrained generation from architecturally distinct PLMs. We show that although these models exhibit a prodigious latent capacity to access novel amino-acid sequences, they struggle to approach the structural variation that is on plain display in nature. Moreover, we uncover a stark tradeoff between prioritizing sequence novelty and prioritizing structural breadth, exemplified by a "helical bundle trap" that dominates model output when generation aims outside the comfortable bounds and evolutionary organization of natural sequences. These findings underscore a critical need for strategies that can rapidly guide PLMs toward generating the full richness of protein sequence, structure, and function that is consistent with governing biophysics but remains tantalizingly untapped in design contexts.
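The "free unconstrained generation" referenced above can be illustrated with a minimal sketch, not the authors' actual pipeline: sampling sequences from an autoregressive protein language model with no conditioning context, using the HuggingFace transformers API. The checkpoint name, sampling hyperparameters, and the assumption of a defined BOS token are all placeholders; masked PLMs (ESM-style) would instead require an iterative unmasking or Gibbs-style sampling loop.

```python
# Minimal sketch of unconstrained sampling from an autoregressive protein LM.
# MODEL_NAME is a hypothetical placeholder, not a checkpoint named in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "some-org/protein-causal-lm"  # hypothetical checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sample_sequences(n: int = 8, max_len: int = 120, temperature: float = 1.0):
    """Draw n unconditioned sequences by nucleus sampling from the model."""
    # Seed generation with only the beginning-of-sequence token, so that no
    # natural sequence context constrains the output (assumes a BOS token exists).
    bos = torch.tensor([[tokenizer.bos_token_id]])
    with torch.no_grad():
        out = model.generate(
            bos.repeat(n, 1),
            do_sample=True,
            max_length=max_len,
            temperature=temperature,
            top_p=0.95,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in out]

if __name__ == "__main__":
    for seq in sample_sequences():
        print(seq.replace(" ", ""))  # strip any tokenizer spacing between residues
```

In a profiling workflow along the lines described in the abstract, sequences sampled this way would then be scored for sequence novelty (e.g., against natural sequence databases) and folded with a structure predictor to assess structural diversity; those downstream steps are outside the scope of this sketch.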