Improving functional protein generation via foundation model-derived latent space likelihood optimization
Abstract
A variety of deep generative models have been adopted for de novo functional protein generation. Compared to 3D protein design, sequence-based generation methods, which aim to generate amino acid sequences with desired functions, remain a major approach to functional protein generation because of the abundance and quality of protein sequence data and the relatively low modeling complexity of training. Although these models are typically trained to match protein sequences from the training data, exact matching of every amino acid is not always essential. Certain amino acid changes (e.g., mismatches, insertions, and deletions) do not necessarily lead to functional changes. This suggests that maximizing the training data likelihood beyond the amino acid sequence space could yield better generative models. Pre-trained protein large language models (PLMs) such as ESM2 can encode protein sequences into a latent space, potentially serving as functional validators. We propose training functional protein sequence generative models by simultaneously optimizing the likelihood of the training data in both the amino acid sequence space and the latent space derived from a PLM. This training scheme can also be viewed as a knowledge distillation approach that dynamically re-weights samples during training. We applied our method to train GPT-like models (i.e., autoregressive transformers) for antimicrobial peptide (AMP) and malate dehydrogenase (MDH) generation tasks. Computational experiments confirmed that our method outperformed various deep generative models (e.g., a generative adversarial network, a variational autoencoder, and a GPT model without the proposed training strategy) on these tasks, demonstrating the effectiveness of our multi-likelihood optimization strategy.
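The abstract does not give the exact form of the latent-space likelihood, so the following is only a minimal PyTorch sketch of the general idea: a GPT-like generator is trained with the usual next-token cross-entropy in the amino acid sequence space, plus an auxiliary term that encourages agreement with the training sequence in the latent space of a frozen PLM. The model sizes, the `FrozenPLM` stand-in (used here in place of ESM2), and the cosine-similarity surrogate for the latent-space term are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions: toy 22-token amino-acid vocabulary, a small GPT-like
# decoder as the generator, and a frozen random Transformer encoder standing in for a
# pre-trained PLM such as ESM2; the latent-space likelihood is approximated here by a
# cosine-similarity term, which is an assumption, not the paper's formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_GEN, D_PLM, MAX_LEN = 22, 128, 64, 50  # hypothetical sizes

class TinyGPT(nn.Module):
    """Autoregressive transformer over amino-acid tokens (the generative model)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, D_GEN)
        self.pos = nn.Embedding(MAX_LEN, D_GEN)
        layer = nn.TransformerEncoderLayer(D_GEN, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_GEN, VOCAB)

    def forward(self, tokens):                       # tokens: (B, L)
        L = tokens.size(1)
        x = self.emb(tokens) + self.pos(torch.arange(L, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(L).to(tokens.device)
        return self.head(self.blocks(x, mask=mask))  # (B, L, VOCAB) next-token logits

class FrozenPLM(nn.Module):
    """Stand-in for a frozen PLM encoder (e.g., ESM2) mapping sequences to latents."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, D_PLM)
        layer = nn.TransformerEncoderLayer(D_PLM, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.parameters():
            p.requires_grad_(False)

    def embed_soft(self, probs):                     # probs: (B, L, VOCAB) soft tokens
        return self.enc(probs @ self.emb.weight).mean(dim=1)  # (B, D_PLM) pooled latent

    def embed_hard(self, tokens):                    # tokens: (B, L)
        return self.enc(self.emb(tokens)).mean(dim=1)

def combined_loss(gen, plm, tokens, lam=0.5):
    """Sequence-space cross-entropy plus a latent-space agreement term."""
    logits = gen(tokens[:, :-1])                     # teacher-forced next-token logits
    ce = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    # Latent term: push the model's (soft) reconstruction toward the training
    # sequence's latent representation under the frozen PLM.
    probs = logits.softmax(dim=-1)
    z_model = plm.embed_soft(probs)
    z_data = plm.embed_hard(tokens[:, 1:])
    latent = 1.0 - F.cosine_similarity(z_model, z_data, dim=-1).mean()
    return ce + lam * latent

gen, plm = TinyGPT(), FrozenPLM()
batch = torch.randint(0, VOCAB, (8, MAX_LEN))        # dummy tokenized sequences
loss = combined_loss(gen, plm, batch)
loss.backward()
```

Under this reading, the latent term penalizes amino acid changes that move the sequence in the PLM's latent space while tolerating changes that leave the latent representation (and thus, presumably, the function) largely unchanged, which is consistent with the abstract's description of the scheme as a sample re-weighting, knowledge-distillation-style strategy.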