ProtoBind-Diff: A Structure-Free Diffusion Language Model for Protein Sequence-Conditioned Ligand Design

Abstract

Designing small molecules that selectively bind to protein targets remains a central challenge in drug discovery. While recent generative models leverage 3D structural data to guide ligand generation, their applicability is limited by the sparsity and bias of structural resources. Here, we introduce ProtoBind-Diff, a structure-free masked diffusion model that conditions molecular generation directly on protein sequences via pre-trained language model embeddings. Trained on over one million active protein-ligand pairs from BindingDB, ProtoBind-Diff generates chemically valid, novel, and target-specific ligands without requiring structural supervision. In extensive benchmarking against structure-based models, ProtoBind-Diff performs competitively in docking and Boltz-1 evaluations and generalizes well to challenging targets, including those with limited training data. Despite never observing 3D information during training, its attention maps align with predicted binding residues, suggesting the model learns spatially meaningful interaction priors from sequence alone. This sequence-conditioned generation framework may unlock ligand discovery across the full proteome, including orphan, flexible, or rapidly emerging targets for which structural data are unavailable or unreliable.
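To make the abstract's core idea concrete, below is a minimal, self-contained sketch of sequence-conditioned masked (absorbing-state) diffusion: a ligand token sequence starts fully masked, and a conditional model iteratively unmasks positions, with the protein sequence embedding steering token probabilities. All names here (`protein_embedding`, `toy_logits`, the vocabulary) are hypothetical stand-ins, not ProtoBind-Diff's actual components; a real system would use a pre-trained protein language model and a transformer denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SMILES-token vocabulary; index 0 is the absorbing [MASK] state.
VOCAB = ["[MASK]", "C", "N", "O", "c1ccccc1", "=O"]
MASK = 0

def protein_embedding(seq, dim=8):
    """Stand-in for a pre-trained protein language model embedding
    (hypothetical): hash residues into a fixed-size vector."""
    v = np.zeros(dim)
    for i, aa in enumerate(seq):
        v[(ord(aa) + i) % dim] += 1.0
    return v / max(len(seq), 1)

def toy_logits(tokens, i, cond):
    """Hypothetical conditional model: biases token choice by the protein
    embedding. A real model would use cross-attention over the sequence."""
    scores = np.ones(len(VOCAB))
    scores[MASK] = 0.0                    # never re-emit the mask token
    scores[1:] += cond[: len(VOCAB) - 1]  # condition on the target protein
    return scores / scores.sum()

def denoise_step(tokens, cond, t, logits_fn):
    """One reverse diffusion step: each masked position is unmasked with
    probability (1 - t), sampling from the conditional distribution."""
    out = tokens.copy()
    for i, tok in enumerate(tokens):
        if tok == MASK and rng.random() < (1.0 - t):
            out[i] = rng.choice(len(VOCAB), p=logits_fn(tokens, i, cond))
    return out

def generate(seq, length=6, steps=8):
    cond = protein_embedding(seq)
    tokens = [MASK] * length              # start fully masked
    for s in range(steps):
        t = 1.0 - (s + 1) / steps         # noise level anneals to 0
        tokens = denoise_step(tokens, cond, t, toy_logits)
    return [VOCAB[t] for t in tokens]

print(generate("MKTAYIAKQR"))
```

Because the noise level reaches zero at the final step, every position is guaranteed to be unmasked, yielding a complete (toy) token sequence conditioned only on the protein sequence, mirroring the structure-free setup the abstract describes.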
