Mechanism-Aware Inductive Bias Enhances Generalization in Protein-Protein Interaction Prediction
Abstract
Robust prediction of protein–protein interactions (PPIs) requires models that generalize beyond the training distribution. Here, we present PLMDA-PPI, a mechanism-aware framework that builds biophysical inductive biases into a dual-attention architecture with residue-level contact supervision, applied to geometric representations derived from protein language model embeddings. This design enables the model to jointly learn global interaction likelihoods and the residue pairs that mediate interactions, producing interpretable, mechanism-grounded predictions. PLMDA-PPI demonstrates strong out-of-distribution generalization, substantially outperforming lightweight deep learning models (D-SCRIPT, Topsy-Turvy, TT3D, and TUnA) and matching or exceeding computationally intensive methods derived from AlphaFold2 and RoseTTAFold2 (AF2Complex, RF2-Lite, and RF2-PPI), while requiring orders of magnitude fewer computational resources. These results show that encoding mechanistic priors as architectural inductive biases improves generalization, interpretability, and computational efficiency, providing a principled foundation for AI-driven prediction of complex biological interactions.
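To make the dual-head idea concrete, the sketch below pairs a global interaction classifier with a residue-level contact head on top of per-residue protein language model (PLM) embeddings. This is a minimal, hypothetical PyTorch rendering, not the PLMDA-PPI implementation: the bidirectional cross-attention, projection size, and mean pooling are illustrative assumptions, and the 1280-dimensional input simply matches a common ESM-2 embedding width rather than anything specified in the abstract.

```python
import torch
import torch.nn as nn

class DualHeadPPI(nn.Module):
    """Hypothetical dual-head PPI model: a global interaction logit plus a
    residue-residue contact map that can be supervised with known interface
    contacts (e.g., per-pair binary cross-entropy)."""

    def __init__(self, embed_dim: int = 1280, proj_dim: int = 128, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(embed_dim, proj_dim)
        # Cross-attention in both directions: residues of A attend to B, and vice versa.
        self.cross_ab = nn.MultiheadAttention(proj_dim, n_heads, batch_first=True)
        self.cross_ba = nn.MultiheadAttention(proj_dim, n_heads, batch_first=True)
        self.cls = nn.Linear(2 * proj_dim, 1)

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor):
        # emb_a: (La, embed_dim), emb_b: (Lb, embed_dim) per-residue PLM embeddings.
        a = self.proj(emb_a).unsqueeze(0)          # (1, La, d)
        b = self.proj(emb_b).unsqueeze(0)          # (1, Lb, d)
        a_ctx, _ = self.cross_ab(a, b, b)          # A residues contextualized by B
        b_ctx, _ = self.cross_ba(b, a, a)          # B residues contextualized by A
        # Residue-level head: pairwise contact logits, shape (La, Lb).
        contact_logits = torch.einsum("bid,bjd->bij", a_ctx, b_ctx).squeeze(0)
        # Global head: pooled representations -> single interaction logit.
        pooled = torch.cat([a_ctx.mean(dim=1), b_ctx.mean(dim=1)], dim=-1)
        interaction_logit = self.cls(pooled).squeeze(-1)
        return interaction_logit, contact_logits

# Toy usage with random tensors standing in for PLM output.
model = DualHeadPPI()
emb_a, emb_b = torch.randn(120, 1280), torch.randn(95, 1280)
logit, contacts = model(emb_a, emb_b)
print(logit.shape, contacts.shape)  # torch.Size([1]) torch.Size([120, 95])
```

In this reading, the contact head provides the residue-level supervision signal that grounds the global prediction in a plausible interface, which is one way an architecture could yield the mechanism-grounded, interpretable outputs the abstract describes.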