Interformer: An Interaction-Aware Model for Protein-Ligand Docking and Affinity Prediction

Abstract

In recent years, there has been growing interest in using deep learning models for protein-ligand docking and affinity prediction, both vital for structure-based drug design. However, many of these models overlook the intricate modeling of interactions between ligand and protein atoms, which constrains their generalization and interpretability. In this paper, we introduce Interformer, a unified model built upon the Graph-Transformer architecture that is specifically crafted to capture non-covalent interactions through an interaction-aware mixture density network. In addition, we incorporate a new strategy that utilizes negative samples to effectively correct the learned interaction distribution for affinity prediction. Experimental results on widely used public datasets and our in-house dataset demonstrate the effectiveness and universality of the proposed approach. Extensive analyses confirm our claim that the approach improves performance by modeling protein-ligand-specific interactions. Encouragingly, our approach advances the state-of-the-art performance on docking tasks. We intend to make our code publicly available, hoping to facilitate future research in this field.
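To make the abstract's central idea concrete, the sketch below illustrates what an interaction-aware mixture density network head might look like: a map from protein-ligand pair features to a Gaussian mixture over inter-atomic distances, trained by negative log-likelihood. This is a minimal illustration, not the paper's actual architecture; all function names, shapes, and the choice of a Gaussian mixture over pairwise distances are assumptions made for exposition.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def mdn_head(pair_features, W_pi, W_mu, W_sigma):
    """Hypothetical MDN head: map pair features (P, L, D) to a K-component
    Gaussian mixture over protein-ligand atom-pair distances."""
    pi = softmax(pair_features @ W_pi)                         # mixture weights, (P, L, K)
    mu = np.abs(pair_features @ W_mu)                          # component means in Angstroms, >= 0
    sigma = np.log1p(np.exp(pair_features @ W_sigma)) + 1e-3   # softplus keeps widths positive
    return pi, mu, sigma


def mdn_nll(pi, mu, sigma, dist):
    """Negative log-likelihood of observed pairwise distances under the mixture."""
    d = dist[..., None]  # (P, L, 1), broadcast against the K components
    comp = np.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log((pi * comp).sum(axis=-1) + 1e-12).mean()


# Toy example: 4 protein atoms, 3 ligand atoms, 8-dim pair features, K = 5 components.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3, 8))
W_pi, W_mu, W_sigma = (rng.normal(size=(8, 5)) for _ in range(3))
pi, mu, sigma = mdn_head(feats, W_pi, W_mu, W_sigma)
dist = rng.uniform(2.0, 6.0, size=(4, 3))  # fake observed distances
loss = mdn_nll(pi, mu, sigma, dist)
```

In such a setup, minimizing the NLL on crystal poses teaches the model a per-pair distance distribution, and the same density can be evaluated on decoy (negative) poses to penalize implausible interactions, in the spirit of the negative-sample correction the abstract describes.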
