Predicting commercially available antiviral drugs that may act on the novel coronavirus (2019-nCoV), Wuhan, China through a drug-target interaction deep learning model
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
The infection of a novel coronavirus found in Wuhan of China (2019-nCoV) is rapidly spreading, and the incidence rate is increasing worldwide. Due to the lack of effective treatment options for 2019-nCoV, various strategies are being tested in China, including drug repurposing. In this study, we used our pretrained deep learning-based drug-target interaction model called Molecule Transformer-Drug Target Interaction (MT-DTI) to identify commercially available drugs that could act on viral proteins of 2019-nCoV. The result showed that atazanavir, an antiretroviral medication used to treat and prevent the human immunodeficiency virus (HIV), is the best chemical compound, showing an inhibitory potency with Kd of 94.94 nM against the 2019-nCoV 3C-like proteinase, followed by efavirenz (199.17 nM), ritonavir (204.05 nM), and dolutegravir (336.91 nM). Interestingly, lopinavir, ritonavir, and darunavir are all designed to target viral proteinases. However, in our prediction, they may also bind to the replication complex components of 2019-nCoV with an inhibitory potency of Kd < 1000 nM. In addition, we also found that several antiviral agents, such as Kaletra, could be used for the treatment of 2019-nCoV, although there is no real-world evidence supporting the prediction. Overall, we suggest that the list of antiviral drugs identified by the MT-DTI model should be considered when establishing effective treatment strategies for 2019-nCoV.
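The screening step implied by the abstract (rank candidate drugs by predicted Kd and keep those under a potency cutoff) can be sketched as follows. This is a hypothetical illustration, not the authors' code; `rank_candidates` and the 1000 nM cutoff are assumptions, while the drug names and Kd values are the ones reported above for the 3C-like proteinase.

```python
# Hypothetical post-processing of predicted binding affinities from a
# drug-target interaction model such as MT-DTI. Lower Kd = more potent.

def rank_candidates(predictions, kd_cutoff_nm=1000.0):
    """Return (drug, Kd) pairs with Kd below the cutoff, most potent first."""
    hits = [(drug, kd) for drug, kd in predictions.items() if kd < kd_cutoff_nm]
    return sorted(hits, key=lambda pair: pair[1])

# Predicted Kd values (nM) reported in the abstract for the 2019-nCoV
# 3C-like proteinase.
predicted_kd_nm = {
    "atazanavir": 94.94,
    "efavirenz": 199.17,
    "ritonavir": 204.05,
    "dolutegravir": 336.91,
}

ranked = rank_candidates(predicted_kd_nm)
print(ranked[0])  # ('atazanavir', 94.94)
```

The cutoff of Kd < 1000 nM mirrors the threshold the abstract uses when discussing binding to the replication complex components.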
Article activity feed
-
SciScore for 10.1101/2020.01.31.929547:
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
NIH rigor criteria are not applicable to paper type.Table 2: Resources
Software and Algorithms:
- Sentence: "Briefly, the natural language processing (NLP) based Bidirectional Encoder Representations from Transformers (BERT) framework is a core algorithm of the model with good performance and robust results in diverse drug-target interaction datasets through pretraining with 'chemical language' SMILES of approximately 1,000,000,000 compounds." Suggested resource: BERT (RRID:SCR_018008)
- Sentence: "Since the BindingDB database includes a wide variety of species and target proteins, the MT-DTI model has the potential power to predict interactions between antiviral drugs and 2019-nCoV proteins." Suggested resource: BindingDB (RRID:SCR_000390)

Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
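The "chemical language" pretraining noted above treats SMILES strings as text that a transformer can learn from. A minimal sketch of that idea is shown below; the tokenizer is a hypothetical simplification for illustration and is not the tokenization scheme actually used by MT-DTI.

```python
import re

# Simplified, illustrative SMILES tokenizer: bracketed atoms ([NH4+]) and
# two-letter elements (Br, Cl) are kept as single tokens; every other
# character is its own token. A BERT-style model would then embed these
# tokens just as it embeds words in natural language.
SMILES_TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|.")

def tokenize_smiles(smiles):
    """Split a SMILES string into chemistry-aware tokens."""
    return SMILES_TOKEN.findall(smiles)

# Aspirin's SMILES split into a token sequence:
print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
```

Tokenized sequences like this are what pretraining on hundreds of millions of compounds operates over, letting the model learn chemical regularities before being fine-tuned on binding-affinity data such as BindingDB.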
Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- No conflict of interest statement was detected. If there are no conflicts, we encourage authors to explicitly state so.
- No funding statement was detected.
- No protocol registration statement was detected.