Rapid protein stability prediction using deep learning representations

Curation statements for this article:
  • Curated by eLife

    eLife Assessment:

    Predicting the effect of mutations on protein stability is important both for protein engineering and for helping to decipher the effects of genetic and clinical mutations. The machine learning methodology introduced here is timely in view of the millions of AlphaFold model structures that are now becoming available, which could hypothetically be examined through approaches such as this one. The methodology presented is valuable, but the manuscript would benefit from a substantial amount of comparative data to provide more compelling evidence for the validity of the methods.

Abstract

Predicting the thermodynamic stability of proteins is a common and widely used step in protein engineering, and when elucidating the molecular mechanisms behind evolution and disease. Here, we present RaSP, a method for making rapid and accurate predictions of changes in protein stability by leveraging deep learning representations. RaSP performs on par with biophysics-based methods and enables saturation mutagenesis stability predictions in less than a second per residue. We use RaSP to calculate ∼ 230 million stability changes for nearly all single amino acid changes in the human proteome, and examine variants observed in the human population. We find that variants that are common in the population are substantially depleted for severe destabilization, and that there are substantial differences between benign and pathogenic variants, highlighting the role of protein stability in genetic diseases. RaSP is freely available—including via a Web interface—and enables large-scale analyses of stability in experimental and predicted protein structures.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    Estimating the effects of mutations on the thermal stability of proteins is fundamentally important and also has practical importance, e.g., for engineering of stable proteins. Changes can be measured using calorimetric methods and values are reported as differences in free energy (dG) of the mutant compared to wt proteins, i.e., ddG. Values typically range from -1 kcal/mol to +7 kcal/mol. However, measurements are highly demanding. The manuscript introduces a novel deep learning approach to this end, which is similar in accuracy to ROSETTA-based estimates, but much faster, enabling proteome-wide studies. To demonstrate this, the authors apply it to over 1000 human proteins.
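    For reference, the quantity in question can be written out explicitly. With the sign convention implied by the range quoted above (positive values destabilizing), it is the difference in folding free energy between mutant and wild type:

    ```latex
    \Delta\Delta G \;=\; \Delta G_{\mathrm{fold}}^{\mathrm{mutant}} \;-\; \Delta G_{\mathrm{fold}}^{\mathrm{wild\ type}}
    ```

    Note that conventions vary in the literature (free energy of folding vs. unfolding); the form above matches the reviewer's description.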

    The main strength here is the novelty of the approach and the high speed of the computation. The main weakness is that the results are not compared to existing machine learning alternatives.

    We thank Prof. Ben-Tal for taking the time to assess our work, and for his comments and suggestions below.

    Reviewer #2 (Public Review):

    Summary:

    This work presents a new machine-learning method, RaSP, to predict changes in protein stability due to point mutations, measured by the change in folding free energy ΔΔG.
    The model consists of two coupled neural networks, a 3D self-supervised convolutional neural network that produces a reduced-dimensionality representation of the structural environment of a given residue, and a downstream supervised fully-connected neural network that, using the former network's structural representation as input, predicts the ΔΔG of any given amino-acid mutation. The first network is trained on a large dataset of protein structures, and the second network is trained using a dataset of the ΔΔG values of all mutants of 35 proteins, predicted by the biophysics-based method Rosetta.
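    As a reader aid, the two-stage design described above can be sketched in PyTorch roughly as follows. All layer sizes, channel counts, grid dimensions, and names here are illustrative assumptions, not the published RaSP architecture:

    ```python
    import torch
    import torch.nn as nn

    class StructureEncoder(nn.Module):
        """Self-supervised 3D CNN: encodes the voxelized atomic environment
        around a residue into a low-dimensional embedding. Channel counts
        and layer sizes are placeholders, not RaSP's actual values."""
        def __init__(self, in_channels=6, embed_dim=16):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Flatten(),
            )
            self.project = nn.LazyLinear(embed_dim)  # infers flattened size at first call

        def forward(self, voxels):  # voxels: (batch, channels, x, y, z)
            return self.project(self.conv(voxels))

    class DDGHead(nn.Module):
        """Supervised head: maps (structure embedding, wild-type residue,
        mutant residue) to a scalar ΔΔG; trained against Rosetta values."""
        def __init__(self, embed_dim=16, n_aa=20):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(embed_dim + 2 * n_aa, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, embedding, wt_onehot, mut_onehot):
            x = torch.cat([embedding, wt_onehot, mut_onehot], dim=-1)
            return self.mlp(x).squeeze(-1)
    ```

    In the setup described above, the encoder is trained first (self-supervised, on many structures) and the head afterwards on Rosetta ΔΔG values; the sketch only shows the data flow.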

    The paper shows that RaSP gives good approximations of Rosetta ΔΔG predictions while being several orders of magnitude faster. As compared to experimental data, judging by a comparison made for a few proteins, RaSP and Rosetta predictions perform similarly. In addition, it is shown that both RaSP and Rosetta are robust to variations of input structure, so good predictions are obtained using either structures predicted by homology or structures predicted using AlphaFold2.
    Finally, the usefulness of a rapid approach such as RaSP is clearly demonstrated by applying it to calculate ΔΔG values for all mutations of a large dataset of human proteins, for which this method is shown to reproduce previous findings of the overall ΔΔG distribution and the relationship between ΔΔG and the pathological consequences of mutations. The RaSP tool and the dataset of mutations of human proteins are shared.

    Strengths:

    The single main strength of this work is that the model developed, RaSP, is much faster than Rosetta (by 5 to 6 orders of magnitude), and still produces ΔΔG predictions of comparable accuracy (as compared with Rosetta, and with experiment). The usefulness of such a rapid approach is convincingly demonstrated by its application to predicting the ΔΔG of all single-point mutations of a large dataset of human proteins, for which using this new method they reproduce previous findings on the relationship between stability and disease. Such a large-scale calculation would be prohibitive with Rosetta. Importantly, other researchers will be able to take advantage of the method because the code and data are shared, and a Google Colab site where RaSP can be easily run has been set up. An additional bonus is that the dataset of human proteins and their RaSP ΔΔG predictions, annotated as benign/pathogenic (according to the ClinVar database) and/or by their allele frequency (from the gnomAD database), are also made available, which may be very useful for further studies.

    Weaknesses:

    The paper presents a solid case in support of the speed, accuracy, and usefulness of RaSP. However, it does suffer from a few weaknesses.

    The main weakness is, in my opinion, that it is not clear where RaSP is positioned in the accuracy-vs-speed landscape of current ΔΔG-prediction methods. The paper does show that RaSP is much faster than Rosetta, and provides evidence that supports that its accuracy is comparable with that of Rosetta, but RaSP is not compared to any other method. For instance, FoldX has been used in large-scale studies of similar size to the one used here to exemplify RaSP. How does RaSP compare with FoldX? Is it more accurate? Is it faster? Also, as the paper mentions in the introduction, several ML methods have been developed recently; how does RaSP compare with them regarding accuracy and CPU time? How RaSP fares in comparison with other fast approaches such as FoldX and/or ML methods will strongly affect the potential usefulness and impact of the present work.

    Second, since this work presents a new model, a notable weakness is that the model is not sufficiently described. I had to read a previous paper from 2017, on which this work builds, to understand the self-supervised CNN used to model the structure, and even so, I still don't know which of the 3 different 3D grids used in that original paper is used in the present work.

    A third weakness is, I think, that a stronger case needs to be made for fitting RaSP to Rosetta ΔΔG predictions rather than experimental ΔΔGs. The justification put forward by the authors is that the dataset of Rosetta predictions is large and unbiased while the dataset of experimental data is smaller and biased, which may result in overfitting. While I understand that this may be a problem and that, in general, it is better to have a large unbiased dataset in place of a small biased one, it is not so obvious to me from reading the paper how much of a problem this is, and whether trying to fix it by fitting the model to the predictions of another model rather than to empirical data does not introduce other issues.

    Finally, the method is claimed to be "accurate", but it is not clear to me what this means. Accuracy is quantified by the correlation coefficient between Rosetta and RaSP predictions, R = 0.82, and by the Mean Absolute Error, MAE = 0.73 kcal/mol. Also, both RaSP and Rosetta have R ~ 0.7 with experiment for the few cases where they were tested on experimental data. This seems to be a rather modest accuracy; I wouldn't claim that a method that produces this sort of fit is "accurate". I suppose the case is that this may be as accurate as one can hope it to be, given the limitations of current experimental data, Rosetta, RaSP, and other current methods, but if this is the case, it is not clearly discussed in the paper.
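    For concreteness, the two quoted metrics are computed as follows; the arrays below are made-up placeholder values, not data from the paper:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Placeholder ΔΔG values (kcal/mol); in the paper these would be the
    # Rosetta and RaSP predictions for the same set of variants.
    rosetta = np.array([0.5, 2.1, -0.3, 4.0, 1.2])
    rasp    = np.array([0.8, 1.7,  0.1, 3.2, 1.5])

    r, _ = pearsonr(rosetta, rasp)           # Pearson correlation coefficient
    mae = np.mean(np.abs(rosetta - rasp))    # mean absolute error
    print(f"R = {r:.2f}, MAE = {mae:.2f} kcal/mol")
    ```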

    We thank the reviewer for their detailed comments and suggestions.

    As discussed in our general comments above and below, we have now added additional benchmarking, making it easier to compare the accuracy of RaSP with other methods. Regarding the model description, we have now added a more detailed description of the 3D CNN as well.

    Regarding whether to fit the model to experimental or computational data, we agree that it is not clear-cut; fitting to experimental data might also have worked. Indeed, a main problem is that, in both cases, it is hard to say which approach is better, because of the scarcity of experimental data. One major problem with the larger sets of experimental data is, as we mention, the bias and variability; another is the provenance. While some databases exist, they rarely contain exactly the raw data, and may for example contain ∆∆G values estimated from ∆Tm values. In the revised manuscript we now explain better why we chose to target Rosetta, but also acknowledge that one might also have used experiments.
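    (As background on the ∆Tm point: one commonly used conversion, given here as an assumption about what such databases may do rather than anything stated in the manuscript, is the Becktel-Schellman-style approximation using the wild-type entropy of unfolding at the melting temperature:)

    ```latex
    \Delta\Delta G \;\approx\; \Delta T_m \, \Delta S_m^{\mathrm{wt}},
    \qquad
    \Delta S_m^{\mathrm{wt}} = \frac{\Delta H_m^{\mathrm{wt}}}{T_m^{\mathrm{wt}}}
    ```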

    As to the question of accuracy, we agree completely that the methods could be better. One problem, however, is that it is very difficult to say how much better, because of problems with the experiments. As mentioned also by reviewer 1, variation across different experiments suggests that even a "perfect" predictor would only achieve Pearson correlation coefficients in the range 0.7–0.8 (https://doi.org/10.1093/bioinformatics/bty880). Clearly, this is an issue of imperfect data curation (it is possible to measure ∆∆G quite accurately), but in the absence of larger and better curated experiments, one should not expect much better accuracy than what we report here. This is now discussed in the revised manuscript.
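    This ceiling can be illustrated with a small simulation: even a predictor that returns the true ∆∆G exactly has its Pearson R against a noisy measurement capped by the measurement error alone (the spread and error values below are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    true_ddg = rng.normal(1.0, 1.5, size=5000)   # hypothetical true ΔΔG distribution
    sigma = 1.0                                   # assumed experimental error (kcal/mol)
    measured = true_ddg + rng.normal(0.0, sigma, size=5000)

    # A "perfect" predictor returns true_ddg exactly; its correlation with
    # the noisy measurement is limited by the measurement error alone.
    r, _ = pearsonr(true_ddg, measured)
    print(f"Apparent R of a perfect predictor: {r:.2f}")
    ```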

    Reviewer #3 (Public Review):

    The authors present a machine learning method for predicting the effects of mutations on the free energy of protein stability. The method performs similarly to existing methods, but has the advantage that it is faster to run. Overall this is reasonable, and a faster method will likely have some potential uses. However, not improving performance beyond the reasonable but not great performance of existing methods of course makes this a less useful advance. The authors provide predictions for a set of human proteins, but the impact of their method would be much greater if they provided predictions for all substitutions in all human proteins, for example. In places the text somewhat overstates the performance of computational methods for predicting free energy changes and is potentially misleading about when ddGs are predicted vs. experimentally measured. In addition, the comparison to existing methods is rather slim and there isn't a formal evaluation of how well RaSP discriminates pathological from benign variants.
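    One standard form such a formal evaluation could take, sketched here with hypothetical inputs rather than any analysis from the paper, is to score how well predicted ∆∆G values separate ClinVar classes via ROC AUC:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical data: predicted ΔΔG per variant and ClinVar labels
    # (1 = pathogenic, 0 = benign).
    ddg_pred = np.array([0.2, 3.5, 0.1, 4.2, 1.0, 2.8])
    labels   = np.array([0,   1,   0,   1,   0,   1  ])

    # Higher ΔΔG (more destabilizing) is taken as evidence of pathogenicity.
    auc = roc_auc_score(labels, ddg_pred)
    print(f"ROC AUC = {auc:.2f}")
    ```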

    We thank the reviewer for taking time to read our work and for their various suggestions.
