Article activity feed

  1. Author Response:

    Reviewer #1 (Public Review):

    The recent development of AlphaFold2 has improved the ability to predict a protein's fold from its sequence. However, this approach typically yields a single defined structural fold, while proteins are known to exhibit structural diversity through different conformations. In particular, membrane transport proteins and receptors adopt distinct conformational states in order to allow for alternating access or signaling across the membrane. In this study, the authors demonstrate that reducing the size of the input sequence alignment fed into AlphaFold2 increases conformational diversity in the structural predictions, with some of these corresponding to known experimentally determined structures. They test this with a diverse set of transporters whose structures have been solved in both inward- and outward-facing conformations, as well as GPCRs in active and inactive states. Decreasing the size of the sequence alignment from 5120 to 32 sequences leads to a general increase in conformational diversity among the predicted structures, and these structures are generally bounded by the experimental structures. The RMSF analysis of residues among the different models corresponds to the RMSD of residues between the experimental structures, and principal component analysis demonstrates that these models connect the two known conformations. Altogether, this analysis validates that the ability to predict alternate conformations of transporters and receptors is already present in AlphaFold2.
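
    The RMSF comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: `per_residue_rmsf` and the toy ensemble are hypothetical names, and the models are assumed to be already superposed on a common reference frame.

```python
import numpy as np

def per_residue_rmsf(coords):
    """Per-residue RMSF across an ensemble of pre-aligned models.

    coords: array of shape (n_models, n_residues, 3), e.g. C-alpha
    positions after superposing every model onto one reference.
    """
    coords = np.asarray(coords, dtype=float)
    mean_pos = coords.mean(axis=0)                   # (n_residues, 3)
    sq_dev = ((coords - mean_pos) ** 2).sum(axis=2)  # (n_models, n_residues)
    return np.sqrt(sq_dev.mean(axis=0))              # (n_residues,)

# Toy two-residue ensemble: residue 0 is rigid, residue 1 moves along x.
ens = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]],
])
rmsf = per_residue_rmsf(ens)  # -> [0.0, 1.0]
```

    Residues with high RMSF across the model ensemble are then compared against the per-residue RMSD between the two experimental conformations.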

    This validation is important, but further analysis is necessary to move beyond a demonstration and towards a procedure for predicting relevant conformations. Along these lines, quantification of the robustness of the approach along different parameters is needed. Furthermore, the study stops short of defining how to statistically weed through the ensemble of models to predict meaningful conformations. AlphaFold2 may generate highly accurate models, but how does the user pick which ones are likely to be relevant? Therefore, this is an interesting study that is expected to be broadly impactful for the study of all proteins, not just membrane proteins tested here. However, limitations remain on the interpretation of the results and a clarification is needed to demonstrate how others may use this approach to predict new biologically relevant conformations.

    We believe the approach used in this manuscript can only sample energy minima; identification of the relevant individual states of interest will likely require experimental validation. Thus, we have modified the text to reinforce this point in various parts of the manuscript. We have edited the text at the end of “Introduction”:

    "Finally, we propose a modeling pipeline for researchers interested in sampling alternative conformations of specific membrane proteins, which we apply to the structurally unknown GPR114/AGRG5 adhesion GPCR as an example."

    Additional clarification is provided in “Results and Discussion” subsection “Distributions of predicted models relative to the experimental structures”:

    "Indeed, the models with the most extreme PC1 values were also among the most accurate: average TM-scores were 0.94 for the top one, top three and top ten PC1 models, and Pearson correlation coefficients between PC1 and TM-scores of the ensemble of models exceeded 0.8 for all transporters in this dataset. Moreover, the experimental structures virtually always flanked the AF2 models along PC1. The exception, PTH1R, was determined in a partially inactive and active conformation29, suggesting that models extending beyond the former state along PC1 may represent the fully inactive conformation. Therefore, these results indicate that accurate representative models of conformations of interest can be selected from the extreme positions along PC1."
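
    The PC1-based selection described in this passage can be illustrated with a short sketch. It is an assumption-laden toy, not the authors' code: models are flattened to coordinate vectors, PC1 is taken from an SVD of the centered data, and the models at the two extremes of PC1 are flagged as candidate representatives of the alternative conformations.

```python
import numpy as np

def pc1_projection(coords):
    """Project an ensemble of pre-aligned models onto its first
    principal component (PC1).

    coords: (n_models, n_residues, 3) C-alpha coordinates after
    superposition; each model becomes one flattened observation.
    """
    X = np.asarray(coords, dtype=float).reshape(len(coords), -1)
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the PCs.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

# Five toy "models" of a two-residue protein varying along x only.
ens = np.array([[[t, 0.0, 0.0], [0.0, 0.0, 0.0]]
                for t in (-2.0, -1.0, 0.0, 1.0, 2.0)])
proj = pc1_projection(ens)
# Extreme PC1 positions: candidate alternative-conformation models.
lo, hi = int(np.argmin(proj)), int(np.argmax(proj))
```

    In practice one would take the top one, three, or ten models at each extreme of PC1 and, as the response notes, confirm the selected conformers experimentally.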

    Finally, we have added a sentence in subsection "Concluding remarks":

    "Accurate representatives of distinct conformers were generally obtainable with exhaustive sampling and could be identified by performing PCA and selecting models at the extreme positions of PC1."

    Reviewer #3 (Public Review):

    This manuscript describes a workflow for using AlphaFold2 (AF2) to model membrane proteins in different conformations. It then evaluates the models generated by this workflow on eight different membrane protein structures representing different structural classes and mechanisms. The authors conclude that AF2 can provide models with reasonable accuracy and conformational diversity of membrane proteins, but additional improvements are needed to be able to sample biologically relevant conformations.

    In principle, the research presented in this study is timely and can be of general interest to the community. It attempts to address the question of whether AF2 can accurately predict membrane protein dynamics. As the authors state, they provide "a hack" for modeling membrane proteins with AF2. My main concern with this manuscript is that the adopted workflow needs to be optimized and assessed more rigorously, in order to support the conclusions regarding the usefulness of AF2 for modeling membrane proteins.

    In addition to the importance of the topic, some strengths of the study include: focusing on proteins representing different folds and families, using different measures for structural evaluation, and presenting several examples in greater detail, particularly of important human proteins.

    My specific comments can be found below:

    A significant concern is that the Methods section of this manuscript is lacking. Additional details are needed in order to be able to evaluate the validity of the approach and reproduce these results. I list below some specific issues.

    The alignments used to develop the models should be provided. Specific details on how the visual inspection of the alignments guided their refinement should also be included. I could imagine that the alignment quality may correlate with model accuracy. This is an important analysis to include.

    We introduced modifications to the manuscript to clarify that all alignment subsampling was performed randomly by the AF2 program. Because the major modification discussed here is a reduction in the size of the MSA randomly subsampled at each iteration of the program, our pipeline does not allow the user to modify or save these alignments. Analysis of the alignments responsible for producing specific models is therefore not possible.
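
    The random subsampling the response refers to happens inside AF2 itself; the sketch below merely illustrates the idea of stochastically reducing an MSA to a fixed depth while keeping the query sequence. `subsample_msa` and the toy sequences are hypothetical and not part of the AF2 codebase.

```python
import random

def subsample_msa(msa, depth, seed=None):
    """Randomly subsample an aligned MSA (query first) to `depth`
    sequences, always retaining the query itself.

    Illustrative only: AF2 performs its own internal subsampling at
    each iteration when the maximum MSA size is lowered.
    """
    if depth >= len(msa):
        return list(msa)
    rng = random.Random(seed)
    rest = rng.sample(msa[1:], depth - 1)
    return [msa[0]] + rest

# Toy MSA: one query plus 100 homologs, cut down to depth 16.
msa = ["QUERYSEQ"] + ["HOMOLOG%03d" % i for i in range(100)]
shallow = subsample_msa(msa, depth=16, seed=0)
```

    Because the draw is random at every iteration, different runs see different shallow alignments, which is what produces the conformational diversity discussed in the manuscript.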

    For some of the targets, template-based modeling clearly improved sampling of various conformations, while for others it did not. The authors only vaguely discuss this observation without providing a detailed analysis. For example, how were the templates selected for template-based modeling? Was the performance of AF2 dependent on the sequence similarity between the template(s) and the target? These are critical points needed to understand the utility of the approach and how one can adopt the proposed workflow.

    In response to a similar comment made by another Reviewer, we have expanded the relevant section in Methods regarding the use of templates. However, due to the relatively small size of this test set, a thorough quantitative analysis is likely not currently possible.

    A key conclusion of this study is that there is no one-model-fits-all approach with AF2 for accurately sampling the conformational space of membrane proteins. Although this conclusion sounds plausible, the authors do not provide significant evidence to support it: they tested the performance of the models for a very limited set of parameters. For example, they only used a few MSA depths, and they do not report performance for templates with different similarities to the target. Also, is it possible that a "one-model-fits-all" approach exists for particular folds or families? For example, LAT1 and MCT1 each represent very large protein families, and a clear workflow for each would represent an important advance in the field.

    Per the recommendation of another Reviewer, we carried out a more rigorous analysis of MSA depths (see Figure 1 - figure supplement 1). However, these results support our general conclusion that there are too few proteins to confidently identify the optimal set of parameters for accurate prediction of multiple conformations. We have rewritten a sentence in “Concluding remarks”:

    "Thus, while the results presented here provide a blueprint for obtaining AF2 models of alternative conformations, they also argue against an optimal one-size-fits-all approach for sampling conformational space of every protein with high accuracy."

    How were misfolded models identified? Providing a reference is not sufficient here. It is also stated that "padding MSAs with additional sequences had the desirable effect of decreasing the proportion of these models, it also limited the extent to which alternative conformations were sampled. Thus, our results revealed a delicate balance that must be achieved to generate models that are both diverse and natively folded. No general pattern was readily apparent regarding the ideal MSA depth required to achieve this balance." While this is an interesting initial observation, finding a pattern in the ability to detect those misfolded structures (for at least some folds or protein families) could increase the impact of the work.

    We have rewritten this paragraph and remade Figure S2 (now numbered Figure 1 - figure supplement 1) in response to a similar comment made by another Reviewer.

    In general, the definition of the different conformations is nuanced for each structural class and a better explanation is needed for those proteins that are discussed in greater detail. For example, when discussing one of these proteins, MCT1, the authors state: "One target, MCT1, was exclusively modeled by AF2 in either IF or fully occluded conformations regardless of MSA depth. Notably, these results closely parallel those reported by DeepMind during their attempt to model multiple conformations of LmrP in CASP14.". Could the authors elaborate on this statement? Could they provide quantitative data defining how occluded and open conformations are defined? Many of the readers are unlikely to know the LmrP example from a previous publication.

    We agree with this statement and have rewritten the paragraph to remove the reference to CASP14 in this section.

    The authors evaluate the models on structures that were not included in the AF2 training set. It would be useful to provide the list of the PDB ids that were included in the training of the AF2 version that was used in this study. This is important because the structures of some of these proteins were solved a few years ago with minor differences, even though they were classified as a "different conformation". As mentioned in the point above, the definition of "different conformation" can be highly nuanced depending on the protein family and the mechanism used by the protein.

    We have edited the first paragraph of “Results and Discussion” to more explicitly state that the structures of the proteins used in this test set were entirely absent from the version of the PDB used to train AF2. This design decision was critical in allowing us to sidestep this question of whether the conformations of interest, or similar conformations, were present or absent from the training set.

    In the section "Alternative conformations cannot be predicted for proteins with structures in the training set", the results should be described in a more quantitative way. Specifically, the following statement should be accompanied by quantitative data: "virtually every transporter model superimposed nearly perfectly with the training set conformation, and none resembled the alternative conformation".

    Per recommendations made by another Reviewer, we have added metrics to quantify the similarity of these models to the training set conformers. This also allows us to establish the similarity of these predictions to those of MCT1.

  2. Evaluation Summary:

    In this work, del Alamo and colleagues illustrate the ability of recent Deep Learning techniques to predict diverse conformational states in proteins, as opposed to single static models reflecting individual states. Although they are limited to a small number of test cases of membrane proteins, the examples are of interest to members of the community, who are currently unable to reliably model the essential conformational changes required for function, at least until Deep Learning methods can be improved along these lines.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)

  3. Reviewer #1 (Public Review):

    The recent development of AlphaFold2 has improved the ability to predict a protein's fold from its sequence. However, this approach typically yields a single defined structural fold, while proteins are known to exhibit structural diversity through different conformations. In particular, membrane transport proteins and receptors adopt distinct conformational states in order to allow for alternating access or signaling across the membrane. In this study, the authors demonstrate that reducing the size of the input sequence alignment fed into AlphaFold2 increases conformational diversity in the structural predictions, with some of these corresponding to known experimentally determined structures. They test this with a diverse set of transporters whose structures have been solved in both inward- and outward-facing conformations, as well as GPCRs in active and inactive states. Decreasing the size of the sequence alignment from 5120 to 32 sequences leads to a general increase in conformational diversity among the predicted structures, and these structures are generally bounded by the experimental structures. The RMSF analysis of residues among the different models corresponds to the RMSD of residues between the experimental structures, and principal component analysis demonstrates that these models connect the two known conformations. Altogether, this analysis validates that the ability to predict alternate conformations of transporters and receptors is already present in AlphaFold2.

    This validation is important, but further analysis is necessary to move beyond a demonstration and towards a procedure for predicting relevant conformations. Along these lines, quantification of the robustness of the approach along different parameters is needed. Furthermore, the study stops short of defining how to statistically weed through the ensemble of models to predict meaningful conformations. AlphaFold2 may generate highly accurate models, but how does the user pick which ones are likely to be relevant? Therefore, this is an interesting study that is expected to be broadly impactful for the study of all proteins, not just membrane proteins tested here. However, limitations remain on the interpretation of the results and a clarification is needed to demonstrate how others may use this approach to predict new biologically relevant conformations.

  4. Reviewer #2 (Public Review):

    In "Sampling the conformational landscapes of transporters and receptors with AlphaFold2" the authors provide insight into the methods available for predicting varying conformations in dynamic membrane proteins. The authors noted that AlphaFold2, a recently reported breakthrough in structure prediction technology based on deep learning, tended to produce outputs that are very homogeneous, even for proteins with dynamics as a primary feature, such as transporters or G-protein coupled receptors. The authors' goal was to produce a range of structural models more reflective of the true conformations observed during function, by modifying the input parameters, e.g. by providing templates or by reducing the number of input sequences.

    Excitingly, the results indicated that, by reducing the number of "constraints" through limiting the number of provided sequences, a much greater variability of conformational space could be explored. Even more excitingly, these conformations reflected the major dynamics of the conformational changes, at least according to comparison with known structures.

    A limitation of the reported work is the relatively small number of test cases (~10 different protein families), which is unavoidable given that AlphaFold2 was trained on almost the entirety of available structures in the Protein Databank. Indeed, for proteins in the training set, the strategies that the authors identified were of mixed effectiveness. Nevertheless, the authors provide a helpful strategy for researchers working with dynamic proteins, for whom AlphaFold2 results are currently rather limited. Moreover, their findings provide insights likely to contribute to the development of future machine learning tools.

  5. Reviewer #3 (Public Review):

    This manuscript describes a workflow for using AlphaFold2 (AF2) to model membrane proteins in different conformations. It then evaluates the models generated by this workflow on eight different membrane protein structures representing different structural classes and mechanisms. The authors conclude that AF2 can provide models with reasonable accuracy and conformational diversity of membrane proteins, but additional improvements are needed to be able to sample biologically relevant conformations.

    In principle, the research presented in this study is timely and can be of general interest to the community. It attempts to address the question of whether AF2 can accurately predict membrane protein dynamics. As the authors state, they provide "a hack" for modeling membrane proteins with AF2. My main concern with this manuscript is that the adopted workflow needs to be optimized and assessed more rigorously, in order to support the conclusions regarding the usefulness of AF2 for modeling membrane proteins.

    In addition to the importance of the topic, some strengths of the study include: focusing on proteins representing different folds and families, using different measures for structural evaluation, and presenting several examples in greater detail, particularly of important human proteins.

    My specific comments can be found below:

    A significant concern is that the Methods section of this manuscript is lacking. Additional details are needed in order to be able to evaluate the validity of the approach and reproduce these results. I list below some specific issues.

    The alignments used to develop the models should be provided. Specific details on how the visual inspection of the alignments guided their refinement should also be included. I could imagine that the alignment quality may correlate with model accuracy. This is an important analysis to include.

    For some of the targets, template-based modeling clearly improved sampling of various conformations, while for others it did not. The authors only vaguely discuss this observation without providing a detailed analysis. For example, how were the templates selected for template-based modeling? Was the performance of AF2 dependent on the sequence similarity between the template(s) and the target? These are critical points needed to understand the utility of the approach and how one can adopt the proposed workflow.

    A key conclusion of this study is that there is no one-model-fits-all approach with AF2 for accurately sampling the conformational space of membrane proteins. Although this conclusion sounds plausible, the authors do not provide significant evidence to support it: they tested the performance of the models for a very limited set of parameters. For example, they only used a few MSA depths, and they do not report performance for templates with different similarities to the target. Also, is it possible that a "one-model-fits-all" approach exists for particular folds or families? For example, LAT1 and MCT1 each represent very large protein families, and a clear workflow for each would represent an important advance in the field.

    How were misfolded models identified? Providing a reference is not sufficient here. It is also stated that "padding MSAs with additional sequences had the desirable effect of decreasing the proportion of these models, it also limited the extent to which alternative conformations were sampled. Thus, our results revealed a delicate balance that must be achieved to generate models that are both diverse and natively folded. No general pattern was readily apparent regarding the ideal MSA depth required to achieve this balance." While this is an interesting initial observation, finding a pattern in the ability to detect those misfolded structures (for at least some folds or protein families) could increase the impact of the work.

    In general, the definition of the different conformations is nuanced for each structural class and a better explanation is needed for those proteins that are discussed in greater detail. For example, when discussing one of these proteins, MCT1, the authors state: "One target, MCT1, was exclusively modeled by AF2 in either IF or fully occluded conformations regardless of MSA depth. Notably, these results closely parallel those reported by DeepMind during their attempt to model multiple conformations of LmrP in CASP14.". Could the authors elaborate on this statement? Could they provide quantitative data defining how occluded and open conformations are defined? Many of the readers are unlikely to know the LmrP example from a previous publication.

    The authors evaluate the models on structures that were not included in the AF2 training set. It would be useful to provide the list of the PDB ids that were included in the training of the AF2 version that was used in this study. This is important because the structures of some of these proteins were solved a few years ago with minor differences, even though they were classified as a "different conformation". As mentioned in the point above, the definition of "different conformation" can be highly nuanced depending on the protein family and the mechanism used by the protein.

    In the section "Alternative conformations cannot be predicted for proteins with structures in the training set", the results should be described in a more quantitative way. Specifically, the following statement should be accompanied by quantitative data: "virtually every transporter model superimposed nearly perfectly with the training set conformation, and none resembled the alternative conformation".
