Article activity feed

  1. Author Response:

    Reviewer #2 (Public Review):

    In their supplementary section A3-1.5 the authors perform QTL simulations to assess the performance of their analysis methods. Of particular interest is the performance of their cross-validated stepwise forward search methodology, which was used to identify all the QTL. However, a major limitation of their simulations was their choice of genetic architectures. In their simulations, all variants have a mean effect of 1% and a random sign. They also simulated 15, 50, or 150 QTL, which spans a range of sparse architectures, but not highly polygenic ones. It was unclear how the results would change as a function of different trait heritability. The simulations should explore a wider range of genetic architectures, with effect sizes sampled from normal or exponential distributions, as is more commonly done in the field.

    As suggested, we have expanded the range of simulations explored in the revised manuscript. We note that the original simulations discussed in the manuscript already involve exponentially distributed effect sizes (with a mean of 1% and random sign) at multiple different heritability values; these are described in Figures A3-4 and A3-5. We also simulated epistatic terms (Figure A3-3.3). In the revision, we have broadened the simulations to add more ‘highly polygenic’ architectures (1000 QTL). We find that the algorithm still performs well, though worse than when 150 QTL are simulated. The forward search behaves in a fairly intuitive way: a QTL is added when its contribution to the explained phenotypic variance overcomes the model bias and variance. QTL are missed only if their effect size is too low to contribute significantly to the phenotypic variance, or if they are in such strong linkage that their independent discovery barely increases the variance explained (all of which is ultimately controlled by the trait heritability). At much higher polygenicity, composite QTL can be detected as a single QTL when their summed effect contributes to the phenotypic variance, and are broken up if and only if the independent components also contribute significantly to the phenotypic variance. Of course, there are many ways to break up composite QTL, but the algorithm proceeds in a greedy fashion, focusing on unexplained variance. We have also explored cases with multiple QTL of the same effect, and with different mean effects or different numbers of epistatic terms, but we found these results largely redundant. To summarize these conclusions, we have added the following discussion at the end of the results section: “The behavior of this approach is simple and intuitive: the algorithm greedily adds QTL if their expected contribution to the total phenotypic variance exceeds the bias and increasing variance of the forward search procedure, which is greatly reduced at large sample size. Thus, it may fail to identify very small effect size variants and may fail to break up composite QTL in extremely strong linkage.”

    We have also added additional clarification in the Appendix: “These results allow us to gain some intuition for how our cross-validated forward search operates. […] However, while our panel of spores is very large, it remains underpowered in several cases: 1) when QTL have very low effect sizes, and therefore do not contribute significantly to the phenotypic variance, and 2) when composite QTL are in strong linkage and few spores have recombination between them, so that identifying each QTL individually contributes only marginally to the explained variance and the forward search may miss them.”
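    To make the intuition above concrete, the general logic of a cross-validated forward search can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the genotype matrix `G` (segregants × SNPs), phenotype vector `y`, and all parameter values are hypothetical placeholders.

    ```python
    import numpy as np

    def cv_r2(X, y, folds, n_folds):
        """Cross-validated R^2 of an ordinary-least-squares fit (with intercept)."""
        Xd = np.column_stack([np.ones(len(y)), X])
        sse = 0.0
        for k in range(n_folds):
            train, test = folds != k, folds == k
            beta, *_ = np.linalg.lstsq(Xd[train], y[train], rcond=None)
            sse += np.sum((y[test] - Xd[test] @ beta) ** 2)
        return 1.0 - sse / np.sum((y - y.mean()) ** 2)

    def forward_search(G, y, n_folds=5, max_qtl=50):
        """Greedily add the SNP that most improves cross-validated R^2;
        stop when no remaining SNP improves the held-out variance explained."""
        n, p = G.shape
        folds = np.arange(n) % n_folds
        selected, best_cv = [], -np.inf
        while len(selected) < max_qtl:
            best_snp, best_score = None, best_cv
            for j in range(p):
                if j in selected:
                    continue
                score = cv_r2(G[:, selected + [j]], y, folds, n_folds)
                if score > best_score:
                    best_snp, best_score = j, score
            if best_snp is None:
                break  # no candidate SNP increases held-out R^2
            selected.append(best_snp)
            best_cv = best_score
        return selected
    ```

    Because each candidate QTL must increase the held-out (rather than in-sample) variance explained, a SNP is only added when its contribution exceeds the noise floor set by the bias and variance of the search, matching the intuition described in the response.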

    In this simulation section, the authors show that the lasso model overestimates the number of causal variants by a factor of 2-10, and that the model underestimates the number of QTL except in the case of a very sparse genetic architecture of 15 QTL and heritability > 0.8. This indicates that the experimental study is underpowered if there are >50 causal variants, and that the detected QTL do not necessarily correspond to real underlying genetic effects, as revealed by the model similarity scores shown in A3-4. This limitation should be factored into the discussion of the ability of the study to break up "composite" QTL, and more generally, detect QTL of small effect.

    We agree with some aspects of this comment, but the details are a bit subtle. First, we note that the definition of underpowered depends on the specifics of the QTL assumed in the simulation. In addition, many of the simulations were performed at 10,000 segregants, not at 100,000, with no effort to enforce a minimum effect size or a minimum distance between QTL. For example, if 100 QTL are all evenly spaced (in recombination space) and all have the same effect, such that they all contribute equally to the phenotypic variance, then the algorithm is in principle maximally powered to detect them. This is why our algorithm is capable of finding >100 QTL per environment. On the other hand, just 2 QTL in complete linkage cannot be distinguished, and no panel size will be able to detect them separately.

    However, we do agree with the general need to discuss the limitations in more detail and have clarified these concerns in the ‘Polygenicity’ results section. We have also reiterated the limitations of the LASSO approach within the simulation section. The motivation for an L0 regularization in this data was first discussed in section A3-1.3: “Unfortunately, a harsh condition for model consistency is the lack of strong collinearity between true and spurious predictors (Zhao & Yu, 2006). This is always violated in QTL mapping studies if recombination frequencies between nearby SNPs are low. In these cases, the LASSO will almost always choose multiple correlated predictors and distribute the true QTL effect amongst them.”
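    The consistency condition cited here (from Zhao & Yu, 2006) can be checked numerically. The sketch below evaluates the irrepresentable-condition quantity for a hypothetical correlation structure in which one non-causal SNP is in moderate linkage with two causal QTL; once any entry reaches 1, the LASSO cannot reliably exclude the spurious predictor, no matter the sample size.

    ```python
    import numpy as np

    def irrepresentable_value(C, causal, signs):
        """Irrepresentable-condition quantity from Zhao & Yu (2006): the
        LASSO is sign-consistent only if every entry of
        |C21 @ inv(C11) @ sign(beta_causal)| is strictly below 1, where C
        is the predictor correlation matrix, C11 its causal block, and
        C21 the spurious-by-causal block."""
        causal = np.asarray(causal)
        spurious = np.setdiff1d(np.arange(C.shape[0]), causal)
        C11 = C[np.ix_(causal, causal)]
        C21 = C[np.ix_(spurious, causal)]
        return np.abs(C21 @ np.linalg.solve(C11, np.asarray(signs, dtype=float)))

    # Hypothetical example: SNP 2 is non-causal but correlated 0.6 with
    # each of the two causal SNPs 0 and 1 (same-sign effects).
    C = np.array([[1.0, 0.0, 0.6],
                  [0.0, 1.0, 0.6],
                  [0.6, 0.6, 1.0]])
    value = irrepresentable_value(C, causal=[0, 1], signs=[1, 1])  # -> [1.2]
    ```

    Here the condition value is 1.2 > 1, so the spurious SNP will tend to enter the LASSO model and absorb part of the true QTL effects, consistent with the collinearity problem described in the quoted passage.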

    In section A3-2.3, the authors develop a model similarity score presented in A3-4 for the simulations. The measure is similar to R^2 in that it ranges from 0 to 1, but beyond that it is not clear what constitutes a "good" score. The authors should provide some guidance on interpreting this novel metric. It might also be helpful to see the causal and lead QTL SNPs compared directly on chromosome plots.

    We agree that this was unclear, and have added additional discussion in the main text describing how to interpret the model similarity score. Essentially, the score is a Pearson correlation coefficient on the model coefficients (as defined in section A3-2.3, after equation A3-28). However, given a single QTL that spans two SNPs in close linkage, a pure Pearson correlation coefficient would have high variance: subtle noise in the data could lead to one SNP rather than the other being called the lead SNP, so two models that call the same QTL might have either 100% or 0% correlation. Instead, our model similarity score ‘aligns’ these predicted QTL before obtaining the correlation coefficient. The degree to which QTL can be aligned is governed by penalties based on the collinearity (or linkage) between the SNPs, and the maximum possible score is obtained by dynamic programming. As with sequence alignments between two completely unrelated sequences, a score of exactly 0 is unlikely for sufficiently large models, since at least a few QTL can usually be paired (erroneously). We have also added a mention in the main text referring to Figures A3-3, A3-7, A3-8, and A3-9, which show the causal and lead QTL SNPs directly on chromosome plots.
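    The alignment idea can be illustrated with a toy example (a greedy one-dimensional stand-in, not the dynamic-programming score with linkage penalties defined in section A3-2.3): each predicted coefficient may shift to a nearby unmatched true-QTL position before the Pearson correlation is computed, so that calling a neighboring SNP in tight linkage is not scored as a complete miss.

    ```python
    import numpy as np

    def aligned_similarity(beta_true, beta_pred, window=2):
        """Toy align-then-correlate score. Each nonzero predicted
        coefficient is moved to the nearest unmatched true-QTL position
        within `window` SNPs (standing in for a linkage penalty), then
        the Pearson correlation of the aligned vectors is returned."""
        aligned = np.zeros_like(beta_pred)
        unmatched = set(np.flatnonzero(beta_true))
        for j in np.flatnonzero(beta_pred):
            cands = sorted((p for p in unmatched if abs(p - j) <= window),
                           key=lambda p: abs(p - j))
            if cands:
                aligned[cands[0]] += beta_pred[j]
                unmatched.discard(cands[0])
            else:
                aligned[j] += beta_pred[j]
        return np.corrcoef(beta_true, aligned)[0, 1]
    ```

    For a true QTL at SNP 5 that the model calls at the linked SNP 6, the raw Pearson correlation of the coefficient vectors is near zero, while the aligned score is 1; this is why an alignment step is needed before correlating.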

    The authors performed validation experiments for 6 individual SNPs and 9 pairs of RM SNPs engineered onto the BY background. It was promising that the experiments showed a positive correlation between the predicted and measured fitness effects; however, the authors did not perform power calculations, which makes it hard to evaluate the success of each individual experiment. The main text also does not make clear why these SNPs were chosen over others: was this done according to their effect sizes, or was other prior information incorporated in the choice to validate these particular variants? The authors chose to focus mostly on epistatic interactions in the validation experiments, but given their limited power to detect such interactions, it would probably be more informative to perform validation for a larger number of individual SNPs in order to test the ability of the study to detect causal variants across a range of effect sizes. The authors should perform some power calculations for their validation experiments, and describe in detail the process they employed to select these particular SNPs for validation.

    We agree with the thrust of the comment, but some of the suggestions are impossible to implement because of practical constraints on the experimental methods (and to a lesser extent on the model inference). First, we chose the SNPs to reconstruct based on three main factors: (a) to ensure that we are validating the right locus, the model must have a confident prediction that that specific SNP is causal, (b) the predicted effect must be large enough in at least one environment that we would expect to reliably measure it given the detection limits of our experimental fitness measurements, and (c) the SNP must be in a location that is amenable to CRISPR-Cas9 or Delitto Perfetto reconstruction. In practice, this means that it is impossible to validate SNPs across a wide range of effect sizes, as smaller-effect SNPs have wider confidence intervals around the lead SNP (violating condition a) and have effects that are harder to measure experimentally (violating condition b). In addition, because the cloning constraints mentioned in (c) require experimental testing for each SNP we analyze, it is much easier to construct combinations of a smaller set of SNPs than a larger set of individual SNPs. Together, these considerations motivated our choice of specific SNPs and of the overall structure of the validation experiments (6 individual and 9 pairs, rather than a broader set of individual SNPs).

    In the revised manuscript, we have added a more detailed discussion of these motivations for selecting particular SNPs for validation, and mention the inherent limitations imposed by the practical constraints involved. We have also added a description of the power and resolution of the experimental fitness measurements of the reconstructed genotypes (we can detect fitness differences of approximately 0.5% in most conditions). We are unsure if there are any other types of power calculations the reviewer is referring to, but we are only attempting to note an overall positive correlation between predicted and measured effects, not making any claims about the success of any individual validation (these can fail for a variety of reasons, including experimental artifacts with reconstructions, model errors in identifying the correct causal SNP, unresolved higher-order epistasis, and noise in our fitness measurements, among others).

    In section A3-1.4, the authors describe their fine-mapping methodology, but as presented it is difficult to understand. Was the fine-mapping performed using a model that includes all the other QTL effects, or was the range of the credible set only constrained to fall between the lead SNPs of the nearest QTL or the ends of the chromosome, whichever is closest to the QTL under investigation? The methodology presented on its face looks similar to the approximate Bayes credible interval described in Manichaikul et al. (PMID: 16783000). The authors should cite the relevant literature, and expand this section so that it is easier to understand exactly what was done.

    We have attempted to clarify section A3-1.4. As the reviewer correctly points out, the fine-mapping for a QTL is performed by scanning the interval between the neighboring detected QTL (on either side), using a model that includes all other QTL. For example, if a detected QTL is a SNP found in a closed interval of 12 SNPs bounded by its two neighboring QTL, 10 independent likelihoods are obtained (re-optimizing all effect sizes for each), and a posterior probability is obtained for each of the ten possible positions. We have cited the recommended paper, as our approach is indeed based on an approximate Bayes credible interval similar to the one described in that study (using all SNPs instead of markers). We have added the following sentence to section A3-1.4 at the end of the second paragraph (similar to the analogous paragraph in Manichaikul et al.): “[…] as above by obtaining the maximum likelihood of the data given that a single QTL is found at each possible SNP position between its neighboring QTL and given all other detected QTL (thus obtaining a likelihood profile over the considered positions of the QTL). We then used a uniform prior on the location of the QTL to derive a posterior distribution, from which one can derive an interval whose cumulative posterior probability exceeds 0.95.” Some typos referring to a ‘confidence’ interval were also corrected to ‘credible’ interval.
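    Constructing the credible set from the likelihood profile is then straightforward; a minimal sketch under the stated uniform prior (illustrative, not the authors' code):

    ```python
    import numpy as np

    def credible_set(loglik, level=0.95):
        """Approximate Bayes credible set for a QTL's location.
        `loglik` holds the log-likelihood of the data with the QTL placed
        at each candidate SNP (all other QTL effects re-optimized each
        time); a uniform prior over positions makes the posterior
        proportional to exp(loglik)."""
        ll = np.asarray(loglik, dtype=float)
        post = np.exp(ll - ll.max())      # subtract max for numerical stability
        post /= post.sum()
        order = np.argsort(post)[::-1]    # most probable positions first
        cum = np.cumsum(post[order])
        k = int(np.searchsorted(cum, level)) + 1
        return np.sort(order[:k]), post
    ```

    Candidate positions are added in decreasing order of posterior probability until the set covers at least 95% of the posterior mass, mirroring the approximate Bayes credible interval of Manichaikul et al.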

    The text explicitly describes an issue with the HMM employed for genotyping: "we find that the genotyping is accurate, with detectable error only very near recombination breakpoints". The genotypes near recombination breakpoints are precisely what is used to localize and fine-map QTL, and it is therefore important to discuss in the text whether the authors think this source of error impacts their results.

    This is a good point; we have added a reference in the main text to the Appendix section (A1-1.4), which contains an extensive discussion and analysis of the effect of recombination-breakpoint uncertainties on fine-mapping.

    The use of a count-based HMM to infer genotypes has been previously described in the literature (PMID: 29487138), and this should be included in the references.

    We now also add this citation to our text on the count-based HMM.
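    For orientation, the core of a count-based genotyping HMM can be sketched as follows. This is a toy Viterbi decoder with binomial-style emissions (the binomial coefficient is constant across states and omitted) and hypothetical error and recombination rates, meant only to illustrate the class of method being cited, not the paper's implementation.

    ```python
    import numpy as np
    from math import log

    def viterbi_genotype(ref_counts, alt_counts, err=0.01, rec=0.001):
        """Decode the most likely parental genotype path from read counts.
        Hidden state at each SNP is the parental allele (0 or 1); a read
        supports the alternate allele with probability `err` in state 0
        and 1 - `err` in state 1; adjacent SNPs switch state (recombine)
        with probability `rec`."""
        n = len(ref_counts)
        p_alt = {0: err, 1: 1.0 - err}          # P(alt read | state)
        log_tr = {True: log(rec), False: log(1.0 - rec)}

        def loglik(i, s):
            p = p_alt[s]
            return alt_counts[i] * log(p) + ref_counts[i] * log(1.0 - p)

        V = np.full((n, 2), -np.inf)
        ptr = np.zeros((n, 2), dtype=int)
        for s in (0, 1):
            V[0, s] = log(0.5) + loglik(0, s)
        for i in range(1, n):
            for s in (0, 1):
                scores = [V[i - 1, t] + log_tr[t != s] for t in (0, 1)]
                ptr[i, s] = int(np.argmax(scores))
                V[i, s] = max(scores) + loglik(i, s)
        path = [int(np.argmax(V[-1]))]          # backtrace from the last SNP
        for i in range(n - 1, 0, -1):
            path.append(int(ptr[i, path[-1]]))
        return path[::-1]
    ```

    On sparse counts the decoder infers a single recombination breakpoint where the read support flips from one parental allele to the other; uncertainty concentrates at exactly such breakpoints, which is the error mode the reviewer raises above.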

  2. Evaluation Summary:

    Overall, this is an impressive and interesting piece of work that not only expands the identification of small-effect QTL, but also reveals epistatic interactions at an unprecedented scale. Their approach takes advantage of DNA barcodes to increase the scale of genetic mapping studies in yeast by an order of magnitude over previous studies, yielding a more complete and precise view of the QTL landscape and confirming widespread epistatic interactions between the different QTL.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)

  3. Reviewer #1 (Public Review):

    Nguyen Ba and coworkers report the development of a clever novel approach for QTL mapping in budding yeast, dubbed "BB-QTL". In brief, they use batches of barcoded yeasts to generate very large barcoded F1 libraries (100,000 cells), followed by a Bar-Seq approach to map the fitness of these individuals, and clever low-coverage whole-genome sequencing coupled with background knowledge of the parental sequences to map their respective genotypes. A custom analysis pipeline then allowed predicting QTLs as well as possible epistatic interactions for a set of 18 phenotypes.

    The novel technology expands the precision and power of more traditional approaches. The results mainly confirm previous findings: S. cerevisiae phenotypes are typically influenced by many different QTLs of varying nature, including coding and noncoding variation, with coding and rare variants often having a larger effect. Moreover, several QTLs located in a set of specific genes, like MKT1 and IRA2, were confirmed to influence multiple phenotypes (pleiotropy). Apart from confirming previous findings, the increased power of BB-QTL does offer the advantage of lower error rates and higher power to detect specific mutations as drivers of a QTL, including some with only small effect sizes. Together, this yields a more complete and precise view of the QTL landscape and, most importantly, confirms widespread epistatic interactions between the different QTLs. Moreover, now that the barcoded pools have been developed, it becomes relatively easy to test them in other conditions. On the other hand, the power to detect many novel (industrially relevant) QTLs is likely limited by the inclusion of only two parental strains, one being the lab strain BY4741.

  4. Reviewer #2 (Public Review):

    Nguyen Ba et al. investigated the genetic architecture of complex traits in yeast using a novel bulk QTL mapping approach. Their approach takes advantage of genetic tools to increase the scale of genetic mapping studies in yeast by an order of magnitude over previous studies. Briefly, their approach works by integrating unique sequenceable barcodes into the progeny of a yeast cross. These progeny were then whole-genome sequenced, and bulk liquid phenotyping was carried out using the barcodes as an amplicon-based readout of relative fitness. The authors used their approach to study the genetic architecture of several traits in ~100,000 progeny from the well-studied cross between the strains RM and BY, revealing in greater detail the polygenic, pleiotropic, and epistatic architecture of complex traits in yeast. The authors developed a new cross-validated stepwise forward search methodology to identify QTL and used simulations to show that if a trait is sufficiently polygenic, a study at the scale they perform is not sufficiently powered to accurately identify all the QTL. In the final section of the paper, the authors engineered 6 individual SNPs and 9 pairs of RM SNPs on the BY background, and measured their effects in 11 of the 18 conditions used for QTL discovery. These results highlighted the difficulty of precisely identifying the causal variants using this study design.

    The conclusions in this paper are well supported by the data and analyses presented, but some aspects of the statistical mapping procedure and validation experiments deserve further attention.

    In their supplementary section A3-1.5 the authors perform QTL simulations to assess the performance of their analysis methods. Of particular interest is the performance of their cross-validated stepwise forward search methodology, which was used to identify all the QTL. However, a major limitation of their simulations was their choice of genetic architectures. In their simulations, all variants have a mean effect of 1% and a random sign. They also simulated 15, 50, or 150 QTL, which spans a range of sparse architectures, but not highly polygenic ones. It was unclear how the results would change as a function of different trait heritability. The simulations should explore a wider range of genetic architectures, with effect sizes sampled from normal or exponential distributions, as is more commonly done in the field.

    In this simulation section, the authors show that the lasso model overestimates the number of causal variants by a factor of 2-10, and that the model underestimates the number of QTL except in the case of a very sparse genetic architecture of 15 QTL and heritability > 0.8. This indicates that the experimental study is underpowered if there are >50 causal variants, and that the detected QTL do not necessarily correspond to real underlying genetic effects, as revealed by the model similarity scores shown in A3-4. This limitation should be factored into the discussion of the ability of the study to break up "composite" QTL, and more generally, detect QTL of small effect.

    In section A3-2.3, the authors develop a model similarity score presented in A3-4 for the simulations. The measure is similar to R^2 in that it ranges from 0 to 1, but beyond that it is not clear what constitutes a "good" score. The authors should provide some guidance on interpreting this novel metric. It might also be helpful to see the causal and lead QTL SNPs compared directly on chromosome plots.

    The authors performed validation experiments for 6 individual SNPs and 9 pairs of RM SNPs engineered onto the BY background. It was promising that the experiments showed a positive correlation between the predicted and measured fitness effects; however, the authors did not perform power calculations, which makes it hard to evaluate the success of each individual experiment. The main text also does not make clear why these SNPs were chosen over others: was this done according to their effect sizes, or was other prior information incorporated in the choice to validate these particular variants? The authors chose to focus mostly on epistatic interactions in the validation experiments, but given their limited power to detect such interactions, it would probably be more informative to perform validation for a larger number of individual SNPs in order to test the ability of the study to detect causal variants across a range of effect sizes. The authors should perform some power calculations for their validation experiments and describe in detail the process they employed to select these particular SNPs for validation.

    In section A3-1.4, the authors describe their fine-mapping methodology, but as presented it is difficult to understand. Was the fine-mapping performed using a model that includes all the other QTL effects, or was the range of the credible set only constrained to fall between the lead SNPs of the nearest QTL or the ends of the chromosome, whichever is closest to the QTL under investigation? The methodology presented on its face looks similar to the approximate Bayes credible interval described in Manichaikul et al. (PMID: 16783000). The authors should cite the relevant literature, and expand this section so that it is easier to understand exactly what was done.

    The text explicitly describes an issue with the HMM employed for genotyping: "we find that the genotyping is accurate, with detectable error only very near recombination breakpoints". The genotypes near recombination breakpoints are precisely what is used to localize and fine-map QTL, and it is therefore important to discuss in the text whether the authors think this source of error impacts their results.

    The use of a count-based HMM to infer genotypes has been previously described in the literature (PMID: 29487138), and this should be included in the references.
