Minus the Error: Testing for Positive Selection in the Presence of Residual Alignment Errors

Curation statements for this article:
  • Curated by eLife

    eLife Assessment

    Alignment and sequencing errors are a major concern in molecular evolution, and this valuable study represents a welcome improvement for genome-wide scans of positive selection. This new method seems to perform well and is generally convincing, although the evidence could be made more direct and more complete through additional simulations to determine the extent to which alignment errors are being properly captured.

Abstract

Positive selection is an evolutionary process that increases the frequency of advantageous mutations because they confer a fitness benefit. Inferring the past action of positive selection on protein-coding sequences is fundamental for deciphering phenotypic diversity and the emergence of novel traits. With the advent of genome-wide comparative genomic datasets, researchers can analyze selection not only at the level of individual genes but also globally, delivering systems-level insights into evolutionary dynamics. However, genome-scale datasets are generated with automated pipelines and imperfect curation that does not eliminate all sequencing, annotation, and alignment errors. Positive selection inference methods are highly sensitive to such errors. We present BUSTED-E: a method designed to detect positive selection for amino acid diversification while concurrently identifying some alignment errors. This method builds on the flexible branch-site random effects model (BUSTED) for fitting distributions of dN/dS, with a critical modification: it incorporates an “error-sink” component to represent an abiological evolutionary regime. Using several genome-scale biological datasets that were extensively filtered using state-of-the-art automated alignment tools, we show that BUSTED-E identifies pervasive residual alignment errors, produces more realistic estimates of positive selection, reduces bias, and improves biological interpretation. The BUSTED-E model promises to be a more stringent filter to identify positive selection in genome-wide contexts, thus enabling further characterization and validation of the most biologically relevant cases.
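
In rough notation, the error-sink modification can be sketched as follows (a minimal sketch based on the abstract and the reviewer summaries below, not the article's exact formulation; the number of standard classes K, the weight p_e, and the 1% cap are taken from the reviews):

```latex
% Sketch: BUSTED-E style mixture over omega at a branch-site combination.
% K standard BUSTED classes plus one "error-sink" class; the <= 1% cap on the
% error weight and the "very large" omega_e follow the reviewer summaries.
\[
\omega \sim
\begin{cases}
  \omega_k, & \text{with probability } p_k, \quad k = 1, \dots, K
             \quad (\textstyle\sum_{k} p_k = 1 - p_e),\\[4pt]
  \omega_e \gg 1, & \text{with probability } p_e, \quad p_e \le 0.01
             \quad \text{(error-sink class)}.
\end{cases}
\]
```

As in BUSTED, positive selection would then be assessed with a likelihood-ratio test asking whether the largest non-error class has ω > 1 with nonzero weight.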

Article activity feed

  1. eLife Assessment

    Alignment and sequencing errors are a major concern in molecular evolution, and this valuable study represents a welcome improvement for genome-wide scans of positive selection. This new method seems to perform well and is generally convincing, although the evidence could be made more direct and more complete through additional simulations to determine the extent to which alignment errors are being properly captured.

  2. Reviewer #1 (Public review):

    Summary:

    Selberg et al. present a small but apparently very relevant modification to the existing BUSTED model. The new model allows for a fraction of codons to be assigned to an error class characterized by a very high dN/dS value. This "omega_e" category is constrained to represent no more than 1% of the alignment. The analyses convincingly show that the method performs well and represents a real improvement for genome-wide scans of positive selection. Alignment and sequencing errors are a major concern in molecular evolution. This new method, which shows strong performance, is therefore a very welcome contribution.

    Strengths:

    By thoroughly reanalyzing four datasets, the manuscript convincingly demonstrates that omega_e effectively identifies genuine alignment errors. Next, the authors evaluate the reduction in power to detect true selection through simulations. This new model is simple, efficient, and computationally fast. It is already implemented and available in HYPHY software.

    As a side note, I found it particularly interesting how the authors tested the statistical support for the new method compared to the simpler version without the error class. In many cases, the simpler model could not be statistically rejected in favor of the more complex model, despite producing biologically incorrect results in terms of parameter inference. This highlights a broader issue in molecular evolution and phylogenomics, where model selection often relies too heavily on statistical tests, potentially at the expense of biological realism. The analyses also reveal a trade-off between statistical power and the false positive rate. As with other methods, BUSTED-E cannot distinguish between alignment/sequencing errors and episodes of very strong positive selection. The authors are transparent about this limitation in the discussion.

    Weaknesses:

    Regarding the structure of the manuscript, the text could be clearer and more precise. Clear, practical recommendations for users could also be provided in the Results section. Additionally, the simulation analyses could be further developed to include scenarios with both alignment errors and positive selection, in order to better assess the method's performance. Finally, the model is evaluated only in the context of site models, whereas the widely used branch-site model is mentioned as possible but not assessed.

  3. Reviewer #2 (Public review):

    Summary:

    In this paper, Selberg et al. present an extension of their widely used BUSTED family of codon models for the detection of episodic ("site-branch") positive selection from coding gene sequences. The extension adds an "error component" to ω (dN/dS) to capture misaligned codons. This ω component is set to an arbitrarily high value to distinguish it from positive selection, which is characterised by ω > 1 but assumed not to be so high.

    The new method is tested on several datasets of comparative genomes, characterised by their size and the fact that the authors scanned for positive selection and/or provided filtering of alignment quality. It is also tested on simple simulations.

    Overall, the new method appears to capture relatively little of the ω variability in the alignments, although its contribution is often statistically significant. Given the complexity of codon evolution, almost any added parameter will be more or less significant; the question is whether it captures the intended signal, preferably in an unbiased manner.

    Strengths:

    This is an important issue, and I am enthusiastic to see it explicitly modeled within the codon modeling framework, rather than externalised to ad hoc filtering methods. The promise of quantifying the divergence signal from alignment error vs selection is exciting.

    The BUSTED family of models is widely used and very powerful for capturing many aspects of codon evolution, and it is thus an excellent choice for this extension.

    Weaknesses:

    (1) The definition of alignment error by a very large ω is not justified anywhere in the paper. There are known cases of bona fide positive selection with many non-synonymous and zero synonymous substitutions over branches. How would they be classified here? E.g., lysozyme evolution, bacterial experimental evolution.

    Using the power of the model family that the authors develop, I would suggest characterising a more specific error model. E.g., radical amino-acid "changes" clustered close together in the sequence, proximity to gaps in the alignment, correlation of apparent ω with genome quality.

    Also concerning this high ω, how sensitive is its detection to computational convergence issues?

    (2) The authors should clarify the relation between the "primary filter for gross or large-scale errors" and the "secondary filter" (this method). Which sources of error are expected to be captured by the two scales of filters? What is their respective contribution to false positives of positive selection?

    Sources of error in the alignment of coding genes include:

    a) Errors in gene models, which may differ between species but also propagate among close species (i.e., when one species is used as a reference to annotate others).

    b) Inconsistent choice of alternative transcripts/isoforms.

    Both of these lead to asking an alignment algorithm to align non-homologous sequences, which violates the assumptions of the algorithms, yet both are common issues in phylogenomics.

    c) Sequencing errors, but I doubt they affect results much here.

    d) Low complexity regions of proteins.

    e) Approximations by alignment heuristics, which are sometimes non-deterministic or dependent on input order.

    f) Failure to capture aspects of protein or gene evolution in the optimality criteria used.

    For example, Figure 1 seems to correspond to a wrong or inconsistent definition of the final exon of the gene in one species, which I would expect to be classified as "gross or large-scale error".

    (3) The benchmarking of the method could be improved both for real and simulated data.

    For real data, the authors only analysed sequences from land vertebrates with relatively low Ne and thus relatively low true positive selection. I suggest comparing results with e.g. Drosophila genomes, where it has been reported that 50% of all substitutions are fixed by positive selection, or with viral evolution.

    For simulations, the authors should present simulations with or without alignment errors (e.g., introduce non-homologous sequences, or just disturb the alignments) and with or without positive selection, to measure how much the new method correctly captures alignment errors and incorrect positive selection.

    I also recommend simulating under more complex models, such as multinucleotide mutations or strong GC bias, and investigating whether these other features are captured by the alignment error component.

    Finally, I suggest taking true alignments and perturbing them (e.g., add non-homologous segments or random gaps which shift the alignment locally), to verify how the method catches this. It would be interesting to apply such perturbations to genes which have been reported as strong examples of positive selection, as well as to genes with no such evidence.

    (4) It would be interesting to compare to results from the widely used filtering tool GUIDANCE, as well as to the Selectome database pipeline (https://doi.org/10.1093/nar/gkt1065). Moreover, the inconsistency between BUSTED-E and both HMMCleaner and BMGE is worrying and should be better explained.

    (5) For a new method such as this, I would like to see p-value distributions and q-q plots, to verify how unbiased the method is and how well the chi-squared null distribution captures the test statistic.

    (6) I disagree with the motivation expressed at the beginning of the Discussion: "The imprimatur of "positive selection" has lost its luster. Researchers must further refine prolific candidate lists of selected genes to confirm that the findings are robust and meaningful." Our goal should not be to find a few impressive results, but to measure accurately natural selection, whether it is frequent or rare.

  4. Author response:

    eLife Assessment

    Alignment and sequencing errors are a major concern in molecular evolution, and this valuable study represents a welcome improvement for genome-wide scans of positive selection. This new method seems to perform well and is generally convincing, although the evidence could be made more direct and more complete through additional simulations to determine the extent to which alignment errors are being properly captured.

    We thank the editors for their positive assessment and for highlighting the core strength and a key area for improvement. The main request (also echoed by both reviewers) is for us to conduct additional simulation studies where true alignment errors are known and assess the performance of BUSTED-E. We plan to conduct several simulations (on the order of 100,000 individual alignments in total) in response to that request, with the caveat that we are not aware of any tools that simulate realistic alignment errors, so these simulations are likely only a pale reflection of biological reality.

    (1) Ad hoc small local edits of alignments, similar to what was implemented in the HMMCleaner paper. These local edits would include operations such as replacement of codons or small stretches of sequence with random data, local transposition, and inversion (a minimal illustration of such edits is given after this list).

    (a) Using parametrically simulated alignments (under BUSTED models).

    (b) Using empirical alignments.

    (2) Simulations under model misspecification, specifically to address the point of reviewer 2. For example, we would simulate under models that allow for multi-nucleotide substitutions, and then apply error filtering under models which do not.
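
    As a rough illustration of the kind of local edits described in point (1), a perturbation routine could look like the sketch below (the function name, edit rates, and block sizes are placeholders, not our actual simulation pipeline):

    ```python
    import random

    # All sense codons (stop codons excluded) for the standard genetic code
    CODONS = [a + b + c
              for a in "ACGT" for b in "ACGT" for c in "ACGT"
              if a + b + c not in ("TAA", "TAG", "TGA")]

    def perturb_sequence(seq, edit_prob=0.01, max_block=5):
        """Introduce small, local, alignment-error-like edits into a codon sequence.

        With probability `edit_prob`, a codon position starts an edit: a short
        stretch of codons is either replaced with random sense codons or reversed
        in place (a crude stand-in for local transposition/inversion).
        """
        codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
        i = 0
        while i < len(codons):
            if random.random() < edit_prob:
                end = min(i + random.randint(1, max_block), len(codons))
                if random.random() < 0.5:
                    codons[i:end] = [random.choice(CODONS) for _ in range(end - i)]
                else:
                    codons[i:end] = codons[i:end][::-1]
                i = end
            else:
                i += 1
        return "".join(codons)

    # Example: perturb one sequence of a (hypothetical) codon alignment
    print(perturb_sequence("ATGGCTGCTAAAGGTTTGATC" * 3, edit_prob=0.05))
    ```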

    We will also run several new large-scale screens of existing alignments, to directly and indirectly address the reviewers' comments. These will include:

    (a) A Drosophila dataset (from https://academic.oup.com/mbe/article/42/4/msaf068/8092905)

    (b) Current Selectome data (https://selectome.org/), both filtered and unfiltered. Here the filtering procedure refers to what Selectome does to obtain what its authors think are high quality alignments.

    (c) Current OrthoMam data (https://orthomam.mbb.cnrs.fr/), both filtered and unfiltered. Here the filtering procedure refers to what OrthoMam does to obtain what its authors think are high quality alignments.

    Reviewer #1:

    We are grateful to Reviewer #1 for their positive and encouraging review. We are pleased they found our analyses convincing and recognized BUSTED-E as a "simple, efficient, and computationally fast" improvement for evolutionary scans.

    Strengths:

    As a side note, I found it particularly interesting how the authors tested the statistical support for the new method compared to the simpler version without the error class. In many cases, the simpler model could not be statistically rejected in favor of the more complex model, despite producing biologically incorrect results in terms of parameter inference. This highlights a broader issue in molecular evolution and phylogenomics, where model selection often relies too heavily on statistical tests, potentially at the expense of biological realism.

    We agree that this observation touches upon a critical issue in phylogenomics. A statistically "good" fit does not always equate to a biologically accurate model. We believe our work serves as a useful case study in this regard. We will add discussion of the importance of considering biological realism alongside statistical adequacy in model selection.

    Weaknesses:

    Regarding the structure of the manuscript, the text could be clearer and more precise.

    We appreciate this feedback. We will perform a thorough revision of the entire manuscript to improve its clarity, flow, and precision. We will focus on streamlining the language and ensuring that our methodological descriptions and results are as unambiguous as possible.

    Clear, practical recommendations for users could also be provided in the Results section.

    To make our method more accessible and its application more straightforward, we will add a new section that provides clear, practical recommendations for users. This section will include guidance on when to apply BUSTED-E, how to interpret its output, and best practices for distinguishing potential errors from strong selection.

    Additionally, the simulation analyses could be further developed to include scenarios with both alignment errors and positive selection, in order to better assess the method's performance.

    Additional simulations will be conducted (see above).

    Finally, the model is evaluated only in the context of site models, whereas the widely used branch-site model is mentioned as possible but not assessed.

    BUSTED class models support branch-site variation in dN/dS, so technically all of our analyses are already branch-site. However, we interpret the reviewer’s comment as describing use cases when a method is used to test for selection on a subset of tree branches (as opposed to the entire tree). BUSTED-E already supports this ability, and we will add a section in the manuscript describing how this type of testing can be done, including examples. However, we do not plan to conduct additional extensive data analyses or simulations, as this would probably bloat the manuscript too much.
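
    For instance, HyPhy-style selection analyses typically designate the branches to test with curly-brace labels in the Newick tree (e.g., a {Foreground} tag after a tip or node name). A minimal, purely illustrative sketch of adding such labels to selected tips (the helper function and label name are placeholders, not part of BUSTED-E itself):

    ```python
    import re

    def label_tips(newick, tips, label="Foreground"):
        """Append a {label} annotation to the given tip names in a Newick string.

        This follows the curly-brace branch-labeling convention used to define
        the set of branches to test; labeling internal branches would require a
        proper tree parser and is omitted here.
        """
        for tip in tips:
            # match the tip name only when followed by ':', ',' or ')'
            newick = re.sub(r"\b%s\b(?=[:,\)])" % re.escape(tip),
                            tip + "{%s}" % label, newick)
        return newick

    tree = "((human:0.01,chimp:0.012):0.005,(mouse:0.08,rat:0.07):0.01);"
    print(label_tips(tree, ["human", "chimp"]))
    # ((human{Foreground}:0.01,chimp{Foreground}:0.012):0.005,(mouse:0.08,rat:0.07):0.01);
    ```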

    Reviewer #2:

    We thank Reviewer #2 for their detailed and thought-provoking comments, and for their enthusiasm for modeling alignment issues directly within the codon modeling framework. The criticisms raised are challenging and we will work on improving the justification, testing, and contextualization of our method.

    Weaknesses:

    The definition of alignment error by a very large ω is not justified anywhere in the paper... I would suggest characterising a more specific error model. E.g., radical amino-acid "changes" clustered close together in the sequence, proximity to gaps in the alignment, correlation of apparent ω with genome quality... Also concerning this high ω, how sensitive is its detection to computational convergence issues?

    This is a fundamental point that we are grateful to have the opportunity to clarify. Our intention with the high ω category is not to provide a mechanistic or biological definition of an alignment error. Rather, its purpose is to serve as a statistical "sink" for codons exhibiting patterns of divergence so extreme that they are unlikely to have resulted from a typical selective process. It is phenomenological and ad hoc. The reviewer makes sensible suggestions for other ad hoc/empirical approaches to alignment quality filtering, but most of those have already been implemented in existing (excellent) alignment-filtering tools. BUSTED-E is never meant to replace them, but rather to catch what is left over. Importantly, error detection is not even the primary goal of BUSTED-E; errors are treated as a statistical nuisance. With all due respect, all of the reviewer's suggestions are similarly ad hoc: there is no rigorous quantitative justification for any of them, but they are all sensible and plausible, and they usually work in practice.

    Computational convergence issues can never be fully dismissed, but we do not consider this to be a major issue. Our approach already pays careful attention to proper initialization, performs convergence checks, and considers multiple initial starting points. We also don't need to estimate the large ω with any degree of precision; it just needs to be "large".

    The authors should clarify the relation between the "primary filter for gross or large-scale errors" and the "secondary filter" (this method). Which sources of error are expected to be captured by the two scales of filters?

    We will add discussion and examples to explicitly define the distinct and complementary roles of these filtering stages.

    The benchmarking of the method could be improved both for real and simulated data... I suggest comparing results with e.g. Drosophila genomes... For simulations, the authors should present simulations with or without alignment errors... and with or without positive selection... I also recommend simulating under more complex models, such as multinucleotide mutations or strong GC bias...

    We will add more simulations as suggested (see above). We will also analyze Drosophila gene alignments from previously published papers.

    It would be interesting to compare to results from the widely used filtering tool GUIDANCE, as well as to the Selectome database pipeline... Moreover, the inconsistency between BUSTED-E and HMMCleaner, and BMGE is worrying and should be better explained.

    Some of the alignments we have analyzed had already been filtered by GUIDANCE. We’ll also run the Selectome data through BUSTED-E: both filtered and unfiltered. We consider it beyond the scope of this manuscript to conduct detailed filtering pipeline instrumentation and side-by-side comparison.

    For a new method such as this, I would like to see p-value distributions and q-q plots, to verify how unbiased the method is, and how well the chi-2 distribution captures the statistical value.

    We will report these values for new null simulations.
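
    For example, such a q-q plot can be produced from a list of p-values obtained on null simulations along the following lines (a minimal sketch; the input file name and plotting details are placeholders):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # One LRT p-value per line from null (no selection, no error) simulations;
    # "null_pvalues.txt" is a placeholder file name
    pvals = np.sort(np.clip(np.loadtxt("null_pvalues.txt"), 1e-16, 1.0))
    n = len(pvals)
    expected = (np.arange(1, n + 1) - 0.5) / n  # uniform(0,1) quantiles

    plt.figure(figsize=(4, 4))
    plt.plot(-np.log10(expected), -np.log10(pvals), ".", label="observed")
    lim = -np.log10(expected[0])
    plt.plot([0, lim], [0, lim], "--", label="y = x (uniform null)")
    plt.xlabel("expected -log10(p)")
    plt.ylabel("observed -log10(p)")
    plt.legend()
    plt.tight_layout()
    plt.savefig("qq_plot_null.png", dpi=150)
    ```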

    I disagree with the motivation expressed at the beginning of the Discussion... Our goal should not be to find a few impressive results, but to measure accurately natural selection, whether it is frequent or rare.

    That’s a philosophical point; at some level, given enough time, every single gene likely experiences some positive selection at some point in the evolutionary past. The practically important question is how to improve the sensitivity of the methods while controlling for ubiquitous noise. We do agree with the sentiment that the ultimate goal is to “measure accurately natural selection, whether it is frequent or rare”. However, we also must be pragmatic about what is possible with dN/dS methods on available genomic data.