Finding the LMA needle in the wheat proteome haystack


Abstract

Late maturity alpha-amylase (LMA) is a wheat genetic defect causing the synthesis of high isoelectric point (pI) alpha-amylase in the aleurone as a result of a temperature shock during mid-grain development or prolonged cold throughout grain development, leading to unacceptably low falling numbers (FN) at harvest or during storage. High pI alpha-amylase is normally not synthesized until after maturity in seeds, when they may sprout in response to rain or germinate following sowing for the next season's crop. Whilst the physiology is well understood, the biochemical mechanisms involved in the grain LMA response remain unclear. We have employed high-throughput proteomics to analyse thousands of wheat flours displaying a range of LMA values. We have applied an array of statistical analyses to select LMA-responsive biomarkers and we have mined them using a suite of tools applicable to wheat proteins. To our knowledge, this is not only the first proteomics study tackling the wheat LMA issue, but also the largest plant-based proteomics study published to date. Logistics, technicalities, requirements, and bottlenecks of such an ambitious large-scale high-throughput proteomics experiment, along with the challenges associated with big data analyses, are discussed. We observed that stored LMA-affected grains activated their primary metabolism, including glycolysis, gluconeogenesis, and the TCA cycle, along with DNA- and RNA-binding mechanisms, as well as protein translation. This logically transitioned to protein folding activities driven by chaperones and protein disulfide isomerase, as well as protein assembly via dimerisation and complexing. The secondary metabolism was also mobilised, with the up-regulation of phytohormones and chemical and defense responses. LMA further invoked cellular structures, among which ribosomes, microtubules, and chromatin. Finally, and unsurprisingly, LMA expression greatly impacted grain starch and other carbohydrates, with the up-regulation of alpha-gliadins and starch metabolism, whereas LMW glutenin, stachyose, sucrose, UDP-galactose, and UDP-glucose were down-regulated. This work demonstrates that proteomics deserves to be part of the wheat LMA molecular toolkit and should be adopted by LMA scientists and breeders in the future.

Article activity feed

  1. Competing Interest Statement: The authors have declared no competing interest.

    Reviewer 2. Luca Ermini

    This manuscript, which I had the pleasure of reading, is, simply put, a benchmark of five long-read de novo assembly tools. Using 13 real and 72 simulated datasets, the manuscript evaluated the performance of five widely used long-read de novo assemblers: Canu, Flye, Miniasm, Raven, and Redbean.

    Although I find the methodological approach of the manuscript to be solid and trustworthy, I do not think the research is particularly innovative. Long-read assemblers have already been benchmarked in the scientific literature, and similar findings have been made. The authors are aware of this limitation of the study and have added a novel feature, the impact of read length on assembly quality, which in my opinion still lacks sufficient innovation. However, the manuscript as a whole is valid and worthy of consideration. In light of this, I would like to share some suggestions in an effort to make the manuscript more distinctive and novel.

    Please see my comments below.

    1. Evaluation of the assemblies. The metrics used to evaluate an assembly are frequently a murky subject, as we are still lacking a standard language. The authors assessed the assemblies using three types of metrics: compass analysis, assembly statistics, and the BUSCO assessment, in addition to computational metrics like runtime and RAM usage. This is not incorrect, but I would suggest making a clear distinction between the metrics using (in addition to the computational metrics) three widely recognised criteria, or in short, the 3C criterion. The assembly metrics can be broken down into three dimensions: correctness (your compass analysis), contiguity (NG50), and completeness (the BUSCO assessment). The authors should reconsider the text using the 3C criterion; this will provide a clear, understandable, and structured way of categorising metrics. The paragraph beginning at line 197, for example, causes some confusion for the reader. The NG50 metric evaluates assembly contiguity, whereas the number of misassemblies (considered by the authors in terms of relocations, inversions, and translocations) evaluates assembly correctness. I must admit that misassemblies and contiguity can overlap, but I would still recommend keeping the NG50 (within contiguity) and misassembly (within correctness) metrics separate.
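    To make the contiguity dimension of the 3C criterion concrete, here is a minimal sketch of the NG50 computation and how it differs from N50, assuming contig lengths and an estimated genome size are already in hand (function and variable names are illustrative, not taken from the manuscript):

    ```python
    def ng50(contig_lengths, genome_size):
        """NG50: the length of the contig at which the cumulative sum of
        contig lengths, sorted in descending order, first reaches half the
        *genome* size. N50 is identical except that it uses half the
        *assembly* size, so it can flatter incomplete assemblies."""
        cumulative = 0
        for length in sorted(contig_lengths, reverse=True):
            cumulative += length
            if cumulative >= genome_size / 2:
                return length
        return 0  # the assembly covers less than half the genome

    # A 70 kb assembly of a 140 kb genome: N50 (threshold 35 kb) is 50000,
    # but NG50 (threshold 70 kb) is only 5000, exposing the missing half.
    print(ng50([50_000, 10_000, 5_000, 5_000], genome_size=140_000))  # -> 5000
    ```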

    2. Novelty of the comparison. The authors of the study had two main goals: to conduct a systematic comparison of five long-read assembly tools (Raven, Flye, Wtdbg2 or Redbean, Canu, and Miniasm) and to determine whether increased read length has a positive effect on overall assembly quality. The authors acknowledge the study's limitations and include an evaluation of the effect of read length on assembly quality as a novel feature of the manuscript (see line 70).

    The manuscript that described the Raven assembler (Vaser, R., Sikic, M. Time- and memory-efficient genome assembly with Raven. Nat Comput Sci 1, 332-336 (2021)) compared the same assembler tools (Raven, Flye, Wtdbg2 or Redbean, Canu, and Miniasm) evaluated in this manuscript plus two more (Ra and Shasta), used similar eukaryotes (A. thaliana, D. melanogaster, and human), and reached a similar conclusion on Flye in terms of contiguity (NG50) and completeness (genome fraction), but overall there is no best assembler in all of the evaluated categories. In this manuscript, the authors increased the number of eukaryotic genomes (including S. cerevisiae, C. elegans, T. rubripes, and P. falciparum) and reached similar conclusions: there is no assembler that performs the best in all the evaluation categories, but overall Flye is the best-performing assembler. This strengthens the manuscript, but the research is not entirely novel.

    Given that the field of third-generation technologies is rapidly progressing toward the generation of high-quality reads (PacBio HiFi technology and ONT Q20+ chemistry are achieving accuracies of 99% and higher), the manuscript should also include a HiFi assembler benchmark. This would add novelty to the manuscript and pique the scientific community's interest. The authors have already simulated HiFi reads from S. cerevisiae, P. falciparum, C. elegans, A. thaliana, D. melanogaster, and T. rubripes, in addition to reference reads (or real reads) from S. cerevisiae (SRR18210286), P. falciparum (SRR13050273), and A. thaliana (SRR14728885).

    Furthermore, I am not sure what the benefit is of evaluating Canu on HiFi data instead of HiCanu, which was designed to deal with HiFi data. The authors already included some HiFi-enabled assemblers like Flye and Wtdbg2, but hifiasm should also be considered. I would strongly advise benchmarking the HiFi assemblers to complete the study and add a level of novelty. I would like to emphasise that the manuscript is solid and that I appreciate it; however, I believe that some novelty should be added.

    3. C. elegans genomics. The now-discontinued RS II, which had a higher error rate and a shorter average read length than Sequel I or Sequel II, was used to generate the genomic data from C. elegans. I understand the authors' motivation for including it in the analysis, but the use of RS II may skew the comparisons, and I would suggest adding a few sentences to the discussion about it.

    4. CPU time (h) and memory usage. The authors claim the benchmark evaluation included runtime and RAM usage. However, I could not find information about the runtime and RAM usage. Please provide CPU time (h) and memory usage (GB).
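    If these numbers were not captured during the original runs, one low-effort way to collect them when re-running each assembler is a wrapper around POSIX resource accounting; a minimal sketch for Linux (the flye command line in the usage comment is only an illustration):

    ```python
    import resource
    import subprocess
    import time

    def run_and_profile(cmd):
        """Run a command and report wall-clock hours, CPU hours, and peak RAM.
        CPU times are summed over terminated child processes; ru_maxrss is
        the largest resident set of any child (kilobytes on Linux)."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        wall_h = (time.perf_counter() - start) / 3600
        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        cpu_h = (usage.ru_utime + usage.ru_stime) / 3600
        peak_ram_gb = usage.ru_maxrss / 1024 ** 2
        return wall_h, cpu_h, peak_ram_gb

    # Hypothetical example; any assembler command line would do:
    # print(run_and_profile(["flye", "--nano-raw", "reads.fastq", "--out-dir", "asm", "--threads", "16"]))
    ```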


    Minor comments:

    1. Lines 64-65: "Here, we provide a comprehensive comparison on de novo assembly tools on all TGS technologies and 7 different eukaryotic genomes, to complement the study of Wick and Holt". I would modify "on all TGS technologies" to "at the present the two main TGS technologies".

    2. Line 163, real reads. The term "real reads" may cause confusion for readers, leading them to believe that the authors produced the sequencing reads for the manuscript. I would use the term "ref-reads", indicating "reads from the reference genomes".

    3. Lines 218-219 Please provide full names (genus + species): S. cerevisiae, P. falciparum, A. thaliana, D. melanogaster, C. elegans, and T. rubripes

    4. Supplementary Table S4. Accession number SRR15720446 seems to belong to a sample sequenced with PACBIO_SMRT (Sequel II) rather than ONT.

    5. Figures 2 and 3. These figures give visual results of the performance of the five assemblers. I want to make a few points here: According to what I understand, the top-performing assembler is marked with a star and is plotted with a brighter colour than the others. However, this is not immediately apparent, and some readers might have trouble identifying the colour that has been highlighted. I would suggest either lessening the intensity of the other, lower-performing assemblers or giving the best assembler a graphically distinct outline. I also wonder if it would be useful to give the exact numbers as supplemental tables.

    Re-Review:

    Dear Cosma and colleagues, Thank you so much for addressing my comments in a satisfactory manner. The manuscript, in my opinion, has dramatically improved.

  2. This work has been published in GigaScience under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giad100), and the journal has published the reviews under the same license. These are as follows.

    **Reviewer 1. Brandon Pickett**

    Overall, this manuscript is well-written and understandable. There's a lot of good work here, and I think the authors were thoughtful about how to compare the resulting assemblies. Scripts and models used have been made available for free via GitHub and could be mirrored on or moved to GigaDB if required. I'll include several minor comments, including some line-item edits, but the bulk of my comments will focus on a few major items.

    Major Comments:

    My primary concern here is that the comparison is outdated and doesn't address some of the most helpful questions. CLR-only assemblies are no longer state-of-the-art. There are still applications and situations where ONT (simplex, older-pore)-only assemblies are reasonable, but most projects that are serious about generating excellent assemblies as references are unlikely to take that approach.

    Generating assemblies for non-reference situations, especially when the sequencing is done "in the field" (e.g., using a MinION with a laptop) or by a group without sufficient funding or other access to PromethIONs and Sequels/Revios, is an exception to this for ONT-only assemblies. Further, this work assumes a person wants to generate "squashed" assemblies instead of haplotype-resolved or pseudohaplotype assemblies. To be fair, sequencing technology in the TGS space has been advancing so rapidly that it is extremely difficult to keep up, and a sequencing run is often outdated by the time analyses are finished, not to mention by the time a manuscript is written, reviewed, and published.

    Accordingly, in raising my concerns, I am not objecting to the analysis being published or suggesting that the work performed was poor, but I do believe clarifications and discussion are necessary to contextualize the comparison and specify what is missing.

    1. This comparison seeks to address third-generation sequencing technologies: namely, PacBio vs. ONT. However, each company offers multiple kinds of long-read sequencing, and they are not all comparable in the same way. Just as long noisy reads (PacBio CLR & ONT simplex) are a whole new generation beyond "NGS" short reads like those from Illumina, long-accurate reads are arguably a new generation beyond noisy long reads. If this paper wants to include PacBio HiFi reads in the comparison, significant changes are necessary to make the comparison meaningful. I think it's reasonable to drop HiFi reads from this paper altogether and focus on noisy long reads, since the existing comparison isn't currently set up to tell us enough about HiFi reads and including them would be an ordeal. If including HiFi, consider the following:

    1.a. Use assemblers designed for long-accurate reads. HiCanu (i.e., Canu with the --pacbio-hifi option) is already used, as is a similar approach for Flye and wtdbg2. However, Raven is not meant for HiFi data and miniasm is not either (though it could be done with the correct minimap2 settings, hifiasm would be better). Assemblies of HiFi data with Raven and miniasm should be removed. Sidenote: Raven can be run with --weaken (or similar) for HiFi data, but it is only experimental and the parameter has since been removed. Including hifiasm would be necessary, and it should have been included since hifiasm was out when this analysis was done (a minimal invocation sketch follows this list). Similarly, including MBG (released before your analysis was done) would be appropriate. Since you'd be redoing the analyses, it would be appropriate to include other assemblers that have since been released: namely LJA. One could argue that Verkko should be included, but that opens another can of worms as a hybrid assembler (more on that later).

    1b. Use a read simulator that is built for HiFi reads. Badread is not built for HiFi data (though using custom parameters to make it work for HiFi reads wasn't a bad idea at the time), and new simulators (e.g., PBSIM3, DOI: 10.1093/nargab/lqac092) have since been released that consider the multi-pass process used to generate HiFi data.

    1c. ONT Duplex data is likely not available for the species you've chosen as it is a very new technology. However, you should at least discuss its existence as something for readers to "keep an eye on", as it is conceptually comparable to HiFi.

    1d. Use the latest and greatest HiFi data if possible, and at least discuss the evolution of HiFi data. Even better would be to compare HiFi data over time, but this data may not really be available, and most people won't be using older HiFi data. Simulation of older data would conceivably be possible; while doing so would make this paper more complete, I would argue that it isn't worth the effort at this juncture. For reference, in my observation, older data has a median read length around 10-15 kb instead of 18-22 kb.

    1e. Include real HiFi data for the species you are assembling. If none is available and you aren't in a position to generate it, then keep the HiFi assembler comparison on real data separate from that of the CLR/ONT assembler comparisons on real data by using real HiFi data for other species.
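    As flagged in 1.a, a minimal sketch of driving hifiasm and extracting its primary contigs; this assumes a recent hifiasm release whose default primary-contig output is named <prefix>.bp.p_ctg.gfa (the output file name and the usage line are assumptions, not taken from the manuscript):

    ```python
    import subprocess

    def assemble_hifi(reads_fastq, prefix, threads=16):
        """Assemble HiFi reads with hifiasm, then convert the primary-contig
        GFA to FASTA (hifiasm writes assemblies as GFA, not FASTA)."""
        subprocess.run(["hifiasm", "-o", prefix, "-t", str(threads), reads_fastq],
                       check=True)
        gfa_path = f"{prefix}.bp.p_ctg.gfa"  # default name in recent releases
        fasta_path = f"{prefix}.p_ctg.fasta"
        with open(gfa_path) as gfa, open(fasta_path, "w") as fasta:
            for line in gfa:
                if line.startswith("S\t"):  # GFA segment line: S <name> <sequence> ...
                    _, name, seq = line.rstrip("\n").split("\t")[:3]
                    fasta.write(f">{name}\n{seq}\n")
        return fasta_path

    # e.g. assemble_hifi("hifi_reads.fastq.gz", "yeast_asm", threads=32)
    ```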

    2. Discuss in the intro and/or discussion that you are focusing on "squashed" assemblies. Without clever sample separation and/or trio-based approaches (e.g., DOI: 10.1038/nbt.4277), a single squashed haplotype is the only possible outcome for PacBio CLR and ONT-only approaches. For non-haploid genomes, other approaches (HiFi-only or hybrid approaches, e.g., HiFi + ONT or HiFi + Hi-C) can generate pseudohaplotypes at worst and fully resolved haplotypes at best. The latter is an objectively better option when possible, and it's important to note that this comparison wouldn't apply when planning a project with such goals. Similarly, it would probably be helpful to point out to the novice reader that this comparison doesn't apply to metagenome assembly either.

    3. The title suggests to the reader that we'll be shown how long reads make a difference in assembly compared to non-long-read approaches. However, this is not the case, despite some mention of it near line 318. Short-read assemblies are not compared here, and no discussion is provided to suggest how long-read-based assemblies would improve outcomes in various situations relative to short reads. Unless such a comparison and/or discussion is added, I think the title should be changed. I've included this point in the "Major Comments" section because including such a comparison would be a big overhaul, but I don't expect this to be done. The core concern is that the analysis is portrayed correctly.

    4. Sequencing technologies are often portrayed as static through time, but this is not accurate. This is a failing of the field generally. Part of the problem is the length of the publishing cycle (often >1 yr from when a paper is written to when it's published, not to mention how long it takes to do the analysis before a paper is even written). Part of the problem is that current statistics are often cited in influential papers and then recited in more recent papers based on the influential paper, despite changes having been made since that influential paper was released. Accordingly, the error rate in ONT reads has been misreported as being ~15% for many years, even though their chemistry has improved over time and the machine learning models (especially for human samples) have also improved, dropping the error rate substantially. ONT has made improvements to their chemistry and changed nanopores over time, and PacBio has tinkered with their polymerase and chemistry too. Accordingly, a better question for a person planning to perform an assembly would be "which assembler is best for my datatype (PacBio CLR vs. ONT) and chemistry/etc.?" instead of just differentiating by company. Any comparison of those datatypes should at least address this as a factor in their discussion, if not directly in their analysis. I feel that this is missing from this comparison.

    In an ideal world, we'd have various CLR chemistries and ONT pores/etc. for each species in this analysis. That data likely doesn't exist for each of the chosen species though, and generating it would be non-trivial, especially retroactively. Using the most recent versions is a good option, but they may also not exist for every species chosen. Since this analysis was started (circa Nov/Dec 2021 by my estimate based on the chosen assembler versions), ONT has released pore 10; in combination with the most recent release of Guppy, error rates <=3% are expected for a huge portion of the data. That type of data is likely to assemble very differently from R9.4, and starker differences would be expected for data older than R9.4. Even if all the data were the most recent (or from the same generation (e.g., R9.4)), library preps vary greatly, especially between UL (ultralong) libraries and non-UL libraries. Having reads >100 kb, especially a large number of them, makes a big difference in assembly outcome in my observation. How does the choice of assembler (and possibly different parameters) affect the assembly when UL data is included? How is that different from non-UL data? What about UL data at different percentages of the reads being considered UL? A paper focusing on long noisy reads would be much more impactful if it addressed these questions.

    Again, this may not be possible for this particular paper considering what's already been done and the available funding, and I think that's okay. However, these issues need to be addressed in the discussion as open questions and suggested future work. The type of CLR and ONT data also needs to be specified in this work, e.g., in a supplemental table, and if the various datasets are not of the same types, these differences need to be acknowledged. At a minimum, I think the following data points should be included: chemistry/pore information (e.g., R9.4 for ONT or P2/C5 for PacBio), basecaller (e.g., Guppy vX.Y.Z), and read-length distribution info (e.g., mean, st. dev., median, % >100 kb), ideally a plot of the distribution in addition to summary values. I also understand that these data were generated previously by others, and this information should theoretically be available from their original publications, which are hopefully accessible via the INSDC records associated with the provided accessions. The objective here is making the information easily accessible to the readers of this paper because these could be confounding variables in the analysis.
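    A minimal sketch of how those read-length summary values could be pulled straight from a FASTQ file (plain Python with no external dependencies; the 100 kb ultralong cutoff follows the comment above, and the file name in the usage line is hypothetical):

    ```python
    import gzip
    import statistics

    def fastq_read_lengths(path):
        """Collect read lengths from a FASTQ file, gzipped or not."""
        opener = gzip.open if path.endswith(".gz") else open
        lengths = []
        with opener(path, "rt") as handle:
            for i, line in enumerate(handle):
                if i % 4 == 1:  # the sequence line of each 4-line record
                    lengths.append(len(line.rstrip()))
        return lengths

    def summarise(lengths):
        """Summary values requested above, including the ultralong fraction."""
        return {
            "reads": len(lengths),
            "mean": statistics.fmean(lengths),
            "st_dev": statistics.stdev(lengths),  # needs >= 2 reads
            "median": statistics.median(lengths),
            "pct_over_100kb": 100 * sum(l > 100_000 for l in lengths) / len(lengths),
        }

    # e.g. summarise(fastq_read_lengths("ont_reads.fastq.gz"))
    ```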

    5. This comparison considered only a single coverage level (30x). That's not an unreasonable shortcut, but it certainly leaves a lot of room for differences between assemblers. If the objective of the paper is to help future project planners decide what assembler to use, it would be most helpful if they had an idea of what coverage they can use and still succeed. That's especially true for projects that don't have a lot of funding or aren't planning to make a near-perfect reference genome (which would likely spend the money on high coverage of multiple datatypes). It would be helpful to include some discussion about how these results may differ at much lower coverage (e.g., 2x or 10x) or at higher coverage (e.g., 50x, 70x, etc.), and/or provide some justification from another study for why including that kind of comparison would be unlikely to be worthwhile for this study, even if project planners should consider those factors when developing their budget and objectives.
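    Probing additional coverage levels would not require new sequencing; random downsampling of the existing read sets is enough. A minimal sketch, assuming read lengths have already been tabulated per read id (all names here are illustrative):

    ```python
    import random

    def subsample_to_coverage(read_lengths, genome_size, target_coverage, seed=42):
        """Randomly keep reads until their summed length reaches
        target_coverage * genome_size; returns the kept read ids.
        read_lengths: dict mapping read id -> read length."""
        rng = random.Random(seed)  # fixed seed so subsets are reproducible
        read_ids = list(read_lengths)
        rng.shuffle(read_ids)
        kept, total_bases = [], 0
        for read_id in read_ids:
            if total_bases >= target_coverage * genome_size:
                break
            kept.append(read_id)
            total_bases += read_lengths[read_id]
        return kept

    # e.g. a 10x subset of a 12 Mb yeast dataset:
    # kept = subsample_to_coverage(lengths, genome_size=12_000_000, target_coverage=10)
    ```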

    6. Figures 2 and 3 include a lot of information, and I generally like how they look and that they provide a quick overview. I believe two things are missing that would improve either the assessment or the presentation of the information, and I think one change would also improve things.

    6a. I think metrics from Merqury (DOI: 10.1186/s13059-020-02134-9) should be included where possible. Specifically, the k-mer completeness (recovery rate) and reference-free QV estimate (#s 1 and 3 from https://github.com/marbl/merqury/wiki/2.-Overall-k-mer-evaluation). Generally these are meant to be computed from data of the same individual. However, most of the species selected for this comparison are highly homozygous strains that should have Illumina data available, and thus having the data come from not the exact same individual will likely be okay. This can serve as another source of validation. If such a dataset is not available for one or more of these species, then specify in the text that it wasn't available, and thus such an evaluation wasn't possible. If it's not possible to add one or both of these metrics to the figures (2 & 3), that's fine, but having it as a separate figure would still be helpful. I find these values to be some of the most informative for the quality of an assembly.

    6b. It's not strictly necessary, so this might be more of a minor comment, but I found that I wanted to view individual plots for each metric. Perhaps including such plots in the supplement would help (e.g., 6 sets of plots similar to figure 4 with color based on assembler, grouping based on species, and opacity based on datatype). The specifics aren't critical; I just found it hard to get more than a very general idea from the main figures and wanted something easy to digest for each metric.

    6c. Using N50/NG50 as a measure of contiguity is an outdated and often misleading approach. Unfortunately, it's become such common practice that many people feel obligated to include it or use it. Instead, the auN (auNG) would be a better choice for contiguity: https://lh3.github.io/2020/04/08/a-new-metric-on-assembly-contiguity.
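    For readers unfamiliar with the two suggestions in 6a and 6c, a minimal sketch of both computations, assuming contig lengths and Merqury-style k-mer counts are already in hand (names are illustrative; the QV formula follows the Merqury paper cited in 6a):

    ```python
    import math

    def aun(contig_lengths, genome_size=None):
        """auN: each contig length weighted by the fraction of the assembly
        it covers, i.e. sum(L_i^2) / assembly size. Passing a genome size
        instead gives auNG. Unlike N50, it responds smoothly to changes."""
        denominator = genome_size or sum(contig_lengths)
        return sum(length * length for length in contig_lengths) / denominator

    def merqury_qv(asm_only_kmers, asm_total_kmers, k=21):
        """Reference-free QV: k-mers present in the assembly but absent from
        the read set are assumed to be errors; the per-base error rate is
        derived from the k-mer survival rate and then Phred-scaled."""
        error_rate = 1 - (1 - asm_only_kmers / asm_total_kmers) ** (1 / k)
        return -10 * math.log10(error_rate)

    print(aun([30_000, 20_000, 10_000, 10_000]))      # -> 21428.57...
    print(round(merqury_qv(1_000, 100_000_000), 1))   # -> ~63 for a tiny error load
    ```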

    7. This paper focuses on assembly and intentionally does not consider polishing (line 176), which I think is a reasonable choice. It also does not consider scaffolding or hybrid assembly approaches (again, reasonable choices). In the case of hybrid assembly options, most weren't available when this analysis was done (short-read + long-read assemblers were available, but I think it's perfectly reasonable not to have included those). Given the frequency of scaffolding (especially with Hi-C data [DOIs: 10.1371/journal.pcbi.1007273 & 10.1093/bioinformatics/btac808]) and the recent shift to hybrid assemblers (e.g., phasing HiFi-based string graphs using Hi-C data to get haplotype-resolved diploid assemblies (albeit with some switch errors) [DOI: 10.1038/s41587-022-01261-x], or resolving HiFi-based minimizer de Bruijn graphs using ONT data and parental Illumina data to get complete, T2T diploid assemblies [DOI: 10.1038/s41587-023-01662-6]), I think it would be appropriate to briefly mention these methods so the novice reader will know that this benchmark does not apply to hybrid approaches or post-assembly genome finishing. This is a minor change, but I included it in this section because it matches the general theme of ensuring the scope of this benchmark is clear.

    Minor Comments:

    1. line 25 in the abstract. Change Redbean to wtdbg2 for consistency with the rest of the manuscript.

    2. "de novo" should be italicized. It is done correctly in some places but not in others.

    3. line 64. "all TGS technologies": I would argue that this isn't quite true. ONT Duplex isn't included here even though Duplex likely didn't exist when you did this work. Also, see the major comments concerning whether TGS should include HiFi and Duplex.

    4. Table 1. Read length distributions vary dramatically by technology and library prep. E.g., HiFi is often a very tight distribution about the mean because of size selection. Including the median in the table would be helpful, but more importantly, I would like to see read-length distribution plots in the supplement for (a) the real data used to generate the initial iteration models and (b) the real data from each species.

    5. line 166 "fair comparison". I'm not sure that a fair comparison should be the goal, but having them at the same coverage level makes them more comparable which is helpful. Maybe rephrase to indicate that keeping them at the same coverage level reduces potentially confounding variables when comparing between the real and simulated datasets.

    6. line 169. Citation 18 is used for Canu, which is appropriate but incomplete. The citation for HiCanu should also be included here: DOI: 10.1101/gr.263566.120.

    7. line 169. State that these were the most current releases of the various assemblers at the time that this analysis was started. Presumably, that was Nov/Dec 2021. Since then, Raven has gone from v1.7.0->1.8.1 and Flye has gone from v2.9->2.9.1.

    8. line 175. Table S6 is mentioned here, but S5 has not yet been mentioned. S5 is mentioned for the first time on line 196. These two supp tables' numbers should be swapped.

    9. There is inconsistent use of the Oxford comma. I noticed it is missing multiple times, e.g., lines 191, 208, 259, & 342.

    10. line 193. The comma at the end of the line (after "tools") should be removed. Alternatively, keep the comma but add a subject to the next clause to make it an independent clause (e.g., "...assembly tools, and they were computed...").

    11. line 237. The N50 of the reference is being used here. You provide accessions for the references used, but most people will not go look those up (which is reasonable). The sequences in a reference can vary greatly in their lengths, even within the same species, because which sequences are included in the reference is not standardized. Even the size difference between a homogametic and heterogametic reference can be non-trivial. Which sequences are included in the reference, and more importantly in your N50 value, can significantly change the outcome and may bias results if these are not handled consistently between the included species. It would be helpful if here or somewhere (e.g., in some supplemental text or a table) the contents of these references were summarized. In addition to one copy of each of the expected autosomes, were any of the following included: (a) one or two sex chromosomes if applicable, (b) mitochondrial, chloroplast, or other organelle sequences, (c) alternate sequences (i.e., another copy of an allele of some sequence included elsewhere), (d) unplaced sequence from the 1st copy, (e) unplaced sequence from subsequent copies, and (f) vectors (e.g., EBV used when transforming a cell line)?
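    One cheap way to produce such a summary is to tabulate the name and length of every sequence in each reference FASTA and eyeball the entries; a minimal sketch (the file name and the keyword heuristic at the end are illustrative assumptions, not a standard):

    ```python
    import gzip

    def reference_lengths(path):
        """Map each sequence name in a FASTA file to its length."""
        opener = gzip.open if path.endswith(".gz") else open
        lengths = {}
        name = None
        with opener(path, "rt") as handle:
            for line in handle:
                if line.startswith(">"):
                    name = line[1:].split()[0]
                    lengths[name] = 0
                elif name is not None:
                    lengths[name] += len(line.strip())
        return lengths

    # Flag entries whose names hint at organelles, alts, or unplaced scaffolds:
    # lengths = reference_lengths("reference.fasta")  # hypothetical file name
    # suspects = {n: s for n, s in lengths.items()
    #             if any(tag in n.lower() for tag in ("mt", "mito", "chloroplast", "un", "alt"))}
    ```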

    12. Supplemental tables. Some cells are uncolored, and other cells are colored red or blue with varying shading. I didn't notice a legend or description of what the coloring and shading were supposed to mean. Please include this either with each table or at the beginning of the supplemental section that includes these tables, and state that it applies to all tables #-#.

    13. Supplemental table S3. It was not clear to me that you created your own model for the HiFi data (pacbio_hifi_human2022). I was really confused when I couldn't find that model in the GitHub repo for Badread. In the caption for this table or in the text somewhere, please make it more explicit that you created this yourself instead of using an existing model.