The Lithuanian reference genome LT1 - a human de novo genome assembly with short and long read sequence and Hi-C data
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (GigaByte)
Abstract
We present LT1, the first high-quality human reference genome from the Baltic States. LT1 is a female de novo human reference genome assembly constructed using 57× coverage of ultra-long nanopore reads and 47× coverage of short paired-end reads. We also utilized 72 Gb of Hi-C chromosomal mapping data to maximize the assembly’s contiguity and accuracy. LT1’s contig assembly was 2.73 Gbp in length, comprising 4,490 contigs with an N50 value of 13.4 Mbp. After scaffolding with Hi-C data and extensive manual curation, we produced a chromosome-scale assembly with an N50 value of 138 Mbp and 4,699 scaffolds. Our gene prediction quality assessment using BUSCO identified 89.3% of the single-copy orthologous genes included in the benchmarking set. Detailed characterization of LT1 suggested it has 73,744 predicted transcripts, 4.2 million autosomal SNPs, 974,000 short indels, and 12,330 large structural variants. These data are shared as a public resource without any restrictions and can be used as a benchmark for further in-depth genomic analyses of the Baltic populations.
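For readers unfamiliar with the N50 statistic quoted above, a minimal sketch of how it is conventionally computed from a list of contig lengths (illustrative only; this is not the authors' code, and the example lengths are made up):

```python
def n50(lengths):
    """Return the N50: the largest length L such that contigs of
    length >= L together cover at least half the total assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty input

# Hypothetical contig lengths in Mbp: half the total (38/2 = 19)
# is first reached after adding 13 + 10, so N50 = 10.
print(n50([13, 10, 7, 5, 3]))  # -> 10
```

The same procedure applied to scaffold lengths gives the scaffold N50 (138 Mbp for LT1 after Hi-C scaffolding, versus 13.4 Mbp at the contig stage).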
Article activity feed
-
We present LT1
Reviewer 2. Professor Gong Zhang
Is the language of sufficient quality? No.
Are all data available and do they match the descriptions in the paper?
No. Hi-C data was not deposited.
Is the data acquisition clear, complete and methodologically sound?
No. The quality of the nanopore sequencing datasets was not evaluated. The error correction using short-read sequencing was not clear. It seems unnecessary to use Hi-C data for the assembly.
Is there sufficient detail in the methods and data-processing steps to allow reproduction? No. Error correction was not clear.
Is there sufficient data validation and statistical analyses of data quality? No.
Is the validation suitable for this type of data?
No. No validation of the variants was performed. The authors used multiple SNV detection algorithms and got quite different results. They should experimentally validate which one is better.
Is there sufficient information for others to reuse this dataset or integrate it with other data?
No. It is difficult to reuse it. Little annotation has been done.
Additional Comments: I don't understand why the authors chose to sequence a woman. As a reference for a certain ethnicity, complete chromosomes are needed, which means a man (XY) is necessary.
-
Abstract
This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.51), and the reviews have been published under the same license. They are as follows.
Reviewer 1. Dr. Giulio Formenti
First review: Language: A few minor typos to correct, highlighted in the revised manuscript
Is there sufficient detail in the methods and data-processing steps to allow reproduction?
Yes, but please revise as per my comments
Additional Comments: see comments here https://gigabyte-review.rivervalleytechnologies.com/download-api-file?ZmlsZV9wYXRoPXVwbG9hZHMvZ3gvRFIvMjg3L0xUMV9NU19HaWdhQnl0ZV8yMDIxMTEyM19IU19HRi5kb2N4
Decision: Minor Revision.
Re-review:
I am happy with the changes and I think the article is worth publishing in GigaByte. However, I think one main point needs further clarity. Since this is mostly about a new dataset and assembly, the authors should make it very clear to the reader what they did. I think the title is still misleading in this respect. In it the authors refer to an "assembly with short and long reads combined with Hi-C data". This is not how one would generally refer to such an assembly in the community, as it reads as if a short-read-based assembly was complemented with long reads (gap filling?) and Hi-C reads (phasing?). I suggest rephrasing as "an ONT long-read-based assembly scaffolded with Hi-C data and polished with short reads". The confusion/ambiguity about this is further reinforced in the text. I think the authors should make an extra effort reading the text to make sure the genome assembly terminology is consistent with the state of the art and therefore very clear to the reader. For instance, in the abstract the authors say that the assembly was constructed using 57× ultra-long nanopore reads. I think this is incorrect. Ultra-long nanopore reads are usually defined as reads >100 kbp. I don't think the authors filtered their dataset for ultra-long reads, and this should be corrected. Indeed, it would be interesting to know what fraction of ultra-long reads is available in their 57× dataset.