A replicable and modular benchmark for long-read transcript quantification methods



Abstract

We provide a replicable benchmark for long-read transcript quantification and evaluate the performance of several recently introduced tools on multiple synthetic long-read RNA-seq datasets. The benchmark is designed so that its results can be easily replicated by other researchers, and the underlying Snakemake workflow is modular, making it straightforward to add new tools or new datasets. In re-analyzing previously assessed simulations, we find discrepancies with recently published results. We also demonstrate that the robustness of certain approaches hinges critically on the quality and “cleanness” of the simulated data.
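The repository's own rules are the authoritative reference for how the workflow is organized; as a rough, hypothetical sketch of what a rule-per-tool modular layout can look like in Snakemake, a single rule for a new quantifier might resemble the following (the tool name "newtool", its command line, and all file paths here are illustrative placeholders, not the actual rules from the benchmark):

    # Hypothetical Snakemake rule: one self-contained rule per quantifier,
    # parameterized over datasets via the {dataset} wildcard.
    rule quantify_newtool:
        input:
            reads="data/{dataset}/reads.fastq",          # simulated long reads
            index="indexes/{dataset}/txome.idx"          # transcriptome index
        output:
            "results/{dataset}/newtool/quants.tsv"       # per-transcript estimates
        shell:
            "newtool quant --index {input.index} --reads {input.reads} "
            "--output {output}"

Under such a layout, adding a new tool or dataset amounts to writing one such rule (or one new wildcard value) and listing its output among the workflow's final targets, which is the kind of extensibility the abstract describes.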

Availability

The Snakemake scripts for the benchmark are available at https://github.com/COMBINE-lab/lr_quant_benchmarks, and the data used as input for the benchmarks (reference sequences, annotations, and simulated reads) are available at https://doi.org/10.5281/zenodo.13130623.
