ntStat: k-mer characterization using occurrence statistics in raw sequencing data


by Parham Kazemi, Lauren Coombe, René L. Warren, Inanc Birol

K-mer counts are fundamental in many genomic data analysis tasks, providing valuable information for genome assembly, error correction, and variant detection. State-of-the-art k-mer counting tools employ various techniques, such as parallelism, probabilistic data structures, and disk utilization, to efficiently extract k-mer frequencies from large datasets. The distribution of k-mer counts in raw sequencing reads reveals key genomic characteristics such as genome size, heterozygosity, and basecalling quality. The number of reads containing a k-mer has also found applications in genome assembly and sequence analysis. We present ntStat, a toolkit that employs succinct Bloom filter data structures to track both k-mer count and read depth information for use in downstream applications. ntStat models the k-mer count histogram using evolutionary computation and infers valuable insights about the genome, sequencing data, and individual k-mers, de novo. ntStat consistently ran faster than DSK, BFCounter, hackgap, and Squeakr in all of our tests. Jellyfish ran faster than ntStat for human data with k = 25, but fell behind with k = 64. KMC3 was faster overall, but at the cost of high disk and memory usage. ntStat also used less memory than other non-disk-based k-mer counters, and typically 99.5-99.9% of the k-mers processed by ntStat were counted correctly. ntStat's histogram analysis module detected heterozygosity percentages and k-mer coverage for long-read datasets simulated from a diploid human genome with less than 1% and 0.5-fold deviation from the ground truth, respectively. The analysis of simulated long-read datasets showed an average error of just 2% in k-mer robustness estimates.
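To illustrate the general idea behind Bloom filter-based k-mer counting (not ntStat's actual implementation, which uses succinct data structures and its own hashing), the following is a minimal Python sketch of a counting Bloom filter: each slot holds a small counter rather than a single bit, and querying the minimum counter across a k-mer's hash slots gives an approximate count that can only overestimate, never underestimate. All class and function names here are hypothetical, chosen for illustration.

```python
import hashlib


class CountingBloomFilter:
    """Minimal counting Bloom filter: slots hold counters instead of
    bits, so approximate element counts can be queried."""

    def __init__(self, size=1 << 20, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, kmer):
        # Derive num_hashes slot indexes from one digest
        # (an illustrative hashing scheme, not ntStat's).
        digest = hashlib.sha256(kmer.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 8:(i + 1) * 8]
            yield int.from_bytes(chunk, "big") % self.size

    def insert(self, kmer):
        for idx in self._indexes(kmer):
            self.counters[idx] += 1

    def count(self, kmer):
        # The minimum counter upper-bounds the true count: hash
        # collisions can only inflate counters, never deflate them.
        return min(self.counters[idx] for idx in self._indexes(kmer))


def count_kmers(reads, k, cbf):
    """Slide a window of length k over each read, inserting every k-mer."""
    for read in reads:
        for i in range(len(read) - k + 1):
            cbf.insert(read[i:i + k])


cbf = CountingBloomFilter()
count_kmers(["ACGTACGT", "ACGTAAAA"], k=4, cbf=cbf)
# "ACGT" occurs three times across the two reads; the CBF returns
# at least that (and usually exactly that at low load).
print(cbf.count("ACGT"))
```

This captures the trade-off the abstract alludes to: a small, fixed memory footprint in exchange for a bounded fraction of miscounted k-mers due to hash collisions, consistent with the reported 99.5-99.9% exact-count rate.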