USEARCH v11

Singletons

 
See also
Tolstoy's rule: most unique sequences are bad

Definition of a singleton
 
A singleton is a read with a sequence that is present exactly once, i.e. is unique among the reads.
 
Singletons should be discarded
If sequencer errors are independent and randomly distributed, then the sequence of a bad read is unlikely to be reproduced by chance, so most singletons will contain at least one error. Conversely, if the sequence of a read is found two or more times, it is likely to be correct, though it could be the correct sequence of a bad amplicon, e.g. a chimera. Reads that are still singletons after quality filtering and global trimming are therefore discarded, and reads with abundances of two or more are used as input to OTU clustering.

Most singletons will map to an OTU when the OTU table is constructed (otutab command), so the data is not lost.
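As a quick diagnostic, you can check what fraction of your reads are singletons by counting exact-sequence abundances. Here is a minimal Python sketch, assuming a FASTA file of quality-filtered, globally trimmed reads (the file name reads.fa is a placeholder):

  from collections import Counter

  def read_fasta(path):
      """Yield sequences from a FASTA file (minimal parser, no error handling)."""
      seq = []
      with open(path) as f:
          for line in f:
              line = line.strip()
              if line.startswith(">"):
                  if seq:
                      yield "".join(seq)
                      seq = []
              elif line:
                  seq.append(line.upper())
          if seq:
              yield "".join(seq)

  # Abundance of each unique sequence (placeholder file name).
  abundance = Counter(read_fasta("reads.fa"))
  n_unique = len(abundance)
  n_singletons = sum(1 for n in abundance.values() if n == 1)
  print(f"{n_unique} unique sequences, {n_singletons} singletons "
        f"({100.0 * n_singletons / max(n_unique, 1):.1f}% of uniques)")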
 
USEARCH command for discarding singletons
 
  usearch -sortbysize derep.fasta -fastaout derep2.fasta -minsize 2
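If you want to see exactly what the abundance cutoff does, here is a minimal Python sketch of an equivalent filter (it applies only the minsize cutoff and does not re-sort by size). It assumes FASTA headers annotated like >Uniq1;size=17, as written by dereplication with size annotations; the file names are placeholders:

  import re

  def filter_minsize(infile, outfile, minsize=2):
      """Copy FASTA records whose ';size=N' annotation is >= minsize.

      Records without a size annotation are treated as size 1 and discarded.
      """
      size_re = re.compile(r";size=(\d+)")
      keep = False
      with open(infile) as fin, open(outfile, "w") as fout:
          for line in fin:
              if line.startswith(">"):
                  m = size_re.search(line)
                  keep = (int(m.group(1)) if m else 1) >= minsize
              if keep:
                  fout.write(line)

  # Placeholder file names, mirroring the command above.
  filter_minsize("derep.fasta", "derep2.fasta", minsize=2)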
 
But this reduces sensitivity, right?
Maybe a little. Typically, most singletons map to an OTU. But yes, there may be a small reduction in sensitivity. Most singletons are probably good, meaning close enough to a biological sequence to be informative in downstream analysis, and some species may be present in only a single read. However, most errors are probably singletons, especially with the very large numbers of reads obtained from newer technologies such as the Illumina MiSeq machine. Discarding singletons has a small cost in sensitivity but often achieves a large improvement in specificity (reduction in error rate), as explained below.
 
Loner tags
Most singletons will have one or more errors. Consider a typical singleton (S) that has, say, one error. Usually there will be a correct read (C) of the same amplicon with higher abundance, so discarding S doesn't hurt sensitivity because we keep C. It only hurts if S is the only read for its species. Call such a read a "loner tag".
 
If you have millions of reads and the error rate is anywhere close to 1%, then inevitably you will have millions of singletons due to sequencer errors. Even if your error rate is very low, say 0.01%, you will still have thousands of singletons due to errors. Only a tiny fraction of those, if any, will be loner tags. So by discarding singletons, you discard thousands or millions of reads with errors, and at most perhaps one or two loner tags.
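To make the arithmetic concrete, here is a back-of-the-envelope calculation assuming independent per-base errors; the read length, read count and error rates are illustrative numbers, not measurements:

  # Expected number of reads containing at least one error,
  # assuming independent per-base errors (illustrative numbers only).
  read_len = 250        # assumed read length
  n_reads = 10_000_000  # assumed number of reads

  for per_base_error in (0.01, 0.0001):  # 1% and 0.01% per-base error rates
      p_error_free = (1.0 - per_base_error) ** read_len
      n_with_errors = n_reads * (1.0 - p_error_free)
      print(f"error rate {per_base_error:.2%}: "
            f"~{n_with_errors:,.0f} reads have at least one error")

Under these assumptions that is roughly nine million error-containing reads at 1% and about a quarter of a million at 0.01%, most of which will show up as singletons because their particular error patterns are unlikely to recur.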
 
What if there are many low-abundance species?
Singletons are a special case because errors are unlikely to be reproduced by chance. Singletons that give spurious OTUs arise from errors, mostly reads with >3% bad bases and chimeras. Other sources of error include PCR point errors and contaminants, though these are usually rare.

If you get K spurious OTUs from N reads due to these types of error on a mock community, then you should expect to get ~K spurious OTUs from N reads in a real community. If the community is diverse and has a long tail of low-abundance species, then you will get more valid singletons, but K will not change because the same sources of error (bad reads, chimeras...) are present. The mock community results in the UPARSE paper show that K is large even with aggressive read quality filtering.
 
If your community has high diversity, then you cannot assume that you will sample all species, regardless of whether you keep singletons. Methods for dealing with undersampling, such as rarefaction curves, will work better if you discard singletons, for the same reason: discarding them removes most of the spurious low-abundance OTUs caused by errors.
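For illustration, a rarefaction curve can be sketched in a few lines of Python: subsample the reads at increasing depths and count how many distinct OTUs are observed at each depth. The read-to-OTU assignments below are a made-up toy example, not output from a real pipeline:

  import random

  def rarefaction_curve(otu_per_read, depths, n_trials=10, seed=1):
      """Mean number of distinct OTUs observed when subsampling reads without replacement."""
      rng = random.Random(seed)
      means = []
      for depth in depths:
          counts = [len(set(rng.sample(otu_per_read, depth))) for _ in range(n_trials)]
          means.append(sum(counts) / n_trials)
      return means

  # Toy community: a few abundant OTUs plus a long tail of rare ones.
  reads = ["otu1"] * 500 + ["otu2"] * 300 + ["otu3"] * 150 + [f"rare{i}" for i in range(50)]
  print(rarefaction_curve(reads, depths=[10, 50, 100, 500, 1000]))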
 
Errors correlate
You might think that if the error rate is ~1% or less, then the probability that a read has enough errors to reach 3% divergence is tiny, so there is no need to discard singletons. This would be true if errors were independent (then the probability could be calculated from the Poisson distribution). However, empirically I've found that errors tend to correlate, and the number of reads with >=3% errors is much larger than you would expect if errors were Poisson-distributed.
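For reference, here is what the independent-errors model predicts; the read length and error rate are illustrative assumptions, and the point above is that real data contains many more high-error reads than this estimate:

  import math

  # Poisson estimate of the fraction of reads reaching >=3% divergence,
  # assuming independent errors (illustrative numbers only).
  read_len = 250           # assumed read length
  per_base_error = 0.01    # assumed mean per-base error rate
  lam = read_len * per_base_error          # expected errors per read (2.5)
  threshold = math.ceil(0.03 * read_len)   # errors needed for 3% divergence (8)

  # P(X >= threshold) for X ~ Poisson(lam)
  p_tail = 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(threshold))
  print(f"P(>= {threshold} errors per read) = {p_tail:.1e}")  # ~4e-3 under this model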