See also
Interpreting counts and frequencies in OTU tables
Sneath-Sokal OTUs
The concept of an Operational Taxonomic Unit (OTU)
was introduced by Peter Sneath and Robert Sokal in the 1960s through a series of books
and articles which founded the field of
numerical taxonomy
(see e.g. Sneath & Sokal: Numerical Taxonomy,
W.H. Freeman, 1973).
Their goal was to develop a quantitative strategy for classifying organisms into
groups based on observed characters, creating a hierarchical classification
reflecting the evolutionary relationships between the organisms as faithfully as
possible.
Sneath-Sokal OTUs were constructed by making a table of observed traits
which could be described by numerical values, e.g. 1 = present, 0 = absent. A
hierarchy (tree) was built by repeatedly merging the most similar groups
according to the number of traits they had in common. This is an early
example of an agglomerative clustering algorithm applied to
biological classification. Sokal and
Sneath were working before DNA sequences were available, and before the
development of more accurate trait- and sequence-based
phylogenetic tree construction algorithms such as
neighbor-joining
and
maximum likelihood.
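To make the trait-table procedure concrete, here is a minimal sketch in Python of this style of agglomerative clustering on a hypothetical binary trait table. The trait values, the simple-matching similarity and the average-linkage merge rule are illustrative assumptions, not a reconstruction of Sneath and Sokal's exact methods.

    import itertools

    # Hypothetical trait table: rows are organisms, columns are traits
    # (1 = present, 0 = absent).
    traits = {
        "A": [1, 1, 0, 1],
        "B": [1, 1, 0, 0],
        "C": [0, 0, 1, 1],
    }

    def similarity(x, y):
        # Fraction of traits in the same state (simple matching coefficient).
        return sum(a == b for a, b in zip(x, y)) / len(x)

    def group_similarity(g1, g2):
        # Average similarity over all cross-group pairs (average linkage).
        pairs = [(a, b) for a in g1 for b in g2]
        return sum(similarity(traits[a], traits[b]) for a, b in pairs) / len(pairs)

    # Start from singletons and repeatedly merge the most similar pair of
    # groups; the merge order defines the hierarchy.
    groups = [[name] for name in traits]
    while len(groups) > 1:
        i, j = max(itertools.combinations(range(len(groups)), 2),
                   key=lambda p: group_similarity(groups[p[0]], groups[p[1]]))
        print("merge:", groups[i], "+", groups[j])
        groups = [g for k, g in enumerate(groups) if k not in (i, j)] + [groups[i] + groups[j]]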
97% identity threshold
In 16S sequencing, OTUs are
typically constructed using an identity threshold of 97%. To the best of my knowledge, the
first mention of this threshold is in (Stackebrandt
and Goebel 1994). If you know of an earlier reference for the magic
number 97, please let me know. Stackebrandt and Goebel found that 97% similarity
of 16S sequences corresponded approximately to a DNA reassociation value
of 70%, which had previously been accepted as a working definition for bacterial
species (Wayne et al.
1987). A more recent analysis found that the optimal threshold was 98.5% for
approximating species by clustering full-length 16S sequences (Yarza
et al. 2014). While this study was done using CD-HIT (Li
and Godzik 2006), which uses a problematic
non-standard definition of sequence identity, I have obtained a similar
optimal identity with UCLUST, which uses the
standard BLAST definition (unpublished
results).
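To illustrate why the definition matters, here is a small Python sketch contrasting the two conventions as I understand them: CD-HIT divides the number of identical positions by the length of the shorter sequence, while the standard BLAST definition divides by the number of alignment columns, so gaps count against identity. The toy alignment is invented for illustration, and real tools differ in further details such as the treatment of terminal gaps.

    # A pairwise alignment represented as two gapped strings of equal length.
    def identities(row_a, row_b):
        return sum(a == b and a != "-" for a, b in zip(row_a, row_b))

    def blast_identity(row_a, row_b):
        # Standard BLAST definition: identities / alignment columns.
        return identities(row_a, row_b) / len(row_a)

    def cdhit_identity(row_a, row_b):
        # CD-HIT definition: identities / length of the shorter sequence.
        shorter = min(len(row_a.replace("-", "")), len(row_b.replace("-", "")))
        return identities(row_a, row_b) / shorter

    # A terminal gap lowers BLAST identity but not CD-HIT identity.
    a = "ACGTACGTAC"
    b = "ACGTACGT--"
    print(blast_identity(a, b), cdhit_identity(a, b))  # 0.8 1.0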
OTUs constructed by clustering 16S sequences
The
earliest software program I'm aware of for clustering 16S sequences is
FastGroup (Seguritan and
Rohwer 2001). FastGroup used a greedy clustering algorithm similar to
UCLUST. The identity threshold was 97%, for
which the authors cite Stackebrandt and Goebel.
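As an illustration, here is a minimal Python sketch of greedy clustering in this style. The identity function is a simplified stand-in (position-by-position comparison); real programs like FastGroup and UCLUST align the sequences.

    def identity(a, b):
        # Simplified stand-in for alignment-based identity.
        matches = sum(x == y for x, y in zip(a, b))
        return matches / max(len(a), len(b))

    def greedy_cluster(seqs, threshold=0.97):
        # Each sequence joins the first centroid it matches at >= 97%
        # identity; otherwise it becomes the centroid of a new cluster.
        centroids = []
        clusters = {}
        for seq in seqs:
            for c in centroids:
                if identity(seq, c) >= threshold:
                    clusters[c].append(seq)
                    break
            else:
                centroids.append(seq)
                clusters[seq] = [seq]
        return clusters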
DOTUR (Schloss and
Handelsman 2005), the forerunner of
mothur (Schloss et al.
2009), used agglomerative clustering with options
to use maximum, minimum or average linkage. DOTUR
and mothur generate OTUs at a range of identity thresholds, noting (DOTUR paper) that "[s]equences with greater than 97% identity are typically
assigned to the same species, those with 95% identity are typically assigned
to the same genus, and those with 80% identity are typically assigned to the
same phylum, although these distinctions are controversial".
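For concreteness, here is a minimal sketch of this kind of agglomerative clustering using SciPy on an invented pairwise distance matrix; DOTUR and mothur compute distances from multiple sequence alignments rather than the toy values used here.

    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Toy pairwise distances (1 - identity) between five sequences.
    dist = [[0.00, 0.01, 0.02, 0.10, 0.12],
            [0.01, 0.00, 0.02, 0.11, 0.12],
            [0.02, 0.02, 0.00, 0.10, 0.11],
            [0.10, 0.11, 0.10, 0.00, 0.01],
            [0.12, 0.12, 0.11, 0.01, 0.00]]

    # 'complete', 'single' and 'average' correspond to DOTUR's maximum,
    # minimum and average linkage options.
    tree = linkage(squareform(dist), method="average")

    # Cutting the tree at distance 0.03 gives 97% OTUs.
    print(fcluster(tree, t=0.03, criterion="distance"))  # e.g. [1 1 1 2 2]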
QIIME (Caporaso
et al. 2010) uses the 97% threshold almost exclusively. Early versions
of QIIME used CD-HIT as the default OTU clustering method, which was later replaced by
UCLUST (Edgar
2013). I have not found any description of the intended interpretation
of QIIME OTUs in their papers or documentation (please let me know if I missed
it).
OTUs as a working definition of species
Imagine it is
the 1990s, and we have 16S sequences obtained by PCR and Sanger sequencing. At that time, relatively few 16S
sequences were available in public repositories, so most could not be
identified by database search. A pragmatic alternative is to make clusters
at 97% identity. These clusters can be treated as Sneath-Sokal OTUs which
tentatively assign sequences to species and higher-level groups.
This is a reasonable approach if, but only if, experimental errors
in the sequences are not
significant. In the early literature, errors were rarely (never?) considered,
presumably because Sanger sequencing has a very low error rate, and
also because the importance of
errors due to PCR, especially chimeras, was not widely known in the '90s (Ashelford
et al. 2005).
OTUs with next-generation reads
Next-generation
sequencing (NGS) emerged in the late 1990s (Wikipedia,
DNA sequencing) and revolutionized microbiology by enabling low-cost,
high-throughput sequencing of the 16S gene. These advantages came at the
expense of shorter read lengths and higher error rates compared to Sanger
sequencing. OTU clustering methods were applied to NGS reads, but it
gradually became clear that these methods were generating many spurious OTUs
due to experimental error, causing inflated estimates of diversity (see e.g.,
Huse et al.
2010). Some widely-used methods continue to generate large numbers of
spurious OTUs at the time of writing (mid-2017), notably QIIME, which
reports hundreds or thousands of OTUs
from Illumina reads of mock communities with ~20 species using recommended
procedures.
Goals for making OTUs as clusters of reads
Here, I
suggest some reasonable goals for making OTUs.
Operational definition of species-like groups
Sneath-Sokal classification of the
sequences. This is the historical motivation for making 16S OTUs.
Merging variation within a single strain into one cluster
Bacterial chromosomes often contain multiple
copies of the 16S gene (paralogs). Paralogs often have small sequence
differences. Clustering tends to merge paralogs into a single OTU. This is a
good thing if the goal is to get one OTU per phenotype, which makes sense
for most analyses.
Merging variation between strains of a single species into one cluster
Different strains of a given species, say E. coli, often have
differences in their 16S sequences. This adds a level of variation within a
species on top of the variation that may occur within a single strain due to
paralogs. Clustering tends to merge both levels of variation to create a
single OTU, or at least fewer OTUs, for the species. This is not such a good
thing if the goal is to get one OTU per phenotype, because different strains
can have important differences in phenotype (e.g., pathogenic or not). Also,
while the concept of a strain is reasonably well-defined (minimal variation
in the complete genome), the concept of a species is problematic for
bacteria (Gevers et al. 2005,
Doolittle and Papke
2006), so the goal of having one OTU per species is not so well-defined
or useful.
Merging variation due to experimental error into one cluster
Errors due to PCR and sequencing cause sequence variation
in the reads which is not biological in origin. Clustering tends to merge
bad sequences with correct biological sequences. To be effective, this
requires that sequences with >3% error are rare, because they always induce spurious OTUs (see e.g.
Edgar and Flyvbjerg 2014). A read with <3% error can also cause a
spurious OTU, depending on whether its correct sequence is
close to the center of the cluster (probably not) or close to the edge (more
likely).
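The arithmetic behind these claims is simple: a read of length L with e errors is at most 1 - e/L identical to its true template, so more than 3% errors always puts it below a 97% threshold. A minimal sketch (the function name is mine):

    def max_identity_to_template(read_len, n_errors):
        # Upper bound on identity between a read and its true biological
        # template: every error destroys at least one match.
        return 1.0 - n_errors / read_len

    # Example: a 250 nt read with 8 errors (3.2%) is at most 96.8% identical
    # to its template, so it cannot join the correct OTU at 97% and will
    # seed a spurious OTU if it survives filtering.
    print(max_identity_to_template(250, 8))  # 0.968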
Interpreting results obtained using OTUs
To do meaningful biology with OTUs, you need to know which goals
are intended (those I suggest above? others?), and how well those goals are
achieved in practice. This can be problematic, because authors are often
vague about their goals and sloppy in validating their methods. For example,
one approach found in the literature is to test the method on artificial
("mock") communities with known composition, and measure alpha diversity /
richness from the number of OTUs reported. If the number of OTUs is
approximately the same as the number of species, the method is declared to
be accurate. Some of these papers are deeply flawed. Algorithm parameters
may be tuned on the mock community data used for testing, which could result
in over-fitting. The
right number of OTUs could be obtained for the wrong reasons; for example,
because some of the OTUs have several species lumped into the same cluster,
while others are spurious, e.g. chimeric. In other cases, authors attempt to
use mock community diversity for validation, find that far too many OTUs are
generated, then ignore, downplay or obfuscate the result without
investigating where the spurious OTUs come from. If sources of error are not
understood and accounted for, then biological inferences are unreliable and
naive estimates of statistical significance can be misleading
because low P-values can be obtained for false hypotheses (Taylor
1997).
My definition of OTUs
In
USEARCH, my algorithms are designed to report OTUs which are
correct biological sequences.
By this definition, the connection to biology is clear and an OTU can be
objectively verified or falsified, especially in mock community tests where
the biological sequences are known.
UPARSE OTUs
In the case of the
UPARSE-OTU algorithm (cluster_otus
command), the goal is to report a
subset of the correct biological sequences such that (a) all
pairs of OTUs are <97% identical to each other, and (b) the OTU sequence is
the most abundant in its neighborhood. These can be interpreted as
Sneath-Sokal OTUs giving an operational approximation to species.
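A minimal sketch of greedy, abundance-ordered clustering in this spirit follows; it omits the chimera filtering performed by the real cluster_otus command, and the identity function is the same simplified stand-in used above rather than USEARCH's aligner.

    def identity(a, b):
        # Simplified stand-in for alignment-based identity.
        matches = sum(x == y for x, y in zip(a, b))
        return matches / max(len(a), len(b))

    def pick_otus(uniques, threshold=0.97):
        # uniques: unique sequences sorted by decreasing read abundance.
        # Because the input is abundance-sorted, any sequence that does not
        # match an existing OTU at >= 97% is the most abundant sequence in
        # its neighborhood and becomes a new OTU.
        otus = []
        for seq in uniques:
            if all(identity(seq, otu) < threshold for otu in otus):
                otus.append(seq)
        return otus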
Denoised OTUs (ZOTUs)
With the
UNOISE algorithm (unoise3
command), the goal is to report all
correct biological sequences in the reads. These are called
zero-radius OTUs, or ZOTUs. It is expected that some species may be split
over several ZOTUs due to intra-species variation (differences between
paralogs and differences between strains). The advantage of ZOTUs is that
they enable resolution of closely related strains with potentially different
phenotypes that would be lumped into the same cluster by 97% OTUs.
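For concreteness, here is a minimal sketch of the abundance-skew test at the heart of this kind of denoising. The threshold function beta(d) = 1/2^(alpha*d + 1) with default alpha = 2 follows the published UNOISE2 description; the rest (Hamming distances on equal-length sequences, no chimera filtering) is simplified.

    def beta(d, alpha=2.0):
        # Maximum abundance skew at which a sequence d differences away
        # can plausibly be an error of a more abundant sequence.
        return 1.0 / 2 ** (alpha * d + 1)

    def denoise(uniques):
        # uniques: list of (sequence, abundance) sorted by decreasing abundance.
        zotus = []
        for seq, abund in uniques:
            is_error = False
            for zseq, zabund in zotus:
                d = sum(x != y for x, y in zip(zseq, seq))  # Hamming distance
                if d > 0 and abund / zabund <= beta(d):
                    is_error = True  # explained as an error of an existing ZOTU
                    break
            if not is_error:
                zotus.append((seq, abund))
        return [seq for seq, _ in zotus]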
OTU abundances
By my definition, OTUs are sequences, not
clusters. Traditional OTU analysis is usually based on an
OTU table, which requires an abundance or
frequency for each OTU in each sample. I calculate abundances by counting
reads, while emphasizing that read abundance
has very low correlation with species abundance, and therefore the
biological interpretation of these
abundances is unclear.
For denoised sequences, the abundance of a ZOTU is intended to be the number of reads derived from that (one, unique) biological template sequence, counting correct reads and reads with errors.
With 97% OTUs (UPARSE), the abundance of an OTU is intended to be the number of reads derived from all biological sequences that are >=97% identical to the OTU sequence, counting correct reads and reads with errors. This definition is not quite enough, because a read can match two or more OTUs. If there is more than one choice, the read is assigned to the OTU with highest identity (not highest abundance) because the OTU with highest identity is more likely to belong to the same species. If the choice is still ambiguous because two or more OTUs are tied for highest identity, the tie is broken systematically by choosing the first OTU in database file order. This procedure ensures that reads for a given strain should be consistently assigned to the same OTU.
In practice, for both OTUs and ZOTUs, abundances are calculated by aligning
reads to OTUs at a 97% identity threshold using the
otutab command. A read is assigned to the most
similar OTU sequence, breaking ties systematically. In the case of 97% OTUs,
the motivation for using a 97% identity threshold is self-explanatory. With
ZOTUs, the motivation is to allow up to 3% errors due to sequencing and PCR.
It is expected that the ZOTUs already contain the true biological variation
and the most similar ZOTU will almost always be the correct biological
sequence for a read with errors.
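A minimal sketch of this assignment rule, with a simplified identity calculation standing in for the alignment performed by otutab:

    def assign_read(read, otus, threshold=0.97):
        # otus: OTU (or ZOTU) sequences in database file order.
        best_otu, best_id = None, 0.0
        for otu in otus:
            matches = sum(x == y for x, y in zip(read, otu))
            ident = matches / max(len(read), len(otu))
            # Strict '>' keeps the earlier OTU on ties, so ties are broken
            # systematically by database file order.
            if ident > best_id:
                best_otu, best_id = otu, ident
        # Reads below the 97% identity threshold are left unassigned.
        return best_otu if best_id >= threshold else None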