[[analysis-tokenizers]]
== Tokenizers

Tokenizers are used to break a string down into a stream of terms or
tokens. A simple tokenizer might split the string up into terms wherever it
encounters whitespace or punctuation.

Elasticsearch has a number of built-in tokenizers which can be used to build
<<analysis-custom-analyzer,custom analyzers>>. Any tokenizer can also be
tried out on its own with the `_analyze` API; a short example is shown at the
end of this chapter.

include::tokenizers/standard-tokenizer.asciidoc[]
include::tokenizers/edgengram-tokenizer.asciidoc[]
include::tokenizers/keyword-tokenizer.asciidoc[]
include::tokenizers/letter-tokenizer.asciidoc[]
include::tokenizers/lowercase-tokenizer.asciidoc[]
include::tokenizers/ngram-tokenizer.asciidoc[]
include::tokenizers/whitespace-tokenizer.asciidoc[]
include::tokenizers/pattern-tokenizer.asciidoc[]
include::tokenizers/uaxurlemail-tokenizer.asciidoc[]
include::tokenizers/pathhierarchy-tokenizer.asciidoc[]
include::tokenizers/classic-tokenizer.asciidoc[]
include::tokenizers/thai-tokenizer.asciidoc[]
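To see exactly how a tokenizer breaks text into tokens, it can be exercised
directly through the `_analyze` API. The request below is a minimal sketch,
assuming a version of Elasticsearch whose `_analyze` endpoint accepts the
`tokenizer` and `text` parameters in the request body; the choice of the
`whitespace` tokenizer and the sample text are purely illustrative:

[source,console]
--------------------------------------------------
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "The QUICK brown fox."
}
--------------------------------------------------

The response lists each emitted token together with its position and
character offsets, which makes it easy to compare how the different
tokenizers documented in this chapter handle the same input.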