Mirror of https://github.com/honeymoose/OpenSearch.git, synced 2025-02-05 20:48:22 +00:00
b9a09c2b06

* Add `irish` analyzer
* Add `sorani` analyzer (Kurdish)
* Add `classic` tokenizer: specific to English text; tries to recognize hostnames, companies, acronyms, etc.
* Add `thai` tokenizer: segments Thai text into words
* Add `classic` tokenfilter: cleans up acronyms and possessives from the classic tokenizer
* Add `apostrophe` tokenfilter: removes text after an apostrophe, and the apostrophe itself
* Add `german_normalization` tokenfilter: umlaut/sharp-S normalization
* Add `hindi_normalization` tokenfilter: accounts for Hindi spelling differences
* Add `indic_normalization` tokenfilter: accounts for different Unicode representations in Indian languages
* Add `sorani_normalization` tokenfilter: normalizes Kurdish text
* Add `scandinavian_normalization` tokenfilter: normalizes Norwegian, Danish, and Swedish text
* Add `scandinavian_folding` tokenfilter: a much more aggressive form of `scandinavian_normalization`
* Add additional languages to the stemmer tokenfilter: `galician`, `minimal_galician`, `irish`, `sorani`, `light_nynorsk`, `minimal_nynorsk`
* Add access to the default Thai stopword set `_thai_`
* Fix some bugs and broken links in the documentation

Closes #5935
35 lines, 961 B, Plaintext
[[analysis-tokenizers]]
== Tokenizers

Tokenizers are used to break a string down into a stream of terms
or tokens. A simple tokenizer might split the string up into terms
wherever it encounters whitespace or punctuation.
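
As a quick illustration of this behaviour, the built-in `whitespace`
tokenizer splits only on whitespace. The request below is a minimal
sketch using the analyze API; it assumes a node listening on
`localhost:9200`, and the sample text is made up:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_analyze?tokenizer=whitespace&pretty' \
    -d 'The quick brown fox.'
--------------------------------------------------

This returns the tokens `The`, `quick`, `brown`, and `fox.`; the
trailing punctuation is kept because only whitespace is treated as a
boundary.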

Elasticsearch has a number of built-in tokenizers which can be
used to build <<analysis-custom-analyzer,custom analyzers>>.
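
A custom analyzer typically combines one of these tokenizers with a set
of token filters in the index settings. The example below is a sketch of
that wiring; the index name `my_index` and analyzer name `my_analyzer`
are placeholders chosen for illustration:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type":      "custom",
                    "tokenizer": "standard",
                    "filter":    ["lowercase"]
                }
            }
        }
    }
}'
--------------------------------------------------

Any of the tokenizers documented below can be used in place of
`standard` in the `tokenizer` setting.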

include::tokenizers/standard-tokenizer.asciidoc[]

include::tokenizers/edgengram-tokenizer.asciidoc[]

include::tokenizers/keyword-tokenizer.asciidoc[]

include::tokenizers/letter-tokenizer.asciidoc[]

include::tokenizers/lowercase-tokenizer.asciidoc[]

include::tokenizers/ngram-tokenizer.asciidoc[]

include::tokenizers/whitespace-tokenizer.asciidoc[]

include::tokenizers/pattern-tokenizer.asciidoc[]

include::tokenizers/uaxurlemail-tokenizer.asciidoc[]

include::tokenizers/pathhierarchy-tokenizer.asciidoc[]

include::tokenizers/classic-tokenizer.asciidoc[]

include::tokenizers/thai-tokenizer.asciidoc[]