[[analyzer-anatomy]]
=== Anatomy of an analyzer

An _analyzer_ -- whether built-in or custom -- is just a package which
contains three lower-level building blocks: _character filters_,
_tokenizers_, and _token filters_.

The built-in <<analysis-analyzers,analyzers>> pre-package these building
blocks into analyzers suitable for different languages and types of text.
Elasticsearch also exposes the individual building blocks so that they can be
combined to define new <<analysis-custom-analyzer,`custom`>> analyzers.
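
For example, a custom analyzer that combines one building block of each kind
might be defined with a sketch like the following. The index name `my-index`,
the analyzer name `my_custom_analyzer`, and the particular blocks chosen here
are illustrative only:

[source,console]
----
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ], <1>
          "tokenizer": "standard", <2>
          "filter": [ "lowercase" ] <3>
        }
      }
    }
  }
}
----
<1> Zero or more character filters, applied in order.
<2> Exactly one tokenizer.
<3> Zero or more token filters, applied in order.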

[[analyzer-anatomy-character-filters]]
==== Character filters

A _character filter_ receives the original text as a stream of characters and
can transform the stream by adding, removing, or changing characters. For
instance, a character filter could be used to convert Hindu-Arabic numerals
(٠١٢٣٤٥٦٧٨٩) into their Arabic-Latin equivalents (0123456789), or to strip HTML
elements like `<b>` from the stream.

An analyzer may have *zero or more* <<analysis-charfilters,character filters>>,
which are applied in order.
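
As a sketch of this behavior, a character filter can be tested in isolation
with the `_analyze` API. Here the built-in `html_strip` character filter
removes the `<b>` element before tokenization; the `keyword` tokenizer is used
only so that the text passes through as a single token:

[source,console]
----
GET /_analyze
{
  "char_filter": [ "html_strip" ],
  "tokenizer": "keyword",
  "text": "<b>Quick</b> brown fox!"
}
----

The response contains the single token `Quick brown fox!`, with the HTML
element already gone before the tokenizer ever saw the text.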

[[analyzer-anatomy-tokenizer]]
==== Tokenizer

A _tokenizer_ receives a stream of characters, breaks it up into individual
_tokens_ (usually individual words), and outputs a stream of these tokens. For
instance, a <<analysis-whitespace-tokenizer,`whitespace`>> tokenizer breaks
text into tokens whenever it sees any whitespace. It would convert the text
`"Quick brown fox!"` into the terms `[Quick, brown, fox!]`.

The tokenizer is also responsible for recording the order or _position_ of
each term and the start and end _character offsets_ of the original word which
the term represents.

An analyzer must have *exactly one* <<analysis-tokenizers,tokenizer>>.
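
To see both the tokens and the positions and offsets recorded for them, a
sketch using the `_analyze` API with the `whitespace` tokenizer:

[source,console]
----
GET /_analyze
{
  "tokenizer": "whitespace",
  "text": "Quick brown fox!"
}
----

The response lists the three tokens `Quick`, `brown`, and `fox!`, each with a
`position` (0, 1, and 2) and with `start_offset` and `end_offset` values that
point back into the original text.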

[[analyzer-anatomy-token-filters]]
==== Token filters

A _token filter_ receives the token stream and may add, remove, or change
tokens. For example, a <<analysis-lowercase-tokenfilter,`lowercase`>> token
filter converts all tokens to lowercase, a
<<analysis-stop-tokenfilter,`stop`>> token filter removes common words
(_stop words_) like `the` from the token stream, and a
<<analysis-synonym-tokenfilter,`synonym`>> token filter introduces synonyms
into the token stream.

Token filters are not allowed to change the position or character offsets of
each token.

An analyzer may have *zero or more* <<analysis-tokenfilters,token filters>>,
which are applied in order.
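
As a final sketch, token filters can be tested the same way. Here the
`lowercase` and `stop` token filters run in order over the output of the
`standard` tokenizer; the sample text is illustrative only:

[source,console]
----
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase", "stop" ],
  "text": "The QUICK brown foxes"
}
----

The `lowercase` filter first converts `The` to `the`, and the `stop` filter
then removes it as a stop word, leaving `quick`, `brown`, and `foxes`. Note
that `quick` still reports position `1`: removing a token leaves a gap rather
than renumbering the tokens that follow, consistent with token filters not
being allowed to change positions.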