[DOCS] Retitle analysis reference pages (#51071)
* Changes titles to sentence case.
* Appends pages with 'reference' to differentiate their content from conceptual overviews.
* Moves the 'Normalizers' page to the end of the Analysis topic pages.
commit 1edaf2b101 (parent d590150ca2)
@@ -150,11 +150,11 @@ include::analysis/testing.asciidoc[]
 
 include::analysis/analyzers.asciidoc[]
 
-include::analysis/normalizers.asciidoc[]
-
 include::analysis/tokenizers.asciidoc[]
 
 include::analysis/tokenfilters.asciidoc[]
 
 include::analysis/charfilters.asciidoc[]
+
+include::analysis/normalizers.asciidoc[]
 
@@ -1,5 +1,5 @@
 [[analysis-analyzers]]
-== Analyzers
+== Built-in analyzer reference
 
 Elasticsearch ships with a wide range of built-in analyzers, which can be used
 in any index without further configuration:
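For context on what these pages document: an Elasticsearch analyzer chains zero or more character filters, exactly one tokenizer, and zero or more token filters. A minimal Python sketch of that chain (illustrative only; the real implementation lives in Lucene, and the helper names here are made up):

```python
# Conceptual sketch of an analysis chain: char filters -> tokenizer -> token filters.
def analyze(text, char_filters, tokenizer, token_filters):
    for cf in char_filters:          # rewrite the raw character stream
        text = cf(text)
    tokens = tokenizer(text)         # break characters into tokens
    for tf in token_filters:         # transform the token stream
        tokens = tf(tokens)
    return tokens

# Crude stand-ins for built-in behavior, for illustration only.
strip_punct = lambda s: "".join(c if c.isalnum() or c.isspace() else " " for c in s)
whitespace_tokenize = lambda s: s.split()
lowercase = lambda toks: [t.lower() for t in toks]

print(analyze("Quick, Brown Fox!", [strip_punct], whitespace_tokenize, [lowercase]))
# -> ['quick', 'brown', 'fox']
```

The built-in analyzers these pages list are prepackaged combinations of such components, usable without any index configuration.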
@@ -1,5 +1,5 @@
 [[analysis-charfilters]]
-== Character Filters
+== Character filters reference
 
 _Character filters_ are used to preprocess the stream of characters before it
 is passed to the <<analysis-tokenizers,tokenizer>>.
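A character filter operates on raw text before tokenization. A mapping-style replacement sketch in Python (the mapping table is invented for the example; this is not Elasticsearch's implementation):

```python
# Sketch of a mapping-style character filter: rewrite substrings in the
# character stream before the tokenizer ever sees them.
def mapping_char_filter(text, mappings):
    for old, new in mappings.items():
        text = text.replace(old, new)
    return text

print(mapping_char_filter("I :) tea", {":)": "happy"}))
# -> "I happy tea"
```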
@@ -1,5 +1,5 @@
 [[analysis-tokenfilters]]
-== Token Filters
+== Token filter reference
 
 Token filters accept a stream of tokens from a
 <<analysis-tokenizers,tokenizer>> and can modify tokens
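A token filter is conceptually a function from one token stream to another; filters compose in order. A sketch of two common behaviors, lowercasing and stopword removal (the stopword list is a tiny invented example):

```python
# Sketch of token filters: token stream in, token stream out.
STOPWORDS = {"the", "a", "an"}  # tiny illustrative stopword list

def lowercase_filter(tokens):
    return [t.lower() for t in tokens]

def stop_filter(tokens):
    return [t for t in tokens if t not in STOPWORDS]

print(stop_filter(lowercase_filter(["The", "Quick", "Fox"])))
# -> ['quick', 'fox']
```

Note that order matters: running `stop_filter` before `lowercase_filter` would miss "The", since the stopword set is lowercase.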
@@ -1,5 +1,5 @@
 [[analysis-tokenizers]]
-== Tokenizers
+== Tokenizer reference
 
 A _tokenizer_ receives a stream of characters, breaks it up into individual
 _tokens_ (usually individual words), and outputs a stream of _tokens_. For
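The tokenizer step can be sketched as a single function from characters to tokens. A simplistic word-boundary split in Python (illustrative only; real tokenizers also track offsets, positions, and token types):

```python
import re

# Sketch of a tokenizer: characters in, tokens out (split on non-word characters).
def simple_tokenizer(text):
    return [t for t in re.split(r"\W+", text) if t]

print(simple_tokenizer("Quick brown-fox!"))
# -> ['Quick', 'brown', 'fox']
```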