Changes the titles for tokenizer pages to sentence case.
Also moves the 'Path hierarchy tokenizer examples' page within the
'Path hierarchy tokenizer' page and adds a related redirect.
Changes:
* Adds title abbreviation
* Adds Lucene link to description
* Adds standard headings
* Simplifies analyze example
* Simplifies analyzer example and adds contextual text
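For reference, a minimal sketch of the kind of simplified analyze request described
above (assuming the `path_hierarchy` tokenizer; the sample text is illustrative):

```console
POST _analyze
{
  "tokenizer": "path_hierarchy",
  "text": "/one/two/three"
}
```

This emits the tokens `/one`, `/one/two`, and `/one/two/three`.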
This commit adds support for rules with multiple tokens on the LHS, also
known as "contraction rules", to the stemmer override token
filter. Contraction rules are handy for translating multiple
inflected words into the same root form. As a side effect, this change
brings the stemmer override rules format closer to the synonym rules
format, making it easier to translate one into the other.
This change also makes the stemmer override rules parser stricter, so
it should catch more errors that were previously accepted.
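A minimal sketch of a contraction rule, assuming the comma-separated
left-hand-side syntax described here (index, filter, and analyzer names
are illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stemmer_override": {
          "type": "stemmer_override",
          "rules": [ "running, runs => run" ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_stemmer_override" ]
        }
      }
    }
  }
}
```

Here both `running` and `runs` map to the root form `run`, and the rule
reads much like a `=>`-style synonym rule.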
Closes #56113
Changes:
* Rewrites description and adds a Lucene link
* Reformats the configurable parameters as a definition list
* Changes the `Theory` heading to `Using the min_hash token filter for
similarity search`
* Adds some additional detail to the analyzer example
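As a rough sketch of the similarity-search setup the new heading covers,
`min_hash` is typically chained after a `shingle` filter (all names and
parameter values below are illustrative, not recommendations):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_shingles": {
          "type": "shingle",
          "min_shingle_size": 5,
          "max_shingle_size": 5,
          "output_unigrams": false
        },
        "my_min_hash": {
          "type": "min_hash",
          "hash_count": 1,
          "bucket_count": 512,
          "hash_set_size": 1,
          "with_rotation": true
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_shingles", "my_min_hash" ]
        }
      }
    }
  }
}
```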
Makes the following changes to the `porter_stem` token filter docs:
* Rewrites description and adds a Lucene link
* Adds detailed analyze example
* Adds an analyzer example
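For example, an analyze request of the kind described (the exact sample
text in the docs may differ):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "porter_stem" ],
  "text": "the foxes jumping quickly"
}
```

returns the stemmed tokens `the`, `fox`, `jump`, and `quickli`.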
Makes the following changes to the `kstem` token filter docs:
* Rewrites description and adds a Lucene link
* Adds detailed analyze example
* Adds an analyzer example
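And a corresponding analyzer sketch (the `kstem` filter expects
lowercase input, hence the `lowercase` filter in front; all names are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_kstem_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "kstem" ]
        }
      }
    }
  }
}
```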
The Lucene `preserve_original` setting is currently not supported in the `edge_ngram`
token filter. This change adds it with a default value of `false`.
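A sketch of the new setting in use (index and filter names are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_edge_ngrams": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 5,
          "preserve_original": true
        }
      }
    }
  }
}
```

With `preserve_original` set to `true`, the filter emits the original
token in addition to its edge n-grams; the default of `false` keeps the
previous behavior.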
Closes #55767
Makes the following changes to the `stemmer` token filter docs:
* Adds detailed analyze example
* Rewrites parameter definitions
* Adds custom analyzer example
* Adds a `language` value for the `estonian` stemmer
* Reorders the `language` values to show recommended algorithms first,
followed by other values alphabetically
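For instance, the newly documented value in a custom filter (names are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "estonian"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_stemmer" ]
        }
      }
    }
  }
}
```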
Adds conceptual documentation for stemming, including:
* An overview of why stemming is helpful in search
* Algorithmic vs. dictionary stemming
* Token filters used to control stemming, such as `stemmer_override`, `keyword_marker`, and `conditional`
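As a rough sketch of the last point, the `condition` filter type
(surfaced in these docs as conditional stemming) can restrict stemming
to tokens matching a script; the length threshold here is an arbitrary
illustration:

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "condition",
      "filter": [ "porter_stem" ],
      "script": {
        "source": "token.getTerm().length() > 5"
      }
    }
  ],
  "text": "running quickly past the dogs"
}
```

Only `running` and `quickly` are long enough to be stemmed; the shorter
tokens pass through unchanged.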
* [DOCS] Reformat `flatten_graph` token filter
Makes the following changes to the `flatten_graph` token filter docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Adds analyzer example
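A sketch of the typical pairing the analyzer example covers: a
graph-producing filter such as `synonym_graph` followed by
`flatten_graph` to make its output safe for indexing (names are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms": [ "dns, domain name system" ]
        }
      },
      "analyzer": {
        "my_index_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_synonyms", "flatten_graph" ]
        }
      }
    }
  }
}
```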
Creates a reusable template for token filter reference documentation.
Contributors can make a copy of this template and customize it when
documenting new token filters.
Makes the following changes to the `keyword_marker` token filter docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Rewrites parameter definitions
* Adds custom analyzer and filter example
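For example, a request in the spirit of the new example, protecting one
term from a stemmer (the keyword list is illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "keyword_marker",
      "keywords": [ "jumping" ]
    },
    "porter_stem"
  ],
  "text": "fox running and jumping"
}
```

This returns `fox`, `run`, `and`, and `jumping`: `running` is stemmed,
while the marked `jumping` passes through untouched.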
Adds conceptual docs for token graphs.
These docs cover:
* How a token graph is constructed from a token stream
* How synonyms and multi-position tokens impact token graphs
* How token graphs are used during search
* Why some token filters produce invalid token graphs
Also makes the following supporting changes:
* Adds anchors to the 'Anatomy of an Analyzer' docs for cross-linking
* Adds several SVGs for token graph diagrams
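To make the synonym point concrete, a multi-token synonym is one way to
get a multi-position token (the synonym rule and sample text are
illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "synonym_graph",
      "synonyms": [ "dns, domain name system" ]
    }
  ],
  "text": "dns is fragile"
}
```

In the resulting graph, `dns` spans the three positions occupied by
`domain`, `name`, and `system`, which is exactly the multi-position
case the diagrams depict.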
Makes the following changes to the `remove_duplicates` token filter
docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Adds custom analyzer example
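As a sketch of a typical pairing (sample text illustrative):
`keyword_repeat` emits a keyword-marked copy of each token alongside the
original, the stemmer only alters the unmarked copies, and
`remove_duplicates` drops the copies that end up identical:

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "keyword_repeat", "porter_stem", "remove_duplicates" ],
  "text": "jumping dog"
}
```

Without `remove_duplicates`, `dog` would appear twice at the same
position, since stemming leaves it unchanged.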
This change removes Lucene's experimental flag from the documentation of the following
tokenizers/filters:
* Simple Pattern Split Tokenizer
* Simple Pattern Tokenizer
* Flatten Graph Token Filter
* Word Delimiter Graph Token Filter
The flag is still present in the Lucene codebase, but we've fully supported these
tokenizers/filters in ES for a long time now, so the docs flag is misleading.
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Makes the following changes to the `word_delimiter` token filter docs:
* Adds a warning admonition recommending the `word_delimiter_graph`
filter instead. This warning includes a link to the deprecated Lucene
`WordDelimiterFilter`.
* Updates the description
* Adds detailed analyze snippet
* Adds custom analyzer and custom filter snippets
* Reorganizes and updates parameter documentation
In a tip admonition, we recommend using the `keyword` tokenizer with the
`word_delimiter_graph` token filter. However, we only use the
`whitespace` tokenizer in the example snippets. This updates those
snippets to use the `keyword` tokenizer instead.
Also corrects several spacing issues for arrays in these docs.
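A sketch of the recommended combination (the sample string is
illustrative):

```console
GET /_analyze
{
  "tokenizer": "keyword",
  "filter": [ "word_delimiter_graph" ],
  "text": "Neil's-Super-Duper-XL500--42+AutoCoder"
}
```

With the `keyword` tokenizer, the filter receives the whole string as a
single token and performs all of the splitting itself.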
Makes the following changes to the `word_delimiter_graph` token filter
docs:
* Updates the Lucene experimental admonition.
* Updates description
* Adds analyze snippet
* Adds custom analyzer and custom filter snippets
* Reorganizes and updates parameter list
* Expands and updates section re: differences between `word_delimiter`
and `word_delimiter_graph`
Per the [Asciidoctor docs][0], Asciidoctor replaces the following
syntax with double arrows in the rendered HTML:
* => renders as ⇒
* <= renders as ⇐
This escapes several unintended replacements, such as in the Painless
docs.
Where appropriate, it also replaces some double arrow instances with
single arrows for consistency.
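For illustration, a replacement can be prevented by escaping the
sequence with a leading backslash (per the same manual):

```asciidoc
=> is rendered as a double arrow, while \=> stays as the literal characters.
```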
[0]: https://asciidoctor.org/docs/user-manual/#replacements
Makes the following changes to the `stop` token filter docs:
* Updates description
* Adds a link to the related Lucene filter
* Adds detailed analyze snippet
* Updates custom analyzer and custom filter snippets
* Adds a list of predefined stop words by language
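For example, the predefined English list in an analyze request (sample
text illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "stop",
      "stopwords": "_english_"
    }
  ],
  "text": "a quick fox jumps over the lazy dog"
}
```

This drops `a` and `the`, leaving `quick`, `fox`, `jumps`, `over`,
`lazy`, and `dog`.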
Co-authored-by: ScottieL <36999642+ScottieL@users.noreply.github.com>
Makes the following changes to the `trim` token filter docs:
* Updates description
* Adds a link to the related Lucene filter
* Adds tip about removing whitespace using tokenizers
* Adds detailed analyze snippets
* Adds custom analyzer snippet
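For example, paired with the `keyword` tokenizer, which preserves the
leading and trailing whitespace that most other tokenizers would
already strip:

```console
GET /_analyze
{
  "tokenizer": "keyword",
  "filter": [ "trim" ],
  "text": " fox "
}
```

This produces the single token `fox`; note that `trim` does not change
the token's character offsets.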
* [DOCS] Rewrite analysis intro. Move index/search analysis content.
* Rewrites 'Text analysis' page intro as high-level definition.
Adds guidance on when users should configure text analysis
* Rewrites and splits index/search analysis content:
* Conceptual content -> 'Index and search analysis' under 'Concepts'
* Task-based content -> 'Specify an analyzer' under 'Configure...'
* Adds detailed examples for when to use the same index/search analyzer
and when not.
* Adds new example snippets for specifying search analyzers
* Clarifications
* Add toc. Decrement headings.
* Reword 'When to configure' section
* Remove sentence from tip
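A sketch of the different-analyzers case (all names and n-gram sizes
are illustrative): edge n-grams at index time support prefix matching,
while the search analyzer leaves query terms whole so they aren't
expanded into n-grams themselves.

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_edge_ngrams": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10
        }
      },
      "analyzer": {
        "my_index_time_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_edge_ngrams" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_index_time_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}
```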
Adds a 'Configure text analysis' page to house tutorial content for the
analysis topic.
Also relocates the following pages as children of this new page:
* 'Test an analyzer'
* 'Configuring built-in analyzers'
* 'Create a custom analyzer'
I plan to add a tutorial for specifying index-time and search-time
analyzers to this section as part of a future PR.