The current link points to an obsolete site, which is no longer maintained.
Co-authored-by: Stefan Walter <67258699+rd-stefan-walter@users.noreply.github.com>
Plugin discovery documentation contained information about installing
Elasticsearch 2.0 and installing an Oracle JDK, both of which are no
longer valid.
Since the instructions used cleartext HTTP to install packages, this
commit also replaces HTTP links with HTTPS where possible.
In addition, a few community links have been removed, as they no longer
seem to exist.
Co-authored-by: Alexander Reelsen <alexander@reelsen.net>
This commit allows customizing the word delimiter token filters to skip
processing tokens tagged as keywords, using the `ignore_keywords` flag that
Lucene's WordDelimiterGraphFilter already exposes.
Fix for #59491
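As an illustration, a minimal `_analyze` sketch of the new flag (the
`keyword_marker` keyword list and sample text are hypothetical;
`ignore_keywords` defaults to `false`):

```console
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    { "type": "keyword_marker", "keywords": [ "wi-fi" ] },
    { "type": "word_delimiter_graph", "ignore_keywords": true }
  ],
  "text": "wi-fi hot-spot"
}
```

With `ignore_keywords` enabled, the marked `wi-fi` token passes through
unsplit, while `hot-spot` is still split into `hot` and `spot`.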
Changes the titles for tokenizer pages to sentence case.
Also moves the 'Path hierarchy tokenizer examples' content into the
'Path hierarchy tokenizer' page and adds a related redirect.
Changes:
* Adds title abbreviation
* Adds Lucene link to description
* Adds standard headings
* Simplifies analyze example
* Simplifies analyzer example and adds contextual text
This commit adds support for rules with multiple tokens on the LHS, also
known as "contraction rules", to the stemmer override token
filter. Contraction rules are handy for translating multiple
inflected words into the same root form. One side effect of this change is
that it brings the stemmer override rule format closer to the synonym rule
format, making it easier to translate one into the other.
This change also makes the stemmer override rule parser stricter, so
it should catch more errors that were previously accepted.
Closes #56113
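For example, a sketch of the extended rule format, with multiple
comma-separated tokens on the LHS of a rule (index and filter names are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stemmer_override": {
          "type": "stemmer_override",
          "rules": [
            "running, runs => run",
            "stemmed, stemming, stems => stem"
          ]
        }
      }
    }
  }
}
```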
Changes:
* Rewrites description and adds a Lucene link
* Reformats the configurable parameters as a definition list
* Changes the `Theory` heading to `Using the min_hash token filter for
similarity search`
* Adds some additional detail to the analyzer example
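For context, a condensed sketch of the kind of analyzer the page
documents, pairing a shingle filter with `min_hash` (names and parameter
values are illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_shingles": {
          "type": "shingle",
          "min_shingle_size": 5,
          "max_shingle_size": 5,
          "output_unigrams": false
        },
        "my_minhash": {
          "type": "min_hash",
          "hash_count": 1,
          "bucket_count": 512,
          "hash_set_size": 1,
          "with_rotation": true
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_shingles", "my_minhash" ]
        }
      }
    }
  }
}
```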
Makes the following changes to the `porter_stem` token filter docs:
* Rewrites description and adds a Lucene link
* Adds detailed analyze example
* Adds an analyzer example
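A minimal `_analyze` request of the kind added (the sample text is
illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "porter_stem" ],
  "text": "the foxes jumping quickly"
}
```

This stems the tokens to `the`, `fox`, `jump`, `quickli`.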
Makes the following changes to the `kstem` token filter docs:
* Rewrites description and adds a Lucene link
* Adds detailed analyze example
* Adds an analyzer example
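A sketch of the analyzer example; the `kstem` filter expects lowercase
tokens, hence the preceding `lowercase` filter (names are illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "kstem" ]
        }
      }
    }
  }
}
```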
The Lucene `preserve_original` setting is currently not supported in the `edge_ngram`
token filter. This change adds it with a default value of `false`.
Closes #55767
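A minimal sketch of the new setting (gram sizes and text are
illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "edge_ngram", "min_gram": 2, "max_gram": 3, "preserve_original": true }
  ],
  "text": "quick"
}
```

This emits `qu`, `qui`, and, because `preserve_original` is `true`, the
original token `quick`.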
Makes the following changes to the `stemmer` token filter docs:
* Adds detailed analyze example
* Rewrites parameter definitions
* Adds custom analyzer example
* Adds a `language` value for the `estonian` stemmer
* Reorders the `language` values to show recommended algorithms first,
followed by other values alphabetically
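A sketch of the new `language` value in use (index, filter, and analyzer
names are illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "estonian"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_stemmer" ]
        }
      }
    }
  }
}
```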
Adds conceptual documentation for stemming, including:
* An overview of why stemming is helpful in search
* Algorithmic vs. dictionary stemming
* Token filters used to control stemming, such as `stemmer_override`, `keyword_marker`, and `conditional`
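As an illustration of conditional stemming, a minimal sketch using the
`condition` token filter (the length predicate and names are
hypothetical):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "stem_long_tokens": {
          "type": "condition",
          "filter": [ "stemmer" ],
          "script": {
            "source": "token.getTerm().length() > 4"
          }
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "stem_long_tokens" ]
        }
      }
    }
  }
}
```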
* [DOCS] Reformat `flatten_graph` token filter
Makes the following changes to the `flatten_graph` token filter docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Adds analyzer example
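A condensed sketch of the analyzer example, where `flatten_graph` makes
the multi-token synonym output safe to index (names and the synonym are
illustrative):

```console
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms": [ "dns, domain name system" ]
        }
      },
      "analyzer": {
        "my_index_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_synonyms", "flatten_graph" ]
        }
      }
    }
  }
}
```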
Creates a reusable template for token filter reference documentation.
Contributors can make a copy of this template and customize it when
documenting new token filters.
Makes the following changes to the `keyword_marker` token filter docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Rewrites parameter definitions
* Adds custom analyzer and filter example
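A minimal `_analyze` sketch of the filter protecting a token from
stemming (keyword list and text are illustrative):

```console
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    { "type": "keyword_marker", "keywords": [ "jumping" ] },
    "stemmer"
  ],
  "text": "fox running and jumping"
}
```

This yields `fox`, `run`, `and`, `jumping`: `running` is stemmed, while
the marked `jumping` is left intact.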
Adds conceptual docs for token graphs.
These docs cover:
* How a token graph is constructed from a token stream
* How synonyms and multi-position tokens impact token graphs
* How token graphs are used during search
* Why some token filters produce invalid token graphs
Also makes the following supporting changes:
* Adds anchors to the 'Anatomy of an Analyzer' docs for cross-linking
* Adds several SVGs for token graph diagrams
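To see a multi-position token in a graph, a `_analyze` sketch with a
multi-word synonym (the synonym and text are illustrative):

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "synonym_graph", "synonyms": [ "dns, domain name system" ] }
  ],
  "text": "dns is fragile"
}
```

In the response, the single-token synonym is given a `positionLength`
spanning the multi-token one, which is the graph structure the new docs
describe.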
Makes the following changes to the `remove_duplicates` token filter
docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example
* Adds custom analyzer example
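A sketch of the analyze example, using `keyword_repeat` plus `stemmer` to
produce duplicates that the filter then removes (sample text is
illustrative):

```console
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [ "keyword_repeat", "stemmer", "remove_duplicates" ],
  "text": "jumping dog"
}
```

`keyword_repeat` emits each token twice and `stemmer` leaves `dog`
unchanged, so the duplicate `dog` at the same position is dropped by
`remove_duplicates`.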