Makes the following changes to the `remove_duplicates` token filter
docs:
* Rewrites description and adds Lucene link
* Adds detailed analyze example (see the sketch after this list)
* Adds custom analyzer example
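A minimal sketch of that analyze example (the filters chained before `remove_duplicates` and the sample text are illustrative, not taken from the PR):

```console
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    "keyword_repeat",
    "stemmer",
    "remove_duplicates"
  ],
  "text": "jumping dog"
}
```

Here `keyword_repeat` emits each token twice and `stemmer` stems one copy; where stemming leaves a token unchanged (`dog`), `remove_duplicates` drops the identical duplicate at the same position.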
* New wildcard field optimised for wildcard queries (#49993)
Indexes values as ngrams of size 3 and also stores the full original value as a binary doc value.
Wildcard queries operate by running a cheap approximation query on the ngram field, followed by a more expensive verification query that runs an automaton against the binary doc values. The field also supports aggregations and sorting.
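A rough sketch of mapping and querying the new field type (index and field names are made up):

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "my_value": {
        "type": "wildcard"
      }
    }
  }
}

GET my-index/_search
{
  "query": {
    "wildcard": {
      "my_value": {
        "value": "*error*"
      }
    }
  }
}
```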
Keyword field values longer than `ignore_above` are not indexed, but highlighters were still retrieving these values from `_source` and trying to highlight them. This sometimes led to errors when a field's length exceeded `max_analyzed_offset`, and it is in any case wrong behaviour to attempt to highlight something that was ignored during indexing.
This PR checks whether a keyword value was ignored because of its length and, if so, skips highlighting it.
Backport: #53408
Closes #43800
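For context, a keyword field with `ignore_above` is mapped along these lines (a generic sketch, not taken from the PR); values longer than the limit stay in `_source` but are not indexed, and highlighters now skip them:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "message": {
        "type": "keyword",
        "ignore_above": 256
      }
    }
  }
}
```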
Adds a new parameter for classification that enables choosing whether to assign labels to
maximise accuracy or to maximise the minimum class recall.
Fixes #52427.
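A hedged sketch of how this could look in a classification job; the parameter name `class_assignment_objective` and the value `maximize_minimum_recall` reflect my understanding of how the option is exposed rather than the PR text, and the index and field names are made up:

```console
PUT _ml/data_frame/analytics/my-classification-job
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "class_assignment_objective": "maximize_minimum_recall"
    }
  }
}
```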
This changes the `top_metrics` aggregation to return metrics in their
original type. Since it only supports numerics, that means that dates,
longs, and doubles will come back as stored, with their appropriate
formatter applied.
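For example, a request along these lines (index and field names are illustrative) now returns the metric value typed as the field stores it, with its formatter applied:

```console
POST my-index/_search?size=0
{
  "aggs": {
    "latest_price": {
      "top_metrics": {
        "metrics": { "field": "price" },
        "sort": { "timestamp": "desc" }
      }
    }
  }
}
```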
This change removes Lucene's experimental flag from the documentation of the following tokenizers/filters:
* Simple Pattern Split Tokenizer
* Simple Pattern tokenizer
* Flatten Graph Token Filter
* Word Delimiter Graph Token Filter
The flag is still present in the Lucene codebase, but we have fully supported these tokenizers/filters in ES for a long time now, so the docs flag is misleading.
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Restructures the 'Update an enrich policy' section to:
* Migrate the content to the section. It was previously stored in the
Put Enrich Policy API docs.
* Remove the warning tag admonition from the section content.
* Replace a reused section earlier in the "Set up an enrich processor"
page with a link.
No substantive changes were made to the content.
Adds a new `default_field_map` field to trained model config objects.
This allows the model creator to supply a field map if they know that some mapping is needed for inference to work directly against the training data.
The use case internally is having analytics jobs supply a field mapping for multi-field fields. This allows us to use the model "out of the box" on data where we trained on `foo.keyword` but the `_source` only references `foo`.
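As a rough illustration (the model ID, field names, and exact config shape are assumptions, not taken from the PR), a model trained against `foo.keyword` could carry a map from the source field to the training field so that documents containing only `foo` can be used for inference:

```json
{
  "model_id": "my-analytics-model",
  "default_field_map": {
    "foo": "foo.keyword"
  }
}
```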
Makes the following changes to the `word_delimiter` token filter docs:
* Adds a warning admonition recommending the `word_delimiter_graph`
filter instead. This warning includes a link to the deprecated Lucene
`WordDelimiterFilter`.
* Updates the description
* Adds detailed analyze snippet (see the sketch after this list)
* Adds custom analyzer and custom filter snippets
* Reorganizes and updates parameter documentation
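A sketch of the kind of analyze snippet added (tokenizer choice and sample text are illustrative):

```console
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [ "word_delimiter" ],
  "text": "Neil's-Super-Duper-XL500--42+AutoCoder"
}
```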
In a tip admonition, we recommend using the `keyword` tokenizer with the
`word_delimiter_graph` token filter. However, we only use the
`whitespace` tokenizer in the example snippets. This updates those
snippets to use the `keyword` tokenizer instead.
Also corrects several spacing issues for arrays in these docs.
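After the update, such a snippet looks roughly like this (index and analyzer names are illustrative):

```console
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "word_delimiter_graph" ]
        }
      }
    }
  }
}
```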
Updates the SVG for a token graph to make the layout consistent with other graphs by moving the text so that it sits directly above the edge lines.
Adds a tip admonition to the basic example in the EQL search docs.
This tip lets users know they can set up a Beat to automatically
index data in ES, rather than manually indexing using the bulk or index
APIs.
Documents the `nodes` response parameters returned by the
`_cluster/stats` API.
Also adds collapsible attributes for the `indices` and `nodes`
sections.
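For reference, the response being documented comes from a request like:

```console
GET _cluster/stats?human&pretty
```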
Makes the following changes to the `word_delimiter_graph` token filter
docs:
* Updates the Lucene experimental admonition.
* Updates description
* Adds analyze snippet (see the sketch after this list)
* Adds custom analyzer and custom filter snippets
* Reorganizes and updates parameter list
* Expands and updates section re: differences between `word_delimiter`
and `word_delimiter_graph`
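A sketch of an analyze request using a customized `word_delimiter_graph` filter (the parameter shown and the sample text are illustrative):

```console
GET _analyze
{
  "tokenizer": "keyword",
  "filter": [
    {
      "type": "word_delimiter_graph",
      "preserve_original": true
    }
  ],
  "text": "XL500--42+AutoCoder"
}
```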
This commit introduces hidden aliases. These are similar to hidden
indices, in that they are not visible by default, unless explicitly
specified by name or by indicating that hidden indices/aliases are
desired.
The new alias property, `is_hidden`, is implemented similarly to `is_write_index`, except that it must be consistent across all indices with a given alias: the indices with a given alias must either all mark the alias as hidden or all mark it as non-hidden, either explicitly or by omitting the `is_hidden` property.
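A minimal sketch of marking an alias as hidden (index and alias names are made up):

```console
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index",
        "alias": "my-hidden-alias",
        "is_hidden": true
      }
    }
  ]
}
```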
7.5 and 7.6 had a regression that allowed `script_score` queries to produce negative scores. We corrected this regression in #52478. This PR is an addition to #52478 that adds a test and release notes.
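For illustration (index and field names are made up), a script like the one below can produce a negative score for documents where `rank` is small; with the regression corrected, such a query fails with an error instead of returning negative scores:

```console
GET my-index/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "doc['rank'].value - 10"
      }
    }
  }
}
```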
Adds documentation for the `any` keyword to the EQL syntax docs.
Includes:
* Definition of an event category and its relationship to the event
category field.
* Example matching all event categories using `any` keyword
* Example using `any` with `where true`
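A sketch of such a query (index name is illustrative); `any where true` matches events of any category:

```console
GET my-index/_eql/search
{
  "query": "any where true"
}
```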
Updates the documented default `event_category_field` and `timestamp_field`
values for the EQL search API. Also updates related guidance in the
EQL requirement docs.
Relates to #53073.
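For reference, both fields can also be set explicitly in the search request; the values shown here are illustrative and not necessarily the documented defaults:

```console
GET my-index/_eql/search
{
  "event_category_field": "event.category",
  "timestamp_field": "@timestamp",
  "query": "process where true"
}
```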
Per the [Asciidoctor docs][0], Asciidoctor replaces the following
syntax with double arrows in the rendered HTML:
* `=>` renders as ⇒
* `<=` renders as ⇐
This escapes several unintended replacements, such as in the Painless
docs.
Where appropriate, it also replaces some double arrow instances with
single arrows for consistency.
[0]: https://asciidoctor.org/docs/user-manual/#replacements