The "Conditionals with the Pipeline Processor" incorrectly documents
how to create a pipeline of pipelines with a failure condition. The
example as-is will always execute the fail processor. The change here
updates the documentation to correct guard the fail processor with an
if condition.
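A minimal sketch of the corrected pattern, assuming hypothetical sub-pipeline
names (`httpd_pipeline`, `syslog_pipeline`) and a `service.name` field; each
processor carries its own `if` condition, so the `fail` processor only runs
when no sub-pipeline matched:

```console
PUT _ingest/pipeline/logs_pipeline
{
  "description": "A pipeline of pipelines for log files",
  "processors": [
    {
      "pipeline": {
        "if": "ctx.service?.name == 'apache_httpd'",
        "name": "httpd_pipeline"
      }
    },
    {
      "pipeline": {
        "if": "ctx.service?.name == 'syslog'",
        "name": "syslog_pipeline"
      }
    },
    {
      "fail": {
        "if": "ctx.service?.name != 'apache_httpd' && ctx.service?.name != 'syslog'",
        "message": "This pipeline requires service.name to be either apache_httpd or syslog"
      }
    }
  ]
}
```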
* Add Snapshot Lifecycle Retention documentation
This commit adds API and general-purpose documentation for SLM
retention (a sample policy with retention settings is sketched after
the commit list below).
Relates to #43663
* Fix docs tests
* Update default now that #47604 has been merged
* Update docs/reference/ilm/apis/slm-api.asciidoc
Co-Authored-By: Gordon Brown <gordon.brown@elastic.co>
* Update docs/reference/ilm/apis/slm-api.asciidoc
Co-Authored-By: Gordon Brown <gordon.brown@elastic.co>
* Update docs with feedback
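A sketch of an SLM policy that includes the new retention settings; the policy
name, schedule, snapshot name pattern, and repository are hypothetical, while
the `retention` block shows the documented `expire_after`, `min_count`, and
`max_count` options:

```console
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
```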
Setting `cluster.routing.allocation.disk.include_relocations` to `false` is a
bad idea since it will lead to the kinds of overshoot that were otherwise fixed
in #46079. This commit deprecates this setting so it can be removed in the next
major release.
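For reference, a minimal sketch of how this setting would be applied through
the cluster settings API, shown only to identify the setting in question;
using it is discouraged and, once deprecated, doing so should emit a
deprecation warning:

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.include_relocations": "false"
  }
}
```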
The example use of a scoring script incorrectly used a filter script
query, which does no scoring and thus has no `_score` variable
available. This commit converts the example doc to use the newer
`script_score` query.
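A sketch of the corrected pattern using the `script_score` query, where
`_score` refers to the score of the wrapped query; the `message` and `likes`
fields are hypothetical:

```console
GET /_search
{
  "query": {
    "script_score": {
      "query": {
        "match": { "message": "elasticsearch" }
      },
      "script": {
        "source": "_score * doc['likes'].value"
      }
    }
  }
}
```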
Adds the following parameters to `outlier_detection` (illustrated in
the sketch below):
- `compute_feature_influence` (boolean): whether or not to compute
feature influence scores
- `outlier_fraction` (double): the proportion of the data set assumed
to be outlying prior to running outlier detection
- `standardization_enabled` (boolean): whether to apply standardization
to the feature values
Backport of #47600
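A sketch of how the new parameters fit into a data frame analytics job
definition; the job and index names are hypothetical and the values are only
illustrative:

```console
PUT _ml/data_frame/analytics/my-outlier-job
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}
```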
While function scores using scripts do allow explanations, they can
only be created with an expert plugin. This commit improves the
situation for the newer script score query by adding the ability to
set the explanation from the script itself.
To set the explanation, a user would check for `explanation != null` to
indicate an explanation is needed, and then call
`explanation.set("some description")`.
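A sketch of the usage, assuming a hypothetical numeric `likes` field and
document id; the script only builds the explanation string when one was
requested (for example via the explain API):

```console
GET /my-index/_explain/0
{
  "query": {
    "script_score": {
      "query": { "match": { "message": "elasticsearch" } },
      "script": {
        "source": "double n = doc['likes'].value / 10.0; if (explanation != null) { explanation.set('normalized likes = likes / 10 = ' + n); } return n;"
      }
    }
  }
}
```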
The warning section above the example says that the index name has to
end with digits, but the example itself uses an index name without
digits, which is confusing.
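For context, a sketch of an index name that satisfies the "must end with
digits" requirement, assuming this refers to the rollover naming convention
that lets the suffix be auto-incremented; the index and alias names are
hypothetical:

```console
PUT /my-index-000001
{
  "aliases": {
    "my-alias": { "is_write_index": true }
  }
}
```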
* [DOCS] Adds examples to the PUT dfa and the evaluate dfa APIs.
* [DOCS] Removes extra lines from examples.
* Update docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Update docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* [DOCS] Explains examples.
We do mention that rolling back an upgrade requires a restore from a snapshot,
but it's hidden at the bottom of the "preparing to upgrade" instructions on a
different page from the actual upgrade instructions. This commit duplicates the
preparatory instructions onto the pages containing the actual upgrade
instructions and rewords the point about rollbacks a bit.
DATE_PART(<datetime unit>, <date/datetime>) is a function that allows
the user to extract the specified unit from a date/datetime field,
similar to EXTRACT(<datetime unit> FROM <date/datetime>), but with
different names and aliases for the units; it also provides more
options, such as `DATE_PART('tzoffset', datetimeField)`.
Implemented following the SQL Server spec:
https://docs.microsoft.com/en-us/sql/t-sql/functions/datepart-transact-sql?view=sql-server-2017
with the difference that the <datetime unit> argument is either a
literal single-quoted string or takes its value from a table field,
whereas in SQL Server keywords (unquoted identifiers) are used and it
is not possible to use a value coming from a table column.
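A usage sketch through the SQL endpoint; the unit is passed as a
single-quoted string literal rather than an unquoted keyword, and the date
literal is only illustrative:

```console
POST /_sql?format=txt
{
  "query": "SELECT DATE_PART('year', CAST('2019-09-04T11:22:33Z' AS DATETIME)) AS year_part, DATE_PART('month', CAST('2019-09-04T11:22:33Z' AS DATETIME)) AS month_part"
}
```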
Closes: #46372
(cherry picked from commit ead743d3579eb753fd314d4a58fae205e465d72e)