Update experimental labels in the docs (#25727)

Relates https://github.com/elastic/elasticsearch/issues/19798

Removed experimental label from:
* Painless
* Diversified Sampler Agg
* Sampler Agg
* Significant Terms Agg
* Terms Agg document count error and execution_hint
* Cardinality Agg precision_threshold
* Pipeline Aggregations
* index.shard.check_on_startup
* index.store.type (added warning)
* Preloading data into the file system cache
* foreach ingest processor
* Field caps API
* Profile API

Added experimental label to:
* Moving Average Agg Prediction


Changed experimental to beta for:
* Adjacency matrix agg
* Normalizers
* Tasks API
* Index sorting

Labelled experimental in Lucene:
* ICU plugin custom rules file
* Flatten graph token filter
* Synonym graph token filter
* Word delimiter graph token filter
* Simple pattern tokenizer
* Simple pattern split tokenizer

Replaced experimental label with warning that details may change in the future:
* Analysis explain output format
* Segments verbose output format
* Percentile Agg compression and HDR Histogram
* Percentile Rank Agg HDR Histogram
Authored by Clinton Gormley on 2017-07-18 14:06:22 +02:00; committed by GitHub
parent 0d8b753325
commit ff4a2519f2
43 changed files with 22 additions and 78 deletions


@@ -1,8 +1,6 @@
[[painless-debugging]]
=== Painless Debugging
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
==== Debug.Explain
Painless doesn't have a


@@ -1,8 +1,6 @@
[[painless-getting-started]]
== Getting Started with Painless
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
include::painless-description.asciidoc[]
[[painless-examples]]


@@ -1,8 +1,6 @@
[[painless-syntax]]
=== Painless Syntax
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
[float]
[[control-flow]]
==== Control flow


@@ -113,7 +113,7 @@ PUT icu_sample
===== Rules customization
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the
http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[RBBI rules syntax reference]


@@ -6,7 +6,7 @@ The request provides a collection of named filter expressions, similar to the `filters` aggregation
request.
Each bucket in the response represents a non-empty cell in the matrix of intersecting filters.
-experimental[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
+beta[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
Given filters named `A`, `B` and `C` the response would return buckets with the following names:
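For reference, a minimal `adjacency_matrix` request sketch (the `emails` index and `accounts` field are illustrative, not part of this commit); given filters `A`, `B` and `C`, the response contains buckets named `A`, `B`, `C`, `A&B`, `A&C` and `B&C`:

[source,js]
--------------------------------------------------
POST /emails/_search?size=0
{
  "aggs": {
    "interactions": {
      "adjacency_matrix": {
        "filters": {
          "A": { "terms": { "accounts": [ "alice" ] } },
          "B": { "terms": { "accounts": [ "bob" ] } },
          "C": { "terms": { "accounts": [ "carol" ] } }
        }
      }
    }
  }
}
--------------------------------------------------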


@@ -1,8 +1,6 @@
[[search-aggregations-bucket-diversified-sampler-aggregation]]
=== Diversified Sampler Aggregation
-experimental[]
Like the `sampler` aggregation this is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author".
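A minimal request sketch for context (the `author` and `tags` field names are illustrative); `max_docs_per_value` caps how many documents in the sample may share the same de-duplicating value:

[source,js]
--------------------------------------------------
POST /stackoverflow/_search?size=0
{
  "query": { "match": { "body": "elasticsearch" } },
  "aggs": {
    "my_unbiased_sample": {
      "diversified_sampler": {
        "shard_size": 200,
        "field": "author",
        "max_docs_per_value": 1
      },
      "aggs": {
        "keywords": { "significant_terms": { "field": "tags" } }
      }
    }
  }
}
--------------------------------------------------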


@@ -1,8 +1,6 @@
[[search-aggregations-bucket-sampler-aggregation]]
=== Sampler Aggregation
-experimental[]
A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
.Example use cases:


@@ -3,8 +3,6 @@
An aggregation that returns interesting or unusual occurrences of terms in a set.
-experimental[The `significant_terms` aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways]
.Example use cases:
* Suggesting "H5N1" when users search for "bird flu" in text
* Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss
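For context, a minimal request sketch matching the first use case (index and field names are illustrative):

[source,js]
--------------------------------------------------
POST /reports/_search?size=0
{
  "query": { "match": { "text": "bird flu" } },
  "aggs": {
    "significant_topics": {
      "significant_terms": { "field": "tags" }
    }
  }
}
--------------------------------------------------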


@@ -197,8 +197,6 @@ could have the 4th highest document count.
==== Per bucket document count error
-experimental[]
The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value
for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when
deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned
@@ -728,8 +726,6 @@ collection mode need to replay the query on the second pass but only for the doc
[[search-aggregations-bucket-terms-aggregation-execution-hint]]
==== Execution hint
-experimental[The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour]
There are different mechanisms by which terms aggregations can be executed:
- by using field values directly in order to aggregate data per-bucket (`map`)
@@ -767,7 +763,7 @@ in inner aggregations.
}
--------------------------------------------------
-<1> experimental[] the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
+<1> The possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
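For context, a sketch showing both parameters touched above (the `genre` field and index name are illustrative); with `show_term_doc_count_error` enabled, each bucket additionally reports a `doc_count_error_upper_bound`:

[source,js]
--------------------------------------------------
POST /products/_search?size=0
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "size": 5,
        "show_term_doc_count_error": true,
        "execution_hint": "map"
      }
    }
  }
}
--------------------------------------------------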


@@ -43,8 +43,6 @@ Response:
This aggregation also supports the `precision_threshold` option:
-experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future]
[source,js]
--------------------------------------------------
POST /sales/_search?size=0


@@ -247,8 +247,6 @@ it. It would not be the case on more skewed distributions.
[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression
-experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future]
Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:
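A sketch of how `compression` is set (values illustrative; the parameter nests under `tdigest`):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "tdigest": {
          "compression": 200
        }
      }
    }
  }
}
--------------------------------------------------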
@@ -287,7 +285,7 @@ the TDigest will use less memory.
==== HDR Histogram
-experimental[]
+NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.
https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation
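A sketch of enabling the HDR implementation (field and values illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ],
        "hdr": {
          "number_of_significant_value_digits": 3
        }
      }
    }
  }
}
--------------------------------------------------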


@@ -159,7 +159,7 @@ This will interpret the `script` parameter as an `inline` script with the `painless` language
==== HDR Histogram
-experimental[]
+NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.
https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentile ranks for latency measurements as it can be faster than the t-digest implementation


@@ -2,8 +2,6 @@
== Pipeline Aggregations
-experimental[]
Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of pipeline aggregation, each computing different information from
other aggregations, but these types can be broken down into two families:
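The two families are parent pipelines, which run inside the enclosing multi-bucket aggregation, and sibling pipelines, which run alongside it. A parent-family sketch using `buckets_path` (index and field names illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "sales_deriv": {
          "derivative": { "buckets_path": "sales" }
        }
      }
    }
  }
}
--------------------------------------------------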


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-avg-bucket-aggregation]]
=== Avg Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
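A sibling-family sketch for contrast: the `>` separator in `buckets_path` walks from the sibling multi-bucket aggregation down to the metric (names illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } }
      }
    },
    "avg_monthly_sales": {
      "avg_bucket": { "buckets_path": "sales_per_month>sales" }
    }
  }
}
--------------------------------------------------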


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-bucket-script-aggregation]]
=== Bucket Script Aggregation
-experimental[]
A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics
in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value.
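A sketch: `buckets_path` here is a map of script variables to paths, and the script reads them via `params` (names illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "t_shirt_sales": {
          "filter": { "term": { "type": "t-shirt" } },
          "aggs": { "sales": { "sum": { "field": "price" } } }
        },
        "t_shirt_percentage": {
          "bucket_script": {
            "buckets_path": {
              "tShirtSales": "t_shirt_sales>sales",
              "totalSales": "total_sales"
            },
            "script": "params.tShirtSales / params.totalSales * 100"
          }
        }
      }
    }
  }
}
--------------------------------------------------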


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-bucket-selector-aggregation]]
=== Bucket Selector Aggregation
-experimental[]
A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained
in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value.
If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false`
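A sketch that drops buckets whose monthly total falls below a threshold (names and threshold illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "sales_bucket_filter": {
          "bucket_selector": {
            "buckets_path": { "totalSales": "total_sales" },
            "script": "params.totalSales > 200"
          }
        }
      }
    }
  }
}
--------------------------------------------------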


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-cumulative-sum-aggregation]]
=== Cumulative Sum Aggregation
-experimental[]
A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram)
aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
for `histogram` aggregations).


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-derivative-aggregation]]
=== Derivative Aggregation
-experimental[]
A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
for `histogram` aggregations).


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-extended-stats-bucket-aggregation]]
=== Extended Stats Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which calculates a variety of stats across all buckets of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-max-bucket-aggregation]]
=== Max Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation
and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
be a multi-bucket aggregation.


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-min-bucket-aggregation]]
=== Min Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation
and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
be a multi-bucket aggregation.


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-movavg-aggregation]]
=== Moving Average Aggregation
-experimental[]
Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving
average with window size of `5` as follows:
@@ -513,6 +511,8 @@ POST /_search
==== Prediction
+experimental[]
All the moving average models support a "prediction" mode, which will attempt to extrapolate into the future given the
current smoothed, moving average. Depending on the model and parameters, these predictions may or may not be accurate.
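A sketch of requesting predictions (window, model and horizon are illustrative); `predict` appends the requested number of extrapolated buckets to the end of the series:

[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "date", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_movavg": {
          "moving_avg": {
            "buckets_path": "the_sum",
            "window": 30,
            "model": "simple",
            "predict": 10
          }
        }
      }
    }
  }
}
--------------------------------------------------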


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-percentiles-bucket-aggregation]]
=== Percentiles Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which calculates percentiles across all buckets of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-serialdiff-aggregation]]
=== Serial Differencing Aggregation
-experimental[]
Serial differencing is a technique where values in a time series are subtracted from the series itself at
different time lags or periods. For example, the datapoint f(x) = f(x~t~) - f(x~t-n~), where n is the period being used.
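A sketch of a thirtieth-order difference (names and period illustrative); `lag` selects n:

[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "timestamp", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "value" } },
        "thirtieth_difference": {
          "serial_diff": {
            "buckets_path": "the_sum",
            "lag": 30
          }
        }
      }
    }
  }
}
--------------------------------------------------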


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-stats-bucket-aggregation]]
=== Stats Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which calculates a variety of stats across all buckets of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.


@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-sum-bucket-aggregation]]
=== Sum Bucket Aggregation
-experimental[]
A sibling pipeline aggregation which calculates the sum across all buckets of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.


@@ -1,7 +1,7 @@
[[analysis-normalizers]]
== Normalizers
-experimental[]
+beta[]
Normalizers are similar to analyzers except that they may only emit a single
token. As a consequence, they do not have a tokenizer and only accept a subset


@@ -1,7 +1,7 @@
[[analysis-flatten-graph-tokenfilter]]
=== Flatten Graph Token Filter
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
The `flatten_graph` token filter accepts an arbitrary graph token
stream, such as that produced by


@@ -1,7 +1,7 @@
[[analysis-synonym-graph-tokenfilter]]
=== Synonym Graph Token Filter
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
The `synonym_graph` token filter allows you to easily handle synonyms,
including multi-word synonyms, correctly during the analysis process.
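A sketch of defining a `synonym_graph` filter in a custom analyzer (names and synonym rules illustrative); this filter is intended for use at search time:

[source,js]
--------------------------------------------------
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_graph_synonyms": {
          "type": "synonym_graph",
          "synonyms": [
            "ny, new york",
            "usa, united states of america"
          ]
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_graph_synonyms" ]
        }
      }
    }
  }
}
--------------------------------------------------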


@@ -1,7 +1,7 @@
[[analysis-word-delimiter-graph-tokenfilter]]
=== Word Delimiter Graph Token Filter
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
Named `word_delimiter_graph`, it splits words into subwords and performs
optional transformations on subword groups. Words are split into


@@ -1,7 +1,7 @@
[[analysis-simplepattern-tokenizer]]
=== Simple Pattern Tokenizer
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
The `simple_pattern` tokenizer uses a regular expression to capture matching
text as terms. The set of regular expression features it supports is more


@@ -1,7 +1,7 @@
[[analysis-simplepatternsplit-tokenizer]]
=== Simple Pattern Split Tokenizer
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
The `simple_pattern_split` tokenizer uses a regular expression to split the
input into terms at pattern matches. The set of regular expression features it


@@ -1,7 +1,7 @@
[[tasks]]
== Task Management API
-experimental[The Task Management API is new and should still be considered experimental. The API may change in ways that are not backwards compatible]
+beta[The Task Management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible]
[float]
=== Current Tasks Information
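The calls documented under this heading take the following form (node IDs illustrative):

[source,js]
--------------------------------------------------
GET _tasks
GET _tasks?nodes=nodeId1,nodeId2
GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:*
--------------------------------------------------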


@@ -47,7 +47,7 @@ specific index module:
`index.shard.check_on_startup`::
+
--
-experimental[] Whether or not shards should be checked for corruption before opening. When
+Whether or not shards should be checked for corruption before opening. When
corruption is detected, it will prevent the shard from being opened. Accepts:
`false`::
@@ -69,7 +69,7 @@ corruption is detected, it will prevent the shard from being opened. Accepts:
as corrupted will be automatically removed. This option *may result in data loss*.
Use with extreme caution!
-Checking shards may take a lot of time on large indices.
+WARNING: Expert only. Checking shards may take a lot of time on large indices.
--
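For context, a sketch of applying the setting; it is a static setting, so it can only be set at index creation or on a closed index (index name and value illustrative):

[source,js]
--------------------------------------------------
PUT /my_index
{
  "settings": {
    "index.shard.check_on_startup": "checksum"
  }
}
--------------------------------------------------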
[[index-codec]] `index.codec`::


@@ -1,7 +1,7 @@
[[index-modules-index-sorting]]
== Index Sorting
-experimental[]
+beta[]
When creating a new index in elasticsearch, it is possible to configure how the segments
inside each shard will be sorted. By default Lucene does not apply any sort.
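A sketch of configuring a sort at index creation (index, field and mapping names illustrative):

[source,js]
--------------------------------------------------
PUT /twitter
{
  "settings": {
    "index": {
      "sort.field": "date",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "tweet": {
      "properties": {
        "date": { "type": "date" }
      }
    }
  }
}
--------------------------------------------------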


@@ -32,7 +32,7 @@ PUT /my_index
}
---------------------------------
-experimental[This is an expert-only setting and may be removed in the future]
+WARNING: This is an expert-only setting and may be removed in the future.
The following sections lists all the different storage types supported.
@@ -73,7 +73,7 @@ compatibility.
=== Pre-loading data into the file system cache
-experimental[This is an expert-only setting and may be removed in the future]
+NOTE: This is an expert setting, the details of which may change in the future.
By default, elasticsearch completely relies on the operating system file system
cache for caching I/O operations. It is possible to set `index.store.preload`


@@ -144,7 +144,7 @@ GET _analyze
If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token.
You can filter token attributes you want to output by setting `attributes` option.
-experimental[The format of the additional detail information is experimental and can change at any time]
+NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
[source,js]
--------------------------------------------------


@@ -79,7 +79,7 @@ compound:: Whether the segment is stored in a compound file. When true, this
To add additional information that can be used for debugging, use the `verbose` flag.
-experimental[The format of the additional verbose information is experimental and can change at any time]
+NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
[source,js]
--------------------------------------------------


@@ -1010,11 +1010,6 @@ to the requester.
[[foreach-processor]]
=== Foreach Processor
-experimental[This processor may change or be replaced by something else that provides similar functionality. This
-processor executes in its own context, which makes it different compared to all other processors and for features like
-verbose simulation the subprocessor isn't visible. The reason we still expose this processor, is that it is the only
-processor that can operate on an array]
Processes elements in an array of unknown length.
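A sketch of the processor (field names illustrative); it runs once per array element, and the current element is available to the inner processor as `_ingest._value`:

[source,js]
--------------------------------------------------
{
  "foreach": {
    "field": "values",
    "processor": {
      "uppercase": {
        "field": "_ingest._value"
      }
    }
  }
}
--------------------------------------------------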
All processors can operate on elements inside an array, but if all elements of an array need to


@@ -98,7 +98,6 @@ The following parameters are accepted by `keyword` fields:
<<normalizer,`normalizer`>>::
-experimental[]
How to pre-process the keyword prior to indexing. Defaults to `null`,
meaning the keyword is kept as-is.
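A sketch tying this parameter to a normalizer defined in the index settings (names illustrative):

[source,js]
--------------------------------------------------
PUT /my_index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "foo": {
          "type": "keyword",
          "normalizer": "my_normalizer"
        }
      }
    }
  }
}
--------------------------------------------------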


@@ -1,8 +1,6 @@
[[modules-scripting-painless]]
=== Painless Scripting Language
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
include::../../../painless/painless-description.asciidoc[]
Ready to start scripting with Painless? See {painless}/painless-getting-started.html[Getting Started with Painless] in the guide to the


@@ -1,8 +1,6 @@
[[search-field-caps]]
== Field Capabilities API
-experimental[]
The field capabilities API allows you to retrieve the capabilities of fields among multiple indices.
The field capabilities API executes on all indices by default:
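The request shape (field names illustrative):

[source,js]
--------------------------------------------------
GET _field_caps?fields=rating
GET twitter/_field_caps?fields=rating,title
--------------------------------------------------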


@@ -1,7 +1,7 @@
[[search-profile]]
== Profile API
-experimental[]
+WARNING: The Profile API is a debugging tool and adds significant overhead to search execution.
The Profile API provides detailed timing information about the execution of individual components
in a search request. It gives the user insight into how search requests are executed at a low level so that