Docs: Updated the experimental annotations in the docs as follows:

* Removed the docs for `index.compound_format` and `index.compound_on_flush` - these are expert settings which should probably be removed (see https://github.com/elastic/elasticsearch/issues/10778)
* Removed the docs for `index.index_concurrency` - another expert setting
* Labelled the segments verbose output as experimental
* Marked the `compression`, `precision_threshold` and `rehash` options as experimental in the cardinality and percentile aggs
* Improved the experimental text on `significant_terms`, `execution_hint` in the terms agg, and `terminate_after` param on count and search
* Removed the experimental flag on the `geobounds` agg
* Marked the settings in the `merge` and `store` modules as experimental, rather than the modules themselves

Closes #10782
Clinton Gormley 2015-04-26 18:49:15 +02:00
parent f1a0e2216a
commit 37ed61807f
14 changed files with 22 additions and 65 deletions

View File

@@ -15,29 +15,6 @@ all the relevant modules settings can be provided when creating an index
There are specific index level settings that are not associated with any
specific module. These include:
[[index-compound-format]]`index.compound_format`::
experimental[]
Should the compound file format be used (boolean setting).
The compound format was created to reduce the number of open
file handles when using file based storage. However, by default it is set
to `false` as the non-compound format gives better performance. It is important
that the OS is configured to give Elasticsearch ``enough'' file handles.
See <<file-descriptors>>.
+
Alternatively, `compound_format` can be set to a number between `0` and
`1`, where `0` means `false`, `1` means `true` and a number in between
represents a percentage: if the merged segment is less than this
percentage of the total index, then it is written in compound format,
otherwise it is written in non-compound format.
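For illustration, a fractional value might be applied at index-creation time along these lines (a sketch; the `my_index` name and the `0.1` value are hypothetical):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '
{
  "settings": {
    "index.compound_format": 0.1
  }
}'
--------------------------------------------------

Here, a merged segment would be written in compound format only if it is
smaller than 10% of the total index size.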
[[index-compound-on-flush]]`index.compound_on_flush`::
experimental[]
Should a new segment (created by indexing, not by merging) be written
in compound format or non-compound format? Defaults to `true`.
This is a dynamic setting.
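Since this is a dynamic setting, it can be toggled on a live index. A sketch using the update-settings API (`my_index` is hypothetical):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
  "index": {
    "compound_on_flush": false
  }
}'
--------------------------------------------------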
`index.refresh_interval`::
A time setting controlling how often the
refresh operation will be executed. Defaults to `1s`. Can be set to `-1`
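A common pattern is to disable refresh entirely during a large bulk load and restore it afterwards. A sketch (`my_index` is hypothetical):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
  "index": { "refresh_interval": "-1" }
}'
--------------------------------------------------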
@@ -59,7 +36,7 @@ otherwise it is written in non-compound format.
When `checksum`, check for physical corruption.
When `true`, check for both physical and logical corruption. This is much
more expensive in terms of CPU and memory usage.
When `fix`, check for both physical and logical corruption, and segments
that were reported as corrupted will be automatically removed.
Default value is `false`, which performs no checks.
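Assuming this describes the `index.shard.check_on_startup` setting, enabling the cheaper physical check at index-creation time might look like the following sketch:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '
{
  "settings": {
    "index.shard.check_on_startup": "checksum"
  }
}'
--------------------------------------------------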

View File

@@ -1,7 +1,7 @@
[[index-modules-merge]]
== Merge
experimental[]
experimental[All of the settings exposed in the `merge` module are expert only and may be removed in the future]
A shard in Elasticsearch is a Lucene index, and a Lucene index is broken
down into segments. Segments are internal storage elements in the index
@@ -72,12 +72,6 @@ This policy has the following settings:
Higher values favor selecting merges that reclaim deletions. A value of
`0.0` means deletions don't impact merge selection. Defaults to `2.0`.
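Assuming the full setting name is `index.merge.policy.reclaim_deletes_weight`, biasing merges more strongly towards reclaiming deletions might look like this sketch:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
  "index.merge.policy.reclaim_deletes_weight": 3.0
}'
--------------------------------------------------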
`index.compound_format`::
Should the index be stored in compound format or not. Defaults to `false`.
See <<index-compound-format,`index.compound_format`>> in
<<index-modules-settings>>.
For normal merging, this policy first computes a "budget" of how many
segments are allowed to be in the index. If the index is over-budget,
then the policy sorts segments by decreasing size (proportionally considering percent

View File

@@ -1,8 +1,6 @@
[[index-modules-store]]
== Store
experimental[]
The store module allows you to control how index data is stored.
The index can either be stored in-memory (no persistence) or on-disk
@@ -20,6 +18,7 @@ heap space* using the "Memory" (see below) storage type. It translates
to the fact that there is no need for extra large JVM heaps (with their
own consequences) for storing the index in memory.
experimental[All of the settings exposed in the `store` module are expert only and may be removed in the future]
[float]
[[file-system]]
@@ -28,7 +27,7 @@ own consequences) for storing the index in memory.
File system based storage is the default storage used. There are
different implementations or _storage types_. The best one for the
operating environment will be automatically chosen: `mmapfs` on
Windows 64bit, `simplefs` on Windows 32bit, and `default`
(hybrid `niofs` and `mmapfs`) for the rest.
This can be overridden for all indices by adding this to the
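Presumably the file in question is the node's `config/elasticsearch.yml`; a sketch of such an override (the `niofs` choice is illustrative):

[source,yaml]
--------------------------------------------------
# config/elasticsearch.yml (assumed location)
index.store.type: niofs
--------------------------------------------------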

View File

@@ -78,7 +78,7 @@ compound:: Whether the segment is stored in a compound file. When true, this
To add additional information that can be used for debugging, use the `verbose` flag.
NOTE: The format of additional verbose information is experimental and can change at any time.
experimental[The format of the additional verbose information is experimental and can change at any time]
[source,js]
--------------------------------------------------
@@ -108,7 +108,7 @@ Response:
},
...
]
}
...
}

View File

@@ -12,7 +12,7 @@ of the request includes the updated settings, for example:
{
"index" : {
"number_of_replicas" : 4
}
}
--------------------------------------------------
@@ -25,7 +25,7 @@ curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
"index" : {
"number_of_replicas" : 4
}
}'
--------------------------------------------------
@@ -61,9 +61,6 @@ settings API:
`index.refresh_interval`::
The async refresh interval of a shard.
`index.index_concurrency`::
experimental[] Defaults to `8`.
`index.translog.flush_threshold_ops`::
When to flush based on operations.
@@ -151,14 +148,6 @@ settings API:
`index.translog.fs.type`::
experimental[] Either `simple` or `buffered` (default).
`index.compound_format`::
experimental[] See <<index-compound-format,`index.compound_format`>> in
<<index-modules-settings>>.
`index.compound_on_flush`::
experimental[] See <<index-compound-on-flush,`index.compound_on_flush`>> in
<<index-modules-settings>>.
<<index-modules-slowlog>>::
All the settings for slow log.

View File

@@ -424,10 +424,7 @@ automatically loaded.
[float]
=== Lucene Expressions Scripts
[WARNING]
========================
This feature is *experimental* and subject to change in future versions.
========================
experimental[The Lucene expressions module is undergoing significant development and the exposed functionality is likely to change in the future]
Lucene's expressions module provides a mechanism to compile a
`javascript` expression to bytecode. This allows very fast execution,
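As a sketch, an expression could be used for script-based scoring like this (the `body` and `popularity` fields are hypothetical):

[source,js]
--------------------------------------------------
{
  "query": {
    "function_score": {
      "query": { "match": { "body": "elasticsearch" } },
      "script_score": {
        "lang": "expression",
        "script": "_score * doc['popularity'].value"
      }
    }
  }
}
--------------------------------------------------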

View File

@@ -3,7 +3,7 @@
An aggregation that returns interesting or unusual occurrences of terms in a set.
experimental[]
experimental[The `significant_terms` aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways]
.Example use cases:
* Suggesting "H5N1" when users search for "bird flu" in text
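A minimal request sketch (the police-report index and its `force` and `crime_type` fields are hypothetical):

[source,js]
--------------------------------------------------
{
  "query": {
    "terms": { "force": ["British Transport Police"] }
  },
  "aggregations": {
    "significantCrimeTypes": {
      "significant_terms": { "field": "crime_type" }
    }
  }
}
--------------------------------------------------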

View File

@@ -613,7 +613,7 @@ this would typically be too costly in terms of RAM.
[[search-aggregations-bucket-terms-aggregation-execution-hint]]
==== Execution hint
experimental[]
experimental[The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour]
There are different mechanisms by which terms aggregations can be executed:
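Whichever mechanism is chosen, the hint itself is passed on the `terms` body. A sketch (the `tags` field is hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "execution_hint": "map"
      }
    }
  }
}
--------------------------------------------------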

View File

@@ -23,10 +23,10 @@ match a query:
==== Precision control
experimental[]
This aggregation also supports the `precision_threshold` and `rehash` options:
experimental[The `precision_threshold` and `rehash` options are specific to the current internal implementation of the `cardinality` agg, which may change in the future]
[source,js]
--------------------------------------------------
{
@@ -42,14 +42,14 @@ This aggregation also supports the `precision_threshold` and `rehash` options:
}
--------------------------------------------------
<1> experimental[] The `precision_threshold` options allows to trade memory for accuracy, and
<1> The `precision_threshold` option allows you to trade memory for accuracy, and
defines a unique count below which counts are expected to be close to
accurate. Above this value, counts might become a bit more fuzzy. The maximum
supported value is 40000; thresholds above this number will have the same
effect as a threshold of 40000.
The default value depends on the number of parent aggregations that
create multiple buckets (such as terms or histograms).
<2> experimental[] If you computed a hash on client-side, stored it into your documents and want
<2> If you computed a hash on the client side, stored it in your documents, and want
Elasticsearch to use it to compute counts using this hash function without
rehashing values, it is possible to specify `rehash: false`. The default value is
`true`. Please note that the hash must be indexed as a long when `rehash` is

View File

@@ -1,8 +1,6 @@
[[search-aggregations-metrics-geobounds-aggregation]]
=== Geo Bounds Aggregation
experimental[]
A metric aggregation that computes the bounding box containing all geo_point values for a field.
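A sketch of usage (the `location` field is hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "viewport": {
      "geo_bounds": {
        "field": "location"
      }
    }
  }
}
--------------------------------------------------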

View File

@@ -155,7 +155,7 @@ it. It would not be the case on more skewed distributions.
[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression
experimental[]
experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future]
Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:
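For instance, the following sketch raises the `compression` above its default (the `load_time` field is hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "compression": 200
      }
    }
  }
}
--------------------------------------------------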

View File

@@ -64,7 +64,8 @@ query.
|default_operator |The default operator to be used, can be `AND` or
`OR`. Defaults to `OR`.
|terminate_after |experimental[] The maximum count for each shard, upon
|terminate_after |experimental[The API for this feature may change in the future]
The maximum count for each shard, upon
reaching which the query execution will terminate early.
If set, the response will have a boolean field `terminated_early` to
indicate whether the query execution actually terminated early.
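On the count API this is passed as a URL parameter. A sketch (`my_index` and the query are hypothetical):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/my_index/_count?terminate_after=100' -d '
{
  "query": { "match": { "user": "kimchy" } }
}'
--------------------------------------------------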

View File

@@ -77,7 +77,8 @@ And here is a sample response:
`terminate_after`::
experimental[] The maximum number of documents to collect for each shard,
experimental[The API for this feature may change in the future]
The maximum number of documents to collect for each shard,
upon reaching which the query execution will terminate early. If set, the
response will have a boolean field `terminated_early` to indicate whether
the query execution actually terminated early. Defaults to no

View File

@@ -82,7 +82,8 @@ scores and return them as part of each hit.
within the specified time value and bail with the hits accumulated up to
that point when expired. Defaults to no timeout.
|`terminate_after` |experimental[] The maximum number of documents to collect for
|`terminate_after` |experimental[The API for this feature may change in the future]
The maximum number of documents to collect for
each shard, upon reaching which the query execution will terminate early.
If set, the response will have a boolean field `terminated_early` to
indicate whether the query execution actually terminated early.