Docs: Use the new experimental annotation.

We now have a very useful annotation to mark features or parameters as
experimental. Let's use it! This commit replaces some custom text warnings with
this annotation and adds it to some existing features/parameters:
 - inner_hits (not yet released)
 - terminate_after (released in 1.4)
 - per-bucket doc count errors in the terms agg (released in 1.4)

I also tagged with this annotation settings that should either not be needed
(like the ability to evict entries from the filter cache based on time) or that
reach too deep into the way that Elasticsearch works, like the Directory
implementation or the merge settings.

Close #9563
Adrien Grand 2015-02-04 15:43:22 +01:00
parent 3a486066fd
commit 95f46f1212
14 changed files with 41 additions and 42 deletions

View File

@@ -17,6 +17,7 @@ specific module. These include:
[[index-compound-format]]`index.compound_format`::
experimental[]
Should the compound file format be used (boolean setting).
The compound format was created to reduce the number of open
file handles when using file based storage. However, by default it is set
@@ -32,6 +33,7 @@ otherwise it is written in non-compound format.
[[index-compound-on-flush]]`index.compound_on_flush`::
experimental[]
Should a new segment (created by indexing, not by merging) be written
in compound format or non-compound format? Defaults to `true`.
This is a dynamic setting.
@@ -42,14 +44,18 @@ otherwise it is written in non-compound format.
in order to disable it.
`index.codec`::
experimental[]
The `default` value compresses stored data with LZ4 compression, but
this can be set to `best_compression` for a higher compression ratio,
at the expense of slower stored fields performance.
`index.shard.check_on_startup`::
experimental[]
Should shard consistency be checked upon opening. When corruption is detected,
it will prevent the shard from being opened.
+
When `checksum`, check for physical corruption.
When `true`, check for both physical and logical corruption. This is much
more expensive in terms of CPU and memory usage.
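To make this concrete, index-level settings such as `index.codec` are typically supplied when an index is created. A minimal sketch, assuming the 1.x-style curl examples used elsewhere in these docs (`my_index` is a placeholder name):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '{
  "settings": {
    "index.codec": "best_compression"
  }
}'
--------------------------------------------------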

View File

@@ -19,7 +19,7 @@ and perform poorly.
eg `30%` of node heap space, or an absolute value, eg `12GB`. Defaults
to unbounded.
|`indices.fielddata.cache.expire` |A time based setting that expires
|`indices.fielddata.cache.expire` |experimental[] A time based setting that expires
field data after a certain time of inactivity. Defaults to `-1`. For
example, can be set to `5m` for a 5 minute expiry.
|=======================================================================
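For illustration, both fielddata cache settings from the table above would go in `elasticsearch.yml`; a minimal sketch using the example values mentioned in the text:

[source,yaml]
--------------------------------------------------
indices.fielddata.cache.size: 30%
indices.fielddata.cache.expire: 5m
--------------------------------------------------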

View File

@@ -1,6 +1,8 @@
[[index-modules-merge]]
== Merge
experimental[]
A shard in Elasticsearch is a Lucene index, and a Lucene index is broken
down into segments. Segments are internal storage elements in the index
where the index data is stored, and are immutable up to delete markers.

View File

@@ -1,6 +1,8 @@
[[index-modules-store]]
== Store
experimental[]
The store module allows you to control how index data is stored.
The index can either be stored in-memory (no persistence) or on-disk

View File

@@ -56,10 +56,10 @@ settings API:
The async refresh interval of a shard.
`index.index_concurrency`::
Defaults to `8`.
experimental[] Defaults to `8`.
`index.fail_on_merge_failure`::
Defaults to `true`.
experimental[] Defaults to `true`.
`index.translog.flush_threshold_ops`::
When to flush based on operations.
@@ -79,19 +79,19 @@ settings API:
Set to `-1` to disable.
`index.cache.filter.expire`::
The expire after access time for filter cache.
experimental[] The expire after access time for filter cache.
Set to `-1` to disable.
`index.gateway.snapshot_interval`::
The gateway snapshot interval (only applies to shared gateways).
Defaults to 10s.
experimental[] The gateway snapshot interval (only applies to shared
gateways). Defaults to 10s.
<<index-modules-merge,merge policy>>::
All the settings for the merge policy currently configured.
A different merge policy can't be set.
`index.merge.scheduler.*`::
All the settings for the merge scheduler.
experimental[] All the settings for the merge scheduler.
`index.routing.allocation.include.*`::
A node matching any rule will be allowed to host shards from the index.
@@ -137,22 +137,23 @@ settings API:
* Number values are also supported, e.g. `1`.
`index.gc_deletes`::
experimental[]
`index.ttl.disable_purge`::
Temporarily disables the purge of expired docs.
experimental[] Temporarily disables the purge of expired docs.
<<index-modules-store,store level throttling>>::
All the settings for the store level throttling policy currently configured.
`index.translog.fs.type`::
Either `simple` or `buffered` (default).
experimental[] Either `simple` or `buffered` (default).
`index.compound_format`::
See <<index-compound-format,`index.compound_format`>> in
experimental[] See <<index-compound-format,`index.compound_format`>> in
<<index-modules-settings>>.
`index.compound_on_flush`::
See <<index-compound-on-flush,`index.compound_on_flush`>> in
experimental[] See <<index-compound-on-flush,`index.compound_on_flush`>> in
<<index-modules-settings>>.
<<index-modules-slowlog>>::

View File

@@ -3,13 +3,7 @@
An aggregation that returns interesting or unusual occurrences of terms in a set.
.Experimental!
[IMPORTANT]
=====
This feature is marked as experimental, and may be subject to change in the
future. If you use this feature, please let us know your experience with it!
=====
experimental[]
.Example use cases:
* Suggesting "H5N1" when users search for "bird flu" in text

View File

@@ -190,6 +190,10 @@ could have the 4th highest document count.
}
--------------------------------------------------
==== Per bucket document count error
experimental[]
The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value
for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when
deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned
@@ -638,6 +642,8 @@ this would typically be too costly in terms of RAM.
[[search-aggregations-bucket-terms-aggregation-execution-hint]]
==== Execution hint
experimental[]
There are different mechanisms by which terms aggregations can be executed:
- by using field values directly in order to aggregate data per-bucket (`map`)
@@ -675,6 +681,6 @@ in inner aggregations.
}
--------------------------------------------------
<1> the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
<1> experimental[] the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
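As a sketch of how such a hint would be passed, it sits inside the `terms` body (field and aggregation names here are hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "execution_hint": "map"
      }
    }
  }
}
--------------------------------------------------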

View File

@@ -1,14 +1,10 @@
[[search-aggregations-metrics-geobounds-aggregation]]
=== Geo Bounds Aggregation
experimental[]
A metric aggregation that computes the bounding box containing all geo_point values for a field.
.Experimental!
[IMPORTANT]
=====
This feature is marked as experimental, and may be subject to change in the
future. If you use this feature, please let us know your experience with it!
=====
Example:
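A minimal request sketch (the `viewport` aggregation name and `location` field are hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "viewport": {
      "geo_bounds": {
        "field": "location",
        "wrap_longitude": true
      }
    }
  }
}
--------------------------------------------------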

View File

@@ -1,14 +1,9 @@
[[search-aggregations-metrics-scripted-metric-aggregation]]
=== Scripted Metric Aggregation
A metric aggregation that executes using scripts to provide a metric output.
experimental[]
.Experimental!
[IMPORTANT]
=====
This feature is marked as experimental, and may be subject to change in the
future. If you use this feature, please let us know your experience with it!
=====
A metric aggregation that executes using scripts to provide a metric output.
Example:
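A minimal sketch of the four script phases (aggregation name and script bodies are hypothetical):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "profit": {
      "scripted_metric": {
        "init_script": "_agg.transactions = []",
        "map_script": "_agg.transactions.add(doc.amount.value)",
        "combine_script": "double total = 0; for (t in _agg.transactions) { total += t }; return total",
        "reduce_script": "double total = 0; for (a in _aggs) { total += a }; return total"
      }
    }
  }
}
--------------------------------------------------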

View File

@@ -1,12 +1,7 @@
[[search-benchmark]]
== Benchmark
.Experimental!
[IMPORTANT]
=====
This feature is marked as experimental, and may be subject to change in the
future. If you use this feature, please let us know your experience with it!
=====
experimental[]
The benchmark API provides a standard mechanism for submitting queries and
measuring their performance relative to one another.

View File

@@ -64,7 +64,7 @@ query.
|default_operator |The default operator to be used, can be `AND` or
`OR`. Defaults to `OR`.
|terminate_after |The maximum count for each shard, upon
|terminate_after |experimental[] The maximum count for each shard, upon
reaching which the query execution will terminate early.
If set, the response will have a boolean field `terminated_early` to
indicate whether the query execution has actually terminated early.
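As a sketch of the parameter in use (index name and threshold are placeholders):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/my_index/_count?terminate_after=100'
--------------------------------------------------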

View File

@@ -76,7 +76,7 @@ And here is a sample response:
`terminate_after`::
The maximum number of documents to collect for each shard,
experimental[] The maximum number of documents to collect for each shard,
upon reaching which the query execution will terminate early. If set, the
response will have a boolean field `terminated_early` to indicate whether
the query execution has actually terminated early. Defaults to no

View File

@@ -1,6 +1,8 @@
[[search-request-inner-hits]]
=== Inner hits
experimental[]
The <<mapping-parent-field, parent/child>> and <<mapping-nested-type, nested>> features allow the return of documents that
have matches in a different scope. In the parent/child case, parent documents are returned based on matches in child
documents, or child documents are returned based on matches in parent documents. In the nested case, documents are returned

View File

@@ -82,7 +82,7 @@ scores and return them as part of each hit.
within the specified time value and bail with the hits accumulated up to
that point when expired. Defaults to no timeout.
|`terminate_after` |The maximum number of documents to collect for
|`terminate_after` |experimental[] The maximum number of documents to collect for
each shard, upon reaching which the query execution will terminate early.
If set, the response will have a boolean field `terminated_early` to
indicate whether the query execution has actually terminated early.