Docs: Remove the experimental status of the cardinality and percentiles(-ranks) aggregations

These aggregations are not experimental anymore but some of their parameters
still are:
 - `precision_threshold` and `rehash` on `cardinality`
 - `compression` on percentiles(-ranks)

Close #9560
Adrien Grand 2015-02-04 15:05:26 +01:00
parent f7fe6b7461
commit 3a486066fd
3 changed files with 8 additions and 16 deletions


@@ -5,13 +5,6 @@ A `single-value` metrics aggregation that calculates an approximate count of
distinct values. Values can be extracted either from specific fields in the
document or generated by a script.
-.Experimental!
-[IMPORTANT]
-=====
-This feature is marked as experimental, and may be subject to change in the
-future. If you use this feature, please let us know your experience with it!
-=====
Assume you are indexing books and would like to count the unique authors that
match a query:
@@ -28,6 +21,10 @@ match a query:
}
--------------------------------------------------
==== Precision control
+experimental[]
This aggregation also supports the `precision_threshold` and `rehash` options:
[source,js]
@@ -45,14 +42,14 @@ This aggregation also supports the `precision_threshold` and `rehash` options:
}
--------------------------------------------------
-<1> The `precision_threshold` options allows to trade memory for accuracy, and
+<1> experimental[] The `precision_threshold` option allows trading memory for accuracy, and
defines a unique count below which counts are expected to be close to
accurate. Above this value, counts might become a bit more fuzzy. The maximum
supported value is 40000; thresholds above this number will have the same
effect as a threshold of 40000.
The default value depends on the number of parent aggregations that create
multiple buckets (such as terms or histograms).
-<2> If you computed a hash on client-side, stored it into your documents and want
+<2> experimental[] If you computed a hash client-side, stored it in your documents and want
Elasticsearch to use it to compute counts with this hash function without
rehashing values, it is possible to specify `rehash: false`. The default value is
`true`. Please note that the hash must be indexed as a long when `rehash` is

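For readers skimming the hunk above: the two options can be combined in a single request body along these lines. This is only a sketch; the aggregation and field names (`author_count`, `author_hash`) are illustrative and not part of this commit:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "author_count" : {
            "cardinality" : {
                "field" : "author_hash",
                "precision_threshold" : 100,
                "rehash" : false
            }
        }
    }
}
--------------------------------------------------

With `rehash: false`, the `author_hash` field would need to hold a client-side hash indexed as a `long`.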

@@ -153,6 +153,8 @@ it. It would not be the case on more skewed distributions.
[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression
+experimental[]
Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:

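As a sketch of the paragraph above (the aggregation and field names are illustrative, not taken from this commit), the parameter is passed alongside the field:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time",
                "compression" : 200
            }
        }
    }
}
--------------------------------------------------

A larger `compression` value trades more memory for more accurate percentile estimates.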

@@ -6,13 +6,6 @@ over numeric values extracted from the aggregated documents. These values
can be extracted either from specific numeric fields in the documents, or
be generated by a provided script.
-.Experimental!
-[IMPORTANT]
-=====
-This feature is marked as experimental, and may be subject to change in the
-future. If you use this feature, please let us know your experience with it!
-=====
[NOTE]
==================================================
Please see <<search-aggregations-metrics-percentile-aggregation-approximation>>
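For illustration, a minimal `percentile_ranks` request could look as follows; the aggregation and field names are hypothetical and not part of this commit:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_ranks" : {
            "percentile_ranks" : {
                "field" : "load_time",
                "values" : [15, 30]
            }
        }
    }
}
--------------------------------------------------

For each provided value, the response reports the (approximate) percentage of observed values that fall at or below it.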