[[search-aggregations-bucket]]
== Bucket Aggregations

Bucket aggregations don't calculate metrics over fields like the metrics aggregations do, but instead, they create
buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type) which determines
whether or not a document in the current context "falls" into it. In other words, the buckets effectively define
document sets. In addition to the buckets themselves, the `bucket` aggregations also compute and return the number of
documents that "fell into" each bucket.
Bucket aggregations, as opposed to `metrics` aggregations, can hold sub-aggregations. These sub-aggregations will be
aggregated for the buckets created by their "parent" bucket aggregation.
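
Continuing the sketch above, a metric sub-aggregation such as `avg` can be nested inside the `terms` aggregation, so
that the average is computed separately for the documents of each bucket. The `price` field is again a placeholder:

[source,console]
--------------------------------------------------
GET /products/_search
{
  "size": 0,
  "aggs": {
    "genres": {
      "terms": { "field": "genre" },
      "aggs": {
        "avg_price": {
          "avg": { "field": "price" }
        }
      }
    }
  }
}
--------------------------------------------------

Each `genres` bucket in the response then contains its own `avg_price` value, calculated only from the documents that
fell into that bucket.
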
There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
define a fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process.

NOTE: The maximum number of buckets allowed in a single response is limited by a dynamic cluster
setting named `search.max_buckets`. It defaults to 10,000. Requests that try to return more buckets
than the limit will fail with an exception.
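
Because `search.max_buckets` is a dynamic cluster setting, it can be changed on a running cluster through the cluster
settings API. A sketch of raising the limit (the value shown is arbitrary):

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
--------------------------------------------------
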
include::bucket/adjacency-matrix-aggregation.asciidoc[]

include::bucket/children-aggregation.asciidoc[]

include::bucket/datehistogram-aggregation.asciidoc[]

include::bucket/daterange-aggregation.asciidoc[]

include::bucket/diversified-sampler-aggregation.asciidoc[]

include::bucket/filter-aggregation.asciidoc[]

include::bucket/filters-aggregation.asciidoc[]

include::bucket/geodistance-aggregation.asciidoc[]

include::bucket/geohashgrid-aggregation.asciidoc[]

include::bucket/global-aggregation.asciidoc[]

include::bucket/histogram-aggregation.asciidoc[]

include::bucket/iprange-aggregation.asciidoc[]

include::bucket/missing-aggregation.asciidoc[]

include::bucket/nested-aggregation.asciidoc[]

include::bucket/range-aggregation.asciidoc[]

include::bucket/reverse-nested-aggregation.asciidoc[]

include::bucket/sampler-aggregation.asciidoc[]

include::bucket/significantterms-aggregation.asciidoc[]

include::bucket/significanttext-aggregation.asciidoc[]

include::bucket/terms-aggregation.asciidoc[]

include::bucket/composite-aggregation.asciidoc[]