[[search-aggregations]]
== Aggregations

The aggregations framework helps provide aggregated data based on a search query. It is based on simple building blocks
called aggregations that can be composed in order to build complex summaries of the data.

An aggregation can be seen as a _unit-of-work_ that builds analytic information over a set of documents. The context of
the execution defines what this document set is (e.g. a top-level aggregation executes within the context of the executed
query/filters of the search request).

There are many different types of aggregations, each with its own purpose and output. To better understand these types,
it is often easier to break them into two main families:

_Bucketing_::
    A family of aggregations that build buckets, where each bucket is associated with a _key_ and a document
    criterion. When the aggregation is executed, the criterion of every bucket is evaluated against each document in
    the context, and when a criterion matches, the document is considered to "fall in" the relevant bucket.
    By the end of the aggregation process, we'll end up with a list of buckets, each one with a set of
    documents that "belong" to it.

_Metric_::
    Aggregations that keep track of and compute metrics over a set of documents.

The interesting part comes next. Since each bucket effectively defines a document set (all documents belonging to
the bucket), one can potentially associate aggregations on the bucket level, and those will execute within the context
of that bucket. This is where the real power of aggregations kicks in: *aggregations can be nested!*

NOTE: Bucketing aggregations can have sub-aggregations (bucketing or metric). The sub-aggregations will be computed for
      the buckets which their parent aggregation generates. There is no hard limit on the level/depth of nested
      aggregations (one can nest an aggregation under a "parent" aggregation, which is itself a sub-aggregation of
      another higher-level aggregation).

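
For example, a metric aggregation can be nested under a bucketing aggregation so that it is computed once per bucket. The sketch below (the aggregation names and the `color` and `price` fields are illustrative, not part of any predefined mapping) averages the price of the documents falling in each `terms` bucket:

[source,js]
--------------------------------------------------
"aggregations" : {
    "colors" : {
        "terms" : { "field" : "color" },
        "aggregations" : {
            "avg_price" : { "avg" : { "field" : "price" } }
        }
    }
}
--------------------------------------------------
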
[float]
=== Structuring Aggregations

The following snippet captures the basic structure of aggregations:

[source,js]
--------------------------------------------------
"aggregations" : {
    "<aggregation_name>" : {
        "<aggregation_type>" : {
            <aggregation_body>
        }
        [,"meta" : { [<meta_data_body>] } ]?
        [,"aggregations" : { [<sub_aggregation>]+ } ]?
    }
    [,"<aggregation_name_2>" : { ... } ]*
}
--------------------------------------------------

The `aggregations` object (the key `aggs` can also be used) in the JSON holds the aggregations to be computed. Each aggregation
is associated with a logical name that the user defines (e.g. if the aggregation computes the average price, then it would
make sense to name it `avg_price`). These logical names will also be used to uniquely identify the aggregations in the
response. Each aggregation has a specific type (`<aggregation_type>` in the above snippet), which is typically the first
key within the named aggregation body. Each type of aggregation defines its own body, depending on the nature of the
aggregation (e.g. an `avg` aggregation on a specific field will define the field on which the average will be calculated).
At the same level as the aggregation type definition, one can optionally define a set of additional aggregations,
though this only makes sense if the aggregation you defined is of a bucketing nature. In this scenario, the
sub-aggregations you define on the bucketing aggregation level will be computed for all the buckets built by the
bucketing aggregation. For example, if you define a set of aggregations under the `range` aggregation, the
sub-aggregations will be computed for the range buckets that are defined.

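
To make this concrete, the following sketch (the `price_ranges` and `avg_price` names and the `price` field are illustrative) nests an `avg` aggregation under a `range` aggregation, so the average price is computed separately for each range bucket:

[source,js]
--------------------------------------------------
"aggregations" : {
    "price_ranges" : {
        "range" : {
            "field" : "price",
            "ranges" : [
                { "to" : 50 },
                { "from" : 50, "to" : 100 },
                { "from" : 100 }
            ]
        },
        "aggregations" : {
            "avg_price" : { "avg" : { "field" : "price" } }
        }
    }
}
--------------------------------------------------
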
[float]
==== Values Source

Some aggregations work on values extracted from the aggregated documents. Typically, the values will be extracted from
a specific document field, which is set using the `field` key of the aggregation. It is also possible to define a
<<modules-scripting,`script`>> which will generate the values (per document).

TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.

When both `field` and `script` settings are configured for the aggregation, the script will be treated as a
`value script`. While normal scripts are evaluated on a document level (i.e. the script has access to all the data
associated with the document), value scripts are evaluated on the *value* level. In this mode, the values are extracted
from the configured `field` and the `script` is used to apply a "transformation" over these values.

["NOTE",id="aggs-script-note"]
===============================
When working with scripts, the `lang` and `params` settings can also be defined. The former defines the scripting
language which is used (assuming the proper language is available in Elasticsearch, either by default or as a plugin). The latter
enables defining all the "dynamic" expressions in the script as parameters, which allows the script itself to remain static
between calls (this will ensure the use of the cached compiled scripts in Elasticsearch).
===============================

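
As an example, the following sketch (the `price` field, the `correction` parameter, and the use of the Groovy `_value` variable are illustrative and depend on the scripting language you have configured) applies a correction factor to each extracted value before it is averaged:

[source,js]
--------------------------------------------------
"aggregations" : {
    "avg_corrected_price" : {
        "avg" : {
            "field" : "price",
            "script" : "_value * correction",
            "lang" : "groovy",
            "params" : {
                "correction" : 1.2
            }
        }
    }
}
--------------------------------------------------
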
Scripts can generate a single value or multiple values per document. When generating multiple values, one can use the
`script_values_sorted` setting to indicate whether these values are sorted or not. Internally, Elasticsearch can
perform optimizations when dealing with sorted values (for example, with the `min` aggregation, knowing the values are
sorted, Elasticsearch will skip iterating over all the values and rely on the first value in the list being the
minimum value among all the values associated with the same document).

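
A hypothetical sketch of this setting (assuming a multi-valued `prices` field and a scripting language in which `doc['prices'].values` returns that field's values in sorted order) might look like:

[source,js]
--------------------------------------------------
"aggregations" : {
    "min_price" : {
        "min" : {
            "script" : "doc['prices'].values",
            "script_values_sorted" : true
        }
    }
}
--------------------------------------------------
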
[float]
=== Metrics Aggregations

The aggregations in this family compute metrics based on values extracted in one way or another from the documents that
are being aggregated. The values are typically extracted from the fields of the document (using the field data), but
can also be generated using scripts.

Numeric metrics aggregations are a special type of metrics aggregation which output numeric values. Some aggregations output
a single numeric metric (e.g. `avg`) and are called `single-value numeric metrics aggregations`, while others generate multiple
metrics (e.g. `stats`) and are called `multi-value numeric metrics aggregations`. The distinction between single-value and
multi-value numeric metrics aggregations plays a role when these aggregations serve as direct sub-aggregations of some
bucket aggregations (some bucket aggregations enable you to sort the returned buckets based on the numeric metrics in each bucket).

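
For example, the buckets of a `terms` aggregation can be sorted on a single-value metric sub-aggregation by referring to it by name in the `order` setting (the `brands` and `avg_price` names and the fields below are illustrative):

[source,js]
--------------------------------------------------
"aggregations" : {
    "brands" : {
        "terms" : {
            "field" : "brand",
            "order" : { "avg_price" : "desc" }
        },
        "aggregations" : {
            "avg_price" : { "avg" : { "field" : "price" } }
        }
    }
}
--------------------------------------------------
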
[float]
=== Bucket Aggregations

Bucket aggregations don't calculate metrics over fields like the metrics aggregations do, but instead, they create
buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type) which determines
whether or not a document in the current context "falls" into it. In other words, the buckets effectively define document
sets. In addition to the buckets themselves, the `bucket` aggregations also compute and return the number of documents
that "fell into" each bucket.

Bucket aggregations, as opposed to `metrics` aggregations, can hold sub-aggregations. These sub-aggregations will be
aggregated for the buckets created by their "parent" bucket aggregation.

There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
define a fixed number of buckets, and others dynamically create the buckets during the aggregation process.

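
To illustrate the single-bucket case, a `filter` aggregation creates exactly one bucket holding the documents that match its filter (the `in_stock_products` name and the `in_stock` field are illustrative); `range` is an example of a fixed number of buckets, and `terms` of dynamically created buckets:

[source,js]
--------------------------------------------------
"aggregations" : {
    "in_stock_products" : {
        "filter" : { "term" : { "in_stock" : true } }
    }
}
--------------------------------------------------
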
[float]
=== Reducer Aggregations

coming[2.0.0]

experimental[]

Reducer aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of reducer, each computing different information from
other aggregations, but these types can be broken down into two families:

_Parent_::
    A family of reducer aggregations that is provided with the output of its parent aggregation and is able
    to compute new buckets or new aggregations to add to existing buckets.

_Sibling_::
    Reducer aggregations that are provided with the output of a sibling aggregation and are able to compute a
    new aggregation which will be at the same level as the sibling aggregation.

Reducer aggregations can reference the aggregations they need to perform their computation by using the `buckets_path`
parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the
<<search-aggregations-bucket-terms-aggregation-order, terms aggregation order>> section.

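
As a sketch (assuming a sibling reducer such as `max_bucket` is available; the `sales_per_month` and `sales` names and the fields are illustrative), a reducer could reference a metric inside a sibling bucketing aggregation like this:

[source,js]
--------------------------------------------------
"aggregations" : {
    "sales_per_month" : {
        "date_histogram" : { "field" : "date", "interval" : "month" },
        "aggregations" : {
            "sales" : { "sum" : { "field" : "price" } }
        }
    },
    "max_monthly_sales" : {
        "max_bucket" : { "buckets_path" : "sales_per_month>sales" }
    }
}
--------------------------------------------------
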
Reducer aggregations cannot have sub-aggregations, but depending on the type they can reference another reducer in the
`buckets_path`, allowing reducers to be chained.

NOTE: Because reducer aggregations only add to the output, when chaining reducer aggregations the output of each reducer will be
      included in the final output.

[float]
=== Caching heavy aggregations

Frequently used aggregations (e.g. for display on the home page of a website)
can be cached for faster responses. These cached results are the same results
that would be returned by an uncached aggregation -- you will never get stale
results.

See <<index-modules-shard-query-cache>> for more details.

[float]
=== Returning only aggregation results

There are many occasions when aggregations are required but search hits are not. For these cases, the hits can be ignored by
setting `size=0`. For example:

[source,js]
--------------------------------------------------
$ curl -XGET 'http://localhost:9200/twitter/tweet/_search' -d '{
  "size": 0,
  "aggregations": {
    "my_agg": {
      "terms": {
        "field": "text"
      }
    }
  }
}
'
--------------------------------------------------

Setting `size` to `0` avoids executing the fetch phase of the search, making the request more efficient.

[float]
=== Metadata

You can associate a piece of metadata with individual aggregations at request time that will be returned in place
at response time.

Consider this example where we want to associate the color blue with our `terms` aggregation.

[source,js]
--------------------------------------------------
{
    ...
    "aggs": {
        "titles": {
            "terms": {
                "field": "title"
            },
            "meta": {
                "color": "blue"
            }
        }
    }
}
--------------------------------------------------

Then that piece of metadata will be returned in place for our `titles` terms aggregation:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "titles": {
            "meta": {
                "color" : "blue"
            },
            "buckets": [
            ]
        }
    }
}
--------------------------------------------------

include::aggregations/metrics.asciidoc[]

include::aggregations/bucket.asciidoc[]

include::aggregations/reducer.asciidoc[]