
[[search-aggregations]]
== Aggregations
Aggregations grew out of the <<search-facets, facets>> module and the long experience of how users use it
(and would like to use it) for real-time data analytics purposes. As such, they serve as the next-generation
replacement for the functionality we currently refer to as "faceting".

<<search-facets, Facets>> provide a great way to aggregate data within a document set context.
This context is defined by the executed query in combination with the different levels of filters that can be defined
(filtered queries, top-level filters, and facet level filters). While powerful, their implementation is not designed
from the ground up to support complex aggregations and is thus limited.

The aggregations module breaks the barriers the current facet implementation put in place. The new name ("Aggregations")
also indicates the intention here - a generic yet extremely powerful framework for building aggregations - any type of
aggregation.
An aggregation can be seen as a _unit-of-work_ that builds analytic information over a set of documents. The context of
the execution defines what this document set is (e.g. a top-level aggregation executes within the context of the executed
query/filters of the search request).

There are many different types of aggregations, each with its own purpose and output. To better understand these types,
it is often easier to break them into two main families:

_Bucketing_::
    A family of aggregations that build buckets, where each bucket is associated with a _key_ and a document
    criterion. When the aggregation is executed, all the bucket criteria are evaluated on every document in
    the context and when a criterion matches, the document is considered to "fall in" the relevant bucket.
    By the end of the aggregation process, we'll end up with a list of buckets - each one with a set of
    documents that "belong" to it.

_Metric_::
    Aggregations that keep track and compute metrics over a set of documents.

The interesting part comes next. Since each bucket effectively defines a document set (all documents belonging to
the bucket), one can potentially associate aggregations on the bucket level, and those will execute within the context
of that bucket. This is where the real power of aggregations kicks in: *aggregations can be nested!*

NOTE: Bucketing aggregations can have sub-aggregations (bucketing or metric). The sub-aggregations will be computed for
      the buckets that their parent aggregation generates. There is no hard limit on the level/depth of nested
      aggregations (one can nest an aggregation under a "parent" aggregation, which is itself a sub-aggregation of
      another higher-level aggregation).
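
As a quick illustration of nesting, here is a minimal sketch (the `genre` and `price` fields and the aggregation
names are assumptions for the example): a `terms` bucketing aggregation with an `avg` metric computed inside every
bucket it creates.

[source,js]
--------------------------------------------------
{
    "aggregations" : {
        "genres" : {
            "terms" : { "field" : "genre" },
            "aggregations" : {
                "avg_price" : {
                    "avg" : { "field" : "price" }
                }
            }
        }
    }
}
--------------------------------------------------

In the response, each `genres` bucket carries its own `avg_price` value, computed only from the documents that fell
into that bucket.
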
[float]
=== Structuring Aggregations

The following snippet captures the basic structure of aggregations:

[source,js]
--------------------------------------------------
"aggregations" : {
    "<aggregation_name>" : {
        "<aggregation_type>" : {
            <aggregation_body>
        }
        [,"aggregations" : { [<sub_aggregation>]+ } ]?
    }
    [,"<aggregation_name_2>" : { ... } ]*
}
--------------------------------------------------

The `aggregations` object (the key `aggs` can also be used) in the JSON holds the aggregations to be computed. Each aggregation
is associated with a logical name that the user defines (e.g. if the aggregation computes the average price, then it would
make sense to name it `avg_price`). These logical names will also be used to uniquely identify the aggregations in the
response. Each aggregation has a specific type (`<aggregation_type>` in the above snippet) and is typically the first
key within the named aggregation body. Each type of aggregation defines its own body, depending on the nature of the
aggregation (e.g. an `avg` aggregation on a specific field will define the field on which the average will be calculated).

At the same level of the aggregation type definition, one can optionally define a set of additional aggregations,
though this only makes sense if the aggregation you defined is of a bucketing nature. In this scenario, the
sub-aggregations you define on the bucketing aggregation level will be computed for all the buckets built by the
bucketing aggregation. For example, if you define a set of aggregations under the `range` aggregation, the
sub-aggregations will be computed for the range buckets that are defined.
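
To make this structure concrete, here is a sketch of the `range` case just described (the `price` field and the
aggregation names are assumptions): a top-level `range` aggregation with an `avg` sub-aggregation that is computed
once per range bucket.

[source,js]
--------------------------------------------------
{
    "aggregations" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            },
            "aggregations" : {
                "avg_price" : {
                    "avg" : { "field" : "price" }
                }
            }
        }
    }
}
--------------------------------------------------
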
[float]
==== Values Source

Some aggregations work on values extracted from the aggregated documents. Typically, the values will be extracted from
a specific document field, which is set using the `field` key for the aggregation. It is also possible to define a
<<modules-scripting,`script`>> which will generate the values (per document).

When both `field` and `script` settings are configured for the aggregation, the script will be treated as a
`value script`. While normal scripts are evaluated on the document level (i.e. the script has access to all the data
associated with the document), value scripts are evaluated on the *value* level. In this mode, the values are extracted
from the configured `field` and the `script` is used to apply a "transformation" over these values.
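
A minimal value script sketch, assuming a numeric `price` field: the values are read from `price` and the script
transforms each extracted value (referenced here as `_value`).

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "avg_corrected_price" : {
            "avg" : {
                "field" : "price",
                "script" : "_value * 1.2"
            }
        }
    }
}
--------------------------------------------------
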
["NOTE",id="aggs-script-note"]
===============================
When working with scripts, the `lang` and `params` settings can also be defined. The former defines the scripting
2014-03-12 21:28:40 -04:00
language which is used (assuming the proper language is available in Elasticsearch, either by default or as a plugin). The latter
enables defining all the "dynamic" expressions in the script as parameters, which enables the script to keep itself static
between calls (this will ensure the use of the cached compiled scripts in Elasticsearch).
===============================
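
For example, the value script above can be kept static by moving the factor into `params` (a sketch; the parameter
name `correction` is arbitrary):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "avg_corrected_price" : {
            "avg" : {
                "field" : "price",
                "script" : "_value * correction",
                "params" : {
                    "correction" : 1.2
                }
            }
        }
    }
}
--------------------------------------------------

Because the script text no longer changes between requests, the compiled script can be reused from the cache.
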
Scripts can generate a single value or multiple values per document. When generating multiple values, one can use the
`script_values_sorted` setting to indicate whether these values are sorted or not. Internally, Elasticsearch can
perform optimizations when dealing with sorted values (for example, with the `min` aggregation, knowing the values are
sorted, Elasticsearch will skip the iteration over all the values and rely on the first value in the list to be the
minimum value among all other values associated with the same document).
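
A hedged sketch of this hint, assuming a hypothetical multi-valued numeric `prices` field whose per-document values
come back already sorted:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "min_price" : {
            "min" : {
                "script" : "doc['prices'].values",
                "script_values_sorted" : true
            }
        }
    }
}
--------------------------------------------------
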
[float]
=== Metrics Aggregations

The aggregations in this family compute metrics based on values extracted in one way or another from the documents that
are being aggregated. The values are typically extracted from the fields of the document (using the field data), but
can also be generated using scripts.

Numeric metrics aggregations are a special type of metrics aggregation which output numeric values. Some aggregations output
a single numeric metric (e.g. `avg`) and are called `single-value numeric metrics aggregations`; others generate multiple
metrics (e.g. `stats`) and are called `multi-value numeric metrics aggregations`. The distinction between single-value and
multi-value numeric metrics aggregations plays a role when these aggregations serve as direct sub-aggregations of some
bucket aggregations (some bucket aggregations enable you to sort the returned buckets based on the numeric metrics in each bucket).
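
For example, a `terms` aggregation can sort its buckets by a single-value metric sub-aggregation such as `avg`
(a sketch; the `genre` and `price` fields are assumptions):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "order" : { "avg_price" : "desc" }
            },
            "aggs" : {
                "avg_price" : { "avg" : { "field" : "price" } }
            }
        }
    }
}
--------------------------------------------------
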
[float]
=== Bucket Aggregations

Bucket aggregations don't calculate metrics over fields like the metrics aggregations do, but instead, they create
buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type) which determines
whether or not a document in the current context "falls" into it. In other words, the buckets effectively define document
sets. In addition to the buckets themselves, the `bucket` aggregations also compute and return the number of documents
that "fell into" each bucket.

Bucket aggregations, as opposed to `metrics` aggregations, can hold sub-aggregations. These sub-aggregations will be
aggregated for the buckets created by their "parent" bucket aggregation.

There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
define a fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process.
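
For instance, the `filter` aggregation defines a single bucket, a `range` aggregation defines a fixed set of buckets,
and a `terms` aggregation creates buckets dynamically, one per unique term. A minimal single-bucket sketch
(the `in_stock` and `price` fields are assumptions):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "in_stock_products" : {
            "filter" : { "term" : { "in_stock" : true } },
            "aggs" : {
                "avg_price" : { "avg" : { "field" : "price" } }
            }
        }
    }
}
--------------------------------------------------
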
[float]
=== Caching heavy aggregations

coming[1.4.0]

Frequently used aggregations (e.g. for display on the home page of a website)
can be cached for faster responses. These cached results are the same results
that would be returned by an uncached aggregation -- you will never get stale
results.

See <<index-modules-shard-query-cache>> for more details.
[float]
=== Returning only aggregation results

There are many occasions when aggregations are required but search hits are not. For these cases the hits can be ignored by
adding `search_type=count` to the request URL parameters. For example:

[source,js]
--------------------------------------------------
$ curl -XGET 'http://localhost:9200/twitter/tweet/_search?search_type=count' -d '{
  "aggregations": {
    "my_agg": {
      "terms": {
        "field": "text"
      }
    }
  }
}
'
--------------------------------------------------

Setting `search_type` to `count` avoids executing the fetch phase of the search, making the request more efficient. See
<<search-request-search-type>> for more information on the `search_type` parameter.

include::aggregations/metrics.asciidoc[]

include::aggregations/bucket.asciidoc[]