[role="xpack"]
[testenv="basic"]
[[rollup-search]]
=== Rollup search
++++
<titleabbrev>Rollup search</titleabbrev>
++++

Enables searching rolled-up data using the standard query DSL.

experimental[]

[[rollup-search-request]]
==== {api-request-title}

`GET <target>/_rollup_search`

[[rollup-search-desc]]
==== {api-description-title}

The rollup search endpoint is needed because, internally, rolled-up documents
utilize a different document structure than the original data. The rollup search
endpoint rewrites standard query DSL into a format that matches the rollup
documents, then takes the response and rewrites it back to what a client would
expect given the original query.

[[rollup-search-path-params]]
==== {api-path-parms-title}

`<target>`::
+
--
(Required, string)
Comma-separated list of data streams and indices used to limit
the request. Wildcard expressions (`*`) are supported.

This target can include both rollup and non-rollup indices.

Rules for the `<target>` parameter:

- At least one data stream, index, or wildcard expression must be specified.
This target can include a rollup or non-rollup index. For data streams, the
stream's backing indices can only serve as non-rollup indices. Omitting the
`<target>` parameter or using `_all` is not permitted.
- Multiple non-rollup indices may be specified.
- Only one rollup index may be specified. If more than one is supplied, an
exception occurs.
- Wildcard expressions may be used, but if they match more than one rollup
index, an exception occurs. However, you can use an expression to match
multiple non-rollup indices or data streams.
--
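
For instance, under these rules a single request could target two raw indices
and one rollup index at once. The index names here are hypothetical, shown only
to illustrate a valid `<target>` list:

[source,console]
----
GET sensor-1,sensor-2,sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
----
// NOTCONSOLE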

[[rollup-search-request-body]]
==== {api-request-body-title}

The request body supports a subset of features from the regular Search API. It
supports:

- `query` param for specifying a DSL query, subject to some limitations
(see <<rollup-search-limitations>> and <<rollup-agg-limitations>>)
- `aggregations` param for specifying aggregations

Functionality that is not available:

- `size`: Because rollups work on pre-aggregated data, no search hits can be
returned, so `size` must be set to zero or omitted entirely.
- `highlighter`, `suggesters`, `post_filter`, `profile`, `explain`: These are
similarly disallowed.
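
Putting these constraints together, a request body that combines `query` and
`aggregations` might look like the following sketch. This is hypothetical and
assumes a job that grouped on a `node` term and collected metrics on
`temperature`, like the example job shown later on this page:

[source,console]
----
GET sensor_rollup/_rollup_search
{
  "size": 0,
  "query": {
    "term": {
      "node": "a"
    }
  },
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
----
// NOTCONSOLE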

[[rollup-search-example]]
==== {api-examples-title}

===== Historical-only search example

Imagine we have an index named `sensor-1` full of raw data, and we have created
a {rollup-job} with the following configuration:

[source,console]
--------------------------------------------------
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "1h",
      "delay": "7d"
    },
    "terms": {
      "fields": [ "node" ]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": [ "min", "max", "sum" ]
    },
    {
      "field": "voltage",
      "metrics": [ "avg" ]
    }
  ]
}
--------------------------------------------------
// TEST[setup:sensor_index]

This rolls up the `sensor-*` pattern and stores the results in `sensor_rollup`.
To search this rolled-up data, we need to use the `_rollup_search` endpoint.
However, you'll notice that we can use the regular query DSL to search the
rolled-up data:

[source,console]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sensor_prefab_data]
// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
The query is targeting the `sensor_rollup` data, since this contains the rollup
data as configured in the job. A `max` aggregation has been used on the
`temperature` field, yielding the following response:

[source,console-result]
----
{
  "took" : 102,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : {
      "value": 0,
      "relation": "eq"
    },
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "max_temperature" : {
      "value" : 202.0
    }
  }
}
----
// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]
The response is exactly as you'd expect from a regular query + aggregation; it
provides some metadata about the request (`took`, `_shards`, etc.), the search
hits (which are always empty for rollup searches), and the aggregation response.
Rollup searches are limited to functionality that was configured in the
{rollup-job}. For example, we are not able to calculate the average temperature
because `avg` was not one of the configured metrics for the `temperature` field.
If we try to execute that search:

[source,console]
--------------------------------------------------
GET sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_temperature": {
      "avg": {
        "field": "temperature"
      }
    }
  }
}
----
// TEST[continued]
// TEST[catch:/illegal_argument_exception/]

[source,console-result]
----
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.",
        "stack_trace": ...
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.",
    "stack_trace": ...
  },
  "status": 400
}
----
// TESTRESPONSE[s/"stack_trace": \.\.\./"stack_trace": $body.$_path/]

===== Searching both historical rollup and non-rollup data

The rollup search API has the capability to search across both "live"
non-rollup data and the aggregated rollup data. This is done by simply adding
the live indices to the URI:

[source,console]
--------------------------------------------------
GET sensor-1,sensor_rollup/_rollup_search <1>
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
----
// TEST[continued]
// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
<1> Note that the URI now searches `sensor-1` and `sensor_rollup` at the same time.

When the search is executed, the rollup search endpoint does two things:

1. The original request is sent to the non-rollup index unaltered.
2. A rewritten version of the original request is sent to the rollup index.
When the two responses are received, the endpoint rewrites the rollup response
and merges the two together. During the merging process, if there is any overlap
in buckets between the two responses, the buckets from the non-rollup index are
used.
The response to the above query looks as expected, despite spanning rollup and
non-rollup indices:

[source,console-result]
----
{
  "took" : 102,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : {
      "value": 0,
      "relation": "eq"
    },
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "max_temperature" : {
      "value" : 202.0
    }
  }
}
----
// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]