[role="xpack"]
[testenv="basic"]
[[rollup-search]]
=== Rollup search
++++
<titleabbrev>Rollup search</titleabbrev>
++++
Enables searching rolled-up data using the standard query DSL.
experimental[]
[[rollup-search-request]]
==== {api-request-title}
`GET <target>/_rollup_search`
[[rollup-search-desc]]
==== {api-description-title}
The rollup search endpoint is needed because, internally, rolled-up documents
utilize a different document structure than the original data. The rollup search
endpoint rewrites standard query DSL into a format that matches the rollup
documents, then takes the response and rewrites it back to what a client would
expect given the original query.
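
For example, internally a rollup index stores flattened summary documents rather
than copies of the raw documents. The sketch below is purely illustrative (the
exact field layout is an internal detail that depends on the job configuration;
the names here loosely follow the `sensor` job defined later on this page), but
it shows why a standard query cannot run against rollup documents unmodified:

[source,js]
--------------------------------------------------
{
  "_rollup.id" : "sensor",
  "timestamp.date_histogram.timestamp" : 1533071400000,
  "timestamp.date_histogram.interval" : "1h",
  "timestamp.date_histogram._count" : 22,
  "node.terms.value" : "a",
  "temperature.max.value" : 202.0,
  "voltage.avg.value" : 4.1
}
--------------------------------------------------
// NOTCONSOLE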
[[rollup-search-path-params]]
==== {api-path-parms-title}
`<target>`::
+
--
(Required, string)
Comma-separated list of data streams and indices used to limit
the request. Wildcard expressions (`*`) are supported.
This target can include both rollup and non-rollup indices.
Rules for the `<target>` parameter:
- At least one data stream, index, or wildcard expression must be specified.
This target can include a rollup or non-rollup index. For data streams, the
stream's backing indices can only serve as non-rollup indices. Omitting the
`<target>` parameter or using `_all` is not permitted.
- Multiple non-rollup indices may be specified.
- Only one rollup index may be specified. If more than one are supplied, an
exception occurs.
- Wildcard expressions may be used, but if they match more than one rollup index, an
exception occurs. However, you can use an expression to match multiple non-rollup
indices or data streams. For an example of a valid multi-index target, see the
sketch after this list.
--
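
For example, the following target combines two non-rollup indices with a single
rollup index, which satisfies the rules above. This is an illustrative sketch:
`sensor-2` is a hypothetical raw index, while `sensor-1` and `sensor_rollup` are
the indices used in the examples below.

[source,js]
--------------------------------------------------
GET sensor-1,sensor-2,sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE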
[[rollup-search-request-body]]
==== {api-request-body-title}
The request body supports a subset of features from the regular Search API. It
supports:
- `query` param for specifying a DSL query, subject to some limitations
(see <<rollup-search-limitations>> and <<rollup-agg-limitations>>)
- `aggregations` param for specifying aggregations
Functionality that is not available:
- `size`: Because rollups work on pre-aggregated data, no search hits can be
returned, so `size` must be set to zero or omitted entirely.
- `highlighter`, `suggestors`, `post_filter`, `profile`, `explain`: These are
similarly disallowed.
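
As a rough sketch, a request that combines both supported params might look like
the following. It assumes the `sensor_rollup` index and job configuration from
the examples below; the `range` query targets `timestamp`, the field grouped by
the job's `date_histogram`, which is the kind of query rollup search is able to
rewrite.

[source,js]
--------------------------------------------------
GET sensor_rollup/_rollup_search
{
  "size": 0,
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-7d/d"
      }
    }
  },
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE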
[[rollup-search-example]]
==== {api-examples-title}
===== Historical-only search example
Imagine we have an index named `sensor-1` full of raw data, and we have created
a {rollup-job} with the following configuration:
[source,console]
--------------------------------------------------
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "1h",
      "delay": "7d"
    },
    "terms": {
      "fields": [ "node" ]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": [ "min", "max", "sum" ]
    },
    {
      "field": "voltage",
      "metrics": [ "avg" ]
    }
  ]
}
--------------------------------------------------
// TEST[setup:sensor_index]
This rolls up the `sensor-*` pattern and stores the results in `sensor_rollup`.
To search this rolled-up data, we need to use the `_rollup_search` endpoint.
However, you'll notice that we can use regular query DSL to search the rolled-up
data:
[source,console]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sensor_prefab_data]
// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
The query is targeting the `sensor_rollup` data, since this contains the rollup
data as configured in the job. A `max` aggregation has been used on the
`temperature` field, yielding the following response:
[source,console-result]
----
{
  "took" : 102,
  "timed_out" : false,
"terminated_early" : false,
"_shards" : ... ,
"hits" : {
"total" : {
"value": 0,
"relation": "eq"
},
"max_score" : 0.0,
"hits" : [ ]
},
"aggregations" : {
"max_temperature" : {
"value" : 202.0
}
}
}
----
// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]
The response is exactly as you'd expect from a regular query + aggregation; it
provides some metadata about the request (`took`, `_shards`, etc.), the search
hits (which are always empty for rollup searches), and the aggregation response.
Rollup searches are limited to functionality that was configured in the
{rollup-job}. For example, we are not able to calculate the average temperature
because `avg` was not one of the configured metrics for the `temperature` field.
If we try to execute that search:
[source,console]
--------------------------------------------------
GET sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_temperature": {
      "avg": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]
// TEST[catch:/illegal_argument_exception/]
[source,console-result]
----
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.",
        "stack_trace": ...
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.",
    "stack_trace": ...
  },
  "status": 400
}
----
// TESTRESPONSE[s/"stack_trace": \.\.\./"stack_trace": $body.$_path/]
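
By contrast, an `avg` aggregation on the `voltage` field can be answered,
because `avg` is one of the configured metrics for `voltage` in the job above.
A sketch of such a request (the agg name `avg_voltage` is arbitrary):

[source,js]
--------------------------------------------------
GET sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_voltage": {
      "avg": {
        "field": "voltage"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE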
===== Searching both historical rollup and non-rollup data
The rollup search API has the capability to search across both "live"
non-rollup data and the aggregated rollup data. This is done by simply adding
the live indices to the URI:
[source,console]
--------------------------------------------------
GET sensor-1,sensor_rollup/_rollup_search <1>
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]
// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
<1> Note the URI now searches `sensor-1` and `sensor_rollup` at the same time.
When the search is executed, the rollup search endpoint does two things:
1. The original request is sent to the non-rollup index unaltered.
2. A rewritten version of the original request is sent to the rollup index.
When the two responses are received, the endpoint rewrites the rollup response
and merges the two together. During the merging process, if there is any overlap
in buckets between the two responses, the buckets from the non-rollup index are
used.
The response to the above query looks as expected, despite spanning rollup and
non-rollup indices:
[source,console-result]
----
{
  "took" : 102,
  "timed_out" : false,
"terminated_early" : false,
"_shards" : ... ,
"hits" : {
"total" : {
"value": 0,
"relation": "eq"
},
"max_score" : 0.0,
"hits" : [ ]
},
"aggregations" : {
"max_temperature" : {
"value" : 202.0
}
}
}
----
// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]