Aggs: Change the default `min_doc_count` to 0 on histograms.
The assumption is that gaps in histograms are generally undesirable, for instance if you want to build a visualization from them. Additionally, we are building new aggregations that require that there are no gaps to work correctly (e.g. derivatives).
commit e5be85d586 (parent 969f53e399)
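As an aside, the bucketing behavior this commit makes the default can be sketched outside of Elasticsearch. The following Python is an illustrative model only (not Elasticsearch code, and not part of this commit): with `min_doc_count` of 0, every key between the first and last populated bucket is emitted, gaps filled with zero-count buckets; a higher `min_doc_count` filters buckets by document count instead.

```python
from collections import Counter

def histogram_buckets(values, interval, min_doc_count=0):
    """Model of histogram bucketing: keys are multiples of `interval`.

    With min_doc_count=0 (the new default), every key between the first and
    last populated bucket is emitted, gaps included; a higher min_doc_count
    filters buckets by their document count instead.
    """
    counts = Counter((v // interval) * interval for v in values)
    lo, hi = min(counts), max(counts)
    buckets = [{"key": key, "doc_count": counts.get(key, 0)}
               for key in range(lo, hi + interval, interval)]
    return [b for b in buckets if b["doc_count"] >= min_doc_count]

# Hypothetical prices mirroring the doc example below:
# 4 docs in [50, 100), none in [100, 150), 3 in [150, 200)
prices = [55, 60, 70, 85, 150, 160, 175]
print(histogram_buckets(prices, 50))                   # empty [100, 150) bucket kept
print(histogram_buckets(prices, 50, min_doc_count=1))  # empty bucket filtered out
```

With the new default, the empty `[100, 150)` bucket is returned with `doc_count: 0`, which is what downstream aggregations such as derivatives rely on.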
@@ -139,6 +139,8 @@ equivalent to the former `pre_zone` option. Setting `time_zone` to a value like
 being applied in the specified time zone but In addition to this, also the `pre_zone_adjust_large_interval` is removed because we
 now always return dates and bucket keys in UTC.
 
+Both the `histogram` and `date_histogram` aggregations now have a default `min_doc_count` of `0` instead of `1` previously.
+
 `include`/`exclude` filtering on the `terms` aggregation now uses the same syntax as regexp queries instead of the Java syntax. While simple
 regexps should still work, more complex ones might need some rewriting. Also, the `flags` parameter is not supported anymore.
 
@@ -119,7 +119,7 @@ Response:
 
 Like with the normal <<search-aggregations-bucket-histogram-aggregation,histogram>>, both document level scripts and
 value level scripts are supported. It is also possible to control the order of the returned buckets using the `order`
-settings and filter the returned buckets based on a `min_doc_count` setting (by default all buckets with
-`min_doc_count > 0` will be returned). This histogram also supports the `extended_bounds` setting, which enables extending
-the bounds of the histogram beyond the data itself (to read more on why you'd want to do that please refer to the
-explanation <<search-aggregations-bucket-histogram-aggregation-extended-bounds,here>>).
+settings and filter the returned buckets based on a `min_doc_count` setting (by default all buckets between the first
+bucket that matches documents and the last one are returned). This histogram also supports the `extended_bounds`
+setting, which enables extending the bounds of the histogram beyond the data itself (to read more on why you'd want to
+do that please refer to the explanation <<search-aggregations-bucket-histogram-aggregation-extended-bounds,here>>).
@@ -50,6 +50,10 @@ And the following may be the response:
             "key": 50,
             "doc_count": 4
         },
+        {
+            "key": 100,
+            "doc_count": 0
+        },
         {
             "key": 150,
             "doc_count": 3
@@ -60,10 +64,11 @@ And the following may be the response:
 }
 --------------------------------------------------
 
-The response above shows that none of the aggregated products has a price that falls within the range of `[100 - 150)`.
-By default, the response will only contain those buckets with a `doc_count` greater than 0. It is possible change that
-and request buckets with either a higher minimum count or even 0 (in which case elasticsearch will "fill in the gaps"
-and create buckets with zero documents). This can be configured using the `min_doc_count` setting:
+==== Minimum document count
+
+The response above shows that no documents have a price that falls within the range of `[100 - 150)`. By default the
+response will fill gaps in the histogram with empty buckets. It is possible to change that and request buckets with
+a higher minimum count thanks to the `min_doc_count` setting:
 
 [source,js]
 --------------------------------------------------
@@ -73,7 +78,7 @@ and create buckets with zero documents). This can be configured using the `min_doc_count` setting:
         "histogram" : {
             "field" : "price",
             "interval" : 50,
-            "min_doc_count" : 0
+            "min_doc_count" : 1
         }
     }
 }
@@ -96,10 +101,6 @@ Response:
             "key": 50,
             "doc_count": 4
         },
-        {
-            "key" : 100,
-            "doc_count" : 0 <1>
-        },
         {
             "key": 150,
             "doc_count": 3
@@ -110,13 +111,11 @@ Response:
     }
 }
 --------------------------------------------------
 
-<1> No documents were found that belong in this bucket, yet it is still returned with zero `doc_count`.
-
 [[search-aggregations-bucket-histogram-aggregation-extended-bounds]]
 By default the date_/histogram returns all the buckets within the range of the data itself, that is, the documents with
 the smallest values (on which with histogram) will determine the min bucket (the bucket with the smallest key) and the
 documents with the highest values will determine the max bucket (the bucket with the highest key). Often, when when
-requesting empty buckets (`"min_doc_count" : 0`), this causes a confusion, specifically, when the data is also filtered.
+requesting empty buckets, this causes a confusion, specifically, when the data is also filtered.
 
 To understand why, let's look at an example:
 
@@ -149,7 +148,6 @@ Example:
         "histogram" : {
             "field" : "price",
             "interval" : 50,
-            "min_doc_count" : 0,
             "extended_bounds" : {
                 "min" : 0,
                 "max" : 500
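The `extended_bounds` behavior the docs describe above can be modeled in a few lines. This is an illustrative Python sketch (not Elasticsearch code): buckets run from the smaller of the data's first key and the lower bound to the larger of the data's last key and the upper bound, and keys without matching documents get a zero count.

```python
from collections import Counter

def histogram_with_bounds(values, interval, bounds_min, bounds_max):
    # Buckets span from min(first data key, bounds_min key) to
    # max(last data key, bounds_max key); keys with no matching
    # documents get doc_count 0 (the min_doc_count = 0 behavior).
    counts = Counter((v // interval) * interval for v in values)
    lo = min(min(counts), (bounds_min // interval) * interval)
    hi = max(max(counts), (bounds_max // interval) * interval)
    return [{"key": key, "doc_count": counts.get(key, 0)}
            for key in range(lo, hi + interval, interval)]

# Hypothetical data: all documents fall in [50, 200), yet with bounds 0..500
# the response contains every bucket from key 0 through key 500.
buckets = histogram_with_bounds([55, 60, 150], 50, bounds_min=0, bounds_max=500)
print([b["key"] for b in buckets])
```

This is why `extended_bounds` mainly matters together with empty buckets: without them there would be nothing to show outside the data's own range.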
@@ -265,67 +263,6 @@ PATH := <AGG_NAME>[<AGG_SEPARATOR><AGG_NAME>]*[<METRIC_SEPARATOR
 The above will sort the buckets based on the avg rating among the promoted products
 
-==== Minimum document count
-
-It is possible to only return buckets that have a document count that is greater than or equal to a configured
-limit through the `min_doc_count` option.
-
-[source,js]
---------------------------------------------------
-{
-    "aggs" : {
-        "prices" : {
-            "histogram" : {
-                "field" : "price",
-                "interval" : 50,
-                "min_doc_count": 10
-            }
-        }
-    }
-}
---------------------------------------------------
-
-The above aggregation would only return buckets that contain 10 documents or more. Default value is `1`.
-
-NOTE: The special value `0` can be used to add empty buckets to the response between the minimum and the maximum buckets.
-Here is an example of what the response could look like:
-
-[source,js]
---------------------------------------------------
-{
-    "aggregations": {
-        "prices": {
-            "buckets": {
-                "0": {
-                    "key": 0,
-                    "doc_count": 2
-                },
-                "50": {
-                    "key": 50,
-                    "doc_count": 0
-                },
-                "150": {
-                    "key": 150,
-                    "doc_count": 3
-                },
-                "200": {
-                    "key": 150,
-                    "doc_count": 0
-                },
-                "250": {
-                    "key": 150,
-                    "doc_count": 0
-                },
-                "300": {
-                    "key": 150,
-                    "doc_count": 1
-                }
-            }
-        }
-    }
-}
---------------------------------------------------
-
 ==== Offset
 
 By default the bucket keys start with 0 and then continue in even spaced steps of `interval`, e.g. if the interval is 10 the first buckets
@@ -2,7 +2,8 @@
 === Derivative Aggregation
 
 A parent reducer aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
-aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0`.
+aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
+for `histogram` aggregations).
 
 The following snippet calculates the derivative of the total monthly `sales`:
 
@@ -13,8 +14,7 @@ The following snippet calculates the derivative of the total monthly `sales`:
     "sales_per_month" : {
         "date_histogram" : {
             "field" : "date",
-            "interval" : "month",
-            "min_doc_count" : 0
+            "interval" : "month"
         },
         "aggs": {
             "sales": {
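The commit message's point about derivatives can be seen in a tiny model. This Python sketch (illustrative only, not the Elasticsearch implementation) computes a first-order derivative as the difference between consecutive bucket values; if a bucket were silently missing, the difference would be taken across a gap and the result would be wrong, which is why the derivative requires a gapless (`min_doc_count: 0`) histogram.

```python
def derivative(bucket_values):
    # value[i] - value[i-1]; the first bucket has no predecessor, hence None.
    return [None] + [b - a for a, b in zip(bucket_values, bucket_values[1:])]

# Hypothetical monthly sales totals for three consecutive buckets.
monthly_sales = [550, 60, 375]
print(derivative(monthly_sales))  # [None, -490, 315]
```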
@@ -54,24 +54,22 @@ embedded like any other metric aggregation:
     "my_date_histo":{ <1>
         "date_histogram":{
             "field":"timestamp",
-            "interval":"day",
-            "min_doc_count": 0 <2>
+            "interval":"day"
         },
         "aggs":{
             "the_sum":{
-                "sum":{ "field": "lemmings" } <3>
+                "sum":{ "field": "lemmings" } <2>
             },
             "the_movavg":{
-                "moving_avg":{ "buckets_path": "the_sum" } <4>
+                "moving_avg":{ "buckets_path": "the_sum" } <3>
             }
         }
     }
 }
 --------------------------------------------------
 <1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals
-<2> We must specify "min_doc_count: 0" in our date histogram that all buckets are returned, even if they are empty.
-<3> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc)
-<4> Finally, we specify a `moving_avg` aggregation which uses "the_sum" metric as its input.
+<2> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc)
+<3> Finally, we specify a `moving_avg` aggregation which uses "the_sum" metric as its input.
 
 Moving averages are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally
 add normal metrics, such as a `sum`, inside of that histogram. Finally, the `moving_avg` is embedded inside the histogram.
@@ -85,8 +83,7 @@ A moving average can also be calculated on the document count of each bucket, in
     "my_date_histo":{
         "date_histogram":{
             "field":"timestamp",
-            "interval":"day",
-            "min_doc_count": 0
+            "interval":"day"
         },
         "aggs":{
             "the_movavg":{
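A moving average is another reducer that walks consecutive buckets, which is why it also benefits from gapless histograms. The sketch below is an illustrative Python model of a simple, unweighted trailing-window average per bucket; the actual `moving_avg` reducer supports several weighting models, so treat this only as an intuition aid.

```python
def moving_avg(values, window):
    # For each bucket, average over a trailing window of up to `window`
    # bucket values (shorter at the start of the series).
    out = []
    for i in range(1, len(values) + 1):
        chunk = values[max(0, i - window):i]
        out.append(sum(chunk) / len(chunk))
    return out

print(moving_avg([1, 2, 3, 4, 5], 3))  # [1.0, 1.5, 2.0, 3.0, 4.0]
```

If a bucket were dropped from the series instead of appearing with a zero value, every window crossing the gap would average the wrong neighbors.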
@@ -294,4 +291,4 @@ global trend is slightly positive, so the prediction makes a sharp u-turn and be
 
 [[double_prediction_global]]
 .Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.1
 image::images/reducers_movavg/double_prediction_global.png[]
@@ -86,7 +86,7 @@ public class DateHistogramParser implements Aggregator.Parser {
                 .build();
 
         boolean keyed = false;
-        long minDocCount = 1;
+        long minDocCount = 0;
         ExtendedBounds extendedBounds = null;
         InternalOrder order = (InternalOrder) Histogram.Order.KEY_ASC;
         String interval = null;
@@ -52,7 +52,7 @@ public class HistogramParser implements Aggregator.Parser {
                 .build();
 
         boolean keyed = false;
-        long minDocCount = 1;
+        long minDocCount = 0;
         InternalOrder order = (InternalOrder) InternalOrder.KEY_ASC;
         long interval = -1;
         ExtendedBounds extendedBounds = null;
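The two parser hunks above are the heart of the change: the field that holds the default is simply initialized to 0 instead of 1, and a request-supplied value still overrides it. A minimal Python sketch of that defaulting logic (illustrative only; the field names mirror the Java, but this is not the Elasticsearch parser):

```python
def parse_histogram(agg_source):
    # Initial values mirror the Java parser fields; a value present in the
    # request source overrides the default, exactly as before.
    return {
        "keyed": agg_source.get("keyed", False),
        "min_doc_count": agg_source.get("min_doc_count", 0),  # was 1 before this commit
        "interval": agg_source.get("interval", -1),
    }

print(parse_histogram({"field": "price", "interval": 50}))
```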
@@ -170,7 +170,7 @@ public class DateHistogramTests extends ElasticsearchIntegrationTest {
     @Test
     public void singleValuedField_WithTimeZone() throws Exception {
         SearchResponse response = client().prepareSearch("idx")
-                .addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).timeZone("+01:00")).execute()
+                .addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).minDocCount(1).timeZone("+01:00")).execute()
                 .actionGet();
         DateTimeZone tz = DateTimeZone.forID("+01:00");
         assertSearchResponse(response);
@@ -167,7 +167,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
         SearchResponse response = client()
                 .prepareSearch("idx")
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count"))
                                 .subAggregation(derivative("2nd_deriv").setBucketsPaths("deriv"))).execute().actionGet();
 
@@ -204,7 +204,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
         SearchResponse response = client()
                 .prepareSearch("idx")
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                 .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME))
                                 .subAggregation(derivative("deriv").setBucketsPaths("sum"))).execute().actionGet();
 
@@ -250,7 +250,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
         SearchResponse response = client()
                 .prepareSearch("idx")
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                 .subAggregation(stats("stats").field(SINGLE_VALUED_FIELD_NAME))
                                 .subAggregation(derivative("deriv").setBucketsPaths("stats.sum"))).execute().actionGet();
 
@@ -296,7 +296,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
         SearchResponse response = client()
                 .prepareSearch("idx_unmapped")
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count"))).execute().actionGet();
 
         assertSearchResponse(response);
@@ -312,7 +312,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
         SearchResponse response = client()
                 .prepareSearch("idx", "idx_unmapped")
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count"))).execute().actionGet();
 
         assertSearchResponse(response);
@@ -342,7 +342,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count"))).execute().actionGet();
 
         assertThat(searchResponse.getHits().getTotalHits(), equalTo(numDocsEmptyIdx));
@@ -371,7 +371,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx_rnd")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .extendedBounds(0l, (long) numBuckets_empty_rnd - 1)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count").gapPolicy(randomFrom(GapPolicy.values()))))
                 .execute().actionGet();
@@ -402,7 +402,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .subAggregation(derivative("deriv").setBucketsPaths("_count").gapPolicy(GapPolicy.INSERT_ZEROS))).execute()
                 .actionGet();
 
@@ -432,7 +432,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME))
                                 .subAggregation(derivative("deriv").setBucketsPaths("sum"))).execute().actionGet();
 
@@ -474,7 +474,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME))
                                 .subAggregation(derivative("deriv").setBucketsPaths("sum").gapPolicy(GapPolicy.INSERT_ZEROS))).execute()
                 .actionGet();
@@ -514,7 +514,7 @@ public class DerivativeTests extends ElasticsearchIntegrationTest {
                 .prepareSearch("empty_bucket_idx_rnd")
                 .setQuery(matchAllQuery())
                 .addAggregation(
-                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1).minDocCount(0)
+                        histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(1)
                                 .extendedBounds(0l, (long) numBuckets_empty_rnd - 1)
                                 .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME))
                                 .subAggregation(derivative("deriv").setBucketsPaths("sum").gapPolicy(gapPolicy))).execute().actionGet();
@@ -94,7 +94,7 @@ public class MaxBucketTests extends ElasticsearchIntegrationTest {
     @Test
     public void testDocCount_topLevel() throws Exception {
         SearchResponse response = client().prepareSearch("idx")
-                .addAggregation(histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                .addAggregation(histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                         .extendedBounds((long) minRandomValue, (long) maxRandomValue))
                 .addAggregation(maxBucket("max_bucket").setBucketsPaths("histo>_count")).execute().actionGet();
 
@@ -138,7 +138,7 @@ public class MaxBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue))
                         .subAggregation(maxBucket("max_bucket").setBucketsPaths("histo>_count"))).execute().actionGet();
 
@@ -232,7 +232,7 @@ public class MaxBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue)
                                         .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME)))
                         .subAggregation(maxBucket("max_bucket").setBucketsPaths("histo>sum"))).execute().actionGet();
@@ -291,7 +291,7 @@ public class MaxBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue)
                                         .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME)))
                         .subAggregation(maxBucket("max_bucket").setBucketsPaths("histo>sum").gapPolicy(GapPolicy.INSERT_ZEROS)))
@@ -370,7 +370,7 @@ public class MaxBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue))
                         .subAggregation(maxBucket("max_histo_bucket").setBucketsPaths("histo>_count")))
                 .addAggregation(maxBucket("max_terms_bucket").setBucketsPaths("terms>max_histo_bucket")).execute().actionGet();
@@ -94,7 +94,7 @@ public class MinBucketTests extends ElasticsearchIntegrationTest {
     @Test
     public void testDocCount_topLevel() throws Exception {
         SearchResponse response = client().prepareSearch("idx")
-                .addAggregation(histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                .addAggregation(histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                         .extendedBounds((long) minRandomValue, (long) maxRandomValue))
                 .addAggregation(minBucket("min_bucket").setBucketsPaths("histo>_count")).execute().actionGet();
 
@@ -138,7 +138,7 @@ public class MinBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue))
                         .subAggregation(minBucket("min_bucket").setBucketsPaths("histo>_count"))).execute().actionGet();
 
@@ -232,7 +232,7 @@ public class MinBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue)
                                         .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME)))
                         .subAggregation(minBucket("min_bucket").setBucketsPaths("histo>sum"))).execute().actionGet();
@@ -291,7 +291,7 @@ public class MinBucketTests extends ElasticsearchIntegrationTest {
                         .field("tag")
                         .order(Order.term(true))
                         .subAggregation(
-                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
+                                histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
                                         .extendedBounds((long) minRandomValue, (long) maxRandomValue)
                                         .subAggregation(sum("sum").field(SINGLE_VALUED_FIELD_NAME)))
                         .subAggregation(minBucket("min_bucket").setBucketsPaths("histo>sum").gapPolicy(GapPolicy.INSERT_ZEROS)))
@ -370,7 +370,7 @@ public class MinBucketTests extends ElasticsearchIntegrationTest {
|
||||||
.field("tag")
|
.field("tag")
|
||||||
.order(Order.term(true))
|
.order(Order.term(true))
|
||||||
.subAggregation(
|
.subAggregation(
|
||||||
histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval).minDocCount(0)
|
histogram("histo").field(SINGLE_VALUED_FIELD_NAME).interval(interval)
|
||||||
.extendedBounds((long) minRandomValue, (long) maxRandomValue))
|
.extendedBounds((long) minRandomValue, (long) maxRandomValue))
|
||||||
.subAggregation(minBucket("min_histo_bucket").setBucketsPaths("histo>_count")))
|
.subAggregation(minBucket("min_histo_bucket").setBucketsPaths("histo>_count")))
|
||||||
.addAggregation(minBucket("min_terms_bucket").setBucketsPaths("terms>min_histo_bucket")).execute().actionGet();
|
.addAggregation(minBucket("min_terms_bucket").setBucketsPaths("terms>min_histo_bucket")).execute().actionGet();
|
||||||
|
|
|
@ -314,7 +314,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(metric)
|
.subAggregation(metric)
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -367,7 +367,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(metric)
|
.subAggregation(metric)
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -420,7 +420,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(metric)
|
.subAggregation(metric)
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -473,7 +473,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(metric)
|
.subAggregation(metric)
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -525,7 +525,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
client()
|
client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -568,7 +568,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
client()
|
client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -592,7 +592,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field("test").interval(interval).minDocCount(0)
|
histogram("histo").field("test").interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -617,7 +617,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field("test").interval(interval).minDocCount(0)
|
histogram("histo").field("test").interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -643,7 +643,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
client()
|
client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -666,7 +666,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
client()
|
client()
|
||||||
.prepareSearch("idx").setTypes("type")
|
.prepareSearch("idx").setTypes("type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(interval).minDocCount(0)
|
histogram("histo").field(INTERVAL_FIELD).interval(interval)
|
||||||
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
.extendedBounds(0L, (long) (interval * (numBuckets - 1)))
|
||||||
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
.subAggregation(randomMetric("the_metric", VALUE_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_counts")
|
.subAggregation(movingAvg("movavg_counts")
|
||||||
|
@ -695,7 +695,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(min("the_metric").field(GAP_FIELD))
|
.subAggregation(min("the_metric").field(GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
@ -754,7 +754,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
SearchResponse response = client()
|
SearchResponse response = client()
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(min("the_metric").field(GAP_FIELD))
|
.subAggregation(min("the_metric").field(GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
@ -822,7 +822,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).from(1)).subAggregation(
|
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).from(1)).subAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
@ -865,7 +865,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).from(1)).subAggregation(
|
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).from(1)).subAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
@ -921,7 +921,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).to(1)).subAggregation(
|
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).to(1)).subAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
@ -968,7 +968,7 @@ public class MovAvgTests extends ElasticsearchIntegrationTest {
|
||||||
.prepareSearch("idx").setTypes("gap_type")
|
.prepareSearch("idx").setTypes("gap_type")
|
||||||
.addAggregation(
|
.addAggregation(
|
||||||
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).to(1)).subAggregation(
|
filter("filtered").filter(new RangeFilterBuilder(INTERVAL_FIELD).to(1)).subAggregation(
|
||||||
histogram("histo").field(INTERVAL_FIELD).interval(1).minDocCount(0).extendedBounds(0L, 49L)
|
histogram("histo").field(INTERVAL_FIELD).interval(1).extendedBounds(0L, 49L)
|
||||||
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
.subAggregation(randomMetric("the_metric", GAP_FIELD))
|
||||||
.subAggregation(movingAvg("movavg_values")
|
.subAggregation(movingAvg("movavg_values")
|
||||||
.window(windowSize)
|
.window(windowSize)
|
||||||
|
|
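Every hunk above removes an explicit `.minDocCount(0)` that the new default makes redundant. As a minimal, self-contained sketch (plain Java, not Elasticsearch code — `histogram`, `MinDocCountDemo`, and the bucket layout are illustrative assumptions), this is the behavior `min_doc_count` controls: with the old default of 1, empty buckets inside the extended bounds are dropped, leaving gaps; with the new default of 0, they are kept, which is what pipeline aggregations such as moving averages and derivatives rely on.

```java
import java.util.Map;
import java.util.TreeMap;

public class MinDocCountDemo {
    // Sketch only: bucket values into fixed-width histogram buckets over
    // [min, max] extended bounds, then drop buckets with fewer than
    // minDocCount documents (mirroring the aggregation's parameter).
    static Map<Long, Integer> histogram(long[] values, long interval,
                                        long min, long max, int minDocCount) {
        Map<Long, Integer> buckets = new TreeMap<>();
        // Pre-create every bucket across the extended bounds with count 0.
        for (long key = min - Math.floorMod(min, interval); key <= max; key += interval) {
            buckets.put(key, 0);
        }
        for (long v : values) {
            long key = v - Math.floorMod(v, interval);
            buckets.merge(key, 1, Integer::sum);
        }
        // minDocCount = 1 drops the empty buckets; minDocCount = 0 keeps them.
        buckets.values().removeIf(count -> count < minDocCount);
        return buckets;
    }

    public static void main(String[] args) {
        long[] values = {0, 1, 5};  // nothing falls into the [2, 4) bucket
        // Old default (minDocCount = 1): gap in the output.
        System.out.println(histogram(values, 2, 0, 5, 1));  // {0=2, 4=1}
        // New default (minDocCount = 0): gap-free output.
        System.out.println(histogram(values, 2, 0, 5, 0));  // {0=2, 2=0, 4=1}
    }
}
```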