Aggregations: Ability to perform computations on aggregations
Adds a new type of aggregation called 'reducers', which act on the output of other aggregations and add the extra information they compute to the aggregation tree. Reducers look much like any other aggregation in the request but have a buckets_path parameter which references the aggregation(s) to use. Internally there are two types of reducer: the first is given the output of its parent aggregation and computes new aggregations to add to the buckets of its parent; the second (a specialisation of the first) is given a sibling aggregation and outputs an aggregation to be a sibling at the same level as that aggregation. This PR includes the framework for the reducers, the derivative reducer (#9293), the moving average reducer (#10002) and the maximum bucket reducer (#10000). These reducer implementations are not yet all fully complete.

Known work left to do (these points will be done once this PR is merged into the master branch):

- Add x-axis normalisation to the derivative reducer
- Add lots more JUnit tests for all reducers

Contributes to #9876
Closes #10002
Closes #9293
Closes #10000
|
@ -118,6 +118,38 @@ aggregated for the buckets created by their "parent" bucket aggregation.
|
|||
There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
|
||||
define fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process.
|
||||
|
||||
[float]
|
||||
=== Reducer Aggregations
|
||||
|
||||
coming[2.0.0]
|
||||
|
||||
experimental[]
|
||||
|
||||
Reducer aggregations work on the outputs produced from other aggregations rather than from document sets, adding
|
||||
information to the output tree. There are many different types of reducer, each computing different information from
|
||||
other aggregations, but these types can be broken down into two families:
|
||||
|
||||
_Parent_::
|
||||
A family of reducer aggregations that is provided with the output of its parent aggregation and is able
|
||||
to compute new buckets or new aggregations to add to existing buckets.
|
||||
|
||||
_Sibling_::
|
||||
Reducer aggregations that are provided with the output of a sibling aggregation and are able to compute a
|
||||
new aggregation which will be at the same level as the sibling aggregation.
|
||||
|
||||
Reducer aggregations can reference the aggregations they need to perform their computation by using the `buckets_paths`
|
||||
parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the
|
||||
<<search-aggregations-bucket-terms-aggregation-order, terms aggregation order>> section.
|
||||
|
||||
|
||||
|
||||
Reducer aggregations cannot have sub-aggregations but, depending on the type, they can reference another reducer in the `buckets_path`,
|
||||
allowing reducers to be chained.
|
||||
|
||||
NOTE: Because reducer aggregations only add to the output, when chaining reducer aggregations the output of each reducer will be
|
||||
included in the final output.
|
||||
|
||||
[float]
|
||||
=== Caching heavy aggregations
|
||||
|
||||
|
@ -197,3 +229,6 @@ Then that piece of metadata will be returned in place for our `titles` terms agg
|
|||
include::aggregations/metrics.asciidoc[]
|
||||
|
||||
include::aggregations/bucket.asciidoc[]
|
||||
|
||||
include::aggregations/reducer.asciidoc[]
|
||||
|
||||
|
|
|
@ -0,0 +1,5 @@
|
|||
[[search-aggregations-reducer]]
|
||||
|
||||
include::reducer/derivative.asciidoc[]
|
||||
include::reducer/max-bucket-aggregation.asciidoc[]
|
||||
include::reducer/movavg-reducer.asciidoc[]
|
|
@ -0,0 +1,194 @@
|
|||
[[search-aggregations-reducer-derivative-aggregation]]
|
||||
=== Derivative Aggregation
|
||||
|
||||
A parent reducer aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
|
||||
aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0`.
|
||||
|
||||
The following snippet calculates the derivative of the total monthly `sales`:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggs" : {
|
||||
"sales_per_month" : {
|
||||
"date_histogram" : {
|
||||
"field" : "date",
|
||||
"interval" : "month",
|
||||
"min_doc_count" : 0
|
||||
},
|
||||
"aggs": {
|
||||
"sales": {
|
||||
"sum": {
|
||||
"field": "price"
|
||||
}
|
||||
},
|
||||
"sales_deriv": {
|
||||
"derivative": {
|
||||
"buckets_paths": "sales" <1>
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
<1> `buckets_paths` instructs this derivative aggregation to use the output of the `sales` aggregation for the derivative
|
||||
|
||||
And the following may be the response:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggregations": {
|
||||
"sales_per_month": {
|
||||
"buckets": [
|
||||
{
|
||||
"key_as_string": "2015/01/01 00:00:00",
|
||||
"key": 1420070400000,
|
||||
"doc_count": 3,
|
||||
"sales": {
|
||||
"value": 550
|
||||
} <1>
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/02/01 00:00:00",
|
||||
"key": 1422748800000,
|
||||
"doc_count": 2,
|
||||
"sales": {
|
||||
"value": 60
|
||||
},
|
||||
"sales_deriv": {
|
||||
"value": -490 <2>
|
||||
}
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/03/01 00:00:00",
|
||||
"key": 1425168000000,
|
||||
"doc_count": 2, <3>
|
||||
"sales": {
|
||||
"value": 375
|
||||
},
|
||||
"sales_deriv": {
|
||||
"value": 315
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
<1> No derivative for the first bucket since we need at least 2 data points to calculate the derivative
|
||||
<2> Derivative value units are implicitly defined by the `sales` aggregation and the parent histogram, so in this case the units
|
||||
would be $/month, assuming the `price` field has units of $.
|
||||
<3> The number of documents in the bucket is represented by the `doc_count` value
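
To make the arithmetic concrete, here is a minimal, hypothetical Java sketch (not the Elasticsearch implementation) of the first-difference calculation behind `sales_deriv`:

[source,java]
--------------------------------------------------
// Hypothetical sketch: the derivative reducer emits, for each bucket after the
// first, the difference between its metric value and the previous bucket's value.
public class DerivativeSketch {
    public static void main(String[] args) {
        double[] sales = {550, 60, 375}; // the monthly sums from the response above
        for (int i = 1; i < sales.length; i++) {
            // prints -490.0 then 315.0, matching the sales_deriv values above
            System.out.println("bucket " + i + ": " + (sales[i] - sales[i - 1]));
        }
    }
}
--------------------------------------------------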
|
||||
|
||||
==== Second Order Derivative
|
||||
|
||||
A second order derivative can be calculated by chaining the derivative reducer aggregation onto the result of another derivative
|
||||
reducer aggregation as in the following example which will calculate both the first and the second order derivative of the total
|
||||
monthly sales:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggs" : {
|
||||
"sales_per_month" : {
|
||||
"date_histogram" : {
|
||||
"field" : "date",
|
||||
"interval" : "month"
|
||||
},
|
||||
"aggs": {
|
||||
"sales": {
|
||||
"sum": {
|
||||
"field": "price"
|
||||
}
|
||||
},
|
||||
"sales_deriv": {
|
||||
"derivative": {
|
||||
"buckets_paths": "sales"
|
||||
}
|
||||
},
|
||||
"sales_2nd_deriv": {
|
||||
"derivative": {
|
||||
"buckets_paths": "sales_deriv" <1>
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
<1> `buckets_paths` for the second derivative points to the name of the first derivative
|
||||
|
||||
And the following may be the response:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggregations": {
|
||||
"sales_per_month": {
|
||||
"buckets": [
|
||||
{
|
||||
"key_as_string": "2015/01/01 00:00:00",
|
||||
"key": 1420070400000,
|
||||
"doc_count": 3,
|
||||
"sales": {
|
||||
"value": 550
|
||||
} <1>
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/02/01 00:00:00",
|
||||
"key": 1422748800000,
|
||||
"doc_count": 2,
|
||||
"sales": {
|
||||
"value": 60
|
||||
},
|
||||
"sales_deriv": {
|
||||
"value": -490
|
||||
} <1>
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/03/01 00:00:00",
|
||||
"key": 1425168000000,
|
||||
"doc_count": 2,
|
||||
"sales": {
|
||||
"value": 375
|
||||
},
|
||||
"sales_deriv": {
|
||||
"value": 315
|
||||
},
|
||||
"sales_2nd_deriv": {
|
||||
"value": 805
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
<1> No second derivative for the first two buckets since we need at least 2 data points from the first derivative to calculate the
|
||||
second derivative
|
||||
|
||||
==== Dealing with gaps in the data
|
||||
|
||||
There are a couple of reasons why the data output by the enclosing histogram may have gaps:
|
||||
|
||||
* There are no documents matching the query for some buckets
|
||||
* The data for a metric is missing in all of the documents falling into a bucket (this is most likely with either a small interval
|
||||
on the enclosing histogram or with a query matching only a small number of documents)
|
||||
|
||||
Where there is no data available in a bucket for a given metric it presents a problem for calculating the derivative value for both
|
||||
the current bucket and the next bucket. The derivative reducer aggregation has a `gap_policy` parameter to define what the behavior
|
||||
should be when a gap in the data is found. There are currently two options for controlling the gap policy:
|
||||
|
||||
_ignore_::
|
||||
This option will not produce a derivative value for any buckets where the value in the current or previous bucket is
|
||||
missing
|
||||
|
||||
_insert_zeros_::
|
||||
This option will assume the missing value is `0` and calculate the derivative using that value.
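
The effect of the two policies can be sketched in a few lines of Java. This is a hypothetical illustration (not the actual implementation); a `null` entry stands for a bucket whose metric is missing:

[source,java]
--------------------------------------------------
public class GapPolicySketch {
    // returns null when no derivative should be emitted for this bucket
    static Double derivative(Double current, Double previous, boolean insertZeros) {
        if (current == null || previous == null) {
            if (!insertZeros) {
                return null; // "ignore": skip buckets touching the gap
            }
            // "insert_zeros": treat the missing value as 0 and carry on
            current = current == null ? 0d : current;
            previous = previous == null ? 0d : previous;
        }
        return current - previous;
    }

    public static void main(String[] args) {
        Double[] sales = {550d, null, 375d}; // the middle bucket has no data
        for (int i = 1; i < sales.length; i++) {
            System.out.println("ignore: " + derivative(sales[i], sales[i - 1], false)
                    + ", insert_zeros: " + derivative(sales[i], sales[i - 1], true));
        }
    }
}
--------------------------------------------------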
|
||||
|
||||
|
After Width: | Height: | Size: 69 KiB |
After Width: | Height: | Size: 72 KiB |
After Width: | Height: | Size: 70 KiB |
After Width: | Height: | Size: 66 KiB |
After Width: | Height: | Size: 65 KiB |
After Width: | Height: | Size: 70 KiB |
After Width: | Height: | Size: 64 KiB |
After Width: | Height: | Size: 66 KiB |
After Width: | Height: | Size: 67 KiB |
After Width: | Height: | Size: 63 KiB |
After Width: | Height: | Size: 67 KiB |
|
@ -0,0 +1,82 @@
|
|||
[[search-aggregations-reducer-max-bucket-aggregation]]
|
||||
=== Max Bucket Aggregation
|
||||
|
||||
A sibling reducer aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation
|
||||
and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
|
||||
be a multi-bucket aggregation.
|
||||
|
||||
The following snippet calculates the maximum of the total monthly `sales`:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggs" : {
|
||||
"sales_per_month" : {
|
||||
"date_histogram" : {
|
||||
"field" : "date",
|
||||
"interval" : "month"
|
||||
},
|
||||
"aggs": {
|
||||
"sales": {
|
||||
"sum": {
|
||||
"field": "price"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"max_monthly_sales": {
|
||||
"max_bucket": {
|
||||
"buckets_paths": "sales_per_month>sales" <1>
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
<1> `buckets_paths` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the
|
||||
`sales_per_month` date histogram.
|
||||
|
||||
And the following may be the response:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"aggregations": {
|
||||
"sales_per_month": {
|
||||
"buckets": [
|
||||
{
|
||||
"key_as_string": "2015/01/01 00:00:00",
|
||||
"key": 1420070400000,
|
||||
"doc_count": 3,
|
||||
"sales": {
|
||||
"value": 550
|
||||
}
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/02/01 00:00:00",
|
||||
"key": 1422748800000,
|
||||
"doc_count": 2,
|
||||
"sales": {
|
||||
"value": 60
|
||||
}
|
||||
},
|
||||
{
|
||||
"key_as_string": "2015/03/01 00:00:00",
|
||||
"key": 1425168000000,
|
||||
"doc_count": 2,
|
||||
"sales": {
|
||||
"value": 375
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"max_monthly_sales": {
|
||||
"keys": ["2015/01/01 00:00:00"], <1>
|
||||
"value": 550
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
<1> `keys` is an array of strings since the maximum value may be present in multiple buckets
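
The tie handling can be illustrated with a short, hypothetical Java sketch (not the actual implementation): every bucket whose metric equals the running maximum contributes its key.

[source,java]
--------------------------------------------------
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MaxBucketSketch {
    public static void main(String[] args) {
        // bucket key -> value of the referenced metric, in bucket order
        Map<String, Double> buckets = new LinkedHashMap<>();
        buckets.put("2015/01/01 00:00:00", 550d);
        buckets.put("2015/02/01 00:00:00", 60d);
        buckets.put("2015/03/01 00:00:00", 375d);

        double max = Double.NEGATIVE_INFINITY;
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, Double> bucket : buckets.entrySet()) {
            if (bucket.getValue() > max) {
                max = bucket.getValue();
                keys.clear();              // a new maximum discards the earlier keys
                keys.add(bucket.getKey());
            } else if (bucket.getValue() == max) {
                keys.add(bucket.getKey()); // a tie adds another key
            }
        }
        System.out.println(keys + " -> " + max); // [2015/01/01 00:00:00] -> 550.0
    }
}
--------------------------------------------------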
|
||||
|
|
@ -0,0 +1,297 @@
|
|||
[[search-aggregations-reducers-movavg-reducer]]
|
||||
=== Moving Average Aggregation
|
||||
|
||||
Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
|
||||
value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving
|
||||
average with a window size of `5` as follows:
|
||||
|
||||
- (1 + 2 + 3 + 4 + 5) / 5 = 3
|
||||
- (2 + 3 + 4 + 5 + 6) / 5 = 4
|
||||
- (3 + 4 + 5 + 6 + 7) / 5 = 5
|
||||
- etc
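
The same arithmetic as a short, hypothetical Java sketch (not the actual implementation):

[source,java]
--------------------------------------------------
public class SimpleMovingAverage {
    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int window = 5;
        // slide the window one position at a time and emit each window's mean
        for (int end = window; end <= data.length; end++) {
            double sum = 0;
            for (int i = end - window; i < end; i++) {
                sum += data[i];
            }
            System.out.println(sum / window); // 3.0, 4.0, 5.0, ...
        }
    }
}
--------------------------------------------------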
|
||||
|
||||
Moving averages are a simple method to smooth sequential data. Moving averages are typically applied to time-based data,
|
||||
such as stock prices or server metrics. The smoothing can be used to eliminate high frequency fluctuations or random noise,
|
||||
which allows lower-frequency trends, such as seasonality, to be more easily visualized.
|
||||
|
||||
==== Syntax
|
||||
|
||||
A `moving_avg` aggregation looks like this in isolation:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"movavg": {
|
||||
"buckets_path": "the_sum",
|
||||
"model": "double_exp",
|
||||
"window": 5,
|
||||
"gap_policy": "insert_zero",
|
||||
"settings": {
|
||||
"alpha": 0.8
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
.`moving_avg` Parameters
|
||||
|===
|
||||
|Parameter Name |Description |Required |Default
|
||||
|
||||
|`buckets_path` |The path to the metric that we wish to calculate a moving average for |Required |
|
||||
|`model` |The moving average weighting model that we wish to use |Optional |`simple`
|
||||
|`gap_policy` |Determines what should happen when a gap in the data is encountered. |Optional |`insert_zero`
|
||||
|`window` |The size of window to "slide" across the histogram. |Optional |`5`
|
||||
|`settings` |Model-specific settings, the contents of which differ depending on the model specified. |Optional |
|
||||
|===
|
||||
|
||||
|
||||
`moving_avg` aggregations must be embedded inside a `histogram` or `date_histogram` aggregation. They can be
|
||||
embedded like any other metric aggregation:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"my_date_histo":{ <1>
|
||||
"date_histogram":{
|
||||
"field":"timestamp",
|
||||
"interval":"day",
|
||||
"min_doc_count": 0 <2>
|
||||
},
|
||||
"aggs":{
|
||||
"the_sum":{
|
||||
"sum":{ "field": "lemmings" } <3>
|
||||
},
|
||||
"the_movavg":{
|
||||
"moving_avg":{ "buckets_path": "the_sum" } <4>
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals
|
||||
<2> We must specify "min_doc_count: 0" in our date histogram so that all buckets are returned, even if they are empty.
|
||||
<3> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc)
|
||||
<4> Finally, we specify a `moving_avg` aggregation which uses "the_sum" metric as its input.
|
||||
|
||||
Moving averages are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally
|
||||
add normal metrics, such as a `sum`, inside of that histogram. Finally, the `moving_avg` is embedded inside the histogram.
|
||||
The `buckets_path` parameter is then used to "point" at one of the sibling metrics inside of the histogram.
|
||||
|
||||
A moving average can also be calculated on the document count of each bucket, instead of a metric:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"my_date_histo":{
|
||||
"date_histogram":{
|
||||
"field":"timestamp",
|
||||
"interval":"day",
|
||||
"min_doc_count": 0
|
||||
},
|
||||
"aggs":{
|
||||
"the_movavg":{
|
||||
"moving_avg":{ "buckets_path": "_count" } <1>
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
<1> By using `_count` instead of a metric name, we can calculate the moving average of document counts in the histogram
|
||||
|
||||
==== Models
|
||||
|
||||
The `moving_avg` aggregation includes four different moving average "models". The main difference is how the values in the
|
||||
window are weighted. As data-points become "older" in the window, they may be weighted differently. This will
|
||||
affect the final average for that window.
|
||||
|
||||
Models are specified using the `model` parameter. Some models may have optional configurations which are specified inside
|
||||
the `settings` parameter.
|
||||
|
||||
===== Simple
|
||||
|
||||
The `simple` model calculates the sum of all values in the window, then divides by the size of the window. It is effectively
|
||||
a simple arithmetic mean of the window. The simple model does not perform any time-dependent weighting, which means
|
||||
the values from a `simple` moving average tend to "lag" behind the real data.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"the_movavg":{
|
||||
"moving_avg":{
|
||||
"buckets_path": "the_sum",
|
||||
"model" : "simple"
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
A `simple` model has no special settings to configure.
|
||||
|
||||
The window size can change the behavior of the moving average. For example, a small window (`"window": 10`) will closely
|
||||
track the data and only smooth out small scale fluctuations:
|
||||
|
||||
[[movavg_10window]]
|
||||
.Moving average with window of size 10
|
||||
image::images/movavg_10window.png[]
|
||||
|
||||
In contrast, a `simple` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations,
|
||||
leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount:
|
||||
|
||||
[[movavg_100window]]
|
||||
.Moving average with window of size 100
|
||||
image::images/movavg_100window.png[]
|
||||
|
||||
|
||||
===== Linear
|
||||
|
||||
The `linear` model assigns a linear weighting to points in the series, such that "older" datapoints (e.g. those at
|
||||
the beginning of the window) contribute linearly less to the total average. The linear weighting helps reduce
|
||||
the "lag" behind the data's mean, since older points have less influence.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"the_movavg":{
|
||||
"moving_avg":{
|
||||
"buckets_path": "the_sum",
|
||||
"model" : "linear"
|
||||
}
|
||||
}
}
|
||||
--------------------------------------------------
|
||||
|
||||
A `linear` model has no special settings to configure.
|
||||
|
||||
Like the `simple` model, window size can change the behavior of the moving average. For example, a small window (`"window": 10`)
|
||||
will closely track the data and only smooth out small scale fluctuations:
|
||||
|
||||
[[linear_10window]]
|
||||
.Linear moving average with window of size 10
|
||||
image::images/linear_10window.png[]
|
||||
|
||||
In contrast, a `linear` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations,
|
||||
leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount,
|
||||
although typically less than the `simple` model:
|
||||
|
||||
[[linear_100window]]
|
||||
.Linear moving average with window of size 100
|
||||
image::images/linear_100window.png[]
|
||||
|
||||
===== Single Exponential
|
||||
|
||||
The `single_exp` model is similar to the `linear` model, except older data-points become exponentially less important,
|
||||
rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha`
|
||||
setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger
|
||||
portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the
|
||||
moving average. This tends to make the moving average track the data more closely but with less smoothing.
|
||||
|
||||
The default value of `alpha` is `0.5`, and the setting accepts any float from 0-1 inclusive.
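
The exponential decay can be sketched as the classic recurrence where each new average mixes the latest value with the previous average. A hypothetical illustration (not the actual implementation):

[source,java]
--------------------------------------------------
public class SingleExpSketch {
    public static void main(String[] args) {
        double[] window = {10, 20, 30, 40, 50};
        double alpha = 0.5;
        double avg = window[0];
        for (int i = 1; i < window.length; i++) {
            // older values decay by a factor of (1 - alpha) at every step
            avg = alpha * window[i] + (1 - alpha) * avg;
            System.out.println(avg); // 15.0, 22.5, 31.25, 40.625
        }
    }
}
--------------------------------------------------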
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"the_movavg":{
|
||||
"moving_avg":{
|
||||
"buckets_path": "the_sum",
|
||||
"model" : "single_exp",
|
||||
"settings" : {
|
||||
"alpha" : 0.5
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
|
||||
|
||||
[[single_0.2alpha]]
|
||||
.Single Exponential moving average with window of size 10, alpha = 0.2
|
||||
image::images/single_0.2alpha.png[]
|
||||
|
||||
[[single_0.7alpha]]
|
||||
.Single Exponential moving average with window of size 10, alpha = 0.7
|
||||
image::images/single_0.7alpha.png[]
|
||||
|
||||
===== Double Exponential
|
||||
|
||||
The `double_exp` model, sometimes called "Holt's Linear Trend" model, incorporates a second exponential term which
|
||||
tracks the data's trend. Single exponential does not perform well when the data has an underlying linear trend. The
|
||||
double exponential model calculates two values internally: a "level" and a "trend".
|
||||
|
||||
The level calculation is similar to `single_exp`, and is an exponentially weighted view of the data. The difference is
|
||||
that the previously smoothed value is used instead of the raw value, which allows it to stay close to the original series.
|
||||
The trend calculation looks at the difference between the current and last value (e.g. the slope, or trend, of the
|
||||
smoothed data). The trend value is also exponentially weighted.
|
||||
|
||||
Values are produced by adding the level and trend components.
|
||||
|
||||
The default value of `alpha` and `beta` is `0.5`, and the settings accept any float from 0-1 inclusive.
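
The textbook Holt recurrences can be sketched as follows. This is a hypothetical, simplified illustration of the level/trend idea (the exact formulation inside the reducer may differ):

[source,java]
--------------------------------------------------
public class DoubleExpSketch {
    public static void main(String[] args) {
        double[] window = {10, 20, 30, 40, 50};
        double alpha = 0.5, beta = 0.5;
        double level = window[0];
        double trend = window[1] - window[0]; // seed the trend from the first step
        for (int i = 1; i < window.length; i++) {
            double lastLevel = level;
            // the level is an exponentially weighted view of the data,
            // using the previously smoothed value rather than the raw one
            level = alpha * window[i] + (1 - alpha) * (level + trend);
            // the trend is an exponentially weighted view of the slope
            trend = beta * (level - lastLevel) + (1 - beta) * trend;
            System.out.println(level + trend); // the one-step-ahead value
        }
    }
}
--------------------------------------------------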
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"the_movavg":{
|
||||
"moving_avg":{
|
||||
"buckets_path": "the_sum",
|
||||
"model" : "double_exp",
|
||||
"settings" : {
|
||||
"alpha" : 0.5,
|
||||
"beta" : 0.5
|
||||
}
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
In practice, the `alpha` value behaves very similarly in `double_exp` as it does in `single_exp`: small values produce more smoothing
|
||||
and more lag, while larger values produce closer tracking and less lag. The effect of `beta` is often difficult
|
||||
to see. Small values emphasize long-term trends (such as a constant linear trend in the whole series), while larger
|
||||
values emphasize short-term trends. This will become more apparent when you are predicting values.
|
||||
|
||||
[[double_0.2beta]]
|
||||
.Double Exponential moving average with window of size 100, alpha = 0.5, beta = 0.2
|
||||
image::images/double_0.2beta.png[]
|
||||
|
||||
[[double_0.7beta]]
|
||||
.Double Exponential moving average with window of size 100, alpha = 0.5, beta = 0.7
|
||||
image::images/double_0.7beta.png[]
|
||||
|
||||
==== Prediction
|
||||
|
||||
All the moving average models support a "prediction" mode, which will attempt to extrapolate into the future given the
|
||||
current smoothed moving average. Depending on the model and parameters, these predictions may or may not be accurate.
|
||||
|
||||
Predictions are enabled by adding a `predict` parameter to any moving average aggregation, specifying the number of
|
||||
predictions you would like appended to the end of the series. These predictions will be spaced out at the same interval
|
||||
as your buckets:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"the_movavg":{
|
||||
"moving_avg":{
|
||||
"buckets_path": "the_sum",
|
||||
"model" : "simple",
|
||||
"predict" 10
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
|
||||
The `simple`, `linear` and `single_exp` models all produce "flat" predictions: they essentially converge on the mean
|
||||
of the last value in the series, producing a flat line:
|
||||
|
||||
[[simple_prediction]]
|
||||
.Simple moving average with window of size 10, predict = 50
|
||||
image::images/simple_prediction.png[]
|
||||
|
||||
In contrast, the `double_exp` model can extrapolate based on local or global constant trends. If we set a high `beta`
|
||||
value, we can extrapolate based on local constant trends (in this case the predictions head down, because the data at the end
|
||||
of the series was heading in a downward direction):
|
||||
|
||||
[[double_prediction_local]]
|
||||
.Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.8
|
||||
image::images/double_prediction_local.png[]
|
||||
|
||||
Conversely, if we choose a small `beta`, the predictions are based on the global constant trend. In this series, the
|
||||
global trend is slightly positive, so the prediction makes a sharp u-turn and begins a positive slope:
|
||||
|
||||
[[double_prediction_global]]
|
||||
.Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.1
|
||||
image::images/double_prediction_global.png[]
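
The two prediction behaviors can be contrasted with a small, hypothetical Java sketch (not the actual implementation): the flat models simply repeat the last smoothed value, while a trend-aware model extrapolates it.

[source,java]
--------------------------------------------------
public class PredictionSketch {
    public static void main(String[] args) {
        double level = 42.0, trend = -1.5; // assumed smoothed state at the end of the series
        int predict = 5;
        for (int m = 1; m <= predict; m++) {
            double flat = level;                 // simple/linear/single_exp: a flat line
            double trending = level + m * trend; // double_exp: follows the smoothed trend
            System.out.println("step " + m + ": flat=" + flat + ", trending=" + trending);
        }
    }
}
--------------------------------------------------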
|
|
@ -28,7 +28,9 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
|
|||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
import org.elasticsearch.index.query.FilterBuilder;
|
||||
import org.elasticsearch.index.query.QueryBuilder;
|
||||
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
|
||||
import org.elasticsearch.search.aggregations.AggregationBuilder;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
|
||||
import org.elasticsearch.search.highlight.HighlightBuilder;
|
||||
import org.elasticsearch.search.sort.SortBuilder;
|
||||
|
||||
|
@ -162,9 +164,9 @@ public class PercolateRequestBuilder extends BroadcastOperationRequestBuilder<Pe
|
|||
}
|
||||
|
||||
/**
|
||||
* Delegates to {@link PercolateSourceBuilder#addAggregation(AggregationBuilder)}
|
||||
* Delegates to {@link PercolateSourceBuilder#addAggregation(AbstractAggregationBuilder)}
|
||||
*/
|
||||
public PercolateRequestBuilder addAggregation(AggregationBuilder aggregationBuilder) {
|
||||
public PercolateRequestBuilder addAggregation(AbstractAggregationBuilder aggregationBuilder) {
|
||||
sourceBuilder().addAggregation(aggregationBuilder);
|
||||
return this;
|
||||
}
|
||||
|
|
|
@ -19,13 +19,18 @@
|
|||
package org.elasticsearch.action.percolate;
|
||||
|
||||
import com.google.common.collect.ImmutableList;
|
||||
|
||||
import org.apache.lucene.util.BytesRef;
|
||||
import org.elasticsearch.action.support.broadcast.BroadcastShardOperationResponse;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
import org.elasticsearch.common.io.stream.StreamOutput;
|
||||
import org.elasticsearch.index.shard.ShardId;
|
||||
import org.elasticsearch.percolator.PercolateContext;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregations;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerStreams;
|
||||
import org.elasticsearch.search.aggregations.reducers.SiblingReducer;
|
||||
import org.elasticsearch.search.highlight.HighlightField;
|
||||
import org.elasticsearch.search.query.QuerySearchResult;
|
||||
|
||||
|
@ -51,6 +56,7 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
|
|||
private int requestedSize;
|
||||
|
||||
private InternalAggregations aggregations;
|
||||
private List<SiblingReducer> reducers;
|
||||
|
||||
PercolateShardResponse() {
|
||||
hls = new ArrayList<>();
|
||||
|
@ -69,6 +75,7 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
|
|||
if (result.aggregations() != null) {
|
||||
this.aggregations = (InternalAggregations) result.aggregations();
|
||||
}
|
||||
this.reducers = result.reducers();
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -112,6 +119,10 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
|
|||
return aggregations;
|
||||
}
|
||||
|
||||
public List<SiblingReducer> reducers() {
|
||||
return reducers;
|
||||
}
|
||||
|
||||
public byte percolatorTypeId() {
|
||||
return percolatorTypeId;
|
||||
}
|
||||
|
@ -144,6 +155,16 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
|
|||
hls.add(fields);
|
||||
}
|
||||
aggregations = InternalAggregations.readOptionalAggregations(in);
|
||||
if (in.readBoolean()) {
|
||||
int reducersSize = in.readVInt();
|
||||
List<SiblingReducer> reducers = new ArrayList<>(reducersSize);
|
||||
for (int i = 0; i < reducersSize; i++) {
|
||||
BytesReference type = in.readBytesReference();
|
||||
Reducer reducer = ReducerStreams.stream(type).readResult(in);
|
||||
reducers.add((SiblingReducer) reducer);
|
||||
}
|
||||
this.reducers = reducers;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -169,5 +190,15 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
|
|||
}
|
||||
}
|
||||
out.writeOptionalStreamable(aggregations);
|
||||
if (reducers == null) {
|
||||
out.writeBoolean(false);
|
||||
} else {
|
||||
out.writeBoolean(true);
|
||||
out.writeVInt(reducers.size());
|
||||
for (Reducer reducer : reducers) {
|
||||
out.writeBytesReference(reducer.type().stream());
|
||||
reducer.writeTo(out);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -29,6 +29,7 @@ import org.elasticsearch.index.query.FilterBuilder;
|
|||
import org.elasticsearch.index.query.QueryBuilder;
|
||||
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
|
||||
import org.elasticsearch.search.aggregations.AggregationBuilder;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
|
||||
import org.elasticsearch.search.highlight.HighlightBuilder;
|
||||
import org.elasticsearch.search.sort.ScoreSortBuilder;
|
||||
import org.elasticsearch.search.sort.SortBuilder;
|
||||
|
@ -50,7 +51,7 @@ public class PercolateSourceBuilder implements ToXContent {
|
|||
private List<SortBuilder> sorts;
|
||||
private Boolean trackScores;
|
||||
private HighlightBuilder highlightBuilder;
|
||||
private List<AggregationBuilder> aggregations;
|
||||
private List<AbstractAggregationBuilder> aggregations;
|
||||
|
||||
/**
|
||||
* Sets the document to run the percolate queries against.
|
||||
|
@ -130,7 +131,7 @@ public class PercolateSourceBuilder implements ToXContent {
|
|||
/**
|
||||
* Add an aggregation definition.
|
||||
*/
|
||||
public PercolateSourceBuilder addAggregation(AggregationBuilder aggregationBuilder) {
|
||||
public PercolateSourceBuilder addAggregation(AbstractAggregationBuilder aggregationBuilder) {
|
||||
if (aggregations == null) {
|
||||
aggregations = Lists.newArrayList();
|
||||
}
|
||||
|
|
|
@ -34,6 +34,7 @@ import org.elasticsearch.index.query.QueryBuilder;
|
|||
import org.elasticsearch.script.ScriptService;
|
||||
import org.elasticsearch.search.Scroll;
|
||||
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
|
||||
import org.elasticsearch.search.builder.SearchSourceBuilder;
|
||||
import org.elasticsearch.search.fetch.innerhits.InnerHitsBuilder;
|
||||
import org.elasticsearch.search.highlight.HighlightBuilder;
|
||||
|
|
|
@ -19,6 +19,9 @@
|
|||
|
||||
package org.elasticsearch.index.query;
|
||||
|
||||
import org.apache.lucene.index.Term;
|
||||
import org.apache.lucene.search.BooleanQuery;
|
||||
import org.apache.lucene.search.similarities.Similarity;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
|
||||
import java.io.IOException;
|
||||
|
|
|
@ -19,6 +19,7 @@
|
|||
package org.elasticsearch.percolator;
|
||||
|
||||
import com.carrotsearch.hppc.ByteObjectOpenHashMap;
|
||||
import com.google.common.collect.Lists;
|
||||
|
||||
import org.apache.lucene.index.LeafReaderContext;
|
||||
import org.apache.lucene.index.ReaderUtil;
|
||||
|
@ -85,8 +86,11 @@ import org.elasticsearch.script.ScriptService;
|
|||
import org.elasticsearch.search.SearchParseElement;
|
||||
import org.elasticsearch.search.SearchShardTarget;
|
||||
import org.elasticsearch.search.aggregations.AggregationPhase;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregation;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregations;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.SiblingReducer;
|
||||
import org.elasticsearch.search.highlight.HighlightField;
|
||||
import org.elasticsearch.search.highlight.HighlightPhase;
|
||||
import org.elasticsearch.search.internal.SearchContext;
|
||||
|
@ -846,15 +850,24 @@ public class PercolatorService extends AbstractComponent {
|
|||
return null;
|
||||
}
|
||||
|
||||
if (shardResults.size() == 1) {
|
||||
return shardResults.get(0).aggregations();
|
||||
}
|
||||
|
||||
List<InternalAggregations> aggregationsList = new ArrayList<>(shardResults.size());
|
||||
for (PercolateShardResponse shardResult : shardResults) {
|
||||
aggregationsList.add(shardResult.aggregations());
|
||||
}
|
||||
return InternalAggregations.reduce(aggregationsList, new ReduceContext(bigArrays, scriptService));
|
||||
InternalAggregations aggregations = InternalAggregations.reduce(aggregationsList, new ReduceContext(bigArrays, scriptService));
|
||||
if (aggregations != null) {
|
||||
List<SiblingReducer> reducers = shardResults.get(0).reducers();
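// every shard is built from the same request, so the sibling reducers taken
// from the first shard response apply to the merged aggregations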
|
||||
if (reducers != null) {
|
||||
List<InternalAggregation> newAggs = new ArrayList<>(Lists.transform(aggregations.asList(), Reducer.AGGREGATION_TRANFORM_FUNCTION));
|
||||
for (SiblingReducer reducer : reducers) {
|
||||
InternalAggregation newAgg = reducer.doReduce(new InternalAggregations(newAggs), new ReduceContext(bigArrays,
|
||||
scriptService));
|
||||
newAggs.add(newAgg);
|
||||
}
|
||||
aggregations = new InternalAggregations(newAggs);
|
||||
}
|
||||
}
|
||||
return aggregations;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
|
@ -20,6 +20,7 @@
|
|||
package org.elasticsearch.search.aggregations;
|
||||
|
||||
import com.google.common.collect.Lists;
|
||||
|
||||
import org.elasticsearch.ElasticsearchGenerationException;
|
||||
import org.elasticsearch.client.Requests;
|
||||
import org.elasticsearch.common.bytes.BytesArray;
|
||||
|
|
|
@ -56,6 +56,11 @@ import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStat
|
|||
import org.elasticsearch.search.aggregations.metrics.sum.SumParser;
|
||||
import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsParser;
|
||||
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountParser;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.bucketmetrics.MaxBucketParser;
|
||||
import org.elasticsearch.search.aggregations.reducers.derivative.DerivativeParser;
|
||||
import org.elasticsearch.search.aggregations.reducers.movavg.MovAvgParser;
|
||||
import org.elasticsearch.search.aggregations.reducers.movavg.models.MovAvgModelModule;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
|
@ -64,40 +69,45 @@ import java.util.List;
|
|||
*/
|
||||
public class AggregationModule extends AbstractModule implements SpawnModules{
|
||||
|
||||
private List<Class<? extends Aggregator.Parser>> parsers = Lists.newArrayList();
|
||||
private List<Class<? extends Aggregator.Parser>> aggParsers = Lists.newArrayList();
|
||||
private List<Class<? extends Reducer.Parser>> reducerParsers = Lists.newArrayList();
|
||||
|
||||
public AggregationModule() {
|
||||
parsers.add(AvgParser.class);
|
||||
parsers.add(SumParser.class);
|
||||
parsers.add(MinParser.class);
|
||||
parsers.add(MaxParser.class);
|
||||
parsers.add(StatsParser.class);
|
||||
parsers.add(ExtendedStatsParser.class);
|
||||
parsers.add(ValueCountParser.class);
|
||||
parsers.add(PercentilesParser.class);
|
||||
parsers.add(PercentileRanksParser.class);
|
||||
parsers.add(CardinalityParser.class);
|
||||
aggParsers.add(AvgParser.class);
|
||||
aggParsers.add(SumParser.class);
|
||||
aggParsers.add(MinParser.class);
|
||||
aggParsers.add(MaxParser.class);
|
||||
aggParsers.add(StatsParser.class);
|
||||
aggParsers.add(ExtendedStatsParser.class);
|
||||
aggParsers.add(ValueCountParser.class);
|
||||
aggParsers.add(PercentilesParser.class);
|
||||
aggParsers.add(PercentileRanksParser.class);
|
||||
aggParsers.add(CardinalityParser.class);
|
||||
|
||||
parsers.add(GlobalParser.class);
|
||||
parsers.add(MissingParser.class);
|
||||
parsers.add(FilterParser.class);
|
||||
parsers.add(FiltersParser.class);
|
||||
parsers.add(SamplerParser.class);
|
||||
parsers.add(TermsParser.class);
|
||||
parsers.add(SignificantTermsParser.class);
|
||||
parsers.add(RangeParser.class);
|
||||
parsers.add(DateRangeParser.class);
|
||||
parsers.add(IpRangeParser.class);
|
||||
parsers.add(HistogramParser.class);
|
||||
parsers.add(DateHistogramParser.class);
|
||||
parsers.add(GeoDistanceParser.class);
|
||||
parsers.add(GeoHashGridParser.class);
|
||||
parsers.add(NestedParser.class);
|
||||
parsers.add(ReverseNestedParser.class);
|
||||
parsers.add(TopHitsParser.class);
|
||||
parsers.add(GeoBoundsParser.class);
|
||||
parsers.add(ScriptedMetricParser.class);
|
||||
parsers.add(ChildrenParser.class);
|
||||
aggParsers.add(GlobalParser.class);
|
||||
aggParsers.add(MissingParser.class);
|
||||
aggParsers.add(FilterParser.class);
|
||||
aggParsers.add(FiltersParser.class);
|
||||
aggParsers.add(SamplerParser.class);
|
||||
aggParsers.add(TermsParser.class);
|
||||
aggParsers.add(SignificantTermsParser.class);
|
||||
aggParsers.add(RangeParser.class);
|
||||
aggParsers.add(DateRangeParser.class);
|
||||
aggParsers.add(IpRangeParser.class);
|
||||
aggParsers.add(HistogramParser.class);
|
||||
aggParsers.add(DateHistogramParser.class);
|
||||
aggParsers.add(GeoDistanceParser.class);
|
||||
aggParsers.add(GeoHashGridParser.class);
|
||||
aggParsers.add(NestedParser.class);
|
||||
aggParsers.add(ReverseNestedParser.class);
|
||||
aggParsers.add(TopHitsParser.class);
|
||||
aggParsers.add(GeoBoundsParser.class);
|
||||
aggParsers.add(ScriptedMetricParser.class);
|
||||
aggParsers.add(ChildrenParser.class);
|
||||
|
||||
reducerParsers.add(DerivativeParser.class);
|
||||
reducerParsers.add(MaxBucketParser.class);
|
||||
reducerParsers.add(MovAvgParser.class);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -106,14 +116,18 @@ public class AggregationModule extends AbstractModule implements SpawnModules{
|
|||
* @param parser The parser for the custom aggregator.
|
||||
*/
|
||||
public void addAggregatorParser(Class<? extends Aggregator.Parser> parser) {
|
||||
parsers.add(parser);
|
||||
aggParsers.add(parser);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void configure() {
|
||||
Multibinder<Aggregator.Parser> multibinder = Multibinder.newSetBinder(binder(), Aggregator.Parser.class);
|
||||
for (Class<? extends Aggregator.Parser> parser : parsers) {
|
||||
multibinder.addBinding().to(parser);
|
||||
Multibinder<Aggregator.Parser> multibinderAggParser = Multibinder.newSetBinder(binder(), Aggregator.Parser.class);
|
||||
for (Class<? extends Aggregator.Parser> parser : aggParsers) {
|
||||
multibinderAggParser.addBinding().to(parser);
|
||||
}
|
||||
Multibinder<Reducer.Parser> multibinderReducerParser = Multibinder.newSetBinder(binder(), Reducer.Parser.class);
|
||||
for (Class<? extends Reducer.Parser> parser : reducerParsers) {
|
||||
multibinderReducerParser.addBinding().to(parser);
|
||||
}
|
||||
bind(AggregatorParsers.class).asEagerSingleton();
|
||||
bind(AggregationParseElement.class).asEagerSingleton();
|
||||
|
@ -122,7 +136,7 @@ public class AggregationModule extends AbstractModule implements SpawnModules{
|
|||
|
||||
@Override
|
||||
public Iterable<? extends Module> spawnModules() {
|
||||
return ImmutableList.of(new SignificantTermsHeuristicModule());
|
||||
return ImmutableList.of(new SignificantTermsHeuristicModule(), new MovAvgModelModule());
|
||||
}
|
||||
|
||||
}
|
||||
|
|
|
@ -29,6 +29,8 @@ import org.elasticsearch.common.lucene.search.Queries;
|
|||
import org.elasticsearch.search.SearchParseElement;
|
||||
import org.elasticsearch.search.SearchPhase;
|
||||
import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.SiblingReducer;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationContext;
|
||||
import org.elasticsearch.search.internal.SearchContext;
|
||||
import org.elasticsearch.search.query.QueryPhaseExecutionException;
|
||||
|
@ -74,7 +76,8 @@ public class AggregationPhase implements SearchPhase {
|
|||
List<Aggregator> collectors = new ArrayList<>();
|
||||
Aggregator[] aggregators;
|
||||
try {
|
||||
aggregators = context.aggregations().factories().createTopLevelAggregators(aggregationContext);
|
||||
AggregatorFactories factories = context.aggregations().factories();
|
||||
aggregators = factories.createTopLevelAggregators(aggregationContext);
|
||||
for (int i = 0; i < aggregators.length; i++) {
|
||||
if (aggregators[i] instanceof GlobalAggregator == false) {
|
||||
collectors.add(aggregators[i]);
|
||||
|
@ -138,6 +141,21 @@ public class AggregationPhase implements SearchPhase {
|
|||
}
|
||||
}
|
||||
context.queryResult().aggregations(new InternalAggregations(aggregations));
|
||||
try {
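// create the reducers for this request and check that only sibling reducers appear at the top level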
|
||||
List<Reducer> reducers = context.aggregations().factories().createReducers();
|
||||
List<SiblingReducer> siblingReducers = new ArrayList<>(reducers.size());
|
||||
for (Reducer reducer : reducers) {
|
||||
if (reducer instanceof SiblingReducer) {
|
||||
siblingReducers.add((SiblingReducer) reducer);
|
||||
} else {
|
||||
throw new AggregationExecutionException("Invalid reducer named [" + reducer.name() + "] of type ["
|
||||
+ reducer.type().name() + "]. Only sibling reducers are allowed at the top level");
|
||||
}
|
||||
}
|
||||
context.queryResult().reducers(siblingReducers);
|
||||
} catch (IOException e) {
|
||||
throw new AggregationExecutionException("Failed to build top level reducers", e);
|
||||
}
|
||||
|
||||
// disable aggregations so that they don't run on next pages in case of scrolling
|
||||
context.aggregations(null);
|
||||
|
|
|
@ -21,6 +21,7 @@ package org.elasticsearch.search.aggregations;
|
|||
import org.apache.lucene.index.LeafReaderContext;
|
||||
import org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector;
|
||||
import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationContext;
|
||||
import org.elasticsearch.search.internal.SearchContext.Lifetime;
|
||||
import org.elasticsearch.search.query.QueryPhaseExecutionException;
|
||||
|
@ -46,6 +47,7 @@ public abstract class AggregatorBase extends Aggregator {
|
|||
|
||||
private Map<String, Aggregator> subAggregatorbyName;
|
||||
private DeferringBucketCollector recordingWrapper;
|
||||
private final List<Reducer> reducers;
|
||||
|
||||
/**
|
||||
* Constructs a new Aggregator.
|
||||
|
@ -56,8 +58,10 @@ public abstract class AggregatorBase extends Aggregator {
|
|||
* @param parent The parent aggregator (may be {@code null} for top level aggregators)
|
||||
* @param metaData The metaData associated with this aggregator
|
||||
*/
|
||||
protected AggregatorBase(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
|
||||
protected AggregatorBase(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent,
|
||||
List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
|
||||
this.name = name;
|
||||
this.reducers = reducers;
|
||||
this.metaData = metaData;
|
||||
this.parent = parent;
|
||||
this.context = context;
|
||||
|
@ -112,6 +116,10 @@ public abstract class AggregatorBase extends Aggregator {
|
|||
return this.metaData;
|
||||
}
|
||||
|
||||
public List<Reducer> reducers() {
|
||||
return this.reducers;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a {@link LeafBucketCollector} for the given ctx, which should
|
||||
* delegate to the given collector.
|
||||
|
|
|
@ -18,12 +18,18 @@
|
|||
*/
|
||||
package org.elasticsearch.search.aggregations;
|
||||
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerFactory;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationContext;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationPath;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
/**
|
||||
|
@ -33,18 +39,30 @@ public class AggregatorFactories {
|
|||
|
||||
public static final AggregatorFactories EMPTY = new Empty();
|
||||
|
||||
private AggregatorFactory parent;
|
||||
private AggregatorFactory[] factories;
|
||||
private List<ReducerFactory> reducerFactories;
|
||||
|
||||
public static Builder builder() {
|
||||
return new Builder();
|
||||
}
|
||||
|
||||
private AggregatorFactories(AggregatorFactory[] factories) {
|
||||
private AggregatorFactories(AggregatorFactory[] factories, List<ReducerFactory> reducers) {
|
||||
this.factories = factories;
|
||||
this.reducerFactories = reducers;
|
||||
}
|
||||
|
||||
public List<Reducer> createReducers() throws IOException {
|
||||
List<Reducer> reducers = new ArrayList<>();
|
||||
for (ReducerFactory factory : this.reducerFactories) {
|
||||
reducers.add(factory.create());
|
||||
}
|
||||
return reducers;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create all aggregators so that they can be consumed with multiple buckets.
|
||||
* Create all aggregators so that they can be consumed with multiple
|
||||
* buckets.
|
||||
*/
|
||||
public Aggregator[] createSubAggregators(Aggregator parent) throws IOException {
|
||||
Aggregator[] aggregators = new Aggregator[count()];
|
||||
|
@ -75,6 +93,7 @@ public class AggregatorFactories {
|
|||
}
|
||||
|
||||
void setParent(AggregatorFactory parent) {
|
||||
this.parent = parent;
|
||||
for (AggregatorFactory factory : factories) {
|
||||
factory.parent = parent;
|
||||
}
|
||||
|
@ -84,15 +103,19 @@ public class AggregatorFactories {
|
|||
for (AggregatorFactory factory : factories) {
|
||||
factory.validate();
|
||||
}
|
||||
for (ReducerFactory factory : reducerFactories) {
|
||||
factory.validate(parent, factories, reducerFactories);
|
||||
}
|
||||
}
|
||||
|
||||
private final static class Empty extends AggregatorFactories {
|
||||
|
||||
private static final AggregatorFactory[] EMPTY_FACTORIES = new AggregatorFactory[0];
|
||||
private static final Aggregator[] EMPTY_AGGREGATORS = new Aggregator[0];
|
||||
private static final List<ReducerFactory> EMPTY_REDUCERS = new ArrayList<>();
|
||||
|
||||
private Empty() {
|
||||
super(EMPTY_FACTORIES);
|
||||
super(EMPTY_FACTORIES, EMPTY_REDUCERS);
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -111,8 +134,9 @@ public class AggregatorFactories {
|
|||
|
||||
private final Set<String> names = new HashSet<>();
|
||||
private final List<AggregatorFactory> factories = new ArrayList<>();
|
||||
private final List<ReducerFactory> reducerFactories = new ArrayList<>();
|
||||
|
||||
public Builder add(AggregatorFactory factory) {
|
||||
public Builder addAggregator(AggregatorFactory factory) {
|
||||
if (!names.add(factory.name)) {
|
||||
throw new IllegalArgumentException("Two sibling aggregations cannot have the same name: [" + factory.name + "]");
|
||||
}
|
||||
|
@ -120,11 +144,65 @@ public class AggregatorFactories {
|
|||
return this;
|
||||
}
|
||||
|
||||
public Builder addReducer(ReducerFactory reducerFactory) {
|
||||
this.reducerFactories.add(reducerFactory);
|
||||
return this;
|
||||
}
|
||||
|
||||
public AggregatorFactories build() {
|
||||
if (factories.isEmpty()) {
|
||||
if (factories.isEmpty() && reducerFactories.isEmpty()) {
|
||||
return EMPTY;
|
||||
}
|
||||
return new AggregatorFactories(factories.toArray(new AggregatorFactory[factories.size()]));
|
||||
List<ReducerFactory> orderedReducers = resolveReducerOrder(this.reducerFactories, this.factories);
|
||||
return new AggregatorFactories(factories.toArray(new AggregatorFactory[factories.size()]), orderedReducers);
|
||||
}
|
||||
|
||||
private List<ReducerFactory> resolveReducerOrder(List<ReducerFactory> reducerFactories, List<AggregatorFactory> aggFactories) {
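// depth-first topological sort over buckets_path references, so each reducer
// is ordered after the reducers it depends on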
|
||||
Map<String, ReducerFactory> reducerFactoriesMap = new HashMap<>();
|
||||
for (ReducerFactory factory : reducerFactories) {
|
||||
reducerFactoriesMap.put(factory.getName(), factory);
|
||||
}
|
||||
Set<String> aggFactoryNames = new HashSet<>();
|
||||
for (AggregatorFactory aggFactory : aggFactories) {
|
||||
aggFactoryNames.add(aggFactory.name);
|
||||
}
|
||||
List<ReducerFactory> orderedReducers = new LinkedList<>();
|
||||
List<ReducerFactory> unmarkedFactories = new ArrayList<ReducerFactory>(reducerFactories);
|
||||
Set<ReducerFactory> temporarilyMarked = new HashSet<ReducerFactory>();
|
||||
while (!unmarkedFactories.isEmpty()) {
|
||||
ReducerFactory factory = unmarkedFactories.get(0);
|
||||
resolveReducerOrder(aggFactoryNames, reducerFactoriesMap, orderedReducers, unmarkedFactories, temporarilyMarked, factory);
|
||||
}
|
||||
return orderedReducers;
|
||||
}
|
||||
|
||||
private void resolveReducerOrder(Set<String> aggFactoryNames, Map<String, ReducerFactory> reducerFactoriesMap,
|
||||
List<ReducerFactory> orderedReducers, List<ReducerFactory> unmarkedFactories, Set<ReducerFactory> temporarilyMarked,
|
||||
ReducerFactory factory) {
|
||||
if (temporarilyMarked.contains(factory)) {
|
||||
throw new IllegalStateException("Cyclical dependancy found with reducer [" + factory.getName() + "]");
|
||||
} else if (unmarkedFactories.contains(factory)) {
|
||||
temporarilyMarked.add(factory);
|
||||
String[] bucketsPaths = factory.getBucketsPaths();
|
||||
for (String bucketsPath : bucketsPaths) {
|
||||
List<String> bucketsPathElements = AggregationPath.parse(bucketsPath).getPathElementsAsStringList();
|
||||
String firstAggName = bucketsPathElements.get(0);
|
||||
if (bucketsPath.equals("_count") || bucketsPath.equals("_key") || aggFactoryNames.contains(firstAggName)) {
|
||||
continue;
|
||||
} else {
|
||||
ReducerFactory matchingFactory = reducerFactoriesMap.get(firstAggName);
|
||||
if (matchingFactory != null) {
|
||||
resolveReducerOrder(aggFactoryNames, reducerFactoriesMap, orderedReducers, unmarkedFactories,
|
||||
temporarilyMarked, matchingFactory);
|
||||
} else {
|
||||
throw new IllegalStateException("No aggregation found for path [" + bucketsPath + "]");
|
||||
}
|
||||
}
|
||||
}
|
||||
unmarkedFactories.remove(factory);
|
||||
temporarilyMarked.remove(factory);
|
||||
orderedReducers.add(factory);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -23,10 +23,12 @@ import org.apache.lucene.search.Scorer;
|
|||
import org.elasticsearch.common.lease.Releasables;
|
||||
import org.elasticsearch.common.util.BigArrays;
|
||||
import org.elasticsearch.common.util.ObjectArray;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationContext;
|
||||
import org.elasticsearch.search.internal.SearchContext.Lifetime;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
|
@ -64,6 +66,10 @@ public abstract class AggregatorFactory {
|
|||
return this;
|
||||
}
|
||||
|
||||
public String name() {
|
||||
return name;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates the state of this factory (makes sure the factory is properly configured)
|
||||
*/
|
||||
|
@ -79,7 +85,8 @@ public abstract class AggregatorFactory {
|
|||
return parent;
|
||||
}
|
||||
|
||||
protected abstract Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException;
|
||||
protected abstract Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
|
||||
List<Reducer> reducers, Map<String, Object> metaData) throws IOException;
|
||||
|
||||
/**
|
||||
* Creates the aggregator
|
||||
|
@ -92,7 +99,7 @@ public abstract class AggregatorFactory {
|
|||
* @return The created aggregator
|
||||
*/
|
||||
public final Aggregator create(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket) throws IOException {
|
||||
return createInternal(context, parent, collectsFromSingleBucket, this.metaData);
|
||||
return createInternal(context, parent, collectsFromSingleBucket, this.factories.createReducers(), this.metaData);
|
||||
}
|
||||
|
||||
public void doValidate() {
|
||||
|
@ -102,6 +109,8 @@ public abstract class AggregatorFactory {
|
|||
this.metaData = metaData;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* Utility method. Given an {@link AggregatorFactory} that creates {@link Aggregator}s that only know how
|
||||
* to collect bucket <tt>0</tt>, this returns an aggregator that can collect any bucket.
|
||||
|
|
|
@ -24,6 +24,8 @@ import org.elasticsearch.common.collect.MapBuilder;
|
|||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.search.SearchParseException;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.reducers.ReducerFactory;
|
||||
import org.elasticsearch.search.internal.SearchContext;
|
||||
|
||||
import java.io.IOException;
|
||||
|
@ -38,21 +40,30 @@ import java.util.regex.Pattern;
|
|||
public class AggregatorParsers {
|
||||
|
||||
public static final Pattern VALID_AGG_NAME = Pattern.compile("[^\\[\\]>]+");
|
||||
private final ImmutableMap<String, Aggregator.Parser> parsers;
|
||||
private final ImmutableMap<String, Aggregator.Parser> aggParsers;
|
||||
private final ImmutableMap<String, Reducer.Parser> reducerParsers;
|
||||
|
||||
|
||||
/**
|
||||
* Constructs the AggregatorParsers out of all the given parsers
|
||||
*
|
||||
* @param parsers The available aggregator parsers (dynamically injected by the {@link org.elasticsearch.search.aggregations.AggregationModule}).
|
||||
* @param aggParsers
|
||||
* The available aggregator parsers (dynamically injected by the
|
||||
*            {@link org.elasticsearch.search.aggregations.AggregationModule}).
|
||||
*/
|
||||
@Inject
|
||||
public AggregatorParsers(Set<Aggregator.Parser> parsers) {
|
||||
MapBuilder<String, Aggregator.Parser> builder = MapBuilder.newMapBuilder();
|
||||
for (Aggregator.Parser parser : parsers) {
|
||||
builder.put(parser.type(), parser);
|
||||
public AggregatorParsers(Set<Aggregator.Parser> aggParsers, Set<Reducer.Parser> reducerParsers) {
|
||||
MapBuilder<String, Aggregator.Parser> aggParsersBuilder = MapBuilder.newMapBuilder();
|
||||
for (Aggregator.Parser parser : aggParsers) {
|
||||
aggParsersBuilder.put(parser.type(), parser);
|
||||
}
|
||||
this.parsers = builder.immutableMap();
|
||||
this.aggParsers = aggParsersBuilder.immutableMap();
|
||||
MapBuilder<String, Reducer.Parser> reducerParsersBuilder = MapBuilder.newMapBuilder();
|
||||
for (Reducer.Parser parser : reducerParsers) {
|
||||
reducerParsersBuilder.put(parser.type(), parser);
|
||||
}
|
||||
this.reducerParsers = reducerParsersBuilder.immutableMap();
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -62,7 +73,18 @@ public class AggregatorParsers {
|
|||
* @return The parser associated with the given aggregation type.
|
||||
*/
|
||||
public Aggregator.Parser parser(String type) {
|
||||
return parsers.get(type);
|
||||
return aggParsers.get(type);
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the parser that is registered under the given reducer type.
|
||||
*
|
||||
* @param type
|
||||
* The reducer type
|
||||
* @return The parser associated with the given reducer type.
|
||||
*/
|
||||
public Reducer.Parser reducer(String type) {
|
||||
return reducerParsers.get(type);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -102,7 +124,8 @@ public class AggregatorParsers {
|
|||
+ "], expected a [" + XContentParser.Token.START_OBJECT + "].", parser.getTokenLocation());
|
||||
}
|
||||
|
||||
AggregatorFactory factory = null;
|
||||
AggregatorFactory aggFactory = null;
|
||||
ReducerFactory reducerFactory = null;
|
||||
AggregatorFactories subFactories = null;
|
||||
|
||||
Map<String, Object> metaData = null;
|
||||
|
@ -134,37 +157,57 @@ public class AggregatorParsers {
|
|||
subFactories = parseAggregators(parser, context, level+1);
|
||||
break;
|
||||
default:
|
||||
if (factory != null) {
|
||||
throw new SearchParseException(context, "Found two aggregation type definitions in [" + aggregationName + "]: ["
|
||||
+ factory.type + "] and [" + fieldName + "]", parser.getTokenLocation());
|
||||
if (aggFactory != null) {
|
||||
throw new SearchParseException(context, "Found two aggregation type definitions in [" + aggregationName + "]: ["
|
||||
+ aggFactory.type + "] and [" + fieldName + "]", parser.getTokenLocation());
|
||||
}
|
||||
if (reducerFactory != null) {
|
||||
// TODO we would need a .type property on reducers too for this error message?
|
||||
throw new SearchParseException(context, "Found two aggregation type definitions in [" + aggregationName + "]: ["
|
||||
+ reducerFactory + "] and [" + fieldName + "]", parser.getTokenLocation());
|
||||
}
|
||||
|
||||
Aggregator.Parser aggregatorParser = parser(fieldName);
|
||||
if (aggregatorParser == null) {
|
||||
throw new SearchParseException(context, "Could not find aggregator type [" + fieldName + "] in [" + aggregationName
|
||||
+ "]", parser.getTokenLocation());
|
||||
Reducer.Parser reducerParser = reducer(fieldName);
|
||||
if (reducerParser == null) {
|
||||
throw new SearchParseException(context, "Could not find aggregator type [" + fieldName + "] in ["
|
||||
+ aggregationName + "]", parser.getTokenLocation());
|
||||
} else {
|
||||
reducerFactory = reducerParser.parse(aggregationName, parser, context);
|
||||
}
|
||||
} else {
|
||||
aggFactory = aggregatorParser.parse(aggregationName, parser, context);
|
||||
}
|
||||
factory = aggregatorParser.parse(aggregationName, parser, context);
|
||||
}
|
||||
}
|
||||
|
||||
if (factory == null) {
|
||||
if (aggFactory == null && reducerFactory == null) {
|
||||
throw new SearchParseException(context, "Missing definition for aggregation [" + aggregationName + "]",
|
||||
parser.getTokenLocation());
|
||||
}
|
||||
|
||||
} else if (aggFactory != null) {
|
||||
assert reducerFactory == null;
|
||||
if (metaData != null) {
|
||||
factory.setMetaData(metaData);
|
||||
aggFactory.setMetaData(metaData);
|
||||
}
|
||||
|
||||
if (subFactories != null) {
|
||||
factory.subFactories(subFactories);
|
||||
aggFactory.subFactories(subFactories);
|
||||
}
|
||||
|
||||
if (level == 0) {
|
||||
factory.validate();
|
||||
aggFactory.validate();
|
||||
}
|
||||
|
||||
factories.add(factory);
|
||||
factories.addAggregator(aggFactory);
|
||||
} else {
|
||||
assert reducerFactory != null;
|
||||
if (subFactories != null) {
|
||||
throw new SearchParseException(context, "Aggregation [" + aggregationName + "] cannot define sub-aggregations",
|
||||
parser.getTokenLocation());
|
||||
}
|
||||
factories.addReducer(reducerFactory);
|
||||
}
|
||||
}
|
||||
|
||||
return factories.build();
|
||||
|
|
|
@ -18,6 +18,9 @@
 */
package org.elasticsearch.search.aggregations;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;

@ -28,6 +31,8 @@ import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.reducers.ReducerStreams;
import org.elasticsearch.search.aggregations.support.AggregationPath;

import java.io.IOException;

@ -110,6 +115,8 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
    protected Map<String, Object> metaData;

    private List<Reducer> reducers;

    /** Constructs an un initialized addAggregation (used for serialization) **/
    protected InternalAggregation() {}

@ -118,8 +125,9 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
     *
     * @param name The name of the get.
     */
    protected InternalAggregation(String name, Map<String, Object> metaData) {
    protected InternalAggregation(String name, List<Reducer> reducers, Map<String, Object> metaData) {
        this.name = name;
        this.reducers = reducers;
        this.metaData = metaData;
    }

@ -139,7 +147,15 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
     * try reusing an existing get instance (typically the first in the given list) to save on redundant object
     * construction.
     */
    public abstract InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext);
    public final InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
        InternalAggregation aggResult = doReduce(aggregations, reduceContext);
        for (Reducer reducer : reducers) {
            aggResult = reducer.reduce(aggResult, reduceContext);
        }
        return aggResult;
    }

    public abstract InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext);

    @Override
    public Object getProperty(String path) {

@ -172,6 +188,10 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
        return metaData;
    }

    public List<Reducer> reducers() {
        return reducers;
    }

    @Override
    public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject(name);

@ -190,6 +210,11 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
    public final void writeTo(StreamOutput out) throws IOException {
        out.writeString(name);
        out.writeGenericValue(metaData);
        out.writeVInt(reducers.size());
        for (Reducer reducer : reducers) {
            out.writeBytesReference(reducer.type().stream());
            reducer.writeTo(out);
        }
        doWriteTo(out);
    }

@ -199,6 +224,17 @@ public abstract class InternalAggregation implements Aggregation, ToXContent, St
    public final void readFrom(StreamInput in) throws IOException {
        name = in.readString();
        metaData = in.readMap();
        int size = in.readVInt();
        if (size == 0) {
            reducers = ImmutableList.of();
        } else {
            reducers = Lists.newArrayListWithCapacity(size);
            for (int i = 0; i < size; i++) {
                BytesReference type = in.readBytesReference();
                Reducer reducer = ReducerStreams.stream(type).readResult(in);
                reducers.add(reducer);
            }
        }
        doReadFrom(in);
    }
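NOTE: the hunk above turns `reduce` into a template method: subclasses now implement `doReduce`, and the final `reduce` applies any attached reducers to the reduced result. A minimal sketch of what a concrete aggregation now looks like — the class below is hypothetical, for illustration only, and elides the other abstract methods (doWriteTo, doReadFrom, type(), etc.):

    // Hypothetical example only: shows the new doReduce contract.
    public class InternalMaxExample extends InternalAggregation {

        private double max;

        protected InternalMaxExample(String name, double max, List<Reducer> reducers, Map<String, Object> metaData) {
            super(name, reducers, metaData);
            this.max = max;
        }

        @Override
        public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
            double max = Double.NEGATIVE_INFINITY;
            for (InternalAggregation aggregation : aggregations) {
                max = Math.max(max, ((InternalMaxExample) aggregation).max);
            }
            // the final reduce(...) in InternalAggregation runs the reducers over this result
            return new InternalMaxExample(getName(), max, reducers(), getMetaData());
        }

        // doWriteTo, doReadFrom, type(), etc. omitted for brevity
    }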
@ -20,19 +20,43 @@
package org.elasticsearch.search.aggregations;

import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.util.List;
import java.util.Map;

public abstract class InternalMultiBucketAggregation extends InternalAggregation implements MultiBucketsAggregation {
public abstract class InternalMultiBucketAggregation<A extends InternalMultiBucketAggregation, B extends InternalMultiBucketAggregation.InternalBucket>
        extends InternalAggregation implements MultiBucketsAggregation {

    public InternalMultiBucketAggregation() {
    }

    public InternalMultiBucketAggregation(String name, Map<String, Object> metaData) {
        super(name, metaData);
    public InternalMultiBucketAggregation(String name, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, reducers, metaData);
    }

    /**
     * Create a new copy of this {@link Aggregation} with the same settings as
     * this {@link Aggregation} and contains the provided buckets.
     *
     * @param buckets
     *            the buckets to use in the new {@link Aggregation}
     * @return the new {@link Aggregation}
     */
    public abstract A create(List<B> buckets);

    /**
     * Create a new {@link InternalBucket} using the provided prototype bucket
     * and aggregations.
     *
     * @param aggregations
     *            the aggregations for the new bucket
     * @param prototype
     *            the bucket to use as a prototype
     * @return the new bucket
     */
    public abstract B createBucket(InternalAggregations aggregations, B prototype);

    @Override
    public Object getProperty(List<String> path) {
        if (path.isEmpty()) {

@ -57,18 +81,19 @@ public abstract class InternalMultiBucketAggregation extends InternalAggregation
        String aggName = path.get(0);
        if (aggName.equals("_count")) {
            if (path.size() > 1) {
                throw new IllegalArgumentException("_count must be the last element in the path");
                throw new InvalidAggregationPathException("_count must be the last element in the path");
            }
            return getDocCount();
        } else if (aggName.equals("_key")) {
            if (path.size() > 1) {
                throw new IllegalArgumentException("_key must be the last element in the path");
                throw new InvalidAggregationPathException("_key must be the last element in the path");
            }
            return getKey();
        }
        InternalAggregation aggregation = aggregations.get(aggName);
        if (aggregation == null) {
            throw new IllegalArgumentException("Cannot find an aggregation named [" + aggName + "] in [" + containingAggName + "]");
            throw new InvalidAggregationPathException("Cannot find an aggregation named [" + aggName + "] in [" + containingAggName
                    + "]");
        }
        return aggregation.getProperty(path.subList(1, path.size()));
    }
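NOTE: the two abstract factory methods added above (`create` and `createBucket`) are what let a parent reducer rebuild a multi-bucket aggregation after deriving new values for its buckets. A rough sketch of that pattern, assuming a hypothetical computeExtraAggs(...) helper that stands in for the reducer's real work:

    // Hypothetical sketch: enrich each bucket, then rebuild the aggregation.
    <A extends InternalMultiBucketAggregation<A, B>, B extends InternalMultiBucketAggregation.InternalBucket>
            A enrich(A aggregation, List<B> buckets) {
        List<B> newBuckets = new ArrayList<>();
        for (B bucket : buckets) {
            // computeExtraAggs(bucket) is a stand-in for computing the reducer's output
            InternalAggregations enriched = computeExtraAggs(bucket);
            newBuckets.add(aggregation.createBucket(enriched, bucket));
        }
        return aggregation.create(newBuckets);
    }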
@ -0,0 +1,33 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.search.aggregations;

import org.elasticsearch.ElasticsearchException;

public class InvalidAggregationPathException extends ElasticsearchException {

    public InvalidAggregationPathException(String msg) {
        super(msg);
    }

    public InvalidAggregationPathException(String msg, Throwable cause) {
        super(msg, cause);
    }
}
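NOTE: this new exception replaces the IllegalArgumentException previously thrown from getProperty, so bad aggregation paths fail with a dedicated type. Illustrative use only (the path and names below are made up):

    // Illustrative only: a path naming a missing sub-aggregation now fails
    // with InvalidAggregationPathException, e.g.
    // "Cannot find an aggregation named [no_such_metric] in [sales_per_month]".
    try {
        aggregation.getProperty("sales_per_month>no_such_metric");
    } catch (InvalidAggregationPathException e) {
        // handle or report the invalid buckets_path
    }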
@ -20,9 +20,11 @@
package org.elasticsearch.search.aggregations;

import org.apache.lucene.index.LeafReaderContext;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -31,12 +33,14 @@ import java.util.Map;
 */
public abstract class NonCollectingAggregator extends AggregatorBase {

    protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, AggregatorFactories subFactories, Map<String, Object> metaData) throws IOException {
        super(name, subFactories, context, parent, metaData);
    protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, AggregatorFactories subFactories,
            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, subFactories, context, parent, reducers, metaData);
    }

    protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
        this(name, context, parent, AggregatorFactories.EMPTY, metaData);
    protected NonCollectingAggregator(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        this(name, context, parent, AggregatorFactories.EMPTY, reducers, metaData);
    }

    @Override
@ -59,6 +59,12 @@ import org.elasticsearch.search.aggregations.metrics.stats.extended.InternalExte
import org.elasticsearch.search.aggregations.metrics.sum.InternalSum;
import org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits;
import org.elasticsearch.search.aggregations.metrics.valuecount.InternalValueCount;
import org.elasticsearch.search.aggregations.reducers.InternalSimpleValue;
import org.elasticsearch.search.aggregations.reducers.bucketmetrics.InternalBucketMetricValue;
import org.elasticsearch.search.aggregations.reducers.bucketmetrics.MaxBucketReducer;
import org.elasticsearch.search.aggregations.reducers.derivative.DerivativeReducer;
import org.elasticsearch.search.aggregations.reducers.movavg.MovAvgReducer;
import org.elasticsearch.search.aggregations.reducers.movavg.models.TransportMovAvgModelModule;

/**
 * A module that registers all the transport streams for the addAggregation

@ -93,7 +99,7 @@ public class TransportAggregationModule extends AbstractModule implements SpawnM
        SignificantStringTerms.registerStreams();
        SignificantLongTerms.registerStreams();
        UnmappedSignificantTerms.registerStreams();
        InternalGeoHashGrid.registerStreams();
        DoubleTerms.registerStreams();
        UnmappedTerms.registerStreams();
        InternalRange.registerStream();

@ -106,10 +112,17 @@ public class TransportAggregationModule extends AbstractModule implements SpawnM
        InternalTopHits.registerStreams();
        InternalGeoBounds.registerStream();
        InternalChildren.registerStream();

        // Reducers
        DerivativeReducer.registerStreams();
        InternalSimpleValue.registerStreams();
        InternalBucketMetricValue.registerStreams();
        MaxBucketReducer.registerStreams();
        MovAvgReducer.registerStreams();
    }

    @Override
    public Iterable<? extends Module> spawnModules() {
        return ImmutableList.of(new TransportSignificantTermsHeuristicModule());
        return ImmutableList.of(new TransportSignificantTermsHeuristicModule(), new TransportMovAvgModelModule());
    }
}
@ -27,10 +27,12 @@ import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**

@ -41,9 +43,9 @@ public abstract class BucketsAggregator extends AggregatorBase {
    private final BigArrays bigArrays;
    private IntArray docCounts;

    public BucketsAggregator(String name, AggregatorFactories factories,
            AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, factories, context, parent, metaData);
    public BucketsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent,
            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, context, parent, reducers, metaData);
        bigArrays = context.bigArrays();
        docCounts = bigArrays.newIntArray(1, true);
    }
@ -23,6 +23,7 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.ArrayList;

@ -46,8 +47,8 @@ public abstract class InternalSingleBucketAggregation extends InternalAggregatio
     * @param docCount The document count in the single bucket.
     * @param aggregations The already built sub-aggregations that are associated with the bucket.
     */
    protected InternalSingleBucketAggregation(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
        super(name, metaData);
    protected InternalSingleBucketAggregation(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, reducers, metaData);
        this.docCount = docCount;
        this.aggregations = aggregations;
    }

@ -68,7 +69,7 @@ public abstract class InternalSingleBucketAggregatio
    protected abstract InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations);

    @Override
    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
        long docCount = 0L;
        List<InternalAggregations> subAggregationsList = new ArrayList<>(aggregations.size());
        for (InternalAggregation aggregation : aggregations) {
@ -20,9 +20,11 @@ package org.elasticsearch.search.aggregations.bucket;

import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -31,8 +33,9 @@ import java.util.Map;
public abstract class SingleBucketAggregator extends BucketsAggregator {

    protected SingleBucketAggregator(String name, AggregatorFactories factories,
            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, metaData);
            AggregationContext aggregationContext, Aggregator parent,
            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, reducers, metaData);
    }

}
@ -23,8 +23,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -49,8 +51,9 @@ public class InternalChildren extends InternalSingleBucketAggregation implements
    public InternalChildren() {
    }

    public InternalChildren(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
        super(name, docCount, aggregations, metaData);
    public InternalChildren(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers,
            Map<String, Object> metaData) {
        super(name, docCount, aggregations, reducers, metaData);
    }

    @Override

@ -60,6 +63,6 @@ public class InternalChildren extends InternalSingleBucketAggregation implements

    @Override
    protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
        return new InternalChildren(name, docCount, subAggregations, getMetaData());
        return new InternalChildren(name, docCount, subAggregations, reducers(), getMetaData());
    }
}
@ -31,6 +31,7 @@ import org.elasticsearch.common.util.LongObjectPagedHashMap;
import org.elasticsearch.index.search.child.ConstantScorer;
import org.elasticsearch.search.aggregations.*;
import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;

@ -63,8 +64,9 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator {

    public ParentToChildrenAggregator(String name, AggregatorFactories factories, AggregationContext aggregationContext,
            Aggregator parent, String parentType, Filter childFilter, Filter parentFilter,
            ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource, long maxOrd, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, metaData);
            ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource,
            long maxOrd, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, reducers, metaData);
        this.parentType = parentType;
        // these two filters are cached in the parser
        this.childFilter = childFilter;

@ -77,12 +79,13 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator {

    @Override
    public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
        return new InternalChildren(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
        return new InternalChildren(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
                metaData());
    }

    @Override
    public InternalAggregation buildEmptyAggregation() {
        return new InternalChildren(name, 0, buildEmptySubAggregations(), metaData());
        return new InternalChildren(name, 0, buildEmptySubAggregations(), reducers(), metaData());
    }

    @Override

@ -192,21 +195,25 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator {
    }

    @Override
    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
        return new NonCollectingAggregator(name, aggregationContext, parent, metaData) {
    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        return new NonCollectingAggregator(name, aggregationContext, parent, reducers, metaData) {

            @Override
            public InternalAggregation buildEmptyAggregation() {
                return new InternalChildren(name, 0, buildEmptySubAggregations(), metaData());
                return new InternalChildren(name, 0, buildEmptySubAggregations(), reducers(), metaData());
            }

        };
    }

    @Override
    protected Aggregator doCreateInternal(ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
    protected Aggregator doCreateInternal(ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource,
            AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        long maxOrd = valuesSource.globalMaxOrd(aggregationContext.searchContext().searcher(), parentType);
        return new ParentToChildrenAggregator(name, factories, aggregationContext, parent, parentType, childFilter, parentFilter, valuesSource, maxOrd, metaData);
        return new ParentToChildrenAggregator(name, factories, aggregationContext, parent, parentType, childFilter, parentFilter,
                valuesSource, maxOrd, reducers, metaData);
    }

}
@ -22,6 +22,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.search.aggregations.AggregationExecutionException;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.AggregatorFactory;

@ -29,9 +30,11 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -45,9 +48,9 @@ public class FilterAggregator extends SingleBucketAggregator {
            org.apache.lucene.search.Filter filter,
            AggregatorFactories factories,
            AggregationContext aggregationContext,
            Aggregator parent,
            Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, metaData);
        super(name, factories, aggregationContext, parent, reducers, metaData);
        this.filter = filter;
    }

@ -69,12 +72,13 @@ public class FilterAggregator extends SingleBucketAggregator {

    @Override
    public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
        return new InternalFilter(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
        return new InternalFilter(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
                metaData());
    }

    @Override
    public InternalAggregation buildEmptyAggregation() {
        return new InternalFilter(name, 0, buildEmptySubAggregations(), metaData());
        return new InternalFilter(name, 0, buildEmptySubAggregations(), reducers(), metaData());
    }

    public static class Factory extends AggregatorFactory {

@ -87,8 +91,9 @@ public class FilterAggregator extends SingleBucketAggregator {
        }

        @Override
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
            return new FilterAggregator(name, filter, factories, context, parent, metaData);
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            return new FilterAggregator(name, filter, factories, context, parent, reducers, metaData);
        }

    }
@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -48,8 +50,8 @@ public class InternalFilter extends InternalSingleBucketAggregation implements F

    InternalFilter() {} // for serialization

    InternalFilter(String name, long docCount, InternalAggregations subAggregations, Map<String, Object> metaData) {
        super(name, docCount, subAggregations, metaData);
    InternalFilter(String name, long docCount, InternalAggregations subAggregations, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, docCount, subAggregations, reducers, metaData);
    }

    @Override

@ -59,6 +61,6 @@ public class InternalFilter extends InternalSingleBucketAggregation implements F

    @Override
    protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
        return new InternalFilter(name, docCount, subAggregations, getMetaData());
        return new InternalFilter(name, docCount, subAggregations, reducers(), getMetaData());
    }
}
@ -25,6 +25,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.search.aggregations.AggregationExecutionException;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.AggregatorFactory;

@ -33,6 +34,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;

@ -59,8 +61,9 @@ public class FiltersAggregator extends BucketsAggregator {
    private final boolean keyed;

    public FiltersAggregator(String name, AggregatorFactories factories, List<KeyedFilter> filters, boolean keyed, AggregationContext aggregationContext,
            Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, metaData);
            Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData)
            throws IOException {
        super(name, factories, aggregationContext, parent, reducers, metaData);
        this.keyed = keyed;
        this.filters = filters.toArray(new KeyedFilter[filters.size()]);
    }

@ -73,7 +76,7 @@ public class FiltersAggregator extends BucketsAggregator {
        final Bits[] bits = new Bits[filters.length];
        for (int i = 0; i < filters.length; ++i) {
            bits[i] = DocIdSets.asSequentialAccessBits(ctx.reader().maxDoc(), filters[i].filter.getDocIdSet(ctx, null));
        }
        return new LeafBucketCollectorBase(sub, null) {
            @Override
            public void collect(int doc, long bucket) throws IOException {

@ -95,7 +98,7 @@ public class FiltersAggregator extends BucketsAggregator {
            InternalFilters.Bucket bucket = new InternalFilters.Bucket(filter.key, bucketDocCount(bucketOrd), bucketAggregations(bucketOrd), keyed);
            buckets.add(bucket);
        }
        return new InternalFilters(name, buckets, keyed, metaData());
        return new InternalFilters(name, buckets, keyed, reducers(), metaData());
    }

    @Override

@ -106,7 +109,7 @@ public class FiltersAggregator extends BucketsAggregator {
            InternalFilters.Bucket bucket = new InternalFilters.Bucket(filters[i].key, 0, subAggs, keyed);
            buckets.add(bucket);
        }
        return new InternalFilters(name, buckets, keyed, metaData());
        return new InternalFilters(name, buckets, keyed, reducers(), metaData());
    }

    final long bucketOrd(long owningBucketOrdinal, int filterOrd) {

@ -125,8 +128,9 @@ public class FiltersAggregator extends BucketsAggregator {
        }

        @Override
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
            return new FiltersAggregator(name, factories, filters, keyed, context, parent, metaData);
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            return new FiltersAggregator(name, factories, filters, keyed, context, parent, reducers, metaData);
        }
    }
@ -29,8 +29,10 @@ import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation.InternalBucket;
import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
import org.elasticsearch.search.aggregations.bucket.BucketStreams;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.ArrayList;

@ -41,7 +43,7 @@ import java.util.Map;
/**
 *
 */
public class InternalFilters extends InternalMultiBucketAggregation implements Filters {
public class InternalFilters extends InternalMultiBucketAggregation<InternalFilters, InternalFilters.Bucket> implements Filters {

    public final static Type TYPE = new Type("filters");

@ -163,8 +165,8 @@ public class InternalFilters extends InternalMultiBucketAggregation implements F

    public InternalFilters() {} // for serialization

    public InternalFilters(String name, List<Bucket> buckets, boolean keyed, Map<String, Object> metaData) {
        super(name, metaData);
    public InternalFilters(String name, List<Bucket> buckets, boolean keyed, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, reducers, metaData);
        this.buckets = buckets;
        this.keyed = keyed;
    }

@ -174,6 +176,16 @@ public class InternalFilters extends InternalMultiBucketAggregation implements F
        return TYPE;
    }

    @Override
    public InternalFilters create(List<Bucket> buckets) {
        return new InternalFilters(this.name, buckets, this.keyed, this.reducers(), this.metaData);
    }

    @Override
    public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
        return new Bucket(prototype.key, prototype.docCount, aggregations, prototype.keyed);
    }

    @Override
    public List<Bucket> getBuckets() {
        return buckets;

@ -191,7 +203,7 @@ public class InternalFilters extends InternalMultiBucketAggregation implements F
    }

    @Override
    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
        List<List<Bucket>> bucketsList = null;
        for (InternalAggregation aggregation : aggregations) {
            InternalFilters filters = (InternalFilters) aggregation;

@ -210,7 +222,7 @@ public class InternalFilters extends InternalMultiBucketAggregation implements F
            }
        }

        InternalFilters reduced = new InternalFilters(name, new ArrayList<Bucket>(bucketsList.size()), keyed, getMetaData());
        InternalFilters reduced = new InternalFilters(name, new ArrayList<Bucket>(bucketsList.size()), keyed, reducers(), getMetaData());
        for (List<Bucket> sameRangeList : bucketsList) {
            reduced.buckets.add((sameRangeList.get(0)).reduce(sameRangeList, reduceContext));
        }
@ -28,12 +28,14 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

/**

@ -49,8 +51,9 @@ public class GeoHashGridAggregator extends BucketsAggregator {
    private final LongHash bucketOrds;

    public GeoHashGridAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource,
            int requiredSize, int shardSize, AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, metaData);
            int requiredSize, int shardSize, AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, reducers, metaData);
        this.valuesSource = valuesSource;
        this.requiredSize = requiredSize;
        this.shardSize = shardSize;

@ -126,12 +129,12 @@ public class GeoHashGridAggregator extends BucketsAggregator {
            bucket.aggregations = bucketAggregations(bucket.bucketOrd);
            list[i] = bucket;
        }
        return new InternalGeoHashGrid(name, requiredSize, Arrays.asList(list), metaData());
        return new InternalGeoHashGrid(name, requiredSize, Arrays.asList(list), reducers(), metaData());
    }

    @Override
    public InternalGeoHashGrid buildEmptyAggregation() {
        return new InternalGeoHashGrid(name, requiredSize, Collections.<InternalGeoHashGrid.Bucket>emptyList(), metaData());
        return new InternalGeoHashGrid(name, requiredSize, Collections.<InternalGeoHashGrid.Bucket> emptyList(), reducers(), metaData());
    }
@ -34,6 +34,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactory;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.NonCollectingAggregator;
import org.elasticsearch.search.aggregations.bucket.BucketUtils;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;

@ -43,6 +44,7 @@ import org.elasticsearch.search.internal.SearchContext;

import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;

/**

@ -123,10 +125,11 @@ public class GeoHashGridParser implements Aggregator.Parser {
        }

        @Override
        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
            final InternalAggregation aggregation = new InternalGeoHashGrid(name, requiredSize, Collections.<InternalGeoHashGrid.Bucket>emptyList(), metaData);
            return new NonCollectingAggregator(name, aggregationContext, parent, metaData) {
        @Override
        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
                Map<String, Object> metaData) throws IOException {
            final InternalAggregation aggregation = new InternalGeoHashGrid(name, requiredSize,
                    Collections.<InternalGeoHashGrid.Bucket> emptyList(), reducers, metaData);
            return new NonCollectingAggregator(name, aggregationContext, parent, reducers, metaData) {
                public InternalAggregation buildEmptyAggregation() {
                    return aggregation;
                }

@ -134,12 +137,15 @@ public class GeoHashGridParser implements Aggregator.Parser {
        }

        @Override
        protected Aggregator doCreateInternal(final ValuesSource.GeoPoint valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
        protected Aggregator doCreateInternal(final ValuesSource.GeoPoint valuesSource, AggregationContext aggregationContext,
                Aggregator parent, boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData)
                throws IOException {
            if (collectsFromSingleBucket == false) {
                return asMultiBucketAggregator(this, aggregationContext, parent);
            }
            ValuesSource.Numeric cellIdSource = new CellIdSource(valuesSource, precision);
            return new GeoHashGridAggregator(name, factories, cellIdSource, requiredSize, shardSize, aggregationContext, parent, metaData);
            return new GeoHashGridAggregator(name, factories, cellIdSource, requiredSize, shardSize, aggregationContext, parent, reducers,
                    metaData);
        }
@ -32,6 +32,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
import org.elasticsearch.search.aggregations.bucket.BucketStreams;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.ArrayList;

@ -45,7 +46,8 @@ import java.util.Map;
 * All geohashes in a grid are of the same precision and held internally as a single long
 * for efficiency's sake.
 */
public class InternalGeoHashGrid extends InternalMultiBucketAggregation implements GeoHashGrid {
public class InternalGeoHashGrid extends InternalMultiBucketAggregation<InternalGeoHashGrid, InternalGeoHashGrid.Bucket> implements
        GeoHashGrid {

    public static final Type TYPE = new Type("geohash_grid", "ghcells");

@ -162,7 +164,6 @@ public class InternalGeoHashGrid extends InternalMultiBucketAggregation implemen
            return builder;
        }
    }

    private int requiredSize;
    private Collection<Bucket> buckets;
    protected Map<String, Bucket> bucketMap;

@ -170,8 +171,9 @@ public class InternalGeoHashGrid extends InternalMultiBucketAggregation implemen
    InternalGeoHashGrid() {
    } // for serialization

    public InternalGeoHashGrid(String name, int requiredSize, Collection<Bucket> buckets, Map<String, Object> metaData) {
        super(name, metaData);
    public InternalGeoHashGrid(String name, int requiredSize, Collection<Bucket> buckets, List<Reducer> reducers,
            Map<String, Object> metaData) {
        super(name, reducers, metaData);
        this.requiredSize = requiredSize;
        this.buckets = buckets;
    }

@ -181,6 +183,16 @@ public class InternalGeoHashGrid extends InternalMultiBucketAggregation implemen
        return TYPE;
    }

    @Override
    public InternalGeoHashGrid create(List<Bucket> buckets) {
        return new InternalGeoHashGrid(this.name, this.requiredSize, buckets, this.reducers(), this.metaData);
    }

    @Override
    public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
        return new Bucket(prototype.geohashAsLong, prototype.docCount, aggregations);
    }

    @Override
    public List<GeoHashGrid.Bucket> getBuckets() {
        Object o = buckets;

@ -188,7 +200,7 @@ public class InternalGeoHashGrid extends InternalMultiBucketAggregation implemen
    }

    @Override
    public InternalGeoHashGrid reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    public InternalGeoHashGrid doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {

        LongObjectPagedHashMap<List<Bucket>> buckets = null;
        for (InternalAggregation aggregation : aggregations) {

@ -217,7 +229,7 @@ public class InternalGeoHashGrid extends InternalMultiBucketAggregation implemen
        for (int i = ordered.size() - 1; i >= 0; i--) {
            list[i] = ordered.pop();
        }
        return new InternalGeoHashGrid(getName(), requiredSize, Arrays.asList(list), getMetaData());
        return new InternalGeoHashGrid(getName(), requiredSize, Arrays.asList(list), reducers(), getMetaData());
    }

    @Override
@ -27,9 +27,11 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -37,8 +39,9 @@ import java.util.Map;
 */
public class GlobalAggregator extends SingleBucketAggregator {

    public GlobalAggregator(String name, AggregatorFactories subFactories, AggregationContext aggregationContext, Map<String, Object> metaData) throws IOException {
        super(name, subFactories, aggregationContext, null, metaData);
    public GlobalAggregator(String name, AggregatorFactories subFactories, AggregationContext aggregationContext, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        super(name, subFactories, aggregationContext, null, reducers, metaData);
    }

    @Override

@ -56,7 +59,8 @@ public class GlobalAggregator extends SingleBucketAggregator {
    @Override
    public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
        assert owningBucketOrdinal == 0 : "global aggregator can only be a top level aggregator";
        return new InternalGlobal(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
        return new InternalGlobal(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
                metaData());
    }

    @Override

@ -71,7 +75,8 @@ public class GlobalAggregator extends SingleBucketAggregator {
        }

        @Override
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
        public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            if (parent != null) {
                throw new AggregationExecutionException("Aggregation [" + parent.name() + "] cannot have a global " +
                        "sub-aggregation [" + name + "]. Global aggregations can only be defined as top level aggregations");

@ -79,7 +84,7 @@ public class GlobalAggregator extends SingleBucketAggregator {
            if (collectsFromSingleBucket == false) {
                throw new IllegalStateException();
            }
            return new GlobalAggregator(name, factories, context, metaData);
            return new GlobalAggregator(name, factories, context, reducers, metaData);
        }

    }
@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -49,8 +51,8 @@ public class InternalGlobal extends InternalSingleBucketAggregation implements G

    InternalGlobal() {} // for serialization

    InternalGlobal(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
        super(name, docCount, aggregations, metaData);
    InternalGlobal(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, docCount, aggregations, reducers, metaData);
    }

    @Override

@ -60,6 +62,6 @@ public class InternalGlobal extends InternalSingleBucketAggregation implements G

    @Override
    protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
        return new InternalGlobal(name, docCount, subAggregations, getMetaData());
        return new InternalGlobal(name, docCount, subAggregations, reducers(), getMetaData());
    }
}
@ -31,6 +31,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
|
|||
import org.elasticsearch.search.aggregations.LeafBucketCollector;
|
||||
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
|
||||
import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
import org.elasticsearch.search.aggregations.support.AggregationContext;
|
||||
import org.elasticsearch.search.aggregations.support.ValuesSource;
|
||||
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
|
||||
|
@ -56,15 +57,14 @@ public class HistogramAggregator extends BucketsAggregator {
|
|||
private final InternalHistogram.Factory histogramFactory;
|
||||
|
||||
private final LongHash bucketOrds;
|
||||
private SortedNumericDocValues values;
|
||||
|
||||
public HistogramAggregator(String name, AggregatorFactories factories, Rounding rounding, InternalOrder order,
|
||||
boolean keyed, long minDocCount, @Nullable ExtendedBounds extendedBounds,
|
||||
@Nullable ValuesSource.Numeric valuesSource, @Nullable ValueFormatter formatter,
|
||||
-            InternalHistogram.Factory<?> histogramFactory,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        super(name, factories, aggregationContext, parent, metaData);
+            InternalHistogram.Factory<?> histogramFactory, AggregationContext aggregationContext,
+            Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+        super(name, factories, aggregationContext, parent, reducers, metaData);
         this.rounding = rounding;
         this.order = order;
         this.keyed = keyed;

@@ -130,13 +130,14 @@ public class HistogramAggregator extends BucketsAggregator {
         // value source will be null for unmapped fields
         InternalHistogram.EmptyBucketInfo emptyBucketInfo = minDocCount == 0 ? new InternalHistogram.EmptyBucketInfo(rounding, buildEmptySubAggregations(), extendedBounds) : null;
-        return histogramFactory.create(name, buckets, order, minDocCount, emptyBucketInfo, formatter, keyed, metaData());
+        return histogramFactory.create(name, buckets, order, minDocCount, emptyBucketInfo, formatter, keyed, reducers(), metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
         InternalHistogram.EmptyBucketInfo emptyBucketInfo = minDocCount == 0 ? new InternalHistogram.EmptyBucketInfo(rounding, buildEmptySubAggregations(), extendedBounds) : null;
-        return histogramFactory.create(name, Collections.emptyList(), order, minDocCount, emptyBucketInfo, formatter, keyed, metaData());
+        return histogramFactory.create(name, Collections.emptyList(), order, minDocCount, emptyBucketInfo, formatter, keyed, reducers(),
+                metaData());
     }

     @Override

@@ -166,13 +167,20 @@ public class HistogramAggregator extends BucketsAggregator {
         this.histogramFactory = histogramFactory;
     }

+    public long minDocCount() {
+        return minDocCount;
+    }
+
     @Override
-    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, null, null, config.formatter(), histogramFactory, aggregationContext, parent, metaData);
+    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+            Map<String, Object> metaData) throws IOException {
+        return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, null, null, config.formatter(),
+                histogramFactory, aggregationContext, parent, reducers, metaData);
     }

     @Override
-    protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+    protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent,
+            boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
         if (collectsFromSingleBucket == false) {
             return asMultiBucketAggregator(this, aggregationContext, parent);
         }

@@ -185,7 +193,8 @@ public class HistogramAggregator extends BucketsAggregator {
             extendedBounds.processAndValidate(name, aggregationContext.searchContext(), config.parser());
             roundedBounds = extendedBounds.round(rounding);
         }
-        return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, roundedBounds, valuesSource, config.formatter(), histogramFactory, aggregationContext, parent, metaData);
+        return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, roundedBounds, valuesSource,
+                config.formatter(), histogramFactory, aggregationContext, parent, reducers, metaData);
     }

 }

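Every hunk above repeats one mechanical change: the aggregator constructor and the factory's `createUnmapped`/`doCreateInternal` gain a `List<Reducer>` parameter, and every `InternalHistogram` the aggregator builds is handed `reducers()`. A minimal, self-contained sketch of that plumbing pattern (hypothetical stand-in classes, not the actual Elasticsearch types):

[source,java]
----
import java.util.Collections;
import java.util.List;

// Stand-ins illustrating the plumbing: the reducer list is accepted at construction,
// stored on the aggregator, and copied into every result object it builds.
class ReducerPlumbingSketch {
    interface Reducer {}
    static class Result {
        final List<Reducer> reducers;
        Result(List<Reducer> reducers) { this.reducers = reducers; }
    }
    static class SketchAggregator {
        private final List<Reducer> reducers;
        SketchAggregator(List<Reducer> reducers) { this.reducers = reducers; }
        List<Reducer> reducers() { return reducers; }
        Result buildAggregation() { return new Result(reducers()); }      // mirrors buildAggregation(...)
        Result buildEmptyAggregation() { return new Result(reducers()); } // mirrors buildEmptyAggregation()
    }
    public static void main(String[] args) {
        SketchAggregator agg = new SketchAggregator(Collections.<Reducer>emptyList());
        System.out.println(agg.buildAggregation().reducers.isEmpty()); // true
    }
}
----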
@@ -19,16 +19,13 @@
 package org.elasticsearch.search.aggregations.bucket.histogram;

 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.search.aggregations.AggregationExecutionException;
 import org.elasticsearch.search.aggregations.InternalAggregation.Type;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.EmptyBucketInfo;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.joda.time.DateTime;
 import org.joda.time.DateTimeZone;

 import java.util.List;
 import java.util.Map;

 /**
  *
  */

@@ -74,14 +71,20 @@ public class InternalDateHistogram {
         }

         @Override
-        public InternalHistogram create(String name, List<InternalDateHistogram.Bucket> buckets, InternalOrder order,
-                long minDocCount, EmptyBucketInfo emptyBucketInfo, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return new InternalHistogram(name, buckets, order, minDocCount, emptyBucketInfo, formatter, keyed, this, metaData);
+        public InternalDateHistogram.Bucket createBucket(InternalAggregations aggregations, InternalDateHistogram.Bucket prototype) {
+            return new Bucket(prototype.key, prototype.docCount, aggregations, prototype.getKeyed(), prototype.formatter, this);
         }

         @Override
-        public InternalDateHistogram.Bucket createBucket(long key, long docCount, InternalAggregations aggregations, boolean keyed, @Nullable ValueFormatter formatter) {
-            return new Bucket(key, docCount, aggregations, keyed, formatter, this);
+        public InternalDateHistogram.Bucket createBucket(Object key, long docCount, InternalAggregations aggregations, boolean keyed,
+                @Nullable ValueFormatter formatter) {
+            if (key instanceof Number) {
+                return new Bucket(((Number) key).longValue(), docCount, aggregations, keyed, formatter, this);
+            } else if (key instanceof DateTime) {
+                return new Bucket(((DateTime) key).getMillis(), docCount, aggregations, keyed, formatter, this);
+            } else {
+                throw new AggregationExecutionException("Expected key of type Number or DateTime but got [" + key + "]");
+            }
         }

         @Override

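The factory's `createBucket` now takes an `Object key` and normalises it: numeric keys and Joda `DateTime` keys both collapse to epoch millis, anything else fails hard. A toy version of that dispatch, using `java.util.Date` as a stand-in for Joda's `DateTime`:

[source,java]
----
import java.util.Date;

// Illustration of the key-normalisation logic: both key shapes collapse to a long
// (epoch millis); anything else is a hard error, as in the diff above.
class KeyNormaliser {
    static long toMillis(Object key) {
        if (key instanceof Number) {
            return ((Number) key).longValue();
        } else if (key instanceof Date) {      // DateTime in the real code
            return ((Date) key).getTime();     // getMillis() in the real code
        } else {
            throw new IllegalStateException("Expected key of type Number or DateTime but got [" + key + "]");
        }
    }
    public static void main(String[] args) {
        System.out.println(toMillis(42L));         // 42
        System.out.println(toMillis(new Date(0))); // 0
    }
}
----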
@@ -29,6 +29,7 @@ import org.elasticsearch.common.rounding.Rounding;
 import org.elasticsearch.common.text.StringText;
 import org.elasticsearch.common.text.Text;
 import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.search.aggregations.AggregationExecutionException;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.Aggregations;
 import org.elasticsearch.search.aggregations.InternalAggregation;

@@ -36,6 +37,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
 import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@@ -49,7 +51,8 @@ import java.util.Map;
 /**
  * TODO should be renamed to InternalNumericHistogram (see comment on {@link Histogram})?
  */
-public class InternalHistogram<B extends InternalHistogram.Bucket> extends InternalMultiBucketAggregation implements Histogram {
+public class InternalHistogram<B extends InternalHistogram.Bucket> extends InternalMultiBucketAggregation<InternalHistogram, B> implements
+        Histogram {

     final static Type TYPE = new Type("histogram", "histo");

@@ -184,6 +187,14 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter
             out.writeVLong(docCount);
             aggregations.writeTo(out);
         }
+
+        public ValueFormatter getFormatter() {
+            return formatter;
+        }
+
+        public boolean getKeyed() {
+            return keyed;
+        }
     }

     static class EmptyBucketInfo {

@@ -222,7 +233,7 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter

     }

-    static class Factory<B extends InternalHistogram.Bucket> {
+    public static class Factory<B extends InternalHistogram.Bucket> {

         protected Factory() {
         }

@@ -232,12 +243,27 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter
         }

         public InternalHistogram<B> create(String name, List<B> buckets, InternalOrder order, long minDocCount,
-                EmptyBucketInfo emptyBucketInfo, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return new InternalHistogram<>(name, buckets, order, minDocCount, emptyBucketInfo, formatter, keyed, this, metaData);
+                EmptyBucketInfo emptyBucketInfo, @Nullable ValueFormatter formatter, boolean keyed, List<Reducer> reducers,
+                Map<String, Object> metaData) {
+            return new InternalHistogram<>(name, buckets, order, minDocCount, emptyBucketInfo, formatter, keyed, this, reducers, metaData);
         }

-        public B createBucket(long key, long docCount, InternalAggregations aggregations, boolean keyed, @Nullable ValueFormatter formatter) {
-            return (B) new Bucket(key, docCount, keyed, formatter, this, aggregations);
+        public InternalHistogram<B> create(List<B> buckets, InternalHistogram<B> prototype) {
+            return new InternalHistogram<>(prototype.name, buckets, prototype.order, prototype.minDocCount, prototype.emptyBucketInfo,
+                    prototype.formatter, prototype.keyed, this, prototype.reducers(), prototype.metaData);
+        }
+
+        public B createBucket(InternalAggregations aggregations, B prototype) {
+            return (B) new Bucket(prototype.key, prototype.docCount, prototype.getKeyed(), prototype.formatter, this, aggregations);
+        }
+
+        public B createBucket(Object key, long docCount, InternalAggregations aggregations, boolean keyed,
+                @Nullable ValueFormatter formatter) {
+            if (key instanceof Number) {
+                return (B) new Bucket(((Number) key).longValue(), docCount, keyed, formatter, this, aggregations);
+            } else {
+                throw new AggregationExecutionException("Expected key of type Number but got [" + key + "]");
+            }
         }

         protected B createEmptyBucket(boolean keyed, @Nullable ValueFormatter formatter) {

@@ -258,8 +284,8 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter

     InternalHistogram(String name, List<B> buckets, InternalOrder order, long minDocCount,
             EmptyBucketInfo emptyBucketInfo,
-            @Nullable ValueFormatter formatter, boolean keyed, Factory<B> factory, Map<String, Object> metaData) {
-        super(name, metaData);
+            @Nullable ValueFormatter formatter, boolean keyed, Factory<B> factory, List<Reducer> reducers, Map<String, Object> metaData) {
+        super(name, reducers, metaData);
         this.buckets = buckets;
         this.order = order;
         assert (minDocCount == 0) == (emptyBucketInfo != null);

@@ -280,10 +306,20 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter
         return buckets;
     }

-    protected Factory<B> getFactory() {
+    public Factory<B> getFactory() {
         return factory;
     }

+    @Override
+    public InternalHistogram<B> create(List<B> buckets) {
+        return getFactory().create(buckets, this);
+    }
+
+    @Override
+    public B createBucket(InternalAggregations aggregations, B prototype) {
+        return getFactory().createBucket(aggregations, prototype);
+    }
+
     private static class IteratorAndCurrent<B> {

         private final Iterator<B> iterator;

@@ -410,7 +446,7 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter
     }

     @Override
-    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
         List<B> reducedBuckets = reduceBuckets(aggregations, reduceContext);

         // adding empty buckets if needed

@@ -430,7 +466,8 @@ public class InternalHistogram<B extends InternalHistogram.Bucket> extends Inter
             CollectionUtil.introSort(reducedBuckets, order.comparator());
         }

-        return getFactory().create(getName(), reducedBuckets, order, minDocCount, emptyBucketInfo, formatter, keyed, getMetaData());
+        return getFactory().create(getName(), reducedBuckets, order, minDocCount, emptyBucketInfo, formatter, keyed, reducers(),
+                getMetaData());
     }

     @Override

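`reduce` becomes `doReduce` here and again in `InternalRange` further down. A plausible reading (the base-class change is not part of this excerpt, so treat this as an assumption) is that `InternalAggregation.reduce` is now a template method: it merges the shard-level results via `doReduce` and then lets each attached reducer transform the merged output. A sketch under that assumption:

[source,java]
----
import java.util.List;

// Assumed shape of the template-method split implied by the rename of reduce(...)
// to doReduce(...). The real base-class change is not shown in this excerpt.
abstract class InternalAggregationSketch {
    interface Reducer {
        InternalAggregationSketch reduce(InternalAggregationSketch in);
    }
    private final List<Reducer> reducers;
    InternalAggregationSketch(List<Reducer> reducers) { this.reducers = reducers; }

    public final InternalAggregationSketch reduce(List<InternalAggregationSketch> shards) {
        InternalAggregationSketch merged = doReduce(shards);
        for (Reducer r : reducers) {
            merged = r.reduce(merged); // reducers run on the merged output, not per shard
        }
        return merged;
    }
    protected abstract InternalAggregationSketch doReduce(List<InternalAggregationSketch> shards);
}
----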
@@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -50,8 +52,8 @@ public class InternalMissing extends InternalSingleBucketAggregation implements
     InternalMissing() {
     }

-    InternalMissing(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
-        super(name, docCount, aggregations, metaData);
+    InternalMissing(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers, Map<String, Object> metaData) {
+        super(name, docCount, aggregations, reducers, metaData);
     }

     @Override

@@ -61,6 +63,6 @@ public class InternalMissing extends InternalSingleBucketAggregation implements

     @Override
     protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
-        return new InternalMissing(name, docCount, subAggregations, getMetaData());
+        return new InternalMissing(name, docCount, subAggregations, reducers(), getMetaData());
     }
 }

@@ -26,12 +26,14 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
 import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -42,8 +44,9 @@ public class MissingAggregator extends SingleBucketAggregator {
     private final ValuesSource valuesSource;

     public MissingAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        super(name, factories, aggregationContext, parent, metaData);
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+            Map<String, Object> metaData) throws IOException {
+        super(name, factories, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
     }

@@ -69,12 +72,13 @@ public class MissingAggregator extends SingleBucketAggregator {

     @Override
     public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
-        return new InternalMissing(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
+        return new InternalMissing(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
+                metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new InternalMissing(name, 0, buildEmptySubAggregations(), metaData());
+        return new InternalMissing(name, 0, buildEmptySubAggregations(), reducers(), metaData());
     }

     public static class Factory extends ValuesSourceAggregatorFactory<ValuesSource> {

@@ -84,13 +88,15 @@ public class MissingAggregator extends SingleBucketAggregator {
         }

         @Override
-        protected MissingAggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            return new MissingAggregator(name, factories, null, aggregationContext, parent, metaData);
+        protected MissingAggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+                Map<String, Object> metaData) throws IOException {
+            return new MissingAggregator(name, factories, null, aggregationContext, parent, reducers, metaData);
         }

         @Override
-        protected MissingAggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
-            return new MissingAggregator(name, factories, valuesSource, aggregationContext, parent, metaData);
+        protected MissingAggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent,
+                boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+            return new MissingAggregator(name, factories, valuesSource, aggregationContext, parent, reducers, metaData);
         }
     }

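`InternalMissing` and `MissingAggregator` are the minimal, single-bucket instance of the pattern: the constructor stores the reducers and `newAggregation` forwards `reducers()` so reduced copies keep the chain. A sketch with stand-in names:

[source,java]
----
import java.util.List;

// The single-bucket variant of the plumbing: when a reduced copy of the aggregation
// is created (newAggregation in the diff), reducers() is forwarded so the chain
// survives the reduce phase. All names here are illustrative stand-ins.
class SingleBucketSketch {
    interface Reducer {}
    private final List<Reducer> reducers;
    private final long docCount;
    SingleBucketSketch(long docCount, List<Reducer> reducers) {
        this.docCount = docCount;
        this.reducers = reducers;
    }
    List<Reducer> reducers() { return reducers; }
    SingleBucketSketch newAggregation(long newDocCount) {
        return new SingleBucketSketch(newDocCount, reducers()); // keep the chain intact
    }
}
----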
@@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -49,8 +51,9 @@ public class InternalNested extends InternalSingleBucketAggregation implements N
     public InternalNested() {
     }

-    public InternalNested(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
-        super(name, docCount, aggregations, metaData);
+    public InternalNested(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, docCount, aggregations, reducers, metaData);
     }

     @Override

@@ -60,6 +63,6 @@ public class InternalNested extends InternalSingleBucketAggregation implements N

     @Override
     protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
-        return new InternalNested(name, docCount, subAggregations, getMetaData());
+        return new InternalNested(name, docCount, subAggregations, reducers(), getMetaData());
     }
 }

@@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -49,8 +51,9 @@ public class InternalReverseNested extends InternalSingleBucketAggregation imple
     public InternalReverseNested() {
     }

-    public InternalReverseNested(String name, long docCount, InternalAggregations aggregations, Map<String, Object> metaData) {
-        super(name, docCount, aggregations, metaData);
+    public InternalReverseNested(String name, long docCount, InternalAggregations aggregations, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, docCount, aggregations, reducers, metaData);
     }

     @Override

@@ -60,6 +63,6 @@ public class InternalReverseNested extends InternalSingleBucketAggregation imple

     @Override
     protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
-        return new InternalReverseNested(name, docCount, subAggregations, getMetaData());
+        return new InternalReverseNested(name, docCount, subAggregations, reducers(), getMetaData());
     }
 }

@@ -39,9 +39,11 @@ import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.NonCollectingAggregator;
 import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -55,8 +57,8 @@ public class NestedAggregator extends SingleBucketAggregator {
     private DocIdSetIterator childDocs;
     private BitSet parentDocs;

-    public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator, Map<String, Object> metaData, QueryCachingPolicy filterCachingPolicy) throws IOException {
-        super(name, factories, aggregationContext, parentAggregator, metaData);
+    public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator, List<Reducer> reducers, Map<String, Object> metaData, QueryCachingPolicy filterCachingPolicy) throws IOException {
+        super(name, factories, aggregationContext, parentAggregator, reducers, metaData);
         childFilter = aggregationContext.searchContext().filterCache().cache(objectMapper.nestedTypeFilter(), null, filterCachingPolicy);
     }

@@ -120,12 +122,13 @@ public class NestedAggregator extends SingleBucketAggregator {

     @Override
     public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
-        return new InternalNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
+        return new InternalNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
+                metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new InternalNested(name, 0, buildEmptySubAggregations(), metaData());
+        return new InternalNested(name, 0, buildEmptySubAggregations(), reducers(), metaData());
     }

     private static Filter findClosestNestedPath(Aggregator parent) {

@@ -151,33 +154,35 @@ public class NestedAggregator extends SingleBucketAggregator {
     }

     @Override
-    public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+    public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
+            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
         if (collectsFromSingleBucket == false) {
             return asMultiBucketAggregator(this, context, parent);
         }
         MapperService.SmartNameObjectMapper mapper = context.searchContext().smartNameObjectMapper(path);
         if (mapper == null) {
-            return new Unmapped(name, context, parent, metaData);
+            return new Unmapped(name, context, parent, reducers, metaData);
         }
         ObjectMapper objectMapper = mapper.mapper();
         if (objectMapper == null) {
-            return new Unmapped(name, context, parent, metaData);
+            return new Unmapped(name, context, parent, reducers, metaData);
         }
         if (!objectMapper.nested().isNested()) {
             throw new AggregationExecutionException("[nested] nested path [" + path + "] is not nested");
         }
-        return new NestedAggregator(name, factories, objectMapper, context, parent, metaData, queryCachingPolicy);
+        return new NestedAggregator(name, factories, objectMapper, context, parent, reducers, metaData, queryCachingPolicy);
     }

     private final static class Unmapped extends NonCollectingAggregator {

-        public Unmapped(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            super(name, context, parent, metaData);
+        public Unmapped(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData)
+                throws IOException {
+            super(name, context, parent, reducers, metaData);
         }

         @Override
         public InternalAggregation buildEmptyAggregation() {
-            return new InternalNested(name, 0, buildEmptySubAggregations(), metaData());
+            return new InternalNested(name, 0, buildEmptySubAggregations(), reducers(), metaData());
         }
     }
 }

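Both nested aggregator factories keep the same fallback: when the path is missing or not mapped as `nested`, `createInternal` returns an `Unmapped` `NonCollectingAggregator` that only ever builds an empty result, and after this change that empty result still carries the reducer list, so reducers chained on an unmapped aggregation see a well-formed (if empty) output. A small sketch of the idea, with stand-in types:

[source,java]
----
import java.util.List;

// Sketch of the unmapped-fallback pattern: the factory hands back a collector that
// never collects and only produces an empty result, reducers attached. Names are
// illustrative, not the real Elasticsearch classes.
class UnmappedFallbackSketch {
    interface Reducer {}
    interface Aggregation {}
    static class EmptyNested implements Aggregation {
        final List<Reducer> reducers;
        EmptyNested(List<Reducer> reducers) { this.reducers = reducers; }
    }
    static class UnmappedAggregator {
        private final List<Reducer> reducers;
        UnmappedAggregator(List<Reducer> reducers) { this.reducers = reducers; }
        Aggregation buildEmptyAggregation() {
            return new EmptyNested(reducers); // mirrors buildEmptyAggregation() above
        }
    }
}
----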
@@ -40,9 +40,11 @@ import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.NonCollectingAggregator;
 import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -52,8 +54,10 @@ public class ReverseNestedAggregator extends SingleBucketAggregator {

     private final BitDocIdSetFilter parentFilter;

-    public ReverseNestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        super(name, factories, aggregationContext, parent, metaData);
+    public ReverseNestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper,
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData)
+            throws IOException {
+        super(name, factories, aggregationContext, parent, reducers, metaData);
         if (objectMapper == null) {
             parentFilter = context.searchContext().bitsetFilterCache().getBitDocIdSetFilter(Queries.newNonNestedFilter());
         } else {

@@ -105,12 +109,13 @@ public class ReverseNestedAggregator extends SingleBucketAggregator {

     @Override
     public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
-        return new InternalReverseNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), metaData());
+        return new InternalReverseNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), reducers(),
+                metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new InternalReverseNested(name, 0, buildEmptySubAggregations(), metaData());
+        return new InternalReverseNested(name, 0, buildEmptySubAggregations(), reducers(), metaData());
     }

     Filter getParentFilter() {

@@ -127,7 +132,8 @@ public class ReverseNestedAggregator extends SingleBucketAggregator {
     }

     @Override
-    public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+    public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
+            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
         // Early validation
         NestedAggregator closestNestedAggregator = findClosestNestedAggregator(parent);
         if (closestNestedAggregator == null) {

@@ -139,11 +145,11 @@ public class ReverseNestedAggregator extends SingleBucketAggregator {
         if (path != null) {
             MapperService.SmartNameObjectMapper mapper = context.searchContext().smartNameObjectMapper(path);
             if (mapper == null) {
-                return new Unmapped(name, context, parent, metaData);
+                return new Unmapped(name, context, parent, reducers, metaData);
             }
             objectMapper = mapper.mapper();
             if (objectMapper == null) {
-                return new Unmapped(name, context, parent, metaData);
+                return new Unmapped(name, context, parent, reducers, metaData);
             }
             if (!objectMapper.nested().isNested()) {
                 throw new AggregationExecutionException("[reverse_nested] nested path [" + path + "] is not nested");

@@ -151,18 +157,19 @@ public class ReverseNestedAggregator extends SingleBucketAggregator {
         } else {
             objectMapper = null;
         }
-        return new ReverseNestedAggregator(name, factories, objectMapper, context, parent, metaData);
+        return new ReverseNestedAggregator(name, factories, objectMapper, context, parent, reducers, metaData);
     }

     private final static class Unmapped extends NonCollectingAggregator {

-        public Unmapped(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            super(name, context, parent, metaData);
+        public Unmapped(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData)
+                throws IOException {
+            super(name, context, parent, reducers, metaData);
         }

         @Override
         public InternalAggregation buildEmptyAggregation() {
-            return new InternalReverseNested(name, 0, buildEmptySubAggregations(), metaData());
+            return new InternalReverseNested(name, 0, buildEmptySubAggregations(), reducers(), metaData());
         }
     }
 }

@@ -31,6 +31,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
 import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@@ -42,7 +43,8 @@ import java.util.Map;
 /**
  *
  */
-public class InternalRange<B extends InternalRange.Bucket> extends InternalMultiBucketAggregation implements Range {
+public class InternalRange<B extends InternalRange.Bucket, R extends InternalRange<B, R>> extends InternalMultiBucketAggregation<R, B>
+        implements Range {

     static final Factory FACTORY = new Factory();

@@ -123,6 +125,14 @@ public class InternalRange<B extends InternalRange.Bucket> extends InternalMulti
             return to;
         }

+        public boolean getKeyed() {
+            return keyed;
+        }
+
+        public ValueFormatter getFormatter() {
+            return formatter;
+        }
+
         @Override
         public String getFromAsString() {
             if (Double.isInfinite(from)) {

@@ -215,31 +225,44 @@ public class InternalRange<B extends InternalRange.Bucket> extends InternalMulti
         }
     }

-    public static class Factory<B extends Bucket, R extends InternalRange<B>> {
+    public static class Factory<B extends Bucket, R extends InternalRange<B, R>> {

         public String type() {
             return TYPE.name();
         }

-        public R create(String name, List<B> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return (R) new InternalRange<>(name, ranges, formatter, keyed, metaData);
+        public R create(String name, List<B> ranges, @Nullable ValueFormatter formatter, boolean keyed, List<Reducer> reducers,
+                Map<String, Object> metaData) {
+            return (R) new InternalRange<>(name, ranges, formatter, keyed, reducers, metaData);
         }

-        public B createBucket(String key, double from, double to, long docCount, InternalAggregations aggregations, boolean keyed, @Nullable ValueFormatter formatter) {
+        public B createBucket(String key, double from, double to, long docCount, InternalAggregations aggregations, boolean keyed,
+                @Nullable ValueFormatter formatter) {
             return (B) new Bucket(key, from, to, docCount, aggregations, keyed, formatter);
         }
+
+        public R create(List<B> ranges, R prototype) {
+            return (R) new InternalRange<>(prototype.name, ranges, prototype.formatter, prototype.keyed, prototype.reducers(),
+                    prototype.metaData);
+        }
+
+        public B createBucket(InternalAggregations aggregations, B prototype) {
+            return (B) new Bucket(prototype.getKey(), prototype.from, prototype.to, prototype.getDocCount(), aggregations, prototype.keyed,
+                    prototype.formatter);
+        }
     }

     private List<B> ranges;
     private Map<String, B> rangeMap;
-    private @Nullable ValueFormatter formatter;
-    private boolean keyed;
+    @Nullable
+    protected ValueFormatter formatter;
+    protected boolean keyed;

     public InternalRange() {} // for serialization

-    public InternalRange(String name, List<B> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-        super(name, metaData);
+    public InternalRange(String name, List<B> ranges, @Nullable ValueFormatter formatter, boolean keyed, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, reducers, metaData);
         this.ranges = ranges;
         this.formatter = formatter;
         this.keyed = keyed;

@@ -255,19 +278,29 @@ public class InternalRange<B extends InternalRange.Bucket> extends InternalMulti
         return ranges;
     }

-    protected Factory<B, ?> getFactory() {
+    public Factory<B, R> getFactory() {
         return FACTORY;
     }

     @Override
-    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public R create(List<B> buckets) {
+        return getFactory().create(buckets, (R) this);
+    }
+
+    @Override
+    public B createBucket(InternalAggregations aggregations, B prototype) {
+        return getFactory().createBucket(aggregations, prototype);
+    }
+
+    @Override
+    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
         @SuppressWarnings("unchecked")
         List<Bucket>[] rangeList = new List[ranges.size()];
         for (int i = 0; i < rangeList.length; ++i) {
             rangeList[i] = new ArrayList<Bucket>();
         }
         for (InternalAggregation aggregation : aggregations) {
-            InternalRange<?> ranges = (InternalRange<?>) aggregation;
+            InternalRange<B, R> ranges = (InternalRange<B, R>) aggregation;
             int i = 0;
             for (Bucket range : ranges.ranges) {
                 rangeList[i++].add(range);

@@ -278,7 +311,7 @@ public class InternalRange<B extends InternalRange.Bucket> extends InternalMulti
         for (int i = 0; i < this.ranges.size(); ++i) {
             ranges.add((B) rangeList[i].get(0).reduce(rangeList[i], reduceContext));
         }
-        return getFactory().create(name, ranges, formatter, keyed, getMetaData());
+        return getFactory().create(name, ranges, formatter, keyed, reducers(), getMetaData());
     }

     @Override

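`InternalRange` picks up a second type parameter, `R extends InternalRange<B, R>` (the self-referential, "curiously recurring" generic pattern), so that `create(List<B> ranges, R prototype)` and the overrides in the date-range, geo-distance and IPv4 subclasses below can return their concrete type without casts at the call site. A minimal standalone example of the pattern, with illustrative names:

[source,java]
----
import java.util.List;

// Self-type generics: the factory method rebuilds the *concrete* subtype from a
// prototype, so a DateRange-style subclass returns DateRange, not a raw Range.
abstract class Range<B, R extends Range<B, R>> {
    final List<B> buckets;
    Range(List<B> buckets) { this.buckets = buckets; }
    abstract R create(List<B> buckets, R prototype);
}

class DateRange extends Range<Long, DateRange> {
    DateRange(List<Long> buckets) { super(buckets); }
    @Override
    DateRange create(List<Long> buckets, DateRange prototype) {
        return new DateRange(buckets); // concrete type preserved, no casts needed
    }
}
----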
@@ -33,6 +33,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.NonCollectingAggregator;
 import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;

@@ -104,10 +105,10 @@ public class RangeAggregator extends BucketsAggregator {
             List<Range> ranges,
             boolean keyed,
             AggregationContext aggregationContext,
-            Aggregator parent,
+            Aggregator parent, List<Reducer> reducers,
             Map<String, Object> metaData) throws IOException {

-        super(name, factories, aggregationContext, parent, metaData);
+        super(name, factories, aggregationContext, parent, reducers, metaData);
         assert valuesSource != null;
         this.valuesSource = valuesSource;
         this.formatter = format != null ? format.formatter() : null;

@@ -149,54 +150,54 @@ public class RangeAggregator extends BucketsAggregator {
             }
         }

-    private int collect(int doc, double value, long owningBucketOrdinal, int lowBound) throws IOException {
-        int lo = lowBound, hi = ranges.length - 1; // all candidates are between these indexes
-        int mid = (lo + hi) >>> 1;
-        while (lo <= hi) {
-            if (value < ranges[mid].from) {
-                hi = mid - 1;
-            } else if (value >= maxTo[mid]) {
-                lo = mid + 1;
-            } else {
-                break;
-            }
-            mid = (lo + hi) >>> 1;
-        }
-        if (lo > hi) return lo; // no potential candidate
-
-        // binary search the lower bound
-        int startLo = lo, startHi = mid;
-        while (startLo <= startHi) {
-            final int startMid = (startLo + startHi) >>> 1;
-            if (value >= maxTo[startMid]) {
-                startLo = startMid + 1;
-            } else {
-                startHi = startMid - 1;
-            }
-        }
-
-        // binary search the upper bound
-        int endLo = mid, endHi = hi;
-        while (endLo <= endHi) {
-            final int endMid = (endLo + endHi) >>> 1;
-            if (value < ranges[endMid].from) {
-                endHi = endMid - 1;
-            } else {
-                endLo = endMid + 1;
-            }
-        }
-
-        assert startLo == lowBound || value >= maxTo[startLo - 1];
-        assert endHi == ranges.length - 1 || value < ranges[endHi + 1].from;
-
-        for (int i = startLo; i <= endHi; ++i) {
-            if (ranges[i].matches(value)) {
-                collectBucket(sub, doc, subBucketOrdinal(owningBucketOrdinal, i));
-            }
-        }
-
-        return endHi + 1;
-    }
+            private int collect(int doc, double value, long owningBucketOrdinal, int lowBound) throws IOException {
+                int lo = lowBound, hi = ranges.length - 1; // all candidates are between these indexes
+                int mid = (lo + hi) >>> 1;
+                while (lo <= hi) {
+                    if (value < ranges[mid].from) {
+                        hi = mid - 1;
+                    } else if (value >= maxTo[mid]) {
+                        lo = mid + 1;
+                    } else {
+                        break;
+                    }
+                    mid = (lo + hi) >>> 1;
+                }
+                if (lo > hi) return lo; // no potential candidate
+
+                // binary search the lower bound
+                int startLo = lo, startHi = mid;
+                while (startLo <= startHi) {
+                    final int startMid = (startLo + startHi) >>> 1;
+                    if (value >= maxTo[startMid]) {
+                        startLo = startMid + 1;
+                    } else {
+                        startHi = startMid - 1;
+                    }
+                }
+
+                // binary search the upper bound
+                int endLo = mid, endHi = hi;
+                while (endLo <= endHi) {
+                    final int endMid = (endLo + endHi) >>> 1;
+                    if (value < ranges[endMid].from) {
+                        endHi = endMid - 1;
+                    } else {
+                        endLo = endMid + 1;
+                    }
+                }
+
+                assert startLo == lowBound || value >= maxTo[startLo - 1];
+                assert endHi == ranges.length - 1 || value < ranges[endHi + 1].from;
+
+                for (int i = startLo; i <= endHi; ++i) {
+                    if (ranges[i].matches(value)) {
+                        collectBucket(sub, doc, subBucketOrdinal(owningBucketOrdinal, i));
+                    }
+                }
+
+                return endHi + 1;
+            }
         };
     }

@@ -215,7 +216,7 @@ public class RangeAggregator extends BucketsAggregator {
             buckets.add(bucket);
         }
         // value source can be null in the case of unmapped fields
-        return rangeFactory.create(name, buckets, formatter, keyed, metaData());
+        return rangeFactory.create(name, buckets, formatter, keyed, reducers(), metaData());
     }

     @Override

@@ -229,7 +230,7 @@ public class RangeAggregator extends BucketsAggregator {
             buckets.add(bucket);
         }
         // value source can be null in the case of unmapped fields
-        return rangeFactory.create(name, buckets, formatter, keyed, metaData());
+        return rangeFactory.create(name, buckets, formatter, keyed, reducers(), metaData());
     }

     private static final void sortRanges(final Range[] ranges) {

@@ -266,10 +267,10 @@ public class RangeAggregator extends BucketsAggregator {
                 ValueFormat format,
                 AggregationContext context,
                 Aggregator parent,
-                InternalRange.Factory factory,
+                InternalRange.Factory factory, List<Reducer> reducers,
                 Map<String, Object> metaData) throws IOException {

-            super(name, context, parent, metaData);
+            super(name, context, parent, reducers, metaData);
             this.ranges = ranges;
             ValueParser parser = format != null ? format.parser() : ValueParser.RAW;
             for (Range range : this.ranges) {

@@ -287,7 +288,8 @@ public class RangeAggregator extends BucketsAggregator {
             for (RangeAggregator.Range range : ranges) {
                 buckets.add(factory.createBucket(range.key, range.from, range.to, 0, subAggs, keyed, formatter));
             }
-            return factory.create(name, buckets, formatter, keyed, metaData());
+            return factory.create(name, buckets, formatter, keyed, reducers(), metaData());
         }
     }

@@ -305,13 +306,15 @@ public class RangeAggregator extends BucketsAggregator {
         }

         @Override
-        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            return new Unmapped(name, ranges, keyed, config.format(), aggregationContext, parent, rangeFactory, metaData);
+        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+                Map<String, Object> metaData) throws IOException {
+            return new Unmapped(name, ranges, keyed, config.format(), aggregationContext, parent, rangeFactory, reducers, metaData);
        }

         @Override
-        protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
-            return new RangeAggregator(name, factories, valuesSource, config.format(), rangeFactory, ranges, keyed, aggregationContext, parent, metaData);
+        protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent,
+                boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+            return new RangeAggregator(name, factories, valuesSource, config.format(), rangeFactory, ranges, keyed, aggregationContext, parent, reducers, metaData);
         }
     }

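The two copies of `collect` above are identical once the viewer stripped indentation, which suggests the method merely moved into the new leaf-collector scope. Its three binary searches lean on the auxiliary `maxTo` array: a plausible reading of the invariant is that, with ranges sorted by `from`, `maxTo[i]` holds the running maximum of `to` over `ranges[0..i]`, so `value >= maxTo[mid]` proves no range at or before `mid` can contain the value. A sketch of how such an array could be built (the construction itself is outside this excerpt, so treat it as an assumption):

[source,java]
----
// Assumed invariant: maxTo[i] = max(to[0..i]) for ranges sorted by `from`.
class MaxToSketch {
    static double[] buildMaxTo(double[] to) {
        double[] maxTo = new double[to.length];
        double running = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < to.length; i++) {
            running = Math.max(running, to[i]); // running maximum of upper bounds
            maxTo[i] = running;
        }
        return maxTo;
    }
    public static void main(String[] args) {
        // ranges sorted by from: [0,10), [5,7), [6,20)  ->  maxTo = {10, 10, 20}
        System.out.println(java.util.Arrays.toString(buildMaxTo(new double[] { 10, 7, 20 })));
    }
}
----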
@@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
 import org.elasticsearch.search.aggregations.bucket.range.InternalRange;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.joda.time.DateTime;
 import org.joda.time.DateTimeZone;

@@ -37,7 +38,7 @@ import java.util.Map;
 /**
  *
  */
-public class InternalDateRange extends InternalRange<InternalDateRange.Bucket> {
+public class InternalDateRange extends InternalRange<InternalDateRange.Bucket, InternalDateRange> {

     public final static Type TYPE = new Type("date_range", "drange");

@@ -112,7 +113,7 @@ public class InternalDateRange extends InternalRange<InternalDateRange.Bucket> {
         }
     }

-    private static class Factory extends InternalRange.Factory<InternalDateRange.Bucket, InternalDateRange> {
+    public static class Factory extends InternalRange.Factory<InternalDateRange.Bucket, InternalDateRange> {

         @Override
         public String type() {

@@ -120,20 +121,34 @@ public class InternalDateRange extends InternalRange<InternalDateRange.Bucket> {
         }

         @Override
-        public InternalDateRange create(String name, List<InternalDateRange.Bucket> ranges, ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return new InternalDateRange(name, ranges, formatter, keyed, metaData);
+        public InternalDateRange create(String name, List<InternalDateRange.Bucket> ranges, ValueFormatter formatter, boolean keyed,
+                List<Reducer> reducers, Map<String, Object> metaData) {
+            return new InternalDateRange(name, ranges, formatter, keyed, reducers, metaData);
         }

+        @Override
+        public InternalDateRange create(List<Bucket> ranges, InternalDateRange prototype) {
+            return new InternalDateRange(prototype.name, ranges, prototype.formatter, prototype.keyed, prototype.reducers(),
+                    prototype.metaData);
+        }
+
         @Override
         public Bucket createBucket(String key, double from, double to, long docCount, InternalAggregations aggregations, boolean keyed, ValueFormatter formatter) {
             return new Bucket(key, from, to, docCount, aggregations, keyed, formatter);
         }

+        @Override
+        public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
+            return new Bucket(prototype.getKey(), ((Number) prototype.getFrom()).doubleValue(), ((Number) prototype.getTo()).doubleValue(),
+                    prototype.getDocCount(), aggregations, prototype.getKeyed(), prototype.getFormatter());
+        }
     }

     InternalDateRange() {} // for serialization

-    InternalDateRange(String name, List<InternalDateRange.Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-        super(name, ranges, formatter, keyed, metaData);
+    InternalDateRange(String name, List<InternalDateRange.Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed,
+            List<Reducer> reducers, Map<String, Object> metaData) {
+        super(name, ranges, formatter, keyed, reducers, metaData);
     }

     @Override

@@ -142,7 +157,7 @@ public class InternalDateRange extends InternalRange<InternalDateRange.Bucket> {
     }

     @Override
-    protected InternalRange.Factory<Bucket, ?> getFactory() {
+    public InternalRange.Factory<Bucket, InternalDateRange> getFactory() {
         return FACTORY;
     }
 }

@@ -35,6 +35,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactory;
 import org.elasticsearch.search.aggregations.bucket.range.InternalRange;
 import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator;
 import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Unmapped;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.GeoPointParser;
 import org.elasticsearch.search.aggregations.support.ValuesSource;

@@ -185,14 +186,18 @@ public class GeoDistanceParser implements Aggregator.Parser {
         }

         @Override
-        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            return new Unmapped(name, ranges, keyed, null, aggregationContext, parent, rangeFactory, metaData);
+        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+                Map<String, Object> metaData) throws IOException {
+            return new Unmapped(name, ranges, keyed, null, aggregationContext, parent, rangeFactory, reducers, metaData);
         }

         @Override
-        protected Aggregator doCreateInternal(final ValuesSource.GeoPoint valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+        protected Aggregator doCreateInternal(final ValuesSource.GeoPoint valuesSource, AggregationContext aggregationContext,
+                Aggregator parent, boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData)
+                throws IOException {
             DistanceSource distanceSource = new DistanceSource(valuesSource, distanceType, origin, unit);
-            return new RangeAggregator(name, factories, distanceSource, null, rangeFactory, ranges, keyed, aggregationContext, parent, metaData);
+            return new RangeAggregator(name, factories, distanceSource, null, rangeFactory, ranges, keyed, aggregationContext, parent,
+                    reducers, metaData);
         }

         private static class DistanceSource extends ValuesSource.Numeric {

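`geo_distance` keeps reusing the numeric `RangeAggregator` by wrapping the geo-point source in a `DistanceSource` that presents each point as its distance from the origin. The adapter idea, sketched with an explicit haversine computation (an illustrative choice; the real code delegates to Elasticsearch's `GeoDistance`/`DistanceUnit` types):

[source,java]
----
// Adapter idea: present geo points as plain doubles (distance from a fixed origin)
// so the generic numeric range machinery can bucket them unchanged.
class DistanceAdapterSketch {
    static final double EARTH_RADIUS_KM = 6371.0;
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }
    public static void main(String[] args) {
        // the distance from the origin is what the numeric range buckets would see
        System.out.printf("%.1f km%n", haversineKm(0, 0, 0, 1)); // ~111.2 km
    }
}
----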
@@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
 import org.elasticsearch.search.aggregations.bucket.range.InternalRange;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

 import java.io.IOException;

@@ -35,7 +36,7 @@ import java.util.Map;
 /**
  *
  */
-public class InternalGeoDistance extends InternalRange<InternalGeoDistance.Bucket> {
+public class InternalGeoDistance extends InternalRange<InternalGeoDistance.Bucket, InternalGeoDistance> {

     public static final Type TYPE = new Type("geo_distance", "gdist");

@@ -100,7 +101,7 @@ public class InternalGeoDistance extends InternalRange<InternalGeoDistance.Bucke
         }
     }

-    private static class Factory extends InternalRange.Factory<InternalGeoDistance.Bucket, InternalGeoDistance> {
+    public static class Factory extends InternalRange.Factory<InternalGeoDistance.Bucket, InternalGeoDistance> {

         @Override
         public String type() {

@@ -108,20 +109,34 @@ public class InternalGeoDistance extends InternalRange<InternalGeoDistance.Bucke
         }

         @Override
-        public InternalGeoDistance create(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return new InternalGeoDistance(name, ranges, formatter, keyed, metaData);
+        public InternalGeoDistance create(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed,
+                List<Reducer> reducers, Map<String, Object> metaData) {
+            return new InternalGeoDistance(name, ranges, formatter, keyed, reducers, metaData);
         }

+        @Override
+        public InternalGeoDistance create(List<Bucket> ranges, InternalGeoDistance prototype) {
+            return new InternalGeoDistance(prototype.name, ranges, prototype.formatter, prototype.keyed, prototype.reducers(),
+                    prototype.metaData);
+        }
+
         @Override
         public Bucket createBucket(String key, double from, double to, long docCount, InternalAggregations aggregations, boolean keyed, @Nullable ValueFormatter formatter) {
             return new Bucket(key, from, to, docCount, aggregations, keyed, formatter);
         }

+        @Override
+        public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
+            return new Bucket(prototype.getKey(), ((Number) prototype.getFrom()).doubleValue(), ((Number) prototype.getTo()).doubleValue(),
+                    prototype.getDocCount(), aggregations, prototype.getKeyed(), prototype.getFormatter());
+        }
     }

     InternalGeoDistance() {} // for serialization

-    public InternalGeoDistance(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-        super(name, ranges, formatter, keyed, metaData);
+    public InternalGeoDistance(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, ranges, formatter, keyed, reducers, metaData);
     }

     @Override

@@ -130,7 +145,7 @@ public class InternalGeoDistance extends InternalRange<InternalGeoDistance.Bucke
     }

     @Override
-    protected InternalRange.Factory<Bucket, ?> getFactory() {
+    public InternalRange.Factory<Bucket, InternalGeoDistance> getFactory() {
         return FACTORY;
     }
 }

@@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
 import org.elasticsearch.search.aggregations.bucket.range.InternalRange;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

 import java.io.IOException;

@@ -35,7 +36,7 @@ import java.util.Map;
 /**
  *
  */
-public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket> {
+public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket, InternalIPv4Range> {

     public static final long MAX_IP = 4294967296l;

@@ -109,7 +110,7 @@ public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket> {
         }
     }

-    private static class Factory extends InternalRange.Factory<InternalIPv4Range.Bucket, InternalIPv4Range> {
+    public static class Factory extends InternalRange.Factory<InternalIPv4Range.Bucket, InternalIPv4Range> {

         @Override
         public String type() {

@@ -117,20 +118,33 @@ public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket> {
         }

         @Override
-        public InternalIPv4Range create(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed, Map<String, Object> metaData) {
-            return new InternalIPv4Range(name, ranges, keyed, metaData);
+        public InternalIPv4Range create(String name, List<Bucket> ranges, @Nullable ValueFormatter formatter, boolean keyed,
+                List<Reducer> reducers, Map<String, Object> metaData) {
+            return new InternalIPv4Range(name, ranges, keyed, reducers, metaData);
         }

+        @Override
+        public InternalIPv4Range create(List<Bucket> ranges, InternalIPv4Range prototype) {
+            return new InternalIPv4Range(prototype.name, ranges, prototype.keyed, prototype.reducers(), prototype.metaData);
+        }
+
         @Override
         public Bucket createBucket(String key, double from, double to, long docCount, InternalAggregations aggregations, boolean keyed, @Nullable ValueFormatter formatter) {
             return new Bucket(key, from, to, docCount, aggregations, keyed);
         }

+        @Override
+        public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
+            return new Bucket(prototype.getKey(), ((Number) prototype.getFrom()).doubleValue(), ((Number) prototype.getTo()).doubleValue(),
+                    prototype.getDocCount(), aggregations, prototype.getKeyed());
+        }
     }

     public InternalIPv4Range() {} // for serialization

-    public InternalIPv4Range(String name, List<InternalIPv4Range.Bucket> ranges, boolean keyed, Map<String, Object> metaData) {
-        super(name, ranges, ValueFormatter.IPv4, keyed, metaData);
+    public InternalIPv4Range(String name, List<InternalIPv4Range.Bucket> ranges, boolean keyed, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, ranges, ValueFormatter.IPv4, keyed, reducers, metaData);
     }

     @Override

@@ -139,7 +153,7 @@ public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket> {
     }

     @Override
-    protected InternalRange.Factory<Bucket, ?> getFactory() {
+    public InternalRange.Factory<Bucket, InternalIPv4Range> getFactory() {
         return FACTORY;
     }
 }

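The context line `MAX_IP = 4294967296l` is 2^32, one past the largest unsigned 32-bit value, so every IPv4 address maps onto a long in `[0, MAX_IP)` and the ranges stay plain numeric ranges; only the rendering differs, via `ValueFormatter.IPv4`. A toy version of the long-to-dotted-quad rendering that formatter performs:

[source,java]
----
// Illustrative long -> dotted-quad conversion; the real ValueFormatter.IPv4 lives
// in Elasticsearch and is not reproduced here.
class Ipv4Sketch {
    static final long MAX_IP = 4294967296L; // 2^32
    static String format(long ip) {
        return ((ip >> 24) & 0xFF) + "." + ((ip >> 16) & 0xFF) + "."
                + ((ip >> 8) & 0xFF) + "." + (ip & 0xFF);
    }
    public static void main(String[] args) {
        System.out.println(format(3232235777L)); // 192.168.1.1
    }
}
----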
@@ -31,10 +31,12 @@ import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
 import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;
 import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**

@@ -47,9 +49,10 @@ public class DiversifiedBytesHashSamplerAggregator extends SamplerAggregator {
     private int maxDocsPerValue;

     public DiversifiedBytesHashSamplerAggregator(String name, int shardSize, AggregatorFactories factories,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData, ValuesSource valuesSource,
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData,
+            ValuesSource valuesSource,
             int maxDocsPerValue) throws IOException {
-        super(name, shardSize, factories, aggregationContext, parent, metaData);
+        super(name, shardSize, factories, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.maxDocsPerValue = maxDocsPerValue;
     }

@@ -33,10 +33,12 @@ import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
 import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;
 import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 public class DiversifiedMapSamplerAggregator extends SamplerAggregator {
@@ -46,9 +48,9 @@ public class DiversifiedMapSamplerAggregator extends SamplerAggregator {
     private BytesRefHash bucketOrds;

     public DiversifiedMapSamplerAggregator(String name, int shardSize, AggregatorFactories factories,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData, ValuesSource valuesSource,
-            int maxDocsPerValue) throws IOException {
-        super(name, shardSize, factories, aggregationContext, parent, metaData);
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData,
+            ValuesSource valuesSource, int maxDocsPerValue) throws IOException {
+        super(name, shardSize, factories, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.maxDocsPerValue = maxDocsPerValue;
         bucketOrds = new BytesRefHash(shardSize, aggregationContext.bigArrays());

@@ -30,10 +30,12 @@ import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
 import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;
 import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 public class DiversifiedNumericSamplerAggregator extends SamplerAggregator {
@@ -42,9 +44,9 @@ public class DiversifiedNumericSamplerAggregator extends SamplerAggregator {
     private int maxDocsPerValue;

     public DiversifiedNumericSamplerAggregator(String name, int shardSize, AggregatorFactories factories,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData, ValuesSource.Numeric valuesSource,
-            int maxDocsPerValue) throws IOException {
-        super(name, shardSize, factories, aggregationContext, parent, metaData);
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData,
+            ValuesSource.Numeric valuesSource, int maxDocsPerValue) throws IOException {
+        super(name, shardSize, factories, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.maxDocsPerValue = maxDocsPerValue;
     }

@@ -31,10 +31,12 @@ import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
 import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;
 import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 public class DiversifiedOrdinalsSamplerAggregator extends SamplerAggregator {
@@ -43,9 +45,9 @@ public class DiversifiedOrdinalsSamplerAggregator extends SamplerAggregator {
     private int maxDocsPerValue;

     public DiversifiedOrdinalsSamplerAggregator(String name, int shardSize, AggregatorFactories factories,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData,
+            AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData,
             ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, int maxDocsPerValue) throws IOException {
-        super(name, shardSize, factories, aggregationContext, parent, metaData);
+        super(name, shardSize, factories, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.maxDocsPerValue = maxDocsPerValue;
     }

@@ -22,8 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**
@@ -49,8 +51,8 @@ public class InternalSampler extends InternalSingleBucketAggregation implements
     InternalSampler() {
     } // for serialization

-    InternalSampler(String name, long docCount, InternalAggregations subAggregations, Map<String, Object> metaData) {
-        super(name, docCount, subAggregations, metaData);
+    InternalSampler(String name, long docCount, InternalAggregations subAggregations, List<Reducer> reducers, Map<String, Object> metaData) {
+        super(name, docCount, subAggregations, reducers, metaData);
     }

     @Override
@@ -59,7 +61,8 @@ public class InternalSampler extends InternalSingleBucketAggregation implements
     }

     @Override
-    protected InternalSingleBucketAggregation newAggregation(String name, long docCount, InternalAggregations subAggregations) {
-        return new InternalSampler(name, docCount, subAggregations, metaData);
+    protected InternalSingleBucketAggregation newAggregation(String name, long docCount,
+            InternalAggregations subAggregations) {
+        return new InternalSampler(name, docCount, subAggregations, reducers(), metaData);
     }
 }

@@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.NonCollectingAggregator;
 import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;
 import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;
 import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;
@@ -37,6 +38,7 @@ import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFacto
 import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**
@@ -58,9 +60,9 @@ public class SamplerAggregator extends SingleBucketAggregator {

         @Override
         Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource,
-                AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
+                AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

-            return new DiversifiedMapSamplerAggregator(name, shardSize, factories, context, parent, metaData, valuesSource,
+            return new DiversifiedMapSamplerAggregator(name, shardSize, factories, context, parent, reducers, metaData, valuesSource,
                     maxDocsPerValue);
         }

@@ -74,9 +76,10 @@ public class SamplerAggregator extends SingleBucketAggregator {

         @Override
         Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource,
-                AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
+                AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

-            return new DiversifiedBytesHashSamplerAggregator(name, shardSize, factories, context, parent, metaData, valuesSource,
+            return new DiversifiedBytesHashSamplerAggregator(name, shardSize, factories, context, parent, reducers, metaData,
+                    valuesSource,
                     maxDocsPerValue);
         }

@@ -90,8 +93,8 @@ public class SamplerAggregator extends SingleBucketAggregator {

         @Override
         Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource,
-                AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
-            return new DiversifiedOrdinalsSamplerAggregator(name, shardSize, factories, context, parent, metaData,
+                AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+            return new DiversifiedOrdinalsSamplerAggregator(name, shardSize, factories, context, parent, reducers, metaData,
                     (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, maxDocsPerValue);
         }

@@ -118,7 +121,8 @@ public class SamplerAggregator extends SingleBucketAggregator {
         }

         abstract Aggregator create(String name, AggregatorFactories factories, int shardSize, int maxDocsPerValue, ValuesSource valuesSource,
-                AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException;
+                AggregationContext context, Aggregator parent, List<Reducer> reducers,
+                Map<String, Object> metaData) throws IOException;

         abstract boolean needsGlobalOrdinals();

@@ -132,9 +136,9 @@ public class SamplerAggregator extends SingleBucketAggregator {
     protected final int shardSize;
     protected BestDocsDeferringCollector bdd;

-    public SamplerAggregator(String name, int shardSize, AggregatorFactories factories,
-            AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        super(name, factories, aggregationContext, parent, metaData);
+    public SamplerAggregator(String name, int shardSize, AggregatorFactories factories, AggregationContext aggregationContext,
+            Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+        super(name, factories, aggregationContext, parent, reducers, metaData);
         this.shardSize = shardSize;
     }

@@ -159,12 +163,13 @@ public class SamplerAggregator extends SingleBucketAggregator {
     @Override
     public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {
         runDeferredCollections(owningBucketOrdinal);
-        return new InternalSampler(name, bdd == null ? 0 : bdd.getDocCount(), bucketAggregations(owningBucketOrdinal), metaData());
+        return new InternalSampler(name, bdd == null ? 0 : bdd.getDocCount(), bucketAggregations(owningBucketOrdinal), reducers(),
+                metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new InternalSampler(name, 0, buildEmptySubAggregations(), metaData());
+        return new InternalSampler(name, 0, buildEmptySubAggregations(), reducers(), metaData());
     }

     public static class Factory extends AggregatorFactory {

@@ -178,12 +183,12 @@ public class SamplerAggregator extends SingleBucketAggregator {

         @Override
         public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,
-                Map<String, Object> metaData) throws IOException {
+                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

             if (collectsFromSingleBucket == false) {
                 return asMultiBucketAggregator(this, context, parent);
             }
-            return new SamplerAggregator(name, shardSize, factories, context, parent, metaData);
+            return new SamplerAggregator(name, shardSize, factories, context, parent, reducers, metaData);
         }

     }
@@ -203,7 +208,7 @@ public class SamplerAggregator extends SingleBucketAggregator {

         @Override
         protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent,
-                boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+                boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

             if (collectsFromSingleBucket == false) {
                 return asMultiBucketAggregator(this, context, parent);
@@ -211,7 +216,7 @@ public class SamplerAggregator extends SingleBucketAggregator {

             if (valuesSource instanceof ValuesSource.Numeric) {
-                return new DiversifiedNumericSamplerAggregator(name, shardSize, factories, context, parent, metaData,
+                return new DiversifiedNumericSamplerAggregator(name, shardSize, factories, context, parent, reducers, metaData,
                         (Numeric) valuesSource, maxDocsPerValue);
             }

@@ -229,7 +234,7 @@ public class SamplerAggregator extends SingleBucketAggregator {
                 if ((execution.needsGlobalOrdinals()) && (!(valuesSource instanceof ValuesSource.Bytes.WithOrdinals))) {
                     execution = ExecutionMode.MAP;
                 }
-                return execution.create(name, factories, shardSize, maxDocsPerValue, valuesSource, context, parent, metaData);
+                return execution.create(name, factories, shardSize, maxDocsPerValue, valuesSource, context, parent, reducers, metaData);
             }

             throw new AggregationExecutionException("Sampler aggregation cannot be applied to field [" + config.fieldContext().field() +
@@ -237,11 +242,11 @@ public class SamplerAggregator extends SingleBucketAggregator {
         }

         @Override
-        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData)
-                throws IOException {
-            final UnmappedSampler aggregation = new UnmappedSampler(name, metaData);
+        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+                Map<String, Object> metaData) throws IOException {
+            final UnmappedSampler aggregation = new UnmappedSampler(name, reducers, metaData);

-            return new NonCollectingAggregator(name, aggregationContext, parent, factories, metaData) {
+            return new NonCollectingAggregator(name, aggregationContext, parent, factories, reducers, metaData) {
                 @Override
                 public InternalAggregation buildEmptyAggregation() {
                     return aggregation;

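Every aggregator and factory in this file follows the same plumbing change, so it may help to see the shape in isolation. A hedged sketch of how the reducer list travels from the factory into the aggregator and out into its results; the types here are simplified placeholders, not the real Elasticsearch classes:

[source,java]
----
import java.util.List;
import java.util.Map;

// Sketch of the plumbing repeated across SamplerAggregator and friends: the
// factory's createInternal(...) hands a List<Reducer> to the constructor,
// the aggregator stores it, and reducers() feeds it into every result it
// builds. Object stands in for Reducer in this self-contained sketch.
abstract class SketchAggregator {

    private final List<Object> reducers; // List<Reducer> in the real code
    protected final Map<String, Object> metaData;

    protected SketchAggregator(List<Object> reducers, Map<String, Object> metaData) {
        this.reducers = reducers;
        this.metaData = metaData;
    }

    // Every buildAggregation()/buildEmptyAggregation() passes reducers() into
    // the Internal* result constructor, as the diff above does for InternalSampler.
    protected List<Object> reducers() {
        return reducers;
    }
}
----
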
@ -23,6 +23,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
|
|||
import org.elasticsearch.search.aggregations.AggregationStreams;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregation;
|
||||
import org.elasticsearch.search.aggregations.InternalAggregations;
|
||||
import org.elasticsearch.search.aggregations.reducers.Reducer;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.List;
|
||||
|
@ -52,8 +53,8 @@ public class UnmappedSampler extends InternalSampler {
|
|||
UnmappedSampler() {
|
||||
}
|
||||
|
||||
public UnmappedSampler(String name, Map<String, Object> metaData) {
|
||||
super(name, 0, InternalAggregations.EMPTY, metaData);
|
||||
public UnmappedSampler(String name, List<Reducer> reducers, Map<String, Object> metaData) {
|
||||
super(name, 0, InternalAggregations.EMPTY, reducers, metaData);
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -62,7 +63,7 @@ public class UnmappedSampler extends InternalSampler {
|
|||
}
|
||||
|
||||
@Override
|
||||
public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
|
||||
public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
|
||||
for (InternalAggregation agg : aggregations) {
|
||||
if (!(agg instanceof UnmappedSampler)) {
|
||||
return agg.reduce(aggregations, reduceContext);
|
||||
|
|
|
@@ -25,10 +25,11 @@ import org.elasticsearch.common.lease.Releasables;
 import org.elasticsearch.common.util.LongHash;
 import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
+import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator;
 import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.internal.ContextIndexSearcher;
@@ -36,6 +37,7 @@ import org.elasticsearch.search.internal.ContextIndexSearcher;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.List;
 import java.util.Map;

 /**
@@ -46,12 +48,13 @@ public class GlobalOrdinalsSignificantTermsAggregator extends GlobalOrdinalsStri
     protected long numCollectedDocs;
     protected final SignificantTermsAggregatorFactory termsAggFactory;

-    public GlobalOrdinalsSignificantTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource,
-            BucketCountThresholds bucketCountThresholds,
-            IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent,
-            SignificantTermsAggregatorFactory termsAggFactory, Map<String, Object> metaData) throws IOException {
+    public GlobalOrdinalsSignificantTermsAggregator(String name, AggregatorFactories factories,
+            ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, BucketCountThresholds bucketCountThresholds,
+            IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent,
+            SignificantTermsAggregatorFactory termsAggFactory, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

-        super(name, factories, valuesSource, null, bucketCountThresholds, includeExclude, aggregationContext, parent, SubAggCollectionMode.DEPTH_FIRST, false, metaData);
+        super(name, factories, valuesSource, null, bucketCountThresholds, includeExclude, aggregationContext, parent,
+                SubAggCollectionMode.DEPTH_FIRST, false, reducers, metaData);
         this.termsAggFactory = termsAggFactory;
     }

@@ -124,7 +127,9 @@ public class GlobalOrdinalsSignificantTermsAggregator extends GlobalOrdinalsStri
             list[i] = bucket;
         }

-        return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), metaData());
+        return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), reducers(),
+                metaData());
     }

     @Override
@@ -133,7 +138,9 @@ public class GlobalOrdinalsSignificantTermsAggregator extends GlobalOrdinalsStri
         ContextIndexSearcher searcher = context.searchContext().searcher();
         IndexReader topReader = searcher.getIndexReader();
         int supersetSize = topReader.numDocs();
-        return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Collections.<InternalSignificantTerms.Bucket>emptyList(), metaData());
+        return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
+                Collections.<InternalSignificantTerms.Bucket> emptyList(), reducers(), metaData());
     }

     @Override
@@ -145,8 +152,8 @@ public class GlobalOrdinalsSignificantTermsAggregator extends GlobalOrdinalsStri

         private final LongHash bucketOrds;

-        public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggFactory, Map<String, Object> metaData) throws IOException {
-            super(name, factories, valuesSource, bucketCountThresholds, includeExclude, aggregationContext, parent, termsAggFactory, metaData);
+        public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggFactory, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+            super(name, factories, valuesSource, bucketCountThresholds, includeExclude, aggregationContext, parent, termsAggFactory, reducers, metaData);
             bucketOrds = new LongHash(1, aggregationContext.bigArrays());
         }

@@ -27,6 +27,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.util.ArrayList;
 import java.util.Arrays;
@@ -38,12 +39,13 @@ import java.util.Map;
 /**
  *
  */
-public abstract class InternalSignificantTerms extends InternalMultiBucketAggregation implements SignificantTerms, ToXContent, Streamable {
+public abstract class InternalSignificantTerms<A extends InternalSignificantTerms, B extends InternalSignificantTerms.Bucket> extends
+        InternalMultiBucketAggregation<A, B> implements SignificantTerms, ToXContent, Streamable {

     protected SignificanceHeuristic significanceHeuristic;
     protected int requiredSize;
     protected long minDocCount;
-    protected List<Bucket> buckets;
+    protected List<? extends Bucket> buckets;
     protected Map<String, Bucket> bucketMap;
     protected long subsetSize;
     protected long supersetSize;
@@ -122,8 +124,10 @@ public abstract class InternalSignificantTerms extends InternalMultiBucketAggreg
         }
     }

-    protected InternalSignificantTerms(long subsetSize, long supersetSize, String name, int requiredSize, long minDocCount, SignificanceHeuristic significanceHeuristic, List<Bucket> buckets, Map<String, Object> metaData) {
-        super(name, metaData);
+    protected InternalSignificantTerms(long subsetSize, long supersetSize, String name, int requiredSize, long minDocCount,
+            SignificanceHeuristic significanceHeuristic, List<? extends Bucket> buckets, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, reducers, metaData);
         this.requiredSize = requiredSize;
         this.minDocCount = minDocCount;
         this.buckets = buckets;
@@ -156,20 +160,20 @@ public abstract class InternalSignificantTerms extends InternalMultiBucketAggreg
     }

     @Override
-    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {

         long globalSubsetSize = 0;
         long globalSupersetSize = 0;
         // Compute the overall result set size and the corpus size using the
         // top-level Aggregations from each shard
         for (InternalAggregation aggregation : aggregations) {
-            InternalSignificantTerms terms = (InternalSignificantTerms) aggregation;
+            InternalSignificantTerms<A, B> terms = (InternalSignificantTerms<A, B>) aggregation;
             globalSubsetSize += terms.subsetSize;
             globalSupersetSize += terms.supersetSize;
         }
         Map<String, List<InternalSignificantTerms.Bucket>> buckets = new HashMap<>();
         for (InternalAggregation aggregation : aggregations) {
-            InternalSignificantTerms terms = (InternalSignificantTerms) aggregation;
+            InternalSignificantTerms<A, B> terms = (InternalSignificantTerms<A, B>) aggregation;
             for (Bucket bucket : terms.buckets) {
                 List<Bucket> existingBuckets = buckets.get(bucket.getKey());
                 if (existingBuckets == null) {
@@ -197,9 +201,10 @@ public abstract class InternalSignificantTerms extends InternalMultiBucketAggreg
         for (int i = ordered.size() - 1; i >= 0; i--) {
             list[i] = (Bucket) ordered.pop();
         }
-        return newAggregation(globalSubsetSize, globalSupersetSize, Arrays.asList(list));
+        return create(globalSubsetSize, globalSupersetSize, Arrays.asList(list), this);
     }

-    abstract InternalSignificantTerms newAggregation(long subsetSize, long supersetSize, List<Bucket> buckets);
+    protected abstract A create(long subsetSize, long supersetSize, List<InternalSignificantTerms.Bucket> buckets,
+            InternalSignificantTerms prototype);

 }

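The recurring rename from `reduce` to `doReduce` suggests the base `InternalAggregation.reduce` has become a template method that first merges the shard-level results and then lets each attached reducer act on the merged output. The sketch below is an assumption inferred from these renames, not code taken from this diff:

[source,java]
----
import java.util.List;

// Hedged sketch of why reduce(...) becomes doReduce(...) throughout this PR:
// the base class can own a final reduce(...) that merges shard results via
// doReduce(...) and then lets each attached reducer rewrite the merged
// output. This base-class body is inferred, not copied from the diff.
abstract class SketchInternalAggregation {

    interface ReduceContext {}

    interface ReducerLike {
        SketchInternalAggregation reduce(SketchInternalAggregation merged, ReduceContext ctx);
    }

    private final List<ReducerLike> reducers;

    SketchInternalAggregation(List<ReducerLike> reducers) {
        this.reducers = reducers;
    }

    // Template method: shard merge first, then parent reducers enrich the tree.
    public final SketchInternalAggregation reduce(List<SketchInternalAggregation> shards, ReduceContext ctx) {
        SketchInternalAggregation merged = doReduce(shards, ctx);
        for (ReducerLike reducer : reducers) {
            merged = reducer.reduce(merged, ctx);
        }
        return merged;
    }

    protected abstract SketchInternalAggregation doReduce(List<SketchInternalAggregation> shards, ReduceContext ctx);
}
----
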
@@ -29,6 +29,7 @@ import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicStreams;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@@ -41,7 +42,7 @@ import java.util.Map;
 /**
  *
  */
-public class SignificantLongTerms extends InternalSignificantTerms {
+public class SignificantLongTerms extends InternalSignificantTerms<SignificantLongTerms, SignificantLongTerms.Bucket> {

     public static final Type TYPE = new Type("significant_terms", "siglterms");

@@ -161,16 +162,16 @@ public class SignificantLongTerms extends InternalSignificantTerms {
             return builder;
         }
     }

     private ValueFormatter formatter;

     SignificantLongTerms() {
     } // for serialization

-    public SignificantLongTerms(long subsetSize, long supersetSize, String name, @Nullable ValueFormatter formatter,
-            int requiredSize, long minDocCount, SignificanceHeuristic significanceHeuristic, List<InternalSignificantTerms.Bucket> buckets, Map<String, Object> metaData) {
+    public SignificantLongTerms(long subsetSize, long supersetSize, String name, @Nullable ValueFormatter formatter, int requiredSize,
+            long minDocCount, SignificanceHeuristic significanceHeuristic, List<? extends InternalSignificantTerms.Bucket> buckets,
+            List<Reducer> reducers, Map<String, Object> metaData) {

-        super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, metaData);
+        super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, reducers, metaData);
         this.formatter = formatter;
     }

@@ -180,9 +181,24 @@ public class SignificantLongTerms extends InternalSignificantTerms {
     }

     @Override
-    InternalSignificantTerms newAggregation(long subsetSize, long supersetSize,
-            List<InternalSignificantTerms.Bucket> buckets) {
-        return new SignificantLongTerms(subsetSize, supersetSize, getName(), formatter, requiredSize, minDocCount, significanceHeuristic, buckets, getMetaData());
+    public SignificantLongTerms create(List<SignificantLongTerms.Bucket> buckets) {
+        return new SignificantLongTerms(this.subsetSize, this.supersetSize, this.name, this.formatter, this.requiredSize, this.minDocCount,
+                this.significanceHeuristic, buckets, this.reducers(), this.metaData);
+    }
+
+    @Override
+    public Bucket createBucket(InternalAggregations aggregations, SignificantLongTerms.Bucket prototype) {
+        return new Bucket(prototype.subsetDf, prototype.subsetSize, prototype.supersetDf, prototype.supersetSize, prototype.term,
+                aggregations, prototype.formatter);
+    }
+
+    @Override
+    protected SignificantLongTerms create(long subsetSize, long supersetSize,
+            List<org.elasticsearch.search.aggregations.bucket.significant.InternalSignificantTerms.Bucket> buckets,
+            InternalSignificantTerms prototype) {
+        return new SignificantLongTerms(subsetSize, supersetSize, prototype.getName(), ((SignificantLongTerms) prototype).formatter,
+                prototype.requiredSize, prototype.minDocCount, prototype.significanceHeuristic, buckets, prototype.reducers(),
+                prototype.getMetaData());
     }

     @Override

@@ -28,6 +28,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.bucket.terms.LongTermsAggregator;
 import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.format.ValueFormat;
@@ -36,6 +37,7 @@ import org.elasticsearch.search.internal.ContextIndexSearcher;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.List;
 import java.util.Map;

 /**
@@ -44,10 +46,12 @@ import java.util.Map;
 public class SignificantLongTermsAggregator extends LongTermsAggregator {

     public SignificantLongTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, @Nullable ValueFormat format,
-            BucketCountThresholds bucketCountThresholds,
-            AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggFactory, IncludeExclude.LongFilter includeExclude, Map<String, Object> metaData) throws IOException {
+            BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext,
+            Aggregator parent, SignificantTermsAggregatorFactory termsAggFactory, IncludeExclude.LongFilter includeExclude,
+            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {

-        super(name, factories, valuesSource, format, null, bucketCountThresholds, aggregationContext, parent, SubAggCollectionMode.DEPTH_FIRST, false, includeExclude, metaData);
+        super(name, factories, valuesSource, format, null, bucketCountThresholds, aggregationContext, parent,
+                SubAggCollectionMode.DEPTH_FIRST, false, includeExclude, reducers, metaData);
         this.termsAggFactory = termsAggFactory;
     }

@@ -102,7 +106,9 @@ public class SignificantLongTermsAggregator extends LongTermsAggregator {
             bucket.aggregations = bucketAggregations(bucket.bucketOrd);
             list[i] = bucket;
         }
-        return new SignificantLongTerms(subsetSize, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), metaData());
+        return new SignificantLongTerms(subsetSize, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), reducers(),
+                metaData());
     }

     @Override
@@ -111,7 +117,9 @@ public class SignificantLongTermsAggregator extends LongTermsAggregator {
         ContextIndexSearcher searcher = context.searchContext().searcher();
         IndexReader topReader = searcher.getIndexReader();
         int supersetSize = topReader.numDocs();
-        return new SignificantLongTerms(0, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Collections.<InternalSignificantTerms.Bucket>emptyList(), metaData());
+        return new SignificantLongTerms(0, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
+                Collections.<InternalSignificantTerms.Bucket> emptyList(), reducers(), metaData());
     }

     @Override

@@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
 import org.elasticsearch.search.aggregations.bucket.BucketStreams;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicStreams;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
 import java.util.ArrayList;
@@ -40,7 +41,7 @@ import java.util.Map;
 /**
  *
  */
-public class SignificantStringTerms extends InternalSignificantTerms {
+public class SignificantStringTerms extends InternalSignificantTerms<SignificantStringTerms, SignificantStringTerms.Bucket> {

     public static final InternalAggregation.Type TYPE = new Type("significant_terms", "sigsterms");

@@ -159,9 +160,10 @@ public class SignificantStringTerms extends InternalSignificantTerms {

     SignificantStringTerms() {} // for serialization

-    public SignificantStringTerms(long subsetSize, long supersetSize, String name, int requiredSize,
-            long minDocCount, SignificanceHeuristic significanceHeuristic, List<InternalSignificantTerms.Bucket> buckets, Map<String, Object> metaData) {
-        super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, metaData);
+    public SignificantStringTerms(long subsetSize, long supersetSize, String name, int requiredSize, long minDocCount,
+            SignificanceHeuristic significanceHeuristic, List<? extends InternalSignificantTerms.Bucket> buckets, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, reducers, metaData);
     }

     @Override
@@ -170,9 +172,22 @@ public class SignificantStringTerms extends InternalSignificantTerms {
     }

     @Override
-    InternalSignificantTerms newAggregation(long subsetSize, long supersetSize,
-            List<InternalSignificantTerms.Bucket> buckets) {
-        return new SignificantStringTerms(subsetSize, supersetSize, getName(), requiredSize, minDocCount, significanceHeuristic, buckets, getMetaData());
+    public SignificantStringTerms create(List<SignificantStringTerms.Bucket> buckets) {
+        return new SignificantStringTerms(this.subsetSize, this.supersetSize, this.name, this.requiredSize, this.minDocCount,
+                this.significanceHeuristic, buckets, this.reducers(), this.metaData);
+    }
+
+    @Override
+    public Bucket createBucket(InternalAggregations aggregations, SignificantStringTerms.Bucket prototype) {
+        return new Bucket(prototype.termBytes, prototype.subsetDf, prototype.subsetSize, prototype.supersetDf, prototype.supersetSize,
+                aggregations);
+    }
+
+    @Override
+    protected SignificantStringTerms create(long subsetSize, long supersetSize, List<InternalSignificantTerms.Bucket> buckets,
+            InternalSignificantTerms prototype) {
+        return new SignificantStringTerms(subsetSize, supersetSize, prototype.getName(), prototype.requiredSize, prototype.minDocCount,
+                prototype.significanceHeuristic, buckets, prototype.reducers(), prototype.getMetaData());
     }

     @Override

@@ -28,6 +28,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator;
 import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.internal.ContextIndexSearcher;
@@ -35,6 +36,7 @@ import org.elasticsearch.search.internal.ContextIndexSearcher;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.List;
 import java.util.Map;

 /**
@@ -48,9 +50,11 @@ public class SignificantStringTermsAggregator extends StringTermsAggregator {
     public SignificantStringTermsAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource,
             BucketCountThresholds bucketCountThresholds,
             IncludeExclude.StringFilter includeExclude, AggregationContext aggregationContext, Aggregator parent,
-            SignificantTermsAggregatorFactory termsAggFactory, Map<String, Object> metaData) throws IOException {
+            SignificantTermsAggregatorFactory termsAggFactory, List<Reducer> reducers, Map<String, Object> metaData)
+            throws IOException {

-        super(name, factories, valuesSource, null, bucketCountThresholds, includeExclude, aggregationContext, parent, SubAggCollectionMode.DEPTH_FIRST, false, metaData);
+        super(name, factories, valuesSource, null, bucketCountThresholds, includeExclude, aggregationContext, parent,
+                SubAggCollectionMode.DEPTH_FIRST, false, reducers, metaData);
         this.termsAggFactory = termsAggFactory;
     }

@@ -107,7 +111,9 @@ public class SignificantStringTermsAggregator extends StringTermsAggregator {
             list[i] = bucket;
         }

-        return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), metaData());
+        return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Arrays.asList(list), reducers(),
+                metaData());
     }

     @Override
@@ -116,7 +122,9 @@ public class SignificantStringTermsAggregator extends StringTermsAggregator {
         ContextIndexSearcher searcher = context.searchContext().searcher();
         IndexReader topReader = searcher.getIndexReader();
         int supersetSize = topReader.numDocs();
-        return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(), Collections.<InternalSignificantTerms.Bucket>emptyList(), metaData());
+        return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
+                Collections.<InternalSignificantTerms.Bucket> emptyList(), reducers(), metaData());
     }

     @Override

@@ -37,6 +37,7 @@ import org.elasticsearch.search.aggregations.NonCollectingAggregator;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;
 import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator;
 import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
@@ -44,6 +45,7 @@ import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;
 import org.elasticsearch.search.internal.SearchContext;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**
@@ -61,10 +63,12 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac

         @Override
         Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                 TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
-                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory, Map<String, Object> metaData) throws IOException {
+                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory,
+                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
             final IncludeExclude.StringFilter filter = includeExclude == null ? null : includeExclude.convertToStringFilter();
-            return new SignificantStringTermsAggregator(name, factories, valuesSource, bucketCountThresholds, filter, aggregationContext, parent, termsAggregatorFactory, metaData);
+            return new SignificantStringTermsAggregator(name, factories, valuesSource, bucketCountThresholds, filter,
+                    aggregationContext, parent, termsAggregatorFactory, reducers, metaData);
         }

     },
@@ -73,11 +77,12 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
         @Override
         Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                 TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
-                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory, Map<String, Object> metaData) throws IOException {
+                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory,
+                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
             ValuesSource.Bytes.WithOrdinals valueSourceWithOrdinals = (ValuesSource.Bytes.WithOrdinals) valuesSource;
             IndexSearcher indexSearcher = aggregationContext.searchContext().searcher();
             final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter();
-            return new GlobalOrdinalsSignificantTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, bucketCountThresholds, filter, aggregationContext, parent, termsAggregatorFactory, metaData);
+            return new GlobalOrdinalsSignificantTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, bucketCountThresholds, filter, aggregationContext, parent, termsAggregatorFactory, reducers, metaData);
         }

     },
@@ -86,9 +91,12 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
         @Override
         Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                 TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
-                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory, Map<String, Object> metaData) throws IOException {
+                AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory,
+                List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
             final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter();
-            return new GlobalOrdinalsSignificantTermsAggregator.WithHash(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, bucketCountThresholds, filter, aggregationContext, parent, termsAggregatorFactory, metaData);
+            return new GlobalOrdinalsSignificantTermsAggregator.WithHash(name, factories,
+                    (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, bucketCountThresholds, filter,
+                    aggregationContext, parent, termsAggregatorFactory, reducers, metaData);
         }
     };

@@ -109,7 +117,8 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac

     abstract Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
             TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
-            AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory, Map<String, Object> metaData) throws IOException;
+            AggregationContext aggregationContext, Aggregator parent, SignificantTermsAggregatorFactory termsAggregatorFactory,
+            List<Reducer> reducers, Map<String, Object> metaData) throws IOException;

     @Override
     public String toString() {
@@ -146,9 +155,11 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
     }

     @Override
-    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        final InternalAggregation aggregation = new UnmappedSignificantTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(), metaData);
-        return new NonCollectingAggregator(name, aggregationContext, parent, metaData) {
+    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+            Map<String, Object> metaData) throws IOException {
+        final InternalAggregation aggregation = new UnmappedSignificantTerms(name, bucketCountThresholds.getRequiredSize(),
+                bucketCountThresholds.getMinDocCount(), reducers, metaData);
+        return new NonCollectingAggregator(name, aggregationContext, parent, reducers, metaData) {
             @Override
             public InternalAggregation buildEmptyAggregation() {
                 return aggregation;
@@ -157,7 +168,8 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
     }

     @Override
-    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent,
+            boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
         if (collectsFromSingleBucket == false) {
             return asMultiBucketAggregator(this, aggregationContext, parent);
         }
@@ -180,7 +192,8 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
             }
         }
         assert execution != null;
-        return execution.create(name, factories, valuesSource, bucketCountThresholds, includeExclude, aggregationContext, parent, this, metaData);
+        return execution.create(name, factories, valuesSource, bucketCountThresholds, includeExclude, aggregationContext, parent, this,
+                reducers, metaData);
     }

@@ -198,7 +211,8 @@ public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFac
         if (includeExclude != null) {
             longFilter = includeExclude.convertToLongFilter();
         }
-        return new SignificantLongTermsAggregator(name, factories, (ValuesSource.Numeric) valuesSource, config.format(), bucketCountThresholds, aggregationContext, parent, this, longFilter, metaData);
+        return new SignificantLongTermsAggregator(name, factories, (ValuesSource.Numeric) valuesSource, config.format(),
+                bucketCountThresholds, aggregationContext, parent, this, longFilter, reducers, metaData);
     }

     throw new AggregationExecutionException("sigfnificant_terms aggregation cannot be applied to field [" + config.fieldContext().field() +

@@ -23,7 +23,9 @@ import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregation;
+import org.elasticsearch.search.aggregations.InternalAggregations;
 import org.elasticsearch.search.aggregations.bucket.significant.heuristics.JLHScore;
+import org.elasticsearch.search.aggregations.reducers.Reducer;

 import java.io.IOException;
 import java.util.Collections;
@@ -33,7 +35,7 @@ import java.util.Map;
 /**
  *
  */
-public class UnmappedSignificantTerms extends InternalSignificantTerms {
+public class UnmappedSignificantTerms extends InternalSignificantTerms<UnmappedSignificantTerms, InternalSignificantTerms.Bucket> {

     public static final Type TYPE = new Type("significant_terms", "umsigterms");

@@ -55,10 +57,10 @@ public class UnmappedSignificantTerms extends InternalSignificantTerms {

     UnmappedSignificantTerms() {} // for serialization

-    public UnmappedSignificantTerms(String name, int requiredSize, long minDocCount, Map<String, Object> metaData) {
+    public UnmappedSignificantTerms(String name, int requiredSize, long minDocCount, List<Reducer> reducers, Map<String, Object> metaData) {
         //We pass zero for index/subset sizes because for the purpose of significant term analysis
         // we assume an unmapped index's size is irrelevant to the proceedings.
-        super(0, 0, name, requiredSize, minDocCount, JLHScore.INSTANCE, BUCKETS, metaData);
+        super(0, 0, name, requiredSize, minDocCount, JLHScore.INSTANCE, BUCKETS, reducers, metaData);
     }

     @Override
@@ -67,7 +69,22 @@ public class UnmappedSignificantTerms extends InternalSignificantTerms {
     }

     @Override
-    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public UnmappedSignificantTerms create(List<InternalSignificantTerms.Bucket> buckets) {
+        return new UnmappedSignificantTerms(this.name, this.requiredSize, this.minDocCount, this.reducers(), this.metaData);
+    }
+
+    @Override
+    public InternalSignificantTerms.Bucket createBucket(InternalAggregations aggregations, InternalSignificantTerms.Bucket prototype) {
+        throw new UnsupportedOperationException("not supported for UnmappedSignificantTerms");
+    }
+
+    @Override
+    protected UnmappedSignificantTerms create(long subsetSize, long supersetSize, List<Bucket> buckets, InternalSignificantTerms prototype) {
+        throw new UnsupportedOperationException("not supported for UnmappedSignificantTerms");
+    }
+
+    @Override
+    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
         for (InternalAggregation aggregation : aggregations) {
             if (!(aggregation instanceof UnmappedSignificantTerms)) {
                 return aggregation.reduce(aggregations, reduceContext);
@@ -76,11 +93,6 @@ public class UnmappedSignificantTerms extends InternalSignificantTerms {
         return this;
     }

-    @Override
-    InternalSignificantTerms newAggregation(long subsetSize, long supersetSize, List<Bucket> buckets) {
-        throw new UnsupportedOperationException("How did you get there?");
-    }
-
     @Override
     protected void doReadFrom(StreamInput in) throws IOException {
         this.requiredSize = readSize(in);

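`UnmappedSignificantTerms` keeps the same shape as `UnmappedSampler` earlier in the diff: the prototype factory methods simply throw, and `doReduce` defers to the first mapped shard result so an unmapped index never hides real buckets. A simplified sketch of that short-circuit, with placeholder types rather than the real API:

[source,java]
----
import java.util.List;

// Sketch of the "unmapped" short-circuit both UnmappedSampler and
// UnmappedSignificantTerms follow in this diff: if any shard produced a
// mapped result, defer to it; only when every shard is unmapped does the
// unmapped placeholder survive the reduce phase.
final class UnmappedShortCircuit {

    interface Agg {
        boolean unmapped();
        Agg reduce(List<Agg> all);
    }

    static Agg doReduce(Agg self, List<Agg> aggregations) {
        for (Agg agg : aggregations) {
            if (!agg.unmapped()) {
                return agg.reduce(aggregations); // first mapped result wins
            }
        }
        return self; // everything was unmapped
    }
}
----
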
@@ -22,27 +22,30 @@ package org.elasticsearch.search.aggregations.bucket.terms;
 import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.AggregatorFactories;
 import org.elasticsearch.search.aggregations.InternalAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;

 import java.io.IOException;
 import java.util.Collections;
+import java.util.List;
 import java.util.Map;

 abstract class AbstractStringTermsAggregator extends TermsAggregator {

     protected final boolean showTermDocCountError;

-    public AbstractStringTermsAggregator(String name, AggregatorFactories factories,
-            AggregationContext context, Aggregator parent,
-            Terms.Order order, BucketCountThresholds bucketCountThresholds,
-            SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
-        super(name, factories, context, parent, bucketCountThresholds, order, subAggCollectMode, metaData);
+    public AbstractStringTermsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent,
+            Terms.Order order, BucketCountThresholds bucketCountThresholds, SubAggCollectionMode subAggCollectMode,
+            boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+        super(name, factories, context, parent, bucketCountThresholds, order, subAggCollectMode, reducers, metaData);
         this.showTermDocCountError = showTermDocCountError;
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Collections.<InternalTerms.Bucket>emptyList(), showTermDocCountError, 0, 0, metaData());
+        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(),
+                bucketCountThresholds.getMinDocCount(), Collections.<InternalTerms.Bucket> emptyList(), showTermDocCountError, 0, 0,
+                reducers(), metaData());
     }

 }

@ -27,6 +27,7 @@ import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
import org.elasticsearch.search.aggregations.bucket.BucketStreams;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@ -39,7 +40,7 @@ import java.util.Map;
/**
 *
 */
public class DoubleTerms extends InternalTerms {
public class DoubleTerms extends InternalTerms<DoubleTerms, DoubleTerms.Bucket> {

    public static final Type TYPE = new Type("terms", "dterms");

@ -84,7 +85,8 @@ public class DoubleTerms extends InternalTerms {
            super(formatter, showDocCountError);
        }

        public Bucket(double term, long docCount, InternalAggregations aggregations, boolean showDocCountError, long docCountError, @Nullable ValueFormatter formatter) {
        public Bucket(double term, long docCount, InternalAggregations aggregations, boolean showDocCountError, long docCountError,
                @Nullable ValueFormatter formatter) {
            super(docCount, aggregations, showDocCountError, docCountError, formatter);
            this.term = term;
        }

@ -152,12 +154,17 @@ public class DoubleTerms extends InternalTerms {
        }
    }

    private @Nullable ValueFormatter formatter;
    private @Nullable
    ValueFormatter formatter;

    DoubleTerms() {} // for serialization
    DoubleTerms() {
    } // for serialization

    public DoubleTerms(String name, Terms.Order order, @Nullable ValueFormatter formatter, int requiredSize, int shardSize, long minDocCount, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public DoubleTerms(String name, Terms.Order order, @Nullable ValueFormatter formatter, int requiredSize, int shardSize,
            long minDocCount, List<? extends InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError,
            long otherDocCount, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, reducers,
                metaData);
        this.formatter = formatter;
    }

@ -167,8 +174,23 @@ public class DoubleTerms extends InternalTerms {
    }

    @Override
    protected InternalTerms newAggregation(String name, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        return new DoubleTerms(name, order, formatter, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public DoubleTerms create(List<Bucket> buckets) {
        return new DoubleTerms(this.name, this.order, this.formatter, this.requiredSize, this.shardSize, this.minDocCount, buckets,
                this.showTermDocCountError, this.docCountError, this.otherDocCount, this.reducers(), this.metaData);
    }

    @Override
    public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
        return new Bucket(prototype.term, prototype.docCount, aggregations, prototype.showDocCountError, prototype.docCountError,
                prototype.formatter);
    }

    @Override
    protected DoubleTerms create(String name, List<org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.Bucket> buckets,
            long docCountError, long otherDocCount, InternalTerms prototype) {
        return new DoubleTerms(name, prototype.order, ((DoubleTerms) prototype).formatter, prototype.requiredSize, prototype.shardSize,
                prototype.minDocCount, buckets, prototype.showTermDocCountError, docCountError, otherDocCount, prototype.reducers(),
                prototype.getMetaData());
    }

    @Override

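The new `create(...)` and `createBucket(...)` overrides give each concrete terms type a prototype-style factory: a reducer can rebuild an aggregation (or a bucket) with modified contents while every other field, including the reducer list and metadata, is copied from the prototype. A hedged sketch of the contract, with simplified hypothetical types and the second generic parameter dropped for brevity:

[source,java]
----
import java.util.List;

abstract class MultiBucketAggSketch<A extends MultiBucketAggSketch<A>> {
    final List<Long> buckets; // stand-in for real bucket objects

    MultiBucketAggSketch(List<Long> buckets) {
        this.buckets = buckets;
    }

    // Rebuild an aggregation of the same concrete type from new buckets,
    // copying every other field from `this` (the prototype).
    public abstract A create(List<Long> buckets);
}

class DoubleTermsSketch extends MultiBucketAggSketch<DoubleTermsSketch> {
    DoubleTermsSketch(List<Long> buckets) {
        super(buckets);
    }

    @Override
    public DoubleTermsSketch create(List<Long> buckets) {
        return new DoubleTermsSketch(buckets); // same type, new bucket list
    }
}
----
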
@ -26,6 +26,7 @@ import org.elasticsearch.index.fielddata.FieldData;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;

@ -33,6 +34,7 @@ import org.elasticsearch.search.aggregations.support.format.ValueFormat;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**

@ -41,8 +43,11 @@ import java.util.Map;
public class DoubleTermsAggregator extends LongTermsAggregator {

    public DoubleTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, @Nullable ValueFormat format,
            Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, IncludeExclude.LongFilter longFilter, Map<String, Object> metaData) throws IOException {
        super(name, factories, valuesSource, format, order, bucketCountThresholds, aggregationContext, parent, collectionMode, showTermDocCountError, longFilter, metaData);
            Terms.Order order, BucketCountThresholds bucketCountThresholds,
            AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError,
            IncludeExclude.LongFilter longFilter, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, valuesSource, format, order, bucketCountThresholds, aggregationContext, parent, collectionMode,
                showTermDocCountError, longFilter, reducers, metaData);
    }

    @Override

@ -73,7 +78,9 @@ public class DoubleTermsAggregator extends LongTermsAggregator {
        for (int i = 0; i < buckets.length; ++i) {
            buckets[i] = convertToDouble(buckets[i]);
        }
        return new DoubleTerms(terms.getName(), terms.order, terms.formatter, terms.requiredSize, terms.shardSize, terms.minDocCount, Arrays.asList(buckets), terms.showTermDocCountError, terms.docCountError, terms.otherDocCount, terms.getMetaData());
        return new DoubleTerms(terms.getName(), terms.order, terms.formatter, terms.requiredSize, terms.shardSize, terms.minDocCount,
                Arrays.asList(buckets), terms.showTermDocCountError, terms.docCountError, terms.otherDocCount, terms.reducers(),
                terms.getMetaData());
    }

}

@ -37,18 +37,20 @@ import org.elasticsearch.index.fielddata.AbstractRandomAccessOrds;
import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalMapping;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.Bucket;
import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**

@ -71,8 +73,9 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr

    public GlobalOrdinalsStringTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource,
            Terms.Order order, BucketCountThresholds bucketCountThresholds,
            IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, order, bucketCountThresholds, collectionMode, showTermDocCountError, metaData);
            IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, order, bucketCountThresholds, collectionMode, showTermDocCountError, reducers,
                metaData);
        this.valuesSource = valuesSource;
        this.includeExclude = includeExclude;
    }

@ -196,7 +199,9 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr
            bucket.docCountError = 0;
        }

        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, metaData());
        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(),
                bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, reducers(),
                metaData());
    }

    /**

@ -261,8 +266,8 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr

        public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource,
                Terms.Order order, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext,
                Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
            super(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, collectionMode, showTermDocCountError, metaData);
                Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            super(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, collectionMode, showTermDocCountError, reducers, metaData);
            bucketOrds = new LongHash(1, aggregationContext.bigArrays());
        }

@ -330,8 +335,8 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr
        private RandomAccessOrds segmentOrds;

        public LowCardinality(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource,
                Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
            super(name, factories, valuesSource, order, bucketCountThresholds, null, aggregationContext, parent, collectionMode, showTermDocCountError, metaData);
                Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            super(name, factories, valuesSource, order, bucketCountThresholds, null, aggregationContext, parent, collectionMode, showTermDocCountError, reducers, metaData);
            assert factories == null || factories.count() == 0;
            this.segmentDocCounts = context.bigArrays().newIntArray(1, true);
        }

@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

import java.util.ArrayList;

@ -41,7 +42,8 @@ import java.util.Map;
/**
 *
 */
public abstract class InternalTerms extends InternalMultiBucketAggregation implements Terms, ToXContent, Streamable {
public abstract class InternalTerms<A extends InternalTerms, B extends InternalTerms.Bucket> extends InternalMultiBucketAggregation<A, B>
        implements Terms, ToXContent, Streamable {

    protected static final String DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME = "doc_count_error_upper_bound";
    protected static final String SUM_OF_OTHER_DOC_COUNTS = "sum_other_doc_count";

@ -113,7 +115,7 @@ public abstract class InternalTerms extends InternalMultiBucketAggregation imple
    protected int requiredSize;
    protected int shardSize;
    protected long minDocCount;
    protected List<Bucket> buckets;
    protected List<? extends Bucket> buckets;
    protected Map<String, Bucket> bucketMap;
    protected long docCountError;
    protected boolean showTermDocCountError;

@ -121,8 +123,10 @@ public abstract class InternalTerms extends InternalMultiBucketAggregation imple

    protected InternalTerms() {} // for serialization

    protected InternalTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount, List<Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        super(name, metaData);
    protected InternalTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount,
            List<? extends Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, List<Reducer> reducers,
            Map<String, Object> metaData) {
        super(name, reducers, metaData);
        this.order = order;
        this.requiredSize = requiredSize;
        this.shardSize = shardSize;

@ -161,13 +165,13 @@ public abstract class InternalTerms extends InternalMultiBucketAggregation imple
    }

    @Override
    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {

        Multimap<Object, InternalTerms.Bucket> buckets = ArrayListMultimap.create();
        long sumDocCountError = 0;
        long otherDocCount = 0;
        for (InternalAggregation aggregation : aggregations) {
            InternalTerms terms = (InternalTerms) aggregation;
            InternalTerms<A, B> terms = (InternalTerms<A, B>) aggregation;
            otherDocCount += terms.getSumOfOtherDocCounts();
            final long thisAggDocCountError;
            if (terms.buckets.size() < this.shardSize || this.order == InternalOrder.TERM_ASC || this.order == InternalOrder.TERM_DESC) {

@ -220,9 +224,10 @@ public abstract class InternalTerms extends InternalMultiBucketAggregation imple
        } else {
            docCountError = aggregations.size() == 1 ? 0 : sumDocCountError;
        }
        return newAggregation(name, Arrays.asList(list), showTermDocCountError, docCountError, otherDocCount, getMetaData());
        return create(name, Arrays.asList(list), docCountError, otherDocCount, this);
    }

    protected abstract InternalTerms newAggregation(String name, List<Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData);
    protected abstract A create(String name, List<InternalTerms.Bucket> buckets, long docCountError, long otherDocCount,
            InternalTerms prototype);

}

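The rename from `reduce` to `doReduce` points at a template method in the shared base class: `reduce` presumably performs the shard-level merge via `doReduce` and then lets each attached reducer transform the merged result. That wrapper is not part of this hunk, so the sketch below is an assumption about its shape rather than the PR's actual code:

[source,java]
----
import java.util.List;

abstract class AggSketch {
    interface Reducer {
        AggSketch doReduce(AggSketch merged); // hypothetical reducer hook
    }

    private final List<Reducer> reducers;

    protected AggSketch(List<Reducer> reducers) {
        this.reducers = reducers;
    }

    // Assumed template method: merge the shard results first, then apply
    // each reducer to the merged aggregation, in order.
    public final AggSketch reduce(List<AggSketch> shardResults) {
        AggSketch merged = doReduce(shardResults);
        for (Reducer reducer : reducers) {
            merged = reducer.doReduce(merged);
        }
        return merged;
    }

    protected abstract AggSketch doReduce(List<AggSketch> shardResults);
}
----
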
@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
import org.elasticsearch.search.aggregations.bucket.BucketStreams;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@ -38,7 +39,7 @@ import java.util.Map;
/**
 *
 */
public class LongTerms extends InternalTerms {
public class LongTerms extends InternalTerms<LongTerms, LongTerms.Bucket> {

    public static final Type TYPE = new Type("terms", "lterms");

@ -155,8 +156,11 @@ public class LongTerms extends InternalTerms {

    LongTerms() {} // for serialization

    public LongTerms(String name, Terms.Order order, @Nullable ValueFormatter formatter, int requiredSize, int shardSize, long minDocCount, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public LongTerms(String name, Terms.Order order, @Nullable ValueFormatter formatter, int requiredSize, int shardSize, long minDocCount,
            List<? extends InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount,
            List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, reducers,
                metaData);
        this.formatter = formatter;
    }

@ -166,8 +170,23 @@ public class LongTerms extends InternalTerms {
    }

    @Override
    protected InternalTerms newAggregation(String name, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        return new LongTerms(name, order, formatter, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public LongTerms create(List<Bucket> buckets) {
        return new LongTerms(this.name, this.order, this.formatter, this.requiredSize, this.shardSize, this.minDocCount, buckets,
                this.showTermDocCountError, this.docCountError, this.otherDocCount, this.reducers(), this.metaData);
    }

    @Override
    public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
        return new Bucket(prototype.term, prototype.docCount, aggregations, prototype.showDocCountError, prototype.docCountError,
                prototype.formatter);
    }

    @Override
    protected LongTerms create(String name, List<org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.Bucket> buckets,
            long docCountError, long otherDocCount, InternalTerms prototype) {
        return new LongTerms(name, prototype.order, ((LongTerms) prototype).formatter, prototype.requiredSize, prototype.shardSize,
                prototype.minDocCount, buckets, prototype.showTermDocCountError, docCountError, otherDocCount, prototype.reducers(),
                prototype.getMetaData());
    }

    @Override

@ -31,6 +31,7 @@ import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude.LongFilter;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.format.ValueFormat;

@ -39,6 +40,7 @@ import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

/**

@ -53,8 +55,10 @@ public class LongTermsAggregator extends TermsAggregator {
    private LongFilter longFilter;

    public LongTermsAggregator(String name, AggregatorFactories factories, ValuesSource.Numeric valuesSource, @Nullable ValueFormat format,
            Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, IncludeExclude.LongFilter longFilter, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, bucketCountThresholds, order, subAggCollectMode, metaData);
            Terms.Order order, BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent,
            SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, IncludeExclude.LongFilter longFilter,
            List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, aggregationContext, parent, bucketCountThresholds, order, subAggCollectMode, reducers, metaData);
        this.valuesSource = valuesSource;
        this.showTermDocCountError = showTermDocCountError;
        this.formatter = format != null ? format.formatter() : null;

@ -157,13 +161,16 @@ public class LongTermsAggregator extends TermsAggregator {
            list[i].docCountError = 0;
        }

        return new LongTerms(name, order, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, metaData());
        return new LongTerms(name, order, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(),
                bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, reducers(),
                metaData());
    }


    @Override
    public InternalAggregation buildEmptyAggregation() {
        return new LongTerms(name, order, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Collections.<InternalTerms.Bucket>emptyList(), showTermDocCountError, 0, 0, metaData());
        return new LongTerms(name, order, formatter, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(),
                bucketCountThresholds.getMinDocCount(), Collections.<InternalTerms.Bucket> emptyList(), showTermDocCountError, 0, 0,
                reducers(), metaData());
    }

    @Override

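Note that `buildAggregation` and `buildEmptyAggregation` now both pass `reducers()` into the result, so a shard that matched nothing produces a result carrying the same reducer list as a populated one, and the two reduce identically. A small sketch of that symmetry (hypothetical types, bucket and reducer details elided):

[source,java]
----
import java.util.Collections;
import java.util.List;

class TermsResultSketch {
    final List<Object> reducers;
    final List<Object> buckets;

    TermsResultSketch(List<Object> reducers, List<Object> buckets) {
        this.reducers = reducers;
        this.buckets = buckets;
    }

    // Empty and populated results are built the same way, so the reduce
    // phase never has to special-case shards that matched no documents.
    static TermsResultSketch empty(List<Object> reducers) {
        return new TermsResultSketch(reducers, Collections.emptyList());
    }
}
----
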
@ -27,6 +27,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.BucketStreamContext;
import org.elasticsearch.search.aggregations.bucket.BucketStreams;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.ArrayList;

@ -37,7 +38,7 @@ import java.util.Map;
/**
 *
 */
public class StringTerms extends InternalTerms {
public class StringTerms extends InternalTerms<StringTerms, StringTerms.Bucket> {

    public static final InternalAggregation.Type TYPE = new Type("terms", "sterms");

@ -73,7 +74,6 @@ public class StringTerms extends InternalTerms {
        BucketStreams.registerStream(BUCKET_STREAM, TYPE.stream());
    }


    public static class Bucket extends InternalTerms.Bucket {

        BytesRef termBytes;

@ -148,10 +148,14 @@ public class StringTerms extends InternalTerms {
        }
    }

    StringTerms() {} // for serialization
    StringTerms() {
    } // for serialization

    public StringTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public StringTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount,
            List<? extends InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount,
            List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, reducers,
                metaData);
    }

    @Override

@ -160,8 +164,21 @@ public class StringTerms extends InternalTerms {
    }

    @Override
    protected InternalTerms newAggregation(String name, List<InternalTerms.Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        return new StringTerms(name, order, requiredSize, shardSize, minDocCount, buckets, showTermDocCountError, docCountError, otherDocCount, metaData);
    public StringTerms create(List<Bucket> buckets) {
        return new StringTerms(this.name, this.order, this.requiredSize, this.shardSize, this.minDocCount, buckets,
                this.showTermDocCountError, this.docCountError, this.otherDocCount, this.reducers(), this.metaData);
    }

    @Override
    public Bucket createBucket(InternalAggregations aggregations, Bucket prototype) {
        return new Bucket(prototype.termBytes, prototype.docCount, aggregations, prototype.showDocCountError, prototype.docCountError);
    }

    @Override
    protected StringTerms create(String name, List<org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.Bucket> buckets,
            long docCountError, long otherDocCount, InternalTerms prototype) {
        return new StringTerms(name, prototype.order, prototype.requiredSize, prototype.shardSize, prototype.minDocCount, buckets,
                prototype.showTermDocCountError, docCountError, otherDocCount, prototype.reducers(), prototype.getMetaData());
    }

    @Override

@ -31,11 +31,13 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**

@ -49,9 +51,12 @@ public class StringTermsAggregator extends AbstractStringTermsAggregator {

    public StringTermsAggregator(String name, AggregatorFactories factories, ValuesSource valuesSource,
            Terms.Order order, BucketCountThresholds bucketCountThresholds,
            IncludeExclude.StringFilter includeExclude, AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
            IncludeExclude.StringFilter includeExclude, AggregationContext aggregationContext,
            Aggregator parent, SubAggCollectionMode collectionMode, boolean showTermDocCountError, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {

        super(name, factories, aggregationContext, parent, order, bucketCountThresholds, collectionMode, showTermDocCountError, metaData);
        super(name, factories, aggregationContext, parent, order, bucketCountThresholds, collectionMode, showTermDocCountError, reducers,
                metaData);
        this.valuesSource = valuesSource;
        this.includeExclude = includeExclude;
        bucketOrds = new BytesRefHash(1, aggregationContext.bigArrays());

@ -158,7 +163,9 @@ public class StringTermsAggregator extends AbstractStringTermsAggregator {
            bucket.docCountError = 0;
        }

        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, metaData());
        return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(),
                bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount, reducers(),
                metaData());
    }

    @Override

@ -28,11 +28,13 @@ import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;
import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.Aggregation;
import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.CompoundOrder;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.AggregationPath;

import java.io.IOException;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;


@ -135,8 +137,8 @@ public abstract class TermsAggregator extends BucketsAggregator {
    protected final Set<Aggregator> aggsUsedForSorting = new HashSet<>();
    protected final SubAggCollectionMode collectMode;

    public TermsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, BucketCountThresholds bucketCountThresholds, Terms.Order order, SubAggCollectionMode collectMode, Map<String, Object> metaData) throws IOException {
        super(name, factories, context, parent, metaData);
    public TermsAggregator(String name, AggregatorFactories factories, AggregationContext context, Aggregator parent, BucketCountThresholds bucketCountThresholds, Terms.Order order, SubAggCollectionMode collectMode, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, factories, context, parent, reducers, metaData);
        this.bucketCountThresholds = bucketCountThresholds;
        this.order = InternalOrder.validate(order, this);
        this.collectMode = collectMode;

@ -27,12 +27,14 @@ import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.NonCollectingAggregator;
import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -47,9 +49,11 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
        @Override
        Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                Terms.Order order, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode,
                boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            final IncludeExclude.StringFilter filter = includeExclude == null ? null : includeExclude.convertToStringFilter();
            return new StringTermsAggregator(name, factories, valuesSource, order, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, metaData);
            return new StringTermsAggregator(name, factories, valuesSource, order, bucketCountThresholds, filter,
                    aggregationContext, parent, subAggCollectMode, showTermDocCountError, reducers, metaData);
        }

        @Override

@ -63,9 +67,9 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
        @Override
        Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                Terms.Order order, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter();
            return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, metaData);
            return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, reducers, metaData);
        }

        @Override

@ -79,9 +83,9 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
        @Override
        Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                Terms.Order order, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter();
            return new GlobalOrdinalsStringTermsAggregator.WithHash(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, metaData);
            return new GlobalOrdinalsStringTermsAggregator.WithHash(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, filter, aggregationContext, parent, subAggCollectMode, showTermDocCountError, reducers, metaData);
        }

        @Override

@ -94,11 +98,12 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
        @Override
        Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                Terms.Order order, TermsAggregator.BucketCountThresholds bucketCountThresholds, IncludeExclude includeExclude,
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException {
                AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode,
                boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            if (includeExclude != null || factories.count() > 0) {
                return GLOBAL_ORDINALS.create(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, subAggCollectMode, showTermDocCountError, metaData);
                return GLOBAL_ORDINALS.create(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, subAggCollectMode, showTermDocCountError, reducers, metaData);
            }
            return new GlobalOrdinalsStringTermsAggregator.LowCardinality(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, aggregationContext, parent, subAggCollectMode, showTermDocCountError, metaData);
            return new GlobalOrdinalsStringTermsAggregator.LowCardinality(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, aggregationContext, parent, subAggCollectMode, showTermDocCountError, reducers, metaData);
        }

        @Override

@ -125,7 +130,7 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
        abstract Aggregator create(String name, AggregatorFactories factories, ValuesSource valuesSource,
                Terms.Order order, TermsAggregator.BucketCountThresholds bucketCountThresholds,
                IncludeExclude includeExclude, AggregationContext aggregationContext, Aggregator parent,
                SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, Map<String, Object> metaData) throws IOException;
                SubAggCollectionMode subAggCollectMode, boolean showTermDocCountError, List<Reducer> reducers, Map<String, Object> metaData) throws IOException;

        abstract boolean needsGlobalOrdinals();


@ -153,9 +158,11 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
    }

    @Override
    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
        final InternalAggregation aggregation = new UnmappedTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), metaData);
        return new NonCollectingAggregator(name, aggregationContext, parent, factories, metaData) {
    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        final InternalAggregation aggregation = new UnmappedTerms(name, order, bucketCountThresholds.getRequiredSize(),
                bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), reducers, metaData);
        return new NonCollectingAggregator(name, aggregationContext, parent, factories, reducers, metaData) {
            {
                // even in the case of an unmapped aggregator, validate the order
                InternalOrder.validate(order, this);

@ -168,7 +175,8 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
    }

    @Override
    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext aggregationContext, Aggregator parent,
            boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        if (collectsFromSingleBucket == false) {
            return asMultiBucketAggregator(this, aggregationContext, parent);
        }

@ -218,7 +226,7 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
            }

            assert execution != null;
            return execution.create(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, collectMode, showTermDocCountError, metaData);
            return execution.create(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, collectMode, showTermDocCountError, reducers, metaData);
        }

        if ((includeExclude != null) && (includeExclude.isRegexBased())) {

@ -234,13 +242,13 @@ public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory<Values
            }
            return new DoubleTermsAggregator(name, factories, (ValuesSource.Numeric) valuesSource, config.format(),
                    order, bucketCountThresholds, aggregationContext, parent, collectMode,
                    showTermDocCountError, longFilter, metaData);
                    showTermDocCountError, longFilter, reducers, metaData);
            }
            if (includeExclude != null) {
                longFilter = includeExclude.convertToLongFilter();
            }
            return new LongTermsAggregator(name, factories, (ValuesSource.Numeric) valuesSource, config.format(),
                    order, bucketCountThresholds, aggregationContext, parent, collectMode, showTermDocCountError, longFilter, metaData);
                    order, bucketCountThresholds, aggregationContext, parent, collectMode, showTermDocCountError, longFilter, reducers, metaData);
        }

        throw new AggregationExecutionException("terms aggregation cannot be applied to field [" + config.fieldContext().field() +

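`createUnmapped` and `doCreateInternal` are the two paths by which a factory materialises an aggregator, and both now receive the reducer list, so reducers survive even when the target field is unmapped on a shard. A condensed sketch of that split (hypothetical, heavily simplified signatures):

[source,java]
----
import java.util.List;

class Reducer { } // placeholder for the real reducer base type

abstract class FactorySketch {
    // Field present on this shard: build the real aggregator.
    protected abstract Object doCreateInternal(List<Reducer> reducers);

    // Field absent on this shard: build a non-collecting aggregator whose
    // (empty) result still carries the same reducers.
    protected abstract Object createUnmapped(List<Reducer> reducers);

    public final Object create(boolean fieldIsMapped, List<Reducer> reducers) {
        return fieldIsMapped ? doCreateInternal(reducers) : createUnmapped(reducers);
    }
}
----
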
@ -23,6 +23,8 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.search.aggregations.AggregationStreams;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.io.IOException;
import java.util.Collections;

@ -32,7 +34,7 @@ import java.util.Map;
/**
 *
 */
public class UnmappedTerms extends InternalTerms {
public class UnmappedTerms extends InternalTerms<UnmappedTerms, InternalTerms.Bucket> {

    public static final Type TYPE = new Type("terms", "umterms");

@ -54,8 +56,9 @@ public class UnmappedTerms extends InternalTerms {

    UnmappedTerms() {} // for serialization

    public UnmappedTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount, Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, BUCKETS, false, 0, 0, metaData);
    public UnmappedTerms(String name, Terms.Order order, int requiredSize, int shardSize, long minDocCount, List<Reducer> reducers,
            Map<String, Object> metaData) {
        super(name, order, requiredSize, shardSize, minDocCount, BUCKETS, false, 0, 0, reducers, metaData);
    }

    @Override

@ -63,6 +66,21 @@ public class UnmappedTerms extends InternalTerms {
        return TYPE;
    }

    @Override
    public UnmappedTerms create(List<InternalTerms.Bucket> buckets) {
        return new UnmappedTerms(this.name, this.order, this.requiredSize, this.shardSize, this.minDocCount, this.reducers(), this.metaData);
    }

    @Override
    public InternalTerms.Bucket createBucket(InternalAggregations aggregations, InternalTerms.Bucket prototype) {
        throw new UnsupportedOperationException("not supported for UnmappedTerms");
    }

    @Override
    protected UnmappedTerms create(String name, List<Bucket> buckets, long docCountError, long otherDocCount, InternalTerms prototype) {
        throw new UnsupportedOperationException("not supported for UnmappedTerms");
    }

    @Override
    protected void doReadFrom(StreamInput in) throws IOException {
        this.docCountError = 0;

@ -81,7 +99,7 @@ public class UnmappedTerms extends InternalTerms {
    }

    @Override
    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
        for (InternalAggregation agg : aggregations) {
            if (!(agg instanceof UnmappedTerms)) {
                return agg.reduce(aggregations, reduceContext);

@ -90,11 +108,6 @@ public class UnmappedTerms extends InternalTerms {
        return this;
    }

    @Override
    protected InternalTerms newAggregation(String name, List<Bucket> buckets, boolean showTermDocCountError, long docCountError, long otherDocCount, Map<String, Object> metaData) {
        throw new UnsupportedOperationException("How did you get there?");
    }

    @Override
    public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {
        builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, docCountError);

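`UnmappedTerms` implements the bucket-free `create(List<Bucket>)` by cloning itself, but rejects the bucket-rewriting hooks outright; the new message "not supported for UnmappedTerms" also reads better than the old "How did you get there?". A sketch of that fail-fast guard style (hypothetical types):

[source,java]
----
import java.util.List;

class UnmappedSketch {
    // Cloning an unmapped result is fine: there are no buckets to copy.
    UnmappedSketch create(List<Object> ignoredBuckets) {
        return new UnmappedSketch();
    }

    // Rewriting buckets is not: fail fast with a descriptive message.
    Object createBucket(Object aggregations, Object prototype) {
        throw new UnsupportedOperationException("not supported for UnmappedTerms");
    }
}
----
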
@ -20,14 +20,16 @@
package org.elasticsearch.search.aggregations.metrics;

import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.reducers.Reducer;

import java.util.List;
import java.util.Map;

public abstract class InternalMetricsAggregation extends InternalAggregation {

    protected InternalMetricsAggregation() {} // for serialization

    protected InternalMetricsAggregation(String name, Map<String, Object> metaData) {
        super(name, metaData);
    protected InternalMetricsAggregation(String name, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, reducers, metaData);
    }
}

@ -18,6 +18,7 @@
 */
package org.elasticsearch.search.aggregations.metrics;

import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

import java.util.List;

@ -34,8 +35,8 @@ public abstract class InternalNumericMetricsAggregation extends InternalMetricsA

        protected SingleValue() {}

        protected SingleValue(String name, Map<String, Object> metaData) {
            super(name, metaData);
        protected SingleValue(String name, List<Reducer> reducers, Map<String, Object> metaData) {
            super(name, reducers, metaData);
        }

        @Override

@ -64,8 +65,8 @@ public abstract class InternalNumericMetricsAggregation extends InternalMetricsA

        protected MultiValue() {}

        protected MultiValue(String name, Map<String, Object> metaData) {
            super(name, metaData);
        protected MultiValue(String name, List<Reducer> reducers, Map<String, Object> metaData) {
            super(name, reducers, metaData);
        }

        public abstract double value(String name);

@ -92,8 +93,8 @@ public abstract class InternalNumericMetricsAggregation extends InternalMetricsA

    private InternalNumericMetricsAggregation() {} // for serialization

    private InternalNumericMetricsAggregation(String name, Map<String, Object> metaData) {
        super(name, metaData);
    private InternalNumericMetricsAggregation(String name, List<Reducer> reducers, Map<String, Object> metaData) {
        super(name, reducers, metaData);
    }

}

@ -22,14 +22,17 @@ package org.elasticsearch.search.aggregations.metrics;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorBase;
import org.elasticsearch.search.aggregations.AggregatorFactories;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

public abstract class MetricsAggregator extends AggregatorBase {

    protected MetricsAggregator(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, AggregatorFactories.EMPTY, context, parent, metaData);
    protected MetricsAggregator(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        super(name, AggregatorFactories.EMPTY, context, parent, reducers, metaData);
    }
}

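Metrics aggregators are leaves: they pass `AggregatorFactories.EMPTY` because they can have no sub-aggregations, yet they now accept a reducer list all the same; the sub-aggregation list and the reducer list travel separately. A toy illustration of that separation (hypothetical types):

[source,java]
----
import java.util.Collections;
import java.util.List;

class MetricsLeafSketch {
    final List<Object> subAggregations; // always empty for a leaf metric
    final List<Object> reducers;        // may still be non-empty

    MetricsLeafSketch(List<Object> reducers) {
        this.subAggregations = Collections.emptyList();
        this.reducers = reducers;
    }
}
----
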
@ -19,9 +19,11 @@
package org.elasticsearch.search.aggregations.metrics;

import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -29,14 +31,16 @@ import java.util.Map;
 */
public abstract class NumericMetricsAggregator extends MetricsAggregator {

    private NumericMetricsAggregator(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name, context, parent, metaData);
    private NumericMetricsAggregator(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers,
            Map<String, Object> metaData) throws IOException {
        super(name, context, parent, reducers, metaData);
    }

    public static abstract class SingleValue extends NumericMetricsAggregator {

        protected SingleValue(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
            super(name, context, parent, metaData);
        protected SingleValue(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers,
                Map<String, Object> metaData) throws IOException {
            super(name, context, parent, reducers, metaData);
        }

        public abstract double metric(long owningBucketOrd);

@ -44,8 +48,9 @@ public abstract class NumericMetricsAggregator extends MetricsAggregator {

    public static abstract class MultiValue extends NumericMetricsAggregator {

        protected MultiValue(String name, AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
            super(name, context, parent, metaData);
        protected MultiValue(String name, AggregationContext context, Aggregator parent, List<Reducer> reducers,
                Map<String, Object> metaData) throws IOException {
            super(name, context, parent, reducers, metaData);
        }

        public abstract boolean hasMetric(String name);

@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.support.AggregationContext;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;

@ -37,6 +38,7 @@ import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;
import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**

@ -51,8 +53,8 @@ public class AvgAggregator extends NumericMetricsAggregator.SingleValue {
    ValueFormatter formatter;

    public AvgAggregator(String name, ValuesSource.Numeric valuesSource, @Nullable ValueFormatter formatter,
            AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
        super(name,context, parent, metaData);
            AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
        super(name, context, parent, reducers, metaData);
        this.valuesSource = valuesSource;
        this.formatter = formatter;
        if (valuesSource != null) {

@ -103,12 +105,12 @@ public class AvgAggregator extends NumericMetricsAggregator.SingleValue {
        if (valuesSource == null || bucket >= sums.size()) {
            return buildEmptyAggregation();
        }
        return new InternalAvg(name, sums.get(bucket), counts.get(bucket), formatter, metaData());
        return new InternalAvg(name, sums.get(bucket), counts.get(bucket), formatter, reducers(), metaData());
    }

    @Override
    public InternalAggregation buildEmptyAggregation() {
        return new InternalAvg(name, 0.0, 0l, formatter, metaData());
        return new InternalAvg(name, 0.0, 0l, formatter, reducers(), metaData());
    }

    public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {

@ -118,13 +120,15 @@ public class AvgAggregator extends NumericMetricsAggregator.SingleValue {
        }

        @Override
        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
            return new AvgAggregator(name, null, config.formatter(), aggregationContext, parent, metaData);
        protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
                Map<String, Object> metaData) throws IOException {
            return new AvgAggregator(name, null, config.formatter(), aggregationContext, parent, reducers, metaData);
        }

        @Override
        protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
            return new AvgAggregator(name, valuesSource, config.formatter(), aggregationContext, parent, metaData);
        protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, AggregationContext aggregationContext, Aggregator parent,
                boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
            return new AvgAggregator(name, valuesSource, config.formatter(), aggregationContext, parent, reducers, metaData);
        }
    }

InternalAvg.java

@@ -25,6 +25,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.metrics.InternalNumericMetricsAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@@ -57,8 +58,9 @@ public class InternalAvg extends InternalNumericMetricsAggregation.SingleValue i

     InternalAvg() {} // for serialization

-    public InternalAvg(String name, double sum, long count, @Nullable ValueFormatter formatter, Map<String, Object> metaData) {
-        super(name, metaData);
+    public InternalAvg(String name, double sum, long count, @Nullable ValueFormatter formatter, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, reducers, metaData);
         this.sum = sum;
         this.count = count;
         this.valueFormatter = formatter;
@@ -80,14 +82,14 @@ public class InternalAvg extends InternalNumericMetricsAggregation.SingleValue i
     }

     @Override
-    public InternalAvg reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public InternalAvg doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
         long count = 0;
         double sum = 0;
         for (InternalAggregation aggregation : aggregations) {
             count += ((InternalAvg) aggregation).count;
             sum += ((InternalAvg) aggregation).sum;
         }
-        return new InternalAvg(getName(), sum, count, valueFormatter, getMetaData());
+        return new InternalAvg(getName(), sum, count, valueFormatter, reducers(), getMetaData());
     }

     @Override
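The rename from reduce to doReduce is the key behavioural change in this file. It implies a template method in the base class, which is not part of this excerpt: a final reduce() first merges the shard-level results via doReduce() and then lets each attached reducer transform the merged result. A speculative sketch of that contract, with simplified types:

import java.util.List;

// Speculative rendering of the base-class contract implied by the rename.
abstract class InternalAggregationSketch {

    interface Reducer {
        // a reducer may rewrite or augment the aggregation it is given
        InternalAggregationSketch reduce(InternalAggregationSketch aggregation);
    }

    private final List<Reducer> reducers;

    protected InternalAggregationSketch(List<Reducer> reducers) {
        this.reducers = reducers;
    }

    protected List<Reducer> reducers() {
        return reducers;
    }

    // the framework calls this; subclasses can no longer bypass the reducers
    public final InternalAggregationSketch reduce(List<InternalAggregationSketch> aggregations) {
        InternalAggregationSketch result = doReduce(aggregations);
        for (Reducer reducer : reducers) {
            result = reducer.reduce(result);
        }
        return result;
    }

    // subclasses implement only the ordinary shard-result merge, as InternalAvg does above
    protected abstract InternalAggregationSketch doReduce(List<InternalAggregationSketch> aggregations);
}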
CardinalityAggregator.java

@@ -42,11 +42,13 @@ import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 /**
@@ -66,8 +68,8 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue
     private ValueFormatter formatter;

     public CardinalityAggregator(String name, ValuesSource valuesSource, boolean rehash, int precision, @Nullable ValueFormatter formatter,
-            AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        super(name, context, parent, metaData);
+            AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+        super(name, context, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.rehash = rehash;
         this.precision = precision;
@@ -156,12 +158,12 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue
         // this Aggregator (and its HLL++ counters) is released.
         HyperLogLogPlusPlus copy = new HyperLogLogPlusPlus(precision, BigArrays.NON_RECYCLING_INSTANCE, 1);
         copy.merge(0, counts, owningBucketOrdinal);
-        return new InternalCardinality(name, copy, formatter, metaData());
+        return new InternalCardinality(name, copy, formatter, reducers(), metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
-        return new InternalCardinality(name, null, formatter, metaData());
+        return new InternalCardinality(name, null, formatter, reducers(), metaData());
     }

     @Override
CardinalityAggregatorFactory.java

@@ -22,12 +22,14 @@ package org.elasticsearch.search.aggregations.metrics.cardinality;
 import org.elasticsearch.search.aggregations.AggregationExecutionException;
 import org.elasticsearch.search.aggregations.Aggregator;
 import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
 import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<ValuesSource> {
@@ -46,16 +48,19 @@ final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<V
     }

     @Override
-    protected Aggregator createUnmapped(AggregationContext context, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        return new CardinalityAggregator(name, null, true, precision(parent), config.formatter(), context, parent, metaData);
+    protected Aggregator createUnmapped(AggregationContext context, Aggregator parent, List<Reducer> reducers, Map<String, Object> metaData)
+            throws IOException {
+        return new CardinalityAggregator(name, null, true, precision(parent), config.formatter(), context, parent, reducers, metaData);
     }

     @Override
-    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
+    protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent,
+            boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
         if (!(valuesSource instanceof ValuesSource.Numeric) && !rehash) {
             throw new AggregationExecutionException("Turning off rehashing for cardinality aggregation [" + name + "] on non-numeric values in not allowed");
         }
-        return new CardinalityAggregator(name, valuesSource, rehash, precision(parent), config.formatter(), context, parent, metaData);
+        return new CardinalityAggregator(name, valuesSource, rehash, precision(parent), config.formatter(), context, parent, reducers,
+                metaData);
     }

     /*
InternalCardinality.java

@@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.search.aggregations.AggregationStreams;
 import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.metrics.InternalNumericMetricsAggregation;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatter;
 import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;

@@ -53,8 +54,9 @@ public final class InternalCardinality extends InternalNumericMetricsAggregation

     private HyperLogLogPlusPlus counts;

-    InternalCardinality(String name, HyperLogLogPlusPlus counts, @Nullable ValueFormatter formatter, Map<String, Object> metaData) {
-        super(name, metaData);
+    InternalCardinality(String name, HyperLogLogPlusPlus counts, @Nullable ValueFormatter formatter, List<Reducer> reducers,
+            Map<String, Object> metaData) {
+        super(name, reducers, metaData);
         this.counts = counts;
         this.valueFormatter = formatter;
     }
@@ -99,14 +101,14 @@ public final class InternalCardinality extends InternalNumericMetricsAggregation
     }

     @Override
-    public InternalAggregation reduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
+    public InternalAggregation doReduce(List<InternalAggregation> aggregations, ReduceContext reduceContext) {
         InternalCardinality reduced = null;
         for (InternalAggregation aggregation : aggregations) {
             final InternalCardinality cardinality = (InternalCardinality) aggregation;
             if (cardinality.counts != null) {
                 if (reduced == null) {
                     reduced = new InternalCardinality(name, new HyperLogLogPlusPlus(cardinality.counts.precision(),
-                            BigArrays.NON_RECYCLING_INSTANCE, 1), this.valueFormatter, getMetaData());
+                            BigArrays.NON_RECYCLING_INSTANCE, 1), this.valueFormatter, reducers(), getMetaData());
                 }
                 reduced.merge(cardinality);
             }
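Unlike InternalAvg, which folds plain sums and counts, InternalCardinality reduces by merging HyperLogLog++ sketches: shards whose counts are null (empty results) are skipped, and the accumulator is created lazily on the first non-null sketch. The same null-safe accumulator shape, in a self-contained sketch where a set stands in for the HLL++ structure:

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Self-contained analogue of the doReduce loop above: a HashSet stands in
// for the HyperLogLogPlusPlus sketch, but the accumulator pattern is the same.
class CardinalitySketch {
    final Set<String> values; // null means "empty shard result"

    CardinalitySketch(Set<String> values) {
        this.values = values;
    }

    static CardinalitySketch reduce(List<CardinalitySketch> shardResults) {
        CardinalitySketch reduced = null;
        for (CardinalitySketch shard : shardResults) {
            if (shard.values != null) {              // skip empty shard results
                if (reduced == null) {               // lazily create the accumulator
                    reduced = new CardinalitySketch(new HashSet<String>());
                }
                reduced.values.addAll(shard.values); // merge, as reduced.merge(cardinality) does
            }
        }
        return reduced; // may still be null if every shard was empty
    }

    public static void main(String[] args) {
        CardinalitySketch a = new CardinalitySketch(new HashSet<>(Arrays.asList("x", "y")));
        CardinalitySketch b = new CardinalitySketch(null);
        CardinalitySketch c = new CardinalitySketch(new HashSet<>(Arrays.asList("y", "z")));
        System.out.println(reduce(Arrays.asList(a, b, c)).values.size()); // prints 3
    }
}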
GeoBoundsAggregator.java

@@ -30,12 +30,14 @@ import org.elasticsearch.search.aggregations.InternalAggregation;
 import org.elasticsearch.search.aggregations.LeafBucketCollector;
 import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;
 import org.elasticsearch.search.aggregations.metrics.MetricsAggregator;
+import org.elasticsearch.search.aggregations.reducers.Reducer;
 import org.elasticsearch.search.aggregations.support.AggregationContext;
 import org.elasticsearch.search.aggregations.support.ValuesSource;
 import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;
 import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;

 import java.io.IOException;
+import java.util.List;
 import java.util.Map;

 public final class GeoBoundsAggregator extends MetricsAggregator {
@@ -49,9 +51,10 @@ public final class GeoBoundsAggregator extends MetricsAggregator {
     DoubleArray negLefts;
     DoubleArray negRights;

-    protected GeoBoundsAggregator(String name, AggregationContext aggregationContext,
-            Aggregator parent, ValuesSource.GeoPoint valuesSource, boolean wrapLongitude, Map<String, Object> metaData) throws IOException {
-        super(name, aggregationContext, parent, metaData);
+    protected GeoBoundsAggregator(String name, AggregationContext aggregationContext, Aggregator parent,
+            ValuesSource.GeoPoint valuesSource, boolean wrapLongitude, List<Reducer> reducers,
+            Map<String, Object> metaData) throws IOException {
+        super(name, aggregationContext, parent, reducers, metaData);
         this.valuesSource = valuesSource;
         this.wrapLongitude = wrapLongitude;
         if (valuesSource != null) {
@@ -149,13 +152,13 @@ public final class GeoBoundsAggregator extends MetricsAggregator {
         double posRight = posRights.get(owningBucketOrdinal);
         double negLeft = negLefts.get(owningBucketOrdinal);
         double negRight = negRights.get(owningBucketOrdinal);
-        return new InternalGeoBounds(name, top, bottom, posLeft, posRight, negLeft, negRight, wrapLongitude, metaData());
+        return new InternalGeoBounds(name, top, bottom, posLeft, posRight, negLeft, negRight, wrapLongitude, reducers(), metaData());
     }

     @Override
     public InternalAggregation buildEmptyAggregation() {
         return new InternalGeoBounds(name, Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY,
-                Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, wrapLongitude, metaData());
+                Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, wrapLongitude, reducers(), metaData());
     }

     @Override
@@ -173,14 +176,15 @@ public final class GeoBoundsAggregator extends MetricsAggregator {
     }

     @Override
-    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, Map<String, Object> metaData) throws IOException {
-        return new GeoBoundsAggregator(name, aggregationContext, parent, null, wrapLongitude, metaData);
+    protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<Reducer> reducers,
+            Map<String, Object> metaData) throws IOException {
+        return new GeoBoundsAggregator(name, aggregationContext, parent, null, wrapLongitude, reducers, metaData);
     }

     @Override
     protected Aggregator doCreateInternal(ValuesSource.GeoPoint valuesSource, AggregationContext aggregationContext,
-            Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {
-        return new GeoBoundsAggregator(name, aggregationContext, parent, valuesSource, wrapLongitude, metaData);
+            Aggregator parent, boolean collectsFromSingleBucket, List<Reducer> reducers, Map<String, Object> metaData) throws IOException {
+        return new GeoBoundsAggregator(name, aggregationContext, parent, valuesSource, wrapLongitude, reducers, metaData);
     }

 }
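A practical consequence of these signature changes, visible at every call site in the diff: code that instantiates aggregators or internal aggregations directly, such as unit tests or plugins, must now supply a reducer list, even if it is empty. A hypothetical call site using the InternalAvg constructor as it appears in this diff; the argument values are made up, and the import path for InternalAvg is taken from the upstream source tree rather than from this excerpt:

import java.util.Collections;
import java.util.Map;

import org.elasticsearch.search.aggregations.metrics.avg.InternalAvg;
import org.elasticsearch.search.aggregations.reducers.Reducer;

// Assumes an Elasticsearch 2.0 snapshot classpath containing this PR.
// Before this PR: new InternalAvg("avg_price", 42.0, 7L, null, metaData)
// After this PR:  the reducer list is a required argument.
class CallSiteExample {
    static InternalAvg example(Map<String, Object> metaData) {
        return new InternalAvg("avg_price", 42.0, 7L, /* formatter */ null,
                Collections.<Reducer>emptyList(), metaData);
    }
}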