# Moving Average Query
## Overview
Moving Average Query is an extension which provides support for Moving Average and other Aggregate Window Functions in Druid queries.
These Aggregate Window Functions consume standard Druid Aggregators and output additional windowed aggregates called Averagers.
## High level algorithm
Moving Average encapsulates a groupBy query (or a timeseries query when there are no dimensions) in order to rely on the maturity of these query types.
It runs the query in two main phases:
- Runs an inner groupBy or timeseries query to compute Aggregators (e.g., the daily count of events).
- Passes over the aggregated results on the Broker to compute Averagers (e.g., the moving 7-day average of the daily count).
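The two phases above can be sketched in Python. This is an illustrative sketch of the idea, not the extension's actual code; real bucket/gap handling is omitted.

```python
from collections import deque

def inner_aggregation(rows, bucket_key, value_key):
    """Phase 1: per-bucket sums, as the inner groupBy/timeseries query would return."""
    totals = {}
    for row in rows:
        totals[row[bucket_key]] = totals.get(row[bucket_key], 0) + row[value_key]
    return sorted(totals.items())  # [(bucket, aggregate), ...] in time order

def moving_average(aggregates, buckets):
    """Phase 2 (Broker side): trailing mean over `buckets` periods, including
    the current one. Like longMean/doubleMean, the divisor is always `buckets`,
    so lookback periods before the data starts pull the first averages down."""
    window = deque(maxlen=buckets)
    out = []
    for bucket, value in aggregates:
        window.append(value)
        out.append((bucket, value, sum(window) / buckets))
    return out
```

For example, daily counts fed through a 7-day window: `moving_average([(1, 5), (2, 4)], 7)` yields averages `5/7` for day 1 and `9/7` for day 2.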
Main enhancements provided by this extension:
- Functionality: extends Druid query functionality (an initial introduction of Window Functions).
- Performance: improves the performance of such moving aggregations by eliminating multiple segment scans.
## Operations
To use this extension, make sure to load druid-moving-average-query on the Broker only.
## Configuration
There are currently no configuration properties specific to Moving Average.
Limitations
- movingAverage is missing support for the following groupBy properties:
subtotalsSpec
,virtualColumns
. - movingAverage is missing support for the following timeseries properties:
descending
. - movingAverage is missing support for SQL-compatible null handling (So setting druid.generic.useDefaultValueForNull in configuration will give an error).
## Query spec
Most properties in the query spec are derived from the groupBy and timeseries query types; see the documentation for those query types.
property | description | required? |
---|---|---|
queryType | This String should always be "movingAverage"; this is the first thing Druid looks at to figure out how to interpret the query. | yes |
dataSource | A String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information. | yes |
dimensions | A JSON list of DimensionSpec | no |
limitSpec | See LimitSpec | no |
having | See Having | no |
granularity | A period granularity; See Period Granularities | yes |
filter | See Filters | no |
aggregations | Aggregations forms the input to Averagers; See Aggregations | yes |
postAggregations | Supports only aggregations as input; See Post Aggregations | no |
intervals | A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over. | yes |
context | An additional JSON Object which can be used to specify certain flags. | no |
averagers | Defines the moving average function; See Averagers | yes |
postAveragers | Supports input from both averagers and aggregations; syntax is identical to postAggregations (see Post Aggregations) | no |
## Averagers
Averagers are used to define the Moving-Average function. Averagers are not limited to an average - they can also provide other types of window functions such as MAX()/MIN().
### Properties
These are properties which are common to all Averagers:
property | description | required? |
---|---|---|
type | Averager type; See Averager types | yes |
name | Averager name | yes |
fieldName | Input name (An aggregation name) | yes |
buckets | Number of lookback buckets (time periods), including current one. Must be >0 | yes |
cycleSize | Cycle size; Used to calculate day-of-week option; See Cycle size (Day of Week) | no, defaults to 1 |
Averager types:
- Standard averagers:
- doubleMean
- doubleMeanNoNulls
- doubleSum
- doubleMax
- doubleMin
- longMean
- longMeanNoNulls
- longSum
- longMax
- longMin
### Standard averagers
These averagers offer five functions:
- Mean (Average)
- MeanNoNulls (ignores empty buckets)
- Sum
- Max
- Min
Ignoring nulls: Using a MeanNoNulls averager is useful when the interval starts at the beginning of the dataset. In that case, the first records ignore missing buckets and the average is not artificially low. However, this also means that empty days in a sparse dataset are ignored.
Example of usage:
```json
{ "type" : "doubleMean", "name" : <output_name>, "fieldName": <input_name> }
```
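The difference between the Mean and MeanNoNulls variants can be illustrated with a small sketch (hypothetical Python, with missing buckets represented as `None`; not the extension's code):

```python
def mean(window, buckets):
    """doubleMean-style: missing buckets still count toward the divisor."""
    return sum(v for v in window if v is not None) / buckets

def mean_no_nulls(window, buckets):
    """doubleMeanNoNulls-style: missing buckets are excluded from the divisor."""
    present = [v for v in window if v is not None]
    return sum(present) / len(present) if present else None

# A 7-bucket window at the very start of a dataset: 5 lookback buckets missing.
window = [None, None, None, None, None, 10.0, 20.0]
```

Here `mean(window, 7)` gives `30/7` (artificially low), while `mean_no_nulls(window, 7)` gives `15.0`.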
### Cycle size (Day of Week)
This optional parameter is used to calculate over a single bucket within each cycle instead of all buckets. A prime example would be weekly buckets, resulting in a Day of Week calculation. (Other examples: Month of year, Hour of day).
For example, when using these parameters:
- granularity: period=P1D (daily)
- buckets: 28
- cycleSize: 7
Within each output record, the averager will compute the result over the following buckets: current (#0), #7, #14, #21. Whereas without specifying cycleSize it would have computed over all 28 buckets.
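The bucket selection described above can be expressed as a short sketch (illustrative Python, not the extension's code):

```python
def cycle_bucket_indices(buckets, cycle_size=1):
    """Lookback bucket indices used by an averager: the current bucket (#0)
    and every cycle_size-th bucket before it, within the lookback window."""
    return list(range(0, buckets, cycle_size))
```

With the parameters above, `cycle_bucket_indices(28, 7)` returns `[0, 7, 14, 21]`, while the default `cycleSize` of 1 selects all 28 buckets.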
## Examples
All examples are based on the Wikipedia dataset provided in the Druid tutorials.
### Basic example
Calculating a 7-bucket moving average of Wikipedia edit deltas.
Query syntax:
```json
{
  "queryType": "movingAverage",
  "dataSource": "wikipedia",
  "granularity": {
    "type": "period",
    "period": "PT30M"
  },
  "intervals": [
    "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z"
  ],
  "aggregations": [
    {
      "name": "delta30Min",
      "fieldName": "delta",
      "type": "longSum"
    }
  ],
  "averagers": [
    {
      "name": "trailing30MinChanges",
      "fieldName": "delta30Min",
      "type": "longMean",
      "buckets": 7
    }
  ]
}
```
Result:
```json
[ {
  "version" : "v1",
  "timestamp" : "2015-09-12T00:30:00.000Z",
  "event" : {
    "delta30Min" : 30490,
    "trailing30MinChanges" : 4355.714285714285
  }
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T01:00:00.000Z",
  "event" : {
    "delta30Min" : 96526,
    "trailing30MinChanges" : 18145.14285714286
  }
}, {
  ...
  ...
  ...
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T23:00:00.000Z",
  "event" : {
    "delta30Min" : 119100,
    "trailing30MinChanges" : 198697.2857142857
  }
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T23:30:00.000Z",
  "event" : {
    "delta30Min" : 177882,
    "trailing30MinChanges" : 193890.0
  }
} ]
```
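The first two `trailing30MinChanges` values can be reproduced by hand: `longMean` divides by the full window size (7) even while the window is still filling, since the interval starts at the dataset's first bucket.

```python
# delta30Min for the 00:30 and 01:00 buckets, taken from the result above
deltas = [30490, 96526]

avg_00_30 = sum(deltas[:1]) / 7  # first output row
avg_01_00 = sum(deltas[:2]) / 7  # second output row
```

These match the `trailing30MinChanges` values 4355.714285714285 and 18145.14285714286 in the result.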
### Post averager example
Calculating a 7-bucket moving average of Wikipedia edit deltas, plus the ratio between the current period and the moving average.
Query syntax:
```json
{
  "queryType": "movingAverage",
  "dataSource": "wikipedia",
  "granularity": {
    "type": "period",
    "period": "PT30M"
  },
  "intervals": [
    "2015-09-12T22:00:00Z/2015-09-13T00:00:00Z"
  ],
  "aggregations": [
    {
      "name": "delta30Min",
      "fieldName": "delta",
      "type": "longSum"
    }
  ],
  "averagers": [
    {
      "name": "trailing30MinChanges",
      "fieldName": "delta30Min",
      "type": "longMean",
      "buckets": 7
    }
  ],
  "postAveragers" : [
    {
      "name": "ratioTrailing30MinChanges",
      "type": "arithmetic",
      "fn": "/",
      "fields": [
        {
          "type": "fieldAccess",
          "fieldName": "delta30Min"
        },
        {
          "type": "fieldAccess",
          "fieldName": "trailing30MinChanges"
        }
      ]
    }
  ]
}
```
Result:
```json
[ {
  "version" : "v1",
  "timestamp" : "2015-09-12T22:00:00.000Z",
  "event" : {
    "delta30Min" : 144269,
    "trailing30MinChanges" : 204088.14285714287,
    "ratioTrailing30MinChanges" : 0.7068955500319539
  }
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T22:30:00.000Z",
  "event" : {
    "delta30Min" : 242860,
    "trailing30MinChanges" : 214031.57142857142,
    "ratioTrailing30MinChanges" : 1.134692411867141
  }
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T23:00:00.000Z",
  "event" : {
    "delta30Min" : 119100,
    "trailing30MinChanges" : 198697.2857142857,
    "ratioTrailing30MinChanges" : 0.5994042624782422
  }
}, {
  "version" : "v1",
  "timestamp" : "2015-09-12T23:30:00.000Z",
  "event" : {
    "delta30Min" : 177882,
    "trailing30MinChanges" : 193890.0,
    "ratioTrailing30MinChanges" : 0.9174377224199288
  }
} ]
```
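The `ratioTrailing30MinChanges` values are simply the current aggregate divided by the trailing mean, which can be checked directly using the first row of the result above:

```python
# Values from the first result row
delta_30min = 144269
trailing_mean = 204088.14285714287

# The "/" arithmetic post averager over the two fieldAccess inputs
ratio = delta_30min / trailing_mean
```

This reproduces the reported ratio of 0.7068955500319539.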
### Cycle size example
Calculating an average over the first 10 minutes of every hour during the last 3 hours:
Query syntax:
```json
{
  "queryType": "movingAverage",
  "dataSource": "wikipedia",
  "granularity": {
    "type": "period",
    "period": "PT10M"
  },
  "intervals": [
    "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z"
  ],
  "aggregations": [
    {
      "name": "delta10Min",
      "fieldName": "delta",
      "type": "doubleSum"
    }
  ],
  "averagers": [
    {
      "name": "trailing10MinPerHourChanges",
      "fieldName": "delta10Min",
      "type": "doubleMeanNoNulls",
      "buckets": 18,
      "cycleSize": 6
    }
  ]
}
```