[[search-aggregations-bucket-datehistogram-aggregation]]
=== Date Histogram Aggregation

This multi-bucket aggregation is similar to the normal
<<search-aggregations-bucket-histogram-aggregation,histogram>>, but it can
only be used with date values. Because dates are represented internally in
Elasticsearch as long values, it is possible, but not as accurate, to use the
normal `histogram` on dates as well. The main difference between the two APIs is
that here the interval can be specified using date/time expressions. Time-based
data requires special support because time-based intervals are not always a
fixed length.

==== Setting intervals

There seems to be no limit to the creativity we humans apply to setting our
clocks and calendars. We've invented leap years and leap seconds, standard and
daylight savings times, and timezone offsets of 30 or 45 minutes rather than a
full hour. While these creations help keep us in sync with the cosmos and our
environment, they can make specifying time intervals accurately a real challenge.
The only universal truth our researchers have yet to disprove is that a
millisecond is always the same duration, and a second is always 1000 milliseconds.
Beyond that, things get complicated.

Generally speaking, when you specify a single time unit, such as 1 hour or 1 day, you
are working with a _calendar interval_, but multiples, such as 6 hours or 3 days, are
_fixed-length intervals_.

For example, a specification of 1 day (1d) from now is a calendar interval that
means "at this exact time tomorrow" no matter the length of the day. A change to or from
daylight savings time that results in a 23 or 25 hour day is compensated for and the
specification of "this exact time tomorrow" is maintained. But if you specify 2 or
more days, each day must be of the same fixed duration (24 hours). In this case, if
the specified interval includes the change to or from daylight savings time, the
interval will end an hour sooner or later than you expect.

There are similar differences to consider when you specify single versus multiple
minutes or hours. Multiple time periods longer than a day are not supported.

Here are the valid time specifications and their meanings:

milliseconds (ms) ::
Fixed length interval; supports multiples.

seconds (s) ::
1000 milliseconds; fixed length interval (except for the last second of a
minute that contains a leap-second, which is 2000ms long); supports multiples.

minutes (m) ::
All minutes begin at 00 seconds.

* One minute (1m) is the interval between 00 seconds of the first minute and 00
seconds of the following minute in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the
hour is the same at the start and end.
* Multiple minutes (__n__m) are intervals of exactly 60x1000=60,000 milliseconds
each.

hours (h) ::
All hours begin at 00 minutes and 00 seconds.

* One hour (1h) is the interval between 00:00 minutes of the first hour and 00:00
minutes of the following hour in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the hour
is the same at the start and end.
* Multiple hours (__n__h) are intervals of exactly 60x60x1000=3,600,000 milliseconds
each.

days (d) ::
All days begin at the earliest possible time, which is usually 00:00:00
(midnight).

* One day (1d) is the interval between the start of the day and the start of
the following day in the specified timezone, compensating for any intervening
time changes.
* Multiple days (__n__d) are intervals of exactly 24x60x60x1000=86,400,000
milliseconds each.

weeks (w) ::

* One week (1w) is the interval between the start day_of_week:hour:minute:second
and the same day of the week and time of the following week in the specified
timezone.
* Multiple weeks (__n__w) are not supported.

months (M) ::

* One month (1M) is the interval between the start day of the month and time of
day and the same day of the month and time of day in the following month in the
specified timezone, so that the day of the month and time of day are the same at
the start and end.
* Multiple months (__n__M) are not supported.

quarters (q) ::

* One quarter (1q) is the interval between the start day of the month and
time of day and the same day of the month and time of day three months later,
so that the day of the month and time of day are the same at the start and end.
* Multiple quarters (__n__q) are not supported.

years (y) ::

* One year (1y) is the interval between the start day of the month and time of
day and the same day of the month and time of day the following year in the
specified timezone, so that the date and time are the same at the start and end.
* Multiple years (__n__y) are not supported.

NOTE: In all cases, when the specified end time does not exist, the actual end time is
the closest available time after the specified end.
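
The fixed-length multiples above all reduce to simple millisecond arithmetic; a quick sanity check of the figures quoted in the list:

```python
# Fixed-length intervals expressed in milliseconds, matching the
# 60,000 / 3,600,000 / 86,400,000 figures quoted above.
SECOND = 1000
MINUTE = 60 * SECOND
HOUR = 60 * MINUTE
DAY = 24 * HOUR

print(MINUTE, HOUR, DAY)  # 60000 3600000 86400000
```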

Widely distributed applications must also consider vagaries such as countries that
start and stop daylight savings time at 12:01 A.M., so end up with one minute of
Sunday followed by an additional 59 minutes of Saturday once a year, and countries
that decide to move across the international date line. Situations like
that can make irregular timezone offsets seem easy.

As always, rigorous testing, especially around time-change events, will ensure
that your time interval specification is what you intend it to be.

WARNING: To avoid unexpected results, all connected servers and clients must sync to a
reliable network time service.

==== Examples

Requesting bucket intervals of a month:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

You can also specify time values using abbreviations supported by
<<time-units,time units>> parsing.
Note that fractional time values are not supported, but you can address this by
shifting to another time unit (e.g., `1.5h` could instead be specified as `90m`).

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "90m"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

===== Keys

Internally, a date is represented as a 64 bit number representing a timestamp
in milliseconds-since-the-epoch (01/01/1970 midnight UTC). These timestamps are
returned as the ++key++ name of the bucket. The `key_as_string` is the same
timestamp converted to a formatted date string using the `format` parameter
specification:

TIP: If you don't specify `format`, the first date
<<mapping-date-format,format>> specified in the field mapping is used.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Supports expressive date <<date-format-pattern,format pattern>>

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "sales_over_time": {
            "buckets": [
                {
                    "key_as_string": "2015-01-01",
                    "key": 1420070400000,
                    "doc_count": 3
                },
                {
                    "key_as_string": "2015-02-01",
                    "key": 1422748800000,
                    "doc_count": 2
                },
                {
                    "key_as_string": "2015-03-01",
                    "key": 1425168000000,
                    "doc_count": 2
                }
            ]
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
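
The relationship between `key` and `key_as_string` in the response above can be reproduced directly on the client side: the key is epoch milliseconds in UTC, and the string is that instant rendered with the `format` pattern (here `yyyy-MM-dd`, whose Python `strftime` equivalent is `%Y-%m-%d`):

```python
from datetime import datetime, timezone

# `key` is milliseconds since the epoch (UTC); this is the first bucket's key.
key = 1420070400000
dt = datetime.fromtimestamp(key / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d"))  # 2015-01-01
```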

===== Timezone

Date-times are stored in Elasticsearch in UTC. By default, all bucketing and
rounding is also done in UTC. Use the `time_zone` parameter to indicate
that bucketing should use a different timezone.

You can specify timezones as either an ISO 8601 UTC offset (e.g. `+01:00` or
`-08:00`) or as a timezone ID as specified in the IANA timezone database,
such as `America/Los_Angeles`.

Consider the following example:

[source,js]
---------------------------------
PUT my_index/_doc/1?refresh
{
  "date": "2015-10-01T00:30:00Z"
}

PUT my_index/_doc/2?refresh
{
  "date": "2015-10-01T01:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":    "date",
        "interval": "day"
      }
    }
  }
}
---------------------------------
// CONSOLE

If you don't specify a timezone, UTC is used. This would result in both of these
documents being placed into the same day bucket, which starts at midnight UTC
on 1 October 2015:

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-10-01T00:00:00.000Z",
          "key": 1443657600000,
          "doc_count": 2
        }
      ]
    }
  }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

If you specify a `time_zone` of `-01:00`, midnight in that timezone is one hour
before midnight UTC:

[source,js]
---------------------------------
GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day",
        "time_zone": "-01:00"
      }
    }
  }
}
---------------------------------
// CONSOLE
// TEST[continued]

Now the first document falls into the bucket for 30 September 2015, while the
second document falls into the bucket for 1 October 2015:

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-09-30T00:00:00.000-01:00", <1>
          "key": 1443574800000,
          "doc_count": 1
        },
        {
          "key_as_string": "2015-10-01T00:00:00.000-01:00", <1>
          "key": 1443661200000,
          "doc_count": 1
        }
      ]
    }
  }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

<1> The `key_as_string` value represents midnight on each day
in the specified timezone.
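
The bucketing can be checked by hand: converting each document's UTC timestamp into the `-01:00` timezone determines which local day it falls on. A small sketch:

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=-1))  # the -01:00 time_zone from the request

doc1 = datetime(2015, 10, 1, 0, 30, tzinfo=timezone.utc)
doc2 = datetime(2015, 10, 1, 1, 30, tzinfo=timezone.utc)

# Local midnight is 01:00 UTC, so 00:30 UTC is still 30 September locally.
print(doc1.astimezone(tz).date())  # 2015-09-30
print(doc2.astimezone(tz).date())  # 2015-10-01
```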

WARNING: When using time zones that follow DST (daylight savings time) changes,
buckets close to the moment when those changes happen can have slightly different
sizes than you would expect from the used `interval`.
For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am,
clocks were turned forward 1 hour to 3am local time. If you use `day` as the `interval`,
the bucket covering that day will only hold data for 23 hours instead of the usual
24 hours for other buckets. The same is true for shorter intervals, like 12h,
where you'll have only an 11h bucket on the morning of 27 March when the DST shift
happens.

===== Offset

Use the `offset` parameter to change the start value of each bucket by the
specified positive (`+`) or negative (`-`) offset duration, such as `1h` for
an hour, or `1d` for a day. See <<time-units>> for more possible time
duration options.

For example, when using an interval of `day`, each bucket runs from midnight
to midnight. Setting the `offset` parameter to `+6h` changes each bucket
to run from 6am to 6am:

[source,js]
-----------------------------
PUT my_index/_doc/1?refresh
{
  "date": "2015-10-01T05:30:00Z"
}

PUT my_index/_doc/2?refresh
{
  "date": "2015-10-01T06:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":    "date",
        "interval": "day",
        "offset":   "+6h"
      }
    }
  }
}
-----------------------------
// CONSOLE

Instead of a single bucket starting at midnight, the above request groups the
documents into buckets starting at 6am:

[source,js]
-----------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-09-30T06:00:00.000Z",
          "key": 1443592800000,
          "doc_count": 1
        },
        {
          "key_as_string": "2015-10-01T06:00:00.000Z",
          "key": 1443679200000,
          "doc_count": 1
        }
      ]
    }
  }
}
-----------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

NOTE: The start `offset` of each bucket is calculated after `time_zone`
adjustments have been made.
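
The bucket computation for a `day` interval with an offset can be sketched as: subtract the offset, floor to the bucket boundary, then add the offset back. The helper name `day_bucket_start` below is illustrative, not an Elasticsearch API:

```python
from datetime import datetime, timedelta, timezone

def day_bucket_start(ts, offset=timedelta(0)):
    """Floor a UTC timestamp to the start of its day bucket, shifted by offset."""
    shifted = ts - offset
    floored = shifted.replace(hour=0, minute=0, second=0, microsecond=0)
    return floored + offset

doc = datetime(2015, 10, 1, 5, 30, tzinfo=timezone.utc)
# With offset +6h, the 05:30 document lands in the bucket starting at 06:00
# on the previous day.
print(day_bucket_start(doc, timedelta(hours=6)).isoformat())
# 2015-09-30T06:00:00+00:00
```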

===== Keyed Response

Setting the `keyed` flag to `true` associates a unique string key with each
bucket and returns the ranges as a hash rather than an array:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd",
                "keyed": true
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "sales_over_time": {
            "buckets": {
                "2015-01-01": {
                    "key_as_string": "2015-01-01",
                    "key": 1420070400000,
                    "doc_count": 3
                },
                "2015-02-01": {
                    "key_as_string": "2015-02-01",
                    "key": 1422748800000,
                    "doc_count": 2
                },
                "2015-03-01": {
                    "key_as_string": "2015-03-01",
                    "key": 1425168000000,
                    "doc_count": 2
                }
            }
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

===== Scripts

As with the normal <<search-aggregations-bucket-histogram-aggregation,histogram>>,
both document-level scripts and value-level scripts are supported. You can
control the order of the returned buckets using the `order` setting and filter
the returned buckets based on a `min_doc_count` setting (by default all buckets
between the first bucket that matches documents and the last one are returned).
This histogram also supports the `extended_bounds` setting, which enables
extending the bounds of the histogram beyond the data itself. For more
information, see
<<search-aggregations-bucket-histogram-aggregation-extended-bounds,`Extended Bounds`>>.

===== Missing value

The `missing` parameter defines how to treat documents that are missing a value.
By default, they are ignored, but it is also possible to treat them as if they
had a value.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sale_date" : {
            "date_histogram" : {
                "field" : "date",
                "interval": "year",
                "missing": "2000/01/01" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Documents without a value in the `date` field will fall into the
same bucket as documents that have the value `2000-01-01`.

===== Order

By default the returned buckets are sorted by their `key` ascending, but you can
control the order using the `order` setting. This setting supports the same
`order` functionality as
<<search-aggregations-bucket-terms-aggregation-order,`Terms Aggregation`>>.

deprecated[6.0.0, Use `_key` instead of `_time` to order buckets by their dates/keys]

===== Using a script to aggregate by day of the week

When you need to aggregate the results by day of the week, use a script that
returns the day of the week:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs": {
        "dayOfWeek": {
            "terms": {
                "script": {
                    "lang": "painless",
                    "source": "doc['date'].value.dayOfWeekEnum.value"
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

Response:

[source,js]
--------------------------------------------------
{
  ...
  "aggregations": {
    "dayOfWeek": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "7",
          "doc_count": 4
        },
        {
          "key": "4",
          "doc_count": 3
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

The response will contain all the buckets having the day of the week as
key: 1 for Monday, 2 for Tuesday, ..., 7 for Sunday.
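
A small sketch of mapping those numeric keys back to names on the client side (1 = Monday through 7 = Sunday, matching the `java.time.DayOfWeek` values that the Painless script returns):

```python
import calendar

# calendar.day_name is Monday-first, so key N maps to day_name[N - 1]
day_names = {str(n): calendar.day_name[n - 1] for n in range(1, 8)}

print(day_names["7"])  # Sunday
print(day_names["4"])  # Thursday
```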