[role="xpack"]
[testenv="basic"]
[[rollup-get-job]]
=== Get rollup jobs API
++++
<titleabbrev>Get job</titleabbrev>
++++

experimental[]

This API returns the configuration, stats, and status of rollup jobs. The API can return the details for a single job
or for all jobs.

Note: This API only returns active (both `STARTED` and `STOPPED`) jobs. If a job was created, ran for a while, and was then deleted,
this API will not return any details about that job.

For details about a historical job, the <<rollup-get-rollup-caps,Rollup Capabilities API>> may be more useful.

==== Request

`GET _rollup/job/<job_id>`

//===== Description

==== Path Parameters

`job_id`::
  (string) Identifier for the job to retrieve. If omitted (or `_all` is used), all jobs will be returned.

==== Request Body

There is no request body for the Get Jobs API.

==== Authorization

You must have `monitor`, `monitor_rollup`, `manage` or `manage_rollup` cluster privileges to use this API.
For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].

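For example, a role that only needs to view rollup jobs could grant just the `monitor_rollup` cluster privilege. The following is an illustrative sketch (the `rollup_reader` role name is only an example), not a required setup:

[source,js]
--------------------------------------------------
PUT _security/role/rollup_reader
{
  "cluster": [ "monitor_rollup" ]
}
--------------------------------------------------
// NOTCONSOLE
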
==== Examples

If we have already created a rollup job named `sensor`, the details about the job can be retrieved with:

[source,js]
--------------------------------------------------
GET _rollup/job/sensor
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_rollup_job]

Which will yield the following response:

[source,js]
----
{
  "jobs" : [
    {
      "config" : {
        "id" : "sensor",
        "index_pattern" : "sensor-*",
        "rollup_index" : "sensor_rollup",
        "cron" : "*/30 * * * * ?",
        "groups" : {
          "date_histogram" : {
            "fixed_interval" : "1h",
            "delay": "7d",
            "field": "timestamp",
            "time_zone": "UTC"
          },
          "terms" : {
            "fields" : [
              "node"
            ]
          }
        },
        "metrics" : [
          {
            "field" : "temperature",
            "metrics" : [
              "min",
              "max",
              "sum"
            ]
          },
          {
            "field" : "voltage",
            "metrics" : [
              "avg"
            ]
          }
        ],
        "timeout" : "20s",
        "page_size" : 1000
      },
      "status" : {
        "job_state" : "stopped",
        "upgraded_doc_id": true
      },
      "stats" : {
        "pages_processed" : 0,
        "documents_processed" : 0,
        "rollups_indexed" : 0,
        "trigger_count" : 0,
        "index_failures": 0,
        "index_time_in_ms": 0,
        "index_total": 0,
        "search_failures": 0,
        "search_time_in_ms": 0,
        "search_total": 0
      }
    }
  ]
}
----
// TESTRESPONSE

The `jobs` array contains a single job (`id: sensor`) since we requested a single job in the endpoint's URL. The
details for this job contain three top-level parameters: `config`, `status`, and `stats`.

`config` holds the rollup job's configuration, which is identical to the configuration that was supplied when creating
the job via the <<rollup-put-job,Create Job API>>.

The `status` object holds the current status of the rollup job's indexer. The possible values and their meanings are:

- `stopped` means the indexer is paused and will not process data, even if its cron interval triggers. A stopped job can be started again, as sketched after this list
- `started` means the indexer is running, but not actively indexing data. When the cron interval triggers, the job's
indexer will begin to process data
- `indexing` means the indexer is actively processing data and creating new rollup documents. When in this state, any
subsequent cron interval triggers will be ignored because the job is already active with the prior trigger
- `abort` is a transient state, which is usually not witnessed by the user. The `abort` state is used if the task needs to
be shut down for some reason (the job has been deleted, an unrecoverable error has been encountered, etc.). Shortly after
the `abort` state is set, the job will remove itself from the cluster
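
For example, starting the stopped `sensor` job and then retrieving it again should show the state change. This is an illustrative sketch rather than a tested snippet:

[source,js]
--------------------------------------------------
POST _rollup/job/sensor/_start

GET _rollup/job/sensor
--------------------------------------------------
// NOTCONSOLE

After the start call succeeds, the `status.job_state` in the second response should read `"started"` instead of `"stopped"`.
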
Finally, the `stats` object provides transient statistics about the rollup job, such as how many documents have been
processed and how many rollup summary docs have been indexed. These stats are not persisted, so if a node is restarted
these stats will be reset.

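Because these counters are reset on restart, you may want to poll just the statistics. One way to do that is with the generic `filter_path` response-filtering parameter (a common request option, not a rollup-specific feature); this sketch trims the response down to the `stats` section of each returned job:

[source,js]
--------------------------------------------------
GET _rollup/job/sensor?filter_path=jobs.stats
--------------------------------------------------
// NOTCONSOLE
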
If we add another job, we can see how multi-job responses are handled:

[source,js]
--------------------------------------------------
PUT _rollup/job/sensor2 <1>
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups" : {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "1h",
      "delay": "7d"
    },
    "terms": {
      "fields": ["node"]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": ["min", "max", "sum"]
    },
    {
      "field": "voltage",
      "metrics": ["avg"]
    }
  ]
}

GET _rollup/job/_all <2>
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_rollup_job]
<1> We create a second job with name `sensor2`
<2> Then request all jobs by using `_all` in the Get Jobs API

Which will yield the following response:

[source,js]
----
{
  "jobs" : [
    {
      "config" : {
        "id" : "sensor2",
        "index_pattern" : "sensor-*",
        "rollup_index" : "sensor_rollup",
        "cron" : "*/30 * * * * ?",
        "groups" : {
          "date_histogram" : {
            "fixed_interval" : "1h",
            "delay": "7d",
            "field": "timestamp",
            "time_zone": "UTC"
          },
          "terms" : {
            "fields" : [
              "node"
            ]
          }
        },
        "metrics" : [
          {
            "field" : "temperature",
            "metrics" : [
              "min",
              "max",
              "sum"
            ]
          },
          {
            "field" : "voltage",
            "metrics" : [
              "avg"
            ]
          }
        ],
        "timeout" : "20s",
        "page_size" : 1000
      },
      "status" : {
        "job_state" : "stopped",
        "upgraded_doc_id": true
      },
      "stats" : {
        "pages_processed" : 0,
        "documents_processed" : 0,
        "rollups_indexed" : 0,
        "trigger_count" : 0,
        "index_failures": 0,
        "index_time_in_ms": 0,
        "index_total": 0,
        "search_failures": 0,
        "search_time_in_ms": 0,
        "search_total": 0
      }
    },
    {
      "config" : {
        "id" : "sensor",
        "index_pattern" : "sensor-*",
        "rollup_index" : "sensor_rollup",
        "cron" : "*/30 * * * * ?",
        "groups" : {
          "date_histogram" : {
            "fixed_interval" : "1h",
            "delay": "7d",
            "field": "timestamp",
            "time_zone": "UTC"
          },
          "terms" : {
            "fields" : [
              "node"
            ]
          }
        },
        "metrics" : [
          {
            "field" : "temperature",
            "metrics" : [
              "min",
              "max",
              "sum"
            ]
          },
          {
            "field" : "voltage",
            "metrics" : [
              "avg"
            ]
          }
        ],
        "timeout" : "20s",
        "page_size" : 1000
      },
      "status" : {
        "job_state" : "stopped",
        "upgraded_doc_id": true
      },
      "stats" : {
        "pages_processed" : 0,
        "documents_processed" : 0,
        "rollups_indexed" : 0,
        "trigger_count" : 0,
        "index_failures": 0,
        "index_time_in_ms": 0,
        "index_total": 0,
        "search_failures": 0,
        "search_time_in_ms": 0,
        "search_total": 0
      }
    }
  ]
}
----
// NOTCONSOLE