[role="xpack"]
[testenv="platinum"]
[[ml-jobstats]]
=== Job statistics

The get job statistics API provides information about the operational
progress of a job.
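
For context, a minimal request sketch for retrieving these statistics follows;
it assumes the `_ml/anomaly_detectors` endpoint of a 7.x cluster, and
`total-requests` is a placeholder job identifier:

[source,console]
----
GET _ml/anomaly_detectors/total-requests/_stats
----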
`assignment_explanation`::
(string) For open jobs only, contains messages relating to the selection
of a node to run the job.

`data_counts`::
(object) An object that describes the number of records processed and
any related error counts. See <<ml-datacounts,data counts objects>>.

`job_id`::
(string) A unique identifier for the job.

`model_size_stats`::
(object) An object that provides information about the size and contents of the model.
See <<ml-modelsizestats,model size stats objects>>.

`forecasts_stats`::
(object) An object that provides statistical information about forecasts
of this job. See <<ml-forecastsstats, forecasts stats objects>>.

`timing_stats`::
(object) An object that provides statistical information about the timing
aspects of this job. See <<ml-timingstats, timing stats objects>>.

`node`::
(object) For open jobs only, contains information about the node where the
job runs. See <<ml-stats-node,node object>>.

`open_time`::
(string) For open jobs only, the elapsed time for which the job has been open.
For example, `28746386s`.

`state`::
(string) The status of the job, which can be one of the following values:

`opened`::: The job is available to receive and process data.

`closed`::: The job finished successfully with its model state persisted.
The job must be opened before it can accept further data.

`closing`::: The job close action is in progress and has not yet completed.
A closing job cannot accept further data.

`failed`::: The job did not finish successfully due to an error.
This situation can occur due to invalid input data.
If the job has irrevocably failed, it must be force closed and then deleted.
If the {dfeed} can be corrected, the job can be closed and then re-opened.

`opening`::: The job open action is in progress and has not yet completed.
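
For orientation only, the skeleton below sketches how these properties fit
together in a single job's statistics object; the nested objects are elided and
every value is hypothetical:

[source,js]
----
{
  "job_id": "total-requests",
  "state": "opened",
  "open_time": "28746386s",
  "assignment_explanation": "",
  "data_counts": { ... },
  "model_size_stats": { ... },
  "forecasts_stats": { ... },
  "timing_stats": { ... },
  "node": { ... }
}
----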
[float]
[[ml-datacounts]]
==== Data Counts Objects

The `data_counts` object describes the number of records processed
and any related error counts.

The `data_counts` values are cumulative for the lifetime of a job. If a model snapshot is reverted
or old results are deleted, the job counts are not reset.
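
As an illustration only, an abbreviated `data_counts` object might look like
the following; the fields are the ones described below and all values are
hypothetical:

[source,js]
----
{
  "job_id": "total-requests",
  "processed_record_count": 1216,
  "processed_field_count": 2432,
  "input_record_count": 1216,
  "input_bytes": 51678,
  "input_field_count": 3648,
  "invalid_date_count": 0,
  "missing_field_count": 0,
  "out_of_order_timestamp_count": 0,
  "empty_bucket_count": 0,
  "sparse_bucket_count": 0,
  "bucket_count": 1457
}
----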
`bucket_count`::
(long) The number of bucket results produced by the job.

`earliest_record_timestamp`::
(string) The timestamp of the earliest chronologically ordered record.
The datetime string is in ISO 8601 format.

`empty_bucket_count`::
(long) The number of buckets that did not contain any data. If your data
contains many empty buckets, consider increasing your `bucket_span` or using
functions that are tolerant of gaps in data, such as `mean`, `non_null_sum`, or
`non_zero_count`.

`input_bytes`::
(long) The number of raw bytes read by the job.

`input_field_count`::
(long) The total number of record fields read by the job. This count includes
fields that are not used in the analysis.

`input_record_count`::
(long) The number of data records read by the job.

`invalid_date_count`::
(long) The number of records with either a missing date field or a date that could not be parsed.

`job_id`::
(string) A unique identifier for the job.

`last_data_time`::
(datetime) The timestamp at which data was last analyzed, according to server time.

`latest_empty_bucket_timestamp`::
(date) The timestamp of the last bucket that did not contain any data.

`latest_record_timestamp`::
(date) The timestamp of the last processed record.

`latest_sparse_bucket_timestamp`::
(date) The timestamp of the last bucket that was considered sparse.

`missing_field_count`::
(long) The number of records that are missing a field that the job is
configured to analyze. Records with missing fields are still processed because
it is possible that not all fields are missing. The value of
`processed_record_count` includes this count. +

NOTE: If you are using {dfeeds} or posting data to the job in JSON format, a
high `missing_field_count` is often not an indication of data issues. It is not
necessarily a cause for concern.

`out_of_order_timestamp_count`::
(long) The number of records that are out of time sequence and
outside of the latency window. This information is applicable only when
you provide data to the job by using the <<ml-post-data,post data API>>.
These out of order records are discarded, since jobs require time series data
to be in ascending chronological order.

`processed_field_count`::
(long) The total number of fields in all the records that have been processed
by the job. Only fields that are specified in the detector configuration
object contribute to this count. The timestamp is not included in this count.

`processed_record_count`::
(long) The number of records that have been processed by the job.
This value includes records with missing fields, since they are nonetheless
analyzed. +
If you use {dfeeds} and have aggregations in your search query,
the `processed_record_count` will be the number of aggregated records
processed, not the number of {es} documents.

`sparse_bucket_count`::
(long) The number of buckets that contained few data points compared to the
expected number of data points. If your data contains many sparse buckets,
consider using a longer `bucket_span`.
[float]
[[ml-modelsizestats]]
==== Model Size Stats Objects

The `model_size_stats` object has the following properties:

`bucket_allocation_failures_count`::
(long) The number of buckets for which new entities in incoming data were not
processed due to insufficient model memory. This situation is also signified
by a `memory_status` property value of `hard_limit`.

`job_id`::
(string) A numerical character string that uniquely identifies the job.

`log_time`::
(date) The timestamp of the `model_size_stats` according to server time.

`memory_status`::
(string) The status of the mathematical models.
This property can have one of the following values:

`ok`::: The models stayed below the configured memory limit.

`soft_limit`::: The models used more than 60% of the configured memory limit
and older unused models will be pruned to free up space.

`hard_limit`::: The models used more space than the configured memory limit.
As a result, not all incoming data was processed.

`model_bytes`::
(long) The number of bytes of memory used by the models. This is the maximum
value since the last time the model was persisted. If the job is closed,
this value indicates the latest size.

`result_type`::
(string) For internal use. The type of result.

`total_by_field_count`::
(long) The number of `by` field values that were analyzed by the models. +

NOTE: The `by` field values are counted separately for each detector and partition.

`total_over_field_count`::
(long) The number of `over` field values that were analyzed by the models. +

NOTE: The `over` field values are counted separately for each detector and partition.

`total_partition_field_count`::
(long) The number of `partition` field values that were analyzed by the models.

`timestamp`::
(date) The timestamp of the `model_size_stats` according to the timestamp of the data.
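
For illustration, an abbreviated `model_size_stats` object built from the
properties above might look like this; all values are hypothetical:

[source,js]
----
{
  "job_id": "total-requests",
  "result_type": "model_size_stats",
  "model_bytes": 100393,
  "total_by_field_count": 13,
  "total_over_field_count": 0,
  "total_partition_field_count": 2,
  "bucket_allocation_failures_count": 0,
  "memory_status": "ok",
  "log_time": 1576017596000,
  "timestamp": 1576014300000
}
----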
[float]
[[ml-forecastsstats]]
==== Forecasts Stats Objects

The `forecasts_stats` object shows statistics about forecasts. It has the following properties:

`total`::
(long) The number of forecasts currently available for this model.

`forecasted_jobs`::
(long) The number of jobs that have at least one forecast.

`memory_bytes`::
(object) Statistics about the memory usage: minimum, maximum, average and total.

`records`::
(object) Statistics about the number of forecast records: minimum, maximum, average and total.

`processing_time_ms`::
(object) Statistics about the forecast runtime in milliseconds: minimum, maximum, average and total.

`status`::
(object) Counts per forecast status, for example: `{"finished" : 2}`.

NOTE: `memory_bytes`, `records`, `processing_time_ms` and `status` require at
least one forecast; otherwise, these fields are omitted.
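
Purely as a sketch of the shape described above, an abbreviated
`forecasts_stats` object might look like the following; the nested summary
objects are elided and the values are hypothetical:

[source,js]
----
{
  "total": 2,
  "forecasted_jobs": 1,
  "memory_bytes": { ... },
  "records": { ... },
  "processing_time_ms": { ... },
  "status": { "finished": 2 }
}
----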
[float]
[[ml-timingstats]]
==== Timing Stats Objects

The `timing_stats` object shows timing-related statistics about the job's progress. It has the following properties:

`job_id`::
(string) A numerical character string that uniquely identifies the job.

`bucket_count`::
(long) The number of buckets processed.

`minimum_bucket_processing_time_ms`::
(double) Minimum among all bucket processing times in milliseconds.

`maximum_bucket_processing_time_ms`::
(double) Maximum among all bucket processing times in milliseconds.

`average_bucket_processing_time_ms`::
(double) Average of all bucket processing times in milliseconds.

`exponential_average_bucket_processing_time_ms`::
(double) Exponential moving average of all bucket processing times in milliseconds.
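
As a hypothetical example, a `timing_stats` object with the properties above
might look like this:

[source,js]
----
{
  "job_id": "total-requests",
  "bucket_count": 1457,
  "minimum_bucket_processing_time_ms": 0.0,
  "maximum_bucket_processing_time_ms": 8.0,
  "average_bucket_processing_time_ms": 0.51,
  "exponential_average_bucket_processing_time_ms": 0.5
}
----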
[float]
[[ml-stats-node]]
==== Node Objects

The `node` object contains properties for the node that runs the job.
This information is available only for open jobs.

`id`::
(string) The unique identifier of the node.

`name`::
(string) The node name.

`ephemeral_id`::
(string) The ephemeral ID of the node.

`transport_address`::
(string) The host and port where transport connections are accepted.

`attributes`::
(object) For example, `{"ml.machine_memory": "17179869184"}`.
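
Putting these properties together, an illustrative `node` object might look
like the following; the identifiers and address are hypothetical:

[source,js]
----
{
  "id": "2spCyo1pRi2AJmczMGdtQQ",
  "name": "node-0",
  "ephemeral_id": "hoXMLZB0RWKfR9UPPUCxXQ",
  "transport_address": "127.0.0.1:9300",
  "attributes": {
    "ml.machine_memory": "17179869184"
  }
}
----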