[DOCS] Multiple fixes related to privileges in ML documentation (elastic/x-pack-elasticsearch#1110)
* [DOCS] Add privilege requirements to ML API docs
* [DOCS] Document ML cluster-level privileges

Original commit: elastic/x-pack-elasticsearch@221c67d395
parent b86cdd6c8e
commit 5223acdd9f
@@ -133,9 +133,8 @@ locally, go to `http://localhost:5601/`. To use the {ml} features,
 you must log in as a user who has the `kibana_user`
 and `monitor_ml` roles (TBD).

-. Click **Machine Learning** in the side navigation.
-//image::images/ml.jpg["Job Management"]
+. Click **Machine Learning** in the side navigation:
+image::images/ml.jpg["Job Management"]

 You can choose to create single-metric, multi-metric, or advanced jobs in Kibana.

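For reference, a hedged sketch of how such a user might be set up with the {security} APIs; the role and user names (`ml_reader`, `ml_viewer`) are hypothetical, and the exact role/privilege requirement is still marked TBD in the text above:

[source,js]
--------------------------------------------------
POST _xpack/security/role/ml_reader
{
  "cluster": [ "monitor_ml" ]
}

POST _xpack/security/user/ml_viewer
{
  "password"  : "changeme-example",
  "roles"     : [ "kibana_user", "ml_reader" ],
  "full_name" : "Example ML viewer"
}
--------------------------------------------------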
@@ -22,6 +22,8 @@ After it is closed, the job has almost no overhead on the cluster except for
 maintaining its meta data. A closed job cannot receive data or perform analysis
 operations, but you can still explore and navigate results.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
 //NOTE: TBD
 //OUTDATED?: If using the {prelert} UI, the job will be automatically closed when stopping a datafeed job.

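As a usage sketch (assuming a caller whose role grants `manage_ml` or `manage`; the job name `event-rate-job` is hypothetical), closing a job is a single request:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event-rate-job/_close
--------------------------------------------------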
@@ -12,6 +12,9 @@ The delete data feed API allows you to delete an existing data feed.

 NOTE: You must stop the data feed before you can delete it.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `feed_id` (required)::
@@ -17,6 +17,9 @@ IMPORTANT: Deleting a job must be done via this API only. Do not delete the
 DELETE Document API. When {security} is enabled, make sure no `write`
 privileges are granted to anyone over the `.ml-*` indices.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 Before you can delete a job, you must delete the data feeds that are associated with it.
 See <<ml-delete-datafeed,Delete Data Feeds>>.

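A hedged sketch of the delete order described above: stop and delete the associated data feed first, then delete the job. The identifiers `datafeed-event-rate` and `event-rate-job` are hypothetical:

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-event-rate/_stop

DELETE _xpack/ml/datafeeds/datafeed-event-rate

DELETE _xpack/ml/anomaly_detectors/event-rate-job
--------------------------------------------------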
@@ -13,6 +13,9 @@ The delete model snapshot API enables you to delete an existing model snapshot.
 IMPORTANT: You cannot delete the active model snapshot. To delete that snapshot,
 first revert to a different one.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 //TBD: Where do you see restorePriority? Per old docs, the active model snapshot
 //is "...the snapshot with the highest restorePriority".

@@ -12,9 +12,14 @@ The flush job API forces any buffered data to be processed by the job.

 The flush job API is only applicable when sending data for analysis using the <<ml-post-data,post data API>>. Depending on the content of the buffer, then it might additionally calculate new results.

-Both flush and close operations are similar, however the flush is more efficient if you are expecting to send more data for analysis.
+Both flush and close operations are similar, however the flush is more efficient
+if you are expecting to send more data for analysis.
 When flushing, the job remains open and is available to continue analyzing data.
-A close operation additionally prunes and persists the model state to disk and the job must be opened again before analyzing further data.
+A close operation additionally prunes and persists the model state to disk
+and the job must be opened again before analyzing further data.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

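A minimal sketch of a flush request (the job name is hypothetical, and the `calc_interim` parameter is an assumption about this API's request body, used here to ask for interim results on the most recent data):

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event-rate-job/_flush
{
  "calc_interim": true
}
--------------------------------------------------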
@@ -15,6 +15,9 @@ results from a job.

 This API presents a chronological view of the records, grouped by bucket.

+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `job_id`::
@@ -23,7 +26,7 @@ This API presents a chronological view of the records, grouped by bucket.
 `timestamp`::
 (string) The timestamp of a single bucket result.
 If you do not specify this optional parameter, the API returns information
-about all buckets that you have authority to view in the job.
+about all buckets.

 ===== Request Body

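A sketch of the two request forms implied by the parameters above (the job name and bucket timestamp are placeholders); omitting the timestamp returns all buckets:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/event-rate-job/results/buckets

GET _xpack/ml/anomaly_detectors/event-rate-job/results/buckets/1454530200000
--------------------------------------------------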
@@ -10,10 +10,12 @@ about the categories in the results for a job.
 `GET _xpack/ml/anomaly_detectors/<job_id>/results/categories` +

 `GET _xpack/ml/anomaly_detectors/<job_id>/results/categories/<category_id>`
-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
@@ -21,7 +23,7 @@ about the categories in the results for a job.

 `category_id`::
 (string) Identifier for the category. If you do not specify this optional parameter,
-the API returns information about all categories that you have authority to view.
+the API returns information about all categories in the job.

 ===== Request Body

@@ -16,12 +16,15 @@ data feeds.
 If the data feed is stopped, the only information you receive is the
 `datafeed_id` and the `state`.

+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `feed_id`::
 (string) Identifier for the data feed.
 If you do not specify this optional parameter, the API returns information
-about all data feeds that you have authority to view.
+about all data feeds.

 ===== Results

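A usage sketch, assuming this stats endpoint follows the same `_stats` pattern as the job stats API (the data feed name is hypothetical); leaving out the identifier returns stats for all data feeds:

[source,js]
--------------------------------------------------
GET _xpack/ml/datafeeds/datafeed-event-rate/_stats

GET _xpack/ml/datafeeds/_stats
--------------------------------------------------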
@@ -10,17 +10,20 @@ data feeds.
 `GET _xpack/ml/datafeeds/` +

 `GET _xpack/ml/datafeeds/<feed_id>`
-////
 ===== Description

-OUTDATED?: The get job API can also be applied to all jobs by using `_all` as the job name.
-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.
+
+//TBD: The get job API can also be applied to all jobs by using `_all` as the job name.

 ===== Path Parameters

 `feed_id`::
 (string) Identifier for the data feed.
 If you do not specify this optional parameter, the API returns information
-about all data feeds that you have authority to view.
+about all data feeds.

 ===== Results

@@ -8,10 +8,12 @@ in a job.

 `GET _xpack/ml/anomaly_detectors/<job_id>/results/influencers`

-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
@@ -10,15 +10,17 @@ The get jobs API allows you to retrieve usage information for jobs.

 `GET _xpack/ml/anomaly_detectors/<job_id>/_stats`

-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
 (string) Identifier for the job. If you do not specify this optional parameter,
-the API returns information about all jobs that you have authority to view.
+the API returns information about all jobs.


 ===== Results
@@ -10,15 +10,17 @@ The get jobs API enables you to retrieve configuration information for jobs.

 `GET _xpack/ml/anomaly_detectors/<job_id>`

-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
 (string) Identifier for the job. If you do not specify this optional parameter,
-the API returns information about all jobs that you have authority to view.
+the API returns information about all jobs.

 ===== Results

@@ -8,10 +8,12 @@ The get records API enables you to retrieve anomaly records for a job.

 `GET _xpack/ml/anomaly_detectors/<job_id>/results/records`

-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
@@ -9,18 +9,20 @@ The get model snapshots API enables you to retrieve information about model snap
 `GET _xpack/ml/anomaly_detectors/<job_id>/model_snapshots` +

 `GET _xpack/ml/anomaly_detectors/<job_id>/model_snapshots/<snapshot_id>`
-////
 ===== Description

-////
+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id`::
 (string) Identifier for the job.

 `snapshot_id`::
-(string) Identifier for the model snapshot. If you do not specify this optional parameter,
-the API returns information about all model snapshots that you have authority to view.
+(string) Identifier for the model snapshot. If you do not specify this
+optional parameter, the API returns information about all model snapshots.

 ===== Request Body

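For illustration (the job and snapshot identifiers are hypothetical), the two endpoint forms shown above can be called as:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/event-rate-job/model_snapshots

GET _xpack/ml/anomaly_detectors/event-rate-job/model_snapshots/1491852978
--------------------------------------------------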
@@ -21,20 +21,23 @@ or old results are deleted, the job counts are not reset.

 `state`::
 (string) The status of the job, which can be one of the following values:
-`open`::: The job is actively receiving and processing data.
 `closed`::: The job finished successfully with its model state persisted.
-The job is still available to accept further data.
-`closing`::: TBD
-`failed`::: The job did not finish successfully due to an error.
-This situation can occur due to invalid input data. In this case,
-sending corrected data to a failed job re-opens the job and
-resets it to an open state.
+The job is still available to accept further data. +
++
+--
 NOTE: If you send data in a periodic cycle and close the job at the end of
 each transaction, the job is marked as closed in the intervals between
 when data is sent. For example, if data is sent every minute and it takes
 1 second to process, the job has a closed state for 59 seconds.

+--
+`closing`::: TBD. The job is in the process of closing?
+`failed`::: The job did not finish successfully due to an error.
+This situation can occur due to invalid input data. In this case,
+sending corrected data to a failed job re-opens the job and
+resets it to an open state.
+`open`::: The job is actively receiving and processing data.

 [float]
 [[ml-datacounts]]
 ===== Data Counts Objects
@@ -142,15 +145,19 @@ The `model_size_stats` object has the following properties:
 TBD

 `total_by_field_count`::
-(long) The number of `by` field values that were analyzed by the models.
+(long) The number of `by` field values that were analyzed by the models. +
++
+--
 NOTE: The `by` field values are counted separately for each detector and partition.

+--
 `total_over_field_count`::
-(long) The number of `over` field values that were analyzed by the models.
+(long) The number of `over` field values that were analyzed by the models. +
++
+--
 NOTE: The `over` field values are counted separately for each detector and partition.

+--
 `total_partition_field_count`::
 (long) The number of `partition` field values that were analyzed by the models.

@@ -69,11 +69,13 @@ An analysis configuration object has the following properties:
 `detectors` (required)::
 (array) An array of detector configuration objects,
 which describe the anomaly detectors that are used in the job.
-See <<ml-detectorconfig,detector configuration objects>>.
+See <<ml-detectorconfig,detector configuration objects>>. +
++
+--
 NOTE: If the `detectors` array does not contain at least one detector, no analysis can occur
 and an error is returned.

+--
 `influencers`::
 (array of strings) A comma separated list of influencer field names.
 Typically these can be the by, over, or partition fields that are used in the detector configuration.
@@ -82,10 +84,12 @@ and an error is returned.
 the use of influencers is recommended as it aggregates results for each influencer entity.

 `latency`::
-(unsigned integer) The size of the window, in seconds, in which to expect data that is out of time order. The default value is 0 milliseconds (no latency).
+(unsigned integer) The size of the window, in seconds, in which to expect data that is out of time order. The default value is 0 milliseconds (no latency). +
++
+--
 NOTE: Latency is only applicable when you send data by using the <<ml-post-data, Post Data to Jobs>> API.

+--
 `multivariate_by_fields`::
 (boolean) If set to `true`, the analysis will automatically find correlations
 between metrics for a given `by` field value and report anomalies when those
@@ -94,10 +98,12 @@ NOTE: Latency is only applicable when you send data by using the <<ml-post-data,
 correlation occurs because they are running a load-balanced application.
 If you enable this property, then anomalies will be reported when, for example,
 CPU usage on host A is high and the value of CPU usage on host B is low.
-That is to say, you'll see an anomaly when the CPU of host A is unusual given the CPU of host B.
+That is to say, you'll see an anomaly when the CPU of host A is unusual given the CPU of host B. +
++
+--
 NOTE: To use the `multivariate_by_fields` property, you must also specify `by_field_name` in your detector.

+--
 `overlapping_buckets`::
 (boolean) If set to `true`, an additional analysis occurs that runs out of phase by half a bucket length.
 This requires more system resources and enhances detection of anomalies that span bucket boundaries.
@@ -105,10 +111,12 @@ NOTE: To use the `multivariate_by_fields` property, you must also specify `by_fi
 `summary_count_field_name`::
 (string) If not null, the data fed to the job is expected to be pre-summarized.
 This property value is the name of the field that contains the count of raw data points that have been summarized.
-The same `summary_count_field_name` applies to all detectors in the job.
+The same `summary_count_field_name` applies to all detectors in the job. +
++
+--
 NOTE: The `summary_count_field_name` property cannot be used with the `metric` function.

+--
 `use_per_partition_normalization`::
 () TBD

@@ -140,19 +148,23 @@ Each detector has the following properties:

 `field_name`::
 (string) The field that the detector uses in the function. If you use an event rate
-function such as `count` or `rare`, do not specify this field.
+function such as `count` or `rare`, do not specify this field. +
++
+--
 NOTE: The `field_name` cannot contain double quotes or backslashes.

+--
 `function` (required)::
 (string) The analysis function that is used.
 For example, `count`, `rare`, `mean`, `min`, `max`, and `sum`.
 The default function is `metric`, which looks for anomalies in all of `min`, `max`,
-and `mean`.
+and `mean`. +
++
+--
 NOTE: You cannot use the `metric` function with pre-summarized input. If `summary_count_field_name`
 is not null, you must specify a function other than `metric`.

+--
 `over_field_name`::
 (string) The field used to split the data.
 In particular, this property is used for analyzing the splits with respect to the history of all splits.
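To make the detector properties above concrete, here is a hedged sketch of an `analysis_config` fragment with a single detector; the field names (`responsetime`, `airline`) are hypothetical and the surrounding job properties are omitted:

[source,js]
--------------------------------------------------
"analysis_config": {
  "detectors": [
    {
      "function": "mean",
      "field_name": "responsetime",
      "by_field_name": "airline"
    }
  ],
  "influencers": [ "airline" ]
}
--------------------------------------------------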
@@ -164,10 +176,13 @@ NOTE: You cannot use the `metric` function with pre-summarized input. If `summar

 `use_null`::
 (boolean) Defines whether a new series is used as the null series
-when there is no value for the by or partition fields. The default value is `false`
+when there is no value for the by or partition fields. The default value is `false`. +
++
+--
 IMPORTANT: Field names are case sensitive, for example a field named 'Bytes' is different to one named 'bytes'.

+--

 [[ml-datadescription]]
 ===== Data Description Objects

@@ -203,9 +218,11 @@ A data description object has the following properties:
 since 1 Jan 1970) and corresponds to the time_t type in C and C++.
 The value `epoch_ms` indicates that time is measured in milliseconds since the epoch.
 The `epoch` and `epoch_ms` time formats accept either integer or real values. +
++
+--
 NOTE: Custom patterns must conform to the Java `DateTimeFormatter` class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: `yyyy-MM-dd'T'HH:mm:ssX`. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

+--
 `quotecharacter`::
 () TBD

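A sketch of a `data_description` object using the `epoch_ms` format discussed above; the field name `timestamp` is a placeholder and the exact property names are assumptions:

[source,js]
--------------------------------------------------
"data_description": {
  "time_field": "timestamp",
  "time_format": "epoch_ms"
}
--------------------------------------------------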
@@ -18,6 +18,9 @@ When you open a new job, it starts with an empty model.
 When you open an existing job, the most recent model state is automatically loaded.
 The job is ready to resume its analysis from where it left off, once new data is received.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `job_id` (required)::
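A minimal usage sketch (the job name is hypothetical), assuming the open endpoint follows the `_open` pattern of the other job actions:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event-rate-job/_open
--------------------------------------------------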
@@ -16,16 +16,18 @@ then split it into multiple files and upload each one separately in sequential t
 When running in real-time, it is generally recommended to arrange to perform
 many small uploads, rather than queueing data to upload larger files.


 IMPORTANT: Data can only be accepted from a single connection.
 Use a single connection synchronously to send data, close, flush, or delete a single job.
 It is not currently possible to post data to multiple jobs using wildcards
 or a comma separated list.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `job_id` (required)::
 (string) Identifier for the job

 ===== Request Body

@@ -33,7 +35,7 @@ or a comma separated list.
 (string) Specifies the start of the bucket resetting range

 `reset_end`::
-(string) Specifies the end of the bucket resetting range"
+(string) Specifies the end of the bucket resetting range

 ////
 ===== Responses
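A hedged sketch of sending two newline-delimited JSON records to the post data API; the job name, the field names, and the `_data` path itself are assumptions for illustration:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event-rate-job/_data
{"timestamp": 1454020800000, "events": 42}
{"timestamp": 1454020860000, "events": 37}
--------------------------------------------------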
@@ -14,6 +14,9 @@ The preview data feed API enables you to preview a data feed.
 //TBD: How much data does it return?
 The API returns example data by using the current data feed settings.

+You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
+privileges to use this API. For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `feed_id` (required)::
@@ -13,6 +13,9 @@ The create data feed API enables you to instantiate a data feed.
 You must create a job before you create a data feed. You can associate only one
 data feed to each job.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `feed_id` (required)::
@@ -8,10 +8,12 @@ The create job API enables you to instantiate a job.

 `PUT _xpack/ml/anomaly_detectors/<job_id>`

-////
 ===== Description

-////
+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `job_id` (required)::
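Putting the pieces together, a minimal hedged sketch of a create job request; the job name and field names are hypothetical, and optional properties are left at their defaults:

[source,js]
--------------------------------------------------
PUT _xpack/ml/anomaly_detectors/event-rate-job
{
  "description": "Example event rate job",
  "analysis_config": {
    "detectors": [
      { "function": "count" }
    ]
  },
  "data_description": {
    "time_field": "timestamp",
    "time_format": "epoch_ms"
  }
}
--------------------------------------------------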
@@ -312,11 +312,14 @@ A bucket resource has the following properties:

 `timestamp`::
 (date) The start time of the bucket, specified in ISO 8601 format.
-For example, 1454020800000. This timestamp uniquely identifies the bucket.
+For example, 1454020800000. This timestamp uniquely identifies the bucket. +
++
+--
 NOTE: Events that occur exactly at the timestamp of the bucket are included in
 the results for the bucket.

+--

 [float]
 [[ml-results-categories]]
 ===== Categories
@@ -50,6 +50,9 @@ IMPORTANT: Before you revert to a saved snapshot, you must close the job.
 Sending data to a closed job changes its status to `open`, so you must also
 ensure that you do not expect data imminently.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `job_id` (required)::
@@ -50,6 +50,9 @@ because the job might not have completely processed all data for that millisecon
 If you specify a `start` value that is earlier than the timestamp of the latest
 processed record, that value is ignored.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `feed_id` (required)::
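A usage sketch (the data feed name is hypothetical), assuming the `_start` endpoint accepts an optional `start` time as described above:

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-event-rate/_start
{
  "start": "2017-04-07T18:22:16Z"
}
--------------------------------------------------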
@@ -9,10 +9,11 @@ A data feed can be opened and closed multiple times throughout its lifecycle.

 `POST _xpack/ml/datafeeds/<feed_id>/_stop`

-////
 ===== Description

-////
+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `feed_id` (required)::
@@ -8,10 +8,12 @@ The update data feed API enables you to update certain properties of a data feed

 `POST _xpack/ml/datafeeds/<feed_id>/_update`

-////
 ===== Description

-////
+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.

 ===== Path Parameters

 `feed_id` (required)::
@@ -8,12 +8,13 @@ The update job API allows you to update certain properties of a job.

 `POST _xpack/ml/anomaly_detectors/<job_id>/_update`

-////
 ===== Description

-Important:: Updates do not take effect until after then job is closed and new
-data is sent to it.
-////
+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+//TBD: Important:: Updates do not take effect until after then job is closed and new data is sent to it.

 ===== Path Parameters

 `job_id` (required)::
@@ -16,6 +16,9 @@ The update model snapshot API enables you to update certain properties of a snap
 Updates to the configuration are only applied after the job has been closed
 and new data has been sent to it.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ===== Path Parameters

 `job_id` (required)::
@@ -12,6 +12,9 @@ The validate detectors API validates detector configuration information.

 This API enables you validate the detector configuration before you create a job.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
+
 ////
 ===== Path Parameters

@@ -12,6 +12,8 @@ The validate jobs API validates job configuration information.

 This API enables you validate the job configuration before you create the job.

+You must have `manage_ml`, or `manage` cluster privileges to use this API.
+For more information, see <<privileges-list-cluster>>.
 ////
 ===== Path Parameters

@@ -15,6 +15,10 @@ settings update, rerouting, or managing users and roles.
 All cluster read-only operations, like cluster health & state, hot threads, node
 info, node & cluster stats, snapshot/restore status, pending cluster tasks.

+`monitor_ml`::
+All read only {ml} operations, such as getting information about data feeds, jobs,
+model snapshots, or results.
+
 `monitor_watcher`::
 All read only watcher operations, such as getting a watch and watcher stats.

@@ -23,16 +27,20 @@ Builds on `monitor` and adds cluster operations that change values in the cluste
 This includes snapshotting,updating settings, and rerouting. This privilege does
 not include the ability to manage security.

-`manage_security`::
-All security related operations such as CRUD operations on users and roles and
-cache clearing.
-
 `manage_index_templates`::
 All operations on index templates.

+`manage_ml`::
+All {ml} operations, such as creating and deleting data feeds, jobs, and model
+snapshots.
+
 `manage_pipeline`::
 All operations on ingest pipelines.

+`manage_security`::
+All security related operations such as CRUD operations on users and roles and
+cache clearing.
+
 `manage_watcher`::
 All watcher operations, such as putting watches, executing, activate or acknowledging.

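As a sketch of how the new cluster privilege might be granted (the role name `ml_admin` is hypothetical), `manage_ml` slots into a role definition like any other cluster privilege; a read-only role would list `monitor_ml` instead:

[source,js]
--------------------------------------------------
POST _xpack/security/role/ml_admin
{
  "cluster": [ "manage_ml" ]
}
--------------------------------------------------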
@@ -63,7 +71,7 @@ privilege is primarily available for use by <<kibana-roles, Kibana users>>.
 `read`::
 Read only access to actions (count, explain, get, mget, get indexed scripts,
 more like this, multi percolate/search/termvector, percolate, scroll,
 clear_scroll, search, tv). Also grants access to the update mapping
 action.

 `index`::