[DOCS] Remove data type formatting from API pages

Original commit: elastic/x-pack-elasticsearch@fb06ece3f0
This commit is contained in:
lcawley 2017-04-11 19:26:18 -07:00
parent 1a6f813d5a
commit 412bb63383
30 changed files with 298 additions and 301 deletions

View File

@@ -28,12 +28,12 @@ operations, but you can still explore and navigate results.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
===== Query Parameters
`close_timeout`::
(+time+) Controls the time to wait until a job has closed.
(time) Controls the time to wait until a job has closed.
The default value is 30 minutes.
////
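
To make the parameters above concrete, a close request might look like the sketch below. The job name `event-rate` and the 5.x `_xpack/ml` path prefix are assumptions for illustration, not taken from this commit.

[source,js]
--------------------------------------------------
// Hypothetical job ID; endpoint layout assumed from the 5.x _xpack/ml API
POST _xpack/ml/anomaly_detectors/event-rate/_close?close_timeout=2m
--------------------------------------------------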

View File

@@ -5,7 +5,7 @@
A data feed resource has the following properties:
`aggregations`::
(+object+) TBD
(object) TBD
The aggregations object describes the aggregations that are
applied to the search query?
For more information, see {ref}search-aggregations.html[Aggregations].
@@ -16,23 +16,23 @@ A data feed resource has the following properties:
"field": "events_per_min"}}}}}`.
`chunking_config`::
(+object+) TBD.
For example: {"mode": "manual", "time_span": "30000000ms"}
(object) TBD.
For example: {"mode": "manual", "time_span": "30000000ms"}
`datafeed_id`::
(+string+) A numerical character string that uniquely identifies the data feed.
(string) A numerical character string that uniquely identifies the data feed.
`frequency`::
TBD. For example: "150s"
`indexes` (required)::
(+array+) An array of index names. For example: ["it_ops_metrics"]
(array) An array of index names. For example: ["it_ops_metrics"]
`job_id` (required)::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`query`::
(+object+) TBD. The query that retrieves the data.
(object) TBD. The query that retrieves the data.
By default, this property has the following value: `{"match_all": {"boost": 1}}`.
`query_delay`::
@@ -44,7 +44,7 @@ A data feed resource has the following properties:
The default value is `1000`.
`types` (required)::
(+array+) TBD. For example: ["network","sql","kpi"]
(array) TBD. For example: ["network","sql","kpi"]
[float]
[[ml-datafeed-counts]]
@@ -57,10 +57,10 @@ progress of a data feed. For example:
TBD. For example: " "
`datafeed_id`::
(+string+) A numerical character string that uniquely identifies the data feed.
(string) A numerical character string that uniquely identifies the data feed.
`node`::
(+object+) TBD
(object) TBD
The node that is running the query?
`id`::: TBD. For example, "0-o0tOoRTwKFZifatTWKNw".
`name`::: TBD. For example, "0-o0tOo".
@@ -69,7 +69,6 @@ progress of a data feed. For example:
`attributes`::: TBD. For example, {"max_running_jobs": "10"}.
`state`::
(+string+) The status of the data feed,
which can be one of the following values:
started:: The data feed is actively receiving data.
stopped:: The data feed is stopped and will not receive data until it is re-started.
(string) The status of the data feed, which can be one of the following values: +
started::: The data feed is actively receiving data.
stopped::: The data feed is stopped and will not receive data until it is re-started.
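
Assembled from the example values quoted above, a datafeed resource would have roughly the following shape. This is a hedged sketch: both IDs are placeholders, and the property values simply reuse the examples on this page.

[source,js]
--------------------------------------------------
// Sketch only: IDs are placeholders, values reuse the examples above
{
  "datafeed_id": "datafeed-it-ops",
  "job_id": "it-ops-kpi",
  "indexes": ["it_ops_metrics"],
  "types": ["network", "sql", "kpi"],
  "query": {"match_all": {"boost": 1}},
  "frequency": "150s",
  "chunking_config": {"mode": "manual", "time_span": "30000000ms"}
}
--------------------------------------------------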

View File

@@ -15,7 +15,7 @@ NOTE: You must stop the data feed before you can delete it.
===== Path Parameters
`feed_id` (required)::
(+string+) Identifier for the data feed
(string) Identifier for the data feed
////
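
A delete request is a bare DELETE against the datafeed path. This sketch assumes the 5.x `_xpack/ml` endpoint layout and a placeholder datafeed ID.

[source,js]
--------------------------------------------------
// Placeholder datafeed ID; _xpack/ml path assumed
DELETE _xpack/ml/datafeeds/datafeed-it-ops
--------------------------------------------------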
===== Responses

View File

@@ -25,7 +25,7 @@ It is not currently possible to delete multiple jobs using wildcards or a comma
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
////
===== Responses

View File

@@ -19,10 +19,10 @@ first revert to a different one.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
`snapshot_id` (required)::
(+string+) Identifier for the model snapshot
(string) Identifier for the model snapshot
////
===== Responses

View File

@@ -19,23 +19,23 @@ A close operation additionally prunes and persists the model state to disk and t
===== Path Parameters
`job_id` (required)::
( +string+) Identifier for the job
(string) Identifier for the job
===== Query Parameters
`advance_time`::
(+string+) Specifies that no data prior to the date `advance_time` is expected.
(string) Specifies that no data prior to the date `advance_time` is expected.
`end`::
(+string+) When used in conjunction with `calc_interim`, specifies the range
(string) When used in conjunction with `calc_interim`, specifies the range
of buckets on which to calculate interim results.
`calc_interim`::
(+boolean+) If true, calculates the interim results for the most recent bucket
(boolean) If true, calculates the interim results for the most recent bucket
or all buckets within the latency period.
`start`::
(+string+) When used in conjunction with `calc_interim`, specifies the range of
(string) When used in conjunction with `calc_interim`, specifies the range of
buckets on which to calculate interim results.
////
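
As a sketch, a flush that calculates interim results over a bounded range of buckets could look like this; the job ID and the `_xpack/ml` path prefix are illustrative assumptions, while the query parameters come from the list above.

[source,js]
--------------------------------------------------
// Hypothetical job ID; query parameters taken from the list above
POST _xpack/ml/anomaly_detectors/event-rate/_flush?calc_interim=true&start=2017-04-01T00:00:00Z&end=2017-04-02T00:00:00Z
--------------------------------------------------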

View File

@@ -18,46 +18,46 @@ This API presents a chronological view of the records, grouped by bucket.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job
(string) Identifier for the job
`timestamp`::
(+string+) The timestamp of a single bucket result.
(string) The timestamp of a single bucket result.
If you do not specify this optional parameter, the API returns information
about all buckets that you have authority to view in the job.
===== Request Body
`anomaly_score`::
(+double+) Returns buckets with anomaly scores higher than this value.
(double) Returns buckets with anomaly scores higher than this value.
`end`::
(+string+) Returns buckets with timestamps earlier than this time.
(string) Returns buckets with timestamps earlier than this time.
`expand`::
(+boolean+) If true, the output includes anomaly records.
(boolean) If true, the output includes anomaly records.
`from`::
(+integer+) Skips the specified number of buckets.
(integer) Skips the specified number of buckets.
`include_interim`::
(+boolean+) If true, the output includes interim results.
(boolean) If true, the output includes interim results.
`partition_value`::
(+string+) If `expand` is true, the anomaly records are filtered by this
(string) If `expand` is true, the anomaly records are filtered by this
partition value.
`size`::
(+integer+) Specifies the maximum number of buckets to obtain.
(integer) Specifies the maximum number of buckets to obtain.
`start`::
(+string+) Returns buckets with timestamps after this time.
(string) Returns buckets with timestamps after this time.
===== Results
The API returns the following information:
`buckets`::
(+array+) An array of bucket objects. For more information, see
(array) An array of bucket objects. For more information, see
<<ml-results-buckets,Buckets>>.
////
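
A request that exercises a few of the body options above might look like the following sketch (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; body fields are the request-body options listed above
GET _xpack/ml/anomaly_detectors/event-rate/results/buckets
{
  "anomaly_score": 75.0,
  "expand": true,
  "size": 100
}
--------------------------------------------------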

View File

@@ -17,10 +17,10 @@ about the categories in the results for a job.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job.
(string) Identifier for the job.
`category_id`::
(+string+) Identifier for the category. If you do not specify this optional parameter,
(string) Identifier for the category. If you do not specify this optional parameter,
the API returns information about all categories that you have authority to view.
===== Request Body
@@ -28,17 +28,17 @@ about the categories in the results for a job.
//TBD: Test these properties, since they didn't work on older build.
`from`::
(+integer+) Skips the specified number of categories.
(integer) Skips the specified number of categories.
`size`::
(+integer+) Specifies the maximum number of categories to obtain.
(integer) Specifies the maximum number of categories to obtain.
===== Results
The API returns the following information:
`categories`::
(+array+) An array of category objects. For more information, see
(array) An array of category objects. For more information, see
<<ml-results-categories,Categories>>.
////
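
A paged category query using the `from` and `size` options above might look like this sketch (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; from/size are the paging options described above
GET _xpack/ml/anomaly_detectors/event-rate/results/categories
{
  "from": 0,
  "size": 10
}
--------------------------------------------------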
===== Responses

View File

@@ -19,7 +19,7 @@ If the data feed is stopped, the only information you receive is the
===== Path Parameters
`feed_id`::
(+string+) Identifier for the data feed.
(string) Identifier for the data feed.
If you do not specify this optional parameter, the API returns information
about all data feeds that you have authority to view.
@@ -28,7 +28,7 @@ If the data feed is stopped, the only information you receive is the
The API returns the following information:
`datafeeds`::
(+array+) An array of data feed count objects.
(array) An array of data feed count objects.
For more information, see <<ml-datafeed-counts,Data Feed Counts>>.
////

View File

@@ -18,7 +18,7 @@ OUTDATED?: The get job API can also be applied to all jobs by using `_all` as th
===== Path Parameters
`feed_id`::
(+string+) Identifier for the data feed.
(string) Identifier for the data feed.
If you do not specify this optional parameter, the API returns information
about all data feeds that you have authority to view.
@@ -27,7 +27,7 @@ OUTDATED?: The get job API can also be applied to all jobs by using `_all` as th
The API returns the following information:
`datafeeds`::
(+array+) An array of data feed objects.
(array) An array of data feed objects.
For more information, see <<ml-datafeed-resource,data feed resources>>.
////

View File

@@ -15,42 +15,42 @@ in a job.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job.
(string) Identifier for the job.
===== Request Body
`desc`::
(+boolean+) If true, the results are sorted in descending order.
(boolean) If true, the results are sorted in descending order.
//TBD: Using the "sort" value?
`end`::
(+string+) Returns influencers with timestamps earlier than this time.
(string) Returns influencers with timestamps earlier than this time.
`from`::
(+integer+) Skips the specified number of influencers.
(integer) Skips the specified number of influencers.
`include_interim`::
(+boolean+) If true, the output includes interim results.
(boolean) If true, the output includes interim results.
`influencer_score`::
(+double+) Returns influencers with anomaly scores higher than this value.
(double) Returns influencers with anomaly scores higher than this value.
`size`::
(+integer+) Specifies the maximum number of influencers to obtain.
(integer) Specifies the maximum number of influencers to obtain.
`sort`::
(+string+) Specifies the sort field for the requested influencers.
(string) Specifies the sort field for the requested influencers.
//TBD: By default the results are sorted on the influencer score?
`start`::
(+string+) Returns influencers with timestamps after this time.
(string) Returns influencers with timestamps after this time.
===== Results
The API returns the following information:
`influencers`::
(+array+) An array of influencer objects.
(array) An array of influencer objects.
For more information, see <<ml-results-influencers,Influencers>>.
////
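
Combining the scoring and sorting options above, a sketch of an influencer query could read as follows (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; sorts descending on the influencer score
GET _xpack/ml/anomaly_detectors/event-rate/results/influencers
{
  "influencer_score": 50.0,
  "sort": "influencer_score",
  "desc": true
}
--------------------------------------------------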

View File

@@ -17,7 +17,7 @@ The get jobs API allows you to retrieve usage information for jobs.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job. If you do not specify this optional parameter,
(string) Identifier for the job. If you do not specify this optional parameter,
the API returns information about all jobs that you have authority to view.
@@ -26,7 +26,7 @@ The get jobs API allows you to retrieve usage information for jobs.
The API returns the following information:
`jobs`::
(+array+) An array of job count objects.
(array) An array of job count objects.
For more information, see <<ml-jobcounts,Job Counts>>.
////

View File

@@ -17,15 +17,15 @@ The get jobs API enables you to retrieve configuration information for jobs.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job. If you do not specify this optional parameter,
(string) Identifier for the job. If you do not specify this optional parameter,
the API returns information about all jobs that you have authority to view.
===== Results
he API returns the following information:
The API returns the following information:
`jobs`::
(+array+) An array of job resources.
(array) An array of job resources.
For more information, see <<ml-job-resource,Job Resources>>.
////

View File

@@ -15,49 +15,49 @@ The get records API enables you to retrieve anomaly records for a job.
===== Path Parameters
`job_id`::
(+string+) Identifier for the job.
(string) Identifier for the job.
===== Request Body
`desc`::
(+boolean+) If true, the results are sorted in descending order.
(boolean) If true, the results are sorted in descending order.
`end`::
(+string+) Returns records with timestamps earlier than this time.
(string) Returns records with timestamps earlier than this time.
`expand`::
(+boolean+) TBD
(boolean) TBD
//This field did not work on older build.
`from`::
(+integer+) Skips the specified number of records.
(integer) Skips the specified number of records.
`include_interim`::
(+boolean+) If true, the output includes interim results.
(boolean) If true, the output includes interim results.
`partition_value`::
(+string+) If `expand` is true, the records are filtered by this
(string) If `expand` is true, the records are filtered by this
partition value.
`record_score`::
(+double+) Returns records with anomaly scores higher than this value.
(double) Returns records with anomaly scores higher than this value.
`size`::
(+integer+) Specifies the maximum number of records to obtain.
(integer) Specifies the maximum number of records to obtain.
`sort`::
(+string+) Specifies the sort field for the requested records.
(string) Specifies the sort field for the requested records.
By default, the records are sorted by the `anomaly_score` value.
`start`::
(+string+) Returns records with timestamps after this time.
(string) Returns records with timestamps after this time.
===== Results
The API returns the following information:
`records`::
(+array+) An array of record objects. For more information, see
(array) An array of record objects. For more information, see
<<ml-results-records,Records>>.
////
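
A record query that filters on `record_score` and caps the result size, per the options above, might look like this sketch (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; record_score and size come from the option list above
GET _xpack/ml/anomaly_detectors/event-rate/results/records
{
  "record_score": 75.0,
  "size": 100
}
--------------------------------------------------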

View File

@@ -16,44 +16,44 @@ The get model snapshots API enables you to retrieve information about model snap
===== Path Parameters
`job_id`::
(+string+) Identifier for the job.
(string) Identifier for the job.
`snapshot_id`::
(+string+) Identifier for the model snapshot. If you do not specify this optional parameter,
(string) Identifier for the model snapshot. If you do not specify this optional parameter,
the API returns information about all model snapshots that you have authority to view.
===== Request Body
`desc`::
(+boolean+) If true, the results are sorted in descending order.
(boolean) If true, the results are sorted in descending order.
`description`::
(+string+) Returns snapshots that match this description.
(string) Returns snapshots that match this description.
//TBD: I couldn't get this to work. What description value is it using?
NOTE: It might be necessary to URL encode the description.
`end`::
(+date+) Returns snapshots with timestamps earlier than this time.
(date) Returns snapshots with timestamps earlier than this time.
`from`::
(+integer+) Skips the specified number of snapshots.
(integer) Skips the specified number of snapshots.
`size`::
(+integer+) Specifies the maximum number of snapshots to obtain.
(integer) Specifies the maximum number of snapshots to obtain.
`sort`::
(+string+) Specifies the sort field for the requested snapshots.
(string) Specifies the sort field for the requested snapshots.
//By default, the snapshots are sorted by the xxx value.
`start`::
(+string+) Returns snapshots with timestamps after this time.
(string) Returns snapshots with timestamps after this time.
===== Results
The API returns the following information:
`model_snapshots`::
(+array+) An array of model snapshot objects. For more information, see
(array) An array of model snapshot objects. For more information, see
<<ml-snapshot-resource,Model Snapshots>>.
////
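
A snapshot listing sorted newest-first, using the `sort`, `desc`, and `size` options above, might look like this sketch (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; sorts newest-first by snapshot timestamp
GET _xpack/ml/anomaly_detectors/event-rate/model_snapshots
{
  "sort": "timestamp",
  "desc": true,
  "size": 5
}
--------------------------------------------------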

View File

@@ -9,18 +9,18 @@ NOTE: Job count values are cumulative for the lifetime of a job. If a model snap
or old results are deleted, the job counts are not reset.
`data_counts`::
(+object+) An object that describes the number of records processed and any related error counts.
(object) An object that describes the number of records processed and any related error counts.
See <<ml-datacounts,data counts objects>>.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`model_size_stats`::
(+object+) An object that provides information about the size and contents of the model.
(object) An object that provides information about the size and contents of the model.
See <<ml-modelsizestats,model size stats objects>>
`state`::
(+string+) The status of the job, which can be one of the following values:
(string) The status of the job, which can be one of the following values:
`open`::: The job is actively receiving and processing data.
`closed`::: The job finished successfully with its model state persisted.
The job is still available to accept further data.
@@ -43,59 +43,59 @@ The `data_counts` object describes the number of records processed
and any related error counts. It has the following properties:
`bucket_count`::
(+long+) The number of bucket results produced by the job.
(long) The number of bucket results produced by the job.
`earliest_record_timestamp`::
(+string+) The timestamp of the earliest chronologically ordered record.
(string) The timestamp of the earliest chronologically ordered record.
The datetime string is in ISO 8601 format.
`empty_bucket_count`::
TBD
() TBD
`input_bytes`::
(+long+) The number of raw bytes read by the job.
(long) The number of raw bytes read by the job.
`input_field_count`::
(+long+) The total number of record fields read by the job. This count includes
(long) The total number of record fields read by the job. This count includes
fields that are not used in the analysis.
`input_record_count`::
(+long+) The number of data records read by the job.
(long) The number of data records read by the job.
`invalid_date_count`::
(+long+) The number of records with either a missing date field or a date that could not be parsed.
(long) The number of records with either a missing date field or a date that could not be parsed.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`last_data_time`::
(++) TBD
() TBD
`latest_record_timestamp`::
(+string+) The timestamp of the last chronologically ordered record.
(string) The timestamp of the last chronologically ordered record.
If the records are not in strict chronological order, this value might not be
the same as the timestamp of the last record.
The datetime string is in ISO 8601 format.
`latest_sparse_bucket_timestamp`::
(++) TBD
() TBD
`missing_field_count`::
(+long+) The number of records that are missing a field that the job is configured to analyze.
(long) The number of records that are missing a field that the job is configured to analyze.
Records with missing fields are still processed because it is possible that not all fields are missing.
The value of `processed_record_count` includes this count.
`out_of_order_timestamp_count`::
(+long+) The number of records that are out of time sequence and outside of the latency window.
(long) The number of records that are out of time sequence and outside of the latency window.
These records are discarded, since jobs require time series data to be in ascending chronological order.
`processed_field_count`::
(+long+) The total number of fields in all the records that have been processed by the job.
(long) The total number of fields in all the records that have been processed by the job.
Only fields that are specified in the detector configuration object contribute to this count.
The time stamp is not included in this count.
`processed_record_count`::
(+long+) The number of records that have been processed by the job.
(long) The number of records that have been processed by the job.
This value includes records with missing fields, since they are nonetheless analyzed.
+
The following records are not processed:
@@ -104,7 +104,7 @@ and any related error counts. It has the following properties:
* Records filtered by an exclude transform
`sparse_bucket_count`::
TBD
() TBD
[float]
[[ml-modelsizestats]]
@@ -112,41 +112,40 @@ and any related error counts. It has the following properties:
The `model_size_stats` object has the following properties:
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
`result_type`::
TBD
`model_bytes`::
(+long+) The number of bytes of memory used by the models. This is the maximum value since the
last time the model was persisted. If the job is closed, this value indicates the latest size.
`total_by_field_count`::
(+long+) The number of `by` field values that were analyzed by the models.
NOTE: The `by` field values are counted separately for each detector and partition.
`total_over_field_count`::
(+long+) The number of `over` field values that were analyzed by the models.
NOTE: The `over` field values are counted separately for each detector and partition.
`total_partition_field_count`::
(+long+) The number of `partition` field values that were analyzed by the models.
`bucket_allocation_failures_count`::
TBD
() TBD
`job_id`::
(string) A numerical character string that uniquely identifies the job.
`log_time`::
() TBD
`memory_status`::
(+string+) The status of the mathematical models. This property can have one of the following values:
(string) The status of the mathematical models. This property can have one of the following values:
`ok`::: The models stayed below the configured value.
`soft_limit`::: The models used more than 60% of the configured memory limit and older unused models will be pruned to free up space.
`hard_limit`::: The models used more space than the configured memory limit. As a result, not all incoming data was processed.
`log_time`::
`model_bytes`::
(long) The number of bytes of memory used by the models. This is the maximum value since the
last time the model was persisted. If the job is closed, this value indicates the latest size.
`result_type`::
TBD
`total_by_field_count`::
(long) The number of `by` field values that were analyzed by the models.
NOTE: The `by` field values are counted separately for each detector and partition.
`total_over_field_count`::
(long) The number of `over` field values that were analyzed by the models.
NOTE: The `over` field values are counted separately for each detector and partition.
`total_partition_field_count`::
(long) The number of `partition` field values that were analyzed by the models.
`timestamp`::
TBD

View File

@@ -5,44 +5,44 @@
A job resource has the following properties:
`analysis_config`::
(+object+) The analysis configuration, which specifies how to analyze the data. See <<ml-analysisconfig, analysis configuration objects>>.
(object) The analysis configuration, which specifies how to analyze the data. See <<ml-analysisconfig, analysis configuration objects>>.
`analysis_limits`::
(+object+) Defines limits on the number of field values and time buckets to be analyzed.
(object) Defines limits on the number of field values and time buckets to be analyzed.
See <<ml-apilimits,analysis limits>>.
`create_time`::
(+string+) The time the job was created, in ISO 8601 format. For example, `1491007356077`.
(string) The time the job was created, in ISO 8601 format. For example, `1491007356077`.
`data_description`::
(+object+) Describes the data format and how APIs parse timestamp fields. See <<ml-datadescription,data description objects>>.
(object) Describes the data format and how APIs parse timestamp fields. See <<ml-datadescription,data description objects>>.
`description`::
(+string+) An optional description of the job.
(string) An optional description of the job.
`finished_time`::
(+string+) If the job closed or failed, this is the time the job finished, in ISO 8601 format.
(string) If the job closed or failed, this is the time the job finished, in ISO 8601 format.
Otherwise, it is `null`. For example, `1491007365347`.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`job_type`::
(+string+) TBD. For example: "anomaly_detector".
(string) TBD. For example: "anomaly_detector".
`model_plot_config`:: TBD
`enabled`:: TBD. For example, `true`.
`model_snapshot_id`::
(+string+) A numerical character string that uniquely identifies the model
(string) A numerical character string that uniquely identifies the model
snapshot. For example, `1491007364`.
`model_snapshot_retention_days`::
(+long+) The time in days that model snapshots are retained for the job.
(long) The time in days that model snapshots are retained for the job.
Older snapshots are deleted. The default value is 1 day.
`results_index_name`::
TBD. For example, `shared`.
() TBD. For example, `shared`.
[[ml-analysisconfig]]
===== Analysis Configuration Objects
@@ -50,16 +50,16 @@ A job resource has the following properties:
An analysis configuration object has the following properties:
`bucket_span` (required)::
(+unsigned integer+) The size of the interval that the analysis is aggregated into, measured in seconds. The default value is 5 minutes.
(unsigned integer) The size of the interval that the analysis is aggregated into, measured in seconds. The default value is 5 minutes.
//TBD: Is this now measured in minutes?
`categorization_field_name`::
(+string+) If not null, the values of the specified field will be categorized.
(string) If not null, the values of the specified field will be categorized.
The resulting categories can be used in a detector by setting `by_field_name`,
`over_field_name`, or `partition_field_name` to the keyword `prelertcategory`.
`categorization_filters`::
(+array of strings+) If `categorization_field_name` is specified, you can also define optional filters.
(array of strings) If `categorization_field_name` is specified, you can also define optional filters.
This property expects an array of regular expressions.
The expressions are used to filter out matching sequences from the categorization field values.
This functionality is useful to fine-tune categorization by excluding sequences
@@ -67,7 +67,7 @@ An analysis configuration object has the following properties:
For example, you can exclude SQL statements that appear in your log files.
`detectors` (required)::
(+array+) An array of detector configuration objects,
(array) An array of detector configuration objects,
which describe the anomaly detectors that are used in the job.
See <<ml-detectorconfig,detector configuration objects>>.
@@ -75,19 +75,19 @@ NOTE: If the `detectors` array does not contain at least one detector, no analys
and an error is returned.
`influencers`::
(+array of strings+) A comma separated list of influencer field names.
(array of strings) A comma separated list of influencer field names.
Typically these can be the by, over, or partition fields that are used in the detector configuration.
You might also want to use a field name that is not specifically named in a detector,
but is available as part of the input data. When you use multiple detectors,
the use of influencers is recommended as it aggregates results for each influencer entity.
`latency`::
(+unsigned integer+) The size of the window, in seconds, in which to expect data that is out of time order. The default value is 0 milliseconds (no latency).
(unsigned integer) The size of the window, in seconds, in which to expect data that is out of time order. The default value is 0 milliseconds (no latency).
NOTE: Latency is only applicable when you send data by using the <<ml-post-data, Post Data to Jobs>> API.
`multivariate_by_fields`::
(+boolean+) If set to `true`, the analysis will automatically find correlations
(boolean) If set to `true`, the analysis will automatically find correlations
between metrics for a given `by` field value and report anomalies when those
correlations cease to hold. For example, suppose CPU and memory usage on host A
is usually highly correlated with the same metrics on host B. Perhaps this
@@ -99,19 +99,18 @@ NOTE: Latency is only applicable when you send data by using the <<ml-post-data,
NOTE: To use the `multivariate_by_fields` property, you must also specify `by_field_name` in your detector.
`overlapping_buckets`::
(+boolean+) If set to `true`, an additional analysis occurs that runs out of phase by half a bucket length.
(boolean) If set to `true`, an additional analysis occurs that runs out of phase by half a bucket length.
This requires more system resources and enhances detection of anomalies that span bucket boundaries.
`summary_count_field_name`::
(+string+) If not null, the data fed to the job is expected to be pre-summarized.
(string) If not null, the data fed to the job is expected to be pre-summarized.
This property value is the name of the field that contains the count of raw data points that have been summarized.
The same `summary_count_field_name` applies to all detectors in the job.
NOTE: The `summary_count_field_name` property cannot be used with the `metric` function.
`use_per_partition_normalization`::
TBD
() TBD
[[ml-detectorconfig]]
===== Detector Configuration Objects
@@ -122,31 +121,31 @@ You can specify multiple detectors for a job.
Each detector has the following properties:
`by_field_name`::
(+string+) The field used to split the data.
(string) The field used to split the data.
In particular, this property is used for analyzing the splits with respect to their own history.
It is used for finding unusual values in the context of the split.
`detector_description`::
(+string+) A description of the detector. For example, `low_sum(events_per_min)`.
(string) A description of the detector. For example, `low_sum(events_per_min)`.
`detector_rules`::
(+array+) TBD
(array) TBD
`exclude_frequent`::
(+string+) Contains one of the following values: `all`, `none`, `by`, or `over`.
(string) Contains one of the following values: `all`, `none`, `by`, or `over`.
If set, frequent entities are excluded from influencing the anomaly results.
Entities can be considered frequent over time or frequent in a population.
If you are working with both over and by fields, then you can set `exclude_frequent`
to `all` for both fields, or to `by` or `over` for those specific fields.
`field_name`::
(+string+) The field that the detector uses in the function. If you use an event rate
(string) The field that the detector uses in the function. If you use an event rate
function such as `count` or `rare`, do not specify this field.
NOTE: The `field_name` cannot contain double quotes or backslashes.
`function` (required)::
(+string+) The analysis function that is used.
(string) The analysis function that is used.
For example, `count`, `rare`, `mean`, `min`, `max`, and `sum`.
The default function is `metric`, which looks for anomalies in all of `min`, `max`,
and `mean`.
@@ -155,16 +154,16 @@ NOTE: You cannot use the `metric` function with pre-summarized input. If `summar
is not null, you must specify a function other than `metric`.
`over_field_name`::
(+string+) The field used to split the data.
(string) The field used to split the data.
In particular, this property is used for analyzing the splits with respect to the history of all splits.
It is used for finding unusual values in the population of all splits.
`partition_field_name`::
(+string+) The field used to segment the analysis.
(string) The field used to segment the analysis.
When you use this property, you have completely independent baselines for each value of this field.
`use_null`::
(+boolean+) Defines whether a new series is used as the null series
(boolean) Defines whether a new series is used as the null series
when there is no value for the by or partition fields. The default value is `false`.
IMPORTANT: Field names are case sensitive. For example, a field named 'Bytes' is different from one named 'bytes'.
@@ -189,17 +188,17 @@ format of your data.
A data description object has the following properties:
`fieldDelimiter`::
TBD
() TBD
`format`::
TBD
() TBD
`time_field`::
(+string+) The name of the field that contains the timestamp.
(string) The name of the field that contains the timestamp.
The default value is `time`.
`time_format`::
(+string+) The time format, which can be `epoch`, `epoch_ms`, or a custom pattern.
(string) The time format, which can be `epoch`, `epoch_ms`, or a custom pattern.
The default value is `epoch`, which refers to UNIX or Epoch time (the number of seconds
since 1 Jan 1970) and corresponds to the time_t type in C and C++.
The value `epoch_ms` indicates that time is measured in milliseconds since the epoch.
@@ -208,7 +207,7 @@ A data description object has the following properties:
NOTE: Custom patterns must conform to the Java `DateTimeFormatter` class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: `yyyy-MM-dd'T'HH:mm:ssX`. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.
`quotecharacter`::
TBD
() TBD
[[ml-apilimits]]
===== Analysis Limits
@@ -220,7 +219,7 @@ If necessary, the limits can also be updated after the job is created.
The `analysis_limits` object has the following properties:
`categorization_examples_limit`::
(+long+) The maximum number of examples stored per category in memory and
(long) The maximum number of examples stored per category in memory and
in the results data store. The default value is 4. If you increase this value,
more examples are available, however it requires that you have more storage available.
If you set this value to `0`, no examples are stored.
@@ -229,6 +228,6 @@ The `analysis_limits` object has the following properties:
NOTE: The `categorization_examples_limit` only applies to analysis that uses categorization.
////
`model_memory_limit`::
(+long+) The maximum amount of memory, in MiB, that the mathematical models can use.
(long) The maximum amount of memory, in MiB, that the mathematical models can use.
Once this limit is approached, data pruning becomes more aggressive.
Upon exceeding this limit, new entities are not modeled. The default value is 4096.
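
Pulling the required pieces together (`bucket_span` plus at least one detector), an `analysis_config` object might look like the sketch below. The detector reuses the `events_per_min` example from earlier in this commit; the `host` influencer is a placeholder.

[source,js]
--------------------------------------------------
// Sketch only: detector field names and the influencer are placeholders
{
  "bucket_span": 300,
  "detectors": [
    {
      "function": "low_sum",
      "field_name": "events_per_min",
      "detector_description": "low_sum(events_per_min)"
    }
  ],
  "influencers": ["host"]
}
--------------------------------------------------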

View File

@@ -21,16 +21,16 @@ The job is ready to resume its analysis from where it left off, once new data is
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
===== Request Body
`open_timeout`::
(+time+) Controls the time to wait until a job has opened.
(time) Controls the time to wait until a job has opened.
The default value is 30 minutes.
`ignore_downtime`::
(+boolean+) If true (default), any gap in data since it was
(boolean) If true (default), any gap in data since it was
last closed is treated as a maintenance window. That is to say, it is not an anomaly
////
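
An open request that sets both body options above might look like this sketch (job ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; both body options come from the list above
POST _xpack/ml/anomaly_detectors/event-rate/_open
{
  "open_timeout": "5m",
  "ignore_downtime": true
}
--------------------------------------------------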

View File

@@ -25,15 +25,15 @@ or a comma separated list.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
===== Request Body
`reset_start`::
(+string+) Specifies the start of the bucket resetting range
(string) Specifies the start of the bucket resetting range
`reset_end`::
(+string+) Specifies the end of the bucket resetting range
(string) Specifies the end of the bucket resetting range
////
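
A minimal data post might look like the sketch below. The record layout reuses the `events_per_min` field from the datafeed example elsewhere in this commit and is not prescribed by this page.

[source,js]
--------------------------------------------------
// Hypothetical job ID and record; jobs accept JSON records on the _data endpoint
POST _xpack/ml/anomaly_detectors/event-rate/_data
{"time": 1491004800000, "events_per_min": 27}
--------------------------------------------------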
===== Responses

View File

@@ -17,7 +17,7 @@ The API returns example data by using the current data feed settings.
===== Path Parameters
`feed_id` (required)::
(+string+) Identifier for the data feed
(string) Identifier for the data feed
////
===== Request Body

View File

@@ -16,28 +16,28 @@ data feed to each job.
===== Path Parameters
`feed_id` (required)::
(+string+) A numerical character string that uniquely identifies the data feed.
(string) A numerical character string that uniquely identifies the data feed.
===== Request Body
`aggregations`::
(+object+) TBD.
(object) TBD.
`chunking_config`::
(+object+) TBD.
(object) TBD.
For example: {"mode": "manual", "time_span": "30000000ms"}
`frequency`::
TBD: For example: "150s"
`indexes` (required)::
(+array+) An array of index names. For example: ["it_ops_metrics"]
(array) An array of index names. For example: ["it_ops_metrics"]
`job_id` (required)::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`query`::
(+object+) The query that retrieves the data.
(object) The query that retrieves the data.
By default, this property has the following value: `{"match_all": {"boost": 1}}`.
`query_delay`::

View File

@@ -15,31 +15,31 @@ The create job API enables you to instantiate a job.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
===== Request Body
`analysis_config`::
(+object+) The analysis configuration, which specifies how to analyze the data.
(object) The analysis configuration, which specifies how to analyze the data.
See <<ml-analysisconfig, analysis configuration objects>>.
`analysis_limits`::
Optionally specifies runtime limits for the job. See <<ml-apilimits,analysis limits>>.
`data_description`::
(+object+) Describes the format of the input data.
(object) Describes the format of the input data.
See <<ml-datadescription,data description objects>>.
`description`::
(+string+) An optional description of the job.
(string) An optional description of the job.
`model_snapshot_retention_days`::
(+long+) The time in days that model snapshots are retained for the job.
(long) The time in days that model snapshots are retained for the job.
Older snapshots are deleted. The default value is 1 day.
`results_index_name`::
(+string+) TBD. For example, `shared`.
(string) TBD. For example, `shared`.
////
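
Combining the required request-body objects above, a create request might look like this sketch. The job ID is a placeholder, and the `data_description` values mirror the defaults documented elsewhere in this commit.

[source,js]
--------------------------------------------------
// Sketch only: job ID is a placeholder; time_field "time" is the documented default
PUT _xpack/ml/anomaly_detectors/event-rate
{
  "description": "Illustrative event rate job",
  "analysis_config": {
    "bucket_span": 300,
    "detectors": [{"function": "count"}]
  },
  "data_description": {
    "time_field": "time",
    "time_format": "epoch_ms"
  }
}
--------------------------------------------------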
===== Responses

View File

@@ -33,23 +33,23 @@ the anomaly records into buckets.
A record object has the following properties:
`actual`::
(+number+) The actual value for the bucket.
(number) The actual value for the bucket.
`bucket_span`::
(+number+) The length of the bucket in seconds.
(number) The length of the bucket in seconds.
This value matches the `bucket_span` that is specified in the job.
//`byFieldName`::
//TBD: This field did not appear in my results, but it might be a valid property.
// (+string+) The name of the analyzed field, if it was specified in the detector.
// (string) The name of the analyzed field, if it was specified in the detector.
//`byFieldValue`::
//TBD: This field did not appear in my results, but it might be a valid property.
// (+string+) The value of `by_field_name`, if it was specified in the detector.
// (string) The value of `by_field_name`, if it was specified in the detector.
//`causes`
//TBD: This field did not appear in my results, but it might be a valid property.
// (+array+) If an over field was specified in the detector, this property
// (array) If an over field was specified in the detector, this property
// contains an array of anomaly records that are the causes for the anomaly
// that has been identified for the over field.
// If no over fields exist, this field will not be present.
@@ -62,76 +62,76 @@ A record object has the following properties:
// Probability and scores are not applicable to causes.
`detector_index`::
(+number+) A unique identifier for the detector.
(number) A unique identifier for the detector.
`field_name`::
(+string+) Certain functions require a field to operate on.
(string) Certain functions require a field to operate on.
For those functions, this is the name of the field to be analyzed.
`function`::
(+string+) The function in which the anomaly occurs.
(string) The function in which the anomaly occurs.
`function_description`::
(+string+) The description of the function in which the anomaly occurs, as
(string) The description of the function in which the anomaly occurs, as
specified in the detector configuration information.
`influencers`::
(+array+) If `influencers` was specified in the detector configuration, then
(array) If `influencers` was specified in the detector configuration, then
this array contains influencers that contributed to or were to blame for an
anomaly.
`initial_record_score`::
(++) TBD. For example, 94.1386.
() TBD. For example, 94.1386.
`is_interim`::
(+boolean+) If true, then this anomaly record is an interim result.
(boolean) If true, then this anomaly record is an interim result.
In other words, it is calculated based on partial input data.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
//`kpi_indicator`::
// (++) TBD. For example, ["online_purchases"]
// () TBD. For example, ["online_purchases"]
// I did not receive this in later tests. Is it still valid?
`partition_field_name`::
(+string+) The name of the partition field that was used in the analysis, if
(string) The name of the partition field that was used in the analysis, if
such a field was specified in the detector.
//`overFieldName`::
// TBD: This field did not appear in my results, but it might be a valid property.
// (+string+) The name of the over field, if `over_field_name` was specified
// (string) The name of the over field, if `over_field_name` was specified
// in the detector.
`partition_field_value`::
(+string+) The value of the partition field that was used in the analysis, if
(string) The value of the partition field that was used in the analysis, if
`partition_field_name` was specified in the detector.
`probability`::
(+number+) The probability of the individual anomaly occurring.
(number) The probability of the individual anomaly occurring.
This value is in the range 0 to 1. For example, 0.0000772031.
//This value is held to a high precision of over 300 decimal places.
//In scientific notation, a value of 3.24E-300 is highly unlikely and therefore
//highly anomalous.
`record_score`::
(+number+) An anomaly score for the bucket time interval.
(number) An anomaly score for the bucket time interval.
The score is calculated based on a sophisticated aggregation of the anomalies
in the bucket.
//Use this score for rate-controlled alerting.
`result_type`::
(+string+) TBD. For example, "record".
(string) TBD. For example, "record".
`sequence_num`::
(++) TBD. For example, 1.
() TBD. For example, 1.
`timestamp`::
(+date+) The start time of the bucket that contains the record, specified in
(date) The start time of the bucket that contains the record, specified in
ISO 8601 format. For example, 1454020800000.
`typical`::
(+number+) The typical value for the bucket, according to analytical modeling.
(number) The typical value for the bucket, according to analytical modeling.
[float]
[[ml-results-influencers]]
@@ -150,99 +150,99 @@ records that contain this influencer.
An influencer object has the following properties:
`bucket_span`::
(++) TBD. For example, 300.
() TBD. For example, 300.
// Same as for buckets? i.e. (+unsigned integer+) The length of the bucket in seconds.
// Same as for buckets? i.e. (unsigned integer) The length of the bucket in seconds.
// This value is equal to the `bucket_span` value in the job configuration.
`influencer_score`::
(+number+) An anomaly score for the influencer in this bucket time interval.
(number) An anomaly score for the influencer in this bucket time interval.
The score is calculated based upon a sophisticated aggregation of the anomalies
in the bucket for this entity. For example: 94.1386.
`initial_influencer_score`::
(++) TBD. For example, 83.3831.
() TBD. For example, 83.3831.
`influencer_field_name`::
(+string+) The field name of the influencer.
(string) The field name of the influencer.
`influencer_field_value`::
(+string+) The entity that influenced, contributed to, or was to blame for the
(string) The entity that influenced, contributed to, or was to blame for the
anomaly.
`is_interim`::
(+boolean+) If true, then this is an interim result.
(boolean) If true, then this is an interim result.
In other words, it is calculated based on partial input data.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`kpi_indicator`::
(++) TBD. For example, "online_purchases".
() TBD. For example, "online_purchases".
`probability`::
(+number+) The probability that the influencer has this behavior.
(number) The probability that the influencer has this behavior.
This value is in the range 0 to 1. For example, 0.0000109783.
// For example, 0.03 means 3%. This value is held to a high precision of over
//300 decimal places. In scientific notation, a value of 3.24E-300 is highly
//unlikely and therefore highly anomalous.
`result_type`::
(++) TBD. For example, "influencer".
() TBD. For example, "influencer".
`sequence_num`::
(++) TBD. For example, 2.
() TBD. For example, 2.
`timestamp`::
(+date+) Influencers are produced in buckets. This value is the start time
(date) Influencers are produced in buckets. This value is the start time
of the bucket, specified in ISO 8601 format. For example, 1454943900000.
A bucket influencer object has the following properties:
`anomaly_score`::
(+number+) TBD
(number) TBD
//It is unclear how this differs from the influencer_score.
//An anomaly score for the influencer in this bucket time interval.
//The score is calculated based upon a sophisticated aggregation of the anomalies
//in the bucket for this entity. For example: 94.1386.
`bucket_span`::
(++) TBD. For example, 300.
() TBD. For example, 300.
////
// Same as for buckets? i.e. (+unsigned integer+) The length of the bucket in seconds.
// Same as for buckets? i.e. (unsigned integer) The length of the bucket in seconds.
// This value is equal to the `bucket_span` value in the job configuration.
////
`initial_anomaly_score`::
(++) TBD. For example, 83.3831.
() TBD. For example, 83.3831.
`influencer_field_name`::
(+string+) The field name of the influencer.
(string) The field name of the influencer.
`is_interim`::
(+boolean+) If true, then this is an interim result.
(boolean) If true, then this is an interim result.
In other words, it is calculated based on partial input data.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`probability`::
(+number+) The probability that the influencer has this behavior.
(number) The probability that the influencer has this behavior.
This value is in the range 0 to 1. For example, 0.0000109783.
// For example, 0.03 means 3%. This value is held to a high precision of over
//300 decimal places. In scientific notation, a value of 3.24E-300 is highly
//unlikely and therefore highly anomalous.
`raw_anomaly_score`::
(++) TBD. For example, 2.32119.
() TBD. For example, 2.32119.
`result_type`::
(++) TBD. For example, "bucket_influencer".
() TBD. For example, "bucket_influencer".
`sequence_num`::
(++) TBD. For example, 2.
() TBD. For example, 2.
`timestamp`::
(+date+) Influencers are produced in buckets. This value is the start time
(date) Influencers are produced in buckets. This value is the start time
of the bucket, specified in ISO 8601 format. For example, 1454943900000.
[float]
@@ -270,48 +270,48 @@ accessing the records resource directly and filtering upon date range.
A bucket resource has the following properties:
`anomaly_score`::
(+number+) The aggregated and normalized anomaly score.
(number) The aggregated and normalized anomaly score.
All the anomaly records in the bucket contribute to this score.
`bucket_influencers`::
(+array+) An array of influencer objects.
(array) An array of influencer objects.
For more information, see <<ml-results-influencers,Influencers>>.
`bucket_span`::
(+unsigned integer+) The length of the bucket in seconds. This value is
(unsigned integer) The length of the bucket in seconds. This value is
equal to the `bucket_span` value in the job configuration.
`event_count`::
(+unsigned integer+) The number of input data records processed in this bucket.
(unsigned integer) The number of input data records processed in this bucket.
`initial_anomaly_score`::
(+number+) The value of `anomaly_score` at the time the bucket result was
(number) The value of `anomaly_score` at the time the bucket result was
created. This is normalized based on data which has already been seen;
this is not re-normalized and therefore is not adjusted for more recent data.
//TBD. This description is unclear.
`is_interim`::
(+boolean+) If true, then this bucket result is an interim result.
(boolean) If true, then this bucket result is an interim result.
In other words, it is calculated based on partial input data.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`partition_scores`::
(+TBD+) TBD. For example, [].
(TBD) TBD. For example, [].
`processing_time_ms`::
(+unsigned integer+) The time in milliseconds taken to analyze the bucket
(unsigned integer) The time in milliseconds taken to analyze the bucket
contents and produce results.
`record_count`::
(+unsigned integer+) The number of anomaly records in this bucket.
(unsigned integer) The number of anomaly records in this bucket.
`result_type`::
(+string+) TBD. For example, "bucket".
(string) TBD. For example, "bucket".
`timestamp`::
(+date+) The start time of the bucket, specified in ISO 8601 format.
(date) The start time of the bucket, specified in ISO 8601 format.
For example, 1454020800000. This timestamp uniquely identifies the bucket.
NOTE: Events that occur exactly at the timestamp of the bucket are included in
@@ -329,24 +329,24 @@ values.
A category resource has the following properties:
`category_id`::
(+unsigned integer+) A unique identifier for the category.
(unsigned integer) A unique identifier for the category.
`examples`::
(+array+) A list of examples of actual values that matched the category.
(array) A list of examples of actual values that matched the category.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`max_matching_length`::
(+unsigned integer+) The maximum length of the fields that matched the
(unsigned integer) The maximum length of the fields that matched the
category.
//TBD: Still true? "The value is increased by 10% to enable matching for
//similar fields that have not been analyzed"
`regex`::
(+string+) A regular expression that is used to search for values that match
(string) A regular expression that is used to search for values that match
the category.
`terms`::
(+string+) A space separated list of the common tokens that are matched in
(string) A space separated list of the common tokens that are matched in
values of the category.

View File

@@ -53,17 +53,17 @@ ensure that you do not expect data imminently.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
`snapshot_id` (required)::
(+string+) Identifier for the model snapshot
(string) Identifier for the model snapshot
===== Request Body
`delete_intervening_results`::
(+boolean+) If true, deletes the results in the time period between the
latest results and the time of the reverted snapshot. It also resets the
model to accept records for this time period.
(boolean) If true, deletes the results in the time period between the
latest results and the time of the reverted snapshot. It also resets the
model to accept records for this time period.
NOTE: If you choose not to delete intervening results when reverting a snapshot,
the job will not accept input data that is older than the current time.
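
A revert request that also deletes intervening results might look like this sketch; the job ID is a placeholder and the snapshot ID reuses the example from the model snapshot resource page in this commit.

[source,js]
--------------------------------------------------
// Hypothetical job and snapshot IDs; deletes results newer than the snapshot
POST _xpack/ml/anomaly_detectors/event-rate/model_snapshots/1491852978/_revert
{
  "delete_intervening_results": true
}
--------------------------------------------------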

View File

@@ -20,32 +20,32 @@ When choosing a new value, consider the following:
A model snapshot resource has the following properties:
`description`::
(+string+) An optional description of the job.
(string) An optional description of the job.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`latest_record_time_stamp`::
(++) TBD. For example: 1455232663000.
() TBD. For example: 1455232663000.
`latest_result_time_stamp`::
(++) TBD. For example: 1455229800000.
() TBD. For example: 1455229800000.
`model_size_stats`::
(+object+) TBD. See <<ml-snapshot-stats,Model Size Statistics>>.
(object) TBD. See <<ml-snapshot-stats,Model Size Statistics>>.
`retain`::
(+boolean+) TBD. For example: false.
(boolean) TBD. For example: false.
`snapshot_id`::
(+string+) A numerical character string that uniquely identifies the model
(string) A numerical character string that uniquely identifies the model
snapshot. For example: "1491852978".
`snapshot_doc_count`::
(++) TBD. For example: 1.
() TBD. For example: 1.
`timestamp`::
(+date+) The creation timestamp for the snapshot, specified in ISO 8601 format.
(date) The creation timestamp for the snapshot, specified in ISO 8601 format.
For example: 1491852978000.
[float]
@@ -55,31 +55,31 @@ A model snapshot resource has the following properties:
The `model_size_stats` object has the following properties:
`bucket_allocation_failures_count`::
(++) TBD. For example: 0.
() TBD. For example: 0.
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`log_time`::
(++) TBD. For example: 1491852978000.
() TBD. For example: 1491852978000.
`memory_status`::
(++) TBD. For example: "ok".
() TBD. For example: "ok".
`model_bytes`::
(++) TBD. For example: 100393.
() TBD. For example: 100393.
`result_type`::
(++) TBD. For example: "model_size_stats".
() TBD. For example: "model_size_stats".
`timestamp`::
(++) TBD. For example: 1455229800000.
() TBD. For example: 1455229800000.
`total_by_field_count`::
(++) TBD. For example: 13.
() TBD. For example: 13.
`total_over_field_count`::
(++) TBD. For example: 0.
() TBD. For example: 0.
`total_partition_field_count`::
(++) TBD. For example: 2.
() TBD. For example: 2.

View File

@@ -53,20 +53,20 @@ processed record, that value is ignored.
===== Path Parameters
`feed_id` (required)::
(+string+) Identifier for the data feed
(string) Identifier for the data feed
===== Request Body
`end`::
(+string+) The time that the data feed should end. This value is exclusive.
(string) The time that the data feed should end. This value is exclusive.
The default value is an empty string.
`start`::
(+string+) The time that the data feed should begin. This value is inclusive.
(string) The time that the data feed should begin. This value is inclusive.
The default value is an empty string.
`timeout`::
(+time+) Controls the amount of time to wait until a data feed starts.
(time) Controls the amount of time to wait until a data feed starts.
The default value is 20 seconds.
////
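
A start request using the body options above might look like this sketch; the datafeed ID and path prefix are assumed, and `start` is inclusive while `end` is exclusive, as noted above.

[source,js]
--------------------------------------------------
// Placeholder datafeed ID; body options come from the list above
POST _xpack/ml/datafeeds/datafeed-it-ops/_start
{
  "start": "2017-04-07T18:22:16Z",
  "timeout": "20s"
}
--------------------------------------------------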

View File

@@ -16,15 +16,15 @@ A data feed can be opened and closed multiple times throughout its lifecycle.
===== Path Parameters
`feed_id` (required)::
(+string+) Identifier for the data feed
(string) Identifier for the data feed
===== Request Body
`force`::
(+boolean+) If true, the data feed is stopped forcefully.
(boolean) If true, the data feed is stopped forcefully.
`timeout`::
(+time+) Controls the amount of time to wait until a data feed stops.
(time) Controls the amount of time to wait until a data feed stops.
The default value is 20 seconds.
////
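
A stop request setting both body options above might look like this sketch (datafeed ID and path prefix assumed):

[source,js]
--------------------------------------------------
// Placeholder datafeed ID; force and timeout are the two body options above
POST _xpack/ml/datafeeds/datafeed-it-ops/_stop
{
  "force": false,
  "timeout": "30s"
}
--------------------------------------------------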

View File

@@ -15,40 +15,40 @@ The update data feed API enables you to update certain properties of a data feed
===== Path Parameters
`feed_id` (required)::
(+string+) Identifier for the data feed
(string) Identifier for the data feed
===== Request Body
The following properties can be updated after the data feed is created:
`aggregations`::
(+object+) TBD.
(object) TBD.
`chunking_config`::
(+object+) TBD.
(object) TBD.
For example: {"mode": "manual", "time_span": "30000000ms"}
`frequency`::
TBD: For example: "150s"
() TBD: For example: "150s"
`indexes` (required)::
(+array+) An array of index names. For example: ["it_ops_metrics"]
(array) An array of index names. For example: ["it_ops_metrics"]
`job_id`::
(+string+) A numerical character string that uniquely identifies the job.
(string) A numerical character string that uniquely identifies the job.
`query`::
(+object+) The query that retrieves the data.
(object) The query that retrieves the data.
By default, this property has the following value: `{"match_all": {"boost": 1}}`.
`query_delay`::
TBD. For example: "60s"
() TBD. For example: "60s"
`scroll_size`::
TBD. For example, 1000
() TBD. For example, 1000
`types` (required)::
TBD. For example: ["network","sql","kpi"]
() TBD. For example: ["network","sql","kpi"]
For more information about these properties,
see <<ml-datafeed-resource, Data Feed Resources>>.
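
An update that resets the query and scroll size to the example values above might look like this sketch; the datafeed ID and the `_update` endpoint are assumptions from the 5.x API layout.

[source,js]
--------------------------------------------------
// Placeholder datafeed ID; _update endpoint assumed
POST _xpack/ml/datafeeds/datafeed-it-ops/_update
{
  "query": {"match_all": {"boost": 1}},
  "scroll_size": 1000
}
--------------------------------------------------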

View File

@@ -17,18 +17,18 @@ data is sent to it.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
===== Request Body
The following properties can be updated after the job is created:
`analysis_config`::
(+object+) The analysis configuration, which specifies how to analyze the data.
(object) The analysis configuration, which specifies how to analyze the data.
See <<ml-analysisconfig, analysis configuration objects>>. In particular, the following properties can be updated: `categorization_filters`, `detector_description`, TBD.
`analysis_limits`::
(+object+) Specifies runtime limits for the job.
(object) Specifies runtime limits for the job.
See <<ml-apilimits,analysis limits>>. NOTE:
* You can update the `analysis_limits` only while the job is closed.
* The `model_memory_limit` property value cannot be decreased.
@@ -36,7 +36,7 @@ The following properties can be updated after the job is created:
increasing the `model_memory_limit` is not recommended.
`description`::
(+string+) An optional description of the job.
(string) An optional description of the job.
////
This expects data to be sent in JSON format using the POST `_data` API.
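
An update that changes only the job description, one of the updatable properties above, might look like this sketch (job ID and `_update` endpoint assumed):

[source,js]
--------------------------------------------------
// Hypothetical job ID; only an updatable property from the list above is sent
POST _xpack/ml/anomaly_detectors/event-rate/_update
{
  "description": "Updated description (illustrative)"
}
--------------------------------------------------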

View File

@@ -19,20 +19,20 @@ and new data has been sent to it.
===== Path Parameters
`job_id` (required)::
(+string+) Identifier for the job
(string) Identifier for the job
`snapshot_id` (required)::
(+string+) Identifier for the model snapshot
(string) Identifier for the model snapshot
===== Request Body
The following properties can be updated after the model snapshot is created:
`description`::
(+string+) An optional description of the model snapshot.
(string) An optional description of the model snapshot.
`retain`::
(+boolean+) TBD.
(boolean) TBD.
////
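
An update that sets both updatable snapshot properties above might look like this sketch; the job and snapshot IDs are placeholders.

[source,js]
--------------------------------------------------
// Hypothetical job and snapshot IDs; body fields from the list above
POST _xpack/ml/anomaly_detectors/event-rate/model_snapshots/1491852978/_update
{
  "description": "Snapshot kept before re-running April data",
  "retain": true
}
--------------------------------------------------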
===== Responses