[DOCS] Fix minor ML documentation problems (elastic/x-pack-elasticsearch#1336)
Original commit: elastic/x-pack-elasticsearch@53e65b90fc
This commit is contained in: parent 28f2ba3ef8, commit 0542d730c9
@@ -30,10 +30,10 @@ A {dfeed} resource has the following properties:
bucket spans, or, for longer bucket spans, a sensible fraction of the bucket
span. For example: "150s"

-`indices` (required)::
+`indices`::
(array) An array of index names. For example: ["it_ops_metrics"]

-`job_id` (required)::
+`job_id`::
(string) The unique identifier for the job to which the {dfeed} sends data.

`query`::
@@ -52,7 +52,7 @@ A {dfeed} resource has the following properties:
(unsigned integer) The `size` parameter that is used in {es} searches.
The default value is `1000`.

-`types` (required)::
+`types`::
(array) A list of types to search for within the specified indices.
For example: ["network","sql","kpi"].

@@ -66,7 +66,7 @@ chunks are calculated and is an advanced configuration option.

A chunking configuration object has the following properties:

-`mode` (required)::
+`mode`::
There are three available modes: +
`auto`::: The chunk size will be dynamically calculated. This is the default
and recommended value.
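For reference, a datafeed that uses the properties described above could be created with a request along the following lines. This sketch is not part of the commit; the datafeed ID is illustrative, and the job ID, index, types, and default values are reused from examples in the text:

[source,js]
--------------------------------------------------
PUT _xpack/ml/datafeeds/datafeed-it-ops-kpi
{
  "job_id": "it-ops-kpi",
  "indices": ["it_ops_metrics"],
  "types": ["network", "sql", "kpi"],
  "query": { "match_all": {} },
  "scroll_size": 1000,
  "chunking_config": { "mode": "auto" }
}
--------------------------------------------------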
@@ -44,10 +44,10 @@ This API presents a chronological view of the records, grouped by bucket.
`expand`::
(boolean) If true, the output includes anomaly records.

-`from`::
+`page`::
+`from`:::
(integer) Skips the specified number of buckets.
-
-`size`::
+`size`:::
(integer) Specifies the maximum number of buckets to obtain.

`start`::
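For reference, the nested `page` object introduced here would be used in a get buckets request roughly as follows. This is an illustrative sketch, not part of the commit; the job ID is reused from other examples in this commit and the paging values are arbitrary:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/it-ops-kpi/results/buckets
{
  "expand": true,
  "page": { "from": 0, "size": 100 }
}
--------------------------------------------------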
@@ -29,13 +29,13 @@ in a job.
(boolean) If true, the output excludes interim results.
By default, interim results are included.

-`from`::
-(integer) Skips the specified number of influencers.
-
`influencer_score`::
(double) Returns influencers with anomaly scores higher than this value.

-`size`::
+`page`::
+`from`:::
+(integer) Skips the specified number of influencers.
+`size`:::
(integer) Specifies the maximum number of influencers to obtain.

`sort`::
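A corresponding get influencers request, as a hedged sketch (job ID and values are illustrative, not taken from this diff), would combine `influencer_score` with the new `page` object:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/it-ops-kpi/results/influencers
{
  "influencer_score": 20,
  "page": { "from": 0, "size": 50 }
}
--------------------------------------------------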
@@ -26,8 +26,8 @@ The get jobs API enables you to retrieve usage information for jobs.
The API returns the following information:

`jobs`::
-(array) An array of job count objects.
-For more information, see <<ml-jobstats,Job Stats>>.
+(array) An array of job statistics objects.
+For more information, see <<ml-jobstats,Job Statistics>>.


===== Authorization
@@ -47,8 +47,7 @@ GET _xpack/ml/anomaly_detectors/farequote/_stats
// CONSOLE
// TEST[skip:todo]

-In this example, the API returns a single result that matches the specified
-score and time constraints:
+The API returns the following results:
[source,js]
----
{
@@ -47,8 +47,7 @@ GET _xpack/ml/anomaly_detectors/farequote
// CONSOLE
// TEST[skip:todo]

-In this example, the API returns a single result that matches the specified
-score and time constraints:
+The API returns the following results:
[source,js]
----
{
@@ -29,15 +29,15 @@ The get records API enables you to retrieve anomaly records for a job.
(boolean) If true, the output excludes interim results.
By default, interim results are included.

-`from`::
+`page`::
+`from`:::
(integer) Skips the specified number of records.
+`size`:::
+(integer) Specifies the maximum number of records to obtain.

`record_score`::
(double) Returns records with anomaly scores higher than this value.

-`size`::
-(integer) Specifies the maximum number of records to obtain.
-
`sort`::
(string) Specifies the sort field for the requested records.
By default, the records are sorted by the `anomaly_score` value.
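For reference, a get records request that uses the reorganized `page` object together with `record_score` might look like this sketch (job ID reused from other examples in this commit; the score and paging values are illustrative):

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/it-ops-kpi/results/records
{
  "record_score": 50,
  "page": { "from": 0, "size": 25 }
}
--------------------------------------------------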
@@ -66,7 +66,7 @@ roles provide these privileges. For more information, see

===== Examples

-The following example gets bucket information for the `it-ops-kpi` job:
+The following example gets record information for the `it-ops-kpi` job:

[source,js]
--------------------------------------------------
@@ -23,8 +23,7 @@ TIP: For very large models (several GB), persistence could take 10-20 minutes,
so do not set the `background_persist_interval` value too low.

`create_time`::
-(string) The time the job was created, in ISO 8601 format.
-For example, `1491007356077`.
+(string) The time the job was created. For example, `1491007356077`.

`data_description`::
(object) Describes the data format and how APIs parse timestamp fields.
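As a hedged illustration of the `data_description` object mentioned above, a fragment of a job configuration might look like the following. The field names `time_field` and `time_format`, and the value `timestamp`, are assumptions for illustration and are not taken from this diff:

[source,js]
--------------------------------------------------
"data_description": {
  "time_field": "timestamp",
  "time_format": "epoch_ms"
}
--------------------------------------------------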
@@ -77,7 +76,7 @@ so do not set the `background_persist_interval` value too low.

An analysis configuration object has the following properties:

-`bucket_span` (required)::
+`bucket_span`::
(time units) The size of the interval that the analysis is aggregated into,
typically between `5m` and `1h`. The default value is `5m`.

@@ -95,7 +94,7 @@ An analysis configuration object has the following properties:
consideration for defining categories. For example, you can exclude SQL
statements that appear in your log files.

-`detectors` (required)::
+`detectors`::
(array) An array of detector configuration objects,
which describe the anomaly detectors that are used in the job.
See <<ml-detectorconfig,detector configuration objects>>. +
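Putting `bucket_span` and `detectors` together, a minimal analysis configuration fragment could look like the sketch below. This is illustrative only; the `responsetime` field name is an assumption and the detector function is one of the values listed elsewhere in these docs:

[source,js]
--------------------------------------------------
"analysis_config": {
  "bucket_span": "5m",
  "detectors": [
    { "function": "mean", "field_name": "responsetime" }
  ]
}
--------------------------------------------------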
@@ -133,10 +132,10 @@ NOTE: To use the `multivariate_by_fields` property, you must also specify
`by_field_name` in your detector.

`summary_count_field_name`::
-(string) If not null, the data fed to the job is expected to be pre-summarized.
-This property value is the name of the field that contains the count of raw
-data points that have been summarized. The same `summary_count_field_name`
-applies to all detectors in the job. +
+(string) If not null, the data that is fed to the job is expected to be
+pre-summarized. This property value is the name of the field that contains
+the count of raw data points that have been summarized.
+The same `summary_count_field_name` applies to all detectors in the job. +

NOTE: The `summary_count_field_name` property cannot be used with the `metric`
function.
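A hedged sketch of an analysis configuration for pre-summarized input follows. The field name `events_per_min` is an assumption for illustration; the input is assumed to already carry the count of raw data points per document, as the property description above requires:

[source,js]
--------------------------------------------------
"analysis_config": {
  "bucket_span": "10m",
  "summary_count_field_name": "events_per_min",
  "detectors": [
    { "function": "count" }
  ]
}
--------------------------------------------------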
@@ -180,7 +179,7 @@ Each detector has the following properties:

NOTE: The `field_name` cannot contain double quotes or backslashes.

-`function` (required)::
+`function`::
(string) The analysis function that is used.
For example, `count`, `rare`, `mean`, `min`, `max`, and `sum`. For more
information, see <<ml-functions>>.
@@ -47,8 +47,6 @@ is planned for.
Model size (in bytes) is available as part of the Job Resource Model Size Stats.
////
IMPORTANT: Before you revert to a saved snapshot, you must close the job.
-Sending data to a closed job changes its status to `open`, so you must also
-ensure that you do not expect data imminently.


===== Path Parameters
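Since the note above says the job must be closed before you revert to a snapshot, a request along these lines would be issued first. The job ID is illustrative, reused from other examples in this commit:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/it-ops-kpi/_close
--------------------------------------------------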
@@ -64,7 +62,7 @@ ensure that you do not expect data imminently.
`delete_intervening_results`::
(boolean) If true, deletes the results in the time period between the
latest results and the time of the reverted snapshot. It also resets the
-model to accept records for this time period.
+model to accept records for this time period. The default value is false.

NOTE: If you choose not to delete intervening results when reverting a snapshot,
the job will not accept input data that is older than the current time.
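For reference, a revert request that sets `delete_intervening_results` might look like the following sketch. The job ID and snapshot ID are illustrative and not taken from this diff:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/it-ops-kpi/model_snapshots/1491852978/_revert
{
  "delete_intervening_results": true
}
--------------------------------------------------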