[DOCS] Fix minor ML documentation problems (elastic/x-pack-elasticsearch#1336)

Original commit: elastic/x-pack-elasticsearch@53e65b90fc
Lisa Cawley 2017-05-08 06:53:04 -07:00 committed by GitHub
parent 28f2ba3ef8
commit 0542d730c9
8 changed files with 29 additions and 34 deletions

View File

@@ -30,10 +30,10 @@ A {dfeed} resource has the following properties:
   bucket spans, or, for longer bucket spans, a sensible fraction of the bucket
   span. For example: "150s"
-`indices` (required)::
+`indices`::
   (array) An array of index names. For example: ["it_ops_metrics"]
-`job_id` (required)::
+`job_id`::
   (string) The unique identifier for the job to which the {dfeed} sends data.
 `query`::
@@ -52,7 +52,7 @@ A {dfeed} resource has the following properties:
   (unsigned integer) The `size` parameter that is used in {es} searches.
   The default value is `1000`.
-`types` (required)::
+`types`::
   (array) A list of types to search for within the specified indices.
   For example: ["network","sql","kpi"].
@@ -66,7 +66,7 @@ chunks are calculated and is an advanced configuration option.
 A chunking configuration object has the following properties:
-`mode` (required)::
+`mode`::
   There are three available modes: +
 `auto`::: The chunk size will be dynamically calculated. This is the default
   and recommended value.
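The datafeed properties changed above (`indices`, `job_id`, and `types` are no longer flagged as required in the resource table) can be illustrated with a sketch of a datafeed configuration body. This is a minimal sketch, not taken from the commit itself; the job ID and index values are borrowed from the documentation's own examples.

```python
import json

# Hypothetical datafeed configuration body built from the documented
# properties: indices, job_id, types, scroll_size, and a
# chunking_config whose "auto" mode is the default and recommended value.
datafeed_config = {
    "job_id": "it_ops_kpi",               # job to which the datafeed sends data
    "indices": ["it_ops_metrics"],        # array of index names to search
    "types": ["network", "sql", "kpi"],   # types to search within those indices
    "scroll_size": 1000,                  # `size` used in {es} searches (default 1000)
    "chunking_config": {"mode": "auto"},  # chunk size calculated dynamically
}

# Would be sent as e.g.: PUT _xpack/ml/datafeeds/datafeed-it_ops_kpi
print(json.dumps(datafeed_config, indent=2))
```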

View File

@@ -44,10 +44,10 @@ This API presents a chronological view of the records, grouped by bucket.
 `expand`::
   (boolean) If true, the output includes anomaly records.
-`from`::
+`page`::
+`from`:::
   (integer) Skips the specified number of buckets.
-`size`::
+`size`:::
   (integer) Specifies the maximum number of buckets to obtain.
 `start`::
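The change above nests the pagination parameters under a `page` object. A minimal sketch of the resulting get-buckets request body (the job name is hypothetical):

```python
import json

# Sketch of a get-buckets request body after the documentation change:
# `from` and `size` now live inside a nested `page` object.
get_buckets_body = {
    "expand": True,          # include anomaly records in the output
    "page": {
        "from": 0,           # skip this many buckets
        "size": 100,         # maximum number of buckets to obtain
    },
}

# Would be sent as e.g.: GET _xpack/ml/anomaly_detectors/it_ops_kpi/results/buckets
print(json.dumps(get_buckets_body, indent=2))
```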

View File

@@ -29,13 +29,13 @@ in a job.
   (boolean) If true, the output excludes interim results.
   By default, interim results are included.
-`from`::
-  (integer) Skips the specified number of influencers.
 `influencer_score`::
   (double) Returns influencers with anomaly scores higher than this value.
-`size`::
+`page`::
+`from`:::
+  (integer) Skips the specified number of influencers.
+`size`:::
   (integer) Specifies the maximum number of influencers to obtain.
 `sort`::

View File

@@ -26,8 +26,8 @@ The get jobs API enables you to retrieve usage information for jobs.
 The API returns the following information:
 `jobs`::
-  (array) An array of job count objects.
-  For more information, see <<ml-jobstats,Job Stats>>.
+  (array) An array of job statistics objects.
+  For more information, see <<ml-jobstats,Job Statistics>>.
 ===== Authorization
@@ -47,8 +47,7 @@ GET _xpack/ml/anomaly_detectors/farequote/_stats
 // CONSOLE
 // TEST[skip:todo]
-In this example, the API returns a single result that matches the specified
-score and time constraints:
+The API returns the following results:
 [source,js]
 ----
 {

View File

@@ -47,8 +47,7 @@ GET _xpack/ml/anomaly_detectors/farequote
 // CONSOLE
 // TEST[skip:todo]
-In this example, the API returns a single result that matches the specified
-score and time constraints:
+The API returns the following results:
 [source,js]
 ----
 {

View File

@@ -29,15 +29,15 @@ The get records API enables you to retrieve anomaly records for a job.
   (boolean) If true, the output excludes interim results.
   By default, interim results are included.
-`from`::
+`page`::
+`from`:::
   (integer) Skips the specified number of records.
+`size`:::
+  (integer) Specifies the maximum number of records to obtain.
 `record_score`::
   (double) Returns records with anomaly scores higher than this value.
-`size`::
-  (integer) Specifies the maximum number of records to obtain.
 `sort`::
   (string) Specifies the sort field for the requested records.
   By default, the records are sorted by the `anomaly_score` value.
@@ -66,7 +66,7 @@ roles provide these privileges. For more information, see
 ===== Examples
-The following example gets bucket information for the `it-ops-kpi` job:
+The following example gets record information for the `it-ops-kpi` job:
 [source,js]
 --------------------------------------------------
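The get-records parameters documented above combine a score filter with the nested `page` object. A minimal sketch of such a request body, using the `it-ops-kpi` job named in the example:

```python
import json

# Sketch of a get-records request body: `record_score` filters out
# records whose anomaly scores are at or below the threshold, and
# `from`/`size` sit inside the nested `page` object. The threshold
# value here is illustrative.
get_records_body = {
    "record_score": 75.0,               # only records scoring above 75
    "page": {"from": 0, "size": 25},    # skip none, return at most 25
}

# Would be sent as e.g.: GET _xpack/ml/anomaly_detectors/it-ops-kpi/results/records
print(json.dumps(get_records_body, indent=2))
```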

View File

@@ -23,8 +23,7 @@ TIP: For very large models (several GB), persistence could take 10-20 minutes,
 so do not set the `background_persist_interval` value too low.
 `create_time`::
-  (string) The time the job was created, in ISO 8601 format.
-  For example, `1491007356077`.
+  (string) The time the job was created. For example, `1491007356077`.
 `data_description`::
   (object) Describes the data format and how APIs parse timestamp fields.
@@ -77,7 +76,7 @@ so do not set the `background_persist_interval` value too low.
 An analysis configuration object has the following properties:
-`bucket_span` (required)::
+`bucket_span`::
   (time units) The size of the interval that the analysis is aggregated into,
   typically between `5m` and `1h`. The default value is `5m`.
@@ -95,7 +94,7 @@ An analysis configuration object has the following properties:
   consideration for defining categories. For example, you can exclude SQL
   statements that appear in your log files.
-`detectors` (required)::
+`detectors`::
   (array) An array of detector configuration objects,
   which describe the anomaly detectors that are used in the job.
   See <<ml-detectorconfig,detector configuration objects>>. +
@@ -133,10 +132,10 @@ NOTE: To use the `multivariate_by_fields` property, you must also specify
 `by_field_name` in your detector.
 `summary_count_field_name`::
-  (string) If not null, the data fed to the job is expected to be pre-summarized.
-  This property value is the name of the field that contains the count of raw
-  data points that have been summarized. The same `summary_count_field_name`
-  applies to all detectors in the job. +
+  (string) If not null, the data that is fed to the job is expected to be
+  pre-summarized. This property value is the name of the field that contains
+  the count of raw data points that have been summarized.
+  The same `summary_count_field_name` applies to all detectors in the job. +
 NOTE: The `summary_count_field_name` property cannot be used with the `metric`
 function.
@@ -180,7 +179,7 @@ Each detector has the following properties:
 NOTE: The `field_name` cannot contain double quotes or backslashes.
-`function` (required)::
+`function`::
   (string) The analysis function that is used.
   For example, `count`, `rare`, `mean`, `min`, `max`, and `sum`. For more
   information, see <<ml-functions>>.
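The analysis configuration properties above (`bucket_span`, `detectors` with a `function` each, and the optional `summary_count_field_name`) can be sketched as a configuration object. This is an illustrative sketch; the field names are hypothetical, not from the commit:

```python
import json

# Sketch of an analysis_config object built from the documented
# properties. bucket_span defaults to 5m; each detector names the
# analysis function it uses.
analysis_config = {
    "bucket_span": "5m",                  # interval the analysis aggregates into
    "detectors": [
        {"function": "mean", "field_name": "responsetime"},  # hypothetical field
        {"function": "count"},
    ],
    # Only when input data is pre-summarized; applies to all detectors
    # in the job and cannot be combined with the `metric` function.
    "summary_count_field_name": "doc_count",
}

print(json.dumps(analysis_config, indent=2))
```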

View File

@@ -47,8 +47,6 @@ is planned for.
 Model size (in bytes) is available as part of the Job Resource Model Size Stats.
 ////
 IMPORTANT: Before you revert to a saved snapshot, you must close the job.
-Sending data to a closed job changes its status to `open`, so you must also
-ensure that you do not expect data imminently.
 ===== Path Parameters
@@ -64,7 +62,7 @@ ensure that you do not expect data imminently.
 `delete_intervening_results`::
   (boolean) If true, deletes the results in the time period between the
   latest results and the time of the reverted snapshot. It also resets the
-  model to accept records for this time period.
+  model to accept records for this time period. The default value is false.
 NOTE: If you choose not to delete intervening results when reverting a snapshot,
 the job will not accept input data that is older than the current time.
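The `delete_intervening_results` behavior documented above (default false) can be sketched as the body of a revert request. The job and snapshot identifiers here are hypothetical:

```python
import json

# Sketch of a revert-snapshot request body: setting
# delete_intervening_results to true deletes results between the
# reverted snapshot and the latest results, and resets the model to
# accept records for that period again.
revert_params = {"delete_intervening_results": True}

# Would be sent as e.g.:
# POST _xpack/ml/anomaly_detectors/it_ops_kpi/model_snapshots/1491007364/_revert
print(json.dumps(revert_params))
```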