[DOCS] Moves ml folder from x-pack/docs to docs (#33248)

Commit 874ebcb6d4 (parent cdeadfc585)
Lisa Cawley, 2018-08-31 11:56:26 -07:00, committed by GitHub
40 changed files with 50 additions and 140 deletions

docs/build.gradle

@@ -19,6 +19,12 @@
 apply plugin: 'elasticsearch.docs-test'
 
+/* List of files that have snippets that require a gold or platinum licence
+and therefore cannot be tested yet... */
+buildRestTests.expectedUnconvertedCandidates = [
+  'reference/ml/transforms.asciidoc',
+]
+
 integTestCluster {
   /* Enable regexes in painless so our tests don't complain about example
    * snippets that use them. */
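For orientation, the `// TEST[skip:...]` annotations added throughout this commit sit directly below a snippet's `// CONSOLE` line. A minimal sketch of a licence-gated snippet in that style (the job body here is illustrative, not taken from the moved files):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/example_job
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [ { "function": "count" } ]
  },
  "data_description": { "time_field": "timestamp" }
}
----------------------------------
// CONSOLE
// TEST[skip:needs-licence]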

docs/reference/ml/aggregations.asciidoc

@@ -41,7 +41,7 @@ PUT _xpack/ml/anomaly_detectors/farequote
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:farequote_data]
+// TEST[skip:setup:farequote_data]
 
 In this example, the `airline`, `responsetime`, and `time` fields are
 aggregations.
@@ -90,7 +90,7 @@ PUT _xpack/ml/datafeeds/datafeed-farequote
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:farequote_job]
+// TEST[skip:setup:farequote_job]
 
 In this example, the aggregations have names that match the fields that they
 operate on. That is to say, the `max` aggregation is named `time` and its
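The `datafeed-farequote` body is elided by the diff context. A sketch of the naming convention described above, where each aggregation is named after the field it operates on (the shape is assumed from the farequote examples, so treat it as illustrative):

[source,js]
----------------------------------
PUT _xpack/ml/datafeeds/datafeed-farequote
{
  "job_id": "farequote",
  "indices": ["farequote"],
  "aggregations": {
    "buckets": {
      "date_histogram": { "field": "time", "interval": "300s" },
      "aggregations": {
        "time": { "max": { "field": "time" } },
        "responsetime": { "avg": { "field": "responsetime" } }
      }
    }
  }
}
----------------------------------
// CONSOLE
// TEST[skip:setup:farequote_job]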

docs/reference/ml/categories.asciidoc

@@ -44,6 +44,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> The `categorization_field_name` property indicates which field will be
 categorized.
 <2> The resulting categories are used in a detector by setting `by_field_name`,
@@ -127,6 +128,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs2
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> The
 {ref}/analysis-pattern-replace-charfilter.html[`pattern_replace` character filter]
 here achieves exactly the same as the `categorization_filters` in the first
@@ -193,6 +195,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs3
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> Tokens basically consist of hyphens, digits, letters, underscores and dots.
 <2> By default, categorization ignores tokens that begin with a digit.
 <3> By default, categorization also ignores tokens that are hexadecimal numbers.
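As a worked illustration of callouts <1> through <3> above (my example, not from the file): in a message such as `Node 2 stopped at 0xdeadbeef`, the tokens are `Node`, `2`, `stopped`, `at`, and `0xdeadbeef`; under the default rules, `2` is ignored because it begins with a digit and `0xdeadbeef` because it is a hexadecimal number, so only `Node`, `stopped`, and `at` contribute to the category definition.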

docs/reference/ml/configuring.asciidoc

@@ -36,20 +36,20 @@ The scenarios in this section describe some best practices for generating useful
 * <<ml-configuring-transform>>
 * <<ml-configuring-detector-custom-rules>>
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/customurl.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/customurl.asciidoc
 include::customurl.asciidoc[]
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/aggregations.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/aggregations.asciidoc
 include::aggregations.asciidoc[]
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/categories.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/categories.asciidoc
 include::categories.asciidoc[]
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/populations.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/populations.asciidoc
 include::populations.asciidoc[]
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/transforms.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/transforms.asciidoc
 include::transforms.asciidoc[]
 
-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/detector-custom-rules.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/detector-custom-rules.asciidoc
 include::detector-custom-rules.asciidoc[]

docs/reference/ml/customurl.asciidoc

@@ -106,7 +106,7 @@ POST _xpack/ml/anomaly_detectors/sample_job/_update
 }
 ----------------------------------
 //CONSOLE
-//TEST[setup:sample_job]
+//TEST[skip:setup:sample_job]
 
 When you click this custom URL in the anomalies table in {kib}, it opens up the
 *Discover* page and displays source data for the period one hour before and
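The `_update` body is elided by the diff context. A sketch of the kind of time-range custom URL the text describes, using the `$earliest$` and `$latest$` tokens (the URL value and `time_range` here are illustrative):

[source,js]
----------------------------------
POST _xpack/ml/anomaly_detectors/sample_job/_update
{
  "custom_settings": {
    "custom_urls": [
      {
        "url_name": "Raw data",
        "time_range": "2h",
        "url_value": "discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))"
      }
    ]
  }
}
----------------------------------
//CONSOLE
//TEST[skip:setup:sample_job]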

docs/reference/ml/detector-custom-rules.asciidoc

@@ -39,6 +39,7 @@ PUT _xpack/ml/filters/safe_domains
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 Now, we can create our job specifying a scope that uses the `safe_domains`
 filter for the `highest_registered_domain` field:
@@ -70,6 +71,7 @@ PUT _xpack/ml/anomaly_detectors/dns_exfiltration_with_rule
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 As time advances and we see more data and more results, we might encounter new
 domains that we want to add in the filter. We can do that by using the
@@ -83,7 +85,7 @@ POST _xpack/ml/filters/safe_domains/_update
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:ml_filter_safe_domains]
+// TEST[skip:setup:ml_filter_safe_domains]
 
 Note that we can use any of the `partition_field_name`, `over_field_name`, or
 `by_field_name` fields in the `scope`.
@@ -123,6 +125,7 @@ PUT _xpack/ml/anomaly_detectors/scoping_multiple_fields
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 Such a detector will skip results when the values of all 3 scoped fields
 are included in the referenced filters.
@@ -166,6 +169,7 @@ PUT _xpack/ml/anomaly_detectors/cpu_with_rule
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 When there are multiple conditions they are combined with a logical `and`.
 This is useful when we want the rule to apply to a range. We simply create
@@ -205,6 +209,7 @@ PUT _xpack/ml/anomaly_detectors/rule_with_range
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 ==== Custom rules in the life-cycle of a job
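The `rule_with_range` body is elided by the diff context. A sketch of a two-condition range rule of the kind described above, where both conditions must hold for the rule to apply (the detector and values are illustrative):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/rule_with_range
{
  "analysis_config": {
    "bucket_span": "20m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "cpu_utilization",
        "custom_rules": [
          {
            "actions": ["skip_result"],
            "conditions": [
              { "applies_to": "actual", "operator": "gte", "value": 10 },
              { "applies_to": "actual", "operator": "lte", "value": 20 }
            ]
          }
        ]
      }
    ]
  },
  "data_description": { "time_field": "timestamp" }
}
----------------------------------
// CONSOLE
// TEST[skip:needs-licence]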

docs/reference/ml/functions/count.asciidoc

@@ -59,6 +59,7 @@ PUT _xpack/ml/anomaly_detectors/example1
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 This example is probably the simplest possible analysis. It identifies
 time buckets during which the overall count of events is higher or lower than
@@ -86,6 +87,7 @@ PUT _xpack/ml/anomaly_detectors/example2
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 If you use this `high_count` function in a detector in your job, it
 models the event rate for each error code. It detects users that generate an
@@ -110,6 +112,7 @@ PUT _xpack/ml/anomaly_detectors/example3
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 In this example, the function detects when the count of events for a
 status code is lower than usual.
@@ -136,6 +139,7 @@ PUT _xpack/ml/anomaly_detectors/example4
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 If you are analyzing an aggregated `events_per_min` field, do not use a sum
 function (for example, `sum(events_per_min)`). Instead, use the count function
@@ -200,6 +204,7 @@ PUT _xpack/ml/anomaly_detectors/example5
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 If you use this `high_non_zero_count` function in a detector in your job, it
 models the count of events for the `signaturename` field. It ignores any buckets
@@ -253,6 +258,7 @@ PUT _xpack/ml/anomaly_detectors/example6
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 This `distinct_count` function detects when a system has an unusual number
 of logged in users. When you use this function in a detector in your job, it
@@ -278,6 +284,7 @@ PUT _xpack/ml/anomaly_detectors/example7
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 This example detects instances of port scanning. When you use this function in a
 detector in your job, it models the distinct count of ports. It also detects the
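Example7's body is elided by the diff context. A sketch of a port-scanning detector consistent with the description, modelling the distinct count of ports per source (the field names are assumptions):

[source,js]
--------------------------------------------------
PUT _xpack/ml/anomaly_detectors/example7
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [
      {
        "function": "distinct_count",
        "field_name": "dst_port",
        "over_field_name": "src_ip"
      }
    ]
  },
  "data_description": { "time_field": "timestamp" }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:needs-licence]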

docs/reference/ml/functions/geo.asciidoc

@@ -47,6 +47,7 @@ PUT _xpack/ml/anomaly_detectors/example1
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 If you use this `lat_long` function in a detector in your job, it
 detects anomalies where the geographic location of a credit card transaction is
@@ -98,6 +99,6 @@ PUT _xpack/ml/datafeeds/datafeed-test2
 }
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:farequote_job]
+// TEST[skip:setup:farequote_job]
 
 For more information, see <<ml-configuring-transform>>.
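The `lat_long` body for example1 is elided by the diff context. A sketch matching the credit-card description, where the detector models locations per card (the field names are assumptions):

[source,js]
--------------------------------------------------
PUT _xpack/ml/anomaly_detectors/example1
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "lat_long",
        "field_name": "transaction_coordinates",
        "by_field_name": "credit_card_number"
      }
    ]
  },
  "data_description": { "time_field": "timestamp" }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:needs-licence]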

[22 image files moved without modification; before/after previews are identical, with sizes ranging from 1.3 KiB to 384 KiB]

docs/reference/ml/populations.asciidoc

@@ -51,14 +51,11 @@ PUT _xpack/ml/anomaly_detectors/population
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> This `over_field_name` property indicates that the metrics for each user (
 as identified by their `username` value) are analyzed relative to other users
 in each bucket.
 
-//TO-DO: Per sophiec20 "Perhaps add the datafeed config and add a query filter to
-//include only workstations as servers and printers would behave differently
-//from the population
-
 If your data is stored in {es}, you can use the population job wizard in {kib}
 to create a job with these same properties. For example, the population job
 wizard provides the following job settings:
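The `population` job body is elided by the diff context. A sketch of an over-field detector matching callout <1>, where each user's metric is modelled relative to the population of users (the metric and field names are assumptions):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/population
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "bytes_sent",
        "over_field_name": "username"
      }
    ]
  },
  "data_description": { "time_field": "timestamp" }
}
----------------------------------
//CONSOLE
// TEST[skip:needs-licence]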

docs/reference/ml/stopping-ml.asciidoc

@@ -28,7 +28,7 @@ request stops the `feed1` {dfeed}:
 POST _xpack/ml/datafeeds/datafeed-total-requests/_stop
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:server_metrics_startdf]
+// TEST[skip:setup:server_metrics_startdf]
 
 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
 For more information, see <<security-privileges>>.
@@ -49,6 +49,7 @@ If you are upgrading your cluster, you can use the following request to stop all
 POST _xpack/ml/datafeeds/_all/_stop
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
 
 [float]
 [[closing-ml-jobs]]
@@ -67,7 +68,7 @@ example, the following request closes the `job1` job:
 POST _xpack/ml/anomaly_detectors/total-requests/_close
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:server_metrics_openjob]
+// TEST[skip:setup:server_metrics_openjob]
 
 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
 For more information, see <<security-privileges>>.
@@ -86,3 +87,4 @@ all open jobs on the cluster:
 POST _xpack/ml/anomaly_detectors/_all/_close
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

docs/reference/ml/transforms.asciidoc

@@ -95,7 +95,7 @@ PUT /my_index/my_type/1
 }
 ----------------------------------
 // CONSOLE
-// TESTSETUP
+// TEST[skip:SETUP]
 <1> In this example, string fields are mapped as `keyword` fields to support
 aggregation. If you want both a full text (`text`) and a keyword (`keyword`)
 version of the same field, use multi-fields. For more information, see
@@ -144,7 +144,7 @@ PUT _xpack/ml/datafeeds/datafeed-test1
 }
 ----------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 <1> A script field named `total_error_count` is referenced in the detector
 within the job.
 <2> The script field is defined in the {dfeed}.
@@ -163,7 +163,7 @@ You can preview the contents of the {dfeed} by using the following API:
 GET _xpack/ml/datafeeds/datafeed-test1/_preview
 ----------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 
 In this example, the API returns the following results, which contain a sum of
 the `error_count` and `aborted_count` values:
@@ -177,8 +177,6 @@ the `error_count` and `aborted_count` values:
 }
 ]
 ----------------------------------
-// TESTRESPONSE
-
 NOTE: This example demonstrates how to use script fields, but it contains
 insufficient data to generate meaningful results. For a full demonstration of
@@ -254,7 +252,7 @@ PUT _xpack/ml/datafeeds/datafeed-test2
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 <1> The script field has a rather generic name in this case, since it will
 be used for various tests in the subsequent examples.
 <2> The script field uses the plus (+) operator to concatenate strings.
@@ -271,7 +269,6 @@ and "SMITH " have been concatenated and an underscore was added:
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform3]]
 .Example 3: Trimming strings
@@ -292,7 +289,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 
 <1> This script field uses the `trim()` function to trim extra white space from a
 string.
@@ -308,7 +305,6 @@ has been trimmed to "SMITH":
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform4]]
 .Example 4: Converting strings to lowercase
@@ -329,7 +325,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field uses the `toLowerCase` function to convert a string to all
 lowercase letters. Likewise, you can use the `toUpperCase{}` function to convert
 a string to uppercase letters.
@@ -346,7 +342,6 @@ has been converted to "joe":
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform5]]
 .Example 5: Converting strings to mixed case formats
@@ -367,7 +362,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field is a more complicated example of case manipulation. It uses
 the `subString()` function to capitalize the first letter of a string and
 converts the remaining characters to lowercase.
@@ -384,7 +379,6 @@ has been converted to "Joe":
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform6]]
 .Example 6: Replacing tokens
@@ -405,7 +399,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 
 <1> This script field uses regular expressions to replace white
 space with underscores.
@@ -421,7 +415,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform7]]
 .Example 7: Regular expression matching and concatenation
@@ -442,7 +435,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field looks for a specific regular expression pattern and emits the
 matched groups as a concatenated string. If no match is found, it emits an empty
 string.
@@ -459,7 +452,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform8]]
 .Example 8: Splitting strings by domain name
@@ -509,7 +501,7 @@ PUT _xpack/ml/datafeeds/datafeed-test3
 GET _xpack/ml/datafeeds/datafeed-test3/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 
 If you have a single field that contains a well-formed DNS domain name, you can
 use the `domainSplit()` function to split the string into its highest registered
@@ -537,7 +529,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
-// TESTRESPONSE
 
 [[ml-configuring-transform9]]
 .Example 9: Transforming geo_point data
@@ -583,7 +574,7 @@ PUT _xpack/ml/datafeeds/datafeed-test4
 GET _xpack/ml/datafeeds/datafeed-test4/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 
 In {es}, location data can be stored in `geo_point` fields but this data type is
 not supported natively in {xpackml} analytics. This example of a script field
@@ -602,4 +593,4 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
-// TESTRESPONSE
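The script-field bodies are elided by the diff context. A sketch of the Example 3 style `trim()` transform, applied as a {dfeed} script field (the source field name is an assumption):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-test2/_update
{
  "script_fields": {
    "my_script_field": {
      "script": {
        "lang": "painless",
        "source": "doc['first_name.keyword'].value.trim()"
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:continued]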

x-pack/docs/en/ml/api-quickref.asciidoc (file deleted; the ml docs now live under docs/reference/ml)

@@ -1,102 +0,0 @@
[role="xpack"]
[[ml-api-quickref]]
== API quick reference
All {ml} endpoints have the following base:
[source,js]
----
/_xpack/ml/
----
// NOTCONSOLE
The main {ml} resources can be accessed with a variety of endpoints:
* <<ml-api-jobs,+/anomaly_detectors/+>>: Create and manage {ml} jobs
* <<ml-api-datafeeds,+/datafeeds/+>>: Select data from {es} to be analyzed
* <<ml-api-results,+/results/+>>: Access the results of a {ml} job
* <<ml-api-snapshots,+/model_snapshots/+>>: Manage model snapshots
//* <<ml-api-validate,+/validate/+>>: Validate subsections of job configurations
[float]
[[ml-api-jobs]]
=== /anomaly_detectors/
* {ref}/ml-put-job.html[PUT /anomaly_detectors/<job_id+++>+++]: Create a job
* {ref}/ml-open-job.html[POST /anomaly_detectors/<job_id>/_open]: Open a job
* {ref}/ml-post-data.html[POST /anomaly_detectors/<job_id>/_data]: Send data to a job
* {ref}/ml-get-job.html[GET /anomaly_detectors]: List jobs
* {ref}/ml-get-job.html[GET /anomaly_detectors/<job_id+++>+++]: Get job details
* {ref}/ml-get-job-stats.html[GET /anomaly_detectors/<job_id>/_stats]: Get job statistics
* {ref}/ml-update-job.html[POST /anomaly_detectors/<job_id>/_update]: Update certain properties of the job configuration
* {ref}/ml-flush-job.html[POST anomaly_detectors/<job_id>/_flush]: Force a job to analyze buffered data
* {ref}/ml-forecast.html[POST anomaly_detectors/<job_id>/_forecast]: Forecast future job behavior
* {ref}/ml-close-job.html[POST /anomaly_detectors/<job_id>/_close]: Close a job
* {ref}/ml-delete-job.html[DELETE /anomaly_detectors/<job_id+++>+++]: Delete a job
[float]
[[ml-api-calendars]]
=== /calendars/
* {ref}/ml-put-calendar.html[PUT /calendars/<calendar_id+++>+++]: Create a calendar
* {ref}/ml-post-calendar-event.html[POST /calendars/<calendar_id+++>+++/events]: Add a scheduled event to a calendar
* {ref}/ml-put-calendar-job.html[PUT /calendars/<calendar_id+++>+++/jobs/<job_id+++>+++]: Associate a job with a calendar
* {ref}/ml-get-calendar.html[GET /calendars/<calendar_id+++>+++]: Get calendar details
* {ref}/ml-get-calendar-event.html[GET /calendars/<calendar_id+++>+++/events]: Get scheduled event details
* {ref}/ml-delete-calendar-event.html[DELETE /calendars/<calendar_id+++>+++/events/<event_id+++>+++]: Remove a scheduled event from a calendar
* {ref}/ml-delete-calendar-job.html[DELETE /calendars/<calendar_id+++>+++/jobs/<job_id+++>+++]: Disassociate a job from a calendar
* {ref}/ml-delete-calendar.html[DELETE /calendars/<calendar_id+++>+++]: Delete a calendar
[float]
[[ml-api-filters]]
=== /filters/
* {ref}/ml-put-filter.html[PUT /filters/<filter_id+++>+++]: Create a filter
* {ref}/ml-update-filter.html[POST /filters/<filter_id+++>+++/_update]: Update a filter
* {ref}/ml-get-filter.html[GET /filters/<filter_id+++>+++]: List filters
* {ref}/ml-delete-filter.html[DELETE /filter/<filter_id+++>+++]: Delete a filter
[float]
[[ml-api-datafeeds]]
=== /datafeeds/
* {ref}/ml-put-datafeed.html[PUT /datafeeds/<datafeed_id+++>+++]: Create a {dfeed}
* {ref}/ml-start-datafeed.html[POST /datafeeds/<datafeed_id>/_start]: Start a {dfeed}
* {ref}/ml-get-datafeed.html[GET /datafeeds]: List {dfeeds}
* {ref}/ml-get-datafeed.html[GET /datafeeds/<datafeed_id+++>+++]: Get {dfeed} details
* {ref}/ml-get-datafeed-stats.html[GET /datafeeds/<datafeed_id>/_stats]: Get statistical information for {dfeeds}
* {ref}/ml-preview-datafeed.html[GET /datafeeds/<datafeed_id>/_preview]: Get a preview of a {dfeed}
* {ref}/ml-update-datafeed.html[POST /datafeeds/<datafeedid>/_update]: Update certain settings for a {dfeed}
* {ref}/ml-stop-datafeed.html[POST /datafeeds/<datafeed_id>/_stop]: Stop a {dfeed}
* {ref}/ml-delete-datafeed.html[DELETE /datafeeds/<datafeed_id+++>+++]: Delete {dfeed}
[float]
[[ml-api-results]]
=== /results/
* {ref}/ml-get-bucket.html[GET /results/buckets]: List the buckets in the results
* {ref}/ml-get-bucket.html[GET /results/buckets/<bucket_id+++>+++]: Get bucket details
* {ref}/ml-get-overall-buckets.html[GET /results/overall_buckets]: Get overall bucket results for multiple jobs
* {ref}/ml-get-category.html[GET /results/categories]: List the categories in the results
* {ref}/ml-get-category.html[GET /results/categories/<category_id+++>+++]: Get category details
* {ref}/ml-get-influencer.html[GET /results/influencers]: Get influencer details
* {ref}/ml-get-record.html[GET /results/records]: Get records from the results
[float]
[[ml-api-snapshots]]
=== /model_snapshots/
* {ref}/ml-get-snapshot.html[GET /model_snapshots]: List model snapshots
* {ref}/ml-get-snapshot.html[GET /model_snapshots/<snapshot_id+++>+++]: Get model snapshot details
* {ref}/ml-revert-snapshot.html[POST /model_snapshots/<snapshot_id>/_revert]: Revert a model snapshot
* {ref}/ml-update-snapshot.html[POST /model_snapshots/<snapshot_id>/_update]: Update certain settings for a model snapshot
* {ref}/ml-delete-snapshot.html[DELETE /model_snapshots/<snapshot_id+++>+++]: Delete a model snapshot
////
[float]
[[ml-api-validate]]
=== /validate/
* {ref}/ml-valid-detector.html[POST /anomaly_detectors/_validate/detector]: Validate a detector
* {ref}/ml-valid-job.html[POST /anomaly_detectors/_validate]: Validate a job
////
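As a quick usage illustration consistent with the endpoints listed above (not part of the original file), listing all jobs:

[source,js]
----
GET _xpack/ml/anomaly_detectors
----
// CONSOLE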