[DOCS] Moves ml folder from x-pack/docs to docs (#33248)
@@ -19,6 +19,12 @@
 apply plugin: 'elasticsearch.docs-test'

+/* List of files that have snippets that require a gold or platinum licence
+and therefore cannot be tested yet... */
+buildRestTests.expectedUnconvertedCandidates = [
+  'reference/ml/transforms.asciidoc',
+]
+
 integTestCluster {
   /* Enable regexes in painless so our tests don't complain about example
    * snippets that use them. */
@@ -41,7 +41,7 @@ PUT _xpack/ml/anomaly_detectors/farequote
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:farequote_data]
+// TEST[skip:setup:farequote_data]

 In this example, the `airline`, `responsetime`, and `time` fields are
 aggregations.
@@ -90,7 +90,7 @@ PUT _xpack/ml/datafeeds/datafeed-farequote
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:farequote_job]
+// TEST[skip:setup:farequote_job]

 In this example, the aggregations have names that match the fields that they
 operate on. That is to say, the `max` aggregation is named `time` and its
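To make that naming convention concrete, the elided aggregation block presumably looks something like this minimal sketch (the interval and `size` values are illustrative, not taken from the snippet above):

[source,js]
----------------------------------
"aggregations": {
  "buckets": {
    "date_histogram": {"field": "time", "interval": "360s", "time_zone": "UTC"},
    "aggregations": {
      "time": { <1>
        "max": {"field": "time"}
      },
      "airline": {
        "terms": {"field": "airline", "size": 100},
        "aggregations": {
          "responsetime": { <2>
            "avg": {"field": "responsetime"}
          }
        }
      }
    }
  }
}
----------------------------------
// NOTCONSOLE
<1> The `max` aggregation is named `time`, after the field it operates on.
<2> Likewise, the `avg` aggregation is named `responsetime`.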
@@ -44,6 +44,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> The `categorization_field_name` property indicates which field will be
 categorized.
 <2> The resulting categories are used in a detector by setting `by_field_name`,
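For reference, the elided request body presumably pairs `categorization_field_name` with a `count` detector on `mlcategory`, roughly as in this sketch (the bucket span and message field name are assumptions):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/it_ops_new_logs
{
  "analysis_config": {
    "bucket_span": "30m",
    "categorization_field_name": "message", <1>
    "detectors": [{
      "function": "count",
      "by_field_name": "mlcategory" <2>
    }]
  },
  "data_description": {
    "time_field": "time"
  }
}
----------------------------------
// NOTCONSOLE
<1> The field whose text is categorized.
<2> `mlcategory` is the reserved field name that exposes the resulting categories to the detector.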
@@ -127,6 +128,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs2
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> The
 {ref}/analysis-pattern-replace-charfilter.html[`pattern_replace` character filter]
 here achieves exactly the same as the `categorization_filters` in the first
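A `categorization_analyzer` built around a `pattern_replace` character filter might be shaped like the following sketch (the pattern shown is illustrative, not the one from the elided snippet):

[source,js]
----------------------------------
"categorization_analyzer": {
  "char_filter": [{
    "type": "pattern_replace",
    "pattern": "\\[statement:.*\\]"
  }],
  "tokenizer": "ml_classic"
}
----------------------------------
// NOTCONSOLE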
@@ -193,6 +195,7 @@ PUT _xpack/ml/anomaly_detectors/it_ops_new_logs3
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> Tokens basically consist of hyphens, digits, letters, underscores and dots.
 <2> By default, categorization ignores tokens that begin with a digit.
 <3> By default, categorization also ignores tokens that are hexadecimal numbers.
@@ -36,20 +36,20 @@ The scenarios in this section describe some best practices for generating useful
 * <<ml-configuring-transform>>
 * <<ml-configuring-detector-custom-rules>>

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/customurl.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/customurl.asciidoc
 include::customurl.asciidoc[]

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/aggregations.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/aggregations.asciidoc
 include::aggregations.asciidoc[]

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/categories.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/categories.asciidoc
 include::categories.asciidoc[]

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/populations.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/populations.asciidoc
 include::populations.asciidoc[]

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/transforms.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/transforms.asciidoc
 include::transforms.asciidoc[]

-:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/x-pack/docs/en/ml/detector-custom-rules.asciidoc
+:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/ml/detector-custom-rules.asciidoc
 include::detector-custom-rules.asciidoc[]
@@ -106,7 +106,7 @@ POST _xpack/ml/anomaly_detectors/sample_job/_update
 }
 ----------------------------------
 //CONSOLE
-//TEST[setup:sample_job]
+//TEST[skip:setup:sample_job]

 When you click this custom URL in the anomalies table in {kib}, it opens up the
 *Discover* page and displays source data for the period one hour before and
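The kind of `custom_urls` entry the update request above carries looks roughly like this sketch (the Kibana URL is abbreviated and illustrative; `$earliest$` and `$latest$` are the standard substitution tokens):

[source,js]
----------------------------------
{
  "custom_settings": {
    "custom_urls": [{
      "url_name": "Raw data",
      "time_range": "2h",
      "url_value": "kibana#/discover?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))"
    }]
  }
}
----------------------------------
// NOTCONSOLE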
@@ -39,6 +39,7 @@ PUT _xpack/ml/filters/safe_domains
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 Now, we can create our job specifying a scope that uses the `safe_domains`
 filter for the `highest_registered_domain` field:
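A sketch of what that job's detector might contain, following the DNS exfiltration scenario this section builds on (the function and field names are assumptions, not the elided snippet itself):

[source,js]
----------------------------------
"detectors": [{
  "function": "high_info_content",
  "field_name": "subdomain",
  "over_field_name": "highest_registered_domain",
  "custom_rules": [{
    "actions": ["skip_result"],
    "scope": {
      "highest_registered_domain": {
        "filter_id": "safe_domains",
        "filter_type": "include"
      }
    }
  }]
}]
----------------------------------
// NOTCONSOLE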
@@ -70,6 +71,7 @@ PUT _xpack/ml/anomaly_detectors/dns_exfiltration_with_rule
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 As time advances and we see more data and more results, we might encounter new
 domains that we want to add in the filter. We can do that by using the
@@ -83,7 +85,7 @@ POST _xpack/ml/filters/safe_domains/_update
 }
 ----------------------------------
 // CONSOLE
-// TEST[setup:ml_filter_safe_domains]
+// TEST[skip:setup:ml_filter_safe_domains]

 Note that we can use any of the `partition_field_name`, `over_field_name`, or
 `by_field_name` fields in the `scope`.
@@ -123,6 +125,7 @@ PUT _xpack/ml/anomaly_detectors/scoping_multiple_fields
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 Such a detector will skip results when the values of all 3 scoped fields
 are included in the referenced filters.
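Scoping all three fields at once would be shaped like this fragment (field and filter names are illustrative; `filter_type` defaults to `include`):

[source,js]
----------------------------------
"custom_rules": [{
  "actions": ["skip_result"],
  "scope": {
    "my_partition_field": {"filter_id": "filter_1"},
    "my_over_field": {"filter_id": "filter_2"},
    "my_by_field": {"filter_id": "filter_3"}
  }
}]
----------------------------------
// NOTCONSOLE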
@@ -166,6 +169,7 @@ PUT _xpack/ml/anomaly_detectors/cpu_with_rule
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 When there are multiple conditions they are combined with a logical `and`.
 This is useful when we want the rule to apply to a range. We simply create
@@ -205,6 +209,7 @@ PUT _xpack/ml/anomaly_detectors/rule_with_range
 }
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 ==== Custom rules in the life-cycle of a job
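Two conditions combined to cover a range, as the prose above describes, take this shape (the 10 to 20 range is illustrative):

[source,js]
----------------------------------
"custom_rules": [{
  "actions": ["skip_result"],
  "conditions": [
    {"applies_to": "actual", "operator": "gte", "value": 10},
    {"applies_to": "actual", "operator": "lte", "value": 20}
  ]
}]
----------------------------------
// NOTCONSOLE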
@@ -59,6 +59,7 @@ PUT _xpack/ml/anomaly_detectors/example1
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 This example is probably the simplest possible analysis. It identifies
 time buckets during which the overall count of events is higher or lower than
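For orientation, the simplest `count` job looks roughly like this sketch (the bucket span and time field are assumptions):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/example1
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [{"function": "count"}]
  },
  "data_description": {"time_field": "timestamp"}
}
----------------------------------
// NOTCONSOLE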
@@ -86,6 +87,7 @@ PUT _xpack/ml/anomaly_detectors/example2
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 If you use this `high_count` function in a detector in your job, it
 models the event rate for each error code. It detects users that generate an
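The detector the prose describes presumably pairs `high_count` with a `by` field for the error code and an `over` field for the user, for example:

[source,js]
----------------------------------
{
  "function": "high_count",
  "by_field_name": "error_code",
  "over_field_name": "user"
}
----------------------------------
// NOTCONSOLE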
@@ -110,6 +112,7 @@ PUT _xpack/ml/anomaly_detectors/example3
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 In this example, the function detects when the count of events for a
 status code is lower than usual.
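A matching detector sketch (the field name is assumed from the prose):

[source,js]
----------------------------------
{
  "function": "low_count",
  "by_field_name": "status_code"
}
----------------------------------
// NOTCONSOLE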
@@ -136,6 +139,7 @@ PUT _xpack/ml/anomaly_detectors/example4
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 If you are analyzing an aggregated `events_per_min` field, do not use a sum
 function (for example, `sum(events_per_min)`). Instead, use the count function
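The pattern the prose warns about is handled with `summary_count_field_name`, roughly as in this sketch (bucket span and time field are assumptions):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/example4
{
  "analysis_config": {
    "bucket_span": "10m",
    "summary_count_field_name": "events_per_min", <1>
    "detectors": [{"function": "count"}] <2>
  },
  "data_description": {"time_field": "timestamp"}
}
----------------------------------
// NOTCONSOLE
<1> The pre-aggregated count is supplied here rather than to a `sum` function.
<2> The detector still uses the `count` function.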
@@ -200,6 +204,7 @@ PUT _xpack/ml/anomaly_detectors/example5
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 If you use this `high_non_zero_count` function in a detector in your job, it
 models the count of events for the `signaturename` field. It ignores any buckets
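The corresponding detector presumably looks like this:

[source,js]
----------------------------------
{
  "function": "high_non_zero_count",
  "by_field_name": "signaturename"
}
----------------------------------
// NOTCONSOLE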
@@ -253,6 +258,7 @@ PUT _xpack/ml/anomaly_detectors/example6
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 This `distinct_count` function detects when a system has an unusual number
 of logged in users. When you use this function in a detector in your job, it
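A sketch of the detector (field name assumed from the logged-in-users scenario):

[source,js]
----------------------------------
{
  "function": "distinct_count",
  "field_name": "user"
}
----------------------------------
// NOTCONSOLE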
@@ -278,6 +284,7 @@ PUT _xpack/ml/anomaly_detectors/example7
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 This example detects instances of port scanning. When you use this function in a
 detector in your job, it models the distinct count of ports. It also detects the
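Port scanning is typically modelled as a high distinct count of destination ports per source IP, along these lines (field names are assumptions):

[source,js]
----------------------------------
{
  "function": "high_distinct_count",
  "field_name": "dst_port",
  "over_field_name": "src_ip"
}
----------------------------------
// NOTCONSOLE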
@@ -47,6 +47,7 @@ PUT _xpack/ml/anomaly_detectors/example1
 }
 --------------------------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 If you use this `lat_long` function in a detector in your job, it
 detects anomalies where the geographic location of a credit card transaction is
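The `lat_long` detector presumably resembles the following (field names are taken from the credit card scenario described here, so treat them as assumptions):

[source,js]
----------------------------------
{
  "function": "lat_long",
  "field_name": "transactionCoordinates",
  "by_field_name": "creditCardNumber"
}
----------------------------------
// NOTCONSOLE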
@@ -98,6 +99,6 @@ PUT _xpack/ml/datafeeds/datafeed-test2
 }
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:farequote_job]
+// TEST[skip:setup:farequote_job]

 For more information, see <<ml-configuring-transform>>.
[Binary image diff: 22 screenshots under the ml images directory were moved; every file's size is identical before and after (1.3 KiB to 384 KiB).]
@@ -51,14 +51,11 @@ PUT _xpack/ml/anomaly_detectors/population
 }
 ----------------------------------
 //CONSOLE
+// TEST[skip:needs-licence]
 <1> This `over_field_name` property indicates that the metrics for each user (
 as identified by their `username` value) are analyzed relative to other users
 in each bucket.

-//TO-DO: Per sophiec20 "Perhaps add the datafeed config and add a query filter to
-//include only workstations as servers and printers would behave differently
-//from the population
-
 If your data is stored in {es}, you can use the population job wizard in {kib}
 to create a job with these same properties. For example, the population job
 wizard provides the following job settings:
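A minimal sketch of such a population job (the metric field, bucket span, and time field are illustrative):

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/population
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [{
      "function": "mean",
      "field_name": "bytes_sent",
      "over_field_name": "username" <1>
    }]
  },
  "data_description": {"time_field": "@timestamp"}
}
----------------------------------
// NOTCONSOLE
<1> Setting `over_field_name` is what turns this into a population analysis.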
@@ -28,7 +28,7 @@ request stops the `feed1` {dfeed}:
 POST _xpack/ml/datafeeds/datafeed-total-requests/_stop
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:server_metrics_startdf]
+// TEST[skip:setup:server_metrics_startdf]

 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
 For more information, see <<security-privileges>>.
@@ -49,6 +49,7 @@ If you are upgrading your cluster, you can use the following request to stop all
 POST _xpack/ml/datafeeds/_all/_stop
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]

 [float]
 [[closing-ml-jobs]]
@@ -67,7 +68,7 @@ example, the following request closes the `job1` job:
 POST _xpack/ml/anomaly_detectors/total-requests/_close
 --------------------------------------------------
 // CONSOLE
-// TEST[setup:server_metrics_openjob]
+// TEST[skip:setup:server_metrics_openjob]

 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
 For more information, see <<security-privileges>>.
@@ -86,3 +87,4 @@ all open jobs on the cluster:
 POST _xpack/ml/anomaly_detectors/_all/_close
 ----------------------------------
 // CONSOLE
+// TEST[skip:needs-licence]
@@ -95,7 +95,7 @@ PUT /my_index/my_type/1
 }
 ----------------------------------
 // CONSOLE
-// TESTSETUP
+// TEST[skip:SETUP]
 <1> In this example, string fields are mapped as `keyword` fields to support
 aggregation. If you want both a full text (`text`) and a keyword (`keyword`)
 version of the same field, use multi-fields. For more information, see
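A multi-field mapping of the kind the callout recommends (the field name is illustrative):

[source,js]
----------------------------------
"my_field": {
  "type": "keyword", <1>
  "fields": {
    "text": {"type": "text"} <2>
  }
}
----------------------------------
// NOTCONSOLE
<1> The keyword version supports aggregation.
<2> A `text` sub-field keeps full-text search available on the same data.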
@@ -144,7 +144,7 @@ PUT _xpack/ml/datafeeds/datafeed-test1
 }
 ----------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 <1> A script field named `total_error_count` is referenced in the detector
 within the job.
 <2> The script field is defined in the {dfeed}.
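Given that the preview below sums `error_count` and `aborted_count`, the elided script field definition is presumably along these lines:

[source,js]
----------------------------------
"script_fields": {
  "total_error_count": {
    "script": {
      "lang": "painless",
      "source": "doc['error_count'].value + doc['aborted_count'].value"
    }
  }
}
----------------------------------
// NOTCONSOLE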
@@ -163,7 +163,7 @@ You can preview the contents of the {dfeed} by using the following API:
 GET _xpack/ml/datafeeds/datafeed-test1/_preview
 ----------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]

 In this example, the API returns the following results, which contain a sum of
 the `error_count` and `aborted_count` values:
@@ -177,8 +177,6 @@ the `error_count` and `aborted_count` values:
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
-
 NOTE: This example demonstrates how to use script fields, but it contains
 insufficient data to generate meaningful results. For a full demonstration of
@@ -254,7 +252,7 @@ PUT _xpack/ml/datafeeds/datafeed-test2
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]
 <1> The script field has a rather generic name in this case, since it will
 be used for various tests in the subsequent examples.
 <2> The script field uses the plus (+) operator to concatenate strings.
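Such a concatenating script field might look like this sketch (the field names are guesses based on the "SMITH" example in the surrounding prose):

[source,js]
----------------------------------
"my_script_field": {
  "script": {
    "lang": "painless",
    "source": "doc['first_name'].value + '_' + doc['last_name'].value"
  }
}
----------------------------------
// NOTCONSOLE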
@@ -271,7 +269,6 @@ and "SMITH " have been concatenated and an underscore was added:
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform3]]
 .Example 3: Trimming strings
@@ -292,7 +289,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field uses the `trim()` function to trim extra white space from a
 string.
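A minimal sketch of that script field (field name illustrative):

[source,js]
----------------------------------
"my_script_field": {
  "script": {"lang": "painless", "source": "doc['last_name'].value.trim()"}
}
----------------------------------
// NOTCONSOLE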
@@ -308,7 +305,6 @@ has been trimmed to "SMITH":
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform4]]
 .Example 4: Converting strings to lowercase
@@ -329,7 +325,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field uses the `toLowerCase` function to convert a string to all
 lowercase letters. Likewise, you can use the `toUpperCase()` function to convert
 a string to uppercase letters.
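The lowercase variant, sketched (field name illustrative):

[source,js]
----------------------------------
"my_script_field": {
  "script": {"lang": "painless", "source": "doc['first_name'].value.toLowerCase()"}
}
----------------------------------
// NOTCONSOLE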
@@ -346,7 +342,6 @@ has been converted to "joe":
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform5]]
 .Example 5: Converting strings to mixed case formats
@@ -367,7 +362,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field is a more complicated example of case manipulation. It uses
 the `subString()` function to capitalize the first letter of a string and
 converts the remaining characters to lowercase.
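In Painless that capitalize-first-letter manipulation would read roughly as follows (field name illustrative):

[source,js]
----------------------------------
"my_script_field": {
  "script": {
    "lang": "painless",
    "source": "doc['first_name'].value.substring(0, 1).toUpperCase() + doc['first_name'].value.substring(1).toLowerCase()"
  }
}
----------------------------------
// NOTCONSOLE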
@@ -384,7 +379,6 @@ has been converted to "Joe":
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform6]]
 .Example 6: Replacing tokens
@@ -405,7 +399,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field uses regular expressions to replace white
 space with underscores.
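Such a replacement relies on Painless regexes, which is precisely what the `build.gradle` hunk at the top of this change enables for the doc tests. A sketch (field name illustrative):

[source,js]
----------------------------------
"my_script_field": {
  "script": {
    "lang": "painless",
    "source": "/\\s/.matcher(doc['tokenstring'].value).replaceAll('_')"
  }
}
----------------------------------
// NOTCONSOLE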
@@ -421,7 +415,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform7]]
 .Example 7: Regular expression matching and concatenation
@@ -442,7 +435,7 @@ POST _xpack/ml/datafeeds/datafeed-test2/_update
 GET _xpack/ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[continued]
+// TEST[skip:continued]
 <1> This script field looks for a specific regular expression pattern and emits the
 matched groups as a concatenated string. If no match is found, it emits an empty
 string.
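The match-and-concatenate pattern would be shaped like this sketch (the regular expression and field name are illustrative, not the ones from the elided snippet):

[source,js]
----------------------------------
"my_script_field": {
  "script": {
    "lang": "painless",
    "source": "def m = /([a-z]+)_([a-z]+)/.matcher(doc['tokenstring'].value); return m.matches() ? m.group(1) + '_' + m.group(2) : '';"
  }
}
----------------------------------
// NOTCONSOLE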
@@ -459,7 +452,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform8]]
 .Example 8: Splitting strings by domain name
@@ -509,7 +501,7 @@ PUT _xpack/ml/datafeeds/datafeed-test3
 GET _xpack/ml/datafeeds/datafeed-test3/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]

 If you have a single field that contains a well-formed DNS domain name, you can
 use the `domainSplit()` function to split the string into its highest registered
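Assuming the DNS name lives in a `query` field, a `domainSplit()` script field might look like this; the function returns a list where index 0 is the sub-domain and index 1 is the highest registered domain:

[source,js]
----------------------------------
"domain_split": {
  "script": "return domainSplit(doc['query'].value).get(1);"
}
----------------------------------
// NOTCONSOLE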
@@ -537,7 +529,6 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
 // TESTRESPONSE
-
 [[ml-configuring-transform9]]
 .Example 9: Transforming geo_point data
@@ -583,7 +574,7 @@ PUT _xpack/ml/datafeeds/datafeed-test4
 GET _xpack/ml/datafeeds/datafeed-test4/_preview
 --------------------------------------------------
 // CONSOLE
-// TEST[skip:broken]
+// TEST[skip:needs-licence]

 In {es}, location data can be stored in `geo_point` fields but this data type is
 not supported natively in {xpackml} analytics. This example of a script field
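The transformation presumably emits the comma-delimited `lat,lon` string form that the analytics expect, roughly like this (the `coords` field name is an assumption):

[source,js]
----------------------------------
"my_coordinates": {
  "script": {
    "lang": "painless",
    "source": "doc['coords'].lat + ',' + doc['coords'].lon"
  }
}
----------------------------------
// NOTCONSOLE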
@@ -602,4 +593,4 @@ The preview {dfeed} API returns the following results, which show that
 }
 ]
 ----------------------------------
 // TESTRESPONSE
@@ -1,102 +0,0 @@
-[role="xpack"]
-[[ml-api-quickref]]
-== API quick reference
-
-All {ml} endpoints have the following base:
-
-[source,js]
-----
-/_xpack/ml/
-----
-// NOTCONSOLE
-
-The main {ml} resources can be accessed with a variety of endpoints:
-
-* <<ml-api-jobs,+/anomaly_detectors/+>>: Create and manage {ml} jobs
-* <<ml-api-datafeeds,+/datafeeds/+>>: Select data from {es} to be analyzed
-* <<ml-api-results,+/results/+>>: Access the results of a {ml} job
-* <<ml-api-snapshots,+/model_snapshots/+>>: Manage model snapshots
-//* <<ml-api-validate,+/validate/+>>: Validate subsections of job configurations
-
-[float]
-[[ml-api-jobs]]
-=== /anomaly_detectors/
-
-* {ref}/ml-put-job.html[PUT /anomaly_detectors/<job_id+++>+++]: Create a job
-* {ref}/ml-open-job.html[POST /anomaly_detectors/<job_id>/_open]: Open a job
-* {ref}/ml-post-data.html[POST /anomaly_detectors/<job_id>/_data]: Send data to a job
-* {ref}/ml-get-job.html[GET /anomaly_detectors]: List jobs
-* {ref}/ml-get-job.html[GET /anomaly_detectors/<job_id+++>+++]: Get job details
-* {ref}/ml-get-job-stats.html[GET /anomaly_detectors/<job_id>/_stats]: Get job statistics
-* {ref}/ml-update-job.html[POST /anomaly_detectors/<job_id>/_update]: Update certain properties of the job configuration
-* {ref}/ml-flush-job.html[POST anomaly_detectors/<job_id>/_flush]: Force a job to analyze buffered data
-* {ref}/ml-forecast.html[POST anomaly_detectors/<job_id>/_forecast]: Forecast future job behavior
-* {ref}/ml-close-job.html[POST /anomaly_detectors/<job_id>/_close]: Close a job
-* {ref}/ml-delete-job.html[DELETE /anomaly_detectors/<job_id+++>+++]: Delete a job
-
-[float]
-[[ml-api-calendars]]
-=== /calendars/
-
-* {ref}/ml-put-calendar.html[PUT /calendars/<calendar_id+++>+++]: Create a calendar
-* {ref}/ml-post-calendar-event.html[POST /calendars/<calendar_id+++>+++/events]: Add a scheduled event to a calendar
-* {ref}/ml-put-calendar-job.html[PUT /calendars/<calendar_id+++>+++/jobs/<job_id+++>+++]: Associate a job with a calendar
-* {ref}/ml-get-calendar.html[GET /calendars/<calendar_id+++>+++]: Get calendar details
-* {ref}/ml-get-calendar-event.html[GET /calendars/<calendar_id+++>+++/events]: Get scheduled event details
-* {ref}/ml-delete-calendar-event.html[DELETE /calendars/<calendar_id+++>+++/events/<event_id+++>+++]: Remove a scheduled event from a calendar
-* {ref}/ml-delete-calendar-job.html[DELETE /calendars/<calendar_id+++>+++/jobs/<job_id+++>+++]: Disassociate a job from a calendar
-* {ref}/ml-delete-calendar.html[DELETE /calendars/<calendar_id+++>+++]: Delete a calendar
-
-[float]
-[[ml-api-filters]]
-=== /filters/
-
-* {ref}/ml-put-filter.html[PUT /filters/<filter_id+++>+++]: Create a filter
-* {ref}/ml-update-filter.html[POST /filters/<filter_id+++>+++/_update]: Update a filter
-* {ref}/ml-get-filter.html[GET /filters/<filter_id+++>+++]: List filters
-* {ref}/ml-delete-filter.html[DELETE /filter/<filter_id+++>+++]: Delete a filter
-
-[float]
-[[ml-api-datafeeds]]
-=== /datafeeds/
-
-* {ref}/ml-put-datafeed.html[PUT /datafeeds/<datafeed_id+++>+++]: Create a {dfeed}
-* {ref}/ml-start-datafeed.html[POST /datafeeds/<datafeed_id>/_start]: Start a {dfeed}
-* {ref}/ml-get-datafeed.html[GET /datafeeds]: List {dfeeds}
-* {ref}/ml-get-datafeed.html[GET /datafeeds/<datafeed_id+++>+++]: Get {dfeed} details
-* {ref}/ml-get-datafeed-stats.html[GET /datafeeds/<datafeed_id>/_stats]: Get statistical information for {dfeeds}
-* {ref}/ml-preview-datafeed.html[GET /datafeeds/<datafeed_id>/_preview]: Get a preview of a {dfeed}
-* {ref}/ml-update-datafeed.html[POST /datafeeds/<datafeedid>/_update]: Update certain settings for a {dfeed}
-* {ref}/ml-stop-datafeed.html[POST /datafeeds/<datafeed_id>/_stop]: Stop a {dfeed}
-* {ref}/ml-delete-datafeed.html[DELETE /datafeeds/<datafeed_id+++>+++]: Delete {dfeed}
-
-[float]
-[[ml-api-results]]
-=== /results/
-
-* {ref}/ml-get-bucket.html[GET /results/buckets]: List the buckets in the results
-* {ref}/ml-get-bucket.html[GET /results/buckets/<bucket_id+++>+++]: Get bucket details
-* {ref}/ml-get-overall-buckets.html[GET /results/overall_buckets]: Get overall bucket results for multiple jobs
-* {ref}/ml-get-category.html[GET /results/categories]: List the categories in the results
-* {ref}/ml-get-category.html[GET /results/categories/<category_id+++>+++]: Get category details
-* {ref}/ml-get-influencer.html[GET /results/influencers]: Get influencer details
-* {ref}/ml-get-record.html[GET /results/records]: Get records from the results
-
-[float]
-[[ml-api-snapshots]]
-=== /model_snapshots/
-
-* {ref}/ml-get-snapshot.html[GET /model_snapshots]: List model snapshots
-* {ref}/ml-get-snapshot.html[GET /model_snapshots/<snapshot_id+++>+++]: Get model snapshot details
-* {ref}/ml-revert-snapshot.html[POST /model_snapshots/<snapshot_id>/_revert]: Revert a model snapshot
-* {ref}/ml-update-snapshot.html[POST /model_snapshots/<snapshot_id>/_update]: Update certain settings for a model snapshot
-* {ref}/ml-delete-snapshot.html[DELETE /model_snapshots/<snapshot_id+++>+++]: Delete a model snapshot
-
-////
-[float]
-[[ml-api-validate]]
-=== /validate/
-
-* {ref}/ml-valid-detector.html[POST /anomaly_detectors/_validate/detector]: Validate a detector
-* {ref}/ml-valid-job.html[POST /anomaly_detectors/_validate]: Validate a job
-////
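For orientation, two of the endpoints listed above as they would be invoked (the job name is illustrative):

[source,js]
----
GET _xpack/ml/anomaly_detectors/my_job/_stats
POST _xpack/ml/anomaly_detectors/my_job/_close
----
// NOTCONSOLE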