[role="xpack"]
[testenv="platinum"]
[[start-dfanalytics]]
=== Start {dfanalytics-jobs} API

[subs="attributes"]
++++
<titleabbrev>Start {dfanalytics-jobs}</titleabbrev>
++++

Starts a {dfanalytics-job}.

experimental[]

[[ml-start-dfanalytics-request]]
==== {api-request-title}

`POST _ml/data_frame/analytics/<data_frame_analytics_id>/_start`

[[ml-start-dfanalytics-prereq]]
==== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following
built-in roles and privileges:

* `machine_learning_admin`
* `kibana_admin` (UI only)

* source indices: `read`, `view_index_metadata`
* destination index: `read`, `create_index`, `manage` and `index`
* cluster: `monitor` (UI only)

For more information, see <<security-privileges>> and <<built-in-roles>>.
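As an illustration only, a role that grants the index and cluster privileges in
the list above might be created as follows; the role name and index names are
hypothetical, and the built-in `machine_learning_admin` and `kibana_admin`
roles are assigned separately:

[source,console]
--------------------------------------------------
POST /_security/role/dfa_loganalytics_user
{
  "cluster": [ "monitor" ],
  "indices": [
    {
      "names": [ "logdata" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "loganalytics-dest" ],
      "privileges": [ "read", "create_index", "manage", "index" ]
    }
  ]
}
--------------------------------------------------
// TEST[skip:illustration only]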

[[ml-start-dfanalytics-desc]]
==== {api-description-title}

A {dfanalytics-job} can be started and stopped multiple times throughout its
lifecycle.

If the destination index does not exist, it is created automatically the first
time you start the {dfanalytics-job}. The `index.number_of_shards` and
`index.number_of_replicas` settings for the destination index are copied from
the source index. If there are multiple source indices, the destination index
copies the highest setting values. The mappings for the destination index are
also copied from the source indices. If there are any mapping conflicts, the job
fails to start.

If the destination index exists, it is used as is. You can therefore set up the
destination index in advance with custom settings and mappings.
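For example, the following sketch pre-creates a destination index with explicit
settings and mappings before the job is started for the first time; the index
name, settings, and mappings are hypothetical:

[source,console]
--------------------------------------------------
PUT loganalytics-dest
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "response_time": { "type": "float" }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustration only]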

IMPORTANT: When {es} {security-features} are enabled, the {dfanalytics-job}
remembers which user created it and runs the job using those credentials. If you
provided <<http-clients-secondary-authorization,secondary authorization headers>>
when you created the job, those credentials are used.

[[ml-start-dfanalytics-path-params]]
==== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define]

[[ml-start-dfanalytics-query-params]]
==== {api-query-parms-title}

`timeout`::
(Optional, <<time-units,time units>>)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timeout-start]
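For example, the following illustrative request waits up to one minute for the
`loganalytics` job to start:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loganalytics/_start?timeout=1m
--------------------------------------------------
// TEST[skip:illustration only]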

[[ml-start-dfanalytics-response-body]]
==== {api-response-body-title}

`acknowledged`::
(boolean) For a successful response, this value is always `true`. On failure, an
exception is returned instead.

`node`::
(string) The ID of the node that the job was started on. If the job is allowed
to open lazily and has not yet been assigned to a node, this value is an empty
string.
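For example, a start request that is accepted but not yet assigned to a node
might return the following (illustrative) response:

[source,console-result]
----
{
  "acknowledged" : true,
  "node" : ""
}
----
// TESTRESPONSE[skip:illustration only]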

[[ml-start-dfanalytics-example]]
==== {api-examples-title}

The following example starts the `loganalytics` {dfanalytics-job}:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loganalytics/_start
--------------------------------------------------
// TEST[skip:setup:logdata_job]

When the {dfanalytics-job} starts, you receive the following results:

[source,console-result]
----
{
  "acknowledged" : true,
  "node" : "node-1"
}
----