//lcawley Verified example output 2017-04-11
[[ml-close-job]]
=== Close Jobs

The close job API enables you to close a job.
A job can be opened and closed multiple times throughout its lifecycle.

A closed job cannot receive data or perform analysis
operations, but you can still explore and navigate results.


==== Request

`POST _xpack/ml/anomaly_detectors/<job_id>/_close`


==== Description

//A job can be closed once all data has been analyzed.

When you close a job, it runs housekeeping tasks such as pruning the model history,
flushing buffers, calculating final results and persisting the model snapshots.
Depending upon the size of the job, it could take several minutes to close and
the equivalent time to re-open.

After it is closed, the job has a minimal overhead on the cluster except for
maintaining its metadata. Therefore, it is a best practice to close jobs that
are no longer required to process data.

When a {dfeed} that has a specified end date stops, it automatically closes
the job.

NOTE: If you use the `force` query parameter, the request returns without performing
the associated actions such as flushing buffers and persisting the model snapshots.
Therefore, do not use this parameter if you want the job to be in a consistent state
after the close job API returns. The `force` query parameter should only be used in
situations where the job has already failed, or where you are not interested in
results the job might have recently produced or might produce in the future.
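
For example, if the `event_rate` job shown in the example below has stopped
responding to a normal close request, a force close could look like the
following sketch:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event_rate/_close?force=true
--------------------------------------------------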


==== Path Parameters

`job_id` (required)::
  (string) Identifier for the job


==== Query Parameters

`force`::
  (boolean) Use to close a failed job, or to forcefully close a job which has not
  responded to its initial close request.

`timeout`::
  (time units) Controls the time to wait until a job has closed.
  The default value is 30 minutes.
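
For example, the following sketch closes the `event_rate` job but allows up to
ten minutes for it to finish; the `10m` value is illustrative:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event_rate/_close?timeout=10m
--------------------------------------------------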


==== Authorization

You must have `manage_ml` or `manage` cluster privileges to use this API.
For more information, see <<privileges-list-cluster>>.
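
As a minimal sketch, a role that grants only the `manage_ml` cluster privilege
could be created with the security roles API; the `ml_admin` role name is
illustrative, not part of this API:

[source,js]
--------------------------------------------------
POST _xpack/security/role/ml_admin
{
  "cluster": [ "manage_ml" ]
}
--------------------------------------------------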


==== Examples

The following example closes the `event_rate` job:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/event_rate/_close
--------------------------------------------------
// CONSOLE
// TEST[skip:todo]

When the job is closed, you receive the following results:

[source,js]
----
{
  "closed": true
}
----