Backporting updates to ILM org, overview, & GS (#51898)

* [DOCS] Align with ILM API docs (#48705)

* [DOCS] Reconciled with Snapshot/Restore reorg

* [DOCS] Split off ILM overview to a separate topic. (#51287)

* [DOCS] Split off overview to a separate topic.

* [DOCS] Incorporated feedback from @jrodewig.

* [DOCS] Edit ILM GS tutorial (#51513)

* [DOCS] Edit ILM GS tutorial

* [DOCS] Incorporated review feedback from @andreidan.

* [DOCS] Removed test link & fixed anchor & title.

* Update docs/reference/ilm/getting-started-ilm.asciidoc

Co-Authored-By: James Rodewig <james.rodewig@elastic.co>

* Fixed glossary merge error.

Co-authored-by: James Rodewig <james.rodewig@elastic.co>
debadair 2020-02-04 16:45:18 -08:00 committed by GitHub
parent 0be61a3662
commit c0156cbb5d
34 changed files with 1171 additions and 1085 deletions


@@ -80,6 +80,30 @@ In most cases, the goal of filtering is to reduce the number of documents that h
Follower indices are the target indices for <<glossary-ccr,{ccr}>>. They exist
in your local cluster and replicate <<glossary-leader-index,leader indices>>.
[[glossary-force-merge]] force merge ::
// tag::force-merge-def[]
// tag::force-merge-def-short[]
Manually trigger a merge to reduce the number of segments in each shard of an index
and free up the space used by deleted documents.
// end::force-merge-def-short[]
You should not force merge indices that are actively being written to.
Merging is normally performed automatically, but you can use force merge after
<<glossary-rollover, rollover>> to reduce the shards in the old index to a single segment.
See the {ref}/indices-forcemerge.html[force merge API].
// end::force-merge-def[]
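For example, a minimal sketch (the index name `my-index-000001` is hypothetical) that merges each shard of a rolled-over index down to a single segment:

[source,console]
--------------------------------------------------
POST /my-index-000001/_forcemerge?max_num_segments=1
--------------------------------------------------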
[[glossary-freeze]] freeze ::
// tag::freeze-def[]
// tag::freeze-def-short[]
Make an index read-only and minimize its memory footprint.
// end::freeze-def-short[]
Frozen indices can be searched without incurring the overhead of re-opening a closed index,
but searches are throttled and might be slower.
You can freeze indices to reduce the overhead of keeping older indices searchable
before you are ready to archive or delete them.
See the {ref}/freeze-index-api.html[freeze API].
// end::freeze-def[]
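For example, a minimal sketch (the index name is hypothetical) that freezes an older index:

[source,console]
--------------------------------------------------
POST /my-index-000001/_freeze
--------------------------------------------------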
[[glossary-id]] id ::
The ID of a <<glossary-document,document>> identifies a document. The
@@ -114,6 +138,18 @@ See {ref}/indices-add-alias.html[Add index alias].
See <<indices-add-alias>>.
--
[[glossary-index-template]] index template ::
+
--
// tag::index-template-def[]
// tag::index-template-def-short[]
Defines settings and mappings to apply to new indices that match a simple naming pattern, such as _logs-*_.
// end::index-template-def-short[]
An index template can also attach a lifecycle policy to the new index.
Index templates are used to automatically configure indices created during <<glossary-rollover, rollover>>.
// end::index-template-def[]
--
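For example, the following sketch (the template, policy, and alias names are hypothetical) attaches a lifecycle policy and rollover alias to new `logs-*` indices:

[source,console]
--------------------------------------------------
PUT _template/my_logs_template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 1,
    "index.lifecycle.name": "my_policy",
    "index.lifecycle.rollover_alias": "logs"
  }
}
--------------------------------------------------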
[[glossary-leader-index]] leader index ::
Leader indices are the source indices for <<glossary-ccr,{ccr}>>. They exist
@@ -206,6 +242,19 @@ By default, each primary shard has one replica, but the number of
replicas can be changed dynamically on an existing index. A replica
shard will never be started on the same node as its primary shard.
[[glossary-rollover]] rollover ::
+
--
// tag::rollover-def[]
// tag::rollover-def-short[]
Redirect an alias to begin writing to a new index when the existing index reaches a certain age, number of docs, or size.
// end::rollover-def-short[]
The new index is automatically configured according to any matching <<glossary-index-template, index templates>>.
For example, if you're indexing log data, you might use rollover to create daily or weekly indices.
See the {ref}/indices-rollover-index.html[rollover index API].
// end::rollover-def[]
--
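For example, a sketch (the `logs` alias and the condition values are hypothetical) that rolls over when any of the conditions is met:

[source,console]
--------------------------------------------------
POST /logs/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_size": "5gb"
  }
}
--------------------------------------------------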
[[glossary-routing]] routing ::
When you index a document, it is stored on a single
@@ -220,7 +269,9 @@ time, or a <<mapping-routing-field,routing
field>> in the <<glossary-mapping,mapping>>.
[[glossary-shard]] shard ::
+
--
// tag::shard-def[]
A shard is a single Lucene instance. It is a low-level “worker” unit
which is managed automatically by Elasticsearch. An index is a logical
namespace which points to <<glossary-primary-shard,primary>> and
@@ -234,6 +285,18 @@ Elasticsearch distributes shards amongst all <<glossary-node,nodes>> in the
<<glossary-cluster,cluster>>, and can move shards automatically from one
node to another in the case of node failure, or the addition of new
nodes.
// end::shard-def[]
--
[[glossary-shrink]] shrink ::
// tag::shrink-def[]
// tag::shrink-def-short[]
Reduce the number of primary shards in an index.
// end::shrink-def-short[]
You can shrink an index to reduce its overhead when the request volume drops.
For example, you might opt to shrink an index once it is no longer the write index.
See the {ref}/indices-shrink-index.html[shrink index API].
// end::shrink-def[]
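For example, a sketch (index names hypothetical) that shrinks an index to a single primary shard. Note that the source index must first be made read-only and have a copy of every shard allocated to a single node:

[source,console]
--------------------------------------------------
POST /my-index-000001/_shrink/my-shrunken-index
{
  "settings": {
    "index.number_of_shards": 1
  }
}
--------------------------------------------------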
[[glossary-source_field]] source field ::


@@ -6,7 +6,7 @@
<titleabbrev>Delete policy</titleabbrev>
++++
Deletes a lifecycle policy.
Deletes an index lifecycle policy.
[[ilm-delete-lifecycle-request]]
==== {api-request-title}


@@ -1,8 +1,8 @@
[[index-lifecycle-management-api]]
== {ilm-cap} API
You can use the following APIs to manage policies on indices. For more
information, see <<index-lifecycle-management>>.
You use the following APIs to set up policies to automatically manage the index lifecycle.
For more information about {ilm} ({ilm-init}), see <<index-lifecycle-management>>.
[float]
[[ilm-api-policy-endpoint]]


@@ -1,822 +0,0 @@
[role="xpack"]
[testenv="basic"]
[[snapshot-lifecycle-management-api]]
== Snapshot lifecycle management API
The Snapshot Lifecycle Management APIs are used to manage policies for the timing
and frequency of automatic snapshots. Snapshot Lifecycle Management is related
to <<index-lifecycle-management,Index Lifecycle Management>>; however, instead
of managing a lifecycle of actions that are performed on a single index, SLM
lets you configure policies that span multiple indices. Snapshot Lifecycle
Management can also delete older snapshots based on a configurable
retention policy.
SLM policy management is split into three CRUD APIs: one to put or update
policies, one to retrieve policies, and one to delete unwanted policies, as
well as a separate API for immediately invoking a snapshot based on a policy.
SLM can be stopped temporarily and restarted using the <<slm-stop,Stop SLM>> and
<<slm-start,Start SLM>> APIs. To disable SLM's functionality entirely, set the
cluster setting `xpack.slm.enabled` to `false` in elasticsearch.yml.
[[slm-api-put]]
=== Put snapshot lifecycle policy API
++++
<titleabbrev>Put snapshot lifecycle policy</titleabbrev>
++++
Creates or updates a snapshot lifecycle policy.
[[slm-api-put-request]]
==== {api-request-title}
`PUT /_slm/policy/<snapshot-lifecycle-policy-id>`
[[slm-api-put-prereqs]]
==== {api-prereq-title}
If you use {es} {security-features},
you must have:
* `manage_slm` <<privileges-list-cluster,cluster privileges>>
* `manage` <<privileges-list-indices,index privileges>> for any included indices
{slm-cap} operations are executed
as the user that last created or updated the policy.
For more information,
see <<security-privileges>>.
[[slm-api-put-desc]]
==== {api-description-title}
Use the put snapshot lifecycle policy API
to create or update a snapshot lifecycle policy.
If the policy already exists,
this request increments the policy's version.
Only the latest version of the policy is stored.
[[slm-api-put-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID for the snapshot lifecycle policy
you want to create or update.
[[slm-api-put-query-params]]
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[slm-api-put-request-body]]
==== {api-request-body-title}
`schedule`::
(Required, <<schedule-cron,Cron scheduler configuration>>)
Periodic or absolute schedule
at which the policy creates snapshots
and deletes expired snapshots.
+
Schedule changes to existing policies
are applied immediately.
`name`::
+
--
(Required, string)
Name automatically assigned to each snapshot
created by the policy.
This value supports the same <<date-math-index-names,date math>>
supported in index names.
To prevent conflicting snapshot names,
a UUID is automatically appended to each snapshot name.
--
`repository`::
+
--
(Required, string)
Repository used to store snapshots
created by this policy.
This repository must exist prior to the policy's creation.
You can create a repository
using the <<modules-snapshots,snapshot repository API>>.
--
`config`::
+
--
(Required, object)
Configuration for each snapshot
created by the policy.
Parameters include:
`indices`::
(Optional, array of strings)
Array of index names or wildcard pattern of index names
included in snapshots.
`ignore_unavailable`::
(Optional, boolean)
If `true`,
missing indices do *not* cause snapshot creation to fail
and return an error.
Defaults to `false`.
`include_global_state`::
(Optional, boolean)
If `true`,
the cluster state is included in snapshots.
Defaults to `false`.
--
`retention`::
+
--
(Optional, object)
Retention rules used to retain
and delete snapshots
created by the policy.
Parameters include:
`expire_after`::
(Optional, <<time-units, time units>>)
Time period after which
a snapshot is considered expired
and eligible for deletion.
`max_count`::
(Optional, integer)
Maximum number of snapshots to retain,
even if the snapshots have not yet expired.
+
If the number of snapshots in the repository exceeds this limit,
the policy retains the most recent snapshots
and deletes older snapshots.
`min_count`::
(Optional, integer)
Minimum number of snapshots to retain,
even if the snapshots have expired.
--
[[slm-api-put-example]]
==== {api-examples-title}
The following request creates a snapshot lifecycle policy
with an ID of `daily-snapshots`:
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
"schedule": "0 30 1 * * ?", <1>
"name": "<daily-snap-{now/d}>", <2>
"repository": "my_repository", <3>
"config": { <4>
"indices": ["data-*", "important"], <5>
"ignore_unavailable": false,
"include_global_state": false
},
"retention": { <6>
"expire_after": "30d", <7>
"min_count": 5, <8>
"max_count": 50 <9>
}
}
--------------------------------------------------
// TEST[setup:setup-repository]
<1> When the snapshot should be taken, in this case, 1:30am daily
<2> The name each snapshot should be given
<3> Which repository to take the snapshot in
<4> Any extra snapshot configuration
<5> Which indices the snapshot should contain
<6> Optional retention configuration
<7> Keep snapshots for 30 days
<8> Always keep at least 5 successful snapshots, even if they're more than 30 days old
<9> Keep no more than 50 successful snapshots, even if they're less than 30 days old
[[slm-api-get]]
=== Get snapshot lifecycle policy API
++++
<titleabbrev>Get snapshot lifecycle policy</titleabbrev>
++++
Returns information
about one or more snapshot lifecycle policies.
[[slm-api-get-request]]
==== {api-request-title}
`GET /_slm/policy/<snapshot-lifecycle-policy-id>`
`GET /_slm/policy/`
[[slm-api-get-desc]]
==== {api-description-title}
Use the get snapshot lifecycle policy API
to retrieve information
about one or more snapshot lifecycle policies.
The API response also includes information
about the latest successful and failed attempts
to create automatic snapshots.
[[slm-api-get-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Optional, string)
Comma-separated list of snapshot lifecycle policy IDs
to retrieve.
[[slm-api-get-example]]
==== {api-examples-title}
[[slm-api-get-specific-ex]]
===== Get a specific policy
[source,console]
--------------------------------------------------
GET /_slm/policy/daily-snapshots?human
--------------------------------------------------
// TEST[continued]
The API returns the following response:
[source,console-result]
--------------------------------------------------
{
"daily-snapshots" : {
"version": 1, <1>
"modified_date": "2019-04-23T01:30:00.000Z", <2>
"modified_date_millis": 1556048137314,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 0,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"next_execution": "2019-04-24T01:30:00.000Z", <3>
"next_execution_millis": 1556048160000
}
}
--------------------------------------------------
// TESTRESPONSE[s/"modified_date": "2019-04-23T01:30:00.000Z"/"modified_date": $body.daily-snapshots.modified_date/ s/"modified_date_millis": 1556048137314/"modified_date_millis": $body.daily-snapshots.modified_date_millis/ s/"next_execution": "2019-04-24T01:30:00.000Z"/"next_execution": $body.daily-snapshots.next_execution/ s/"next_execution_millis": 1556048160000/"next_execution_millis": $body.daily-snapshots.next_execution_millis/]
<1> The version of the snapshot policy. Only the latest version is stored; it is incremented when the policy is updated
<2> The last time this policy was modified.
<3> The next time this policy will be executed.
[[slm-api-get-all-ex]]
===== Get all policies
[source,console]
--------------------------------------------------
GET /_slm/policy
--------------------------------------------------
// TEST[continued]
[[slm-api-execute]]
=== Execute snapshot lifecycle policy API
++++
<titleabbrev>Execute snapshot lifecycle policy</titleabbrev>
++++
Executes a snapshot lifecycle policy, immediately creating a snapshot
without waiting for the scheduled creation time.
[[slm-api-execute-request]]
==== {api-request-title}
`PUT /_slm/policy/<snapshot-lifecycle-policy-id>/_execute`
[[slm-api-execute-desc]]
==== {api-description-title}
Sometimes it can be useful to immediately execute a snapshot based on a policy,
perhaps before an upgrade or before performing other maintenance on indices. The
execute snapshot policy API allows you to perform a snapshot immediately without
waiting for a policy's scheduled invocation.
[[slm-api-execute-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID of the snapshot lifecycle policy to execute.
[[slm-api-execute-example]]
==== {api-examples-title}
To take an immediate snapshot using a policy, use the following request:
[source,console]
--------------------------------------------------
POST /_slm/policy/daily-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]
This API returns the following response with the generated snapshot name:
[source,console-result]
--------------------------------------------------
{
"snapshot_name": "daily-snap-2019.04.24-gwrqoo2xtea3q57vvg0uea"
}
--------------------------------------------------
// TESTRESPONSE[skip:we can't handle snapshots from docs tests]
The snapshot is taken in the background. You can use the
<<modules-snapshots,snapshot APIs>> to monitor its status.
Once a snapshot has been started, you can see the latest successful or failed
snapshot using the get snapshot lifecycle policy API:
[source,console]
--------------------------------------------------
GET /_slm/policy/daily-snapshots?human
--------------------------------------------------
// TEST[skip:we already tested get policy above, the last_failure may not be present though]
In this case, the response shows an error because the index did not exist:
[source,console-result]
--------------------------------------------------
{
"daily-snapshots" : {
"version": 1,
"modified_date": "2019-04-23T01:30:00.000Z",
"modified_date_millis": 1556048137314,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 1,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"last_failure": { <1>
"snapshot_name": "daily-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
"time_string": "2019-04-02T01:30:00.000Z",
"time": 1556042030000,
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
},
"next_execution": "2019-04-24T01:30:00.000Z",
"next_execution_millis": 1556048160000
}
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure is asynchronous and will be present for users, but is untestable]
<1> The most recent snapshot this policy failed to initiate, along with the details of the failure
In this case, the snapshot failed because the `important` index did not exist
and the `ignore_unavailable` setting was set to `false`.
To change the `ignore_unavailable` setting, use the
same put snapshot lifecycle policy API:
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": true,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
}
--------------------------------------------------
// TEST[continued]
You can immediately execute another snapshot to ensure the new policy works:
[source,console]
--------------------------------------------------
POST /_slm/policy/daily-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't handle snapshots in docs tests]
[source,console-result]
--------------------------------------------------
{
"snapshot_name": "daily-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a"
}
--------------------------------------------------
// TESTRESPONSE[skip:we can't handle snapshots in docs tests]
Retrieving the policy now shows that it has been executed successfully:
[source,console]
--------------------------------------------------
GET /_slm/policy/daily-snapshots?human
--------------------------------------------------
// TEST[skip:we already tested this above and the output may not be available yet]
The response now includes the successful snapshot information:
[source,console-result]
--------------------------------------------------
{
"daily-snapshots" : {
"version": 2, <1>
"modified_date": "2019-04-23T01:30:00.000Z",
"modified_date_millis": 1556048137314,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": true,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 1,
"snapshots_failed": 1,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"last_success": { <2>
"snapshot_name": "daily-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a",
"time_string": "2019-04-24T16:43:49.316Z",
"time": 1556124229316
},
"last_failure": {
"snapshot_name": "daily-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
"time_string": "2019-04-02T01:30:00.000Z",
"time": 1556042030000,
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
},
"next_execution": "2019-04-24T01:30:00.000Z",
"next_execution_millis": 1556048160000
}
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]
<1> The policy's version has been incremented because it was updated
<2> The last successfully initiated snapshot information
It is a good idea to test policies using the execute API to ensure they work.
[[slm-get-stats]]
=== Get snapshot lifecycle stats API
++++
<titleabbrev>Get snapshot lifecycle stats</titleabbrev>
++++
Returns global and policy-level statistics about actions taken by {slm}.
[[slm-api-stats-request]]
==== {api-request-title}
`GET /_slm/stats`
[[slm-api-stats-example]]
==== {api-examples-title}
[source,console]
--------------------------------------------------
GET /_slm/stats
--------------------------------------------------
// TEST[continued]
The API returns the following response:
[source,js]
--------------------------------------------------
{
"retention_runs": 13,
"retention_failed": 0,
"retention_timed_out": 0,
"retention_deletion_time": "1.4s",
"retention_deletion_time_millis": 1404,
"policy_stats": [ ],
"total_snapshots_taken": 1,
"total_snapshots_failed": 1,
"total_snapshots_deleted": 0,
"total_snapshot_deletion_failures": 0
}
--------------------------------------------------
// TESTRESPONSE[s/runs": 13/runs": $body.retention_runs/ s/_failed": 0/_failed": $body.retention_failed/ s/_timed_out": 0/_timed_out": $body.retention_timed_out/ s/"1.4s"/$body.retention_deletion_time/ s/1404/$body.retention_deletion_time_millis/ s/total_snapshots_taken": 1/total_snapshots_taken": $body.total_snapshots_taken/ s/total_snapshots_failed": 1/total_snapshots_failed": $body.total_snapshots_failed/ s/"policy_stats": [.*]/"policy_stats": $body.policy_stats/]
[[slm-api-delete]]
=== Delete snapshot lifecycle policy API
++++
<titleabbrev>Delete snapshot lifecycle policy</titleabbrev>
++++
Deletes an existing snapshot lifecycle policy.
[[slm-api-delete-request]]
==== {api-request-title}
`DELETE /_slm/policy/<snapshot-lifecycle-policy-id>`
[[slm-api-delete-desc]]
==== {api-description-title}
A policy can be deleted by issuing a delete request with the policy ID. Note
that this prevents any future snapshots from being taken, but does not cancel
any currently ongoing snapshots or remove any previously taken snapshots.
[[slm-api-delete-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID of the snapshot lifecycle policy to delete.
[[slm-api-delete-example]]
==== {api-examples-title}
[source,console]
--------------------------------------------------
DELETE /_slm/policy/daily-snapshots
--------------------------------------------------
// TEST[continued]
[[slm-api-execute-retention]]
=== Execute snapshot lifecycle retention API
++++
<titleabbrev>Execute snapshot lifecycle retention</titleabbrev>
++++
Deletes any expired snapshots based on lifecycle policy retention rules.
[[slm-api-execute-retention-request]]
==== {api-request-title}
`POST /_slm/_execute_retention`
[[slm-api-execute-retention-desc]]
==== {api-description-title}
While Snapshot Lifecycle Management retention is usually invoked through the global cluster settings
for its schedule, it can sometimes be useful to expunge expired snapshots
immediately. This API allows you to trigger a one-off retention run.
[[slm-api-execute-retention-example]]
==== {api-examples-title}
To immediately start snapshot retention, use the following request:
[source,console]
--------------------------------------------------
POST /_slm/_execute_retention
--------------------------------------------------
This API returns the following response as retention runs asynchronously in the
background:
[source,console-result]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------
[[slm-stop]]
=== Stop Snapshot Lifecycle Management API
[subs="attributes"]
++++
<titleabbrev>Stop Snapshot Lifecycle Management</titleabbrev>
++++
Stop the Snapshot Lifecycle Management (SLM) plugin.
[[slm-stop-request]]
==== {api-request-title}
`POST /_slm/stop`
[[slm-stop-desc]]
==== {api-description-title}
Halts all snapshot lifecycle management operations and stops the SLM plugin.
This is useful when you are performing maintenance on the cluster and need to
prevent SLM from performing any actions on your indices. Note that this API does
not stop any snapshots that are currently in progress, and that snapshots can
still be taken manually via the <<slm-api-execute,Execute Policy API>> even
when SLM is stopped.
The API returns as soon as the stop request has been acknowledged, but the
plugin might continue to run until in-progress operations complete and it
can be safely stopped. Use the <<slm-get-status, Get SLM Status>> API to see
if SLM is running.
==== Request Parameters
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
==== Authorization
You must have the `manage_slm` cluster privilege to use this API.
For more information, see <<security-privileges>>.
[[slm-stop-example]]
==== {api-examples-title}
Stops the SLM plugin.
[source,console]
--------------------------------------------------
POST _slm/stop
--------------------------------------------------
// TEST[continued]
If the request does not encounter errors, you receive the following result:
[source,console-result]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------
[[slm-start]]
=== Start Snapshot Lifecycle Management API
[subs="attributes"]
++++
<titleabbrev>Start Snapshot Lifecycle Management</titleabbrev>
++++
Start the Snapshot Lifecycle Management (SLM) plugin.
[[slm-start-request]]
==== {api-request-title}
`POST /_slm/start`
[[slm-start-desc]]
==== {api-description-title}
Starts the SLM plugin if it is currently stopped. SLM is started
automatically when the cluster is formed. Restarting SLM is only
necessary if it has been stopped using the <<slm-stop, Stop SLM API>>.
==== Request Parameters
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
==== Authorization
You must have the `manage_slm` cluster privilege to use this API.
For more information, see <<security-privileges>>.
[[slm-start-example]]
==== {api-examples-title}
Starts the SLM plugin.
[source,console]
--------------------------------------------------
POST _slm/start
--------------------------------------------------
// TEST[continued]
If the request succeeds, you receive the following result:
[source,console-result]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------
[[slm-get-status]]
=== Get Snapshot Lifecycle Management status API
[subs="attributes"]
++++
<titleabbrev>Get Snapshot Lifecycle Management status</titleabbrev>
++++
Retrieves the current Snapshot Lifecycle Management (SLM) status.
[[slm-get-status-request]]
==== {api-request-title}
`GET /_slm/status`
[[slm-get-status-desc]]
==== {api-description-title}
Returns the status of the SLM plugin. The `operation_mode` field in the
response shows one of three states: `RUNNING`, `STOPPING`,
or `STOPPED`. You can change the status of the SLM plugin with the
<<slm-start, Start SLM>> and <<slm-stop, Stop SLM>> APIs.
==== Request Parameters
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
==== Authorization
You must have the `manage_slm` or `read_slm` cluster privilege (or both) to use this API.
For more information, see <<security-privileges>>.
[[slm-get-status-example]]
==== {api-examples-title}
Gets the SLM plugin status.
[source,console]
--------------------------------------------------
GET _slm/status
--------------------------------------------------
If the request succeeds, the body of the response shows the operation mode:
[source,console-result]
--------------------------------------------------
{
"operation_mode": "RUNNING"
}
--------------------------------------------------


@@ -1,79 +1,93 @@
[role="xpack"]
[testenv="basic"]
[[getting-started-index-lifecycle-management]]
== Getting started with {ilm}
== Get started: Automate rollover with {ilm-init}
Let's jump into {ilm} ({ilm-init}) by working through a hands-on scenario.
This section introduces many concepts unique to {ilm-init} that
you may not be familiar with. The following sections explore
these in more detail.
++++
<titleabbrev>Automate rollover</titleabbrev>
++++
The goal of this example is to set up a set of indices that will encapsulate
the data from a time series data source. We can imagine there is a system
like {filebeat-ref}[Filebeat] that continuously indexes documents into
our writing index. We wish to roll over the index after it reaches a size
of 50 gigabytes or is 30 days old, and then delete the index
after 90 days.
This tutorial demonstrates how to use {ilm} ({ilm-init})
to manage indices that contain time-series data.
When you continuously index timestamped documents into {es} using
Filebeat, Logstash, or some other mechanism,
you typically use an index alias so you can periodically roll over to a new index.
This enables you to implement a hot-warm-cold architecture to meet your performance
requirements for your newest data, control costs over time, enforce retention policies,
and still get the most out of your data.
To automate rollover and management of time-series indices with {ilm-init}, you:
. <<ilm-gs-create-policy, Create a lifecycle policy>> with the {ilm-init} put policy API.
. <<ilm-gs-apply-policy, Create an index template>> to apply the policy to each new index.
. <<ilm-gs-bootstrap, Bootstrap an index>> as the initial write index.
. <<ilm-gs-check-progress, Verify indices are moving through the lifecycle phases>>
as expected with the {ilm-init} explain API.
[float]
[[ilm-gs-create-policy]]
=== Setting up a policy
=== Create a lifecycle policy
There are many new features introduced by {ilm-init}, but we will only focus on
a few that are needed for our example. For starters, we will use the
<<ilm-put-lifecycle,Put Policy>> API to define our first policy. Lifecycle
policies are defined in JSON and include specific
<<ilm-policy-definition,phases and actions>>.
A lifecycle policy specifies the phases in the index lifecycle
and the actions to perform in each phase. A lifecycle can have up to four phases:
`hot`, `warm`, `cold`, and `delete`. Policies are defined in JSON
and added through the {ilm-init} put policy API.
For example, the following request creates a `datastream_policy` with two phases:
* The `hot` phase defines a `rollover` action to specify that an index rolls over when it
reaches either a `max_size` of 50 gigabytes or a `max_age` of 30 days.
* The `delete` phase uses `min_age` to remove the index 90 days after rollover.
Note that this value is relative to the rollover time, not the index creation time.
[source,console]
------------------------
PUT _ilm/policy/datastream_policy <1>
PUT _ilm/policy/datastream_policy
{
"policy": { <2>
"policy": {
"phases": {
"hot": { <3>
"hot": { <1>
"actions": {
"rollover": { <4>
"max_size": "50GB",
"rollover": {
"max_size": "50GB", <2>
"max_age": "30d"
}
}
},
"delete": {
"min_age": "90d", <5>
"min_age": "90d", <3>
"actions": {
"delete": {} <6>
"delete": {} <4>
}
}
}
}
}
------------------------
<1> The `min_age` defaults to `0ms`, so new indices enter the `hot` phase immediately.
<2> Trigger the `rollover` action when either of the conditions is met.
<3> Move the index into the `delete` phase 90 days after rollover.
<4> Trigger the `delete` action when the index enters the delete phase.
<1> call to the <<ilm-put-lifecycle,put lifecycle API>> endpoint to create
a new policy named "datastream_policy"
<2> policy definition sub-object
<3> the hot phase defined in the "phases" section. Optional `min_age` field
not defined -- defaults to `0ms`
<4> rollover action definition
<5> delete phase begins after 90 days
<6> delete action definition
Here we created the policy called `datastream_policy`, which rolls over
the index being written to after it reaches 50 gigabytes or is 30
days old. The rollover occurs when either of these conditions is true.
The index will be deleted 90 days after it is rolled over.
See <<_actions>> for the complete list of actions available in each phase.
[float]
[[ilm-gs-apply-policy]]
=== Applying a policy to our index
=== Create an index template to apply the lifecycle policy
There are <<set-up-lifecycle-policy,a few ways>> to associate a
policy with an index. Since we want specific settings to be applied to
the new index created by rollover, we will set the policy via
index templates.
To automatically apply a lifecycle policy to the new write index on rollover,
specify the policy in the index template used to create new indices.
For example, the following request creates a `datastream_template` that is applied to new indices
whose names match the `datastream-*` index pattern.
The template configures two {ilm-init} settings:
* `index.lifecycle.name` specifies the name of the lifecycle policy to apply to all new indices that match
the index pattern.
* `index.lifecycle.rollover_alias` specifies the index alias to be rolled over
when the rollover action is triggered for an index.
[source,console]
-----------------------
@@ -90,6 +104,11 @@ PUT _template/datastream_template
-----------------------
// TEST[continued]
<1> Apply the template to a new index if its name starts with `datastream-`.
<2> The name of the lifecycle policy to apply to each new index.
<3> The name of the alias used to reference these indices.
Required for policies that use the rollover action.
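The template body itself is elided from the hunk above. Based on the callouts, it looks roughly like the following sketch (the shard count is an assumption):

[source,console]
--------------------------------------------------
PUT _template/datastream_template
{
  "index_patterns": ["datastream-*"],
  "settings": {
    "number_of_shards": 1,
    "index.lifecycle.name": "datastream_policy",
    "index.lifecycle.rollover_alias": "datastream"
  }
}
--------------------------------------------------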
//////////////////////////
[source,console]
@@ -100,23 +119,17 @@ DELETE /_template/datastream_template
//////////////////////////
<1> match all indices starting with "datastream-". These will include all
newly created indices from actions like rollover
<2> the name of the lifecycle policy managing the index
<3> alias to use for the rollover action, required since a rollover action is
defined in the policy.
[float]
[[ilm-gs-bootstrap]]
=== Bootstrap the initial time-series index
The above index template introduces a few new settings specific to {ilm-init}.
The first is `index.lifecycle.name`. This setting applies
the "datastream_policy" to any index created from this template. This means
that all newly created indices prefixed "datastream-" will be managed by
our policy. The other setting used here is `index.lifecycle.rollover_alias`.
This setting is required when using a policy containing the rollover
action and specifies which alias to roll over on behalf of this index.
The intention here is that the rollover alias is also defined on the index.
To begin, we will want to bootstrap our first index to write to.
To get things started, you need to bootstrap an initial index and
designate it as the write index for the rollover alias specified in your index template.
The name of this index must match the template's index pattern and end with a number.
On rollover, this value is incremented to generate a name for the new index.
For example, the following request creates an index called `datastream-000001`
and makes it the write index for the `datastream` alias.
[source,console]
-----------------------
@@ -131,35 +144,30 @@ PUT datastream-000001
-----------------------
// TEST[continued]
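The request body is elided from the hunk above; a minimal sketch that designates the new index as the write index for the `datastream` alias:

[source,console]
--------------------------------------------------
PUT datastream-000001
{
  "aliases": {
    "datastream": {
      "is_write_index": true
    }
  }
}
--------------------------------------------------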
When creating our index, we have to consider a few important configurations
that tie our index and our policy together correctly. We need to make sure
that our index name matches our index template pattern of "datastream-*",
which it does. We are using the <<ilm-rollover-action, Rollover Action>> in our policy, which
requires that our index name ends with a number. In our case, we used
`000001`. This is important so that Rollover can increment this number when
naming the new index created from rolling over.
When the rollover conditions are met, the `rollover` action:
Our index creation request leverages its template to apply our settings,
but we must also configure our rollover alias: "datastream". To do this,
we take advantage of <<aliases-write-index,write indices>>. This is a way
to define an alias to be used for both reading and writing, with only one
index at a time designated as the write index. Rollover swaps the write
index to the new index it creates, and marks the alias read-only for
the source index.
* Creates a new index called `datastream-000002`.
This matches the `datastream-*` pattern, so the settings from `datastream_template` are applied to the new index.
* Designates the new index as the write index and makes the bootstrap index read-only.
This process repeats each time rollover conditions are met.
You can search across all of the indices managed by the `datastream_policy` with the `datastream` alias.
Write operations are routed to the current write index.
For more information about write indices and rollover, see the <<rollover-index-api-desc, rollover API>>.
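For example, a sketch (the document fields are hypothetical) of writing through the alias and searching across every generation of the index:

[source,console]
--------------------------------------------------
POST datastream/_doc
{
  "@timestamp": "2020-02-04T16:45:00Z",
  "message": "a sample log line"
}

GET datastream/_search
{
  "query": {
    "match": {
      "message": "sample"
    }
  }
}
--------------------------------------------------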
[float]
[[ilm-gs-check-progress]]
=== Checking progress
Now that we have an index managed by our policy, how do we tell what is going
on? Which phase are we in? Is something broken? This section will go over a
few APIs and their responses to help us inspect our indices with respect
to {ilm-init}.
To get status information for managed indices, you use the {ilm-init} explain API.
This lets you find out things like:
With the help of the <<ilm-explain-lifecycle,Explain API>>, we can find out
things like which phase we're in and when we entered that phase. The API
also provides further information if errors occurred, or if we are blocked on
certain checks within actions.
* What phase an index is in and when it entered that phase.
* The current action and what step is being performed.
* If any errors have occurred or progress is blocked.
For example, the following request gets information about the `datastream` indices:
[source,console]
--------------------------------------------------
@@ -167,29 +175,30 @@ GET datastream-*/_ilm/explain
--------------------------------------------------
// TEST[continued]
The above request will retrieve {ilm-init} execution information for all our
managed indices.
The response below shows that the bootstrap index is waiting in the `hot` phase's `rollover` action.
It remains in this state and {ilm-init} continues to call `attempt-rollover`
until the rollover conditions are met.
[source,console-result]
--------------------------------------------------
{
"indices": {
"datastream-000001": {
"index": "datastream-000001",
"managed": true, <1>
"policy": "datastream_policy", <2>
"managed": true,
"policy": "datastream_policy", <1>
"lifecycle_date_millis": 1538475653281,
"age": "30s", <3>
"phase": "hot", <4>
"age": "30s", <2>
"phase": "hot",
"phase_time_millis": 1538475653317,
"action": "rollover", <5>
"action": "rollover",
"action_time_millis": 1538475653317,
"step": "attempt-rollover", <6>
"step": "attempt-rollover", <3>
"step_time_millis": 1538475653317,
"phase_execution": {
"policy": "datastream_policy",
"phase_definition": { <7>
"phase_definition": { <4>
"min_age": "0ms",
"actions": {
"rollover": {
@@ -198,7 +207,7 @@ managed indices.
}
}
},
"version": 1, <8>
"version": 1,
"modified_date_in_millis": 1539609701576
}
}
@@ -207,33 +216,9 @@ managed indices.
--------------------------------------------------
// TESTRESPONSE[skip:no way to know if we will get this response immediately]
<1> this index is managed by ILM
<2> the policy in question, in this case, "datastream_policy"
<3> the current age of the index
<4> what phase the index is currently in
<5> what action the index is currently on
<6> what step the index is currently on
<7> the definition of the phase
(in this case, the "hot" phase) that the index is currently on
<8> the version of the policy being used to execute the current phase
<1> The policy used to manage the index
<2> The age of the index
<3> The step {ilm-init} is performing on the index
<4> The definition of the current phase (the `hot` phase)
You can read about the full details of this response in the
<<ilm-explain-lifecycle, explain API docs>>. For now, let's focus on how
the response details which phase, action, and step we're in. We are in the
"hot" phase and the "rollover" action. Rollover will continue to be called
by {ilm-init} until its conditions are met and it rolls over the index.
Afterwards, the original index will stay in the hot phase until 90 more
days pass and it is deleted in the delete phase.
As time goes on, new indices will be created and deleted:
`datastream-000002` is created when the index meets the rollover
conditions, and `datastream-000003` after that. We will be able
to search across all of our managed indices using the "datastream" alias,
and we will be able to write to the current write index using
that same alias.
That's it! We have our first use-case managed by {ilm-init}.
To learn more about all our APIs,
check out <<index-lifecycle-management-api,ILM APIs>>.
See the <<index-lifecycle-management-api,ILM APIs>> for more information.


@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[index-lifecycle-and-snapshots]]
== Restoring snapshots of managed indices
== Restore a managed index
When restoring a snapshot that contains indices managed by Index Lifecycle
Management, the lifecycle will automatically continue to execute after the


@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[ilm-with-existing-indices]]
== Using {ilm-init} with existing indices
== Manage existing indices
While it is recommended to use {ilm-init} to manage the index lifecycle from
start to finish, it may be useful to use {ilm-init} with existing indices,


@@ -1,72 +1,35 @@
[role="xpack"]
[testenv="basic"]
[[index-lifecycle-management]]
= Managing the index lifecycle
= Manage the index lifecycle
[partintro]
--
You can configure {ilm} ({ilm-init}) policies to automatically manage indices according to
your performance, resiliency, and retention requirements.
For example, you could use {ilm-init} to:
The <<index-lifecycle-management-api,{ilm} ({ilm-init}) APIs>> enable you to
automate how you want to manage your indices over time. Rather than simply
performing management actions on your indices on a set schedule, you can base
actions on other factors such as shard size and performance requirements.
* Create a new index each day, week, or month and archive previous ones
* Spin up a new index when an index reaches a certain size
* Delete stale indices to enforce data retention standards
You control how indices are handled as they age by attaching a
lifecycle policy to the index template used to create them. You can update
the policy to modify the lifecycle of both new and existing indices.
[TIP]
To automatically back up your indices and manage snapshots,
use <<getting-started-snapshot-lifecycle-management,snapshot lifecycle policies>>.
For time series indices, there are four stages in the index lifecycle:
* Hot--the index is actively being updated and queried.
* Warm--the index is no longer being updated, but is still being queried.
* Cold--the index is no longer being updated and is seldom queried. The
information still needs to be searchable, but it's okay if those queries are
slower.
* Delete--the index is no longer needed and can safely be deleted.
The lifecycle policy governs how the index transitions through these stages and
the actions that are performed on the index at each stage. The policy can
specify:
* The maximum size or age at which you want to roll over to a new index.
* The point at which the index is no longer being updated and the number of
primary shards can be reduced.
* When to force a merge to permanently delete documents marked for deletion.
* The point at which the index can be moved to less performant hardware.
* The point at which the availability is not as critical and the number of
replicas can be reduced.
* When the index can be safely deleted.
For example, if you are indexing metrics data from a fleet of ATMs into
Elasticsearch, you might define a policy that says:
. When the index reaches 50GB, roll over to a new index.
. Move the old index into the warm stage, mark it read only, and shrink it down
to a single shard.
. After 7 days, move the index into the cold stage and move it to less expensive
hardware.
. Delete the index once the required 30 day retention period is reached.
*Snapshot Lifecycle Management*
ILM itself does allow managing indices; however, managing snapshots for a set of
indices is outside the scope of an index-level policy. Instead, there are
separate APIs for managing snapshot lifecycles. Please see the
<<snapshot-lifecycle-management-api,Snapshot Lifecycle Management>>
documentation for information about configuring snapshots.
See <<getting-started-snapshot-lifecycle-management,getting started with SLM>>.
[IMPORTANT]
===========================
{ilm} does not support mixed-version cluster usage. Although it
may be possible to create new policies against
newer-versioned nodes, there is no guarantee they will
work as intended. New policies that use actions which
do not exist on the oldest-versioned node will cause errors.
===========================
* <<overview-index-lifecycle-management>>
* <<getting-started-index-lifecycle-management>>
* <<ilm-policy-definition>>
* <<set-up-lifecycle-policy>>
* <<update-lifecycle-policy>>
* <<index-lifecycle-error-handling>>
* <<start-stop-ilm>>
* <<using-policies-rollover>>
* <<ilm-with-existing-indices>>
* <<index-lifecycle-and-snapshots>>
--
include::overview-ilm.asciidoc[]
include::getting-started-ilm.asciidoc[]
@@ -74,18 +37,15 @@ include::policy-definitions.asciidoc[]
include::set-up-lifecycle-policy.asciidoc[]
include::using-policies-rollover.asciidoc[]
include::update-lifecycle-policy.asciidoc[]
include::error-handling.asciidoc[]
include::ilm-and-snapshots.asciidoc[]
include::start-stop-ilm.asciidoc[]
include::using-policies-rollover.asciidoc[]
include::ilm-with-existing-indices.asciidoc[]
include::getting-started-slm.asciidoc[]
include::ilm-and-snapshots.asciidoc[]
include::slm-retention.asciidoc[]


@@ -0,0 +1,69 @@
[role="xpack"]
[testenv="basic"]
[[overview-index-lifecycle-management]]
== Index lifecycle management overview
++++
<titleabbrev>Overview</titleabbrev>
++++
You can create and apply {ilm-cap} ({ilm-init}) policies to automatically manage your indices
according to your performance, resiliency, and retention requirements.
Index lifecycle policies can trigger actions such as:
* **Rollover** -
include::../glossary.asciidoc[tag=rollover-def-short]
* **Shrink** -
include::../glossary.asciidoc[tag=shrink-def-short]
* **Force merge** -
include::../glossary.asciidoc[tag=force-merge-def-short]
* **Freeze** -
include::../glossary.asciidoc[tag=freeze-def-short]
* **Delete** - Permanently remove an index, including all of its data and metadata.
Typically, you associate a lifecycle policy with an index template so it is automatically applied
to new indices.
You can also apply a policy manually when you create an index.
{ilm-init} simplifies managing indices in hot-warm-cold architectures,
which are common when you're working with time-series data such as logs and metrics.
As an index ages, it moves through four possible phases:
* Hot--the index is actively being updated and queried.
* Warm--the index is no longer being updated, but is still being queried.
* Cold--the index is no longer being updated and is seldom queried. The
information still needs to be searchable, but it's okay if those queries are
slower.
* Delete--the index is no longer needed and can safely be deleted.
A lifecycle policy controls how an index moves between phases and
what actions to perform during each phase. You can specify:
* The maximum size or age at which you want to roll over to a new index.
* The point at which the index is no longer being updated and the number of
primary shards can be reduced.
* When to force a merge to permanently delete documents marked for deletion.
* The point at which the index can be moved to less performant hardware.
* The point at which the availability is not as critical and the number of
replicas can be reduced.
* When the index can be safely deleted.
For example, if you are indexing metrics data from a fleet of ATMs into
Elasticsearch, you might define a policy that says:
. When the index reaches 50GB, roll over to a new index.
. Move the old index into the warm stage, mark it read only, and shrink it down
to a single shard.
. After 7 days, move the index into the cold stage and move it to less expensive
hardware.
. Delete the index once the required 30 day retention period is reached.
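A sketch of a policy that implements these rules (the policy name and the `box_type` node attribute used to target less expensive hardware are assumptions):

[source,console]
--------------------------------------------------
PUT _ilm/policy/atm_metrics_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50GB" }
        }
      },
      "warm": {
        "actions": {
          "readonly": {},
          "shrink": { "number_of_shards": 1 }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "allocate": { "require": { "box_type": "cold" } }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
--------------------------------------------------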
[IMPORTANT]
===========================
To use {ilm-init}, all nodes in a cluster must run the same version.
Although it might be possible to create and apply policies in a mixed-version cluster,
there is no guarantee they will work as intended.
Attempting to use a policy that contains actions that aren't
supported on all nodes in a cluster will cause errors.
===========================


@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[set-up-lifecycle-policy]]
== Set up {ilm} policy
== Create lifecycle policy
In order for an index to use an {ilm} policy to manage its lifecycle, we must
first define a lifecycle policy for it to use. The following request creates a


@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[using-policies-rollover]]
== Using policies to manage index rollover
== Roll over automatically
The rollover action enables you to automatically roll over to a new index based
on the index size, document count, or age. When a rollover is triggered, a new


@@ -50,10 +50,6 @@ The modules in this section are:
Using plugins to extend Elasticsearch.
<<modules-snapshots,Snapshot/Restore>>::
Backup your data with snapshot/restore.
<<modules-threadpool,Thread pools>>::
Information about the dedicated thread pools used in Elasticsearch.


@@ -320,3 +320,27 @@ See <<snapshots-restore-snapshot>>.
See <<snapshots-register-repository>>.
[role="exclude",id="slm-api-delete"]
=== {slm-init} delete policy API
See <<slm-api-delete-policy>>.
[role="exclude",id="slm-api-execute"]
=== {slm-init} execute lifecycle API
See <<slm-api-execute-lifecycle>>.
[role="exclude",id="slm-api-get"]
=== {slm-init} get policy API
See <<slm-api-get-policy>>.
[role="exclude",id="slm-get-stats"]
=== {slm-init} get stats API
See <<slm-api-get-stats>>.
[role="exclude",id="slm-api-put"]
=== {slm-init} put policy API
See <<slm-api-put-policy>>.


@@ -53,7 +53,7 @@ include::{es-repo-dir}/indices/apis/reload-analyzers.asciidoc[]
include::{es-repo-dir}/rollup/rollup-api.asciidoc[]
include::{es-repo-dir}/search.asciidoc[]
include::{xes-repo-dir}/rest-api/security.asciidoc[]
include::{es-repo-dir}/ilm/apis/slm-api.asciidoc[]
include::{es-repo-dir}/slm/apis/slm-api.asciidoc[]
include::{es-repo-dir}/transform/apis/index.asciidoc[]
include::{xes-repo-dir}/rest-api/watcher.asciidoc[]
include::defs.asciidoc[]


@@ -0,0 +1,72 @@
[role="xpack"]
[testenv="basic"]
[[snapshot-lifecycle-management]]
== Manage the snapshot lifecycle
You can set up snapshot lifecycle policies to automate the timing, frequency, and retention of snapshots.
Snapshot policies can apply to multiple indices.
The snapshot lifecycle management (SLM) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
the building blocks for the snapshot policy features that are part of the Management application in {kib}.
The Snapshot and Restore UI makes it easy to set up policies, register snapshot repositories,
view and manage snapshots, and restore indices.
You can stop and restart SLM to temporarily pause automatic backups while performing
upgrades or other maintenance.
To disable SLM entirely, set `xpack.slm.enabled` to `false` in `elasticsearch.yml`.
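For example, in `elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
xpack.slm.enabled: false
--------------------------------------------------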
[float]
[[slm-and-security]]
=== Security and SLM
Two built-in cluster privileges control access to the SLM actions when
{es} {security-features} are enabled:
`manage_slm`:: Allows a user to perform all SLM actions, including creating and updating policies
and starting and stopping SLM.
`read_slm`:: Allows a user to perform all read-only SLM actions,
such as getting policies and checking the SLM status.
`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
index, whether or not they have access to that index.
For example, the following request configures an `slm-admin` role that grants the privileges
necessary for administering SLM.
[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
"cluster": ["manage_slm", "cluster:admin/snapshot/*"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["all"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
Or, for a read-only role that can retrieve policies (but not update, execute, or
delete them) and can only view the history index:
[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
"cluster": ["read_slm"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["read"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
include::getting-started-slm.asciidoc[]
include::slm-retention.asciidoc[]

View File

@ -0,0 +1,44 @@
[role="xpack"]
[testenv="basic"]
[[snapshot-lifecycle-management-api]]
== {slm-cap} API
You use the following APIs to set up policies to automatically take snapshots and
control how long they are retained.
For more information about {slm} ({slm-init}), see <<snapshot-lifecycle-management>>.
[float]
[[slm-api-policy-endpoint]]
=== Policy management APIs
* <<slm-api-put-policy,Create lifecycle policy>>
* <<slm-api-get-policy,Get lifecycle policy>>
* <<slm-api-delete-policy,Delete lifecycle policy>>
[float]
[[slm-api-index-endpoint]]
=== Snapshot management APIs
* <<slm-api-execute-lifecycle,Execute snapshot lifecycle policy>> (take snapshots)
* <<slm-api-execute-retention,Execute snapshot retention policy>> (delete expired snapshots)
[float]
[[slm-api-management-endpoint]]
=== Operation management APIs
* <<slm-api-get-status,Get {slm-init} status>>
* <<slm-api-get-stats,Get global and policy-level action statistics>>
* <<slm-api-start,Start {slm-init}>>
* <<slm-api-stop,Stop {slm-init}>>
include::slm-put.asciidoc[]
include::slm-get.asciidoc[]
include::slm-delete.asciidoc[]
include::slm-execute.asciidoc[]
include::slm-execute-retention.asciidoc[]
include::slm-get-status.asciidoc[]
include::slm-stats.asciidoc[]
include::slm-start.asciidoc[]
include::slm-stop.asciidoc[]

View File

@ -0,0 +1,67 @@
[[slm-api-delete-policy]]
=== Delete snapshot lifecycle policy API
++++
<titleabbrev>Delete policy</titleabbrev>
++++
Deletes an existing snapshot lifecycle policy.
[[slm-api-delete-lifecycle-request]]
==== {api-request-title}
`DELETE /_slm/policy/<snapshot-lifecycle-policy-id>`
[[slm-api-delete-lifecycle-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-delete-lifecycle-desc]]
==== {api-description-title}
Deletes the specified lifecycle policy definition.
This prevents any future snapshots from being taken
but does not cancel in-progress snapshots
or remove previously-taken snapshots.
[[slm-api-delete-lifecycle-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID of the snapshot lifecycle policy to delete.
[[slm-api-delete-lifecycle-example]]
==== {api-examples-title}
////
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
"schedule": "0 30 1 * * ?", <1>
"name": "<daily-snap-{now/d}>", <2>
"repository": "my_repository", <3>
"config": { <4>
"indices": ["data-*", "important"], <5>
"ignore_unavailable": false,
"include_global_state": false
},
"retention": { <6>
"expire_after": "30d", <7>
"min_count": 5, <8>
"max_count": 50 <9>
}
}
--------------------------------------------------
// TEST[setup:setup-repository]
////
[source,console]
--------------------------------------------------
DELETE /_slm/policy/daily-snapshots
--------------------------------------------------
// TEST[continued]

View File

@ -0,0 +1,37 @@
[[slm-api-execute-retention]]
=== Execute snapshot retention policy API
++++
<titleabbrev>Execute snapshot retention policy</titleabbrev>
++++
Deletes any snapshots that are expired according to each policy's retention rules.
[[slm-api-execute-retention-request]]
==== {api-request-title}
`POST /_slm/_execute_retention`
[[slm-api-execute-retention-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-execute-retention-desc]]
==== {api-description-title}
Manually applies the retention policy to force immediate removal of expired snapshots.
The retention policy is normally applied according to its schedule.
[[slm-api-execute-retention-example]]
==== {api-examples-title}
To force removal of expired snapshots:
[source,console]
--------------------------------------------------
POST /_slm/_execute_retention
--------------------------------------------------
Retention runs asynchronously in the background.
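Because retention runs in the background, you can use the <<slm-api-get-stats>> API to see the results of retention runs afterwards:

[source,console]
--------------------------------------------------
GET /_slm/stats
--------------------------------------------------

The `retention_runs` counter and the snapshot deletion statistics in the response reflect completed runs.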

View File

@ -0,0 +1,61 @@
[[slm-api-execute-lifecycle]]
=== Execute snapshot lifecycle policy API
++++
<titleabbrev>Execute snapshot lifecycle policy</titleabbrev>
++++
Immediately creates a snapshot according to the lifecycle policy,
without waiting for the scheduled time.
[[slm-api-execute-lifecycle-request]]
==== {api-request-title}
`POST /_slm/policy/<snapshot-lifecycle-policy-id>/_execute`
[[slm-api-execute-lifecycle-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-execute-lifecycle-desc]]
==== {api-description-title}
Manually applies the snapshot policy to immediately create a snapshot.
The snapshot policy is normally applied according to its schedule,
but you might want to manually execute a policy before performing an upgrade
or other maintenance.
[[slm-api-execute-lifecycle-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID of the snapshot lifecycle policy to execute.
[[slm-api-execute-lifecycle-example]]
==== {api-examples-title}
To take an immediate snapshot according to the `daily-snapshots` policy:
[source,console]
--------------------------------------------------
POST /_slm/policy/daily-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]
If successful, this request returns the generated snapshot name:
[source,console-result]
--------------------------------------------------
{
"snapshot_name": "daily-snap-2019.04.24-gwrqoo2xtea3q57vvg0uea"
}
--------------------------------------------------
// TESTRESPONSE[skip:we can't handle snapshots from docs tests]
The snapshot is taken in the background.
You can use the <<modules-snapshots,snapshot APIs>> to monitor the status of the snapshot.
To see the status of a policy's most recent snapshot, you can use the <<slm-api-get-policy>>.
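For example, given the generated name returned above and the `my_repository` repository configured in the policy, you could monitor the snapshot with a request like this (the name includes a random UUID, so yours will differ):

[source,console]
--------------------------------------------------
GET /_snapshot/my_repository/daily-snap-2019.04.24-gwrqoo2xtea3q57vvg0uea/_status
--------------------------------------------------
// TEST[skip:no snapshot to check in docs tests]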

View File

@ -0,0 +1,53 @@
[[slm-api-get-status]]
=== Get {slm} status API
[subs="attributes"]
++++
<titleabbrev>Get {slm} status</titleabbrev>
++++
Retrieves the status of {slm} ({slm-init}).
[[slm-api-get-status-request]]
==== {api-request-title}
`GET /_slm/status`
[[slm-api-get-status-desc]]
==== {api-description-title}
Returns the status of the {slm-init} plugin.
The `operation_mode` field in the response shows one of three states:
`STARTED`, `STOPPING`, or `STOPPED`.
You can halt and restart the {slm-init} plugin with the
<<slm-api-stop, stop>> and <<slm-api-start, start>> APIs.
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[slm-api-get-status-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the
`manage_slm` or `read_slm` cluster privileges to use this API.
For more information, see <<security-privileges>>.
[[slm-api-get-status-example]]
==== {api-examples-title}
[source,console]
--------------------------------------------------
GET _slm/status
--------------------------------------------------
The API returns the following result:
[source,console-result]
--------------------------------------------------
{
"operation_mode": "RUNNING"
}
--------------------------------------------------
// TESTRESPONSE[s/"operation_mode": "STARTED"/"operation_mode": $body.operation_mode/]

View File

@ -0,0 +1,124 @@
[[slm-api-get-policy]]
=== Get snapshot lifecycle policy API
++++
<titleabbrev>Get policy</titleabbrev>
++++
Retrieves one or more snapshot lifecycle policy definitions and
information about the latest snapshot attempts.
[[slm-api-get-request]]
==== {api-request-title}
`GET /_slm/policy/<policy-id>`
`GET /_slm/policy`
[[slm-api-get-lifecycle-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-get-desc]]
==== {api-description-title}
Returns the specified policy definition and
information about the latest successful and failed attempts to create snapshots.
If no policy is specified, returns all defined policies.
[[slm-api-get-path-params]]
==== {api-path-parms-title}
`<policy-id>`::
(Optional, string)
Comma-separated list of snapshot lifecycle policy IDs.
[[slm-api-get-example]]
==== {api-examples-title}
[[slm-api-get-specific-ex]]
===== Get a specific policy
////
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
"schedule": "0 30 1 * * ?", <1>
"name": "<daily-snap-{now/d}>", <2>
"repository": "my_repository", <3>
"config": { <4>
"indices": ["data-*", "important"], <5>
"ignore_unavailable": false,
"include_global_state": false
},
"retention": { <6>
"expire_after": "30d", <7>
"min_count": 5, <8>
"max_count": 50 <9>
}
}
--------------------------------------------------
// TEST[setup:setup-repository]
////
Get the `daily-snapshots` policy:
[source,console]
--------------------------------------------------
GET /_slm/policy/daily-snapshots?human
--------------------------------------------------
// TEST[continued]
This request returns the following response:
[source,console-result]
--------------------------------------------------
{
"daily-snapshots" : {
"version": 1, <1>
"modified_date": "2019-04-23T01:30:00.000Z", <2>
"modified_date_millis": 1556048137314,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 0,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"next_execution": "2019-04-24T01:30:00.000Z", <3>
"next_execution_millis": 1556048160000
}
}
--------------------------------------------------
// TESTRESPONSE[s/"modified_date": "2019-04-23T01:30:00.000Z"/"modified_date": $body.daily-snapshots.modified_date/ s/"modified_date_millis": 1556048137314/"modified_date_millis": $body.daily-snapshots.modified_date_millis/ s/"next_execution": "2019-04-24T01:30:00.000Z"/"next_execution": $body.daily-snapshots.next_execution/ s/"next_execution_millis": 1556048160000/"next_execution_millis": $body.daily-snapshots.next_execution_millis/]
<1> The version of the snapshot policy. Only the latest version is stored; the version is incremented when the policy is updated.
<2> The last time this policy was modified.
<3> The next time this policy will be executed.
[[slm-api-get-all-ex]]
===== Get all policies
[source,console]
--------------------------------------------------
GET /_slm/policy
--------------------------------------------------
// TEST[continued]

View File

@ -0,0 +1,177 @@
[[slm-api-put-policy]]
=== Put snapshot lifecycle policy API
++++
<titleabbrev>Put policy</titleabbrev>
++++
Creates or updates a snapshot lifecycle policy.
[[slm-api-put-request]]
==== {api-request-title}
`PUT /_slm/policy/<snapshot-lifecycle-policy-id>`
[[slm-api-put-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the
`manage_slm` cluster privilege and the `manage` index privilege
for any included indices to use this API.
For more information, see <<security-privileges>>.
[[slm-api-put-desc]]
==== {api-description-title}
Use the put snapshot lifecycle policy API
to create or update a snapshot lifecycle policy.
If the policy already exists,
this request increments the policy's version.
Only the latest version of a policy is stored.
[[slm-api-put-path-params]]
==== {api-path-parms-title}
`<snapshot-lifecycle-policy-id>`::
(Required, string)
ID for the snapshot lifecycle policy
you want to create or update.
[[slm-api-put-query-params]]
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[slm-api-put-request-body]]
==== {api-request-body-title}
`schedule`::
(Required, <<schedule-cron,Cron scheduler configuration>>)
Periodic or absolute schedule
at which the policy creates snapshots
and deletes expired snapshots.
+
Schedule changes to existing policies
are applied immediately.
`name`::
+
--
(Required, string)
Name automatically assigned to each snapshot
created by the policy.
This value supports the same <<date-math-index-names,date math>>
supported in index names.
To prevent conflicting snapshot names,
a UUID is automatically appended to each snapshot name.
--
`repository`::
+
--
(Required, string)
Repository used to store snapshots
created by this policy.
This repository must exist prior to the policy's creation.
You can create a repository
using the <<modules-snapshots,snapshot repository API>>.
--
`config`::
+
--
(Required, object)
Configuration for each snapshot
created by the policy.
Parameters include:
`indices`::
(Optional, array of strings)
Array of index names or wildcard patterns specifying the indices
to include in snapshots.
`ignore_unavailable`::
(Optional, boolean)
If `true`,
missing indices do *not* cause snapshot creation to fail
and return an error.
Defaults to `false`.
`include_global_state`::
(Optional, boolean)
If `true`,
the cluster state is included in snapshots.
Defaults to `false`.
--
`retention`::
+
--
(Optional, object)
Retention rules used to retain
and delete snapshots
created by the policy.
Parameters include:
`expire_after`::
(Optional, <<time-units, time units>>)
Time period after which
a snapshot is considered expired
and eligible for deletion.
`max_count`::
(Optional, integer)
Maximum number of snapshots to retain,
even if the snapshots have not yet expired.
+
If the number of snapshots in the repository exceeds this limit,
the policy retains the most recent snapshots
and deletes older snapshots.
`min_count`::
(Optional, integer)
Minimum number of snapshots to retain,
even if the snapshots have expired.
--
[[slm-api-put-example]]
==== {api-examples-title}
Create a `daily-snapshots` lifecycle policy:
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
"schedule": "0 30 1 * * ?", <1>
"name": "<daily-snap-{now/d}>", <2>
"repository": "my_repository", <3>
"config": { <4>
"indices": ["data-*", "important"], <5>
"ignore_unavailable": false,
"include_global_state": false
},
"retention": { <6>
"expire_after": "30d", <7>
"min_count": 5, <8>
"max_count": 50 <9>
}
}
--------------------------------------------------
// TEST[setup:setup-repository]
<1> When the snapshot should be taken, in this case, 1:30am daily
<2> The name each snapshot should be given
<3> Which repository to take the snapshot in
<4> Any extra snapshot configuration
<5> Which indices the snapshot should contain
<6> Optional retention configuration
<7> Keep snapshots for 30 days
<8> Always keep at least 5 successful snapshots, even if they're more than 30 days old
<9> Keep no more than 50 successful snapshots, even if they're less than 30 days old

View File

@ -0,0 +1,51 @@
[[slm-api-start]]
=== Start {slm} API
[subs="attributes"]
++++
<titleabbrev>Start {slm}</titleabbrev>
++++
Turns on {slm} ({slm-init}).
[[slm-api-start-request]]
==== {api-request-title}
`POST /_slm/start`
[[slm-api-start-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-start-desc]]
==== {api-description-title}
Starts the {slm-init} plugin if it's not running.
{slm-init} starts automatically when a cluster is formed.
Manually starting {slm-init} is only necessary if it has been stopped using the <<slm-api-stop>>.
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[slm-api-start-example]]
==== {api-examples-title}
Start the {slm-init} plugin:
[source,console]
--------------------------------------------------
POST _slm/start
--------------------------------------------------
If successful, this request returns:
[source,console-result]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------

View File

@ -0,0 +1,46 @@
[[slm-api-get-stats]]
=== Get snapshot lifecycle stats API
++++
<titleabbrev>Get snapshot lifecycle stats</titleabbrev>
++++
Returns global and policy-level statistics about actions taken by {slm}.
[[slm-api-stats-request]]
==== {api-request-title}
`GET /_slm/stats`
[[slm-api-stats-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-stats-example]]
==== {api-examples-title}
[source,console]
--------------------------------------------------
GET /_slm/stats
--------------------------------------------------
The API returns the following response:
[source,console-result]
--------------------------------------------------
{
"retention_runs": 13,
"retention_failed": 0,
"retention_timed_out": 0,
"retention_deletion_time": "1.4s",
"retention_deletion_time_millis": 1404,
"policy_stats": [ ],
"total_snapshots_taken": 1,
"total_snapshots_failed": 1,
"total_snapshots_deleted": 0,
"total_snapshot_deletion_failures": 0
}
--------------------------------------------------
// TESTRESPONSE[s/runs": 13/runs": $body.retention_runs/ s/_failed": 0/_failed": $body.retention_failed/ s/_timed_out": 0/_timed_out": $body.retention_timed_out/ s/"1.4s"/$body.retention_deletion_time/ s/1404/$body.retention_deletion_time_millis/ s/total_snapshots_taken": 1/total_snapshots_taken": $body.total_snapshots_taken/ s/total_snapshots_failed": 1/total_snapshots_failed": $body.total_snapshots_failed/ s/"policy_stats": [.*]/"policy_stats": $body.policy_stats/]

View File

@ -0,0 +1,48 @@
[[slm-api-stop]]
=== Stop {slm} API
[subs="attributes"]
++++
<titleabbrev>Stop {slm}</titleabbrev>
++++
Turns off {slm} ({slm-init}).
[[slm-api-stop-request]]
==== {api-request-title}
`POST /_slm/stop`
[[slm-api-stop-prereqs]]
==== {api-prereq-title}
If the {es} {security-features} are enabled, you must have the `manage_slm`
cluster privilege to use this API. For more information, see
<<security-privileges>>.
[[slm-api-stop-desc]]
==== {api-description-title}
Halts all {slm} ({slm-init}) operations and stops the {slm-init} plugin.
This is useful when you are performing maintenance on a cluster and need to
prevent {slm-init} from performing any actions on your indices.
Stopping {slm-init} does not stop any snapshots that are in progress.
You can manually trigger snapshots with the <<slm-api-execute-lifecycle>> even if {slm-init} is stopped.
The API returns a response as soon as the request is acknowledged, but
the plugin might continue to run until in-progress operations complete and it can be safely stopped.
Use the <<slm-api-get-status>> to see if {slm-init} is running.
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[slm-api-stop-example]]
==== {api-examples-title}
[source,console]
--------------------------------------------------
POST _slm/stop
--------------------------------------------------
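The stop request returns as soon as it is acknowledged. To confirm that {slm-init} has wound down before starting maintenance, check its status:

[source,console]
--------------------------------------------------
GET _slm/status
--------------------------------------------------

While in-progress operations complete, the response reports `STOPPING`; once everything has halted, it reports `STOPPED`.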

View File

@ -1,62 +1,17 @@
[role="xpack"]
[testenv="basic"]
[[getting-started-snapshot-lifecycle-management]]
== Getting started with snapshot lifecycle management
=== Configure snapshot lifecycle policies
Let's get started with snapshot lifecycle management (SLM) by working through a
Let's get started with {slm} ({slm-init}) by working through a
hands-on scenario. The goal of this example is to automatically back up {es}
indices using <<snapshot-restore,snapshots>> every day at a particular
time. Once these snapshots have been created, they are kept for a configured
amount of time and then deleted per a configured retention policy.
[float]
[[slm-and-security]]
=== Security and SLM
Before starting, it's important to understand the privileges that are needed
when configuring SLM if you are using the security plugin. There are two
built-in cluster privileges that can be used to assist: `manage_slm` and
`read_slm`. It's also good to note that the `cluster:admin/snapshot/*`
permission allows taking and deleting snapshots even for indices the role may
not have access to.
An example of configuring an administrator role for SLM follows:
[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
"cluster": ["manage_slm", "cluster:admin/snapshot/*"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["all"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
Or, for a read-only role that can retrieve policies (but not update, execute, or
delete them), as well as only view the history index:
[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
"cluster": ["read_slm"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["read"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
[float]
[[slm-gs-create-policy]]
=== Setting up a repository
[[slm-gs-register-repository]]
==== Register a repository
Before we can set up an SLM policy, we'll need to set up a
snapshot repository where the snapshots will be
@ -76,12 +31,13 @@ PUT /_snapshot/my_repository
-----------------------------------
[float]
=== Setting up a policy
[[slm-gs-create-policy]]
==== Setting up a snapshot policy
Now that we have a repository in place, we can create a policy to automatically
take snapshots. Policies are written in JSON and will define when to take
snapshots, what the snapshots should be named, and which indices should be
included, among other things. We'll use the <<slm-api-put,Put Policy>> API
included, among other things. We'll use the <<slm-api-put-policy>> API
to create the policy.
When configuring a policy, retention can also optionally be configured. See
@ -131,13 +87,14 @@ request>>, so you can specify, for example, whether the snapshot should fail in
special cases, such as if one of the specified indices cannot be found.
[float]
=== Making sure the policy works
[[slm-gs-test-policy]]
==== Test the snapshot policy
While snapshots taken by SLM policies can be viewed through the standard snapshot
API, SLM also keeps track of policy successes and failures in ways that make it
easier to determine whether the policy is working. Once a policy has executed at
least once, when you view the policy using the <<slm-api-get,Get Policy API>>,
least once, when you view the policy using the <<slm-api-get-policy>>,
some metadata will be returned indicating whether the snapshot was successfully
initiated or not.
Instead of waiting for our policy to run, let's tell SLM to take a snapshot
@ -219,7 +176,7 @@ field - it's included above only as an example. You should, however, see a
`last_success` field and a snapshot name. If you do, you've successfully taken
your first snapshot using SLM!
While only the most recent success and failure are available through the Get Policy
API, all policy executions are recorded to a history index, which may be queried
by searching the index pattern `.slm-history*`.
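For example, the following search returns the most recent policy executions first. This is a sketch; it assumes the `@timestamp` field that {slm-init} records in its history documents:

[source,console]
--------------------------------------------------
GET .slm-history*/_search
{
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}
--------------------------------------------------
// TEST[skip:history index may not exist in docs tests]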

View File

@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[slm-retention]]
== Snapshot lifecycle management retention
=== Snapshot retention
Automatic deletion of older snapshots is an optional feature of snapshot lifecycle management.
Retention is run as a cluster level task that is not associated with a particular policy's schedule
@ -11,7 +11,7 @@ run and for how long, the second configured on a policy for which snapshots shou
retention.
The cluster level settings for retention are shown below, and can be changed dynamically using the
<<cluster-update-settings,cluster-update-settings>> API:
<<cluster-update-settings>> API:
|=====================================
| Setting | Default value | Description
@ -19,7 +19,7 @@ The cluster level settings for retention are shown below, and can be changed dyn
| `slm.retention_schedule` | `0 30 1 * * ?` | A periodic or absolute time schedule for when
retention should be run. Supports all values supported by the cron scheduler: <<schedule-cron,Cron
scheduler configuration>>. Retention can also be manually run using the
<<slm-api-execute-retention,Execute retention API>>. Defaults to daily at 1:30am in the master
<<slm-api-execute-retention>> API. Defaults to daily at 1:30am in the master
node's timezone.
| `slm.retention_duration` | `"1h"` | A limit of how long SLM should spend deleting old snapshots.
@ -70,12 +70,12 @@ successful snapshots.
____
If multiple policies are configured to snapshot to the same repository, or manual snapshots have
been taken without using the <<slm-api-execute,Execute Policy API>>, they are treated as not
been taken without using the <<slm-api-execute-lifecycle>> API, they are treated as not
eligible for retention, and do not count towards any limits. This allows multiple policies to have
differing retention configuration while using the same snapshot repository.
Statistics for snapshot retention can be retrieved using the <<slm-get-stats,Get Snapshot Lifecycle
Stats API>>:
Statistics for snapshot retention can be retrieved using the
<<slm-api-get-stats>> API:
[source,console]
--------------------------------------------------

View File

@ -87,4 +87,6 @@ include::register-repository.asciidoc[]
include::take-snapshot.asciidoc[]
include::restore-snapshot.asciidoc[]
include::monitor-snapshot-restore.asciidoc[]
include::../slm/index.asciidoc[]

View File

@ -1,7 +1,7 @@
{
"slm.delete_lifecycle":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-delete.html"
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-delete-policy.html"
},
"stability":"stable",
"url":{

View File

@ -1,7 +1,7 @@
{
"slm.execute_lifecycle":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-execute.html"
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-execute-policy.html"
},
"stability":"stable",
"url":{

View File

@ -1,7 +1,7 @@
{
"slm.get_lifecycle":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get.html"
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get-policy.html"
},
"stability":"stable",
"url":{

View File

@ -1,7 +1,7 @@
{
"slm.get_stats":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/slm-get-stats.html"
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/slm-api-get-stats.html"
},
"stability":"stable",
"url":{

View File

@ -1,7 +1,7 @@
{
"slm.put_lifecycle":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-put.html"
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-put-policy.html"
},
"stability":"stable",
"url":{