[DOCS] Align with ILM changes. (#55953) (#56455)

* [DOCS] Align with ILM changes.

* Apply suggestions from code review

Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

* Incorporated review comments.
debadair 2020-05-08 14:22:27 -07:00 committed by GitHub
parent 0a254cf223
commit 6ae7327061
9 changed files with 232 additions and 211 deletions


@@ -42,9 +42,8 @@ A lifecycle policy specifies the phases in the index lifecycle
and the actions to perform in each phase. A lifecycle can have up to four phases:
`hot`, `warm`, `cold`, and `delete`.
You can define and manage policies through the {kib} Management UI,
which invokes the {ilm-init} <<ilm-put-lifecycle, put policy>> API to create policies
according to the options you specify.
You can define and manage policies through {kib} Management or with the
<<ilm-put-lifecycle, put policy>> API.
For example, you might define a `timeseries_policy` that has two phases:


@@ -1,38 +1,47 @@
[role="xpack"]
[[ilm-settings]]
=== {ilm-cap} settings
=== {ilm-cap} settings in {es}
[subs="attributes"]
++++
<titleabbrev>{ilm-cap} settings</titleabbrev>
++++
These are the settings available for configuring Index Lifecycle Management
These are the settings available for configuring <<index-lifecycle-management, {ilm}>> ({ilm-init}).
==== Cluster level settings
`xpack.ilm.enabled`::
(boolean)
deprecated:[7.8.0,Basic License features are always enabled] +
This deprecated setting has no effect and will be removed in Elasticsearch 8.0.
`indices.lifecycle.poll_interval`::
(<<time-units, time units>>) How often {ilm} checks for indices that meet policy
criteria. Defaults to `10m`.
`indices.lifecycle.history_index_enabled`::
(boolean)
Whether ILM's history index is enabled. If enabled, ILM will record the
history of actions taken as part of ILM policies to the `ilm-history-*`
indices. Defaults to `true`.
`indices.lifecycle.poll_interval`::
(<<cluster-update-settings,Dynamic>>, <<time-units, time unit value>>)
How often {ilm} checks for indices that meet policy criteria. Defaults to `10m`.
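For example, you might lower the poll interval so {ilm-init} checks conditions more often.
A minimal sketch using the <<cluster-update-settings,update cluster settings>> API
(the `5m` value is purely illustrative):
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "5m" <1>
  }
}
--------------------------------------------------
<1> Illustrative value; the default is `10m`.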
==== Index level settings
These index-level {ilm-init} settings are typically configured through index
templates. For more information, see <<ilm-gs-create-policy>>.
`index.lifecycle.name`::
(<<indices-update-settings, Dynamic>>, string)
The name of the policy to use to manage the index.
`index.lifecycle.rollover_alias`::
(<<indices-update-settings,Dynamic>>, string)
The index alias to update when the index rolls over. Specify when using a
policy that contains a rollover action. When the index rolls over, the alias is
updated to reflect that the index is no longer the write index. For more
information about rollover, see <<using-policies-rollover>>.
`index.lifecycle.parse_origination_date`::
(<<indices-update-settings,Dynamic>>, boolean)
When configured to `true` the origination date will be parsed from the index
name. The index format must match the pattern `^.*-{date_format}-\\d+`, where
the `date_format` is `yyyy.MM.dd` and the trailing digits are optional (an
@@ -41,6 +50,8 @@ index that was rolled over would normally match the full format eg.
the index creation will fail.
`index.lifecycle.origination_date`::
(<<indices-update-settings,Dynamic>>, long)
The timestamp that will be used to calculate the index age for its phase
transitions. This allows the users to create an index containing old data and
use the original creation date of the old data to calculate the index age. Must be a long (Unix epoch) value.
use the original creation date of the old data to calculate the index age.
Must be a long (Unix epoch) value.
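For example, `index.lifecycle.name` and `index.lifecycle.rollover_alias` are typically
set together in an index template. A minimal sketch using the legacy template API
(the template name, index pattern, alias, and policy name are illustrative):
[source,console]
--------------------------------------------------
PUT _template/timeseries_template
{
  "index_patterns": ["timeseries-*"], <1>
  "settings": {
    "index.lifecycle.name": "timeseries_policy", <2>
    "index.lifecycle.rollover_alias": "timeseries" <3>
  }
}
--------------------------------------------------
<1> Illustrative pattern for the indices the policy should manage
<2> The {ilm-init} policy to apply to matching indices
<3> The alias to update when a managed index rolls over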


@@ -0,0 +1,33 @@
[role="xpack"]
[[slm-settings]]
=== {slm-cap} settings in {es}
[subs="attributes"]
++++
<titleabbrev>{slm-cap} settings</titleabbrev>
++++
These are the settings available for configuring
<<snapshot-lifecycle-management, {slm}>> ({slm-init}).
==== Cluster-level settings
[[slm-history-index-enabled]]
`slm.history_index_enabled`::
(boolean)
Controls whether {slm-init} records the history of actions taken as part of {slm-init} policies
to the `slm-history-*` indices. Defaults to `true`.
[[slm-retention-schedule]]
`slm.retention_schedule`::
(<<cluster-update-settings,Dynamic>>, <<schedule-cron,cron scheduler value>>)
Controls when the <<slm-retention,retention task>> runs.
Can be a periodic or absolute time schedule.
Supports all values supported by the <<schedule-cron,cron scheduler>>.
Defaults to daily at 1:30am UTC: `0 30 1 * * ?`.
[[slm-retention-duration]]
`slm.retention_duration`::
(<<cluster-update-settings,Dynamic>>, <<time-units,time value>>)
Limits how long {slm-init} should spend deleting old snapshots.
Defaults to one hour: `1h`.
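Both settings are dynamic, so you can change them without restarting the cluster.
A minimal sketch using the <<cluster-update-settings,update settings>> API
(the values shown are simply the defaults):
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "slm.retention_schedule": "0 30 1 * * ?", <1>
    "slm.retention_duration": "1h" <2>
  }
}
--------------------------------------------------
<1> Run the retention task daily at 1:30am UTC (the default)
<2> Stop deleting old snapshots after one hour (the default)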


@@ -61,6 +61,8 @@ include::settings/monitoring-settings.asciidoc[]
include::settings/security-settings.asciidoc[]
include::settings/slm-settings.asciidoc[]
include::settings/sql-settings.asciidoc[]
include::settings/transform-settings.asciidoc[]


@@ -82,6 +82,7 @@ Repository used to store snapshots created by this policy. This repository must
exist prior to the policy's creation. You can create a repository using the
<<modules-snapshots,snapshot repository API>>.
[[slm-api-put-retention]]
`retention`::
(Optional, object)
Retention rules used to retain and delete snapshots created by the policy.
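For example, a sketch of a create policy request whose `retention` rules keep snapshots
for 30 days while retaining at least 5 and at most 50 snapshots
(the policy name, schedule, snapshot name, and repository are illustrative):
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "my_repository",
  "config": { "indices": ["*"] },
  "retention": {
    "expire_after": "30d", <1>
    "min_count": 5, <2>
    "max_count": 50 <3>
  }
}
--------------------------------------------------
<1> Snapshots older than 30 days are eligible for deletion
<2> Always keep at least 5 snapshots
<3> Never keep more than 50 snapshots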


@@ -1,23 +1,34 @@
[role="xpack"]
[testenv="basic"]
[[getting-started-snapshot-lifecycle-management]]
=== Configure snapshot lifecycle policies
=== Tutorial: Automate backups with {slm-init}
Let's get started with {slm} ({slm-init}) by working through a
hands-on scenario. The goal of this example is to automatically back up {es}
indices using <<snapshot-restore,snapshots>> every day at a particular
time. Once these snapshots have been created, they are kept for a configured
amount of time and then deleted per a configured retention policy.
This tutorial demonstrates how to automate daily backups of {es} indices using an {slm-init} policy.
The policy takes <<modules-snapshots, snapshots>> of all indices in the cluster
and stores them in a local repository.
It also defines a retention policy and automatically deletes snapshots
when they are no longer needed.
[float]
To manage snapshots with {slm-init}, you:
. <<slm-gs-register-repository, Register a repository>>.
. <<slm-gs-create-policy, Create an {slm-init} policy>>.
To test the policy, you can manually trigger it to take an initial snapshot.
[discrete]
[[slm-gs-register-repository]]
==== Register a repository
Before we can set up an SLM policy, we'll need to set up a
snapshot repository where the snapshots will be
stored. Repositories can use {plugins}/repository.html[many different backends],
including cloud storage providers. You'll probably want to use one of these in
production, but for this example we'll use a shared file system repository:
To use {slm-init}, you must have a snapshot repository configured.
The repository can be local (shared filesystem) or remote (cloud storage).
Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage,
or any other platform supported by a {plugins}/repository.html[repository plugin].
Remote repositories are generally used for production deployments.
For this tutorial, you can register a local repository from
{kibana-ref}/snapshot-repositories.html[{kib} Management]
or use the put repository API:
[source,console]
-----------------------------------
@@ -30,19 +41,26 @@ PUT /_snapshot/my_repository
}
-----------------------------------
[float]
[discrete]
[[slm-gs-create-policy]]
==== Setting up a snapshot policy
==== Set up a snapshot policy
Now that we have a repository in place, we can create a policy to automatically
take snapshots. Policies are written in JSON and will define when to take
snapshots, what the snapshots should be named, and which indices should be
included, among other things. We'll use the <<slm-api-put-policy>> API
to create the policy.
Once you have a repository in place,
you can define an {slm-init} policy to take snapshots automatically.
The policy defines when to take snapshots, which indices should be included,
and what to name the snapshots.
A policy can also specify a <<slm-retention,retention policy>> and
automatically delete snapshots when they are no longer needed.
When configuring a policy, you can also optionally configure retention. See
the <<slm-retention,SLM retention>> documentation for a full description of
how retention works.
TIP: Don't be afraid to configure a policy that takes frequent snapshots.
Snapshots are incremental and make efficient use of storage.
You can define and manage policies through {kib} Management or with the put policy API.
For example, you could define a `nightly-snapshots` policy
to back up all of your indices daily at 2:30AM UTC.
A put policy request defines the policy configuration in JSON:
[source,console]
--------------------------------------------------
@@ -62,44 +80,39 @@ PUT /_slm/policy/nightly-snapshots
}
--------------------------------------------------
// TEST[continued]
<1> when the snapshot should be taken, using
<<schedule-cron,Cron syntax>>, in this
case at 1:30AM each day
<2> what name each snapshot should be given, using
<<date-math-index-names,date math>> to include the current date in the name
of the snapshot
<3> the repository the snapshot should be stored in
<4> the configuration to be used for the snapshot requests (see below)
<5> which indices should be included in the snapshot, in this case, every index
<6> Optional retention configuration
<7> Keep snapshots for 30 days
<8> Always keep at least 5 successful snapshots
<9> Keep no more than 50 successful snapshots, even if they're less than 30 days old
<1> When the snapshot should be taken in
<<schedule-cron,Cron syntax>>: daily at 2:30AM UTC
<2> How to name the snapshot: use
<<date-math-index-names,date math>> to include the current date in the snapshot name
<3> Where to store the snapshot
<4> The configuration to be used for the snapshot requests (see below)
<5> Which indices to include in the snapshot: all indices
<6> Optional retention policy: keep snapshots for 30 days,
retaining at least 5 and no more than 50 snapshots regardless of age
This policy will take a snapshot of every index each day at 1:30AM UTC.
Snapshots are incremental, allowing frequent snapshots to be stored efficiently,
so don't be afraid to configure a policy to take frequent snapshots.
You can specify additional snapshot configuration options to customize how snapshots are taken.
For example, you could configure the policy to fail the snapshot
if one of the specified indices is missing.
For more information about snapshot options, see <<snapshots-take-snapshot,snapshot requests>>.
In addition to specifying the indices that should be included in the snapshot,
the `config` field can be used to customize other aspects of the snapshot. You
can use any option allowed in <<snapshots-take-snapshot,a regular snapshot
request>>, so you can specify, for example, whether the snapshot should fail in
special cases, such as if one of the specified indices cannot be found.
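For example, a sketch of a policy whose `config` makes the snapshot fail if an index is
missing instead of silently skipping it (the schedule, snapshot name, and repository reuse
the tutorial's values; the explicit index names are illustrative):
[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 2 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["my-index", "important"], <1>
    "ignore_unavailable": false <2>
  }
}
--------------------------------------------------
<1> Illustrative explicit index names instead of the tutorial's `*` wildcard
<2> Fail the snapshot if one of the specified indices is missing or closed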
[float]
[discrete]
[[slm-gs-test-policy]]
==== Test the snapshot policy
While snapshots taken by SLM policies can be viewed through the standard snapshot
API, SLM also keeps track of policy successes and failures in a way that makes it
easier to confirm the policy is working. Once a policy has executed at
least once, viewing the policy using the <<slm-api-get-policy>> returns
metadata indicating whether the snapshot was successfully initiated or not.
A snapshot taken by {slm-init} is just like any other snapshot.
You can view information about snapshots in {kib} Management or
get info with the <<snapshots-monitor-snapshot-restore, snapshot APIs>>.
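For example, you can list all of the snapshots in the tutorial's `my_repository`
repository with the get snapshot API (a minimal sketch):
[source,console]
--------------------------------------------------
GET /_snapshot/my_repository/_all
--------------------------------------------------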
In addition, {slm-init} keeps track of policy successes and failures so you
have insight into how the policy is working. If the policy has executed at
least once, the <<slm-api-get-policy, get policy>> API returns additional metadata
that shows if the snapshot succeeded.
Instead of waiting for our policy to run at 1:30AM, let's tell SLM to take a snapshot
right now using the configuration from our policy.
You can manually execute a snapshot policy to take a snapshot immediately.
This is useful for taking snapshots before making a configuration change,
upgrading, or to test a new policy.
Manually executing a policy does not affect its configured schedule.
For example, the following request manually triggers the `nightly-snapshots` policy:
[source,console]
--------------------------------------------------
@@ -107,11 +120,9 @@ POST /_slm/policy/nightly-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]
This request will kick off a snapshot for our policy right now, regardless of
the schedule in the policy. This is useful for taking snapshots before making
a configuration change, upgrading, or for our purposes, making sure our policy
is going to work successfully. The policy will continue to run on its configured
schedule after this execution of the policy.
After forcing the `nightly-snapshots` policy to run,
you can retrieve the policy to get success or failure information.
[source,console]
--------------------------------------------------
@@ -119,9 +130,14 @@ GET /_slm/policy/nightly-snapshots?human
--------------------------------------------------
// TEST[continued]
This request will return a response that includes the policy, information about
the last time the policy succeeded and failed, and the next time the policy
will be executed.
Only the most recent success and failure are returned,
but all policy executions are recorded in the `.slm-history*` indices.
The response also shows when the policy is scheduled to execute next.
NOTE: The response shows if the policy succeeded in _initiating_ a snapshot.
However, that does not guarantee that the snapshot completed successfully.
It is possible for the initiated snapshot to fail if, for example, the connection to a remote
repository is lost while copying files.
[source,console-result]
--------------------------------------------------
@@ -143,44 +159,19 @@ next time the policy will be executed.
"max_count": 50
}
},
"last_success": { <1>
"snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2>
"time_string": "2019-04-24T16:43:49.316Z",
"last_success": {
"snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <1>
"time_string": "2019-04-24T16:43:49.316Z", <2>
"time": 1556124229316
} ,
"last_failure": { <3>
"snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
"time_string": "2019-04-02T01:30:00.000Z",
"time": 1556042030000,
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
} ,
"next_execution": "2019-04-24T01:30:00.000Z", <4>
"next_execution_millis": 1556048160000
"next_execution": "2019-04-24T01:30:00.000Z", <3>
"next_execution_millis": 1556048160000
}
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]
<1> information about the last time the policy successfully initiated a snapshot
<2> the name of the snapshot that was successfully initiated
<3> information about the last time the policy failed to initiate a snapshot
<4> the next time the policy will execute
<1> The name of the last snapshot that was successfully initiated by the policy
<2> When the snapshot was initiated
<3> When the policy will initiate the next snapshot
NOTE: This metadata only indicates whether the request to initiate the snapshot was
made successfully or not - after the snapshot has been successfully started, it
is possible for the snapshot to fail if, for example, the connection to a remote
repository is lost while copying files.
If you're following along, the returned SLM policy shouldn't have a `last_failure`
field - it's included above only as an example. You should, however, see a
`last_success` field and a snapshot name. If you do, you've successfully taken
your first snapshot using SLM!
While only the most recent success and failure are available through the Get Policy
API, all policy executions are recorded to a history index, which may be queried
by searching the index pattern `.slm-history*`.
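For example, a sketch of searching that history for executions of the tutorial's
`nightly-snapshots` policy (the `policy` field name is an assumption about the
history index mapping):
[source,console]
--------------------------------------------------
GET .slm-history*/_search
{
  "query": {
    "match": { "policy": "nightly-snapshots" } <1>
  }
}
--------------------------------------------------
<1> Assumed field name; adjust to the actual history document mapping if it differs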
That's it! We have our first SLM policy set up to periodically take snapshots
so that our backups are always up to date. You can read more details in the
<<snapshot-lifecycle-management-api,SLM API documentation>> and the
<<modules-snapshots,general snapshot documentation>>.


@@ -1,71 +1,21 @@
[role="xpack"]
[testenv="basic"]
[[snapshot-lifecycle-management]]
== Manage the snapshot lifecycle
== {slm-init}: Manage the snapshot lifecycle
You can set up snapshot lifecycle policies to automate the timing, frequency, and retention of snapshots.
Snapshot policies can apply to multiple indices.
The snapshot lifecycle management (SLM) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
the building blocks for the snapshot policy features that are part of the Management application in {kib}.
The Snapshot and Restore UI makes it easy to set up policies, register snapshot repositories,
view and manage snapshots, and restore indices.
The {slm} ({slm-init}) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
the building blocks for the snapshot policy features that are part of {kib} Management.
{kibana-ref}/snapshot-repositories.html[Snapshot and Restore] makes it easy to
set up policies, register snapshot repositories, view and manage snapshots, and restore indices.
You can stop and restart SLM to temporarily pause automatic backups while performing
You can stop and restart {slm-init} to temporarily pause automatic backups while performing
upgrades or other maintenance.
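For example, you might pause {slm-init} before maintenance and resume it afterwards
with the stop and start APIs (a minimal sketch):
[source,console]
--------------------------------------------------
POST /_slm/stop

POST /_slm/start
--------------------------------------------------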
[float]
[[slm-and-security]]
=== Security and SLM
Two built-in cluster privileges control access to the SLM actions when
{es} {security-features} are enabled:
`manage_slm`:: Allows a user to perform all SLM actions, including creating and updating policies
and starting and stopping SLM.
`read_slm`:: Allows a user to perform all read-only SLM actions,
such as getting policies and checking the SLM status.
`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
index, whether or not they have access to that index.
For example, the following request configures an `slm-admin` role that grants the privileges
necessary for administering SLM.
[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
"cluster": ["manage_slm", "cluster:admin/snapshot/*"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["all"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
Or, for a read-only role that can retrieve policies (but not update, execute, or
delete them), as well as only view the history index:
[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
"cluster": ["read_slm"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["read"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
include::getting-started-slm.asciidoc[]
include::slm-security.asciidoc[]
include::slm-retention.asciidoc[]


@@ -3,30 +3,34 @@
[[slm-retention]]
=== Snapshot retention
Automatic deletion of older snapshots is an optional feature of snapshot lifecycle management.
Retention runs as a cluster-level task that is not associated with a particular policy's schedule
(though which snapshots to keep is configured on a per-policy basis).
Retention configuration has two parts: a cluster-level configuration that controls when retention
runs and for how long, and a per-policy configuration that determines which snapshots are
eligible for retention.
You can include a retention policy in an {slm-init} policy to automatically delete old snapshots.
Retention runs as a cluster-level task and is not associated with a particular policy's schedule.
The retention criteria are evaluated as part of the retention task, not when the policy executes.
For the retention task to automatically delete snapshots,
you need to include a <<slm-api-put-retention,`retention`>> object in your {slm-init} policy.
The cluster-level settings for retention are shown below and can be changed dynamically using the
<<cluster-update-settings>> API:
To control when the retention task runs, configure
<<slm-retention-schedule,`slm.retention_schedule`>> in the cluster settings.
You can define the schedule as a periodic or absolute <<schedule-cron, cron schedule>>.
The <<slm-retention-duration,`slm.retention_duration`>> setting limits how long
{slm-init} should spend deleting old snapshots.
|=====================================
| Setting | Default value | Description
| `slm.retention_schedule` | `0 30 1 * * ?` | A periodic or absolute time schedule for when
retention should be run. Supports all values supported by the cron scheduler:
<<schedule-cron,Cron scheduler configuration>>. Retention can also be manually run using the
<<slm-api-execute-retention>> API. Defaults to daily at 1:30am UTC.
| `slm.retention_duration` | `"1h"` | A limit of how long SLM should spend deleting old snapshots.
|=====================================
You can update the schedule and duration dynamically with the
<<cluster-update-settings, update settings>> API.
You can run the retention task manually with the
<<slm-api-execute-retention, execute retention>> API.
The retention task only considers snapshots initiated through {slm-init} policies,
either according to the policy schedule or through the
<<slm-api-execute-lifecycle, execute lifecycle>> API.
Manual snapshots are ignored and don't count toward the retention limits.
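As noted above, you can also trigger the retention task immediately instead of waiting
for its schedule; a minimal sketch using the execute retention API:
[source,console]
--------------------------------------------------
POST /_slm/_execute_retention
--------------------------------------------------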
If multiple policies snapshot to the same repository, they can define differing retention criteria.
Policy level configuration for retention is done inside the `retention` object when creating or
updating a policy. All of the retention configurations options are optional.
To retrieve information about the snapshot retention task history,
use the <<slm-api-get-stats, get stats>> API:
////
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
@@ -46,35 +50,7 @@ PUT /_slm/policy/daily-snapshots
<2> Keep snapshots for 30 days
<3> Always keep at least 5 successful snapshots
<4> Keep no more than 50 successful snapshots
The supported retention configuration options within a policy are as follows. The default value for
each is unset unless specified by the user in the policy configuration.
NOTE: The oldest snapshots are always deleted first. For example, if a policy has a `max_count` of 5
and there are 6 snapshots, the oldest snapshot is deleted.
|=====================================
| Setting | Description
| `expire_after` | A time value for how old a snapshot must be in order to be eligible for deletion.
| `min_count` | A minimum number of snapshots to keep, regardless of age.
| `max_count` | The maximum number of snapshots to keep, regardless of age.
|=====================================
As an example, the retention setting in the policy configured above would read in English as:
____
Remove snapshots older than thirty days, but always keep the latest five snapshots. If there are
more than fifty snapshots, remove the oldest surplus snapshots until there are no more than fifty
successful snapshots.
____
Each policy's retention rules apply only to the snapshots taken by that policy.
Snapshots taken by other policies that use the same repository, or manual snapshots taken
without using the <<slm-api-execute-lifecycle>> API, are not eligible for retention under
the policy and do not count towards its limits. This allows multiple policies to have
differing retention configuration while using the same snapshot repository.
Statistics for snapshot retention can be retrieved using the
<<slm-api-get-stats>> API:
////
[source,console]
--------------------------------------------------
@@ -82,7 +58,7 @@ GET /_slm/stats
--------------------------------------------------
// TEST[continued]
Which returns a response similar to the following:
The response includes the following statistics:
[source,js]
--------------------------------------------------


@@ -0,0 +1,58 @@
[[slm-and-security]]
=== Security and {slm-init}
Two built-in cluster privileges control access to the {slm-init} actions when
{es} {security-features} are enabled:
`manage_slm`:: Allows a user to perform all {slm-init} actions, including creating and updating policies
and starting and stopping {slm-init}.
`read_slm`:: Allows a user to perform all read-only {slm-init} actions,
such as getting policies and checking the {slm-init} status.
`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
index, whether or not they have access to that index.
You can create and manage roles to assign these privileges through {kib} Management.
To grant the privileges necessary to create and manage {slm-init} policies and snapshots,
you can set up a role with the `manage_slm` and `cluster:admin/snapshot/*` cluster privileges
and full access to the {slm-init} history indices.
For example, the following request creates an `slm-admin` role:
[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
"cluster": ["manage_slm", "cluster:admin/snapshot/*"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["all"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]
To grant read-only access to {slm-init} policies and the snapshot history,
you can set up a role with the `read_slm` cluster privilege and read access
to the {slm} history indices.
For example, the following request creates a `slm-read-only` role:
[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
"cluster": ["read_slm"],
"indices": [
{
"names": [".slm-history-*"],
"privileges": ["read"]
}
]
}
-----------------------------------
// TEST[skip:security is not enabled here]