* Add retention to Snapshot Lifecycle Management (#46407)
This commit adds retention to the existing Snapshot Lifecycle Management feature (#38461) as described in #43663. This allows a user to configure SLM to automatically delete older snapshots based on a number of criteria.
An example policy would look like:
```
PUT /_slm/policy/snapshot-every-day
{
  "schedule": "0 30 2 * * ?",
  "name": "<production-snap-{now/d}>",
  "repository": "my-s3-repository",
  "config": {
    "indices": ["foo-*", "important"]
  },
  // Newly configured retention options
  "retention": {
    // Snapshots should be deleted after 14 days
    "expire_after": "14d",
    // Keep a maximum of thirty snapshots
    "max_count": 30,
    // Keep a minimum of the four most recent snapshots
    "min_count": 4
  }
}
```
SLM retention runs on a schedule configurable with the `slm.retention_schedule` setting, which supports cron expressions. Deletions run for a configurable length of time, bounded by the `slm.retention_duration` setting, which defaults to 1 hour.
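As a rough sketch (assuming both are dynamic cluster settings; the duration is described as dynamic below), these could be adjusted through the cluster settings API with illustrative values:
```json
PUT /_cluster/settings
{
  "persistent": {
    // Run retention at 1:30AM instead of the default schedule
    "slm.retention_schedule": "0 30 1 * * ?",
    // Allow retention to spend up to two hours deleting snapshots
    "slm.retention_duration": "2h"
  }
}
```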
Included in this work is a new SLM stats API endpoint available through
``` json
GET /_slm/stats
```
This returns statistics about snapshots taken and deleted, as well as successful retention runs, failures, and the time spent deleting snapshots. #45362 has more information as well as an example of the output. These stats are also included when retrieving SLM policies via the API.
* Add base framework for snapshot retention (#43605)
* Add base framework for snapshot retention
This adds a basic `SnapshotRetentionService` and `SnapshotRetentionTask`
to start as the basis for SLM's retention implementation.
Relates to #38461
* Remove extraneous 'public'
* Use a local var instead of reading class var repeatedly
* Add SnapshotRetentionConfiguration for retention configuration (#43777)
* Add SnapshotRetentionConfiguration for retention configuration
This commit adds the `SnapshotRetentionConfiguration` class and its HLRC
counterpart to encapsulate the configuration for SLM retention.
Currently only a single parameter is supported, as an example, to keep
the size of the PR down (we still need to discuss the different options
we want to support and their names). It also does not yet include
version serialization checks, since the original SLM branch has not yet
been merged.
Relates to #43663
* Fix REST tests
* Fix more documentation
* Use Objects.equals to avoid NPE
* Put `randomSnapshotLifecyclePolicy` in only one place
* Occasionally return retention with no configuration
* Implement SnapshotRetentionTask's snapshot filtering and delet… (#44764)
* Implement SnapshotRetentionTask's snapshot filtering and deletion
This commit implements the snapshot filtering and deletion for
`SnapshotRetentionTask`. Currently only the expire-after age is used for
determining whether a snapshot is eligible for deletion.
Relates to #43663
* Fix deletes running on the wrong thread
* Handle missing or null policy in snap metadata differently
* Convert Tuple<String, List<SnapshotInfo>> to Map<String, List<SnapshotInfo>>
* Use the `OriginSettingClient` to work with security, enhance logging
* Prevent NPE in test by mocking Client
* Allow empty/missing SLM retention configuration (#45018)
Semi-related to #44465, this allows the `"retention"` configuration map
to be missing.
Relates to #43663
* Add min_count and max_count as SLM retention predicates (#44926)
This adds the configuration options for `min_count` and `max_count`, as
well as the logic for determining whether a snapshot meets these criteria,
to SLM's retention feature.
These options are optional; one, two, or all three can be specified
in an SLM policy, as shown in the sketch below.
Relates to #43663
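For illustration only (not part of the changelog), a policy that keeps at most fifty snapshots and sets no other retention predicates might look like:
```json
PUT /_slm/policy/snapshot-every-day
{
  "schedule": "0 30 2 * * ?",
  "name": "<production-snap-{now/d}>",
  "repository": "my-s3-repository",
  "config": {
    "indices": ["foo-*", "important"]
  },
  "retention": {
    // Only max_count is set; expire_after and min_count are omitted
    "max_count": 50
  }
}
```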
* Time-bound deletion of snapshots in retention delete function (#45065)
* Time-bound deletion of snapshots in retention delete function
With a cluster that has a large number of snapshots, it's possible that
snapshot deletion can take a very long time (especially since deletes
currently have to happen in a serial fashion). To prevent snapshot
deletion from taking forever in a cluster and blocking other operations,
this commit adds a setting to allow configuring a maximum time to spend
deleting snapshots during retention. This dynamic setting defaults to 1
hour and is best-effort, meaning that it doesn't hard-stop a deletion
at the hour mark, but ensures that once the time has passed, all
subsequent deletions are deferred until the next retention cycle.
Relates to #43663
* Wow snapshots suuuure can take a long time.
* Use a LongSupplier instead of actually sleeping
* Remove TestLogging annotation
* Remove rate limiting
* Add SLM metrics gathering and endpoint (#45362)
* Add SLM metrics gathering and endpoint
This commit adds the infrastructure to gather metrics about the different SLM actions that a cluster
takes. These stats are stored in `SnapshotLifecycleStats` and persisted in the cluster state. The
stats stored include the number of snapshots taken, failed, deleted, the number of retention runs,
as well as per-policy counts for snapshots taken, failed, and deleted. It also includes the amount
of time spent deleting snapshots from SLM retention.
This commit also adds an endpoint for retrieving all stats (further commits will expose this in the
SLM get-policy API) that looks like:
```
GET /_slm/stats
{
  "retention_runs" : 13,
  "retention_failed" : 0,
  "retention_timed_out" : 0,
  "retention_deletion_time" : "1.4s",
  "retention_deletion_time_millis" : 1404,
  "policy_metrics" : {
    "daily-snapshots2" : {
      "snapshots_taken" : 7,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 6,
      "snapshot_deletion_failures" : 0
    },
    "daily-snapshots" : {
      "snapshots_taken" : 12,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 12,
      "snapshot_deletion_failures" : 6
    }
  },
  "total_snapshots_taken" : 19,
  "total_snapshots_failed" : 0,
  "total_snapshots_deleted" : 18,
  "total_snapshot_deletion_failures" : 6
}
```
This does not yet include HLRC for this, as this commit is quite large on its own. That will be
added in a subsequent commit.
Relates to #43663
* Version qualify serialization
* Initialize counters outside constructor
* Use computeIfAbsent instead of being too verbose
* Move part of XContent generation into subclass
* Fix REST action for master merge
* Unused import
* Record history of SLM retention actions (#45513)
This commit records the deletion of snapshots by the retention component
of SLM into the SLM history index, for the purposes of reviewing and
alerting on operations taken by SLM.
* Retry SLM retention after currently running snapshot completes (#45802)
* Retry SLM retention after currently running snapshot completes
This commit adds a ClusterStateObserver to wait until the currently
running snapshot is complete before proceeding with snapshot deletion.
SLM retention waits for the maximum allowed deletion time for the
snapshot to complete; however, the waiting time is not factored into
the limit on actual deletions.
Relates to #43663
* Increase timeout waiting for snapshot completion
* Apply patch
From 2374316f0d.patch
* Rename test variables
* [TEST] Be less strict for stats checking
* Skip SLM retention if ILM is STOPPING or STOPPED (#45869)
This adds a check to ensure we take no action during SLM retention if
ILM is currently stopped or in the process of stopping.
Relates to #43663
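For reference, the ILM operating mode that drives this check can be inspected with the ILM status API; retention is skipped when the mode is `STOPPING` or `STOPPED` (response shown here only as an illustration):
```json
GET /_ilm/status

{
  "operation_mode" : "RUNNING"
}
```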
* Check all actions preventing snapshot delete during retention (#45992)
* Check all actions preventing snapshot delete during retention run
Previously we only checked to see if a snapshot was currently running,
but it turns out that more things can block snapshot deletion. This
changes the check to be a check for:
- a snapshot currently running
- a deletion already in progress
- a repo cleanup in progress
- a restore currently running
This was found by CI, where a third-party delete in a test caused SLM
retention deletion to throw an exception.
Relates to #43663
* Add unit test for okayToDeleteSnapshots
* Fix bug where SLM retention task would be scheduled on every node
* Enhance test logging
* Ignore if snapshot is already deleted
* Missing import
* Fix SnapshotRetentionServiceTests
* Expose SLM policy stats in get SLM policy API (#45989)
This also adds support for the SLM stats endpoint to the high level rest client.
Retrieving a policy now looks like:
```json
{
  "daily-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<daily-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["data-*", "important"],
        "ignore_unavailable": false,
        "include_global_state": false
      },
      "retention": {}
    },
    "stats": {
      "snapshots_taken": 0,
      "snapshots_failed": 0,
      "snapshots_deleted": 0,
      "snapshot_deletion_failures": 0
    },
    "next_execution": "2019-04-24T01:30:00.000Z",
    "next_execution_millis": 1556048160000
  }
}
```
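The response above comes from the existing get-policy call, for example:
```
GET /_slm/policy/daily-snapshots
```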
Relates to #43663
* Rewrite SnapshotLifecycleIT as an ESIntegTestCase (#46356)
* Rewrite SnapshotLifecycleIT as an ESIntegTestCase
This commit splits `SnapshotLifecycleIT` into two different tests.
`SnapshotLifecycleRestIT` which includes the tests that do not require
slow repositories, and `SLMSnapshotBlockingIntegTests` which is now an
integration test using `MockRepository` to simulate a snapshot being in
progress.
Relates to #43663
Resolves #46205
* Add error logging when exceptions are thrown
* Update serialization versions
* Fix type inference
* Use non-Cancellable HLRC return value
* Fix Client mocking in test
* Fix SLMSnapshotBlockingIntegTests for 7.x branch
* Update SnapshotRetentionTask for non-multi-repo snapshot retrieval
* Add serialization guards for SnapshotLifecyclePolicy
[role="xpack"]
|
|
[testenv="basic"]
|
|
[[getting-started-snapshot-lifecycle-management]]
|
|
== Getting started with snapshot lifecycle management
|
|
|
|
Let's get started with snapshot lifecycle management (SLM) by working through a
|
|
hands-on scenario. The goal of this example is to automatically back up {es}
|
|
indices using the <<modules-snapshots,snapshots>> every day at a particular
|
|
time.
|
|
|
|
[float]
[[slm-and-security]]
=== Security and SLM
Before starting, it's important to understand the privileges that are needed
when configuring SLM if you are using the security plugin. There are two
built-in cluster privileges that can be used to assist: `manage_slm` and
`read_slm`. It's also good to note that the `create_snapshot` permission
allows taking snapshots even for indices the role may not have access to.

An example of configuring an administrator role for SLM follows:

[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
  "cluster": ["manage_slm", "create_snapshot"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["all"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]

Or, for a read-only role that can retrieve policies (but not update, execute, or
delete them), as well as only view the history index:

[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
  "cluster": ["read_slm"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["read"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]

[float]
[[slm-gs-create-policy]]
=== Setting up a repository

Before we can set up an SLM policy, we'll need to set up a
<<snapshots-repositories,snapshot repository>> where the snapshots will be
stored. Repositories can use {plugins}/repository.html[many different backends],
including cloud storage providers. You'll probably want to use one of these in
production, but for this example we'll use a shared file system repository:

[source,console]
-----------------------------------
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------

[float]
=== Setting up a policy

Now that we have a repository in place, we can create a policy to automatically
take snapshots. Policies are written in JSON and will define when to take
snapshots, what the snapshots should be named, and which indices should be
included, among other things. We'll use the <<slm-api-put,Put Policy>> API
to create the policy.

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?", <1>
  "name": "<nightly-snap-{now/d}>", <2>
  "repository": "my_repository", <3>
  "config": { <4>
    "indices": ["*"] <5>
  },
  "retention": {}
}
--------------------------------------------------
// TEST[continued]
<1> when the snapshot should be taken, using
{xpack-ref}/trigger-schedule.html#schedule-cron[Cron syntax], in this
case at 1:30AM each day
<2> what name each snapshot should be given, using
<<date-math-index-names,date math>> to include the current date in the name
of the snapshot
<3> the repository the snapshot should be stored in
<4> the configuration to be used for the snapshot requests (see below)
<5> which indices should be included in the snapshot, in this case, every index

This policy will take a snapshot of every index each day at 1:30AM UTC.
Snapshots are incremental, allowing frequent snapshots to be stored efficiently,
so don't be afraid to configure a policy to take frequent snapshots.

In addition to specifying the indices that should be included in the snapshot,
the `config` field can be used to customize other aspects of the snapshot. You
can use any option allowed in <<snapshots-take-snapshot,a regular snapshot
request>>, so you can specify, for example, whether the snapshot should fail in
special cases, such as if one of the specified indices cannot be found.
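
For example, a policy like the following (a minimal sketch using the standard
snapshot options `ignore_unavailable` and `include_global_state`) would skip
indices that don't exist rather than failing the snapshot, and would leave the
global cluster state out of the snapshot:

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"],
    "ignore_unavailable": true,
    "include_global_state": false
  },
  "retention": {}
}
--------------------------------------------------
// TEST[skip:shown only to illustrate config options]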

[float]
=== Making sure the policy works

While snapshots taken by SLM policies can be viewed through the standard snapshot
API, SLM also keeps track of policy successes and failures in ways that are a bit
easier to use to make sure the policy is working. Once a policy has executed at
least once, when you view the policy using the <<slm-api-get,Get Policy API>>,
some metadata will be returned indicating whether the snapshot was successfully
initiated or not.

Instead of waiting for our policy to run, let's tell SLM to take a snapshot
using the configuration from our policy right now, instead of waiting for
1:30AM.

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]

This request will kick off a snapshot for our policy right now, regardless of
the schedule in the policy. This is useful for taking snapshots before making
a configuration change, upgrading, or, for our purposes, making sure our policy
is going to work successfully. The policy will continue to run on its configured
schedule after this manually triggered execution.

[source,console]
--------------------------------------------------
GET /_slm/policy/nightly-snapshots?human
--------------------------------------------------
// TEST[continued]

This request will return a response that includes the policy, information about
the last time the policy succeeded and failed, and the next time the policy
will be executed.

[source,console-result]
--------------------------------------------------
{
  "nightly-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<nightly-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["*"]
      },
      "retention": {}
    },
    "last_success": { <1>
      "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2>
      "time_string": "2019-04-24T16:43:49.316Z",
      "time": 1556124229316
    },
    "last_failure": { <3>
      "snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
      "time_string": "2019-04-02T01:30:00.000Z",
      "time": 1556042030000,
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
    },
    "next_execution": "2019-04-24T01:30:00.000Z", <4>
    "next_execution_millis": 1556048160000
  }
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]

<1> information about the last time the policy successfully initiated a snapshot
<2> the name of the snapshot that was successfully initiated
<3> information about the last time the policy failed to initiate a snapshot
<4> the next time the policy will execute

NOTE: This metadata only indicates whether the request to initiate the snapshot was
made successfully or not - after the snapshot has been successfully started, it
is possible for the snapshot to fail if, for example, the connection to a remote
repository is lost while copying files.

If you're following along, the returned SLM policy shouldn't have a `last_failure`
field - it's included above only as an example. You should, however, see a
`last_success` field and a snapshot name. If you do, you've successfully taken
your first snapshot using SLM!

While only the most recent success and failure are available through the Get Policy
API, all policy executions are recorded to a history index, which may be queried
by searching the index pattern `.slm-history*`.
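
For example, a search like the following (a minimal sketch; it assumes the history
documents record the policy id in a `policy` field and a `@timestamp` field, which
may vary by version) would show the most recent history entries for our policy:

[source,console]
--------------------------------------------------
GET /.slm-history*/_search
{
  "query": {
    "term": { "policy": "nightly-snapshots" }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}
--------------------------------------------------
// TEST[skip:the history index may not exist yet in a docs test]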

That's it! We have our first SLM policy set up to periodically take snapshots
so that our backups are always up to date. You can read more details in the
<<snapshot-lifecycle-management-api,SLM API documentation>> and the
<<modules-snapshots,general snapshot documentation>>.