[7.x] Add Snapshot Lifecycle Management (#44382)
* Add Snapshot Lifecycle Management (#43934)

* Add SnapshotLifecycleService and related CRUD APIs

This commit adds `SnapshotLifecycleService` as a new service under the ilm plugin. This service handles snapshot lifecycle policies by scheduling jobs based on each policy's defined schedule. This also includes the get, put, and delete APIs for these policies.

Relates to #38461

* Make scheduledJobIds return an immutable set
* Use Object.equals for SnapshotLifecyclePolicy
* Remove unneeded TODO
* Implement ToXContentFragment on SnapshotLifecyclePolicyItem
* Copy contents of the scheduledJobIds

* Handle snapshot lifecycle policy updates and deletions (#40062)

(Note this is a PR against the `snapshot-lifecycle-management` feature branch)

This adds logic to `SnapshotLifecycleService` to handle updates and deletes for snapshot policies. Policies with incremented versions have the old policy cancelled and the new one scheduled. Deleted policies have their schedules cancelled when they are no longer present in the cluster state metadata.

Relates to #38461

* Take a snapshot for the policy when the SLM policy is triggered (#40383)

(This is a PR for the `snapshot-lifecycle-management` branch)

This commit fills in `SnapshotLifecycleTask` to actually perform the snapshotting when the policy is triggered. Currently there is no handling of the results (other than logging), as that will be added in subsequent work. This also adds unit tests and an integration test that schedules a policy and ensures that a snapshot is correctly taken.

Relates to #38461

* Record most recent snapshot policy success/failure (#40619)

Keeping a record of the results of the successes and failures will aid troubleshooting of policies and make users more confident that their snapshots are being taken as expected.

This is the first step toward writing history in a more permanent fashion.

* Validate snapshot lifecycle policies (#40654)

(This is a PR against the `snapshot-lifecycle-management` branch)

With this commit, we now validate the content of snapshot lifecycle policies when the policy is being created or updated. This checks for the validity of the id, name, schedule, and repository. Additionally, cluster state is checked to ensure that the repository exists prior to the lifecycle being added to the cluster state.

Part of #38461

* Hook SLM into ILM's start and stop APIs (#40871)

(This pull request is for the `snapshot-lifecycle-management` branch)

This change allows the existing `/_ilm/stop` and `/_ilm/start` APIs to also manage snapshot lifecycle scheduling. When ILM is stopped, all scheduled jobs are cancelled.

Relates to #38461

* Add tests for SnapshotLifecyclePolicyItem (#40912)

Adds serialization tests for SnapshotLifecyclePolicyItem.

* Fix improper import in build.gradle after master merge

* Add human readable version of modified date for snapshot lifecycle policy (#41035)

This small change changes it from:

```
...
"modified_date": 1554843903242,
...
```

To

```
...
"modified_date" : "2019-04-09T21:05:03.242Z",
"modified_date_millis" : 1554843903242,
...
```

Including the `"modified_date"` field when the `?human` flag is used.

Relates to #38461

* Fix test

* Add API to execute SLM policy on demand (#41038)

This commit adds the ability to perform a snapshot on demand for a policy. This can be useful to take a snapshot immediately prior to performing some sort of maintenance.

```json
PUT /_ilm/snapshot/<policy>/_execute
```

And it returns the response with the generated snapshot name:

```json
{
  "snapshot_name" : "production-snap-2019.04.09-rfyv3j9qreixkdbnfuw0ug"
}
```

Note that this does not allow waiting for the snapshot, and the snapshot could still fail. It *does* record this information into the cluster state similar to a regularly triggered SLM job.

Relates to #38461

* Add next_execution to SLM policy metadata (#41221)

This adds the next time a snapshot lifecycle policy will be executed when retrieving a policy's metadata, for example:

```json
GET /_ilm/snapshot?human
{
  "production" : {
    "version" : 1,
    "modified_date" : "2019-04-15T21:16:21.865Z",
    "modified_date_millis" : 1555362981865,
    "policy" : {
      "name" : "<production-snap-{now/d}>",
      "schedule" : "*/30 * * * * ?",
      "repository" : "repo",
      "config" : {
        "indices" : [ "foo-*", "important" ],
        "ignore_unavailable" : true,
        "include_global_state" : false
      }
    },
    "next_execution" : "2019-04-15T21:16:30.000Z",
    "next_execution_millis" : 1555362990000
  },
  "other" : {
    "version" : 1,
    "modified_date" : "2019-04-15T21:12:19.959Z",
    "modified_date_millis" : 1555362739959,
    "policy" : {
      "name" : "<other-snap-{now/d}>",
      "schedule" : "0 30 2 * * ?",
      "repository" : "repo",
      "config" : {
        "indices" : [ "other" ],
        "ignore_unavailable" : false,
        "include_global_state" : true
      }
    },
    "next_execution" : "2019-04-16T02:30:00.000Z",
    "next_execution_millis" : 1555381800000
  }
}
```

Relates to #38461

* Fix and enhance tests
* Figured out how to Cron

* Change SLM endpoint from /_ilm/* to /_slm/* (#41320)

This commit changes the endpoint for snapshot lifecycle management from:

```
GET /_ilm/snapshot/<policy>
```

to:

```
GET /_slm/policy/<policy>
```

It mimics the ILM path, only using `slm` instead of `ilm`.

Relates to #38461

* Add initial documentation for SLM (#41510)

This adds the initial documentation for snapshot lifecycle management. It also includes the REST spec API json files since they're sort of documentation.

Relates to #38461

* Add `manage_slm` and `read_slm` roles (#41607)

This adds two more built-in roles:

`manage_slm`, which has permission to perform any of the SLM actions, as well as stopping, starting, and retrieving the operation status of ILM.

`read_slm`, which has permission to retrieve snapshot lifecycle policies as well as retrieving the operation status of ILM.

Relates to #38461

* Add execute to the test
* Fix ilm -> slm typo in test

* Record SLM history into an index (#41707)

It is useful to have a record of the actions that Snapshot Lifecycle Management takes, especially for the purposes of alerting when a snapshot fails or has not been taken successfully for a certain amount of time.

This adds the infrastructure to record SLM actions into an index that can be queried at leisure, along with a lifecycle policy so that this history does not grow without bound.

Additionally, SLM automatically setting up an index + lifecycle policy leads to `index_lifecycle` custom metadata in the cluster state, which some of the ML tests don't know how to deal with due to setting up custom `NamedXContentRegistry`s. Watcher would cause the same problem, but it is already disabled (for the same reason).

* High Level Rest Client support for SLM (#41767)

This commit adds HLRC support for SLM.

Relates to #38461

* Fill out documentation tests with tags
* Add more callouts and asciidoc for HLRC
* Update javadoc links to real locations

* Add security test testing SLM cluster privileges (#42678)

This adds a test to `PermissionsIT` that uses the `manage_slm` and `read_slm` cluster privileges.

Relates to #38461

* Don't redefine vars

* Add Getting Started Guide for SLM (#42878)

This commit adds a basic Getting Started Guide for SLM.

* Include SLM policy name in Snapshot metadata (#43132)

Keep track of the SLM policy in the metadata field of the snapshots taken by SLM. This allows users to more easily understand where the snapshot came from, and will enable future SLM features such as retention policies.

* Fix compilation after master merge

* [TEST] Move exception wrapping for devious exception throwing

Fixes an issue where an exception was created from one line and thrown in another.

* Fix SLM for the change to AcknowledgedResponse

* Add Snapshot Lifecycle Management Package Docs (#43535)

* Fix compilation for transport actions now that task is required

* Add a note mentioning the privileges needed for SLM (#43708)

This adds a note to the top of the "getting started with SLM" documentation mentioning that there are two built-in privileges to assist with creating roles for SLM users and administrators.

Relates to #38461

* Mention that you can create snapshots for indices you can't read
* Fix REST tests for new number of cluster privileges
* Mute testThatNonExistingTemplatesAreAddedImmediately (#43951)
* Fix SnapshotHistoryStoreTests after merge
* Remove overridden newResponse functions that have been removed
* Fix compilation for backport
* Fix get snapshot output parsing in test
* [DOCS] Add redirects for removed autogen anchors (#44380)
* Switch <tt>...</tt> in javadocs for {@code ...}
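To tie the pieces of the commit message together, here is a minimal, illustrative sketch of the high level REST client support added here: it registers a policy and then triggers it on demand. It assumes an already-configured `RestHighLevelClient` and a snapshot repository named `my_repository` (both hypothetical); the class name `SlmQuickStart` and the example ids/schedule are illustrative only, while the request/response classes and `indexLifecycle()` methods are the ones added in this commit.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.AcknowledgedResponse;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.PutSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.SnapshotLifecyclePolicy;

public class SlmQuickStart {
    // Assumes "my_repository" has already been registered as a snapshot repository.
    static String putAndExecute(RestHighLevelClient client) throws IOException {
        Map<String, Object> config = new HashMap<>();
        config.put("indices", Collections.singletonList("idx"));

        // PUT _slm/policy/policy_id with a daily 1:30am schedule
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "policy_id", "<daily-snap-{now/d}>", "0 30 1 * * ?", "my_repository", config);
        AcknowledgedResponse putResponse = client.indexLifecycle()
            .putSnapshotLifecyclePolicy(new PutSnapshotLifecyclePolicyRequest(policy), RequestOptions.DEFAULT);
        assert putResponse.isAcknowledged();

        // PUT _slm/policy/policy_id/_execute: take a snapshot now instead of waiting for the schedule
        ExecuteSnapshotLifecyclePolicyResponse executeResponse = client.indexLifecycle()
            .executeSnapshotLifecyclePolicy(new ExecuteSnapshotLifecyclePolicyRequest("policy_id"),
                RequestOptions.DEFAULT);
        // The returned name follows the policy's name template, e.g. "daily-snap-2019.04.09-..."
        return executeResponse.getSnapshotName();
    }
}
```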
parent aa9dd313cf
commit fb0461ac76
@@ -34,6 +34,12 @@ import org.elasticsearch.client.indexlifecycle.RemoveIndexLifecyclePolicyRespons
import org.elasticsearch.client.indexlifecycle.RetryLifecyclePolicyRequest;
import org.elasticsearch.client.indexlifecycle.StartILMRequest;
import org.elasticsearch.client.indexlifecycle.StopILMRequest;
import org.elasticsearch.client.snapshotlifecycle.DeleteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.PutSnapshotLifecyclePolicyRequest;

import java.io.IOException;

@@ -300,4 +306,144 @@ public class IndexLifecycleClient {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, IndexLifecycleRequestConverters::retryLifecycle, options,
            AcknowledgedResponse::fromXContent, listener, emptySet());
    }

    /**
     * Retrieve one or more snapshot lifecycle policy definitions.
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-get-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public GetSnapshotLifecyclePolicyResponse getSnapshotLifecyclePolicy(GetSnapshotLifecyclePolicyRequest request,
                                                                         RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, IndexLifecycleRequestConverters::getSnapshotLifecyclePolicy,
            options, GetSnapshotLifecyclePolicyResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously retrieve one or more snapshot lifecycle policy definition.
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-get-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void getSnapshotLifecyclePolicyAsync(GetSnapshotLifecyclePolicyRequest request, RequestOptions options,
                                                ActionListener<GetSnapshotLifecyclePolicyResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, IndexLifecycleRequestConverters::getSnapshotLifecyclePolicy,
            options, GetSnapshotLifecyclePolicyResponse::fromXContent, listener, emptySet());
    }

    /**
     * Create or modify a snapshot lifecycle definition.
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-put-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public AcknowledgedResponse putSnapshotLifecyclePolicy(PutSnapshotLifecyclePolicyRequest request,
                                                           RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, IndexLifecycleRequestConverters::putSnapshotLifecyclePolicy,
            options, AcknowledgedResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously create or modify a snapshot lifecycle definition.
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-put-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void putSnapshotLifecyclePolicyAsync(PutSnapshotLifecyclePolicyRequest request, RequestOptions options,
                                                ActionListener<AcknowledgedResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, IndexLifecycleRequestConverters::putSnapshotLifecyclePolicy,
            options, AcknowledgedResponse::fromXContent, listener, emptySet());
    }

    /**
     * Delete a snapshot lifecycle definition
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-delete-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public AcknowledgedResponse deleteSnapshotLifecyclePolicy(DeleteSnapshotLifecyclePolicyRequest request,
                                                              RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, IndexLifecycleRequestConverters::deleteSnapshotLifecyclePolicy,
            options, AcknowledgedResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously delete a snapshot lifecycle definition
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-delete-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void deleteSnapshotLifecyclePolicyAsync(DeleteSnapshotLifecyclePolicyRequest request, RequestOptions options,
                                                   ActionListener<AcknowledgedResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, IndexLifecycleRequestConverters::deleteSnapshotLifecyclePolicy,
            options, AcknowledgedResponse::fromXContent, listener, emptySet());
    }

    /**
     * Execute a snapshot lifecycle definition
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-execute-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public ExecuteSnapshotLifecyclePolicyResponse executeSnapshotLifecyclePolicy(ExecuteSnapshotLifecyclePolicyRequest request,
                                                                                 RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, IndexLifecycleRequestConverters::executeSnapshotLifecyclePolicy,
            options, ExecuteSnapshotLifecyclePolicyResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously execute a snapshot lifecycle definition
     * See <pre>
     * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/
     * java-rest-high-ilm-slm-execute-snapshot-lifecycle-policy.html
     * </pre>
     * for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void executeSnapshotLifecyclePolicyAsync(ExecuteSnapshotLifecyclePolicyRequest request, RequestOptions options,
                                                    ActionListener<ExecuteSnapshotLifecyclePolicyResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, IndexLifecycleRequestConverters::executeSnapshotLifecyclePolicy,
            options, ExecuteSnapshotLifecyclePolicyResponse::fromXContent, listener, emptySet());
    }
}
@@ -32,6 +32,10 @@ import org.elasticsearch.client.indexlifecycle.RemoveIndexLifecyclePolicyRequest
import org.elasticsearch.client.indexlifecycle.RetryLifecyclePolicyRequest;
import org.elasticsearch.client.indexlifecycle.StartILMRequest;
import org.elasticsearch.client.indexlifecycle.StopILMRequest;
import org.elasticsearch.client.snapshotlifecycle.DeleteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.PutSnapshotLifecyclePolicyRequest;
import org.elasticsearch.common.Strings;

import java.io.IOException;

@@ -159,4 +163,56 @@ final class IndexLifecycleRequestConverters {
        request.addParameters(params.asMap());
        return request;
    }

    static Request getSnapshotLifecyclePolicy(GetSnapshotLifecyclePolicyRequest getSnapshotLifecyclePolicyRequest) {
        String endpoint = new RequestConverters.EndpointBuilder().addPathPartAsIs("_slm/policy")
            .addCommaSeparatedPathParts(getSnapshotLifecyclePolicyRequest.getPolicyIds()).build();
        Request request = new Request(HttpGet.METHOD_NAME, endpoint);
        RequestConverters.Params params = new RequestConverters.Params();
        params.withMasterTimeout(getSnapshotLifecyclePolicyRequest.masterNodeTimeout());
        params.withTimeout(getSnapshotLifecyclePolicyRequest.timeout());
        request.addParameters(params.asMap());
        return request;
    }

    static Request putSnapshotLifecyclePolicy(PutSnapshotLifecyclePolicyRequest putSnapshotLifecyclePolicyRequest) throws IOException {
        String endpoint = new RequestConverters.EndpointBuilder()
            .addPathPartAsIs("_slm/policy")
            .addPathPartAsIs(putSnapshotLifecyclePolicyRequest.getPolicy().getId())
            .build();
        Request request = new Request(HttpPut.METHOD_NAME, endpoint);
        RequestConverters.Params params = new RequestConverters.Params();
        params.withMasterTimeout(putSnapshotLifecyclePolicyRequest.masterNodeTimeout());
        params.withTimeout(putSnapshotLifecyclePolicyRequest.timeout());
        request.addParameters(params.asMap());
        request.setEntity(RequestConverters.createEntity(putSnapshotLifecyclePolicyRequest, RequestConverters.REQUEST_BODY_CONTENT_TYPE));
        return request;
    }

    static Request deleteSnapshotLifecyclePolicy(DeleteSnapshotLifecyclePolicyRequest deleteSnapshotLifecyclePolicyRequest) {
        Request request = new Request(HttpDelete.METHOD_NAME,
            new RequestConverters.EndpointBuilder()
                .addPathPartAsIs("_slm/policy")
                .addPathPartAsIs(deleteSnapshotLifecyclePolicyRequest.getPolicyId())
                .build());
        RequestConverters.Params params = new RequestConverters.Params();
        params.withMasterTimeout(deleteSnapshotLifecyclePolicyRequest.masterNodeTimeout());
        params.withTimeout(deleteSnapshotLifecyclePolicyRequest.timeout());
        request.addParameters(params.asMap());
        return request;
    }

    static Request executeSnapshotLifecyclePolicy(ExecuteSnapshotLifecyclePolicyRequest executeSnapshotLifecyclePolicyRequest) {
        Request request = new Request(HttpPut.METHOD_NAME,
            new RequestConverters.EndpointBuilder()
                .addPathPartAsIs("_slm/policy")
                .addPathPartAsIs(executeSnapshotLifecyclePolicyRequest.getPolicyId())
                .addPathPartAsIs("_execute")
                .build());
        RequestConverters.Params params = new RequestConverters.Params();
        params.withMasterTimeout(executeSnapshotLifecyclePolicyRequest.masterNodeTimeout());
        params.withTimeout(executeSnapshotLifecyclePolicyRequest.timeout());
        request.addParameters(params.asMap());
        return request;
    }
}
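These converters only translate the high-level request objects into low-level `Request`s; the sketch below is a hypothetical, test-style check (it would have to live alongside `IndexLifecycleRequestConverters`, which is package-private) showing roughly what the execute converter produces, assuming `EndpointBuilder` joins path parts with a leading slash as elsewhere in the client.

```java
// Hypothetical snippet in the org.elasticsearch.client package, for illustration only.
ExecuteSnapshotLifecyclePolicyRequest execute =
    new ExecuteSnapshotLifecyclePolicyRequest("policy_id");
Request lowLevelRequest = IndexLifecycleRequestConverters.executeSnapshotLifecyclePolicy(execute);
// The converter above builds: PUT /_slm/policy/policy_id/_execute
assert "PUT".equals(lowLevelRequest.getMethod());
assert "/_slm/policy/policy_id/_execute".equals(lowLevelRequest.getEndpoint());
```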
@@ -0,0 +1,49 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.client.TimedRequest;

import java.util.Objects;

public class DeleteSnapshotLifecyclePolicyRequest extends TimedRequest {
    private final String policyId;

    public DeleteSnapshotLifecyclePolicyRequest(String policyId) {
        this.policyId = policyId;
    }

    public String getPolicyId() {
        return this.policyId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        DeleteSnapshotLifecyclePolicyRequest other = (DeleteSnapshotLifecyclePolicyRequest) o;
        return this.policyId.equals(other.policyId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.policyId);
    }
}

@@ -0,0 +1,49 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.client.TimedRequest;

import java.util.Objects;

public class ExecuteSnapshotLifecyclePolicyRequest extends TimedRequest {
    private final String policyId;

    public ExecuteSnapshotLifecyclePolicyRequest(String policyId) {
        this.policyId = policyId;
    }

    public String getPolicyId() {
        return this.policyId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        ExecuteSnapshotLifecyclePolicyRequest other = (ExecuteSnapshotLifecyclePolicyRequest) o;
        return this.policyId.equals(other.policyId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.policyId);
    }
}

@@ -0,0 +1,81 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;

public class ExecuteSnapshotLifecyclePolicyResponse implements ToXContentObject {

    private static final ParseField SNAPSHOT_NAME = new ParseField("snapshot_name");
    private static final ConstructingObjectParser<ExecuteSnapshotLifecyclePolicyResponse, Void> PARSER =
        new ConstructingObjectParser<>("excecute_snapshot_policy", true,
            a -> new ExecuteSnapshotLifecyclePolicyResponse((String) a[0]));

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), SNAPSHOT_NAME);
    }

    private final String snapshotName;

    public ExecuteSnapshotLifecyclePolicyResponse(String snapshotName) {
        this.snapshotName = snapshotName;
    }

    public static ExecuteSnapshotLifecyclePolicyResponse fromXContent(XContentParser parser) {
        return PARSER.apply(parser, null);
    }

    public String getSnapshotName() {
        return this.snapshotName;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        builder.field(SNAPSHOT_NAME.getPreferredName(), snapshotName);
        builder.endObject();
        return builder;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }

        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        ExecuteSnapshotLifecyclePolicyResponse other = (ExecuteSnapshotLifecyclePolicyResponse) o;
        return this.snapshotName.equals(other.snapshotName);
    }

    @Override
    public int hashCode() {
        return this.snapshotName.hashCode();
    }
}

@@ -0,0 +1,49 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.client.TimedRequest;

import java.util.Arrays;

public class GetSnapshotLifecyclePolicyRequest extends TimedRequest {
    private final String[] policyIds;

    public GetSnapshotLifecyclePolicyRequest(String... ids) {
        this.policyIds = ids;
    }

    public String[] getPolicyIds() {
        return this.policyIds;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        GetSnapshotLifecyclePolicyRequest other = (GetSnapshotLifecyclePolicyRequest) o;
        return Arrays.equals(this.policyIds, other.policyIds);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(this.policyIds);
    }
}

@@ -0,0 +1,88 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken;

public class GetSnapshotLifecyclePolicyResponse implements ToXContentObject {

    private final Map<String, SnapshotLifecyclePolicyMetadata> policies;

    public GetSnapshotLifecyclePolicyResponse(Map<String, SnapshotLifecyclePolicyMetadata> policies) {
        this.policies = policies;
    }

    public Map<String, SnapshotLifecyclePolicyMetadata> getPolicies() {
        return this.policies;
    }

    public static GetSnapshotLifecyclePolicyResponse fromXContent(XContentParser parser) throws IOException {
        if (parser.currentToken() == null) {
            parser.nextToken();
        }
        ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation);
        parser.nextToken();

        Map<String, SnapshotLifecyclePolicyMetadata> policies = new HashMap<>();
        while (parser.isClosed() == false) {
            if (parser.currentToken() == XContentParser.Token.START_OBJECT) {
                final String policyId = parser.currentName();
                SnapshotLifecyclePolicyMetadata policyDefinition = SnapshotLifecyclePolicyMetadata.parse(parser, policyId);
                policies.put(policyId, policyDefinition);
            } else {
                parser.nextToken();
            }
        }
        return new GetSnapshotLifecyclePolicyResponse(policies);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        return builder;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }

        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        GetSnapshotLifecyclePolicyResponse other = (GetSnapshotLifecyclePolicyResponse) o;
        return Objects.equals(this.policies, other.policies);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.policies);
    }
}

@@ -0,0 +1,59 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.client.TimedRequest;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;

import java.io.IOException;
import java.util.Objects;

public class PutSnapshotLifecyclePolicyRequest extends TimedRequest implements ToXContentObject {

    private final SnapshotLifecyclePolicy policy;

    public PutSnapshotLifecyclePolicyRequest(SnapshotLifecyclePolicy policy) {
        this.policy = Objects.requireNonNull(policy, "policy definition cannot be null");
    }

    public SnapshotLifecyclePolicy getPolicy() {
        return policy;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        policy.toXContent(builder, params);
        return builder;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        PutSnapshotLifecyclePolicyRequest other = (PutSnapshotLifecyclePolicyRequest) o;
        return Objects.equals(this.policy, other.policy);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.policy);
    }
}

@@ -0,0 +1,100 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.Objects;

public class SnapshotInvocationRecord implements ToXContentObject {
    static final ParseField SNAPSHOT_NAME = new ParseField("snapshot_name");
    static final ParseField TIMESTAMP = new ParseField("time");
    static final ParseField DETAILS = new ParseField("details");

    private String snapshotName;
    private long timestamp;
    private String details;

    public static final ConstructingObjectParser<SnapshotInvocationRecord, String> PARSER =
        new ConstructingObjectParser<>("snapshot_policy_invocation_record", true,
            a -> new SnapshotInvocationRecord((String) a[0], (long) a[1], (String) a[2]));

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), SNAPSHOT_NAME);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), TIMESTAMP);
        PARSER.declareString(ConstructingObjectParser.optionalConstructorArg(), DETAILS);
    }

    public static SnapshotInvocationRecord parse(XContentParser parser, String name) {
        return PARSER.apply(parser, name);
    }

    public SnapshotInvocationRecord(String snapshotName, long timestamp, String details) {
        this.snapshotName = Objects.requireNonNull(snapshotName, "snapshot name must be provided");
        this.timestamp = timestamp;
        this.details = details;
    }

    public String getSnapshotName() {
        return snapshotName;
    }

    public long getTimestamp() {
        return timestamp;
    }

    public String getDetails() {
        return details;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        {
            builder.field(SNAPSHOT_NAME.getPreferredName(), snapshotName);
            builder.timeField(TIMESTAMP.getPreferredName(), "time_string", timestamp);
            if (Objects.nonNull(details)) {
                builder.field(DETAILS.getPreferredName(), details);
            }
        }
        builder.endObject();
        return builder;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SnapshotInvocationRecord that = (SnapshotInvocationRecord) o;
        return getTimestamp() == that.getTimestamp() &&
            Objects.equals(getSnapshotName(), that.getSnapshotName()) &&
            Objects.equals(getDetails(), that.getDetails());
    }

    @Override
    public int hashCode() {
        return Objects.hash(getSnapshotName(), getTimestamp(), getDetails());
    }
}

@@ -0,0 +1,137 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.Map;
import java.util.Objects;

public class SnapshotLifecyclePolicy implements ToXContentObject {

    private final String id;
    private final String name;
    private final String schedule;
    private final String repository;
    private final Map<String, Object> configuration;

    private static final ParseField NAME = new ParseField("name");
    private static final ParseField SCHEDULE = new ParseField("schedule");
    private static final ParseField REPOSITORY = new ParseField("repository");
    private static final ParseField CONFIG = new ParseField("config");
    private static final IndexNameExpressionResolver.DateMathExpressionResolver DATE_MATH_RESOLVER =
        new IndexNameExpressionResolver.DateMathExpressionResolver();

    @SuppressWarnings("unchecked")
    private static final ConstructingObjectParser<SnapshotLifecyclePolicy, String> PARSER =
        new ConstructingObjectParser<>("snapshot_lifecycle", true,
            (a, id) -> {
                String name = (String) a[0];
                String schedule = (String) a[1];
                String repo = (String) a[2];
                Map<String, Object> config = (Map<String, Object>) a[3];
                return new SnapshotLifecyclePolicy(id, name, schedule, repo, config);
            });

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), NAME);
        PARSER.declareString(ConstructingObjectParser.constructorArg(), SCHEDULE);
        PARSER.declareString(ConstructingObjectParser.constructorArg(), REPOSITORY);
        PARSER.declareObject(ConstructingObjectParser.constructorArg(), (p, c) -> p.map(), CONFIG);
    }

    public SnapshotLifecyclePolicy(final String id, final String name, final String schedule,
                                   final String repository, Map<String, Object> configuration) {
        this.id = Objects.requireNonNull(id);
        this.name = name;
        this.schedule = schedule;
        this.repository = repository;
        this.configuration = configuration;
    }

    public String getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public String getSchedule() {
        return this.schedule;
    }

    public String getRepository() {
        return this.repository;
    }

    public Map<String, Object> getConfig() {
        return this.configuration;
    }

    public static SnapshotLifecyclePolicy parse(XContentParser parser, String id) {
        return PARSER.apply(parser, id);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        builder.field(NAME.getPreferredName(), this.name);
        builder.field(SCHEDULE.getPreferredName(), this.schedule);
        builder.field(REPOSITORY.getPreferredName(), this.repository);
        builder.field(CONFIG.getPreferredName(), this.configuration);
        builder.endObject();
        return builder;
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name, schedule, repository, configuration);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }

        if (obj.getClass() != getClass()) {
            return false;
        }
        SnapshotLifecyclePolicy other = (SnapshotLifecyclePolicy) obj;
        return Objects.equals(id, other.id) &&
            Objects.equals(name, other.name) &&
            Objects.equals(schedule, other.schedule) &&
            Objects.equals(repository, other.repository) &&
            Objects.equals(configuration, other.configuration);
    }

    @Override
    public String toString() {
        return Strings.toString(this);
    }
}
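To make the serialized form concrete, here is a small, illustrative sketch (hypothetical class name and values) of the JSON this class renders, which is the body `putSnapshotLifecyclePolicy` sends to `PUT _slm/policy/<id>`; note that the policy id lives in the URL, not in the body.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.client.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.common.Strings;

public class PolicyBodyExample {
    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("indices", Collections.singletonList("important"));
        config.put("ignore_unavailable", true);

        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "daily-snapshots", "<daily-snap-{now/d}>", "0 30 2 * * ?", "repo", config);

        // toString() delegates to Strings.toString(this), the same XContent used for the request body,
        // roughly: {"name":"<daily-snap-{now/d}>","schedule":"0 30 2 * * ?","repository":"repo","config":{...}}
        System.out.println(Strings.toString(policy));
    }
}
```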
@@ -0,0 +1,157 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.snapshotlifecycle;

import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.Objects;

public class SnapshotLifecyclePolicyMetadata implements ToXContentObject {

    static final ParseField POLICY = new ParseField("policy");
    static final ParseField VERSION = new ParseField("version");
    static final ParseField MODIFIED_DATE_MILLIS = new ParseField("modified_date_millis");
    static final ParseField MODIFIED_DATE = new ParseField("modified_date");
    static final ParseField LAST_SUCCESS = new ParseField("last_success");
    static final ParseField LAST_FAILURE = new ParseField("last_failure");
    static final ParseField NEXT_EXECUTION_MILLIS = new ParseField("next_execution_millis");
    static final ParseField NEXT_EXECUTION = new ParseField("next_execution");

    private final SnapshotLifecyclePolicy policy;
    private final long version;
    private final long modifiedDate;
    private final long nextExecution;
    @Nullable
    private final SnapshotInvocationRecord lastSuccess;
    @Nullable
    private final SnapshotInvocationRecord lastFailure;

    @SuppressWarnings("unchecked")
    public static final ConstructingObjectParser<SnapshotLifecyclePolicyMetadata, String> PARSER =
        new ConstructingObjectParser<>("snapshot_policy_metadata",
            a -> {
                SnapshotLifecyclePolicy policy = (SnapshotLifecyclePolicy) a[0];
                long version = (long) a[1];
                long modifiedDate = (long) a[2];
                SnapshotInvocationRecord lastSuccess = (SnapshotInvocationRecord) a[3];
                SnapshotInvocationRecord lastFailure = (SnapshotInvocationRecord) a[4];
                long nextExecution = (long) a[5];

                return new SnapshotLifecyclePolicyMetadata(policy, version, modifiedDate, lastSuccess, lastFailure, nextExecution);
            });

    static {
        PARSER.declareObject(ConstructingObjectParser.constructorArg(), SnapshotLifecyclePolicy::parse, POLICY);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), VERSION);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), MODIFIED_DATE_MILLIS);
        PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), SnapshotInvocationRecord::parse, LAST_SUCCESS);
        PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), SnapshotInvocationRecord::parse, LAST_FAILURE);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), NEXT_EXECUTION_MILLIS);
    }

    public static SnapshotLifecyclePolicyMetadata parse(XContentParser parser, String id) {
        return PARSER.apply(parser, id);
    }

    public SnapshotLifecyclePolicyMetadata(SnapshotLifecyclePolicy policy, long version, long modifiedDate,
                                           SnapshotInvocationRecord lastSuccess, SnapshotInvocationRecord lastFailure,
                                           long nextExecution) {
        this.policy = policy;
        this.version = version;
        this.modifiedDate = modifiedDate;
        this.lastSuccess = lastSuccess;
        this.lastFailure = lastFailure;
        this.nextExecution = nextExecution;
    }

    public SnapshotLifecyclePolicy getPolicy() {
        return policy;
    }

    public String getName() {
        return policy.getName();
    }

    public long getVersion() {
        return version;
    }

    public long getModifiedDate() {
        return modifiedDate;
    }

    public SnapshotInvocationRecord getLastSuccess() {
        return lastSuccess;
    }

    public SnapshotInvocationRecord getLastFailure() {
        return lastFailure;
    }

    public long getNextExecution() {
        return this.nextExecution;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        builder.field(POLICY.getPreferredName(), policy);
        builder.field(VERSION.getPreferredName(), version);
        builder.timeField(MODIFIED_DATE_MILLIS.getPreferredName(), MODIFIED_DATE.getPreferredName(), modifiedDate);
        if (Objects.nonNull(lastSuccess)) {
            builder.field(LAST_SUCCESS.getPreferredName(), lastSuccess);
        }
        if (Objects.nonNull(lastFailure)) {
            builder.field(LAST_FAILURE.getPreferredName(), lastFailure);
        }
        builder.timeField(NEXT_EXECUTION_MILLIS.getPreferredName(), NEXT_EXECUTION.getPreferredName(), nextExecution);
        builder.endObject();
        return builder;
    }

    @Override
    public int hashCode() {
        return Objects.hash(policy, version, modifiedDate, lastSuccess, lastFailure, nextExecution);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        SnapshotLifecyclePolicyMetadata other = (SnapshotLifecyclePolicyMetadata) obj;
        return Objects.equals(policy, other.policy) &&
            Objects.equals(version, other.version) &&
            Objects.equals(modifiedDate, other.modifiedDate) &&
            Objects.equals(lastSuccess, other.lastSuccess) &&
            Objects.equals(lastFailure, other.lastFailure) &&
            Objects.equals(nextExecution, other.nextExecution);
    }

}
@@ -22,6 +22,9 @@ package org.elasticsearch.client.documentation;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.LatchedActionListener;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.action.admin.indices.alias.Alias;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.RequestOptions;

@@ -51,6 +54,15 @@ import org.elasticsearch.client.indexlifecycle.ShrinkAction;
import org.elasticsearch.client.indexlifecycle.StartILMRequest;
import org.elasticsearch.client.indexlifecycle.StopILMRequest;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.snapshotlifecycle.DeleteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.PutSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.SnapshotInvocationRecord;
import org.elasticsearch.client.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.client.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.ImmutableOpenMap;

@@ -60,6 +72,9 @@ import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.repositories.fs.FsRepository;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotState;
import org.hamcrest.Matchers;

import java.io.IOException;

@@ -68,6 +83,7 @@ import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

@@ -740,6 +756,237 @@ public class ILMDocumentationIT extends ESRestHighLevelClientTestCase {
        assertTrue(latch.await(30L, TimeUnit.SECONDS));
    }

    public void testAddSnapshotLifecyclePolicy() throws Exception {
        RestHighLevelClient client = highLevelClient();

        PutRepositoryRequest repoRequest = new PutRepositoryRequest();

        Settings.Builder settingsBuilder = Settings.builder().put("location", ".");
        repoRequest.settings(settingsBuilder);
        repoRequest.name("my_repository");
        repoRequest.type(FsRepository.TYPE);
        org.elasticsearch.action.support.master.AcknowledgedResponse response =
            client.snapshot().createRepository(repoRequest, RequestOptions.DEFAULT);
        assertTrue(response.isAcknowledged());

        //////// PUT
        // tag::slm-put-snapshot-lifecycle-policy
        Map<String, Object> config = new HashMap<>();
        config.put("indices", Collections.singletonList("idx"));
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "policy_id", "name", "1 2 3 * * ?", "my_repository", config);
        PutSnapshotLifecyclePolicyRequest request =
            new PutSnapshotLifecyclePolicyRequest(policy);
        // end::slm-put-snapshot-lifecycle-policy

        // tag::slm-put-snapshot-lifecycle-policy-execute
        AcknowledgedResponse resp = client.indexLifecycle()
            .putSnapshotLifecyclePolicy(request, RequestOptions.DEFAULT);
        // end::slm-put-snapshot-lifecycle-policy-execute

        // tag::slm-put-snapshot-lifecycle-policy-response
        boolean putAcknowledged = resp.isAcknowledged(); // <1>
        // end::slm-put-snapshot-lifecycle-policy-response
        assertTrue(putAcknowledged);

        // tag::slm-put-snapshot-lifecycle-policy-execute-listener
        ActionListener<AcknowledgedResponse> putListener =
            new ActionListener<AcknowledgedResponse>() {
                @Override
                public void onResponse(AcknowledgedResponse resp) {
                    boolean acknowledged = resp.isAcknowledged(); // <1>
                }

                @Override
                public void onFailure(Exception e) {
                    // <2>
                }
            };
        // end::slm-put-snapshot-lifecycle-policy-execute-listener

        // tag::slm-put-snapshot-lifecycle-policy-execute-async
        client.indexLifecycle().putSnapshotLifecyclePolicyAsync(request,
            RequestOptions.DEFAULT, putListener);
        // end::slm-put-snapshot-lifecycle-policy-execute-async

        //////// GET
        // tag::slm-get-snapshot-lifecycle-policy
        GetSnapshotLifecyclePolicyRequest getAllRequest =
            new GetSnapshotLifecyclePolicyRequest(); // <1>
        GetSnapshotLifecyclePolicyRequest getRequest =
            new GetSnapshotLifecyclePolicyRequest("policy_id"); // <2>
        // end::slm-get-snapshot-lifecycle-policy

        // tag::slm-get-snapshot-lifecycle-policy-execute
        GetSnapshotLifecyclePolicyResponse getResponse =
            client.indexLifecycle()
                .getSnapshotLifecyclePolicy(getRequest,
                    RequestOptions.DEFAULT);
        // end::slm-get-snapshot-lifecycle-policy-execute

        // tag::slm-get-snapshot-lifecycle-policy-execute-listener
        ActionListener<GetSnapshotLifecyclePolicyResponse> getListener =
            new ActionListener<GetSnapshotLifecyclePolicyResponse>() {
                @Override
                public void onResponse(GetSnapshotLifecyclePolicyResponse resp) {
                    Map<String, SnapshotLifecyclePolicyMetadata> policies =
                        resp.getPolicies(); // <1>
                }

                @Override
                public void onFailure(Exception e) {
                    // <2>
                }
            };
        // end::slm-get-snapshot-lifecycle-policy-execute-listener

        // tag::slm-get-snapshot-lifecycle-policy-execute-async
        client.indexLifecycle().getSnapshotLifecyclePolicyAsync(getRequest,
            RequestOptions.DEFAULT, getListener);
        // end::slm-get-snapshot-lifecycle-policy-execute-async

        assertThat(getResponse.getPolicies().size(), equalTo(1));
        // tag::slm-get-snapshot-lifecycle-policy-response
        SnapshotLifecyclePolicyMetadata policyMeta =
            getResponse.getPolicies().get("policy_id"); // <1>
        long policyVersion = policyMeta.getVersion();
        long policyModificationDate = policyMeta.getModifiedDate();
        long nextPolicyExecutionDate = policyMeta.getNextExecution();
        SnapshotInvocationRecord lastSuccess = policyMeta.getLastSuccess();
        SnapshotInvocationRecord lastFailure = policyMeta.getLastFailure();
        SnapshotLifecyclePolicy retrievedPolicy = policyMeta.getPolicy(); // <2>
        String id = retrievedPolicy.getId();
        String snapshotNameFormat = retrievedPolicy.getName();
        String repositoryName = retrievedPolicy.getRepository();
        String schedule = retrievedPolicy.getSchedule();
        Map<String, Object> snapshotConfiguration = retrievedPolicy.getConfig();
        // end::slm-get-snapshot-lifecycle-policy-response

        assertNotNull(policyMeta);
        assertThat(retrievedPolicy, equalTo(policy));
        assertThat(policyVersion, equalTo(1L));

        createIndex("idx", Settings.builder().put("index.number_of_shards", 1).build());

        //////// EXECUTE
        // tag::slm-execute-snapshot-lifecycle-policy
|
||||
ExecuteSnapshotLifecyclePolicyRequest executeRequest =
|
||||
new ExecuteSnapshotLifecyclePolicyRequest("policy_id"); // <1>
|
||||
// end::slm-execute-snapshot-lifecycle-policy
|
||||
|
||||
// tag::slm-execute-snapshot-lifecycle-policy-execute
|
||||
ExecuteSnapshotLifecyclePolicyResponse executeResponse =
|
||||
client.indexLifecycle()
|
||||
.executeSnapshotLifecyclePolicy(executeRequest,
|
||||
RequestOptions.DEFAULT);
|
||||
// end::slm-execute-snapshot-lifecycle-policy-execute
|
||||
|
||||
// tag::slm-execute-snapshot-lifecycle-policy-response
|
||||
final String snapshotName = executeResponse.getSnapshotName(); // <1>
|
||||
// end::slm-execute-snapshot-lifecycle-policy-response
|
||||
|
||||
assertSnapshotExists(client, "my_repository", snapshotName);
|
||||
|
||||
// tag::slm-execute-snapshot-lifecycle-policy-execute-listener
|
||||
ActionListener<ExecuteSnapshotLifecyclePolicyResponse> executeListener =
|
||||
new ActionListener<ExecuteSnapshotLifecyclePolicyResponse>() {
|
||||
@Override
|
||||
public void onResponse(ExecuteSnapshotLifecyclePolicyResponse r) {
|
||||
String snapshotName = r.getSnapshotName(); // <1>
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
// <2>
|
||||
}
|
||||
};
|
||||
// end::slm-execute-snapshot-lifecycle-policy-execute-listener
|
||||
|
||||
// We need a listener that will actually wait for the snapshot to be created
|
||||
CountDownLatch latch = new CountDownLatch(1);
|
||||
executeListener =
|
||||
new ActionListener<ExecuteSnapshotLifecyclePolicyResponse>() {
|
||||
@Override
|
||||
public void onResponse(ExecuteSnapshotLifecyclePolicyResponse r) {
|
||||
try {
|
||||
assertSnapshotExists(client, "my_repository", r.getSnapshotName());
|
||||
} catch (Exception e) {
|
||||
// Ignore
|
||||
} finally {
|
||||
latch.countDown();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
latch.countDown();
|
||||
fail("failed to execute slm execute: " + e);
|
||||
}
|
||||
};
|
||||
|
||||
// tag::slm-execute-snapshot-lifecycle-policy-execute-async
|
||||
client.indexLifecycle()
|
||||
.executeSnapshotLifecyclePolicyAsync(executeRequest,
|
||||
RequestOptions.DEFAULT, executeListener);
|
||||
// end::slm-execute-snapshot-lifecycle-policy-execute-async
|
||||
latch.await(5, TimeUnit.SECONDS);
|
||||
|
||||
//////// DELETE
|
||||
// tag::slm-delete-snapshot-lifecycle-policy
|
||||
DeleteSnapshotLifecyclePolicyRequest deleteRequest =
|
||||
new DeleteSnapshotLifecyclePolicyRequest("policy_id"); // <1>
|
||||
// end::slm-delete-snapshot-lifecycle-policy
|
||||
|
||||
// tag::slm-delete-snapshot-lifecycle-policy-execute
|
||||
AcknowledgedResponse deleteResp = client.indexLifecycle()
|
||||
.deleteSnapshotLifecyclePolicy(deleteRequest, RequestOptions.DEFAULT);
|
||||
// end::slm-delete-snapshot-lifecycle-policy-execute
|
||||
assertTrue(deleteResp.isAcknowledged());
|
||||
|
||||
ActionListener<AcknowledgedResponse> deleteListener = new ActionListener<AcknowledgedResponse>() {
|
||||
@Override
|
||||
public void onResponse(AcknowledgedResponse resp) {
|
||||
// no-op
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
// no-op
|
||||
}
|
||||
};
|
||||
|
||||
// tag::slm-delete-snapshot-lifecycle-policy-execute-async
|
||||
client.indexLifecycle()
|
||||
.deleteSnapshotLifecyclePolicyAsync(deleteRequest,
|
||||
RequestOptions.DEFAULT, deleteListener);
|
||||
// end::slm-delete-snapshot-lifecycle-policy-execute-async
|
||||
|
||||
assertTrue(deleteResp.isAcknowledged());
|
||||
}
|
||||
|
||||
private void assertSnapshotExists(final RestHighLevelClient client, final String repo, final String snapshotName) throws Exception {
|
||||
assertBusy(() -> {
|
||||
GetSnapshotsRequest getSnapshotsRequest = new GetSnapshotsRequest(repo, new String[]{snapshotName});
|
||||
try {
|
||||
final GetSnapshotsResponse snaps = client.snapshot().get(getSnapshotsRequest, RequestOptions.DEFAULT);
|
||||
Optional<SnapshotInfo> info = snaps.getSnapshots().stream().findFirst();
|
||||
if (info.isPresent()) {
|
||||
info.ifPresent(si -> {
|
||||
assertThat(si.snapshotId().getName(), equalTo(snapshotName));
|
||||
assertThat(si.state(), equalTo(SnapshotState.SUCCESS));
|
||||
});
|
||||
} else {
|
||||
fail("unable to find snapshot; " + snapshotName);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
if (e.getMessage().contains("snapshot_missing_exception")) {
|
||||
fail("snapshot does not exist: " + snapshotName);
|
||||
}
|
||||
throw e;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
static Map<String, Object> toMap(Response response) throws IOException {
|
||||
return XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false);
|
||||
}
|
||||
|
|
|
@ -44,6 +44,7 @@ testClusters.integTest {
|
|||
|
||||
// enable regexes in painless so our tests don't complain about example snippets that use them
|
||||
setting 'script.painless.regex.enabled', 'true'
|
||||
setting 'path.repo', "${buildDir}/cluster/shared/repo"
|
||||
Closure configFile = {
|
||||
extraConfigFile it, file("src/test/cluster/config/$it")
|
||||
}
|
||||
|
@ -1185,3 +1186,13 @@ buildRestTests.setups['logdata_job'] = buildRestTests.setups['setup_logdata'] +
|
|||
}
|
||||
}
|
||||
'''
|
||||
// Used by snapshot lifecycle management docs
|
||||
buildRestTests.setups['setup-repository'] = '''
|
||||
- do:
|
||||
snapshot.create_repository:
|
||||
repository: my_repository
|
||||
body:
|
||||
type: fs
|
||||
settings:
|
||||
location: buildDir/cluster/shared/repo
|
||||
'''
|
||||
|
|
|
@ -0,0 +1,36 @@
|
|||
--
|
||||
:api: slm-delete-snapshot-lifecycle-policy
|
||||
:request: DeleteSnapshotLifecyclePolicyRequest
|
||||
:response: AcknowledgedResponse
|
||||
--
|
||||
|
||||
[id="{upid}-{api}"]
|
||||
=== Delete Snapshot Lifecycle Policy API
|
||||
|
||||
|
||||
[id="{upid}-{api}-request"]
|
||||
==== Request
|
||||
|
||||
The Delete Snapshot Lifecycle Policy API allows you to delete a Snapshot Lifecycle Management Policy
|
||||
from the cluster.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-request]
|
||||
--------------------------------------------------
|
||||
<1> The policy with the id `policy_id` will be deleted.
|
||||
|
||||
[id="{upid}-{api}-response"]
|
||||
==== Response
|
||||
|
||||
The returned +{response}+ indicates if the delete snapshot lifecycle policy request was received.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-response]
|
||||
--------------------------------------------------
|
||||
<1> Whether or not the delete snapshot lifecycle policy request was acknowledged.
|
||||
|
||||
include::../execution.asciidoc[]
|
||||
|
||||
|
|
@ -0,0 +1,36 @@
|
|||
--
|
||||
:api: slm-execute-snapshot-lifecycle-policy
|
||||
:request: ExecuteSnapshotLifecyclePolicyRequest
|
||||
:response: ExecuteSnapshotLifecyclePolicyResponse
|
||||
--
|
||||
|
||||
[id="{upid}-{api}"]
|
||||
=== Execute Snapshot Lifecycle Policy API
|
||||
|
||||
|
||||
[id="{upid}-{api}-request"]
|
||||
==== Request
|
||||
|
||||
The Execute Snapshot Lifecycle Policy API allows you to execute a Snapshot Lifecycle Management
|
||||
Policy, taking a snapshot immediately.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-request]
|
||||
--------------------------------------------------
|
||||
<1> The policy id to execute
|
||||
|
||||
[id="{upid}-{api}-response"]
|
||||
==== Response
|
||||
|
||||
The returned +{response}+ contains the name of the snapshot that was created.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-response]
|
||||
--------------------------------------------------
|
||||
<1> The created snapshot name
|
||||
|
||||
include::../execution.asciidoc[]
|
||||
|
||||
|
|
@ -0,0 +1,39 @@
|
|||
--
|
||||
:api: slm-get-snapshot-lifecycle-policy
|
||||
:request: GetSnapshotLifecyclePolicyRequest
|
||||
:response: GetSnapshotLifecyclePolicyResponse
|
||||
--
|
||||
|
||||
[id="{upid}-{api}"]
|
||||
=== Get Snapshot Lifecycle Policy API
|
||||
|
||||
|
||||
[id="{upid}-{api}-request"]
|
||||
==== Request
|
||||
|
||||
The Get Snapshot Lifecycle Policy API allows you to retrieve the definition of a Snapshot Lifecycle
|
||||
Management Policy from the cluster.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-request]
|
||||
--------------------------------------------------
|
||||
<1> Gets all policies.
|
||||
<2> Gets the policy with the id `policy_id`.
|
||||
|
||||
[id="{upid}-{api}-response"]
|
||||
==== Response
|
||||
|
||||
The returned +{response}+ contains a map of `SnapshotLifecyclePolicyMetadata`, keyed by the id
of the policy, containing metadata about each policy as well as the policy definition itself.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-response]
|
||||
--------------------------------------------------
|
||||
<1> The retrieved policies, keyed by policy id.
|
||||
<2> The policy definition itself.
|
||||
|
||||
include::../execution.asciidoc[]
|
||||
|
||||
|
|
@ -0,0 +1,35 @@
|
|||
--
|
||||
:api: slm-put-snapshot-lifecycle-policy
|
||||
:request: PutSnapshotLifecyclePolicyRequest
|
||||
:response: AcknowledgedResponse
|
||||
--
|
||||
|
||||
[id="{upid}-{api}"]
|
||||
=== Put Snapshot Lifecycle Policy API
|
||||
|
||||
|
||||
[id="{upid}-{api}-request"]
|
||||
==== Request
|
||||
|
||||
The Put Snapshot Lifecycle Policy API allows you to add or update the definition of a Snapshot
Lifecycle Management Policy in the cluster.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-request]
|
||||
--------------------------------------------------
|
||||
|
||||
[id="{upid}-{api}-response"]
|
||||
==== Response
|
||||
|
||||
The returned +{response}+ indicates if the put snapshot lifecycle policy request was received.
|
||||
|
||||
["source","java",subs="attributes,callouts,macros"]
|
||||
--------------------------------------------------
|
||||
include-tagged::{doc-tests-file}[{api}-response]
|
||||
--------------------------------------------------
|
||||
<1> Whether or not the put snapshot lifecycle policy was acknowledged.
|
||||
|
||||
include::../execution.asciidoc[]
|
||||
|
||||
|
|
@ -0,0 +1,350 @@
|
|||
[role="xpack"]
|
||||
[testenv="basic"]
|
||||
[[snapshot-lifecycle-management-api]]
|
||||
== Snapshot Lifecycle Management API
|
||||
|
||||
The Snapshot Lifecycle Management APIs are used to manage policies for the time
|
||||
and frequency of automatic snapshots. Snapshot Lifecycle Management is related
|
||||
to <<index-lifecycle-management,Index Lifecycle Management>>, however, instead
|
||||
of managing a lifecycle of actions that are performed on a single index, SLM
|
||||
allows configuring policies spanning multiple indices.
|
||||
|
||||
SLM policy management is split into three CRUD APIs: one to put or update
policies, one to retrieve policies, and one to delete unwanted policies, as
well as a separate API for immediately invoking a snapshot based on a policy.
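
All of these endpoints live under the `_slm` namespace; for orientation, they
are summarized below:

[source,js]
--------------------------------------------------
PUT    /_slm/policy/<policy-id>
GET    /_slm/policy/<policy-id>
DELETE /_slm/policy/<policy-id>
PUT    /_slm/policy/<policy-id>/_execute
--------------------------------------------------
// NOTCONSOLE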
|
||||
|
||||
Since SLM falls under the same category as ILM, it is stopped and started by
|
||||
using the <<start-stop-ilm,start and stop>> ILM APIs.
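
For example, SLM scheduling can be paused and later resumed together with ILM
using the existing ILM endpoints:

[source,js]
--------------------------------------------------
POST /_ilm/stop

POST /_ilm/start
--------------------------------------------------
// CONSOLE
// TEST[skip:stopping ILM would interfere with other docs tests]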
|
||||
|
||||
[[slm-api-put]]
|
||||
=== Put Snapshot Lifecycle Policy API
|
||||
|
||||
Creates or updates a snapshot policy. If the policy already exists, the version
|
||||
is incremented. Only the latest version of a policy is stored.
|
||||
|
||||
When a policy is created it is immediately scheduled based on its schedule.
When a policy is updated, any schedule changes are applied immediately.
|
||||
|
||||
==== Path Parameters
|
||||
|
||||
`policy_id` (required)::
|
||||
(string) Identifier (id) for the policy.
|
||||
|
||||
==== Request Parameters
|
||||
|
||||
include::{docdir}/rest-api/timeoutparms.asciidoc[]
|
||||
|
||||
==== Authorization
|
||||
|
||||
You must have the `manage_slm` cluster privilege to use this API. You must also
|
||||
have the `manage` index privilege on all indices being managed by `policy`. All
|
||||
operations executed by {slm} for a policy are executed as the user that put the
|
||||
latest version of a policy. For more information, see
|
||||
{stack-ov}/security-privileges.html[Security Privileges].
|
||||
|
||||
==== Example
|
||||
|
||||
The following creates a snapshot lifecycle policy with an id of
|
||||
`daily-snapshots`:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/daily-snapshots
|
||||
{
|
||||
"schedule": "0 30 1 * * ?", <1>
|
||||
"name": "<daily-snap-{now/d}>", <2>
|
||||
"repository": "my_repository", <3>
|
||||
"config": { <4>
|
||||
"indices": ["data-*", "important"], <5>
|
||||
"ignore_unavailable": false,
|
||||
"include_global_state": false
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[setup:setup-repository]
|
||||
<1> When the snapshot should be taken, in this case, 1:30am daily
|
||||
<2> The name each snapshot should be given
|
||||
<3> Which repository to take the snapshot in
|
||||
<4> Any extra snapshot configuration
|
||||
<5> Which indices the snapshot should contain
|
||||
|
||||
The top-level keys that the policy supports are described below:
|
||||
|
||||
|==================
|
||||
| Key | Description
|
||||
|
||||
| `schedule` | A periodic or absolute time schedule. Supports all values
|
||||
supported by the cron scheduler:
|
||||
{xpack-ref}/trigger-schedule.html#schedule-cron[Cron scheduler configuration]
|
||||
|
||||
| `name` | A name automatically given to each snapshot performed by this policy.
|
||||
Supports the same <<date-math-index-names,date math>> supported in index
|
||||
names. A UUID is automatically appended to the end of the name to prevent
|
||||
conflicting snapshot names.
|
||||
|
||||
| `repository` | The snapshot repository that will contain snapshots created by
|
||||
this policy. The repository must exist prior to the policy's creation and can
|
||||
be created with the <<modules-snapshots,snapshot repository API>>.
|
||||
|
||||
| `config` | Configuration for each snapshot that will be created by this
|
||||
policy. Any configuration is included with <<modules-snapshots,create snapshot
|
||||
requests>> issued by this policy.
|
||||
|==================
|
||||
|
||||
To update an existing policy, simply use the put snapshot lifecycle policy API
|
||||
with the same policy id as an existing policy.
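
For example, to change only the schedule of the `daily-snapshots` policy created
above, re-send the full policy with the new schedule (a sketch; the remaining
fields are unchanged from the earlier request and must be included again because
the put API replaces the whole policy):

[source,js]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
  "schedule": "0 45 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["data-*", "important"],
    "ignore_unavailable": false,
    "include_global_state": false
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:illustrative update, later examples rely on the original schedule]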
|
||||
|
||||
[[slm-api-get]]
|
||||
=== Get Snapshot Lifecycle Policy API
|
||||
|
||||
Once a policy is in place, you can retrieve one or more of the policies using
the get snapshot lifecycle policy API. The response also includes information
about the latest successful and failed snapshot invocations for each policy.
|
||||
|
||||
==== Path Parameters
|
||||
|
||||
`policy_ids` (optional)::
|
||||
(string) Comma-separated ids of policies to retrieve.
|
||||
|
||||
==== Examples
|
||||
|
||||
To retrieve a policy, perform a `GET` with the policy's id:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
GET /_slm/policy/daily-snapshots?human
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
||||
|
||||
The output looks similar to the following:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"daily-snapshots" : {
|
||||
"version": 1, <1>
|
||||
"modified_date": "2019-04-23T01:30:00.000Z", <2>
|
||||
"modified_date_millis": 1556048137314,
|
||||
"policy" : {
|
||||
"schedule": "0 30 1 * * ?",
|
||||
"name": "<daily-snap-{now/d}>",
|
||||
"repository": "my_repository",
|
||||
"config": {
|
||||
"indices": ["data-*", "important"],
|
||||
"ignore_unavailable": false,
|
||||
"include_global_state": false
|
||||
}
|
||||
},
|
||||
"next_execution": "2019-04-24T01:30:00.000Z", <3>
|
||||
"next_execution_millis": 1556048160000
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[s/"modified_date": "2019-04-23T01:30:00.000Z"/"modified_date": $body.daily-snapshots.modified_date/ s/"modified_date_millis": 1556048137314/"modified_date_millis": $body.daily-snapshots.modified_date_millis/ s/"next_execution": "2019-04-24T01:30:00.000Z"/"next_execution": $body.daily-snapshots.next_execution/ s/"next_execution_millis": 1556048160000/"next_execution_millis": $body.daily-snapshots.next_execution_millis/]
|
||||
<1> The version of the snapshot policy; only the latest version is stored, and it is incremented when the policy is updated
|
||||
<2> The last time this policy was modified
|
||||
<3> The next time this policy will be executed
|
||||
|
||||
Or, to retrieve all policies:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
GET /_slm/policy
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
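
Multiple policies can also be retrieved in a single request by passing a
comma-separated list of ids (here `other-snapshots` is a hypothetical second
policy id used only for illustration):

[source,js]
--------------------------------------------------
GET /_slm/policy/daily-snapshots,other-snapshots
--------------------------------------------------
// CONSOLE
// TEST[skip:other-snapshots is a hypothetical policy id]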
|
||||
|
||||
[[slm-api-execute]]
|
||||
=== Execute Snapshot Lifecycle Policy API
|
||||
|
||||
Sometimes it can be useful to immediately execute a snapshot based on a policy,
|
||||
perhaps before an upgrade or before performing other maintenance on indices. The
|
||||
execute snapshot policy API allows you to perform a snapshot immediately without
|
||||
waiting for a policy's scheduled invocation.
|
||||
|
||||
==== Path Parameters
|
||||
|
||||
`policy_id` (required)::
|
||||
(string) Id of the policy to execute
|
||||
|
||||
==== Example
|
||||
|
||||
To take an immediate snapshot using a policy, use the following
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/daily-snapshots/_execute
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:we can't easily handle snapshots from docs tests]
|
||||
|
||||
This API will immediately return with the generated snapshot name:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"snapshot_name": "daily-snap-2019.04.24-gwrqoo2xtea3q57vvg0uea"
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[skip:we can't handle snapshots from docs tests]
|
||||
|
||||
The snapshot will be taken in the background; you can use the
<<modules-snapshots,snapshot APIs>> to monitor the status of the snapshot.
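
For example, a minimal check that lists the snapshots in the repository used by
this policy, using the standard get snapshots API:

[source,js]
--------------------------------------------------
GET /_snapshot/my_repository/_all
--------------------------------------------------
// CONSOLE
// TEST[skip:we can't easily handle snapshots from docs tests]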
|
||||
|
||||
Once a snapshot has been kicked off, you can see the latest successful or failed
|
||||
snapshot using the get snapshot lifecycle policy API:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
GET /_slm/policy/daily-snapshots?human
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:we already tested get policy above, the last_failure may not be present though]
|
||||
|
||||
Which, in this case, shows an error because the index did not exist:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"daily-snapshots" : {
|
||||
"version": 1,
|
||||
"modified_date": "2019-04-23T01:30:00.000Z",
|
||||
"modified_date_millis": 1556048137314,
|
||||
"policy" : {
|
||||
"schedule": "0 30 1 * * ?",
|
||||
"name": "<daily-snap-{now/d}>",
|
||||
"repository": "my_repository",
|
||||
"config": {
|
||||
"indices": ["data-*", "important"],
|
||||
"ignore_unavailable": false,
|
||||
"include_global_state": false
|
||||
}
|
||||
},
|
||||
"last_failure": { <1>
|
||||
"snapshot_name": "daily-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
|
||||
"time_string": "2019-04-02T01:30:00.000Z",
|
||||
"time": 1556042030000,
|
||||
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
|
||||
},
|
||||
"next_execution": "2019-04-24T01:30:00.000Z",
|
||||
"next_execution_millis": 1556048160000
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[skip:the presence of last_failure is asynchronous and will be present for users, but is untestable]
|
||||
<1> The last snapshot that this policy failed to initiate, along with the details of its failure
|
||||
|
||||
In this case, it failed because the "important" index did not exist and the
`ignore_unavailable` setting was set to `false`.
|
||||
|
||||
Updating the policy to change the `ignore_unavailable` setting is done using the
|
||||
same put snapshot lifecycle policy API:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/daily-snapshots
|
||||
{
|
||||
"schedule": "0 30 1 * * ?",
|
||||
"name": "<daily-snap-{now/d}>",
|
||||
"repository": "my_repository",
|
||||
"config": {
|
||||
"indices": ["data-*", "important"],
|
||||
"ignore_unavailable": true,
|
||||
"include_global_state": false
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
||||
|
||||
Another snapshot can immediately be executed to ensure the new policy works:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/daily-snapshots/_execute
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:we can't handle snapshots in docs tests]
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"snapshot_name": "daily-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a"
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[skip:we can't handle snapshots in docs tests]
|
||||
|
||||
Now retrieving the policy shows that it has been successfully executed:
|
||||
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
GET /_slm/policy/daily-snapshots?human
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:we already tested this above and the output may not be available yet]
|
||||
|
||||
Which now includes the successful snapshot information:
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"daily-snapshots" : {
|
||||
"version": 2, <1>
|
||||
"modified_date": "2019-04-23T01:30:00.000Z",
|
||||
"modified_date_millis": 1556048137314,
|
||||
"policy" : {
|
||||
"schedule": "0 30 1 * * ?",
|
||||
"name": "<daily-snap-{now/d}>",
|
||||
"repository": "my_repository",
|
||||
"config": {
|
||||
"indices": ["data-*", "important"],
|
||||
"ignore_unavailable": true,
|
||||
"include_global_state": false
|
||||
}
|
||||
},
|
||||
"last_success": { <2>
|
||||
"snapshot_name": "daily-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a",
|
||||
"time_string": "2019-04-24T16:43:49.316Z",
|
||||
"time": 1556124229316
|
||||
},
|
||||
"last_failure": {
|
||||
"snapshot_name": "daily-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
|
||||
"time_string": "2019-04-02T01:30:00.000Z",
|
||||
"time": 1556042030000,
|
||||
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
|
||||
},
|
||||
"next_execution": "2019-04-24T01:30:00.000Z",
|
||||
"next_execution_millis": 1556048160000
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]
|
||||
<1> The policy's version has been incremented because it was updated
|
||||
<2> Information about the last successfully initiated snapshot
|
||||
|
||||
It is a good idea to test policies using the execute API to ensure they work.
|
||||
|
||||
[[slm-api-delete]]
|
||||
=== Delete Snapshot Lifecycle Policy API
|
||||
|
||||
A policy can be deleted by issuing a delete request with the policy id. Note
|
||||
that this prevents any future snapshots from being taken, but does not cancel
|
||||
any currently ongoing snapshots or remove any previously taken snapshots.
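
Snapshots that the policy has already taken can still be removed with the regular
snapshot APIs if desired, for example (using the example snapshot name generated
earlier in this document):

[source,js]
--------------------------------------------------
DELETE /_snapshot/my_repository/daily-snap-2019.04.24-gwrqoo2xtea3q57vvg0uea
--------------------------------------------------
// CONSOLE
// TEST[skip:this snapshot name is only an example]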
|
||||
|
||||
==== Path Parameters
|
||||
|
||||
`policy_id` (required)::
|
||||
(string) Id of the policy to remove.
|
||||
|
||||
==== Example
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
DELETE /_slm/policy/daily-snapshots
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
|
@ -0,0 +1,215 @@
|
|||
[role="xpack"]
|
||||
[testenv="basic"]
|
||||
[[getting-started-snapshot-lifecycle-management]]
|
||||
== Getting started with snapshot lifecycle management
|
||||
|
||||
Let's get started with snapshot lifecycle management (SLM) by working through a
|
||||
hands-on scenario. The goal of this example is to automatically back up {es}
|
||||
indices using <<modules-snapshots,snapshots>> every day at a particular
|
||||
time.
|
||||
|
||||
[float]
|
||||
[[slm-and-security]]
|
||||
=== Security and SLM
|
||||
Before starting, it's important to understand the privileges that are needed
|
||||
when configuring SLM if you are using the security plugin. There are two
|
||||
built-in cluster privileges that can be used to assist: `manage_slm` and
|
||||
`read_slm`. It's also good to note that the `create_snapshot` permission
|
||||
allows taking snapshots even for indices the role may not have access to.
|
||||
|
||||
An example of configuring an administrator role for SLM follows:
|
||||
|
||||
[source,js]
|
||||
-----------------------------------
|
||||
POST /_security/role/slm-admin
|
||||
{
|
||||
"cluster": ["manage_slm", "create_snapshot"],
|
||||
"indices": [
|
||||
{
|
||||
"names": [".slm-history-*"],
|
||||
"privileges": ["all"]
|
||||
}
|
||||
]
|
||||
}
|
||||
-----------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:security is not enabled here]
|
||||
|
||||
Or, for a read-only role that can retrieve policies (but not update, execute, or
|
||||
delete them), as well as only view the history index:
|
||||
|
||||
[source,js]
|
||||
-----------------------------------
|
||||
POST /_security/role/slm-read-only
|
||||
{
|
||||
"cluster": ["read_slm"],
|
||||
"indices": [
|
||||
{
|
||||
"names": [".slm-history-*"],
|
||||
"privileges": ["read"]
|
||||
}
|
||||
]
|
||||
}
|
||||
-----------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:security is not enabled here]
|
||||
|
||||
[float]
|
||||
[[slm-gs-create-policy]]
|
||||
=== Setting up a repository
|
||||
|
||||
Before we can set up an SLM policy, we'll need to set up a
|
||||
<<snapshots-repositories,snapshot repository>> where the snapshots will be
|
||||
stored. Repositories can use {plugins}/repository.html[many different backends],
|
||||
including cloud storage providers. You'll probably want to use one of these in
|
||||
production, but for this example we'll use a shared file system repository:
|
||||
|
||||
[source,js]
|
||||
-----------------------------------
|
||||
PUT /_snapshot/my_repository
|
||||
{
|
||||
"type": "fs",
|
||||
"settings": {
|
||||
"location": "my_backup_location"
|
||||
}
|
||||
}
|
||||
-----------------------------------
|
||||
// CONSOLE
|
||||
// TEST
|
||||
|
||||
[float]
|
||||
=== Setting up a policy
|
||||
|
||||
Now that we have a repository in place, we can create a policy to automatically
|
||||
take snapshots. Policies are written in JSON and will define when to take
|
||||
snapshots, what the snapshots should be named, and which indices should be
|
||||
included, among other things. We'll use the <<slm-api-put,Put Policy>> API
|
||||
to create the policy.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/nightly-snapshots
|
||||
{
|
||||
"schedule": "0 30 1 * * ?", <1>
|
||||
"name": "<nightly-snap-{now/d}>", <2>
|
||||
"repository": "my_repository", <3>
|
||||
"config": { <4>
|
||||
"indices": ["*"] <5>
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
||||
<1> when the snapshot should be taken, using
|
||||
{xpack-ref}/trigger-schedule.html#schedule-cron[Cron syntax], in this
|
||||
case at 1:30AM each day
|
||||
<2> the name each snapshot should be given, using
|
||||
<<date-math-index-names,date math>> to include the current date in the name
|
||||
of the snapshot
|
||||
<3> the repository the snapshot should be stored in
|
||||
<4> the configuration to be used for the snapshot requests (see below)
|
||||
<5> which indices should be included in the snapshot, in this case, every index
|
||||
|
||||
This policy will take a snapshot of every index each day at 1:30AM UTC.
|
||||
Snapshots are incremental, allowing frequent snapshots to be stored efficiently,
|
||||
so don't be afraid to configure a policy to take frequent snapshots.
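
For instance, a hypothetical variant of the policy that snapshots every 30
minutes only needs a different `schedule` (this extra policy is not used in the
rest of this guide):

[source,js]
--------------------------------------------------
PUT /_slm/policy/half-hourly-snapshots
{
  "schedule": "0 */30 * * * ?",
  "name": "<half-hourly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"]
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:illustrative policy only]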
|
||||
|
||||
In addition to specifying the indices that should be included in the snapshot,
|
||||
the `config` field can be used to customize other aspects of the snapshot. You
|
||||
can use any option allowed in <<snapshots-take-snapshot,a regular snapshot
|
||||
request>>, so you can specify, for example, whether the snapshot should fail in
|
||||
special cases, such as if one of the specified indices cannot be found.
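
For example, the `nightly-snapshots` policy above could be updated to skip any
missing indices instead of failing, using the standard `ignore_unavailable`
snapshot option (a sketch; only the `config` section changes):

[source,js]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"],
    "ignore_unavailable": true
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:illustrative update only]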
|
||||
|
||||
[float]
|
||||
=== Making sure the policy works
|
||||
|
||||
While snapshots taken by SLM policies can be viewed through the standard snapshot
|
||||
API, SLM also keeps track of policy successes and failures in a way that makes it
easier to confirm that the policy is working. Once a policy has executed at
|
||||
least once, when you view the policy using the <<slm-api-get,Get Policy API>>,
|
||||
some metadata will be returned indicating whether the snapshot was successfully
|
||||
initiated or not.
|
||||
|
||||
Instead of waiting for our policy to run at 1:30AM, let's tell SLM to take a
snapshot right now using the configuration from our policy.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
PUT /_slm/policy/nightly-snapshots/_execute
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[skip:we can't easily handle snapshots from docs tests]
|
||||
|
||||
This request will kick off a snapshot for our policy right now, regardless of
|
||||
the schedule in the policy. This is useful for taking snapshots before making
|
||||
a configuration change, upgrading, or, for our purposes, making sure our policy
|
||||
is going to work successfully. The policy will continue to run on its configured
|
||||
schedule after this execution of the policy.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
GET /_slm/policy/nightly-snapshots?human
|
||||
--------------------------------------------------
|
||||
// CONSOLE
|
||||
// TEST[continued]
|
||||
|
||||
This request will return a response that includes the policy, information about
the last time the policy succeeded and failed, and the next time the policy will
be executed.
|
||||
|
||||
[source,js]
|
||||
--------------------------------------------------
|
||||
{
|
||||
"nightly-snapshots" : {
|
||||
"version": 1,
|
||||
"modified_date": "2019-04-23T01:30:00.000Z",
|
||||
"modified_date_millis": 1556048137314,
|
||||
"policy" : {
|
||||
"schedule": "0 30 1 * * ?",
|
||||
"name": "<nightly-snap-{now/d}>",
|
||||
"repository": "my_repository",
|
||||
"config": {
|
||||
"indices": ["*"],
|
||||
}
|
||||
},
|
||||
"last_success": { <1>
|
||||
"snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2>
|
||||
"time_string": "2019-04-24T16:43:49.316Z",
|
||||
"time": 1556124229316
|
||||
},
|
||||
"last_failure": { <3>
|
||||
"snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
|
||||
"time_string": "2019-04-02T01:30:00.000Z",
|
||||
"time": 1556042030000,
|
||||
"details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
|
||||
},
|
||||
"next_execution": "2019-04-24T01:30:00.000Z", <4>
|
||||
"next_execution_millis": 1556048160000
|
||||
}
|
||||
}
|
||||
--------------------------------------------------
|
||||
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]
|
||||
<1> information about the last time the policy successfully initiated a snapshot
|
||||
<2> the name of the snapshot that was successfully initiated
|
||||
<3> information about the last time the policy failed to initiate a snapshot
|
||||
<4> the next time the policy will execute
|
||||
|
||||
NOTE: This metadata only indicates whether the request to initiate the snapshot was
|
||||
made successfully or not - after the snapshot has been successfully started, it
|
||||
is possible for the snapshot to fail if, for example, the connection to a remote
|
||||
repository is lost while copying files.
|
||||
|
||||
If you're following along, the returned SLM policy shouldn't have a `last_failure`
|
||||
field - it's included above only as an example. You should, however, see a
|
||||
`last_success` field and a snapshot name. If you do, you've successfully taken
|
||||
your first snapshot using SLM!
|
||||
|
||||
While only the most recent success and failure are available through the Get Policy
|
||||
API, all policy executions are recorded to a history index, which may be queried
|
||||
by searching the index pattern `.slm-history*`.
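
For example, a minimal query over that history (the exact document structure of
the history index is not described here):

[source,js]
--------------------------------------------------
GET /.slm-history*/_search
{
  "query": {
    "match_all": {}
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:the history index may not exist yet]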
|
||||
|
||||
That's it! We have our first SLM policy set up to periodically take snapshots
|
||||
so that our backups are always up to date. You can read more details in the
|
||||
<<snapshot-lifecycle-management-api,SLM API documentation>> and the
|
||||
<<modules-snapshots,general snapshot documentation>>.
|
|
@ -47,6 +47,16 @@ to a single shard.
|
|||
hardware.
|
||||
. Delete the index once the required 30 day retention period is reached.
|
||||
|
||||
*Snapshot Lifecycle Management*
|
||||
|
||||
ILM itself allows managing indices; however, managing snapshots for a set of
indices is outside the scope of an index-level policy. Instead, there are
|
||||
separate APIs for managing snapshot lifecycles. Please see the
|
||||
<<snapshot-lifecycle-management-api,Snapshot Lifecycle Management>>
|
||||
documentation for information about configuring snapshots.
|
||||
|
||||
See <<getting-started-snapshot-lifecycle-management,getting started with SLM>>.
|
||||
|
||||
[IMPORTANT]
|
||||
===========================
|
||||
{ilm} does not support mixed-version cluster usage. Although it
|
||||
|
@ -73,3 +83,5 @@ include::error-handling.asciidoc[]
|
|||
include::ilm-and-snapshots.asciidoc[]
|
||||
|
||||
include::start-stop-ilm.asciidoc[]
|
||||
|
||||
include::getting-started-slm.asciidoc[]
|
||||
|
|
|
@ -10,6 +10,10 @@ maybe there are scheduled maintenance windows when cluster topology
|
|||
changes are desired that may impact running ILM actions. For this reason,
|
||||
ILM has two ways to disable operations.
|
||||
|
||||
When ILM is stopped, snapshot lifecycle management operations are also stopped;
this means that no scheduled snapshots are created (currently ongoing snapshots
are unaffected).
|
||||
|
||||
Normally, ILM runs by default.
To see the current operating status of ILM, use the <<ilm-get-status,Get Status API>>.
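
For example:

[source,js]
--------------------------------------------------
GET /_ilm/status
--------------------------------------------------
// CONSOLE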
|
||||
|
|
|
@ -70,6 +70,7 @@ recommend testing the reindex from remote process with a subset of your data to
|
|||
understand the time requirements before proceeding.
|
||||
|
||||
[float]
|
||||
[[snapshots-repositories]]
|
||||
=== Repositories
|
||||
|
||||
You must register a snapshot repository before you can perform snapshot and
|
||||
|
@ -329,6 +330,7 @@ POST /_snapshot/my_unverified_backup/_verify
|
|||
It returns a list of nodes where repository was successfully verified or an error message if verification process failed.
|
||||
|
||||
[float]
|
||||
[[snapshots-take-snapshot]]
|
||||
=== Snapshot
|
||||
|
||||
A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
|
||||
|
|
|
@ -626,3 +626,11 @@ See <<ml-get-filter>> and
|
|||
See <<ml-get-calendar-event>> and
|
||||
{stack-ov}/ml-calendars.html[Calendars and scheduled events].
|
||||
|
||||
[role="exclude",id="_repositories"]
|
||||
=== Snapshot repositories
|
||||
See <<snapshots-repositories>>.
|
||||
|
||||
[role="exclude",id="_snapshot"]
|
||||
=== Snapshot
|
||||
See <<snapshots-take-snapshot>>.
|
||||
|
||||
|
|
|
@ -17,6 +17,7 @@ not be included yet.
|
|||
* <<index-apis>>
|
||||
* <<indices-reload-analyzers,Reload Search Analyzers API>>
|
||||
* <<index-lifecycle-management-api,Index lifecycle management APIs>>
|
||||
* <<snapshot-lifecycle-management-api,Snapshot lifecycle management APIs>>
|
||||
* <<licensing-apis,Licensing APIs>>
|
||||
* <<ml-apis,Machine Learning APIs>>
|
||||
* <<security-api,Security APIs>>
|
||||
|
@ -31,6 +32,7 @@ include::{es-repo-dir}/ccr/apis/ccr-apis.asciidoc[]
|
|||
include::{es-repo-dir}/data-frames/apis/index.asciidoc[]
|
||||
include::{es-repo-dir}/graph/explore.asciidoc[]
|
||||
include::{es-repo-dir}/ilm/apis/ilm-api.asciidoc[]
|
||||
include::{es-repo-dir}/ilm/apis/slm-api.asciidoc[]
|
||||
include::{es-repo-dir}/indices/apis/index.asciidoc[]
|
||||
include::{es-repo-dir}/licensing/index.asciidoc[]
|
||||
include::{es-repo-dir}/migration/migration.asciidoc[]
|
||||
|
|
|
@ -36,7 +36,6 @@ import org.elasticsearch.common.xcontent.XContentFactory;
|
|||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
@ -164,7 +163,7 @@ public class CreateSnapshotRequest extends MasterNodeRequest<CreateSnapshotReque
|
|||
return validationException;
|
||||
}
|
||||
|
||||
private static int metadataSize(Map<String, Object> userMetadata) {
|
||||
public static int metadataSize(Map<String, Object> userMetadata) {
|
||||
if (userMetadata == null) {
|
||||
return 0;
|
||||
}
|
||||
|
@ -431,8 +430,8 @@ public class CreateSnapshotRequest extends MasterNodeRequest<CreateSnapshotReque
|
|||
if (name.equals("indices")) {
|
||||
if (entry.getValue() instanceof String) {
|
||||
indices(Strings.splitStringByCommaToArray((String) entry.getValue()));
|
||||
} else if (entry.getValue() instanceof ArrayList) {
|
||||
indices((ArrayList<String>) entry.getValue());
|
||||
} else if (entry.getValue() instanceof List) {
|
||||
indices((List<String>) entry.getValue());
|
||||
} else {
|
||||
throw new IllegalArgumentException("malformed indices section, should be an array of strings");
|
||||
}
|
||||
|
|
|
@ -577,7 +577,7 @@ public class IndexNameExpressionResolver {
|
|||
return false;
|
||||
}
|
||||
|
||||
static final class Context {
|
||||
public static class Context {
|
||||
|
||||
private final ClusterState state;
|
||||
private final IndicesOptions options;
|
||||
|
@ -597,7 +597,8 @@ public class IndexNameExpressionResolver {
|
|||
this(state, options, startTime, false, false);
|
||||
}
|
||||
|
||||
Context(ClusterState state, IndicesOptions options, long startTime, boolean preserveAliases, boolean resolveToWriteIndex) {
|
||||
protected Context(ClusterState state, IndicesOptions options, long startTime,
|
||||
boolean preserveAliases, boolean resolveToWriteIndex) {
|
||||
this.state = state;
|
||||
this.options = options;
|
||||
this.startTime = startTime;
|
||||
|
@ -855,7 +856,7 @@ public class IndexNameExpressionResolver {
|
|||
}
|
||||
}
|
||||
|
||||
static final class DateMathExpressionResolver implements ExpressionResolver {
|
||||
public static final class DateMathExpressionResolver implements ExpressionResolver {
|
||||
|
||||
private static final DateFormatter DEFAULT_DATE_FORMATTER = DateFormatter.forPattern("uuuu.MM.dd");
|
||||
private static final String EXPRESSION_LEFT_BOUND = "<";
|
||||
|
|
|
@ -538,7 +538,7 @@ public abstract class ESRestTestCase extends ESTestCase {
|
|||
* the snapshots intact in the repository.
|
||||
* @return Map of repository name to list of snapshots found in unfinished state
|
||||
*/
|
||||
private Map<String, List<Map<?, ?>>> wipeSnapshots() throws IOException {
|
||||
protected Map<String, List<Map<?, ?>>> wipeSnapshots() throws IOException {
|
||||
final Map<String, List<Map<?, ?>>> inProgressSnapshots = new HashMap<>();
|
||||
for (Map.Entry<String, ?> repo : entityAsMap(adminClient.performRequest(new Request("GET", "/_snapshot/_all"))).entrySet()) {
|
||||
String repoName = repo.getKey();
|
||||
|
|
|
@ -72,6 +72,7 @@ A successful call returns an object with "cluster" and "index" fields.
|
|||
"manage_rollup",
|
||||
"manage_saml",
|
||||
"manage_security",
|
||||
"manage_slm",
|
||||
"manage_token",
|
||||
"manage_watcher",
|
||||
"monitor",
|
||||
|
@ -82,6 +83,7 @@ A successful call returns an object with "cluster" and "index" fields.
|
|||
"none",
|
||||
"read_ccr",
|
||||
"read_ilm",
|
||||
"read_slm",
|
||||
"transport_client"
|
||||
],
|
||||
"index" : [
|
||||
|
|
|
@ -218,6 +218,11 @@ import org.elasticsearch.xpack.core.watcher.transport.actions.get.GetWatchAction
|
|||
import org.elasticsearch.xpack.core.watcher.transport.actions.put.PutWatchAction;
|
||||
import org.elasticsearch.xpack.core.watcher.transport.actions.service.WatcherServiceAction;
|
||||
import org.elasticsearch.xpack.core.watcher.transport.actions.stats.WatcherStatsAction;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.action.DeleteSnapshotLifecycleAction;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.action.ExecuteSnapshotLifecycleAction;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.action.PutSnapshotLifecycleAction;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
|
@ -402,6 +407,11 @@ public class XPackClientPlugin extends Plugin implements ActionPlugin, NetworkPl
|
|||
RemoveIndexLifecyclePolicyAction.INSTANCE,
|
||||
MoveToStepAction.INSTANCE,
|
||||
RetryAction.INSTANCE,
|
||||
PutSnapshotLifecycleAction.INSTANCE,
|
||||
GetSnapshotLifecycleAction.INSTANCE,
|
||||
DeleteSnapshotLifecycleAction.INSTANCE,
|
||||
ExecuteSnapshotLifecycleAction.INSTANCE,
|
||||
// Freeze
|
||||
TransportFreezeIndexAction.FreezeIndexAction.INSTANCE,
|
||||
// Data Frame
|
||||
PutDataFrameTransformAction.INSTANCE,
|
||||
|
@ -498,6 +508,9 @@ public class XPackClientPlugin extends Plugin implements ActionPlugin, NetworkPl
|
|||
new NamedWriteableRegistry.Entry(MetaData.Custom.class, IndexLifecycleMetadata.TYPE, IndexLifecycleMetadata::new),
|
||||
new NamedWriteableRegistry.Entry(NamedDiff.class, IndexLifecycleMetadata.TYPE,
|
||||
IndexLifecycleMetadata.IndexLifecycleMetadataDiff::new),
|
||||
new NamedWriteableRegistry.Entry(MetaData.Custom.class, SnapshotLifecycleMetadata.TYPE, SnapshotLifecycleMetadata::new),
|
||||
new NamedWriteableRegistry.Entry(NamedDiff.class, SnapshotLifecycleMetadata.TYPE,
|
||||
SnapshotLifecycleMetadata.SnapshotLifecycleMetadataDiff::new),
|
||||
// ILM - LifecycleTypes
|
||||
new NamedWriteableRegistry.Entry(LifecycleType.class, TimeseriesLifecycleType.TYPE,
|
||||
(in) -> TimeseriesLifecycleType.INSTANCE),
|
||||
|
|
|
@ -15,6 +15,7 @@ public class LifecycleSettings {
|
|||
public static final String LIFECYCLE_POLL_INTERVAL = "indices.lifecycle.poll_interval";
|
||||
public static final String LIFECYCLE_NAME = "index.lifecycle.name";
|
||||
public static final String LIFECYCLE_INDEXING_COMPLETE = "index.lifecycle.indexing_complete";
|
||||
public static final String SLM_HISTORY_INDEX_ENABLED = "slm.history_index_enabled";
|
||||
|
||||
public static final Setting<TimeValue> LIFECYCLE_POLL_INTERVAL_SETTING = Setting.positiveTimeSetting(LIFECYCLE_POLL_INTERVAL,
|
||||
TimeValue.timeValueMinutes(10), Setting.Property.Dynamic, Setting.Property.NodeScope);
|
||||
|
@ -22,4 +23,7 @@ public class LifecycleSettings {
|
|||
Setting.Property.Dynamic, Setting.Property.IndexScope);
|
||||
public static final Setting<Boolean> LIFECYCLE_INDEXING_COMPLETE_SETTING = Setting.boolSetting(LIFECYCLE_INDEXING_COMPLETE, false,
|
||||
Setting.Property.Dynamic, Setting.Property.IndexScope);
|
||||
|
||||
public static final Setting<Boolean> SLM_HISTORY_INDEX_ENABLED_SETTING = Setting.boolSetting(SLM_HISTORY_INDEX_ENABLED, true,
|
||||
Setting.Property.NodeScope);
|
||||
}
|
||||
|
|
|
@@ -3,15 +3,12 @@
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.rollup.job;

import org.elasticsearch.xpack.core.scheduler.Cron;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
package org.elasticsearch.xpack.core.scheduler;

public class CronSchedule implements SchedulerEngine.Schedule {
    private final Cron cron;

    CronSchedule(String cronExpression) {
    public CronSchedule(String cronExpression) {
        this.cron = new Cron(cronExpression);
    }

@@ -17,9 +17,12 @@ import org.elasticsearch.common.util.concurrent.FutureUtils;

import java.time.Clock;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

@@ -136,6 +139,10 @@ public class SchedulerEngine {
        }
    }

    public Set<String> scheduledJobIds() {
        return Collections.unmodifiableSet(new HashSet<>(schedules.keySet()));
    }

    public void add(Job job) {
        ActiveSchedule schedule = new ActiveSchedule(job.getId(), job.getSchedule(), clock.millis());
        schedules.compute(schedule.name, (name, previousSchedule) -> {

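The two changes above (a public `CronSchedule` constructor and the new `scheduledJobIds()` accessor) exist so that SLM can schedule and inspect its own jobs on the shared scheduler. As a rough, hedged sketch of how the pieces fit together; the job id, settings, and listener below are invented for illustration and the constructor/inner-class signatures are assumed from how they are used elsewhere in this codebase:

```java
import java.time.Clock;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.core.scheduler.CronSchedule;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;

public class SlmSchedulingSketch {
    public static void main(String[] args) {
        // Sketch only: assumes SchedulerEngine(Settings, Clock) and SchedulerEngine.Job(String, Schedule).
        SchedulerEngine engine = new SchedulerEngine(Settings.EMPTY, Clock.systemUTC());

        // A listener fires whenever a scheduled job triggers; SLM reacts to the job name.
        engine.register(event -> System.out.println("triggered: " + event.getJobName()));

        // Schedule a (hypothetical) policy job using the now-public CronSchedule constructor.
        engine.add(new SchedulerEngine.Job("daily-snapshots-policy-job", new CronSchedule("0 30 1 * * ?")));

        // The new accessor lets callers see which jobs are currently scheduled.
        System.out.println(engine.scheduledJobIds());
    }
}
```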
@@ -15,10 +15,13 @@ import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.MapBuilder;
import org.elasticsearch.xpack.core.indexlifecycle.action.GetLifecycleAction;
import org.elasticsearch.xpack.core.indexlifecycle.action.GetStatusAction;
import org.elasticsearch.xpack.core.indexlifecycle.action.StartILMAction;
import org.elasticsearch.xpack.core.indexlifecycle.action.StopILMAction;
import org.elasticsearch.xpack.core.security.action.token.InvalidateTokenAction;
import org.elasticsearch.xpack.core.security.action.token.RefreshTokenAction;
import org.elasticsearch.xpack.core.security.action.user.HasPrivilegesAction;
import org.elasticsearch.xpack.core.security.support.Automatons;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;

import java.util.Collections;
import java.util.HashSet;

@@ -61,6 +64,9 @@ public final class ClusterPrivilege extends Privilege {
    private static final Automaton READ_CCR_AUTOMATON = patterns(ClusterStateAction.NAME, HasPrivilegesAction.NAME);
    private static final Automaton MANAGE_ILM_AUTOMATON = patterns("cluster:admin/ilm/*");
    private static final Automaton READ_ILM_AUTOMATON = patterns(GetLifecycleAction.NAME, GetStatusAction.NAME);
    private static final Automaton MANAGE_SLM_AUTOMATON =
        patterns("cluster:admin/slm/*", StartILMAction.NAME, StopILMAction.NAME, GetStatusAction.NAME);
    private static final Automaton READ_SLM_AUTOMATON = patterns(GetSnapshotLifecycleAction.NAME, GetStatusAction.NAME);

    public static final ClusterPrivilege NONE = new ClusterPrivilege("none", Automatons.EMPTY);
    public static final ClusterPrivilege ALL = new ClusterPrivilege("all", ALL_CLUSTER_AUTOMATON);

@@ -92,6 +98,8 @@ public final class ClusterPrivilege extends Privilege {
    public static final ClusterPrivilege CREATE_SNAPSHOT = new ClusterPrivilege("create_snapshot", CREATE_SNAPSHOT_AUTOMATON);
    public static final ClusterPrivilege MANAGE_ILM = new ClusterPrivilege("manage_ilm", MANAGE_ILM_AUTOMATON);
    public static final ClusterPrivilege READ_ILM = new ClusterPrivilege("read_ilm", READ_ILM_AUTOMATON);
    public static final ClusterPrivilege MANAGE_SLM = new ClusterPrivilege("manage_slm", MANAGE_SLM_AUTOMATON);
    public static final ClusterPrivilege READ_SLM = new ClusterPrivilege("read_slm", READ_SLM_AUTOMATON);

    public static final Predicate<String> ACTION_MATCHER = ClusterPrivilege.ALL.predicate();

@@ -122,6 +130,8 @@ public final class ClusterPrivilege extends Privilege {
            .put("create_snapshot", CREATE_SNAPSHOT)
            .put("manage_ilm", MANAGE_ILM)
            .put("read_ilm", READ_ILM)
            .put("manage_slm", MANAGE_SLM)
            .put("read_slm", READ_SLM)
            .immutableMap();

    private static final ConcurrentHashMap<Set<String>, ClusterPrivilege> CACHE = new ConcurrentHashMap<>();

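With the two new named privileges registered above, a role that administers snapshot lifecycle policies can be granted through the security role API. A minimal, hypothetical example (the role name `slm_admin` is invented for illustration):

```json
POST /_security/role/slm_admin
{
  "cluster": [ "manage_slm" ]
}
```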
@@ -0,0 +1,111 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle;

import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.Diffable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.Objects;

/**
 * Holds information about Snapshots kicked off by Snapshot Lifecycle Management in the cluster state, so that this information can be
 * presented to the user. This class is used for both successes and failures as the structure of the data is very similar.
 */
public class SnapshotInvocationRecord extends AbstractDiffable<SnapshotInvocationRecord>
    implements Writeable, ToXContentObject, Diffable<SnapshotInvocationRecord> {

    static final ParseField SNAPSHOT_NAME = new ParseField("snapshot_name");
    static final ParseField TIMESTAMP = new ParseField("time");
    static final ParseField DETAILS = new ParseField("details");

    private String snapshotName;
    private long timestamp;
    private String details;

    public static final ConstructingObjectParser<SnapshotInvocationRecord, String> PARSER =
        new ConstructingObjectParser<>("snapshot_policy_invocation_record", true,
            a -> new SnapshotInvocationRecord((String) a[0], (long) a[1], (String) a[2]));

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), SNAPSHOT_NAME);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), TIMESTAMP);
        PARSER.declareString(ConstructingObjectParser.optionalConstructorArg(), DETAILS);
    }

    public static SnapshotInvocationRecord parse(XContentParser parser, String name) {
        return PARSER.apply(parser, name);
    }

    public SnapshotInvocationRecord(String snapshotName, long timestamp, String details) {
        this.snapshotName = Objects.requireNonNull(snapshotName, "snapshot name must be provided");
        this.timestamp = timestamp;
        this.details = details;
    }

    public SnapshotInvocationRecord(StreamInput in) throws IOException {
        this.snapshotName = in.readString();
        this.timestamp = in.readVLong();
        this.details = in.readOptionalString();
    }

    public String getSnapshotName() {
        return snapshotName;
    }

    public long getTimestamp() {
        return timestamp;
    }

    public String getDetails() {
        return details;
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(snapshotName);
        out.writeVLong(timestamp);
        out.writeOptionalString(details);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        {
            builder.field(SNAPSHOT_NAME.getPreferredName(), snapshotName);
            builder.timeField(TIMESTAMP.getPreferredName(), "time_string", timestamp);
            if (Objects.nonNull(details)) {
                builder.field(DETAILS.getPreferredName(), details);
            }
        }
        builder.endObject();
        return builder;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SnapshotInvocationRecord that = (SnapshotInvocationRecord) o;
        return getTimestamp() == that.getTimestamp() &&
            Objects.equals(getSnapshotName(), that.getSnapshotName()) &&
            Objects.equals(getDetails(), that.getDetails());
    }

    @Override
    public int hashCode() {
        return Objects.hash(getSnapshotName(), getTimestamp(), getDetails());
    }
}

@@ -0,0 +1,178 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle;

import org.elasticsearch.Version;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.Diff;
import org.elasticsearch.cluster.DiffableUtils;
import org.elasticsearch.cluster.NamedDiff;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.core.XPackPlugin.XPackMetaDataCustom;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;

import java.io.IOException;
import java.util.Collections;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;
import java.util.function.Function;
import java.util.stream.Collectors;

/**
 * Custom cluster state metadata that stores all the snapshot lifecycle
 * policies and their associated metadata
 */
public class SnapshotLifecycleMetadata implements XPackMetaDataCustom {

    public static final String TYPE = "snapshot_lifecycle";
    public static final ParseField OPERATION_MODE_FIELD = new ParseField("operation_mode");
    public static final ParseField POLICIES_FIELD = new ParseField("policies");

    public static final SnapshotLifecycleMetadata EMPTY = new SnapshotLifecycleMetadata(Collections.emptyMap(), OperationMode.RUNNING);

    @SuppressWarnings("unchecked")
    public static final ConstructingObjectParser<SnapshotLifecycleMetadata, Void> PARSER = new ConstructingObjectParser<>(TYPE,
        a -> new SnapshotLifecycleMetadata(
            ((List<SnapshotLifecyclePolicyMetadata>) a[0]).stream()
                .collect(Collectors.toMap(m -> m.getPolicy().getId(), Function.identity())),
            OperationMode.valueOf((String) a[1])));

    static {
        PARSER.declareNamedObjects(ConstructingObjectParser.constructorArg(), (p, c, n) -> SnapshotLifecyclePolicyMetadata.parse(p, n),
            v -> {
                throw new IllegalArgumentException("ordered " + POLICIES_FIELD.getPreferredName() + " are not supported");
            }, POLICIES_FIELD);
    }

    private final Map<String, SnapshotLifecyclePolicyMetadata> snapshotConfigurations;
    private final OperationMode operationMode;

    public SnapshotLifecycleMetadata(Map<String, SnapshotLifecyclePolicyMetadata> snapshotConfigurations, OperationMode operationMode) {
        this.snapshotConfigurations = new HashMap<>(snapshotConfigurations);
        this.operationMode = operationMode;
    }

    public SnapshotLifecycleMetadata(StreamInput in) throws IOException {
        this.snapshotConfigurations = in.readMap(StreamInput::readString, SnapshotLifecyclePolicyMetadata::new);
        this.operationMode = in.readEnum(OperationMode.class);
    }

    public Map<String, SnapshotLifecyclePolicyMetadata> getSnapshotConfigurations() {
        return Collections.unmodifiableMap(this.snapshotConfigurations);
    }

    public OperationMode getOperationMode() {
        return operationMode;
    }

    @Override
    public EnumSet<MetaData.XContentContext> context() {
        return MetaData.ALL_CONTEXTS;
    }

    @Override
    public Diff<MetaData.Custom> diff(MetaData.Custom previousState) {
        return new SnapshotLifecycleMetadataDiff((SnapshotLifecycleMetadata) previousState, this);
    }

    @Override
    public String getWriteableName() {
        return TYPE;
    }

    @Override
    public Version getMinimalSupportedVersion() {
        return Version.V_7_4_0;
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeMap(this.snapshotConfigurations, StreamOutput::writeString, (out1, value) -> value.writeTo(out1));
        out.writeEnum(this.operationMode);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.field(POLICIES_FIELD.getPreferredName(), this.snapshotConfigurations);
        builder.field(OPERATION_MODE_FIELD.getPreferredName(), operationMode);
        return builder;
    }

    @Override
    public String toString() {
        return Strings.toString(this);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.snapshotConfigurations, this.operationMode);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (obj.getClass() != getClass()) {
            return false;
        }
        SnapshotLifecycleMetadata other = (SnapshotLifecycleMetadata) obj;
        return this.snapshotConfigurations.equals(other.snapshotConfigurations) &&
            this.operationMode.equals(other.operationMode);
    }

    public static class SnapshotLifecycleMetadataDiff implements NamedDiff<MetaData.Custom> {

        final Diff<Map<String, SnapshotLifecyclePolicyMetadata>> lifecycles;
        final OperationMode operationMode;

        SnapshotLifecycleMetadataDiff(SnapshotLifecycleMetadata before, SnapshotLifecycleMetadata after) {
            this.lifecycles = DiffableUtils.diff(before.snapshotConfigurations, after.snapshotConfigurations,
                DiffableUtils.getStringKeySerializer());
            this.operationMode = after.operationMode;
        }

        public SnapshotLifecycleMetadataDiff(StreamInput in) throws IOException {
            this.lifecycles = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(),
                SnapshotLifecyclePolicyMetadata::new,
                SnapshotLifecycleMetadataDiff::readLifecyclePolicyDiffFrom);
            this.operationMode = in.readEnum(OperationMode.class);
        }

        @Override
        public MetaData.Custom apply(MetaData.Custom part) {
            TreeMap<String, SnapshotLifecyclePolicyMetadata> newLifecycles = new TreeMap<>(
                lifecycles.apply(((SnapshotLifecycleMetadata) part).snapshotConfigurations));
            return new SnapshotLifecycleMetadata(newLifecycles, this.operationMode);
        }

        @Override
        public String getWriteableName() {
            return TYPE;
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            lifecycles.writeTo(out);
            out.writeEnum(this.operationMode);
        }

        static Diff<SnapshotLifecyclePolicyMetadata> readLifecyclePolicyDiffFrom(StreamInput in) throws IOException {
            return AbstractDiffable.readDiffFrom(SnapshotLifecyclePolicyMetadata::new, in);
        }
    }
}

@@ -0,0 +1,324 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle;

import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.Diffable;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.Context;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.xpack.core.scheduler.Cron;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;

import static org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES;

/**
 * A {@code SnapshotLifecyclePolicy} is a policy for the cluster including a schedule of when a
 * snapshot should be triggered, what the snapshot should be named, what repository it should go
 * to, and the configuration for the snapshot itself.
 */
public class SnapshotLifecyclePolicy extends AbstractDiffable<SnapshotLifecyclePolicy>
    implements Writeable, Diffable<SnapshotLifecyclePolicy>, ToXContentObject {

    private final String id;
    private final String name;
    private final String schedule;
    private final String repository;
    private final Map<String, Object> configuration;

    private static final ParseField NAME = new ParseField("name");
    private static final ParseField SCHEDULE = new ParseField("schedule");
    private static final ParseField REPOSITORY = new ParseField("repository");
    private static final ParseField CONFIG = new ParseField("config");
    private static final IndexNameExpressionResolver.DateMathExpressionResolver DATE_MATH_RESOLVER =
        new IndexNameExpressionResolver.DateMathExpressionResolver();
    private static final String POLICY_ID_METADATA_FIELD = "policy";
    private static final String METADATA_FIELD_NAME = "metadata";

    @SuppressWarnings("unchecked")
    private static final ConstructingObjectParser<SnapshotLifecyclePolicy, String> PARSER =
        new ConstructingObjectParser<>("snapshot_lifecycle", true,
            (a, id) -> {
                String name = (String) a[0];
                String schedule = (String) a[1];
                String repo = (String) a[2];
                Map<String, Object> config = (Map<String, Object>) a[3];
                return new SnapshotLifecyclePolicy(id, name, schedule, repo, config);
            });

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), NAME);
        PARSER.declareString(ConstructingObjectParser.constructorArg(), SCHEDULE);
        PARSER.declareString(ConstructingObjectParser.constructorArg(), REPOSITORY);
        PARSER.declareObject(ConstructingObjectParser.constructorArg(), (p, c) -> p.map(), CONFIG);
    }

    public SnapshotLifecyclePolicy(final String id, final String name, final String schedule,
                                   final String repository, Map<String, Object> configuration) {
        this.id = Objects.requireNonNull(id);
        this.name = name;
        this.schedule = schedule;
        this.repository = repository;
        this.configuration = configuration;
    }

    public SnapshotLifecyclePolicy(StreamInput in) throws IOException {
        this.id = in.readString();
        this.name = in.readString();
        this.schedule = in.readString();
        this.repository = in.readString();
        this.configuration = in.readMap();
    }

    public String getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public String getSchedule() {
        return this.schedule;
    }

    public String getRepository() {
        return this.repository;
    }

    public Map<String, Object> getConfig() {
        return this.configuration;
    }

    public long calculateNextExecution() {
        final Cron schedule = new Cron(this.schedule);
        return schedule.getNextValidTimeAfter(System.currentTimeMillis());
    }

    public ActionRequestValidationException validate() {
        ActionRequestValidationException err = new ActionRequestValidationException();

        // ID validation
        if (id.contains(",")) {
            err.addValidationError("invalid policy id [" + id + "]: must not contain ','");
        }
        if (id.contains(" ")) {
            err.addValidationError("invalid policy id [" + id + "]: must not contain spaces");
        }
        if (id.charAt(0) == '_') {
            err.addValidationError("invalid policy id [" + id + "]: must not start with '_'");
        }
        int byteCount = id.getBytes(StandardCharsets.UTF_8).length;
        if (byteCount > MAX_INDEX_NAME_BYTES) {
            err.addValidationError("invalid policy id [" + id + "]: name is too long, (" + byteCount + " > " +
                MAX_INDEX_NAME_BYTES + " bytes)");
        }

        // Snapshot name validation
        // We generate a snapshot name here to make sure it validates after applying date math
        final String snapshotName = generateSnapshotName(new ResolverContext());
        if (Strings.hasText(name) == false) {
            err.addValidationError("invalid snapshot name [" + name + "]: cannot be empty");
        }
        if (snapshotName.contains("#")) {
            err.addValidationError("invalid snapshot name [" + name + "]: must not contain '#'");
        }
        if (snapshotName.charAt(0) == '_') {
            err.addValidationError("invalid snapshot name [" + name + "]: must not start with '_'");
        }
        if (snapshotName.toLowerCase(Locale.ROOT).equals(snapshotName) == false) {
            err.addValidationError("invalid snapshot name [" + name + "]: must be lowercase");
        }
        if (Strings.validFileName(snapshotName) == false) {
            err.addValidationError("invalid snapshot name [" + name + "]: must not contain contain the following characters " +
                Strings.INVALID_FILENAME_CHARS);
        }

        // Schedule validation
        if (Strings.hasText(schedule) == false) {
            err.addValidationError("invalid schedule [" + schedule + "]: must not be empty");
        } else {
            try {
                new Cron(schedule);
            } catch (IllegalArgumentException e) {
                err.addValidationError("invalid schedule: " +
                    ExceptionsHelper.unwrapCause(e).getMessage());
            }
        }

        if (configuration.containsKey(METADATA_FIELD_NAME)) {
            if (configuration.get(METADATA_FIELD_NAME) instanceof Map == false) {
                err.addValidationError("invalid configuration." + METADATA_FIELD_NAME + " [" + configuration.get(METADATA_FIELD_NAME) +
                    "]: must be an object if present");
            } else {
                @SuppressWarnings("unchecked")
                Map<String, Object> metadata = (Map<String, Object>) configuration.get(METADATA_FIELD_NAME);
                if (metadata.containsKey(POLICY_ID_METADATA_FIELD)) {
                    err.addValidationError("invalid configuration." + METADATA_FIELD_NAME + ": field name [" + POLICY_ID_METADATA_FIELD +
                        "] is reserved and will be added automatically");
                } else {
                    Map<String, Object> metadataWithPolicyField = addPolicyNameToMetadata(metadata);
                    int serializedSizeOriginal = CreateSnapshotRequest.metadataSize(metadata);
                    int serializedSizeWithMetadata = CreateSnapshotRequest.metadataSize(metadataWithPolicyField);
                    int policyNameAddedBytes = serializedSizeWithMetadata - serializedSizeOriginal;
                    if (serializedSizeWithMetadata > CreateSnapshotRequest.MAXIMUM_METADATA_BYTES) {
                        err.addValidationError("invalid configuration." + METADATA_FIELD_NAME + ": must be smaller than [" +
                            (CreateSnapshotRequest.MAXIMUM_METADATA_BYTES - policyNameAddedBytes) +
                            "] bytes, but is [" + serializedSizeOriginal + "] bytes");
                    }
                }
            }
        }

        // Repository validation, validation of whether the repository actually exists happens
        // elsewhere as it requires cluster state
        if (Strings.hasText(repository) == false) {
            err.addValidationError("invalid repository name [" + repository + "]: cannot be empty");
        }

        return err.validationErrors().size() == 0 ? null : err;
    }

    private Map<String, Object> addPolicyNameToMetadata(final Map<String, Object> metadata) {
        Map<String, Object> newMetadata;
        if (metadata == null) {
            newMetadata = new HashMap<>();
        } else {
            newMetadata = new HashMap<>(metadata);
        }
        newMetadata.put(POLICY_ID_METADATA_FIELD, this.id);
        return newMetadata;
    }

    /**
     * Since snapshots need to be uniquely named, this method will resolve any date math used in
     * the provided name, as well as appending a unique identifier so expressions that may overlap
     * still result in unique snapshot names.
     */
    public String generateSnapshotName(Context context) {
        List<String> candidates = DATE_MATH_RESOLVER.resolve(context, Collections.singletonList(this.name));
        if (candidates.size() != 1) {
            throw new IllegalStateException("resolving snapshot name " + this.name + " generated more than one candidate: " + candidates);
        }
        // TODO: we are breaking the rules of UUIDs by lowercasing this here, find an alternative (snapshot names must be lowercase)
        return candidates.get(0) + "-" + UUIDs.randomBase64UUID().toLowerCase(Locale.ROOT);
    }

    /**
     * Generate a new create snapshot request from this policy. The name of the snapshot is
     * generated at this time based on any date math expressions in the "name" field.
     */
    public CreateSnapshotRequest toRequest() {
        CreateSnapshotRequest req = new CreateSnapshotRequest(repository, generateSnapshotName(new ResolverContext()));
        @SuppressWarnings("unchecked")
        Map<String, Object> metadata = (Map<String, Object>) configuration.get("metadata");
        Map<String, Object> metadataWithAddedPolicyName = addPolicyNameToMetadata(metadata);
        Map<String, Object> mergedConfiguration = new HashMap<>(configuration);
        mergedConfiguration.put("metadata", metadataWithAddedPolicyName);
        req.source(mergedConfiguration);
        req.waitForCompletion(false);
        return req;
    }

    public static SnapshotLifecyclePolicy parse(XContentParser parser, String id) {
        return PARSER.apply(parser, id);
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(this.id);
        out.writeString(this.name);
        out.writeString(this.schedule);
        out.writeString(this.repository);
        out.writeMap(this.configuration);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        builder.field(NAME.getPreferredName(), this.name);
        builder.field(SCHEDULE.getPreferredName(), this.schedule);
        builder.field(REPOSITORY.getPreferredName(), this.repository);
        builder.field(CONFIG.getPreferredName(), this.configuration);
        builder.endObject();
        return builder;
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name, schedule, repository, configuration);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }

        if (obj.getClass() != getClass()) {
            return false;
        }
        SnapshotLifecyclePolicy other = (SnapshotLifecyclePolicy) obj;
        return Objects.equals(id, other.id) &&
            Objects.equals(name, other.name) &&
            Objects.equals(schedule, other.schedule) &&
            Objects.equals(repository, other.repository) &&
            Objects.equals(configuration, other.configuration);
    }

    @Override
    public String toString() {
        return Strings.toString(this);
    }

    /**
     * This is a context for the DateMathExpressionResolver, which does not require
     * {@code IndicesOptions} or {@code ClusterState} since it only uses the start
     * time to resolve expressions
     */
    public static final class ResolverContext extends Context {
        public ResolverContext() {
            this(System.currentTimeMillis());
        }

        public ResolverContext(long startTime) {
            super(null, null, startTime, false, false);
        }

        @Override
        public ClusterState getState() {
            throw new UnsupportedOperationException("should never be called");
        }

        @Override
        public IndicesOptions getOptions() {
            throw new UnsupportedOperationException("should never be called");
        }
    }
}

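To make the class above concrete, here is a hedged sketch of building and validating a policy in Java. The id, snapshot name, schedule, repository, and config values are invented for illustration, and the repository-existence check still happens separately against cluster state:

```java
import java.util.Collections;
import java.util.Map;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;

public class SnapshotLifecyclePolicySketch {
    public static void main(String[] args) {
        Map<String, Object> config = Collections.singletonMap("indices", Collections.singletonList("important"));

        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "nightly",                   // policy id (hypothetical)
            "<nightly-snap-{now/d}>",    // snapshot name, may use date math
            "0 30 1 * * ?",              // cron schedule
            "my-repository",             // repository name (existence validated elsewhere)
            config);                     // passed through to the create-snapshot request

        // validate() returns null when the policy is well formed, otherwise the collected errors.
        ActionRequestValidationException problems = policy.validate();
        if (problems == null) {
            // calculateNextExecution() resolves the cron expression against the current time.
            System.out.println("next run at epoch millis " + policy.calculateNextExecution());
            // toRequest() produces the CreateSnapshotRequest that the SLM task executes.
            System.out.println("would snapshot to repository " + policy.toRequest().repository());
        }
    }
}
```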
@@ -0,0 +1,135 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle;

import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;

import java.io.IOException;
import java.util.Objects;

/**
 * The {@code SnapshotLifecyclePolicyItem} class is a special wrapper almost exactly like the
 * {@link SnapshotLifecyclePolicyMetadata}, however, it elides the headers to ensure that they
 * are not leaked to the user since they may contain sensitive information.
 */
public class SnapshotLifecyclePolicyItem implements ToXContentFragment, Writeable {

    private final SnapshotLifecyclePolicy policy;
    private final long version;
    private final long modifiedDate;

    @Nullable
    private final SnapshotInvocationRecord lastSuccess;

    @Nullable
    private final SnapshotInvocationRecord lastFailure;

    public SnapshotLifecyclePolicyItem(SnapshotLifecyclePolicyMetadata policyMetadata) {
        this.policy = policyMetadata.getPolicy();
        this.version = policyMetadata.getVersion();
        this.modifiedDate = policyMetadata.getModifiedDate();
        this.lastSuccess = policyMetadata.getLastSuccess();
        this.lastFailure = policyMetadata.getLastFailure();
    }

    public SnapshotLifecyclePolicyItem(StreamInput in) throws IOException {
        this.policy = new SnapshotLifecyclePolicy(in);
        this.version = in.readVLong();
        this.modifiedDate = in.readVLong();
        this.lastSuccess = in.readOptionalWriteable(SnapshotInvocationRecord::new);
        this.lastFailure = in.readOptionalWriteable(SnapshotInvocationRecord::new);
    }

    // For testing

    SnapshotLifecyclePolicyItem(SnapshotLifecyclePolicy policy, long version, long modifiedDate,
                                SnapshotInvocationRecord lastSuccess, SnapshotInvocationRecord lastFailure) {
        this.policy = policy;
        this.version = version;
        this.modifiedDate = modifiedDate;
        this.lastSuccess = lastSuccess;
        this.lastFailure = lastFailure;
    }

    public SnapshotLifecyclePolicy getPolicy() {
        return policy;
    }

    public long getVersion() {
        return version;
    }

    public long getModifiedDate() {
        return modifiedDate;
    }

    public SnapshotInvocationRecord getLastSuccess() {
        return lastSuccess;
    }

    public SnapshotInvocationRecord getLastFailure() {
        return lastFailure;
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        policy.writeTo(out);
        out.writeVLong(version);
        out.writeVLong(modifiedDate);
        out.writeOptionalWriteable(lastSuccess);
        out.writeOptionalWriteable(lastFailure);
    }

    @Override
    public int hashCode() {
        return Objects.hash(policy, version, modifiedDate, lastSuccess, lastFailure);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (obj.getClass() != getClass()) {
            return false;
        }
        SnapshotLifecyclePolicyItem other = (SnapshotLifecyclePolicyItem) obj;
        return policy.equals(other.policy) &&
            version == other.version &&
            modifiedDate == other.modifiedDate &&
            Objects.equals(lastSuccess, other.lastSuccess) &&
            Objects.equals(lastFailure, other.lastFailure);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject(policy.getId());
        builder.field(SnapshotLifecyclePolicyMetadata.VERSION.getPreferredName(), version);
        builder.timeField(SnapshotLifecyclePolicyMetadata.MODIFIED_DATE_MILLIS.getPreferredName(),
            SnapshotLifecyclePolicyMetadata.MODIFIED_DATE.getPreferredName(), modifiedDate);
        builder.field(SnapshotLifecyclePolicyMetadata.POLICY.getPreferredName(), policy);
        if (lastSuccess != null) {
            builder.field(SnapshotLifecyclePolicyMetadata.LAST_SUCCESS.getPreferredName(), lastSuccess);
        }
        if (lastFailure != null) {
            builder.field(SnapshotLifecyclePolicyMetadata.LAST_FAILURE.getPreferredName(), lastFailure);
        }
        builder.timeField(SnapshotLifecyclePolicyMetadata.NEXT_EXECUTION_MILLIS.getPreferredName(),
            SnapshotLifecyclePolicyMetadata.NEXT_EXECUTION.getPreferredName(), policy.calculateNextExecution());
        builder.endObject();
        return builder;
    }

    @Override
    public String toString() {
        return Strings.toString(this);
    }
}

@@ -0,0 +1,260 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle;

import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.Diffable;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

/**
 * {@code SnapshotLifecyclePolicyMetadata} encapsulates a {@link SnapshotLifecyclePolicy} as well as
 * the additional meta information link headers used for execution, version (a monotonically
 * incrementing number), and last modified date
 */
public class SnapshotLifecyclePolicyMetadata extends AbstractDiffable<SnapshotLifecyclePolicyMetadata>
    implements ToXContentObject, Diffable<SnapshotLifecyclePolicyMetadata> {

    static final ParseField POLICY = new ParseField("policy");
    static final ParseField HEADERS = new ParseField("headers");
    static final ParseField VERSION = new ParseField("version");
    static final ParseField MODIFIED_DATE_MILLIS = new ParseField("modified_date_millis");
    static final ParseField MODIFIED_DATE = new ParseField("modified_date");
    static final ParseField LAST_SUCCESS = new ParseField("last_success");
    static final ParseField LAST_FAILURE = new ParseField("last_failure");
    static final ParseField NEXT_EXECUTION_MILLIS = new ParseField("next_execution_millis");
    static final ParseField NEXT_EXECUTION = new ParseField("next_execution");

    private final SnapshotLifecyclePolicy policy;
    private final Map<String, String> headers;
    private final long version;
    private final long modifiedDate;
    @Nullable
    private final SnapshotInvocationRecord lastSuccess;
    @Nullable
    private final SnapshotInvocationRecord lastFailure;

    @SuppressWarnings("unchecked")
    public static final ConstructingObjectParser<SnapshotLifecyclePolicyMetadata, String> PARSER =
        new ConstructingObjectParser<>("snapshot_policy_metadata",
            a -> {
                SnapshotLifecyclePolicy policy = (SnapshotLifecyclePolicy) a[0];
                SnapshotInvocationRecord lastSuccess = (SnapshotInvocationRecord) a[4];
                SnapshotInvocationRecord lastFailure = (SnapshotInvocationRecord) a[5];

                return builder()
                    .setPolicy(policy)
                    .setHeaders((Map<String, String>) a[1])
                    .setVersion((long) a[2])
                    .setModifiedDate((long) a[3])
                    .setLastSuccess(lastSuccess)
                    .setLastFailure(lastFailure)
                    .build();
            });

    static {
        PARSER.declareObject(ConstructingObjectParser.constructorArg(), SnapshotLifecyclePolicy::parse, POLICY);
        PARSER.declareField(ConstructingObjectParser.constructorArg(), XContentParser::mapStrings, HEADERS, ObjectParser.ValueType.OBJECT);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), VERSION);
        PARSER.declareLong(ConstructingObjectParser.constructorArg(), MODIFIED_DATE_MILLIS);
        PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), SnapshotInvocationRecord::parse, LAST_SUCCESS);
        PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), SnapshotInvocationRecord::parse, LAST_FAILURE);
    }

    public static SnapshotLifecyclePolicyMetadata parse(XContentParser parser, String name) {
        return PARSER.apply(parser, name);
    }

    SnapshotLifecyclePolicyMetadata(SnapshotLifecyclePolicy policy, Map<String, String> headers, long version, long modifiedDate,
                                    SnapshotInvocationRecord lastSuccess, SnapshotInvocationRecord lastFailure) {
        this.policy = policy;
        this.headers = headers;
        this.version = version;
        this.modifiedDate = modifiedDate;
        this.lastSuccess = lastSuccess;
        this.lastFailure = lastFailure;
    }

    @SuppressWarnings("unchecked")
    SnapshotLifecyclePolicyMetadata(StreamInput in) throws IOException {
        this.policy = new SnapshotLifecyclePolicy(in);
        this.headers = (Map<String, String>) in.readGenericValue();
        this.version = in.readVLong();
        this.modifiedDate = in.readVLong();
        this.lastSuccess = in.readOptionalWriteable(SnapshotInvocationRecord::new);
        this.lastFailure = in.readOptionalWriteable(SnapshotInvocationRecord::new);
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        this.policy.writeTo(out);
        out.writeGenericValue(this.headers);
        out.writeVLong(this.version);
        out.writeVLong(this.modifiedDate);
        out.writeOptionalWriteable(this.lastSuccess);
        out.writeOptionalWriteable(this.lastFailure);
    }

    public static Builder builder() {
        return new Builder();
    }

    public static Builder builder(SnapshotLifecyclePolicyMetadata metadata) {
        if (metadata == null) {
            return builder();
        }
        return new Builder()
            .setHeaders(metadata.getHeaders())
            .setPolicy(metadata.getPolicy())
            .setVersion(metadata.getVersion())
            .setModifiedDate(metadata.getModifiedDate())
            .setLastSuccess(metadata.getLastSuccess())
            .setLastFailure(metadata.getLastFailure());
    }

    public Map<String, String> getHeaders() {
        return headers;
    }

    public SnapshotLifecyclePolicy getPolicy() {
        return policy;
    }

    public String getName() {
        return policy.getName();
    }

    public long getVersion() {
        return version;
    }

    public long getModifiedDate() {
        return modifiedDate;
    }

    public SnapshotInvocationRecord getLastSuccess() {
        return lastSuccess;
    }

    public SnapshotInvocationRecord getLastFailure() {
        return lastFailure;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject();
        builder.field(POLICY.getPreferredName(), policy);
        builder.field(HEADERS.getPreferredName(), headers);
        builder.field(VERSION.getPreferredName(), version);
        builder.timeField(MODIFIED_DATE_MILLIS.getPreferredName(), MODIFIED_DATE.getPreferredName(), modifiedDate);
        if (Objects.nonNull(lastSuccess)) {
            builder.field(LAST_SUCCESS.getPreferredName(), lastSuccess);
        }
        if (Objects.nonNull(lastFailure)) {
            builder.field(LAST_FAILURE.getPreferredName(), lastFailure);
        }
        builder.endObject();
        return builder;
    }

    @Override
    public int hashCode() {
        return Objects.hash(policy, headers, version, modifiedDate, lastSuccess, lastFailure);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        SnapshotLifecyclePolicyMetadata other = (SnapshotLifecyclePolicyMetadata) obj;
        return Objects.equals(policy, other.policy) &&
            Objects.equals(headers, other.headers) &&
            Objects.equals(version, other.version) &&
            Objects.equals(modifiedDate, other.modifiedDate) &&
            Objects.equals(lastSuccess, other.lastSuccess) &&
            Objects.equals(lastFailure, other.lastFailure);
    }

    @Override
    public String toString() {
        // Note: this is on purpose. While usually we would use Strings.toString(this) to render
        // this using toXContent, it may contain sensitive information in the headers and thus
        // should not emit them in case it accidentally gets logged.
        return super.toString();
    }

    public static class Builder {

        private Builder() {
        }

        private SnapshotLifecyclePolicy policy;
        private Map<String, String> headers;
        private long version = 1L;
        private Long modifiedDate;
        private SnapshotInvocationRecord lastSuccessDate;
        private SnapshotInvocationRecord lastFailureDate;

        public Builder setPolicy(SnapshotLifecyclePolicy policy) {
            this.policy = policy;
            return this;
        }

        public Builder setHeaders(Map<String, String> headers) {
            this.headers = headers;
            return this;
        }

        public Builder setVersion(long version) {
            this.version = version;
            return this;
        }

        public Builder setModifiedDate(long modifiedDate) {
            this.modifiedDate = modifiedDate;
            return this;
        }

        public Builder setLastSuccess(SnapshotInvocationRecord lastSuccessDate) {
            this.lastSuccessDate = lastSuccessDate;
            return this;
        }

        public Builder setLastFailure(SnapshotInvocationRecord lastFailureDate) {
            this.lastFailureDate = lastFailureDate;
            return this;
        }

        public SnapshotLifecyclePolicyMetadata build() {
            return new SnapshotLifecyclePolicyMetadata(
                Objects.requireNonNull(policy),
                Optional.ofNullable(headers).orElse(new HashMap<>()),
                version,
                Objects.requireNonNull(modifiedDate, "modifiedDate must be set"),
                lastSuccessDate,
                lastFailureDate);
        }
    }

}

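The builder at the end of the class above is how the update logic assembles a per-policy record before it is stored in `SnapshotLifecycleMetadata`. A minimal, hedged sketch; the policy values, version, and timestamp are hypothetical:

```java
import java.util.Collections;

import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;

public class PolicyMetadataSketch {
    public static void main(String[] args) {
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "nightly", "<nightly-snap-{now/d}>", "0 30 1 * * ?", "my-repository",
            Collections.singletonMap("indices", Collections.singletonList("important")));

        SnapshotLifecyclePolicyMetadata meta = SnapshotLifecyclePolicyMetadata.builder()
            .setPolicy(policy)
            .setHeaders(Collections.emptyMap())           // security headers captured at PUT time
            .setVersion(1L)                               // bumped on every update to the policy
            .setModifiedDate(System.currentTimeMillis())  // rendered as modified_date_millis
            .build();                                     // lastSuccess/lastFailure stay null until a run is recorded

        System.out.println(meta.getPolicy().getId() + " v" + meta.getVersion());
    }
}
```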
@@ -0,0 +1,93 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle.action;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContentObject;

import java.io.IOException;
import java.util.Objects;

public class DeleteSnapshotLifecycleAction extends ActionType<DeleteSnapshotLifecycleAction.Response> {
    public static final DeleteSnapshotLifecycleAction INSTANCE = new DeleteSnapshotLifecycleAction();
    public static final String NAME = "cluster:admin/slm/delete";

    protected DeleteSnapshotLifecycleAction() {
        super(NAME);
    }

    @Override
    public Writeable.Reader<Response> getResponseReader() {
        return Response::new;
    }

    public static class Request extends AcknowledgedRequest<Request> {

        private String lifecycleId;

        public Request() { }

        public Request(String lifecycleId) {
            this.lifecycleId = Objects.requireNonNull(lifecycleId, "id may not be null");
        }

        public String getLifecycleId() {
            return this.lifecycleId;
        }

        @Override
        public ActionRequestValidationException validate() {
            return null;
        }

        @Override
        public void readFrom(StreamInput in) throws IOException {
            super.readFrom(in);
            lifecycleId = in.readString();
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            super.writeTo(out);
            out.writeString(lifecycleId);
        }

        @Override
        public int hashCode() {
            return lifecycleId.hashCode();
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (obj.getClass() != getClass()) {
                return false;
            }
            Request other = (Request) obj;
            return Objects.equals(lifecycleId, other.lifecycleId);
        }
    }

    public static class Response extends AcknowledgedResponse implements ToXContentObject {

        public Response(boolean acknowledged) {
            super(acknowledged);
        }

        public Response(StreamInput streamInput) throws IOException {
            this(streamInput.readBoolean());
        }
    }
}

@@ -0,0 +1,130 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle.action;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;

import java.io.IOException;
import java.util.Objects;

/**
 * Action used to manually invoke a create snapshot request for a given
 * snapshot lifecycle policy regardless of schedule.
 */
public class ExecuteSnapshotLifecycleAction extends ActionType<ExecuteSnapshotLifecycleAction.Response> {
    public static final ExecuteSnapshotLifecycleAction INSTANCE = new ExecuteSnapshotLifecycleAction();
    public static final String NAME = "cluster:admin/slm/execute";

    protected ExecuteSnapshotLifecycleAction() {
        super(NAME);
    }

    @Override
    public Writeable.Reader<ExecuteSnapshotLifecycleAction.Response> getResponseReader() {
        return Response::new;
    }

    public static class Request extends AcknowledgedRequest<Request> implements ToXContentObject {

        private String lifecycleId;

        public Request(String lifecycleId) {
            this.lifecycleId = lifecycleId;
        }

        public Request() { }

        public String getLifecycleId() {
            return this.lifecycleId;
        }

        @Override
        public void readFrom(StreamInput in) throws IOException {
            super.readFrom(in);
            lifecycleId = in.readString();
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            super.writeTo(out);
            out.writeString(lifecycleId);
        }

        @Override
        public ActionRequestValidationException validate() {
            return null;
        }

        @Override
        public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
            builder.startObject();
            builder.endObject();
            return builder;
        }

        @Override
        public int hashCode() {
            return Objects.hash(lifecycleId);
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (obj.getClass() != getClass()) {
                return false;
            }
            Request other = (Request) obj;
            return lifecycleId.equals(other.lifecycleId);
        }

        @Override
        public String toString() {
            return Strings.toString(this);
        }
    }

    public static class Response extends ActionResponse implements ToXContentObject {

        private final String snapshotName;

        public Response(String snapshotName) {
            this.snapshotName = snapshotName;
        }

        public String getSnapshotName() {
            return this.snapshotName;
        }

        public Response(StreamInput in) throws IOException {
            this(in.readString());
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            out.writeString(this.snapshotName);
        }

        @Override
        public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
            builder.startObject();
            builder.field("snapshot_name", getSnapshotName());
            builder.endObject();
            return builder;
        }
    }
}

@@ -0,0 +1,142 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle.action;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyItem;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class GetSnapshotLifecycleAction extends ActionType<GetSnapshotLifecycleAction.Response> {
    public static final GetSnapshotLifecycleAction INSTANCE = new GetSnapshotLifecycleAction();
    public static final String NAME = "cluster:admin/slm/get";

    protected GetSnapshotLifecycleAction() {
        super(NAME);
    }

    @Override
    public Writeable.Reader<GetSnapshotLifecycleAction.Response> getResponseReader() {
        return GetSnapshotLifecycleAction.Response::new;
    }

    public static class Request extends AcknowledgedRequest<GetSnapshotLifecycleAction.Request> {

        private String[] lifecycleIds;

        public Request(String... lifecycleIds) {
            this.lifecycleIds = Objects.requireNonNull(lifecycleIds, "ids may not be null");
        }

        public Request() {
            this.lifecycleIds = Strings.EMPTY_ARRAY;
        }

        public String[] getLifecycleIds() {
            return this.lifecycleIds;
        }

        @Override
        public ActionRequestValidationException validate() {
            return null;
        }

        @Override
        public void readFrom(StreamInput in) throws IOException {
            super.readFrom(in);
            lifecycleIds = in.readStringArray();
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            super.writeTo(out);
            out.writeStringArray(lifecycleIds);
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(lifecycleIds);
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (obj.getClass() != getClass()) {
                return false;
            }
            Request other = (Request) obj;
            return Arrays.equals(lifecycleIds, other.lifecycleIds);
        }
    }

    public static class Response extends ActionResponse implements ToXContentObject {

        private List<SnapshotLifecyclePolicyItem> lifecycles;

        public Response() { }

        public Response(List<SnapshotLifecyclePolicyItem> lifecycles) {
            this.lifecycles = lifecycles;
        }

        public Response(StreamInput in) throws IOException {
            this.lifecycles = in.readList(SnapshotLifecyclePolicyItem::new);
        }

        @Override
        public String toString() {
            return Strings.toString(this);
        }

        @Override
        public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
            builder.startObject();
            for (SnapshotLifecyclePolicyItem item : lifecycles) {
                item.toXContent(builder, params);
            }
            builder.endObject();
            return builder;
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            out.writeList(lifecycles);
        }

        @Override
        public int hashCode() {
            return Objects.hash(lifecycles);
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (obj.getClass() != getClass()) {
                return false;
            }
            Response other = (Response) obj;
            return lifecycles.equals(other.lifecycles);
        }
    }

}

@ -0,0 +1,123 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.action;
|
||||
|
||||
import org.elasticsearch.action.ActionRequestValidationException;
|
||||
import org.elasticsearch.action.ActionType;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedRequest;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
import org.elasticsearch.common.io.stream.StreamOutput;
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Objects;
|
||||
|
||||
public class PutSnapshotLifecycleAction extends ActionType<PutSnapshotLifecycleAction.Response> {
|
||||
public static final PutSnapshotLifecycleAction INSTANCE = new PutSnapshotLifecycleAction();
|
||||
public static final String NAME = "cluster:admin/slm/put";
|
||||
|
||||
protected PutSnapshotLifecycleAction() {
|
||||
super(NAME);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Writeable.Reader<PutSnapshotLifecycleAction.Response> getResponseReader() {
|
||||
return Response::new;
|
||||
}
|
||||
|
||||
public static class Request extends AcknowledgedRequest<Request> implements ToXContentObject {
|
||||
|
||||
private String lifecycleId;
|
||||
private SnapshotLifecyclePolicy lifecycle;
|
||||
|
||||
public Request(String lifecycleId, SnapshotLifecyclePolicy lifecycle) {
|
||||
this.lifecycleId = lifecycleId;
|
||||
this.lifecycle = lifecycle;
|
||||
}
|
||||
|
||||
public Request() { }
|
||||
|
||||
public String getLifecycleId() {
|
||||
return this.lifecycleId;
|
||||
}
|
||||
|
||||
public SnapshotLifecyclePolicy getLifecycle() {
|
||||
return this.lifecycle;
|
||||
}
|
||||
|
||||
public static Request parseRequest(String lifecycleId, XContentParser parser) {
|
||||
return new Request(lifecycleId, SnapshotLifecyclePolicy.parse(parser, lifecycleId));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void readFrom(StreamInput in) throws IOException {
|
||||
super.readFrom(in);
|
||||
lifecycleId = in.readString();
|
||||
lifecycle = new SnapshotLifecyclePolicy(in);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void writeTo(StreamOutput out) throws IOException {
|
||||
super.writeTo(out);
|
||||
out.writeString(lifecycleId);
|
||||
lifecycle.writeTo(out);
|
||||
}
|
||||
|
||||
@Override
|
||||
public ActionRequestValidationException validate() {
|
||||
return lifecycle.validate();
|
||||
}
|
||||
|
||||
@Override
|
||||
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.startObject();
|
||||
builder.field(lifecycleId, lifecycle);
|
||||
builder.endObject();
|
||||
return builder;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(lifecycleId, lifecycle);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (obj == null) {
|
||||
return false;
|
||||
}
|
||||
if (obj.getClass() != getClass()) {
|
||||
return false;
|
||||
}
|
||||
Request other = (Request) obj;
|
||||
return lifecycleId.equals(other.lifecycleId) &&
|
||||
lifecycle.equals(other.lifecycle);
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return Strings.toString(this);
|
||||
}
|
||||
}
|
||||
|
||||
public static class Response extends AcknowledgedResponse implements ToXContentObject {
|
||||
|
||||
public Response(boolean acknowledged) {
|
||||
super(acknowledged);
|
||||
}
|
||||
|
||||
public Response(StreamInput streamInput) throws IOException {
|
||||
this(streamInput.readBoolean());
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,11 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

/**
 * Contains the action definitions for SLM. For the transport and rest action implementations, please see the {@code ilm} module's
 * {@code org.elasticsearch.xpack.snapshotlifecycle} package.
 */
package org.elasticsearch.xpack.core.snapshotlifecycle.action;

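Since these are ordinary transport actions, other server-side components (and tests) can drive them directly through a node `Client` without going through the REST layer. The sketch below is illustrative only: the class name, policy id, schedule, and repository are made up, and the `SnapshotLifecyclePolicy(id, name, schedule, repository, config)` constructor order is assumed from the serialization tests later in this change.

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.client.Client;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.PutSnapshotLifecycleAction;

import java.util.Collections;

public class SlmActionUsageExample {
    public static void putThenGet(Client client) {
        // Hypothetical policy values, for illustration only
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "nightly", "<nightly-snap-{now/d}>", "0 30 1 * * ?", "my-repository",
            Collections.singletonMap("indices", Collections.singletonList("important-*")));

        // Store the policy, then read it back through the Get action
        client.execute(PutSnapshotLifecycleAction.INSTANCE,
            new PutSnapshotLifecycleAction.Request("nightly", policy),
            ActionListener.wrap(
                putResponse -> client.execute(GetSnapshotLifecycleAction.INSTANCE,
                    new GetSnapshotLifecycleAction.Request("nightly"),
                    ActionListener.wrap(
                        getResponse -> System.out.println("policies: " + getResponse),
                        e -> System.err.println("get failed: " + e))),
                e -> System.err.println("put failed: " + e)));
    }
}
```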
@ -0,0 +1,223 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.history;
|
||||
|
||||
import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
import org.elasticsearch.common.ParseField;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
import org.elasticsearch.common.io.stream.StreamOutput;
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.common.xcontent.json.JsonXContent;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Collections;
|
||||
import java.util.Map;
|
||||
import java.util.Objects;
|
||||
|
||||
import static org.elasticsearch.ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE;
|
||||
|
||||
/**
|
||||
* Represents the record of a Snapshot Lifecycle Management action, so that it
|
||||
* can be indexed in a history index or recorded to a log in a structured way
|
||||
*/
|
||||
public class SnapshotHistoryItem implements Writeable, ToXContentObject {
|
||||
static final ParseField TIMESTAMP = new ParseField("@timestamp");
|
||||
static final ParseField POLICY_ID = new ParseField("policy");
|
||||
static final ParseField REPOSITORY = new ParseField("repository");
|
||||
static final ParseField SNAPSHOT_NAME = new ParseField("snapshot_name");
|
||||
static final ParseField OPERATION = new ParseField("operation");
|
||||
static final ParseField SUCCESS = new ParseField("success");
|
||||
private static final String CREATE_OPERATION = "CREATE";
|
||||
protected final long timestamp;
|
||||
protected final String policyId;
|
||||
protected final String repository;
|
||||
protected final String snapshotName;
|
||||
protected final String operation;
|
||||
protected final boolean success;
|
||||
|
||||
private final Map<String, Object> snapshotConfiguration;
|
||||
@Nullable
|
||||
private final String errorDetails;
|
||||
|
||||
static final ParseField SNAPSHOT_CONFIG = new ParseField("configuration");
|
||||
static final ParseField ERROR_DETAILS = new ParseField("error_details");
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
private static final ConstructingObjectParser<SnapshotHistoryItem, String> PARSER =
|
||||
new ConstructingObjectParser<>("snapshot_lifecycle_history_item", true,
|
||||
(a, id) -> {
|
||||
final long timestamp = (long) a[0];
|
||||
final String policyId = (String) a[1];
|
||||
final String repository = (String) a[2];
|
||||
final String snapshotName = (String) a[3];
|
||||
final String operation = (String) a[4];
|
||||
final boolean success = (boolean) a[5];
|
||||
final Map<String, Object> snapshotConfiguration = (Map<String, Object>) a[6];
|
||||
final String errorDetails = (String) a[7];
|
||||
return new SnapshotHistoryItem(timestamp, policyId, repository, snapshotName, operation, success,
|
||||
snapshotConfiguration, errorDetails);
|
||||
});
|
||||
|
||||
static {
|
||||
PARSER.declareLong(ConstructingObjectParser.constructorArg(), TIMESTAMP);
|
||||
PARSER.declareString(ConstructingObjectParser.constructorArg(), POLICY_ID);
|
||||
PARSER.declareString(ConstructingObjectParser.constructorArg(), REPOSITORY);
|
||||
PARSER.declareString(ConstructingObjectParser.constructorArg(), SNAPSHOT_NAME);
|
||||
PARSER.declareString(ConstructingObjectParser.constructorArg(), OPERATION);
|
||||
PARSER.declareBoolean(ConstructingObjectParser.constructorArg(), SUCCESS);
|
||||
PARSER.declareObject(ConstructingObjectParser.constructorArg(), (p, c) -> p.map(), SNAPSHOT_CONFIG);
|
||||
PARSER.declareStringOrNull(ConstructingObjectParser.constructorArg(), ERROR_DETAILS);
|
||||
}
|
||||
|
||||
public static SnapshotHistoryItem parse(XContentParser parser, String name) {
|
||||
return PARSER.apply(parser, name);
|
||||
}
|
||||
|
||||
SnapshotHistoryItem(long timestamp, String policyId, String repository, String snapshotName, String operation,
|
||||
boolean success, Map<String, Object> snapshotConfiguration, String errorDetails) {
|
||||
this.timestamp = timestamp;
|
||||
this.policyId = Objects.requireNonNull(policyId);
|
||||
this.repository = Objects.requireNonNull(repository);
|
||||
this.snapshotName = Objects.requireNonNull(snapshotName);
|
||||
this.operation = Objects.requireNonNull(operation);
|
||||
this.success = success;
|
||||
this.snapshotConfiguration = Objects.requireNonNull(snapshotConfiguration);
|
||||
this.errorDetails = errorDetails;
|
||||
}
|
||||
|
||||
public static SnapshotHistoryItem successRecord(long timestamp, SnapshotLifecyclePolicy policy, String snapshotName) {
|
||||
return new SnapshotHistoryItem(timestamp, policy.getId(), policy.getRepository(), snapshotName, CREATE_OPERATION, true,
|
||||
policy.getConfig(), null);
|
||||
}
|
||||
|
||||
public static SnapshotHistoryItem failureRecord(long timeStamp, SnapshotLifecyclePolicy policy, String snapshotName,
|
||||
Exception exception) throws IOException {
|
||||
ToXContent.Params stacktraceParams = new ToXContent.MapParams(Collections.singletonMap(REST_EXCEPTION_SKIP_STACK_TRACE, "false"));
|
||||
String exceptionString;
|
||||
try (XContentBuilder causeXContentBuilder = JsonXContent.contentBuilder()) {
|
||||
causeXContentBuilder.startObject();
|
||||
ElasticsearchException.generateThrowableXContent(causeXContentBuilder, stacktraceParams, exception);
|
||||
causeXContentBuilder.endObject();
|
||||
exceptionString = BytesReference.bytes(causeXContentBuilder).utf8ToString();
|
||||
}
|
||||
return new SnapshotHistoryItem(timeStamp, policy.getId(), policy.getRepository(), snapshotName, CREATE_OPERATION, false,
|
||||
policy.getConfig(), exceptionString);
|
||||
}
|
||||
|
||||
public SnapshotHistoryItem(StreamInput in) throws IOException {
|
||||
this.timestamp = in.readVLong();
|
||||
this.policyId = in.readString();
|
||||
this.repository = in.readString();
|
||||
this.snapshotName = in.readString();
|
||||
this.operation = in.readString();
|
||||
this.success = in.readBoolean();
|
||||
this.snapshotConfiguration = in.readMap();
|
||||
this.errorDetails = in.readOptionalString();
|
||||
}
|
||||
|
||||
public Map<String, Object> getSnapshotConfiguration() {
|
||||
return snapshotConfiguration;
|
||||
}
|
||||
|
||||
public String getErrorDetails() {
|
||||
return errorDetails;
|
||||
}
|
||||
|
||||
public long getTimestamp() {
|
||||
return timestamp;
|
||||
}
|
||||
|
||||
public String getPolicyId() {
|
||||
return policyId;
|
||||
}
|
||||
|
||||
public String getRepository() {
|
||||
return repository;
|
||||
}
|
||||
|
||||
public String getSnapshotName() {
|
||||
return snapshotName;
|
||||
}
|
||||
|
||||
public String getOperation() {
|
||||
return operation;
|
||||
}
|
||||
|
||||
public boolean isSuccess() {
|
||||
return success;
|
||||
}
|
||||
|
||||
@Override
|
||||
public final void writeTo(StreamOutput out) throws IOException {
|
||||
out.writeVLong(timestamp);
|
||||
out.writeString(policyId);
|
||||
out.writeString(repository);
|
||||
out.writeString(snapshotName);
|
||||
out.writeString(operation);
|
||||
out.writeBoolean(success);
|
||||
out.writeMap(snapshotConfiguration);
|
||||
out.writeOptionalString(errorDetails);
|
||||
}
|
||||
|
||||
@Override
|
||||
public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.startObject();
|
||||
{
|
||||
builder.timeField(TIMESTAMP.getPreferredName(), "timestamp_string", timestamp);
|
||||
builder.field(POLICY_ID.getPreferredName(), policyId);
|
||||
builder.field(REPOSITORY.getPreferredName(), repository);
|
||||
builder.field(SNAPSHOT_NAME.getPreferredName(), snapshotName);
|
||||
builder.field(OPERATION.getPreferredName(), operation);
|
||||
builder.field(SUCCESS.getPreferredName(), success);
|
||||
builder.field(SNAPSHOT_CONFIG.getPreferredName(), snapshotConfiguration);
|
||||
builder.field(ERROR_DETAILS.getPreferredName(), errorDetails);
|
||||
}
|
||||
builder.endObject();
|
||||
|
||||
return builder;
|
||||
}
|
||||
|
||||
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
SnapshotHistoryItem that = (SnapshotHistoryItem) o;
return isSuccess() == that.isSuccess() &&
timestamp == that.getTimestamp() &&
Objects.equals(getPolicyId(), that.getPolicyId()) &&
Objects.equals(getRepository(), that.getRepository()) &&
Objects.equals(getSnapshotName(), that.getSnapshotName()) &&
Objects.equals(getOperation(), that.getOperation()) &&
Objects.equals(getSnapshotConfiguration(), that.getSnapshotConfiguration()) &&
Objects.equals(getErrorDetails(), that.getErrorDetails());
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(getTimestamp(), getPolicyId(), getRepository(), getSnapshotName(), getOperation(), isSuccess(),
|
||||
getSnapshotConfiguration(), getErrorDetails());
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return Strings.toString(this);
|
||||
}
|
||||
}
|
|
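To make the shape of these records concrete, here is a minimal sketch of building one success and one failure record the way `SnapshotLifecycleTask` is expected to, then printing them (the class's `toString()` renders the record as JSON via `Strings.toString`). The policy values and snapshot names are hypothetical, and the `SnapshotLifecyclePolicy` constructor order is assumed from the tests in this change.

```java
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem;

import java.io.IOException;
import java.util.Collections;

public class SnapshotHistoryItemExample {
    public static void main(String[] args) throws IOException {
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "nightly", "<nightly-snap-{now/d}>", "0 30 1 * * ?", "my-repository",
            Collections.singletonMap("indices", Collections.singletonList("important-*")));

        long now = System.currentTimeMillis();

        // Record for a snapshot that was started successfully
        SnapshotHistoryItem ok = SnapshotHistoryItem.successRecord(now, policy, "nightly-snap-2019.07.16-abc");
        System.out.println(ok);

        // Record for a failed invocation; the exception is serialized into error_details
        SnapshotHistoryItem failed = SnapshotHistoryItem.failureRecord(now, policy, "nightly-snap-2019.07.16-def",
            new IllegalStateException("repository [my-repository] is missing"));
        System.out.println(failed);
    }
}
```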
@ -0,0 +1,84 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.history;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.apache.logging.log4j.message.ParameterizedMessage;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.index.IndexRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.time.DateFormatter;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.time.Instant;
|
||||
import java.time.ZoneId;
|
||||
import java.time.ZonedDateTime;
|
||||
|
||||
import static org.elasticsearch.xpack.core.indexlifecycle.LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING;
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry.INDEX_TEMPLATE_VERSION;
|
||||
|
||||
/**
|
||||
* Records Snapshot Lifecycle Management actions as represented by {@link SnapshotHistoryItem} into an index
|
||||
* for the purposes of querying and alerting.
|
||||
*/
|
||||
public class SnapshotHistoryStore {
|
||||
private static final Logger logger = LogManager.getLogger(SnapshotHistoryStore.class);
|
||||
private static final DateFormatter indexTimeFormat = DateFormatter.forPattern("yyyy.MM");
|
||||
|
||||
public static final String SLM_HISTORY_INDEX_PREFIX = ".slm-history-" + INDEX_TEMPLATE_VERSION + "-";
|
||||
|
||||
private final Client client;
|
||||
private final ZoneId timeZone;
|
||||
private final boolean slmHistoryEnabled;
|
||||
|
||||
public SnapshotHistoryStore(Settings nodeSettings, Client client, ZoneId timeZone) {
|
||||
this.client = client;
|
||||
this.timeZone = timeZone;
|
||||
slmHistoryEnabled = SLM_HISTORY_INDEX_ENABLED_SETTING.get(nodeSettings);
|
||||
}
|
||||
|
||||
/**
|
||||
* Attempts to asynchronously index a snapshot lifecycle management history entry
|
||||
*
|
||||
* @param item The entry to index
|
||||
*/
|
||||
public void putAsync(SnapshotHistoryItem item) {
|
||||
if (slmHistoryEnabled == false) {
|
||||
logger.trace("not recording snapshot history item because [{}] is [false]: [{}]",
|
||||
SLM_HISTORY_INDEX_ENABLED_SETTING.getKey(), item);
|
||||
return;
|
||||
}
|
||||
final ZonedDateTime dateTime = ZonedDateTime.ofInstant(Instant.ofEpochMilli(item.getTimestamp()), timeZone);
|
||||
final String index = getHistoryIndexNameForTime(dateTime);
|
||||
logger.trace("about to index snapshot history item in index [{}]: [{}]", index, item);
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
item.toXContent(builder, ToXContent.EMPTY_PARAMS);
|
||||
IndexRequest request = new IndexRequest(index)
|
||||
.source(builder);
|
||||
client.index(request, ActionListener.wrap(indexResponse -> {
|
||||
logger.debug("successfully indexed snapshot history item with id [{}] in index [{}]: [{}]",
|
||||
indexResponse.getId(), index, item);
|
||||
}, exception -> {
|
||||
logger.error(new ParameterizedMessage("failed to index snapshot history item in index [{}]: [{}]",
|
||||
index, item), exception);
|
||||
}));
|
||||
} catch (IOException exception) {
|
||||
logger.error(new ParameterizedMessage("failed to index snapshot history item in index [{}]: [{}]",
|
||||
index, item), exception);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static String getHistoryIndexNameForTime(ZonedDateTime time) {
|
||||
return SLM_HISTORY_INDEX_PREFIX + indexTimeFormat.format(time);
|
||||
}
|
||||
}
|
|
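A short usage sketch for the store: build a record and hand it to `putAsync`, which is a no-op when the history index setting is disabled and otherwise indexes the item into the monthly `.slm-history-1-yyyy.MM` index. The surrounding wiring (settings, client, policy, snapshot name) is assumed to come from the plugin, and the class name here is hypothetical.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;

import java.time.ZoneOffset;

public class SnapshotHistoryStoreUsage {
    public static void recordSuccess(Settings nodeSettings, Client client,
                                     SnapshotLifecyclePolicy policy, String snapshotName) {
        SnapshotHistoryStore historyStore = new SnapshotHistoryStore(nodeSettings, client, ZoneOffset.UTC);
        SnapshotHistoryItem record = SnapshotHistoryItem.successRecord(System.currentTimeMillis(), policy, snapshotName);
        // Fire-and-forget: skipped when SLM_HISTORY_INDEX_ENABLED_SETTING is false,
        // otherwise indexed asynchronously with only logging on failure.
        historyStore.putAsync(record);
    }
}
```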
@ -0,0 +1,104 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.history;
|
||||
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.indexlifecycle.IndexLifecycleMetadata;
|
||||
import org.elasticsearch.xpack.core.indexlifecycle.LifecyclePolicy;
|
||||
import org.elasticsearch.xpack.core.template.IndexTemplateConfig;
|
||||
import org.elasticsearch.xpack.core.template.IndexTemplateRegistry;
|
||||
import org.elasticsearch.xpack.core.template.LifecyclePolicyConfig;
|
||||
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.INDEX_LIFECYCLE_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.indexlifecycle.LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING;
|
||||
|
||||
/**
|
||||
* Manages the index template and associated ILM policy for the Snapshot
|
||||
* Lifecycle Management history index.
|
||||
*/
|
||||
public class SnapshotLifecycleTemplateRegistry extends IndexTemplateRegistry {
|
||||
// history (please add a comment why you increased the version here)
|
||||
// version 1: initial
|
||||
public static final String INDEX_TEMPLATE_VERSION = "1";
|
||||
|
||||
public static final String SLM_TEMPLATE_VERSION_VARIABLE = "xpack.slm.template.version";
|
||||
public static final String SLM_TEMPLATE_NAME = ".slm-history";
|
||||
|
||||
public static final String SLM_POLICY_NAME = "slm-history-ilm-policy";
|
||||
|
||||
public static final IndexTemplateConfig TEMPLATE_SLM_HISTORY = new IndexTemplateConfig(
|
||||
SLM_TEMPLATE_NAME,
|
||||
"/slm-history.json",
|
||||
INDEX_TEMPLATE_VERSION,
|
||||
SLM_TEMPLATE_VERSION_VARIABLE
|
||||
);
|
||||
|
||||
public static final LifecyclePolicyConfig SLM_HISTORY_POLICY = new LifecyclePolicyConfig(
|
||||
SLM_POLICY_NAME,
|
||||
"/slm-history-ilm-policy.json"
|
||||
);
|
||||
|
||||
private final boolean slmHistoryEnabled;
|
||||
|
||||
public SnapshotLifecycleTemplateRegistry(Settings nodeSettings, ClusterService clusterService, ThreadPool threadPool, Client client,
|
||||
NamedXContentRegistry xContentRegistry) {
|
||||
super(nodeSettings, clusterService, threadPool, client, xContentRegistry);
|
||||
slmHistoryEnabled = SLM_HISTORY_INDEX_ENABLED_SETTING.get(nodeSettings);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected List<IndexTemplateConfig> getTemplateConfigs() {
|
||||
if (slmHistoryEnabled == false) {
|
||||
return Collections.emptyList();
|
||||
}
|
||||
return Collections.singletonList(TEMPLATE_SLM_HISTORY);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected List<LifecyclePolicyConfig> getPolicyConfigs() {
|
||||
if (slmHistoryEnabled == false) {
|
||||
return Collections.emptyList();
|
||||
}
|
||||
return Collections.singletonList(SLM_HISTORY_POLICY);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected String getOrigin() {
|
||||
return INDEX_LIFECYCLE_ORIGIN; // TODO use separate SLM origin?
|
||||
}
|
||||
|
||||
public boolean validate(ClusterState state) {
|
||||
boolean allTemplatesPresent = getTemplateConfigs().stream()
|
||||
.map(IndexTemplateConfig::getTemplateName)
|
||||
.allMatch(name -> state.metaData().getTemplates().containsKey(name));
|
||||
|
||||
Optional<Map<String, LifecyclePolicy>> maybePolicies = Optional
|
||||
.<IndexLifecycleMetadata>ofNullable(state.metaData().custom(IndexLifecycleMetadata.TYPE))
|
||||
.map(IndexLifecycleMetadata::getPolicies);
|
||||
Set<String> policyNames = getPolicyConfigs().stream()
|
||||
.map(LifecyclePolicyConfig::getPolicyName)
|
||||
.collect(Collectors.toSet());
|
||||
|
||||
boolean allPoliciesPresent = maybePolicies
|
||||
.map(policies -> policies.keySet()
|
||||
.containsAll(policyNames))
|
||||
.orElse(false);
|
||||
return allTemplatesPresent && allPoliciesPresent;
|
||||
}
|
||||
}
|
|
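The registry's `validate` method can be used, for example from a test, to check that the cluster state already contains the `.slm-history` template and the `slm-history-ilm-policy` ILM policy. The sketch below is illustrative only: in production the registry is created once when the ILM plugin starts up and registered as a cluster state listener, so constructing a throwaway instance like this is something you would only do in a test-like context.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry;

public class SlmTemplateRegistryCheck {
    public static boolean historyResourcesInstalled(Settings settings, ClusterService clusterService, ThreadPool threadPool,
                                                    Client client, NamedXContentRegistry xContentRegistry) {
        SnapshotLifecycleTemplateRegistry registry =
            new SnapshotLifecycleTemplateRegistry(settings, clusterService, threadPool, client, xContentRegistry);
        ClusterState state = clusterService.state();
        // true only when every template and ILM policy the registry manages is present
        return registry.validate(state);
    }
}
```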
@ -0,0 +1,23 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

/**
 * This package contains the utility classes used to persist SLM policy execution results to an internal index.
 *
 * <p>The {@link org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry} class is registered as a
 * cluster state listener when the ILM plugin starts up. It executes only on the elected master node, and ensures that a template is
 * configured for the SLM history index, as well as an ILM policy (since the two are always enabled in lock step).
 *
 * <p>The {@link org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem} is used to encapsulate historical
 * information about a snapshot policy execution. It contains more data than the
 * {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecord}, since it is a complete history record
 * stored on disk rather than a low-surface-area status entry.
 *
 * <p>The {@link org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore} manages the persistence of the
 * {@link org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem} described above. It simply performs an
 * asynchronous put operation against the internal SLM history index.
 */
package org.elasticsearch.xpack.core.snapshotlifecycle.history;

@ -0,0 +1,36 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

/**
 * This is the Snapshot Lifecycle Management (SLM) core package. It contains the core classes for SLM, including all of the
 * custom cluster state metadata objects, execution history storage facilities, and the action definitions. For the main SLM service
 * implementation classes, please see the {@code ilm} module's {@code org.elasticsearch.xpack.snapshotlifecycle} package.
 *
 * <p>Contained within this package are the custom metadata objects and models used throughout the SLM service. The names can
 * be confusing, so it is important to know the differences between each metadata object.
 *
 * <p>The {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy} object is the user-provided definition of the
 * SLM policy. This is what a user provides when creating a snapshot policy, and it acts as the blueprint for the create snapshot request
 * that the service launches. It additionally surfaces the next point in time at which the policy should be executed.
 *
 * <p>Alongside the policy, the {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecord} represents a single
 * execution of a policy. It includes the policy name and details about that execution, success or failure.
 *
 * <p>Next is the {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata} object, which not only contains
 * the {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy} blueprint, but also the contextual information about
 * that policy: the user information of whoever created it (so that it may be used during execution), the version of the policy,
 * and both the last failed and successful runs as {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecord}s. This
 * is the living representation of a policy within the cluster state.
 *
 * <p>When a "Get Policy" action is executed, a {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyItem} is
 * returned instead. This is a thin wrapper around the internal
 * {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata} object so that we do not expose any sensitive
 * internal headers or user information in the Get API.
 *
 * <p>Finally, the {@link org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata} class contains all living SLM
 * policies and their metadata, acting as the SLM-specific root object within the cluster state.
 */
package org.elasticsearch.xpack.core.snapshotlifecycle;

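To tie the objects described above together, here is a hedged sketch that builds each layer by hand, using only the constructors and the metadata builder exercised by the tests in this change. Every concrete value (policy id, schedule, repository, snapshot name) and the wrapper class name are hypothetical.

```java
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecord;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyItem;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;

import java.util.Collections;

public class SlmMetadataExample {
    public static SnapshotLifecyclePolicyItem buildItem() {
        // The user-facing blueprint for taking snapshots
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "nightly", "<nightly-snap-{now/d}>", "0 30 1 * * ?", "my-repository",
            Collections.singletonMap("indices", Collections.singletonList("important-*")));

        // One record of a previous, successful execution of the policy
        SnapshotInvocationRecord lastSuccess =
            new SnapshotInvocationRecord("nightly-snap-2019.07.15-xyz", System.currentTimeMillis(), null);

        // The cluster state representation: the policy plus its execution context
        SnapshotLifecyclePolicyMetadata metadata = SnapshotLifecyclePolicyMetadata.builder()
            .setPolicy(policy)
            .setVersion(1L)
            .setModifiedDate(System.currentTimeMillis())
            .setLastSuccess(lastSuccess)
            .build();

        // The thin wrapper returned by the Get Policy API (no internal headers exposed)
        return new SnapshotLifecyclePolicyItem(metadata);
    }
}
```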
@ -0,0 +1,10 @@
{
  "phases": {
    "delete": {
      "min_age": "60d",
      "actions": {
        "delete": {}
      }
    }
  }
}

@ -0,0 +1,58 @@
{
  "index_patterns": [
    ".slm-history-${xpack.slm.template.version}*"
  ],
  "order": 2147483647,
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0,
    "index.auto_expand_replicas": "0-1",
    "index.lifecycle.name": "slm-history-ilm-policy",
    "index.format": 1
  },
  "mappings": {
    "_doc": {
      "dynamic": false,
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "epoch_millis"
        },
        "policy": {
          "type": "keyword"
        },
        "repository": {
          "type": "keyword"
        },
        "snapshot_name": {
          "type": "keyword"
        },
        "operation": {
          "type": "keyword"
        },
        "success": {
          "type": "boolean"
        },
        "configuration": {
          "type": "object",
          "dynamic": false,
          "properties": {
            "indices": {
              "type": "keyword"
            },
            "partial": {
              "type": "boolean"
            },
            "include_global_state": {
              "type": "boolean"
            }
          }
        },
        "error_details": {
          "type": "text",
          "index": false
        }
      }
    }
  }
}

@ -204,4 +204,34 @@ public class PrivilegeTests extends ESTestCase {
            assertThat(predicate.test("indices:admin/whatever"), is(false));
        }
    }

    public void testSlmPrivileges() {
        {
            Predicate<String> predicate = ClusterPrivilege.MANAGE_SLM.predicate();
            // check cluster actions
            assertThat(predicate.test("cluster:admin/slm/delete"), is(true));
            assertThat(predicate.test("cluster:admin/slm/put"), is(true));
            assertThat(predicate.test("cluster:admin/slm/get"), is(true));
            assertThat(predicate.test("cluster:admin/ilm/start"), is(true));
            assertThat(predicate.test("cluster:admin/ilm/stop"), is(true));
            assertThat(predicate.test("cluster:admin/slm/execute"), is(true));
            assertThat(predicate.test("cluster:admin/ilm/operation_mode/get"), is(true));
            // check non-slm action
            assertThat(predicate.test("cluster:admin/whatever"), is(false));
        }

        {
            Predicate<String> predicate = ClusterPrivilege.READ_SLM.predicate();
            // check cluster actions
            assertThat(predicate.test("cluster:admin/slm/delete"), is(false));
            assertThat(predicate.test("cluster:admin/slm/put"), is(false));
            assertThat(predicate.test("cluster:admin/slm/get"), is(true));
            assertThat(predicate.test("cluster:admin/ilm/start"), is(false));
            assertThat(predicate.test("cluster:admin/ilm/stop"), is(false));
            assertThat(predicate.test("cluster:admin/slm/execute"), is(false));
            assertThat(predicate.test("cluster:admin/ilm/operation_mode/get"), is(true));
            // check non-slm action
            assertThat(predicate.test("cluster:admin/whatever"), is(false));
        }
    }
}

@ -0,0 +1,61 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle;
|
||||
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.test.AbstractSerializingTestCase;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
public class SnapshotInvocationRecordTests extends AbstractSerializingTestCase<SnapshotInvocationRecord> {
|
||||
|
||||
@Override
|
||||
protected SnapshotInvocationRecord doParseInstance(XContentParser parser) throws IOException {
|
||||
return SnapshotInvocationRecord.parse(parser, null);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotInvocationRecord createTestInstance() {
|
||||
return randomSnapshotInvocationRecord();
|
||||
}
|
||||
|
||||
@Override
|
||||
protected Writeable.Reader<SnapshotInvocationRecord> instanceReader() {
|
||||
return SnapshotInvocationRecord::new;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotInvocationRecord mutateInstance(SnapshotInvocationRecord instance) {
|
||||
switch (between(0, 2)) {
|
||||
case 0:
|
||||
return new SnapshotInvocationRecord(
|
||||
randomValueOtherThan(instance.getSnapshotName(), () -> randomAlphaOfLengthBetween(2,10)),
|
||||
instance.getTimestamp(),
|
||||
instance.getDetails());
|
||||
case 1:
|
||||
return new SnapshotInvocationRecord(instance.getSnapshotName(),
|
||||
randomValueOtherThan(instance.getTimestamp(), ESTestCase::randomNonNegativeLong),
|
||||
instance.getDetails());
|
||||
case 2:
|
||||
return new SnapshotInvocationRecord(instance.getSnapshotName(),
|
||||
instance.getTimestamp(),
|
||||
randomValueOtherThan(instance.getDetails(), () -> randomAlphaOfLengthBetween(2,10)));
|
||||
default:
|
||||
throw new AssertionError("failure, got illegal switch case");
|
||||
}
|
||||
}
|
||||
|
||||
public static SnapshotInvocationRecord randomSnapshotInvocationRecord() {
|
||||
return new SnapshotInvocationRecord(
|
||||
randomAlphaOfLengthBetween(5,10),
|
||||
randomNonNegativeLong(),
|
||||
randomBoolean() ? null : randomAlphaOfLengthBetween(5, 10));
|
||||
}
|
||||
|
||||
}
|
|
@ -0,0 +1,68 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle;
|
||||
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.test.AbstractWireSerializingTestCase;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadataTests.createRandomPolicy;
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadataTests.createRandomPolicyMetadata;
|
||||
|
||||
public class SnapshotLifecyclePolicyItemTests extends AbstractWireSerializingTestCase<SnapshotLifecyclePolicyItem> {
|
||||
|
||||
@Override
|
||||
protected SnapshotLifecyclePolicyItem createTestInstance() {
|
||||
return new SnapshotLifecyclePolicyItem(createRandomPolicyMetadata(randomAlphaOfLengthBetween(5, 10)));
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotLifecyclePolicyItem mutateInstance(SnapshotLifecyclePolicyItem instance) {
|
||||
switch (between(0, 4)) {
|
||||
case 0:
|
||||
String newPolicyId = randomValueOtherThan(instance.getPolicy().getId(), () -> randomAlphaOfLengthBetween(5, 10));
|
||||
return new SnapshotLifecyclePolicyItem(createRandomPolicy(newPolicyId),
|
||||
instance.getVersion(),
|
||||
instance.getModifiedDate(),
|
||||
instance.getLastSuccess(),
|
||||
instance.getLastFailure());
|
||||
case 1:
|
||||
return new SnapshotLifecyclePolicyItem(instance.getPolicy(),
|
||||
randomValueOtherThan(instance.getVersion(), ESTestCase::randomNonNegativeLong),
|
||||
instance.getModifiedDate(),
|
||||
instance.getLastSuccess(),
|
||||
instance.getLastFailure());
|
||||
case 2:
|
||||
return new SnapshotLifecyclePolicyItem(instance.getPolicy(),
|
||||
instance.getVersion(),
|
||||
randomValueOtherThan(instance.getModifiedDate(), ESTestCase::randomNonNegativeLong),
|
||||
instance.getLastSuccess(),
|
||||
instance.getLastFailure());
|
||||
case 3:
|
||||
return new SnapshotLifecyclePolicyItem(instance.getPolicy(),
|
||||
instance.getVersion(),
|
||||
instance.getModifiedDate(),
|
||||
randomValueOtherThan(instance.getLastSuccess(),
|
||||
SnapshotInvocationRecordTests::randomSnapshotInvocationRecord),
|
||||
instance.getLastFailure());
|
||||
case 4:
|
||||
return new SnapshotLifecyclePolicyItem(instance.getPolicy(),
|
||||
instance.getVersion(),
|
||||
instance.getModifiedDate(),
|
||||
instance.getLastSuccess(),
|
||||
randomValueOtherThan(instance.getLastFailure(),
|
||||
SnapshotInvocationRecordTests::randomSnapshotInvocationRecord));
|
||||
default:
|
||||
throw new AssertionError("failure, got illegal switch case");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
protected Writeable.Reader<SnapshotLifecyclePolicyItem> instanceReader() {
|
||||
return SnapshotLifecyclePolicyItem::new;
|
||||
}
|
||||
}
|
|
@ -0,0 +1,116 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle;
|
||||
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.test.AbstractSerializingTestCase;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecordTests.randomSnapshotInvocationRecord;
|
||||
|
||||
public class SnapshotLifecyclePolicyMetadataTests extends AbstractSerializingTestCase<SnapshotLifecyclePolicyMetadata> {
|
||||
private String policyId;
|
||||
|
||||
@Override
|
||||
protected SnapshotLifecyclePolicyMetadata doParseInstance(XContentParser parser) throws IOException {
|
||||
return SnapshotLifecyclePolicyMetadata.PARSER.apply(parser, policyId);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotLifecyclePolicyMetadata createTestInstance() {
|
||||
policyId = randomAlphaOfLength(5);
|
||||
return createRandomPolicyMetadata(policyId);
|
||||
}
|
||||
|
||||
private static Map<String, String> randomHeaders() {
|
||||
Map<String, String> headers = new HashMap<>();
|
||||
int headerCount = randomIntBetween(1,10);
|
||||
for (int i = 0; i < headerCount; i++) {
|
||||
headers.put(randomAlphaOfLengthBetween(5,10), randomAlphaOfLengthBetween(5,10));
|
||||
}
|
||||
return headers;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected Writeable.Reader<SnapshotLifecyclePolicyMetadata> instanceReader() {
|
||||
return SnapshotLifecyclePolicyMetadata::new;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotLifecyclePolicyMetadata mutateInstance(SnapshotLifecyclePolicyMetadata instance) throws IOException {
|
||||
switch (between(0, 5)) {
|
||||
case 0:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setPolicy(randomValueOtherThan(instance.getPolicy(), () -> createRandomPolicy(randomAlphaOfLength(10))))
|
||||
.build();
|
||||
case 1:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setVersion(randomValueOtherThan(instance.getVersion(), ESTestCase::randomNonNegativeLong))
|
||||
.build();
|
||||
case 2:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setModifiedDate(randomValueOtherThan(instance.getModifiedDate(), ESTestCase::randomNonNegativeLong))
|
||||
.build();
|
||||
case 3:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setHeaders(randomValueOtherThan(instance.getHeaders(), SnapshotLifecyclePolicyMetadataTests::randomHeaders))
|
||||
.build();
|
||||
case 4:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setLastSuccess(randomValueOtherThan(instance.getLastSuccess(),
|
||||
SnapshotInvocationRecordTests::randomSnapshotInvocationRecord))
|
||||
.build();
|
||||
case 5:
|
||||
return SnapshotLifecyclePolicyMetadata.builder(instance)
|
||||
.setLastFailure(randomValueOtherThan(instance.getLastFailure(),
|
||||
SnapshotInvocationRecordTests::randomSnapshotInvocationRecord))
|
||||
.build();
|
||||
default:
|
||||
throw new AssertionError("failure, got illegal switch case");
|
||||
}
|
||||
}
|
||||
|
||||
public static SnapshotLifecyclePolicyMetadata createRandomPolicyMetadata(String policyId) {
|
||||
SnapshotLifecyclePolicyMetadata.Builder builder = SnapshotLifecyclePolicyMetadata.builder()
|
||||
.setPolicy(createRandomPolicy(policyId))
|
||||
.setVersion(randomNonNegativeLong())
|
||||
.setModifiedDate(randomNonNegativeLong());
|
||||
if (randomBoolean()) {
|
||||
builder.setHeaders(randomHeaders());
|
||||
}
|
||||
if (randomBoolean()) {
|
||||
builder.setLastSuccess(randomSnapshotInvocationRecord());
|
||||
}
|
||||
if (randomBoolean()) {
|
||||
builder.setLastFailure(randomSnapshotInvocationRecord());
|
||||
}
|
||||
return builder.build();
|
||||
}
|
||||
|
||||
public static SnapshotLifecyclePolicy createRandomPolicy(String policyId) {
|
||||
Map<String, Object> config = new HashMap<>();
|
||||
for (int i = 0; i < randomIntBetween(2, 5); i++) {
|
||||
config.put(randomAlphaOfLength(4), randomAlphaOfLength(4));
|
||||
}
|
||||
return new SnapshotLifecyclePolicy(policyId,
|
||||
randomAlphaOfLength(4),
|
||||
randomSchedule(),
|
||||
randomAlphaOfLength(4),
|
||||
config);
|
||||
}
|
||||
|
||||
private static String randomSchedule() {
|
||||
return randomIntBetween(0, 59) + " " +
|
||||
randomIntBetween(0, 59) + " " +
|
||||
randomIntBetween(0, 12) + " * * ?";
|
||||
}
|
||||
}
|
|
@ -0,0 +1,108 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.history;
|
||||
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.test.AbstractSerializingTestCase;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Arrays;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
public class SnapshotHistoryItemTests extends AbstractSerializingTestCase<SnapshotHistoryItem> {
|
||||
|
||||
@Override
|
||||
protected SnapshotHistoryItem doParseInstance(XContentParser parser) throws IOException {
|
||||
return SnapshotHistoryItem.parse(parser, this.getClass().getCanonicalName());
|
||||
}
|
||||
|
||||
@Override
|
||||
protected Writeable.Reader<SnapshotHistoryItem> instanceReader() {
|
||||
return SnapshotHistoryItem::new;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotHistoryItem createTestInstance() {
|
||||
long timestamp = randomNonNegativeLong();
|
||||
String policyId = randomAlphaOfLengthBetween(5, 10);
|
||||
String repository = randomAlphaOfLengthBetween(5, 10);
|
||||
String snapshotName = randomAlphaOfLengthBetween(5, 10);
|
||||
String operation = randomAlphaOfLengthBetween(5, 10);
|
||||
boolean success = randomBoolean();
|
||||
Map<String, Object> snapshotConfig = randomSnapshotConfiguration();
|
||||
String errorDetails = randomBoolean() ? null : randomAlphaOfLengthBetween(10, 20);
|
||||
|
||||
return new SnapshotHistoryItem(timestamp, policyId, repository, snapshotName, operation, success, snapshotConfig,
|
||||
errorDetails);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected SnapshotHistoryItem mutateInstance(SnapshotHistoryItem instance) {
|
||||
final int branch = between(0, 7);
|
||||
switch (branch) {
|
||||
case 0: // New timestamp
|
||||
return new SnapshotHistoryItem(
|
||||
randomValueOtherThan(instance.getTimestamp(), ESTestCase::randomNonNegativeLong),
|
||||
instance.getPolicyId(), instance.getRepository(), instance.getSnapshotName(), instance.getOperation(),
|
||||
instance.isSuccess(), instance.getSnapshotConfiguration(), instance.getErrorDetails());
|
||||
case 1: // new policyId
return new SnapshotHistoryItem(instance.getTimestamp(),
randomValueOtherThan(instance.getPolicyId(), () -> randomAlphaOfLengthBetween(5, 10)),
instance.getRepository(), instance.getSnapshotName(), instance.getOperation(), instance.isSuccess(),
instance.getSnapshotConfiguration(), instance.getErrorDetails());
case 2: // new repo name
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(),
randomValueOtherThan(instance.getRepository(), () -> randomAlphaOfLengthBetween(5, 10)),
instance.getSnapshotName(), instance.getOperation(), instance.isSuccess(),
instance.getSnapshotConfiguration(), instance.getErrorDetails());
|
||||
case 3:
|
||||
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(), instance.getRepository(),
|
||||
randomValueOtherThan(instance.getSnapshotName(), () -> randomAlphaOfLengthBetween(5, 10)),
|
||||
instance.getOperation(), instance.isSuccess(), instance.getSnapshotConfiguration(), instance.getErrorDetails());
|
||||
case 4:
|
||||
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(), instance.getRepository(),
|
||||
instance.getSnapshotName(),
|
||||
randomValueOtherThan(instance.getOperation(), () -> randomAlphaOfLengthBetween(5, 10)),
|
||||
instance.isSuccess(), instance.getSnapshotConfiguration(), instance.getErrorDetails());
|
||||
case 5:
|
||||
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(), instance.getRepository(),
|
||||
instance.getSnapshotName(),
|
||||
instance.getOperation(),
|
||||
instance.isSuccess() == false,
|
||||
instance.getSnapshotConfiguration(), instance.getErrorDetails());
|
||||
case 6:
|
||||
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(), instance.getRepository(),
|
||||
instance.getSnapshotName(), instance.getOperation(), instance.isSuccess(),
|
||||
randomValueOtherThan(instance.getSnapshotConfiguration(),
|
||||
SnapshotHistoryItemTests::randomSnapshotConfiguration),
|
||||
instance.getErrorDetails());
|
||||
case 7:
|
||||
return new SnapshotHistoryItem(instance.getTimestamp(), instance.getPolicyId(), instance.getRepository(),
|
||||
instance.getSnapshotName(), instance.getOperation(), instance.isSuccess(), instance.getSnapshotConfiguration(),
|
||||
randomValueOtherThan(instance.getErrorDetails(), () -> randomAlphaOfLengthBetween(10, 20)));
|
||||
default:
|
||||
throw new IllegalArgumentException("illegal randomization: " + branch);
|
||||
}
|
||||
}
|
||||
|
||||
public static Map<String, Object> randomSnapshotConfiguration() {
|
||||
Map<String, Object> configuration = new HashMap<>();
|
||||
configuration.put("indices", Arrays.asList(generateRandomStringArray(1, 10, false, false)));
|
||||
if (frequently()) {
|
||||
configuration.put("ignore_unavailable", randomBoolean());
|
||||
}
|
||||
if (frequently()) {
|
||||
configuration.put("include_global_state", randomBoolean());
|
||||
}
|
||||
if (frequently()) {
|
||||
configuration.put("partial", randomBoolean());
|
||||
}
|
||||
return configuration;
|
||||
}
|
||||
}
|
|
@ -0,0 +1,195 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.core.snapshotlifecycle.history;
|
||||
|
||||
import org.elasticsearch.action.index.IndexAction;
|
||||
import org.elasticsearch.action.index.IndexRequest;
|
||||
import org.elasticsearch.action.index.IndexResponse;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.index.shard.ShardId;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
import org.elasticsearch.threadpool.TestThreadPool;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
|
||||
import org.junit.After;
|
||||
import org.junit.Before;
|
||||
|
||||
import java.time.Instant;
|
||||
import java.time.ZoneOffset;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
|
||||
import static org.elasticsearch.xpack.core.indexlifecycle.LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING;
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore.getHistoryIndexNameForTime;
|
||||
import static org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry.INDEX_TEMPLATE_VERSION;
|
||||
import static org.hamcrest.Matchers.containsString;
|
||||
import static org.hamcrest.Matchers.instanceOf;
|
||||
import static org.hamcrest.core.IsEqual.equalTo;
|
||||
|
||||
public class SnapshotHistoryStoreTests extends ESTestCase {
|
||||
|
||||
private ThreadPool threadPool;
|
||||
private SnapshotLifecycleTemplateRegistryTests.VerifyingClient client;
|
||||
private SnapshotHistoryStore historyStore;
|
||||
|
||||
@Before
|
||||
public void setup() {
|
||||
threadPool = new TestThreadPool(this.getClass().getName());
|
||||
client = new SnapshotLifecycleTemplateRegistryTests.VerifyingClient(threadPool);
|
||||
historyStore = new SnapshotHistoryStore(Settings.EMPTY, client, ZoneOffset.UTC);
|
||||
}
|
||||
|
||||
@After
|
||||
@Override
|
||||
public void tearDown() throws Exception {
|
||||
super.tearDown();
|
||||
threadPool.shutdownNow();
|
||||
}
|
||||
|
||||
public void testNoActionIfDisabled() {
|
||||
Settings settings = Settings.builder().put(SLM_HISTORY_INDEX_ENABLED_SETTING.getKey(), false).build();
|
||||
SnapshotHistoryStore disabledHistoryStore = new SnapshotHistoryStore(settings, client, ZoneOffset.UTC);
|
||||
String policyId = randomAlphaOfLength(5);
|
||||
SnapshotLifecyclePolicy policy = randomSnapshotLifecyclePolicy(policyId);
|
||||
final long timestamp = randomNonNegativeLong();
|
||||
SnapshotLifecyclePolicy.ResolverContext context = new SnapshotLifecyclePolicy.ResolverContext(timestamp);
|
||||
String snapshotId = policy.generateSnapshotName(context);
|
||||
SnapshotHistoryItem record = SnapshotHistoryItem.successRecord(timestamp, policy, snapshotId);
|
||||
|
||||
client.setVerifier((a,r,l) -> {
|
||||
fail("the history store is disabled, no action should have been taken");
|
||||
return null;
|
||||
});
|
||||
disabledHistoryStore.putAsync(record);
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
public void testPut() throws Exception {
|
||||
String policyId = randomAlphaOfLength(5);
|
||||
SnapshotLifecyclePolicy policy = randomSnapshotLifecyclePolicy(policyId);
|
||||
final long timestamp = randomNonNegativeLong();
|
||||
SnapshotLifecyclePolicy.ResolverContext context = new SnapshotLifecyclePolicy.ResolverContext(timestamp);
|
||||
String snapshotId = policy.generateSnapshotName(context);
|
||||
{
|
||||
SnapshotHistoryItem record = SnapshotHistoryItem.successRecord(timestamp, policy, snapshotId);
|
||||
|
||||
AtomicInteger calledTimes = new AtomicInteger(0);
|
||||
client.setVerifier((action, request, listener) -> {
|
||||
calledTimes.incrementAndGet();
|
||||
assertThat(action, instanceOf(IndexAction.class));
|
||||
assertThat(request, instanceOf(IndexRequest.class));
|
||||
IndexRequest indexRequest = (IndexRequest) request;
|
||||
assertEquals(getHistoryIndexNameForTime(Instant.ofEpochMilli(timestamp).atZone(ZoneOffset.UTC)), indexRequest.index());
|
||||
final String indexedDocument = indexRequest.source().utf8ToString();
|
||||
assertThat(indexedDocument, containsString(policy.getId()));
|
||||
assertThat(indexedDocument, containsString(policy.getRepository()));
|
||||
assertThat(indexedDocument, containsString(snapshotId));
|
||||
if (policy.getConfig() != null) {
|
||||
assertContainsMap(indexedDocument, policy.getConfig());
|
||||
}
|
||||
assertNotNull(listener);
|
||||
// The content of this IndexResponse doesn't matter, so just make it 100% random
|
||||
return new IndexResponse(
|
||||
new ShardId(randomAlphaOfLength(5), randomAlphaOfLength(5), randomInt(100)),
|
||||
randomAlphaOfLength(5),
|
||||
randomAlphaOfLength(5),
|
||||
randomLongBetween(1,1000),
|
||||
randomLongBetween(1,1000),
|
||||
randomLongBetween(1,1000),
|
||||
randomBoolean());
|
||||
});
|
||||
|
||||
historyStore.putAsync(record);
|
||||
assertBusy(() -> assertThat(calledTimes.get(), equalTo(1)));
|
||||
}
|
||||
|
||||
{
|
||||
final String cause = randomAlphaOfLength(9);
|
||||
Exception failureException = new RuntimeException(cause);
|
||||
SnapshotHistoryItem record = SnapshotHistoryItem.failureRecord(timestamp, policy, snapshotId, failureException);
|
||||
|
||||
AtomicInteger calledTimes = new AtomicInteger(0);
|
||||
client.setVerifier((action, request, listener) -> {
|
||||
calledTimes.incrementAndGet();
|
||||
assertThat(action, instanceOf(IndexAction.class));
|
||||
assertThat(request, instanceOf(IndexRequest.class));
|
||||
IndexRequest indexRequest = (IndexRequest) request;
|
||||
assertEquals(getHistoryIndexNameForTime(Instant.ofEpochMilli(timestamp).atZone(ZoneOffset.UTC)), indexRequest.index());
|
||||
final String indexedDocument = indexRequest.source().utf8ToString();
|
||||
assertThat(indexedDocument, containsString(policy.getId()));
|
||||
assertThat(indexedDocument, containsString(policy.getRepository()));
|
||||
assertThat(indexedDocument, containsString(snapshotId));
|
||||
if (policy.getConfig() != null) {
|
||||
assertContainsMap(indexedDocument, policy.getConfig());
|
||||
}
|
||||
assertThat(indexedDocument, containsString("runtime_exception"));
|
||||
assertThat(indexedDocument, containsString(cause));
|
||||
assertNotNull(listener);
|
||||
// The content of this IndexResponse doesn't matter, so just make it 100% random
|
||||
return new IndexResponse(
|
||||
new ShardId(randomAlphaOfLength(5), randomAlphaOfLength(5), randomInt(100)),
|
||||
randomAlphaOfLength(5),
|
||||
randomAlphaOfLength(5),
|
||||
randomLongBetween(1,1000),
|
||||
randomLongBetween(1,1000),
|
||||
randomLongBetween(1,1000),
|
||||
randomBoolean());
|
||||
});
|
||||
|
||||
historyStore.putAsync(record);
|
||||
assertBusy(() -> assertThat(calledTimes.get(), equalTo(1)));
|
||||
}
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
private void assertContainsMap(String indexedDocument, Map<String, Object> map) {
|
||||
map.forEach((k, v) -> {
|
||||
assertThat(indexedDocument, containsString(k));
|
||||
if (v instanceof Map) {
|
||||
assertContainsMap(indexedDocument, (Map<String, Object>) v);
|
||||
} if (v instanceof Iterable) {
|
||||
((Iterable) v).forEach(elem -> {
|
||||
assertThat(indexedDocument, containsString(elem.toString()));
|
||||
});
|
||||
} else {
|
||||
assertThat(indexedDocument, containsString(v.toString()));
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
public void testIndexNameGeneration() {
|
||||
String indexTemplateVersion = INDEX_TEMPLATE_VERSION;
|
||||
assertThat(getHistoryIndexNameForTime(Instant.ofEpochMilli((long) 0).atZone(ZoneOffset.UTC)),
|
||||
equalTo(".slm-history-"+ indexTemplateVersion +"-1970.01"));
|
||||
assertThat(getHistoryIndexNameForTime(Instant.ofEpochMilli(100000000000L).atZone(ZoneOffset.UTC)),
|
||||
equalTo(".slm-history-" + indexTemplateVersion + "-1973.03"));
|
||||
assertThat(getHistoryIndexNameForTime(Instant.ofEpochMilli(1416582852000L).atZone(ZoneOffset.UTC)),
|
||||
equalTo(".slm-history-" + indexTemplateVersion + "-2014.11"));
|
||||
assertThat(getHistoryIndexNameForTime(Instant.ofEpochMilli(2833165811000L).atZone(ZoneOffset.UTC)),
|
||||
equalTo(".slm-history-" + indexTemplateVersion + "-2059.10"));
|
||||
}
|
||||
|
||||
public static SnapshotLifecyclePolicy randomSnapshotLifecyclePolicy(String id) {
|
||||
Map<String, Object> config = new HashMap<>();
|
||||
for (int i = 0; i < randomIntBetween(2, 5); i++) {
|
||||
config.put(randomAlphaOfLength(4), randomAlphaOfLength(4));
|
||||
}
|
||||
return new SnapshotLifecyclePolicy(id,
|
||||
randomAlphaOfLength(4),
|
||||
randomSchedule(),
|
||||
randomAlphaOfLength(4),
|
||||
config);
|
||||
}
|
||||
|
||||
private static String randomSchedule() {
|
||||
return randomIntBetween(0, 59) + " " +
|
||||
randomIntBetween(0, 59) + " " +
|
||||
randomIntBetween(0, 12) + " * * ?";
|
||||
}
|
||||
}
|
|
@ -0,0 +1,333 @@
|
|||
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.snapshotlifecycle.history;

import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateAction;
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlocks;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.TriFunction;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.test.ClusterServiceUtils;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.client.NoOpClient;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.indexlifecycle.DeleteAction;
import org.elasticsearch.xpack.core.indexlifecycle.IndexLifecycleMetadata;
import org.elasticsearch.xpack.core.indexlifecycle.LifecycleAction;
import org.elasticsearch.xpack.core.indexlifecycle.LifecyclePolicy;
import org.elasticsearch.xpack.core.indexlifecycle.LifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.indexlifecycle.LifecycleType;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.indexlifecycle.TimeseriesLifecycleType;
import org.elasticsearch.xpack.core.indexlifecycle.action.PutLifecycleAction;
import org.junit.After;
import org.junit.Before;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

import static org.elasticsearch.mock.orig.Mockito.when;
import static org.elasticsearch.xpack.core.indexlifecycle.LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING;
import static org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry.SLM_POLICY_NAME;
import static org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry.SLM_TEMPLATE_NAME;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.instanceOf;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

public class SnapshotLifecycleTemplateRegistryTests extends ESTestCase {
    private SnapshotLifecycleTemplateRegistry registry;
    private NamedXContentRegistry xContentRegistry;
    private ClusterService clusterService;
    private ThreadPool threadPool;
    private VerifyingClient client;

    @Before
    public void createRegistryAndClient() {
        threadPool = new TestThreadPool(this.getClass().getName());
        client = new VerifyingClient(threadPool);
        clusterService = ClusterServiceUtils.createClusterService(threadPool);
        List<NamedXContentRegistry.Entry> entries = new ArrayList<>(ClusterModule.getNamedXWriteables());
        entries.addAll(Arrays.asList(
            new NamedXContentRegistry.Entry(LifecycleType.class, new ParseField(TimeseriesLifecycleType.TYPE),
                (p) -> TimeseriesLifecycleType.INSTANCE),
            new NamedXContentRegistry.Entry(LifecycleAction.class, new ParseField(DeleteAction.NAME), DeleteAction::parse)));
        xContentRegistry = new NamedXContentRegistry(entries);
        registry = new SnapshotLifecycleTemplateRegistry(Settings.EMPTY, clusterService, threadPool, client, xContentRegistry);
    }

    @After
    @Override
    public void tearDown() throws Exception {
        super.tearDown();
        threadPool.shutdownNow();
    }

    public void testDisabledDoesNotAddTemplates() {
        Settings settings = Settings.builder().put(SLM_HISTORY_INDEX_ENABLED_SETTING.getKey(), false).build();
        SnapshotLifecycleTemplateRegistry disabledRegistry = new SnapshotLifecycleTemplateRegistry(settings, clusterService, threadPool,
            client, xContentRegistry);
        assertThat(disabledRegistry.getTemplateConfigs(), hasSize(0));
        assertThat(disabledRegistry.getPolicyConfigs(), hasSize(0));
    }

    @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/43950")
    public void testThatNonExistingTemplatesAreAddedImmediately() throws Exception {
        DiscoveryNode node = new DiscoveryNode("node", ESTestCase.buildNewFakeTransportAddress(), Version.CURRENT);
        DiscoveryNodes nodes = DiscoveryNodes.builder().localNodeId("node").masterNodeId("node").add(node).build();

        ClusterChangedEvent event = createClusterChangedEvent(Collections.emptyList(), nodes);

        AtomicInteger calledTimes = new AtomicInteger(0);
        client.setVerifier((action, request, listener) -> {
            if (action instanceof PutIndexTemplateAction) {
                calledTimes.incrementAndGet();
                assertThat(action, instanceOf(PutIndexTemplateAction.class));
                assertThat(request, instanceOf(PutIndexTemplateRequest.class));
                final PutIndexTemplateRequest putRequest = (PutIndexTemplateRequest) request;
                assertThat(putRequest.name(), equalTo(SLM_TEMPLATE_NAME));
                assertThat(putRequest.settings().get("index.lifecycle.name"), equalTo(SLM_POLICY_NAME));
                assertNotNull(listener);
                return new TestPutIndexTemplateResponse(true);
            } else if (action instanceof PutLifecycleAction) {
                // Ignore this, it's verified in another test
                return new PutLifecycleAction.Response(true);
            } else {
                fail("client called with unexpected request:" + request.toString());
                return null;
            }
        });
        registry.clusterChanged(event);
        assertBusy(() -> assertThat(calledTimes.get(), equalTo(registry.getTemplateConfigs().size())));

        calledTimes.set(0);
        // now delete one template from the cluster state and lets retry
        ClusterChangedEvent newEvent = createClusterChangedEvent(Collections.emptyList(), nodes);
        registry.clusterChanged(newEvent);
        assertBusy(() -> assertThat(calledTimes.get(), equalTo(1)));
    }

    public void testThatNonExistingPoliciesAreAddedImmediately() throws Exception {
        DiscoveryNode node = new DiscoveryNode("node", ESTestCase.buildNewFakeTransportAddress(), Version.CURRENT);
        DiscoveryNodes nodes = DiscoveryNodes.builder().localNodeId("node").masterNodeId("node").add(node).build();

        AtomicInteger calledTimes = new AtomicInteger(0);
        client.setVerifier((action, request, listener) -> {
            if (action instanceof PutLifecycleAction) {
                calledTimes.incrementAndGet();
                assertThat(action, instanceOf(PutLifecycleAction.class));
                assertThat(request, instanceOf(PutLifecycleAction.Request.class));
                final PutLifecycleAction.Request putRequest = (PutLifecycleAction.Request) request;
                assertThat(putRequest.getPolicy().getName(), equalTo(SLM_POLICY_NAME));
                assertNotNull(listener);
                return new PutLifecycleAction.Response(true);
            } else if (action instanceof PutIndexTemplateAction) {
                // Ignore this, it's verified in another test
                return new TestPutIndexTemplateResponse(true);
            } else {
                fail("client called with unexpected request:" + request.toString());
                return null;
            }
        });

        ClusterChangedEvent event = createClusterChangedEvent(Collections.emptyList(), nodes);
        registry.clusterChanged(event);
        assertBusy(() -> assertThat(calledTimes.get(), equalTo(1)));
    }

    public void testPolicyAlreadyExists() {
        DiscoveryNode node = new DiscoveryNode("node", ESTestCase.buildNewFakeTransportAddress(), Version.CURRENT);
        DiscoveryNodes nodes = DiscoveryNodes.builder().localNodeId("node").masterNodeId("node").add(node).build();

        Map<String, LifecyclePolicy> policyMap = new HashMap<>();
        List<LifecyclePolicy> policies = registry.getPolicyConfigs().stream()
            .map(policyConfig -> policyConfig.load(xContentRegistry))
            .collect(Collectors.toList());
        assertThat(policies, hasSize(1));
        LifecyclePolicy policy = policies.get(0);
        policyMap.put(policy.getName(), policy);

        client.setVerifier((action, request, listener) -> {
            if (action instanceof PutIndexTemplateAction) {
                // Ignore this, it's verified in another test
                return new TestPutIndexTemplateResponse(true);
            } else if (action instanceof PutLifecycleAction) {
fail("if the policy already exists it should be re-put");
|
||||
            } else {
                fail("client called with unexpected request:" + request.toString());
            }
            return null;
        });

        ClusterChangedEvent event = createClusterChangedEvent(Collections.emptyList(), policyMap, nodes);
        registry.clusterChanged(event);
    }

    public void testPolicyAlreadyExistsButDiffers() throws IOException {
        DiscoveryNode node = new DiscoveryNode("node", ESTestCase.buildNewFakeTransportAddress(), Version.CURRENT);
        DiscoveryNodes nodes = DiscoveryNodes.builder().localNodeId("node").masterNodeId("node").add(node).build();

        Map<String, LifecyclePolicy> policyMap = new HashMap<>();
        String policyStr = "{\"phases\":{\"delete\":{\"min_age\":\"1m\",\"actions\":{\"delete\":{}}}}}";
        List<LifecyclePolicy> policies = registry.getPolicyConfigs().stream()
            .map(policyConfig -> policyConfig.load(xContentRegistry))
            .collect(Collectors.toList());
        assertThat(policies, hasSize(1));
        LifecyclePolicy policy = policies.get(0);

        client.setVerifier((action, request, listener) -> {
            if (action instanceof PutIndexTemplateAction) {
                // Ignore this, it's verified in another test
                return new TestPutIndexTemplateResponse(true);
            } else if (action instanceof PutLifecycleAction) {
fail("if the policy already exists it should be re-put");
|
||||
            } else {
                fail("client called with unexpected request:" + request.toString());
            }
            return null;
        });

        try (XContentParser parser = XContentType.JSON.xContent()
            .createParser(xContentRegistry, LoggingDeprecationHandler.THROW_UNSUPPORTED_OPERATION, policyStr)) {
            LifecyclePolicy different = LifecyclePolicy.parse(parser, policy.getName());
            policyMap.put(policy.getName(), different);
            ClusterChangedEvent event = createClusterChangedEvent(Collections.emptyList(), policyMap, nodes);
            registry.clusterChanged(event);
        }
    }

    public void testThatMissingMasterNodeDoesNothing() {
        DiscoveryNode localNode = new DiscoveryNode("node", ESTestCase.buildNewFakeTransportAddress(), Version.CURRENT);
        DiscoveryNodes nodes = DiscoveryNodes.builder().localNodeId("node").add(localNode).build();

        client.setVerifier((a,r,l) -> {
            fail("if the master is missing nothing should happen");
            return null;
        });

        ClusterChangedEvent event = createClusterChangedEvent(Arrays.asList(SLM_TEMPLATE_NAME), nodes);
        registry.clusterChanged(event);
    }

    public void testValidate() {
        assertFalse(registry.validate(createClusterState(Settings.EMPTY, Collections.emptyList(), Collections.emptyMap(), null)));
        assertFalse(registry.validate(createClusterState(Settings.EMPTY, Collections.singletonList(SLM_TEMPLATE_NAME),
            Collections.emptyMap(), null)));

        Map<String, LifecyclePolicy> policyMap = new HashMap<>();
        policyMap.put(SLM_POLICY_NAME, new LifecyclePolicy(SLM_POLICY_NAME, new HashMap<>()));
        assertFalse(registry.validate(createClusterState(Settings.EMPTY, Collections.emptyList(), policyMap, null)));

        assertTrue(registry.validate(createClusterState(Settings.EMPTY, Collections.singletonList(SLM_TEMPLATE_NAME), policyMap, null)));
    }

    // -------------

    /**
     * A client that delegates to a verifying function for action/request/listener
     */
    public static class VerifyingClient extends NoOpClient {

        private TriFunction<ActionType<?>, ActionRequest, ActionListener<?>, ActionResponse> verifier = (a, r, l) -> {
            fail("verifier not set");
            return null;
        };

        VerifyingClient(ThreadPool threadPool) {
            super(threadPool);
        }

        @Override
        @SuppressWarnings("unchecked")
        protected <Request extends ActionRequest, Response extends ActionResponse> void doExecute(ActionType<Response> action,
                                                                                                  Request request,
                                                                                                  ActionListener<Response> listener) {
            listener.onResponse((Response) verifier.apply(action, request, listener));
        }

        public VerifyingClient setVerifier(TriFunction<ActionType<?>, ActionRequest, ActionListener<?>, ActionResponse> verifier) {
            this.verifier = verifier;
            return this;
        }
    }

    private ClusterChangedEvent createClusterChangedEvent(List<String> existingTemplateNames, DiscoveryNodes nodes) {
        return createClusterChangedEvent(existingTemplateNames, Collections.emptyMap(), nodes);
    }

    private ClusterChangedEvent createClusterChangedEvent(List<String> existingTemplateNames,
                                                          Map<String, LifecyclePolicy> existingPolicies,
                                                          DiscoveryNodes nodes) {
        ClusterState cs = createClusterState(Settings.EMPTY, existingTemplateNames, existingPolicies, nodes);
        ClusterChangedEvent realEvent = new ClusterChangedEvent("created-from-test", cs,
            ClusterState.builder(new ClusterName("test")).build());
        ClusterChangedEvent event = spy(realEvent);
        when(event.localNodeMaster()).thenReturn(nodes.isLocalNodeElectedMaster());

        return event;
    }

    private ClusterState createClusterState(Settings nodeSettings,
                                            List<String> existingTemplateNames,
                                            Map<String, LifecyclePolicy> existingPolicies,
                                            DiscoveryNodes nodes) {
        ImmutableOpenMap.Builder<String, IndexTemplateMetaData> indexTemplates = ImmutableOpenMap.builder();
        for (String name : existingTemplateNames) {
            indexTemplates.put(name, mock(IndexTemplateMetaData.class));
        }

        Map<String, LifecyclePolicyMetadata> existingILMMeta = existingPolicies.entrySet().stream()
            .collect(Collectors.toMap(Map.Entry::getKey, e -> new LifecyclePolicyMetadata(e.getValue(), Collections.emptyMap(), 1, 1)));
        IndexLifecycleMetadata ilmMeta = new IndexLifecycleMetadata(existingILMMeta, OperationMode.RUNNING);

        return ClusterState.builder(new ClusterName("test"))
            .metaData(MetaData.builder()
                .templates(indexTemplates.build())
                .transientSettings(nodeSettings)
                .putCustom(IndexLifecycleMetadata.TYPE, ilmMeta)
                .build())
            .blocks(new ClusterBlocks.Builder().build())
            .nodes(nodes)
            .build();
    }

    private static class TestPutIndexTemplateResponse extends AcknowledgedResponse {
        TestPutIndexTemplateResponse(boolean acknowledged) {
            super(acknowledged);
        }
    }
}

@@ -0,0 +1,325 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle;

import org.apache.http.util.EntityUtils;
import org.elasticsearch.action.index.IndexRequestBuilder;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.DeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.startsWith;

public class SnapshotLifecycleIT extends ESRestTestCase {

    public void testMissingRepo() throws Exception {
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy("test-policy", "snap",
            "*/1 * * * * ?", "missing-repo", Collections.emptyMap());

        Request putLifecycle = new Request("PUT", "/_slm/policy/test-policy");
        XContentBuilder lifecycleBuilder = JsonXContent.contentBuilder();
        policy.toXContent(lifecycleBuilder, ToXContent.EMPTY_PARAMS);
        putLifecycle.setJsonEntity(Strings.toString(lifecycleBuilder));
        ResponseException e = expectThrows(ResponseException.class, () -> client().performRequest(putLifecycle));
        Response resp = e.getResponse();
        assertThat(resp.getStatusLine().getStatusCode(), equalTo(400));
        String jsonError = EntityUtils.toString(resp.getEntity());
        assertThat(jsonError, containsString("\"type\":\"illegal_argument_exception\""));
        assertThat(jsonError, containsString("\"reason\":\"no such repository [missing-repo]\""));
    }

    @SuppressWarnings("unchecked")
    public void testFullPolicySnapshot() throws Exception {
        final String indexName = "test";
        final String policyName = "test-policy";
        final String repoId = "my-repo";
        int docCount = randomIntBetween(10, 50);
        List<IndexRequestBuilder> indexReqs = new ArrayList<>();
        for (int i = 0; i < docCount; i++) {
            index(client(), indexName, "" + i, "foo", "bar");
        }

        // Create a snapshot repo
        initializeRepo(repoId);

        createSnapshotPolicy(policyName, "snap", "*/1 * * * * ?", repoId, indexName, true);

        // Check that the snapshot was actually taken
        assertBusy(() -> {
            Response response = client().performRequest(new Request("GET", "/_snapshot/" + repoId + "/_all"));
            Map<String, Object> snapshotResponseMap;
            try (InputStream is = response.getEntity().getContent()) {
                snapshotResponseMap = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true);
            }
            assertThat(snapshotResponseMap.size(), greaterThan(0));
            assertThat(((List<Map<String, Object>>) snapshotResponseMap.get("snapshots")).size(), greaterThan(0));
            Map<String, Object> snapResponse = ((List<Map<String, Object>>) snapshotResponseMap.get("snapshots")).get(0);
            assertThat(snapResponse.get("snapshot").toString(), startsWith("snap-"));
            assertThat(snapResponse.get("indices"), equalTo(Collections.singletonList(indexName)));
            Map<String, Object> metadata = (Map<String, Object>) snapResponse.get("metadata");
            assertNotNull(metadata);
            assertThat(metadata.get("policy"), equalTo(policyName));
            assertHistoryIsPresent(policyName, true, repoId);

            // Check that the last success date was written to the cluster state
            Request getReq = new Request("GET", "/_slm/policy/" + policyName);
            Response policyMetadata = client().performRequest(getReq);
            Map<String, Object> policyResponseMap;
            try (InputStream is = policyMetadata.getEntity().getContent()) {
                policyResponseMap = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true);
            }
            Map<String, Object> policyMetadataMap = (Map<String, Object>) policyResponseMap.get(policyName);
            Map<String, Object> lastSuccessObject = (Map<String, Object>) policyMetadataMap.get("last_success");
            assertNotNull(lastSuccessObject);
            Long lastSuccess = (Long) lastSuccessObject.get("time");
            Long modifiedDate = (Long) policyMetadataMap.get("modified_date_millis");
            assertNotNull(lastSuccess);
            assertNotNull(modifiedDate);
            assertThat(lastSuccess, greaterThan(modifiedDate));

            String lastSnapshotName = (String) lastSuccessObject.get("snapshot_name");
            assertThat(lastSnapshotName, startsWith("snap-"));

            assertHistoryIsPresent(policyName, true, repoId);
        });

        Request delReq = new Request("DELETE", "/_slm/policy/" + policyName);
        assertOK(client().performRequest(delReq));

        // It's possible there could have been a snapshot in progress when the
        // policy is deleted, so wait for it to be finished
        assertBusy(() -> {
            assertThat(wipeSnapshots().size(), equalTo(0));
        });
    }

    @SuppressWarnings("unchecked")
    public void testPolicyFailure() throws Exception {
        final String policyName = "test-policy";
        final String repoName = "test-repo";
        final String indexPattern = "index-doesnt-exist";
        initializeRepo(repoName);

        // Create a policy with ignore_unavailable: false and an index that doesn't exist
        createSnapshotPolicy(policyName, "snap", "*/1 * * * * ?", repoName, indexPattern, false);

        assertBusy(() -> {
            // Check that the failure is written to the cluster state
            Request getReq = new Request("GET", "/_slm/policy/" + policyName);
            Response policyMetadata = client().performRequest(getReq);
            try (InputStream is = policyMetadata.getEntity().getContent()) {
                Map<String, Object> responseMap = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true);
                Map<String, Object> policyMetadataMap = (Map<String, Object>) responseMap.get(policyName);
                Map<String, Object> lastFailureObject = (Map<String, Object>) policyMetadataMap.get("last_failure");
                assertNotNull(lastFailureObject);

                Long lastFailure = (Long) lastFailureObject.get("time");
                Long modifiedDate = (Long) policyMetadataMap.get("modified_date_millis");
                assertNotNull(lastFailure);
                assertNotNull(modifiedDate);
                assertThat(lastFailure, greaterThan(modifiedDate));

                String lastFailureInfo = (String) lastFailureObject.get("details");
                assertNotNull(lastFailureInfo);
                assertThat(lastFailureInfo, containsString("no such index [index-doesnt-exist]"));

                String snapshotName = (String) lastFailureObject.get("snapshot_name");
                assertNotNull(snapshotName);
                assertThat(snapshotName, startsWith("snap-"));
            }
            assertHistoryIsPresent(policyName, false, repoName);
        });

        Request delReq = new Request("DELETE", "/_slm/policy/" + policyName);
        assertOK(client().performRequest(delReq));
    }

    public void testPolicyManualExecution() throws Exception {
        final String indexName = "test";
        final String policyName = "test-policy";
        final String repoId = "my-repo";
        int docCount = randomIntBetween(10, 50);
        List<IndexRequestBuilder> indexReqs = new ArrayList<>();
        for (int i = 0; i < docCount; i++) {
            index(client(), indexName, "" + i, "foo", "bar");
        }

        // Create a snapshot repo
        initializeRepo(repoId);

        createSnapshotPolicy(policyName, "snap", "1 2 3 4 5 ?", repoId, indexName, true);

        ResponseException badResp = expectThrows(ResponseException.class,
            () -> client().performRequest(new Request("PUT", "/_slm/policy/" + policyName + "-bad/_execute")));
        assertThat(EntityUtils.toString(badResp.getResponse().getEntity()),
            containsString("no such snapshot lifecycle policy [" + policyName + "-bad]"));

        Response goodResp = client().performRequest(new Request("PUT", "/_slm/policy/" + policyName + "/_execute"));

        try (XContentParser parser = JsonXContent.jsonXContent.createParser(NamedXContentRegistry.EMPTY,
            DeprecationHandler.THROW_UNSUPPORTED_OPERATION, EntityUtils.toByteArray(goodResp.getEntity()))) {
            final String snapshotName = parser.mapStrings().get("snapshot_name");

            // Check that the executed snapshot is created
            assertBusy(() -> {
                try {
                    Response response = client().performRequest(new Request("GET", "/_snapshot/" + repoId + "/" + snapshotName));
                    Map<String, Object> snapshotResponseMap;
                    try (InputStream is = response.getEntity().getContent()) {
                        snapshotResponseMap = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true);
                    }
                    assertThat(snapshotResponseMap.size(), greaterThan(0));
                    final Map<String, Object> metadata = extractMetadata(snapshotResponseMap, snapshotName);
                    assertNotNull(metadata);
                    assertThat(metadata.get("policy"), equalTo(policyName));
                    assertHistoryIsPresent(policyName, true, repoId);
                } catch (ResponseException e) {
                    fail("expected snapshot to exist but it does not: " + EntityUtils.toString(e.getResponse().getEntity()));
                }
            });
        }

        Request delReq = new Request("DELETE", "/_slm/policy/" + policyName);
        assertOK(client().performRequest(delReq));

        // It's possible there could have been a snapshot in progress when the
        // policy is deleted, so wait for it to be finished
        assertBusy(() -> {
            assertThat(wipeSnapshots().size(), equalTo(0));
        });
    }

    @SuppressWarnings("unchecked")
    private static Map<String, Object> extractMetadata(Map<String, Object> snapshotResponseMap, String snapshotPrefix) {
        List<Map<String, Object>> snapshots = ((List<Map<String, Object>>) snapshotResponseMap.get("snapshots"));
        return snapshots.stream()
            .filter(snapshot -> ((String) snapshot.get("snapshot")).startsWith(snapshotPrefix))
            .map(snapshot -> (Map<String, Object>) snapshot.get("metadata"))
            .findFirst()
            .orElse(null);
    }

    // This method should be called inside an assertBusy, it has no retry logic of its own
    private void assertHistoryIsPresent(String policyName, boolean success, String repository) throws IOException {
        final Request historySearchRequest = new Request("GET", ".slm-history*/_search");
        historySearchRequest.setJsonEntity("{\n" +
            " \"query\": {\n" +
            " \"bool\": {\n" +
            " \"must\": [\n" +
            " {\n" +
            " \"term\": {\n" +
            " \"policy\": \"" + policyName + "\"\n" +
            " }\n" +
            " },\n" +
            " {\n" +
            " \"term\": {\n" +
            " \"success\": " + success + "\n" +
            " }\n" +
            " },\n" +
            " {\n" +
            " \"term\": {\n" +
            " \"repository\": \"" + repository + "\"\n" +
            " }\n" +
            " },\n" +
            " {\n" +
            " \"term\": {\n" +
            " \"operation\": \"CREATE\"\n" +
            " }\n" +
            " }\n" +
            " ]\n" +
            " }\n" +
            " }\n" +
            "}");
        Response historyResponse;
        try {
            historyResponse = client().performRequest(historySearchRequest);
            Map<String, Object> historyResponseMap;
            try (InputStream is = historyResponse.getEntity().getContent()) {
                historyResponseMap = XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true);
            }
            assertThat((int)((Map<String, Object>) ((Map<String, Object>) historyResponseMap.get("hits")).get("total")).get("value"),
                greaterThanOrEqualTo(1));
        } catch (ResponseException e) {
            // Throw AssertionError instead of an exception if the search fails so that assertBusy works as expected
            logger.error(e);
            fail("failed to perform search:" + e.getMessage());
        }
    }

    private void createSnapshotPolicy(String policyName, String snapshotNamePattern, String schedule, String repoId,
                                      String indexPattern, boolean ignoreUnavailable) throws IOException {
        Map<String, Object> snapConfig = new HashMap<>();
        snapConfig.put("indices", Collections.singletonList(indexPattern));
        snapConfig.put("ignore_unavailable", ignoreUnavailable);
        if (randomBoolean()) {
            Map<String, Object> metadata = new HashMap<>();
            int fieldCount = randomIntBetween(2,5);
            for (int i = 0; i < fieldCount; i++) {
                metadata.put(randomValueOtherThanMany(key -> "policy".equals(key) || metadata.containsKey(key),
                    () -> randomAlphaOfLength(5)), randomAlphaOfLength(4));
            }
        }
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(policyName, snapshotNamePattern, schedule, repoId, snapConfig);

        Request putLifecycle = new Request("PUT", "/_slm/policy/" + policyName);
        XContentBuilder lifecycleBuilder = JsonXContent.contentBuilder();
        policy.toXContent(lifecycleBuilder, ToXContent.EMPTY_PARAMS);
        putLifecycle.setJsonEntity(Strings.toString(lifecycleBuilder));
        assertOK(client().performRequest(putLifecycle));
    }

    private void initializeRepo(String repoName) throws IOException {
Request request = new Request("PUT", "/_snapshot/" + repoName);
|
||||
request.setJsonEntity(Strings
|
||||
.toString(JsonXContent.contentBuilder()
|
||||
.startObject()
|
||||
.field("type", "fs")
|
||||
.startObject("settings")
|
||||
.field("compress", randomBoolean())
|
||||
.field("location", System.getProperty("tests.path.repo"))
|
||||
.field("max_snapshot_bytes_per_sec", "256b")
|
||||
.endObject()
|
||||
.endObject()));
|
||||
assertOK(client().performRequest(request));
|
||||
}
|
||||
|
||||
private static void index(RestClient client, String index, String id, Object... fields) throws IOException {
|
||||
XContentBuilder document = jsonBuilder().startObject();
|
||||
for (int i = 0; i < fields.length; i += 2) {
|
||||
document.field((String) fields[i], fields[i + 1]);
|
||||
}
|
||||
document.endObject();
|
||||
final Request request = new Request("POST", "/" + index + "/_doc/" + id);
|
||||
request.setJsonEntity(Strings.toString(document));
|
||||
assertOK(client.performRequest(request));
|
||||
}
|
||||
}
|
|
@@ -4,6 +4,7 @@ apply plugin: 'elasticsearch.rest-test'

dependencies {
  testCompile project(path: xpackProject('plugin').path, configuration: 'testArtifacts')
  testCompile project(":client:rest-high-level")
}

def clusterCredentials = [username: System.getProperty('tests.rest.cluster.username', 'test_admin'),

@@ -7,12 +7,23 @@ package org.elasticsearch.xpack.security;

import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;
import org.elasticsearch.client.Node;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.snapshotlifecycle.DeleteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.ExecuteSnapshotLifecyclePolicyResponse;
import org.elasticsearch.client.snapshotlifecycle.GetSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.PutSnapshotLifecyclePolicyRequest;
import org.elasticsearch.client.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;

@@ -23,6 +34,7 @@ import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.repositories.fs.FsRepository;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.core.indexlifecycle.DeleteAction;

@@ -35,6 +47,8 @@ import org.junit.Before;

import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import static java.util.Collections.singletonMap;

@@ -126,6 +140,95 @@ public class PermissionsIT extends ESRestTestCase {
        });
    }

    public void testSLMWithPermissions() throws Exception {
        createIndexAsAdmin("index", Settings.builder().put("index.number_of_replicas", 0).build(), "");

        // Set up two roles and users, one for reading SLM, another for managing SLM
        Request roleRequest = new Request("PUT", "/_security/role/slm-read");
        roleRequest.setJsonEntity("{ \"cluster\": [\"read_slm\"] }");
        assertOK(adminClient().performRequest(roleRequest));
        roleRequest = new Request("PUT", "/_security/role/slm-manage");
        roleRequest.setJsonEntity("{ \"cluster\": [\"manage_slm\", \"create_snapshot\"]," +
            "\"indices\": [{ \"names\": [\".slm-history*\"],\"privileges\": [\"all\"] }] }");
        assertOK(adminClient().performRequest(roleRequest));

        createUser("slm_admin", "slm-pass", "slm-manage");
        createUser("slm_user", "slm-user-pass", "slm-read");

        final HighLevelClient hlAdminClient = new HighLevelClient(adminClient());

        // Build two high level clients, each using a different user
        final RestClientBuilder adminBuilder = RestClient.builder(adminClient().getNodes().toArray(new Node[0]));
        final String adminToken = basicAuthHeaderValue("slm_admin", new SecureString("slm-pass".toCharArray()));
        configureClient(adminBuilder, Settings.builder()
            .put(ThreadContext.PREFIX + ".Authorization", adminToken)
            .build());
        adminBuilder.setStrictDeprecationMode(true);
        final RestHighLevelClient adminHLRC = new RestHighLevelClient(adminBuilder);

        final RestClientBuilder userBuilder = RestClient.builder(adminClient().getNodes().toArray(new Node[0]));
        final String userToken = basicAuthHeaderValue("slm_user", new SecureString("slm-user-pass".toCharArray()));
        configureClient(userBuilder, Settings.builder()
            .put(ThreadContext.PREFIX + ".Authorization", userToken)
            .build());
        userBuilder.setStrictDeprecationMode(true);
        final RestHighLevelClient readHlrc = new RestHighLevelClient(userBuilder);

        PutRepositoryRequest repoRequest = new PutRepositoryRequest();

        Settings.Builder settingsBuilder = Settings.builder().put("location", ".");
        repoRequest.settings(settingsBuilder);
        repoRequest.name("my_repository");
        repoRequest.type(FsRepository.TYPE);
        org.elasticsearch.action.support.master.AcknowledgedResponse response =
            hlAdminClient.snapshot().createRepository(repoRequest, RequestOptions.DEFAULT);
        assertTrue(response.isAcknowledged());

        Map<String, Object> config = new HashMap<>();
        config.put("indices", Collections.singletonList("index"));
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy(
            "policy_id", "name", "1 2 3 * * ?", "my_repository", config);
        PutSnapshotLifecyclePolicyRequest request = new PutSnapshotLifecyclePolicyRequest(policy);

        expectThrows(ElasticsearchStatusException.class,
            () -> readHlrc.indexLifecycle().putSnapshotLifecyclePolicy(request, RequestOptions.DEFAULT));

        adminHLRC.indexLifecycle().putSnapshotLifecyclePolicy(request, RequestOptions.DEFAULT);

        GetSnapshotLifecyclePolicyRequest getRequest = new GetSnapshotLifecyclePolicyRequest("policy_id");
        readHlrc.indexLifecycle().getSnapshotLifecyclePolicy(getRequest, RequestOptions.DEFAULT);
        adminHLRC.indexLifecycle().getSnapshotLifecyclePolicy(getRequest, RequestOptions.DEFAULT);

        ExecuteSnapshotLifecyclePolicyRequest executeRequest = new ExecuteSnapshotLifecyclePolicyRequest("policy_id");
        expectThrows(ElasticsearchStatusException.class, () ->
            readHlrc.indexLifecycle().executeSnapshotLifecyclePolicy(executeRequest, RequestOptions.DEFAULT));

        ExecuteSnapshotLifecyclePolicyResponse executeResp =
            adminHLRC.indexLifecycle().executeSnapshotLifecyclePolicy(executeRequest, RequestOptions.DEFAULT);

        DeleteSnapshotLifecyclePolicyRequest deleteRequest = new DeleteSnapshotLifecyclePolicyRequest("policy_id");
        expectThrows(ElasticsearchStatusException.class, () ->
            readHlrc.indexLifecycle().deleteSnapshotLifecyclePolicy(deleteRequest, RequestOptions.DEFAULT));

        adminHLRC.indexLifecycle().deleteSnapshotLifecyclePolicy(deleteRequest, RequestOptions.DEFAULT);

        // Delete snapshot to clean up and make sure it's not on-going.
        // This is inside an assertBusy because the snapshot may not
        // yet exist (in which case it throws an error)
        assertBusy(() -> {
            try {
                DeleteSnapshotRequest delReq = new DeleteSnapshotRequest("my_repository", executeResp.getSnapshotName());
                hlAdminClient.snapshot().delete(delReq, RequestOptions.DEFAULT);
            } catch (ElasticsearchStatusException e) {
                fail("got exception: " + e);
            }
        });

        hlAdminClient.close();
        readHlrc.close();
        adminHLRC.close();
    }

    public void testCanViewExplainOnUnmanagedIndex() throws Exception {
        createIndexAsAdmin("view-only-ilm", indexSettingsWithPolicy, "");
        Request request = new Request("GET", "/view-only-ilm/_ilm/explain");

@@ -262,4 +365,10 @@ public class PermissionsIT extends ESRestTestCase {
        Request request = new Request("POST", "/" + index + "/_refresh");
        assertOK(adminClient().performRequest(request));
    }

    private static class HighLevelClient extends RestHighLevelClient {
        private HighLevelClient(RestClient restClient) {
            super(restClient, (client) -> {}, Collections.emptyList());
        }
    }
}

@@ -6,6 +6,7 @@
package org.elasticsearch.xpack.indexlifecycle;

import org.apache.lucene.util.SetOnce;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.client.Client;

@@ -23,6 +24,7 @@ import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.core.internal.io.IOUtils;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.plugins.ActionPlugin;

@@ -58,6 +60,13 @@ import org.elasticsearch.xpack.core.indexlifecycle.action.RemoveIndexLifecyclePo
import org.elasticsearch.xpack.core.indexlifecycle.action.RetryAction;
import org.elasticsearch.xpack.core.indexlifecycle.action.StartILMAction;
import org.elasticsearch.xpack.core.indexlifecycle.action.StopILMAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.DeleteSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.ExecuteSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.PutSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotLifecycleTemplateRegistry;
import org.elasticsearch.xpack.indexlifecycle.action.RestDeleteLifecycleAction;
import org.elasticsearch.xpack.indexlifecycle.action.RestExplainLifecycleAction;
import org.elasticsearch.xpack.indexlifecycle.action.RestGetLifecycleAction;

@@ -78,7 +87,18 @@ import org.elasticsearch.xpack.indexlifecycle.action.TransportRemoveIndexLifecyc
import org.elasticsearch.xpack.indexlifecycle.action.TransportRetryAction;
import org.elasticsearch.xpack.indexlifecycle.action.TransportStartILMAction;
import org.elasticsearch.xpack.indexlifecycle.action.TransportStopILMAction;
import org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService;
import org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleTask;
import org.elasticsearch.xpack.snapshotlifecycle.action.RestDeleteSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.RestExecuteSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.RestGetSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.RestPutSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.TransportDeleteSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.TransportExecuteSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.TransportGetSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.action.TransportPutSnapshotLifecycleAction;

import java.io.IOException;
import java.time.Clock;
import java.util.ArrayList;
import java.util.Arrays;

@@ -91,6 +111,8 @@ import static java.util.Collections.emptyList;

public class IndexLifecycle extends Plugin implements ActionPlugin {
    private final SetOnce<IndexLifecycleService> indexLifecycleInitialisationService = new SetOnce<>();
    private final SetOnce<SnapshotLifecycleService> snapshotLifecycleService = new SetOnce<>();
    private final SetOnce<SnapshotHistoryStore> snapshotHistoryStore = new SetOnce<>();
    private Settings settings;
    private boolean enabled;
    private boolean transportClientMode;

@@ -124,7 +146,8 @@ public class IndexLifecycle extends Plugin implements ActionPlugin {
            LifecycleSettings.LIFECYCLE_POLL_INTERVAL_SETTING,
            LifecycleSettings.LIFECYCLE_NAME_SETTING,
            LifecycleSettings.LIFECYCLE_INDEXING_COMPLETE_SETTING,
            RolloverAction.LIFECYCLE_ROLLOVER_ALIAS_SETTING);
            RolloverAction.LIFECYCLE_ROLLOVER_ALIAS_SETTING,
            LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING);
    }

    @Override

@@ -137,12 +160,17 @@ public class IndexLifecycle extends Plugin implements ActionPlugin {
        }
        indexLifecycleInitialisationService.set(new IndexLifecycleService(settings, client, clusterService, threadPool,
            getClock(), System::currentTimeMillis, xContentRegistry));
        return Collections.singletonList(indexLifecycleInitialisationService.get());
        SnapshotLifecycleTemplateRegistry templateRegistry = new SnapshotLifecycleTemplateRegistry(settings, clusterService, threadPool,
            client, xContentRegistry);
        snapshotHistoryStore.set(new SnapshotHistoryStore(settings, client, getClock().getZone()));
        snapshotLifecycleService.set(new SnapshotLifecycleService(settings,
            () -> new SnapshotLifecycleTask(client, clusterService, snapshotHistoryStore.get()), clusterService, getClock()));
        return Arrays.asList(indexLifecycleInitialisationService.get(), snapshotLifecycleService.get(), snapshotHistoryStore.get());
    }

    @Override
    public List<Entry> getNamedWriteables() {
        return Arrays.asList();
        return Collections.emptyList();
    }

    @Override

@@ -151,6 +179,8 @@ public class IndexLifecycle extends Plugin implements ActionPlugin {
            // Custom Metadata
            new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(IndexLifecycleMetadata.TYPE),
                parser -> IndexLifecycleMetadata.PARSER.parse(parser, null)),
            new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField(SnapshotLifecycleMetadata.TYPE),
                parser -> SnapshotLifecycleMetadata.PARSER.parse(parser, null)),
            // Lifecycle Types
            new NamedXContentRegistry.Entry(LifecycleType.class, new ParseField(TimeseriesLifecycleType.TYPE),
                (p, c) -> TimeseriesLifecycleType.INSTANCE),

@@ -184,7 +214,12 @@ public class IndexLifecycle extends Plugin implements ActionPlugin {
            new RestRetryAction(settings, restController),
            new RestStopAction(settings, restController),
            new RestStartILMAction(settings, restController),
            new RestGetStatusAction(settings, restController)
            new RestGetStatusAction(settings, restController),
            // Snapshot lifecycle actions
            new RestPutSnapshotLifecycleAction(settings, restController),
            new RestDeleteSnapshotLifecycleAction(settings, restController),
            new RestGetSnapshotLifecycleAction(settings, restController),
            new RestExecuteSnapshotLifecycleAction(settings, restController)
        );
    }

@@ -203,14 +238,20 @@ public class IndexLifecycle extends Plugin implements ActionPlugin {
            new ActionHandler<>(RetryAction.INSTANCE, TransportRetryAction.class),
            new ActionHandler<>(StartILMAction.INSTANCE, TransportStartILMAction.class),
            new ActionHandler<>(StopILMAction.INSTANCE, TransportStopILMAction.class),
            new ActionHandler<>(GetStatusAction.INSTANCE, TransportGetStatusAction.class));
            new ActionHandler<>(GetStatusAction.INSTANCE, TransportGetStatusAction.class),
            // Snapshot lifecycle actions
            new ActionHandler<>(PutSnapshotLifecycleAction.INSTANCE, TransportPutSnapshotLifecycleAction.class),
            new ActionHandler<>(DeleteSnapshotLifecycleAction.INSTANCE, TransportDeleteSnapshotLifecycleAction.class),
            new ActionHandler<>(GetSnapshotLifecycleAction.INSTANCE, TransportGetSnapshotLifecycleAction.class),
            new ActionHandler<>(ExecuteSnapshotLifecycleAction.INSTANCE, TransportExecuteSnapshotLifecycleAction.class));
    }

    @Override
    public void close() {
        IndexLifecycleService lifecycleService = indexLifecycleInitialisationService.get();
        if (lifecycleService != null) {
            lifecycleService.close();
        try {
            IOUtils.close(indexLifecycleInitialisationService.get(), snapshotLifecycleService.get());
        } catch (IOException e) {
            throw new ElasticsearchException("unable to close index lifecycle services", e);
        }
    }
}

@@ -12,6 +12,7 @@ import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.indexlifecycle.IndexLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;

public class OperationModeUpdateTask extends ClusterStateUpdateTask {
    private static final Logger logger = LogManager.getLogger(OperationModeUpdateTask.class);

@@ -27,6 +28,13 @@ public class OperationModeUpdateTask extends ClusterStateUpdateTask {

    @Override
    public ClusterState execute(ClusterState currentState) {
        ClusterState newState = currentState;
        newState = updateILMState(newState);
        newState = updateSLMState(newState);
        return newState;
    }

    private ClusterState updateILMState(final ClusterState currentState) {
        IndexLifecycleMetadata currentMetadata = currentState.metaData().custom(IndexLifecycleMetadata.TYPE);
        if (currentMetadata != null && currentMetadata.getOperationMode().isValidChange(mode) == false) {
            return currentState;

@@ -41,12 +49,33 @@ public class OperationModeUpdateTask extends ClusterStateUpdateTask {
            newMode = currentMetadata.getOperationMode();
        }

        ClusterState.Builder builder = new ClusterState.Builder(currentState);
        MetaData.Builder metadataBuilder = MetaData.builder(currentState.metaData());
        metadataBuilder.putCustom(IndexLifecycleMetadata.TYPE,
            new IndexLifecycleMetadata(currentMetadata.getPolicyMetadatas(), newMode));
        builder.metaData(metadataBuilder.build());
        return builder.build();
        return ClusterState.builder(currentState)
            .metaData(MetaData.builder(currentState.metaData())
                .putCustom(IndexLifecycleMetadata.TYPE,
                    new IndexLifecycleMetadata(currentMetadata.getPolicyMetadatas(), newMode)))
            .build();
    }

    private ClusterState updateSLMState(final ClusterState currentState) {
        SnapshotLifecycleMetadata currentMetadata = currentState.metaData().custom(SnapshotLifecycleMetadata.TYPE);
        if (currentMetadata != null && currentMetadata.getOperationMode().isValidChange(mode) == false) {
            return currentState;
        } else if (currentMetadata == null) {
            currentMetadata = SnapshotLifecycleMetadata.EMPTY;
        }

        final OperationMode newMode;
        if (currentMetadata.getOperationMode().isValidChange(mode)) {
            newMode = mode;
        } else {
            newMode = currentMetadata.getOperationMode();
        }

        return ClusterState.builder(currentState)
            .metaData(MetaData.builder(currentState.metaData())
                .putCustom(SnapshotLifecycleMetadata.TYPE,
                    new SnapshotLifecycleMetadata(currentMetadata.getSnapshotConfigurations(), newMode)))
            .build();
    }

    @Override

@@ -0,0 +1,224 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.LocalNodeMasterListener;
import org.elasticsearch.cluster.metadata.RepositoriesMetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.scheduler.CronSchedule;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;

import java.io.Closeable;
import java.time.Clock;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.Supplier;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/**
 * {@code SnapshotLifecycleService} manages snapshot policy scheduling and triggering of the
 * {@link SnapshotLifecycleTask}. It reacts to new policies in the cluster state by scheduling a
 * task according to the policy's schedule.
 */
public class SnapshotLifecycleService implements LocalNodeMasterListener, Closeable, ClusterStateListener {

    private static final Logger logger = LogManager.getLogger(SnapshotLifecycleService.class);
    private static final String JOB_PATTERN_SUFFIX = "-\\d+$";

    private final SchedulerEngine scheduler;
    private final ClusterService clusterService;
    private final SnapshotLifecycleTask snapshotTask;
    private final Map<String, SchedulerEngine.Job> scheduledTasks = ConcurrentCollections.newConcurrentMap();
    private volatile boolean isMaster = false;

    public SnapshotLifecycleService(Settings settings,
                                    Supplier<SnapshotLifecycleTask> taskSupplier,
                                    ClusterService clusterService,
                                    Clock clock) {
        this.scheduler = new SchedulerEngine(settings, clock);
        this.clusterService = clusterService;
        this.snapshotTask = taskSupplier.get();
        clusterService.addLocalNodeMasterListener(this); // TODO: change this not to use 'this'
        clusterService.addListener(this);
    }

    @Override
    public void clusterChanged(final ClusterChangedEvent event) {
        if (this.isMaster) {
            final ClusterState state = event.state();

            if (ilmStoppedOrStopping(state)) {
                if (scheduler.scheduledJobIds().size() > 0) {
                    cancelSnapshotJobs();
                }
                return;
            }

            scheduleSnapshotJobs(state);
            cleanupDeletedPolicies(state);
        }
    }

    @Override
    public void onMaster() {
        this.isMaster = true;
        scheduler.register(snapshotTask);
        final ClusterState state = clusterService.state();
        if (ilmStoppedOrStopping(state)) {
            // ILM is currently stopped, so don't schedule jobs
            return;
        }
        scheduleSnapshotJobs(state);
    }

    @Override
    public void offMaster() {
        this.isMaster = false;
        scheduler.unregister(snapshotTask);
        cancelSnapshotJobs();
    }

    // Only used for testing
    SchedulerEngine getScheduler() {
        return this.scheduler;
    }

    /**
* Returns true if ILM is in the stopped or stopped state
|
||||
*/
|
||||
private static boolean ilmStoppedOrStopping(ClusterState state) {
|
||||
return Optional.ofNullable((SnapshotLifecycleMetadata) state.metaData().custom(SnapshotLifecycleMetadata.TYPE))
|
||||
.map(SnapshotLifecycleMetadata::getOperationMode)
|
||||
.map(mode -> OperationMode.STOPPING == mode || OperationMode.STOPPED == mode)
|
||||
.orElse(false);
|
||||
}
|
||||
|
||||
/**
|
||||
* Schedule all non-scheduled snapshot jobs contained in the cluster state
|
||||
*/
|
||||
public void scheduleSnapshotJobs(final ClusterState state) {
|
||||
SnapshotLifecycleMetadata snapMeta = state.metaData().custom(SnapshotLifecycleMetadata.TYPE);
|
||||
if (snapMeta != null) {
|
||||
snapMeta.getSnapshotConfigurations().values().forEach(this::maybeScheduleSnapshot);
|
||||
}
|
||||
}
|
||||
|
||||
public void cleanupDeletedPolicies(final ClusterState state) {
|
||||
SnapshotLifecycleMetadata snapMeta = state.metaData().custom(SnapshotLifecycleMetadata.TYPE);
|
||||
if (snapMeta != null) {
|
||||
// Retrieve all of the expected policy job ids from the policies in the metadata
|
||||
final Set<String> policyJobIds = snapMeta.getSnapshotConfigurations().values().stream()
|
||||
.map(SnapshotLifecycleService::getJobId)
|
||||
.collect(Collectors.toSet());
|
||||
|
||||
// Cancel all jobs that are *NOT* in the scheduled tasks map
|
||||
scheduledTasks.keySet().stream()
|
||||
.filter(jobId -> policyJobIds.contains(jobId) == false)
|
||||
.forEach(this::cancelScheduledSnapshot);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Schedule the {@link SnapshotLifecyclePolicy} job if it does not already exist. First checks
|
||||
* to see if any previous versions of the policy were scheduled, and if so, cancels those. If
|
||||
* the same version of a policy has already been scheduled it does not overwrite the job.
|
||||
*/
|
||||
public void maybeScheduleSnapshot(final SnapshotLifecyclePolicyMetadata snapshotLifecyclePolicy) {
|
||||
final String jobId = getJobId(snapshotLifecyclePolicy);
|
||||
final Pattern existingJobPattern = Pattern.compile(snapshotLifecyclePolicy.getPolicy().getId() + JOB_PATTERN_SUFFIX);
|
||||
|
||||
// Find and cancel any existing jobs for this policy
|
||||
final boolean existingJobsFoundAndCancelled = scheduledTasks.keySet().stream()
|
||||
// Find all jobs matching the `jobid-\d+` pattern
|
||||
.filter(jId -> existingJobPattern.matcher(jId).matches())
|
||||
// Filter out a job that has not been changed (matches the id exactly meaning the version is the same)
|
||||
.filter(jId -> jId.equals(jobId) == false)
|
||||
.map(existingJobId -> {
|
||||
// Cancel existing job so the new one can be scheduled
|
||||
logger.debug("removing existing snapshot lifecycle job [{}] as it has been updated", existingJobId);
|
||||
scheduledTasks.remove(existingJobId);
|
||||
boolean existed = scheduler.remove(existingJobId);
|
||||
assert existed : "expected job for " + existingJobId + " to exist in scheduler";
|
||||
return existed;
|
||||
})
|
||||
.reduce(false, (a, b) -> a || b);
|
||||
|
||||
// Now atomically schedule the new job and add it to the scheduled tasks map. If the jobId
|
||||
// is identical to an existing job (meaning the version has not changed) then this does
|
||||
// not reschedule it.
|
||||
scheduledTasks.computeIfAbsent(jobId, id -> {
|
||||
final SchedulerEngine.Job job = new SchedulerEngine.Job(jobId,
|
||||
new CronSchedule(snapshotLifecyclePolicy.getPolicy().getSchedule()));
|
||||
if (existingJobsFoundAndCancelled) {
|
||||
logger.info("rescheduling updated snapshot lifecycle job [{}]", jobId);
|
||||
} else {
|
||||
logger.info("scheduling snapshot lifecycle job [{}]", jobId);
|
||||
}
|
||||
scheduler.add(job);
|
||||
return job;
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate the job id for a given policy metadata. The job id is {@code <policyid>-<version>}
|
||||
*/
|
||||
public static String getJobId(SnapshotLifecyclePolicyMetadata policyMeta) {
|
||||
return policyMeta.getPolicy().getId() + "-" + policyMeta.getVersion();
|
||||
}
|
||||
|
||||
/**
|
||||
* Cancel all scheduled snapshot jobs
|
||||
*/
|
||||
public void cancelSnapshotJobs() {
|
||||
logger.trace("cancelling all snapshot lifecycle jobs");
|
||||
scheduler.scheduledJobIds().forEach(scheduler::remove);
|
||||
scheduledTasks.clear();
|
||||
}
|
||||
|
||||
/**
|
||||
* Cancel the given policy job id (from {@link #getJobId(SnapshotLifecyclePolicyMetadata)}
|
||||
*/
|
||||
public void cancelScheduledSnapshot(final String lifecycleJobId) {
|
||||
logger.debug("cancelling snapshot lifecycle job [{}] as it no longer exists", lifecycleJobId);
|
||||
scheduledTasks.remove(lifecycleJobId);
|
||||
scheduler.remove(lifecycleJobId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates that the {@code repository} exists as a registered snapshot repository
|
||||
* @throws IllegalArgumentException if the repository does not exist
|
||||
*/
|
||||
public static void validateRepositoryExists(final String repository, final ClusterState state) {
|
||||
Optional.ofNullable((RepositoriesMetaData) state.metaData().custom(RepositoriesMetaData.TYPE))
|
||||
.map(repoMeta -> repoMeta.repository(repository))
|
||||
.orElseThrow(() -> new IllegalArgumentException("no such repository [" + repository + "]"));
|
||||
}
|
||||
|
||||
@Override
|
||||
public String executorName() {
|
||||
return ThreadPool.Names.SNAPSHOT;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() {
|
||||
this.scheduler.stop();
|
||||
}
|
||||
}
|
|
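The cancel-and-reschedule bookkeeping above is the subtle part of the service, so here is a minimal, self-contained sketch of the same idea using only the JDK. The class and method names (`JobIdSketch`, `schedule`) are invented for illustration and are not part of the PR; the real service delegates the actual scheduling to `SchedulerEngine`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Pattern;

/**
 * Illustrative sketch: jobs are keyed by "<policyId>-<version>"; scheduling a new
 * version of a policy first cancels any job matching "<policyId>-\d+" that is not
 * the exact new id, then schedules the new id only if it is not already present.
 */
public class JobIdSketch {
    private static final String JOB_PATTERN_SUFFIX = "-\\d+$";
    private final Map<String, String> scheduledJobs = new ConcurrentHashMap<>();

    static String jobId(String policyId, long version) {
        return policyId + "-" + version;
    }

    void schedule(String policyId, long version, String schedule) {
        final String newJobId = jobId(policyId, version);
        final Pattern existing = Pattern.compile(policyId + JOB_PATTERN_SUFFIX);
        // cancel stale versions of the same policy
        scheduledJobs.keySet().removeIf(id -> existing.matcher(id).matches() && id.equals(newJobId) == false);
        // schedule only if this exact version is not already scheduled
        scheduledJobs.putIfAbsent(newJobId, schedule);
    }

    public static void main(String[] args) {
        JobIdSketch sketch = new JobIdSketch();
        sketch.schedule("nightly", 1, "0 30 1 * * ?");
        sketch.schedule("nightly", 2, "0 30 2 * * ?"); // cancels nightly-1, schedules nightly-2
        System.out.println(sketch.scheduledJobs); // {nightly-2=0 30 2 * * ?}
    }
}
```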
@@ -0,0 +1,217 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.xpack.core.ClientHelper;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotInvocationRecord;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;
import org.elasticsearch.xpack.indexlifecycle.LifecyclePolicySecurityClient;

import java.io.IOException;
import java.time.Instant;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import static org.elasticsearch.ElasticsearchException.REST_EXCEPTION_SKIP_STACK_TRACE;

public class SnapshotLifecycleTask implements SchedulerEngine.Listener {

    private static Logger logger = LogManager.getLogger(SnapshotLifecycleTask.class);

    private final Client client;
    private final ClusterService clusterService;
    private final SnapshotHistoryStore historyStore;

    public SnapshotLifecycleTask(final Client client, final ClusterService clusterService, final SnapshotHistoryStore historyStore) {
        this.client = client;
        this.clusterService = clusterService;
        this.historyStore = historyStore;
    }

    @Override
    public void triggered(SchedulerEngine.Event event) {
        logger.debug("snapshot lifecycle policy task triggered from job [{}]", event.getJobName());

        final Optional<String> snapshotName = maybeTakeSnapshot(event.getJobName(), client, clusterService, historyStore);

        // Would be cleaner if we could use Optional#ifPresentOrElse
        snapshotName.ifPresent(name ->
            logger.info("snapshot lifecycle policy job [{}] issued new snapshot creation for [{}] successfully",
                event.getJobName(), name));

        if (snapshotName.isPresent() == false) {
            logger.warn("snapshot lifecycle policy for job [{}] no longer exists, snapshot not created", event.getJobName());
        }
    }

    /**
     * For the given job id (a combination of policy id and version), issue a create snapshot
     * request. Whether issuing the create snapshot request succeeds or fails, the result is
     * recorded in the policy's metadata in the cluster state.
     * @return An optional snapshot name if the request was issued successfully
     */
    public static Optional<String> maybeTakeSnapshot(final String jobId, final Client client, final ClusterService clusterService,
                                                     final SnapshotHistoryStore historyStore) {
        Optional<SnapshotLifecyclePolicyMetadata> maybeMetadata = getSnapPolicyMetadata(jobId, clusterService.state());
        String snapshotName = maybeMetadata.map(policyMetadata -> {
            CreateSnapshotRequest request = policyMetadata.getPolicy().toRequest();
            final LifecyclePolicySecurityClient clientWithHeaders = new LifecyclePolicySecurityClient(client,
                ClientHelper.INDEX_LIFECYCLE_ORIGIN, policyMetadata.getHeaders());
            logger.info("snapshot lifecycle policy [{}] issuing create snapshot [{}]",
                policyMetadata.getPolicy().getId(), request.snapshot());
            clientWithHeaders.admin().cluster().createSnapshot(request, new ActionListener<CreateSnapshotResponse>() {
                @Override
                public void onResponse(CreateSnapshotResponse createSnapshotResponse) {
                    logger.debug("snapshot response for [{}]: {}",
                        policyMetadata.getPolicy().getId(), Strings.toString(createSnapshotResponse));
                    final long timestamp = Instant.now().toEpochMilli();
                    clusterService.submitStateUpdateTask("slm-record-success-" + policyMetadata.getPolicy().getId(),
                        WriteJobStatus.success(policyMetadata.getPolicy().getId(), request.snapshot(), timestamp));
                    historyStore.putAsync(SnapshotHistoryItem.successRecord(timestamp, policyMetadata.getPolicy(), request.snapshot()));
                }

                @Override
                public void onFailure(Exception e) {
                    logger.error("failed to issue create snapshot request for snapshot lifecycle policy [{}]: {}",
                        policyMetadata.getPolicy().getId(), e);
                    final long timestamp = Instant.now().toEpochMilli();
                    clusterService.submitStateUpdateTask("slm-record-failure-" + policyMetadata.getPolicy().getId(),
                        WriteJobStatus.failure(policyMetadata.getPolicy().getId(), request.snapshot(), timestamp, e));
                    final SnapshotHistoryItem failureRecord;
                    try {
                        failureRecord = SnapshotHistoryItem.failureRecord(timestamp, policyMetadata.getPolicy(), request.snapshot(), e);
                        historyStore.putAsync(failureRecord);
                    } catch (IOException ex) {
                        // This shouldn't happen unless there's an issue with serializing the original exception, which shouldn't happen
                        logger.error(new ParameterizedMessage(
                            "failed to record snapshot creation failure for snapshot lifecycle policy [{}]",
                            policyMetadata.getPolicy().getId()), e);
                    }
                }
            });
            return request.snapshot();
        }).orElse(null);

        return Optional.ofNullable(snapshotName);
    }

    /**
     * For the given job id, return an optional policy metadata object, if one exists
     */
    static Optional<SnapshotLifecyclePolicyMetadata> getSnapPolicyMetadata(final String jobId, final ClusterState state) {
        return Optional.ofNullable((SnapshotLifecycleMetadata) state.metaData().custom(SnapshotLifecycleMetadata.TYPE))
            .map(SnapshotLifecycleMetadata::getSnapshotConfigurations)
            .flatMap(configMap -> configMap.values().stream()
                .filter(policyMeta -> jobId.equals(SnapshotLifecycleService.getJobId(policyMeta)))
                .findFirst());
    }

    /**
     * A cluster state update task to write the result of a snapshot job to the cluster metadata for the associated policy.
     */
    private static class WriteJobStatus extends ClusterStateUpdateTask {
        private static final ToXContent.Params STACKTRACE_PARAMS =
            new ToXContent.MapParams(Collections.singletonMap(REST_EXCEPTION_SKIP_STACK_TRACE, "false"));

        private final String policyName;
        private final String snapshotName;
        private final long timestamp;
        private final Optional<Exception> exception;

        private WriteJobStatus(String policyName, String snapshotName, long timestamp, Optional<Exception> exception) {
            this.policyName = policyName;
            this.snapshotName = snapshotName;
            this.exception = exception;
            this.timestamp = timestamp;
        }

        static WriteJobStatus success(String policyId, String snapshotName, long timestamp) {
            return new WriteJobStatus(policyId, snapshotName, timestamp, Optional.empty());
        }

        static WriteJobStatus failure(String policyId, String snapshotName, long timestamp, Exception exception) {
            return new WriteJobStatus(policyId, snapshotName, timestamp, Optional.of(exception));
        }

        private String exceptionToString() throws IOException {
            if (exception.isPresent()) {
                try (XContentBuilder causeXContentBuilder = JsonXContent.contentBuilder()) {
                    causeXContentBuilder.startObject();
                    ElasticsearchException.generateThrowableXContent(causeXContentBuilder, STACKTRACE_PARAMS, exception.get());
                    causeXContentBuilder.endObject();
                    return BytesReference.bytes(causeXContentBuilder).utf8ToString();
                }
            }
            return null;
        }

        @Override
        public ClusterState execute(ClusterState currentState) throws Exception {
            SnapshotLifecycleMetadata snapMeta = currentState.metaData().custom(SnapshotLifecycleMetadata.TYPE);

            assert snapMeta != null : "this should never be called while the snapshot lifecycle cluster metadata is null";
            if (snapMeta == null) {
                logger.error("failed to record snapshot [{}] for snapshot [{}] in policy [{}]: snapshot lifecycle metadata is null",
                    exception.isPresent() ? "failure" : "success", snapshotName, policyName);
                return currentState;
            }

            Map<String, SnapshotLifecyclePolicyMetadata> snapLifecycles = new HashMap<>(snapMeta.getSnapshotConfigurations());
            SnapshotLifecyclePolicyMetadata policyMetadata = snapLifecycles.get(policyName);
            if (policyMetadata == null) {
                logger.warn("failed to record snapshot [{}] for snapshot [{}] in policy [{}]: policy not found",
                    exception.isPresent() ? "failure" : "success", snapshotName, policyName);
                return currentState;
            }

            SnapshotLifecyclePolicyMetadata.Builder newPolicyMetadata = SnapshotLifecyclePolicyMetadata.builder(policyMetadata);

            if (exception.isPresent()) {
                newPolicyMetadata.setLastFailure(new SnapshotInvocationRecord(snapshotName, timestamp, exceptionToString()));
            } else {
                newPolicyMetadata.setLastSuccess(new SnapshotInvocationRecord(snapshotName, timestamp, null));
            }

            snapLifecycles.put(policyName, newPolicyMetadata.build());
            SnapshotLifecycleMetadata lifecycleMetadata = new SnapshotLifecycleMetadata(snapLifecycles, snapMeta.getOperationMode());
            MetaData currentMeta = currentState.metaData();
            return ClusterState.builder(currentState)
                .metaData(MetaData.builder(currentMeta)
                    .putCustom(SnapshotLifecycleMetadata.TYPE, lifecycleMetadata))
                .build();
        }

        @Override
        public void onFailure(String source, Exception e) {
            logger.error("failed to record snapshot policy execution status for snapshot [{}] in policy [{}], (source: [{}]): {}",
                snapshotName, policyName, source, e);
        }
    }
}
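The job-id round trip performed by `getSnapPolicyMetadata` above, where a triggered `<policyId>-<version>` job is resolved back to its policy and stale versions quietly produce no snapshot, can be sketched in isolation. This is an illustrative JDK-only snippet; `findPolicyForJob` and the `Map<String, Long>` of policy versions are invented stand-ins for the real cluster-state metadata.

```java
import java.util.Map;
import java.util.Optional;

/**
 * Illustrative stand-in for the job-id lookup: each policy knows its current version,
 * so a triggered job id is only valid if it matches "<policyId>-<currentVersion>".
 */
class PolicyLookupSketch {
    static Optional<String> findPolicyForJob(String triggeredJobId, Map<String, Long> policyVersions) {
        return policyVersions.entrySet().stream()
            .filter(e -> triggeredJobId.equals(e.getKey() + "-" + e.getValue()))
            .map(Map.Entry::getKey)
            .findFirst();
    }

    public static void main(String[] args) {
        Map<String, Long> versions = Map.of("nightly", 3L);
        System.out.println(findPolicyForJob("nightly-3", versions)); // Optional[nightly]
        System.out.println(findPolicyForJob("nightly-2", versions)); // Optional.empty -> no snapshot is taken
    }
}
```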
@@ -0,0 +1,38 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestToXContentListener;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.DeleteSnapshotLifecycleAction;

public class RestDeleteSnapshotLifecycleAction extends BaseRestHandler {

    public RestDeleteSnapshotLifecycleAction(Settings settings, RestController controller) {
        super(settings);
        controller.registerHandler(RestRequest.Method.DELETE, "/_slm/policy/{name}", this);
    }

    @Override
    public String getName() {
        return "slm_delete_lifecycle";
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        String lifecycleId = request.param("name");
        DeleteSnapshotLifecycleAction.Request req = new DeleteSnapshotLifecycleAction.Request(lifecycleId);
        req.timeout(request.paramAsTime("timeout", req.timeout()));
        req.masterNodeTimeout(request.paramAsTime("master_timeout", req.masterNodeTimeout()));

        return channel -> client.execute(DeleteSnapshotLifecycleAction.INSTANCE, req, new RestToXContentListener<>(channel));
    }
}
@@ -0,0 +1,39 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestToXContentListener;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.ExecuteSnapshotLifecycleAction;

import java.io.IOException;

public class RestExecuteSnapshotLifecycleAction extends BaseRestHandler {

    public RestExecuteSnapshotLifecycleAction(Settings settings, RestController controller) {
        super(settings);
        controller.registerHandler(RestRequest.Method.PUT, "/_slm/policy/{name}/_execute", this);
    }

    @Override
    public String getName() {
        return "slm_execute_lifecycle";
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
        String snapLifecycleId = request.param("name");
        ExecuteSnapshotLifecycleAction.Request req = new ExecuteSnapshotLifecycleAction.Request(snapLifecycleId);
        req.timeout(request.paramAsTime("timeout", req.timeout()));
        req.masterNodeTimeout(request.paramAsTime("master_timeout", req.masterNodeTimeout()));
        return channel -> client.execute(ExecuteSnapshotLifecycleAction.INSTANCE, req, new RestToXContentListener<>(channel));
    }
}
@@ -0,0 +1,40 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestToXContentListener;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;

public class RestGetSnapshotLifecycleAction extends BaseRestHandler {

    public RestGetSnapshotLifecycleAction(Settings settings, RestController controller) {
        super(settings);
        controller.registerHandler(RestRequest.Method.GET, "/_slm/policy", this);
        controller.registerHandler(RestRequest.Method.GET, "/_slm/policy/{name}", this);
    }

    @Override
    public String getName() {
        return "slm_get_lifecycle";
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        String[] lifecycleNames = Strings.splitStringByCommaToArray(request.param("name"));
        GetSnapshotLifecycleAction.Request req = new GetSnapshotLifecycleAction.Request(lifecycleNames);
        req.timeout(request.paramAsTime("timeout", req.timeout()));
        req.masterNodeTimeout(request.paramAsTime("master_timeout", req.masterNodeTimeout()));

        return channel -> client.execute(GetSnapshotLifecycleAction.INSTANCE, req, new RestToXContentListener<>(channel));
    }
}
@@ -0,0 +1,42 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestToXContentListener;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.PutSnapshotLifecycleAction;

import java.io.IOException;

public class RestPutSnapshotLifecycleAction extends BaseRestHandler {

    public RestPutSnapshotLifecycleAction(Settings settings, RestController controller) {
        super(settings);
        controller.registerHandler(RestRequest.Method.PUT, "/_slm/policy/{name}", this);
    }

    @Override
    public String getName() {
        return "slm_put_lifecycle";
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
        String snapLifecycleName = request.param("name");
        try (XContentParser parser = request.contentParser()) {
            PutSnapshotLifecycleAction.Request req = PutSnapshotLifecycleAction.Request.parseRequest(snapLifecycleName, parser);
            req.timeout(request.paramAsTime("timeout", req.timeout()));
            req.masterNodeTimeout(request.paramAsTime("master_timeout", req.masterNodeTimeout()));
            return channel -> client.execute(PutSnapshotLifecycleAction.INSTANCE, req, new RestToXContentListener<>(channel));
        }
    }
}
@@ -0,0 +1,94 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.DeleteSnapshotLifecycleAction;

import java.io.IOException;
import java.util.Map;
import java.util.stream.Collectors;

public class TransportDeleteSnapshotLifecycleAction extends
    TransportMasterNodeAction<DeleteSnapshotLifecycleAction.Request, DeleteSnapshotLifecycleAction.Response> {

    @Inject
    public TransportDeleteSnapshotLifecycleAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
                                                  ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
        super(DeleteSnapshotLifecycleAction.NAME, transportService, clusterService, threadPool, actionFilters,
            indexNameExpressionResolver, DeleteSnapshotLifecycleAction.Request::new);
    }

    @Override
    protected String executor() {
        return ThreadPool.Names.SAME;
    }

    @Override
    protected DeleteSnapshotLifecycleAction.Response read(StreamInput in) throws IOException {
        return new DeleteSnapshotLifecycleAction.Response(in);
    }

    @Override
    protected void masterOperation(DeleteSnapshotLifecycleAction.Request request,
                                   ClusterState state,
                                   ActionListener<DeleteSnapshotLifecycleAction.Response> listener) throws Exception {
        clusterService.submitStateUpdateTask("delete-snapshot-lifecycle-" + request.getLifecycleId(),
            new AckedClusterStateUpdateTask<DeleteSnapshotLifecycleAction.Response>(request, listener) {
                @Override
                protected DeleteSnapshotLifecycleAction.Response newResponse(boolean acknowledged) {
                    return new DeleteSnapshotLifecycleAction.Response(acknowledged);
                }

                @Override
                public ClusterState execute(ClusterState currentState) {
                    SnapshotLifecycleMetadata snapMeta = currentState.metaData().custom(SnapshotLifecycleMetadata.TYPE);
                    if (snapMeta == null) {
                        throw new ResourceNotFoundException("snapshot lifecycle policy not found: {}", request.getLifecycleId());
                    }
                    // Check that the policy exists in the first place
                    snapMeta.getSnapshotConfigurations().entrySet().stream()
                        .filter(e -> e.getValue().getPolicy().getId().equals(request.getLifecycleId()))
                        .findAny()
                        .orElseThrow(() -> new ResourceNotFoundException("snapshot lifecycle policy not found: {}",
                            request.getLifecycleId()));

                    Map<String, SnapshotLifecyclePolicyMetadata> newConfigs = snapMeta.getSnapshotConfigurations().entrySet().stream()
                        .filter(e -> e.getKey().equals(request.getLifecycleId()) == false)
                        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

                    MetaData metaData = currentState.metaData();
                    return ClusterState.builder(currentState)
                        .metaData(MetaData.builder(metaData)
                            .putCustom(SnapshotLifecycleMetadata.TYPE,
                                new SnapshotLifecycleMetadata(newConfigs, snapMeta.getOperationMode())))
                        .build();
                }
            });
    }

    @Override
    protected ClusterBlockException checkBlock(DeleteSnapshotLifecycleAction.Request request, ClusterState state) {
        return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);
    }
}
@@ -0,0 +1,96 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.ExecuteSnapshotLifecycleAction;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;
import org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService;
import org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleTask;

import java.io.IOException;
import java.util.Optional;

public class TransportExecuteSnapshotLifecycleAction
    extends TransportMasterNodeAction<ExecuteSnapshotLifecycleAction.Request, ExecuteSnapshotLifecycleAction.Response> {

    private static final Logger logger = LogManager.getLogger(TransportExecuteSnapshotLifecycleAction.class);

    private final Client client;
    private final SnapshotHistoryStore historyStore;

    @Inject
    public TransportExecuteSnapshotLifecycleAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
                                                   ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
                                                   Client client, SnapshotHistoryStore historyStore) {
        super(ExecuteSnapshotLifecycleAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
            ExecuteSnapshotLifecycleAction.Request::new);
        this.client = client;
        this.historyStore = historyStore;
    }

    @Override
    protected String executor() {
        return ThreadPool.Names.SNAPSHOT;
    }

    @Override
    protected ExecuteSnapshotLifecycleAction.Response read(StreamInput in) throws IOException {
        return new ExecuteSnapshotLifecycleAction.Response(in);
    }

    @Override
    protected void masterOperation(final ExecuteSnapshotLifecycleAction.Request request,
                                   final ClusterState state,
                                   final ActionListener<ExecuteSnapshotLifecycleAction.Response> listener) {
        try {
            final String policyId = request.getLifecycleId();
            SnapshotLifecycleMetadata snapMeta = state.metaData().custom(SnapshotLifecycleMetadata.TYPE);
            if (snapMeta == null) {
                listener.onFailure(new IllegalArgumentException("no such snapshot lifecycle policy [" + policyId + "]"));
                return;
            }

            SnapshotLifecyclePolicyMetadata policyMetadata = snapMeta.getSnapshotConfigurations().get(policyId);
            if (policyMetadata == null) {
                listener.onFailure(new IllegalArgumentException("no such snapshot lifecycle policy [" + policyId + "]"));
                return;
            }

            final Optional<String> snapshotName = SnapshotLifecycleTask.maybeTakeSnapshot(SnapshotLifecycleService.getJobId(policyMetadata),
                client, clusterService, historyStore);
            if (snapshotName.isPresent()) {
                listener.onResponse(new ExecuteSnapshotLifecycleAction.Response(snapshotName.get()));
            } else {
                listener.onFailure(new ElasticsearchException("failed to execute snapshot lifecycle policy [" + policyId + "]"));
            }
        } catch (Exception e) {
            listener.onFailure(e);
        }
    }

    @Override
    protected ClusterBlockException checkBlock(ExecuteSnapshotLifecycleAction.Request request, ClusterState state) {
        return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);
    }
}
@@ -0,0 +1,85 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyItem;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.GetSnapshotLifecycleAction;

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TransportGetSnapshotLifecycleAction extends
    TransportMasterNodeAction<GetSnapshotLifecycleAction.Request, GetSnapshotLifecycleAction.Response> {

    private static final Logger logger = LogManager.getLogger(TransportGetSnapshotLifecycleAction.class);

    @Inject
    public TransportGetSnapshotLifecycleAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
                                               ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
        super(GetSnapshotLifecycleAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
            GetSnapshotLifecycleAction.Request::new);
    }

    @Override
    protected String executor() {
        return ThreadPool.Names.SAME;
    }

    @Override
    protected GetSnapshotLifecycleAction.Response read(StreamInput in) throws IOException {
        return new GetSnapshotLifecycleAction.Response(in);
    }

    @Override
    protected void masterOperation(final GetSnapshotLifecycleAction.Request request,
                                   final ClusterState state,
                                   final ActionListener<GetSnapshotLifecycleAction.Response> listener) {
        SnapshotLifecycleMetadata snapMeta = state.metaData().custom(SnapshotLifecycleMetadata.TYPE);
        if (snapMeta == null) {
            listener.onResponse(new GetSnapshotLifecycleAction.Response(Collections.emptyList()));
        } else {
            final Set<String> ids = new HashSet<>(Arrays.asList(request.getLifecycleIds()));
            List<SnapshotLifecyclePolicyItem> lifecycles = snapMeta.getSnapshotConfigurations()
                .values()
                .stream()
                .filter(meta -> {
                    if (ids.isEmpty()) {
                        return true;
                    } else {
                        return ids.contains(meta.getPolicy().getId());
                    }
                })
                .map(SnapshotLifecyclePolicyItem::new)
                .collect(Collectors.toList());
            listener.onResponse(new GetSnapshotLifecycleAction.Response(lifecycles));
        }
    }

    @Override
    protected ClusterBlockException checkBlock(GetSnapshotLifecycleAction.Request request, ClusterState state) {
        return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);
    }
}
@@ -0,0 +1,133 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle.action;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ClientHelper;
import org.elasticsearch.xpack.core.indexlifecycle.IndexLifecycleMetadata;
import org.elasticsearch.xpack.core.indexlifecycle.LifecyclePolicy;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.action.PutSnapshotLifecycleAction;
import org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService;

import java.io.IOException;
import java.time.Instant;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class TransportPutSnapshotLifecycleAction extends
    TransportMasterNodeAction<PutSnapshotLifecycleAction.Request, PutSnapshotLifecycleAction.Response> {

    private static final Logger logger = LogManager.getLogger(TransportPutSnapshotLifecycleAction.class);

    @Inject
    public TransportPutSnapshotLifecycleAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
                                               ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
        super(PutSnapshotLifecycleAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
            PutSnapshotLifecycleAction.Request::new);
    }

    @Override
    protected String executor() {
        return ThreadPool.Names.SAME;
    }

    @Override
    protected PutSnapshotLifecycleAction.Response read(StreamInput in) throws IOException {
        return new PutSnapshotLifecycleAction.Response(in);
    }

    @Override
    protected void masterOperation(final PutSnapshotLifecycleAction.Request request,
                                   final ClusterState state,
                                   final ActionListener<PutSnapshotLifecycleAction.Response> listener) {
        SnapshotLifecycleService.validateRepositoryExists(request.getLifecycle().getRepository(), state);

        // Headers from the thread context, stored by the AuthenticationService to be shared between the
        // REST layer and the Transport layer, must be accessed within this thread and not in the
        // cluster state thread of the ClusterStateUpdateTask below, since that thread does not share the
        // same context and therefore does not have access to the appropriate security headers.
        final Map<String, String> filteredHeaders = threadPool.getThreadContext().getHeaders().entrySet().stream()
            .filter(e -> ClientHelper.SECURITY_HEADER_FILTERS.contains(e.getKey()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        LifecyclePolicy.validatePolicyName(request.getLifecycleId());
        clusterService.submitStateUpdateTask("put-snapshot-lifecycle-" + request.getLifecycleId(),
            new AckedClusterStateUpdateTask<PutSnapshotLifecycleAction.Response>(request, listener) {
                @Override
                public ClusterState execute(ClusterState currentState) {
                    SnapshotLifecycleMetadata snapMeta = currentState.metaData().custom(SnapshotLifecycleMetadata.TYPE);

                    String id = request.getLifecycleId();
                    final SnapshotLifecycleMetadata lifecycleMetadata;
                    if (snapMeta == null) {
                        SnapshotLifecyclePolicyMetadata meta = SnapshotLifecyclePolicyMetadata.builder()
                            .setPolicy(request.getLifecycle())
                            .setHeaders(filteredHeaders)
                            .setModifiedDate(Instant.now().toEpochMilli())
                            .build();
                        IndexLifecycleMetadata ilmMeta = currentState.metaData().custom(IndexLifecycleMetadata.TYPE);
                        OperationMode mode = Optional.ofNullable(ilmMeta)
                            .map(IndexLifecycleMetadata::getOperationMode)
                            .orElse(OperationMode.RUNNING);
                        lifecycleMetadata = new SnapshotLifecycleMetadata(Collections.singletonMap(id, meta), mode);
                        logger.info("adding new snapshot lifecycle [{}]", id);
                    } else {
                        Map<String, SnapshotLifecyclePolicyMetadata> snapLifecycles = new HashMap<>(snapMeta.getSnapshotConfigurations());
                        SnapshotLifecyclePolicyMetadata oldLifecycle = snapLifecycles.get(id);
                        SnapshotLifecyclePolicyMetadata newLifecycle = SnapshotLifecyclePolicyMetadata.builder(oldLifecycle)
                            .setPolicy(request.getLifecycle())
                            .setHeaders(filteredHeaders)
                            .setVersion(oldLifecycle == null ? 1L : oldLifecycle.getVersion() + 1)
                            .setModifiedDate(Instant.now().toEpochMilli())
                            .build();
                        snapLifecycles.put(id, newLifecycle);
                        lifecycleMetadata = new SnapshotLifecycleMetadata(snapLifecycles, snapMeta.getOperationMode());
                        if (oldLifecycle == null) {
                            logger.info("adding new snapshot lifecycle [{}]", id);
                        } else {
                            logger.info("updating existing snapshot lifecycle [{}]", id);
                        }
                    }

                    MetaData currentMeta = currentState.metaData();
                    return ClusterState.builder(currentState)
                        .metaData(MetaData.builder(currentMeta)
                            .putCustom(SnapshotLifecycleMetadata.TYPE, lifecycleMetadata))
                        .build();
                }

                @Override
                protected PutSnapshotLifecycleAction.Response newResponse(boolean acknowledged) {
                    return new PutSnapshotLifecycleAction.Response(acknowledged);
                }
            });
    }

    @Override
    protected ClusterBlockException checkBlock(PutSnapshotLifecycleAction.Request request, ClusterState state) {
        return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);
    }
}
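The create-or-update versioning above (a brand new policy starts at version 1, while putting an existing id bumps the version and refreshes the modified date) is what later drives rescheduling in `SnapshotLifecycleService`. A minimal sketch of just the version bump follows; the names and the plain `Map<String, Long>` are invented stand-ins for the real policy metadata.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: version accounting used when a policy is created or updated. */
class PutVersioningSketch {
    static long nextVersion(Map<String, Long> existingVersions, String policyId) {
        Long old = existingVersions.get(policyId);
        return old == null ? 1L : old + 1;
    }

    public static void main(String[] args) {
        Map<String, Long> versions = new HashMap<>();
        versions.put("nightly", nextVersion(versions, "nightly")); // first put -> version 1
        versions.put("nightly", nextVersion(versions, "nightly")); // update     -> version 2
        System.out.println(versions.get("nightly")); // 2
    }
}
```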
@@ -0,0 +1,20 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

/**
 * This package contains all the SLM Rest and Transport actions.
 *
 * <p>The {@link org.elasticsearch.xpack.snapshotlifecycle.action.TransportPutSnapshotLifecycleAction} creates or updates a snapshot
 * lifecycle policy in the cluster state. The {@link org.elasticsearch.xpack.snapshotlifecycle.action.TransportGetSnapshotLifecycleAction}
 * simply retrieves a policy by id. The {@link org.elasticsearch.xpack.snapshotlifecycle.action.TransportDeleteSnapshotLifecycleAction}
 * removes a policy from the cluster state. These actions only interact with the cluster state. Most of the logic that takes place in
 * response to these actions happens on the master node in the {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService}.
 *
 * <p>The {@link org.elasticsearch.xpack.snapshotlifecycle.action.TransportExecuteSnapshotLifecycleAction} operates as if the snapshot
 * policy given was immediately triggered by the scheduler. It does not interfere with any currently scheduled operations; it just runs
 * the snapshot operation ad hoc.
 */
package org.elasticsearch.xpack.snapshotlifecycle.action;
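As a rough illustration of the "actions only interact with the cluster state" point above, the delete action boils down to rebuilding the policy map without the deleted id. The sketch below uses plain JDK maps and invented names; the real code wraps the resulting map in a new `SnapshotLifecycleMetadata` custom.

```java
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative only: the delete action's cluster-state transformation, reduced to a map filter. */
class DeletePolicySketch {
    static Map<String, String> withoutPolicy(Map<String, String> configs, String lifecycleId) {
        return configs.entrySet().stream()
            .filter(e -> e.getKey().equals(lifecycleId) == false)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> configs = Map.of("nightly", "0 30 1 * * ?", "weekly", "0 30 2 ? * 1");
        System.out.println(withoutPolicy(configs, "nightly")); // {weekly=0 30 2 ? * 1}
    }
}
```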
@@ -0,0 +1,39 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

/**
 * This is the Snapshot Lifecycle Management (SLM) main package. SLM is part of the wider ILM feature, reusing quite a bit of its
 * functionality in some places, which is why the two features are contained in the same plugin.
 *
 * <p>This package contains the {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService} and
 * {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleTask}, as well as the Rest and Transport actions for the
 * feature set. It holds the primary execution logic and most of the user facing
 * surface area for the plugin, but not everything. The model objects for the cluster state as well as several supporting classes are
 * contained in the {@link org.elasticsearch.xpack.core.snapshotlifecycle} package.
 *
 * <p>{@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService} maintains an internal
 * {@link org.elasticsearch.xpack.core.scheduler.SchedulerEngine SchedulerEngine} that handles scheduling snapshots. The service
 * executes on the currently elected master node. It listens to the cluster state, scheduling new policies as they appear and
 * unscheduling policies when they are deleted or when ILM is stopped. The bulk of this scheduling management is handled within
 * {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleService#maybeScheduleSnapshot(SnapshotLifecyclePolicyMetadata)},
 * which is executed for every snapshot policy on each cluster state update.
 *
 * <p>The {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleTask} object is what receives an event when a scheduled policy
 * is triggered for execution. It constructs a snapshot request and runs it as the user who originally set up the policy. The bulk of this
 * logic is contained in the
 * {@link org.elasticsearch.xpack.snapshotlifecycle.SnapshotLifecycleTask#maybeTakeSnapshot(String, Client, ClusterService,
 * SnapshotHistoryStore)} method. After a snapshot request has been submitted, it persists the result (success or failure) in a history
 * store (an index), caching the latest success and failure information in the cluster state. It is important to note that this task
 * fires the snapshot request off and forgets it; it does not wait until the entire snapshot completes. Any success or failure that this
 * task sees will be from the initial submission of the snapshot request only.
 */
package org.elasticsearch.xpack.snapshotlifecycle;

import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;
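The fire-and-forget behaviour described above, where SLM records the outcome of *submitting* the create-snapshot request rather than of the snapshot's eventual completion, can be illustrated with a tiny self-contained sketch; the future, the snapshot name, and the class name below are purely illustrative.

```java
import java.util.concurrent.CompletableFuture;

/**
 * Illustrative only: the "success" SLM records corresponds to the request being
 * accepted, while the snapshot itself completes (or fails) some time later.
 */
class FireAndForgetSketch {
    public static void main(String[] args) {
        // Submitting the snapshot request: this is the point recorded as the last success.
        String snapshotName = "nightly-snap-1";
        System.out.println("recorded success for issued snapshot: " + snapshotName);

        // The snapshot finishes asynchronously; SLM does not wait on this stage.
        CompletableFuture<Void> done = CompletableFuture
            .supplyAsync(() -> snapshotName)
            .thenAccept(name -> System.out.println("snapshot " + name + " completed later"));
        done.join(); // only so the demo prints before the JVM exits; SLM itself never waits here
    }
}
```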
@@ -101,6 +101,9 @@ public class IndexLifecycleInitialisationTests extends ESIntegTestCase {
        settings.put(XPackSettings.GRAPH_ENABLED.getKey(), false);
        settings.put(XPackSettings.LOGSTASH_ENABLED.getKey(), false);
        settings.put(LifecycleSettings.LIFECYCLE_POLL_INTERVAL, "1s");

        // This is necessary to prevent SLM from installing a lifecycle policy; these tests assume a blank slate
        settings.put(LifecycleSettings.SLM_HISTORY_INDEX_ENABLED_SETTING.getKey(), false);
        return settings.build();
    }

|
@ -10,9 +10,10 @@ import org.elasticsearch.cluster.ClusterName;
|
|||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.common.collect.ImmutableOpenMap;
|
||||
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
import org.elasticsearch.xpack.core.indexlifecycle.IndexLifecycleMetadata;
|
||||
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
|
||||
|
||||
import java.util.Collections;
|
||||
|
||||
|
@ -57,11 +58,15 @@ public class OperationModeUpdateTaskTests extends ESTestCase {
|
|||
private OperationMode executeUpdate(boolean metadataInstalled, OperationMode currentMode, OperationMode requestMode,
|
||||
boolean assertSameClusterState) {
|
||||
IndexLifecycleMetadata indexLifecycleMetadata = new IndexLifecycleMetadata(Collections.emptyMap(), currentMode);
|
||||
SnapshotLifecycleMetadata snapshotLifecycleMetadata = new SnapshotLifecycleMetadata(Collections.emptyMap(), currentMode);
|
||||
ImmutableOpenMap.Builder<String, MetaData.Custom> customsMapBuilder = ImmutableOpenMap.builder();
|
||||
MetaData.Builder metaData = MetaData.builder()
|
||||
.persistentSettings(settings(Version.CURRENT).build());
|
||||
if (metadataInstalled) {
|
||||
metaData.customs(customsMapBuilder.fPut(IndexLifecycleMetadata.TYPE, indexLifecycleMetadata).build());
|
||||
metaData.customs(customsMapBuilder
|
||||
.fPut(IndexLifecycleMetadata.TYPE, indexLifecycleMetadata)
|
||||
.fPut(SnapshotLifecycleMetadata.TYPE, snapshotLifecycleMetadata)
|
||||
.build());
|
||||
}
|
||||
ClusterState state = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).build();
|
||||
OperationModeUpdateTask task = new OperationModeUpdateTask(requestMode);
|
||||
|
|
|
@ -0,0 +1,197 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.xpack.snapshotlifecycle;
|
||||
|
||||
import org.elasticsearch.common.ValidationException;
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.test.AbstractSerializingTestCase;
|
||||
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
import static org.hamcrest.Matchers.contains;
|
||||
import static org.hamcrest.Matchers.containsInAnyOrder;
|
||||
import static org.hamcrest.Matchers.equalTo;
|
||||
import static org.hamcrest.Matchers.greaterThan;
|
||||
import static org.hamcrest.Matchers.startsWith;
|
||||
|
||||
public class SnapshotLifecyclePolicyTests extends AbstractSerializingTestCase<SnapshotLifecyclePolicy> {
|
||||
|
||||
private String id;
|
||||
|
||||
public void testNameGeneration() {
|
||||
long time = 1552684146542L; // Fri Mar 15 2019 21:09:06 UTC
|
||||
SnapshotLifecyclePolicy.ResolverContext context = new SnapshotLifecyclePolicy.ResolverContext(time);
|
||||
SnapshotLifecyclePolicy p = new SnapshotLifecyclePolicy("id", "name", "1 * * * * ?", "repo", Collections.emptyMap());
|
||||
assertThat(p.generateSnapshotName(context), startsWith("name-"));
|
||||
assertThat(p.generateSnapshotName(context).length(), greaterThan("name-".length()));
|
||||
|
||||
p = new SnapshotLifecyclePolicy("id", "<name-{now}>", "1 * * * * ?", "repo", Collections.emptyMap());
|
||||
assertThat(p.generateSnapshotName(context), startsWith("name-2019.03.15-"));
|
||||
assertThat(p.generateSnapshotName(context).length(), greaterThan("name-2019.03.15-".length()));
|
||||
|
||||
p = new SnapshotLifecyclePolicy("id", "<name-{now/M}>", "1 * * * * ?", "repo", Collections.emptyMap());
|
||||
assertThat(p.generateSnapshotName(context), startsWith("name-2019.03.01-"));
|
||||
|
||||
p = new SnapshotLifecyclePolicy("id", "<name-{now/m{yyyy-MM-dd.HH:mm:ss}}>", "1 * * * * ?", "repo", Collections.emptyMap());
|
||||
assertThat(p.generateSnapshotName(context), startsWith("name-2019-03-15.21:09:00-"));
|
||||
}
|
||||
|
||||
public void testNextExecutionTime() {
|
||||
SnapshotLifecyclePolicy p = new SnapshotLifecyclePolicy("id", "name", "0 1 2 3 4 ? 2099", "repo", Collections.emptyMap());
|
||||
assertThat(p.calculateNextExecution(), equalTo(4078864860000L));
    }

    public void testValidation() {
        SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy("a,b", "<my, snapshot-{now/M}>",
            "* * * * * L", " ", Collections.emptyMap());

        ValidationException e = policy.validate();
        assertThat(e.validationErrors(),
            containsInAnyOrder("invalid policy id [a,b]: must not contain ','",
                "invalid snapshot name [<my, snapshot-{now/M}>]: must not contain contain" +
                    " the following characters [ , \", *, \\, <, |, ,, >, /, ?]",
                "invalid repository name [ ]: cannot be empty",
                "invalid schedule: invalid cron expression [* * * * * L]"));

        policy = new SnapshotLifecyclePolicy("_my_policy", "mySnap",
            " ", "repo", Collections.emptyMap());

        e = policy.validate();
        assertThat(e.validationErrors(),
            containsInAnyOrder("invalid policy id [_my_policy]: must not start with '_'",
                "invalid snapshot name [mySnap]: must be lowercase",
                "invalid schedule [ ]: must not be empty"));
    }

    public void testMetadataValidation() {
        {
            Map<String, Object> configuration = new HashMap<>();
            final String metadataString = randomAlphaOfLength(10);
            configuration.put("metadata", metadataString);

            SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy("mypolicy", "<mysnapshot-{now/M}>",
                "1 * * * * ?", "myrepo", configuration);
            ValidationException e = policy.validate();
            assertThat(e.validationErrors(), contains("invalid configuration.metadata [" + metadataString +
                "]: must be an object if present"));
        }

        {
            Map<String, Object> metadata = new HashMap<>();
            metadata.put("policy", randomAlphaOfLength(5));
            Map<String, Object> configuration = new HashMap<>();
            configuration.put("metadata", metadata);

            SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy("mypolicy", "<mysnapshot-{now/M}>",
                "1 * * * * ?", "myrepo", configuration);
            ValidationException e = policy.validate();
            assertThat(e.validationErrors(), contains("invalid configuration.metadata: field name [policy] is reserved and " +
                "will be added automatically"));
        }

        {
            Map<String, Object> metadata = new HashMap<>();
            final int fieldCount = randomIntBetween(67, 100); // 67 is the smallest field count with these sizes that causes an error
            final int keyBytes = 5; // chosen arbitrarily
            final int valueBytes = 4; // chosen arbitrarily
            int totalBytes = fieldCount * (keyBytes + valueBytes + 6 /* bytes of overhead per key/value pair */) + 1;
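            // Worked example for the minimum field count: 67 * (5 + 4 + 6) + 1 = 1006 bytes, which exceeds
            // the [1004] byte limit asserted below (66 fields would give 991 bytes and pass validation).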
            for (int i = 0; i < fieldCount; i++) {
                metadata.put(randomValueOtherThanMany(key -> "policy".equals(key) || metadata.containsKey(key),
                    () -> randomAlphaOfLength(keyBytes)), randomAlphaOfLength(valueBytes));
            }
            Map<String, Object> configuration = new HashMap<>();
            configuration.put("metadata", metadata);

            SnapshotLifecyclePolicy policy = new SnapshotLifecyclePolicy("mypolicy", "<mysnapshot-{now/M}>",
                "1 * * * * ?", "myrepo", configuration);
            ValidationException e = policy.validate();
            assertThat(e.validationErrors(), contains("invalid configuration.metadata: must be smaller than [1004] bytes, but is [" +
                totalBytes + "] bytes"));
        }
    }

    @Override
    protected SnapshotLifecyclePolicy doParseInstance(XContentParser parser) throws IOException {
        return SnapshotLifecyclePolicy.parse(parser, id);
    }

    @Override
    protected SnapshotLifecyclePolicy createTestInstance() {
        id = randomAlphaOfLength(5);
        return randomSnapshotLifecyclePolicy(id);
    }

    public static SnapshotLifecyclePolicy randomSnapshotLifecyclePolicy(String id) {
        Map<String, Object> config = new HashMap<>();
        for (int i = 0; i < randomIntBetween(2, 5); i++) {
            config.put(randomAlphaOfLength(4), randomAlphaOfLength(4));
        }
        return new SnapshotLifecyclePolicy(id,
            randomAlphaOfLength(4),
            randomSchedule(),
            randomAlphaOfLength(4),
            config);
    }

    private static String randomSchedule() {
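        // Builds a cron of the form "<second> <minute> <hour> * * ?", i.e. once per day at a random
        // time with the hour drawn from 0-12.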
        return randomIntBetween(0, 59) + " " +
            randomIntBetween(0, 59) + " " +
            randomIntBetween(0, 12) + " * * ?";
    }

    @Override
    protected SnapshotLifecyclePolicy mutateInstance(SnapshotLifecyclePolicy instance) throws IOException {
        switch (between(0, 4)) {
            case 0:
                return new SnapshotLifecyclePolicy(instance.getId() + randomAlphaOfLength(2),
                    instance.getName(),
                    instance.getSchedule(),
                    instance.getRepository(),
                    instance.getConfig());
            case 1:
                return new SnapshotLifecyclePolicy(instance.getId(),
                    instance.getName() + randomAlphaOfLength(2),
                    instance.getSchedule(),
                    instance.getRepository(),
                    instance.getConfig());
            case 2:
                return new SnapshotLifecyclePolicy(instance.getId(),
                    instance.getName(),
                    randomValueOtherThan(instance.getSchedule(), SnapshotLifecyclePolicyTests::randomSchedule),
                    instance.getRepository(),
                    instance.getConfig());
            case 3:
                return new SnapshotLifecyclePolicy(instance.getId(),
                    instance.getName(),
                    instance.getSchedule(),
                    instance.getRepository() + randomAlphaOfLength(2),
                    instance.getConfig());
            case 4:
                Map<String, Object> newConfig = new HashMap<>();
                for (int i = 0; i < randomIntBetween(2, 5); i++) {
                    newConfig.put(randomAlphaOfLength(3), randomAlphaOfLength(3));
                }
                return new SnapshotLifecyclePolicy(instance.getId(),
                    instance.getName() + randomAlphaOfLength(2),
                    instance.getSchedule(),
                    instance.getRepository(),
                    newConfig);
            default:
                throw new AssertionError("failure, got illegal switch case");
        }
    }

    @Override
    protected Writeable.Reader<SnapshotLifecyclePolicy> instanceReader() {
        return SnapshotLifecyclePolicy::new;
    }
}

@@ -0,0 +1,339 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle;

import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.RepositoriesMetaData;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.ClusterServiceUtils;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.watcher.watch.ClockMock;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;

public class SnapshotLifecycleServiceTests extends ESTestCase {

    public void testGetJobId() {
        String id = randomAlphaOfLengthBetween(1, 10) + (randomBoolean() ? "" : randomLong());
        SnapshotLifecyclePolicy policy = createPolicy(id);
        long version = randomNonNegativeLong();
        SnapshotLifecyclePolicyMetadata meta = SnapshotLifecyclePolicyMetadata.builder()
            .setPolicy(policy)
            .setHeaders(Collections.emptyMap())
            .setVersion(version)
            .setModifiedDate(1)
            .build();
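        // Job ids are "<policy-id>-<policy-version>", so bumping a policy's version yields a new job id.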
        assertThat(SnapshotLifecycleService.getJobId(meta), equalTo(id + "-" + version));
    }

    public void testRepositoryExistenceForExistingRepo() {
        ClusterState state = ClusterState.builder(new ClusterName("cluster")).build();

        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
            () -> SnapshotLifecycleService.validateRepositoryExists("repo", state));

        assertThat(e.getMessage(), containsString("no such repository [repo]"));

        RepositoryMetaData repo = new RepositoryMetaData("repo", "fs", Settings.EMPTY);
        RepositoriesMetaData repoMeta = new RepositoriesMetaData(Collections.singletonList(repo));
        ClusterState stateWithRepo = ClusterState.builder(state)
            .metaData(MetaData.builder()
                .putCustom(RepositoriesMetaData.TYPE, repoMeta))
            .build();

        SnapshotLifecycleService.validateRepositoryExists("repo", stateWithRepo);
    }

    public void testRepositoryExistenceForMissingRepo() {
        ClusterState state = ClusterState.builder(new ClusterName("cluster")).build();

        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
            () -> SnapshotLifecycleService.validateRepositoryExists("repo", state));

        assertThat(e.getMessage(), containsString("no such repository [repo]"));
    }

    public void testNothingScheduledWhenNotRunning() {
        ClockMock clock = new ClockMock();
        SnapshotLifecyclePolicyMetadata initialPolicy = SnapshotLifecyclePolicyMetadata.builder()
            .setPolicy(createPolicy("initial", "*/1 * * * * ?"))
            .setHeaders(Collections.emptyMap())
            .setVersion(1)
            .setModifiedDate(1)
            .build();
        ClusterState initialState = createState(new SnapshotLifecycleMetadata(
            Collections.singletonMap(initialPolicy.getPolicy().getId(), initialPolicy), OperationMode.RUNNING));
        try (ThreadPool threadPool = new TestThreadPool("test");
             ClusterService clusterService = ClusterServiceUtils.createClusterService(initialState, threadPool);
             SnapshotLifecycleService sls = new SnapshotLifecycleService(Settings.EMPTY,
                 () -> new FakeSnapshotTask(e -> logger.info("triggered")), clusterService, clock)) {

            sls.offMaster();

            SnapshotLifecyclePolicyMetadata newPolicy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo", "*/1 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setVersion(2)
                .setModifiedDate(2)
                .build();
            Map<String, SnapshotLifecyclePolicyMetadata> policies = new HashMap<>();
            policies.put(newPolicy.getPolicy().getId(), newPolicy);
            ClusterState emptyState = createState(new SnapshotLifecycleMetadata(Collections.emptyMap(), OperationMode.RUNNING));
            ClusterState state = createState(new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING));

            sls.clusterChanged(new ClusterChangedEvent("1", state, emptyState));

            // Since the service does not think it is master, it should not be triggered or scheduled
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            sls.onMaster();
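            // onMaster appears to (re)schedule from the cluster service's current state, which still
            // only contains the "initial" policy, hence "initial-1" rather than "foo-2" below.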
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.singleton("initial-1")));

            state = createState(new SnapshotLifecycleMetadata(policies, OperationMode.STOPPING));
            sls.clusterChanged(new ClusterChangedEvent("2", state, emptyState));

            // Since the service is stopping, jobs should have been cancelled
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            state = createState(new SnapshotLifecycleMetadata(policies, OperationMode.STOPPED));
            sls.clusterChanged(new ClusterChangedEvent("3", state, emptyState));

            // Since the service is stopped, jobs should have been cancelled
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            threadPool.shutdownNow();
        }
    }

    /**
     * Test new policies getting scheduled correctly, updated policies also being scheduled,
     * and deleted policies having their schedules cancelled.
     */
    public void testPolicyCRUD() throws Exception {
        ClockMock clock = new ClockMock();
        final AtomicInteger triggerCount = new AtomicInteger(0);
        final AtomicReference<Consumer<SchedulerEngine.Event>> trigger = new AtomicReference<>(e -> triggerCount.incrementAndGet());
        try (ThreadPool threadPool = new TestThreadPool("test");
             ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool);
             SnapshotLifecycleService sls = new SnapshotLifecycleService(Settings.EMPTY,
                 () -> new FakeSnapshotTask(e -> trigger.get().accept(e)), clusterService, clock)) {

            sls.offMaster();
            SnapshotLifecycleMetadata snapMeta = new SnapshotLifecycleMetadata(Collections.emptyMap(), OperationMode.RUNNING);
            ClusterState previousState = createState(snapMeta);
            Map<String, SnapshotLifecyclePolicyMetadata> policies = new HashMap<>();

            SnapshotLifecyclePolicyMetadata policy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo", "*/1 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setModifiedDate(1)
                .build();
            policies.put(policy.getPolicy().getId(), policy);
            snapMeta = new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING);
            ClusterState state = createState(snapMeta);
            ClusterChangedEvent event = new ClusterChangedEvent("1", state, previousState);
            trigger.set(e -> {
                fail("trigger should not be invoked");
            });
            sls.clusterChanged(event);

            // Since the service does not think it is master, it should not be triggered or scheduled
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            // Change the service to think it's on the master node, events should be scheduled now
            sls.onMaster();
            trigger.set(e -> triggerCount.incrementAndGet());
            sls.clusterChanged(event);
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.singleton("foo-1")));

            assertBusy(() -> assertThat(triggerCount.get(), greaterThan(0)));

            clock.freeze();
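            // With the mock clock frozen, further triggers should only happen when the test explicitly
            // advances time via clock.fastForwardSeconds(...) below.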
            int currentCount = triggerCount.get();
            previousState = state;
            SnapshotLifecyclePolicyMetadata newPolicy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo", "*/1 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setVersion(2)
                .setModifiedDate(2)
                .build();
            policies.put(policy.getPolicy().getId(), newPolicy);
            state = createState(new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING));
            event = new ClusterChangedEvent("2", state, previousState);
            sls.clusterChanged(event);
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.singleton("foo-2")));

            trigger.set(e -> {
                // Make sure the job got updated
                assertThat(e.getJobName(), equalTo("foo-2"));
                triggerCount.incrementAndGet();
            });
            clock.fastForwardSeconds(1);

            assertBusy(() -> assertThat(triggerCount.get(), greaterThan(currentCount)));

            final int currentCount2 = triggerCount.get();
            previousState = state;
            // Create a state simulating the policy being deleted
            state = createState(new SnapshotLifecycleMetadata(Collections.emptyMap(), OperationMode.RUNNING));
            event = new ClusterChangedEvent("2", state, previousState);
            sls.clusterChanged(event);
            clock.fastForwardSeconds(2);

            // The existing job should be cancelled and no longer trigger
            assertThat(triggerCount.get(), equalTo(currentCount2));
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            // When the service is no longer master, all jobs should be automatically cancelled
            policy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo", "*/1 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setVersion(3)
                .setModifiedDate(1)
                .build();
            policies.put(policy.getPolicy().getId(), policy);
            snapMeta = new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING);
            previousState = state;
            state = createState(snapMeta);
            event = new ClusterChangedEvent("1", state, previousState);
            trigger.set(e -> triggerCount.incrementAndGet());
            sls.clusterChanged(event);
            clock.fastForwardSeconds(2);

            // Make sure at least one triggers and the job is scheduled
            assertBusy(() -> assertThat(triggerCount.get(), greaterThan(currentCount2)));
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.singleton("foo-3")));

            // Signify becoming non-master, the jobs should all be cancelled
            sls.offMaster();
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            threadPool.shutdownNow();
        }
    }

    /**
     * Tests that policy ids ending in numbers do not cause confusion in the way job ids
     * (policy id plus "-" plus version) are generated.
     */
    public void testPolicyNamesEndingInNumbers() throws Exception {
        ClockMock clock = new ClockMock();
        final AtomicInteger triggerCount = new AtomicInteger(0);
        final AtomicReference<Consumer<SchedulerEngine.Event>> trigger = new AtomicReference<>(e -> triggerCount.incrementAndGet());
        try (ThreadPool threadPool = new TestThreadPool("test");
             ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool);
             SnapshotLifecycleService sls = new SnapshotLifecycleService(Settings.EMPTY,
                 () -> new FakeSnapshotTask(e -> trigger.get().accept(e)), clusterService, clock)) {
            sls.onMaster();

            SnapshotLifecycleMetadata snapMeta = new SnapshotLifecycleMetadata(Collections.emptyMap(), OperationMode.RUNNING);
            ClusterState previousState = createState(snapMeta);
            Map<String, SnapshotLifecyclePolicyMetadata> policies = new HashMap<>();

            SnapshotLifecyclePolicyMetadata policy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo-2", "30 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setVersion(1)
                .setModifiedDate(1)
                .build();
            policies.put(policy.getPolicy().getId(), policy);
            snapMeta = new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING);
            ClusterState state = createState(snapMeta);
            ClusterChangedEvent event = new ClusterChangedEvent("1", state, previousState);
            sls.clusterChanged(event);

            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.singleton("foo-2-1")));

            previousState = state;
            SnapshotLifecyclePolicyMetadata secondPolicy = SnapshotLifecyclePolicyMetadata.builder()
                .setPolicy(createPolicy("foo-1", "45 * * * * ?"))
                .setHeaders(Collections.emptyMap())
                .setVersion(2)
                .setModifiedDate(1)
                .build();
            policies.put(secondPolicy.getPolicy().getId(), secondPolicy);
            snapMeta = new SnapshotLifecycleMetadata(policies, OperationMode.RUNNING);
            state = createState(snapMeta);
            event = new ClusterChangedEvent("2", state, previousState);
            sls.clusterChanged(event);

            assertThat(sls.getScheduler().scheduledJobIds(), containsInAnyOrder("foo-2-1", "foo-1-2"));

            sls.offMaster();
            assertThat(sls.getScheduler().scheduledJobIds(), equalTo(Collections.emptySet()));

            threadPool.shutdownNow();
        }
    }

    class FakeSnapshotTask extends SnapshotLifecycleTask {
        private final Consumer<SchedulerEngine.Event> onTriggered;

        FakeSnapshotTask(Consumer<SchedulerEngine.Event> onTriggered) {
            super(null, null, null);
            this.onTriggered = onTriggered;
        }

        @Override
        public void triggered(SchedulerEngine.Event event) {
            logger.info("--> fake snapshot task triggered");
            onTriggered.accept(event);
        }
    }

    public ClusterState createState(SnapshotLifecycleMetadata snapMeta) {
        MetaData metaData = MetaData.builder()
            .putCustom(SnapshotLifecycleMetadata.TYPE, snapMeta)
            .build();
        return ClusterState.builder(new ClusterName("cluster"))
            .metaData(metaData)
            .build();
    }

    public static SnapshotLifecyclePolicy createPolicy(String id) {
        return createPolicy(id, randomSchedule());
    }

    public static SnapshotLifecyclePolicy createPolicy(String id, String schedule) {
        Map<String, Object> config = new HashMap<>();
        config.put("ignore_unavailable", randomBoolean());
        List<String> indices = new ArrayList<>();
        indices.add("foo-*");
        indices.add(randomAlphaOfLength(4));
        config.put("indices", indices);
        return new SnapshotLifecyclePolicy(id, randomAlphaOfLength(4), schedule, randomAlphaOfLength(4), config);
    }

    private static String randomSchedule() {
        return randomIntBetween(0, 59) + " " +
            randomIntBetween(0, 59) + " " +
            randomIntBetween(0, 12) + " * * ?";
    }
}

@@ -0,0 +1,244 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.snapshotlifecycle;

import org.apache.lucene.util.SetOnce;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.TriFunction;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.test.ClusterServiceUtils;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.client.NoOpClient;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.indexlifecycle.OperationMode;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecycleMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicy;
import org.elasticsearch.xpack.core.snapshotlifecycle.SnapshotLifecyclePolicyMetadata;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryItem;
import org.elasticsearch.xpack.core.snapshotlifecycle.history.SnapshotHistoryStore;

import java.io.IOException;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.startsWith;

public class SnapshotLifecycleTaskTests extends ESTestCase {

    public void testGetSnapMetadata() {
        final String id = randomAlphaOfLength(4);
        final SnapshotLifecyclePolicyMetadata slpm = makePolicyMeta(id);
        final SnapshotLifecycleMetadata meta = new SnapshotLifecycleMetadata(Collections.singletonMap(id, slpm), OperationMode.RUNNING);

        final ClusterState state = ClusterState.builder(new ClusterName("test"))
            .metaData(MetaData.builder()
                .putCustom(SnapshotLifecycleMetadata.TYPE, meta)
                .build())
            .build();

        final Optional<SnapshotLifecyclePolicyMetadata> o =
            SnapshotLifecycleTask.getSnapPolicyMetadata(SnapshotLifecycleService.getJobId(slpm), state);

        assertTrue("the policy metadata should be retrieved from the cluster state", o.isPresent());
        assertThat(o.get(), equalTo(slpm));

        assertFalse(SnapshotLifecycleTask.getSnapPolicyMetadata("bad-jobid", state).isPresent());
    }

    public void testSkipCreatingSnapshotWhenJobDoesNotMatch() {
        final String id = randomAlphaOfLength(4);
        final SnapshotLifecyclePolicyMetadata slpm = makePolicyMeta(id);
        final SnapshotLifecycleMetadata meta = new SnapshotLifecycleMetadata(Collections.singletonMap(id, slpm), OperationMode.RUNNING);

        final ClusterState state = ClusterState.builder(new ClusterName("test"))
            .metaData(MetaData.builder()
                .putCustom(SnapshotLifecycleMetadata.TYPE, meta)
                .build())
            .build();

        final ThreadPool threadPool = new TestThreadPool("test");
        try (ClusterService clusterService = ClusterServiceUtils.createClusterService(state, threadPool);
             VerifyingClient client = new VerifyingClient(threadPool, (a, r, l) -> {
                 fail("should not have tried to take a snapshot");
                 return null;
             })) {
            SnapshotHistoryStore historyStore = new VerifyingHistoryStore(null, ZoneOffset.UTC,
                item -> fail("should not have tried to store an item"));

            SnapshotLifecycleTask task = new SnapshotLifecycleTask(client, clusterService, historyStore);

            // Trigger the event, but since the job name does not match, it should
            // not run the function to create a snapshot
            task.triggered(new SchedulerEngine.Event("nonexistent-job", System.currentTimeMillis(), System.currentTimeMillis()));
        }

        threadPool.shutdownNow();
    }

    public void testCreateSnapshotOnTrigger() {
        final String id = randomAlphaOfLength(4);
        final SnapshotLifecyclePolicyMetadata slpm = makePolicyMeta(id);
        final SnapshotLifecycleMetadata meta = new SnapshotLifecycleMetadata(Collections.singletonMap(id, slpm), OperationMode.RUNNING);

        final ClusterState state = ClusterState.builder(new ClusterName("test"))
            .metaData(MetaData.builder()
                .putCustom(SnapshotLifecycleMetadata.TYPE, meta)
                .build())
            .build();

        final ThreadPool threadPool = new TestThreadPool("test");
        final String createSnapResponse = "{" +
            " \"snapshot\" : {" +
            " \"snapshot\" : \"snapshot_1\"," +
            " \"uuid\" : \"bcP3ClgCSYO_TP7_FCBbBw\"," +
            " \"version_id\" : " + Version.CURRENT.id + "," +
            " \"version\" : \"" + Version.CURRENT + "\"," +
            " \"indices\" : [ ]," +
            " \"include_global_state\" : true," +
            " \"state\" : \"SUCCESS\"," +
            " \"start_time\" : \"2019-03-19T22:19:53.542Z\"," +
            " \"start_time_in_millis\" : 1553033993542," +
            " \"end_time\" : \"2019-03-19T22:19:53.567Z\"," +
            " \"end_time_in_millis\" : 1553033993567," +
            " \"duration_in_millis\" : 25," +
            " \"failures\" : [ ]," +
            " \"shards\" : {" +
            " \"total\" : 0," +
            " \"failed\" : 0," +
            " \"successful\" : 0" +
            " }" +
            " }" +
            "}";

        final AtomicBoolean clientCalled = new AtomicBoolean(false);
        final SetOnce<String> snapshotName = new SetOnce<>();
        try (ClusterService clusterService = ClusterServiceUtils.createClusterService(state, threadPool);
             // This verifying client will verify that we correctly invoked
             // client.admin().createSnapshot(...) with the appropriate
             // request. It also returns a mock real response
             VerifyingClient client = new VerifyingClient(threadPool,
                 (action, request, listener) -> {
                     assertFalse(clientCalled.getAndSet(true));
                     assertThat(action, instanceOf(CreateSnapshotAction.class));
                     assertThat(request, instanceOf(CreateSnapshotRequest.class));

                     CreateSnapshotRequest req = (CreateSnapshotRequest) request;

                     SnapshotLifecyclePolicy policy = slpm.getPolicy();
                     assertThat(req.snapshot(), startsWith(policy.getName() + "-"));
                     assertThat(req.repository(), equalTo(policy.getRepository()));
                     snapshotName.set(req.snapshot());
                     if (req.indices().length > 0) {
                         assertThat(Arrays.asList(req.indices()), equalTo(policy.getConfig().get("indices")));
                     }
                     boolean globalState = policy.getConfig().get("include_global_state") == null ||
                         Boolean.parseBoolean((String) policy.getConfig().get("include_global_state"));
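                     // include_global_state defaults to true when it is not present in the policy config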
                     assertThat(req.includeGlobalState(), equalTo(globalState));

                     try {
                         return CreateSnapshotResponse.fromXContent(createParser(JsonXContent.jsonXContent, createSnapResponse));
                     } catch (IOException e) {
                         fail("failed to parse snapshot response");
                         return null;
                     }
                 })) {
            final AtomicBoolean historyStoreCalled = new AtomicBoolean(false);
            SnapshotHistoryStore historyStore = new VerifyingHistoryStore(null, ZoneOffset.UTC,
                item -> {
                    assertFalse(historyStoreCalled.getAndSet(true));
                    final SnapshotLifecyclePolicy policy = slpm.getPolicy();
                    assertEquals(policy.getId(), item.getPolicyId());
                    assertEquals(policy.getRepository(), item.getRepository());
                    assertEquals(policy.getConfig(), item.getSnapshotConfiguration());
                    assertEquals(snapshotName.get(), item.getSnapshotName());
                });

            SnapshotLifecycleTask task = new SnapshotLifecycleTask(client, clusterService, historyStore);
            // Trigger the event with a matching job name for the policy
            task.triggered(new SchedulerEngine.Event(SnapshotLifecycleService.getJobId(slpm),
                System.currentTimeMillis(), System.currentTimeMillis()));

            assertTrue("snapshot should be triggered once", clientCalled.get());
            assertTrue("history store should be called once", historyStoreCalled.get());
        }

        threadPool.shutdownNow();
    }

    /**
     * A client that delegates to a verifying function for action/request/listener
     */
    public static class VerifyingClient extends NoOpClient {

        private final TriFunction<ActionType<?>, ActionRequest, ActionListener<?>, ActionResponse> verifier;

        VerifyingClient(ThreadPool threadPool,
                        TriFunction<ActionType<?>, ActionRequest, ActionListener<?>, ActionResponse> verifier) {
            super(threadPool);
            this.verifier = verifier;
        }

        @Override
        @SuppressWarnings("unchecked")
        protected <Request extends ActionRequest, Response extends ActionResponse> void doExecute(ActionType<Response> action,
                                                                                                   Request request,
                                                                                                   ActionListener<Response> listener) {
            listener.onResponse((Response) verifier.apply(action, request, listener));
        }
    }

    private SnapshotLifecyclePolicyMetadata makePolicyMeta(final String id) {
        SnapshotLifecyclePolicy policy = SnapshotLifecycleServiceTests.createPolicy(id);
        Map<String, String> headers = new HashMap<>();
        headers.put("X-Opaque-ID", randomAlphaOfLength(4));
        return SnapshotLifecyclePolicyMetadata.builder()
            .setPolicy(policy)
            .setHeaders(headers)
            .setVersion(1)
            .setModifiedDate(1)
            .build();
    }

    public static class VerifyingHistoryStore extends SnapshotHistoryStore {

        Consumer<SnapshotHistoryItem> verifier;

        public VerifyingHistoryStore(Client client, ZoneId timeZone, Consumer<SnapshotHistoryItem> verifier) {
            super(Settings.EMPTY, client, timeZone);
            this.verifier = verifier;
        }

        @Override
        public void putAsync(SnapshotHistoryItem item) {
            verifier.accept(item);
        }
    }
}

@@ -42,6 +42,7 @@ testClusters.integTest {
  setting 'xpack.security.enabled', 'true'
  setting 'xpack.ml.enabled', 'true'
  setting 'xpack.watcher.enabled', 'false'
  setting 'xpack.ilm.enabled', 'false'
  setting 'xpack.monitoring.enabled', 'false'
  setting 'xpack.security.authc.token.enabled', 'true'
  setting 'xpack.security.transport.ssl.enabled', 'true'

@@ -33,6 +33,7 @@ import org.elasticsearch.xpack.core.rollup.job.RollupIndexerJobStats;
import org.elasticsearch.xpack.core.rollup.job.RollupJob;
import org.elasticsearch.xpack.core.rollup.job.RollupJobConfig;
import org.elasticsearch.xpack.core.rollup.job.RollupJobStatus;
import org.elasticsearch.xpack.core.scheduler.CronSchedule;
import org.elasticsearch.xpack.core.scheduler.SchedulerEngine;
import org.elasticsearch.xpack.rollup.Rollup;

@@ -0,0 +1,20 @@
{
  "slm.delete_lifecycle": {
    "documentation": "https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api.html",
    "stability": "stable",
    "methods": [ "DELETE" ],
    "url": {
      "path": "/_slm/policy/{policy_id}",
      "paths": ["/_slm/policy/{policy_id}"],
      "parts": {
        "policy_id": {
          "type" : "string",
          "description" : "The id of the snapshot lifecycle policy to remove"
        }
      },
      "params": {
      }
    },
    "body": null
  }
}

@@ -0,0 +1,20 @@
{
  "slm.execute_lifecycle": {
    "documentation": "https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api.html",
    "stability": "stable",
    "methods": [ "PUT" ],
    "url": {
      "path": "/_slm/policy/{policy_id}/_execute",
      "paths": ["/_slm/policy/{policy_id}/_execute"],
      "parts": {
        "policy_id": {
          "type" : "string",
          "description" : "The id of the snapshot lifecycle policy to be executed"
        }
      },
      "params": {
      }
    },
    "body": null
  }
}

@@ -0,0 +1,20 @@
{
  "slm.get_lifecycle": {
    "documentation": "https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api.html",
    "stability": "stable",
    "methods": [ "GET" ],
    "url": {
      "path": "/_slm/policy/{policy_id}",
      "paths": ["/_slm/policy/{policy_id}", "/_slm/policy"],
      "parts": {
        "policy_id": {
          "type" : "string",
          "description" : "Comma-separated list of snapshot lifecycle policies to retrieve"
        }
      },
      "params": {
      }
    },
    "body": null
  }
}

@@ -0,0 +1,22 @@
{
  "slm.put_lifecycle": {
    "documentation": "https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api.html",
    "stability": "stable",
    "methods": [ "PUT" ],
    "url": {
      "path": "/_slm/policy/{policy_id}",
      "paths": ["/_slm/policy/{policy_id}"],
      "parts": {
        "policy_id": {
          "type" : "string",
          "description" : "The id of the snapshot lifecycle policy"
        }
      },
      "params": {
      }
    },
    "body": {
      "description": "The snapshot lifecycle policy definition to register"
    }
  }
}

@@ -15,5 +15,5 @@ setup:
  # This is fragile - it needs to be updated every time we add a new cluster/index privilege
  # I would much prefer we could just check that specific entries are in the array, but we don't have
  # an assertion for that
  - length: { "cluster" : 26 }
  - length: { "cluster" : 28 }
  - length: { "index" : 16 }