Initial implementation of Snapshot/Restore API

Closes #3826
Igor Motov 2013-11-08 19:20:43 -05:00
parent 81928bd323
commit 510397aecd
140 changed files with 16482 additions and 65 deletions

View File

@@ -27,5 +27,7 @@ include::modules/thrift.asciidoc[]
include::modules/transport.asciidoc[]
include::modules/snapshots.asciidoc[]

View File

@@ -0,0 +1,183 @@
[[modules-snapshots]]
== Snapshot And Restore
The snapshot and restore module allows you to create snapshots of individual indices or of an entire cluster into a remote
repository. At the time of the initial release only the shared file system repository is supported.
[float]
=== Repositories
Before any snapshot or restore operation can be performed, a snapshot repository must be registered in
Elasticsearch. The following command registers a shared file system repository with the name `my_backup` that
will use the location `/mount/backups/my_backup` to store snapshots.
[source,js]
-----------------------------------
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
"type": "fs",
"settings": {
"location": "/mount/backups/my_backup",
"compress": true
}
}'
-----------------------------------
Once a repository is registered, its information can be obtained using the following command:
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot/my_backup?pretty'
-----------------------------------
[source,js]
-----------------------------------
{
"my_backup" : {
"type" : "fs",
"settings" : {
"compress" : "false",
"location" : "/mount/backups/my_backup"
}
}
}
-----------------------------------
If a repository name is not specified, or `_all` is used as the repository name, Elasticsearch will return information about
all repositories currently registered in the cluster:
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot'
-----------------------------------
or
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot/_all'
-----------------------------------
[float]
===== Shared File System Repository
The shared file system repository (`"type": "fs"`) uses a shared file system to store snapshots. The path
specified in the `location` parameter should point to the same location on the shared file system and be accessible
on all data and master nodes. The following settings are supported:
[horizontal]
`location`:: Location of the snapshots. Mandatory.
`compress`:: Turns on compression of the snapshot files. Defaults to `true`.
`concurrent_streams`:: Throttles the number of streams (per node) performing the snapshot operation. Defaults to `5`.
`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by
using size value notation, i.e. `1g`, `10m`, `5k`. Defaults to `null` (unlimited chunk size).
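For example, the following command registers the same repository with compression enabled, snapshot files broken into 10mb chunks,
and at most two concurrent snapshot streams per node (the chunk size and stream count shown here are purely illustrative):
[source,js]
-----------------------------------
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/mount/backups/my_backup",
        "compress": true,
        "chunk_size": "10m",
        "concurrent_streams": 2
    }
}'
-----------------------------------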
[float]
=== Snapshot
A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
command:
[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
-----------------------------------
The `wait_for_completion` parameter specifies whether the request should return immediately or wait for snapshot
completion. By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed
by specifying the list of indices in the body of the snapshot request.
[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1" -d '{
"indices": "index_1,index_2",
"ignore_indices": "missing",
"include_global_state": false
}'
-----------------------------------
The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
supports <<search-multi-index-type,multi index syntax>>. The snapshot request also supports the
`ignore_indices` option. Setting it to `missing` will cause indices that do not exist to be ignored during snapshot
creation. By default, when the `ignore_indices` option is not set and an index is missing, the snapshot request will fail.
By setting `include_global_state` to false it's possible to prevent the cluster global state from being stored as part of
the snapshot.
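Since the `indices` parameter supports multi index syntax, wildcard patterns can also be used to select the indices to snapshot.
For example, the following request (the snapshot name `snapshot_2` is only an example) would snapshot every index whose name
starts with `index_`:
[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_2" -d '{
    "indices": "index_*",
    "ignore_indices": "missing"
}'
-----------------------------------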
The index snapshot process is incremental. In the process of making the index snapshot, Elasticsearch analyses
the list of the index files that are already stored in the repository and copies only files that were created or
changed since the last snapshot. That allows multiple snapshots to be preserved in the repository in a compact form.
The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to be
executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the index
at the moment the snapshot was created, so no records that were added to the index after the snapshot process started
will be present in the snapshot.
Besides creating a copy of each index, the snapshot process can also store global cluster metadata, which includes persistent
cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of
the snapshot.
Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being
created, this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation
filtering. Once the snapshot of the shard is finished, Elasticsearch will be able to move the shard to another node according
to the current allocation filtering settings and rebalancing algorithm.
Once a snapshot is created, information about this snapshot can be obtained using the following command:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1"
-----------------------------------
All snapshots currently stored in the repository can be listed using the following command:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/_all"
-----------------------------------
A snapshot can be deleted from the repository using the following command:
[source,shell]
-----------------------------------
$ curl -XDELETE "localhost:9200/_snapshot/my_backup/snapshot_1"
-----------------------------------
When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being
created, the snapshotting process will be aborted and all files created as part of it will be cleaned up. Therefore,
the delete snapshot operation can be used to cancel long-running snapshot operations that were started by mistake.
[float]
=== Restore
A snapshot can be restored using the following command:
[source,shell]
-----------------------------------
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"
-----------------------------------
By default, all indices in the snapshot as well as the cluster state are restored. It's possible to select which indices
should be restored, and to prevent the global cluster state from being restored, by using the `indices` and
`include_global_state` options in the restore request body. The list of indices supports
<<search-multi-index-type,multi index syntax>>. The `rename_pattern` and `rename_replacement` options can also be used to
rename indices on restore using a regular expression that supports referencing the original text as explained
http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
[source,js]
-----------------------------------
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -d '{
"indices": "index_1,index_2",
"ignore_indices": "missing",
"include_global_state": false,
"rename_pattern": "index_(.)+",
"rename_replacement": "restored_index_$1"
}'
-----------------------------------
The restore operation can be performed on a functioning cluster. However, an existing index can only be restored if it's
closed. The restore operation automatically opens restored indices if they were closed and creates new indices if they
didn't exist in the cluster. If the cluster state is restored, the restored templates that don't currently exist in the
cluster are added and existing templates with the same name are replaced by the restored templates. The restored
persistent settings are added to the existing persistent settings.
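For example, if an index named `index_1` already exists in the cluster, it has to be closed before the copy of it stored in the
snapshot can be restored in its place (the index name here is only illustrative):
[source,js]
-----------------------------------
$ curl -XPOST "localhost:9200/index_1/_close"
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -d '{
    "indices": "index_1"
}'
-----------------------------------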

View File

@@ -32,12 +32,26 @@ import org.elasticsearch.action.admin.cluster.node.shutdown.NodesShutdownAction;
import org.elasticsearch.action.admin.cluster.node.shutdown.TransportNodesShutdownAction;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsAction;
import org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryAction;
import org.elasticsearch.action.admin.cluster.repositories.delete.TransportDeleteRepositoryAction;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesAction;
import org.elasticsearch.action.admin.cluster.repositories.get.TransportGetRepositoriesAction;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryAction;
import org.elasticsearch.action.admin.cluster.repositories.put.TransportPutRepositoryAction;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteAction;
import org.elasticsearch.action.admin.cluster.reroute.TransportClusterRerouteAction;
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsAction;
import org.elasticsearch.action.admin.cluster.settings.TransportClusterUpdateSettingsAction;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsAction;
import org.elasticsearch.action.admin.cluster.shards.TransportClusterSearchShardsAction;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.delete.TransportDeleteSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsAction;
import org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.restore.TransportRestoreSnapshotAction;
import org.elasticsearch.action.admin.cluster.state.ClusterStateAction;
import org.elasticsearch.action.admin.cluster.state.TransportClusterStateAction;
import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksAction;
@@ -191,6 +205,13 @@ public class ActionModule extends AbstractModule {
registerAction(ClusterRerouteAction.INSTANCE, TransportClusterRerouteAction.class);
registerAction(ClusterSearchShardsAction.INSTANCE, TransportClusterSearchShardsAction.class);
registerAction(PendingClusterTasksAction.INSTANCE, TransportPendingClusterTasksAction.class);
registerAction(PutRepositoryAction.INSTANCE, TransportPutRepositoryAction.class);
registerAction(GetRepositoriesAction.INSTANCE, TransportGetRepositoriesAction.class);
registerAction(DeleteRepositoryAction.INSTANCE, TransportDeleteRepositoryAction.class);
registerAction(GetSnapshotsAction.INSTANCE, TransportGetSnapshotsAction.class);
registerAction(DeleteSnapshotAction.INSTANCE, TransportDeleteSnapshotAction.class);
registerAction(CreateSnapshotAction.INSTANCE, TransportCreateSnapshotAction.class);
registerAction(RestoreSnapshotAction.INSTANCE, TransportRestoreSnapshotAction.class);
registerAction(IndicesStatsAction.INSTANCE, TransportIndicesStatsAction.class);
registerAction(IndicesStatusAction.INSTANCE, TransportIndicesStatusAction.class);

View File

@@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Unregister repository action
*/
public class DeleteRepositoryAction extends ClusterAction<DeleteRepositoryRequest, DeleteRepositoryResponse, DeleteRepositoryRequestBuilder> {
public static final DeleteRepositoryAction INSTANCE = new DeleteRepositoryAction();
public static final String NAME = "cluster/repository/delete";
private DeleteRepositoryAction() {
super(NAME);
}
@Override
public DeleteRepositoryResponse newResponse() {
return new DeleteRepositoryResponse();
}
@Override
public DeleteRepositoryRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new DeleteRepositoryRequestBuilder(client);
}
}

View File

@@ -0,0 +1,93 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
/**
* Unregister repository request.
* <p/>
* The unregister repository command just unregisters the repository. No data is deleted from the repository.
*/
public class DeleteRepositoryRequest extends AcknowledgedRequest<DeleteRepositoryRequest> {
private String name;
DeleteRepositoryRequest() {
}
/**
* Constructs a new unregister repository request with the provided name.
*
* @param name name of the repository
*/
public DeleteRepositoryRequest(String name) {
this.name = name;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (name == null) {
validationException = addValidationError("name is missing", validationException);
}
return validationException;
}
/**
* Sets the name of the repository to unregister.
*
* @param name name of the repository
*/
public DeleteRepositoryRequest name(String name) {
this.name = name;
return this;
}
/**
* The name of the repository.
*
* @return the name of the repository
*/
public String name() {
return this.name;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
name = in.readString();
readTimeout(in, null);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(name);
writeTimeout(out, null);
}
}

View File

@@ -0,0 +1,64 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
/**
* Builder for unregister repository request
*/
public class DeleteRepositoryRequestBuilder extends AcknowledgedRequestBuilder<DeleteRepositoryRequest, DeleteRepositoryResponse, DeleteRepositoryRequestBuilder> {
/**
* Constructs unregister repository request builder
*
* @param clusterAdminClient cluster admin client
*/
public DeleteRepositoryRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new DeleteRepositoryRequest());
}
/**
* Constructs unregister repository request builder with specified repository name
*
* @param clusterAdminClient cluster admin client
*/
public DeleteRepositoryRequestBuilder(ClusterAdminClient clusterAdminClient, String name) {
super((InternalClusterAdminClient) clusterAdminClient, new DeleteRepositoryRequest(name));
}
/**
* Sets the repository name
*
* @param name the repository name
*/
public DeleteRepositoryRequestBuilder setName(String name) {
request.name(name);
return this;
}
@Override
protected void doExecute(ActionListener<DeleteRepositoryResponse> listener) {
((ClusterAdminClient) client).deleteRepository(request, listener);
}
}

View File

@@ -0,0 +1,52 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
* Unregister repository response
*/
public class DeleteRepositoryResponse extends AcknowledgedResponse {
DeleteRepositoryResponse() {
}
DeleteRepositoryResponse(boolean acknowledged) {
super(acknowledged);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
readAcknowledged(in, null);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
writeAcknowledged(out, null);
}
}

View File

@@ -0,0 +1,92 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for unregister repository operation
*/
public class TransportDeleteRepositoryAction extends TransportMasterNodeOperationAction<DeleteRepositoryRequest, DeleteRepositoryResponse> {
private final RepositoriesService repositoriesService;
@Inject
public TransportDeleteRepositoryAction(Settings settings, TransportService transportService, ClusterService clusterService,
RepositoriesService repositoriesService, ThreadPool threadPool) {
super(settings, transportService, clusterService, threadPool);
this.repositoriesService = repositoriesService;
}
@Override
protected String executor() {
return ThreadPool.Names.SAME;
}
@Override
protected String transportAction() {
return DeleteRepositoryAction.NAME;
}
@Override
protected DeleteRepositoryRequest newRequest() {
return new DeleteRepositoryRequest();
}
@Override
protected DeleteRepositoryResponse newResponse() {
return new DeleteRepositoryResponse();
}
@Override
protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final DeleteRepositoryRequest request, ClusterState state, final ActionListener<DeleteRepositoryResponse> listener) throws ElasticSearchException {
repositoriesService.unregisterRepository(
new RepositoriesService.UnregisterRepositoryRequest("delete_repository [" + request.name() + "]", request.name())
.masterNodeTimeout(request.masterNodeTimeout()).ackTimeout(request.timeout()),
new ActionListener<RepositoriesService.UnregisterRepositoryResponse>() {
@Override
public void onResponse(RepositoriesService.UnregisterRepositoryResponse unregisterRepositoryResponse) {
listener.onResponse(new DeleteRepositoryResponse(unregisterRepositoryResponse.isAcknowledged()));
}
@Override
public void onFailure(Throwable e) {
listener.onFailure(e);
}
});
}
}

View File

@@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.get;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Get repositories action
*/
public class GetRepositoriesAction extends ClusterAction<GetRepositoriesRequest, GetRepositoriesResponse, GetRepositoriesRequestBuilder> {
public static final GetRepositoriesAction INSTANCE = new GetRepositoriesAction();
public static final String NAME = "cluster/repository/get";
private GetRepositoriesAction() {
super(NAME);
}
@Override
public GetRepositoriesResponse newResponse() {
return new GetRepositoriesResponse();
}
@Override
public GetRepositoriesRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new GetRepositoriesRequestBuilder(client);
}
}

View File

@@ -0,0 +1,97 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.get;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
/**
* Get repository request
*/
public class GetRepositoriesRequest extends MasterNodeOperationRequest<GetRepositoriesRequest> {
private String[] repositories = Strings.EMPTY_ARRAY;
GetRepositoriesRequest() {
}
/**
* Constructs a new get repositories request with a list of repositories.
* <p/>
* If the list of repositories is empty or it contains a single element "_all", all registered repositories
* are returned.
*
* @param repositories list of repositories
*/
public GetRepositoriesRequest(String[] repositories) {
this.repositories = repositories;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (repositories == null) {
validationException = addValidationError("repositories is null", validationException);
}
return validationException;
}
/**
* The names of the repositories.
*
* @return list of repositories
*/
public String[] repositories() {
return this.repositories;
}
/**
* Sets the list of repositories.
* <p/>
* If the list of repositories is empty or it contains a single element "_all", all registered repositories
* are returned.
*
* @param repositories list of repositories
* @return this request
*/
public GetRepositoriesRequest repositories(String[] repositories) {
this.repositories = repositories;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
repositories = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArray(repositories);
}
}

View File

@@ -0,0 +1,78 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.get;
import com.google.common.collect.ObjectArrays;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
/**
* Get repository request builder
*/
public class GetRepositoriesRequestBuilder extends MasterNodeOperationRequestBuilder<GetRepositoriesRequest, GetRepositoriesResponse, GetRepositoriesRequestBuilder> {
/**
* Creates new get repository request builder
*
* @param clusterAdminClient cluster admin client
*/
public GetRepositoriesRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new GetRepositoriesRequest());
}
/**
* Creates new get repository request builder
*
* @param clusterAdminClient cluster admin client
* @param repositories list of repositories to get
*/
public GetRepositoriesRequestBuilder(ClusterAdminClient clusterAdminClient, String... repositories) {
super((InternalClusterAdminClient) clusterAdminClient, new GetRepositoriesRequest(repositories));
}
/**
* Sets list of repositories to get
*
* @param repositories list of repositories
* @return builder
*/
public GetRepositoriesRequestBuilder setRepositories(String... repositories) {
request.repositories(repositories);
return this;
}
/**
* Adds repositories to the list of repositories to get
*
* @param repositories list of repositories
* @return builder
*/
public GetRepositoriesRequestBuilder addRepositories(String... repositories) {
request.repositories(ObjectArrays.concat(request.repositories(), repositories, String.class));
return this;
}
@Override
protected void doExecute(ActionListener<GetRepositoriesResponse> listener) {
((ClusterAdminClient) client).getRepositories(request, listener);
}
}

View File

@@ -0,0 +1,92 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import java.io.IOException;
import java.util.Iterator;
/**
* Get repositories response
*/
public class GetRepositoriesResponse extends ActionResponse implements Iterable<RepositoryMetaData> {
private ImmutableList<RepositoryMetaData> repositories = ImmutableList.of();
GetRepositoriesResponse() {
}
GetRepositoriesResponse(ImmutableList<RepositoryMetaData> repositories) {
this.repositories = repositories;
}
/**
* List of repositories to return
*
* @return list of repositories
*/
public ImmutableList<RepositoryMetaData> repositories() {
return repositories;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
int size = in.readVInt();
ImmutableList.Builder<RepositoryMetaData> repositoryListBuilder = ImmutableList.builder();
for (int j = 0; j < size; j++) {
repositoryListBuilder.add(new RepositoryMetaData(
in.readString(),
in.readString(),
ImmutableSettings.readSettingsFromStream(in))
);
}
repositories = repositoryListBuilder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(repositories.size());
for (RepositoryMetaData repository : repositories) {
out.writeString(repository.name());
out.writeString(repository.type());
ImmutableSettings.writeSettingsToStream(repository.settings(), out);
}
}
/**
* Iterator over the repositories data
*
* @return iterator over the repositories data
*/
@Override
public Iterator<RepositoryMetaData> iterator() {
return repositories.iterator();
}
}

View File

@@ -0,0 +1,102 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.RepositoriesMetaData;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.repositories.RepositoryMissingException;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for get repositories operation
*/
public class TransportGetRepositoriesAction extends TransportMasterNodeOperationAction<GetRepositoriesRequest, GetRepositoriesResponse> {
@Inject
public TransportGetRepositoriesAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool) {
super(settings, transportService, clusterService, threadPool);
}
@Override
protected String executor() {
return ThreadPool.Names.MANAGEMENT;
}
@Override
protected String transportAction() {
return GetRepositoriesAction.NAME;
}
@Override
protected GetRepositoriesRequest newRequest() {
return new GetRepositoriesRequest();
}
@Override
protected GetRepositoriesResponse newResponse() {
return new GetRepositoriesResponse();
}
@Override
protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final GetRepositoriesRequest request, ClusterState state, final ActionListener<GetRepositoriesResponse> listener) throws ElasticSearchException {
MetaData metaData = state.metaData();
RepositoriesMetaData repositories = metaData.custom(RepositoriesMetaData.TYPE);
if (request.repositories().length == 0 || (request.repositories().length == 1 && "_all".equals(request.repositories()[0]))) {
if (repositories != null) {
listener.onResponse(new GetRepositoriesResponse(repositories.repositories()));
} else {
listener.onResponse(new GetRepositoriesResponse(ImmutableList.<RepositoryMetaData>of()));
}
} else {
if (repositories != null) {
ImmutableList.Builder<RepositoryMetaData> repositoryListBuilder = ImmutableList.builder();
for (String repository : request.repositories()) {
RepositoryMetaData repositoryMetaData = repositories.repository(repository);
if (repositoryMetaData == null) {
listener.onFailure(new RepositoryMissingException(repository));
return;
}
repositoryListBuilder.add(repositoryMetaData);
}
listener.onResponse(new GetRepositoriesResponse(repositoryListBuilder.build()));
} else {
listener.onFailure(new RepositoryMissingException(request.repositories()[0]));
}
}
}
}

View File

@@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Register repository action
*/
public class PutRepositoryAction extends ClusterAction<PutRepositoryRequest, PutRepositoryResponse, PutRepositoryRequestBuilder> {
public static final PutRepositoryAction INSTANCE = new PutRepositoryAction();
public static final String NAME = "cluster/repository/put";
private PutRepositoryAction() {
super(NAME);
}
@Override
public PutRepositoryResponse newResponse() {
return new PutRepositoryResponse();
}
@Override
public PutRepositoryRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new PutRepositoryRequestBuilder(client);
}
}

View File

@@ -0,0 +1,282 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import java.io.IOException;
import java.util.Map;
import static org.elasticsearch.action.ValidateActions.addValidationError;
import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;
import static org.elasticsearch.common.settings.ImmutableSettings.readSettingsFromStream;
import static org.elasticsearch.common.settings.ImmutableSettings.writeSettingsToStream;
/**
* Register repository request.
* <p/>
* Registers a repository with given name, type and settings. If the repository with the same name already
* exists in the cluster, the new repository will replace the existing repository.
*/
public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryRequest> {
private String name;
private String type;
private Settings settings = EMPTY_SETTINGS;
PutRepositoryRequest() {
}
/**
* Constructs a new put repository request with the provided name.
*/
public PutRepositoryRequest(String name) {
this.name = name;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (name == null) {
validationException = addValidationError("name is missing", validationException);
}
if (type == null) {
validationException = addValidationError("type is missing", validationException);
}
return validationException;
}
/**
* Sets the name of the repository.
*
* @param name repository name
*/
public PutRepositoryRequest name(String name) {
this.name = name;
return this;
}
/**
* The name of the repository.
*
* @return repository name
*/
public String name() {
return this.name;
}
/**
* The type of the repository
* <p/>
* <ul>
* <li>"fs" - shared filesystem repository</li>
* </ul>
*
* @param type repository type
* @return this request
*/
public PutRepositoryRequest type(String type) {
this.type = type;
return this;
}
/**
* Returns repository type
*
* @return repository type
*/
public String type() {
return this.type;
}
/**
* Sets the repository settings
*
* @param settings repository settings
* @return this request
*/
public PutRepositoryRequest settings(Settings settings) {
this.settings = settings;
return this;
}
/**
* Sets the repository settings
*
* @param settings repository settings
* @return this request
*/
public PutRepositoryRequest settings(Settings.Builder settings) {
this.settings = settings.build();
return this;
}
/**
* Sets the repository settings.
*
* @param source repository settings in json, yaml or properties format
* @return this request
*/
public PutRepositoryRequest settings(String source) {
this.settings = ImmutableSettings.settingsBuilder().loadFromSource(source).build();
return this;
}
/**
* Sets the repository settings.
*
* @param source repository settings
* @return this request
*/
public PutRepositoryRequest settings(Map<String, Object> source) {
try {
XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
/**
* Returns repository settings
*
* @return repository settings
*/
public Settings settings() {
return this.settings;
}
/**
* Parses repository definition.
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(XContentBuilder repositoryDefinition) {
return source(repositoryDefinition.bytes());
}
/**
* Parses repository definition.
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(Map repositoryDefinition) {
Map<String, Object> source = repositoryDefinition;
for (Map.Entry<String, Object> entry : source.entrySet()) {
String name = entry.getKey();
if (name.equals("type")) {
type(entry.getValue().toString());
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
throw new ElasticSearchIllegalArgumentException("Malformed settings section, should include an inner object");
}
settings((Map<String, Object>) entry.getValue());
}
}
return this;
}
/**
* Parses repository definition.
* JSON, Smile and YAML formats are supported
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(String repositoryDefinition) {
try {
return source(XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source [" + repositoryDefinition + "]", e);
}
}
/**
* Parses repository definition.
* JSON, Smile and YAML formats are supported
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(byte[] repositoryDefinition) {
return source(repositoryDefinition, 0, repositoryDefinition.length);
}
/**
* Parses repository definition.
* JSON, Smile and YAML formats are supported
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(byte[] repositoryDefinition, int offset, int length) {
try {
return source(XContentFactory.xContent(repositoryDefinition, offset, length).createParser(repositoryDefinition, offset, length).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source", e);
}
}
/**
* Parses repository definition.
* JSON, Smile and YAML formats are supported
*
* @param repositoryDefinition repository definition
*/
public PutRepositoryRequest source(BytesReference repositoryDefinition) {
try {
return source(XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse template source", e);
}
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
name = in.readString();
type = in.readString();
settings = readSettingsFromStream(in);
readTimeout(in, null);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(name);
out.writeString(type);
writeSettingsToStream(settings, out);
writeTimeout(out, null);
}
}

View File

@@ -0,0 +1,124 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
import org.elasticsearch.common.settings.Settings;
import java.util.Map;
/**
* Register repository request builder
*/
public class PutRepositoryRequestBuilder extends AcknowledgedRequestBuilder<PutRepositoryRequest, PutRepositoryResponse, PutRepositoryRequestBuilder> {
/**
* Constructs register repository request
*
* @param clusterAdminClient cluster admin client
*/
public PutRepositoryRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new PutRepositoryRequest());
}
/**
* Constructs register repository request for the repository with a given name
*
* @param clusterAdminClient cluster admin client
* @param name repository name
*/
public PutRepositoryRequestBuilder(ClusterAdminClient clusterAdminClient, String name) {
super((InternalClusterAdminClient) clusterAdminClient, new PutRepositoryRequest(name));
}
/**
* Sets the repository name
*
* @param name repository name
* @return this builder
*/
public PutRepositoryRequestBuilder setName(String name) {
request.name(name);
return this;
}
/**
* Sets the repository type
*
* @param type repository type
* @return this builder
*/
public PutRepositoryRequestBuilder setType(String type) {
request.type(type);
return this;
}
/**
* Sets the repository settings
*
* @param settings repository settings
* @return this builder
*/
public PutRepositoryRequestBuilder setSettings(Settings settings) {
request.settings(settings);
return this;
}
/**
* Sets the repository settings
*
* @param settings repository settings builder
* @return this builder
*/
public PutRepositoryRequestBuilder setSettings(Settings.Builder settings) {
request.settings(settings);
return this;
}
/**
* Sets the repository settings in Json, Yaml or properties format
*
* @param source repository settings
* @return this builder
*/
public PutRepositoryRequestBuilder setSettings(String source) {
request.settings(source);
return this;
}
/**
* Sets the repository settings
*
* @param source repository settings
* @return this builder
*/
public PutRepositoryRequestBuilder setSettings(Map<String, Object> source) {
request.settings(source);
return this;
}
@Override
protected void doExecute(ActionListener<PutRepositoryResponse> listener) {
((ClusterAdminClient) client).putRepository(request, listener);
}
}

View File

@@ -0,0 +1,53 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
* Register repository response
*/
public class PutRepositoryResponse extends AcknowledgedResponse {
PutRepositoryResponse() {
}
PutRepositoryResponse(boolean acknowledged) {
super(acknowledged);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
readAcknowledged(in, null);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
writeAcknowledged(out, null);
}
}

View File

@@ -0,0 +1,94 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for register repository operation
*/
public class TransportPutRepositoryAction extends TransportMasterNodeOperationAction<PutRepositoryRequest, PutRepositoryResponse> {
private final RepositoriesService repositoriesService;
@Inject
public TransportPutRepositoryAction(Settings settings, TransportService transportService, ClusterService clusterService,
RepositoriesService repositoriesService, ThreadPool threadPool) {
super(settings, transportService, clusterService, threadPool);
this.repositoriesService = repositoriesService;
}
@Override
protected String executor() {
return ThreadPool.Names.SAME;
}
@Override
protected String transportAction() {
return PutRepositoryAction.NAME;
}
@Override
protected PutRepositoryRequest newRequest() {
return new PutRepositoryRequest();
}
@Override
protected PutRepositoryResponse newResponse() {
return new PutRepositoryResponse();
}
@Override
protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final PutRepositoryRequest request, ClusterState state, final ActionListener<PutRepositoryResponse> listener) throws ElasticSearchException {
repositoriesService.registerRepository(new RepositoriesService.RegisterRepositoryRequest("put_repository [" + request.name() + "]", request.name(), request.type())
.settings(request.settings())
.masterNodeTimeout(request.masterNodeTimeout())
.ackTimeout(request.timeout()), new ActionListener<RepositoriesService.RegisterRepositoryResponse>() {
@Override
public void onResponse(RepositoriesService.RegisterRepositoryResponse response) {
listener.onResponse(new PutRepositoryResponse(response.isAcknowledged()));
}
@Override
public void onFailure(Throwable e) {
listener.onFailure(e);
}
});
}
}

View File

@@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Create snapshot action
*/
public class CreateSnapshotAction extends ClusterAction<CreateSnapshotRequest, CreateSnapshotResponse, CreateSnapshotRequestBuilder> {
public static final CreateSnapshotAction INSTANCE = new CreateSnapshotAction();
public static final String NAME = "cluster/snapshot/create";
private CreateSnapshotAction() {
super(NAME);
}
@Override
public CreateSnapshotResponse newResponse() {
return new CreateSnapshotResponse();
}
@Override
public CreateSnapshotRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new CreateSnapshotRequestBuilder(client);
}
}

View File

@@ -0,0 +1,455 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.IgnoreIndices;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.action.ValidateActions.addValidationError;
import static org.elasticsearch.common.Strings.EMPTY_ARRAY;
import static org.elasticsearch.common.Strings.hasLength;
import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;
import static org.elasticsearch.common.settings.ImmutableSettings.readSettingsFromStream;
import static org.elasticsearch.common.settings.ImmutableSettings.writeSettingsToStream;
/**
* Create snapshot request
* <p/>
     * The only mandatory parameters are the repository and snapshot names. Both names have to satisfy the following requirements:
* <ul>
* <li>be a non-empty string</li>
* <li>must not contain whitespace (tabs or spaces)</li>
* <li>must not contain comma (',')</li>
* <li>must not contain hash sign ('#')</li>
     * <li>must not start with underscore ('_')</li>
* <li>must be lowercase</li>
* <li>must not contain invalid file name characters {@link org.elasticsearch.common.Strings#INVALID_FILENAME_CHARS} </li>
* </ul>
*/
public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnapshotRequest> {
private String snapshot;
private String repository;
private String[] indices = EMPTY_ARRAY;
private IgnoreIndices ignoreIndices = IgnoreIndices.DEFAULT;
private Settings settings = EMPTY_SETTINGS;
private boolean includeGlobalState = true;
private boolean waitForCompletion;
CreateSnapshotRequest() {
}
/**
     * Constructs a new create snapshot request with the provided snapshot and repository names
*
* @param repository repository name
* @param snapshot snapshot name
*/
public CreateSnapshotRequest(String repository, String snapshot) {
this.snapshot = snapshot;
this.repository = repository;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (snapshot == null) {
validationException = addValidationError("snapshot is missing", validationException);
}
if (repository == null) {
validationException = addValidationError("repository is missing", validationException);
}
if (indices == null) {
validationException = addValidationError("indices is null", validationException);
}
for (String index : indices) {
if (index == null) {
validationException = addValidationError("index is null", validationException);
break;
}
}
if (ignoreIndices == null) {
validationException = addValidationError("ignoreIndices is null", validationException);
}
if (settings == null) {
validationException = addValidationError("settings is null", validationException);
}
return validationException;
}
/**
* Sets the snapshot name
*
* @param snapshot snapshot name
*/
public CreateSnapshotRequest snapshot(String snapshot) {
this.snapshot = snapshot;
return this;
}
/**
* The snapshot name
*
* @return snapshot name
*/
public String snapshot() {
return this.snapshot;
}
/**
* Sets repository name
*
     * @param repository repository name
* @return this request
*/
public CreateSnapshotRequest repository(String repository) {
this.repository = repository;
return this;
}
/**
* Returns repository name
*
* @return repository name
*/
public String repository() {
return this.repository;
}
/**
* Sets a list of indices that should be included into the snapshot
* <p/>
     * The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
     * prefix "test" except index "test42". Aliases are supported. An empty list or {"_all"} will snapshot all open
     * indices in the cluster.
     *
     * @param indices list of indices
* @return this request
*/
public CreateSnapshotRequest indices(String... indices) {
this.indices = indices;
return this;
}
/**
* Sets a list of indices that should be included into the snapshot
* <p/>
     * The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
     * prefix "test" except index "test42". Aliases are supported. An empty list or {"_all"} will snapshot all open
     * indices in the cluster.
     *
     * @param indices list of indices
* @return this request
*/
public CreateSnapshotRequest indices(List<String> indices) {
this.indices = indices.toArray(new String[indices.size()]);
return this;
}
/**
     * Returns the list of indices that should be included into the snapshot
*
* @return list of indices
*/
public String[] indices() {
return indices;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @return the desired behaviour regarding indices to ignore
*/
public IgnoreIndices ignoreIndices() {
return ignoreIndices;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @param ignoreIndices the desired behaviour regarding indices to ignore
* @return this request
*/
public CreateSnapshotRequest ignoreIndices(IgnoreIndices ignoreIndices) {
this.ignoreIndices = ignoreIndices;
return this;
}
/**
* If set to true the request should wait for the snapshot completion before returning.
*
     * @param waitForCompletion true if the request should wait for the snapshot completion before returning
* @return this request
*/
public CreateSnapshotRequest waitForCompletion(boolean waitForCompletion) {
this.waitForCompletion = waitForCompletion;
return this;
}
/**
* Returns true if the request should wait for the snapshot completion before returning
*
* @return true if the request should wait for completion
*/
public boolean waitForCompletion() {
return waitForCompletion;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this request
*/
public CreateSnapshotRequest settings(Settings settings) {
this.settings = settings;
return this;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this request
*/
public CreateSnapshotRequest settings(Settings.Builder settings) {
this.settings = settings.build();
return this;
}
/**
* Sets repository-specific snapshot settings in JSON, YAML or properties format
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this request
*/
public CreateSnapshotRequest settings(String source) {
this.settings = ImmutableSettings.settingsBuilder().loadFromSource(source).build();
return this;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this request
*/
public CreateSnapshotRequest settings(Map<String, Object> source) {
try {
XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
/**
* Returns repository-specific snapshot settings
*
* @return repository-specific snapshot settings
*/
public Settings settings() {
return this.settings;
}
/**
* Set to true if global state should be stored as part of the snapshot
*
* @param includeGlobalState true if global state should be stored
* @return this request
*/
public CreateSnapshotRequest includeGlobalState(boolean includeGlobalState) {
this.includeGlobalState = includeGlobalState;
return this;
}
/**
* Returns true if global state should be stored as part of the snapshot
* @return true if global state should be stored as part of the snapshot
*/
public boolean includeGlobalState() {
return includeGlobalState;
}
/**
* Parses snapshot definition.
*
* @param source snapshot definition
* @return this request
*/
public CreateSnapshotRequest source(XContentBuilder source) {
return source(source.bytes());
}
/**
* Parses snapshot definition.
*
* @param source snapshot definition
* @return this request
*/
public CreateSnapshotRequest source(Map source) {
for (Map.Entry<String, Object> entry : ((Map<String, Object>) source).entrySet()) {
String name = entry.getKey();
if (name.equals("indices")) {
if (entry.getValue() instanceof String) {
indices(Strings.splitStringByCommaToArray((String) entry.getValue()));
} else if (entry.getValue() instanceof ArrayList) {
indices((ArrayList<String>) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed indices section, should be an array of strings");
}
} else if (name.equals("ignore_indices")) {
if (entry.getValue() instanceof String) {
ignoreIndices(IgnoreIndices.fromString((String) entry.getValue()));
} else {
throw new ElasticSearchIllegalArgumentException("malformed ignore_indices");
}
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
                throw new ElasticSearchIllegalArgumentException("malformed settings section, should be an inner object");
}
settings((Map<String, Object>) entry.getValue());
} else if (name.equals("include_global_state")) {
if (!(entry.getValue() instanceof Boolean)) {
throw new ElasticSearchIllegalArgumentException("malformed include_global_state, should be boolean");
}
includeGlobalState((Boolean) entry.getValue());
}
}
return this;
}
/**
* Parses snapshot definition. JSON, YAML and properties formats are supported
*
* @param source snapshot definition
* @return this request
*/
public CreateSnapshotRequest source(String source) {
if (hasLength(source)) {
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (Exception e) {
                throw new ElasticSearchIllegalArgumentException("failed to parse snapshot source [" + source + "]", e);
}
}
return this;
}
/**
* Parses snapshot definition. JSON, YAML and properties formats are supported
*
* @param source snapshot definition
* @return this request
*/
public CreateSnapshotRequest source(byte[] source) {
return source(source, 0, source.length);
}
/**
* Parses snapshot definition. JSON, YAML and properties formats are supported
*
* @param source snapshot definition
* @param offset offset
* @param length length
* @return this request
*/
public CreateSnapshotRequest source(byte[] source, int offset, int length) {
if (length > 0) {
try {
return source(XContentFactory.xContent(source, offset, length).createParser(source, offset, length).mapOrderedAndClose());
} catch (IOException e) {
                throw new ElasticSearchIllegalArgumentException("failed to parse snapshot source", e);
}
}
return this;
}
/**
* Parses snapshot definition. JSON, YAML and properties formats are supported
*
* @param source snapshot definition
* @return this request
*/
public CreateSnapshotRequest source(BytesReference source) {
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse snapshot source", e);
}
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
snapshot = in.readString();
repository = in.readString();
indices = in.readStringArray();
ignoreIndices = IgnoreIndices.fromId(in.readByte());
settings = readSettingsFromStream(in);
includeGlobalState = in.readBoolean();
waitForCompletion = in.readBoolean();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(snapshot);
out.writeString(repository);
out.writeStringArray(indices);
out.writeByte(ignoreIndices.id());
writeSettingsToStream(settings, out);
out.writeBoolean(includeGlobalState);
out.writeBoolean(waitForCompletion);
}
}
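
For clarity, a short sketch of the two construction paths this class supports: fluent setters for Java callers, and the source(...) parsers that the REST layer feeds with a request body. The wrapper class below is illustrative only; everything it calls is defined in the file above.

import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;

public class CreateSnapshotRequestExamples {
    // Fluent construction: snapshot "snapshot_1" of two indices in repository "my_backup",
    // skipping the global cluster state and waiting for completion.
    public static CreateSnapshotRequest fluent() {
        return new CreateSnapshotRequest("my_backup", "snapshot_1")
                .indices("index_1", "index_2")
                .includeGlobalState(false)
                .waitForCompletion(true);
    }

    // The same index selection parsed from a JSON body, the way a REST handler would pass it in.
    public static CreateSnapshotRequest fromBody() {
        return new CreateSnapshotRequest("my_backup", "snapshot_1")
                .source("{\"indices\": \"index_1,index_2\", \"include_global_state\": false}");
    }
}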

View File

@ -0,0 +1,182 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.IgnoreIndices;
import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
import org.elasticsearch.common.settings.Settings;
import java.util.Map;
/**
* Create snapshot request builder
*/
public class CreateSnapshotRequestBuilder extends MasterNodeOperationRequestBuilder<CreateSnapshotRequest, CreateSnapshotResponse, CreateSnapshotRequestBuilder> {
/**
* Constructs a new create snapshot request builder
*
* @param clusterAdminClient cluster admin client
*/
public CreateSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new CreateSnapshotRequest());
}
/**
* Constructs a new create snapshot request builder with specified repository and snapshot names
*
* @param clusterAdminClient cluster admin client
* @param repository repository name
* @param snapshot snapshot name
*/
public CreateSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient, String repository, String snapshot) {
super((InternalClusterAdminClient) clusterAdminClient, new CreateSnapshotRequest(repository, snapshot));
}
/**
* Sets the snapshot name
*
* @param snapshot snapshot name
* @return this builder
*/
public CreateSnapshotRequestBuilder setSnapshot(String snapshot) {
request.snapshot(snapshot);
return this;
}
/**
* Sets the repository name
*
* @param repository repository name
* @return this builder
*/
public CreateSnapshotRequestBuilder setRepository(String repository) {
request.repository(repository);
return this;
}
/**
* Sets a list of indices that should be included into the snapshot
* <p/>
     * The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
     * prefix "test" except index "test42". Aliases are supported. An empty list or {"_all"} will snapshot all open
     * indices in the cluster.
     *
     * @param indices list of indices
* @return this builder
*/
public CreateSnapshotRequestBuilder setIndices(String... indices) {
request.indices(indices);
return this;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @param ignoreIndices the desired behaviour regarding indices to ignore
* @return this builder
*/
public CreateSnapshotRequestBuilder setIgnoreIndices(IgnoreIndices ignoreIndices) {
request.ignoreIndices(ignoreIndices);
return this;
}
/**
* If set to true the request should wait for the snapshot completion before returning.
*
     * @param waitForCompletion true if the request should wait for the snapshot completion before returning
* @return this builder
*/
public CreateSnapshotRequestBuilder setWaitForCompletion(boolean waitForCompletion) {
request.waitForCompletion(waitForCompletion);
return this;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this builder
*/
public CreateSnapshotRequestBuilder setSettings(Settings settings) {
request.settings(settings);
return this;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this builder
*/
public CreateSnapshotRequestBuilder setSettings(Settings.Builder settings) {
request.settings(settings);
return this;
}
/**
* Sets repository-specific snapshot settings in YAML, JSON or properties format
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this builder
*/
public CreateSnapshotRequestBuilder setSettings(String source) {
request.settings(source);
return this;
}
/**
* Sets repository-specific snapshot settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this builder
*/
public CreateSnapshotRequestBuilder setSettings(Map<String, Object> settings) {
request.settings(settings);
return this;
}
/**
* Set to true if snapshot should include global cluster state
*
* @param includeGlobalState true if snapshot should include global cluster state
* @return this builder
*/
public CreateSnapshotRequestBuilder setIncludeGlobalState(boolean includeGlobalState) {
request.includeGlobalState(includeGlobalState);
return this;
}
@Override
protected void doExecute(ActionListener<CreateSnapshotResponse> listener) {
((ClusterAdminClient) client).createSnapshot(request, listener);
}
}
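
A usage sketch for the builder above, assuming ClusterAdminClient exposes a prepareCreateSnapshot entry point as this commit suggests; that method and the blocking execute().actionGet() call are assumptions, while the setter and response methods come from the classes in this change.

import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.snapshots.SnapshotInfo;

public class CreateSnapshotExample {
    // Creates the snapshot and waits for it, so the response carries the final SnapshotInfo.
    public static SnapshotInfo snapshotAndWait(Client client) {
        CreateSnapshotResponse response = client.admin().cluster()
                .prepareCreateSnapshot("my_backup", "snapshot_1")
                .setIndices("index_1", "index_2")
                .setWaitForCompletion(true)
                .execute().actionGet();
        return response.getSnapshotInfo(); // non-null here because the request waited for completion
    }
}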

View File

@ -0,0 +1,103 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.snapshots.SnapshotInfo;
import java.io.IOException;
/**
* Create snapshot response
*/
public class CreateSnapshotResponse extends ActionResponse implements ToXContent {
@Nullable
private SnapshotInfo snapshotInfo;
CreateSnapshotResponse(@Nullable SnapshotInfo snapshotInfo) {
this.snapshotInfo = snapshotInfo;
}
CreateSnapshotResponse() {
}
/**
* Returns snapshot information if snapshot was completed by the time this method returned or null otherwise.
*
* @return snapshot information or null
*/
public SnapshotInfo getSnapshotInfo() {
return snapshotInfo;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
snapshotInfo = SnapshotInfo.readOptionalSnapshotInfo(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeOptionalStreamable(snapshotInfo);
}
/**
* Returns HTTP status
* <p/>
* <ul>
     * <li>{@link RestStatus#ACCEPTED} if the snapshot is still in progress</li>
     * <li>{@link RestStatus#OK} if the snapshot was successful or partially successful</li>
     * <li>{@link RestStatus#INTERNAL_SERVER_ERROR} if the snapshot failed completely</li>
     * </ul>
     *
     * @return HTTP status of the snapshot operation
*/
public RestStatus status() {
if (snapshotInfo == null) {
return RestStatus.ACCEPTED;
}
return snapshotInfo.status();
}
static final class Fields {
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString ACCEPTED = new XContentBuilderString("accepted");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
if (snapshotInfo != null) {
builder.field(Fields.SNAPSHOT);
snapshotInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
}
return builder;
}
}

View File

@ -0,0 +1,118 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for create snapshot operation
*/
public class TransportCreateSnapshotAction extends TransportMasterNodeOperationAction<CreateSnapshotRequest, CreateSnapshotResponse> {
private final SnapshotsService snapshotsService;
@Inject
public TransportCreateSnapshotAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, SnapshotsService snapshotsService) {
super(settings, transportService, clusterService, threadPool);
this.snapshotsService = snapshotsService;
}
@Override
protected String executor() {
return ThreadPool.Names.SNAPSHOT;
}
@Override
protected String transportAction() {
return CreateSnapshotAction.NAME;
}
@Override
protected CreateSnapshotRequest newRequest() {
return new CreateSnapshotRequest();
}
@Override
protected CreateSnapshotResponse newResponse() {
return new CreateSnapshotResponse();
}
@Override
protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final CreateSnapshotRequest request, ClusterState state, final ActionListener<CreateSnapshotResponse> listener) throws ElasticSearchException {
SnapshotsService.SnapshotRequest snapshotRequest =
new SnapshotsService.SnapshotRequest("create_snapshot[" + request.snapshot() + "]", request.snapshot(), request.repository())
.indices(request.indices())
.ignoreIndices(request.ignoreIndices())
.settings(request.settings())
.includeGlobalState(request.includeGlobalState())
.masterNodeTimeout(request.masterNodeTimeout());
snapshotsService.createSnapshot(snapshotRequest, new SnapshotsService.CreateSnapshotListener() {
@Override
public void onResponse() {
if (request.waitForCompletion()) {
snapshotsService.addListener(new SnapshotsService.SnapshotCompletionListener() {
SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshot());
@Override
public void onSnapshotCompletion(SnapshotId snapshotId, SnapshotInfo snapshot) {
if (this.snapshotId.equals(snapshotId)) {
listener.onResponse(new CreateSnapshotResponse(snapshot));
snapshotsService.removeListener(this);
}
}
@Override
public void onSnapshotFailure(SnapshotId snapshotId, Throwable t) {
if (this.snapshotId.equals(snapshotId)) {
listener.onFailure(t);
snapshotsService.removeListener(this);
}
}
});
} else {
listener.onResponse(new CreateSnapshotResponse());
}
}
@Override
public void onFailure(Throwable t) {
listener.onFailure(t);
}
});
}
}

View File

@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Delete snapshot action
*/
public class DeleteSnapshotAction extends ClusterAction<DeleteSnapshotRequest, DeleteSnapshotResponse, DeleteSnapshotRequestBuilder> {
public static final DeleteSnapshotAction INSTANCE = new DeleteSnapshotAction();
public static final String NAME = "cluster/snapshot/delete";
private DeleteSnapshotAction() {
super(NAME);
}
@Override
public DeleteSnapshotResponse newResponse() {
return new DeleteSnapshotResponse();
}
@Override
public DeleteSnapshotRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new DeleteSnapshotRequestBuilder(client);
}
}

View File

@ -0,0 +1,129 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
/**
* Delete snapshot request
* <p/>
* Delete snapshot request removes the snapshot record from the repository and cleans up all
* files that are associated with this particular snapshot. All files that are shared with
* at least one other existing snapshot are left intact.
*/
public class DeleteSnapshotRequest extends MasterNodeOperationRequest<DeleteSnapshotRequest> {
private String repository;
private String snapshot;
/**
* Constructs a new delete snapshots request
*/
public DeleteSnapshotRequest() {
}
/**
* Constructs a new delete snapshots request with repository and snapshot name
*
* @param repository repository name
* @param snapshot snapshot name
*/
public DeleteSnapshotRequest(String repository, String snapshot) {
this.repository = repository;
this.snapshot = snapshot;
}
/**
* Constructs a new delete snapshots request with repository name
*
* @param repository repository name
*/
public DeleteSnapshotRequest(String repository) {
this.repository = repository;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (repository == null) {
validationException = addValidationError("repository is missing", validationException);
}
if (snapshot == null) {
validationException = addValidationError("snapshot is missing", validationException);
}
return validationException;
}
public DeleteSnapshotRequest repository(String repository) {
this.repository = repository;
return this;
}
/**
* Returns repository name
*
* @return repository name
*/
public String repository() {
return this.repository;
}
/**
     * Returns snapshot name
     *
     * @return snapshot name
*/
public String snapshot() {
return this.snapshot;
}
/**
* Sets snapshot name
*
* @return this request
*/
public DeleteSnapshotRequest snapshot(String snapshot) {
this.snapshot = snapshot;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
repository = in.readString();
snapshot = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(repository);
out.writeString(snapshot);
}
}

View File

@ -0,0 +1,78 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
/**
* Delete snapshot request builder
*/
public class DeleteSnapshotRequestBuilder extends MasterNodeOperationRequestBuilder<DeleteSnapshotRequest, DeleteSnapshotResponse, DeleteSnapshotRequestBuilder> {
/**
* Constructs delete snapshot request builder
*
* @param clusterAdminClient cluster admin client
*/
public DeleteSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new DeleteSnapshotRequest());
}
/**
* Constructs delete snapshot request builder with specified repository and snapshot names
*
* @param clusterAdminClient cluster admin client
* @param repository repository name
* @param snapshot snapshot name
*/
public DeleteSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient, String repository, String snapshot) {
super((InternalClusterAdminClient) clusterAdminClient, new DeleteSnapshotRequest(repository, snapshot));
}
/**
* Sets the repository name
*
* @param repository repository name
* @return this builder
*/
public DeleteSnapshotRequestBuilder setRepository(String repository) {
request.repository(repository);
return this;
}
/**
* Sets the snapshot name
*
* @param snapshot snapshot name
* @return this builder
*/
public DeleteSnapshotRequestBuilder setSnapshot(String snapshot) {
request.snapshot(snapshot);
return this;
}
@Override
protected void doExecute(ActionListener<DeleteSnapshotResponse> listener) {
((ClusterAdminClient) client).deleteSnapshot(request, listener);
}
}
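
A corresponding usage sketch, assuming a prepareDeleteSnapshot entry point on ClusterAdminClient; beyond that, only the methods shown on the builder and response classes in this commit are relied on.

import org.elasticsearch.client.Client;

public class DeleteSnapshotExample {
    // Removes "snapshot_1" from the "my_backup" repository. Files shared with
    // other snapshots in the repository are left intact by the delete operation.
    public static boolean deleteSnapshot(Client client) {
        return client.admin().cluster()
                .prepareDeleteSnapshot("my_backup", "snapshot_1")
                .execute().actionGet()
                .isAcknowledged();
    }
}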

View File

@ -0,0 +1,52 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
* Delete snapshot response
*/
public class DeleteSnapshotResponse extends AcknowledgedResponse {
DeleteSnapshotResponse() {
}
DeleteSnapshotResponse(boolean acknowledged) {
super(acknowledged);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
readAcknowledged(in, null);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
writeAcknowledged(out, null);
}
}

View File

@ -0,0 +1,89 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for delete snapshot operation
*/
public class TransportDeleteSnapshotAction extends TransportMasterNodeOperationAction<DeleteSnapshotRequest, DeleteSnapshotResponse> {
private final SnapshotsService snapshotsService;
@Inject
public TransportDeleteSnapshotAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, SnapshotsService snapshotsService) {
super(settings, transportService, clusterService, threadPool);
this.snapshotsService = snapshotsService;
}
@Override
protected String executor() {
return ThreadPool.Names.GENERIC;
}
@Override
protected String transportAction() {
return DeleteSnapshotAction.NAME;
}
@Override
protected DeleteSnapshotRequest newRequest() {
return new DeleteSnapshotRequest();
}
@Override
protected DeleteSnapshotResponse newResponse() {
return new DeleteSnapshotResponse();
}
@Override
protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final DeleteSnapshotRequest request, ClusterState state, final ActionListener<DeleteSnapshotResponse> listener) throws ElasticSearchException {
        SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshot());
        snapshotsService.deleteSnapshot(snapshotId, new SnapshotsService.DeleteSnapshotListener() {
@Override
public void onResponse() {
listener.onResponse(new DeleteSnapshotResponse(true));
}
@Override
public void onFailure(Throwable t) {
listener.onFailure(t);
}
});
}
}

View File

@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.get;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Get snapshots action
*/
public class GetSnapshotsAction extends ClusterAction<GetSnapshotsRequest, GetSnapshotsResponse, GetSnapshotsRequestBuilder> {
public static final GetSnapshotsAction INSTANCE = new GetSnapshotsAction();
public static final String NAME = "cluster/snapshot/get";
private GetSnapshotsAction() {
super(NAME);
}
@Override
public GetSnapshotsResponse newResponse() {
return new GetSnapshotsResponse();
}
@Override
public GetSnapshotsRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new GetSnapshotsRequestBuilder(client);
}
}

View File

@ -0,0 +1,126 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.get;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
/**
* Get snapshot request
*/
public class GetSnapshotsRequest extends MasterNodeOperationRequest<GetSnapshotsRequest> {
private String repository;
private String[] snapshots = Strings.EMPTY_ARRAY;
GetSnapshotsRequest() {
}
/**
* Constructs a new get snapshots request with given repository name and list of snapshots
*
* @param repository repository name
* @param snapshots list of snapshots
*/
public GetSnapshotsRequest(String repository, String[] snapshots) {
this.repository = repository;
this.snapshots = snapshots;
}
/**
* Constructs a new get snapshots request with given repository name
*
* @param repository repository name
*/
public GetSnapshotsRequest(String repository) {
this.repository = repository;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (repository == null) {
validationException = addValidationError("repository is missing", validationException);
}
return validationException;
}
/**
* Sets repository name
*
* @param repository repository name
* @return this request
*/
public GetSnapshotsRequest repository(String repository) {
this.repository = repository;
return this;
}
/**
* Returns repository name
*
* @return repository name
*/
public String repository() {
return this.repository;
}
/**
* Returns the names of the snapshots.
*
* @return the names of snapshots
*/
public String[] snapshots() {
return this.snapshots;
}
/**
* Sets the list of snapshots to be returned
*
     * @param snapshots list of snapshots
* @return this request
*/
public GetSnapshotsRequest snapshots(String[] snapshots) {
this.snapshots = snapshots;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
repository = in.readString();
snapshots = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(repository);
out.writeStringArray(snapshots);
}
}

View File

@ -0,0 +1,89 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.get;
import com.google.common.collect.ObjectArrays;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
/**
* Get snapshots request builder
*/
public class GetSnapshotsRequestBuilder extends MasterNodeOperationRequestBuilder<GetSnapshotsRequest, GetSnapshotsResponse, GetSnapshotsRequestBuilder> {
/**
* Constructs the new get snapshot request
*
* @param clusterAdminClient cluster admin client
*/
public GetSnapshotsRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new GetSnapshotsRequest());
}
/**
* Constructs the new get snapshot request with specified repository
*
* @param clusterAdminClient cluster admin client
* @param repository repository name
*/
public GetSnapshotsRequestBuilder(ClusterAdminClient clusterAdminClient, String repository) {
super((InternalClusterAdminClient) clusterAdminClient, new GetSnapshotsRequest(repository));
}
/**
* Sets the repository name
*
* @param repository repository name
* @return this builder
*/
public GetSnapshotsRequestBuilder setRepository(String repository) {
request.repository(repository);
return this;
}
/**
* Sets list of snapshots to return
*
* @param snapshots list of snapshots
* @return this builder
*/
public GetSnapshotsRequestBuilder setSnapshots(String... snapshots) {
request.snapshots(snapshots);
return this;
}
/**
* Adds additional snapshots to the list of snapshots to return
*
* @param snapshots additional snapshots
* @return this builder
*/
public GetSnapshotsRequestBuilder addSnapshots(String... snapshots) {
request.snapshots(ObjectArrays.concat(request.snapshots(), snapshots, String.class));
return this;
}
@Override
protected void doExecute(ActionListener<GetSnapshotsResponse> listener) {
((ClusterAdminClient) client).getSnapshots(request, listener);
}
}
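
A usage sketch for listing snapshots, assuming a prepareGetSnapshots entry point on ClusterAdminClient; leaving the snapshot list empty asks the service for every snapshot in the repository, as TransportGetSnapshotsAction below shows.

import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.snapshots.SnapshotInfo;

public class GetSnapshotsExample {
    // Prints one line per snapshot stored in the "my_backup" repository.
    public static void listSnapshots(Client client) {
        GetSnapshotsResponse response = client.admin().cluster()
                .prepareGetSnapshots("my_backup")
                .execute().actionGet();
        for (SnapshotInfo snapshot : response.getSnapshots()) {
            System.out.println(snapshot);
        }
    }
}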

View File

@ -0,0 +1,90 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.snapshots.SnapshotInfo;
import java.io.IOException;
/**
* Get snapshots response
*/
public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
private ImmutableList<SnapshotInfo> snapshots = ImmutableList.of();
GetSnapshotsResponse() {
}
GetSnapshotsResponse(ImmutableList<SnapshotInfo> snapshots) {
this.snapshots = snapshots;
}
/**
* Returns the list of snapshots
*
* @return the list of snapshots
*/
public ImmutableList<SnapshotInfo> getSnapshots() {
return snapshots;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
int size = in.readVInt();
ImmutableList.Builder<SnapshotInfo> builder = ImmutableList.builder();
for (int i = 0; i < size; i++) {
builder.add(SnapshotInfo.readSnapshotInfo(in));
}
snapshots = builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(snapshots.size());
for (SnapshotInfo snapshotInfo : snapshots) {
snapshotInfo.writeTo(out);
}
}
static final class Fields {
static final XContentBuilderString SNAPSHOTS = new XContentBuilderString("snapshots");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
for (SnapshotInfo snapshotInfo : snapshots) {
snapshotInfo.toXContent(builder, params);
}
builder.endArray();
return builder;
}
}

View File

@ -0,0 +1,101 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
 * Transport action for get snapshots operation
*/
public class TransportGetSnapshotsAction extends TransportMasterNodeOperationAction<GetSnapshotsRequest, GetSnapshotsResponse> {
private final SnapshotsService snapshotsService;
@Inject
public TransportGetSnapshotsAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, SnapshotsService snapshotsService) {
super(settings, transportService, clusterService, threadPool);
this.snapshotsService = snapshotsService;
}
@Override
protected String executor() {
return ThreadPool.Names.SNAPSHOT;
}
@Override
protected String transportAction() {
return GetSnapshotsAction.NAME;
}
@Override
protected GetSnapshotsRequest newRequest() {
return new GetSnapshotsRequest();
}
@Override
protected GetSnapshotsResponse newResponse() {
return new GetSnapshotsResponse();
}
@Override
protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) throws ElasticSearchException {
SnapshotId[] snapshotIds = new SnapshotId[request.snapshots().length];
for (int i = 0; i < snapshotIds.length; i++) {
snapshotIds[i] = new SnapshotId(request.repository(), request.snapshots()[i]);
}
try {
ImmutableList.Builder<SnapshotInfo> snapshotInfoBuilder = ImmutableList.builder();
if (snapshotIds.length > 0) {
for (SnapshotId snapshotId : snapshotIds) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));
}
} else {
ImmutableList<Snapshot> snapshots = snapshotsService.snapshots(request.repository());
for (Snapshot snapshot : snapshots) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshot));
}
}
listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder.build()));
} catch (Throwable t) {
listener.onFailure(t);
}
}
}

View File

@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.action.admin.cluster.ClusterAction;
import org.elasticsearch.client.ClusterAdminClient;
/**
* Restore snapshot action
*/
public class RestoreSnapshotAction extends ClusterAction<RestoreSnapshotRequest, RestoreSnapshotResponse, RestoreSnapshotRequestBuilder> {
public static final RestoreSnapshotAction INSTANCE = new RestoreSnapshotAction();
public static final String NAME = "cluster/snapshot/restore";
private RestoreSnapshotAction() {
super(NAME);
}
@Override
public RestoreSnapshotResponse newResponse() {
return new RestoreSnapshotResponse();
}
@Override
public RestoreSnapshotRequestBuilder newRequestBuilder(ClusterAdminClient client) {
return new RestoreSnapshotRequestBuilder(client);
}
}

View File

@ -0,0 +1,522 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.IgnoreIndices;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.action.ValidateActions.addValidationError;
import static org.elasticsearch.common.Strings.hasLength;
import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;
import static org.elasticsearch.common.settings.ImmutableSettings.readSettingsFromStream;
import static org.elasticsearch.common.settings.ImmutableSettings.writeSettingsToStream;
/**
* Restore snapshot request
*/
public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSnapshotRequest> {
private String snapshot;
private String repository;
private String[] indices = Strings.EMPTY_ARRAY;
private IgnoreIndices ignoreIndices = IgnoreIndices.DEFAULT;
private String renamePattern;
private String renameReplacement;
private boolean waitForCompletion;
private boolean includeGlobalState = true;
private Settings settings = EMPTY_SETTINGS;
RestoreSnapshotRequest() {
}
/**
     * Constructs a new restore snapshot request with the provided repository and snapshot names.
*
* @param repository repository name
* @param snapshot snapshot name
*/
public RestoreSnapshotRequest(String repository, String snapshot) {
this.snapshot = snapshot;
this.repository = repository;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (snapshot == null) {
            validationException = addValidationError("snapshot is missing", validationException);
}
if (repository == null) {
validationException = addValidationError("repository is missing", validationException);
}
if (indices == null) {
validationException = addValidationError("indices are missing", validationException);
}
if (ignoreIndices == null) {
validationException = addValidationError("ignoreIndices is missing", validationException);
}
if (settings == null) {
validationException = addValidationError("settings are missing", validationException);
}
return validationException;
}
/**
* Sets the name of the snapshot.
*
* @param snapshot snapshot name
* @return this request
*/
public RestoreSnapshotRequest snapshot(String snapshot) {
this.snapshot = snapshot;
return this;
}
/**
* Returns the name of the snapshot.
*
* @return snapshot name
*/
public String snapshot() {
return this.snapshot;
}
/**
* Sets repository name
*
* @param repository repository name
* @return this request
*/
public RestoreSnapshotRequest repository(String repository) {
this.repository = repository;
return this;
}
/**
* Returns repository name
*
* @return repository name
*/
public String repository() {
return this.repository;
}
/**
* Sets the list of indices that should be restored from snapshot
* <p/>
* The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
* the prefix "test" except the index "test42". Aliases are not supported. An empty list or {"_all"} will restore all
* open indices in the snapshot.
*
* @param indices list of indices
* @return this request
*/
public RestoreSnapshotRequest indices(String... indices) {
this.indices = indices;
return this;
}
/**
* Sets the list of indices that should be restored from snapshot
* <p/>
* The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
* the prefix "test" except the index "test42". Aliases are not supported. An empty list or {"_all"} will restore all
* open indices in the snapshot.
*
* @param indices list of indices
* @return this request
*/
public RestoreSnapshotRequest indices(List<String> indices) {
this.indices = indices.toArray(new String[indices.size()]);
return this;
}
/**
* Returns list of indices that should be restored from snapshot
*
* @return list of indices
*/
public String[] indices() {
return indices;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @return the desired behaviour regarding indices to ignore
*/
public IgnoreIndices ignoreIndices() {
return ignoreIndices;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @param ignoreIndices the desired behaviour regarding indices to ignore
* @return this request
*/
public RestoreSnapshotRequest ignoreIndices(IgnoreIndices ignoreIndices) {
this.ignoreIndices = ignoreIndices;
return this;
}
/**
* Sets rename pattern that should be applied to restored indices.
* <p/>
* Indices that match the rename pattern will be renamed according to {@link #renameReplacement(String)}. The
* rename pattern is applied according to {@link java.util.regex.Matcher#appendReplacement(StringBuffer, String)}.
* The request will fail if two or more indices would be renamed into the same name.
*
* @param renamePattern rename pattern
* @return this request
*/
public RestoreSnapshotRequest renamePattern(String renamePattern) {
this.renamePattern = renamePattern;
return this;
}
/**
* Returns rename pattern
*
* @return rename pattern
*/
public String renamePattern() {
return renamePattern;
}
/**
* Sets rename replacement
* <p/>
* See {@link #renamePattern(String)} for more information.
*
* @param renameReplacement rename replacement
* @return this request
*/
public RestoreSnapshotRequest renameReplacement(String renameReplacement) {
this.renameReplacement = renameReplacement;
return this;
}
/**
* Returns rename replacement
*
* @return rename replacement
*/
public String renameReplacement() {
return renameReplacement;
}
/**
* If this parameter is set to true the operation will wait for completion of the restore process before returning.
*
* @param waitForCompletion if true the operation will wait for completion
* @return this request
*/
public RestoreSnapshotRequest waitForCompletion(boolean waitForCompletion) {
this.waitForCompletion = waitForCompletion;
return this;
}
/**
* Returns wait for completion setting
*
* @return true if the operation will wait for completion
*/
public boolean waitForCompletion() {
return waitForCompletion;
}
/**
* Sets repository-specific restore settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this request
*/
public RestoreSnapshotRequest settings(Settings settings) {
this.settings = settings;
return this;
}
/**
* Sets repository-specific restore settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this request
*/
public RestoreSnapshotRequest settings(Settings.Builder settings) {
this.settings = settings.build();
return this;
}
/**
* Sets repository-specific restore settings in JSON, YAML or properties format
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this request
*/
public RestoreSnapshotRequest settings(String source) {
this.settings = ImmutableSettings.settingsBuilder().loadFromSource(source).build();
return this;
}
/**
* Sets repository-specific restore settings
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this request
*/
public RestoreSnapshotRequest settings(Map<String, Object> source) {
try {
XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
/**
* Returns repository-specific restore settings
*
* @return restore settings
*/
public Settings settings() {
return this.settings;
}
/**
* If set to true the restore procedure will restore global cluster state.
* <p/>
* The global cluster state includes persistent settings and index template definitions.
*
* @param includeGlobalState true if global state should be restored from the snapshot
* @return this request
*/
public RestoreSnapshotRequest includeGlobalState(boolean includeGlobalState) {
this.includeGlobalState = includeGlobalState;
return this;
}
/**
* Returns true if global state should be restored from this snapshot
*
* @return true if global state should be restored
*/
public boolean includeGlobalState() {
return includeGlobalState;
}
/**
* Parses restore definition
*
* @param source restore definition
* @return this request
*/
public RestoreSnapshotRequest source(XContentBuilder source) {
try {
return source(source.bytes());
} catch (Exception e) {
throw new ElasticSearchIllegalArgumentException("Failed to build json for repository request", e);
}
}
/**
* Parses restore definition
*
* @param source restore definition
* @return this request
*/
public RestoreSnapshotRequest source(Map source) {
for (Map.Entry<String, Object> entry : ((Map<String, Object>) source).entrySet()) {
String name = entry.getKey();
if (name.equals("indices")) {
if (entry.getValue() instanceof String) {
indices(Strings.splitStringByCommaToArray((String) entry.getValue()));
} else if (entry.getValue() instanceof ArrayList) {
indices((ArrayList<String>) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed indices section, should be an array of strings");
}
} else if (name.equals("ignore_indices")) {
if (entry.getValue() instanceof String) {
ignoreIndices(IgnoreIndices.fromString((String) entry.getValue()));
} else {
throw new ElasticSearchIllegalArgumentException("malformed ignore_indices");
}
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
throw new ElasticSearchIllegalArgumentException("malformed settings section, should indices an inner object");
}
settings((Map<String, Object>) entry.getValue());
} else if (name.equals("include_global_state")) {
if (!(entry.getValue() instanceof Boolean)) {
throw new ElasticSearchIllegalArgumentException("malformed include_global_state, should be boolean");
}
includeGlobalState((Boolean) entry.getValue());
} else if (name.equals("rename_pattern")) {
if (entry.getValue() instanceof String) {
renamePattern((String) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed rename_pattern");
}
} else if (name.equals("rename_replacement")) {
if (entry.getValue() instanceof String) {
renameReplacement((String) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed rename_replacement");
}
} else {
throw new ElasticSearchIllegalArgumentException("Unknown parameter " + name);
}
}
return this;
}
/**
* Parses restore definition
* <p/>
* JSON, YAML and properties formats are supported
*
* @param source restore definition
* @return this request
*/
public RestoreSnapshotRequest source(String source) {
if (hasLength(source)) {
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (Exception e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source [" + source + "]", e);
}
}
return this;
}
/**
* Parses restore definition
* <p/>
* JSON, YAML and properties formats are supported
*
* @param source restore definition
* @return this request
*/
public RestoreSnapshotRequest source(byte[] source) {
return source(source, 0, source.length);
}
/**
* Parses restore definition
* <p/>
* JSON, YAML and properties formats are supported
*
* @param source restore definition
* @param offset offset
* @param length length
* @return this request
*/
public RestoreSnapshotRequest source(byte[] source, int offset, int length) {
if (length > 0) {
try {
return source(XContentFactory.xContent(source, offset, length).createParser(source, offset, length).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source", e);
}
}
return this;
}
/**
* Parses restore definition
* <p/>
* JSON, YAML and properties formats are supported
*
* @param source restore definition
* @return this request
*/
public RestoreSnapshotRequest source(BytesReference source) {
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse template source", e);
}
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
snapshot = in.readString();
repository = in.readString();
indices = in.readStringArray();
ignoreIndices = IgnoreIndices.fromId(in.readByte());
renamePattern = in.readOptionalString();
renameReplacement = in.readOptionalString();
waitForCompletion = in.readBoolean();
includeGlobalState = in.readBoolean();
settings = readSettingsFromStream(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(snapshot);
out.writeString(repository);
out.writeStringArray(indices);
out.writeByte(ignoreIndices.id());
out.writeOptionalString(renamePattern);
out.writeOptionalString(renameReplacement);
out.writeBoolean(waitForCompletion);
out.writeBoolean(includeGlobalState);
writeSettingsToStream(settings, out);
}
}
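The restore definition parsed by `source(Map)` above can also be supplied as a JSON string through `source(String)`. A minimal sketch with hypothetical repository, snapshot and index names (the same keys are accepted in YAML or properties form):

[source,java]
-----------------------------------
// Every key below corresponds to a branch of source(Map) in the class above.
RestoreSnapshotRequest request = new RestoreSnapshotRequest("my_backup", "snapshot_1")
        .waitForCompletion(true)   // not part of the source; set directly on the request
        .source("{"
                + "\"indices\": \"index_1,index_2\","
                + "\"ignore_indices\": \"missing\","
                + "\"rename_pattern\": \"index_(.+)\","
                + "\"rename_replacement\": \"restored_index_$1\","
                + "\"include_global_state\": false,"
                + "\"settings\": {}"
                + "}");
-----------------------------------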


@ -0,0 +1,215 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.IgnoreIndices;
import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.internal.InternalClusterAdminClient;
import org.elasticsearch.common.settings.Settings;
import java.util.Map;
/**
* Restore snapshot request builder
*/
public class RestoreSnapshotRequestBuilder extends MasterNodeOperationRequestBuilder<RestoreSnapshotRequest, RestoreSnapshotResponse, RestoreSnapshotRequestBuilder> {
/**
* Constructs new restore snapshot request builder
*
* @param clusterAdminClient cluster admin client
*/
public RestoreSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient) {
super((InternalClusterAdminClient) clusterAdminClient, new RestoreSnapshotRequest());
}
/**
* Constructs new restore snapshot request builder with specified repository and snapshot names
*
* @param clusterAdminClient cluster admin client
* @param repository         repository name
* @param name snapshot name
*/
public RestoreSnapshotRequestBuilder(ClusterAdminClient clusterAdminClient, String repository, String name) {
super((InternalClusterAdminClient) clusterAdminClient, new RestoreSnapshotRequest(repository, name));
}
/**
* Sets snapshot name
*
* @param snapshot snapshot name
* @return this builder
*/
public RestoreSnapshotRequestBuilder setSnapshot(String snapshot) {
request.snapshot(snapshot);
return this;
}
/**
* Sets repository name
*
* @param repository repository name
* @return this builder
*/
public RestoreSnapshotRequestBuilder setRepository(String repository) {
request.repository(repository);
return this;
}
/**
* Sets the list of indices that should be restored from snapshot
* <p/>
* The list of indices supports multi-index syntax. For example: "+test*", "-test42" will include all indices with
* the prefix "test" except the index "test42". Aliases are not supported. An empty list or {"_all"} will restore all
* open indices in the snapshot.
*
* @param indices list of indices
* @return this builder
*/
public RestoreSnapshotRequestBuilder setIndices(String... indices) {
request.indices(indices);
return this;
}
/**
* Specifies what type of requested indices to ignore. For example indices that don't exist.
*
* @param ignoreIndices the desired behaviour regarding indices to ignore
* @return this builder
*/
public RestoreSnapshotRequestBuilder setIgnoreIndices(IgnoreIndices ignoreIndices) {
request.ignoreIndices(ignoreIndices);
return this;
}
/**
* Sets rename pattern that should be applied to restored indices.
* <p/>
* Indices that match the rename pattern will be renamed according to {@link #setRenameReplacement(String)}. The
* rename pattern is applied according to {@link java.util.regex.Matcher#appendReplacement(StringBuffer, String)}.
* The request will fail if two or more indices would be renamed into the same name.
*
* @param renamePattern rename pattern
* @return this builder
*/
public RestoreSnapshotRequestBuilder setRenamePattern(String renamePattern) {
request.renamePattern(renamePattern);
return this;
}
/**
* Sets rename replacement
* <p/>
* See {@link #setRenamePattern(String)} for more information.
*
* @param renameReplacement rename replacement
* @return this builder
*/
public RestoreSnapshotRequestBuilder setRenameReplacement(String renameReplacement) {
request.renameReplacement(renameReplacement);
return this;
}
/**
* Sets repository-specific restore settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this builder
*/
public RestoreSnapshotRequestBuilder setSettings(Settings settings) {
request.settings(settings);
return this;
}
/**
* Sets repository-specific restore settings.
* <p/>
* See repository documentation for more information.
*
* @param settings repository-specific snapshot settings
* @return this builder
*/
public RestoreSnapshotRequestBuilder setSettings(Settings.Builder settings) {
request.settings(settings);
return this;
}
/**
* Sets repository-specific restore settings in JSON, YAML or properties format
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this builder
*/
public RestoreSnapshotRequestBuilder setSettings(String source) {
request.settings(source);
return this;
}
/**
* Sets repository-specific restore settings
* <p/>
* See repository documentation for more information.
*
* @param source repository-specific snapshot settings
* @return this builder
*/
public RestoreSnapshotRequestBuilder setSettings(Map<String, Object> source) {
request.settings(source);
return this;
}
/**
* If this parameter is set to true the operation will wait for completion of the restore process before returning.
*
* @param waitForCompletion if true the operation will wait for completion
* @return this builder
*/
public RestoreSnapshotRequestBuilder setWaitForCompletion(boolean waitForCompletion) {
request.waitForCompletion(waitForCompletion);
return this;
}
/**
* If set to true the restore procedure will restore global cluster state.
* <p/>
* The global cluster state includes persistent settings and index template definitions.
*
* @param restoreGlobalState true if global state should be restored from the snapshot
* @return this builder
*/
public RestoreSnapshotRequestBuilder setRestoreGlobalState(boolean restoreGlobalState) {
request.includeGlobalState(restoreGlobalState);
return this;
}
@Override
protected void doExecute(ActionListener<RestoreSnapshotResponse> listener) {
((ClusterAdminClient) client).restoreSnapshot(request, listener);
}
}
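A short usage sketch of this builder, assuming an already-connected `Client` instance named `client` (repository, snapshot and index names are illustrative); the builder is obtained via `prepareRestoreSnapshot` on the cluster admin client added later in this commit:

[source,java]
-----------------------------------
RestoreSnapshotResponse response = client.admin().cluster()
        .prepareRestoreSnapshot("my_backup", "snapshot_1")
        .setIndices("index_1", "index_2")
        .setRenamePattern("index_(.+)")
        .setRenameReplacement("restored_index_$1")
        .setWaitForCompletion(true)
        .execute().actionGet();
-----------------------------------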


@ -0,0 +1,92 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.snapshots.RestoreInfo;
import java.io.IOException;
/**
* Contains information about a snapshot restore operation
*/
public class RestoreSnapshotResponse extends ActionResponse implements ToXContent {
@Nullable
private RestoreInfo restoreInfo;
RestoreSnapshotResponse(@Nullable RestoreInfo restoreInfo) {
this.restoreInfo = restoreInfo;
}
RestoreSnapshotResponse() {
}
/**
* Returns restore information if the restore was completed before this method returned, null otherwise
*
* @return restore information or null
*/
public RestoreInfo getRestoreInfo() {
return restoreInfo;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
restoreInfo = RestoreInfo.readOptionalRestoreInfo(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeOptionalStreamable(restoreInfo);
}
public RestStatus status() {
if (restoreInfo == null) {
return RestStatus.ACCEPTED;
}
return restoreInfo.status();
}
static final class Fields {
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString ACCEPTED = new XContentBuilderString("accepted");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
if (restoreInfo != null) {
builder.field(Fields.SNAPSHOT);
restoreInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
}
return builder;
}
}
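A hedged sketch of how a caller might interpret this response: `getRestoreInfo()` is non-null only when the restore finished before the call returned (for example when `wait_for_completion` was requested); otherwise the restore keeps running in the background.

[source,java]
-----------------------------------
// Sketch: distinguish a completed restore from one still running in the background.
static void report(RestoreSnapshotResponse response) {
    RestoreInfo info = response.getRestoreInfo();
    if (info == null) {
        // Accepted, but still running (the REST layer reports this case as ACCEPTED).
        System.out.println("restore accepted, running in background");
    } else {
        // Finished before the call returned; status() reflects the outcome.
        System.out.println("restore finished with status " + info.status());
    }
}
-----------------------------------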


@ -0,0 +1,117 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.RestoreInfo;
import org.elasticsearch.snapshots.RestoreService;
import org.elasticsearch.snapshots.RestoreService.RestoreSnapshotListener;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
/**
* Transport action for restore snapshot operation
*/
public class TransportRestoreSnapshotAction extends TransportMasterNodeOperationAction<RestoreSnapshotRequest, RestoreSnapshotResponse> {
private final RestoreService restoreService;
@Inject
public TransportRestoreSnapshotAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, RestoreService restoreService) {
super(settings, transportService, clusterService, threadPool);
this.restoreService = restoreService;
}
@Override
protected String executor() {
return ThreadPool.Names.SNAPSHOT;
}
@Override
protected String transportAction() {
return RestoreSnapshotAction.NAME;
}
@Override
protected RestoreSnapshotRequest newRequest() {
return new RestoreSnapshotRequest();
}
@Override
protected RestoreSnapshotResponse newResponse() {
return new RestoreSnapshotResponse();
}
@Override
protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, "");
}
@Override
protected void masterOperation(final RestoreSnapshotRequest request, ClusterState state, final ActionListener<RestoreSnapshotResponse> listener) throws ElasticSearchException {
RestoreService.RestoreRequest restoreRequest =
new RestoreService.RestoreRequest("restore_snapshot[" + request.snapshot() + "]", request.repository(), request.snapshot())
.indices(request.indices())
.ignoreIndices(request.ignoreIndices())
.renamePattern(request.renamePattern())
.renameReplacement(request.renameReplacement())
.includeGlobalState(request.includeGlobalState())
.settings(request.settings())
.masterNodeTimeout(request.masterNodeTimeout());
restoreService.restoreSnapshot(restoreRequest, new RestoreSnapshotListener() {
@Override
public void onResponse(RestoreInfo restoreInfo) {
if (restoreInfo == null) {
if (request.waitForCompletion()) {
restoreService.addListener(new RestoreService.RestoreCompletionListener() {
SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshot());
@Override
public void onRestoreCompletion(SnapshotId snapshotId, RestoreInfo snapshot) {
if (this.snapshotId.equals(snapshotId)) {
listener.onResponse(new RestoreSnapshotResponse(snapshot));
restoreService.removeListener(this);
}
}
});
} else {
listener.onResponse(new RestoreSnapshotResponse(null));
}
} else {
listener.onResponse(new RestoreSnapshotResponse(restoreInfo));
}
}
@Override
public void onFailure(Throwable t) {
listener.onFailure(t);
}
});
}
}


@ -23,7 +23,9 @@ import org.elasticsearch.action.admin.indices.IndicesAction;
import org.elasticsearch.client.IndicesAdminClient;
/**
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public class GatewaySnapshotAction extends IndicesAction<GatewaySnapshotRequest, GatewaySnapshotResponse, GatewaySnapshotRequestBuilder> {
public static final GatewaySnapshotAction INSTANCE = new GatewaySnapshotAction();


@ -29,7 +29,9 @@ import org.elasticsearch.action.support.broadcast.BroadcastOperationRequest;
* @see org.elasticsearch.client.Requests#gatewaySnapshotRequest(String...)
* @see org.elasticsearch.client.IndicesAdminClient#gatewaySnapshot(GatewaySnapshotRequest)
* @see GatewaySnapshotResponse
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public class GatewaySnapshotRequest extends BroadcastOperationRequest<GatewaySnapshotRequest> {
GatewaySnapshotRequest() {


@ -25,8 +25,10 @@ import org.elasticsearch.client.IndicesAdminClient;
import org.elasticsearch.client.internal.InternalIndicesAdminClient;
/**
*
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public class GatewaySnapshotRequestBuilder extends BroadcastOperationRequestBuilder<GatewaySnapshotRequest, GatewaySnapshotResponse, GatewaySnapshotRequestBuilder> {
public GatewaySnapshotRequestBuilder(IndicesAdminClient indicesClient) {


@ -30,8 +30,9 @@ import java.util.List;
/**
* Response for the gateway snapshot action.
*
*
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public class GatewaySnapshotResponse extends BroadcastOperationResponse {
GatewaySnapshotResponse() {


@ -42,8 +42,9 @@ import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public class TransportGatewaySnapshotAction extends TransportBroadcastOperationAction<GatewaySnapshotRequest, GatewaySnapshotResponse, ShardGatewaySnapshotRequest, ShardGatewaySnapshotResponse> {
private final IndicesService indicesService;


@ -19,5 +19,6 @@
/**
* Gateway Snapshot Action.
* @deprecated Use snapshot/restore API instead
*/
package org.elasticsearch.action.admin.indices.gateway.snapshot;


@ -39,6 +39,15 @@ import org.elasticsearch.action.admin.cluster.node.shutdown.NodesShutdownRespons
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequestBuilder;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequest;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryResponse;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesResponse;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequestBuilder;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;
@ -48,6 +57,18 @@ import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsResp
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequest;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequestBuilder;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequestBuilder;
import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
@ -262,6 +283,111 @@ public interface ClusterAdminClient {
*/
ClusterSearchShardsRequestBuilder prepareSearchShards(String... indices);
/**
* Registers a snapshot repository.
*/
ActionFuture<PutRepositoryResponse> putRepository(PutRepositoryRequest request);
/**
* Registers a snapshot repository.
*/
void putRepository(PutRepositoryRequest request, ActionListener<PutRepositoryResponse> listener);
/**
* Registers a snapshot repository.
*/
PutRepositoryRequestBuilder preparePutRepository(String name);
/**
* Unregisters a repository.
*/
ActionFuture<DeleteRepositoryResponse> deleteRepository(DeleteRepositoryRequest request);
/**
* Unregisters a repository.
*/
void deleteRepository(DeleteRepositoryRequest request, ActionListener<DeleteRepositoryResponse> listener);
/**
* Unregisters a repository.
*/
DeleteRepositoryRequestBuilder prepareDeleteRepository(String name);
/**
* Gets repositories.
*/
ActionFuture<GetRepositoriesResponse> getRepositories(GetRepositoriesRequest request);
/**
* Gets repositories.
*/
void getRepositories(GetRepositoriesRequest request, ActionListener<GetRepositoriesResponse> listener);
/**
* Gets repositories.
*/
GetRepositoriesRequestBuilder prepareGetRepositories(String... name);
/**
* Creates a new snapshot.
*/
ActionFuture<CreateSnapshotResponse> createSnapshot(CreateSnapshotRequest request);
/**
* Creates a new snapshot.
*/
void createSnapshot(CreateSnapshotRequest request, ActionListener<CreateSnapshotResponse> listener);
/**
* Creates a new snapshot.
*/
CreateSnapshotRequestBuilder prepareCreateSnapshot(String repository, String name);
/**
* Gets snapshots.
*/
ActionFuture<GetSnapshotsResponse> getSnapshots(GetSnapshotsRequest request);
/**
* Gets snapshots.
*/
void getSnapshots(GetSnapshotsRequest request, ActionListener<GetSnapshotsResponse> listener);
/**
* Gets snapshots.
*/
GetSnapshotsRequestBuilder prepareGetSnapshots(String repository);
/**
* Deletes a snapshot.
*/
ActionFuture<DeleteSnapshotResponse> deleteSnapshot(DeleteSnapshotRequest request);
/**
* Deletes a snapshot.
*/
void deleteSnapshot(DeleteSnapshotRequest request, ActionListener<DeleteSnapshotResponse> listener);
/**
* Deletes a snapshot.
*/
DeleteSnapshotRequestBuilder prepareDeleteSnapshot(String repository, String snapshot);
/**
* Restores a snapshot.
*/
ActionFuture<RestoreSnapshotResponse> restoreSnapshot(RestoreSnapshotRequest request);
/**
* Restores a snapshot.
*/
void restoreSnapshot(RestoreSnapshotRequest request, ActionListener<RestoreSnapshotResponse> listener);
/**
* Restores a snapshot.
*/
RestoreSnapshotRequestBuilder prepareRestoreSnapshot(String repository, String snapshot);
/**
* Returns a list of the pending cluster tasks that are scheduled to be executed. This includes operations
* that update the cluster state (for example, a create index operation)


@ -478,7 +478,9 @@ public interface IndicesAdminClient {
* @param request The gateway snapshot request
* @return The result future
* @see org.elasticsearch.client.Requests#gatewaySnapshotRequest(String...)
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
ActionFuture<GatewaySnapshotResponse> gatewaySnapshot(GatewaySnapshotRequest request);
/**
@ -487,12 +489,17 @@ public interface IndicesAdminClient {
* @param request The gateway snapshot request
* @param listener A listener to be notified with a result
* @see org.elasticsearch.client.Requests#gatewaySnapshotRequest(String...)
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
void gatewaySnapshot(GatewaySnapshotRequest request, ActionListener<GatewaySnapshotResponse> listener);
/**
* Explicitly perform gateway snapshot for one or more indices.
*
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
GatewaySnapshotRequestBuilder prepareGatewaySnapshot(String... indices);
/**


@ -24,9 +24,16 @@ import org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest;
import org.elasticsearch.action.admin.cluster.node.restart.NodesRestartRequest;
import org.elasticsearch.action.admin.cluster.node.shutdown.NodesShutdownRequest;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequest;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest;
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.cache.clear.ClearIndicesCacheRequest;
@ -316,7 +323,9 @@ public class Requests {
* @param indices The indices the gateway snapshot will be performed on. Use <tt>null</tt> or <tt>_all</tt> to execute against all indices
* @return The gateway snapshot request
* @see org.elasticsearch.client.IndicesAdminClient#gatewaySnapshot(org.elasticsearch.action.admin.indices.gateway.snapshot.GatewaySnapshotRequest)
* @deprecated Use snapshot/restore API instead
*/
@Deprecated
public static GatewaySnapshotRequest gatewaySnapshotRequest(String... indices) {
return new GatewaySnapshotRequest(indices);
}
@ -452,4 +461,77 @@ public class Requests {
return new NodesRestartRequest(nodesIds);
}
/**
* Registers snapshot repository
*
* @param name repository name
* @return repository registration request
*/
public static PutRepositoryRequest putRepositoryRequest(String name) {
return new PutRepositoryRequest(name);
}
/**
* Gets snapshot repositories
*
* @param repositories names of repositories
* @return get repositories request
*/
public static GetRepositoriesRequest getRepositoryRequest(String... repositories) {
return new GetRepositoriesRequest(repositories);
}
/**
* Deletes registration for snapshot repository
*
* @param name repository name
* @return delete repository request
*/
public static DeleteRepositoryRequest deleteRepositoryRequest(String name) {
return new DeleteRepositoryRequest(name);
}
/**
* Creates a new snapshot
*
* @param repository repository name
* @param snapshot snapshot name
* @return create snapshot request
*/
public static CreateSnapshotRequest createSnapshotRequest(String repository, String snapshot) {
return new CreateSnapshotRequest(repository, snapshot);
}
/**
* Gets snapshots from repository
*
* @param repository repository name
* @return get snapshots request
*/
public static GetSnapshotsRequest getSnapshotsRequest(String repository) {
return new GetSnapshotsRequest(repository);
}
/**
* Restores a snapshot
*
* @param repository repository name
* @param snapshot   snapshot name
* @return restore snapshot request
*/
public static RestoreSnapshotRequest restoreSnapshotRequest(String repository, String snapshot) {
return new RestoreSnapshotRequest(repository, snapshot);
}
/**
* Deletes an existing snapshot
*
* @param repository repository name
* @param snapshot   snapshot name
* @return delete snapshot request
*/
public static DeleteSnapshotRequest deleteSnapshotRequest(String repository, String snapshot) {
return new DeleteSnapshotRequest(repository, snapshot);
}
}
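An illustrative sequence using these static helpers, assuming an existing `Client` instance named `client`, hypothetical repository and snapshot names, and that `CreateSnapshotRequest` exposes the same `waitForCompletion` flag as the restore request; the rename is applied so the restore does not collide with the open source indices:

[source,java]
-----------------------------------
// Take a snapshot and wait for it to finish.
CreateSnapshotRequest create = Requests.createSnapshotRequest("my_backup", "snapshot_1")
        .waitForCompletion(true);
client.admin().cluster().createSnapshot(create).actionGet();

// Restore it under new index names.
RestoreSnapshotRequest restore = Requests.restoreSnapshotRequest("my_backup", "snapshot_1")
        .renamePattern("(.+)")
        .renameReplacement("restored_$1")
        .waitForCompletion(true);
RestoreSnapshotResponse response = client.admin().cluster().restoreSnapshot(restore).actionGet();
-----------------------------------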


@ -45,6 +45,18 @@ import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsAction;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequestBuilder;
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryAction;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequest;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryResponse;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesAction;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesResponse;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryAction;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequestBuilder;
import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteAction;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequestBuilder;
@ -57,6 +69,22 @@ import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsAction;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequest;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequestBuilder;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsAction;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequestBuilder;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.action.admin.cluster.state.ClusterStateAction;
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequestBuilder;
@ -246,4 +274,110 @@ public abstract class AbstractClusterAdminClient implements InternalClusterAdmin
public void pendingClusterTasks(PendingClusterTasksRequest request, ActionListener<PendingClusterTasksResponse> listener) {
execute(PendingClusterTasksAction.INSTANCE, request, listener);
}
public ActionFuture<PutRepositoryResponse> putRepository(PutRepositoryRequest request) {
return execute(PutRepositoryAction.INSTANCE, request);
}
@Override
public void putRepository(PutRepositoryRequest request, ActionListener<PutRepositoryResponse> listener) {
execute(PutRepositoryAction.INSTANCE, request, listener);
}
@Override
public PutRepositoryRequestBuilder preparePutRepository(String name) {
return new PutRepositoryRequestBuilder(this, name);
}
@Override
public ActionFuture<CreateSnapshotResponse> createSnapshot(CreateSnapshotRequest request) {
return execute(CreateSnapshotAction.INSTANCE, request);
}
@Override
public void createSnapshot(CreateSnapshotRequest request, ActionListener<CreateSnapshotResponse> listener) {
execute(CreateSnapshotAction.INSTANCE, request, listener);
}
@Override
public CreateSnapshotRequestBuilder prepareCreateSnapshot(String repository, String name) {
return new CreateSnapshotRequestBuilder(this, repository, name);
}
@Override
public ActionFuture<GetSnapshotsResponse> getSnapshots(GetSnapshotsRequest request) {
return execute(GetSnapshotsAction.INSTANCE, request);
}
@Override
public void getSnapshots(GetSnapshotsRequest request, ActionListener<GetSnapshotsResponse> listener) {
execute(GetSnapshotsAction.INSTANCE, request, listener);
}
@Override
public GetSnapshotsRequestBuilder prepareGetSnapshots(String repository) {
return new GetSnapshotsRequestBuilder(this, repository);
}
@Override
public ActionFuture<DeleteSnapshotResponse> deleteSnapshot(DeleteSnapshotRequest request) {
return execute(DeleteSnapshotAction.INSTANCE, request);
}
@Override
public void deleteSnapshot(DeleteSnapshotRequest request, ActionListener<DeleteSnapshotResponse> listener) {
execute(DeleteSnapshotAction.INSTANCE, request, listener);
}
@Override
public DeleteSnapshotRequestBuilder prepareDeleteSnapshot(String repository, String name) {
return new DeleteSnapshotRequestBuilder(this, repository, name);
}
@Override
public ActionFuture<DeleteRepositoryResponse> deleteRepository(DeleteRepositoryRequest request) {
return execute(DeleteRepositoryAction.INSTANCE, request);
}
@Override
public void deleteRepository(DeleteRepositoryRequest request, ActionListener<DeleteRepositoryResponse> listener) {
execute(DeleteRepositoryAction.INSTANCE, request, listener);
}
@Override
public DeleteRepositoryRequestBuilder prepareDeleteRepository(String name) {
return new DeleteRepositoryRequestBuilder(this, name);
}
@Override
public ActionFuture<GetRepositoriesResponse> getRepositories(GetRepositoriesRequest request) {
return execute(GetRepositoriesAction.INSTANCE, request);
}
@Override
public void getRepositories(GetRepositoriesRequest request, ActionListener<GetRepositoriesResponse> listener) {
execute(GetRepositoriesAction.INSTANCE, request, listener);
}
@Override
public GetRepositoriesRequestBuilder prepareGetRepositories(String... name) {
return new GetRepositoriesRequestBuilder(this, name);
}
@Override
public ActionFuture<RestoreSnapshotResponse> restoreSnapshot(RestoreSnapshotRequest request) {
return execute(RestoreSnapshotAction.INSTANCE, request);
}
@Override
public void restoreSnapshot(RestoreSnapshotRequest request, ActionListener<RestoreSnapshotResponse> listener) {
execute(RestoreSnapshotAction.INSTANCE, request, listener);
}
@Override
public RestoreSnapshotRequestBuilder prepareRestoreSnapshot(String repository, String snapshot) {
return new RestoreSnapshotRequestBuilder(this, repository, snapshot);
}
}
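For completeness, a sketch of the asynchronous form exposed by the same methods, with `client` again assumed to be an existing `Client` and names kept illustrative; `ActionListener` is the standard callback interface used throughout this class:

[source,java]
-----------------------------------
client.admin().cluster().restoreSnapshot(
        Requests.restoreSnapshotRequest("my_backup", "snapshot_1").waitForCompletion(true),
        new ActionListener<RestoreSnapshotResponse>() {
            @Override
            public void onResponse(RestoreSnapshotResponse response) {
                // With wait_for_completion, response.getRestoreInfo() is available here.
            }

            @Override
            public void onFailure(Throwable t) {
                // Invoked if the restore could not be started or failed.
            }
        });
-----------------------------------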


@ -366,6 +366,12 @@ public class ClusterState implements ToXContent {
}
builder.endObject();
for (Map.Entry<String, MetaData.Custom> entry : metaData.customs().entrySet()) {
builder.startObject(entry.getKey());
MetaData.lookupFactorySafe(entry.getKey()).toXContent(entry.getValue(), builder, params);
builder.endObject();
}
builder.endObject();
}


@ -68,12 +68,24 @@ public class MetaData implements Iterable<IndexMetaData> {
T fromXContent(XContentParser parser) throws IOException;
void toXContent(T customIndexMetaData, XContentBuilder builder, ToXContent.Params params);
void toXContent(T customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException;
/**
* Returns true if this custom metadata should be persisted as part of global cluster state
*/
boolean isPersistent();
}
}
public static Map<String, Custom.Factory> customFactories = new HashMap<String, Custom.Factory>();
static {
// register non plugin custom metadata
registerFactory(RepositoriesMetaData.TYPE, RepositoriesMetaData.FACTORY);
registerFactory(SnapshotMetaData.TYPE, SnapshotMetaData.FACTORY);
registerFactory(RestoreMetaData.TYPE, RestoreMetaData.FACTORY);
}
/**
* Register a custom index meta data factory. Make sure to call it from a static block.
*/
@ -101,6 +113,8 @@ public class MetaData implements Iterable<IndexMetaData> {
public static final MetaData EMPTY_META_DATA = builder().build();
public static final String GLOBAL_PERSISTENT_ONLY_PARAM = "global_persistent_only";
private final long version;
private final Settings transientSettings;
@ -771,6 +785,10 @@ public class MetaData implements Iterable<IndexMetaData> {
return this.customs;
}
public <T extends Custom> T custom(String type) {
return (T) customs.get(type);
}
public int totalNumberOfShards() {
return this.totalNumberOfShards;
}
@ -913,6 +931,21 @@ public class MetaData implements Iterable<IndexMetaData> {
if (!metaData1.templates.equals(metaData2.templates())) {
return false;
}
// Check if any persistent metadata needs to be saved
int customCount1 = 0;
for (Map.Entry<String, Custom> entry : metaData1.customs.entrySet()) {
if (customFactories.get(entry.getKey()).isPersistent()) {
if (!entry.getValue().equals(metaData2.custom(entry.getKey()))) return false;
customCount1++;
}
}
int customCount2 = 0;
for (Map.Entry<String, Custom> entry : metaData2.customs.entrySet()) {
if (customFactories.get(entry.getKey()).isPersistent()) {
customCount2++;
}
}
if (customCount1 != customCount2) return false;
return true;
}
@ -1075,6 +1108,7 @@ public class MetaData implements Iterable<IndexMetaData> {
}
public static void toXContent(MetaData metaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
boolean globalPersistentOnly = params.paramAsBoolean(GLOBAL_PERSISTENT_ONLY_PARAM, false);
builder.startObject("meta-data");
builder.field("version", metaData.version());
@ -1087,13 +1121,21 @@ public class MetaData implements Iterable<IndexMetaData> {
builder.endObject();
}
if (!globalPersistentOnly && !metaData.transientSettings().getAsMap().isEmpty()) {
builder.startObject("transient_settings");
for (Map.Entry<String, String> entry : metaData.transientSettings().getAsMap().entrySet()) {
builder.field(entry.getKey(), entry.getValue());
}
builder.endObject();
}
builder.startObject("templates");
for (IndexTemplateMetaData template : metaData.templates().values()) {
IndexTemplateMetaData.Builder.toXContent(template, builder, params);
}
builder.endObject();
if (!metaData.indices().isEmpty()) {
if (!globalPersistentOnly && !metaData.indices().isEmpty()) {
builder.startObject("indices");
for (IndexMetaData indexMetaData : metaData) {
IndexMetaData.Builder.toXContent(indexMetaData, builder, params);
@ -1102,9 +1144,12 @@ public class MetaData implements Iterable<IndexMetaData> {
}
for (Map.Entry<String, Custom> entry : metaData.customs().entrySet()) {
builder.startObject(entry.getKey());
lookupFactorySafe(entry.getKey()).toXContent(entry.getValue(), builder, params);
builder.endObject();
Custom.Factory factory = lookupFactorySafe(entry.getKey());
if (!globalPersistentOnly || factory.isPersistent()) {
builder.startObject(entry.getKey());
factory.toXContent(entry.getValue(), builder, params);
builder.endObject();
}
}
builder.endObject();


@ -146,6 +146,30 @@ public class MetaDataCreateIndexService extends AbstractComponent {
});
}
public void validateIndexName(String index, ClusterState state) throws ElasticSearchException {
if (state.routingTable().hasIndex(index)) {
throw new IndexAlreadyExistsException(new Index(index));
}
if (state.metaData().hasIndex(index)) {
throw new IndexAlreadyExistsException(new Index(index));
}
if (!Strings.validFileName(index)) {
throw new InvalidIndexNameException(new Index(index), index, "must not contain the following characters " + Strings.INVALID_FILENAME_CHARS);
}
if (index.contains("#")) {
throw new InvalidIndexNameException(new Index(index), index, "must not contain '#'");
}
if (!index.equals(riverIndexName) && index.charAt(0) == '_') {
throw new InvalidIndexNameException(new Index(index), index, "must not start with '_'");
}
if (!index.toLowerCase(Locale.ROOT).equals(index)) {
throw new InvalidIndexNameException(new Index(index), index, "must be lowercase");
}
if (state.metaData().aliases().containsKey(index)) {
throw new InvalidIndexNameException(new Index(index), index, "already exists as alias");
}
}
private void createIndex(final Request request, final Listener userListener, Semaphore mdLock) {
final CreateIndexListener listener = new CreateIndexListener(mdLock, request, userListener);
clusterService.submitStateUpdateTask("create-index [" + request.index + "], cause [" + request.cause + "]", Priority.URGENT, new TimeoutClusterStateUpdateTask() {
@ -478,33 +502,7 @@ public class MetaDataCreateIndexService extends AbstractComponent {
}
private void validate(Request request, ClusterState state) throws ElasticSearchException {
if (state.routingTable().hasIndex(request.index)) {
throw new IndexAlreadyExistsException(new Index(request.index));
}
if (state.metaData().hasIndex(request.index)) {
throw new IndexAlreadyExistsException(new Index(request.index));
}
if (request.index.contains(" ")) {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must not contain whitespace");
}
if (request.index.contains(",")) {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must not contain ',");
}
if (request.index.contains("#")) {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must not contain '#");
}
if (!request.index.equals(riverIndexName) && request.index.charAt(0) == '_') {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must not start with '_'");
}
if (!request.index.toLowerCase(Locale.ROOT).equals(request.index)) {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must be lowercase");
}
if (!Strings.validFileName(request.index)) {
throw new InvalidIndexNameException(new Index(request.index), request.index, "must not contain the following characters " + Strings.INVALID_FILENAME_CHARS);
}
if (state.metaData().aliases().containsKey(request.index)) {
throw new IndexAlreadyExistsException(new Index(request.index), "already exists as alias");
}
validateIndexName(request.index, state);
}
public static interface Listener {


@ -0,0 +1,204 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchParseException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.loader.SettingsLoader;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
/**
* Contains metadata about registered snapshot repositories
*/
public class RepositoriesMetaData implements MetaData.Custom {
public static final String TYPE = "repositories";
public static final Factory FACTORY = new Factory();
private final ImmutableList<RepositoryMetaData> repositories;
/**
* Constructs new repository metadata
*
* @param repositories list of repositories
*/
public RepositoriesMetaData(RepositoryMetaData... repositories) {
this.repositories = ImmutableList.copyOf(repositories);
}
/**
* Returns list of currently registered repositories
*
* @return list of repositories
*/
public ImmutableList<RepositoryMetaData> repositories() {
return this.repositories;
}
/**
* Returns a repository with a given name or null if such repository doesn't exist
*
* @param name name of repository
* @return repository metadata
*/
public RepositoryMetaData repository(String name) {
for (RepositoryMetaData repository : repositories) {
if (name.equals(repository.name())) {
return repository;
}
}
return null;
}
/**
* Repository metadata factory
*/
public static class Factory implements MetaData.Custom.Factory<RepositoriesMetaData> {
/**
* {@inheritDoc}
*/
@Override
public String type() {
return TYPE;
}
/**
* {@inheritDoc}
*/
@Override
public RepositoriesMetaData readFrom(StreamInput in) throws IOException {
RepositoryMetaData[] repository = new RepositoryMetaData[in.readVInt()];
for (int i = 0; i < repository.length; i++) {
repository[i] = RepositoryMetaData.readFrom(in);
}
return new RepositoriesMetaData(repository);
}
/**
* {@inheritDoc}
*/
@Override
public void writeTo(RepositoriesMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.repositories().size());
for (RepositoryMetaData repository : repositories.repositories()) {
repository.writeTo(out);
}
}
/**
* {@inheritDoc}
*/
@Override
public RepositoriesMetaData fromXContent(XContentParser parser) throws IOException {
XContentParser.Token token;
List<RepositoryMetaData> repository = new ArrayList<RepositoryMetaData>();
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String name = parser.currentName();
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticSearchParseException("failed to parse repository [" + name + "], expected object");
}
String type = null;
Settings settings = ImmutableSettings.EMPTY;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String currentFieldName = parser.currentName();
if ("type".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.VALUE_STRING) {
throw new ElasticSearchParseException("failed to parse repository [" + name + "], unknown type");
}
type = parser.text();
} else if ("settings".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticSearchParseException("failed to parse repository [" + name + "], incompatible params");
}
settings = ImmutableSettings.settingsBuilder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).build();
} else {
throw new ElasticSearchParseException("failed to parse repository [" + name + "], unknown field [" + currentFieldName + "]");
}
} else {
throw new ElasticSearchParseException("failed to parse repository [" + name + "]");
}
}
if (type == null) {
throw new ElasticSearchParseException("failed to parse repository [" + name + "], missing repository type");
}
repository.add(new RepositoryMetaData(name, type, settings));
} else {
throw new ElasticSearchParseException("failed to parse repositories");
}
}
return new RepositoriesMetaData(repository.toArray(new RepositoryMetaData[repository.size()]));
}
/**
* {@inheritDoc}
*/
@Override
public void toXContent(RepositoriesMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
for (RepositoryMetaData repository : customIndexMetaData.repositories()) {
toXContent(repository, builder, params);
}
}
/**
* Serializes information about a single repository
*
* @param repository repository metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public void toXContent(RepositoryMetaData repository, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject(repository.name(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("type", repository.type());
builder.startObject("settings");
for (Map.Entry<String, String> settingEntry : repository.settings().getAsMap().entrySet()) {
builder.field(settingEntry.getKey(), settingEntry.getValue());
}
builder.endObject();
builder.endObject();
}
/**
* {@inheritDoc}
*/
@Override
public boolean isPersistent() {
return true;
}
}
}
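For orientation, the sketch below (not part of the commit; the repository name, type, and settings values are illustrative only) builds this metadata by hand and looks a repository up by name.

[source,java]
-----------------------------------
import org.elasticsearch.cluster.metadata.RepositoriesMetaData;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.common.settings.ImmutableSettings;

public class RepositoriesMetaDataSketch {
    public static void main(String[] args) {
        // Illustrative repository definition; the name, type and settings are made up.
        RepositoryMetaData fsRepo = new RepositoryMetaData("my_backup", "fs",
                ImmutableSettings.settingsBuilder()
                        .put("location", "/mount/backups/my_backup")
                        .put("compress", true)
                        .build());

        RepositoriesMetaData repositories = new RepositoriesMetaData(fsRepo);

        // Lookup is a linear scan by name; unknown names yield null.
        System.out.println(repositories.repository("my_backup").type()); // fs
        System.out.println(repositories.repository("missing"));          // null
    }
}
-----------------------------------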

View File

@ -0,0 +1,102 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import java.io.IOException;
/**
* Metadata about registered repository
*/
public class RepositoryMetaData {
private final String name;
private final String type;
private final Settings settings;
/**
* Constructs new repository metadata
*
* @param name repository name
* @param type repository type
* @param settings repository settings
*/
public RepositoryMetaData(String name, String type, Settings settings) {
this.name = name;
this.type = type;
this.settings = settings;
}
/**
* Returns repository name
*
* @return repository name
*/
public String name() {
return this.name;
}
/**
* Returns repository type
*
* @return repository type
*/
public String type() {
return this.type;
}
/**
* Returns repository settings
*
* @return repository settings
*/
public Settings settings() {
return this.settings;
}
/**
* Reads repository metadata from stream input
*
* @param in stream input
* @return repository metadata
* @throws IOException
*/
public static RepositoryMetaData readFrom(StreamInput in) throws IOException {
String name = in.readString();
String type = in.readString();
Settings settings = ImmutableSettings.readSettingsFromStream(in);
return new RepositoryMetaData(name, type, settings);
}
/**
* Writes repository metadata to stream output
*
* @param out stream output
* @throws IOException
*/
public void writeTo(StreamOutput out) throws IOException {
out.writeString(name);
out.writeString(type);
ImmutableSettings.writeSettingsToStream(settings, out);
}
}

View File

@ -0,0 +1,527 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
import java.util.Map;
/**
* Meta data about restore processes that are currently executing
*/
public class RestoreMetaData implements MetaData.Custom {
public static final String TYPE = "restore";
public static final Factory FACTORY = new Factory();
private final ImmutableList<Entry> entries;
/**
* Constructs new restore metadata
*
* @param entries list of currently running restore processes
*/
public RestoreMetaData(ImmutableList<Entry> entries) {
this.entries = entries;
}
/**
* Constructs new restore metadata
*
* @param entries list of currently running restore processes
*/
public RestoreMetaData(Entry... entries) {
this.entries = ImmutableList.copyOf(entries);
}
/**
* Returns list of currently running restore processes
*
* @return list of currently running restore processes
*/
public ImmutableList<Entry> entries() {
return this.entries;
}
/**
* Returns currently running restore process with corresponding snapshot id or null if this snapshot is not being
* restored
*
* @param snapshotId snapshot id
* @return restore metadata or null
*/
public Entry snapshot(SnapshotId snapshotId) {
for (Entry entry : entries) {
if (snapshotId.equals(entry.snapshotId())) {
return entry;
}
}
return null;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RestoreMetaData that = (RestoreMetaData) o;
if (!entries.equals(that.entries)) return false;
return true;
}
@Override
public int hashCode() {
return entries.hashCode();
}
/**
* Restore metadata
*/
public static class Entry {
private final State state;
private final SnapshotId snapshotId;
private final ImmutableMap<ShardId, ShardRestoreStatus> shards;
private final ImmutableList<String> indices;
/**
* Creates new restore metadata
*
* @param snapshotId snapshot id
* @param state current state of the restore process
* @param indices list of indices being restored
* @param shards     list of shards being restored and their current restore status
*/
public Entry(SnapshotId snapshotId, State state, ImmutableList<String> indices, ImmutableMap<ShardId, ShardRestoreStatus> shards) {
this.snapshotId = snapshotId;
this.state = state;
this.indices = indices;
if (shards == null) {
this.shards = ImmutableMap.of();
} else {
this.shards = shards;
}
}
/**
* Returns snapshot id
*
* @return snapshot id
*/
public SnapshotId snapshotId() {
return this.snapshotId;
}
/**
* Returns the list of shards being restored and their status
*
* @return list of shards
*/
public ImmutableMap<ShardId, ShardRestoreStatus> shards() {
return this.shards;
}
/**
* Returns current restore state
*
* @return restore state
*/
public State state() {
return state;
}
/**
* Returns list of indices
*
* @return list of indices
*/
public ImmutableList<String> indices() {
return indices;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Entry entry = (Entry) o;
if (!indices.equals(entry.indices)) return false;
if (!snapshotId.equals(entry.snapshotId)) return false;
if (!shards.equals(entry.shards)) return false;
if (state != entry.state) return false;
return true;
}
@Override
public int hashCode() {
int result = state.hashCode();
result = 31 * result + snapshotId.hashCode();
result = 31 * result + shards.hashCode();
result = 31 * result + indices.hashCode();
return result;
}
}
/**
* Represents status of a restored shard
*/
public static class ShardRestoreStatus {
private State state;
private String nodeId;
private String reason;
private ShardRestoreStatus() {
}
/**
* Constructs a new shard restore status in initializing state on the given node
*
* @param nodeId node id
*/
public ShardRestoreStatus(String nodeId) {
this(nodeId, State.INIT);
}
/**
* Constructs a new shard restore status with the specified state on the given node
*
* @param nodeId node id
* @param state restore state
*/
public ShardRestoreStatus(String nodeId, State state) {
this(nodeId, state, null);
}
/**
* Constructs a new shard restore status with the specified state and failure reason on the given node
*
* @param nodeId node id
* @param state restore state
* @param reason failure reason
*/
public ShardRestoreStatus(String nodeId, State state, String reason) {
this.nodeId = nodeId;
this.state = state;
this.reason = reason;
}
/**
* Returns current state
*
* @return current state
*/
public State state() {
return state;
}
/**
* Returns node id of the node where the shard is being restored
*
* @return node id
*/
public String nodeId() {
return nodeId;
}
/**
* Returns failure reason
*
* @return failure reason
*/
public String reason() {
return reason;
}
/**
* Reads restore status from stream input
*
* @param in stream input
* @return restore status
* @throws IOException
*/
public static ShardRestoreStatus readShardRestoreStatus(StreamInput in) throws IOException {
ShardRestoreStatus shardSnapshotStatus = new ShardRestoreStatus();
shardSnapshotStatus.readFrom(in);
return shardSnapshotStatus;
}
/**
* Reads restore status from stream input
*
* @param in stream input
* @throws IOException
*/
public void readFrom(StreamInput in) throws IOException {
nodeId = in.readOptionalString();
state = State.fromValue(in.readByte());
reason = in.readOptionalString();
}
/**
* Writes restore status to stream output
*
* @param out stream output
* @throws IOException
*/
public void writeTo(StreamOutput out) throws IOException {
out.writeOptionalString(nodeId);
out.writeByte(state.value);
out.writeOptionalString(reason);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ShardRestoreStatus status = (ShardRestoreStatus) o;
if (nodeId != null ? !nodeId.equals(status.nodeId) : status.nodeId != null) return false;
if (reason != null ? !reason.equals(status.reason) : status.reason != null) return false;
if (state != status.state) return false;
return true;
}
@Override
public int hashCode() {
int result = state != null ? state.hashCode() : 0;
result = 31 * result + (nodeId != null ? nodeId.hashCode() : 0);
result = 31 * result + (reason != null ? reason.hashCode() : 0);
return result;
}
}
/**
* Shard restore process state
*/
public static enum State {
/**
* Initializing state
*/
INIT((byte) 0),
/**
* Started state
*/
STARTED((byte) 1),
/**
* Restore finished successfully
*/
SUCCESS((byte) 2),
/**
* Restore failed
*/
FAILURE((byte) 3);
private byte value;
/**
* Constructs new state
*
* @param value state code
*/
State(byte value) {
this.value = value;
}
/**
* Returns state code
*
* @return state code
*/
public byte value() {
return value;
}
/**
* Returns true if restore process completed (either successfully or with failure)
*
* @return true if restore process completed
*/
public boolean completed() {
return this == SUCCESS || this == FAILURE;
}
/**
* Returns state corresponding to state code
*
* @param value state code
* @return state
*/
public static State fromValue(byte value) {
switch (value) {
case 0:
return INIT;
case 1:
return STARTED;
case 2:
return SUCCESS;
case 3:
return FAILURE;
default:
throw new ElasticSearchIllegalArgumentException("No snapshot state for value [" + value + "]");
}
}
}
/**
* Restore metadata factory
*/
public static class Factory implements MetaData.Custom.Factory<RestoreMetaData> {
/**
* {@inheritDoc}
*/
@Override
public String type() {
return TYPE;
}
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
ImmutableMap.Builder<ShardId, ShardRestoreStatus> builder = ImmutableMap.<ShardId, ShardRestoreStatus>builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
ShardRestoreStatus shardState = ShardRestoreStatus.readShardRestoreStatus(in);
builder.put(shardId, shardState);
}
entries[i] = new Entry(snapshotId, state, indexBuilder.build(), builder.build());
}
return new RestoreMetaData(entries);
}
/**
* {@inheritDoc}
*/
@Override
public void writeTo(RestoreMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.entries().size());
for (Entry entry : repositories.entries()) {
entry.snapshotId().writeTo(out);
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
shardEntry.getValue().writeTo(out);
}
}
}
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
/**
* {@inheritDoc}
*/
@Override
public void toXContent(RestoreMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray("snapshots");
for (Entry entry : customIndexMetaData.entries()) {
toXContent(entry, builder, params);
}
builder.endArray();
}
/**
* Serializes single restore operation
*
* @param entry restore operation metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field("snapshot", entry.snapshotId().getSnapshot());
builder.field("repository", entry.snapshotId().getRepository());
builder.field("state", entry.state());
builder.startArray("indices");
{
for (String index : entry.indices()) {
builder.value(index);
}
}
builder.endArray();
builder.startArray("shards");
{
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardRestoreStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field("index", shardId.getIndex());
builder.field("shard", shardId.getId());
builder.field("state", status.state());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
/**
* {@inheritDoc}
*/
@Override
public boolean isPersistent() {
return false;
}
}
}
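A minimal sketch (not part of the commit; snapshot, index, and node names are hypothetical) of how a restore entry can be assembled and queried. In the real code path the restore service creates these entries.

[source,java]
-----------------------------------
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.cluster.metadata.RestoreMetaData;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.index.shard.ShardId;

public class RestoreMetaDataSketch {
    public static void main(String[] args) {
        SnapshotId snapshotId = new SnapshotId("my_backup", "snapshot_1");
        ShardId shardId = new ShardId("test-index", 0);

        RestoreMetaData.Entry entry = new RestoreMetaData.Entry(
                snapshotId,
                RestoreMetaData.State.STARTED,
                ImmutableList.of("test-index"),
                ImmutableMap.of(shardId, new RestoreMetaData.ShardRestoreStatus("node_1")));

        RestoreMetaData restore = new RestoreMetaData(entry);

        // Entries are found by snapshot id; snapshots that are not being restored yield null.
        System.out.println(restore.snapshot(snapshotId).state());         // STARTED
        System.out.println(restore.snapshot(snapshotId).shards().size()); // 1
    }
}
-----------------------------------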

View File

@ -0,0 +1,129 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import java.io.IOException;
import java.io.Serializable;
/**
* Snapshot ID - repository name + snapshot name
*/
public class SnapshotId implements Serializable, Streamable {
private String repository;
private String snapshot;
// Caching hash code
private int hashCode;
private SnapshotId() {
}
/**
* Constructs new snapshot id
*
* @param repository repository name
* @param snapshot snapshot name
*/
public SnapshotId(String repository, String snapshot) {
this.repository = repository;
this.snapshot = snapshot;
this.hashCode = computeHashCode();
}
/**
* Returns repository name
*
* @return repository name
*/
public String getRepository() {
return repository;
}
/**
* Returns snapshot name
*
* @return snapshot name
*/
public String getSnapshot() {
return snapshot;
}
@Override
public String toString() {
return repository + ":" + snapshot;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null) return false;
SnapshotId snapshotId = (SnapshotId) o;
return snapshot.equals(snapshotId.snapshot) && repository.equals(snapshotId.repository);
}
@Override
public int hashCode() {
return hashCode;
}
private int computeHashCode() {
int result = repository != null ? repository.hashCode() : 0;
result = 31 * result + snapshot.hashCode();
return result;
}
/**
* Reads snapshot id from stream input
*
* @param in stream input
* @return snapshot id
* @throws IOException
*/
public static SnapshotId readSnapshotId(StreamInput in) throws IOException {
SnapshotId snapshot = new SnapshotId();
snapshot.readFrom(in);
return snapshot;
}
/**
* {@inheritDoc}
*/
@Override
public void readFrom(StreamInput in) throws IOException {
repository = in.readString();
snapshot = in.readString();
hashCode = computeHashCode();
}
/**
* {@inheritDoc}
*/
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(repository);
out.writeString(snapshot);
}
}
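A short sketch (illustrative names only, not from the commit) showing the value semantics of the class: equality and the cached hash code are derived from the repository and snapshot names.

[source,java]
-----------------------------------
import org.elasticsearch.cluster.metadata.SnapshotId;

public class SnapshotIdSketch {
    public static void main(String[] args) {
        SnapshotId first = new SnapshotId("my_backup", "snapshot_1");
        SnapshotId second = new SnapshotId("my_backup", "snapshot_1");

        // Two ids with the same repository and snapshot name are equal.
        System.out.println(first.equals(second)); // true
        System.out.println(first);                // my_backup:snapshot_1
    }
}
-----------------------------------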

View File

@ -0,0 +1,371 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
import java.util.Map;
/**
* Meta data about snapshots that are currently executing
*/
public class SnapshotMetaData implements MetaData.Custom {
public static final String TYPE = "snapshots";
public static final Factory FACTORY = new Factory();
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
SnapshotMetaData that = (SnapshotMetaData) o;
if (!entries.equals(that.entries)) return false;
return true;
}
@Override
public int hashCode() {
return entries.hashCode();
}
public static class Entry {
private final State state;
private final SnapshotId snapshotId;
private final boolean includeGlobalState;
private final ImmutableMap<ShardId, ShardSnapshotStatus> shards;
private final ImmutableList<String> indices;
public Entry(SnapshotId snapshotId, boolean includeGlobalState, State state, ImmutableList<String> indices, ImmutableMap<ShardId, ShardSnapshotStatus> shards) {
this.state = state;
this.snapshotId = snapshotId;
this.includeGlobalState = includeGlobalState;
this.indices = indices;
if (shards == null) {
this.shards = ImmutableMap.of();
} else {
this.shards = shards;
}
}
public SnapshotId snapshotId() {
return this.snapshotId;
}
public ImmutableMap<ShardId, ShardSnapshotStatus> shards() {
return this.shards;
}
public State state() {
return state;
}
public ImmutableList<String> indices() {
return indices;
}
public boolean includeGlobalState() {
return includeGlobalState;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Entry entry = (Entry) o;
if (includeGlobalState != entry.includeGlobalState) return false;
if (!indices.equals(entry.indices)) return false;
if (!shards.equals(entry.shards)) return false;
if (!snapshotId.equals(entry.snapshotId)) return false;
if (state != entry.state) return false;
return true;
}
@Override
public int hashCode() {
int result = state.hashCode();
result = 31 * result + snapshotId.hashCode();
result = 31 * result + (includeGlobalState ? 1 : 0);
result = 31 * result + shards.hashCode();
result = 31 * result + indices.hashCode();
return result;
}
}
public static class ShardSnapshotStatus {
private State state;
private String nodeId;
private String reason;
private ShardSnapshotStatus() {
}
public ShardSnapshotStatus(String nodeId) {
this(nodeId, State.INIT);
}
public ShardSnapshotStatus(String nodeId, State state) {
this(nodeId, state, null);
}
public ShardSnapshotStatus(String nodeId, State state, String reason) {
this.nodeId = nodeId;
this.state = state;
this.reason = reason;
}
public State state() {
return state;
}
public String nodeId() {
return nodeId;
}
public String reason() {
return reason;
}
public static ShardSnapshotStatus readShardSnapshotStatus(StreamInput in) throws IOException {
ShardSnapshotStatus shardSnapshotStatus = new ShardSnapshotStatus();
shardSnapshotStatus.readFrom(in);
return shardSnapshotStatus;
}
public void readFrom(StreamInput in) throws IOException {
nodeId = in.readOptionalString();
state = State.fromValue(in.readByte());
reason = in.readOptionalString();
}
public void writeTo(StreamOutput out) throws IOException {
out.writeOptionalString(nodeId);
out.writeByte(state.value);
out.writeOptionalString(reason);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ShardSnapshotStatus status = (ShardSnapshotStatus) o;
if (nodeId != null ? !nodeId.equals(status.nodeId) : status.nodeId != null) return false;
if (reason != null ? !reason.equals(status.reason) : status.reason != null) return false;
if (state != status.state) return false;
return true;
}
@Override
public int hashCode() {
int result = state != null ? state.hashCode() : 0;
result = 31 * result + (nodeId != null ? nodeId.hashCode() : 0);
result = 31 * result + (reason != null ? reason.hashCode() : 0);
return result;
}
}
public static enum State {
INIT((byte) 0),
STARTED((byte) 1),
SUCCESS((byte) 2),
FAILED((byte) 3),
ABORTED((byte) 4);
private byte value;
State(byte value) {
this.value = value;
}
public byte value() {
return value;
}
public boolean completed() {
return this == SUCCESS || this == FAILED;
}
public static State fromValue(byte value) {
switch (value) {
case 0:
return INIT;
case 1:
return STARTED;
case 2:
return SUCCESS;
case 3:
return FAILED;
case 4:
return ABORTED;
default:
throw new ElasticSearchIllegalArgumentException("No snapshot state for value [" + value + "]");
}
}
}
private final ImmutableList<Entry> entries;
public SnapshotMetaData(ImmutableList<Entry> entries) {
this.entries = entries;
}
public SnapshotMetaData(Entry... entries) {
this.entries = ImmutableList.copyOf(entries);
}
public ImmutableList<Entry> entries() {
return this.entries;
}
public Entry snapshot(SnapshotId snapshotId) {
for (Entry entry : entries) {
if (snapshotId.equals(entry.snapshotId())) {
return entry;
}
}
return null;
}
public static class Factory implements MetaData.Custom.Factory<SnapshotMetaData> {
@Override
public String type() {
return TYPE;
}
@Override
public SnapshotMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
boolean includeGlobalState = in.readBoolean();
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
ImmutableMap.Builder<ShardId, ShardSnapshotStatus> builder = ImmutableMap.<ShardId, ShardSnapshotStatus>builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
String nodeId = in.readOptionalString();
State shardState = State.fromValue(in.readByte());
builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));
}
entries[i] = new Entry(snapshotId, includeGlobalState, state, indexBuilder.build(), builder.build());
}
return new SnapshotMetaData(entries);
}
@Override
public void writeTo(SnapshotMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.entries().size());
for (Entry entry : repositories.entries()) {
entry.snapshotId().writeTo(out);
out.writeBoolean(entry.includeGlobalState());
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
out.writeOptionalString(shardEntry.getValue().nodeId());
out.writeByte(shardEntry.getValue().state().value());
}
}
}
@Override
public SnapshotMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public void toXContent(SnapshotMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray("snapshots");
for (Entry entry : customIndexMetaData.entries()) {
toXContent(entry, builder, params);
}
builder.endArray();
}
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field("repository", entry.snapshotId().getRepository());
builder.field("snapshot", entry.snapshotId().getSnapshot());
builder.field("include_global_state", entry.includeGlobalState());
builder.field("state", entry.state());
builder.startArray("indices");
{
for (String index : entry.indices()) {
builder.value(index);
}
}
builder.endArray();
builder.startArray("shards");
{
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardSnapshotStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field("index", shardId.getIndex());
builder.field("shard", shardId.getId());
builder.field("state", status.state());
builder.field("node", status.nodeId());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
public boolean isPersistent() {
return false;
}
}
}
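The sketch below (hypothetical names, not part of the commit) builds one in-progress snapshot entry and highlights that ABORTED is not a terminal state; only SUCCESS and FAILED report completed().

[source,java]
-----------------------------------
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.metadata.SnapshotMetaData;
import org.elasticsearch.index.shard.ShardId;

public class SnapshotMetaDataSketch {
    public static void main(String[] args) {
        SnapshotId snapshotId = new SnapshotId("my_backup", "snapshot_1");
        ShardId shardId = new ShardId("test-index", 0);

        SnapshotMetaData.Entry entry = new SnapshotMetaData.Entry(
                snapshotId,
                true, // include the global cluster state in the snapshot
                SnapshotMetaData.State.STARTED,
                ImmutableList.of("test-index"),
                ImmutableMap.of(shardId, new SnapshotMetaData.ShardSnapshotStatus("node_1")));

        SnapshotMetaData snapshots = new SnapshotMetaData(entry);

        System.out.println(snapshots.snapshot(snapshotId).includeGlobalState()); // true
        System.out.println(SnapshotMetaData.State.ABORTED.completed());          // false
    }
}
-----------------------------------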

View File

@ -51,6 +51,8 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
private transient ShardId shardIdentifier;
protected RestoreSource restoreSource;
private final transient ImmutableList<ShardRouting> asList;
ImmutableShardRouting() {
@ -60,11 +62,13 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
public ImmutableShardRouting(ShardRouting copy) {
this(copy.index(), copy.id(), copy.currentNodeId(), copy.primary(), copy.state(), copy.version());
this.relocatingNodeId = copy.relocatingNodeId();
this.restoreSource = copy.restoreSource();
}
public ImmutableShardRouting(ShardRouting copy, long version) {
this(copy.index(), copy.id(), copy.currentNodeId(), copy.primary(), copy.state(), copy.version());
this.relocatingNodeId = copy.relocatingNodeId();
this.restoreSource = copy.restoreSource();
this.version = version;
}
@ -74,6 +78,12 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
this.relocatingNodeId = relocatingNodeId;
}
public ImmutableShardRouting(String index, int shardId, String currentNodeId,
String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version) {
this(index, shardId, currentNodeId, relocatingNodeId, primary, state, version);
this.restoreSource = restoreSource;
}
public ImmutableShardRouting(String index, int shardId, String currentNodeId, boolean primary, ShardRoutingState state, long version) {
this.index = index;
this.shardId = shardId;
@ -149,6 +159,11 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
return this.relocatingNodeId;
}
@Override
public RestoreSource restoreSource() {
return restoreSource;
}
@Override
public boolean primary() {
return this.primary;
@ -204,6 +219,8 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
primary = in.readBoolean();
state = ShardRoutingState.fromValue(in.readByte());
restoreSource = RestoreSource.readOptionalRestoreSource(in);
}
@Override
@ -235,6 +252,13 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
out.writeBoolean(primary);
out.writeByte(state.value());
if (restoreSource != null) {
out.writeBoolean(true);
restoreSource.writeTo(out);
} else {
out.writeBoolean(false);
}
}
@Override
@ -260,6 +284,8 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
if (relocatingNodeId != null ? !relocatingNodeId.equals(that.relocatingNodeId) : that.relocatingNodeId != null)
return false;
if (state != that.state) return false;
if (restoreSource != null ? !restoreSource.equals(that.restoreSource) : that.restoreSource != null)
return false;
return true;
}
@ -272,6 +298,7 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
result = 31 * result + (relocatingNodeId != null ? relocatingNodeId.hashCode() : 0);
result = 31 * result + (primary ? 1 : 0);
result = 31 * result + (state != null ? state.hashCode() : 0);
result = 31 * result + (restoreSource != null ? restoreSource.hashCode() : 0);
return result;
}
@ -293,19 +320,28 @@ public class ImmutableShardRouting implements Streamable, Serializable, ShardRou
} else {
sb.append("[R]");
}
if (this.restoreSource != null) {
sb.append(", restoring[" + restoreSource + "]");
}
sb.append(", s[").append(state).append("]");
return sb.toString();
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
return builder.startObject()
builder.startObject()
.field("state", state())
.field("primary", primary())
.field("node", currentNodeId())
.field("relocating_node", relocatingNodeId())
.field("shard", shardId().id())
.field("index", shardId().index().name())
.endObject();
.field("restore_source");
if (restoreSource() != null) {
restoreSource().toXContent(builder, params);
} else {
builder.nullValue();
}
return builder.endObject();
}
}

View File

@ -361,6 +361,37 @@ public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {
return initializeEmpty(indexMetaData, false);
}
/**
* Initializes a new empty index, to be restored from a snapshot
*/
public Builder initializeAsNewRestore(IndexMetaData indexMetaData, RestoreSource restoreSource) {
return initializeAsRestore(indexMetaData, restoreSource, true);
}
/**
* Initializes an existing index, to be restored from a snapshot
*/
public Builder initializeAsRestore(IndexMetaData indexMetaData, RestoreSource restoreSource) {
return initializeAsRestore(indexMetaData, restoreSource, false);
}
/**
* Initializes an index, to be restored from snapshot
*/
private Builder initializeAsRestore(IndexMetaData indexMetaData, RestoreSource restoreSource, boolean asNew) {
if (!shards.isEmpty()) {
throw new ElasticSearchIllegalStateException("trying to initialize an index with fresh shards, but already has shards created");
}
for (int shardId = 0; shardId < indexMetaData.numberOfShards(); shardId++) {
IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(new ShardId(indexMetaData.index(), shardId), !asNew);
for (int i = 0; i <= indexMetaData.numberOfReplicas(); i++) {
indexShardRoutingBuilder.addShard(new ImmutableShardRouting(index, shardId, null, null, i == 0 ? restoreSource : null, i == 0, ShardRoutingState.UNASSIGNED, 0));
}
shards.put(shardId, indexShardRoutingBuilder.build());
}
return this;
}
/**
* Initializes a new empty index, with an option to control if it's from an API or not.
*/

View File

@ -42,12 +42,17 @@ public class MutableShardRouting extends ImmutableShardRouting {
public MutableShardRouting(String index, int shardId, String currentNodeId,
String relocatingNodeId, boolean primary, ShardRoutingState state, long version) {
super(index, shardId, currentNodeId, relocatingNodeId, primary, state, version);
super(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version);
}
public MutableShardRouting(String index, int shardId, String currentNodeId,
String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version) {
super(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version);
}
/**
* Assign this shard to a node.
*
*
* @param nodeId id of the node to assign this shard to
*/
public void assignToNode(String nodeId) {
@ -68,7 +73,7 @@ public class MutableShardRouting extends ImmutableShardRouting {
/**
* Relocate the shard to another node.
*
*
* @param relocatingNodeId id of the node to relocate the shard
*/
public void relocate(String relocatingNodeId) {
@ -108,12 +113,13 @@ public class MutableShardRouting extends ImmutableShardRouting {
/**
* Set the shards state to <code>STARTED</code>. The shards state must be
* <code>INITIALIZING</code> or <code>RELOCATING</code>. Any relocation will be
* canceled.
* canceled.
*/
public void moveToStarted() {
version++;
assert state == ShardRoutingState.INITIALIZING || state == ShardRoutingState.RELOCATING;
relocatingNodeId = null;
restoreSource = null;
state = ShardRoutingState.STARTED;
}
@ -139,5 +145,13 @@ public class MutableShardRouting extends ImmutableShardRouting {
}
primary = false;
}
public void restoreFrom(RestoreSource restoreSource) {
version++;
if (!primary) {
throw new IllegalShardRoutingStateException(this, "Not primary, can't restore from snapshot to replica");
}
this.restoreSource = restoreSource;
}
}
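To illustrate the routing changes, here is a hedged sketch (index, node, and snapshot names are made up) showing that a primary shard carries its restore source only while it is initializing; moveToStarted() clears it.

[source,java]
-----------------------------------
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.routing.MutableShardRouting;
import org.elasticsearch.cluster.routing.RestoreSource;
import org.elasticsearch.cluster.routing.ShardRoutingState;

public class RestoreRoutingSketch {
    public static void main(String[] args) {
        MutableShardRouting primary = new MutableShardRouting(
                "test-index", 0, "node_1", null,
                new RestoreSource(new SnapshotId("my_backup", "snapshot_1"), "test-index"),
                true, ShardRoutingState.INITIALIZING, 1);

        System.out.println(primary.restoreSource()); // my_backup:snapshot_1

        // Starting the shard clears both the relocation target and the restore source.
        primary.moveToStarted();
        System.out.println(primary.restoreSource()); // null
    }
}
-----------------------------------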

View File

@ -0,0 +1,111 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.routing;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
/**
* Represents snapshot and index from which a recovering index should be restored
*/
public class RestoreSource implements Streamable, ToXContent {
private SnapshotId snapshotId;
private String index;
RestoreSource() {
}
public RestoreSource(SnapshotId snapshotId, String index) {
this.snapshotId = snapshotId;
this.index = index;
}
public SnapshotId snapshotId() {
return snapshotId;
}
public String index() {
return index;
}
public static RestoreSource readRestoreSource(StreamInput in) throws IOException {
RestoreSource restoreSource = new RestoreSource();
restoreSource.readFrom(in);
return restoreSource;
}
public static RestoreSource readOptionalRestoreSource(StreamInput in) throws IOException {
return in.readOptionalStreamable(new RestoreSource());
}
@Override
public void readFrom(StreamInput in) throws IOException {
snapshotId = SnapshotId.readSnapshotId(in);
index = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
snapshotId.writeTo(out);
out.writeString(index);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
return builder.startObject()
.field("repository", snapshotId.getRepository())
.field("snapshot", snapshotId.getSnapshot())
.field("index", index)
.endObject();
}
@Override
public String toString() {
return snapshotId.toString();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RestoreSource that = (RestoreSource) o;
if (!index.equals(that.index)) return false;
if (!snapshotId.equals(that.snapshotId)) return false;
return true;
}
@Override
public int hashCode() {
int result = snapshotId.hashCode();
result = 31 * result + index.hashCode();
return result;
}
}
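A tiny sketch with made-up names: the restore source pairs a snapshot id with the index name inside that snapshot, and its toString() reports only the snapshot id.

[source,java]
-----------------------------------
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.routing.RestoreSource;

public class RestoreSourceSketch {
    public static void main(String[] args) {
        RestoreSource source = new RestoreSource(
                new SnapshotId("my_backup", "snapshot_1"), "test-index");

        System.out.println(source);         // my_backup:snapshot_1
        System.out.println(source.index()); // test-index
    }
}
-----------------------------------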

View File

@ -382,6 +382,20 @@ public class RoutingTable implements Iterable<IndexRoutingTable> {
return this;
}
public Builder addAsRestore(IndexMetaData indexMetaData, RestoreSource restoreSource) {
IndexRoutingTable.Builder indexRoutingBuilder = new IndexRoutingTable.Builder(indexMetaData.index())
.initializeAsRestore(indexMetaData, restoreSource);
add(indexRoutingBuilder);
return this;
}
public Builder addAsNewRestore(IndexMetaData indexMetaData, RestoreSource restoreSource) {
IndexRoutingTable.Builder indexRoutingBuilder = new IndexRoutingTable.Builder(indexMetaData.index())
.initializeAsNewRestore(indexMetaData, restoreSource);
add(indexRoutingBuilder);
return this;
}
public Builder add(IndexRoutingTable indexRoutingTable) {
indexRoutingTable.validate();
indicesRouting.put(indexRoutingTable.index(), indexRoutingTable);

View File

@ -116,6 +116,11 @@ public interface ShardRouting extends Streamable, Serializable, ToXContent {
*/
String relocatingNodeId();
/**
* Snapshot id and repository where this shard is being restored from
*/
RestoreSource restoreSource();
/**
* Returns <code>true</code> iff this shard is a primary.
*/

View File

@ -531,8 +531,8 @@ public class AllocationService extends AbstractComponent {
allocation.routingNodes().unassigned().addAll(shardsToMove);
}
allocation.routingNodes().unassigned().add(new MutableShardRouting(failedShard.index(), failedShard.id(),
null, failedShard.primary(), ShardRoutingState.UNASSIGNED, failedShard.version() + 1));
allocation.routingNodes().unassigned().add(new MutableShardRouting(failedShard.index(), failedShard.id(), null,
null, failedShard.restoreSource(), failedShard.primary(), ShardRoutingState.UNASSIGNED, failedShard.version() + 1));
break;
}

View File

@ -544,7 +544,7 @@ public class BalancedShardsAllocator extends AbstractComponent implements Shards
if (decision.type() == Type.YES) { // TODO maybe we can respect throttling here too?
sourceNode.removeShard(shard);
final MutableShardRouting initializingShard = new MutableShardRouting(shard.index(), shard.id(), currentNode.getNodeId(),
shard.currentNodeId(), shard.primary(), INITIALIZING, shard.version() + 1);
shard.currentNodeId(), shard.restoreSource(), shard.primary(), INITIALIZING, shard.version() + 1);
currentNode.addShard(initializingShard, decision);
target.add(initializingShard);
shard.relocate(target.nodeId()); // set the node to relocate after we added the initializing shard
@ -647,7 +647,7 @@ public class BalancedShardsAllocator extends AbstractComponent implements Shards
node.addShard(shard, Decision.ALWAYS);
float currentWeight = weight.weight(Operation.ALLOCATE, this, node, shard.index());
/*
* Remove the shard from the node again this is only a
* Remove the shard from the node again this is only a
* simulation
*/
Decision removed = node.removeShard(shard);
@ -782,7 +782,7 @@ public class BalancedShardsAllocator extends AbstractComponent implements Shards
if (candidate.started()) {
RoutingNode lowRoutingNode = allocation.routingNodes().node(minNode.getNodeId());
lowRoutingNode.add(new MutableShardRouting(candidate.index(), candidate.id(), lowRoutingNode.nodeId(), candidate
.currentNodeId(), candidate.primary(), INITIALIZING, candidate.version() + 1));
.currentNodeId(), candidate.restoreSource(), candidate.primary(), INITIALIZING, candidate.version() + 1));
candidate.relocate(lowRoutingNode.nodeId());
} else {

View File

@ -174,7 +174,7 @@ public class EvenShardsCountAllocator extends AbstractComponent implements Shard
if (allocateDecision.type() == Decision.Type.YES) {
changed = true;
lowRoutingNode.add(new MutableShardRouting(startedShard.index(), startedShard.id(),
lowRoutingNode.nodeId(), startedShard.currentNodeId(),
lowRoutingNode.nodeId(), startedShard.currentNodeId(), startedShard.restoreSource(),
startedShard.primary(), INITIALIZING, startedShard.version() + 1));
startedShard.relocate(lowRoutingNode.nodeId());
@ -211,7 +211,7 @@ public class EvenShardsCountAllocator extends AbstractComponent implements Shard
Decision decision = allocation.deciders().canAllocate(shardRouting, nodeToCheck, allocation);
if (decision.type() == Decision.Type.YES) {
nodeToCheck.add(new MutableShardRouting(shardRouting.index(), shardRouting.id(),
nodeToCheck.nodeId(), shardRouting.currentNodeId(),
nodeToCheck.nodeId(), shardRouting.currentNodeId(), shardRouting.restoreSource(),
shardRouting.primary(), INITIALIZING, shardRouting.version() + 1));
shardRouting.relocate(nodeToCheck.nodeId());

View File

@ -168,7 +168,7 @@ public class MoveAllocationCommand implements AllocationCommand {
}
toRoutingNode.add(new MutableShardRouting(shardRouting.index(), shardRouting.id(),
toRoutingNode.nodeId(), shardRouting.currentNodeId(),
toRoutingNode.nodeId(), shardRouting.currentNodeId(), shardRouting.restoreSource(),
shardRouting.primary(), ShardRoutingState.INITIALIZING, shardRouting.version() + 1));
shardRouting.relocate(toRoutingNode.nodeId());

View File

@ -39,7 +39,8 @@ public class AllocationDeciders extends AllocationDecider {
/**
* Create a new {@link AllocationDeciders} instance
* @param settings settings to use
*
* @param settings settings to use
* @param nodeSettingsService per-node settings to use
*/
public AllocationDeciders(Settings settings, NodeSettingsService nodeSettingsService) {
@ -55,6 +56,7 @@ public class AllocationDeciders extends AllocationDecider {
.add(new AwarenessAllocationDecider(settings, nodeSettingsService))
.add(new ShardsLimitAllocationDecider(settings))
.add(new DiskThresholdDecider(settings, nodeSettingsService))
.add(new SnapshotInProgressAllocationDecider(settings))
.build()
);
}

View File

@ -29,6 +29,7 @@ import java.util.List;
/**
* This module configures several {@link AllocationDecider}s
* that make configuration specific decisions if shards can be allocated on certain nodes.
*
* @see Decision
* @see AllocationDecider
*/
@ -61,6 +62,7 @@ public class AllocationDecidersModule extends AbstractModule {
allocationMultibinder.addBinding().to(AwarenessAllocationDecider.class);
allocationMultibinder.addBinding().to(ShardsLimitAllocationDecider.class);
allocationMultibinder.addBinding().to(DiskThresholdDecider.class);
allocationMultibinder.addBinding().to(SnapshotInProgressAllocationDecider.class);
for (Class<? extends AllocationDecider> allocation : allocations) {
allocationMultibinder.addBinding().to(allocation);
}

View File

@ -0,0 +1,116 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.routing.allocation.decider;
import org.elasticsearch.cluster.metadata.SnapshotMetaData;
import org.elasticsearch.cluster.routing.RoutingNode;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.settings.NodeSettingsService;
/**
* This {@link org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider} prevents shards that
* are currently being snapshotted from being moved to other nodes.
*/
public class SnapshotInProgressAllocationDecider extends AllocationDecider {
/**
* Disables relocation of shards that are currently being snapshotted.
*/
public static final String CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED = "cluster.routing.allocation.snapshot.relocation_enabled";
class ApplySettings implements NodeSettingsService.Listener {
@Override
public void onRefreshSettings(Settings settings) {
boolean newEnableRelocation = settings.getAsBoolean(CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED, enableRelocation);
if (newEnableRelocation != enableRelocation) {
logger.info("updating [{}] from [{}], to [{}]", CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED, enableRelocation, newEnableRelocation);
enableRelocation = newEnableRelocation;
}
}
}
private volatile boolean enableRelocation = false;
/**
* Creates a new {@link org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider} instance
*/
public SnapshotInProgressAllocationDecider() {
this(ImmutableSettings.Builder.EMPTY_SETTINGS);
}
/**
* Creates a new {@link org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider} instance from given settings
*
* @param settings {@link org.elasticsearch.common.settings.Settings} to use
*/
public SnapshotInProgressAllocationDecider(Settings settings) {
this(settings, new NodeSettingsService(settings));
}
@Inject
public SnapshotInProgressAllocationDecider(Settings settings, NodeSettingsService nodeSettingsService) {
super(settings);
enableRelocation = settings.getAsBoolean(CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED, enableRelocation);
nodeSettingsService.addListener(new ApplySettings());
}
/**
* Returns a {@link Decision} whether the given shard routing can be
* re-balanced to the given allocation. The default is
* {@link Decision#ALWAYS}.
*/
public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation allocation) {
return canMove(shardRouting, allocation);
}
/**
* Returns a {@link Decision} whether the given shard routing can be
* allocated on the given node. The default is {@link Decision#ALWAYS}.
*/
public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {
return canMove(shardRouting, allocation);
}
private Decision canMove(ShardRouting shardRouting, RoutingAllocation allocation) {
if (!enableRelocation && shardRouting.primary()) {
// Only primary shards are snapshotted
SnapshotMetaData snapshotMetaData = allocation.metaData().custom(SnapshotMetaData.TYPE);
if (snapshotMetaData == null) {
// Snapshots are not running
return Decision.YES;
}
for (SnapshotMetaData.Entry snapshot : snapshotMetaData.entries()) {
SnapshotMetaData.ShardSnapshotStatus shardSnapshotStatus = snapshot.shards().get(shardRouting.shardId());
if (shardSnapshotStatus != null && !shardSnapshotStatus.state().completed() && shardSnapshotStatus.nodeId() != null && shardSnapshotStatus.nodeId().equals(shardRouting.currentNodeId())) {
logger.trace("Preventing snapshotted shard [{}] to be moved from node [{}]", shardRouting.shardId(), shardSnapshotStatus.nodeId());
return Decision.NO;
}
}
}
return Decision.YES;
}
}
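As a sketch only (the real decider is wired up through Guice and the node settings service), this shows how the dynamic setting key can be used to turn relocation of snapshotted shards back on. The decision logic itself needs a RoutingAllocation and is not exercised here.

[source,java]
-----------------------------------
import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class SnapshotDeciderSketch {
    public static void main(String[] args) {
        // By default the decider refuses to move a primary whose snapshot is still
        // running on its current node; this flag disables that protection.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put(SnapshotInProgressAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED, true)
                .build();

        SnapshotInProgressAllocationDecider decider = new SnapshotInProgressAllocationDecider(settings);
        System.out.println(decider.getClass().getSimpleName() + " created with relocation enabled");
    }
}
-----------------------------------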

View File

@ -74,6 +74,7 @@ public class ClusterDynamicSettingsModule extends AbstractModule {
clusterDynamicSettings.addDynamicSetting(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK);
clusterDynamicSettings.addDynamicSetting(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED);
clusterDynamicSettings.addDynamicSetting(InternalClusterInfoService.INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, Validator.TIME);
clusterDynamicSettings.addDynamicSetting(SnapshotInProgressAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED);
}
public void addDynamicSettings(String... settings) {

View File

@ -19,7 +19,15 @@
package org.elasticsearch.common;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.settings.NoClassSettingsException;
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Locale;
import static org.elasticsearch.common.Strings.toCamelCase;
/**
*
@ -91,6 +99,46 @@ public class Classes {
return !clazz.isInterface() && !Modifier.isAbstract(modifiers);
}
public static <T> Class<? extends T> loadClass(ClassLoader classLoader, String className, String prefixPackage, String suffixClassName) {
return loadClass(classLoader, className, prefixPackage, suffixClassName, null);
}
@SuppressWarnings({"unchecked"})
public static <T> Class<? extends T> loadClass(ClassLoader classLoader, String className, String prefixPackage, String suffixClassName, String errorPrefix) {
Throwable t = null;
String[] classNames = classNames(className, prefixPackage, suffixClassName);
for (String fullClassName : classNames) {
try {
return (Class<? extends T>) classLoader.loadClass(fullClassName);
} catch (ClassNotFoundException ex) {
t = ex;
} catch (NoClassDefFoundError er) {
t = er;
}
}
if (errorPrefix == null) {
errorPrefix = "failed to load class";
}
throw new NoClassSettingsException(errorPrefix + " with value [" + className + "]; tried " + Arrays.toString(classNames), t);
}
private static String[] classNames(String className, String prefixPackage, String suffixClassName) {
String prefixValue = prefixPackage;
int packageSeparator = className.lastIndexOf('.');
String classNameValue = className;
// If class name contains package use it as package prefix instead of specified default one
if (packageSeparator > 0) {
prefixValue = className.substring(0, packageSeparator + 1);
classNameValue = className.substring(packageSeparator + 1);
}
return new String[]{
className,
prefixValue + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,
prefixValue + toCamelCase(classNameValue) + "." + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,
prefixValue + toCamelCase(classNameValue).toLowerCase(Locale.ROOT) + "." + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,
};
}
private Classes() {
}
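The helper above resolves a short type name into candidate fully-qualified class names. A hedged sketch follows, with entirely made-up package and class names; the lookup deliberately fails, which prints the candidate names the helper tried.

[source,java]
-----------------------------------
import org.elasticsearch.common.Classes;
import org.elasticsearch.common.settings.NoClassSettingsException;

public class ClassesSketch {
    public static void main(String[] args) {
        try {
            // "fs" is tried verbatim first, then expanded against the prefix package
            // and suffix class name in several camel-cased variants.
            Classes.loadClass(ClassesSketch.class.getClassLoader(),
                    "fs", "org.example.repo.", "Repository", "failed to load repository type");
        } catch (NoClassSettingsException e) {
            // The message lists every candidate class name that was tried.
            System.out.println(e.getMessage());
        }
    }
}
-----------------------------------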

View File

@ -0,0 +1,117 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.blobstore.url;
import com.google.common.collect.ImmutableMap;
import org.apache.lucene.util.IOUtils;
import org.elasticsearch.common.blobstore.BlobMetaData;
import org.elasticsearch.common.blobstore.BlobPath;
import org.elasticsearch.common.blobstore.support.AbstractBlobContainer;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
/**
* URL blob implementation of {@link org.elasticsearch.common.blobstore.BlobContainer}
*/
public abstract class AbstractURLBlobContainer extends AbstractBlobContainer {
protected final URLBlobStore blobStore;
protected final URL path;
/**
* Constructs new AbstractURLBlobContainer
*
* @param blobStore blob store
* @param blobPath blob path for this container
* @param path URL for this container
*/
public AbstractURLBlobContainer(URLBlobStore blobStore, BlobPath blobPath, URL path) {
super(blobPath);
this.blobStore = blobStore;
this.path = path;
}
/**
* Returns URL for this container
*
* @return URL for this container
*/
public URL url() {
return this.path;
}
/**
* This operation is not supported by AbstractURLBlobContainer
*/
@Override
public ImmutableMap<String, BlobMetaData> listBlobs() throws IOException {
throw new UnsupportedOperationException("URL repository doesn't support this operation");
}
/**
* This operation is not supported by AbstractURLBlobContainer
*/
@Override
public boolean deleteBlob(String blobName) throws IOException {
throw new UnsupportedOperationException("URL repository is read only");
}
/**
* This operation is not supported by AbstractURLBlobContainer
*/
@Override
public boolean blobExists(String blobName) {
throw new UnsupportedOperationException("URL repository doesn't support this operation");
}
/**
* {@inheritDoc}
*/
@Override
public void readBlob(final String blobName, final ReadBlobListener listener) {
blobStore.executor().execute(new Runnable() {
@Override
public void run() {
byte[] buffer = new byte[blobStore.bufferSizeInBytes()];
InputStream is = null;
try {
is = new URL(path, blobName).openStream();
} catch (IOException e) {
IOUtils.closeWhileHandlingException(is);
listener.onFailure(e);
return;
}
try {
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
listener.onPartial(buffer, 0, bytesRead);
}
listener.onCompleted();
} catch (IOException e) {
IOUtils.closeWhileHandlingException(is);
listener.onFailure(e);
}
}
});
}
}

View File

@ -0,0 +1,151 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.blobstore.url;
import org.elasticsearch.common.blobstore.BlobPath;
import org.elasticsearch.common.blobstore.BlobStore;
import org.elasticsearch.common.blobstore.BlobStoreException;
import org.elasticsearch.common.blobstore.ImmutableBlobContainer;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.Executor;
/**
* Read-only URL-based blob store
*/
public class URLBlobStore extends AbstractComponent implements BlobStore {
private final Executor executor;
private final URL path;
private final int bufferSizeInBytes;
/**
* Constructs a new read-only URL-based blob store
* <p/>
* The following settings are supported
* <dl>
* <dt>buffer_size</dt>
* <dd>- size of the read buffer, defaults to 100KB</dd>
* </dl>
*
* @param settings settings
* @param executor executor for read operations
* @param path base URL
*/
public URLBlobStore(Settings settings, Executor executor, URL path) {
super(settings);
this.path = path;
this.bufferSizeInBytes = (int) settings.getAsBytesSize("buffer_size", new ByteSizeValue(100, ByteSizeUnit.KB)).bytes();
this.executor = executor;
}
/**
* {@inheritDoc}
*/
@Override
public String toString() {
return path.toString();
}
/**
* Returns base URL
*
* @return base URL
*/
public URL path() {
return path;
}
/**
* Returns read buffer size
*
* @return read buffer size
*/
public int bufferSizeInBytes() {
return this.bufferSizeInBytes;
}
/**
* Returns executor used for read operations
*
* @return executor
*/
public Executor executor() {
return executor;
}
/**
* {@inheritDoc}
*/
@Override
public ImmutableBlobContainer immutableBlobContainer(BlobPath path) {
try {
return new URLImmutableBlobContainer(this, path, buildPath(path));
} catch (MalformedURLException ex) {
throw new BlobStoreException("malformed URL " + path, ex);
}
}
/**
* This operation is not supported by URL Blob Store
*
* @param path
*/
@Override
public void delete(BlobPath path) {
throw new UnsupportedOperationException("URL repository is read only");
}
/**
* {@inheritDoc}
*/
@Override
public void close() {
// nothing to do here...
}
/**
* Builds URL using base URL and specified path
*
* @param path relative path
* @return Base URL + path
* @throws MalformedURLException
*/
private URL buildPath(BlobPath path) throws MalformedURLException {
String[] paths = path.toArray();
if (paths.length == 0) {
return path();
}
URL blobPath = new URL(this.path, paths[0] + "/");
if (paths.length > 1) {
for (int i = 1; i < paths.length; i++) {
blobPath = new URL(blobPath, paths[i] + "/");
}
}
return blobPath;
}
}
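For illustration only, the store above can also be wired by hand; in practice it is constructed by the repository implementation, so the executor, base URL and the use of the ImmutableSettings builder below are assumptions of this sketch. It shows how the buffer_size setting documented on the constructor overrides the 100KB default read buffer.

import org.elasticsearch.common.blobstore.url.URLBlobStore;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative wiring sketch, not part of this commit.
public class URLBlobStoreExample {
    public static void main(String[] args) throws Exception {
        // buffer_size maps to the setting documented on the constructor above (default 100KB)
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("buffer_size", "200kb")
                .build();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        URL base = new URL("http://example.com/backups/my_backup/"); // assumed base URL
        URLBlobStore store = new URLBlobStore(settings, executor, base);
        System.out.println(store.path() + " buffer=" + store.bufferSizeInBytes()); // 204800 with the override
        store.close();
        executor.shutdown();
    }
}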

View File

@ -0,0 +1,60 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.blobstore.url;
import org.elasticsearch.common.blobstore.BlobPath;
import org.elasticsearch.common.blobstore.ImmutableBlobContainer;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
/**
* Read-only URL-based implementation of {@link ImmutableBlobContainer}
*/
public class URLImmutableBlobContainer extends AbstractURLBlobContainer implements ImmutableBlobContainer {
/**
* Constructs a new URLImmutableBlobContainer
*
* @param blobStore blob store
* @param blobPath blob path to this container
* @param path URL of this container
*/
public URLImmutableBlobContainer(URLBlobStore blobStore, BlobPath blobPath, URL path) {
super(blobStore, blobPath, path);
}
/**
* This operation is not supported by URL Blob Container
*/
@Override
public void writeBlob(final String blobName, final InputStream is, final long sizeInBytes, final WriterListener listener) {
throw new UnsupportedOperationException("URL repository is read only");
}
/**
* This operation is not supported by URL Blob Container
*/
@Override
public void writeBlob(String blobName, InputStream is, long sizeInBytes) throws IOException {
throw new UnsupportedOperationException("URL repository is read only");
}
}

View File

@ -106,6 +106,7 @@ public class LocalGatewayMetaState extends AbstractComponent implements ClusterS
private final XContentType format;
private final ToXContent.Params formatParams;
private final ToXContent.Params globalOnlyFormatParams;
private final AutoImportDangledState autoImportDangled;
@ -129,8 +130,15 @@ public class LocalGatewayMetaState extends AbstractComponent implements ClusterS
Map<String, String> params = Maps.newHashMap();
params.put("binary", "true");
formatParams = new ToXContent.MapParams(params);
Map<String, String> globalOnlyParams = Maps.newHashMap();
globalOnlyParams.put("binary", "true");
globalOnlyParams.put(MetaData.GLOBAL_PERSISTENT_ONLY_PARAM, "true");
globalOnlyFormatParams = new ToXContent.MapParams(globalOnlyParams);
} else {
formatParams = ToXContent.EMPTY_PARAMS;
Map<String, String> globalOnlyParams = Maps.newHashMap();
globalOnlyParams.put(MetaData.GLOBAL_PERSISTENT_ONLY_PARAM, "true");
globalOnlyFormatParams = new ToXContent.MapParams(globalOnlyParams);
}
this.autoImportDangled = AutoImportDangledState.fromString(settings.get("gateway.local.auto_import_dangled", AutoImportDangledState.YES.toString()));
@ -386,16 +394,13 @@ public class LocalGatewayMetaState extends AbstractComponent implements ClusterS
private void writeGlobalState(String reason, MetaData metaData, @Nullable MetaData previousMetaData) throws Exception {
logger.trace("[_global] writing state, reason [{}]", reason);
XContentBuilder builder = XContentFactory.contentBuilder(format);
builder.startObject();
MetaData.Builder.toXContent(metaData, builder, globalOnlyFormatParams);
builder.endObject();
builder.flush();
String globalFileName = "global-" + metaData.version();
Throwable lastFailure = null;
boolean wroteAtLeastOnce = false;
for (File dataLocation : nodeEnv.nodeDataLocations()) {

View File

@ -33,7 +33,9 @@ import org.elasticsearch.index.settings.IndexSettingsService;
import org.elasticsearch.index.shard.*;
import org.elasticsearch.index.shard.service.IndexShard;
import org.elasticsearch.index.shard.service.InternalIndexShard;
import org.elasticsearch.index.snapshots.IndexShardSnapshotAndRestoreService;
import org.elasticsearch.index.translog.Translog;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.threadpool.ThreadPool;
import java.util.concurrent.ScheduledFuture;
@ -55,6 +57,8 @@ public class IndexShardGatewayService extends AbstractIndexShardComponent implem
private final IndexShardGateway shardGateway;
private final IndexShardSnapshotAndRestoreService snapshotService;
private volatile long lastIndexVersion;
@ -76,14 +80,17 @@ public class IndexShardGatewayService extends AbstractIndexShardComponent implem
private final ApplySettings applySettings = new ApplySettings();
@Inject
public IndexShardGatewayService(ShardId shardId, @IndexSettings Settings indexSettings, IndexSettingsService indexSettingsService,
ThreadPool threadPool, IndexShard indexShard, IndexShardGateway shardGateway, IndexShardSnapshotAndRestoreService snapshotService,
RepositoriesService repositoriesService) {
super(shardId, indexSettings);
this.threadPool = threadPool;
this.indexSettingsService = indexSettingsService;
this.indexShard = (InternalIndexShard) indexShard;
this.shardGateway = shardGateway;
this.snapshotService = snapshotService;
this.snapshotOnClose = componentSettings.getAsBoolean("snapshot_on_close", true);
this.snapshotInterval = componentSettings.getAsTime("snapshot_interval", TimeValue.timeValueSeconds(10));
@ -156,7 +163,11 @@ public class IndexShardGatewayService extends AbstractIndexShardComponent implem
return;
}
try {
if (indexShard.routingEntry().restoreSource() != null) {
indexShard.recovering("from snapshot");
} else {
indexShard.recovering("from gateway");
}
} catch (IllegalIndexShardStateException e) {
// that's fine, since we might be called concurrently, just ignore this, we are already recovering
listener.onIgnoreRecovery("already in recovering process, " + e.getMessage());
@ -170,8 +181,13 @@ public class IndexShardGatewayService extends AbstractIndexShardComponent implem
recoveryStatus.updateStage(RecoveryStatus.Stage.INIT);
try {
logger.debug("starting recovery from {} ...", shardGateway);
shardGateway.recover(indexShouldExists, recoveryStatus);
if (indexShard.routingEntry().restoreSource() != null) {
logger.debug("restoring from {} ...", indexShard.routingEntry().restoreSource());
snapshotService.restore(recoveryStatus);
} else {
logger.debug("starting recovery from {} ...", shardGateway);
shardGateway.recover(indexShouldExists, recoveryStatus);
}
lastIndexVersion = recoveryStatus.index().version();
lastTranslogId = -1;

View File

@ -63,6 +63,7 @@ import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.service.IndexShard;
import org.elasticsearch.index.shard.service.InternalIndexShard;
import org.elasticsearch.index.similarity.SimilarityService;
import org.elasticsearch.index.snapshots.IndexShardSnapshotModule;
import org.elasticsearch.index.store.IndexStore;
import org.elasticsearch.index.store.Store;
import org.elasticsearch.index.store.StoreModule;
@ -335,6 +336,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde
modules.add(new IndexShardGatewayModule(injector.getInstance(IndexGateway.class)));
modules.add(new PercolatorShardModule());
modules.add(new ShardTermVectorModule());
modules.add(new IndexShardSnapshotModule());
Injector shardInjector;
try {

View File

@ -27,6 +27,7 @@ import org.elasticsearch.index.cache.filter.FilterCacheStats;
import org.elasticsearch.index.cache.filter.ShardFilterCache;
import org.elasticsearch.index.cache.id.IdCacheStats;
import org.elasticsearch.index.cache.id.ShardIdCache;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.engine.EngineException;
import org.elasticsearch.index.engine.SegmentsStats;
@ -146,6 +147,8 @@ public interface IndexShard extends IndexShardComponent {
<T> T snapshot(Engine.SnapshotHandler<T> snapshotHandler) throws EngineException;
SnapshotIndexCommit snapshotIndex() throws EngineException;
void recover(Engine.RecoveryHandler recoveryHandler) throws EngineException;
Engine.Searcher acquireSearcher(String source);

View File

@ -48,6 +48,7 @@ import org.elasticsearch.index.cache.filter.ShardFilterCache;
import org.elasticsearch.index.cache.id.IdCacheStats;
import org.elasticsearch.index.cache.id.ShardIdCache;
import org.elasticsearch.index.codec.CodecService;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.engine.*;
import org.elasticsearch.index.fielddata.FieldDataStats;
import org.elasticsearch.index.fielddata.IndexFieldDataService;
@ -604,10 +605,22 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I
public <T> T snapshot(Engine.SnapshotHandler<T> snapshotHandler) throws EngineException {
IndexShardState state = this.state; // one time volatile read
// we allow snapshot on closed index shard, since we want to do one after we close the shard and before we close the engine
if (state == IndexShardState.POST_RECOVERY || state == IndexShardState.STARTED || state == IndexShardState.RELOCATED || state == IndexShardState.CLOSED) {
return engine.snapshot(snapshotHandler);
} else {
throw new IllegalIndexShardStateException(shardId, state, "snapshot is not allowed");
}
}
@Override
public SnapshotIndexCommit snapshotIndex() throws EngineException {
IndexShardState state = this.state; // one time volatile read
// we allow snapshot on closed index shard, since we want to do one after we close the shard and before we close the engine
if (state == IndexShardState.STARTED || state == IndexShardState.RELOCATED || state == IndexShardState.CLOSED) {
return engine.snapshotIndex();
} else {
throw new IllegalIndexShardStateException(shardId, state, "snapshot is not allowed");
}
}
@Override

View File

@ -0,0 +1,63 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.gateway.RecoveryStatus;
import org.elasticsearch.index.shard.ShardId;
/**
* Shard-level snapshot repository
* <p/>
* IndexShardRepository is used on data nodes to create snapshots of individual shards. See {@link org.elasticsearch.repositories.Repository}
* for more information.
*/
public interface IndexShardRepository {
/**
* Creates a snapshot of the shard based on the index commit point.
* <p/>
* The index commit point can be obtained by using the {@link org.elasticsearch.index.engine.robin.RobinEngine#snapshotIndex()} method.
* IndexShardRepository implementations shouldn't release the snapshot index commit point; that is done by the caller of this method.
* <p/>
* As snapshot process progresses, implementation of this method should update {@link IndexShardSnapshotStatus} object and check
* {@link IndexShardSnapshotStatus#aborted()} to see if the snapshot process should be aborted.
*
* @param snapshotId snapshot id
* @param shardId shard to be snapshotted
* @param snapshotIndexCommit commit point
* @param snapshotStatus snapshot status
*/
void snapshot(SnapshotId snapshotId, ShardId shardId, SnapshotIndexCommit snapshotIndexCommit, IndexShardSnapshotStatus snapshotStatus);
/**
* Restores snapshot of the shard.
* <p/>
* The index can be renamed on restore, hence different {@code shardId} and {@code snapshotShardId} are supplied.
*
* @param snapshotId snapshot id
* @param shardId shard id (in the current index)
* @param snapshotShardId shard id (in the snapshot)
* @param recoveryStatus recovery status
*/
void restore(SnapshotId snapshotId, ShardId shardId, ShardId snapshotShardId, RecoveryStatus recoveryStatus);
}
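The contract above leaves the actual copying to implementations such as the BlobStoreIndexShardRepository added later in this commit. The skeleton below is a hedged, no-op implementation (the class name NoopIndexShardRepository is invented here) showing only the expected control flow: report progress through IndexShardSnapshotStatus and honour the abort flag between files.

import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.gateway.RecoveryStatus;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.snapshots.IndexShardRepository;
import org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException;
import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;

// Minimal sketch of the expected control flow, not a real repository implementation.
public class NoopIndexShardRepository implements IndexShardRepository {

    @Override
    public void snapshot(SnapshotId snapshotId, ShardId shardId,
                         SnapshotIndexCommit snapshotIndexCommit, IndexShardSnapshotStatus snapshotStatus) {
        snapshotStatus.startTime(System.currentTimeMillis());
        snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.STARTED);
        for (String file : snapshotIndexCommit.getFiles()) {   // files of the commit point
            if (snapshotStatus.aborted()) {                    // honour the abort contract
                throw new IndexShardSnapshotFailedException(shardId, "Aborted");
            }
            // a real implementation would copy `file` into the repository here
        }
        snapshotStatus.time(System.currentTimeMillis() - snapshotStatus.startTime());
        snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.DONE);
    }

    @Override
    public void restore(SnapshotId snapshotId, ShardId shardId,
                        ShardId snapshotShardId, RecoveryStatus recoveryStatus) {
        // a real implementation would copy the files of `snapshotShardId` into the store of `shardId`
    }
}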

View File

@ -0,0 +1,36 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.index.shard.IndexShardException;
import org.elasticsearch.index.shard.ShardId;
/**
* Generic shard restore exception
*/
public class IndexShardRestoreException extends IndexShardException {
public IndexShardRestoreException(ShardId shardId, String msg) {
super(shardId, msg);
}
public IndexShardRestoreException(ShardId shardId, String msg, Throwable cause) {
super(shardId, msg, cause);
}
}

View File

@ -0,0 +1,35 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.index.shard.ShardId;
/**
* Thrown when restore of a shard fails
*/
public class IndexShardRestoreFailedException extends IndexShardRestoreException {
public IndexShardRestoreFailedException(ShardId shardId, String msg) {
super(shardId, msg);
}
public IndexShardRestoreFailedException(ShardId shardId, String msg, Throwable cause) {
super(shardId, msg, cause);
}
}

View File

@ -0,0 +1,130 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.routing.RestoreSource;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.engine.SnapshotFailedEngineException;
import org.elasticsearch.index.gateway.RecoveryStatus;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
import org.elasticsearch.index.shard.IndexShardState;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.service.IndexShard;
import org.elasticsearch.index.shard.service.InternalIndexShard;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.snapshots.RestoreService;
/**
* Shard level snapshot and restore service
* <p/>
* Performs snapshot and restore operations at the shard level.
*/
public class IndexShardSnapshotAndRestoreService extends AbstractIndexShardComponent {
private final InternalIndexShard indexShard;
private final RepositoriesService repositoriesService;
private final RestoreService restoreService;
@Inject
public IndexShardSnapshotAndRestoreService(ShardId shardId, @IndexSettings Settings indexSettings, IndexShard indexShard, RepositoriesService repositoriesService, RestoreService restoreService) {
super(shardId, indexSettings);
this.indexShard = (InternalIndexShard) indexShard;
this.repositoriesService = repositoriesService;
this.restoreService = restoreService;
}
/**
* Creates shard snapshot
*
* @param snapshotId snapshot id
* @param snapshotStatus snapshot status
*/
public void snapshot(final SnapshotId snapshotId, final IndexShardSnapshotStatus snapshotStatus) {
IndexShardRepository indexShardRepository = repositoriesService.indexShardRepository(snapshotId.getRepository());
if (!indexShard.routingEntry().primary()) {
throw new IndexShardSnapshotFailedException(shardId, "snapshot should be performed only on primary");
}
if (indexShard.routingEntry().relocating()) {
// do not snapshot when in the process of relocation of primaries so we won't get conflicts
throw new IndexShardSnapshotFailedException(shardId, "cannot snapshot while relocating");
}
if (indexShard.state() == IndexShardState.CREATED || indexShard.state() == IndexShardState.RECOVERING) {
// shard has just been created, or still recovering
throw new IndexShardSnapshotFailedException(shardId, "shard didn't fully recover yet");
}
try {
SnapshotIndexCommit snapshotIndexCommit = indexShard.snapshotIndex();
try {
indexShardRepository.snapshot(snapshotId, shardId, snapshotIndexCommit, snapshotStatus);
if (logger.isDebugEnabled()) {
StringBuilder sb = new StringBuilder();
sb.append("snapshot (").append(snapshotId.getSnapshot()).append(") completed to ").append(indexShardRepository).append(", took [").append(TimeValue.timeValueMillis(snapshotStatus.time())).append("]\n");
sb.append(" index : version [").append(snapshotStatus.indexVersion()).append("], number_of_files [").append(snapshotStatus.numberOfFiles()).append("] with total_size [").append(new ByteSizeValue(snapshotStatus.totalSize())).append("]\n");
logger.debug(sb.toString());
}
} finally {
snapshotIndexCommit.release();
}
} catch (SnapshotFailedEngineException e) {
throw e;
} catch (IndexShardSnapshotFailedException e) {
throw e;
} catch (Throwable e) {
throw new IndexShardSnapshotFailedException(shardId, "Failed to snapshot", e);
}
}
/**
* Restores the shard from the {@link RestoreSource} associated with this shard in the routing table
*
* @param recoveryStatus recovery status
*/
public void restore(final RecoveryStatus recoveryStatus) {
RestoreSource restoreSource = indexShard.routingEntry().restoreSource();
if (restoreSource == null) {
throw new IndexShardRestoreFailedException(shardId, "empty restore source");
}
if (logger.isTraceEnabled()) {
logger.trace("[{}] restoring shard [{}]", restoreSource.snapshotId(), shardId);
}
try {
IndexShardRepository indexShardRepository = repositoriesService.indexShardRepository(restoreSource.snapshotId().getRepository());
ShardId snapshotShardId = shardId;
if (!shardId.getIndex().equals(restoreSource.index())) {
snapshotShardId = new ShardId(restoreSource.index(), shardId.id());
}
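// Illustrative example: if the snapshot index "my_index" is restored under the new name "restored_index",
// shardId here is [restored_index][0] while snapshotShardId becomes [my_index][0], so the files are read
// from the original index location inside the repository.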
indexShardRepository.restore(restoreSource.snapshotId(), shardId, snapshotShardId, recoveryStatus);
restoreService.indexShardRestoreCompleted(restoreSource.snapshotId(), shardId);
} catch (Throwable t) {
throw new IndexShardRestoreFailedException(shardId, "restore failed", t);
}
}
}

View File

@ -0,0 +1,36 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.index.shard.IndexShardException;
import org.elasticsearch.index.shard.ShardId;
/**
* Generic shard snapshot exception
*/
public class IndexShardSnapshotException extends IndexShardException {
public IndexShardSnapshotException(ShardId shardId, String msg) {
super(shardId, msg);
}
public IndexShardSnapshotException(ShardId shardId, String msg, Throwable cause) {
super(shardId, msg, cause);
}
}

View File

@ -0,0 +1,35 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.index.shard.ShardId;
/**
* Thrown when the snapshot process fails at the shard level
*/
public class IndexShardSnapshotFailedException extends IndexShardSnapshotException {
public IndexShardSnapshotFailedException(ShardId shardId, String msg) {
super(shardId, msg);
}
public IndexShardSnapshotFailedException(ShardId shardId, String msg, Throwable cause) {
super(shardId, msg, cause);
}
}

View File

@ -0,0 +1,33 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
import org.elasticsearch.common.inject.AbstractModule;
/**
* This shard-level module configures {@link IndexShardSnapshotAndRestoreService}
*/
public class IndexShardSnapshotModule extends AbstractModule {
@Override
protected void configure() {
bind(IndexShardSnapshotAndRestoreService.class).asEagerSingleton();
}
}

View File

@ -0,0 +1,183 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots;
/**
* Represents the status of a shard snapshot
*/
public class IndexShardSnapshotStatus {
/**
* Snapshot stage
*/
public static enum Stage {
/**
* Snapshot hasn't started yet
*/
INIT,
/**
* Index files are being copied
*/
STARTED,
/**
* Snapshot metadata is being written
*/
FINALIZE,
/**
* Snapshot completed successfully
*/
DONE,
/**
* Snapshot failed
*/
FAILURE
}
private Stage stage = Stage.INIT;
private long startTime;
private long time;
private int numberOfFiles;
private long totalSize;
private long indexVersion;
private boolean aborted;
/**
* Returns current snapshot stage
*
* @return current snapshot stage
*/
public Stage stage() {
return this.stage;
}
/**
* Sets new snapshot stage
*
* @param stage new snapshot stage
*/
public void updateStage(Stage stage) {
this.stage = stage;
}
/**
* Returns snapshot start time
*
* @return snapshot start time
*/
public long startTime() {
return this.startTime;
}
/**
* Sets snapshot start time
*
* @param startTime snapshot start time
*/
public void startTime(long startTime) {
this.startTime = startTime;
}
/**
* Returns snapshot processing time
*
* @return processing time
*/
public long time() {
return this.time;
}
/**
* Sets snapshot processing time
*
* @param time snapshot processing time
*/
public void time(long time) {
this.time = time;
}
/**
* Returns true if snapshot process was aborted
*
* @return true if snapshot process was aborted
*/
public boolean aborted() {
return this.aborted;
}
/**
* Marks snapshot as aborted
*/
public void abort() {
this.aborted = true;
}
/**
* Sets files stats
*
* @param numberOfFiles number of files in this snapshot
* @param totalSize total size of files in this snapshot
*/
public void files(int numberOfFiles, long totalSize) {
this.numberOfFiles = numberOfFiles;
this.totalSize = totalSize;
}
/**
* Number of files
*
* @return number of files
*/
public int numberOfFiles() {
return numberOfFiles;
}
/**
* Total snapshot size
*
* @return snapshot size
*/
public long totalSize() {
return totalSize;
}
/**
* Sets index version
*
* @param indexVersion index version
*/
public void indexVersion(long indexVersion) {
this.indexVersion = indexVersion;
}
/**
* Returns index version
*
* @return index version
*/
public long indexVersion() {
return indexVersion;
}
}
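As a usage sketch (not part of the commit), a caller driving a shard snapshot might move the status object through its stages as shown below; the file counts, sizes and version are placeholders.

import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;

// Illustrative lifecycle of the status object above.
public class SnapshotStatusExample {
    public static void main(String[] args) {
        IndexShardSnapshotStatus status = new IndexShardSnapshotStatus();
        status.startTime(System.currentTimeMillis());
        status.updateStage(IndexShardSnapshotStatus.Stage.STARTED);
        status.files(12, 34567890L);            // placeholder: 12 files, ~34 MB copied
        status.indexVersion(42L);               // placeholder: commit point generation
        if (!status.aborted()) {                // a copier would poll this between files
            status.updateStage(IndexShardSnapshotStatus.Stage.FINALIZE);
        }
        status.time(System.currentTimeMillis() - status.startTime());
        status.updateStage(IndexShardSnapshotStatus.Stage.DONE);
        System.out.println(status.stage() + " in " + status.time() + "ms");
    }
}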

View File

@ -0,0 +1,723 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots.blobstore;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.util.IOUtils;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.blobstore.*;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.BytesStreamInput;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.store.InputStreamIndexInput;
import org.elasticsearch.common.lucene.store.ThreadSafeInputStreamIndexInput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.xcontent.*;
import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;
import org.elasticsearch.index.gateway.RecoveryStatus;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.snapshots.*;
import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;
import org.elasticsearch.index.store.Store;
import org.elasticsearch.index.store.StoreFileMetaData;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.repositories.RepositoryName;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import static com.google.common.collect.Lists.newArrayList;
/**
* Blob store based implementation of IndexShardRepository
*/
public class BlobStoreIndexShardRepository extends AbstractComponent implements IndexShardRepository {
private BlobStore blobStore;
private BlobPath basePath;
private final String repositoryName;
private ByteSizeValue chunkSize;
private final IndicesService indicesService;
private static final String SNAPSHOT_PREFIX = "snapshot-";
@Inject
BlobStoreIndexShardRepository(Settings settings, RepositoryName repositoryName, IndicesService indicesService) {
super(settings);
this.repositoryName = repositoryName.name();
this.indicesService = indicesService;
}
/**
* Called by {@link org.elasticsearch.repositories.blobstore.BlobStoreRepository} on repository startup
*
* @param blobStore blob store
* @param basePath base path to blob store
* @param chunkSize chunk size
*/
public void initialize(BlobStore blobStore, BlobPath basePath, ByteSizeValue chunkSize) {
this.blobStore = blobStore;
this.basePath = basePath;
this.chunkSize = chunkSize;
}
/**
* {@inheritDoc}
*/
@Override
public void snapshot(SnapshotId snapshotId, ShardId shardId, SnapshotIndexCommit snapshotIndexCommit, IndexShardSnapshotStatus snapshotStatus) {
SnapshotContext snapshotContext = new SnapshotContext(snapshotId, shardId, snapshotStatus);
snapshotStatus.startTime(System.currentTimeMillis());
try {
snapshotContext.snapshot(snapshotIndexCommit);
snapshotStatus.time(System.currentTimeMillis() - snapshotStatus.startTime());
snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.DONE);
} catch (Throwable e) {
snapshotStatus.time(System.currentTimeMillis() - snapshotStatus.startTime());
snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.FAILURE);
if (e instanceof IndexShardSnapshotFailedException) {
throw (IndexShardSnapshotFailedException) e;
} else {
throw new IndexShardSnapshotFailedException(shardId, e.getMessage(), e);
}
}
}
/**
* {@inheritDoc}
*/
@Override
public void restore(SnapshotId snapshotId, ShardId shardId, ShardId snapshotShardId, RecoveryStatus recoveryStatus) {
RestoreContext snapshotContext = new RestoreContext(snapshotId, shardId, snapshotShardId, recoveryStatus);
try {
recoveryStatus.index().startTime(System.currentTimeMillis());
snapshotContext.restore();
recoveryStatus.index().time(System.currentTimeMillis() - recoveryStatus.index().startTime());
} catch (Throwable e) {
throw new IndexShardRestoreFailedException(shardId, "failed to restore snapshot [" + snapshotId.getSnapshot() + "]", e);
}
}
/**
* Delete shard snapshot
*
* @param snapshotId snapshot id
* @param shardId shard id
*/
public void delete(SnapshotId snapshotId, ShardId shardId) {
Context context = new Context(snapshotId, shardId, shardId);
context.delete();
}
@Override
public String toString() {
return "BlobStoreIndexShardRepository[" +
"[" + repositoryName +
"], [" + blobStore + ']' +
']';
}
/**
* Returns shard snapshot metadata file name
*
* @param snapshotId snapshot id
* @return shard snapshot metadata file name
*/
private String snapshotBlobName(SnapshotId snapshotId) {
return SNAPSHOT_PREFIX + snapshotId.getSnapshot();
}
/**
* Serializes snapshot to JSON
*
* @param snapshot snapshot
* @return JSON representation of the snapshot
* @throws IOException
*/
public static byte[] writeSnapshot(BlobStoreIndexShardSnapshot snapshot) throws IOException {
XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON).prettyPrint();
BlobStoreIndexShardSnapshot.toXContent(snapshot, builder, ToXContent.EMPTY_PARAMS);
return builder.bytes().toBytes();
}
/**
* Parses JSON representation of a snapshot
*
* @param data JSON
* @return snapshot
* @throws IOException
*/
public static BlobStoreIndexShardSnapshot readSnapshot(byte[] data) throws IOException {
XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(data);
try {
parser.nextToken();
return BlobStoreIndexShardSnapshot.fromXContent(parser);
} finally {
parser.close();
}
}
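// Illustrative round trip (not part of the commit): the "snapshot-<name>" blob written during
// finalization is just this JSON document, so it can be re-read and re-serialized with the two
// helpers above, e.g.:
//   BlobStoreIndexShardSnapshot s = readSnapshot(blobContainer.readBlobFully("snapshot-snapshot_1"));
//   byte[] bytes = writeSnapshot(s);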
/**
* Context for snapshot/restore operations
*/
private class Context {
protected final SnapshotId snapshotId;
protected final ShardId shardId;
protected final ImmutableBlobContainer blobContainer;
public Context(SnapshotId snapshotId, ShardId shardId) {
this(snapshotId, shardId, shardId);
}
public Context(SnapshotId snapshotId, ShardId shardId, ShardId snapshotShardId) {
this.snapshotId = snapshotId;
this.shardId = shardId;
blobContainer = blobStore.immutableBlobContainer(basePath.add("indices").add(snapshotShardId.getIndex()).add(Integer.toString(snapshotShardId.getId())));
}
/**
* Delete shard snapshot
*/
public void delete() {
final ImmutableMap<String, BlobMetaData> blobs;
try {
blobs = blobContainer.listBlobs();
} catch (IOException e) {
throw new IndexShardSnapshotException(shardId, "Failed to list content of gateway", e);
}
BlobStoreIndexShardSnapshots snapshots = buildBlobStoreIndexShardSnapshots(blobs);
String commitPointName = snapshotBlobName(snapshotId);
try {
blobContainer.deleteBlob(commitPointName);
} catch (IOException e) {
logger.debug("[{}] [{}] failed to delete shard snapshot file", shardId, snapshotId);
}
// delete all files that are not referenced by any commit point
// build a new BlobStoreIndexShardSnapshot, that includes this one and all the saved ones
List<BlobStoreIndexShardSnapshot> newSnapshotsList = Lists.newArrayList();
for (BlobStoreIndexShardSnapshot point : snapshots) {
if (!point.snapshot().equals(snapshotId.getSnapshot())) {
newSnapshotsList.add(point);
}
}
cleanup(newSnapshotsList, blobs);
}
/**
* Removes all unreferenced files from the repository
*
* @param snapshots list of active snapshots in the container
* @param blobs list of blobs in the container
*/
protected void cleanup(List<BlobStoreIndexShardSnapshot> snapshots, ImmutableMap<String, BlobMetaData> blobs) {
BlobStoreIndexShardSnapshots newSnapshots = new BlobStoreIndexShardSnapshots(snapshots);
// now go over all the blobs, and if they don't exist in any snapshot, delete them
for (String blobName : blobs.keySet()) {
if (!blobName.startsWith("__")) {
continue;
}
if (newSnapshots.findNameFile(FileInfo.canonicalName(blobName)) == null) {
try {
blobContainer.deleteBlob(blobName);
} catch (IOException e) {
logger.debug("[{}] [{}] error deleting blob [{}] during cleanup", e, snapshotId, shardId, blobName);
}
}
}
}
/**
* Generates blob name
*
* @param generation the blob number
* @return the blob name
*/
protected String fileNameFromGeneration(long generation) {
return "__" + Long.toString(generation, Character.MAX_RADIX);
}
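// Illustrative: generations are rendered in base 36 (Character.MAX_RADIX), so generation 0 becomes "__0",
// 35 becomes "__z" and 36 becomes "__10"; findLatestFileNameGeneration() below parses that suffix back
// with the same radix.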
/**
* Finds the latest blob name generation currently in use
*
* @param blobs list of blobs in the repository
* @return the latest generation number, or -1 if the repository contains no numbered blobs
*/
protected long findLatestFileNameGeneration(ImmutableMap<String, BlobMetaData> blobs) {
long generation = -1;
for (String name : blobs.keySet()) {
if (!name.startsWith("__")) {
continue;
}
name = FileInfo.canonicalName(name);
try {
long currentGen = Long.parseLong(name.substring(2) /*__*/, Character.MAX_RADIX);
if (currentGen > generation) {
generation = currentGen;
}
} catch (NumberFormatException e) {
logger.warn("file [{}] does not conform to the '__' schema");
}
}
return generation;
}
/**
* Loads all available snapshots in the repository
*
* @param blobs list of blobs in repository
* @return BlobStoreIndexShardSnapshots
*/
protected BlobStoreIndexShardSnapshots buildBlobStoreIndexShardSnapshots(ImmutableMap<String, BlobMetaData> blobs) {
List<BlobStoreIndexShardSnapshot> snapshots = Lists.newArrayList();
for (String name : blobs.keySet()) {
if (name.startsWith(SNAPSHOT_PREFIX)) {
try {
snapshots.add(readSnapshot(blobContainer.readBlobFully(name)));
} catch (IOException e) {
logger.warn("failed to read commit point [{}]", e, name);
}
}
}
return new BlobStoreIndexShardSnapshots(snapshots);
}
}
/**
* Context for snapshot operations
*/
private class SnapshotContext extends Context {
private final Store store;
private final IndexShardSnapshotStatus snapshotStatus;
/**
* Constructs new context
*
* @param snapshotId snapshot id
* @param shardId shard to be snapshotted
* @param snapshotStatus snapshot status to report progress
*/
public SnapshotContext(SnapshotId snapshotId, ShardId shardId, IndexShardSnapshotStatus snapshotStatus) {
super(snapshotId, shardId);
store = indicesService.indexServiceSafe(shardId.getIndex()).shardInjectorSafe(shardId.id()).getInstance(Store.class);
this.snapshotStatus = snapshotStatus;
}
/**
* Create snapshot from index commit point
*
* @param snapshotIndexCommit
*/
public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {
logger.debug("[{}] [{}] snapshot to [{}] ...", shardId, snapshotId, repositoryName);
final ImmutableMap<String, BlobMetaData> blobs;
try {
blobs = blobContainer.listBlobs();
} catch (IOException e) {
throw new IndexShardSnapshotFailedException(shardId, "failed to list blobs", e);
}
long generation = findLatestFileNameGeneration(blobs);
BlobStoreIndexShardSnapshots snapshots = buildBlobStoreIndexShardSnapshots(blobs);
snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.STARTED);
final CountDownLatch indexLatch = new CountDownLatch(snapshotIndexCommit.getFiles().length);
final CopyOnWriteArrayList<Throwable> failures = new CopyOnWriteArrayList<Throwable>();
final List<BlobStoreIndexShardSnapshot.FileInfo> indexCommitPointFiles = newArrayList();
int indexNumberOfFiles = 0;
long indexTotalFilesSize = 0;
for (String fileName : snapshotIndexCommit.getFiles()) {
if (snapshotStatus.aborted()) {
logger.debug("[{}] [{}] Aborted on the file [{}], exiting", shardId, snapshotId, fileName);
throw new IndexShardSnapshotFailedException(shardId, "Aborted");
}
logger.trace("[{}] [{}] Processing [{}]", shardId, snapshotId, fileName);
final StoreFileMetaData md;
try {
md = store.metaData(fileName);
} catch (IOException e) {
throw new IndexShardSnapshotFailedException(shardId, "Failed to get store file metadata", e);
}
boolean snapshotRequired = false;
// TODO: For now segment files are copied on each commit because segment files don't have checksum
// if (snapshot.indexChanged() && fileName.equals(snapshotIndexCommit.getSegmentsFileName())) {
// snapshotRequired = true; // we want to always snapshot the segment file if the index changed
// }
BlobStoreIndexShardSnapshot.FileInfo fileInfo = snapshots.findPhysicalIndexFile(fileName);
if (fileInfo == null || !fileInfo.isSame(md) || !snapshotFileExistsInBlobs(fileInfo, blobs)) {
// commit point file does not exist in any commit point, or has a different length, or does not fully exist in the listed blobs
snapshotRequired = true;
}
if (snapshotRequired) {
indexNumberOfFiles++;
indexTotalFilesSize += md.length();
// create a new FileInfo
try {
BlobStoreIndexShardSnapshot.FileInfo snapshotFileInfo = new BlobStoreIndexShardSnapshot.FileInfo(fileNameFromGeneration(++generation), fileName, md.length(), chunkSize, md.checksum());
indexCommitPointFiles.add(snapshotFileInfo);
snapshotFile(snapshotFileInfo, indexLatch, failures);
} catch (IOException e) {
failures.add(e);
}
} else {
indexCommitPointFiles.add(fileInfo);
indexLatch.countDown();
}
}
snapshotStatus.files(indexNumberOfFiles, indexTotalFilesSize);
snapshotStatus.indexVersion(snapshotIndexCommit.getGeneration());
try {
indexLatch.await();
} catch (InterruptedException e) {
failures.add(e);
Thread.currentThread().interrupt();
}
if (!failures.isEmpty()) {
throw new IndexShardSnapshotFailedException(shardId, "Failed to perform snapshot (index files)", failures.get(0));
}
// now create and write the commit point
snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.FINALIZE);
String commitPointName = snapshotBlobName(snapshotId);
BlobStoreIndexShardSnapshot snapshot = new BlobStoreIndexShardSnapshot(snapshotId.getSnapshot(), snapshotIndexCommit.getGeneration(), indexCommitPointFiles);
try {
byte[] snapshotData = writeSnapshot(snapshot);
logger.trace("[{}] [{}] writing shard snapshot file", shardId, snapshotId);
blobContainer.writeBlob(commitPointName, new BytesStreamInput(snapshotData, false), snapshotData.length);
} catch (IOException e) {
throw new IndexShardSnapshotFailedException(shardId, "Failed to write commit point", e);
}
// delete all files that are not referenced by any commit point
// build a new BlobStoreIndexShardSnapshot, that includes this one and all the saved ones
List<BlobStoreIndexShardSnapshot> newSnapshotsList = Lists.newArrayList();
newSnapshotsList.add(snapshot);
for (BlobStoreIndexShardSnapshot point : snapshots) {
newSnapshotsList.add(point);
}
cleanup(newSnapshotsList, blobs);
snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.DONE);
}
/**
* Snapshot individual file
* <p/>
* This is an asynchronous method. Upon completion of the operation the latch is counted down and any failures are
* added to the {@code failures} list
*
* @param fileInfo file to be snapshotted
* @param latch latch that should be counted down once the file is snapshotted
* @param failures thread-safe list of failures
* @throws IOException
*/
private void snapshotFile(final BlobStoreIndexShardSnapshot.FileInfo fileInfo, final CountDownLatch latch, final List<Throwable> failures) throws IOException {
final AtomicLong counter = new AtomicLong(fileInfo.numberOfParts());
for (long i = 0; i < fileInfo.numberOfParts(); i++) {
IndexInput indexInput = null;
try {
indexInput = store.openInputRaw(fileInfo.physicalName(), IOContext.READONCE);
indexInput.seek(i * fileInfo.partBytes());
InputStreamIndexInput is = new ThreadSafeInputStreamIndexInput(indexInput, fileInfo.partBytes());
final IndexInput fIndexInput = indexInput;
blobContainer.writeBlob(fileInfo.partName(i), is, is.actualSizeToRead(), new ImmutableBlobContainer.WriterListener() {
@Override
public void onCompleted() {
IOUtils.closeWhileHandlingException(fIndexInput);
if (counter.decrementAndGet() == 0) {
latch.countDown();
}
}
@Override
public void onFailure(Throwable t) {
IOUtils.closeWhileHandlingException(fIndexInput);
failures.add(t);
if (counter.decrementAndGet() == 0) {
latch.countDown();
}
}
});
} catch (Throwable e) {
IOUtils.closeWhileHandlingException(indexInput);
failures.add(e);
latch.countDown();
}
}
}
/**
* Checks if snapshot file already exists in the list of blobs
*
* @param fileInfo file to check
* @param blobs list of blobs
* @return true if file exists in the list of blobs
*/
private boolean snapshotFileExistsInBlobs(BlobStoreIndexShardSnapshot.FileInfo fileInfo, ImmutableMap<String, BlobMetaData> blobs) {
BlobMetaData blobMetaData = blobs.get(fileInfo.name());
if (blobMetaData != null) {
return blobMetaData.length() == fileInfo.length();
} else if (blobs.containsKey(fileInfo.partName(0))) {
// multi part file sum up the size and check
int part = 0;
long totalSize = 0;
while (true) {
blobMetaData = blobs.get(fileInfo.partName(part++));
if (blobMetaData == null) {
break;
}
totalSize += blobMetaData.length();
}
return totalSize == fileInfo.length();
}
// no file, not exact and not multipart
return false;
}
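// Illustrative example: with chunk_size=100mb a 250mb file stored as "__7" is split into
// "__7.part0", "__7.part1" and "__7.part2"; the loop above sums 100mb + 100mb + 50mb and the
// file only counts as present when that total equals fileInfo.length().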
}
/**
* Context for restore operations
*/
private class RestoreContext extends Context {
private final Store store;
private final RecoveryStatus recoveryStatus;
/**
* Constructs new restore context
*
* @param snapshotId snapshot id
* @param shardId shard to be restored
* @param snapshotShardId shard in the snapshot that data should be restored from
* @param recoveryStatus recovery status to report progress
*/
public RestoreContext(SnapshotId snapshotId, ShardId shardId, ShardId snapshotShardId, RecoveryStatus recoveryStatus) {
super(snapshotId, shardId, snapshotShardId);
store = indicesService.indexServiceSafe(shardId.getIndex()).shardInjectorSafe(shardId.id()).getInstance(Store.class);
this.recoveryStatus = recoveryStatus;
}
/**
* Performs restore operation
*/
public void restore() {
logger.debug("[{}] [{}] restoring to [{}] ...", snapshotId, repositoryName, shardId);
BlobStoreIndexShardSnapshot snapshot;
try {
snapshot = readSnapshot(blobContainer.readBlobFully(snapshotBlobName(snapshotId)));
} catch (IOException ex) {
throw new IndexShardRestoreFailedException(shardId, "failed to read shard snapshot file", ex);
}
recoveryStatus.updateStage(RecoveryStatus.Stage.INDEX);
int numberOfFiles = 0;
long totalSize = 0;
int numberOfReusedFiles = 0;
long reusedTotalSize = 0;
List<FileInfo> filesToRecover = Lists.newArrayList();
for (FileInfo fileInfo : snapshot.indexFiles()) {
String fileName = fileInfo.physicalName();
StoreFileMetaData md = null;
try {
md = store.metaData(fileName);
} catch (IOException e) {
// no file
}
numberOfFiles++;
// we don't compute checksum for segments, so always recover them
if (!fileName.startsWith("segments") && md != null && fileInfo.isSame(md)) {
totalSize += md.length();
numberOfReusedFiles++;
reusedTotalSize += md.length();
if (logger.isTraceEnabled()) {
logger.trace("not_recovering [{}], exists in local store and is same", fileInfo.physicalName());
}
} else {
totalSize += fileInfo.length();
filesToRecover.add(fileInfo);
if (logger.isTraceEnabled()) {
if (md == null) {
logger.trace("recovering [{}], does not exists in local store", fileInfo.physicalName());
} else {
logger.trace("recovering [{}], exists in local store but is different", fileInfo.physicalName());
}
}
}
}
recoveryStatus.index().files(numberOfFiles, totalSize, numberOfReusedFiles, reusedTotalSize);
if (filesToRecover.isEmpty()) {
logger.trace("no files to recover, all exists within the local store");
}
if (logger.isTraceEnabled()) {
logger.trace("[{}] [{}] recovering_files [{}] with total_size [{}], reusing_files [{}] with reused_size [{}]", shardId, snapshotId, numberOfFiles, new ByteSizeValue(totalSize), numberOfReusedFiles, new ByteSizeValue(reusedTotalSize));
}
final CountDownLatch latch = new CountDownLatch(filesToRecover.size());
final CopyOnWriteArrayList<Throwable> failures = new CopyOnWriteArrayList<Throwable>();
for (final FileInfo fileToRecover : filesToRecover) {
logger.trace("[{}] [{}] restoring file [{}]", shardId, snapshotId, fileToRecover.name());
restoreFile(fileToRecover, latch, failures);
}
try {
latch.await();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
if (!failures.isEmpty()) {
throw new IndexShardRestoreFailedException(shardId, "Failed to recover index", failures.get(0));
}
// read the snapshot data persisted
long version = -1;
try {
if (Lucene.indexExists(store.directory())) {
version = Lucene.readSegmentInfos(store.directory()).getVersion();
}
} catch (IOException e) {
throw new IndexShardRestoreFailedException(shardId, "Failed to fetch index version after copying it over", e);
}
recoveryStatus.index().updateVersion(version);
/// now, go over and clean files that are in the store, but were not in the snapshot
try {
for (String storeFile : store.directory().listAll()) {
if (!snapshot.containPhysicalIndexFile(storeFile)) {
try {
store.directory().deleteFile(storeFile);
} catch (IOException e) {
// ignore
}
}
}
} catch (IOException e) {
// ignore
}
}
/**
* Restores a file
* This is an asynchronous method. Upon completion of the operation the latch is counted down and any failures are
* added to the {@code failures} list
*
* @param fileInfo file to be restored
* @param latch latch that should be counted down once the file is restored
* @param failures thread-safe list of failures
*/
private void restoreFile(final FileInfo fileInfo, final CountDownLatch latch, final List<Throwable> failures) {
final IndexOutput indexOutput;
try {
// we create an output with no checksum, this is because the pure binary data of the file is not
// the checksum (because of seek). We will create the checksum file once copying is done
indexOutput = store.createOutputRaw(fileInfo.physicalName());
} catch (IOException e) {
failures.add(e);
latch.countDown();
return;
}
String firstFileToRecover = fileInfo.partName(0);
final AtomicInteger partIndex = new AtomicInteger();
blobContainer.readBlob(firstFileToRecover, new BlobContainer.ReadBlobListener() {
@Override
public synchronized void onPartial(byte[] data, int offset, int size) throws IOException {
recoveryStatus.index().addCurrentFilesSize(size);
indexOutput.writeBytes(data, offset, size);
}
@Override
public synchronized void onCompleted() {
int part = partIndex.incrementAndGet();
if (part < fileInfo.numberOfParts()) {
String partName = fileInfo.partName(part);
// continue with the new part
blobContainer.readBlob(partName, this);
return;
} else {
// we are done...
try {
indexOutput.close();
// write the checksum
if (fileInfo.checksum() != null) {
store.writeChecksum(fileInfo.physicalName(), fileInfo.checksum());
}
store.directory().sync(Collections.singleton(fileInfo.physicalName()));
} catch (IOException e) {
onFailure(e);
return;
}
}
latch.countDown();
}
@Override
public void onFailure(Throwable t) {
failures.add(t);
latch.countDown();
}
});
}
}
}

View File

@ -0,0 +1,413 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots.blobstore;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchParseException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.store.StoreFileMetaData;
import java.io.IOException;
import java.util.List;
import static com.google.common.collect.Lists.newArrayList;
/**
* Shard snapshot metadata
*/
public class BlobStoreIndexShardSnapshot {
/**
* Information about snapshotted file
*/
public static class FileInfo {
private final String name;
private final String physicalName;
private final long length;
private final String checksum;
private final ByteSizeValue partSize;
private final long partBytes;
private final long numberOfParts;
/**
* Constructs a new instance of file info
*
* @param name file name as stored in the blob store
* @param physicalName original file name
* @param length total length of the file
* @param partSize size of the single chunk
* @param checksum checksum for the file
*/
public FileInfo(String name, String physicalName, long length, ByteSizeValue partSize, String checksum) {
this.name = name;
this.physicalName = physicalName;
this.length = length;
this.checksum = checksum;
long partBytes = Long.MAX_VALUE;
if (partSize != null) {
partBytes = partSize.bytes();
}
long totalLength = length;
long numberOfParts = totalLength / partBytes;
if (totalLength % partBytes > 0) {
numberOfParts++;
}
if (numberOfParts == 0) {
numberOfParts++;
}
this.numberOfParts = numberOfParts;
this.partSize = partSize;
this.partBytes = partBytes;
}
/**
* Returns the base file name
*
* @return file name
*/
public String name() {
return name;
}
/**
* Returns part name if file is stored as multiple parts
*
* @param part part number
* @return part name
*/
public String partName(long part) {
if (numberOfParts > 1) {
return name + ".part" + part;
} else {
return name;
}
}
/**
* Returns base file name from part name
*
* @param blobName part name
* @return base file name
*/
public static String canonicalName(String blobName) {
if (blobName.contains(".part")) {
return blobName.substring(0, blobName.indexOf(".part"));
}
return blobName;
}
/**
* Returns original file name
*
* @return original file name
*/
public String physicalName() {
return this.physicalName;
}
/**
* File length
*
* @return file length
*/
public long length() {
return length;
}
/**
* Returns part size
*
* @return part size
*/
public ByteSizeValue partSize() {
return partSize;
}
/**
* Returns the maximum number of bytes in a part
*
* @return maximum number of bytes in a part
*/
public long partBytes() {
return partBytes;
}
/**
* Returns number of parts
*
* @return number of parts
*/
public long numberOfParts() {
return numberOfParts;
}
/**
* Returns file md5 checksum provided by {@link org.elasticsearch.index.store.Store}
*
* @return file checksum
*/
@Nullable
public String checksum() {
return checksum;
}
/**
* Checks if a file in the store is the same as this file
*
* @param md metadata of the file in the store
* @return true if the file in the store and this file have the same checksum and length
*/
public boolean isSame(StoreFileMetaData md) {
if (checksum == null || md.checksum() == null) {
return false;
}
return length == md.length() && checksum.equals(md.checksum());
}
static final class Fields {
static final XContentBuilderString NAME = new XContentBuilderString("name");
static final XContentBuilderString PHYSICAL_NAME = new XContentBuilderString("physical_name");
static final XContentBuilderString LENGTH = new XContentBuilderString("length");
static final XContentBuilderString CHECKSUM = new XContentBuilderString("checksum");
static final XContentBuilderString PART_SIZE = new XContentBuilderString("part_size");
}
/**
* Serializes file info into JSON
*
* @param file file info
* @param builder XContent builder
* @param params parameters
* @throws IOException
*/
public static void toXContent(FileInfo file, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(Fields.NAME, file.name);
builder.field(Fields.PHYSICAL_NAME, file.physicalName);
builder.field(Fields.LENGTH, file.length);
if (file.checksum != null) {
builder.field(Fields.CHECKSUM, file.checksum);
}
if (file.partSize != null) {
builder.field(Fields.PART_SIZE, file.partSize.bytes());
}
builder.endObject();
}
/**
* Parses JSON that represents file info
*
* @param parser parser
* @return file info
* @throws IOException
*/
public static FileInfo fromXContent(XContentParser parser) throws IOException {
XContentParser.Token token = parser.currentToken();
String name = null;
String physicalName = null;
long length = -1;
String checksum = null;
ByteSizeValue partSize = null;
if (token == XContentParser.Token.START_OBJECT) {
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String currentFieldName = parser.currentName();
token = parser.nextToken();
if (token.isValue()) {
if ("name".equals(currentFieldName)) {
name = parser.text();
} else if ("physical_name".equals(currentFieldName)) {
physicalName = parser.text();
} else if ("length".equals(currentFieldName)) {
length = parser.longValue();
} else if ("checksum".equals(currentFieldName)) {
checksum = parser.text();
} else if ("part_size".equals(currentFieldName)) {
partSize = new ByteSizeValue(parser.longValue());
} else {
throw new ElasticSearchParseException("unknown parameter [" + currentFieldName + "]");
}
} else {
throw new ElasticSearchParseException("unexpected token [" + token + "]");
}
} else {
throw new ElasticSearchParseException("unexpected token [" + token + "]");
}
}
}
// TODO: Verify???
return new FileInfo(name, physicalName, length, partSize, checksum);
}
}
private final String snapshot;
private final long indexVersion;
private final ImmutableList<FileInfo> indexFiles;
/**
* Constructs new shard snapshot metadata from snapshot metadata
*
* @param snapshot snapshot id
* @param indexVersion index version
* @param indexFiles list of files in the shard
*/
public BlobStoreIndexShardSnapshot(String snapshot, long indexVersion, List<FileInfo> indexFiles) {
assert snapshot != null;
assert indexVersion >= 0;
this.snapshot = snapshot;
this.indexVersion = indexVersion;
this.indexFiles = ImmutableList.copyOf(indexFiles);
}
/**
* Returns index version
*
* @return index version
*/
public long indexVersion() {
return indexVersion;
}
/**
* Returns snapshot id
*
* @return snapshot id
*/
public String snapshot() {
return snapshot;
}
/**
* Returns list of files in the shard
*
* @return list of files
*/
public ImmutableList<FileInfo> indexFiles() {
return indexFiles;
}
/**
* Serializes shard snapshot metadata info into JSON
*
* @param snapshot shard snapshot metadata
* @param builder XContent builder
* @param params parameters
* @throws IOException
*/
public static void toXContent(BlobStoreIndexShardSnapshot snapshot, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field("name", snapshot.snapshot);
builder.field("index-version", snapshot.indexVersion);
builder.startArray("files");
for (FileInfo fileInfo : snapshot.indexFiles) {
FileInfo.toXContent(fileInfo, builder, params);
}
builder.endArray();
builder.endObject();
}
/**
* Parses shard snapshot metadata
*
* @param parser parser
* @return shard snapshot metadata
* @throws IOException
*/
public static BlobStoreIndexShardSnapshot fromXContent(XContentParser parser) throws IOException {
String snapshot = null;
long indexVersion = -1;
List<FileInfo> indexFiles = newArrayList();
XContentParser.Token token = parser.currentToken();
if (token == XContentParser.Token.START_OBJECT) {
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String currentFieldName = parser.currentName();
token = parser.nextToken();
if (token.isValue()) {
if ("name".equals(currentFieldName)) {
snapshot = parser.text();
} else if ("index-version".equals(currentFieldName)) {
indexVersion = parser.longValue();
} else {
throw new ElasticSearchParseException("unknown parameter [" + currentFieldName + "]");
}
} else if (token == XContentParser.Token.START_ARRAY) {
while ((parser.nextToken()) != XContentParser.Token.END_ARRAY) {
indexFiles.add(FileInfo.fromXContent(parser));
}
} else {
throw new ElasticSearchParseException("unexpected token [" + token + "]");
}
} else {
throw new ElasticSearchParseException("unexpected token [" + token + "]");
}
}
}
return new BlobStoreIndexShardSnapshot(snapshot, indexVersion, ImmutableList.<FileInfo>copyOf(indexFiles));
}
/**
* Returns true if this snapshot contains a file with a given original name
*
* @param physicalName original file name
* @return true if the file was found, false otherwise
*/
public boolean containPhysicalIndexFile(String physicalName) {
return findPhysicalIndexFile(physicalName) != null;
}
/**
* Finds reference to a snapshotted file by its original name
*
* @param physicalName original file name
* @return file info or null if file is not present in the snapshot
*/
public FileInfo findPhysicalIndexFile(String physicalName) {
for (FileInfo file : indexFiles) {
if (file.physicalName().equals(physicalName)) {
return file;
}
}
return null;
}
/**
* Finds reference to a snapshotted file by its snapshot name
*
* @param name file name
* @return file info or null if file is not present in the snapshot
*/
public FileInfo findNameFile(String name) {
for (FileInfo file : indexFiles) {
if (file.name().equals(name)) {
return file;
}
}
return null;
}
}
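A minimal usage sketch of the part-splitting logic above (not part of the commit; the file names, sizes and checksum are hypothetical): a file larger than the configured part size is stored as several blobs named "<name>.part<n>", while a file that fits into a single part keeps its base blob name.
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;

public class FileInfoPartsExample {
    public static void main(String[] args) {
        // 10,000,000 bytes with a 4,000,000 byte part size: two full parts plus a remainder = 3 parts
        FileInfo large = new FileInfo("__1", "_0.cfs", 10000000L, new ByteSizeValue(4000000L), "someChecksum");
        System.out.println(large.numberOfParts()); // 3
        System.out.println(large.partName(0));     // __1.part0
        System.out.println(large.partName(2));     // __1.part2

        // A null partSize means an unlimited chunk size: a single part that keeps the base blob name
        FileInfo small = new FileInfo("__2", "_0.si", 1024L, null, null);
        System.out.println(small.numberOfParts()); // 1
        System.out.println(small.partName(0));     // __2
    }
}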

View File

@ -0,0 +1,86 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.index.snapshots.blobstore;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;
import java.util.Iterator;
import java.util.List;
/**
* Contains information about all snapshots for the given shard in a repository
* <p/>
* This class is used to find files that were already snapshotted and to clear out files that are no longer referenced by any
* snapshots
*/
public class BlobStoreIndexShardSnapshots implements Iterable<BlobStoreIndexShardSnapshot> {
private final ImmutableList<BlobStoreIndexShardSnapshot> shardSnapshots;
public BlobStoreIndexShardSnapshots(List<BlobStoreIndexShardSnapshot> shardSnapshots) {
this.shardSnapshots = ImmutableList.copyOf(shardSnapshots);
}
/**
* Returns list of snapshots
*
* @return list of snapshots
*/
public ImmutableList<BlobStoreIndexShardSnapshot> snapshots() {
return this.shardSnapshots;
}
/**
* Finds reference to a snapshotted file by its original name
*
* @param physicalName original name
* @return file info or null if file is not present in any of snapshots
*/
public FileInfo findPhysicalIndexFile(String physicalName) {
for (BlobStoreIndexShardSnapshot snapshot : shardSnapshots) {
FileInfo fileInfo = snapshot.findPhysicalIndexFile(physicalName);
if (fileInfo != null) {
return fileInfo;
}
}
return null;
}
/**
* Finds reference to a snapshotted file by its snapshot name
*
* @param name file name
* @return file info or null if file is not present in any of snapshots
*/
public FileInfo findNameFile(String name) {
for (BlobStoreIndexShardSnapshot snapshot : shardSnapshots) {
FileInfo fileInfo = snapshot.findNameFile(name);
if (fileInfo != null) {
return fileInfo;
}
}
return null;
}
@Override
public Iterator<BlobStoreIndexShardSnapshot> iterator() {
return shardSnapshots.iterator();
}
}
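A hedged sketch (not part of the commit) of how these lookups support incremental snapshots: before copying a store file into the repository, the snapshotting code can check whether an identical file, by checksum and length, is already referenced by an earlier snapshot of the shard. The helper class below is hypothetical and only isolates that lookup.
import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;
import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshots;
import org.elasticsearch.index.store.StoreFileMetaData;

public class IncrementalSnapshotCheck {
    /**
     * Returns the already-snapshotted FileInfo that matches the given store file by checksum
     * and length, or null if the file has to be copied into the repository again.
     */
    public static FileInfo reusableFile(BlobStoreIndexShardSnapshots snapshots, StoreFileMetaData md) {
        FileInfo existing = snapshots.findPhysicalIndexFile(md.name());
        if (existing != null && existing.isSame(md)) {
            return existing;
        }
        return null;
    }
}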

View File

@ -287,10 +287,9 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent<Indic
// now, go over and delete shards that needs to get deleted
newShardIds.clear();
List<MutableShardRouting> shards = routingNode.shards();
for (int i = 0; i < shards.size(); i++) {
ShardRouting shardRouting = shards.get(i);
if (shardRouting.index().equals(index)) {
newShardIds.add(shardRouting.id());
for (MutableShardRouting shard : shards) {
if (shard.index().equals(index)) {
newShardIds.add(shard.id());
}
}
for (Integer existingShardId : indexService.shardIds()) {

View File

@ -78,6 +78,7 @@ import org.elasticsearch.percolator.PercolatorModule;
import org.elasticsearch.percolator.PercolatorService;
import org.elasticsearch.plugins.PluginsModule;
import org.elasticsearch.plugins.PluginsService;
import org.elasticsearch.repositories.RepositoriesModule;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestModule;
import org.elasticsearch.river.RiversManager;
@ -173,6 +174,7 @@ public final class InternalNode implements Node {
modules.add(new ShapeModule());
modules.add(new PercolatorModule());
modules.add(new ResourceWatcherModule());
modules.add(new RepositoriesModule());
injector = modules.createInjector();

View File

@ -0,0 +1,67 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Maps;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.repositories.fs.FsRepository;
import org.elasticsearch.repositories.fs.FsRepositoryModule;
import org.elasticsearch.repositories.uri.URLRepository;
import org.elasticsearch.repositories.uri.URLRepositoryModule;
import org.elasticsearch.snapshots.RestoreService;
import org.elasticsearch.snapshots.SnapshotsService;
import java.util.Map;
/**
* Module responsible for registering repository types.
* <p/>
* Repositories implemented as plugins should implement an {@code onModule(RepositoriesModule module)} method, in which
* they should register their repository type using the {@link #registerRepository(String, Class)} method.
*/
public class RepositoriesModule extends AbstractModule {
private Map<String, Class<? extends Module>> repositoryTypes = Maps.newHashMap();
public RepositoriesModule() {
registerRepository(FsRepository.TYPE, FsRepositoryModule.class);
registerRepository(URLRepository.TYPE, URLRepositoryModule.class);
}
/**
* Registers a custom repository type name against a module.
*
* @param type The type
* @param module The module
*/
public void registerRepository(String type, Class<? extends Module> module) {
repositoryTypes.put(type, module);
}
@Override
protected void configure() {
bind(RepositoriesService.class).asEagerSingleton();
bind(SnapshotsService.class).asEagerSingleton();
bind(RestoreService.class).asEagerSingleton();
bind(RepositoryTypesRegistry.class).toInstance(new RepositoryTypesRegistry(ImmutableMap.copyOf(repositoryTypes)));
}
}
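A hypothetical plugin sketch (not part of the commit) of the registration pattern described in the javadoc above; MyRepositoryPlugin, MyRepositoryModule and the "my" type name are made up, and the onModule(...) hook is assumed to be the standard plugin callback mechanism.
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.plugins.AbstractPlugin;
import org.elasticsearch.repositories.RepositoriesModule;

public class MyRepositoryPlugin extends AbstractPlugin {
    @Override
    public String name() {
        return "my-repository";
    }

    @Override
    public String description() {
        return "Registers the hypothetical \"my\" repository type";
    }

    // Invoked by the plugin service for each module being constructed;
    // register the custom repository type against its module here.
    public void onModule(RepositoriesModule module) {
        module.registerRepository("my", MyRepositoryModule.class);
    }

    // Module that would bind the Repository and IndexShardRepository implementations for the type.
    public static class MyRepositoryModule extends AbstractModule {
        @Override
        protected void configure() {
            // bind(Repository.class).to(MyRepository.class).asEagerSingleton(); // hypothetical binding
        }
    }
}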

View File

@ -0,0 +1,507 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
import org.elasticsearch.cluster.*;
import org.elasticsearch.cluster.ack.ClusterStateUpdateRequest;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.RepositoriesMetaData;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.Injector;
import org.elasticsearch.common.inject.Injectors;
import org.elasticsearch.common.inject.ModulesBuilder;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.snapshots.IndexShardRepository;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import static com.google.common.collect.Maps.newHashMap;
import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;
import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;
/**
* Service responsible for maintaining and providing access to snapshot repositories on nodes.
*/
public class RepositoriesService extends AbstractComponent implements ClusterStateListener {
private final RepositoryTypesRegistry typesRegistry;
private final Injector injector;
private final ClusterService clusterService;
private volatile ImmutableMap<String, RepositoryHolder> repositories = ImmutableMap.of();
@Inject
public RepositoriesService(Settings settings, ClusterService clusterService, RepositoryTypesRegistry typesRegistry, Injector injector) {
super(settings);
this.typesRegistry = typesRegistry;
this.injector = injector;
this.clusterService = clusterService;
// Doesn't make sense to maintain repositories on non-master and non-data nodes
// Nothing happens there anyway
if (DiscoveryNode.dataNode(settings) || DiscoveryNode.masterNode(settings)) {
clusterService.add(this);
}
}
/**
* Registers new repository in the cluster
* <p/>
* This method can be only called on the master node. It tries to create a new repository on the master
* and, if successful, adds the new repository to the cluster metadata.
*
* @param request register repository request
* @param listener register repository listener
*/
public void registerRepository(final RegisterRepositoryRequest request, final ActionListener<RegisterRepositoryResponse> listener) {
final RepositoryMetaData newRepositoryMetaData = new RepositoryMetaData(request.name, request.type, request.settings);
clusterService.submitStateUpdateTask(request.cause, new AckedClusterStateUpdateTask() {
@Override
public ClusterState execute(ClusterState currentState) {
// Trying to create the new repository on master to make sure it works
if (!registerRepository(newRepositoryMetaData)) {
// The new repository has the same settings as the old one - ignore
return currentState;
}
MetaData metaData = currentState.metaData();
MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
RepositoriesMetaData repositories = metaData.custom(RepositoriesMetaData.TYPE);
if (repositories == null) {
logger.info("put repository [{}]", request.name);
repositories = new RepositoriesMetaData(new RepositoryMetaData(request.name, request.type, request.settings));
} else {
boolean found = false;
List<RepositoryMetaData> repositoriesMetaData = new ArrayList<RepositoryMetaData>(repositories.repositories().size() + 1);
for (RepositoryMetaData repositoryMetaData : repositories.repositories()) {
if (repositoryMetaData.name().equals(newRepositoryMetaData.name())) {
found = true;
repositoriesMetaData.add(newRepositoryMetaData);
} else {
repositoriesMetaData.add(repositoryMetaData);
}
}
if (!found) {
logger.info("put repository [{}]", request.name);
repositoriesMetaData.add(new RepositoryMetaData(request.name, request.type, request.settings));
} else {
logger.info("update repository [{}]", request.name);
}
repositories = new RepositoriesMetaData(repositoriesMetaData.toArray(new RepositoryMetaData[repositoriesMetaData.size()]));
}
mdBuilder.putCustom(RepositoriesMetaData.TYPE, repositories);
return ClusterState.builder(currentState).metaData(mdBuilder).build();
}
@Override
public void onFailure(String source, Throwable t) {
logger.warn("failed to create repository [{}]", t, request.name);
listener.onFailure(t);
}
@Override
public TimeValue timeout() {
return request.masterNodeTimeout;
}
@Override
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
}
@Override
public boolean mustAck(DiscoveryNode discoveryNode) {
return discoveryNode.masterNode();
}
@Override
public void onAllNodesAcked(@Nullable Throwable t) {
listener.onResponse(new RegisterRepositoryResponse(true));
}
@Override
public void onAckTimeout() {
listener.onResponse(new RegisterRepositoryResponse(false));
}
@Override
public TimeValue ackTimeout() {
return request.ackTimeout();
}
});
}
/**
* Unregisters repository in the cluster
* <p/>
* This method can be only called on the master node. It removes repository information from cluster metadata.
*
* @param request unregister repository request
* @param listener unregister repository listener
*/
public void unregisterRepository(final UnregisterRepositoryRequest request, final ActionListener<UnregisterRepositoryResponse> listener) {
clusterService.submitStateUpdateTask(request.cause, new AckedClusterStateUpdateTask() {
@Override
public ClusterState execute(ClusterState currentState) {
MetaData metaData = currentState.metaData();
MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
RepositoriesMetaData repositories = metaData.custom(RepositoriesMetaData.TYPE);
if (repositories != null && repositories.repositories().size() > 0) {
List<RepositoryMetaData> repositoriesMetaData = new ArrayList<RepositoryMetaData>(repositories.repositories().size());
boolean changed = false;
for (RepositoryMetaData repositoryMetaData : repositories.repositories()) {
if (Regex.simpleMatch(request.name, repositoryMetaData.name())) {
logger.info("delete repository [{}]", repositoryMetaData.name());
changed = true;
} else {
repositoriesMetaData.add(repositoryMetaData);
}
}
if (changed) {
repositories = new RepositoriesMetaData(repositoriesMetaData.toArray(new RepositoryMetaData[repositoriesMetaData.size()]));
mdBuilder.putCustom(RepositoriesMetaData.TYPE, repositories);
return ClusterState.builder(currentState).metaData(mdBuilder).build();
}
}
throw new RepositoryMissingException(request.name);
}
@Override
public void onFailure(String source, Throwable t) {
listener.onFailure(t);
}
@Override
public TimeValue timeout() {
return request.masterNodeTimeout();
}
@Override
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
}
@Override
public boolean mustAck(DiscoveryNode discoveryNode) {
// Since operation occurs only on masters, it's enough that only master-eligible nodes acked
return discoveryNode.masterNode();
}
@Override
public void onAllNodesAcked(@Nullable Throwable t) {
listener.onResponse(new UnregisterRepositoryResponse(true));
}
@Override
public void onAckTimeout() {
listener.onResponse(new UnregisterRepositoryResponse(false));
}
@Override
public TimeValue ackTimeout() {
return request.ackTimeout();
}
});
}
/**
* Checks if new repositories appeared in or disappeared from cluster metadata and updates current list of
* repositories accordingly.
*
* @param event cluster changed event
*/
@Override
public void clusterChanged(ClusterChangedEvent event) {
try {
RepositoriesMetaData oldMetaData = event.previousState().getMetaData().custom(RepositoriesMetaData.TYPE);
RepositoriesMetaData newMetaData = event.state().getMetaData().custom(RepositoriesMetaData.TYPE);
// Check if repositories got changed
if ((oldMetaData == null && newMetaData == null) || (oldMetaData != null && oldMetaData.equals(newMetaData))) {
return;
}
Map<String, RepositoryHolder> survivors = newHashMap();
// First, remove repositories that are no longer there
for (Map.Entry<String, RepositoryHolder> entry : repositories.entrySet()) {
if (newMetaData == null || newMetaData.repository(entry.getKey()) == null) {
closeRepository(entry.getKey(), entry.getValue());
} else {
survivors.put(entry.getKey(), entry.getValue());
}
}
ImmutableMap.Builder<String, RepositoryHolder> builder = ImmutableMap.builder();
// Now go through all repositories and update existing or create missing
for (RepositoryMetaData repositoryMetaData : newMetaData.repositories()) {
RepositoryHolder holder = survivors.get(repositoryMetaData.name());
if (holder != null) {
// Found previous version of this repository
if (!holder.type.equals(repositoryMetaData.type()) || !holder.settings.equals(repositoryMetaData.settings())) {
// Previous version is different from the version in settings
closeRepository(repositoryMetaData.name(), holder);
holder = createRepositoryHolder(repositoryMetaData);
//TODO: Error handling and proper Injector cleanup
}
} else {
holder = createRepositoryHolder(repositoryMetaData);
}
if (holder != null) {
builder.put(repositoryMetaData.name(), holder);
}
}
repositories = builder.build();
} catch (Throwable ex) {
logger.warn("failure updating cluster state ", ex);
}
}
/**
* Returns registered repository
* <p/>
* This method is called only on the master node
*
* @param repository repository name
* @return registered repository
* @throws RepositoryMissingException if repository with such name isn't registered
*/
public Repository repository(String repository) {
RepositoryHolder holder = repositories.get(repository);
if (holder != null) {
return holder.repository;
}
throw new RepositoryMissingException(repository);
}
/**
* Returns registered index shard repository
* <p/>
* This method is called only on data nodes
*
* @param repository repository name
* @return registered repository
* @throws RepositoryMissingException if repository with such name isn't registered
*/
public IndexShardRepository indexShardRepository(String repository) {
RepositoryHolder holder = repositories.get(repository);
if (holder != null) {
return holder.indexShardRepository;
}
throw new RepositoryMissingException(repository);
}
/**
* Creates a new repository and adds it to the list of registered repositories.
* <p/>
* If a repository with the same name but a different type or settings already exists, it will be closed and
* replaced with the new repository. If a repository with the same name, type and settings already exists,
* the new repository is ignored.
*
* @param repositoryMetaData new repository metadata
* @return {@code true} if new repository was added or {@code false} if it was ignored
*/
private boolean registerRepository(RepositoryMetaData repositoryMetaData) {
RepositoryHolder previous = repositories.get(repositoryMetaData.name());
if (previous != null) {
if (previous.type.equals(repositoryMetaData.type()) && previous.settings.equals(repositoryMetaData.settings())) {
// Previous repository has the same type and settings as the new one - ignore it
return false;
}
}
RepositoryHolder holder = createRepositoryHolder(repositoryMetaData);
if (previous != null) {
// Closing previous version
closeRepository(repositoryMetaData.name(), previous);
}
Map<String, RepositoryHolder> newRepositories = newHashMap(repositories);
newRepositories.put(repositoryMetaData.name(), holder);
repositories = ImmutableMap.copyOf(newRepositories);
return true;
}
/**
* Closes the repository
*
* @param name repository name
* @param holder repository holder
*/
private void closeRepository(String name, RepositoryHolder holder) {
logger.debug("closing repository [{}][{}]", holder.type, name);
if (holder.injector != null) {
Injectors.close(holder.injector);
}
if (holder.repository != null) {
holder.repository.close();
}
}
/**
* Creates repository holder
*/
private RepositoryHolder createRepositoryHolder(RepositoryMetaData repositoryMetaData) {
logger.debug("creating repository [{}][{}]", repositoryMetaData.type(), repositoryMetaData.name());
Injector repositoryInjector = null;
try {
ModulesBuilder modules = new ModulesBuilder();
RepositoryName name = new RepositoryName(repositoryMetaData.type(), repositoryMetaData.name());
modules.add(new RepositoryNameModule(name));
modules.add(new RepositoryModule(name, repositoryMetaData.settings(), this.settings, typesRegistry));
repositoryInjector = modules.createChildInjector(injector);
Repository repository = repositoryInjector.getInstance(Repository.class);
IndexShardRepository indexShardRepository = repositoryInjector.getInstance(IndexShardRepository.class);
repository.start();
return new RepositoryHolder(repositoryMetaData.type(), repositoryMetaData.settings(), repositoryInjector, repository, indexShardRepository);
} catch (Throwable t) {
if (repositoryInjector != null) {
Injectors.close(repositoryInjector);
}
logger.warn("failed to create repository [{}][{}]", t, repositoryMetaData.type(), repositoryMetaData.name());
throw new RepositoryException(repositoryMetaData.name(), "failed to create repository", t);
}
}
/**
* Internal data structure for holding repository with its configuration information and injector
*/
private static class RepositoryHolder {
private final String type;
private final Settings settings;
private final Injector injector;
private final Repository repository;
private final IndexShardRepository indexShardRepository;
public RepositoryHolder(String type, Settings settings, Injector injector, Repository repository, IndexShardRepository indexShardRepository) {
this.type = type;
this.settings = settings;
this.repository = repository;
this.indexShardRepository = indexShardRepository;
this.injector = injector;
}
}
/**
* Register repository request
*/
public static class RegisterRepositoryRequest extends ClusterStateUpdateRequest<RegisterRepositoryRequest> {
final String cause;
final String name;
final String type;
Settings settings = EMPTY_SETTINGS;
TimeValue masterNodeTimeout = MasterNodeOperationRequest.DEFAULT_MASTER_NODE_TIMEOUT;
/**
* Constructs new register repository request
*
* @param cause repository registration cause
* @param name repository name
* @param type repository type
*/
public RegisterRepositoryRequest(String cause, String name, String type) {
this.cause = cause;
this.name = name;
this.type = type;
}
/**
* Sets repository settings
*
* @param settings repository settings
* @return this request
*/
public RegisterRepositoryRequest settings(Settings settings) {
this.settings = settings;
return this;
}
/**
* Sets master node operation timeout
*
* @param masterNodeTimeout master node operation timeout
* @return this request
*/
public RegisterRepositoryRequest masterNodeTimeout(TimeValue masterNodeTimeout) {
this.masterNodeTimeout = masterNodeTimeout;
return this;
}
}
/**
* Register repository response
*/
public static class RegisterRepositoryResponse extends ClusterStateUpdateResponse {
RegisterRepositoryResponse(boolean acknowledged) {
super(acknowledged);
}
}
/**
* Unregister repository request
*/
public static class UnregisterRepositoryRequest extends ClusterStateUpdateRequest<UnregisterRepositoryRequest> {
final String cause;
final String name;
/**
* Creates a new unregister repository request
*
* @param cause repository unregistration cause
* @param name repository name
*/
public UnregisterRepositoryRequest(String cause, String name) {
this.cause = cause;
this.name = name;
}
}
/**
* Unregister repository response
*/
public static class UnregisterRepositoryResponse extends ClusterStateUpdateResponse {
UnregisterRepositoryResponse(boolean acknowledged) {
super(acknowledged);
}
}
}
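A hedged usage sketch (not part of the commit) of the request/response classes above: registering a shared file system repository through RepositoriesService on the master node. The repository name, location and the "api" cause string are hypothetical.
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.repositories.RepositoriesService.RegisterRepositoryRequest;
import org.elasticsearch.repositories.RepositoriesService.RegisterRepositoryResponse;

public class RegisterFsRepositoryExample {
    public static void register(RepositoriesService repositoriesService) {
        Settings repositorySettings = ImmutableSettings.settingsBuilder()
                .put("location", "/mount/backups/example_repo") // hypothetical shared filesystem path
                .put("compress", true)
                .build();
        RegisterRepositoryRequest request =
                new RegisterRepositoryRequest("api", "example_repo", "fs").settings(repositorySettings);
        repositoriesService.registerRepository(request, new ActionListener<RegisterRepositoryResponse>() {
            @Override
            public void onResponse(RegisterRepositoryResponse response) {
                // acknowledged when all master-eligible nodes applied the updated cluster metadata in time
            }

            @Override
            public void onFailure(Throwable t) {
                // repository could not be created on the master; cluster metadata was left unchanged
            }
        });
    }
}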

View File

@ -0,0 +1,100 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.common.component.LifecycleComponent;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotShardFailure;
/**
* Snapshot repository interface.
* <p/>
* Responsible for index- and cluster-level operations. It is called only on the master node.
* Shard-level operations are performed using {@link org.elasticsearch.index.snapshots.IndexShardRepository}
* interface on data nodes.
* <p/>
* Typical snapshot usage pattern:
* <ul>
* <li>Master calls {@link #initializeSnapshot(org.elasticsearch.cluster.metadata.SnapshotId, com.google.common.collect.ImmutableList, org.elasticsearch.cluster.metadata.MetaData)}
* with list of indices that will be included into the snapshot</li>
* <li>Data nodes call {@link org.elasticsearch.index.snapshots.IndexShardRepository#snapshot(org.elasticsearch.cluster.metadata.SnapshotId, org.elasticsearch.index.shard.ShardId, org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit, org.elasticsearch.index.snapshots.IndexShardSnapshotStatus)} for each shard</li>
* <li>When all shard calls return, the master calls {@link #finalizeSnapshot(org.elasticsearch.cluster.metadata.SnapshotId, String, int, com.google.common.collect.ImmutableList)}
* with a possible list of failures</li>
* </ul>
*/
public interface Repository extends LifecycleComponent<Repository> {
/**
* Reads snapshot description from repository.
*
* @param snapshotId snapshot ID
* @return information about snapshot
*/
Snapshot readSnapshot(SnapshotId snapshotId);
/**
* Returns global metadata associated with the snapshot.
* <p/>
* The returned metadata contains the global metadata as well as metadata for all indices listed in the indices parameter.
*
* @param snapshotId snapshot ID
* @param indices list of indices
* @return information about snapshot
*/
MetaData readSnapshotMetaData(SnapshotId snapshotId, ImmutableList<String> indices);
/**
* Returns the list of snapshots currently stored in the repository
*
* @return snapshot list
*/
ImmutableList<SnapshotId> snapshots();
/**
* Starts snapshotting process
*
* @param snapshotId snapshot id
* @param indices list of indices to be snapshotted
* @param metaData cluster metadata
*/
void initializeSnapshot(SnapshotId snapshotId, ImmutableList<String> indices, MetaData metaData);
/**
* Finalizes snapshotting process
* <p/>
* This method is called on master after all shards are snapshotted.
*
* @param snapshotId snapshot id
* @param failure global failure reason or null
* @param totalShards total number of shards
* @param shardFailures list of shard failures
* @return snapshot description
*/
Snapshot finalizeSnapshot(SnapshotId snapshotId, String failure, int totalShards, ImmutableList<SnapshotShardFailure> shardFailures);
/**
* Deletes snapshot
*
* @param snapshotId snapshot id
*/
void deleteSnapshot(SnapshotId snapshotId);
}
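A hedged sketch (not part of the commit) of the master-side lifecycle described in the javadoc above; the per-shard work that data nodes perform through IndexShardRepository is elided, and the failure list is left empty for brevity.
import com.google.common.collect.ImmutableList;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.repositories.Repository;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotShardFailure;

public class SnapshotLifecycleSketch {
    public static Snapshot snapshotIndices(Repository repository, SnapshotId snapshotId,
                                           ImmutableList<String> indices, MetaData metaData,
                                           int totalShards) {
        // 1. The master writes the snapshot and metadata blobs into the repository.
        repository.initializeSnapshot(snapshotId, indices, metaData);

        // 2. Data nodes would now call IndexShardRepository.snapshot(...) for every shard;
        //    any shard failures are collected and reported back to the master (elided here).
        ImmutableList<SnapshotShardFailure> shardFailures = ImmutableList.of();

        // 3. The master marks the snapshot as completed and returns its description.
        return repository.finalizeSnapshot(snapshotId, null, totalShards, shardFailures);
    }
}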

View File

@ -0,0 +1,47 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import org.elasticsearch.ElasticSearchException;
/**
* Generic repository exception
*/
public class RepositoryException extends ElasticSearchException {
private final String repository;
public RepositoryException(String repository, String msg) {
this(repository, msg, null);
}
public RepositoryException(String repository, String msg, Throwable cause) {
super("[" + (repository == null ? "_na" : repository) + "] " + msg, cause);
this.repository = repository;
}
/**
* Returns repository name
*
* @return repository name
*/
public String repository() {
return repository;
}
}

View File

@ -0,0 +1,38 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import org.elasticsearch.rest.RestStatus;
/**
* Repository missing exception
*/
public class RepositoryMissingException extends RepositoryException {
public RepositoryMissingException(String repository) {
super(repository, "missing");
}
@Override
public RestStatus status() {
return RestStatus.NOT_FOUND;
}
}

View File

@ -0,0 +1,92 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.common.Classes;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.inject.Modules;
import org.elasticsearch.common.inject.SpawnModules;
import org.elasticsearch.common.settings.NoClassSettingsException;
import org.elasticsearch.common.settings.Settings;
import java.util.Locale;
import static org.elasticsearch.common.Strings.toCamelCase;
/**
* This module spawns the type-specific repository module
*/
public class RepositoryModule extends AbstractModule implements SpawnModules {
private RepositoryName repositoryName;
private final Settings globalSettings;
private final Settings settings;
private final RepositoryTypesRegistry typesRegistry;
/**
* Spawns module for repository with specified name, type and settings
*
* @param repositoryName repository name and type
* @param settings repository settings
* @param globalSettings global settings
* @param typesRegistry registry of repository types
*/
public RepositoryModule(RepositoryName repositoryName, Settings settings, Settings globalSettings, RepositoryTypesRegistry typesRegistry) {
this.repositoryName = repositoryName;
this.globalSettings = globalSettings;
this.settings = settings;
this.typesRegistry = typesRegistry;
}
/**
* Returns repository module.
* <p/>
* The repository type is first looked up in the typesRegistry and, if it's not found there, this module tries to
* load the repository module by its class name.
*
* @return repository module
*/
@Override
public Iterable<? extends Module> spawnModules() {
return ImmutableList.of(Modules.createModule(loadTypeModule(repositoryName.type(), "org.elasticsearch.repositories.", "RepositoryModule"), globalSettings));
}
/**
* {@inheritDoc}
*/
@Override
protected void configure() {
bind(RepositorySettings.class).toInstance(new RepositorySettings(globalSettings, settings));
}
private Class<? extends Module> loadTypeModule(String type, String prefixPackage, String suffixClassName) {
Class<? extends Module> registered = typesRegistry.type(type);
if (registered != null) {
return registered;
}
return Classes.loadClass(globalSettings.getClassLoader(), type, prefixPackage, suffixClassName);
}
}

View File

@ -0,0 +1,71 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
/**
* Combines together the name and type of the repository
*/
public class RepositoryName {
private final String type;
private final String name;
public RepositoryName(String type, String name) {
this.type = type;
this.name = name;
}
public String type() {
return this.type;
}
public String getType() {
return type();
}
public String name() {
return this.name;
}
public String getName() {
return name();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RepositoryName that = (RepositoryName) o;
if (name != null ? !name.equals(that.name) : that.name != null) return false;
if (type != null ? !type.equals(that.type) : that.type != null) return false;
return true;
}
@Override
public int hashCode() {
int result = type != null ? type.hashCode() : 0;
result = 31 * result + (name != null ? name.hashCode() : 0);
return result;
}
}

View File

@ -0,0 +1,39 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.repositories;
import org.elasticsearch.common.inject.AbstractModule;
/**
* Binds a specific instance of RepositoryName for injection into the repository module
*/
public class RepositoryNameModule extends AbstractModule {
private final RepositoryName repositoryName;
public RepositoryNameModule(RepositoryName repositoryName) {
this.repositoryName = repositoryName;
}
@Override
protected void configure() {
bind(RepositoryName.class).toInstance(repositoryName);
}
}
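A hedged sketch (not part of the commit) of how these per-repository bindings come together: RepositoryNameModule binds the RepositoryName instance and RepositoryModule binds RepositorySettings, so a concrete repository implementation (MyRepository below is hypothetical) can receive both through constructor injection. A real implementation would also implement the Repository interface and be bound in its type-specific module so that RepositoriesService can look it up.
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.repositories.RepositoryName;
import org.elasticsearch.repositories.RepositorySettings;

public class MyRepository {
    private final String name;
    private final RepositorySettings repositorySettings;

    @Inject
    public MyRepository(RepositoryName repositoryName, RepositorySettings repositorySettings) {
        this.name = repositoryName.name();            // bound by RepositoryNameModule
        this.repositorySettings = repositorySettings; // bound by RepositoryModule
    }
}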

Some files were not shown because too many files have changed in this diff