[[repository-azure]]
=== Azure Repository Plugin
The Azure Repository plugin adds support for using Azure as a repository for
{ref}/modules-snapshots.html[Snapshot/Restore].

:plugin_name: repository-azure
include::install_remove.asciidoc[]

[[repository-azure-usage]]
==== Azure Repository

To enable Azure repositories, you must first define your Azure storage settings as
{ref}/secure-settings.html[secure settings] before starting up the node:

[source,sh]
----------------------------------------------------------------
bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key
----------------------------------------------------------------

Where `account` is the Azure account name and `key` is the Azure secret key. Alternatively,
instead of an account key, you can define a shared access signature (SAS) token under
`sas_token` to use for authentication. When using a SAS token instead of an account key,
the SAS token must have read (r), write (w), list (l), and delete (d) permissions for the
repository base path and all its contents. These permissions must be granted for the blob
service (b) and apply to resource types service (s), container (c), and object (o).
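
For example, a SAS token with these permissions could be generated with the Azure CLI
(a sketch; the account name and expiry below are placeholders, and depending on how you
are authenticated you may also need to pass the account key):

[source,sh]
----
# Generate a SAS token with read/write/delete/list permissions (rwdl)
# for the blob service (b) and resource types service, container,
# and object (sco).
az storage account generate-sas --account-name mysnapshotaccount \
    --services b --resource-types sco \
    --permissions rwdl --expiry 2030-01-01
----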

These settings are used by the repository's internal Azure client.

Note that you can also define more than one account:

[source,sh]
----------------------------------------------------------------
bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key
bin/elasticsearch-keystore add azure.client.secondary.account
bin/elasticsearch-keystore add azure.client.secondary.sas_token
----------------------------------------------------------------

`default` is the default account name which will be used by a repository,
unless you set an explicit one in the
<<repository-azure-repository-settings, repository settings>>.

The `account`, `key`, and `sas_token` storage settings are
{ref}/secure-settings.html#reloadable-secure-settings[reloadable]. After you
reload the settings, the internal Azure clients, which are used to transfer the
snapshot, will use the latest settings from the keystore.

NOTE: In-progress snapshot or restore jobs will not be preempted by a *reload*
of the storage secure settings. They will complete using the client as it was
built when the operation started.
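
For example, after updating a credential in the keystore, you can make the clients pick
up the new value without restarting the node:

[source,js]
----
POST _nodes/reload_secure_settings
----
// CONSOLE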

You can set the client-side timeout to use when making any single request. It can be
defined globally, per account, or both. It is not set by default, which means that
Elasticsearch uses the
http://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value]
set by the Azure client, which is 5 minutes.

`max_retries` controls the exponential backoff policy by setting the number of retries
after a failure before the snapshot is considered failed. Defaults to `3` retries.
The initial backoff period is defined by the Azure SDK as `30s`, meaning a `30s` wait
before retrying after a first timeout or failure. The maximum backoff period is defined
by the Azure SDK as `90s`.

`endpoint_suffix` can be used to specify the Azure endpoint suffix explicitly.
Defaults to `core.windows.net`.

[source,yaml]
----
azure.client.default.timeout: 10s
azure.client.default.max_retries: 7
azure.client.default.endpoint_suffix: core.chinacloudapi.cn
azure.client.secondary.timeout: 30s
----

In this example, the timeout is `10s` per try for the `default` client with `7` retries
before failing, and its endpoint suffix is `core.chinacloudapi.cn`. For the `secondary`
client, the timeout is `30s` per try with the default `3` retries.

[IMPORTANT]
.Supported Azure Storage Account types
===============================================
The Azure Repository plugin works with all Standard storage account types:

* Standard Locally Redundant Storage - `Standard_LRS`
* Standard Zone-Redundant Storage - `Standard_ZRS`
* Standard Geo-Redundant Storage - `Standard_GRS`
* Standard Read Access Geo-Redundant Storage - `Standard_RAGRS`

https://azure.microsoft.com/en-gb/documentation/articles/storage-premium-storage[Premium Locally Redundant Storage] (`Premium_LRS`) is **not supported**, as it is only usable as VM disk storage, not as general storage.
===============================================
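
For example, a Standard storage account suitable for snapshots could be created with the
Azure CLI (a sketch; the account and resource group names below are placeholders):

[source,sh]
----
# Create a Standard Locally Redundant Storage account.
az storage account create --name mysnapshotaccount \
    --resource-group my-resource-group \
    --sku Standard_LRS
----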

You can register a proxy per client using the following settings:

[source,yaml]
----
azure.client.default.proxy.host: proxy.host
azure.client.default.proxy.port: 8888
azure.client.default.proxy.type: http
----

Supported values for `proxy.type` are `direct` (default), `http`, or `socks`.
When `proxy.type` is set to `http` or `socks`, `proxy.host` and `proxy.port` must be provided.

[[repository-azure-repository-settings]]
==== Repository settings

The Azure repository supports the following settings:

`client`::

    Azure named client to use. Defaults to `default`.

`container`::

    Container name. You must create the Azure container before creating the repository.
    Defaults to `elasticsearch-snapshots`.

`base_path`::

    Specifies the path within the container to the repository data. Defaults to empty
    (root directory).

`chunk_size`::

    Big files can be broken down into chunks during snapshotting if needed.
    The chunk size can be specified in bytes or by using size value notation,
    for example `1g`, `10m`, or `5k`. Defaults to `64m` (64m max).

`compress`::

    When set to `true`, metadata files are stored in compressed format. This
    setting doesn't affect index files that are already compressed by default.
    Defaults to `false`.

include::repository-shared-settings.asciidoc[]

`location_mode`::

    `primary_only` or `secondary_only`. Defaults to `primary_only`. Note that if you set it
    to `secondary_only`, it will force `readonly` to `true`.

Some examples, using scripts:

[source,js]
----
# The simplest one
PUT _snapshot/my_backup1
{
  "type": "azure"
}

# With some settings
PUT _snapshot/my_backup2
{
  "type": "azure",
  "settings": {
    "container": "backup-container",
    "base_path": "backups",
    "chunk_size": "32m",
    "compress": true
  }
}

# With two clients defined in the keystore (default and secondary)
PUT _snapshot/my_backup3
{
  "type": "azure",
  "settings": {
    "client": "secondary"
  }
}

PUT _snapshot/my_backup4
{
  "type": "azure",
  "settings": {
    "client": "secondary",
    "location_mode": "primary_only"
  }
}
----
// CONSOLE
// TEST[skip:we don't have azure setup while testing this]

Example using Java:

[source,java]
----
client.admin().cluster().preparePutRepository("my_backup_java1")
    .setType("azure").setSettings(Settings.builder()
        .put(Storage.CONTAINER, "backup-container")
        .put(Storage.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
    ).get();
----
[[repository-azure-validation]]
==== Repository validation rules

According to the http://msdn.microsoft.com/en-us/library/dd135715.aspx[containers naming guide], a container name must
be a valid DNS name, conforming to the following naming rules (a small validation sketch follows the list):

* Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
* Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not
permitted in container names.
* All letters in a container name must be lowercase.
* Container names must be from 3 through 63 characters long.
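
Taken together, these rules can be checked programmatically. Here is a minimal sketch in
Java (a hypothetical helper for illustration, not part of the plugin):

[source,java]
----
import java.util.regex.Pattern;

class ContainerNameValidator {
    // Starts and ends with a lowercase letter or digit; a dash may only
    // appear singly between two letters/digits, which also rules out
    // consecutive dashes.
    private static final Pattern CONTAINER_NAME =
        Pattern.compile("^[a-z0-9](?:-?[a-z0-9])*$");

    static boolean isValid(String name) {
        // Container names must be from 3 through 63 characters long.
        return name.length() >= 3 && name.length() <= 63
            && CONTAINER_NAME.matcher(name).matches();
    }
}
----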