[[repository-azure]]
=== Azure Repository Plugin

The Azure Repository plugin adds support for using Azure as a repository for
{ref}/modules-snapshots.html[Snapshot/Restore].

:plugin_name: repository-azure
include::install_remove.asciidoc[]

[[repository-azure-usage]]
==== Azure Repository

To enable Azure repositories, you first need to set your Azure storage settings in the `elasticsearch.yml` file:

[source,yaml]
----
cloud:
    azure:
        storage:
            my_account:
                account: your_azure_storage_account
                key: your_azure_storage_key
----

Note that you can also define more than one account:

[source,yaml]
----
cloud:
    azure:
        storage:
            my_account1:
                account: your_azure_storage_account1
                key: your_azure_storage_key1
                default: true
            my_account2:
                account: your_azure_storage_account2
                key: your_azure_storage_key2
----

`my_account1` is the default account, which will be used by a repository unless you set an explicit one.

You can set the client-side timeout to use when making any single request. It can be defined globally, per account, or both.
It is not set by default, which means that Elasticsearch uses the
http://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value]
set by the Azure client, which is 5 minutes.

`max_retries` controls the exponential backoff policy: it sets the number of retries after a failure before the snapshot is considered failed.
It defaults to `3` retries. The initial backoff period is defined by the Azure SDK as `30s`, which means `30s` of wait time before retrying
after a first timeout or failure. The maximum backoff period is defined by the Azure SDK as `90s`.

[source,yaml]
----
cloud:
    azure:
        storage:
            timeout: 10s
            my_account1:
                account: your_azure_storage_account1
                key: your_azure_storage_key1
                default: true
                max_retries: 7
            my_account2:
                account: your_azure_storage_account2
                key: your_azure_storage_key2
                timeout: 30s
----

In this example, the timeout will be `10s` per try for `my_account1` with `7` retries before failing, and `30s` per try for `my_account2` with `3` retries.

[IMPORTANT]
.Supported Azure Storage Account types
===============================================
The Azure Repository plugin works with all Standard storage accounts:

* Standard Locally Redundant Storage - `Standard_LRS`
* Standard Zone-Redundant Storage - `Standard_ZRS`
* Standard Geo-Redundant Storage - `Standard_GRS`
* Standard Read Access Geo-Redundant Storage - `Standard_RAGRS`

https://azure.microsoft.com/en-gb/documentation/articles/storage-premium-storage[Premium Locally Redundant Storage] (`Premium_LRS`) is **not supported** as it is only usable as VM disk storage, not as general storage.
===============================================

[[repository-azure-repository-settings]]
===== Repository settings

The Azure repository supports the following settings:

`account`:: The Azure account settings to use. Defaults to the only account if you define a single one, or to the one marked as `default` if you have more than one.

`container`:: Container name. You must create the Azure container before creating the repository. Defaults to `elasticsearch-snapshots`.

`base_path`:: Specifies the path within the container to the repository data. Defaults to empty (root directory).

`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by using size value notation, i.e. `1g`, `10m`, `5k`. Defaults to `64m` (64m max).

`compress`:: When set to `true`, metadata files are stored in compressed format. This setting doesn't affect index files that are already compressed by default. Defaults to `false`.

`readonly`:: Makes the repository read-only. Defaults to `false`.

`location_mode`:: `primary_only` or `secondary_only`. Defaults to `primary_only`. Note that setting it to `secondary_only` forces `readonly` to `true` (see the last example below).

Some examples, using scripts:

[source,js]
----
# The simplest one
PUT _snapshot/my_backup1
{
  "type": "azure"
}

# With some settings
PUT _snapshot/my_backup2
{
  "type": "azure",
  "settings": {
    "container": "backup-container",
    "base_path": "backups",
    "chunk_size": "32m",
    "compress": true
  }
}

# With two accounts defined in elasticsearch.yml (my_account1 and my_account2)
PUT _snapshot/my_backup3
{
  "type": "azure",
  "settings": {
    "account": "my_account1"
  }
}

PUT _snapshot/my_backup4
{
  "type": "azure",
  "settings": {
    "account": "my_account2",
    "location_mode": "primary_only"
  }
}
----
// CONSOLE
// TEST[skip:we don't have azure setup while testing this]

Example using Java:

[source,java]
----
client.admin().cluster().preparePutRepository("my_backup_java1")
    .setType("azure").setSettings(Settings.builder()
        .put(Storage.CONTAINER, "backup-container")
        .put(Storage.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
    ).get();
----
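As a sketch based on the Java example above, a repository can also be registered so that it only reads from the storage account's secondary location. Since `location_mode: secondary_only` forces `readonly` to `true`, such a repository can be used to restore snapshots but not to create new ones. The repository name `my_backup_java2` is arbitrary, and the settings are passed here as plain string keys matching the repository settings listed above:

[source,java]
----
// Sketch only: register a repository that reads from the secondary location.
// Because "location_mode" is "secondary_only", "readonly" is forced to true,
// so this repository can be used for restores but not for new snapshots.
// The repository name "my_backup_java2" is arbitrary; the setting keys are the
// plain string names of the repository settings listed above.
client.admin().cluster().preparePutRepository("my_backup_java2")
    .setType("azure").setSettings(Settings.builder()
        .put("account", "my_account2")
        .put("location_mode", "secondary_only")
    ).get();
----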
[[repository-azure-validation]]
===== Repository validation rules

According to the http://msdn.microsoft.com/en-us/library/dd135715.aspx[containers naming guide], a container name must be a valid DNS name, conforming to the following naming rules:

* Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
* Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
* All letters in a container name must be lowercase.
* Container names must be from 3 through 63 characters long.
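For example, these rules can be checked up front with a small helper like the following sketch. It is purely illustrative and not part of the plugin's API; the method name and the regular expression are assumptions made for this example:

[source,java]
----
// Illustrative only, not part of the plugin's API: returns true if the given
// container name satisfies the naming rules listed above. The regular
// expression allows lowercase letters, digits, and single dashes surrounded by
// letters or digits; the length check enforces 3 to 63 characters.
static boolean isValidContainerName(String name) {
    return name != null
        && name.length() >= 3
        && name.length() <= 63
        && name.matches("[a-z0-9](-?[a-z0-9])+");
}
----

Under this check, names such as `backup-container` or `elasticsearch-snapshots` are valid, while a name like `Backup--Container` is rejected.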