When listing existing blobs for an azure repository, the `path` to look at is incorrectly computed, which leads to 404 errors.
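For context, that path is the prefix used when asking Azure to list blobs, built from the repository's `base_path` and the blob name prefix. The sketch below only illustrates the intended concatenation; the helper name and exact behaviour are assumptions, not the plugin's actual code.

```java
/**
 * Illustrative sketch only (assumed names, not the plugin's code):
 * builds the prefix used to list blobs, e.g. base path "backups" and
 * blob prefix "index-" should yield "backups/index-".
 */
public static String buildListPrefix(String basePath, String blobNamePrefix) {
    // An empty base path means the repository lives at the container root.
    String prefix = (basePath == null || basePath.isEmpty()) ? "" : basePath + "/";
    return blobNamePrefix == null ? prefix : prefix + blobNamePrefix;
}
```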
Closes #26.
(cherry picked from commit 656fadc)
According to the [Containers naming guide](http://msdn.microsoft.com/en-us/library/dd135715.aspx):
> A container name must be a valid DNS name, conforming to the following naming rules:
>
> * Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
> * Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
> * All letters in a container name must be lowercase.
> * Container names must be from 3 through 63 characters long.
We need to fix the documentation and check the container name before calling the Azure API.
The validation will come with issue #27.
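A minimal sketch of such a check, using a hypothetical helper class (not the plugin's actual API), could look like this:

```java
import java.util.regex.Pattern;

public class ContainerNameValidator {

    // Letters, digits and single dashes; a dash is never first, last or doubled.
    private static final Pattern CONTAINER_NAME = Pattern.compile("^[a-z0-9](?:-?[a-z0-9])*$");

    /**
     * Returns true if the name satisfies the Azure container naming rules quoted above.
     * Illustrative helper only, not the plugin's code.
     */
    public static boolean isValidContainerName(String name) {
        return name != null
                && name.length() >= 3
                && name.length() <= 63
                && CONTAINER_NAME.matcher(name).matches();
    }
}
```

With a check like this in place, an invalid name such as `My_Container` can be rejected with a clear error message instead of a failing Azure API call.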
Closes #21.
(cherry picked from commit 6531165)
We create branches:
* es-0.90 for elasticsearch 0.90
* es-1.0 for elasticsearch 1.0
* es-1.1 for elasticsearch 1.1
* master for elasticsearch master
We also check before releasing that we don't have a dependency on an elasticsearch SNAPSHOT version.
Add links to each version in the documentation.
(cherry picked from commit 65d4862)
elasticsearch 1.0 will provide a new feature named `Snapshot & Restore`.
We want to add support for [Azure Storage](http://www.windowsazure.com/en-us/documentation/services/storage/).
To enable Azure repositories, you first have to set your Azure storage settings:
```yaml
cloud:
    azure:
        storage_account: your_azure_storage_account
        storage_key: your_azure_storage_key
```
The Azure repository supports the following settings:
* `container`: Container name. Defaults to `elasticsearch-snapshots`.
* `base_path`: Specifies the path within the container to the repository data. Defaults to empty (root directory).
* `concurrent_streams`: Throttles the number of streams (per node) performing snapshot operations. Defaults to `5`.
* `chunk_size`: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified
  in bytes or by using size value notation, i.e. `1g`, `10m`, `5k`. Defaults to `64m` (64m max).
* `compress`: When set to `true`, metadata files are stored in compressed format. This setting doesn't affect index
  files that are already compressed by default. Defaults to `false`.
Some examples using scripts:
```sh
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup1' -d '{
    "type": "azure"
}'

$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup2' -d '{
    "type": "azure",
    "settings": {
        "container": "backup_container",
        "base_path": "backups",
        "concurrent_streams": 2,
        "chunk_size": "32m",
        "compress": true
    }
}'
```
Example using Java:
```java
client.admin().cluster().preparePutRepository("my_backup3")
.setType("azure").setSettings(ImmutableSettings.settingsBuilder()
.put(AzureStorageService.Fields.CONTAINER, "backup_container")
.put(AzureStorageService.Fields.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
).get();
```
Closes #2.
In the `readme.md` file, the `azure vm list` command should be changed to `azure vm image list`. The former lists all the VMs running in the current Azure subscription, while the latter displays the list of officially available images.
The line is located just below the following line:
`To get a list of official available images, run:`
Closes #5.
Move tests to the elasticsearch test framework.
In addition, we want to refactor some package names to prepare for the upcoming snapshot/restore feature (see #2).
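For illustration, a test based on the elasticsearch test framework typically extends `ElasticsearchIntegrationTest`; the class and test below are a hypothetical sketch, not the plugin's actual tests:

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.junit.Assert;
import org.junit.Test;

// Hypothetical example of a test written against the elasticsearch test framework.
public class AzurePluginSmokeTest extends ElasticsearchIntegrationTest {

    @Test
    public void clusterIsHealthy() {
        // The base class starts a randomized test cluster; we simply check its health.
        ClusterHealthResponse health = client().admin().cluster().prepareHealth().get();
        Assert.assertFalse("cluster health request timed out", health.isTimedOut());
    }
}
```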
Closes #3.