I noticed the documentation recommends a non-durable local resource for the Elasticsearch data path. Although this is acceptable for some deployments, it might be worth warning people that the path is not durable and that there is a potential for data loss; even with replicas, data loss is theoretically possible.
```
# recommended
path.data: /mnt/resource/elasticsearch/data
```
Alternatively, the user could attach and use data disks, which do come with a significant performance tradeoff; however, premium storage options with higher IOPS have been announced and are right around the corner.
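For example, if a data disk is attached and mounted, the data path could point at it instead; the mount point below is only illustrative:
```
# alternative: durable attached data disk (mount point is illustrative)
path.data: /datadisks/disk1/elasticsearch/data
```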
Closes #46.
One of the concerns we have with our documentation is that it's hard for users to understand which plugin version they should use with a given elasticsearch version.
The change will consist of:
* have a clean and simple compatibility matrix on the master branch (a sketch of its shape is shown below)
* remove the list of versions from the other branches
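For illustration, such a matrix could take a shape like the following; the plugin version numbers are placeholders, not actual releases:

| Azure Cloud Plugin | elasticsearch |
|--------------------|---------------|
| x.y.z              | 1.1.x         |
| x.y.z              | 1.0.x         |
| x.y.z              | 0.90.x        |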
Closes #29.
According to the [Containers naming guide](http://msdn.microsoft.com/en-us/library/dd135715.aspx):
> A container name must be a valid DNS name, conforming to the following naming rules:
>
> * Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
> * Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
> * All letters in a container name must be lowercase.
> * Container names must be from 3 through 63 characters long.
We need to fix the documentation and validate the container name before calling the Azure API.
The validation will come with issue #27.
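A rough sketch of what such a validation could look like (the class name, method name, and regular expression are illustrative, not the plugin's actual implementation):
```java
import java.util.regex.Pattern;

public class ContainerNameValidator {

    // Azure container names: 3 to 63 characters, lowercase letters, digits and dashes only,
    // starting with a letter or digit, and every dash surrounded by a letter or digit.
    private static final Pattern CONTAINER_NAME =
            Pattern.compile("^[a-z0-9](?:[a-z0-9]|-(?=[a-z0-9])){2,62}$");

    public static boolean isValidContainerName(String name) {
        return name != null && CONTAINER_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidContainerName("elasticsearch-snapshots")); // true
        System.out.println(isValidContainerName("Elasticsearch-Snapshots")); // false: uppercase letters
        System.out.println(isValidContainerName("my--container"));           // false: consecutive dashes
        System.out.println(isValidContainerName("ab"));                      // false: shorter than 3 characters
    }
}
```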
Closes #21.
(cherry picked from commit 6531165)
We create branches:
* es-0.90 for elasticsearch 0.90
* es-1.0 for elasticsearch 1.0
* es-1.1 for elasticsearch 1.1
* master for elasticsearch master
We also check, before releasing, that we don't have a dependency on an elasticsearch SNAPSHOT version.
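A rough sketch of what such a pre-release check could look like, assuming the elasticsearch version is declared as an `elasticsearch.version` property in `pom.xml` (the actual release process may do this differently):
```sh
# abort the release if the pom still depends on an elasticsearch SNAPSHOT version
if grep -q '<elasticsearch.version>.*-SNAPSHOT' pom.xml; then
    echo "elasticsearch dependency is a SNAPSHOT version, aborting release"
    exit 1
fi
```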
Add links to each version in the documentation.
(cherry picked from commit 65d4862)
elasticsearch 1.0 will provide a new feature named `Snapshot & Restore`.
We want to add support for [Azure Storage](http://www.windowsazure.com/en-us/documentation/services/storage/).
To enable Azure repositories, you first have to set your Azure storage settings:
```yaml
cloud:
    azure:
        storage_account: your_azure_storage_account
        storage_key: your_azure_storage_key
```
The Azure repository supports the following settings:
* `container`: Container name. Defaults to `elasticsearch-snapshots`.
* `base_path`: Specifies the path within the container to the repository data. Defaults to empty (root directory).
* `concurrent_streams`: Throttles the number of streams (per node) performing the snapshot operation. Defaults to `5`.
* `chunk_size`: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified
in bytes or by using size value notation, e.g. `1g`, `10m`, `5k`. Defaults to `64m` (64m max).
* `compress`: When set to `true`, metadata files are stored in compressed format. This setting doesn't affect index
files that are already compressed by default. Defaults to `false`.
Some examples, using scripts:
```sh
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup1' -d '{
    "type": "azure"
}'

$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup2' -d '{
    "type": "azure",
    "settings": {
        "container": "backup_container",
        "base_path": "backups",
        "concurrent_streams": 2,
        "chunk_size": "32m",
        "compress": true
    }
}'
```
Example using Java:
```java
client.admin().cluster().preparePutRepository("my_backup3")
    .setType("azure").setSettings(ImmutableSettings.settingsBuilder()
        .put(AzureStorageService.Fields.CONTAINER, "backup_container")
        .put(AzureStorageService.Fields.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
    ).get();
```
Closes #2.
In the `readme.md` file, the `azure vm list` command should be changed to `azure vm image list`. The former lists all the VMs running in the current Azure subscription, while the latter displays the list of officially available images.
The line is located just below the following line:
`To get a list of official available images, run:`
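For reference, the two commands side by side:
```sh
# displays the list of officially available images (the command the readme should use)
azure vm image list

# lists the VMs running in the current Azure subscription
azure vm list
```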
Closes #5.