When upgrading to the elasticsearch 1.2.1 test framework, some tests no longer work because of the
`ElasticsearchIntegrationTest#ensureClusterSizeConsistency()` method, which checks that the number of started nodes
equals the number of nodes available in the cluster.
Disabling them temporarily.
Also, a new client node could be added (depending on the seed), which adds one more node than expected.
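As a minimal sketch (not the actual change), one common way to temporarily disable such a test in the Lucene/Elasticsearch test framework is the `@AwaitsFix` annotation; the test class, method, and issue URL below are hypothetical:
```java
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.junit.Test;

public class GceDiscoveryTests extends ElasticsearchIntegrationTest {

    // Hypothetical example: skip the test until the node-count mismatch
    // reported by ensureClusterSizeConsistency() is addressed.
    @Test
    @AwaitsFix(bugUrl = "http://example.com/hypothetical-issue")
    public void testNodesJoinCluster() {
        // test body omitted
    }
}
```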
(cherry picked from commit bcc2cd5)
When no tags exist on other running instances and we try to filter by tag, we get the following error:
```
[2014-05-19 16:17:37,377][DEBUG][discovery.gce ] [Theresa Cassidy] start building nodes list using GCE API
[2014-05-19 16:17:37,378][INFO ][cloud.gce ] [Theresa Cassidy] starting GCE discovery service
[2014-05-19 16:17:37,592][TRACE][discovery.gce ] [Theresa Cassidy] gce instance hadoop1 with status RUNNING found.
[2014-05-19 16:17:37,597][TRACE][discovery.gce ] [Theresa Cassidy] start filtering instance hadoop1 with tags [elasticsearch, dev].
[2014-05-19 16:17:37,597][TRACE][discovery.gce ] [Theresa Cassidy] comparing instance tags null with tags filter [elasticsearch, dev].
[2014-05-19 16:17:37,597][WARN ][discovery.gce ] [Theresa Cassidy] Exception caught during discovery java.lang.NullPointerException : null
[2014-05-19 16:17:37,597][TRACE][discovery.gce ] [Theresa Cassidy] Exception caught during discovery
java.lang.NullPointerException
at org.elasticsearch.discovery.gce.GceUnicastHostsProvider.buildDynamicNodes(GceUnicastHostsProvider.java:157)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:245)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.run(UnicastZenPing.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2014-05-19 16:17:37,598][DEBUG][discovery.gce ] [Theresa Cassidy] 0 node(s) added
[2014-05-19 16:17:37,598][DEBUG][discovery.gce ] [Theresa Cassidy] using dynamic discovery nodes []
```
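A minimal sketch of the kind of null guard that avoids this, assuming the tag filter simply checks whether an instance carries every requested tag (names below are illustrative, not the plugin's actual code):
```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class TagFilterSketch {

    /**
     * Illustrative helper: an instance is kept only if it carries every tag in
     * the filter. An instance with no tags at all (a null tag list) simply
     * never matches, instead of throwing a NullPointerException.
     */
    static boolean matchesTagFilter(List<String> instanceTags, Collection<String> tagsFilter) {
        if (tagsFilter == null || tagsFilter.isEmpty()) {
            return true; // no filter configured: keep every instance
        }
        List<String> tags = instanceTags == null ? Collections.<String>emptyList() : instanceTags;
        return tags.containsAll(tagsFilter);
    }

    public static void main(String[] args) {
        List<String> filter = Arrays.asList("elasticsearch", "dev");
        // Untagged instance (like hadoop1 in the log above): filtered out, no NPE.
        System.out.println(matchesTagFilter(null, filter));                                  // false
        // Instance carrying both requested tags: kept.
        System.out.println(matchesTagFilter(Arrays.asList("elasticsearch", "dev"), filter)); // true
    }
}
```
With a null tag list treated as an empty list, an untagged instance is silently filtered out instead of aborting discovery.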
Closes #22.
We create branches:
* es-0.90 for elasticsearch 0.90
* es-1.0 for elasticsearch 1.0
* es-1.1 for elasticsearch 1.1
* master for elasticsearch master
We also check before releasing that we don't depend on an elasticsearch SNAPSHOT version.
Add links to each version in the documentation.
(cherry picked from commit a51926c)
When an instance is removed, its status becomes `TERMINATED`.
As stated in [GCE Documentation](https://developers.google.com/compute/docs/instances#checkmachinestatus):
> TERMINATED - The instance either failed for some reason or was shutdown. This is a permanent status, and the only way to repair the instance is to delete and recreate it.
So we need to ignore instances with such a status.
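A minimal sketch of such a status check, assuming the instance status is available as a plain string (names below are illustrative, not the plugin's actual code):
```java
public class InstanceStatusSketch {

    /**
     * Illustrative check: TERMINATED is a permanent status, so such an
     * instance can never join the cluster and should not be added to the
     * discovery nodes list.
     */
    static boolean shouldIgnore(String instanceStatus) {
        return "TERMINATED".equals(instanceStatus);
    }

    public static void main(String[] args) {
        System.out.println(shouldIgnore("RUNNING"));    // false -> keep the instance
        System.out.println(shouldIgnore("TERMINATED")); // true  -> skip it
    }
}
```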
Closes #3.
-----------------
The GCE discovery can also filter machines to include in the cluster based on tags using the `discovery.gce.tags` setting.
For example, setting `discovery.gce.tags` to `dev` will only include instances that have a tag set to `dev`. If several
tags are set, all of them must be present on an instance for it to be included.
One practical use for tag filtering is when a GCE cluster contains many nodes that are not running
elasticsearch. In this case (particularly with high `ping_timeout` values) there is a risk that a new node's discovery
phase will end before it has found the cluster (which will result in it declaring itself master of a new cluster
with the same name - highly undesirable). Adding a tag to the elasticsearch GCE nodes and then filtering by that
tag will resolve this issue.
Add your tag when building the new instance:
```sh
gcutil --project=es-cloud addinstance myesnode1 --service_account_scope=compute-rw --persistent_boot_disk \
--tags=elasticsearch,dev
```
Then, define it in `elasticsearch.yml`:
```yaml
cloud:
    gce:
        project_id: es-cloud
        zone: europe-west1-a
discovery:
    type: gce
    gce:
        tags: elasticsearch, dev
```
Closes #2.