[[discovery-gce]]
=== GCE Discovery Plugin
The Google Compute Engine Discovery plugin uses the GCE API to identify the
addresses of seed hosts.
:plugin_name: discovery-gce
include::install_remove.asciidoc[]
[[discovery-gce-usage]]
==== GCE Virtual Machine Discovery
Google Compute Engine VM discovery allows you to use the Google APIs to perform
automatic discovery of seed hosts. Here is a simple sample configuration:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: <your-google-project-id>
    zone: <your-zone>
discovery:
  seed_providers: gce
--------------------------------------------------
The following gce settings (prefixed with `cloud.gce`) are supported:
`project_id`::
Your Google project id.
By default the project id will be derived from the instance metadata.
Note: Deriving the project id from system properties or environment variables
(`GOOGLE_CLOUD_PROJECT` or `GCLOUD_PROJECT`) is not supported.
`zone`::
Helps to retrieve instances running in a given zone.
It should be one of the https://developers.google.com/compute/docs/zones#available[GCE supported zones].
By default the zone will be derived from the instance metadata.
See also <<discovery-gce-usage-zones>>.
`retry`::
If set to `true`, client will use
https://developers.google.com/api-client-library/java/google-http-java-client/backoff[ExponentialBackOff]
policy to retry the failed http request. Defaults to `true`.
`max_wait`::
The maximum elapsed time after the client starts retrying. If the elapsed time goes past
`max_wait`, the client stops retrying. A negative value means that it will wait indefinitely. Defaults to `0s` (retry
indefinitely).
`refresh_interval`::
How long the list of hosts is cached to prevent further requests to the GCE API. `0s` disables caching.
A negative value will cause infinite caching. Defaults to `0s`.
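For example, a configuration that also tunes the client retry behaviour and caches the host list for one minute could look like the following sketch; the values are illustrative only, not recommendations:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: <your-google-project-id>
    zone: <your-zone>
    retry: true
    max_wait: 30s          # stop retrying after 30 seconds
    refresh_interval: 60s  # cache the host list for one minute
discovery:
  seed_providers: gce
--------------------------------------------------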
[IMPORTANT]
.Binding the network host
==============================================
It's important to define `network.host` as by default it's bound to `localhost`.
You can use {ref}/modules-network.html[core network host settings] or
<<discovery-gce-network-host,gce specific host settings>>:
==============================================
[[discovery-gce-network-host]]
==== GCE Network Host
When the `discovery-gce` plugin is installed, the following are also allowed
as valid network host settings:
[cols="<,<",options="header",]
|==================================================================
|GCE Host Value |Description
|`_gce:privateIp:X_` |The private IP address of the machine for a given network interface.
|`_gce:hostname_` |The hostname of the machine.
|`_gce_` |Same as `_gce:privateIp:0_` (recommended).
|==================================================================
Examples:
[source,yaml]
--------------------------------------------------
# get the IP address from network interface 1
network.host: _gce:privateIp:1_
# Using GCE internal hostname
network.host: _gce:hostname_
# shortcut for _gce:privateIp:0_ (recommended)
network.host: _gce_
--------------------------------------------------
[[discovery-gce-usage-short]]
===== How to start (short story)
* Create a Google Compute Engine instance (with `compute-rw` permissions)
* Install Elasticsearch
* Install the GCE Discovery plugin
* Modify the `elasticsearch.yml` file
* Start Elasticsearch
[[discovery-gce-usage-long]]
==== Setting up GCE Discovery
[[discovery-gce-usage-long-prerequisites]]
===== Prerequisites
Before starting, you need:
* Your project ID, e.g. `es-cloud`. Get it from https://code.google.com/apis/console/[Google API Console].
* To install https://developers.google.com/cloud/sdk/[Google Cloud SDK]
If you have not set it yet, you can define the default project you will work on:
[source,sh]
--------------------------------------------------
gcloud config set project es-cloud
--------------------------------------------------
[[discovery-gce-usage-long-login]]
===== Login to Google Cloud
If you haven't already, log in to Google Cloud:
[source,sh]
--------------------------------------------------
gcloud auth login
--------------------------------------------------
This will open your browser. You will be asked to sign-in to a Google account and
authorize access to the Google Cloud SDK.
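If you are working on a machine without a browser (for example over SSH), the Cloud SDK can print a URL to open on another device instead; this is a sketch assuming your SDK version supports the `--no-launch-browser` flag:
[source,sh]
--------------------------------------------------
# Prints a URL to open in a browser on another machine,
# then asks for the resulting verification code
gcloud auth login --no-launch-browser
--------------------------------------------------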
[[discovery-gce-usage-long-first-instance]]
===== Creating your first instance
[source,sh]
--------------------------------------------------
gcloud compute instances create myesnode1 \
--zone <your-zone> \
--scopes compute-rw
--------------------------------------------------
When done, a report like this one should appear:
[source,text]
--------------------------------------------------
Created [https://www.googleapis.com/compute/v1/projects/es-cloud-1070/zones/us-central1-f/instances/myesnode1].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
myesnode1 us-central1-f n1-standard-1 10.240.133.54 104.197.94.25 RUNNING
--------------------------------------------------
You can now connect to your instance:
[source,sh]
--------------------------------------------------
# Connect using google cloud SDK
gcloud compute ssh myesnode1 --zone europe-west1-a
# Or using SSH with external IP address
ssh -i ~/.ssh/google_compute_engine 192.158.29.199
--------------------------------------------------
[IMPORTANT]
.Service Account Permissions
==============================================
It's important when creating an instance that the correct permissions are set. At a minimum, you must ensure you have:
[source,text]
--------------------------------------------------
scopes=compute-rw
--------------------------------------------------
Failing to set this will result in unauthorized messages when starting Elasticsearch.
See <<discovery-gce-usage-tips-permissions>>.
==============================================
Once connected, install Elasticsearch:
[source,sh]
--------------------------------------------------
sudo apt-get update
# Download Elasticsearch
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-2.0.0.deb
# Prepare Java installation (Oracle)
sudo echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee /etc/apt/sources.list.d/webupd8team-java.list
sudo echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee -a /etc/apt/sources.list.d/webupd8team-java.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
sudo apt-get update
sudo apt-get install oracle-java8-installer
# Prepare Java installation (or OpenJDK)
# sudo apt-get install java8-runtime-headless
# Prepare Elasticsearch installation
sudo dpkg -i elasticsearch-2.0.0.deb
--------------------------------------------------
[[discovery-gce-usage-long-install-plugin]]
===== Install Elasticsearch discovery gce plugin
Install the plugin:
[source,sh]
--------------------------------------------------
# Use Plugin Manager to install it
sudo bin/elasticsearch-plugin install discovery-gce
--------------------------------------------------
Open the `elasticsearch.yml` file:
[source,sh]
--------------------------------------------------
sudo vi /etc/elasticsearch/elasticsearch.yml
--------------------------------------------------
And add the following lines:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: es-cloud
    zone: europe-west1-a
discovery:
  seed_providers: gce
--------------------------------------------------
Start Elasticsearch:
[source,sh]
--------------------------------------------------
sudo /etc/init.d/elasticsearch start
--------------------------------------------------
If anything goes wrong, you should check logs:
[source,sh]
--------------------------------------------------
tail -f /var/log/elasticsearch/elasticsearch.log
--------------------------------------------------
If needed, you can change log level to `trace` by opening `log4j2.properties`:
[source,sh]
--------------------------------------------------
sudo vi /etc/elasticsearch/log4j2.properties
--------------------------------------------------
and adding the following lines:
[source,properties]
--------------------------------------------------
# discovery
logger.discovery_gce.name = discovery.gce
logger.discovery_gce.level = trace
--------------------------------------------------
[[discovery-gce-usage-cloning]]
==== Cloning your existing machine
In order to build a cluster on many nodes, you can clone your configured instance to new nodes.
You won't have to reinstall everything!
First create an image of your running instance and upload it to Google Cloud Storage:
[source,sh]
--------------------------------------------------
# Create an image of your current instance
sudo /usr/bin/gcimagebundle -d /dev/sda -o /tmp/
# An image has been created in `/tmp` directory:
ls /tmp
e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz
# Upload your image to Google Cloud Storage:
# Create a bucket to hold your image, let's say `esimage`:
gsutil mb gs://esimage
# Copy your image to this bucket:
gsutil cp /tmp/e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz gs://esimage
# Then add your image to images collection:
gcloud compute images create elasticsearch-2-0-0 --source-uri gs://esimage/e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz
# If the previous command did not work for you, logout from your instance
# and launch the same command from your local machine.
--------------------------------------------------
[[discovery-gce-usage-start-new-instances]]
===== Start new instances
Now that you have an image, you can create as many instances as you need:
[source,sh]
--------------------------------------------------
# Just change node name (here myesnode2)
gcloud compute instances create myesnode2 --image elasticsearch-2-0-0 --zone europe-west1-a
# If you want to provide all details directly, you can use:
gcloud compute instances create myesnode2 --image=elasticsearch-2-0-0 \
--zone europe-west1-a --machine-type f1-micro --scopes=compute-rw
--------------------------------------------------
[[discovery-gce-usage-remove-instance]]
===== Remove an instance (aka shut it down)
You can use https://cloud.google.com/console[Google Cloud Console] or CLI to manage your instances:
[source,sh]
--------------------------------------------------
# Stopping and removing instances
gcloud compute instances delete myesnode1 myesnode2 \
--zone=europe-west1-a
# Consider removing the disks as well if you don't need them anymore
gcloud compute disks delete boot-myesnode1 boot-myesnode2 \
--zone=europe-west1-a
--------------------------------------------------
[[discovery-gce-usage-zones]]
==== Using GCE zones
`cloud.gce.zone` helps to retrieve instances running in a given zone. It should be one of the
https://developers.google.com/compute/docs/zones#available[GCE supported zones].
GCE discovery can support multiple zones, although you need to be aware of network latency between zones.
To enable discovery across more than one zone, just add your zone list to the `cloud.gce.zone` setting:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: <your-google-project-id>
    zone: ["<your-zone1>", "<your-zone2>"]
discovery:
  seed_providers: gce
--------------------------------------------------
[[discovery-gce-usage-tags]]
==== Filtering by tags
GCE discovery can also filter the machines to include in the cluster based on tags, using the `discovery.gce.tags` setting.
For example, setting `discovery.gce.tags` to `dev` will only include instances having a tag set to `dev`. If several tags
are set, the instance must have all of them to be included.
One practical use for tag filtering is when a GCE cluster contains many nodes
that are not master-eligible {es} nodes. In this case, tagging the GCE
instances that _are_ running the master-eligible {es} nodes, and then filtering
by that tag, will help discovery to run more efficiently.
Add your tag when building the new instance:
[source,sh]
--------------------------------------------------
gcloud compute instances create myesnode1 --project=es-cloud \
--scopes=compute-rw \
--tags=elasticsearch,dev
--------------------------------------------------
Then, define it in `elasticsearch.yml`:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: es-cloud
    zone: europe-west1-a
discovery:
  seed_providers: gce
  gce:
    tags: elasticsearch, dev
--------------------------------------------------
[[discovery-gce-usage-port]]
==== Changing default transport port
By default, the Elasticsearch GCE plugin assumes that Elasticsearch runs on the default transport port `9300`.
You can specify a different port using the Google Compute Engine metadata key `es_port`:
[[discovery-gce-usage-port-create]]
===== When creating instance
Add `--metadata es_port=9301` option:
[source,sh]
--------------------------------------------------
# when creating first instance
gcloud compute instances create myesnode1 \
--scopes=compute-rw,storage-full \
--metadata es_port=9301
# when creating an instance from an image
gcloud compute instances create myesnode2 --image=elasticsearch-1-0-0-RC1 \
--zone europe-west1-a --machine-type f1-micro --scopes=compute-rw \
--metadata es_port=9301
--------------------------------------------------
[[discovery-gce-usage-port-run]]
===== On a running instance
[source,sh]
--------------------------------------------------
gcloud compute instances add-metadata myesnode1 \
--zone europe-west1-a \
--metadata es_port=9301
--------------------------------------------------
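To check that the value is visible from within the instance, you can query the GCE metadata server; this sketch assumes the standard metadata endpoint and the `es_port` key set above:
[source,sh]
--------------------------------------------------
# Run from inside the instance; prints the configured port, e.g. 9301
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/es_port"
--------------------------------------------------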
[[discovery-gce-usage-tips]]
==== GCE Tips
[[discovery-gce-usage-tips-projectid]]
===== Store project id locally
If you don't want to repeat the project id each time, you can save it in the local gcloud config:
[source,sh]
--------------------------------------------------
gcloud config set project es-cloud
--------------------------------------------------
[[discovery-gce-usage-tips-permissions]]
===== Machine Permissions
If you have created a machine without the correct permissions, you will see `403 unauthorized` error messages. To change the machine permissions on an existing instance, first stop the instance, then edit it. Scroll down to `Access Scopes` to change the permissions. The other way to alter these permissions is to delete the instance (NOT THE DISK), then create another one with the correct permissions.
Creating machines with gcloud::
+
--
Ensure the following flags are set:
[source,text]
--------------------------------------------------
--scopes=compute-rw
--------------------------------------------------
--
Creating with console (web)::
+
--
When creating an instance using the web portal, click _Show advanced options_.
At the bottom of the page, under `PROJECT ACCESS`, choose `>> Compute >> Read Write`.
--
Creating with knife google::
+
--
Set the service account scopes when creating the machine:
[source,sh]
--------------------------------------------------
knife google server create www1 \
-m n1-standard-1 \
-I debian-8 \
-Z us-central1-a \
-i ~/.ssh/id_rsa \
-x jdoe \
--gce-service-account-scopes https://www.googleapis.com/auth/compute.full_control
--------------------------------------------------
Or, you may use the alias:
[source,sh]
--------------------------------------------------
--gce-service-account-scopes compute-rw
--------------------------------------------------
--
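To double-check which scopes an existing instance was created with, you can inspect it from your workstation; a sketch assuming the instance name and zone used earlier:
[source,sh]
--------------------------------------------------
# The scopes are listed under the serviceAccounts section of the output
gcloud compute instances describe myesnode1 --zone europe-west1-a | grep -A 5 scopes
--------------------------------------------------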
[[discovery-gce-usage-testing]]
==== Testing GCE
Integration tests in this plugin require a working GCE configuration and are
therefore disabled by default. To enable tests, prepare a config file
`elasticsearch.yml` with the following content:
[source,yaml]
--------------------------------------------------
cloud:
  gce:
    project_id: es-cloud
    zone: europe-west1-a
discovery:
  seed_providers: gce
--------------------------------------------------
Replace `project_id` and `zone` with your settings.
To run the tests:
[source,sh]
--------------------------------------------------
mvn -Dtests.gce=true -Dtests.config=/path/to/config/file/elasticsearch.yml clean test
--------------------------------------------------