[[discovery-azure-classic]]
=== Azure Classic Discovery Plugin

The Azure Classic Discovery plugin uses the Azure Classic API for unicast discovery.

// TODO: Link to ARM plugin when ready
// See issue https://github.com/elastic/elasticsearch/issues/19146
deprecated[5.0.0, Use the upcoming Azure ARM Discovery plugin instead]

[[discovery-azure-classic-install]]
[float]
==== Installation

This plugin can be installed using the plugin manager:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin install discovery-azure-classic
----------------------------------------------------------------

The plugin must be installed on every node in the cluster, and each node must
be restarted after installation.
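
After installing the plugin on a node, restarting it can be as simple as the following sketch, which assumes the Debian package and its `service` wrapper used later in this guide (adjust for your own init system):

[source,sh]
----
# restart a single node so it picks up the newly installed plugin
sudo service elasticsearch restart
----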

This plugin can be downloaded for <<plugin-management-custom-url,offline install>> from
{plugin_url}/discovery-azure-classic/discovery-azure-classic-{version}.zip.
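
For an offline install, the downloaded zip can then be installed from a local file URL, for example (the path below is only an illustration):

[source,sh]
----
# install the previously downloaded plugin zip without network access
sudo bin/elasticsearch-plugin install file:///path/to/discovery-azure-classic-{version}.zip
----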

[[discovery-azure-classic-remove]]
[float]
==== Removal

The plugin can be removed with the following command:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin remove discovery-azure-classic
----------------------------------------------------------------

The node must be stopped before removing the plugin.

[[discovery-azure-classic-usage]]
==== Azure Virtual Machine Discovery

Azure VM discovery allows you to use the Azure APIs to perform automatic discovery,
similar to multicast in environments where multicast is available. Here is a simple sample configuration:

[source,yaml]
----
cloud:
    azure:
        management:
            subscription.id: XXX-XXX-XXX-XXX
            cloud.service.name: es-demo-app
            keystore:
                path: /path/to/azurekeystore.pkcs12
                password: WHATEVER
                type: pkcs12

discovery:
    zen.hosts_provider: azure
----

[IMPORTANT]
.Binding the network host
==============================================

The keystore file must be placed in a directory accessible to Elasticsearch, such as the `config` directory.

It is important to define `network.host`, because by default it is bound to `localhost`.

You can use the {ref}/modules-network.html[core network host settings]. For example, `_en0_`.

==============================================
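
For instance, binding to a specific interface in `elasticsearch.yml` could look like the following sketch (`_en0_` is only an example; use an interface or special value that exists on your VM):

[source,yaml]
----
# bind to the address of the en0 interface instead of localhost
network.host: _en0_
----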

[[discovery-azure-classic-short]]
===== How to start (short story)

* Create Azure instances
* Install Elasticsearch
* Install Azure plugin
* Modify `elasticsearch.yml` file
* Start Elasticsearch

[[discovery-azure-classic-settings]]
===== Azure credential API settings

The following settings can further control the credential API:

[horizontal]
`cloud.azure.management.keystore.path`::

/path/to/keystore

`cloud.azure.management.keystore.type`::

`pkcs12`, `jceks` or `jks`. Defaults to `pkcs12`.

`cloud.azure.management.keystore.password`::

your_password for the keystore

`cloud.azure.management.subscription.id`::

your_azure_subscription_id

`cloud.azure.management.cloud.service.name`::

your_azure_cloud_service_name. This is the cloud service name/DNS but without the `cloudapp.net` part.
So if the DNS name is `abc.cloudapp.net` then the `cloud.service.name` to use is just `abc`.
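
Putting these together, a hypothetical configuration for a cloud service whose DNS name is `abc.cloudapp.net` could look like this (all values are placeholders):

[source,yaml]
----
cloud.azure.management.keystore.path: /path/to/azurekeystore.pkcs12
cloud.azure.management.keystore.type: pkcs12
cloud.azure.management.keystore.password: your_password
cloud.azure.management.subscription.id: your_azure_subscription_id
# the DNS name is abc.cloudapp.net, so the cloud service name is just "abc"
cloud.azure.management.cloud.service.name: abc
----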

[[discovery-azure-classic-settings-advanced]]
===== Advanced settings

The following settings can further control the discovery:

`discovery.azure.host.type`::

Either `public_ip` or `private_ip` (default). Azure discovery will use the
one you set to ping other nodes.

`discovery.azure.endpoint.name`::

When using `public_ip`, this setting identifies the endpoint name used to
forward requests to Elasticsearch (that is, the transport port name).
Defaults to `elasticsearch`. For example, in the Azure management console you
could define an endpoint named `elasticsearch` that forwards requests from
port 8100 on the public IP to port 9300 on the virtual machine.

`discovery.azure.deployment.name`::

Deployment name if any. Defaults to the value set with
`cloud.azure.management.cloud.service.name`.

`discovery.azure.deployment.slot`::

Either `staging` or `production` (default).

For example:

[source,yaml]
----
discovery:
    type: azure
    azure:
        host:
            type: private_ip
        endpoint:
            name: elasticsearch
        deployment:
            name: your_azure_cloud_service_name
            slot: production
----

[[discovery-azure-classic-long]]
==== Setup process for Azure Discovery

Here we describe one strategy, which is to hide the Elasticsearch cluster from the outside world.

With this strategy, only VMs behind the same virtual port can talk to each
other. That means that with this mode, you can use Elasticsearch unicast
discovery to build a cluster, using the Azure API to retrieve information
about your nodes.

[[discovery-azure-classic-long-prerequisites]]
===== Prerequisites

Before starting, you need to have:

* A http://www.windowsazure.com/[Windows Azure account]
* OpenSSL that isn't from MacPorts; specifically, `OpenSSL 1.0.1f 6 Jan
2014` doesn't seem to create a valid keypair for ssh. FWIW,
`OpenSSL 1.0.1c 10 May 2012` on Ubuntu 14.04 LTS is known to work.
* SSH keys and certificate
+
--

You should follow http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/[this guide] to learn
how to create or use existing SSH keys. If you have already done this, you can skip the following.

Here is how to generate SSH keys using `openssl`:

[source,sh]
----
# You may want to use another dir than /tmp
cd /tmp
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout azure-private.key -out azure-certificate.pem
chmod 600 azure-private.key azure-certificate.pem
openssl x509 -outform der -in azure-certificate.pem -out azure-certificate.cer
----

Generate a keystore which the plugin will use to authenticate all Azure API calls
with your certificate.

[source,sh]
----
# Generate a keystore (azurekeystore.pkcs12)
# Transform private key to PEM format
openssl pkcs8 -topk8 -nocrypt -in azure-private.key -inform PEM -out azure-pk.pem -outform PEM
# Transform certificate to PEM format
openssl x509 -inform der -in azure-certificate.cer -out azure-cert.pem
cat azure-cert.pem azure-pk.pem > azure.pem.txt
# You MUST enter a password!
openssl pkcs12 -export -in azure.pem.txt -out azurekeystore.pkcs12 -name azure -noiter -nomaciter
----
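
If you want to verify the keystore before using it, you can list its contents with the JDK `keytool` (optional, and assumes a JDK is installed locally):

[source,sh]
----
# should list a single entry named "azure"
keytool -list -keystore azurekeystore.pkcs12 -storetype pkcs12
----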

Upload the `azure-certificate.cer` file both in the elasticsearch Cloud Service (under `Manage Certificates`),
and under `Settings -> Manage Certificates`.

IMPORTANT: When prompted for a password, you need to enter a non-empty one.

See this http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/[guide] for
more details about how to create keys for Azure.

Once done, you need to upload your certificate to Azure:

* Go to the https://account.windowsazure.com/[management console].
* Sign in using your account.
* Click on `Portal`.
* Go to Settings (bottom of the left list).
* On the bottom bar, click on `Upload` and upload your `azure-certificate.cer` file.

You may want to use the
http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/[Windows Azure Command-Line Tool]:

--

* Install https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager[NodeJS], for example using
Homebrew on Mac OS X:
+
[source,sh]
----
brew install node
----

* Install Azure tools:
+
[source,sh]
----
sudo npm install azure-cli -g
----

* Download and import your Azure settings:
+
[source,sh]
----
# This will open a browser and will download a .publishsettings file
azure account download

# Import this file (we have downloaded it to /tmp)
# Note, it will create needed files in ~/.azure. You can remove azure.publishsettings when done.
azure account import /tmp/azure.publishsettings
----

[[discovery-azure-classic-long-instance]]
===== Creating your first instance

You need to have a storage account available. Check the http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/#create-account[Azure Blob Storage documentation]
for more information.

You will need to choose the operating system you want to run on. To get a list of officially available images, run:

[source,sh]
----
azure vm image list
----

Let's say we are going to deploy an Ubuntu image on an extra small instance in West Europe:

[horizontal]
Azure cluster name::

`azure-elasticsearch-cluster`

Image::

`b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20130808-alpha3-en-us-30GB`

VM Name::

`myesnode1`

VM Size::

`extrasmall`

Location::

`West Europe`

Login::

`elasticsearch`

Password::

`password1234!!`

Using the command line:

[source,sh]
----
azure vm create azure-elasticsearch-cluster \
                b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20130808-alpha3-en-us-30GB \
                --vm-name myesnode1 \
                --location "West Europe" \
                --vm-size extrasmall \
                --ssh 22 \
                --ssh-cert /tmp/azure-certificate.pem \
                elasticsearch password1234\!\!
----

You should see something like:

[source,text]
----
info: Executing command vm create
+ Looking up image
+ Looking up cloud service
+ Creating cloud service
+ Retrieving storage accounts
+ Configuring certificate
+ Creating VM
info: vm create command OK
----

Now, your first instance is started.
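
If you want to double-check the state of the new VM before connecting to it, you can list your VMs with the CLI (optional):

[source,sh]
----
# the new VM should appear in the list once provisioning completes
azure vm list
----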

[TIP]
.Working with SSH
===============================================

You need to give the private key and username each time you log on to your instance:

[source,sh]
----
ssh -i ~/.ssh/azure-private.key elasticsearch@myescluster.cloudapp.net
----

But you can also define it once in the `~/.ssh/config` file:

[source,text]
----
Host *.cloudapp.net
  User elasticsearch
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
  IdentityFile ~/.ssh/azure-private.key
----
===============================================

Next, you need to install Elasticsearch on your new instance. First, copy your
keystore to the instance, then connect to the instance using SSH:

[source,sh]
----
scp /tmp/azurekeystore.pkcs12 azure-elasticsearch-cluster.cloudapp.net:/home/elasticsearch
ssh azure-elasticsearch-cluster.cloudapp.net
----

Once connected, install Elasticsearch:

["source","sh",subs="attributes,callouts"]
----
# Install Latest Java version
# Read http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html for details
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

# If you want to install OpenJDK instead
# sudo apt-get update
# sudo apt-get install openjdk-8-jre-headless

# Download Elasticsearch
curl -s https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-{version}.deb -o elasticsearch-{version}.deb

# Prepare Elasticsearch installation
sudo dpkg -i elasticsearch-{version}.deb
----
// NOTCONSOLE

Check that elasticsearch is running:

[source,js]
----
GET /
----
// CONSOLE

This command should give you a JSON result:

["source","js",subs="attributes,callouts"]
--------------------------------------------
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "{version}",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "{lucene_version}"
  },
  "tagline" : "You Know, for Search"
}
--------------------------------------------
// TESTRESPONSE[s/"name" : "Cp8oag6",/"name" : "$body.name",/]
// TESTRESPONSE[s/"cluster_name" : "elasticsearch",/"cluster_name" : "$body.cluster_name",/]
// TESTRESPONSE[s/"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",/"cluster_uuid" : "$body.cluster_uuid",/]
// TESTRESPONSE[s/"build_hash" : "f27399d",/"build_hash" : "$body.version.build_hash",/]
// TESTRESPONSE[s/"build_date" : "2016-03-30T09:51:41.449Z",/"build_date" : $body.version.build_date,/]
// TESTRESPONSE[s/"build_snapshot" : false,/"build_snapshot" : $body.version.build_snapshot,/]
// So much s/// but at least we test that the layout is close to matching....
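
If you prefer to check from a shell on the VM itself, the same request can be made with `curl` (replace `localhost` with the address you bound via `network.host`, and 9200 with your HTTP port if you changed the defaults):

[source,sh]
----
# same check as the console request above, from the local shell
curl http://localhost:9200/
----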

[[discovery-azure-classic-long-plugin]]
===== Install the discovery-azure-classic plugin

[source,sh]
----
# Stop elasticsearch
sudo service elasticsearch stop

# Install the plugin
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install discovery-azure-classic

# Configure it
sudo vi /etc/elasticsearch/elasticsearch.yml
----

And add the following lines:

[source,yaml]
----
# If you don't remember your account id, you may get it with `azure account list`
cloud:
    azure:
        management:
            subscription.id: your_azure_subscription_id
            cloud.service.name: your_azure_cloud_service_name
            keystore:
                path: /home/elasticsearch/azurekeystore.pkcs12
                password: your_password_for_keystore

discovery:
    type: azure

# Recommended (warning: non-durable disk)
# path.data: /mnt/resource/elasticsearch/data
----

Restart elasticsearch:

[source,sh]
----
sudo service elasticsearch start
----

If anything goes wrong, check your logs in `/var/log/elasticsearch`.
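
For example, you can follow the main log file while the node starts up and attempts Azure discovery (the file name assumes the default cluster name, `elasticsearch`):

[source,sh]
----
# watch the node log during startup and discovery
sudo tail -f /var/log/elasticsearch/elasticsearch.log
----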

[[discovery-azure-classic-scale]]
==== Scaling Out!

First, you need to create an image of your previous machine.
Disconnect from your machine and run the following commands locally:

[source,sh]
----
# Shutdown the instance
azure vm shutdown myesnode1

# Create an image from this instance (it could take some minutes)
azure vm capture myesnode1 esnode-image --delete

# Note that the previous instance has been deleted (mandatory),
# so you need to create it again (and you can create other instances from the same image).

azure vm create azure-elasticsearch-cluster \
                esnode-image \
                --vm-name myesnode1 \
                --location "West Europe" \
                --vm-size extrasmall \
                --ssh 22 \
                --ssh-cert /tmp/azure-certificate.pem \
                elasticsearch password1234\!\!
----

[TIP]
=========================================
Azure may change the public IP address of the endpoint. DNS propagation can
take a few minutes before you can connect again using the DNS name. If needed,
you can get the IP address from Azure with:

[source,sh]
----
# Look at Network `Endpoints 0 Vip`
azure vm show myesnode1
----

=========================================

Let's start more instances!

[source,sh]
----
for x in $(seq 2 10)
do
  echo "Launching azure instance #$x..."
  azure vm create azure-elasticsearch-cluster \
                  esnode-image \
                  --vm-name myesnode$x \
                  --vm-size extrasmall \
                  --ssh $((21 + $x)) \
                  --ssh-cert /tmp/azure-certificate.pem \
                  --connect \
                  elasticsearch password1234\!\!
done
----

If you want to remove your running instances:

[source,sh]
----
azure vm delete myesnode1
----