Preparing release for 1.0.0-Alpha

David Pilato 2013-11-06 09:54:30 +01:00
parent fba4f6b1a1
commit a1f85f6d90
24 changed files with 1211 additions and 357 deletions

README.md

@ -5,13 +5,27 @@ The Azure Cloud plugin allows to use Azure API for the unicast discovery mechani
In order to install the plugin, simply run: `bin/plugin -install elasticsearch/elasticsearch-cloud-azure/1.0.0`.
-----------------------------------------
| Azure Cloud Plugin | ElasticSearch |
-----------------------------------------
| master | 0.90 -> master |
-----------------------------------------
| 1.0.0 | 0.20.6 |
-----------------------------------------
<table>
<thead>
<tr>
<td>Azure Cloud Plugin</td>
<td>ElasticSearch</td>
<td>Release date</td>
</tr>
</thead>
<tbody>
<tr>
<td>1.1.0-SNAPSHOT (master)</td>
<td>0.90.6</td>
<td></td>
</tr>
<tr>
<td>1.0.0</td>
<td>0.90.6</td>
<td>2013-11-12</td>
</tr>
</tbody>
</table>
Azure Virtual Machine Discovery
@ -23,10 +37,10 @@ multicast environments). Here is a simple sample configuration:
```
cloud:
azure:
private_key: /path/to/private.key
certificate: /path/to/azure.certficate
password: your_password_for_pk
keystore: /path/to/keystore
password: your_password_for_keystore
subscription_id: your_azure_subscription_id
service_name: your_azure_cloud_service_name
discovery:
type: azure
```
@ -124,6 +138,7 @@ Let's say we are going to deploy an Ubuntu image on an extra small instance in W
* VM Size: `extrasmall`
* Location: `West Europe`
* Login: `elasticsearch`
* Password: `password1234!!`
Using command line:
@ -135,7 +150,7 @@ azure vm create azure-elasticsearch-cluster \
--vm-size extrasmall \
--ssh 22 \
--ssh-cert /tmp/azure-certificate.pem \
elasticsearch password
elasticsearch password1234!!
```
You should see something like:
@ -183,15 +198,17 @@ ssh azure-elasticsearch-cluster.cloudapp.net
Once connected, install Elasticsearch:
```sh
# Install Latest JDK
# Install Latest OpenJDK
# If you would like to use Oracle JDK instead, read the following:
# http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html
sudo apt-get update
sudo apt-get install openjdk-7-jre-headless
# Download Elasticsearch
curl -s https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.3.deb -o elasticsearch-0.90.3.deb
curl -s https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.6.deb -o elasticsearch-0.90.6.deb
# Prepare Elasticsearch installation
sudo dpkg -i elasticsearch-0.90.3.deb
sudo dpkg -i elasticsearch-0.90.6.deb
```
Check that elasticsearch is running:
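The check itself sits outside this diff hunk; a typical probe, assuming Elasticsearch listens on the default HTTP port `9200`, would look like:

```sh
# Query the root endpoint of the local node (default HTTP port assumed)
curl http://localhost:9200/
```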
@ -206,13 +223,13 @@ This command should give you a JSON result:
{
"ok" : true,
"status" : 200,
"name" : "Mandarin",
"name" : "Grey, Dr. John",
"version" : {
"number" : "0.90.3",
"build_hash" : "5c38d6076448b899d758f29443329571e2522410",
"build_timestamp" : "2013-08-06T13:18:31Z",
"number" : "0.90.6",
"build_hash" : "e2a24efdde0cb7cc1b2071ffbbd1fd874a6d8d6b",
"build_timestamp" : "2013-11-04T13:44:16Z",
"build_snapshot" : false,
"lucene_version" : "4.4"
"lucene_version" : "4.5.1"
},
"tagline" : "You Know, for Search"
}
@ -220,8 +237,6 @@ This command should give you a JSON result:
### Install nodejs and Azure tools
*TODO: check if there is a downloadable version of NodeJS*
```sh
# Install node (aka download and compile source)
sudo apt-get update
@ -240,7 +255,7 @@ azure account import /tmp/azure.publishsettings
rm /tmp/azure.publishsettings
```
### Generate private keys for this instance
### Generate a keystore
```sh
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout azure-private.key -out azure-certificate.pem
@ -250,9 +265,13 @@ openssl x509 -outform der -in azure-certificate.pem -out azure-certificate.cer
openssl pkcs8 -topk8 -nocrypt -in azure-private.key -inform PEM -out azure-pk.pem -outform PEM
# Transform certificate to PEM format
openssl x509 -inform der -in azure-certificate.cer -out azure-cert.pem
cat azure-cert.pem azure-pk.pem > azure.pem.txt
# You MUST enter a password!
openssl pkcs12 -export -in azure.pem.txt -out azurekeystore.pkcs12 -name azure -noiter -nomaciter
```
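Before uploading, you can sanity-check the generated keystore (assuming the JDK's `keytool` is on the path):

```sh
# List the entries of the PKCS12 keystore; you will be prompted for the password you just set
keytool -list -keystore azurekeystore.pkcs12 -storetype pkcs12
```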
Upload the generated key to Azure platform
Upload the generated key to the Azure platform. **Important**: when prompted for a password,
you need to enter a non-empty one.
```sh
azure service cert create azure-elasticsearch-cluster azure-certificate.cer
@ -264,8 +283,8 @@ azure service cert create azure-elasticsearch-cluster azure-certificate.cer
# Stop elasticsearch
sudo service elasticsearch stop
# Install the plugin (TODO : USE THE RIGHT VERSION NUMBER)
sudo /usr/share/elasticsearch/bin/plugin -install elasticsearch/elasticsearch-cloud-azure/0.1.0-SNAPSHOT
# Install the plugin
sudo /usr/share/elasticsearch/bin/plugin -install elasticsearch/elasticsearch-cloud-azure/1.0.0
# Configure it
sudo vi /etc/elasticsearch/elasticsearch.yml
@ -277,9 +296,10 @@ And add the following lines:
# If you don't remember your account id, you may get it with `azure account list`
cloud:
azure:
private_key: /home/elasticsearch/azure-pk.pem
certificate: /home/elasticsearch/azure-cert.pem
keystore: /home/elasticsearch/azurekeystore.pkcs12
password: your_password_for_keystore
subscription_id: your_azure_subscription_id
service_name: your_azure_cloud_service_name
discovery:
type: azure
```
@ -293,28 +313,63 @@ sudo service elasticsearch start
If anything goes wrong, check your logs in `/var/log/elasticsearch`.
TODO: Ask pierre for Azure commands
Scaling Out!
------------
Cloning your existing machine:
You first need to create an image of your existing machine.
Disconnect from the machine and run the following commands locally:
```sh
azure ....
# Shutdown the instance
azure vm shutdown myesnode1
# Create an image from this instance (it could take some minutes)
azure vm capture myesnode1 esnode-image --delete
# Note that the previous instance has been deleted (mandatory)
# So you need to create it again and BTW create other instances.
azure vm create azure-elasticsearch-cluster \
esnode-image \
--vm-name myesnode1 \
--location "West Europe" \
--vm-size extrasmall \
--ssh 22 \
--ssh-cert /tmp/azure-certificate.pem \
elasticsearch password1234!!
```
> **Note:** It can happen that Azure changes the endpoint's public IP address.
> DNS propagation can take some minutes before you can connect again using the
> DNS name. If needed, you can get the IP address from Azure using:
>
> ```sh
> # Look at Network `Endpoints 0 Vip`
> azure vm show myesnode1
> ```
Add a new machine:
Let's start more instances!
```sh
azure vm create -c myescluster --vm-name myesnode2 b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130501-en-us-30GB -l "West Europe" --vm-size extrasmall --ssh 22 elasticsearch fantastic0!
for x in $(seq 2 10)
do
echo "Launching azure instance #$x..."
azure vm create azure-elasticsearch-cluster \
esnode-image \
--vm-name myesnode$x \
--vm-size extrasmall \
--ssh $((21 + $x)) \
--ssh-cert /tmp/azure-certificate.pem \
--connect \
elasticsearch password1234!!
done
```
Add your certificate for this new instance.
If you want to remove your running instances:
```sh
# Add certificate for this instance
azure service cert create myescluster1 azure-certificate.cer
```
azure vm delete myesnode1
```
License
-------

pom.xml

@ -17,7 +17,7 @@ governing permissions and limitations under the License. -->
<modelVersion>4.0.0</modelVersion>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-cloud-azure</artifactId>
<version>0.1.0-SNAPSHOT</version>
<version>1.0.0-SNAPSHOT</version>
<packaging>jar</packaging>
<description>Azure Cloud plugin for ElasticSearch</description>
<inceptionYear>2013</inceptionYear>
@ -42,8 +42,7 @@ governing permissions and limitations under the License. -->
</parent>
<properties>
<elasticsearch.version>0.90.3</elasticsearch.version>
<jclouds.version>1.5.7</jclouds.version>
<elasticsearch.version>0.90.6</elasticsearch.version>
</properties>
<dependencies>
@ -53,17 +52,6 @@ governing permissions and limitations under the License. -->
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.jclouds</groupId>
<artifactId>jclouds-compute</artifactId>
<version>${jclouds.version}</version>
</dependency>
<dependency>
<groupId>org.jclouds.labs</groupId>
<artifactId>azure-management</artifactId>
<version>${jclouds.version}</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>


@ -25,14 +25,5 @@ governing permissions and limitations under the License. -->
<exclude>org.elasticsearch:elasticsearch</exclude>
</excludes>
</dependencySet>
<dependencySet>
<outputDirectory>/</outputDirectory>
<useProjectArtifact>true</useProjectArtifact>
<useTransitiveFiltering>true</useTransitiveFiltering>
<includes>
<include>org.jclouds:jclouds-compute</include>
<include>org.jclouds.labs:azure-management</include>
</includes>
</dependencySet>
</dependencySets>
</assembly>


@ -19,221 +19,22 @@
package org.elasticsearch.cloud.azure;
import com.google.common.collect.ImmutableSet;
import com.google.inject.Module;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.Lists;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;
import org.elasticsearch.transport.TransportService;
import org.jclouds.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.azure.management.AzureManagementApi;
import org.jclouds.azure.management.AzureManagementApiMetadata;
import org.jclouds.azure.management.AzureManagementAsyncApi;
import org.jclouds.azure.management.config.AzureManagementProperties;
import org.jclouds.azure.management.domain.Deployment;
import org.jclouds.azure.management.domain.HostedServiceWithDetailedProperties;
import org.jclouds.logging.LoggingModules;
import org.jclouds.rest.RestContext;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.InetAddress;
import java.util.List;
import java.util.Properties;
import java.util.Scanner;
import java.util.Set;
import static org.elasticsearch.common.Strings.cleanPath;
/**
*
*/
public class AzureComputeService extends AbstractLifecycleComponent<AzureComputeService> {
public interface AzureComputeService {
static final class Fields {
private static final String ENDPOINT = "https://management.core.windows.net/";
private static final String VERSION = "2012-08-01";
private static final String SUBSCRIPTION_ID = "subscription_id";
private static final String PASSWORD = "password";
private static final String CERTIFICATE = "certificate";
private static final String PRIVATE_KEY = "private_key";
static public final class Fields {
public static final String SUBSCRIPTION_ID = "subscription_id";
public static final String SERVICE_NAME = "service_name";
public static final String KEYSTORE = "keystore";
public static final String PASSWORD = "password";
public static final String REFRESH = "refresh_interval";
public static final String PORT_NAME = "port_name";
public static final String HOST_TYPE = "host_type";
}
private List<DiscoveryNode> discoNodes;
private TransportService transportService;
private NetworkService networkService;
@Inject
public AzureComputeService(Settings settings, SettingsFilter settingsFilter, TransportService transportService,
NetworkService networkService) {
super(settings);
settingsFilter.addFilter(new AzureSettingsFilter());
this.transportService = transportService;
this.networkService = networkService;
}
/**
* We build the list of Nodes from Azure Management API
* @param client Azure Client
*/
private List<DiscoveryNode> buildNodes(RestContext<AzureManagementApi, AzureManagementAsyncApi> client) {
List<DiscoveryNode> discoNodes = Lists.newArrayList();
String ipAddress = null;
try {
InetAddress inetAddress = networkService.resolvePublishHostAddress(null);
if (inetAddress != null) {
ipAddress = inetAddress.getHostAddress();
}
} catch (IOException e) {
// We can't find the publish host address... Hmmm. Too bad :-(
}
Set<HostedServiceWithDetailedProperties> response = client.getApi().getHostedServiceApi().list();
for (HostedServiceWithDetailedProperties hostedService : response) {
// Ask Azure for each IP address
Deployment deployment = client.getApi().getHostedServiceApi().getDeployment(hostedService.getName(), hostedService.getName());
if (deployment != null) {
try {
if (deployment.getPrivateIpAddress().equals(ipAddress) ||
deployment.getPublicIpAddress().equals(ipAddress)) {
// We found the current node.
// We can ignore it in the list of DiscoveryNode
// We can now set the public Address as the publish address (if not already set)
String publishHost = settings.get("transport.publish_host", settings.get("transport.host"));
if (!Strings.hasText(publishHost) || !deployment.getPublicIpAddress().equals(publishHost)) {
logger.info("you should define publish_host with {}", deployment.getPublicIpAddress());
}
} else {
TransportAddress[] addresses = transportService.addressesFromString(deployment.getPublicIpAddress());
// we only limit to 1 addresses, makes no sense to ping 100 ports
for (int i = 0; (i < addresses.length && i < UnicastZenPing.LIMIT_PORTS_COUNT); i++) {
logger.trace("adding {}, address {}, transport_address {}", hostedService.getName(), deployment.getPublicIpAddress(), addresses[i]);
discoNodes.add(new DiscoveryNode("#cloud-" + hostedService.getName() + "-" + i, addresses[i], Version.CURRENT));
}
}
} catch (Exception e) {
logger.warn("failed to add {}, address {}", e, hostedService.getName(), deployment.getPublicIpAddress());
}
} else {
logger.trace("ignoring {}", hostedService.getName());
}
}
return discoNodes;
}
public synchronized List<DiscoveryNode> nodes() {
if (this.discoNodes != null) {
return this.discoNodes;
}
String PK8_PATH = componentSettings.get(Fields.PRIVATE_KEY, settings.get("cloud." + Fields.PRIVATE_KEY));
String CERTIFICATE_PATH = componentSettings.get(Fields.CERTIFICATE, settings.get("cloud." + Fields.CERTIFICATE));
String PASSWORD = componentSettings.get(Fields.PASSWORD, settings.get("cloud" + Fields.PASSWORD, ""));
String SUBSCRIPTION_ID = componentSettings.get(Fields.SUBSCRIPTION_ID, settings.get("cloud." + Fields.SUBSCRIPTION_ID));
// Check that we have all needed properties
if (!checkProperty(Fields.SUBSCRIPTION_ID, SUBSCRIPTION_ID)) return null;
if (!checkProperty(Fields.CERTIFICATE, CERTIFICATE_PATH)) return null;
if (!checkProperty(Fields.PRIVATE_KEY, PK8_PATH)) return null;
// Reading files from local disk
String pk8 = readFromFile(cleanPath(PK8_PATH));
String cert = readFromFile(cleanPath(CERTIFICATE_PATH));
// Check file content
if (!checkProperty(Fields.CERTIFICATE, cert)) return null;
if (!checkProperty(Fields.PRIVATE_KEY, pk8)) return null;
String IDENTITY = pk8 + cert;
// We set properties used to create an Azure client
Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_TRUST_ALL_CERTS, "true");
overrides.setProperty(Constants.PROPERTY_RELAX_HOSTNAME, "true");
overrides.setProperty("azure-management.identity", IDENTITY);
overrides.setProperty("azure-management.credential", PASSWORD);
overrides.setProperty("azure-management.endpoint", Fields.ENDPOINT + SUBSCRIPTION_ID);
overrides.setProperty("azure-management.api-version", Fields.VERSION);
overrides.setProperty("azure-management.build-version", "");
overrides.setProperty(AzureManagementProperties.SUBSCRIPTION_ID, SUBSCRIPTION_ID);
RestContext<AzureManagementApi, AzureManagementAsyncApi> client = null;
try {
client = ContextBuilder.newBuilder("azure-management")
.modules(ImmutableSet.<Module>of(LoggingModules.firstOrJDKLoggingModule()))
.overrides(overrides)
.build(AzureManagementApiMetadata.CONTEXT_TOKEN);
logger.debug("starting Azure discovery service for [{}]", SUBSCRIPTION_ID);
this.discoNodes = buildNodes(client);
} catch (Throwable t) {
logger.warn("error while trying to find nodes for azure service [{}]: {}", SUBSCRIPTION_ID, t.getMessage());
logger.debug("error found is: ", t);
// We create an empty list in that specific case.
// So discovery process won't fail with NPE but this node won't join any cluster
this.discoNodes = Lists.newArrayList();
} finally {
if (client != null) client.close();
}
logger.debug("using dynamic discovery nodes {}", discoNodes);
return this.discoNodes;
}
@Override
protected void doStart() throws ElasticSearchException {
}
@Override
protected void doStop() throws ElasticSearchException {
}
@Override
protected void doClose() throws ElasticSearchException {
}
private String readFromFile(String path) {
try {
logger.trace("reading file content from [{}]", path);
StringBuilder text = new StringBuilder();
String NL = System.getProperty("line.separator");
Scanner scanner = new Scanner(new FileInputStream(path), "UTF-8");
try {
while (scanner.hasNextLine()){
text.append(scanner.nextLine() + NL);
}
return text.toString();
}
finally{
scanner.close();
}
} catch (FileNotFoundException e) {
logger.trace("file does not exist [{}]", path);
}
return null;
}
private boolean checkProperty(String name, String value) {
if (!Strings.hasText(value)) {
logger.warn("cloud.azure.{} is not set. Disabling azure discovery.", name);
return false;
}
return true;
}
public Set<Instance> instances();
}


@ -0,0 +1,221 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cloud.azure;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsException;
import org.elasticsearch.common.settings.SettingsFilter;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.security.*;
import java.security.cert.CertificateException;
import java.util.HashSet;
import java.util.Set;
/**
*
*/
public class AzureComputeServiceImpl extends AbstractLifecycleComponent<AzureComputeServiceImpl>
implements AzureComputeService {
static final class Azure {
private static final String ENDPOINT = "https://management.core.windows.net/";
private static final String VERSION = "2013-03-01";
}
private SSLSocketFactory socketFactory;
private final String keystore;
private final String password;
private final String subscription_id;
private final String service_name;
private final String port_name;
@Inject
public AzureComputeServiceImpl(Settings settings, SettingsFilter settingsFilter) {
super(settings);
settingsFilter.addFilter(new AzureSettingsFilter());
// Creating socketFactory
subscription_id = componentSettings.get(Fields.SUBSCRIPTION_ID, settings.get("cloud.azure." + Fields.SUBSCRIPTION_ID));
service_name = componentSettings.get(Fields.SERVICE_NAME, settings.get("cloud.azure." + Fields.SERVICE_NAME));
keystore = componentSettings.get(Fields.KEYSTORE, settings.get("cloud.azure." + Fields.KEYSTORE));
password = componentSettings.get(Fields.PASSWORD, settings.get("cloud.azure." + Fields.PASSWORD));
port_name = componentSettings.get(Fields.PORT_NAME, settings.get("cloud.azure." + Fields.PORT_NAME, "elasticsearch"));
// Check that we have all needed properties
try {
checkProperty(Fields.SUBSCRIPTION_ID, subscription_id);
checkProperty(Fields.SERVICE_NAME, service_name);
checkProperty(Fields.KEYSTORE, keystore);
checkProperty(Fields.PASSWORD, password);
socketFactory = getSocketFactory(keystore, password);
if (logger.isTraceEnabled()) logger.trace("creating new Azure client for [{}], [{}], [{}], [{}]",
subscription_id, service_name, port_name);
} catch (Exception e) {
// Can not start Azure Client
logger.error("can not start azure client: {}", e.getMessage());
socketFactory = null;
}
}
private InputStream getXML(String api) throws UnrecoverableKeyException, NoSuchAlgorithmException, KeyStoreException, IOException, CertificateException, KeyManagementException {
String https_url = Azure.ENDPOINT + subscription_id + api;
URL url = new URL( https_url );
HttpsURLConnection con = (HttpsURLConnection) url.openConnection();
con.setSSLSocketFactory( socketFactory );
con.setRequestProperty("x-ms-version", Azure.VERSION);
if (logger.isDebugEnabled()) logger.debug("calling azure REST API: {}", api);
if (logger.isTraceEnabled()) logger.trace("get {} from azure", https_url);
return con.getInputStream();
}
@Override
public Set<Instance> instances() {
if (socketFactory == null) {
// Azure plugin is disabled
if (logger.isTraceEnabled()) logger.trace("azure plugin is disabled. Returning an empty list of nodes.");
return new HashSet<Instance>();
} else {
try {
String SERVICE_NAME = componentSettings.get(Fields.SERVICE_NAME, settings.get("cloud." + Fields.SERVICE_NAME));
InputStream stream = getXML("/services/hostedservices/" + SERVICE_NAME + "?embed-detail=true");
Set<Instance> instances = buildInstancesFromXml(stream, port_name);
if (logger.isTraceEnabled()) logger.trace("get instances from azure: {}", instances);
return instances;
} catch (ParserConfigurationException e) {
logger.warn("can not parse XML response: {}", e.getMessage());
return new HashSet<Instance>();
} catch (XPathExpressionException e) {
logger.warn("can not parse XML response: {}", e.getMessage());
return new HashSet<Instance>();
} catch (SAXException e) {
logger.warn("can not parse XML response: {}", e.getMessage());
return new HashSet<Instance>();
} catch (Exception e) {
logger.warn("can not get list of azure nodes: {}", e.getMessage());
return new HashSet<Instance>();
}
}
}
private static String extractValueFromPath(Node node, String path) throws XPathExpressionException {
XPath xPath = XPathFactory.newInstance().newXPath();
Node subnode = (Node) xPath.compile(path).evaluate(node, XPathConstants.NODE);
return subnode.getFirstChild().getNodeValue();
}
public static Set<Instance> buildInstancesFromXml(InputStream inputStream, String port_name) throws ParserConfigurationException, IOException, SAXException, XPathExpressionException {
Set<Instance> instances = new HashSet<Instance>();
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(inputStream);
doc.getDocumentElement().normalize();
XPath xPath = XPathFactory.newInstance().newXPath();
// We only fetch Started nodes (TODO: should we start with all nodes whatever the status is?)
String expression = "/HostedService/Deployments/Deployment/RoleInstanceList/RoleInstance[PowerState='Started']";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(doc, XPathConstants.NODESET);
for (int i = 0; i < nodeList.getLength(); i++) {
Instance instance = new Instance();
Node node = nodeList.item(i);
instance.setPrivateIp(extractValueFromPath(node, "IpAddress"));
instance.setName(extractValueFromPath(node, "InstanceName"));
instance.setStatus(Instance.Status.STARTED);
// Let's digg into <InstanceEndpoints>
expression = "InstanceEndpoints/InstanceEndpoint[Name='"+ port_name +"']";
NodeList endpoints = (NodeList) xPath.compile(expression).evaluate(node, XPathConstants.NODESET);
for (int j = 0; j < endpoints.getLength(); j++) {
Node endpoint = endpoints.item(j);
instance.setPublicIp(extractValueFromPath(endpoint, "Vip"));
instance.setPublicPort(extractValueFromPath(endpoint, "PublicPort"));
}
instances.add(instance);
}
return instances;
}
private SSLSocketFactory getSocketFactory(String keystore, String password) throws NoSuchAlgorithmException, KeyStoreException, IOException, CertificateException, UnrecoverableKeyException, KeyManagementException {
File pKeyFile = new File(keystore);
KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
KeyStore keyStore = KeyStore.getInstance("PKCS12");
InputStream keyInput = new FileInputStream(pKeyFile);
keyStore.load(keyInput, password.toCharArray());
keyInput.close();
keyManagerFactory.init(keyStore, password.toCharArray());
SSLContext context = SSLContext.getInstance("TLS");
context.init(keyManagerFactory.getKeyManagers(), null, new SecureRandom());
return context.getSocketFactory();
}
@Override
protected void doStart() throws ElasticSearchException {
}
@Override
protected void doStop() throws ElasticSearchException {
}
@Override
protected void doClose() throws ElasticSearchException {
}
private void checkProperty(String name, String value) throws ElasticSearchException {
if (!Strings.hasText(value)) {
throw new SettingsException("cloud.azure." + name +" is not set or is incorrect.");
}
}
}
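For reference, `getXML` boils down to an authenticated GET against the management endpoint. A rough curl equivalent of the call made in `instances()` (a sketch only, assuming a curl build that accepts PKCS12 client certificates) would be:

```sh
# Hypothetical equivalent of getXML("/services/hostedservices/<service>?embed-detail=true")
curl --cert azurekeystore.pkcs12:your_password_for_keystore --cert-type P12 \
     -H "x-ms-version: 2013-03-01" \
     "https://management.core.windows.net/your_azure_subscription_id/services/hostedservices/your_azure_cloud_service_name?embed-detail=true"
```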


@ -20,14 +20,24 @@
package org.elasticsearch.cloud.azure;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
/**
*
*/
public class AzureModule extends AbstractModule {
private Settings settings;
@Inject
public AzureModule(Settings settings) {
this.settings = settings;
}
@Override
protected void configure() {
bind(AzureComputeService.class).asEagerSingleton();
bind(AzureComputeService.class)
.to(settings.getAsClass("cloud.azure.api.impl", AzureComputeServiceImpl.class))
.asEagerSingleton();
}
}


@ -0,0 +1,113 @@
/*
* Licensed to Elasticsearch (the "Author") under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. Author licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cloud.azure;
/**
* Define an Azure Instance
*/
public class Instance {
public static enum Status {
STARTED;
}
private String privateIp;
private String publicIp;
private String publicPort;
private Status status;
private String name;
public String getPrivateIp() {
return privateIp;
}
public void setPrivateIp(String privateIp) {
this.privateIp = privateIp;
}
public Status getStatus() {
return status;
}
public void setStatus(Status status) {
this.status = status;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getPublicIp() {
return publicIp;
}
public void setPublicIp(String publicIp) {
this.publicIp = publicIp;
}
public String getPublicPort() {
return publicPort;
}
public void setPublicPort(String publicPort) {
this.publicPort = publicPort;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Instance instance = (Instance) o;
if (name != null ? !name.equals(instance.name) : instance.name != null) return false;
if (privateIp != null ? !privateIp.equals(instance.privateIp) : instance.privateIp != null) return false;
if (publicIp != null ? !publicIp.equals(instance.publicIp) : instance.publicIp != null) return false;
if (publicPort != null ? !publicPort.equals(instance.publicPort) : instance.publicPort != null) return false;
if (status != instance.status) return false;
return true;
}
@Override
public int hashCode() {
int result = privateIp != null ? privateIp.hashCode() : 0;
result = 31 * result + (publicIp != null ? publicIp.hashCode() : 0);
result = 31 * result + (publicPort != null ? publicPort.hashCode() : 0);
result = 31 * result + (status != null ? status.hashCode() : 0);
result = 31 * result + (name != null ? name.hashCode() : 0);
return result;
}
@Override
public String toString() {
final StringBuffer sb = new StringBuffer("Instance{");
sb.append("privateIp='").append(privateIp).append('\'');
sb.append(", publicIp='").append(publicIp).append('\'');
sb.append(", publicPort='").append(publicPort).append('\'');
sb.append(", status=").append(status);
sb.append(", name='").append(name).append('\'');
sb.append('}');
return sb.toString();
}
}


@ -26,6 +26,7 @@ import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.node.DiscoveryNodeService;
import org.elasticsearch.common.collect.ImmutableList;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.discovery.zen.ZenDiscovery;
import org.elasticsearch.discovery.zen.ping.ZenPing;
@ -43,7 +44,7 @@ public class AzureDiscovery extends ZenDiscovery {
@Inject
public AzureDiscovery(Settings settings, ClusterName clusterName, ThreadPool threadPool, TransportService transportService,
ClusterService clusterService, NodeSettingsService nodeSettingsService, ZenPingService pingService,
DiscoveryNodeService discoveryNodeService, AzureComputeService azureService) {
DiscoveryNodeService discoveryNodeService, AzureComputeService azureService, NetworkService networkService) {
super(settings, clusterName, threadPool, transportService, clusterService, nodeSettingsService, discoveryNodeService, pingService, Version.CURRENT);
if (settings.getAsBoolean("cloud.enabled", true)) {
ImmutableList<? extends ZenPing> zenPings = pingService.zenPings();
@ -58,7 +59,7 @@ public class AzureDiscovery extends ZenDiscovery {
if (unicastZenPing != null) {
// update the unicast zen ping to add cloud hosts provider
// and, while we are at it, use only it and not the multicast for example
unicastZenPing.addHostsProvider(new AzureUnicastHostsProvider(settings, azureService.nodes()));
unicastZenPing.addHostsProvider(new AzureUnicastHostsProvider(settings, azureService, transportService, networkService));
pingService.zenPings(ImmutableList.of(unicastZenPing));
} else {
logger.warn("failed to apply azure unicast discovery, no unicast ping found");


@ -19,29 +19,141 @@
package org.elasticsearch.discovery.azure;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.Version;
import org.elasticsearch.cloud.azure.AzureComputeService;
import org.elasticsearch.cloud.azure.Instance;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.collect.Lists;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.discovery.zen.ping.unicast.UnicastHostsProvider;
import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.net.InetAddress;
import java.util.List;
import java.util.Set;
/**
*
*/
public class AzureUnicastHostsProvider extends AbstractComponent implements UnicastHostsProvider {
private final List<DiscoveryNode> discoNodes;
public static enum HostType {
PRIVATE_IP,
PUBLIC_IP
}
private final AzureComputeService azureComputeService;
private TransportService transportService;
private NetworkService networkService;
private final TimeValue refreshInterval;
private long lastRefresh;
private List<DiscoveryNode> cachedDiscoNodes;
private final HostType host_type;
@Inject
public AzureUnicastHostsProvider(Settings settings, List<DiscoveryNode> discoNodes) {
public AzureUnicastHostsProvider(Settings settings, AzureComputeService azureComputeService,
TransportService transportService,
NetworkService networkService) {
super(settings);
this.discoNodes = discoNodes;
this.azureComputeService = azureComputeService;
this.transportService = transportService;
this.networkService = networkService;
this.refreshInterval = componentSettings.getAsTime(AzureComputeService.Fields.REFRESH,
settings.getAsTime("cloud.azure." + AzureComputeService.Fields.REFRESH, TimeValue.timeValueSeconds(0)));
this.host_type = HostType.valueOf(componentSettings.get(AzureComputeService.Fields.HOST_TYPE,
settings.get("cloud.azure." + AzureComputeService.Fields.HOST_TYPE, HostType.PRIVATE_IP.name())).toUpperCase());
}
/**
* We build the list of Nodes from Azure Management API
* Information can be cached using `cloud.azure.refresh_interval` property if needed.
* Setting `cloud.azure.refresh_interval` to `-1` will cause infinite caching.
* Setting `cloud.azure.refresh_interval` to `0` will disable caching (default).
*/
@Override
public List<DiscoveryNode> buildDynamicNodes() {
return discoNodes;
if (refreshInterval.millis() != 0) {
if (cachedDiscoNodes != null &&
(refreshInterval.millis() < 0 || (System.currentTimeMillis() - lastRefresh) < refreshInterval.millis())) {
if (logger.isTraceEnabled()) logger.trace("using cache to retrieve node list");
return cachedDiscoNodes;
}
lastRefresh = System.currentTimeMillis();
}
logger.debug("start building nodes list using Azure API");
cachedDiscoNodes = Lists.newArrayList();
Set<Instance> response = azureComputeService.instances();
String ipAddress = null;
try {
InetAddress inetAddress = networkService.resolvePublishHostAddress(null);
if (inetAddress != null) {
ipAddress = inetAddress.getHostAddress();
}
if (logger.isTraceEnabled()) logger.trace("ipAddress found: [{}]", ipAddress);
} catch (IOException e) {
// We can't find the publish host address... Hmmm. Too bad :-(
if (logger.isTraceEnabled()) logger.trace("exception while finding ipAddress", e);
}
try {
for (Instance instance : response) {
String networkAddress = null;
// Let's detect if we want to use public or private IP
if (host_type == HostType.PRIVATE_IP) {
if (instance.getPrivateIp() != null) {
if (logger.isTraceEnabled() && instance.getPrivateIp().equals(ipAddress)) {
logger.trace("adding ourselves {}", ipAddress);
}
networkAddress = instance.getPrivateIp();
} else {
logger.trace("no private ip provided ignoring {}", instance.getName());
}
}
if (host_type == HostType.PUBLIC_IP) {
if (instance.getPublicIp() != null && instance.getPublicPort() != null) {
networkAddress = instance.getPublicIp() + ":" +instance.getPublicPort();
} else {
logger.trace("no public ip provided ignoring {}", instance.getName());
}
}
if (networkAddress == null) {
// We have a bad parameter here or not enough information from azure
throw new ElasticSearchIllegalArgumentException("can't find any " + host_type.name() + " address");
} else {
TransportAddress[] addresses = transportService.addressesFromString(networkAddress);
// we only limit to 1 addresses, makes no sense to ping 100 ports
for (int i = 0; (i < addresses.length && i < UnicastZenPing.LIMIT_PORTS_COUNT); i++) {
logger.trace("adding {}, transport_address {}", networkAddress, addresses[i]);
cachedDiscoNodes.add(new DiscoveryNode("#cloud-" + instance.getName() + "-" + i, addresses[i], Version.CURRENT));
}
}
}
} catch (Throwable e) {
logger.warn("Exception caught during discovery {} : {}", e.getClass().getName(), e.getMessage());
logger.trace("Exception caught during discovery", e);
}
logger.debug("{} node(s) added", cachedDiscoNodes.size());
logger.debug("using dynamic discovery nodes {}", cachedDiscoNodes);
return cachedDiscoNodes;
}
}
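For reference, the refresh and host type settings read above would be configured like the following sketch (key names come from `AzureComputeService.Fields`; the values are only illustrative):

```
cloud:
    azure:
        refresh_interval: 30s    # 0 disables caching (default), -1 caches forever
        host_type: private_ip    # or public_ip
```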


@ -19,10 +19,8 @@
package org.elasticsearch.plugin.cloud.azure;
import org.elasticsearch.cloud.azure.AzureComputeService;
import org.elasticsearch.cloud.azure.AzureModule;
import org.elasticsearch.common.collect.Lists;
import org.elasticsearch.common.component.LifecycleComponent;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.AbstractPlugin;
@ -58,14 +56,4 @@ public class CloudAzurePlugin extends AbstractPlugin {
}
return modules;
}
@Override
public Collection<Class<? extends LifecycleComponent>> services() {
Collection<Class<? extends LifecycleComponent>> services = Lists.newArrayList();
if (settings.getAsBoolean("cloud.enabled", true)) {
services.add(AzureComputeService.class);
}
return services;
}
}


@ -1,58 +0,0 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.common.io.FileSystemUtils;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
import org.junit.Test;
import java.io.File;
/**
* To run this *test*, you need first to modify test/resources dir:
* - elasticsearch.yml: set your azure password and azure subscription_id
* - azure.crt: fill this file with your azure certificate (PEM format)
* - azure.pk: fill this file with your azure private key (PEM format)
*/
public class AzureSimpleTest {
@Test
public void launchNode() {
File dataDir = new File("./target/es/data");
if(dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir, true);
}
// Then we start our node for tests
Node node = NodeBuilder
.nodeBuilder()
.settings(
ImmutableSettings.settingsBuilder()
.put("gateway.type", "local")
.put("path.data", "./target/es/data")
.put("path.logs", "./target/es/logs")
.put("path.work", "./target/es/work")
).node();
// We wait now for the yellow (or green) status
// node.client().admin().cluster().prepareHealth().setWaitForYellowStatus().execute().actionGet();
}
}


@ -0,0 +1,37 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.itest;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.NodeBuilder;
import org.junit.Ignore;
import org.junit.Test;
public class AzureSimpleITest {
@Test @Ignore
public void one_node_should_run() {
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder()
//.put("gateway.type", "local")
.put("path.data", "./target/es/data")
.put("path.logs", "./target/es/logs")
.put("path.work", "./target/es/work");
NodeBuilder.nodeBuilder().settings(builder).node();
}
}


@ -0,0 +1,144 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cloud.azure.AzureComputeService;
import org.elasticsearch.common.io.FileSystemUtils;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.logging.ESLoggerFactory;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
import org.elasticsearch.node.internal.InternalNode;
import org.elasticsearch.transport.netty.NettyTransport;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
public abstract class AzureAbstractTest {
protected static ESLogger logger = ESLoggerFactory.getLogger(AzureAbstractTest.class.getName());
private static List<Node> nodes;
private Class<? extends AzureComputeService> mock;
public AzureAbstractTest(Class<? extends AzureComputeService> mock) {
// We want to inject the Azure API Mock
this.mock = mock;
}
@Before
public void setUp() {
nodes = new ArrayList<Node>();
File dataDir = new File("./target/es/data");
if(dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir, true);
}
}
@After
public void tearDown() {
// Cleaning nodes after test
for (Node node : nodes) {
node.close();
}
}
protected Client getTransportClient() {
// Create a TransportClient on node 1 and 2
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "azure")
.put("transport.tcp.connect_timeout", "1s")
.build();
TransportClient client = new TransportClient(settings);
for (Node node : nodes) {
NettyTransport nettyTransport = ((InternalNode) node).injector().getInstance(NettyTransport.class);
TransportAddress transportAddress = nettyTransport.boundAddress().publishAddress();
client.addTransportAddress(transportAddress);
}
return client;
}
protected Client getNodeClient() {
for (Node node : nodes) {
return node.client();
}
return null;
}
protected void checkNumberOfNodes(int expected, boolean fail) {
NodesInfoResponse nodeInfos = null;
try {
nodeInfos = getTransportClient().admin().cluster().prepareNodesInfo().execute().actionGet();
} catch (NoNodeAvailableException e) {
// If we can't build a Transport Client, we are may be not connected to any network
// Let's try a Node Client
nodeInfos = getNodeClient().admin().cluster().prepareNodesInfo().execute().actionGet();
}
Assert.assertNotNull(nodeInfos);
Assert.assertNotNull(nodeInfos.getNodes());
if (fail) {
Assert.assertEquals(expected, nodeInfos.getNodes().length);
} else {
if (nodeInfos.getNodes().length != expected) {
logger.warn("expected {} node(s) but found {}. Could be due to no local IP address available.",
expected, nodeInfos.getNodes().length);
}
}
}
protected void checkNumberOfNodes(int expected) {
checkNumberOfNodes(expected, true);
}
protected void nodeBuilder() {
ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder()
//.put("gateway.type", "local")
.put("path.data", "./target/es/data")
.put("path.logs", "./target/es/logs")
.put("path.work", "./target/es/work")
// .put("discovery.zen.ping.timeout", "500ms")
// .put("discovery.zen.fd.ping_retries",1)
// .put("discovery.zen.fd.ping_timeout", "500ms")
// .put("discovery.initial_state_timeout", "5s")
// .put("transport.tcp.connect_timeout", "1s")
.put("cloud.azure.api.impl", mock)
.put("cloud.azure.refresh_interval", "5s")
.put("node.name", (nodes.size()+1) + "#" + mock.getSimpleName());
Node node = NodeBuilder.nodeBuilder().settings(builder).node();
nodes.add(node);
}
}


@ -0,0 +1,50 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.cloud.azure.AzureComputeService;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.settings.Settings;
/**
*
*/
public abstract class AzureComputeServiceAbstractMock extends AbstractLifecycleComponent<AzureComputeServiceAbstractMock>
implements AzureComputeService {
protected AzureComputeServiceAbstractMock(Settings settings) {
super(settings);
}
@Override
protected void doStart() throws ElasticSearchException {
logger.debug("starting Azure Api Mock");
}
@Override
protected void doStop() throws ElasticSearchException {
logger.debug("stopping Azure Api Mock");
}
@Override
protected void doClose() throws ElasticSearchException {
}
}


@ -0,0 +1,49 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.elasticsearch.cloud.azure.Instance;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import java.util.HashSet;
import java.util.Set;
/**
* Mock Azure API with a single started node
*/
public class AzureComputeServiceSimpleMock extends AzureComputeServiceAbstractMock {
@Inject
protected AzureComputeServiceSimpleMock(Settings settings) {
super(settings);
logger.debug("starting Azure Mock");
}
@Override
public Set<Instance> instances() {
Set<Instance> instances = new HashSet<Instance>();
Instance azureHost = new Instance();
azureHost.setPrivateIp("127.0.0.1");
instances.add(azureHost);
return instances;
}
}


@ -0,0 +1,51 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.elasticsearch.cloud.azure.Instance;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import java.util.HashSet;
import java.util.Set;
/**
* Mock Azure API with two started nodes
*/
public class AzureComputeServiceTwoNodesMock extends AzureComputeServiceAbstractMock {
@Inject
protected AzureComputeServiceTwoNodesMock(Settings settings) {
super(settings);
logger.debug("starting Azure Mock");
}
@Override
public Set<Instance> instances() {
Set<Instance> instances = new HashSet<Instance>();
Instance azureHost = new Instance();
azureHost.setPrivateIp("127.0.0.1");
instances.add(azureHost);
azureHost = new Instance();
azureHost.setPrivateIp("127.0.0.1");
instances.add(azureHost);
return instances;
}
}


@ -0,0 +1,61 @@
/*
* Licensed to Elasticsearch (the "Author") under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. Author licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.elasticsearch.cloud.azure.AzureComputeServiceImpl;
import org.elasticsearch.cloud.azure.Instance;
import org.junit.Assert;
import org.junit.Test;
import org.xml.sax.SAXException;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPathExpressionException;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashSet;
import java.util.Set;
public class AzureInstanceXmlParserTest {
private Instance build(String name, String privateIpAddress,
String publicIpAddress,
String publicPort,
Instance.Status status) {
Instance instance = new Instance();
instance.setName(name);
instance.setPrivateIp(privateIpAddress);
instance.setPublicIp(publicIpAddress);
instance.setPublicPort(publicPort);
instance.setStatus(status);
return instance;
}
@Test
public void testReadXml() throws ParserConfigurationException, SAXException, XPathExpressionException, IOException {
InputStream inputStream = AzureInstanceXmlParserTest.class.getResourceAsStream("/org/elasticsearch/azure/test/services.xml");
Set<Instance> instances = AzureComputeServiceImpl.buildInstancesFromXml(inputStream, "elasticsearch");
Set<Instance> expected = new HashSet<Instance>();
expected.add(build("es-windows2008", "10.53.250.55", null, null, Instance.Status.STARTED));
expected.add(build("myesnode1", "10.53.218.75", "137.116.213.150", "9300", Instance.Status.STARTED));
Assert.assertArrayEquals(expected.toArray(), instances.toArray());
}
}


@ -0,0 +1,37 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.junit.Test;
public class AzureSimpleTest extends AzureAbstractTest {
public AzureSimpleTest() {
super(AzureComputeServiceSimpleMock.class);
}
@Test
public void one_node_should_run() {
// Then we start our node for tests
nodeBuilder();
// We expect having 1 node as part of the cluster, let's test that
checkNumberOfNodes(1);
}
}


@ -0,0 +1,38 @@
/*
* Licensed to ElasticSearch under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.azure.test;
import org.junit.Test;
public class AzureTwoStartedNodesTest extends AzureAbstractTest {
public AzureTwoStartedNodesTest() {
super(AzureComputeServiceTwoNodesMock.class);
}
@Test
public void two_nodes_should_run() {
// Then we start our node for tests
nodeBuilder();
nodeBuilder();
// We expect having 2 nodes as part of the cluster, let's test that
checkNumberOfNodes(2, false);
}
}


@ -1,3 +0,0 @@
-----BEGIN CERTIFICATE-----
YOUR-CERTIFICATE-HERE
-----END CERTIFICATE-----


@ -1,3 +0,0 @@
-----BEGIN PRIVATE KEY-----
YOUR-PK-HERE
-----END PRIVATE KEY-----


@ -10,6 +10,7 @@
# governing permissions and limitations under the License.
cluster.name: azure
# Azure discovery allows to use Azure API in order to perform discovery.
#
# You have to install the cloud-azure plugin for enabling the Azure discovery.
@ -21,9 +22,9 @@
# for a step-by-step tutorial.
cloud:
azure:
private_key: ${project.build.testOutputDirectory}/azure.pk
certificate: ${project.build.testOutputDirectory}/azure.crt
keystore: FULLPATH-TO-YOUR-KEYSTORE
password: YOUR-PASSWORD
subscription_id: YOUR-AZURE-SUBSCRIPTION-ID
service_name: YOUR-AZURE-SERVICE-NAME
discovery:
type: azure


@ -16,7 +16,7 @@ governing permissions and limitations under the License. -->
<appender name="console" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d %-5p - %m%n" />
<param name="ConversionPattern" value="%d %-5p %c{1} - %m%n" />
</layout>
</appender>
@ -24,10 +24,10 @@ governing permissions and limitations under the License. -->
<level value="info" />
</logger>
<logger name="org.elasticsearch.cloud">
<logger name="org.elasticsearch.cloud.azure">
<level value="trace" />
</logger>
<logger name="org.elasticsearch.discovery">
<logger name="org.elasticsearch.discovery.azure">
<level value="trace" />
</logger>


@ -0,0 +1,170 @@
<?xml version="1.0" encoding="UTF-8"?>
<HostedService xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Url>https://management.core.windows.net/a836d210-c681-4916-ae7d-9ca5f1b4906f/services/hostedservices/azure-elasticsearch-cluster</Url>
<ServiceName>azure-elasticsearch-cluster</ServiceName>
<HostedServiceProperties>
<Description>Implicitly created hosted service</Description>
<Location>West Europe</Location>
<Label>YXp1cmUtZWxhc3RpY3NlYXJjaC1jbHVzdGVy</Label>
<Status>Created</Status>
<DateCreated>2013-09-23T12:45:45Z</DateCreated>
<DateLastModified>2013-09-23T13:42:46Z</DateLastModified>
<ExtendedProperties />
</HostedServiceProperties>
<Deployments>
<Deployment>
<Name>azure-elasticsearch-cluster</Name>
<DeploymentSlot>Production</DeploymentSlot>
<PrivateID>c6a690796e6b497a918487546193b94a</PrivateID>
<Status>Running</Status>
<Label>WVhwMWNtVXRaV3hoYzNScFkzTmxZWEpqYUMxamJIVnpkR1Z5</Label>
<Url>http://azure-elasticsearch-cluster.cloudapp.net/</Url>
<Configuration>PFNlcnZpY2VDb25maWd1cmF0aW9uIHhtbG5zOnhzZD0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEiIHhtbG5zOnhzaT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiIHhtbG5zPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL1NlcnZpY2VIb3N0aW5nLzIwMDgvMTAvU2VydmljZUNvbmZpZ3VyYXRpb24iPg0KICA8Um9sZSBuYW1lPSJlcy13aW5kb3dzMjAwOCI+DQogICAgPEluc3RhbmNlcyBjb3VudD0iMSIgLz4NCiAgPC9Sb2xlPg0KICA8Um9sZSBuYW1lPSJteWVzbm9kZTEiPg0KICAgIDxJbnN0YW5jZXMgY291bnQ9IjEiIC8+DQogIDwvUm9sZT4NCjwvU2VydmljZUNvbmZpZ3VyYXRpb24+</Configuration>
<RoleInstanceList>
<RoleInstance>
<RoleName>es-windows2008</RoleName>
<InstanceName>es-windows2008</InstanceName>
<InstanceStatus>ReadyRole</InstanceStatus>
<InstanceUpgradeDomain>0</InstanceUpgradeDomain>
<InstanceFaultDomain>0</InstanceFaultDomain>
<InstanceSize>ExtraSmall</InstanceSize>
<InstanceStateDetails />
<IpAddress>10.53.250.55</IpAddress>
<InstanceEndpoints>
<InstanceEndpoint>
<Name>PowerShell</Name>
<Vip>137.116.213.150</Vip>
<PublicPort>5986</PublicPort>
<LocalPort>5986</LocalPort>
<Protocol>tcp</Protocol>
</InstanceEndpoint>
<InstanceEndpoint>
<Name>Remote Desktop</Name>
<Vip>137.116.213.150</Vip>
<PublicPort>53867</PublicPort>
<LocalPort>3389</LocalPort>
<Protocol>tcp</Protocol>
</InstanceEndpoint>
</InstanceEndpoints>
<PowerState>Started</PowerState>
<HostName>es-windows2008</HostName>
<RemoteAccessCertificateThumbprint>E6798DEACEAD4752825A2ABFC0A9A367389ED802</RemoteAccessCertificateThumbprint>
</RoleInstance>
<RoleInstance>
<RoleName>myesnode1</RoleName>
<InstanceName>myesnode1</InstanceName>
<InstanceStatus>ReadyRole</InstanceStatus>
<InstanceUpgradeDomain>0</InstanceUpgradeDomain>
<InstanceFaultDomain>0</InstanceFaultDomain>
<InstanceSize>ExtraSmall</InstanceSize>
<InstanceStateDetails />
<IpAddress>10.53.218.75</IpAddress>
<InstanceEndpoints>
<InstanceEndpoint>
<Name>ssh</Name>
<Vip>137.116.213.150</Vip>
<PublicPort>22</PublicPort>
<LocalPort>22</LocalPort>
<Protocol>tcp</Protocol>
</InstanceEndpoint>
<InstanceEndpoint>
<Name>elasticsearch</Name>
<Vip>137.116.213.150</Vip>
<PublicPort>9300</PublicPort>
<LocalPort>9300</LocalPort>
<Protocol>tcp</Protocol>
</InstanceEndpoint>
</InstanceEndpoints>
<PowerState>Started</PowerState>
<HostName>myesnode1</HostName>
</RoleInstance>
</RoleInstanceList>
<UpgradeDomainCount>1</UpgradeDomainCount>
<RoleList>
<Role i:type="PersistentVMRole">
<RoleName>es-windows2008</RoleName>
<OsVersion />
<RoleType>PersistentVMRole</RoleType>
<ConfigurationSets>
<ConfigurationSet i:type="NetworkConfigurationSet">
<ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
<InputEndpoints>
<InputEndpoint>
<LocalPort>5986</LocalPort>
<Name>PowerShell</Name>
<Port>5986</Port>
<Protocol>tcp</Protocol>
<Vip>137.116.213.150</Vip>
</InputEndpoint>
<InputEndpoint>
<LocalPort>3389</LocalPort>
<Name>Remote Desktop</Name>
<Port>53867</Port>
<Protocol>tcp</Protocol>
<Vip>137.116.213.150</Vip>
</InputEndpoint>
</InputEndpoints>
<SubnetNames />
</ConfigurationSet>
</ConfigurationSets>
<DataVirtualHardDisks />
<OSVirtualHardDisk>
<HostCaching>ReadWrite</HostCaching>
<DiskName>azure-elasticsearch-cluster-es-windows2008-0-201309231342520150</DiskName>
<MediaLink>http://portalvhdsv5skghwlmf765.blob.core.windows.net/vhds/azure-elasticsearch-cluster-es-windows2008-2013-09-23.vhd</MediaLink>
<SourceImageName>a699494373c04fc0bc8f2bb1389d6106__Win2K8R2SP1-Datacenter-201308.02-en.us-127GB.vhd</SourceImageName>
<OS>Windows</OS>
</OSVirtualHardDisk>
<RoleSize>ExtraSmall</RoleSize>
</Role>
<Role i:type="PersistentVMRole">
<RoleName>myesnode1</RoleName>
<OsVersion />
<RoleType>PersistentVMRole</RoleType>
<ConfigurationSets>
<ConfigurationSet i:type="NetworkConfigurationSet">
<ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
<InputEndpoints>
<InputEndpoint>
<LocalPort>22</LocalPort>
<Name>ssh</Name>
<Port>22</Port>
<Protocol>tcp</Protocol>
<Vip>137.116.213.150</Vip>
</InputEndpoint>
</InputEndpoints>
<SubnetNames />
</ConfigurationSet>
</ConfigurationSets>
<DataVirtualHardDisks />
<OSVirtualHardDisk>
<HostCaching>ReadWrite</HostCaching>
<DiskName>azure-elasticsearch-cluster-myesnode1-0-201309231246380270</DiskName>
<MediaLink>http://azureelasticsearchcluste.blob.core.windows.net/vhd-store/myesnode1-0cc86cc84c0ed2e0.vhd</MediaLink>
<SourceImageName>b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20130808-alpha3-en-us-30GB</SourceImageName>
<OS>Linux</OS>
</OSVirtualHardDisk>
<RoleSize>ExtraSmall</RoleSize>
</Role>
</RoleList>
<SdkVersion />
<Locked>false</Locked>
<RollbackAllowed>false</RollbackAllowed>
<CreatedTime>2013-09-23T12:46:27Z</CreatedTime>
<LastModifiedTime>2013-09-23T20:38:58Z</LastModifiedTime>
<ExtendedProperties />
<PersistentVMDowntime>
<StartTime>2013-08-27T08:00:00Z</StartTime>
<EndTime>2013-09-27T20:00:00Z</EndTime>
<Status>PersistentVMUpdateScheduled</Status>
</PersistentVMDowntime>
<VirtualIPs>
<VirtualIP>
<Address>137.116.213.150</Address>
<IsDnsProgrammed>true</IsDnsProgrammed>
<Name>azure-elasticsearch-clusterContractContract</Name>
</VirtualIP>
</VirtualIPs>
</Deployment>
</Deployments>
</HostedService>