HBASE-2543 [EC2] Move scripts up to Github hosting

git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@944032 13f79535-47bb-0310-9956-ffa450edef68
Andrew Kyle Purtell 2010-05-13 22:18:20 +00:00
parent c6b5f4fa0d
commit 800fce986e
23 changed files with 0 additions and 1799 deletions


@ -1,104 +0,0 @@
HBase EC2
This collection of scripts allows you to run HBase clusters on Amazon.com's Elastic Compute Cloud (EC2) service, described at:
http://aws.amazon.com/ec2
To get help, type the following in a shell:
bin/hbase-ec2
You need both the EC2 API and AMI tools installed and on the path:
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=368&categoryID=88
When setting up keypairs on EC2, be sure to name your keypair 'root'.
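For example, a minimal sketch of creating the 'root' keypair with the EC2 API tools and saving its private part as id_rsa_root (the ~/.ec2 directory is only an assumption; use whatever directory holds EC2_PRIVATE_KEY):
ec2-add-keypair root | grep -v ^KEYPAIR > ~/.ec2/id_rsa_root
chmod 600 ~/.ec2/id_rsa_root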
Quick Start:
1) Download and unzip the EC2 AMI and API tools zipfiles.
For Ubuntu, "apt-get install ec2-ami-tools ec2-api-tools".
2) Put the tools on the path and set EC2_HOME in the environment to point to
the top level directory of the API tools.
3) Configure src/contrib/ec2/bin/hbase-ec2-env.sh
Fill in AWS_ACCOUNT_ID with your EC2 account number.
Fill in AWS_ACCESS_KEY_ID with your EC2 access key.
Fill in AWS_SECRET_ACCESS_KEY with your EC2 secret access key.
Fill in EC2_PRIVATE_KEY with the location of your AWS private key file --
must begin with 'pk' and end with '.pem'.
Fill in EC2_CERT with the location of your AWS certificate -- must begin
with 'cert' and end with '.pem'.
Make sure the private part of your AWS SSH keypair exists in the same
directory as EC2_PRIVATE_KEY with the name id_rsa_root. Also, ensure that
the permissions on the private key file are 600 (ONLY owner readable/
writable). A filled-in sketch of these settings appears after this list.
4) ./bin/hbase-ec2 launch-cluster <name> <nr-slaves> <nr-zoos>, e.g.
./bin/hbase-ec2 launch-cluster testcluster 3 3
5) Once the above command has finished without error, ./bin/hbase-ec2 login
<name>, e.g.
./bin/hbase-ec2 login testcluster
6) Check that the cluster is up and functional:
hbase shell
> status 'simple'
You should see something like:
3 live servers
domU-12-31-39-09-75-11.compute-1.internal:60020 1258653694915
requests=0, regions=1, usedHeap=29, maxHeap=987
domU-12-31-39-01-AC-31.compute-1.internal:60020 1258653709041
requests=0, regions=1, usedHeap=29, maxHeap=987
domU-12-31-39-01-B0-91.compute-1.internal:60020 1258653706411
requests=0, regions=0, usedHeap=27, maxHeap=987
0 dead servers
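For step 3, a filled-in hbase-ec2-env.sh might look like the following sketch (all values are placeholders, not real credentials or paths):
AWS_ACCOUNT_ID=1234-5678-9012
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EC2_PRIVATE_KEY=~/.ec2/pk-XXXXXXXXXXXXXXXX.pem
EC2_CERT=~/.ec2/cert-XXXXXXXXXXXXXXXX.pem
EC2_ROOT_SSH_KEY=~/.ec2/id_rsa_root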
Extra Packages:
It is possible to specify that extra packages be downloaded and installed on
demand when the master and slave instances start.
1. Set up a YUM repository. See: http://yum.baseurl.org/wiki/RepoCreate
2. Host the repository somewhere public. For example, build the
repository locally and then copy it up to an S3 bucket.
3. Create a YUM repository descriptor (.repo file). See:
http://yum.baseurl.org/wiki/RepoCreate
[myrepo]
name = MyRepo
baseurl = http://mybucket.s3.amazonaws.com/myrepo
enabled = 1
Upload the .repo file somewhere public, for example, in the root
directory of the repository,
mybucket.s3.amazonaws.com/myrepo/myrepo.repo
4. Configure hbase-ec2-env.sh thus:
EXTRA_PACKAGES="http://mybucket.s3.amazonaws.com/myrepo/myrepo.repo \
pkg1 pkg2 pkg3"
When the master and slave instances start, the .repo file will be added to
the Yum repository list and then Yum will be invoked to pull the packages
listed after the URL.
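For steps 1 and 2 above, a minimal sketch of building the repository locally and copying it up to an S3 bucket, assuming the createrepo and s3cmd tools are installed (bucket and directory names are just examples):
createrepo /path/to/myrepo
s3cmd put --recursive --acl-public /path/to/myrepo s3://mybucket/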


@ -1,69 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run commands on master or specified node of a running HBase EC2 cluster.
# if no args specified, show usage
if [ $# = 0 ]; then
echo "Command required!"
exit 1
fi
# get arguments
COMMAND="$1"
shift
# get group
CLUSTER="$1"
shift
if [ -z $CLUSTER ]; then
echo "Cluster name or instance id required!"
exit 1
fi
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
if [[ "$CLUSTER" = "i-*" ]]; then
HOST=`ec2-describe-instances $TOOL_OPTS $CLUSTER | grep running | awk '{print $4}'`
[ -z $HOST ] && echo "Instance still pending or no longer running: $CLUSTER" && exit 1
else
[ ! -f $MASTER_ADDR_PATH ] && echo "Wrong group name, or cluster not launched! $CLUSTER" && exit 1
HOST=`cat $MASTER_ADDR_PATH`
fi
if [ "$COMMAND" = "login" ] ; then
echo "Logging in to host $HOST."
ssh $SSH_OPTS "root@$HOST"
elif [ "$COMMAND" = "proxy" ] ; then
echo "Proxying to host $HOST via local port 6666"
echo "Gangia: http://$HOST/ganglia"
echo "JobTracker: http://$HOST:50030/"
echo "NameNode: http://$HOST:50070/"
ssh $SSH_OPTS -D 6666 -N "root@$HOST"
elif [ "$COMMAND" = "push" ] ; then
echo "Pushing $1 to host $HOST."
scp $SSH_OPTS -r $1 "root@$HOST:"
elif [ "$COMMAND" = "screen" ] ; then
echo "Logging in and attaching screen on host $HOST."
ssh $SSH_OPTS -t "root@$HOST" 'screen -D -R'
else
ssh $SSH_OPTS -t "root@$HOST" "$COMMAND $@"
fi
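Typical invocations of the script above go through the bin/hbase-ec2 wrapper; for instance, assuming the 'testcluster' name used in the README (the pushed file name is only an illustration):
bin/hbase-ec2 login testcluster
bin/hbase-ec2 push testcluster conf/hbase-site.xml
bin/hbase-ec2 proxy testcluster
bin/hbase-ec2 "uptime" testcluster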


@ -1,84 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Create a HBase AMI.
# Inspired by Jonathan Siegel's EC2 script (http://blogsiegel.blogspot.com/2006/08/sandboxing-amazon-ec2.html)
# allow override of SLAVE_INSTANCE_TYPE from the command line
[ ! -z $1 ] && SLAVE_INSTANCE_TYPE=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
type=$SLAVE_INSTANCE_TYPE
arch=$SLAVE_ARCH
echo "INSTANCE_TYPE is $type"
echo "ARCH is $arch"
AMI_IMAGE=`ec2-describe-images $TOOL_OPTS -a | grep $S3_BUCKET | grep hbase | grep $HBASE_VERSION-$arch | grep available | awk '{print $2}'`
[ ! -z $AMI_IMAGE ] && echo "AMI already registered, use: ec2-deregister $AMI_IMAGE" && exit 1
echo "Starting a AMI with ID $BASE_AMI_IMAGE."
OUTPUT=`ec2-run-instances $BASE_AMI_IMAGE $TOOL_OPTS -k root -t $type`
BOOTING_INSTANCE=`echo $OUTPUT | awk '{print $6}'`
echo "Instance is $BOOTING_INSTANCE."
echo "Polling server status"
while true; do
printf "."
HOSTNAME=`ec2-describe-instances $TOOL_OPTS $BOOTING_INSTANCE | grep running | awk '{print $4}'`
if [ ! -z $HOSTNAME ]; then
break;
fi
sleep 1
done
echo "The server is available at $HOSTNAME."
while true; do
REPLY=`ssh $SSH_OPTS "root@$HOSTNAME" 'echo "hello"'`
if [ ! -z $REPLY ]; then
break;
fi
sleep 5
done
echo "Copying scripts."
# Copy setup scripts
scp $SSH_OPTS "$bin"/hbase-ec2-env.sh "root@$HOSTNAME:/mnt"
scp $SSH_OPTS "$bin"/functions.sh "root@$HOSTNAME:/mnt"
if [ -f "$bin"/credentials.sh ] ; then
scp $SSH_OPTS "$bin"/credentials.sh "root@$HOSTNAME:/mnt"
fi
scp $SSH_OPTS "$bin"/image/create-hbase-image-remote "root@$HOSTNAME:/mnt"
scp $SSH_OPTS "$bin"/image/ec2-run-user-data "root@$HOSTNAME:/etc/init.d"
# Copy private key and certificate (for bundling image)
scp $SSH_OPTS $EC2_PRIVATE_KEY "root@$HOSTNAME:/mnt"
scp $SSH_OPTS $EC2_CERT "root@$HOSTNAME:/mnt"
# Connect to it
ssh $SSH_OPTS "root@$HOSTNAME" "sh -c \"INSTANCE_TYPE=$type ARCH=$arch HBASE_URL=$HBASE_URL HADOOP_URL=$HADOOP_URL LZO_URL=$LZO_URL JAVA_URL=$JAVA_URL /mnt/create-hbase-image-remote\""
# Register image
ec2-register $TOOL_OPTS -n hbase-$REGION-$HBASE_VERSION-$arch $S3_BUCKET/hbase-$HBASE_VERSION-$arch.manifest.xml
echo "Terminate with: ec2-terminate-instances $BOOTING_INSTANCE"


@ -1,45 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Delete the groups and local files associated with a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
# Finding HBase clusters
CLUSTERS=`ec2-describe-instances $TOOL_OPTS | \
awk '"RESERVATION" == $1 && $4 ~ /-master$/, "INSTANCE" == $1' | tr '\n' '\t' | \
grep "$CLUSTER" | grep running | cut -f4 | rev | cut -d'-' -f2- | rev`
if [ -n "$CLUSTERS" ]; then
echo "Cluster $CLUSTER has running instances. Please terminate them first."
exit 0
fi
"$bin"/revoke-hbase-cluster-secgroups $CLUSTER
rm -f $ZOOKEEPER_ADDR_PATH $ZOOKEEPER_QUORUM_PATH
rm -f $MASTER_IP_PATH $MASTER_ADDR_PATH $MASTER_ZONE_PATH


@ -1,34 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function getCredentialSetting {
name=$1
eval "val=\$$name"
if [ -z "$val" ] ; then
if [ -f "$bin"/credentials.sh ] ; then
val=`cat "$bin"/credentials.sh | grep $name | awk 'BEGIN { FS="=" } { print $2; }'`
if [ -z "$val" ] ; then
echo -n "$name: "
read -e val
echo "$name=$val" >> "$bin"/credentials.sh
fi
else
echo -n "$name: "
read -e val
echo "$name=$val" >> "$bin"/credentials.sh
fi
eval "$name=$val"
fi
}


@ -1,66 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
# if no args specified, show usage
if [ $# = 0 ]; then
echo "Usage: hbase-ec2 COMMAND"
echo "where COMMAND is one of:"
echo " list list all running HBase EC2 clusters"
echo " launch-cluster <name> <slaves> <zoos> launch a HBase cluster"
echo " launch-zookeeper <name> <zoos> launch the zookeeper quorum"
echo " launch-master <name> launch or find a cluster master"
echo " launch-slaves <name> <slaves> launch the cluster slaves"
echo " terminate-cluster <name> terminate all HBase EC2 instances"
echo " reboot-cluster <name> reboot all HBase EC2 instances"
echo " delete-cluster <name> clean up after a terminated cluster"
echo " login <name|instance id> login to the master node"
echo " screen <name|instance id> start or attach 'screen' on the master"
echo " proxy <name|instance id> start a socks proxy on localhost:6666"
echo " push <name> <file> scp a file to the master node"
echo " <shell cmd> <group|instance id> execute a command on the master"
echo " create-image create a HBase AMI"
exit 1
fi
# get arguments
COMMAND="$1"
shift
if [ "$COMMAND" = "create-image" ] ; then
. "$bin"/create-hbase-image $*
elif [ "$COMMAND" = "launch-cluster" ] ; then
. "$bin"/launch-hbase-cluster $*
elif [ "$COMMAND" = "launch-zookeeper" ] ; then
. "$bin"/launch-hbase-zookeeper $*
elif [ "$COMMAND" = "launch-master" ] ; then
. "$bin"/launch-hbase-master $*
elif [ "$COMMAND" = "launch-slaves" ] ; then
. "$bin"/launch-hbase-slaves $*
elif [ "$COMMAND" = "delete-cluster" ] ; then
. "$bin"/delete-hbase-cluster $*
elif [ "$COMMAND" = "terminate-cluster" ] ; then
. "$bin"/terminate-hbase-cluster $*
elif [ "$COMMAND" = "reboot-cluster" ] ; then
. "$bin"/reboot-hbase-cluster $*
elif [ "$COMMAND" = "list" ] ; then
. "$bin"/list-hbase-clusters
else
. "$bin"/cmd-hbase-cluster "$COMMAND" $*
fi


@ -1,154 +0,0 @@
# Set environment variables for running HBase on Amazon EC2 here.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Your Amazon Account Number.
AWS_ACCOUNT_ID=
# Your Amazon AWS access key.
AWS_ACCESS_KEY_ID=
# Your Amazon AWS secret access key.
AWS_SECRET_ACCESS_KEY=
# Your AWS private key file -- must begin with 'pk' and end with '.pem'
EC2_PRIVATE_KEY=
# Your AWS certificate file -- must begin with 'cert' and end with '.pem'
EC2_CERT=
# Your AWS root SSH key
EC2_ROOT_SSH_KEY=
# The version of HBase to use.
HBASE_VERSION=@HBASE_VERSION@
HBASE_URL=http://hbase.s3.amazonaws.com/hbase/hbase-$HBASE_VERSION.tar.gz
# The version of Hadoop to use.
HADOOP_VERSION=0.20.2
# The Amazon S3 bucket where the HBase AMI is stored.
REGION=us-east-1
#REGION=us-west-1
#REGION=eu-west-1
#REGION=ap-southeast-1
S3_BUCKET=apache-hbase-images-$REGION
# Account for bucket
# We need this because EC2 is returning account identifiers instead of bucket
# names.
S3_ACCOUNT=720040977164
HADOOP_URL=http://hbase.s3.amazonaws.com/hadoop/hadoop-$HADOOP_VERSION.tar.gz
LZO_URL=http://hbase.s3.amazonaws.com/hadoop/lzo-linux-$HADOOP_VERSION.tar.gz
# Enable public access web interfaces
ENABLE_WEB_PORTS=false
# Enable use of elastic IPs for ZK and master instances
ENABLE_ELASTIC_IPS=false
# Extra packages
# Allows you to add a private Yum repo and pull packages from it as your
# instances boot up. Format is <repo-descriptor-URL> <pkg1> ... <pkgN>
# The repository descriptor will be fetched into /etc/yum.repos.d.
EXTRA_PACKAGES=
# Use only c1.xlarge unless you know what you are doing
MASTER_INSTANCE_TYPE=${MASTER_INSTANCE_TYPE:-c1.xlarge}
# Use only m1.large or c1.xlarge unless you know what you are doing
SLAVE_INSTANCE_TYPE=${SLAVE_INSTANCE_TYPE:-c1.xlarge}
# Use only m1.small or c1.medium unless you know what you are doing
ZOO_INSTANCE_TYPE=${ZOO_INSTANCE_TYPE:-m1.small}
############################################################################
# Generally, users do not need to edit below
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/functions.sh
getCredentialSetting 'AWS_ACCOUNT_ID'
getCredentialSetting 'AWS_ACCESS_KEY_ID'
getCredentialSetting 'AWS_SECRET_ACCESS_KEY'
getCredentialSetting 'EC2_PRIVATE_KEY'
getCredentialSetting 'EC2_CERT'
getCredentialSetting 'EC2_ROOT_SSH_KEY'
export EC2_URL=https://$REGION.ec2.amazonaws.com
# SSH options used when connecting to EC2 instances.
SSH_OPTS=`echo -q -i "$EC2_ROOT_SSH_KEY" -o StrictHostKeyChecking=no -o ServerAliveInterval=30`
# EC2 command request timeout (seconds)
REQUEST_TIMEOUT=300 # 5 minutes
# Global tool options
TOOL_OPTS=`echo -K "$EC2_PRIVATE_KEY" -C "$EC2_CERT" --request-timeout $REQUEST_TIMEOUT`
# The EC2 group master name. CLUSTER is set by calling scripts
CLUSTER_MASTER=$CLUSTER-master
# Cached values for a given cluster
MASTER_IP_PATH=$HOME/.hbase-${CLUSTER_MASTER}-ip
MASTER_ADDR_PATH=$HOME/.hbase-${CLUSTER_MASTER}-addr
MASTER_ZONE_PATH=$HOME/.hbase-${CLUSTER_MASTER}-zone
# The Zookeeper EC2 group name. CLUSTER is set by calling scripts.
CLUSTER_ZOOKEEPER=$CLUSTER-zk
ZOOKEEPER_QUORUM_PATH=$HOME/.hbase-${CLUSTER_ZOOKEEPER}-quorum
ZOOKEEPER_ADDR_PATH=$HOME/.hbase-${CLUSTER_ZOOKEEPER}-addrs
# Instances path
INSTANCES_PATH=$HOME/.hbase-${CLUSTER}-instances
# The script to run on instance boot.
USER_DATA_FILE=hbase-ec2-init-remote.sh
# The version number of the installed JDK.
JAVA_VERSION=1.6.0_20
JAVA_URL=http://hbase.s3.amazonaws.com/jdk/jdk-${JAVA_VERSION}-linux-@arch@.bin
# SUPPORTED_ARCHITECTURES = ['i386', 'x86_64']
# Change the BASE_AMI_IMAGE setting if you are creating custom AMI in a region
# other than us-east-1 (the default).
if [ "$SLAVE_INSTANCE_TYPE" = "m1.small" -o "$SLAVE_INSTANCE_TYPE" = "c1.medium" ]; then
SLAVE_ARCH='i386'
BASE_AMI_IMAGE="ami-48aa4921" # us-east-1 ec2-public-images/fedora-8-i386-base-v1.10.manifest.xml
#BASE_AMI_IMAGE="ami-810657c4" # us-west-1
#BASE_AMI_IMAGE="ami-0a48637e" # eu-west-1
else
SLAVE_ARCH='x86_64'
BASE_AMI_IMAGE="ami-f61dfd9f" # us-east-1 ec2-public-images/fedora-8-x86_64-base-v1.10.manifest.xml
#BASE_AMI_IMAGE="ami-870657c2" # us-west-1
#BASE_AMI_IMAGE="ami-927a51e6" # eu-west-1
fi
if [ "$MASTER_INSTANCE_TYPE" = "m1.small" -o "$MASTER_INSTANCE_TYPE" = "c1.medium" ]; then
MASTER_ARCH='i386'
else
MASTER_ARCH='x86_64'
fi
if [ "$ZOO_INSTANCE_TYPE" = "m1.small" -o "$ZOO_INSTANCE_TYPE" = "c1.medium" ]; then
ZOO_ARCH='i386'
else
ZOO_ARCH='x86_64'
fi
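Note that the instance type settings above use the VAR=${VAR:-default} idiom, so they can be overridden from the calling environment without editing this file; a hypothetical example (the sizes shown are only an illustration):
SLAVE_INSTANCE_TYPE=m1.large ZOO_INSTANCE_TYPE=m1.small bin/hbase-ec2 launch-cluster testcluster 3 3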


@ -1,303 +0,0 @@
#!/usr/bin/env bash
#
# Copyright 2010 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Script that is run on each EC2 instance on boot. It is passed in the EC2 user
# data, so should not exceed 16K in size.
MASTER_HOST="%MASTER_HOST%"
ZOOKEEPER_QUORUM="%ZOOKEEPER_QUORUM%"
NUM_SLAVES="%NUM_SLAVES%"
EXTRA_PACKAGES="%EXTRA_PACKAGES%"
SECURITY_GROUPS=`wget -q -O - http://169.254.169.254/latest/meta-data/security-groups`
IS_MASTER=`echo $SECURITY_GROUPS | awk '{ a = match ($0, "-master$"); if (a) print "true"; else print "false"; }'`
if [ "$IS_MASTER" = "true" ]; then
MASTER_HOST=`wget -q -O - http://169.254.169.254/latest/meta-data/local-hostname`
fi
HADOOP_HOME=`ls -d /usr/local/hadoop-*`
HADOOP_VERSION=`echo $HADOOP_HOME | cut -d '-' -f 2`
HBASE_HOME=`ls -d /usr/local/hbase-*`
HBASE_VERSION=`echo $HBASE_HOME | cut -d '-' -f 2`
export USER="root"
# up file-max
sysctl -w fs.file-max=65535
# up ulimits
echo "root soft nofile 65535" >> /etc/security/limits.conf
echo "root hard nofile 65535" >> /etc/security/limits.conf
ulimit -n 65535
# up epoll limits; ok if this fails, only valid for kernels 2.6.27+
sysctl -w fs.epoll.max_user_instances=65535 > /dev/null 2>&1
# up conntrack_max
sysctl -w net.ipv4.netfilter.ip_conntrack_max=65535 > /dev/null 2>&1
[ ! -f /etc/hosts ] && echo "127.0.0.1 localhost" > /etc/hosts
# Extra packages
if [ "$EXTRA_PACKAGES" != "" ] ; then
# format should be <repo-descriptor-URL> <package1> ... <packageN>
pkg=( $EXTRA_PACKAGES )
wget -nv -O /etc/yum.repos.d/user.repo ${pkg[0]}
yum -y update yum
yum -y install ${pkg[@]:1}
fi
# Ganglia
if [ "$IS_MASTER" = "true" ]; then
sed -i -e "s|\( *mcast_join *=.*\)|#\1|" \
-e "s|\( *bind *=.*\)|#\1|" \
-e "s|\( *mute *=.*\)| mute = yes|" \
-e "s|\( *location *=.*\)| location = \"master-node\"|" \
/etc/gmond.conf
mkdir -p /mnt/ganglia/rrds
chown -R ganglia:ganglia /mnt/ganglia/rrds
rm -rf /var/lib/ganglia; cd /var/lib; ln -s /mnt/ganglia ganglia; cd
service gmond start
service gmetad start
apachectl start
else
sed -i -e "s|\( *mcast_join *=.*\)|#\1|" \
-e "s|\( *bind *=.*\)|#\1|" \
-e "s|\(udp_send_channel {\)|\1\n host=$MASTER_HOST|" \
/etc/gmond.conf
service gmond start
fi
# Reformat sdb as xfs
umount /mnt
mkfs.xfs -f /dev/sdb
mount -o noatime /dev/sdb /mnt
# Probe for additional instance volumes
# /dev/sdb as /mnt is always set up by base image
DFS_NAME_DIR="/mnt/hadoop/dfs/name"
DFS_DATA_DIR="/mnt/hadoop/dfs/data"
i=2
for d in c d e f g h i j k l m n o p q r s t u v w x y z; do
m="/mnt${i}"
mkdir -p $m
mkfs.xfs -f /dev/sd${d}
if [ $? -eq 0 ] ; then
mount -o noatime /dev/sd${d} $m > /dev/null 2>&1
if [ $i -lt 3 ] ; then # no more than two namedirs
DFS_NAME_DIR="${DFS_NAME_DIR},${m}/hadoop/dfs/name"
fi
DFS_DATA_DIR="${DFS_DATA_DIR},${m}/hadoop/dfs/data"
i=$(( i + 1 ))
fi
done
# Hadoop configuration
( cd /usr/local && ln -s $HADOOP_HOME hadoop ) || true
cat > $HADOOP_HOME/conf/core-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/mnt/hadoop</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://$MASTER_HOST:8020</value>
</property>
</configuration>
EOF
cat > $HADOOP_HOME/conf/hdfs-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://$MASTER_HOST:8020</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>$DFS_NAME_DIR</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>$DFS_DATA_DIR</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>10</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>10000</value>
</property>
</configuration>
EOF
cat > $HADOOP_HOME/conf/mapred-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>$MASTER_HOST:8021</value>
</property>
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>4</value>
</property>
<property>
<name>mapred.map.tasks.speculative.execution</name>
<value>false</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx512m -XX:+UseCompressedOops</value>
</property>
</configuration>
EOF
# Add JVM options
cat >> $HADOOP_HOME/conf/hadoop-env.sh <<EOF
export HADOOP_OPTS="$HADOOP_OPTS -XX:+UseCompressedOops"
EOF
# Update classpath to include HBase jars and config
cat >> $HADOOP_HOME/conf/hadoop-env.sh <<EOF
export HADOOP_CLASSPATH="$HBASE_HOME/hbase-${HBASE_VERSION}.jar:$HBASE_HOME/lib/zookeeper-3.3.0.jar:$HBASE_HOME/conf"
EOF
# Configure Hadoop for Ganglia
cat > $HADOOP_HOME/conf/hadoop-metrics.properties <<EOF
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=$MASTER_HOST:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=$MASTER_HOST:8649
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=$MASTER_HOST:8649
EOF
# HBase configuration
( cd /usr/local && ln -s $HBASE_HOME hbase ) || true
cat > $HBASE_HOME/conf/hbase-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://$MASTER_HOST:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.regions.server.count.min</name>
<value>$NUM_SLAVES</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>$ZOOKEEPER_QUORUM</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>100</value>
</property>
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>3</value>
</property>
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>15</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.client.block.write.retries</name>
<value>100</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/mnt/hbase</value>
</property>
</configuration>
EOF
# Copy over mapred configuration for jobs started with 'hbase ...'
cp $HADOOP_HOME/conf/mapred-site.xml $HBASE_HOME/conf/mapred-site.xml
# Override JVM options
cat >> $HBASE_HOME/conf/hbase-env.sh <<EOF
export HBASE_MASTER_OPTS="-Xmx1000m -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/mnt/hbase/logs/hbase-master-gc.log"
export HBASE_REGIONSERVER_OPTS="-Xmx2000m -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=88 -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/mnt/hbase/logs/hbase-regionserver-gc.log"
EOF
# Configure log4j
sed -i -e 's/hadoop.hbase=DEBUG/hadoop.hbase=INFO/g' $HBASE_HOME/conf/log4j.properties
#sed -i -e 's/#log4j.logger.org.apache.hadoop.dfs=DEBUG/log4j.logger.org.apache.hadoop.dfs=DEBUG/g' $HBASE_HOME/conf/log4j.properties
# Configure HBase for Ganglia
cat > $HBASE_HOME/conf/hadoop-metrics.properties <<EOF
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=$MASTER_HOST:8649
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
hbase.period=10
hbase.servers=$MASTER_HOST:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=$MASTER_HOST:8649
EOF
mkdir -p /mnt/hadoop/logs /mnt/hbase/logs
if [ "$IS_MASTER" = "true" ]; then
# only format on first boot
[ ! -e /mnt/hadoop/dfs/name ] && "$HADOOP_HOME"/bin/hadoop namenode -format
"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode
"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker
"$HBASE_HOME"/bin/hbase-daemon.sh start master
else
"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode
"$HBASE_HOME"/bin/hbase-daemon.sh start regionserver
"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker
fi
rm -f /var/ec2/ec2-run-user-data.*


@ -1,50 +0,0 @@
#!/usr/bin/env bash
# ZOOKEEPER_QUORUM set in the environment by the caller
HBASE_HOME=`ls -d /usr/local/hbase-*`
###############################################################################
# HBase configuration (Zookeeper)
###############################################################################
cat > $HBASE_HOME/conf/hbase-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>$ZOOKEEPER_QUORUM</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/hbase/zk</value>
</property>
<property>
<name>hbase.zookeeper.property.maxClientCnxns</name>
<value>100</value>
</property>
</configuration>
EOF
###############################################################################
# Start services
###############################################################################
# up open file descriptor limits
echo "root soft nofile 32768" >> /etc/security/limits.conf
echo "root hard nofile 32768" >> /etc/security/limits.conf
# up epoll limits
# ok if this fails, only valid for kernels 2.6.27+
sysctl -w fs.epoll.max_user_instances=32768 > /dev/null 2>&1
mkdir -p /mnt/hbase/logs
mkdir -p /mnt/hbase/zk
[ ! -f /etc/hosts ] && echo "127.0.0.1 localhost" > /etc/hosts
"$HBASE_HOME"/bin/hbase-daemon.sh start zookeeper


@ -1,119 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Create an HBase AMI. Runs on the EC2 instance.
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
type=$INSTANCE_TYPE
[ -z "$type" ] && type=$SLAVE_INSTANCE_TYPE
arch=$ARCH
[ -z "$arch" ] && arch=$SLAVE_ARCH
echo "Remote: INSTANCE_TYPE is $type"
echo "Remote: ARCH is $arch"
# Perform any URL substitutions that must be done at this late stage
JAVA_URL=`echo $JAVA_URL | sed -e "s/@arch@/$arch/g"`
# Remove sensitive information
rm -f "$bin"/hbase-ec2-env.sh
rm -f "$bin"/credentials.sh
# Install Java
echo "Downloading and installing java binary."
cd /usr/local
wget -nv -O java.bin $JAVA_URL
sh java.bin
rm -f java.bin
# Install tools
echo "Installing rpms."
yum -y update
yum -y install rsync lynx screen ganglia-gmetad ganglia-gmond ganglia-web httpd php lzo-devel xfsprogs
yum -y clean all
chkconfig --levels 0123456 httpd off
chkconfig --levels 0123456 gmetad off
chkconfig --levels 0123456 gmond off
# Install Hadoop
echo "Installing Hadoop $HADOOP_VERSION."
cd /usr/local
wget -nv $HADOOP_URL
tar xzf hadoop-$HADOOP_VERSION.tar.gz
rm -f hadoop-$HADOOP_VERSION.tar.gz
# Configure Hadoop
sed -i \
-e "s|# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk${JAVA_VERSION}|"\
-e 's|# export HADOOP_LOG_DIR=.*|export HADOOP_LOG_DIR=/mnt/hadoop/logs|' \
-e 's|# export HADOOP_SLAVE_SLEEP=.*|export HADOOP_SLAVE_SLEEP=1|' \
-e 's|# export HADOOP_OPTS=.*|export HADOOP_OPTS=-server|' \
/usr/local/hadoop-$HADOOP_VERSION/conf/hadoop-env.sh
# Install HBase
echo "Installing HBase $HBASE_VERSION."
cd /usr/local
wget -nv $HBASE_URL
tar xzf hbase-$HBASE_VERSION.tar.gz
rm -f hbase-$HBASE_VERSION.tar.gz
# Configure HBase
sed -i \
-e "s|# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk${JAVA_VERSION}|"\
-e 's|# export HBASE_OPTS=.*|export HBASE_OPTS="$HBASE_OPTS -server -XX:+HeapDumpOnOutOfMemoryError"|' \
-e 's|# export HBASE_LOG_DIR=.*|export HBASE_LOG_DIR=/mnt/hbase/logs|' \
-e 's|# export HBASE_SLAVE_SLEEP=.*|export HBASE_SLAVE_SLEEP=1|' \
/usr/local/hbase-$HBASE_VERSION/conf/hbase-env.sh
# Run user data as script on instance startup
chmod +x /etc/init.d/ec2-run-user-data
echo "/etc/init.d/ec2-run-user-data" >> /etc/rc.d/rc.local
# Setup root user bash environment
echo "export JAVA_HOME=/usr/local/jdk${JAVA_VERSION}" >> /root/.bash_profile
echo "export HADOOP_HOME=/usr/local/hadoop-${HADOOP_VERSION}" >> /root/.bash_profile
echo "export HBASE_HOME=/usr/local/hbase-${HBASE_VERSION}" >> /root/.bash_profile
echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH' >> /root/.bash_profile
# Configure networking.
# Delete SSH authorized_keys since it includes the key it was launched with. (Note that it is re-populated when an instance starts.)
rm -f /root/.ssh/authorized_keys
# Ensure logging in to new hosts is seamless.
echo ' StrictHostKeyChecking no' >> /etc/ssh/ssh_config
# Install LZO
echo "Installing LZO codec support"
wget -nv -O /tmp/lzo-linux-${HADOOP_VERSION}.tar.gz $LZO_URL
cd /usr/local/hadoop-${HADOOP_VERSION} && tar xzf /tmp/lzo-linux-${HADOOP_VERSION}.tar.gz
cd /usr/local/hbase-${HBASE_VERSION} && tar xzf /tmp/lzo-linux-${HADOOP_VERSION}.tar.gz
rm -f /tmp/lzo-linux-${HADOOP_VERSION}.tar.gz
# Bundle and upload image
cd ~root
# Don't need to delete .bash_history since it isn't written until exit.
df -h
ec2-bundle-vol -d /mnt -k /mnt/pk*.pem -c /mnt/cert*.pem -u $AWS_ACCOUNT_ID -s 3072 -p hbase-$HBASE_VERSION-$arch -r $arch
ec2-upload-bundle -b $S3_BUCKET -m /mnt/hbase-$HBASE_VERSION-$arch.manifest.xml -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY
# End
echo Done


@ -1,51 +0,0 @@
#!/bin/bash
#
# ec2-run-user-data - Run instance user-data if it looks like a script.
#
# Only retrieves and runs the user-data script once per instance. If
# you want the user-data script to run again (e.g., on the next boot)
# then add this command in the user-data script:
# rm -f /var/ec2/ec2-run-user-data.*
#
# History:
# 2008-05-16 Eric Hammond <ehammond@thinksome.com>
# - Initial version including code from Kim Scheibel, Jorge Oliveira
# 2008-08-06 Tom White
# - Updated to use mktemp on fedora
#
prog=$(basename $0)
logger="logger -t $prog"
curl="curl --retry 3 --silent --show-error --fail"
instance_data_url=http://169.254.169.254/2008-02-01
# Wait until networking is up on the EC2 instance.
perl -MIO::Socket::INET -e '
until(new IO::Socket::INET("169.254.169.254:80")){print"Waiting for network...\n";sleep 1}
' | $logger
# Exit if we have already run on this instance (e.g., previous boot).
ami_id=$($curl $instance_data_url/meta-data/ami-id)
been_run_file=/var/ec2/$prog.$ami_id
mkdir -p $(dirname $been_run_file)
if [ -f $been_run_file ]; then
$logger < $been_run_file
exit
fi
# Retrieve the instance user-data and run it if it looks like a script
user_data_file=`mktemp -t ec2-user-data.XXXXXXXXXX`
chmod 700 $user_data_file
$logger "Retrieving user-data"
$curl -o $user_data_file $instance_data_url/user-data 2>&1 | $logger
if [ ! -s $user_data_file ]; then
$logger "No user-data available"
elif head -1 $user_data_file | egrep -v '^#!'; then
$logger "Skipping user-data as it does not begin with #!"
else
$logger "Running user-data"
echo "user-data has already been run on this instance" > $been_run_file
$user_data_file 2>&1 | logger -t "user-data"
$logger "user-data exit code: $?"
fi
rm -f $user_data_file


@ -1,87 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set up security groups for the EC2 HBase cluster
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
echo "Creating/checking security groups"
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_MASTER[[:space:]]" > /dev/null
if [ ! $? -eq 0 ]; then
echo "Creating group $CLUSTER_MASTER"
ec2-add-group $TOOL_OPTS $CLUSTER_MASTER -d "Group for HBase Master."
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -p 22 # ssh
if [ $ENABLE_WEB_PORTS = "true" ]; then
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -p 50070 # NameNode web interface
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -p 50075 # DataNode web interface
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -p 60010 # HBase master web interface
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -p 60030 # HBase region server web interface
fi
else
echo "Security group $CLUSTER_MASTER exists, ok"
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER[[:space:]]" > /dev/null
if [ ! $? -eq 0 ]; then
echo "Creating group $CLUSTER"
ec2-add-group $TOOL_OPTS $CLUSTER -d "Group for HBase Slaves."
ec2-authorize $TOOL_OPTS $CLUSTER -o $CLUSTER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER -p 22 # ssh
if [ $ENABLE_WEB_PORTS = "true" ]; then
ec2-authorize $TOOL_OPTS $CLUSTER -p 50070 # NameNode web interface
ec2-authorize $TOOL_OPTS $CLUSTER -p 50075 # DataNode web interface
ec2-authorize $TOOL_OPTS $CLUSTER -p 60010 # HBase master web interface
ec2-authorize $TOOL_OPTS $CLUSTER -p 60030 # HBase region server web interface
fi
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
else
echo "Security group $CLUSTER exists, ok"
fi
# Set up zookeeper group
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_ZOOKEEPER[[:space:]]" > /dev/null
if [ ! $? -eq 0 ]; then
echo "Creating group $CLUSTER_ZOOKEEPER"
ec2-add-group $TOOL_OPTS $CLUSTER_ZOOKEEPER -d "Group for HBase Zookeeper quorum."
ec2-authorize $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER_ZOOKEEPER -p 22 # ssh
ec2-authorize $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-authorize $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER -u $AWS_ACCOUNT_ID
else
echo "Security group $CLUSTER_ZOOKEEPER exists, ok"
fi


@ -1,63 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch an EC2 cluster of HBase instances.
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
if [ -z $2 ]; then
echo "Must specify the number of slaves to start."
exit 1
fi
SLAVES=$2
if [ -z $3 ]; then
echo "Must specify the number of zookeepers to start."
exit 1
fi
ZOOS=$3
# Set up security groups
if ! "$bin"/init-hbase-cluster-secgroups $CLUSTER ; then
exit 1
fi
# Launch the ZK quorum peers
if ! "$bin"/launch-hbase-zookeeper $CLUSTER $ZOOS ; then
exit 1
fi
# Launch the HBase master
if ! "$bin"/launch-hbase-master $CLUSTER $SLAVES ; then
exit 1
fi
# Launch the HBase slaves
if ! "$bin"/launch-hbase-slaves $CLUSTER $SLAVES ; then
exit 1
fi


@ -1,107 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch an EC2 HBase master.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
if [ -z $2 ]; then
echo "Must specify the number of slaves to start."
exit 1
fi
NUM_SLAVES=$2
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
if [ -z $AWS_ACCOUNT_ID ]; then
echo "Please set AWS_ACCOUNT_ID in $bin/hbase-ec2-env.sh."
exit 1
fi
type=$MASTER_INSTANCE_TYPE
[ -z "$type" ] && type=$SLAVE_INSTANCE_TYPE
arch=$MASTER_ARCH
[ -z "$arch" ] && arch=$SLAVE_ARCH
echo "Testing for existing master in group: $CLUSTER"
host=`ec2-describe-instances $TOOL_OPTS | awk '"RESERVATION" == $1 && "'$CLUSTER_MASTER'" == $4, "RESERVATION" == $1 && "'$CLUSTER_MASTER'" != $4'`
host=`echo "$host" | awk '"INSTANCE" == $1 && "running" == $6 {print $4}'`
if [ ! -z "$host" ]; then
echo "Master already running on: $host"
exit 0
fi
# Finding HBase image
[ -z "$AMI_IMAGE" ] && AMI_IMAGE=`ec2-describe-images $TOOL_OPTS -a | grep -e $S3_BUCKET -e $S3_ACCOUNT | grep hbase | grep $HBASE_VERSION | grep $arch | grep available | awk '{print $2}'`
# Start a master
echo "Starting master with AMI $AMI_IMAGE (arch $arch)"
# Substituting zookeeper quorum
quorum=`cat $ZOOKEEPER_QUORUM_PATH`
sed -e "s|%ZOOKEEPER_QUORUM%|$quorum|" \
-e "s|%NUM_SLAVES%|$NUM_SLAVES|" \
-e "s|%EXTRA_PACKAGES%|$EXTRA_PACKAGES|" \
"$bin"/$USER_DATA_FILE > "$bin"/$USER_DATA_FILE.master
inst=`ec2-run-instances $TOOL_OPTS $AMI_IMAGE -n 1 -g $CLUSTER_MASTER -k root -f "$bin"/$USER_DATA_FILE.master -t $type | grep INSTANCE | awk '{print $2}'`
if [ "$ENABLE_ELASTIC_IPS" = "true" ] ; then
addr=`ec2-allocate-address $TOOL_OPTS | awk '{print $2}'`
ec2-associate-address $TOOL_OPTS $addr -i $inst
fi
echo -n "Waiting for instance $inst to start"
while true; do
printf "."
# get private dns
host=`ec2-describe-instances $TOOL_OPTS $inst | grep running | awk '{print $5}'`
if [ ! -z $host ]; then
echo " Started as $host"
break;
fi
sleep 1
done
rm -f "$bin"/$USER_DATA_FILE.master
# get public (elastic) hostname
host=`ec2-describe-instances $TOOL_OPTS $inst | grep INSTANCE | grep running | grep $host | awk '{print $4}'`
echo $host > $MASTER_ADDR_PATH
echo $addr > $MASTER_IP_PATH
# get zone
zone=`ec2-describe-instances $TOOL_OPTS $inst | grep INSTANCE | grep running | grep $host | awk '{print $11}'`
echo $zone > $MASTER_ZONE_PATH
while true; do
REPLY=`ssh $SSH_OPTS "root@$host" 'echo "hello"'`
if [ ! -z $REPLY ]; then
break;
fi
sleep 5
done
scp $SSH_OPTS $EC2_ROOT_SSH_KEY "root@$host:/root/.ssh/id_rsa"
ssh $SSH_OPTS "root@$host" "chmod 600 /root/.ssh/id_rsa"
echo "Master is $host in zone $zone"


@ -1,61 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch EC2 HBase slaves.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
if [ -z $2 ]; then
echo "Must specify the number of slaves to start."
exit 1
fi
NUM_SLAVES=$2
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
if [ ! -f $MASTER_ADDR_PATH ]; then
echo "Must start Cluster Master first!"
exit 1
fi
[ -z "$AMI_IMAGE" ] && AMI_IMAGE=`ec2-describe-images $TOOL_OPTS -a | grep -e $S3_BUCKET -e $S3_ACCOUNT | grep hbase | grep $HBASE_VERSION | grep $SLAVE_ARCH | grep available | awk '{print $2}'`
master=`cat $MASTER_ADDR_PATH`
zone=`cat $MASTER_ZONE_PATH`
quorum=`cat $ZOOKEEPER_QUORUM_PATH`
# Substituting master hostname and zookeeper quorum
sed -e "s|%MASTER_HOST%|$master|" \
-e "s|%NUM_SLAVES%|$NUM_SLAVES|" \
-e "s|%ZOOKEEPER_QUORUM%|$quorum|" \
-e "s|%EXTRA_PACKAGES%|$EXTRA_PACKAGES|" \
"$bin"/$USER_DATA_FILE > "$bin"/$USER_DATA_FILE.slave
# Start slaves
echo "Starting $NUM_SLAVES AMI(s) with ID $AMI_IMAGE (arch $SLAVE_ARCH) in group $CLUSTER in zone $zone"
ec2-run-instances $TOOL_OPTS $AMI_IMAGE -n "$NUM_SLAVES" -g "$CLUSTER" -k root -f "$bin"/$USER_DATA_FILE.slave -t "$SLAVE_INSTANCE_TYPE" -z "$zone" | grep INSTANCE | awk '{print $2}' > $INSTANCES_PATH
rm "$bin"/$USER_DATA_FILE.slave


@ -1,94 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch the EC2 HBase Zookeepers.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
if [ -z $2 ]; then
echo "Must specify the number of zookeeper quorum peers to start."
exit 1
fi
CLUSTER=$1
NO_INSTANCES=$2
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
type=$ZOO_INSTANCE_TYPE
[ -z "$type" ] && type=$SLAVE_INSTANCE_TYPE
arch=$ZOO_ARCH
[ -z "$arch" ] && arch=$SLAVE_ARCH
# Finding HBase image
[ -z "$ZOO_AMI_IMAGE" ] && ZOO_AMI_IMAGE=`ec2-describe-images $TOOL_OPTS -a | grep -e $S3_BUCKET -e $S3_ACCOUNT | grep hbase | grep $HBASE_VERSION | grep $arch | grep available | awk '{print $2}'`
# Start Zookeeper instances
echo "Starting ZooKeeper quorum ensemble."
peers=""
peer_addrs=""
i=0
while [ $i -lt $NO_INSTANCES ] ; do
echo "Starting an AMI with ID $ZOO_AMI_IMAGE (arch $arch) in group $CLUSTER_ZOOKEEPER"
inst=`ec2-run-instances $TOOL_OPTS $ZOO_AMI_IMAGE -n 1 -g $CLUSTER_ZOOKEEPER -k root -t $type | grep INSTANCE | awk '{print $2}'`
if [ "$ENABLE_ELASTIC_IPS" = "true" ] ; then
addr=`ec2-allocate-address $TOOL_OPTS | awk '{print $2}'`
ec2-associate-address $TOOL_OPTS $addr -i $inst
fi
echo -n "Waiting for instance $inst to start: "
while true; do
printf "."
priv=`ec2-describe-instances $TOOL_OPTS $inst | grep running | awk '{print $5}'`
if [ ! -z $priv ]; then
echo " Started ZooKeeper instance $inst as $priv"
break
fi
sleep 1
done
host=`ec2-describe-instances $TOOL_OPTS $inst | grep INSTANCE | awk '{print $4}'`
peers="$peers $host"
peer_addrs="$peer_addrs $addr"
i=$(($i + 1))
done
quorum=`echo $peers | sed -e 's/ /,/g'`
echo $quorum > $ZOOKEEPER_QUORUM_PATH
echo $peer_addrs > $ZOOKEEPER_ADDR_PATH
echo "ZooKeeper quorum is $quorum"
# Start Zookeeper quorum
sleep 10
echo "Initializing the ZooKeeper quorum ensemble"
for host in $peers ; do
echo " $host"
i=0
while [ $i -lt 3 ] ; do
scp $SSH_OPTS "$bin"/hbase-ec2-init-zookeeper-remote.sh "root@${host}:/var/tmp" && ssh $SSH_OPTS "root@${host}" "sh -c \"ZOOKEEPER_QUORUM=\"$ZOOKEEPER_QUORUM\" sh /var/tmp/hbase-ec2-init-zookeeper-remote.sh\"" && break
sleep 5
i=$(($i + 1))
done
done


@ -1,31 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# List running clusters.
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
# Finding HBase clusters
CLUSTERS=`ec2-describe-instances $TOOL_OPTS | awk '"RESERVATION" == $1 && $4 ~ /-master$/, "INSTANCE" == $1' | tr '\n' '\t' | grep running | cut -f4 | rev | cut -d'-' -f2- | rev`
[ -z "$CLUSTERS" ] && echo "No running clusters." && exit 0
echo "Running HBase clusters:"
echo "$CLUSTERS"


@ -1,32 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Print the master address of a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
cat $MASTER_ADDR_PATH


@ -1,38 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# List the running slave instances of a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
if [ ! -f $INSTANCES_PATH ]; then
echo "Must start Cluster first!"
exit 1
fi
instances=`cat $INSTANCES_PATH`
ec2-describe-instances $TOOL_OPTS $instances | grep INSTANCE | grep running | awk '{print $4}'


@ -1,32 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Print the ZooKeeper quorum of a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
cat $ZOOKEEPER_QUORUM_PATH


@ -1,46 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Reboot a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
# Finding running cluster instances
HBASE_INSTANCES=`ec2-describe-instances $TOOL_OPTS | awk '"RESERVATION" == $1 && ("'$CLUSTER'" == $4 || "'$CLUSTER_MASTER'" == $4 || "'$CLUSTER_ZOOKEEPER'" == $4), "RESERVATION" == $1 && ("'$CLUSTER'" != $4 && "'$CLUSTER_MASTER'" != $4 && "'$CLUSTER_ZOOKEEPER'" != $4)'`
HBASE_INSTANCES=`echo "$HBASE_INSTANCES" | grep INSTANCE | grep running`
[ -z "$HBASE_INSTANCES" ] && echo "No running instances in cluster $CLUSTER." && exit 0
echo "Running HBase instances:"
echo "$HBASE_INSTANCES"
read -p "Reboot all instances? [yes or no]: " answer
if [ "$answer" != "yes" ]; then
exit 1
fi
ec2-reboot-instances $TOOL_OPTS `echo "$HBASE_INSTANCES" | awk '{print $2}'`


@ -1,68 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Clean up security groups for the EC2 HBase cluster
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
echo "Revoking security groups"
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_MASTER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-revoke $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-revoke $TOOL_OPTS $CLUSTER -o $CLUSTER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_ZOOKEEPER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-revoke $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER_MASTER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER_MASTER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER -o $CLUSTER_ZOOKEEPER -u $AWS_ACCOUNT_ID
ec2-revoke $TOOL_OPTS $CLUSTER_ZOOKEEPER -o $CLUSTER -u $AWS_ACCOUNT_ID
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_MASTER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-delete-group $TOOL_OPTS $CLUSTER_MASTER
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER_ZOOKEEPER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-delete-group $TOOL_OPTS $CLUSTER_ZOOKEEPER
fi
ec2-describe-group $TOOL_OPTS | egrep "[[:space:]]$CLUSTER[[:space:]]" > /dev/null
if [ $? -eq 0 ]; then
ec2-delete-group $TOOL_OPTS $CLUSTER
fi


@ -1,61 +0,0 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Terminate a cluster.
if [ -z $1 ]; then
echo "Cluster name required!"
exit 1
fi
CLUSTER=$1
# Import variables
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
. "$bin"/hbase-ec2-env.sh
# Finding running cluster instances
HBASE_INSTANCES=`ec2-describe-instances $TOOL_OPTS | awk '"RESERVATION" == $1 && ("'$CLUSTER'" == $4 || "'$CLUSTER_MASTER'" == $4 || "'$CLUSTER_ZOOKEEPER'" == $4), "RESERVATION" == $1 && ("'$CLUSTER'" != $4 && "'$CLUSTER_MASTER'" != $4 && "'$CLUSTER_ZOOKEEPER'" != $4)'`
HBASE_INSTANCES=`echo "$HBASE_INSTANCES" | grep INSTANCE | grep running`
[ -z "$HBASE_INSTANCES" ] && echo "No running instances in cluster $CLUSTER." && exit 0
echo "Running HBase instances:"
echo "$HBASE_INSTANCES"
read -p "Terminate all instances? [yes or no]: " answer
if [ "$answer" != "yes" ]; then
exit 1
fi
ec2-terminate-instances $TOOL_OPTS `echo "$HBASE_INSTANCES" | awk '{print $2}'`
# clean up elastic IPs
if [ "$ENABLE_ELASTIC_IPS" = "true" ] ; then
# master
ec2-release-address $TOOL_OPTS `cat $MASTER_IP_PATH`
# zookeeper quorum ensemble
for addr in `cat $ZOOKEEPER_ADDR_PATH` ; do
ec2-release-address $TOOL_OPTS $addr
done
fi
# clean up state files
rm -f $ZOOKEEPER_ADDR_PATH $ZOOKEEPER_QUORUM_PATH
rm -f $MASTER_IP_PATH $MASTER_ADDR_PATH $MASTER_ZONE_PATH