diff --git a/CHANGES.txt b/CHANGES.txt
index 35f4fd807cd..c7ae4d1b187 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -40,6 +40,8 @@ Trunk (unreleased changes)
    HADOOP-1760 Use new MapWritable and SortedMapWritable classes from
                org.apache.hadoop.io
    HADOOP-1802 Startup scripts should wait until hdfs as cleared 'safe mode'
+   HADOOP-1835 Updated Documentation for HBase setup/installation
+               (Izaak Rubin via Stack)
 
 Below are the list of changes before 2007-08-18
 
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
index ac2b8d5903c..23707104453 100644
--- a/conf/hbase-env.sh
+++ b/conf/hbase-env.sh
@@ -21,14 +21,6 @@
 # Set HBase-specific environment variables here.
 
-# The only required environment variable is JAVA_HOME. All others are
-# optional. When running a distributed configuration it is best to
-# set JAVA_HOME in this file, so that it is correctly defined on
-# remote nodes.
-
-# The java implementation to use. Required.
-# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
-
 # Extra Java CLASSPATH elements. Optional.
 # export HBASE_CLASSPATH=
 
@@ -38,5 +30,5 @@
 # Extra Java runtime options. Empty by default.
 # export HBASE_OPTS=-server
 
-# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
+# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
 # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
diff --git a/src/java/org/apache/hadoop/hbase/package.html b/src/java/org/apache/hadoop/hbase/package.html
index 7e5b84826c9..50939572f21 100644
--- a/src/java/org/apache/hadoop/hbase/package.html
+++ b/src/java/org/apache/hadoop/hbase/package.html
@@ -7,46 +7,104 @@ simple database.
-Set JAVA_HOME to the root of your Java installation
+Set JAVA_HOME to the root of your Java installation when configuring Hadoop.
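Because the patch removes JAVA_HOME from hbase-env.sh, the variable now has to be defined for Hadoop itself. As a minimal sketch, the relevant lines in ${HADOOP_HOME}/conf/hadoop-env.sh might look like the following; the path is only an example and should point at the root of your own Java installation:

    # The java implementation to use.  Required.
    export JAVA_HOME=/usr/lib/j2sdk1.5-sun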
+First, you need a working instance of Hadoop.  Download a recent release from
+Hadoop downloads.
-Unpack the release and connect to its top-level directory.  Let this be
-${HADOOP_HOME}.  Edit the file ${HADOOP_HOME}/conf/hadoop-env.sh
-to define at least JAVA_HOME.  Also, add site-particular
-customizations to the file ${HADOOP_HOME}/conf/hadoop-site.xml.
-Try the following command:
-bin/hadoop
+Start by defining the following directory variables for your convenience:
-Next, change to the hbase root.  Let this be ${HBASE_HOME}.  It is
-usually located at ${HADOOP_HOME}/src/contrib/hbase.  Configure hbase.
-Edit ${HBASE_HOME}/conf/hbase-env.sh and
-${HBASE_HOME}/conf/hbase-site.xml to make site particular settings.
-List the hosts running regionservers in ${HBASE_HOME}/conf/regionservers.
+
+${HADOOP_HOME}: The root directory of your Hadoop installation.
+${HBASE_HOME}: The HBase root, located at ${HADOOP_HOME}/src/contrib/hbase.
+
+If you are running a standalone operation, proceed to Running and Confirming
+Your Installation.  If you are running a distributed operation, continue below.
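For convenience, the two directory variables above could also be exported in the shell used for the commands that follow. This is only a sketch; the Hadoop path is a placeholder for wherever you unpacked your release, while the HBase path follows the contrib layout described above:

    export HADOOP_HOME=/path/to/hadoop                    # root of your Hadoop installation (placeholder)
    export HBASE_HOME=${HADOOP_HOME}/src/contrib/hbase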
+
+Distributed Operation
+
+Make sure you have followed Hadoop's instructions for running a distributed operation.
+Configuring HBase for a distributed operation requires modification of the following two
+files: ${HBASE_HOME}/conf/hbase-site.xml and
+${HBASE_HOME}/conf/regionservers.
-Here is how to start and then stop hbase:
-${HBASE_HOME}/bin/start-hbase.sh
+
+hbase-site.xml allows the user to override the properties defined in
+${HBASE_HOME}/conf/hbase-default.xml.  hbase-default.xml itself
+should never be modified.  At a minimum the hbase.master property should be redefined
+in hbase-site.xml to define the host:port pair on which to run the
+HMaster (read about the HBase master, regionservers, etc):
+
+<configuration>
+
+  <property>
+    <name>hbase.master</name>
+    <value>[YOUR_HOST]:[PORT]</value>
+    <description>The host and port that the HBase master runs at.
+    </description>
+  </property>
+
+</configuration>
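For illustration only, a filled-in value might read as follows; the hostname and port are hypothetical and should be replaced with the host and port you actually want the HMaster to use:

    <value>master.example.org:60000</value>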
+
+The regionservers file lists all the hosts running HRegionServers, one
+host per line (this file is analogous to the slaves file at
+${HADOOP_HOME}/conf/slaves).
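As a sketch, a regionservers file for a three-node deployment would contain nothing but hostnames, one per line; the names below are hypothetical:

    rs1.example.org
    rs2.example.org
    rs3.example.org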
+
+Additional Notes on Distributed Operation
+
+HBase-specific environment settings can be made in ${HBASE_HOME}/conf/hbase-env.sh.
+
+Running and Confirming Your Installation
+
+If you are running a distributed operation you will need to start the Hadoop daemons
+before starting HBase and stop the daemons after HBase has shut down.  Start and
+stop the Hadoop daemons as per the Hadoop instructions.  Afterwards,
+or if running a standalone operation, start HBase with the following command:
++${HBASE_HOME}/bin/start-hbase.sh ++
+Once HBase has started, enter ${HBASE_HOME}/bin/hbase shell
to obtain a
+shell against HBase from which you can execute HBase commands. In the HBase shell, type
+help;
to see a list of supported commands. Note that all commands in the HBase
+shell must end with ;
. Test your installation by creating, viewing, and dropping
+a table, as per the help instructions. Be patient with the create
and
+drop
operations as they may each take 30 seconds or more. To stop hbase, exit the
+HBase shell and enter:
+
+
+${HBASE_HOME}/bin/stop-hbase.sh
+
+If you are running a distributed operation, be sure to wait until HBase has shut down completely
+before stopping the Hadoop daemons.
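Putting the start, shell, and stop steps together, a session might look like the following sketch; the table-manipulation commands themselves are listed by help; and are not reproduced here:

    ${HBASE_HOME}/bin/start-hbase.sh     # start HBase (after the Hadoop daemons, if distributed)
    ${HBASE_HOME}/bin/hbase shell        # type help; inside, then test create/drop as it directs
    ${HBASE_HOME}/bin/stop-hbase.sh      # run after exiting the HBase shell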
-Logs can be found in ${HADOOP_LOG_DIR}.
-To obtain a shell against a running hbase instance, run:
-${HBASE_HOME}/bin/hbase shell
-Once the shell is up, type help; to see list of supported commands.
+
+The default location for logs is ${HADOOP_HOME}/logs.
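To confirm where output is going, a quick check of that directory might look like this sketch; log file names vary by user and host:

    ls ${HADOOP_HOME}/logs               # HBase and Hadoop daemon logs collect here by default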