Getting Started

Introduction

Quick Start will get you up and
running on a single-node instance of HBase using the local filesystem. The
Not-so-quick Start Guide describes setup
of HBase in distributed mode running on top of HDFS.

Quick Start

This guide describes setup of a standalone HBase instance that uses
the local filesystem. It leads you through creating a table, inserting
rows via the HBase Shell, and then cleaning
up and shutting down your standalone HBase instance. The below exercise
should take no more than ten minutes (not including download time).

Download and unpack the latest stable release.

Choose a download site from this list of Apache Download
Mirrors. Click on the suggested top link. This will take you to a
mirror of HBase Releases. Click on the folder named
stable and then download the file that ends in
.tar.gz to your local filesystem; e.g.
hbase-.tar.gz.

Decompress and untar your download and then change into the
unpacked directory.

$ tar xfz hbase-.tar.gz
$ cd hbase-
At this point, you are ready to start HBase. But before starting
it, you might want to edit conf/hbase-site.xml and
set the directory you want HBase to write to,
hbase.rootdir.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///DIRECTORY/hbase</value>
</property>
</configuration>
Replace DIRECTORY in the above with a
path to a directory where you want HBase to store its data. By default,
hbase.rootdir is set to
/tmp/hbase-${user.name} which means you'll lose all
your data whenever your server reboots (Most operating systems clear
/tmp on restart).

Start HBase

Now start HBase:

$ ./bin/start-hbase.sh
starting Master, logging to logs/hbase-user-master-example.org.out

You should now have a running standalone HBase instance. In
standalone mode, HBase runs all daemons in a single JVM; i.e. both
the HBase and ZooKeeper daemons. HBase logs can be found in the
logs subdirectory. Check them out especially if
HBase had trouble starting.

Is java installed?

All of the above presumes a 1.6 version of Oracle
java is installed on your machine and
available on your path; i.e. when you type
java, you see output that describes the
options the java program takes (HBase requires java 6). If this is not
the case, HBase will not start. Install java, edit
conf/hbase-env.sh, uncommenting the
JAVA_HOME line and pointing it at your java install. Then
retry the steps above.
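For example, a quick check and a typical hbase-env.sh edit might look like the following (the JAVA_HOME path is only an illustration; point it at wherever your JDK actually lives):

$ java -version                               # should report a 1.6.x JVM; install one if this fails
$ vi conf/hbase-env.sh                        # uncomment and adjust the JAVA_HOME line, e.g.
# export JAVA_HOME=/usr/lib/jvm/java-6-sun    # example path only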

Shell Exercises

Connect to your running HBase via the HBase
Shell.

$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010
hbase(main):001:0>

Type help and then
<RETURN> to see a listing of shell commands and
options. Browse at least the paragraphs at the end of the help emission
for the gist of how variables and command arguments are entered into the
HBase shell; in particular note how table names, rows, and columns,
etc., must be quoted.

Create a table named test with a single column family named cf.
Verify its creation by listing all tables and then insert some
values.

hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list
test
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds

Above we inserted 3 values, one at a time. The first insert is at
row1, column cf:a with a value of
value1. Columns in HBase are comprised of a column family prefix --
cf in this example -- followed by a colon and then a
column qualifier suffix (a in this case).

Verify the data insert.

Run a scan of the table by doing the following:

hbase(main):007:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1288380727188, value=value1
row2 column=cf:b, timestamp=1288380738440, value=value2
row3 column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds

Get a single row as follows:

hbase(main):008:0> get 'test', 'row1'
COLUMN CELL
cf:a timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds

Now, disable and drop your table. This will clean up everything done
above.

hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds

Exit the shell by typing exit.

hbase(main):014:0> exit

Stopping HBase

Stop your HBase instance by running the stop script.

$ ./bin/stop-hbase.sh
stopping hbase...............

Where to go next

The above described standalone setup is good for testing and
experiments only. Move on to the next section, the Not-so-quick Start Guide where we'll go into
depth on the different HBase run modes, requirements, and critical
configurations needed to set up a distributed HBase deploy.

Not-so-quick Start Guide

Requirements

HBase has the following requirements. Please read the section
below carefully and ensure that all requirements have been satisfied.
Failure to do so will cause you (and us) grief debugging strange errors
and/or data loss.

java

Just like Hadoop, HBase requires java 6 from Oracle. Usually
you'll want to use the latest version available except the problematic
u18 (u24 is the latest version as of this writing).

hadoop

This version of HBase will only run on Hadoop
0.20.x. It will not run on hadoop 0.21.x (nor 0.22.x). HBase
will lose data unless it is running on an HDFS that has a durable
sync. Currently only the branch-0.20-append
branch has this attribute (see CHANGES.txt
in branch-0.20-append for the list of patches involved in adding
append to the Hadoop 0.20 branch). No official releases have been made from this branch up
to now so you will have to build your own Hadoop from the tip of this
branch. Check it out using this url, branch-0.20-append.
Scroll down in the Hadoop How To
Release to the section Build Requirements
for instructions on how to build Hadoop.

Or rather than build your own, you could use Cloudera's CDH3. CDH has
the 0.20-append patches needed to add a durable sync (CDH3 betas will
suffice; b2, b3, or b4).Because HBase depends on Hadoop, it bundles an instance of the
Hadoop jar under its lib directory. The bundled
Hadoop was made from the Apache branch-0.20-append branch at the time
of this HBase's release. It is critical that the
version of Hadoop that is out on your cluster matches the version
HBase bundles. Replace the hadoop jar found in the HBase
lib directory with the hadoop jar you are running
out on your cluster to avoid version mismatch issues. Make sure you
replace the jar all over your cluster. For example, versions of CDH do
not have HDFS-724 whereas Hadoop's branch-0.20-append branch does have
HDFS-724. This patch changes the RPC version because the protocol was
changed. Version mismatch issues have various manifestations, but often
everything looks like it's hung up.
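As a rough sketch of the jar swap described above (jar names and install paths are illustrative and vary by release):

$ cd ${HBASE_HOME}/lib
$ mv hadoop-core-*.jar /tmp/               # set the bundled jar aside
$ cp ${HADOOP_HOME}/hadoop-core-*.jar .    # copy in the jar your cluster actually runs
# repeat (or rsync) this on every node running HBase, then restart HBase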

Can I just replace the jar in Hadoop 0.20.2 tarball with the
sync-supporting Hadoop jar found in
HBase?

You could do this. It works going by a recent posting up on
the mailing
list.

Hadoop Security

HBase will run on any Hadoop 0.20.x that incorporates Hadoop
security features -- e.g. Y! 0.20S or CDH3B3 -- as long as you do as
suggested above and replace the Hadoop jar that ships with HBase
with the secure version.

ssh

ssh must be installed and
sshd must be running to use Hadoop's scripts to
manage remote Hadoop and HBase daemons. You must be able to ssh to all
nodes, including your local node, using passwordless login (Google
"ssh passwordless login").
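A minimal sketch of setting up passwordless login for the user that runs the daemons (key type and paths are just one common choice):

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys   # or use ssh-copy-id to push the key to each node
$ ssh localhost date                                # should run without prompting for a password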

DNS

HBase uses the local hostname to self-report its IP address.
Both forward and reverse DNS resolving should work.

If your machine has multiple interfaces, HBase will use the
interface that the primary hostname resolves to.

If this is insufficient, you can set
hbase.regionserver.dns.interface to indicate the
primary interface. This only works if your cluster configuration is
consistent and every host has the same network interface
configuration.Another alternative is setting
hbase.regionserver.dns.nameserver to choose a
different nameserver than the system-wide default.
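A quick way to sanity-check resolution on each node, assuming the host utility is installed:

$ hostname                    # the name HBase will self-report
$ host $(hostname)            # forward lookup: should return this node's IP
$ host <ip-from-above>        # reverse lookup: should map back to the hostname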

NTP

The clocks on cluster members should be in basic alignment.
Some skew is tolerable but wild skew could generate odd behaviors. Run
NTP
on your cluster, or an equivalent.

If you are having problems querying data, or "weird" cluster
operations, check system time!
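For example, with the classic ntp tools installed you can eyeball the skew like so:

$ ntpq -p                     # if ntpd is running, shows peers and current offsets
$ ntpdate -q pool.ntp.org     # or query a public server to see this host's clock offset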

ulimit

HBase is a database; it uses a lot of files at the same time.
The default ulimit -n of 1024 on *nix systems is insufficient. Any
significant amount of loading will lead you to FAQ: Why do I
see "java.io.IOException...(Too many open files)" in my logs?.
You may also notice errors such as:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Do yourself a favor and change the upper bound on the
number of file descriptors. Set it to north of 10k. See the above
referenced FAQ for how.

To be clear, upping the file descriptors for the user who is
running the HBase process is an operating system configuration, not an
HBase configuration. Also, a common mistake is that administrators
will up the file descriptors for a particular user but for whatever
reason, HBase will be running as someone else. HBase prints in its
logs, as the first line, the ulimit it is seeing. Ensure it is correct.
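A quick way to verify both, assuming the daemons run as a user named hbase (substitute your own user and log location):

$ su - hbase -c 'ulimit -n'                   # the limit the HBase user actually gets
$ grep -i ulimit logs/hbase-*-master-*.log    # what HBase itself reported at startup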
A useful read on setting configuration for your Hadoop cluster is Aaron
Kimball's Configuration
Parameters: What can you just ignore?

ulimit on Ubuntu

If you are on Ubuntu you will need to make the following
changes:

In the file /etc/security/limits.conf add a line like:

hadoop - nofile 32768

Replace hadoop with whatever user is running
Hadoop and HBase. If you have separate users, you will need 2
entries, one for each user.

In the file /etc/pam.d/common-session add
as the last line in the file:

session required pam_limits.so

Otherwise the changes in
/etc/security/limits.conf won't be
applied.

Don't forget to log out and back in again for the changes to
take effect!

dfs.datanode.max.xcievers

A Hadoop HDFS DataNode has an upper bound on the number of
files that it will serve at any one time. The upper bound parameter is
called xcievers (yes, this is misspelled). Again,
before doing any loading, make sure you have configured Hadoop's
conf/hdfs-site.xml, setting the
xcievers value to at least the following:
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
Be sure to restart your HDFS after making the above
configuration.

Not having this configuration in place makes for strange-looking
failures. Eventually you'll see a complaint in the datanode logs
about the xcievers limit being exceeded, but on the run up to this, one
manifestation is complaints about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
java.io.IOException: No live nodes contain current block. Will get new
block locations from namenode and retry...

Windows

HBase has been little tested running on Windows. Running a
production install of HBase on top of Windows is not
recommended.

If you are running HBase on Windows, you must install Cygwin to have a *nix-like
environment for the shell scripts. The full details are explained in
the Windows
Installation guide. Also
search our user mailing list to pick
up the latest fixes figured out by Windows users.

HBase run modes: Standalone and Distributed

HBase has two run modes: standalone and distributed. Out of the box, HBase runs in
standalone mode. To set up a distributed deploy, you will need to
configure HBase by editing files in the HBase conf
directory.

Whatever your mode, you will need to edit
conf/hbase-env.sh to tell HBase which
java to use. In this file you set HBase environment
variables such as the heapsize and other options for the
JVM, the preferred location for log files,
etc. Set JAVA_HOME to point at the root of your
java install.

Standalone HBase

This is the default mode. Standalone mode is what is described
in the quickstart section. In
standalone mode, HBase does not use HDFS -- it uses the local
filesystem instead -- and it runs all HBase daemons and a local
ZooKeeper all up in the same JVM. ZooKeeper binds to a well-known port
so clients may talk to HBase.

Distributed

Distributed mode can be subdivided into distributed but all
daemons run on a single node -- a.k.a.
pseudo-distributed -- and
fully-distributed where the daemons are spread
across all nodes in the cluster (the pseudo-distributed vs. fully-distributed nomenclature
comes from Hadoop).

Distributed modes require an instance of the Hadoop
Distributed File System (HDFS). See the Hadoop
requirements and instructions for how to set up an HDFS. Before
proceeding, ensure you have an appropriate, working HDFS.

Below we describe the different distributed setups. Starting,
verification and exploration of your install, whether a
pseudo-distributed or
fully-distributed configuration, is described in a
section that follows, Running and Confirming
your Installation. The same verification script applies to both
deploy types.

Pseudo-distributed

A pseudo-distributed mode is simply a distributed mode run on
a single host. Use this configuration for testing and prototyping on
HBase. Do not use this configuration for production nor for
evaluating HBase performance.

Once you have confirmed your HDFS setup, edit
conf/hbase-site.xml. This is the file into
which you add local customizations and overrides for Default HBase
Configurations and HDFS
Client Configurations. Point HBase at the running Hadoop HDFS
instance by setting the hbase.rootdir property.
This property points HBase at the Hadoop filesystem instance to use.
For example, adding the properties below to your
hbase-site.xml says that HBase should use the
/hbase directory in the HDFS whose namenode is
at port 9000 on your local machine, and that it should run with one
replica only (recommended for pseudo-distributed mode):
<configuration>
...
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
<description>The directory shared by region servers.
</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>The replication count for HLog & HFile storage. Should not be greater than HDFS datanode count.
</description>
</property>
...
</configuration>
Let HBase create the hbase.rootdir
directory. If you don't, you'll get a warning saying HBase needs a
migration run because the directory is missing files expected by
HBase (it'll create them if you let it).

Above we bind to localhost. This means
that a remote client cannot connect. Amend accordingly, if you
want to connect from a remote location.

Now skip to Running and Confirming
your Installation for how to start and verify your
pseudo-distributed install. See Pseudo-distributed
mode extras for notes on how to start extra Masters and
regionservers when running pseudo-distributed.

Fully-distributed

For running a fully-distributed operation on more than one
host, make the following configurations. In
hbase-site.xml, add the property
hbase.cluster.distributed and set it to
true and point the HBase
hbase.rootdir at the appropriate HDFS NameNode
and location in HDFS where you would like HBase to write data. For
example, if your namenode were running at namenode.example.org on
port 9000 and you wanted to home your HBase in HDFS at
/hbase, make the following
configuration.
<configuration>
...
<property>
<name>hbase.rootdir</name>
<value>hdfs://namenode.example.org:9000/hbase</value>
<description>The directory shared by region servers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
...
</configuration>

regionservers

In addition, a fully-distributed mode requires that you
modify conf/regionservers. The
regionservers file
lists all hosts that you would have running
HRegionServers, one host per line (This
file in HBase is like the Hadoop slaves
file). All servers listed in this file will be started and stopped
when the HBase cluster is started or stopped.

ZooKeeper

A distributed HBase depends on a running ZooKeeper cluster.
All participating nodes and clients need to be able to access the
running ZooKeeper ensemble. HBase by default manages a ZooKeeper
"cluster" for you. It will start and stop the ZooKeeper ensemble
as part of the HBase start/stop process. You can also manage the
ZooKeeper ensemble independent of HBase and just point HBase at
the cluster it should use. To toggle HBase management of
ZooKeeper, use the HBASE_MANAGES_ZK variable in
conf/hbase-env.sh. This variable, which
defaults to true, tells HBase whether to
start/stop the ZooKeeper ensemble servers as part of HBase
start/stop.

When HBase manages the ZooKeeper ensemble, you can specify
ZooKeeper configuration using its native
zoo.cfg file, or, the easier option is to
just specify ZooKeeper options directly in
conf/hbase-site.xml. A ZooKeeper
configuration option can be set as a property in the HBase
hbase-site.xml XML configuration file by
prefacing the ZooKeeper option name with
hbase.zookeeper.property. For example, the
clientPort setting in ZooKeeper can be changed
by setting the
hbase.zookeeper.property.clientPort property.
For all default values used by HBase, including ZooKeeper
configuration, see the section Default HBase
Configurations. Look for the
hbase.zookeeper.property prefix. For the full list of ZooKeeper configurations, see
ZooKeeper's zoo.cfg. HBase does not ship
with a zoo.cfg so you will need to browse
the conf directory in an appropriate
ZooKeeper download.

You must at least list the ensemble servers in
hbase-site.xml using the
hbase.zookeeper.quorum property. This property
defaults to a single ensemble member at
localhost which is not suitable for a fully
distributed HBase. (It binds to the local machine only and remote
clients will not be able to connect).

How many ZooKeepers should I run?

You can run a ZooKeeper ensemble that comprises 1 node
only but in production it is recommended that you run a
ZooKeeper ensemble of 3, 5 or 7 machines; the more members an
ensemble has, the more tolerant the ensemble is of host
failures. Also, run an odd number of machines. There can be no
quorum if the number of members is an even number. Give each
ZooKeeper server around 1GB of RAM, and if possible, its own
dedicated disk (A dedicated disk is the best thing you can do
to ensure a performant ZooKeeper ensemble). For very heavily
loaded clusters, run ZooKeeper servers on separate machines
from RegionServers (DataNodes and TaskTrackers).

For example, to have HBase manage a ZooKeeper quorum on
nodes rs{1,2,3,4,5}.example.com, bound to
port 2222 (the default is 2181), ensure
HBASE_MANAGES_ZK is commented out or set to
true in conf/hbase-env.sh
and then edit conf/hbase-site.xml and set
hbase.zookeeper.property.clientPort and
hbase.zookeeper.quorum. You should also set
hbase.zookeeper.property.dataDir to other than
the default as the default has ZooKeeper persist data under
/tmp which is often cleared on system
restart. In the example below we have ZooKeeper persist to
/usr/local/zookeeper.
<configuration>
...
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
...
</configuration>

Using existing ZooKeeper ensemble

To point HBase at an existing ZooKeeper cluster, one that
is not managed by HBase, set HBASE_MANAGES_ZK
in conf/hbase-env.sh to false
...
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false

Next set ensemble locations
and client port, if non-standard, in
hbase-site.xml, or add a suitably
configured zoo.cfg to HBase's
CLASSPATH. HBase will prefer the
configuration found in zoo.cfg over any
settings in hbase-site.xml.

When HBase manages ZooKeeper, it will start/stop the
ZooKeeper servers as a part of the regular start/stop scripts.
If you would like to run ZooKeeper yourself, independent of
HBase start/stop, you would do the following:

${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper

Note that you can use HBase in this manner to spin up a
ZooKeeper cluster, unrelated to HBase. Just make sure to set
HBASE_MANAGES_ZK to false
if you want it to stay up across HBase restarts so that when
HBase shuts down, it doesn't take ZooKeeper down with it.

For more information about running a distinct ZooKeeper
cluster, see the ZooKeeper Getting
Started Guide.

HDFS Client Configuration

Of note, if you have made HDFS client
configuration on your Hadoop cluster -- i.e.
configuration you want HDFS clients to use as opposed to
server-side configurations -- HBase will not see this
configuration unless you do one of the following:

- Add a pointer to your HADOOP_CONF_DIR
to the HBASE_CLASSPATH environment variable
in hbase-env.sh.

- Add a copy of hdfs-site.xml (or
hadoop-site.xml) or, better, symlinks,
under ${HBASE_HOME}/conf, or

- if only a small set of HDFS client configurations, add
them to hbase-site.xml.

An example of such an HDFS client configuration is
dfs.replication. If, for example, you want to
run with a replication factor of 5, HBase will create files with
the default of 3 unless you do the above to make the configuration
available to HBase.
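As a sketch of the first two options above (the /etc/hadoop/conf path is only an illustration):

# in conf/hbase-env.sh, point HBase at your Hadoop client configuration directory:
# export HBASE_CLASSPATH=/etc/hadoop/conf
# or symlink the client config into HBase's conf directory:
$ ln -s ${HADOOP_HOME}/conf/hdfs-site.xml ${HBASE_HOME}/conf/hdfs-site.xml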

Running and Confirming Your Installation

Make sure HDFS is running first. Start and stop the Hadoop HDFS
daemons by running bin/start-dfs.sh over in the
HADOOP_HOME directory. You can ensure it started
properly by testing the put and
get of files into the Hadoop filesystem. HBase does
not normally use the mapreduce daemons. These do not need to be
started.
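A simple put/get smoke test of HDFS might look like this (any small local file will do):

$ echo hello > /tmp/smoke.txt
$ ${HADOOP_HOME}/bin/hadoop fs -put /tmp/smoke.txt /tmp/smoke.txt
$ ${HADOOP_HOME}/bin/hadoop fs -get /tmp/smoke.txt /tmp/smoke.copy.txt
$ ${HADOOP_HOME}/bin/hadoop fs -rm /tmp/smoke.txt            # clean up afterwards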

If you are managing your own ZooKeeper,
start it and confirm it's running; otherwise, HBase will start up ZooKeeper
for you as part of its start process.

Start HBase with the following command:

bin/start-hbase.sh
Run the above from the
HBASE_HOME
directory.
You should now have a running HBase instance. HBase logs can be
found in the logs subdirectory. Check them out
especially if HBase had trouble starting.

HBase also puts up a UI listing vital attributes. By default it's
deployed on the Master host at port 60010 (HBase RegionServers listen
on port 60020 by default and put up an informational http server at
60030). If the Master were running on a host named
master.example.org on the default port, to see the
Master's homepage you'd point your browser at
http://master.example.org:60010.
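You can also check from the command line that the UI is up; expect an HTTP 200 response (hostname and port as described above):

$ curl -I http://master.example.org:60010/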

Once HBase has started, see the Shell Exercises section for how to
create tables, add data, scan your insertions, and finally disable and
drop your tables.

To stop HBase after exiting the HBase shell enter:

$ ./bin/stop-hbase.sh
stopping hbase...............

Shutdown can take a moment to
complete. It can take longer if your cluster comprises many
machines. If you are running a distributed operation, be sure to wait
until HBase has shut down completely before stopping the Hadoop
daemons.

Example Configurations

Basic Distributed HBase Install

Here is an example basic configuration for a distributed ten
node cluster. The nodes are named example0,
example1, etc., through node
example9 in this example. The HBase Master and the
HDFS namenode are running on the node example0.
RegionServers run on nodes
example1-example9. A 3-node
ZooKeeper ensemble runs on example1,
example2, and example3 on the
default ports. ZooKeeper data is persisted to the directory
/export/zookeeper. Below we show what the main
configuration files -- hbase-site.xml,
regionservers, and
hbase-env.sh -- found in the HBase
conf directory might look like.

hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>example1,example2,example3</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/export/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://example0:9000/hbase</value>
<description>The directory shared by region servers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
</configuration>

regionservers

In this file you list the nodes that will run regionservers.
In our case we run regionservers on all but the head node
example0, which is carrying the HBase Master and
the HDFS namenode:
example1
example2
example3
example4
example5
example6
example7
example8
example9

hbase-env.sh

Below we use a diff to show the differences
from default in the hbase-env.sh file. Here we
are setting the HBase heap to be 4G instead of the default
1G.
$ git diff hbase-env.sh
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
index e70ebc6..96f8c27 100644
--- a/conf/hbase-env.sh
+++ b/conf/hbase-env.sh
@@ -31,7 +31,7 @@ export JAVA_HOME=/usr/lib/jvm/java-6-sun/
# export HBASE_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
-# export HBASE_HEAPSIZE=1000
+export HBASE_HEAPSIZE=4096
# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.

Use rsync to copy the content of the
conf directory to all nodes of the
cluster.
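One way to do that, assuming the node names above and the same install path on every host:

$ for h in example{1..9}; do rsync -av ${HBASE_HOME}/conf/ ${h}:${HBASE_HOME}/conf/; done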