HBASE-11155 Fix Validation Errors in Ref Guide (Misty Stanley-Jones)

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1596110 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack 2014-05-20 04:36:49 +00:00
parent 4d07aa5f1c
commit f980d27472
13 changed files with 2011 additions and 1692 deletions

File diff suppressed because it is too large

View File

@ -1,13 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="casestudies"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
@ -27,86 +27,86 @@
*/
-->
<title>Apache HBase Case Studies</title>
<section xml:id="casestudies.overview">
<title>Overview</title>
<para>This chapter will describe a variety of performance and troubleshooting case studies that can
<section xml:id="casestudies.overview">
<title>Overview</title>
<para>This chapter will describe a variety of performance and troubleshooting case studies that can
provide a useful blueprint on diagnosing Apache HBase cluster issues.</para>
<para>For more information on Performance and Troubleshooting, see <xref linkend="performance"/> and <xref linkend="trouble"/>.
</para>
</section>
<section xml:id="casestudies.schema">
<title>Schema Design</title>
<para>See the schema design case studies here: <xref linkend="schema.casestudies"/>
</para>
</section> <!-- schema design -->
<section xml:id="casestudies.perftroub">
<title>Performance/Troubleshooting</title>
<section xml:id="casestudies.slownode">
<title>Case Study #1 (Performance Issue On A Single Node)</title>
<section><title>Scenario</title>
<para>Following a scheduled reboot, one data node began exhibiting unusual behavior. Routine MapReduce
jobs run against HBase tables which regularly completed in five or six minutes began taking 30 or 40 minutes
to finish. These jobs were consistently found to be waiting on map and reduce tasks assigned to the troubled data node
(e.g., the slow map tasks all had the same Input Split).
The situation came to a head during a distributed copy, when the copy was severely prolonged by the lagging node.
</para>
</section>
<section><title>Hardware</title>
<para>Datanodes:
<itemizedlist>
<listitem>Two 12-core processors</listitem>
<listitem>Six Enterprise SATA disks</listitem>
<listitem>24GB of RAM</listitem>
<listitem>Two bonded gigabit NICs</listitem>
</itemizedlist>
<itemizedlist>
<listitem><para>Two 12-core processors</para></listitem>
<listitem><para>Six Enterprise SATA disks</para></listitem>
<listitem><para>24GB of RAM</para></listitem>
<listitem><para>Two bonded gigabit NICs</para></listitem>
</itemizedlist>
</para>
<para>Network:
<itemizedlist>
<listitem>10 Gigabit top-of-rack switches</listitem>
<listitem>20 Gigabit bonded interconnects between racks.</listitem>
</itemizedlist>
<itemizedlist>
<listitem><para>10 Gigabit top-of-rack switches</para></listitem>
<listitem><para>20 Gigabit bonded interconnects between racks.</para></listitem>
</itemizedlist>
</para>
</section>
<section><title>Hypotheses</title>
<section><title>HBase "Hot Spot" Region</title>
<para>We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table,
where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer
process and causing slow response times. Examination of the HBase Master status page showed that the number of HBase requests to the
troubled node was almost zero. Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions
in progress. This effectively ruled out a "hot spot" as the root cause of the observed slowness.
<section><title>HBase "Hot Spot" Region</title>
<para>We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table,
where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer
process and cause slow response time. Examination of the HBase Master status page showed that the number of HBase requests to the
troubled node was almost zero. Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions
in progress. This effectively ruled out a "hot spot" as the root cause of the observed slowness.
</para>
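<para>As an illustrative check (not part of the original investigation; the log path below is an assumption that varies by deployment),
split and compaction activity can also be confirmed from the shell on the suspect node:</para>
<screen>$ grep -iE "split|compact" /var/log/hbase/hbase-*-regionserver-*.log | tail -n 20</screen>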
</section>
<section><title>HBase Region With Non-Local Data</title>
<para>Our next hypothesis was that one of the MapReduce tasks was requesting data from HBase that was not local to the datanode, thus
forcing HDFS to request data blocks from other servers over the network. Examination of the datanode logs showed that there were very
few blocks being requested over the network, indicating that the HBase region was correctly assigned, and that the majority of the necessary
data was located on the node. This ruled out the possibility of non-local data causing a slowdown.
</para>
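<para>As a rough supplementary check (assuming the default <filename>/hbase</filename> root directory in HDFS; adjust it to your
<varname>hbase.rootdir</varname>), HDFS block placement can also be inspected with <command>fsck</command>:</para>
<screen>$ hadoop fsck /hbase -files -blocks -locations | less</screen>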
</section>
<section><title>Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk</title>
<para>After concluding that Hadoop and HBase were not likely to be the culprits, we moved on to troubleshooting the datanode's hardware.
Java, by design, will periodically scan its entire memory space to do garbage collection. If system memory is heavily overcommitted, the Linux
kernel may enter a vicious cycle, using up all of its resources swapping Java heap back and forth from disk to RAM as Java tries to run garbage
collection. Further, a failing hard disk will often retry reads and/or writes many times before giving up and returning an error. This can manifest
as high iowait, as running processes wait for reads and writes to complete. Finally, a disk nearing the upper edge of its performance envelope will
begin to cause iowait as it informs the kernel that it cannot accept any more data, and the kernel queues incoming data into the dirty write pool in memory.
However, using <code>vmstat(1)</code> and <code>free(1)</code>, we could see that no swap was being used, and the amount of disk IO was only a few kilobytes per second.
</para>
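<para>A minimal sketch of this kind of check, using standard Linux tools (exact column layouts vary by distribution):</para>
<screen>$ free -m           # swap usage should be at or near zero
$ vmstat 1 5        # watch the si/so (swap) and wa (iowait) columns
$ iostat -x 1 5     # per-device utilization, if the sysstat package is installed</screen>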
</section>
<section><title>Slowness Due To High Processor Usage</title>
<para>Next, we checked to see whether the system was performing slowly simply due to very high computational load. <code>top(1)</code> showed that the system load
was higher than normal, but <code>vmstat(1)</code> and <code>mpstat(1)</code> showed that the amount of processor being used for actual computation was low.
</para>
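<para>A sketch of the commands involved, assuming the sysstat package is installed for <command>mpstat</command>:</para>
<screen>$ top -b -n 1 | head -n 20    # load average and the busiest processes
$ mpstat -P ALL 1 3           # per-core utilization; high %idle despite high load points away from the CPUs
$ vmstat 1 3                  # the us/sy columns show time spent on actual computation</screen>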
</section>
<section><title>Network Saturation (The Winner)</title>
<para>Since neither the disks nor the processors were being utilized heavily, we moved on to the performance of the network interfaces. The datanode had two
gigabit ethernet adapters, bonded to form an active-standby interface. <code>ifconfig(8)</code> showed some anomalies, namely interface errors, overruns, and framing errors.
While not unheard of, these kinds of errors are exceedingly rare on modern hardware which is operating as it should:
<programlisting>
$ /sbin/ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:10.x.x.x Bcast:10.x.x.255 Mask:255.255.255.0
@ -118,9 +118,9 @@ RX bytes:2416328868676 (2.4 TB) TX bytes:3464991094001 (3.4 TB)
</programlisting>
</para>
<para>These errors immediately led us to suspect that one or more of the ethernet interfaces might have negotiated the wrong line speed. This was confirmed both by running an ICMP ping
from an external host and observing round-trip-time in excess of 700ms, and by running <code>ethtool(8)</code> on the members of the bond interface and discovering that the active interface
was operating at 100Mb/s, full duplex.
<programlisting>
$ sudo ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
@ -148,44 +148,44 @@ Wake-on: g
Current message level: 0x00000003 (3)
Link detected: yes
</programlisting>
</para>
<para>In normal operation, the ICMP ping round trip time should be around 20ms, and the interface speed and duplex should read "1000Mb/s" and "Full", respectively.
</para>
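<para>A quick, hedged way to verify both numbers (the host and interface names below are placeholders):</para>
<screen>$ ping -c 5 datanode1.example.com             # round-trip times should be in the low tens of milliseconds
$ sudo ethtool eth0 | grep -E "Speed|Duplex"  # expect "Speed: 1000Mb/s" and "Duplex: Full"</screen>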
</section>
</section>
<section><title>Resolution</title>
<para>After determining that the active ethernet adapter was at the incorrect speed, we used the <code>ifenslave(8)</code> command to make the standby interface
the active interface, which yielded an immediate improvement in MapReduce performance and a tenfold improvement in network throughput.
</para>
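<para>A sketch of the failover command, assuming the bond device is <varname>bond0</varname> and the standby member is <varname>eth1</varname>:</para>
<screen>$ sudo ifenslave -c bond0 eth1    # make eth1 the active slave of the active-backup bond</screen>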
<para>On the next trip to the datacenter, we determined that the line speed issue was ultimately caused by a bad network cable, which was replaced.
</para>
</section>
</section> <!-- case study -->
<section xml:id="casestudies.perf.1">
<title>Case Study #2 (Performance Research 2012)</title>
<para>Investigation results of a self-described "we're not sure what's wrong, but it seems slow" problem.
<link xlink:href="http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html">http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html</link>
<link xlink:href="http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html">http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html</link>
</para>
</section>
<section xml:id="casestudies.perf.2">
<title>Case Study #3 (Performance Research 2010)</title>
<para>
Investigation results of general cluster performance from 2010. Although this research is on an older version of the codebase, this writeup
is still very useful in terms of approach.
<link xlink:href="http://hstack.org/hbase-performance-testing/">http://hstack.org/hbase-performance-testing/</link>
</para>
</section>
<section xml:id="casestudies.xceivers">
<title>Case Study #4 (xcievers Config)</title>
<para>Case study of configuring <code>xceivers</code>, and diagnosing errors from mis-configurations.
<link xlink:href="http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html">http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html</link>
<link xlink:href="http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html">http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html</link>
</para>
<para>See also <xref linkend="dfs.datanode.max.transfer.threads"/>.
</para>
</section>
</section> <!-- performance/troubleshooting -->
</chapter>

View File

@ -173,7 +173,7 @@ needed for servers to pick up changes (caveat dynamic config. to be described la
<footnote>
<para>A useful read on setting config on your Hadoop cluster is Aaron
Kimball's <link
xlink:ref="http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/">Configuration
xlink:href="http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/">Configuration
Parameters: What can you just ignore?</link></para>
</footnote></para>
@ -527,8 +527,7 @@ homed on the node <varname>h-24-30.example.com</varname>.
</para>
<para>Now skip to <xref linkend="confirm" /> for how to start and verify your
pseudo-distributed install. <footnote>
<para>See <xref linkend="pseudo.extras">Pseudo-distributed
mode extras</xref> for notes on how to start extra Masters and
<para>See <xref linkend="pseudo.extras"/> for notes on how to start extra Masters and
RegionServers when running pseudo-distributed.</para>
</footnote></para>
@ -695,11 +694,7 @@ homed on the node <varname>h-24-30.example.com</varname>.
<programlisting>bin/start-hbase.sh</programlisting>
Run the above from the
<varname>HBASE_HOME</varname>
directory.
<para>Run the above from the <varname>HBASE_HOME</varname> directory.</para>
<para>You should now have a running HBase instance. HBase logs can be
found in the <filename>logs</filename> subdirectory. Check them out
@ -771,7 +766,79 @@ stopping hbase...............</programlisting> Shutdown can take a moment to
The generated file is a docbook section with a glossary
in it-->
<!--presumes the pre-site target has put the hbase-default.xml at this location-->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="../../../target/docbkx/hbase-default.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="../../../target/docbkx/hbase-default.xml">
<xi:fallback>
<section xml:id="hbase_default_configurations">
<title></title>
<para>
<emphasis>This file is fallback content</emphasis>. If you are seeing this, something is wrong with the build of the HBase documentation or you are doing pre-build verification.
</para>
<para>
The file hbase-default.xml is generated as part of
the build of the hbase site. See the hbase <filename>pom.xml</filename>.
The generated file is a docbook glossary.
</para>
<section>
<title>IDs that are auto-generated and cause validation errors if not present</title>
<para>
Each of these is a reference to a configuration file parameter which will cause an error if you are using the fallback content here. This is a dirty dirty hack.
</para>
<section xml:id="fail.fast.expired.active.master">
<title>fail.fast.expired.active.master</title>
<para />
</section>
<section xml:id="hbase.hregion.memstore.flush.size">
<title>"hbase.hregion.memstore.flush.size"</title>
<para />
</section>
<section xml:id="hbase.hstore.bytes.per.checksum">
<title>hbase.hstore.bytes.per.checksum</title>
<para />
</section>
<section xml:id="hbase.online.schema.update.enable">
<title>hbase.online.schema.update.enable</title>
<para />
</section>
<section xml:id="hbase.regionserver.global.memstore.size">
<title>hbase.regionserver.global.memstore.size</title>
<para />
</section>
<section xml:id="hbase.hregion.max.filesize">
<title>hbase.hregion.max.filesize</title>
<para />
</section>
<section xml:id="hbase.hstore.blockingStoreFiles">
<title>hbase.hstore.BlockingStoreFiles</title>
<para />
</section>
<section xml:id="hfile.block.cache.size">
<title>hfile.block.cache.size</title>
<para />
</section>
<section xml:id="copy.table">
<title>copy.table</title>
<para />
</section>
<section xml:id="hbase.hstore.checksum.algorithm">
<title>hbase.hstore.checksum.algorithm</title>
<para />
</section>
<section xml:id="hbase.zookeeper.useMulti">
<title>hbase.zookeeper.useMulti</title>
<para />
</section>
<section xml:id="hbase.hregion.memstore.block.multiplier">
<title>hbase.hregion.memstore.block.multiplier</title>
<para />
</section>
<section xml:id="hbase.regionserver.global.memstore.size.lower.limit">
<title>hbase.regionserver.global.memstore.size.lower.limit</title>
<para />
</section>
</section>
</section>
</xi:fallback>
</xi:include>
</section>
<section xml:id="hbase.env.sh">

View File

@ -118,8 +118,8 @@ git clone git://github.com/apache/hbase.git
<title>Maven Classpath Variable</title>
<para>The <varname>M2_REPO</varname> classpath variable needs to be set up for the project. This needs to be set to
your local Maven repository, which is usually <filename>~/.m2/repository</filename></para>
If this classpath variable is not configured, you will see compile errors in Eclipse like this...
<programlisting>
<para>If this classpath variable is not configured, you will see compile errors in Eclipse like this:
</para> <programlisting>
Description Resource Path Location Type
The project cannot be built until build path errors are resolved hbase Unknown Java Problem
Unbound classpath variable: 'M2_REPO/asm/asm/3.1/asm-3.1.jar' in project 'hbase' hbase Build path Build Path Problem
@ -223,51 +223,52 @@ mvn compile -Dcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc
poms when you build. For now, just be aware of the difference between HBase 1.x
builds and those of HBase 0.96-0.98. Below we will come back to this difference
when we list out build instructions.</para>
</section>
<para xml:id="mvn.settings.file">Publishing to maven requires you sign the artifacts you want to upload. To have the
build do this for you, you need to make sure you have a properly configured
<filename>settings.xml</filename> in your local repository under <filename>.m2</filename>.
Here is my <filename>~/.m2/settings.xml</filename>.
<programlisting>&lt;settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
<programlisting><![CDATA[<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
&lt;servers>
&lt;!-- To publish a snapshot of some part of Maven -->
&lt;server>
&lt;id>apache.snapshots.https&lt;/id>
&lt;username>YOUR_APACHE_ID
&lt;/username>
&lt;password>YOUR_APACHE_PASSWORD
&lt;/password>
&lt;/server>
&lt;!-- To publish a website using Maven -->
&lt;!-- To stage a release of some part of Maven -->
&lt;server>
&lt;id>apache.releases.https&lt;/id>
&lt;username>YOUR_APACHE_ID
&lt;/username>
&lt;password>YOUR_APACHE_PASSWORD
&lt;/password>
&lt;/server>
&lt;/servers>
&lt;profiles>
&lt;profile>
&lt;id>apache-release&lt;/id>
&lt;properties>
&lt;gpg.keyname>YOUR_KEYNAME&lt;/gpg.keyname>
&lt;!--Keyname is something like this ... 00A5F21E... do gpg --list-keys to find it-->
&lt;gpg.passphrase>YOUR_KEY_PASSWORD
&lt;/gpg.passphrase>
&lt;/properties>
&lt;/profile>
&lt;/profiles>
&lt;/settings>
<servers>
<!-- To publish a snapshot of some part of Maven -->
<server>
<id>apache.snapshots.https</id>
<username>YOUR_APACHE_ID
</username>
<password>YOUR_APACHE_PASSWORD
</password>
</server>
<!-- To publish a website using Maven -->
<!-- To stage a release of some part of Maven -->
<server>
<id>apache.releases.https</id>
<username>YOUR_APACHE_ID
</username>
<password>YOUR_APACHE_PASSWORD
</password>
</server>
</servers>
<profiles>
<profile>
<id>apache-release</id>
<properties>
<gpg.keyname>YOUR_KEYNAME</gpg.keyname>
<!--Keyname is something like this ... 00A5F21E... do gpg --list-keys to find it-->
<gpg.passphrase>YOUR_KEY_PASSWORD
</gpg.passphrase>
</properties>
</profile>
</profiles>
</settings>]]>
</programlisting>
</para>
<para>You must use maven 3.0.x (Check by running <command>mvn -version</command>).
</para>
</section>
<section xml:id="maven.release">
<title>Making a Release Candidate</title>
<para>I'll explain by running through the process. See later in this section for more detail on particular steps.
@ -501,17 +502,17 @@ HBase have a character not usually seen in other projects.</para>
dependency tree).</para>
<section xml:id="hbase.moduletest.run">
<title>Running Tests in other Modules</title>
If the module you are developing in has no other dependencies on other HBase modules, then
you can cd into that module and just run:
<para>If the module you are developing in has no other dependencies on other HBase modules, then
you can cd into that module and just run:</para>
<programlisting>mvn test</programlisting>
which will just run the tests IN THAT MODULE. If there are other dependencies on other modules,
<para>which will just run the tests IN THAT MODULE. If there are other dependencies on other modules,
then you will have to run the command from the ROOT HBASE DIRECTORY. This will run the tests in the other
modules, unless you specify to skip the tests in that module. For instance, to skip the tests in the hbase-server module,
you would run:
you would run:</para>
<programlisting>mvn clean test -PskipServerTests</programlisting>
from the top level directory to run all the tests in modules other than hbase-server. Note that you
<para>from the top level directory to run all the tests in modules other than hbase-server. Note that you
can specify to skip tests in multiple modules as well as just for a single module. For example, to skip
the tests in <classname>hbase-server</classname> and <classname>hbase-common</classname>, you would run:
the tests in <classname>hbase-server</classname> and <classname>hbase-common</classname>, you would run:</para>
<programlisting>mvn clean test -PskipServerTests -PskipCommonTests</programlisting>
<para>Also, keep in mind that if you are running tests in the <classname>hbase-server</classname> module you will need to
apply the maven profiles discussed in <xref linkend="hbase.unittests.cmds"/> to get the tests to run properly.</para>
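<para>As a hedged illustration (the test class name is hypothetical; add whichever profiles from the referenced section apply in your case),
a single test in one module can be run from the project root with standard Maven options:</para>
<screen>mvn test -pl hbase-server -am -Dtest=MyHypotheticalTest</screen>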
@ -541,7 +542,7 @@ The first three categories, small, medium, and large are for tests run when
you type <code>$ mvn test</code>; i.e. these three categorizations are for
HBase unit tests. The integration category is not for unit tests but for integration
tests. These are run when you invoke <code>$ mvn verify</code>. Integration tests
are described in <xref linkend="integration.tests">integration tests section</xref> and will not be discussed further
are described in <xref linkend="integration.tests"/> and will not be discussed further
in this section on HBase unit tests.</para>
<para>
Apache HBase uses a patched maven surefire plugin and maven profiles to implement
@ -579,7 +580,7 @@ the developer machine as well.
<section xml:id="hbase.unittests.integration">
<title>Integration Tests<indexterm><primary>IntegrationTests</primary></indexterm></title>
<para><emphasis>Integration</emphasis> tests are system level tests. See
<xref linkend="integration.tests">integration tests section</xref> for more info.
<xref linkend="integration.tests"/> for more info.
</para>
</section>
</section>
@ -704,17 +705,17 @@ should not impact these resources, it's worth checking these log lines
<title>General rules</title>
<itemizedlist>
<listitem>
As much as possible, tests should be written as category small tests.
<para>As much as possible, tests should be written as category small tests.</para>
</listitem>
<listitem>
All tests must be written to support parallel execution on the same machine, hence they should not use shared resources as fixed ports or fixed file names.
<para>All tests must be written to support parallel execution on the same machine, hence they should not use shared resources as fixed ports or fixed file names.</para>
</listitem>
<listitem>
Tests should not overlog. More than 100 lines/second makes the logs complex to read and uses I/O that is then not available for the other tests.
<para>Tests should not overlog. More than 100 lines/second makes the logs complex to read and uses I/O that is then not available for the other tests.</para>
</listitem>
<listitem>
Tests can be written with <classname>HBaseTestingUtility</classname>.
This class offers helper functions to create a temp directory and do the cleanup, or to start a cluster.
<para>Tests can be written with <classname>HBaseTestingUtility</classname>.
This class offers helper functions to create a temp directory and do the cleanup, or to start a cluster.</para>
</listitem>
</itemizedlist>
</section>
@ -722,19 +723,19 @@ This class offers helper functions to create a temp directory and do the cleanup
<title>Categories and execution time</title>
<itemizedlist>
<listitem>
All tests must be categorized, if not they could be skipped.
<para>All tests must be categorized, if not they could be skipped.</para>
</listitem>
<listitem>
All tests should be written to be as fast as possible.
<para>All tests should be written to be as fast as possible.</para>
</listitem>
<listitem>
Small category tests should last less than 15 seconds, and must not have any side effect.
<para>Small category tests should last less than 15 seconds, and must not have any side effect.</para>
</listitem>
<listitem>
Medium category tests should last less than 50 seconds.
<para>Medium category tests should last less than 50 seconds.</para>
</listitem>
<listitem>
Large category tests should last less than 3 minutes. This should ensure a good parallelization for people using it, and ease the analysis when the test fails.
<para>Large category tests should last less than 3 minutes. This should ensure a good parallelization for people using it, and ease the analysis when the test fails.</para>
</listitem>
</itemizedlist>
</section>
@ -862,17 +863,17 @@ are running other tests.
</para>
<para>
ChaosMonkey defines Actions and Policies. Actions are sequences of events. We have at least the following actions:
ChaosMonkey defines Actions and Policies. Actions are sequences of events. We have at least the following actions:</para>
<itemizedlist>
<listitem>Restart active master (sleep 5 sec)</listitem>
<listitem>Restart random regionserver (sleep 5 sec)</listitem>
<listitem>Restart random regionserver (sleep 60 sec)</listitem>
<listitem>Restart META regionserver (sleep 5 sec)</listitem>
<listitem>Restart ROOT regionserver (sleep 5 sec)</listitem>
<listitem>Batch restart of 50% of regionservers (sleep 5 sec)</listitem>
<listitem>Rolling restart of 100% of regionservers (sleep 5 sec)</listitem>
<listitem><para>Restart active master (sleep 5 sec)</para></listitem>
<listitem><para>Restart random regionserver (sleep 5 sec)</para></listitem>
<listitem><para>Restart random regionserver (sleep 60 sec)</para></listitem>
<listitem><para>Restart META regionserver (sleep 5 sec)</para></listitem>
<listitem><para>Restart ROOT regionserver (sleep 5 sec)</para></listitem>
<listitem><para>Batch restart of 50% of regionservers (sleep 5 sec)</para></listitem>
<listitem><para>Rolling restart of 100% of regionservers (sleep 5 sec)</para></listitem>
</itemizedlist>
<para>
Policies on the other hand are responsible for executing the actions based on a strategy.
The default policy is to execute a random action every minute based on predefined action
weights. ChaosMonkey executes predefined named policies until it is stopped. More than one
@ -881,11 +882,12 @@ policy can be active at any time.
<para>
To run ChaosMonkey as a standalone tool deploy your HBase cluster as usual. ChaosMonkey uses the configuration
from the bin/hbase script, thus no extra configuration needs to be done. You can invoke the ChaosMonkey by running:
from the bin/hbase script, thus no extra configuration needs to be done. You can invoke the ChaosMonkey by running:</para>
<programlisting>bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey</programlisting>
<para>
This will output something like:
<programlisting>
</para>
<screen>
12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000
12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter
12/11/19 23:22:24 INFO util.ChaosMonkey: Performing action: Restart active master
@ -921,8 +923,8 @@ This will output smt like:
12/11/19 23:24:26 INFO hbase.ClusterManager: Executed remote command, exit code:0 , output:starting regionserver, logging to /homes/enis/code/hbase-0.94/bin/../logs/hbase-enis-regionserver-rs3.example.com.out
12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6
</programlisting>
</screen>
<para>
As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions. The ChaosMonkey tool, if run from the command line, will keep running until the process is killed.
</para>
</section>
@ -958,7 +960,7 @@ mvn compile
<para>
The above will build against whatever explicit hadoop 1.x version we have in our <filename>pom.xml</filename> as our '1.0' version.
Tests may not all pass so you may need to pass <code>-DskipTests</code> unless you are inclined to fix the failing tests.</para>
<note id="maven.build.passing.default.profile">
<note xml:id="maven.build.passing.default.profile">
<title>'dependencyManagement.dependencies.dependency.artifactId' for org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does not match a valid id pattern</title>
<para>You will see ERRORs like the above title if you pass the <emphasis>default</emphasis> profile; e.g. if
you pass <property>hadoop.profile=1.1</property> when building 0.96 or
@ -1001,12 +1003,12 @@ pecularity that is probably fixable but we've not spent the time trying to figur
<section xml:id="jira.priorities"><title>Jira Priorities</title>
<para>The following is a guideline on setting Jira issue priorities:
<itemizedlist>
<listitem>Blocker: Should only be used if the issue WILL cause data loss or cluster instability reliably.</listitem>
<listitem>Critical: The issue described can cause data loss or cluster instability in some cases.</listitem>
<listitem>Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant
bugs that need to be fixed but that don't cause data loss.</listitem>
<listitem>Minor: Useful enhancements and annoying but not damaging bugs.</listitem>
<listitem>Trivial: Useful enhancements but generally cosmetic.</listitem>
<listitem><para>Blocker: Should only be used if the issue WILL cause data loss or cluster instability reliably.</para></listitem>
<listitem><para>Critical: The issue described can cause data loss or cluster instability in some cases.</para></listitem>
<listitem><para>Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant
bugs that need to be fixed but that don't cause data loss.</para></listitem>
<listitem><para>Minor: Useful enhancements and annoying but not damaging bugs.</para></listitem>
<listitem><para>Trivial: Useful enhancements but generally cosmetic.</para></listitem>
</itemizedlist>
</para>
</section>
@ -1161,10 +1163,9 @@ pecularity that is probably fixable but we've not spent the time trying to figur
<para>Please submit one patch-file per Jira. For example, if multiple files are changed make sure the
selected resource when generating the patch is a directory. Patch files can reflect changes in multiple files. </para>
<para>
Generating patches using git:<sbr/>
<programlisting>
$ git diff --no-prefix > HBASE_XXXX.patch
</programlisting>
Generating patches using git:</para>
<screen>$ git diff --no-prefix > HBASE_XXXX.patch</screen>
<para>
Don't forget the 'no-prefix' option, and generate the diff from the root directory of the project
</para>
<para>Make sure you review <xref linkend="eclipse.code.formatting"/> for code style. </para>
@ -1283,11 +1284,10 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
</section>
<section xml:id="common.patch.feedback.javadoc">
<title>Javadoc</title>
<para>This is also a very common feedback item. Don't forget Javadoc!
<para>This is also a very common feedback item. Don't forget Javadoc!</para>
<para>Javadoc warnings are checked during precommit. If the precommit tool gives you a '-1',
please fix the javadoc issue. Your patch won't be committed if it adds such warnings.
</para>
</para>
</section>
<section xml:id="common.patch.feedback.findbugs">
<title>Findbugs</title>
@ -1345,25 +1345,25 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
</para>
<itemizedlist>
<listitem>
Do not delete the old patch file
<para>Do not delete the old patch file</para>
</listitem>
<listitem>
version your new patch file using a simple scheme like this: <sbr/>
HBASE-{jira number}-{version}.patch <sbr/>
e.g:
HBASE_XXXX-v2.patch
<para>version your new patch file using a simple scheme like this:</para>
<screen>HBASE-{jira number}-{version}.patch</screen>
<para>e.g:</para>
<screen>HBASE_XXXX-v2.patch</screen>
</listitem>
<listitem>
'Cancel Patch' on JIRA.. bug status will change back to Open
<para>'Cancel Patch' on JIRA.. bug status will change back to Open</para>
</listitem>
<listitem>
Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files --> Attach'
<para>Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files --> Attach'</para>
</listitem>
<listitem>
Click on 'Submit Patch'. Now the bug status will say 'Patch Available'.
<para>Click on 'Submit Patch'. Now the bug status will say 'Patch Available'.</para>
</listitem>
</itemizedlist>
Committers will review the patch. Rinse and repeat as many times as needed :-)
<para>Committers will review the patch. Rinse and repeat as many times as needed :-)</para>
</section>
<section>
@ -1372,32 +1372,32 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
At times you may want to break a big change into multiple patches. Here is a sample workflow using git
<itemizedlist>
<listitem>
patch 1:
<para>patch 1:</para>
<itemizedlist>
<listitem>
$ git diff --no-prefix > HBASE_XXXX-1.patch
<screen>$ git diff --no-prefix > HBASE_XXXX-1.patch</screen>
</listitem>
</itemizedlist>
</listitem>
<listitem>
patch 2:
<para>patch 2:</para>
<itemizedlist>
<listitem>
create a new git branch <sbr/>
$ git checkout -b my_branch
<para>create a new git branch</para>
<screen>$ git checkout -b my_branch</screen>
</listitem>
<listitem>
save your work
$ git add file1 file2 <sbr/>
$ git commit -am 'saved after HBASE_XXXX-1.patch' <sbr/>
now you have your own branch, that is different from remote master branch <sbr/>
<para>save your work</para>
<screen>$ git add file1 file2 </screen>
<screen>$ git commit -am 'saved after HBASE_XXXX-1.patch'</screen>
<para>now you have your own branch, that is different from remote master branch</para>
</listitem>
<listitem>
make more changes...
<para>make more changes...</para>
</listitem>
<listitem>
create second patch <sbr/>
$ git diff --no-prefix > HBASE_XXXX-2.patch
<para>create second patch</para>
<screen>$ git diff --no-prefix > HBASE_XXXX-2.patch</screen>
</listitem>
</itemizedlist>

View File

@ -27,8 +27,8 @@
*/
-->
<title>Apache HBase External APIs</title>
This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols.
<para> This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols.
</para>
<section xml:id="nonjava.jvm">
<title>Non-Java Languages Talking to the JVM</title>
<para>Currently the documentation on this topic in the
@ -172,7 +172,7 @@
<para>Example4:<code> =, 'substring:abc123' </code>will match everything that begins with the substring "abc123"</para>
</section>
<section xml:id="example PHP Client Program"><title>Example PHP Client Program that uses the Filter Language</title>
<section xml:id="examplePHPClientProgram"><title>Example PHP Client Program that uses the Filter Language</title>
<programlisting>
&lt;? $_SERVER['PHP_ROOT'] = realpath(dirname(__FILE__).'/..');
require_once $_SERVER['PHP_ROOT'].'/flib/__flib.php';
@ -205,7 +205,7 @@
</para>
<orderedlist>
<para>
<listitem>
<itemizedlist>
<listitem>
<para><code>“(RowFilter (=, binary:Row 1) AND TimeStampsFilter (74689, 89734)) OR
@ -216,7 +216,7 @@
<para>1) The key-value pair must be in a column that is lexicographically >= abc and &lt; xyz </para>
</listitem>
</itemizedlist>
</para>
</listitem>
</orderedlist>
<para>
@ -228,7 +228,7 @@
</para>
</section>
<section xml:id="Individual Filter Syntax"><title>Individual Filter Syntax</title>
<section xml:id="IndividualFilterSyntax"><title>Individual Filter Syntax</title>
<orderedlist>
<listitem>
<para><emphasis role="bold"><emphasis role="underline">KeyOnlyFilter</emphasis></emphasis></para>

View File

@ -1,13 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="getting_started"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
@ -27,79 +27,78 @@
*/
-->
<title>Getting Started</title>
<section>
<title>Introduction</title>
<para><xref linkend="quickstart" /> will get you up and
running on a single-node, standalone instance of HBase.
</para>
</section>
<section xml:id="quickstart">
<title>Quick Start</title>
<para>This guide describes setup of a standalone HBase instance. It will
run against the local filesystem. In later sections we will take you through
how to run HBase on Apache Hadoop's HDFS, a distributed filesystem. This section
shows you how to create a table in HBase, insert
rows into your new HBase table via the HBase <command>shell</command>, and then clean
up and shut down your standalone, local filesystem-based HBase instance. The below exercise
should take no more than ten minutes (not including download time).
</para>
<note xml:id="local.fs.durability"><title>Local Filesystem and Durability</title>
<para>Using HBase with a LocalFileSystem does not currently guarantee durability.
The HDFS local filesystem implementation will lose edits if files are not properly
closed -- which is very likely to happen when experimenting with a new download.
You need to run HBase on HDFS to ensure all writes are preserved. Running
against the local filesystem though will get you off the ground quickly and get you
familiar with how the general system works, so let's run with it for now. See
<link xlink:href="https://issues.apache.org/jira/browse/HBASE-3696"/> and its associated issues for more details.</para></note>
<note xml:id="loopback.ip.getting.started">
<title>Loopback IP</title>
<note>
<para><emphasis>The below advice is for hbase-0.94.x and older versions only. We believe this is fixed in hbase-0.96.0 and beyond
(let us know if we have it wrong).</emphasis> There should be no need of the below modification to <filename>/etc/hosts</filename> in
later versions of HBase.</para>
</note>
<para>HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions,
for example, will default to 127.0.1.1 and this will cause problems for you
<footnote><para>See <link xlink:href="http://blog.devving.com/why-does-hbase-care-about-etchosts/">Why does HBase care about /etc/hosts?</link> for detail.</para></footnote>.
</para>
<para><filename>/etc/hosts</filename> should look something like this:
<programlisting>
<title>Loopback IP</title>
<para><emphasis>The below advice is for hbase-0.94.x and older versions only. We believe this is fixed in hbase-0.96.0 and beyond
(let us know if we have it wrong).</emphasis> There should be no need of the below modification to <filename>/etc/hosts</filename> in
later versions of HBase.</para>
<para>HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions,
for example, will default to 127.0.1.1 and this will cause problems for you
<footnote><para>See <link xlink:href="http://blog.devving.com/why-does-hbase-care-about-etchosts/">Why does HBase care about /etc/hosts?</link> for detail.</para></footnote>.
</para>
<para><filename>/etc/hosts</filename> should look something like this:
<programlisting>
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
</programlisting>
</para>
</note>
<section>
<title>Download and unpack the latest stable release.</title>
<para>Choose a download site from this list of <link
xlink:href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache Download
Mirrors</link>. Click on the suggested top link. This will take you to a
mirror of <emphasis>HBase Releases</emphasis>. Click on the folder named
<filename>stable</filename> and then download the file that ends in
<filename>.tar.gz</filename> to your local filesystem; e.g.
<filename>hbase-0.94.2.tar.gz</filename>.</para>
xlink:href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache Download
Mirrors</link>. Click on the suggested top link. This will take you to a
mirror of <emphasis>HBase Releases</emphasis>. Click on the folder named
<filename>stable</filename> and then download the file that ends in
<filename>.tar.gz</filename> to your local filesystem; e.g.
<filename>hbase-0.94.2.tar.gz</filename>.</para>
<para>Decompress and untar your download and then change into the
unpacked directory.</para>
<para><programlisting>$ tar xfz hbase-<?eval ${project.version}?>.tar.gz
$ cd hbase-<?eval ${project.version}?>
</programlisting></para>
<para>At this point, you are ready to start HBase. But before starting
it, edit <filename>conf/hbase-site.xml</filename>, the file you write
your site-specific configurations into. Set
<varname>hbase.rootdir</varname>, the directory HBase writes data to,
and <varname>hbase.zookeeper.property.dataDir</varname>, the directory
ZooKeeper writes its data to:
<programlisting>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;
&lt;configuration&gt;
&lt;property&gt;
@ -111,63 +110,63 @@ $ cd hbase-<?eval ${project.version}?>
&lt;value&gt;/DIRECTORY/zookeeper&lt;/value&gt;
&lt;/property&gt;
&lt;/configuration&gt;</programlisting> Replace <varname>DIRECTORY</varname> in the above with the
path to the directory where you would have HBase and ZooKeeper write their data. By default,
<varname>hbase.rootdir</varname> is set to <filename>/tmp/hbase-${user.name}</filename>
and similarly so for the default ZooKeeper data location which means you'll lose all
your data whenever your server reboots unless you change it (Most operating systems clear
<filename>/tmp</filename> on restart).</para>
</section>
<section xml:id="start_hbase">
<title>Start HBase</title>
<para>Now start HBase:<programlisting>$ ./bin/start-hbase.sh
starting Master, logging to logs/hbase-user-master-example.org.out</programlisting></para>
<para>You should now have a running standalone HBase instance. In
standalone mode, HBase runs all daemons in the one JVM; i.e. both
the HBase and ZooKeeper daemons. HBase logs can be found in the
<filename>logs</filename> subdirectory. Check them out especially if
it seems HBase had trouble starting.</para>
<note>
<title>Is <application>java</application> installed?</title>
<para>All of the above presumes a 1.6 version of Oracle
<application>java</application> is installed on your machine and
available on your path (See <xref linkend="java" />); i.e. when you type
<application>java</application>, you see output that describes the
options the java program takes (HBase requires java 6). If this is not
the case, HBase will not start. Install java, edit
<filename>conf/hbase-env.sh</filename>, uncommenting the
<envar>JAVA_HOME</envar> line pointing it to your java install, then,
retry the steps above.</para>
</note>
</section>
<section xml:id="shell_exercises">
<title>Shell Exercises</title>
<para>Connect to your running HBase via the <command>shell</command>.</para>
<para><programlisting>$ ./bin/hbase shell
HBase Shell; enter 'help&lt;RETURN&gt;' for list of supported commands.
Type "exit&lt;RETURN&gt;" to leave the HBase Shell
Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010
hbase(main):001:0&gt; </programlisting></para>
<para>Type <command>help</command> and then
<command>&lt;RETURN&gt;</command> to see a listing of shell commands and
options. Browse at least the paragraphs at the end of the help emission
for the gist of how variables and command arguments are entered into the
HBase shell; in particular note how table names, rows, and columns,
etc., must be quoted.</para>
<para>Create a table named <varname>test</varname> with a single column family named <varname>cf</varname>.
Verify its creation by listing all tables and then insert some
values.</para>
<para><programlisting>hbase(main):003:0&gt; create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0&gt; list 'test'
@ -179,59 +178,59 @@ hbase(main):005:0&gt; put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0&gt; put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds</programlisting></para>
<para>Above we inserted 3 values, one at a time. The first insert is at
<varname>row1</varname>, column <varname>cf:a</varname> with a value of
<varname>value1</varname>. Columns in HBase are comprised of a column family prefix --
<varname>cf</varname> in this example -- followed by a colon and then a
column qualifier suffix (<varname>a</varname> in this case).</para>
<para>Verify the data insert by running a scan of the table as follows</para>
<para><programlisting>hbase(main):007:0&gt; scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1288380727188, value=value1
row2 column=cf:b, timestamp=1288380738440, value=value2
row3 column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds</programlisting></para>
<para>Get a single row</para>
<para><programlisting>hbase(main):008:0&gt; get 'test', 'row1'
COLUMN CELL
cf:a timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds</programlisting></para>
<para>Now, disable and drop your table. This will clean up everything done
above.</para>
<para><programlisting>hbase(main):012:0&gt; disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0&gt; drop 'test'
0 row(s) in 0.0770 seconds </programlisting></para>
<para>Exit the shell by typing exit.</para>
<para><programlisting>hbase(main):014:0&gt; exit</programlisting></para>
</section>
<section xml:id="stopping">
<title>Stopping HBase</title>
<para>Stop your hbase instance by running the stop script.</para>
<para><programlisting>$ ./bin/stop-hbase.sh
stopping hbase...............</programlisting></para>
</section>
<section>
<title>Where to go next</title>
<para>The above described standalone setup is good for testing and
experiments only. In the next chapter, <xref linkend="configuration" />,
we'll go into depth on the different HBase run modes, the system requirements for
running HBase, and the critical configurations for setting up a distributed HBase deploy.</para>
</section>
</section>
</chapter>

File diff suppressed because it is too large

View File

@ -51,9 +51,9 @@
<para>
Important items to consider:
<itemizedlist>
<listitem>Switching capacity of the device</listitem>
<listitem>Number of systems connected</listitem>
<listitem>Uplink capacity</listitem>
<listitem><para>Switching capacity of the device</para></listitem>
<listitem><para>Number of systems connected</para></listitem>
<listitem><para>Uplink capacity</para></listitem>
</itemizedlist>
</para>
<section xml:id="perf.network.1switch">
@ -71,9 +71,9 @@
</para>
<para>Mitigation of this issue is fairly simple and can be accomplished in multiple ways:
<itemizedlist>
<listitem>Use appropriate hardware for the scale of the cluster which you're attempting to build.</listitem>
<listitem>Use larger single switch configurations i.e. single 48 port as opposed to 2x 24 port</listitem>
<listitem>Configure port trunking for uplinks to utilize multiple interfaces to increase cross switch bandwidth.</listitem>
<listitem><para>Use appropriate hardware for the scale of the cluster which you're attempting to build.</para></listitem>
<listitem><para>Use larger single switch configurations i.e. single 48 port as opposed to 2x 24 port</para></listitem>
<listitem><para>Configure port trunking for uplinks to utilize multiple interfaces to increase cross switch bandwidth.</para></listitem>
</itemizedlist>
</para>
</section>
@ -81,8 +81,8 @@
<title>Multiple Racks</title>
<para>Multiple rack configurations carry the same potential issues as multiple switches, and can suffer performance degradation from two main areas:
<itemizedlist>
<listitem>Poor switch capacity performance</listitem>
<listitem>Insufficient uplink to another rack</listitem>
<listitem><para>Poor switch capacity performance</para></listitem>
<listitem><para>Insufficient uplink to another rack</para></listitem>
</itemizedlist>
If the switches in your rack have appropriate switching capacity to handle all the hosts at full speed, the next most likely issue will be caused by homing
more of your cluster across racks. The easiest way to avoid issues when spanning multiple racks is to use port trunking to create a bonded uplink to other racks.

View File

@ -32,7 +32,7 @@
the various non-rdbms datastores is Ian Varley's Master thesis,
<link xlink:href="http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf">No Relation: The Mixed Blessings of Non-Relational Databases</link>.
Recommended. Also, read <xref linkend="keyvalue"/> for how HBase stores data internally, and the section on
<xref linkend="schema.casestudies">HBase Schema Design Case Studies</xref>.
<xref linkend="schema.casestudies"/>.
</para>
<section xml:id="schema.creation">
<title>
@ -41,7 +41,7 @@
<para>HBase schemas can be created or updated with <xref linkend="shell" />
or by using <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html">HBaseAdmin</link> in the Java API.
</para>
<para>Tables must be disabled when making ColumnFamily modifications, for example..
<para>Tables must be disabled when making ColumnFamily modifications, for example:</para>
<programlisting>
Configuration config = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(config);
@ -56,7 +56,7 @@ admin.modifyColumn(table, cf2); // modifying existing ColumnFamily
admin.enableTable(table);
</programlisting>
</para>See <xref linkend="client_dependencies"/> for more information about configuring client connections.
<para>See <xref linkend="client_dependencies"/> for more information about configuring client connections.</para>
<para>Note: online schema changes are supported in the 0.92.x codebase, but the 0.90.x codebase requires the table
to be disabled.
</para>
@ -98,7 +98,7 @@ admin.enableTable(table);
Monotonically Increasing Row Keys/Timeseries Data
</title>
<para>
In the HBase chapter of Tom White's book <link xlink:url="http://oreilly.com/catalog/9780596521981">Hadoop: The Definitive Guide</link> (O'Reilly) there is a an optimization note on watching out for a phenomenon where an import process walks in lock-step with all clients in concert pounding one of the table's regions (and thus, a single node), then moving onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in BigTable-like datastores:
In the HBase chapter of Tom White's book <link xlink:href="http://oreilly.com/catalog/9780596521981">Hadoop: The Definitive Guide</link> (O'Reilly) there is an optimization note on watching out for a phenomenon where an import process walks in lock-step with all clients in concert pounding one of the table's regions (and thus, a single node), then moving onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in BigTable-like datastores:
<link xlink:href="http://ikaisays.com/2011/01/25/app-engine-datastore-tip-monotonically-increasing-values-are-bad/">monotonically increasing values are bad</link>. The pile-up on a single region brought on
by monotonically increasing keys can be mitigated by randomizing the input records to not be in sorted order, but in general it's best to avoid using a timestamp or a sequence (e.g. 1, 2, 3) as the row-key.
</para>
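            <para>A minimal sketch of the bucketing idea (illustrative only; the bucket count, column
            family, and qualifier names are assumptions, not a prescription from this guide):</para>
            <programlisting>
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Prefix a timestamp-led key with a deterministic salt so that writes spread
// across numBuckets regions instead of piling onto one.
int numBuckets = 16;                               // assumed; tune for your cluster
long timestamp = System.currentTimeMillis();
byte salt = (byte) (timestamp % numBuckets);       // re-computable at read time
byte[] rowkey = Bytes.add(new byte[] { salt }, Bytes.toBytes(timestamp));
Put put = new Put(rowkey);
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            </programlisting>
            <para>Note that reads of a time range must then fan out across all buckets, so this trades
            read-side complexity for write-side distribution.</para>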
@ -107,7 +107,7 @@ admin.enableTable(table);
      successful example.  It has a page describing the <link xlink:href="http://opentsdb.net/schema.html">schema</link> it uses in
HBase. The key format in OpenTSDB is effectively [metric_type][event_timestamp], which would appear at first glance to contradict the previous advice about not using a timestamp as the key. However, the difference is that the timestamp is not in the <emphasis>lead</emphasis> position of the key, and the design assumption is that there are dozens or hundreds (or more) of different metric types. Thus, even with a continual stream of input data with a mix of metric types, the Puts are distributed across various points of regions in the table.
</para>
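        <para>As a hedged illustration of that key layout (the metric id and values below are made up,
        not taken from OpenTSDB itself), the key can be composed so the timestamp trails the metric:</para>
        <programlisting>
import org.apache.hadoop.hbase.util.Bytes;

// [metric_type][event_timestamp]: the timestamp is still in the key, but not in
// the lead position, so different metrics land in different parts of the keyspace.
byte[] metricId = Bytes.toBytes(42);               // assumed numeric id for one metric type
long eventTime = System.currentTimeMillis();
byte[] rowkey = Bytes.add(metricId, Bytes.toBytes(eventTime));
        </programlisting>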
<para>See <xref linkend="schema.casestudies">HBase Schema Design Case Studies</xref> for some rowkey design examples.
<para>See <xref linkend="schema.casestudies"/> for some rowkey design examples.
</para>
</section>
<section xml:id="keysize">
@ -119,7 +119,7 @@ admin.enableTable(table);
are large, especially compared to the size of the cell value, then
you may run up against some interesting scenarios. One such is
the case described by Marc Limotte at the tail of
<link xlink:url="https://issues.apache.org/jira/browse/HBASE-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&amp;focusedCommentId=13005272#comment-13005272">HBASE-3551</link>
<link xlink:href="https://issues.apache.org/jira/browse/HBASE-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&amp;focusedCommentId=13005272#comment-13005272">HBASE-3551</link>
(recommended!).
Therein, the indices that are kept on HBase storefiles (<xref linkend="hfile" />)
	  to facilitate random access may end up occupying large chunks of the HBase
@ -213,7 +213,7 @@ COLUMN CELL
<para>The most recent value for [key] in a table can be found by performing a Scan for [key] and obtaining the first record. Since HBase keys
are in sorted order, this key sorts before any older row-keys for [key] and thus is first.
</para>
<para>This technique would be used instead of using <xref linkend="schema.versions">HBase Versioning</xref> where the intent is to hold onto all versions
<para>This technique would be used instead of using <xref linkend="schema.versions"/> where the intent is to hold onto all versions
"forever" (or a very long time) and at the same time quickly obtain access to any other version by using the same Scan technique.
</para>
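       <para>A sketch of the Scan just described, assuming rows are keyed
       <code>[key][Long.MAX_VALUE - timestamp]</code> (the table and key names here are hypothetical):</para>
       <programlisting>
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

HTable table = new HTable(conf, "t1");             // assumes an existing Configuration 'conf'
Scan scan = new Scan(Bytes.toBytes("somekey"));    // start at [key]; the newest row sorts first
scan.setCaching(1);
ResultScanner scanner = table.getScanner(scan);
try {
  Result mostRecent = scanner.next();              // first row returned is the latest for [key]
} finally {
  scanner.close();
}
       </programlisting>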
</section>
@ -335,11 +335,11 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio
<link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Result.html">Result</link>, so anything that can be
    converted to an array of bytes can be stored as a value.  Input could be strings, numbers, complex objects, or even images, as long as they can be rendered as bytes.
</para>
<para>There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase
would probably be too much to ask); search the mailing list for conversations on this topic.
All rows in HBase conform to the <xref linkend="datamodel">datamodel</xref>, and that includes
versioning. Take that into consideration when making your design, as well as block size for
the ColumnFamily. </para>
<para>There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would probably be too much to ask);
      search the mailing list for conversations on this topic. All rows in HBase conform to the <xref linkend="datamodel"/>, and
that includes versioning. Take that into consideration when making your design, as well as block size for the ColumnFamily.
</para>
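    <para>As a sketch of the kind of ColumnFamily tuning hinted at above (the table, family, and size
    values are illustrative assumptions only):</para>
    <programlisting>
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

HBaseAdmin admin = new HBaseAdmin(config);          // assumes an existing Configuration 'config'
HTableDescriptor desc = new HTableDescriptor("blobs");
HColumnDescriptor cf = new HColumnDescriptor("d");
cf.setMaxVersions(1);                               // versioning still applies; keep only one
cf.setBlocksize(128 * 1024);                        // larger blocks for larger cell values
desc.addFamily(cf);
admin.createTable(desc);
    </programlisting>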
<section xml:id="counters">
<title>Counters</title>
<para>
@ -389,10 +389,10 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio
</para>
<para>There is no single answer on the best way to handle this because it depends on...
<itemizedlist>
<listitem>Number of users</listitem>
<listitem>Data size and data arrival rate</listitem>
<listitem>Flexibility of reporting requirements (e.g., completely ad-hoc date selection vs. pre-configured ranges) </listitem>
<listitem>Desired execution speed of query (e.g., 90 seconds may be reasonable to some for an ad-hoc report, whereas it may be too long for others) </listitem>
<listitem><para>Number of users</para></listitem>
<listitem><para>Data size and data arrival rate</para></listitem>
<listitem><para>Flexibility of reporting requirements (e.g., completely ad-hoc date selection vs. pre-configured ranges) </para></listitem>
<listitem><para>Desired execution speed of query (e.g., 90 seconds may be reasonable to some for an ad-hoc report, whereas it may be too long for others) </para></listitem>
</itemizedlist>
... and solutions are also influenced by the size of the cluster and how much processing power you have to throw at the solution.
Common techniques are in sub-sections below. This is a comprehensive, but not exhaustive, list of approaches.
@ -455,26 +455,26 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio
can be approached. Note: this is just an illustration of potential approaches, not an exhaustive list.
Know your data, and know your processing requirements.
</para>
<para>It is highly recommended that you read the rest of the <xref linkend="schema">Schema Design Chapter</xref> first, before reading
<para>It is highly recommended that you read the rest of the <xref linkend="schema"/> first, before reading
these case studies.
</para>
    <para>The following case studies are described:
<itemizedlist>
<listitem>Log Data / Timeseries Data</listitem>
<listitem>Log Data / Timeseries on Steroids</listitem>
<listitem>Customer/Order</listitem>
<listitem>Tall/Wide/Middle Schema Design</listitem>
<listitem>List Data</listitem>
<listitem><para>Log Data / Timeseries Data</para></listitem>
<listitem><para>Log Data / Timeseries on Steroids</para></listitem>
<listitem><para>Customer/Order</para></listitem>
<listitem><para>Tall/Wide/Middle Schema Design</para></listitem>
<listitem><para>List Data</para></listitem>
</itemizedlist>
</para>
<section xml:id="schema.casestudies.log-timeseries">
<title>Case Study - Log Data and Timeseries Data</title>
<para>Assume that the following data elements are being collected.
<itemizedlist>
<listitem>Hostname</listitem>
<listitem>Timestamp</listitem>
<listitem>Log event</listitem>
<listitem>Value/message</listitem>
<listitem><para>Hostname</para></listitem>
<listitem><para>Timestamp</para></listitem>
<listitem><para>Log event</para></listitem>
<listitem><para>Value/message</para></listitem>
</itemizedlist>
We can store them in an HBase table called LOG_DATA, but what will the rowkey be?
From these attributes the rowkey will be some combination of hostname, timestamp, and log-event - but what specifically?
@ -531,9 +531,9 @@ long bucket = timestamp % numBuckets;
</para>
<para>Composite Rowkey With Hashes:
<itemizedlist>
<listitem>[MD5 hash of hostname] = 16 bytes</listitem>
<listitem>[MD5 hash of event-type] = 16 bytes</listitem>
<listitem>[timestamp] = 8 bytes</listitem>
<listitem><para>[MD5 hash of hostname] = 16 bytes</para></listitem>
<listitem><para>[MD5 hash of event-type] = 16 bytes</para></listitem>
<listitem><para>[timestamp] = 8 bytes</para></listitem>
</itemizedlist>
</para>
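        <para>A sketch of assembling the 16 + 16 + 8 byte composite key above (the hostname and
        event-type values passed in are made up for illustration):</para>
        <programlisting>
import java.security.MessageDigest;
import org.apache.hadoop.hbase.util.Bytes;

public static byte[] makeLogKey(String hostname, String eventType, long timestamp)
    throws Exception {
  byte[] hostHash  = MessageDigest.getInstance("MD5").digest(Bytes.toBytes(hostname));  // 16 bytes
  byte[] eventHash = MessageDigest.getInstance("MD5").digest(Bytes.toBytes(eventType)); // 16 bytes
  return Bytes.add(hostHash, eventHash, Bytes.toBytes(timestamp));                      // + 8 bytes
}
        </programlisting>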
<para>Composite Rowkey With Numeric Substitution:
@ -541,17 +541,17 @@ long bucket = timestamp % numBuckets;
<para>For this approach another lookup table would be needed in addition to LOG_DATA, called LOG_TYPES.
The rowkey of LOG_TYPES would be:
<itemizedlist>
<listitem>[type] (e.g., byte indicating hostname vs. event-type)</listitem>
<listitem>[bytes] variable length bytes for raw hostname or event-type.</listitem>
<listitem><para>[type] (e.g., byte indicating hostname vs. event-type)</para></listitem>
<listitem><para>[bytes] variable length bytes for raw hostname or event-type.</para></listitem>
</itemizedlist>
A column for this rowkey could be a long with an assigned number, which could be obtained by using an
<link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29">HBase counter</link>.
</para>
<para>So the resulting composite rowkey would be:
<itemizedlist>
<listitem>[substituted long for hostname] = 8 bytes</listitem>
<listitem>[substituted long for event type] = 8 bytes</listitem>
<listitem>[timestamp] = 8 bytes</listitem>
<listitem><para>[substituted long for hostname] = 8 bytes</para></listitem>
<listitem><para>[substituted long for event type] = 8 bytes</para></listitem>
<listitem><para>[timestamp] = 8 bytes</para></listitem>
</itemizedlist>
In either the Hash or Numeric substitution approach, the raw values for hostname and event-type can be stored as columns.
</para>
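        <para>One hedged sketch of assigning those substituted longs with an HBase counter (all table,
        family, and qualifier names below are assumptions; a real implementation would also need to guard
        against two clients assigning different ids to the same raw value, e.g. with a checkAndPut):</para>
        <programlisting>
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

HTable logTypes = new HTable(conf, "LOG_TYPES");    // assumes an existing Configuration 'conf'
byte[] lookupKey = Bytes.add(new byte[] { 1 }, Bytes.toBytes("host1.example.com")); // [type][bytes]
Result existing = logTypes.get(new Get(lookupKey));
long id;
if (existing.isEmpty()) {
  // Allocate the next id from a counter cell, then record the mapping.
  id = logTypes.incrementColumnValue(Bytes.toBytes("seq"), Bytes.toBytes("d"),
      Bytes.toBytes("next_id"), 1);
  Put put = new Put(lookupKey);
  put.add(Bytes.toBytes("d"), Bytes.toBytes("id"), Bytes.toBytes(id));
  logTypes.put(put);
} else {
  id = Bytes.toLong(existing.getValue(Bytes.toBytes("d"), Bytes.toBytes("id")));
}
        </programlisting>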
@ -586,19 +586,19 @@ long bucket = timestamp % numBuckets;
</para>
        <para>The Customer record type would include all the things that you'd typically expect:
<itemizedlist>
<listitem>Customer number</listitem>
<listitem>Customer name</listitem>
<listitem>Address (e.g., city, state, zip)</listitem>
<listitem>Phone numbers, etc.</listitem>
<listitem><para>Customer number</para></listitem>
<listitem><para>Customer name</para></listitem>
<listitem><para>Address (e.g., city, state, zip)</para></listitem>
<listitem><para>Phone numbers, etc.</para></listitem>
</itemizedlist>
</para>
<para>The Order record type would include things like:
<itemizedlist>
<listitem>Customer number</listitem>
<listitem>Order number</listitem>
<listitem>Sales date</listitem>
<listitem>A series of nested objects for shipping locations and line-items (see <xref linkend="schema.casestudies.custorder.obj"/>
for details)</listitem>
<listitem><para>Customer number</para></listitem>
<listitem><para>Order number</para></listitem>
<listitem><para>Sales date</para></listitem>
<listitem><para>A series of nested objects for shipping locations and line-items (see <xref linkend="schema.casestudies.custorder.obj"/>
for details)</para></listitem>
</itemizedlist>
</para>
<para>Assuming that the combination of customer number and sales order uniquely identify an order, these two attributes will compose
@ -614,14 +614,14 @@ reasonable spread in the keyspace, similar options appear:
</para>
<para>Composite Rowkey With Hashes:
<itemizedlist>
<listitem>[MD5 of customer number] = 16 bytes</listitem>
<listitem>[MD5 of order number] = 16 bytes</listitem>
<listitem><para>[MD5 of customer number] = 16 bytes</para></listitem>
<listitem><para>[MD5 of order number] = 16 bytes</para></listitem>
</itemizedlist>
</para>
<para>Composite Numeric/Hash Combo Rowkey:
<itemizedlist>
<listitem>[substituted long for customer number] = 8 bytes</listitem>
<listitem>[MD5 of order number] = 16 bytes</listitem>
<listitem><para>[substituted long for customer number] = 8 bytes</para></listitem>
<listitem><para>[MD5 of order number] = 16 bytes</para></listitem>
</itemizedlist>
</para>
<section xml:id="schema.casestudies.custorder.tables">
@ -631,15 +631,15 @@ reasonable spread in the keyspace, similar options appear:
</para>
<para>Customer Record Type Rowkey:
<itemizedlist>
<listitem>[customer-id]</listitem>
<listitem>[type] = type indicating 1 for customer record type</listitem>
<listitem><para>[customer-id]</para></listitem>
<listitem><para>[type] = type indicating 1 for customer record type</para></listitem>
</itemizedlist>
</para>
<para>Order Record Type Rowkey:
<itemizedlist>
<listitem>[customer-id]</listitem>
<listitem>[type] = type indicating 2 for order record type</listitem>
<listitem>[order]</listitem>
<listitem><para>[customer-id]</para></listitem>
<listitem><para>[type] = type indicating 2 for order record type</para></listitem>
<listitem><para>[order]</para></listitem>
</itemizedlist>
</para>
          <para>The advantage of this particular CUSTOMER++ approach is that it organizes many different record-types by customer-id
@ -665,23 +665,23 @@ reasonable spread in the keyspace, similar options appear:
</para>
<para>The SHIPPING_LOCATION's composite rowkey would be something like this:
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[shipping location number] (e.g., 1st location, 2nd, etc.)</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[shipping location number] (e.g., 1st location, 2nd, etc.)</para></listitem>
</itemizedlist>
</para>
<para>The LINE_ITEM table's composite rowkey would be something like this:
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[shipping location number] (e.g., 1st location, 2nd, etc.)</listitem>
<listitem>[line item number] (e.g., 1st lineitem, 2nd, etc.)</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[shipping location number] (e.g., 1st location, 2nd, etc.)</para></listitem>
<listitem><para>[line item number] (e.g., 1st lineitem, 2nd, etc.)</para></listitem>
</itemizedlist>
</para>
<para>Such a normalized model is likely to be the approach with an RDBMS, but that's not your only option with HBase.
          The cons of such an approach are that to retrieve information about any Order, you will need:
<itemizedlist>
<listitem>Get on the ORDER table for the Order</listitem>
<listitem>Scan on the SHIPPING_LOCATION table for that order to get the ShippingLocation instances</listitem>
<listitem>Scan on the LINE_ITEM for each ShippingLocation</listitem>
<listitem><para>Get on the ORDER table for the Order</para></listitem>
<listitem><para>Scan on the SHIPPING_LOCATION table for that order to get the ShippingLocation instances</para></listitem>
<listitem><para>Scan on the LINE_ITEM for each ShippingLocation</para></listitem>
</itemizedlist>
... granted, this is what an RDBMS would do under the covers anyway, but since there are no joins in HBase
you're just more aware of this fact.
@ -693,23 +693,23 @@ reasonable spread in the keyspace, similar options appear:
</para>
<para>The Order rowkey was described above: <xref linkend="schema.casestudies.custorder"/>
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[ORDER record type]</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[ORDER record type]</para></listitem>
</itemizedlist>
</para>
<para>The ShippingLocation composite rowkey would be something like this:
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[SHIPPING record type]</listitem>
<listitem>[shipping location number] (e.g., 1st location, 2nd, etc.)</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[SHIPPING record type]</para></listitem>
<listitem><para>[shipping location number] (e.g., 1st location, 2nd, etc.)</para></listitem>
</itemizedlist>
</para>
<para>The LineItem composite rowkey would be something like this:
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[LINE record type]</listitem>
<listitem>[shipping location number] (e.g., 1st location, 2nd, etc.)</listitem>
<listitem>[line item number] (e.g., 1st lineitem, 2nd, etc.)</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[LINE record type]</para></listitem>
<listitem><para>[shipping location number] (e.g., 1st location, 2nd, etc.)</para></listitem>
<listitem><para>[line item number] (e.g., 1st lineitem, 2nd, etc.)</para></listitem>
</itemizedlist>
</para>
</section>
@ -720,21 +720,21 @@ reasonable spread in the keyspace, similar options appear:
</para>
<para>The LineItem composite rowkey would be something like this:
<itemizedlist>
<listitem>[order-rowkey]</listitem>
<listitem>[LINE record type]</listitem>
<listitem>[line item number] (e.g., 1st lineitem, 2nd, etc. - care must be taken that there are unique across the entire order)</listitem>
<listitem><para>[order-rowkey]</para></listitem>
<listitem><para>[LINE record type]</para></listitem>
              <listitem><para>[line item number]  (e.g., 1st lineitem, 2nd, etc. - care must be taken that they are unique across the entire order)</para></listitem>
</itemizedlist>
</para>
<para>... and the LineItem columns would be something like this:
<itemizedlist>
<listitem>itemNumber</listitem>
<listitem>quantity</listitem>
<listitem>price</listitem>
<listitem>shipToLine1 (denormalized from ShippingLocation)</listitem>
<listitem>shipToLine2 (denormalized from ShippingLocation)</listitem>
<listitem>shipToCity (denormalized from ShippingLocation)</listitem>
<listitem>shipToState (denormalized from ShippingLocation)</listitem>
<listitem>shipToZip (denormalized from ShippingLocation)</listitem>
<listitem><para>itemNumber</para></listitem>
<listitem><para>quantity</para></listitem>
<listitem><para>price</para></listitem>
<listitem><para>shipToLine1 (denormalized from ShippingLocation)</para></listitem>
<listitem><para>shipToLine2 (denormalized from ShippingLocation)</para></listitem>
<listitem><para>shipToCity (denormalized from ShippingLocation)</para></listitem>
<listitem><para>shipToState (denormalized from ShippingLocation)</para></listitem>
<listitem><para>shipToZip (denormalized from ShippingLocation)</para></listitem>
</itemizedlist>
</para>
          <para>The pros of this approach include a less complex object hierarchy, but one of the cons is that updating gets more
@ -789,7 +789,7 @@ reasonable spread in the keyspace, similar options appear:
OpenTSDB is the best example of this case where a single row represents a defined time-range, and then discrete events are treated as
        columns.  This approach is often more complex, and may require re-writing your data, but it has the
        advantage of being I/O efficient.  For an overview of this approach, see
<xref linkend="schema.casestudies.log-timeseries.log-steroids"/>.
<xref linkend="schema.casestudies.log-steroids"/>.
</para>
</section>
</section>
@ -815,7 +815,7 @@ we could have something like:
&lt;FixedWidthUserName&gt;&lt;FixedWidthValueId3&gt;:"" (no value)
</programlisting>
The other option we had was to do this entirely using:
<para>The other option we had was to do this entirely using:</para>
<programlisting>
&lt;FixedWidthUserName&gt;&lt;FixedWidthPageNum0&gt;:&lt;FixedWidthLength&gt;&lt;FixedIdNextPageNum&gt;&lt;ValueId1&gt;&lt;ValueId2&gt;&lt;ValueId3&gt;...
&lt;FixedWidthUserName&gt;&lt;FixedWidthPageNum1&gt;:&lt;FixedWidthLength&gt;&lt;FixedIdNextPageNum&gt;&lt;ValueId1&gt;&lt;ValueId2&gt;&lt;ValueId3&gt;...
@ -827,8 +827,8 @@ So in one case reading the first thirty values would be:
<programlisting>
scan { STARTROW =&gt; 'FixedWidthUsername' LIMIT =&gt; 30}
</programlisting>
And in the second case it would be
<programlisting>
<para>And in the second case it would be
</para> <programlisting>
get 'FixedWidthUserName\x00\x00\x00\x00'
</programlisting>
<para>


@ -517,7 +517,7 @@
<para>
You must configure HBase for secure or simple user access operation. Refer to the
<link linkend='hbase.accesscontrol.configuration'>Secure Client Access to HBase</link> or
<link linkend='hbase.accesscontrol.simpleconfiguration'>Simple User Access to HBase</link>
<link linkend='hbase.secure.simpleconfiguration'>Simple User Access to HBase</link>
sections and complete all of the steps described
there.
</para>
@ -779,14 +779,16 @@ Access control mechanisms are mature and fairly standardized in the relational d
<para>
The HBase shell has been extended to provide simple commands for editing and updating user permissions. The following commands have been added for access control list management:
</para>
Grant
<para>
<programlisting>
<example>
<title>Grant</title>
<programlisting>
grant &lt;user|@group&gt; &lt;permissions&gt; [ &lt;table&gt; [ &lt;column family&gt; [ &lt;column qualifier&gt; ] ] ]
</programlisting>
</para>
</programlisting></example>
<para>
<code class="code">&lt;user|@group&gt;</code> is user or group (start with character '@'), Groups are created and manipulated via the Hadoop group mapping service.
    <code>&lt;user|@group&gt;</code> is a user or group (groups start with the character '@'). Groups are created and manipulated via the Hadoop group mapping service.
</para>
<para>
<code>&lt;permissions&gt;</code> is zero or more letters from the set "RWCA": READ('R'), WRITE('W'), CREATE('C'), ADMIN('A').
@ -794,32 +796,32 @@ The HBase shell has been extended to provide simple commands for editing and upd
<para>
Note: Grants and revocations of individual permissions on a resource are both accomplished using the <code>grant</code> command. A separate <code>revoke</code> command is also provided by the shell, but this is for fast revocation of all of a user's access rights to a given resource only.
</para>
<para>
Revoke
</para>
<para>
<example>
<title>Revoke</title>
<programlisting>
revoke &lt;user|@group&gt; [ &lt;table&gt; [ &lt;column family&gt; [ &lt;column qualifier&gt; ] ] ]
</programlisting>
</para>
</example>
<example>
<title>Alter</title>
<para>
Alter
</para>
<para>
The <code>alter</code> command has been extended to allow ownership assignment:
The <code>alter</code> command has been extended to allow ownership assignment:</para>
<programlisting>
alter 'tablename', {OWNER => 'username|@group'}
</programlisting>
</para>
</example>
<example>
<title>User Permission</title>
<para>
User Permission
</para>
<para>
The <code>user_permission</code> command shows all access permissions for the current user for a given table:
The <code>user_permission</code> command shows all access permissions for the current user for a given table:</para>
<programlisting>
user_permission &lt;table&gt;
</programlisting>
</para>
</example>
</section>
</section> <!-- Access Control -->
@ -829,12 +831,12 @@ The HBase shell has been extended to provide simple commands for editing and upd
<para>
Bulk loading in secure mode is a bit more involved than normal setup, since the client has to transfer the ownership of the files generated from the mapreduce job to HBase. Secure bulk loading is implemented by a coprocessor, named <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html">SecureBulkLoadEndpoint</link>. SecureBulkLoadEndpoint uses a staging directory <code>"hbase.bulkload.staging.dir"</code>, which defaults to <code>/tmp/hbase-staging/</code>. The algorithm is as follows.
<itemizedlist>
<listitem>Create an hbase owned staging directory which is world traversable (<code>-rwx--x--x, 711</code>) <code>/tmp/hbase-staging</code>. </listitem>
<listitem>A user writes out data to his secure output directory: /user/foo/data </listitem>
<listitem>A call is made to hbase to create a secret staging directory
which is globally readable/writable (<code>-rwxrwxrwx, 777</code>): /tmp/hbase-staging/averylongandrandomdirectoryname</listitem>
<listitem>The user makes the data world readable and writable, then moves it
into the random staging directory, then calls bulkLoadHFiles()</listitem>
<listitem><para>Create an hbase owned staging directory which is world traversable (<code>-rwx--x--x, 711</code>) <code>/tmp/hbase-staging</code>. </para></listitem>
<listitem><para>A user writes out data to his secure output directory: /user/foo/data </para></listitem>
<listitem><para>A call is made to hbase to create a secret staging directory
which is globally readable/writable (<code>-rwxrwxrwx, 777</code>): /tmp/hbase-staging/averylongandrandomdirectoryname</para></listitem>
<listitem><para>The user makes the data world readable and writable, then moves it
into the random staging directory, then calls bulkLoadHFiles()</para></listitem>
</itemizedlist>
</para>
<para>
@ -870,9 +872,9 @@ The HBase shell has been extended to provide simple commands for editing and upd
<para>
Visibility expressions like the above can be added when storing or mutating a cell using the API,
</para>
<para><code>Mutation#setCellVisibility(new CellVisibility(String labelExpession));</code></para>
Where the labelExpression could be &#39;( secret | topsecret ) &amp; !probationary&#39;
    <programlisting>Mutation#setCellVisibility(new CellVisibility(String labelExpression));</programlisting>
<para> Where the labelExpression could be &#39;( secret | topsecret ) &amp; !probationary&#39;
</para>
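    <para>A short, hypothetical end-to-end example of the API above (the table, family, qualifier, and
    label values are invented for illustration):</para>
    <programlisting>
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

HTable table = new HTable(conf, "t1");                // assumes an existing Configuration 'conf'
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
put.setCellVisibility(new CellVisibility("( secret | topsecret ) &amp; !probationary"));
table.put(put);

Scan scan = new Scan();
scan.setAuthorizations(new Authorizations("secret")); // labels this reader is authenticated for
    </programlisting>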
<para>
We build the user&#39;s label set in the RPC context when a request is first received by the HBase RegionServer. How users are associated with labels is pluggable. The default plugin passes through labels specified in Authorizations added to the Get or Scan and checks those against the calling user&#39;s authenticated labels list. When client passes some labels for which the user is not authenticated, this default algorithm will drop those. One can pass a subset of user authenticated labels via the Scan/Get authorizations.
</para>
@ -896,7 +898,7 @@ The HBase shell has been extended to provide simple commands for editing and upd
</para>
</section>
<section xml:id="hbase.visibility.label.administration.add.label">
<section xml:id="hbase.visibility.label.administration.add.label2">
<title>User Label Association</title>
<para>A set of labels can be associated with a user by using the API</para>
<para><code>VisibilityClient#setAuths(Configuration conf, final String[] auths, final String user)</code></para>
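    <para>For example (the user name and labels below are hypothetical; note that this client call may
    declare a checked exception, so handle or rethrow accordingly):</para>
    <programlisting>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.visibility.VisibilityClient;

Configuration conf = HBaseConfiguration.create();
VisibilityClient.setAuths(conf, new String[] { "secret", "probationary" }, "bobsmith");
    </programlisting>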


@ -235,7 +235,7 @@ export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
is generally used for questions on released versions of Apache HBase. Before going to the mailing list, make sure your
question has not already been answered by searching the mailing list archives first. Use
<xref linkend="trouble.resources.searchhadoop" />.
Take some time crafting your question<footnote><para>See <link xlink="http://www.mikeash.com/getting_answers.html">Getting Answers</link></para></footnote>; a quality question that includes all context and
Take some time crafting your question<footnote><para>See <link xlink:href="http://www.mikeash.com/getting_answers.html">Getting Answers</link></para></footnote>; a quality question that includes all context and
exhibits evidence the author has tried to find answers in the manual and out on lists
is more likely to get a prompt response.
</para>
@ -360,15 +360,15 @@ hadoop@sv4borg12:~$ jps
</programlisting>
In order, we see a:
<itemizedlist>
<listitem>Hadoop TaskTracker, manages the local Childs</listitem>
<listitem>HBase RegionServer, serves regions</listitem>
<listitem>Child, its MapReduce task, cannot tell which type exactly</listitem>
<listitem>Hadoop TaskTracker, manages the local Childs</listitem>
<listitem>Hadoop DataNode, serves blocks</listitem>
<listitem>HQuorumPeer, a ZooKeeper ensemble member</listitem>
<listitem>Jps, well… its the current process</listitem>
<listitem>ThriftServer, its a special one will be running only if thrift was started</listitem>
<listitem>jmx, this is a local process thats part of our monitoring platform ( poorly named maybe). You probably dont have that.</listitem>
<listitem><para>Hadoop TaskTracker, manages the local Childs</para></listitem>
<listitem><para>HBase RegionServer, serves regions</para></listitem>
          <listitem><para>Child, it's a MapReduce task, cannot tell which type exactly</para></listitem>
<listitem><para>Hadoop TaskTracker, manages the local Childs</para></listitem>
<listitem><para>Hadoop DataNode, serves blocks</para></listitem>
<listitem><para>HQuorumPeer, a ZooKeeper ensemble member</para></listitem>
          <listitem><para>Jps, well… it's the current process</para></listitem>
          <listitem><para>ThriftServer, it's a special one that will be running only if thrift was started</para></listitem>
          <listitem><para>jmx, this is a local process that's part of our monitoring platform (poorly named, maybe). You probably don't have that.</para></listitem>
</itemizedlist>
</para>
<para>
@ -620,21 +620,19 @@ Harsh J investigated the issue as part of the mailing list thread
</section>
<section xml:id="trouble.client.oome.directmemory.leak">
<title>Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)</title>
<para> You are likely running into the issue that is described and worked through in the
mail thread <link
xhref="http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&amp;subj=Re+Suspected+memory+leak"
>HBase, mail # user - Suspected memory leak</link> and continued over in <link
xhref="http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&amp;subj=Re+FeedbackRe+Suspected+memory+leak"
>HBase, mail # dev - FeedbackRe: Suspected memory leak</link>. A workaround is passing
your client-side JVM a reasonable value for <code>-XX:MaxDirectMemorySize</code>. By
default, the <varname>MaxDirectMemorySize</varname> is equal to your <code>-Xmx</code> max
heapsize setting (if <code>-Xmx</code> is set). Try seting it to something smaller (for
example, one user had success setting it to <code>1g</code> when they had a client-side heap
of <code>12g</code>). If you set it too small, it will bring on <code>FullGCs</code> so keep
it a bit hefty. You want to make this setting client-side only especially if you are running
the new experimental server-side off-heap cache since this feature depends on being able to
use big direct buffers (You may have to keep separate client-side and server-side config
dirs). </para>
<para>
You are likely running into the issue that is described and worked through in
the mail thread <link xlink:href="http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&amp;subj=Re+Suspected+memory+leak">HBase, mail # user - Suspected memory leak</link>
and continued over in <link xlink:href="http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&amp;subj=Re+FeedbackRe+Suspected+memory+leak">HBase, mail # dev - FeedbackRe: Suspected memory leak</link>.
A workaround is passing your client-side JVM a reasonable value for <code>-XX:MaxDirectMemorySize</code>. By default,
            the <varname>MaxDirectMemorySize</varname> is equal to your <code>-Xmx</code> max heapsize setting (if <code>-Xmx</code> is set).
            Try setting it to something smaller (for example, one user had success setting it to <code>1g</code> when
            they had a client-side heap of <code>12g</code>). If you set it too small, it will bring on <code>FullGCs</code>, so keep
            it a bit hefty. You want to make this setting client-side only, especially if you are running the new experimental
            server-side off-heap cache, since this feature depends on being able to use big direct buffers (you may have to keep
            separate client-side and server-side config dirs).
</para>
</section>
<section xml:id="trouble.client.slowdown.admin">
<title>Client Slowdown When Calling Admin Methods (flush, compact, etc.)</title>
@ -753,7 +751,7 @@ Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
<filename>/&lt;HLog&gt;</filename> (WAL HLog files for the RegionServer)
</programlisting>
</para>
<para>See the <link xlink:href="see http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html">HDFS User Guide</link> for other non-shell diagnostic
<para>See the <link xlink:href="http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html">HDFS User Guide</link> for other non-shell diagnostic
utilities like <code>fsck</code>.
</para>
<section xml:id="trouble.namenode.0size.hlogs">
@ -926,25 +924,26 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
Since the RegionServer's local ZooKeeper client cannot send heartbeats, the session times out.
By design, we shut down any node that isn't able to contact the ZooKeeper ensemble after getting a timeout so that it stops serving data that may already be assigned elsewhere.
</para>
<para>
<itemizedlist>
<listitem>Make sure you give plenty of RAM (in <filename>hbase-env.sh</filename>), the default of 1GB won't be able to sustain long running imports.</listitem>
<listitem>Make sure you don't swap, the JVM never behaves well under swapping.</listitem>
<listitem>Make sure you are not CPU starving the RegionServer thread. For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.</listitem>
<listitem>Increase the ZooKeeper session timeout</listitem>
<listitem><para>Make sure you give plenty of RAM (in <filename>hbase-env.sh</filename>), the default of 1GB won't be able to sustain long running imports.</para></listitem>
        <listitem><para>Make sure you don't swap; the JVM never behaves well under swapping.</para></listitem>
<listitem><para>Make sure you are not CPU starving the RegionServer thread. For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.</para></listitem>
<listitem><para>Increase the ZooKeeper session timeout</para></listitem>
</itemizedlist>
If you wish to increase the session timeout, add the following to your <filename>hbase-site.xml</filename> to increase the timeout from the default of 60 seconds to 120 seconds.
<programlisting>
&lt;property&gt;
&lt;name&gt;zookeeper.session.timeout&lt;/name&gt;
&lt;value&gt;1200000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hbase.zookeeper.property.tickTime&lt;/name&gt;
&lt;value&gt;6000&lt;/value&gt;
&lt;/property&gt;
<para>If you wish to increase the session timeout, add the following to your <filename>hbase-site.xml</filename> to increase the timeout from the default of 60 seconds to 120 seconds.
</para>
<programlisting>
<![CDATA[<property>
<name>zookeeper.session.timeout</name>
<value>1200000</value>
</property>
<property>
<name>hbase.zookeeper.property.tickTime</name>
<value>6000</value>
</property>]]>
</programlisting>
</para>
<para>
Be aware that setting a higher timeout means that the regions served by a failed RegionServer will take at least
       that amount of time to be transferred to another RegionServer.  For a production system serving live requests, we would instead
@ -954,8 +953,8 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
<para>
If this is happening during an upload which only happens once (like initially loading all your data into HBase), consider bulk loading.
</para>
See <xref linkend="trouble.zookeeper.general"/> for other general information about ZooKeeper troubleshooting.
</section>
<para>See <xref linkend="trouble.zookeeper.general"/> for other general information about ZooKeeper troubleshooting.
</para> </section>
<section xml:id="trouble.rs.runtime.notservingregion">
<title>NotServingRegionException</title>
<para>This exception is "normal" when found in the RegionServer logs at DEBUG level. This exception is returned back to the client
@ -970,7 +969,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
            RegionServer is not using the name given it by the master; double entry in master listing of servers</link> for gory details.
</para>
</section>
<section xml:id="trouble.rs.runtime.codecmsgs">
<section xml:id="brand.new.compressor">
<title>Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new compressor' messages</title>
<para>We are not using the native versions of compression
@ -992,7 +991,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
</section>
<section xml:id="trouble.rs.shutdown">
<title>Shutdown Errors</title>
<para />
</section>
</section>
@ -1020,7 +1019,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
</section>
<section xml:id="trouble.master.shutdown">
<title>Shutdown Errors</title>
<para/>
</section>
</section>


@ -225,8 +225,8 @@
Now start up hbase-0.96.0.
</para>
</section>
<section xml:id="096.migration.troubleshooting"><title>Troubleshooting</title>
<section xml:id="096.migration.troubleshooting.old.client"><title>Old Client connecting to 0.96 cluster</title>
<section xml:id="s096.migration.troubleshooting"><title>Troubleshooting</title>
<section xml:id="s096.migration.troubleshooting.old.client"><title>Old Client connecting to 0.96 cluster</title>
<para>It will fail with an exception like the below. Upgrade.
<programlisting>17:22:15 Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF
17:22:15 *
@ -266,20 +266,20 @@
<para>
If you have no patience, here are the important things to know before upgrading.
<orderedlist>
<listitem>Once you upgrade, you cant go back.
    <listitem><para>Once you upgrade, you can't go back.</para>
</listitem>
<listitem>
MSLAB is on by default. Watch that heap usage if you have a lot of regions.
<listitem><para>
MSLAB is on by default. Watch that heap usage if you have a lot of regions.</para>
</listitem>
<listitem>
<listitem><para>
    Distributed splitting is on by default.  It should make region server failover faster.
</listitem>
<listitem>
</para></listitem>
<listitem><para>
    There's a separate tarball for security.
</listitem>
<listitem>
</para></listitem>
<listitem><para>
    If -XX:MaxDirectMemorySize is set in your hbase-env.sh, it's going to enable the experimental off-heap cache (You may not want this).
</listitem>
</para></listitem>
</orderedlist>
</para>
</note>
@ -301,7 +301,7 @@ This means you cannot go back to 0.90.x once youve started HBase 0.92.0 over
<para>In 0.92.0, the <link xlink:href="http://hbase.apache.org/book.html#hbase.hregion.memstore.mslab.enabled">hbase.hregion.memstore.mslab.enabled</link> flag is set to true
(See <xref linkend="mslab" />). In 0.90.x it was <constant>false</constant>. When it is enabled, memstores will step allocate memory in MSLAB 2MB chunks even if the
memstore has zero or just a few small elements. This is fine usually but if you had lots of regions per regionserver in a 0.90.x cluster (and MSLAB was off),
you may find yourself OOME'ing on upgrade because the <mathphrase>thousands of regions * number of column families * 2MB MSLAB (at a minimum)</mathphrase>
you may find yourself OOME'ing on upgrade because the <code>thousands of regions * number of column families * 2MB MSLAB (at a minimum)</code>
puts your heap over the top. Set <varname>hbase.hregion.memstore.mslab.enabled</varname> to
<constant>false</constant> or set the MSLAB size down from 2MB by setting <varname>hbase.hregion.memstore.mslab.chunksize</varname> to something less.
</para>
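<para>As a worked example with hypothetical numbers: a RegionServer carrying 1,000 regions with
3 column families each would reserve at least 1000 * 3 * 2MB = roughly 6GB of heap for MSLAB chunks
alone, before storing any data.</para>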


@ -219,7 +219,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
standalone Zookeeper quorum) for ease of learning.
</para>
<section><title>Operating System Prerequisites</title></section>
<section><title>Operating System Prerequisites</title>
<para>
You need to have a working Kerberos KDC setup. For
@ -283,7 +283,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
<para>We'll refer to this JAAS configuration file as
<filename>$CLIENT_CONF</filename> below.</para>
</section>
<section>
<title>HBase-managed Zookeeper Configuration</title>
@ -312,10 +312,10 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
};
</programlisting>
where the <filename>$PATH_TO_HBASE_KEYTAB</filename> and
<para>where the <filename>$PATH_TO_HBASE_KEYTAB</filename> and
<filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename> files are what
you created above, and <code>$HOST</code> is the hostname for that
node.
node.</para>
<para>The <code>Server</code> section will be used by
the Zookeeper quorum server, while the
@ -342,9 +342,9 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
</programlisting>
where <filename>$HBASE_SERVER_CONF</filename> and
<para>where <filename>$HBASE_SERVER_CONF</filename> and
<filename>$CLIENT_CONF</filename> are the full paths to the
JAAS configuration files created above.
JAAS configuration files created above.</para>
<para>Modify your <filename>hbase-site.xml</filename> on each node
that will run zookeeper, master or regionserver to contain:</para>
@ -537,10 +537,10 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
<section>
<title>Configuration from Scratch</title>
This has been tested on the current standard Amazon
<para>This has been tested on the current standard Amazon
        Linux AMI.  First set up the KDC and principals as
        described above.  Next check out the code and run a sanity
check.
check.</para>
<programlisting>
git clone git://git.apache.org/hbase.git
@ -548,9 +548,9 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
mvn clean test -Dtest=TestZooKeeperACL
</programlisting>
Then configure HBase as described above.
Manually edit target/cached_classpath.txt (see below)..
<para>Then configure HBase as described above.
Manually edit target/cached_classpath.txt (see below):
</para>
<programlisting>
bin/hbase zookeeper &amp;
bin/hbase master &amp;
@ -582,14 +582,16 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
programmatically</title>
This would avoid the need for a separate Hadoop jar
<para>This would avoid the need for a separate Hadoop jar
that fixes <link xlink:href="https://issues.apache.org/jira/browse/HADOOP-7070">HADOOP-7070</link>.
</para>
</section>
<section>
<title>Elimination of
<code>kerberos.removeHostFromPrincipal</code> and
<code>kerberos.removeRealmFromPrincipal</code></title>
<para />
</section>
</section>