HBASE-11199 One-time effort to pretty-print the Docbook XML, to make further patch review easier (Misty Stanley-Jones)

Michael Stack, 2014-05-28 07:58:50 -07:00
parent ab896f05d1, commit 63e8304e96
19 changed files with 8289 additions and 7042 deletions

File diff suppressed because it is too large.


@@ -1,5 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter
    version="5.0"
    xml:id="casestudies"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
@@ -27,86 +29,123 @@
 */
-->
  <title>Apache HBase Case Studies</title>
  <section xml:id="casestudies.overview">
    <title>Overview</title>
    <para>This chapter will describe a variety of performance and troubleshooting case studies
      that can provide a useful blueprint on diagnosing Apache HBase cluster issues.</para>
    <para>For more information on Performance and Troubleshooting, see
      <xref linkend="performance"/> and <xref linkend="trouble"/>.</para>
  </section>
  <section xml:id="casestudies.schema">
    <title>Schema Design</title>
    <para>See the schema design case studies here: <xref linkend="schema.casestudies"/></para>
  </section>
  <!-- schema design -->
  <section xml:id="casestudies.perftroub">
    <title>Performance/Troubleshooting</title>
    <section xml:id="casestudies.slownode">
      <title>Case Study #1 (Performance Issue On A Single Node)</title>
      <section>
        <title>Scenario</title>
        <para>Following a scheduled reboot, one data node began exhibiting unusual behavior.
          Routine MapReduce jobs run against HBase tables which regularly completed in five or
          six minutes began taking 30 or 40 minutes to finish. These jobs were consistently
          found to be waiting on map and reduce tasks assigned to the troubled data node (e.g.,
          the slow map tasks all had the same Input Split). The situation came to a head during
          a distributed copy, when the copy was severely prolonged by the lagging node.</para>
      </section>
      <section>
        <title>Hardware</title>
        <itemizedlist>
          <title>Datanodes:</title>
          <listitem><para>Two 12-core processors</para></listitem>
          <listitem><para>Six Enterprise SATA disks</para></listitem>
          <listitem><para>24GB of RAM</para></listitem>
          <listitem><para>Two bonded gigabit NICs</para></listitem>
        </itemizedlist>
        <itemizedlist>
          <title>Network:</title>
          <listitem><para>10 Gigabit top-of-rack switches</para></listitem>
          <listitem><para>20 Gigabit bonded interconnects between racks.</para></listitem>
        </itemizedlist>
      </section>
      <section>
        <title>Hypotheses</title>
        <section>
          <title>HBase "Hot Spot" Region</title>
          <para>We hypothesized that we were experiencing a familiar point of pain: a "hot
            spot" region in an HBase table, where uneven key-space distribution can funnel a
            huge number of requests to a single HBase region, bombarding the RegionServer
            process and causing slow response times. Examination of the HBase Master status
            page showed that the number of HBase requests to the troubled node was almost
            zero. Further, examination of the HBase logs showed that there were no region
            splits, compactions, or other region transitions in progress. This effectively
            ruled out a "hot spot" as the root cause of the observed slowness.</para>
        </section>
        <section>
          <title>HBase Region With Non-Local Data</title>
          <para>Our next hypothesis was that one of the MapReduce tasks was requesting data
            from HBase that was not local to the datanode, thus forcing HDFS to request data
            blocks from other servers over the network. Examination of the datanode logs
            showed that there were very few blocks being requested over the network,
            indicating that the HBase region was correctly assigned, and that the majority of
            the necessary data was located on the node. This ruled out the possibility of
            non-local data causing a slowdown.</para>
        </section>
        <section>
          <title>Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk</title>
          <para>After concluding that Hadoop and HBase were not likely to be the culprits, we
            moved on to troubleshooting the datanode's hardware. Java, by design, will
            periodically scan its entire memory space to do garbage collection. If system
            memory is heavily overcommitted, the Linux kernel may enter a vicious cycle, using
            up all of its resources swapping Java heap back and forth from disk to RAM as Java
            tries to run garbage collection. Further, a failing hard disk will often retry
            reads and/or writes many times before giving up and returning an error. This can
            manifest as high iowait, as running processes wait for reads and writes to
            complete. Finally, a disk nearing the upper edge of its performance envelope will
            begin to cause iowait as it informs the kernel that it cannot accept any more
            data, and the kernel queues incoming data into the dirty write pool in memory.
            However, using <code>vmstat(1)</code> and <code>free(1)</code>, we could see that
            no swap was being used, and the amount of disk IO was only a few kilobytes per
            second.</para>
        </section>
        <section>
          <title>Slowness Due To High Processor Usage</title>
          <para>Next, we checked to see whether the system was performing slowly simply due to
            very high computational load. <code>top(1)</code> showed that the system load was
            higher than normal, but <code>vmstat(1)</code> and <code>mpstat(1)</code> showed
            that the amount of processor being used for actual computation was low.</para>
        </section>
        <section>
          <title>Network Saturation (The Winner)</title>
          <para>Since neither the disks nor the processors were being utilized heavily, we
            moved on to the performance of the network interfaces. The datanode had two
            gigabit ethernet adapters, bonded to form an active-standby interface.
            <code>ifconfig(8)</code> showed some unusual anomalies, namely interface errors,
            overruns, and framing errors. While not unheard of, these kinds of errors are
            exceedingly rare on modern hardware which is operating as it should:</para>
          <screen>
$ /sbin/ifconfig bond0
bond0  Link encap:Ethernet  HWaddr 00:00:00:00:00:00
       inet addr:10.x.x.x  Bcast:10.x.x.255  Mask:255.255.255.0
@@ -115,12 +154,13 @@ RX packets:2990700159 errors:12 dropped:0 overruns:1 frame:6 &lt;--- Lo
       TX packets:3443518196 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:0
       RX bytes:2416328868676 (2.4 TB)  TX bytes:3464991094001 (3.4 TB)
          </screen>
          <para>These errors immediately led us to suspect that one or more of the ethernet
            interfaces might have negotiated the wrong line speed. This was confirmed both by
            running an ICMP ping from an external host and observing round-trip-time in excess
            of 700ms, and by running <code>ethtool(8)</code> on the members of the bond
            interface and discovering that the active interface was operating at 100Mb/s, full
            duplex.</para>
          <screen>
$ sudo ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
@@ -147,45 +187,53 @@ Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000003 (3)
Link detected: yes
          </screen>
          <para>In normal operation, the ICMP ping round trip time should be around 20ms, and
            the interface speed and duplex should read "1000Mb/s" and "Full",
            respectively.</para>
        </section>
      </section>
      <section>
        <title>Resolution</title>
        <para>After determining that the active ethernet adapter was at the incorrect speed,
          we used the <code>ifenslave(8)</code> command to make the standby interface the
          active interface, which yielded an immediate improvement in MapReduce performance,
          and a 10 times improvement in network throughput.</para>
        <para>On the next trip to the datacenter, we determined that the line speed issue was
          ultimately caused by a bad network cable, which was replaced.</para>
      </section>
    </section>
    <!-- case study -->
    <section xml:id="casestudies.perf.1">
      <title>Case Study #2 (Performance Research 2012)</title>
      <para>Investigation results of a self-described "we're not sure what's wrong, but it
        seems slow" problem.
        <link xlink:href="http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html">http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html</link></para>
    </section>
    <section xml:id="casestudies.perf.2">
      <title>Case Study #3 (Performance Research 2010)</title>
      <para>Investigation results of general cluster performance from 2010. Although this
        research is on an older version of the codebase, this writeup is still very useful in
        terms of approach.
        <link xlink:href="http://hstack.org/hbase-performance-testing/">http://hstack.org/hbase-performance-testing/</link></para>
    </section>
    <section xml:id="casestudies.xceivers">
      <title>Case Study #4 (xcievers Config)</title>
      <para>Case study of configuring <code>xceivers</code>, and diagnosing errors from
        mis-configurations.
        <link xlink:href="http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html">http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html</link></para>
      <para>See also <xref linkend="dfs.datanode.max.transfer.threads"/>.</para>
    </section>
  </section>
  <!-- performance/troubleshooting -->
</chapter>


@@ -1,13 +1,15 @@
<?xml version="1.0"?>
<chapter
    xml:id="community"
    version="5.0"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:m="http://www.w3.org/1998/Math/MathML"
    xmlns:html="http://www.w3.org/1999/xhtml"
    xmlns:db="http://docbook.org/ns/docbook">
  <!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
@@ -26,132 +28,125 @@
 */
-->
  <title>Community</title>
  <section xml:id="decisions">
    <title>Decisions</title>
    <section xml:id="feature_branches">
      <title>Feature Branches</title>
      <para>Feature Branches are easy to make. You do not have to be a committer to make one.
        Just request that the name of your branch be added to JIRA up on the developer's
        mailing list and a committer will add it for you. Thereafter you can file issues
        against your feature branch in Apache HBase JIRA. Your code you keep elsewhere -- it
        should be public so it can be observed -- and you can update the dev mailing list on
        progress. When the feature is ready for commit, 3 +1s from committers will get your
        feature merged<footnote>
          <para>See <link xlink:href="http://search-hadoop.com/m/asM982C5FkS1">HBase, mail #
            dev - Thoughts about large feature dev branches</link></para>
        </footnote>
      </para>
    </section>
    <section xml:id="patchplusonepolicy">
      <title>Patch +1 Policy</title>
      <para>The below policy is something we put in place 09/2012. It is a suggested policy
        rather than a hard requirement. We want to try it first to see if it works before we
        cast it in stone.</para>
      <para>Apache HBase is made of
        <link xlink:href="https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel">components</link>.
        Components have one or more <xref linkend="OWNER"/>s. See the 'Description' field on
        the
        <link xlink:href="https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel">components</link>
        JIRA page for who the current owners are by component.</para>
      <para>Patches that fit within the scope of a single Apache HBase component require, at
        least, a +1 by one of the component's owners before commit. If owners are absent --
        busy or otherwise -- two +1s by non-owners will suffice.</para>
      <para>Patches that span components need at least two +1s before they can be committed,
        preferably +1s by owners of components touched by the x-component patch (TODO: This
        needs tightening up but I think fine for first pass).</para>
      <para>Any -1 on a patch by anyone vetoes a patch; it cannot be committed until the
        justification for the -1 is addressed.</para>
    </section>
    <section xml:id="hbase.fix.version.in.JIRA">
      <title>How to set fix version in JIRA on issue resolve</title>
      <para>Here is how <link xlink:href="http://search-hadoop.com/m/azemIi5RCJ1">we
        agreed</link> to set versions in JIRA when we resolve an issue. If trunk is going to
        be 0.98.0, then:</para>
      <itemizedlist>
        <listitem><para>Commit only to trunk: Mark with 0.98</para></listitem>
        <listitem><para>Commit to 0.95 and trunk: Mark with 0.98 and 0.95.x</para></listitem>
        <listitem><para>Commit to 0.94.x and 0.95, and trunk: Mark with 0.98, 0.95.x, and
          0.94.x</para></listitem>
        <listitem><para>Commit to 89-fb: Mark with 89-fb.</para></listitem>
        <listitem><para>Commit site fixes: no version</para></listitem>
      </itemizedlist>
    </section>
    <section xml:id="hbase.when.to.close.JIRA">
      <title>Policy on when to set a RESOLVED JIRA as CLOSED</title>
      <para>We <link xlink:href="http://search-hadoop.com/m/4cIKs1iwXMS1">agreed</link> that
        for issues that list multiple releases in their <emphasis>Fix Version/s</emphasis>
        field, we CLOSE the issue on the release of any of the versions listed; subsequent
        change to the issue must happen in a new JIRA.</para>
    </section>
    <section xml:id="no.permanent.state.in.zk">
      <title>Only transient state in ZooKeeper!</title>
      <para>You should be able to kill the data in ZooKeeper and HBase should ride over it,
        recreating the zk content as it goes. This is an old adage around these parts. We just
        made note of it now. We also are currently in violation of this basic tenet --
        replication at least keeps permanent state in zk -- but we are working to undo this
        breaking of a golden rule.</para>
    </section>
  </section>
  <section xml:id="community.roles">
    <title>Community Roles</title>
    <section xml:id="OWNER">
      <title>Component Owner/Lieutenant</title>
      <para>Component owners are listed in the description field on this Apache HBase JIRA
        <link xlink:href="https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel">components</link>
        page. The owners are listed in the 'Description' field rather than in the 'Component
        Lead' field because the latter only allows us to list one individual whereas it is
        encouraged that components have multiple owners.</para>
      <para>Owners or component lieutenants are volunteers who are (usually, but not
        necessarily) expert in their component domain and may have an agenda on how they think
        their Apache HBase component should evolve.</para>
      <orderedlist>
        <title>Component Owner Duties</title>
        <listitem>
          <para>Owners will try and review patches that land within their component's
            scope.</para>
        </listitem>
        <listitem>
          <para>If applicable, if an owner has an agenda, they will publish their goals or the
            design toward which they are driving their component.</para>
        </listitem>
      </orderedlist>
      <para>If you would like to volunteer as a component owner, just write the dev list and
        we'll sign you up. Owners do not need to be committers.</para>
    </section>
  </section>
  <section xml:id="hbase.commit.msg.format">
    <title>Commit Message format</title>
    <para>We <link xlink:href="http://search-hadoop.com/m/Gwxwl10cFHa1">agreed</link> to the
      following SVN commit message format:
      <programlisting>HBASE-xxxxx &lt;title>. (&lt;contributor>)</programlisting>
      If the person making the commit is the contributor, leave off the
      '(&lt;contributor>)' element.</para>
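    <para>For example, the summary line of the very commit that made this change follows the
      format:</para>
    <programlisting>HBASE-11199 One-time effort to pretty-print the Docbook XML. (Misty Stanley-Jones)</programlisting>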
  </section>
</chapter>

File diff suppressed because it is too large.


@@ -1,12 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter
    version="5.0"
    xml:id="cp"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:m="http://www.w3.org/1998/Math/MathML"
    xmlns:html="http://www.w3.org/1999/xhtml"
    xmlns:db="http://docbook.org/ns/docbook">
  <!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
@@ -26,8 +29,9 @@
 */
-->
  <title>Apache HBase Coprocessors</title>
  <para>The idea of HBase coprocessors was inspired by Google's BigTable coprocessors. The
    <link xlink:href="https://blogs.apache.org/hbase/entry/coprocessor_introduction">Apache
    HBase Blog on Coprocessor</link> provides very good documentation on the topic, with
    detailed information about the coprocessor framework, terminology, management, and so
    on.</para>
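  <para>To give a flavor of the framework, here is a minimal observer sketch (illustrative
    only, assuming the 0.98-era <classname>RegionObserver</classname> API; the class name and
    hook body are hypothetical -- see the blog post above for the authoritative details):</para>
  <programlisting>
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class ExampleRegionObserver extends BaseRegionObserver {
  @Override
  public void preGetOp(ObserverContext&lt;RegionCoprocessorEnvironment> ctx,
      Get get, List&lt;Cell> results) throws IOException {
    // Runs on the RegionServer before the region executes the Get;
    // inspect, augment, or short-circuit the request here.
  }
}</programlisting>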
</chapter>


@@ -1,13 +1,15 @@
<?xml version="1.0"?>
<chapter
    xml:id="developer"
    version="5.0"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:m="http://www.w3.org/1998/Math/MathML"
    xmlns:html="http://www.w3.org/1999/xhtml"
    xmlns:db="http://docbook.org/ns/docbook">
  <!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
@@ -502,7 +504,7 @@ HBase have a character not usually seen in other projects.
      integration with corresponding JUnit <link
        xlink:href="http://www.junit.org/node/581">categories</link>:
      <classname>SmallTests</classname>, <classname>MediumTests</classname>,
      <classname>LargeTests</classname>, <classname>IntegrationTests</classname>. JUnit
      categories are denoted using java annotations and look like this in your unit test
      code.</para>
    <programlisting>...
@Category(SmallTests.class)
public class TestHRegionInfo {
@@ -511,360 +513,447 @@ public class TestHRegionInfo {
  // ...
  }
}</programlisting>
    <para>The above example shows how to mark a unit test as belonging to the small category.
      All unit tests in HBase have a categorization.</para>
    <para>The first three categories, small, medium, and large, are for tests run when you
      type <code>$ mvn test</code>; i.e. these three categorizations are for HBase unit
      tests. The integration category is not for unit tests but for integration tests. These
      are run when you invoke <code>$ mvn verify</code>. Integration tests are described in
      <xref linkend="integration.tests"/> and will not be discussed further in this section on
      HBase unit tests.</para>
    <para>Apache HBase uses a patched maven surefire plugin and maven profiles to implement
      its unit test characterizations.</para>
    <para>Read the below to figure which annotation of the set small, medium, and large to
      put on your new HBase unit test.</para>
<section xml:id="hbase.unittests.small"> <section
<title>Small Tests<indexterm><primary>SmallTests</primary></indexterm></title> xml:id="hbase.unittests.small">
<para> <title>Small Tests<indexterm><primary>SmallTests</primary></indexterm></title>
<emphasis>Small</emphasis> tests are executed in a shared JVM. We put in this category all the tests that can <para>
be executed quickly in a shared JVM. The maximum execution time for a small test is 15 seconds, <emphasis>Small</emphasis> tests are executed in a shared JVM. We put in this
and small tests should not use a (mini)cluster.</para> category all the tests that can be executed quickly in a shared JVM. The maximum
</section> execution time for a small test is 15 seconds, and small tests should not use a
(mini)cluster.</para>
</section>
<section xml:id="hbase.unittests.medium"> <section
<title>Medium Tests<indexterm><primary>MediumTests</primary></indexterm></title> xml:id="hbase.unittests.medium">
<para><emphasis>Medium</emphasis> tests represent tests that must be executed <title>Medium Tests<indexterm><primary>MediumTests</primary></indexterm></title>
before proposing a patch. They are designed to run in less than 30 minutes altogether, <para><emphasis>Medium</emphasis> tests represent tests that must be executed before
and are quite stable in their results. They are designed to last less than 50 seconds proposing a patch. They are designed to run in less than 30 minutes altogether,
individually. They can use a cluster, and each of them is executed in a separate JVM. and are quite stable in their results. They are designed to last less than 50
</para> seconds individually. They can use a cluster, and each of them is executed in a
</section> separate JVM. </para>
</section>
<section xml:id="hbase.unittests.large"> <section
<title>Large Tests<indexterm><primary>LargeTests</primary></indexterm></title> xml:id="hbase.unittests.large">
<para><emphasis>Large</emphasis> tests are everything else. They are typically large-scale <title>Large Tests<indexterm><primary>LargeTests</primary></indexterm></title>
tests, regression tests for specific bugs, timeout tests, performance tests. <para><emphasis>Large</emphasis> tests are everything else. They are typically
They are executed before a commit on the pre-integration machines. They can be run on large-scale tests, regression tests for specific bugs, timeout tests,
the developer machine as well. performance tests. They are executed before a commit on the pre-integration
</para> machines. They can be run on the developer machine as well. </para>
</section> </section>
<section xml:id="hbase.unittests.integration"> <section
<title>Integration Tests<indexterm><primary>IntegrationTests</primary></indexterm></title> xml:id="hbase.unittests.integration">
<para><emphasis>Integration</emphasis> tests are system level tests. See <title>Integration
<xref linkend="integration.tests"/> for more info. Tests<indexterm><primary>IntegrationTests</primary></indexterm></title>
</para> <para><emphasis>Integration</emphasis> tests are system level tests. See <xref
</section> linkend="integration.tests" /> for more info. </para>
</section> </section>
</section>
<section xml:id="hbase.unittests.cmds"> <section
<title>Running tests</title> xml:id="hbase.unittests.cmds">
<para>Below we describe how to run the Apache HBase junit categories.</para> <title>Running tests</title>
<para>Below we describe how to run the Apache HBase junit categories.</para>
<section xml:id="hbase.unittests.cmds.test"> <section
<title>Default: small and medium category tests xml:id="hbase.unittests.cmds.test">
</title> <title>Default: small and medium category tests </title>
<para>Running <programlisting>mvn test</programlisting> will execute all small tests in a single JVM <para>Running <programlisting>mvn test</programlisting> will execute all small tests
(no fork) and then medium tests in a separate JVM for each test instance. in a single JVM (no fork) and then medium tests in a separate JVM for each test
Medium tests are NOT executed if there is an error in a small test. instance. Medium tests are NOT executed if there is an error in a small test.
Large tests are NOT executed. There is one report for small tests, and one report for Large tests are NOT executed. There is one report for small tests, and one
medium tests if they are executed. report for medium tests if they are executed. </para>
</para> </section>
</section>
<section xml:id="hbase.unittests.cmds.test.runAllTests"> <section
<title>Running all tests</title> xml:id="hbase.unittests.cmds.test.runAllTests">
<para>Running <programlisting>mvn test -P runAllTests</programlisting> <title>Running all tests</title>
will execute small tests in a single JVM then medium and large tests in a separate JVM for each test. <para>Running <programlisting>mvn test -P runAllTests</programlisting> will execute
Medium and large tests are NOT executed if there is an error in a small test. small tests in a single JVM then medium and large tests in a separate JVM for
Large tests are NOT executed if there is an error in a small or medium test. each test. Medium and large tests are NOT executed if there is an error in a
There is one report for small tests, and one report for medium and large tests if they are executed. small test. Large tests are NOT executed if there is an error in a small or
</para> medium test. There is one report for small tests, and one report for medium and
</section> large tests if they are executed. </para>
</section>
<section xml:id="hbase.unittests.cmds.test.localtests.mytest"> <section
<title>Running a single test or all tests in a package</title> xml:id="hbase.unittests.cmds.test.localtests.mytest">
<para>To run an individual test, e.g. <classname>MyTest</classname>, do <title>Running a single test or all tests in a package</title>
<programlisting>mvn test -Dtest=MyTest</programlisting> You can also <para>To run an individual test, e.g. <classname>MyTest</classname>, do
pass multiple, individual tests as a comma-delimited list: <programlisting>mvn test -Dtest=MyTest</programlisting> You can also pass
<programlisting>mvn test -Dtest=MyTest1,MyTest2,MyTest3</programlisting> multiple, individual tests as a comma-delimited list:
You can also pass a package, which will run all tests under the package: <programlisting>mvn test -Dtest=MyTest1,MyTest2,MyTest3</programlisting> You can
<programlisting>mvn test '-Dtest=org.apache.hadoop.hbase.client.*'</programlisting> also pass a package, which will run all tests under the package:
</para> <programlisting>mvn test '-Dtest=org.apache.hadoop.hbase.client.*'</programlisting>
</para>
<para> <para> When <code>-Dtest</code> is specified, <code>localTests</code> profile will
When <code>-Dtest</code> is specified, <code>localTests</code> profile will be used. It will use the official release be used. It will use the official release of maven surefire, rather than our
of maven surefire, rather than our custom surefire plugin, and the old connector (The HBase build uses a patched custom surefire plugin, and the old connector (The HBase build uses a patched
version of the maven surefire plugin). Each junit tests is executed in a separate JVM (A fork per test class). version of the maven surefire plugin). Each junit tests is executed in a
There is no parallelization when tests are running in this mode. You will see a new message at the end of the separate JVM (A fork per test class). There is no parallelization when tests are
-report: "[INFO] Tests are skipped". It's harmless. While you need to make sure the sum of <code>Tests run:</code> in running in this mode. You will see a new message at the end of the -report:
the <code>Results :</code> section of test reports matching the number of tests you specified because no "[INFO] Tests are skipped". It's harmless. While you need to make sure the sum
error will be reported when a non-existent test case is specified. of <code>Tests run:</code> in the <code>Results :</code> section of test reports
</para> matching the number of tests you specified because no error will be reported
</section> when a non-existent test case is specified. </para>
</section>
<section xml:id="hbase.unittests.cmds.test.profiles"> <section
<title>Other test invocation permutations</title> xml:id="hbase.unittests.cmds.test.profiles">
<para>Running <programlisting>mvn test -P runSmallTests</programlisting> will execute "small" tests only, using a single JVM. <title>Other test invocation permutations</title>
</para> <para>Running <command>mvn test -P runSmallTests</command> will execute "small"
<para>Running <programlisting>mvn test -P runMediumTests</programlisting> will execute "medium" tests only, launching a new JVM for each test-class. tests only, using a single JVM. </para>
</para> <para>Running <command>mvn test -P runMediumTests</command> will execute "medium"
<para>Running <programlisting>mvn test -P runLargeTests</programlisting> will execute "large" tests only, launching a new JVM for each test-class. tests only, launching a new JVM for each test-class. </para>
</para> <para>Running <command>mvn test -P runLargeTests</command> will execute "large"
<para>For convenience, you can run <programlisting>mvn test -P runDevTests</programlisting> to execute both small and medium tests, using a single JVM. tests only, launching a new JVM for each test-class. </para>
</para> <para>For convenience, you can run <command>mvn test -P runDevTests</command> to
</section> execute both small and medium tests, using a single JVM. </para>
</section>
<section xml:id="hbase.unittests.test.faster"> <section
<title>Running tests faster</title> xml:id="hbase.unittests.test.faster">
<para> By default, <code>$ mvn test -P runAllTests</code> runs 5 tests in parallel. It can be <title>Running tests faster</title>
increased on a developer's machine. Allowing that you can have 2 tests in <para> By default, <code>$ mvn test -P runAllTests</code> runs 5 tests in parallel.
parallel per core, and you need about 2Gb of memory per test (at the extreme), It can be increased on a developer's machine. Allowing that you can have 2 tests
if you have an 8 core, 24Gb box, you can have 16 tests in parallel. but the in parallel per core, and you need about 2Gb of memory per test (at the
memory available limits it to 12 (24/2), To run all tests with 12 tests in extreme), if you have an 8 core, 24Gb box, you can have 16 tests in parallel.
parallel, do this: <command>mvn test -P runAllTests but the memory available limits it to 12 (24/2), To run all tests with 12 tests
in parallel, do this: <command>mvn test -P runAllTests
-Dsurefire.secondPartThreadCount=12</command>. To increase the speed, you -Dsurefire.secondPartThreadCount=12</command>. To increase the speed, you
can as well use a ramdisk. You will need 2Gb of memory to run all tests. You can as well use a ramdisk. You will need 2Gb of memory to run all tests. You
will also need to delete the files between two test run. The typical way to will also need to delete the files between two test run. The typical way to
configure a ramdisk on Linux is: configure a ramdisk on Linux is:</para>
<programlisting>$ sudo mkdir /ram2G <screen>$ sudo mkdir /ram2G
sudo mount -t tmpfs -o size=2048M tmpfs /ram2G</programlisting> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G</screen>
You can then use it to run all HBase tests with the command: <command>mvn test <para>You can then use it to run all HBase tests with the command: </para>
<screen>mvn test
-P runAllTests -Dsurefire.secondPartThreadCount=12 -P runAllTests -Dsurefire.secondPartThreadCount=12
-Dtest.build.data.basedirectory=/ram2G</command> -Dtest.build.data.basedirectory=/ram2G</screen>
</section>
<section
xml:id="hbase.unittests.cmds.test.hbasetests">
<title><command>hbasetests.sh</command></title>
<para>It's also possible to use the script <command>hbasetests.sh</command>. This
script runs the medium and large tests in parallel with two maven instances, and
provides a single report. This script does not use the hbase version of surefire
so no parallelization is being done other than the two maven instances the
script sets up. It must be executed from the directory which contains the
<filename>pom.xml</filename>.</para>
<para>For example running <command>./dev-support/hbasetests.sh</command> will
execute small and medium tests. Running <command>./dev-support/hbasetests.sh
runAllTests</command> will execute all tests. Running
<command>./dev-support/hbasetests.sh replayFailed</command> will rerun the
failed tests a second time, in a separate jvm and without parallelisation.
</para> </para>
</section> </section>
<section
xml:id="hbase.unittests.resource.checker">
<title>Test Resource Checker<indexterm><primary>Test Resource
Checker</primary></indexterm></title>
<para> A custom Maven SureFire plugin listener checks a number of resources before
and after each HBase unit test runs and logs its findings at the end of the test
output files which can be found in <filename>target/surefire-reports</filename>
per Maven module (Tests write test reports named for the test class into this
directory. Check the <filename>*-out.txt</filename> files). The resources
counted are the number of threads, the number of file descriptors, etc. If the
number has increased, it adds a <emphasis>LEAK?</emphasis> comment in the logs.
As you can have an HBase instance running in the background, some threads can be
deleted/created without any specific action in the test. However, if the test
does not work as expected, or if the test should not impact these resources,
it's worth checking these log lines
<computeroutput>...hbase.ResourceChecker(157): before...</computeroutput>
and <computeroutput>...hbase.ResourceChecker(157): after...</computeroutput>.
For example: </para>
<screen>2012-09-26 09:22:15,315 INFO [pool-1-thread-1]
hbase.ResourceChecker(157): after:
regionserver.TestColumnSeeking#testReseeking Thread=65 (was 65),
OpenFileDescriptor=107 (was 107), MaxFileDescriptor=10240 (was 10240),
ConnectionCount=1 (was 1) </screen>
</section>
</section>
<section xml:id="hbase.unittests.cmds.test.hbasetests"> <section
<title><command>hbasetests.sh</command></title> xml:id="hbase.tests.writing">
<para>It's also possible to use the script <command>hbasetests.sh</command>. This script runs the medium and <title>Writing Tests</title>
large tests in parallel with two maven instances, and provides a single report. This script does not use <section
the hbase version of surefire so no parallelization is being done other than the two maven instances the xml:id="hbase.tests.rules">
script sets up. <title>General rules</title>
It must be executed from the directory which contains the <filename>pom.xml</filename>.</para> <itemizedlist>
<para>For example running <listitem>
<programlisting>./dev-support/hbasetests.sh</programlisting> will execute small and medium tests. <para>As much as possible, tests should be written as category small
Running <programlisting>./dev-support/hbasetests.sh runAllTests</programlisting> will execute all tests. tests.</para>
Running <programlisting>./dev-support/hbasetests.sh replayFailed</programlisting> will rerun the failed tests a </listitem>
second time, in a separate jvm and without parallelisation. <listitem>
</para> <para>All tests must be written to support parallel execution on the same
</section> machine, hence they should not use shared resources as fixed ports or
<section xml:id="hbase.unittests.resource.checker"> fixed file names.</para>
<title>Test Resource Checker<indexterm><primary>Test Resource Checker</primary></indexterm></title> </listitem>
<para> <listitem>
A custom Maven SureFire plugin listener checks a number of resources before <para>Tests should not overlog. More than 100 lines/second makes the logs
and after each HBase unit test runs and logs its findings at the end of the test complex to read and use i/o that are hence not available for the other
output files which can be found in <filename>target/surefire-reports</filename> tests.</para>
per Maven module (Tests write test reports named for the test class into this directory. </listitem>
Check the <filename>*-out.txt</filename> files). The resources counted are the number <listitem>
of threads, the number of file descriptors, etc. If the number has increased, it adds <para>Tests can be written with <classname>HBaseTestingUtility</classname>.
a <emphasis>LEAK?</emphasis> comment in the logs. As you can have an HBase instance This class offers helper functions to create a temp directory and do the
running in the background, some threads can be deleted/created without any specific cleanup, or to start a cluster.</para>
action in the test. However, if the test does not work as expected, or if the test </listitem>
should not impact these resources, it's worth checking these log lines </itemizedlist>
<computeroutput>...hbase.ResourceChecker(157): before...</computeroutput> and </section>
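      <para>A minimal sketch of a small-category test that follows these rules (the class
        name, test body, and use of <code>getDataTestDir</code> here are illustrative, not
        prescribed):</para>
      <programlisting>
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.SmallTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category(SmallTests.class)
public class TestExample {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @Test
  public void testUsingTempDir() throws Exception {
    // A unique temp directory rather than a fixed file name, so the test
    // can run in parallel with others on the same machine.
    Path testDir = TEST_UTIL.getDataTestDir("testUsingTempDir");
    // ... exercise the code under test against testDir ...
  }
}</programlisting>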
    </section>
    <section xml:id="hbase.tests.categories">
      <title>Categories and execution time</title>
      <itemizedlist>
        <listitem>
          <para>All tests must be categorized, if not they could be skipped.</para>
        </listitem>
        <listitem>
          <para>All tests should be written to be as fast as possible.</para>
        </listitem>
        <listitem>
          <para>Small category tests should last less than 15 seconds, and must not have any
            side effect.</para>
        </listitem>
        <listitem>
          <para>Medium category tests should last less than 50 seconds.</para>
        </listitem>
        <listitem>
          <para>Large category tests should last less than 3 minutes. This should ensure a
            good parallelization for people using it, and ease the analysis when the test
            fails.</para>
        </listitem>
      </itemizedlist>
    </section>
    <section xml:id="hbase.tests.sleeps">
      <title>Sleeps in tests</title>
      <para>Whenever possible, tests should not use <methodname>Thread.sleep</methodname>,
        but rather wait for the real event they need. This is faster and clearer for the
        reader. Tests should not do a <methodname>Thread.sleep</methodname> without testing an
        ending condition, which makes clear what the test is waiting for. Moreover, the test
        will then work whatever the machine performance is. Sleeps should be minimal, to be as
        fast as possible. Waiting for a variable should be done in a 40ms sleep loop. Waiting
        for a socket operation should be done in a 200ms sleep loop.</para>
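      <para>For instance, a wait on a condition might look like the following sketch (an
        illustration of the 40ms-loop guideline above; the helper and its names are
        hypothetical):</para>
      <programlisting>
// Poll a condition in a 40ms sleep loop instead of one long Thread.sleep.
private static void waitFor(java.util.concurrent.atomic.AtomicBoolean done, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (!done.get()) {
    if (System.currentTimeMillis() > deadline) {
      org.junit.Assert.fail("Condition not met within " + timeoutMs + "ms");
    }
    Thread.sleep(40);
  }
}</programlisting>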
<section xml:id="hbase.tests.writing"> <section
<title>Writing Tests</title> xml:id="hbase.tests.cluster">
<section xml:id="hbase.tests.rules"> <title>Tests using a cluster </title>
<title>General rules</title>
<itemizedlist>
<listitem>
<para>As much as possible, tests should be written as category small tests.</para>
</listitem>
<listitem>
<para>All tests must be written to support parallel execution on the same machine, hence they should not use shared resources as fixed ports or fixed file names.</para>
</listitem>
<listitem>
<para>Tests should not overlog. More than 100 lines/second makes the logs complex to read and use i/o that are hence not available for the other tests.</para>
</listitem>
<listitem>
<para>Tests can be written with <classname>HBaseTestingUtility</classname>.
This class offers helper functions to create a temp directory and do the cleanup, or to start a cluster.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="hbase.tests.categories">
<title>Categories and execution time</title>
<itemizedlist>
<listitem>
<para>All tests must be categorized, if not they could be skipped.</para>
</listitem>
<listitem>
<para>All tests should be written to be as fast as possible.</para>
</listitem>
<listitem>
<para>Small category tests should last less than 15 seconds, and must not have any side effect.</para>
</listitem>
<listitem>
<para>Medium category tests should last less than 50 seconds.</para>
</listitem>
<listitem>
<para>Large category tests should last less than 3 minutes. This should ensure a good parallelization for people using it, and ease the analysis when the test fails.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="hbase.tests.sleeps">
<title>Sleeps in tests</title>
<para>Whenever possible, tests should not use <methodname>Thread.sleep</methodname>, but rather waiting for the real event they need. This is faster and clearer for the reader.
Tests should not do a <methodname>Thread.sleep</methodname> without testing an ending condition. This allows understanding what the test is waiting for. Moreover, the test will work whatever the machine performance is.
Sleep should be minimal to be as fast as possible. Waiting for a variable should be done in a 40ms sleep loop. Waiting for a socket operation should be done in a 200 ms sleep loop.
</para>
</section>
<section xml:id="hbase.tests.cluster"> <para>Tests using a HRegion do not have to start a cluster: A region can use the
<title>Tests using a cluster local file system. Start/stopping a cluster cost around 10 seconds. They should
</title> not be started per test method but per test class. Started cluster must be
shutdown using <methodname>HBaseTestingUtility#shutdownMiniCluster</methodname>,
which cleans the directories. As most as possible, tests should use the default
settings for the cluster. When they don't, they should document it. This will
allow to share the cluster later. </para>
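          <para>In this sketch the class name is invented;
            <methodname>startMiniCluster</methodname> and
            <methodname>shutdownMiniCluster</methodname> are the
            <classname>HBaseTestingUtility</classname> calls referred to above:</para>
          <programlisting><![CDATA[import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestWithCluster {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void beforeClass() throws Exception {
    UTIL.startMiniCluster();    // once per class, not per method
  }

  @AfterClass
  public static void afterClass() throws Exception {
    UTIL.shutdownMiniCluster(); // also cleans up the test directories
  }

  // ... @Test methods run against UTIL's cluster ...
}]]></programlisting>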
        </section>
      </section>

      <section
        xml:id="integration.tests">
        <title>Integration Tests</title>
        <para>HBase integration/system tests are tests that are beyond HBase unit tests. They
          are generally long-lasting and sizeable (the test can be asked to load 1M or 1B
          rows), they are targetable (they can take configuration that will point them at the
          ready-made cluster they are to run against; integration tests do not include cluster
          start/stop code), and they verify success through public APIs only; they do not
          attempt to examine server internals to assert success or failure. Integration tests
          are what you would run when you need more elaborate proofing of a release candidate
          beyond what unit tests can do. They are not generally run on the Apache Continuous
          Integration build server; however, some sites opt to run integration tests as a part
          of their continuous testing on an actual cluster. </para>
        <para> Integration tests currently live under the <filename>src/test</filename>
          directory in the hbase-it submodule and will match the regex:
          <filename>**/IntegrationTest*.java</filename>. All integration tests are also
          annotated with <code>@Category(IntegrationTests.class)</code>. </para>
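        <para>Concretely, an integration test class picks up both conventions at once — the
          name prefix and the category annotation. The class body below is an invented
          placeholder, and the import path of <code>IntegrationTests</code> is our assumption
          (check the hbase-it sources for your version):</para>
        <programlisting><![CDATA[import org.apache.hadoop.hbase.IntegrationTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Lives under hbase-it/src/test; the name matches **/IntegrationTest*.java
@Category(IntegrationTests.class)
public class IntegrationTestExample {
  @Test
  public void testSomethingLongRunning() throws Exception {
    // drive the cluster through public client APIs only
  }
}]]></programlisting>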
<section xml:id="integration.tests"> <para> Integration tests can be run in two modes: using a mini cluster, or against an
<title>Integration Tests</title> actual distributed cluster. Maven failsafe is used to run the tests using the mini
<para>HBase integration/system tests are tests that are beyond HBase unit tests. They cluster. IntegrationTestsDriver class is used for executing the tests against a
are generally long-lasting, sizeable (the test can be asked to 1M rows or 1B rows), distributed cluster. Integration tests SHOULD NOT assume that they are running
targetable (they can take configuration that will point them at the ready-made cluster against a mini cluster, and SHOULD NOT use private API's to access cluster state. To
they are to run against; integration tests do not include cluster start/stop code), interact with the distributed or mini cluster uniformly,
and verifying success, integration tests rely on public APIs only; they do not <code>IntegrationTestingUtility</code>, and <code>HBaseCluster</code> classes,
attempt to examine server internals asserting success/fail. Integration tests and public client API's can be used. </para>
are what you would run when you need to more elaborate proofing of a release candidate
beyond what unit tests can do. They are not generally run on the Apache Continuous Integration
build server, however, some sites opt to run integration tests as a part of their
continuous testing on an actual cluster.
</para>
<para>
Integration tests currently live under the <filename>src/test</filename> directory
in the hbase-it submodule and will match the regex: <filename>**/IntegrationTest*.java</filename>.
All integration tests are also annotated with <code>@Category(IntegrationTests.class)</code>.
</para>
<para> <para> On a distributed cluster, integration tests that use ChaosMonkey or otherwise
Integration tests can be run in two modes: using a mini cluster, or against an actual distributed cluster. manipulate services thru cluster manager (e.g. restart regionservers) use SSH to do
Maven failsafe is used to run the tests using the mini cluster. IntegrationTestsDriver class is used for it. To run these, test process should be able to run commands on remote end, so ssh
executing the tests against a distributed cluster. Integration tests SHOULD NOT assume that they are running against a should be configured accordingly (for example, if HBase runs under hbase user in
mini cluster, and SHOULD NOT use private API's to access cluster state. To interact with the distributed or mini your cluster, you can set up passwordless ssh for that user and run the test also
cluster uniformly, <code>IntegrationTestingUtility</code>, and <code>HBaseCluster</code> classes, under it). To facilitate that, <code>hbase.it.clustermanager.ssh.user</code>,
and public client API's can be used. <code>hbase.it.clustermanager.ssh.opts</code> and
</para> <code>hbase.it.clustermanager.ssh.cmd</code> configuration settings can be used.
"User" is the remote user that cluster manager should use to perform ssh commands.
"Opts" contains additional options that are passed to SSH (for example, "-i
/tmp/my-key"). Finally, if you have some custom environment setup, "cmd" is the
override format for the entire tunnel (ssh) command. The default string is
{<code>/usr/bin/ssh %1$s %2$s%3$s%4$s "%5$s"</code>} and is a good starting
point. This is a standard Java format string with 5 arguments that is used to
execute the remote command. The argument 1 (%1$s) is SSH options set the via opts
setting or via environment variable, 2 is SSH user name, 3 is "@" if username is set
or "" otherwise, 4 is the target host name, and 5 is the logical command to execute
(that may include single quotes, so don't use them). For example, if you run the
tests under non-hbase user and want to ssh as that user and change to hbase on
remote machine, you can use {<code>/usr/bin/ssh %1$s %2$s%3$s%4$s "su hbase - -c
\"%5$s\""</code>}. That way, to kill RS (for example) integration tests may run
{<code>/usr/bin/ssh some-hostname "su hbase - -c \"ps aux | ... | kill
...\""</code>}. The command is logged in the test logs, so you can verify it is
correct for your environment. </para>
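        <para>For example, to have the cluster manager ssh as the <code>hbase</code> user with
          a custom key, the settings above could go into the
          <filename>hbase-site.xml</filename> of the config dir that
          <filename>bin/hbase</filename> picks up. This is a sketch under that assumption;
          the values are placeholders taken from the examples in this section:</para>
        <programlisting><![CDATA[<property>
  <name>hbase.it.clustermanager.ssh.user</name>
  <value>hbase</value>
</property>
<property>
  <name>hbase.it.clustermanager.ssh.opts</name>
  <value>-i /tmp/my-key</value>
</property>]]></programlisting>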
        <section
          xml:id="maven.build.commands.integration.tests.mini">
          <title>Running integration tests against mini cluster</title>
          <para>HBase 0.92 added a <varname>verify</varname> maven target. Invoking it, for
            example by doing <code>mvn verify</code>, will run all the phases up to and
            including the verify phase via the maven <link
              xlink:href="http://maven.apache.org/plugins/maven-failsafe-plugin/">failsafe
              plugin</link>, running all the above mentioned HBase unit tests as well as
            tests that are in the HBase integration test group. After you have completed
            <command>mvn install -DskipTests</command>, you can run just the integration
            tests by invoking:</para>
          <programlisting>
cd hbase-it
mvn verify</programlisting>
          <para>If you just want to run the integration tests in top-level, you need to run
            two commands. First: <command>mvn failsafe:integration-test</command>. This
            actually runs ALL the integration tests. </para>
          <note>
            <para>This command will always output <code>BUILD SUCCESS</code> even if there
              are test failures. </para>
          </note>
          <para>At this point, you could grep the output by hand looking for failed tests.
            However, maven will do this for us; just use <command>mvn
            failsafe:verify</command>. This command basically looks at all the test
            results (so don't remove the 'target' directory) for test failures and reports
            the results.</para>
          <section
            xml:id="maven.build.commands.integration.tests2">
            <title>Running a subset of Integration tests</title>
            <para>This is very similar to how you specify running a subset of unit tests
              (see above), but use the property <code>it.test</code> instead of
              <code>test</code>. To just run
              <classname>IntegrationTestClassXYZ.java</classname>, use: <command>mvn
              failsafe:integration-test -Dit.test=IntegrationTestClassXYZ</command>.
              The next thing you might want to do is run groups of integration tests, say
              all integration tests that are named IntegrationTestClassX*.java:
              <command>mvn failsafe:integration-test -Dit.test=*ClassX*</command>. This
              runs everything that is an integration test that matches *ClassX*. This
              means anything matching: "**/IntegrationTest*ClassX*". You can also run
              multiple groups of integration tests using comma-delimited lists (similar to
              unit tests). Using a list of matches still supports full regex matching for
              each of the groups. This would look something like: <command>mvn
              failsafe:integration-test -Dit.test=*ClassX*, *ClassY</command>
            </para>
          </section>
        </section>
<section
xml:id="maven.build.commands.integration.tests.distributed">
<title>Running integration tests against distributed cluster</title>
          <para> If you have an already-setup HBase cluster, you can launch the integration
            tests by invoking the class <code>IntegrationTestsDriver</code>. You may have to
            run test-compile first. The configuration will be picked up by the bin/hbase
            script. <programlisting>mvn test-compile</programlisting> Then launch the tests
            with:</para>
<programlisting>bin/hbase [--config config_dir] org.apache.hadoop.hbase.IntegrationTestsDriver</programlisting>
          <para>Pass <code>-h</code> to get usage on this sweet tool. Running the
            IntegrationTestsDriver without any argument will launch tests found under
            <code>hbase-it/src/test</code>, having the
            <code>@Category(IntegrationTests.class)</code> annotation, and a name
            starting with <code>IntegrationTests</code>. See the usage, by passing -h, to
            learn how to filter test classes. You can pass a regex which is checked against
            the full class name; so, part of a class name can be used.
            IntegrationTestsDriver uses JUnit to run the tests. Currently there is no
            support for running integration tests against a distributed cluster using maven
            (see <link
              xlink:href="https://issues.apache.org/jira/browse/HBASE-6201">HBASE-6201</link>). </para>
<section xml:id="maven.build.commands.integration.tests2"> <para> The tests interact with the distributed cluster by using the methods in the
<title>Running a subset of Integration tests</title> <code>DistributedHBaseCluster</code> (implementing
<para>This is very similar to how you specify running a subset of unit tests (see above), but use the property <code>HBaseCluster</code>) class, which in turn uses a pluggable
<code>it.test</code> instead of <code>test</code>. <code>ClusterManager</code>. Concrete implementations provide actual
To just run <classname>IntegrationTestClassXYZ.java</classname>, use: functionality for carrying out deployment-specific and environment-dependent
<programlisting>mvn failsafe:integration-test -Dit.test=IntegrationTestClassXYZ</programlisting> tasks (SSH, etc). The default <code>ClusterManager</code> is
The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: <code>HBaseClusterManager</code>, which uses SSH to remotely execute
<programlisting>mvn failsafe:integration-test -Dit.test=*ClassX*</programlisting> start/stop/kill/signal commands, and assumes some posix commands (ps, etc). Also
This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". assumes the user running the test has enough "power" to start/stop servers on
You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups.This would look something like: the remote machines. By default, it picks up <code>HBASE_SSH_OPTS, HBASE_HOME,
<programlisting>mvn failsafe:integration-test -Dit.test=*ClassX*, *ClassY</programlisting> HBASE_CONF_DIR</code> from the env, and uses
</para> <code>bin/hbase-daemon.sh</code> to carry out the actions. Currently tarball
</section> deployments, deployments which uses hbase-daemons.sh, and <link
</section> xlink:href="http://incubator.apache.org/ambari/">Apache Ambari</link>
<section xml:id="maven.build.commands.integration.tests.distributed"> deployments are supported. /etc/init.d/ scripts are not supported for now, but
<title>Running integration tests against distributed cluster</title> it can be easily added. For other deployment options, a ClusterManager can be
<para> implemented and plugged in. </para>
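          <para>For instance, against a tarball deployment the relevant environment might be
            prepared like this before launching the driver (the paths and key file are
            placeholders, not defaults):</para>
          <programlisting>export HBASE_HOME=/opt/hbase
export HBASE_CONF_DIR=/opt/hbase/conf
export HBASE_SSH_OPTS="-i /tmp/my-key"
bin/hbase org.apache.hadoop.hbase.IntegrationTestsDriver</programlisting>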
        </section>
        <section
          xml:id="maven.build.commands.integration.tests.destructive">
          <title>Destructive integration / system tests</title>
          <para> In 0.96, a tool named <code>ChaosMonkey</code> has been introduced. It is
            modeled after the <link
              xlink:href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html">same-named
              tool by Netflix</link>. Some of the tests use ChaosMonkey to simulate faults
            in the running cluster by killing random servers, disconnecting servers, etc.
            ChaosMonkey can also be used as a stand-alone tool to run a (misbehaving)
            policy while you are running other tests. </para>
          <para> ChaosMonkey defines Actions and Policies. Actions are sequences of events.
            We have at least the following actions:</para>
          <itemizedlist>
            <listitem>
              <para>Restart active master (sleep 5 sec)</para>
            </listitem>
            <listitem>
              <para>Restart random regionserver (sleep 5 sec)</para>
            </listitem>
<listitem>
<para>Restart random regionserver (sleep 60 sec)</para>
</listitem>
<listitem>
<para>Restart META regionserver (sleep 5 sec)</para>
</listitem>
<listitem>
<para>Restart ROOT regionserver (sleep 5 sec)</para>
</listitem>
<listitem>
<para>Batch restart of 50% of regionservers (sleep 5 sec)</para>
</listitem>
<listitem>
<para>Rolling restart of 100% of regionservers (sleep 5 sec)</para>
</listitem>
</itemizedlist>
<para> Policies on the other hand are responsible for executing the actions based on
a strategy. The default policy is to execute a random action every minute based
on predefined action weights. ChaosMonkey executes predefined named policies
until it is stopped. More than one policy can be active at any time. </para>
          <para> To run ChaosMonkey as a standalone tool, deploy your HBase cluster as usual.
            ChaosMonkey uses the configuration from the bin/hbase script, thus no extra
            configuration needs to be done. You can invoke ChaosMonkey by
            running:</para>
          <programlisting>bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey</programlisting>
          <para> This will output something like: </para>
          <screen>
12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000
12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter
12/11/19 23:22:24 INFO util.ChaosMonkey: Performing action: Restart active master
@ -1293,7 +1382,8 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
<section xml:id="common.patch.feedback.javadoc.defaults"> <section xml:id="common.patch.feedback.javadoc.defaults">
<title>Javadoc - Useless Defaults</title> <title>Javadoc - Useless Defaults</title>
<para>Don't just leave the @param arguments the way your IDE generated them. Don't do this... <para>Don't just leave the @param arguments the way your IDE generated them. Don't do
this...</para>
<programlisting> <programlisting>
/** /**
* *
@ -1302,31 +1392,32 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
 */
public Foo getFoo(Bar bar);
          </programlisting>
          <para>... either add something descriptive to the @param and @return lines, or just
            remove them. But the preference is to add something descriptive and
            useful.</para>
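          <para>By contrast, a filled-in version might read like the following (the
            descriptions are invented for the dummy <code>Foo</code>/<code>Bar</code>
            example above):</para>
          <programlisting>
/**
 * Looks up the Foo registered for the given Bar.
 *
 * @param bar the Bar to resolve; must not be null
 * @return the matching Foo, or null if none is registered
 */
public Foo getFoo(Bar bar);
          </programlisting>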
        </section>
<section xml:id="common.patch.feedback.onething"> <section
<title>One Thing At A Time, Folks</title> xml:id="common.patch.feedback.onething">
<para>If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely <title>One Thing At A Time, Folks</title>
different area of code. <para>If you submit a patch for one thing, don't do auto-reformatting or unrelated
</para> reformatting of code on a completely different area of code. </para>
<para>Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira. <para>Likewise, don't add unrelated cleanup or refactorings outside the scope of
</para> your Jira. </para>
</section> </section>
<section xml:id="common.patch.feedback.tests"> <section
<title>Ambigious Unit Tests</title> xml:id="common.patch.feedback.tests">
<para>Make sure that you're clear about what you are testing in your unit tests and why. <title>Ambigious Unit Tests</title>
</para> <para>Make sure that you're clear about what you are testing in your unit tests and
</section> why. </para>
</section>
</section> <!-- patch feedback --> </section>
<!-- patch feedback -->
      <section>
        <title>Submitting a patch again</title>
        <para> Sometimes committers ask for changes to a patch. After incorporating the
          suggested/requested changes, follow this process to submit the patch again. </para>
        <itemizedlist>
          <listitem>
            <para>Do not delete the old patch file</para>
@ -1341,20 +1432,22 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
            <para>'Cancel Patch' on JIRA. The bug status will change back to Open</para>
          </listitem>
          <listitem>
            <para>Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files -->
              Attach'</para>
          </listitem>
          <listitem>
            <para>Click on 'Submit Patch'. Now the bug status will say 'Patch
              Available'.</para>
          </listitem>
        </itemizedlist>
        <para>Committers will review the patch. Rinse and repeat as many times as needed
          :-)</para>
      </section>
      <section>
        <title>Submitting incremental patches</title>
        <para> At times you may want to break a big change into multiple patches. Here is a
          sample work-flow using git: <itemizedlist>
            <listitem>
              <para>patch 1:</para>
              <itemizedlist>
@ -1374,7 +1467,8 @@ Bar bar = foo.getBar(); &lt;--- imagine there's an extra space(s) after the
                  <para>save your work</para>
                  <screen>$ git add file1 file2 </screen>
                  <screen>$ git commit -am 'saved after HBASE_XXXX-1.patch'</screen>
                  <para>now you have your own branch, that is different from remote
                    master branch</para>
                </listitem>
                <listitem>
                  <para>make more changes...</para>


@ -1,13 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter
  version="5.0"
  xml:id="external_apis"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:svg="http://www.w3.org/2000/svg"
  xmlns:m="http://www.w3.org/1998/Math/MathML"
  xmlns:html="http://www.w3.org/1999/xhtml"
  xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file


@ -1,5 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter
  version="5.0"
  xml:id="getting_started"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
@ -31,46 +33,53 @@
  <section>
    <title>Introduction</title>
    <para><xref
        linkend="quickstart" /> will get you up and running on a single-node, standalone
      instance of HBase. </para>
  </section>
  <section
    xml:id="quickstart">
    <title>Quick Start</title>
    <para>This guide describes setup of a standalone HBase instance. It will run against the
      local filesystem. In later sections we will take you through how to run HBase on Apache
      Hadoop's HDFS, a distributed filesystem. This section shows you how to create a table in
      HBase, insert rows into your new HBase table via the HBase <command>shell</command>, and
      then clean up and shut down your standalone, local filesystem-based HBase instance. The
      below exercise should take no more than ten minutes (not including download time). </para>
    <note
      xml:id="local.fs.durability">
      <title>Local Filesystem and Durability</title>
      <para>Using HBase with a LocalFileSystem does not currently guarantee durability. The HDFS
        local filesystem implementation will lose edits if files are not properly closed -- which
        is very likely to happen when experimenting with a new download. You need to run HBase on
        HDFS to ensure all writes are preserved. Running against the local filesystem, though,
        will get you off the ground quickly and get you familiar with how the general system
        works, so let's run with it for now. See <link
          xlink:href="https://issues.apache.org/jira/browse/HBASE-3696" /> and its associated
        issues for more details.</para>
    </note>
    <note
      xml:id="loopback.ip.getting.started">
      <title>Loopback IP</title>
      <para><emphasis>The below advice is for hbase-0.94.x and older versions only. We believe
          this is fixed in hbase-0.96.0 and beyond (let us know if we have it wrong).</emphasis>
        There should be no need of the below modification to <filename>/etc/hosts</filename> in
        later versions of HBase.</para>
      <para>HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other
        distributions, for example, will default to 127.0.1.1 and this will cause problems for you <footnote>
          <para>See <link
              xlink:href="http://blog.devving.com/why-does-hbase-care-about-etchosts/">Why does
              HBase care about /etc/hosts?</link> for detail.</para>
        </footnote>. </para>
      <para><filename>/etc/hosts</filename> should look something like this:</para>
      <screen>
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
      </screen>
</note> </note>
@ -78,158 +87,155 @@
      <title>Download and unpack the latest stable release.</title>
      <para>Choose a download site from this list of <link
          xlink:href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache Download Mirrors</link>.
        Click on the suggested top link. This will take you to a mirror of <emphasis>HBase
          Releases</emphasis>. Click on the folder named <filename>stable</filename> and then
        download the file that ends in <filename>.tar.gz</filename> to your local filesystem; e.g.
        <filename>hbase-0.94.2.tar.gz</filename>.</para>
      <para>Decompress and untar your download and then change into the unpacked directory.</para>
      <screen><![CDATA[$ tar xfz hbase-<?eval ${project.version}?>.tar.gz
$ cd hbase-<?eval ${project.version}?>]]>
      </screen>
      <para>At this point, you are ready to start HBase. But before starting it, edit
        <filename>conf/hbase-site.xml</filename>, the file you write your site-specific
        configurations into. Set <varname>hbase.rootdir</varname>, the directory HBase writes data
        to, and <varname>hbase.zookeeper.property.dataDir</varname>, the directory ZooKeeper
        writes its data to:</para>
      <programlisting><![CDATA[<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///DIRECTORY/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/DIRECTORY/zookeeper</value>
  </property>
</configuration>]]></programlisting>
      <para> Replace <varname>DIRECTORY</varname> in the above with the path to the directory you
        would have HBase and ZooKeeper write their data. By default,
        <varname>hbase.rootdir</varname> is set to <filename>/tmp/hbase-${user.name}</filename>
        and similarly so for the default ZooKeeper data location, which means you'll lose all your
        data whenever your server reboots unless you change it (most operating systems clear
        <filename>/tmp</filename> on restart).</para>
    </section>
<section xml:id="start_hbase"> <section
xml:id="start_hbase">
<title>Start HBase</title> <title>Start HBase</title>
<para>Now start HBase:<programlisting>$ ./bin/start-hbase.sh <para>Now start HBase:</para>
starting Master, logging to logs/hbase-user-master-example.org.out</programlisting></para> <screen>$ ./bin/start-hbase.sh
starting Master, logging to logs/hbase-user-master-example.org.out</screen>
<para>You should now have a running standalone HBase instance. In <para>You should now have a running standalone HBase instance. In standalone mode, HBase runs
standalone mode, HBase runs all daemons in the the one JVM; i.e. both all daemons in the the one JVM; i.e. both the HBase and ZooKeeper daemons. HBase logs can be
the HBase and ZooKeeper daemons. HBase logs can be found in the found in the <filename>logs</filename> subdirectory. Check them out especially if it seems
<filename>logs</filename> subdirectory. Check them out especially if HBase had trouble starting.</para>
it seems HBase had trouble starting.</para>
<note> <note>
<title>Is <application>java</application> installed?</title> <title>Is <application>java</application> installed?</title>
<para>All of the above presumes a 1.6 version of Oracle <para>All of the above presumes a 1.6 version of Oracle <application>java</application> is
<application>java</application> is installed on your machine and installed on your machine and available on your path (See <xref
available on your path (See <xref linkend="java" />); i.e. when you type linkend="java" />); i.e. when you type <application>java</application>, you see output
<application>java</application>, you see output that describes the that describes the options the java program takes (HBase requires java 6). If this is not
options the java program takes (HBase requires java 6). If this is not the case, HBase will not start. Install java, edit <filename>conf/hbase-env.sh</filename>,
the case, HBase will not start. Install java, edit uncommenting the <envar>JAVA_HOME</envar> line pointing it to your java install, then,
<filename>conf/hbase-env.sh</filename>, uncommenting the
<envar>JAVA_HOME</envar> line pointing it to your java install, then,
retry the steps above.</para> retry the steps above.</para>
</note> </note>
</section> </section>
<section xml:id="shell_exercises"> <section
xml:id="shell_exercises">
<title>Shell Exercises</title> <title>Shell Exercises</title>
<para>Connect to your running HBase via the <command>shell</command>.</para> <para>Connect to your running HBase via the <command>shell</command>.</para>
<para><programlisting>$ ./bin/hbase shell <screen><![CDATA[$ ./bin/hbase shell
HBase Shell; enter 'help&lt;RETURN&gt;' for list of supported commands. HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit&lt;RETURN&gt;" to leave the HBase Shell Type "exit<RETURN>" to leave the HBase Shell
Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010 Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010
hbase(main):001:0&gt; </programlisting></para> hbase(main):001:0>]]> </screen>
<para>Type <command>help</command> and then <para>Type <command>help</command> and then <command>&lt;RETURN&gt;</command> to see a listing
<command>&lt;RETURN&gt;</command> to see a listing of shell commands and of shell commands and options. Browse at least the paragraphs at the end of the help
options. Browse at least the paragraphs at the end of the help emission emission for the gist of how variables and command arguments are entered into the HBase
for the gist of how variables and command arguments are entered into the shell; in particular note how table names, rows, and columns, etc., must be quoted.</para>
HBase shell; in particular note how table names, rows, and columns,
etc., must be quoted.</para>
<para>Create a table named <varname>test</varname> with a single column family named <varname>cf</varname>. <para>Create a table named <varname>test</varname> with a single column family named
Verify its creation by listing all tables and then insert some <varname>cf</varname>. Verify its creation by listing all tables and then insert some
values.</para> values.</para>
<para><programlisting>hbase(main):003:0&gt; create 'test', 'cf' <screen><![CDATA[hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds 0 row(s) in 1.2200 seconds
hbase(main):003:0&gt; list 'test' hbase(main):003:0> list 'test'
.. ..
1 row(s) in 0.0550 seconds 1 row(s) in 0.0550 seconds
hbase(main):004:0&gt; put 'test', 'row1', 'cf:a', 'value1' hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds 0 row(s) in 0.0560 seconds
hbase(main):005:0&gt; put 'test', 'row2', 'cf:b', 'value2' hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds 0 row(s) in 0.0370 seconds
hbase(main):006:0&gt; put 'test', 'row3', 'cf:c', 'value3' hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds</programlisting></para> 0 row(s) in 0.0450 seconds]]></screen>
<para>Above we inserted 3 values, one at a time. The first insert is at <para>Above we inserted 3 values, one at a time. The first insert is at
<varname>row1</varname>, column <varname>cf:a</varname> with a value of <varname>row1</varname>, column <varname>cf:a</varname> with a value of
<varname>value1</varname>. Columns in HBase are comprised of a column family prefix -- <varname>value1</varname>. Columns in HBase are comprised of a column family prefix --
<varname>cf</varname> in this example -- followed by a colon and then a <varname>cf</varname> in this example -- followed by a colon and then a column qualifier
column qualifier suffix (<varname>a</varname> in this case).</para> suffix (<varname>a</varname> in this case).</para>
<para>Verify the data insert by running a scan of the table as follows</para> <para>Verify the data insert by running a scan of the table as follows</para>
<para><programlisting>hbase(main):007:0&gt; scan 'test' <screen><![CDATA[hbase(main):007:0> scan 'test'
ROW COLUMN+CELL ROW COLUMN+CELL
row1 column=cf:a, timestamp=1288380727188, value=value1 row1 column=cf:a, timestamp=1288380727188, value=value1
row2 column=cf:b, timestamp=1288380738440, value=value2 row2 column=cf:b, timestamp=1288380738440, value=value2
row3 column=cf:c, timestamp=1288380747365, value=value3 row3 column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds</programlisting></para> 3 row(s) in 0.0590 seconds]]></screen>
<para>Get a single row</para> <para>Get a single row</para>
<para><programlisting>hbase(main):008:0&gt; get 'test', 'row1' <screen><![CDATA[hbase(main):008:0> get 'test', 'row1'
COLUMN CELL COLUMN CELL
cf:a timestamp=1288380727188, value=value1 cf:a timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds</programlisting></para> 1 row(s) in 0.0400 seconds]]></screen>
<para>Now, disable and drop your table. This will clean up all done <para>Now, disable and drop your table. This will clean up all done above.</para>
above.</para>
<para><programlisting>hbase(main):012:0&gt; disable 'test' <screen>h<![CDATA[base(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds 0 row(s) in 1.0930 seconds
hbase(main):013:0&gt; drop 'test' hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds </programlisting></para> 0 row(s) in 0.0770 seconds ]]></screen>
<para>Exit the shell by typing exit.</para> <para>Exit the shell by typing exit.</para>
<para><programlisting>hbase(main):014:0&gt; exit</programlisting></para> <programlisting><![CDATA[hbase(main):014:0> exit]]></programlisting>
</section> </section>
<section xml:id="stopping"> <section
xml:id="stopping">
<title>Stopping HBase</title> <title>Stopping HBase</title>
<para>Stop your hbase instance by running the stop script.</para> <para>Stop your hbase instance by running the stop script.</para>
<para><programlisting>$ ./bin/stop-hbase.sh <screen>$ ./bin/stop-hbase.sh
stopping hbase...............</programlisting></para> stopping hbase...............</screen>
</section> </section>
<section> <section>
<title>Where to go next</title> <title>Where to go next</title>
<para>The above described standalone setup is good for testing and <para>The above described standalone setup is good for testing and experiments only. In the
experiments only. In the next chapter, <xref linkend="configuration" />, next chapter, <xref
we'll go into depth on the different HBase run modes, system requirements linkend="configuration" />, we'll go into depth on the different HBase run modes, system
running HBase, and critical configurations setting up a distributed HBase deploy.</para> requirements running HBase, and critical configurations setting up a distributed HBase
deploy.</para>
</section> </section>
</section> </section>

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -1,12 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<preface
  version="5.0"
  xml:id="preface"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:svg="http://www.w3.org/2000/svg"
  xmlns:m="http://www.w3.org/1998/Math/MathML"
  xmlns:html="http://www.w3.org/1999/xhtml"
  xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
@ -25,47 +28,42 @@
 * limitations under the License.
 */
-->
  <title>Preface</title>
  <para>This is the official reference guide for the <link
      xlink:href="http://hbase.apache.org/">HBase</link> version it ships with. Herein you
    will find either the definitive documentation on an HBase topic as of its standing when the
    referenced HBase version shipped, or it will point to the location in <link
      xlink:href="http://hbase.apache.org/apidocs/index.html">javadoc</link>, <link
      xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link> or <link
      xlink:href="http://wiki.apache.org/hadoop/Hbase">wiki</link> where the pertinent
    information can be found.</para>
  <para>This reference guide is a work in progress. The source for this guide can be found at
    <filename>src/main/docbkx</filename> in a checkout of the hbase project. This reference
    guide is marked up using <link
      xlink:href="http://www.docbook.com/">DocBook</link>, from which the finished guide is
    generated as part of the 'site' build target. Run <programlisting>mvn site</programlisting>
    to generate this documentation. Amendments and improvements to the documentation are
    welcomed. Add a patch to an issue up in the HBase <link
      xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link>.</para>
  <note
    xml:id="headsup">
    <title>Heads-up if this is your first foray into the world of distributed
      computing...</title>
    <para> If this is your first foray into the wonderful world of Distributed Computing, then
      you are in for some interesting times. First off, distributed systems are hard; making a
      distributed system hum requires a disparate skillset that spans systems (hardware and
      software) and networking. Your cluster's operation can hiccup because of any of a myriad
      set of reasons from bugs in HBase itself through misconfigurations -- misconfiguration
      of HBase but also operating system misconfigurations -- through to hardware problems
      whether it be a bug in your network card drivers or an underprovisioned RAM bus (to
      mention two recent examples of hardware issues that manifested as "HBase is slow"). You
      will also need to do a recalibration if up to this point your computing has been bound to a
      single box. Here is one good starting point: <link
        xlink:href="http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing">Fallacies
        of Distributed Computing</link>. That said, you are welcome. It's a fun place to be.
      Yours, the HBase Community. </para>
  </note>
</preface>


@ -1,13 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<appendix
  xml:id="hbase.rpc"
  version="5.0"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:svg="http://www.w3.org/2000/svg"
  xmlns:m="http://www.w3.org/1998/Math/MathML"
  xmlns:html="http://www.w3.org/1999/xhtml"
  xmlns:db="http://docbook.org/ns/docbook">
<!--/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
@ -26,211 +28,273 @@
 */
-->
  <title>0.95 RPC Specification</title>
  <para>In 0.95, all client/server communication is done with <link
      xlink:href="https://code.google.com/p/protobuf/">protobufed</link> Messages rather than
    with <link
      xlink:href="http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html">Hadoop
      Writables</link>. Our RPC wire format therefore changes. This document describes the
    client/server request/response protocol and our new RPC wire-format.</para>
  <para />
  <para>For what RPC is like in 0.94 and previous, see Benoît/Tsuna's <link
      xlink:href="https://github.com/OpenTSDB/asynchbase/blob/master/src/HBaseRpc.java#L164">Unofficial
      Hadoop / HBase RPC protocol documentation</link>. For more background on how we arrived
    at this spec., see <link
      xlink:href="https://docs.google.com/document/d/1WCKwgaLDqBw2vpux0jPsAu2WPTRISob7HGCO8YhfDTA/edit#">HBase
      RPC: WIP</link></para>
  <para />
  <section>
    <title>Goals</title>
    <para>
      <orderedlist>
        <listitem>
          <para>A wire-format we can evolve</para>
        </listitem>
        <listitem>
          <para>A format that does not require our rewriting server core or radically
            changing its current architecture (for later).</para>
        </listitem>
      </orderedlist>
    </para>
  </section>
  <section>
    <title>TODO</title>
    <para>
      <orderedlist>
        <listitem>
          <para>List of problems with the currently specified format and where we would like
            to go in a version 2, etc. For example, what would we have to change, if
            anything, to move the server async or to support streaming/chunking?</para>
        </listitem>
        <listitem>
          <para>Diagram on how it works</para>
        </listitem>
        <listitem>
          <para>A grammar that succinctly describes the wire-format. Currently we have
            these words and the content of the rpc protobuf idl, but a grammar for the
            back and forth would help with grokking rpc. Also, a little state machine on
            client/server interactions would help with understanding (and ensuring
            correct implementation).</para>
        </listitem>
      </orderedlist>
    </para>
  </section>
  <section>
    <title>RPC</title>
    <para>The client will send setup information on connection establish. Thereafter, the
      client invokes methods against the remote server, sending a protobuf Message and
      receiving a protobuf Message in response. Communication is synchronous. All back and
      forth is preceded by an int that has the total length of the request/response.
      Optionally, Cells (KeyValues) can be passed outside of protobufs in follow-behind Cell
      blocks (because <link
        xlink:href="https://docs.google.com/document/d/1WEtrq-JTIUhlnlnvA0oYRLp0F8MKpEBeBSCFcQiacdw/edit#">we
        can't protobuf megabytes of KeyValues</link> or Cells). These CellBlocks are encoded
      and optionally compressed.</para>
    <para />
    <para>For more detail on the protobufs involved, see the <link
        xlink:href="http://svn.apache.org/viewvc/hbase/trunk/hbase-protocol/src/main/protobuf/RPC.proto?view=markup">RPC.proto</link>
      file in trunk.</para>
    <section>
      <title>Connection Setup</title>
      <para>Client initiates connection.</para>
      <section>
        <title>Client</title>
        <para>On connection setup, client sends a preamble followed by a connection header. </para>
        <section>
          <title>&lt;preamble&gt;</title>
          <para><programlisting>&lt;MAGIC 4 byte integer&gt; &lt;1 byte RPC Format Version&gt; &lt;1 byte auth type&gt;<footnote><para> We need the auth method spec. here so the connection header is encoded if auth enabled.</para></footnote></programlisting></para>
          <para>E.g.: HBas0x000x50 -- 4 bytes of MAGIC -- HBas -- plus one byte of
            version, 0 in this case, and one byte, 0x50 (SIMPLE), of an auth
            type.</para>
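          <para>As an illustration (this is not HBase client code), here is a minimal Java
            sketch of writing the preamble and then the delimited ConnectionHeader protobuf
            described in the next section; the stream plumbing and the already-built
            <varname>connectionHeader</varname> Message are assumptions.</para>
          <programlisting>
import com.google.protobuf.Message;
import java.io.DataOutputStream;
import java.io.OutputStream;

public class ConnectionSetupSketch {
  /** Writes the preamble, then the varint-delimited ConnectionHeader. */
  public static void setup(OutputStream rawOut, Message connectionHeader)
      throws Exception {
    DataOutputStream out = new DataOutputStream(rawOut);
    out.write(new byte[] { 'H', 'B', 'a', 's' }); // 4 bytes of MAGIC
    out.writeByte(0);    // one byte of RPC format version, 0 in this case
    out.writeByte(0x50); // one byte of auth type, 0x50 (SIMPLE)
    connectionHeader.writeDelimitedTo(out); // varint length prefix, per spec
    out.flush();
  }
}</programlisting>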
        </section>
        <section>
          <title>&lt;Protobuf ConnectionHeader Message&gt;</title>
          <para>Has user info, and “protocol”, as well as the encoders and compression the
            client will use sending CellBlocks. CellBlock encoders and compressors are
            for the life of the connection. CellBlock encoders implement
            org.apache.hadoop.hbase.codec.Codec. CellBlocks may then also be compressed.
            Compressors implement org.apache.hadoop.io.compress.CompressionCodec. This
            protobuf is written using writeDelimited so is prefaced by a pb varint with
            its serialized length.</para>
        </section>
      </section>
      <!--Client-->
      <section>
        <title>Server</title>
        <para>After the client sends the preamble and connection header, the server does NOT
          respond if connection setup succeeded. No response means the server is READY to
          accept requests and to give out responses. If the version or authentication in the
          preamble is not agreeable, or the server has trouble parsing the preamble, it
          will throw a org.apache.hadoop.hbase.ipc.FatalConnectionException explaining the
          error and will then disconnect. If the client in the connection header -- i.e.
          the protobuf'd Message that comes after the connection preamble -- asks for a
          Service the server does not support or a codec the server does not have, again we
          throw a FatalConnectionException with explanation.</para>
      </section>
    </section>
    <section>
      <title>Request</title>
      <para>After a Connection has been set up, client makes requests. Server responds.</para>
      <para>A request is made up of a protobuf RequestHeader followed by a protobuf Message
        parameter. The header includes the method name and, optionally, metadata on the
        optional CellBlock that may be following. The parameter type suits the method being
        invoked: i.e. if we are doing a getRegionInfo request, the protobuf Message param
        will be an instance of GetRegionInfoRequest. The response will be a
        GetRegionInfoResponse. The CellBlock is optionally used to ferry the bulk of the RPC
        data: i.e. Cells/KeyValues.</para>
      <para />
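      <para>A minimal sketch, not taken from the HBase client, of framing one request as just
        described; the header and param are assumed to be already-built protobuf Messages,
        and the use of writeDelimited for each piece is an assumption carried over from the
        connection header.</para>
      <programlisting>
import com.google.protobuf.Message;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.OutputStream;

public class RequestFramerSketch {
  /** Writes: total-length int, then delimited header, then delimited param. */
  public static void writeRequest(OutputStream socketOut,
      Message header, Message param) throws Exception {
    ByteArrayOutputStream body = new ByteArrayOutputStream();
    header.writeDelimitedTo(body); // varint length prefix, then header bytes
    param.writeDelimitedTo(body);  // varint length prefix, then param bytes
    DataOutputStream out = new DataOutputStream(socketOut);
    out.writeInt(body.size());     // the int holding the total length
    body.writeTo(out);
    out.flush();
  }
}</programlisting>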
      <section>
        <title>Request Parts</title>
        <section>
          <title>&lt;Total Length&gt;</title>
          <para>The request is prefaced by an int that holds the total length of what
            follows.</para>
        </section>
        <section>
          <title>&lt;Protobuf RequestHeader Message&gt;</title>
          <para>Will have call.id, trace.id, and method name, etc., including optional
            metadata on the Cell block IFF one is following. Data is protobuf'd inline
            in this pb Message or optionally comes in the following CellBlock.</para>
        </section>
        <section>
          <title>&lt;Protobuf Param Message&gt;</title>
          <para>If the method being invoked is getRegionInfo, and you study the Service
            descriptor for the client-to-regionserver protocol, you will find that the
            request sends a GetRegionInfoRequest protobuf Message param in this
            position.</para>
        </section>
        <section>
          <title>&lt;CellBlock&gt;</title>
          <para>An encoded and optionally compressed Cell block.</para>
        </section>
      </section>
      <!--Request parts-->
    </section>
    <!--Request-->
    <section>
      <title>Response</title>
      <para>Same as Request, it is a protobuf ResponseHeader followed by a protobuf Message
        response where the Message response type suits the method invoked. The bulk of the
        data may come in a following CellBlock.</para>
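      <para>To make the framing concrete, here is an illustrative sketch, under the same
        assumptions as the request sketch above, of pulling one whole length-prefixed
        response frame off the wire before parsing it.</para>
      <programlisting>
import java.io.DataInputStream;
import java.io.InputStream;

public class ResponseReaderSketch {
  /** Reads one framed response: a total-length int, then that many bytes. */
  public static byte[] readFrame(InputStream in) throws Exception {
    DataInputStream din = new DataInputStream(in);
    int totalLength = din.readInt(); // total length of what follows
    byte[] frame = new byte[totalLength];
    din.readFully(frame); // ResponseHeader + response Message (+ CellBlock)
    return frame;         // parse the delimited protobufs from this buffer
  }
}</programlisting>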
      <section>
        <title>Response Parts</title>
        <section>
          <title>&lt;Total Length&gt;</title>
          <para>The response is prefaced by an int that holds the total length of what
            follows.</para>
        </section>
        <section>
          <title>&lt;Protobuf ResponseHeader Message&gt;</title>
          <para>Will have call.id, etc. Will include an exception if processing failed.
            Optionally includes metadata on the CellBlock, IFF there is a CellBlock
            following.</para>
        </section>
        <section>
          <title>&lt;Protobuf Response Message&gt;</title>
          <para>The return value, or may be nothing if there was an exception. If the method
            being invoked is getRegionInfo, and you study the Service descriptor for the
            client-to-regionserver protocol, you will find that the response sends a
            GetRegionInfoResponse protobuf Message param in this position.</para>
        </section>
        <section>
          <title>&lt;CellBlock&gt;</title>
          <para>An encoded and optionally compressed Cell block.</para>
        </section>
      </section>
      <!--Parts-->
    </section>
    <!--Response-->
    <section>
      <title>Exceptions</title>
      <para>There are two distinct types. The first is the failed request, which is
        encapsulated inside the response header of the response. The connection stays open
        to receive new requests. The second type, the FatalConnectionException, kills the
        connection.</para>
      <para>Exceptions can carry extra information. See the ExceptionResponse protobuf type.
        It has a flag to indicate do-not-retry as well as other miscellaneous payload to help
        improve client responsiveness.</para>
    </section>
    <section>
      <title>CellBlocks</title>
      <para>These are not versioned. The server can do the codec or it cannot. If there is a
        new version of a codec with, say, tighter encoding, give it a new class name. Codecs
        will live on the server for all time so old clients can connect.</para>
    </section>
  </section>
  <section>
    <title>Notes</title>
    <section>
      <title>Constraints</title>
      <para>In some part, the current wire-format -- i.e. all requests and responses preceded
        by a length -- has been dictated by the current server non-async architecture.</para>
    </section>
    <section>
      <title>One fat pb request or header+param</title>
      <para>We went with pb header followed by pb param making a request, and a pb header
        followed by pb response for now. Doing header+param rather than a single protobuf
        Message with both header and param content:</para>
      <para>
        <orderedlist>
          <listitem>
            <para>Is closer to what we currently have</para>
          </listitem>
          <listitem>
            <para>Having a single fat pb requires extra copying, putting the already-pb'd
              param into the body of the fat request pb (and the same making the
              result)</para>
          </listitem>
          <listitem>
            <para>We can decide whether to accept the request or not before we read the
              param; for example, the request might be low priority. As is, we read
              header+param in one go as the server is currently implemented, so this is a
              TODO.</para>
          </listitem>
        </orderedlist>
      </para>
      <para>The advantages are minor. If, later, the fat request shows a clear advantage, we
        can roll out a v2.</para>
    </section>
    <section
      xml:id="rpc.configs">
      <title>RPC Configurations</title>
      <section>
        <title>CellBlock Codecs</title>
        <para>To enable a codec other than the default <classname>KeyValueCodec</classname>,
          set <varname>hbase.client.rpc.codec</varname> to the name of the Codec class to
          use. The Codec must implement HBase's <classname>Codec</classname> Interface. After
          connection setup, all passed cellblocks will be sent with this codec. The server
          will return cellblocks using this same codec as long as the codec is on the
          server's CLASSPATH (else you will get
          <classname>UnsupportedCellCodecException</classname>).</para>
        <para>To change the default codec, set
          <varname>hbase.client.default.rpc.codec</varname>. </para>
        <para>To disable cellblocks completely and to go pure protobuf, set the default to
          the empty String and do not specify a codec in your Configuration. So, set
          <varname>hbase.client.default.rpc.codec</varname> to the empty string and do
          not set <varname>hbase.client.rpc.codec</varname>. This will cause the client to
          connect to the server with no codec specified. If a server sees no codec, it
          will return all responses in pure protobuf. Running pure protobuf all the time
          will be slower than running with cellblocks. </para>
      </section>
      <section>
        <title>Compression</title>
        <para>Uses Hadoop's compression codecs. To enable compressing of passed CellBlocks,
          set <varname>hbase.client.rpc.compressor</varname> to the name of the Compressor
          to use. The Compressor must implement Hadoop's CompressionCodec Interface. After
          connection setup, all passed cellblocks will be sent compressed. The server will
          return cellblocks compressed using this same compressor as long as the
          compressor is on its CLASSPATH (else you will get
          <classname>UnsupportedCompressionCodecException</classname>).</para>
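        <para>A hedged example of wiring both of these properties from Java; the codec and
          compressor class names below are illustrative stand-ins for whatever Codec and
          CompressionCodec implementations you actually deploy on client and server.</para>
        <programlisting>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcCodecConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Any class implementing HBase's Codec interface; illustrative value:
    conf.set("hbase.client.rpc.codec",
        "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
    // Any class implementing Hadoop's CompressionCodec; illustrative value:
    conf.set("hbase.client.rpc.compressor",
        "org.apache.hadoop.io.compress.GzipCodec");
    return conf;
  }
}</programlisting>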
      </section>
    </section>
  </section>
</appendix>

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -1,13 +1,15 @@
<?xml version="1.0"?>
<chapter
  xml:id="shell"
  version="5.0"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:svg="http://www.w3.org/2000/svg"
  xmlns:m="http://www.w3.org/1998/Math/MathML"
  xmlns:html="http://www.w3.org/1999/xhtml"
  xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
@ -28,54 +30,46 @@
-->
<title>The Apache HBase Shell</title>
  <para> The Apache HBase Shell is <link
      xlink:href="http://jruby.org">(J)Ruby</link>'s IRB with some HBase-particular commands
    added. Anything you can do in IRB, you should be able to do in the HBase Shell.</para>
  <para>To run the HBase shell, do as follows:</para>
  <programlisting>$ ./bin/hbase shell</programlisting>
  <para>Type <command>help</command> and then <command>&lt;RETURN&gt;</command> to see a listing
    of shell commands and options. Browse at least the paragraphs at the end of the help
    emission for the gist of how variables and command arguments are entered into the HBase
    shell; in particular note how table names, rows, and columns, etc., must be quoted.</para>
  <para>See <xref
      linkend="shell_exercises" /> for example basic shell operation. </para>
  <para>Here is a nicely formatted listing of <link
      xlink:href="http://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/">all shell
      commands</link> by Rajeshbabu Chintaguntla. </para>
  <section
    xml:id="scripting">
    <title>Scripting</title>
    <para>For examples of scripting Apache HBase, look in the HBase <filename>bin</filename>
      directory. Look at the files that end in <filename>*.rb</filename>. To run one of these
      files, do as follows:</para>
    <programlisting>$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT</programlisting>
  </section>
<section xml:id="shell_tricks"><title>Shell Tricks</title> <section
<section xml:id="table_variables"><title>Table variables</title> xml:id="shell_tricks">
<title>Shell Tricks</title>
<section
xml:id="table_variables">
<title>Table variables</title>
<para> <para> HBase 0.95 adds shell commands that provide a jruby-style object-oriented
HBase 0.95 adds shell commands that provide a jruby-style references for tables. Previously all of the shell commands that act upon a table
object-oriented references for tables. Previously all of have a procedural style that always took the name of the table as an argument. HBase
the shell commands that act upon a table have a procedural 0.95 introduces the ability to assign a table to a jruby variable. The table
style that always took the name of the table as an reference can be used to perform data read write operations such as puts, scans, and
argument. HBase 0.95 introduces the ability to assign a gets well as admin functionality such as disabling, dropping, describing tables. </para>
table to a jruby variable. The table reference can be used
to perform data read write operations such as puts, scans,
and gets well as admin functionality such as disabling,
dropping, describing tables.
</para>
<para> <para> For example, previously you would always specify a table name:</para>
For example, previously you would always specify a table name: <screen>
<programlisting>
hbase(main):000:0> create t, f hbase(main):000:0> create t, f
0 row(s) in 1.0970 seconds 0 row(s) in 1.0970 seconds
hbase(main):001:0> put 't', 'rold', 'f', 'v' hbase(main):001:0> put 't', 'rold', 'f', 'v'
@ -101,12 +95,11 @@ hbase(main):005:0> drop 't'
0 row(s) in 23.1670 seconds 0 row(s) in 23.1670 seconds
hbase(main):006:0> hbase(main):006:0>
</programlisting> </screen>
</para>
<para> <para> Now you can assign the table to a variable and use the results in jruby shell
Now you can assign the table to a variable and use the results in jruby shell code. code.</para>
<programlisting> <screen>
hbase(main):007 > t = create 't', 'f' hbase(main):007 > t = create 't', 'f'
0 row(s) in 1.0970 seconds 0 row(s) in 1.0970 seconds
@ -128,13 +121,11 @@ hbase(main):038:0> t.disable
0 row(s) in 6.2350 seconds 0 row(s) in 6.2350 seconds
hbase(main):039:0> t.drop hbase(main):039:0> t.drop
0 row(s) in 0.2340 seconds 0 row(s) in 0.2340 seconds
</programlisting> </screen>
</para>
<para> <para> If the table has already been created, you can assign a Table to a variable by
If the table has already been created, you can assign a using the get_table method:</para>
Table to a variable by using the get_table method: <screen>
<programlisting>
hbase(main):011 > create 't','f' hbase(main):011 > create 't','f'
0 row(s) in 1.2500 seconds 0 row(s) in 1.2500 seconds
@ -150,15 +141,12 @@ ROW COLUMN+CELL
r1 column=f:, timestamp=1378473876949, value=v r1 column=f:, timestamp=1378473876949, value=v
1 row(s) in 0.0240 seconds 1 row(s) in 0.0240 seconds
hbase(main):015:0> hbase(main):015:0>
</programlisting> </screen>
</para>
<para> <para> The list functionality has also been extended so that it returns a list of table
The list functionality has also been extended so that it names as strings. You can then use jruby to script table operations based on these
returns a list of table names as strings. You can then use names. The list_snapshots command also acts similarly.</para>
jruby to script table operations based on these names. The <screen>
list_snapshots command also acts similarly.
<programlisting>
hbase(main):016 > tables = list(t.*) hbase(main):016 > tables = list(t.*)
TABLE TABLE
t t
@ -170,66 +158,66 @@ hbase(main):017:0> tables.map { |t| disable t ; drop t}
=> [nil] => [nil]
hbase(main):018:0> hbase(main):018:0>
</programlisting> </screen>
</para>
</section> </section>
    <section>
      <title><filename>irbrc</filename></title>
      <para>Create an <filename>.irbrc</filename> file for yourself in your home directory.
        Add customizations. A useful one is command history, so commands are saved across
        Shell invocations:</para>
      <screen>
$ more .irbrc
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 100
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"</screen>
      <para>See the <application>ruby</application> documentation of
        <filename>.irbrc</filename> to learn about other possible configurations. </para>
    </section>
    <section>
      <title>LOG data to timestamp</title>
      <para> To convert the date '08/08/16 20:56:29' from an hbase log into a timestamp,
        do:</para>
      <screen>
hbase(main):021:0> import java.text.SimpleDateFormat
hbase(main):022:0> import java.text.ParsePosition
hbase(main):023:0> SimpleDateFormat.new("yy/MM/dd HH:mm:ss").parse("08/08/16 20:56:29", ParsePosition.new(0)).getTime() => 1218920189000</screen>
      <para> To go the other direction:</para>
      <screen>
hbase(main):021:0> import java.util.Date
hbase(main):022:0> Date.new(1218920189000).toString() => "Sat Aug 16 20:56:29 UTC 2008"</screen>
      <para> Outputting in a format that is exactly like that of the HBase log format will
        take a little messing with <link
          xlink:href="http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</link>.
      </para>
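      <para>As a starting point, here is a hedged sketch; the "yy/MM/dd HH:mm:ss" pattern
        matches the parse pattern used above, but your log4j layout may differ, so treat the
        pattern as an assumption to adjust.</para>
      <programlisting>
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class LogDateSketch {
  public static void main(String[] args) {
    SimpleDateFormat fmt = new SimpleDateFormat("yy/MM/dd HH:mm:ss");
    fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // match the UTC example above
    System.out.println(fmt.format(new Date(1218920189000L))); // 08/08/16 20:56:29
  }
}</programlisting>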
    </section>
    <section>
      <title>Debug</title>
      <section>
        <title>Shell debug switch</title>
        <para>You can set a debug switch in the shell to see more output -- e.g. more of the
          stack trace on exception -- when you run a command:</para>
        <programlisting>hbase> debug &lt;RETURN&gt;</programlisting>
      </section>
      <section>
        <title>DEBUG log level</title>
        <para>To enable DEBUG level logging in the shell, launch it with the
          <command>-d</command> option.</para>
        <programlisting>$ ./bin/hbase shell -d</programlisting>
      </section>
    </section>
    <section>
      <title>Commands</title>
      <section>
        <title>count</title>
        <para>The count command returns the number of rows in a table. It's quite fast when
          configured with the right CACHE:
          <programlisting>hbase> count '&lt;tablename&gt;', CACHE => 1000</programlisting>
          The above count fetches 1000 rows at a time. Set CACHE lower if your rows are
          big. The default is to fetch one row at a time. </para>
      </section>
    </section>
  </section>
</chapter>


@ -1,12 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?> <?xml version="1.0" encoding="UTF-8"?>
<appendix xml:id="tracing" <appendix
version="5.0" xmlns="http://docbook.org/ns/docbook" xml:id="tracing"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns="http://docbook.org/ns/docbook"
xmlns:svg="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:m="http://www.w3.org/1998/Math/MathML" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:html="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg"
xmlns:db="http://docbook.org/ns/docbook"> xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--/** <!--/**
* Licensed to the Apache Software Foundation (ASF) under one * Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file * or more contributor license agreements. See the NOTICE file
@ -28,193 +30,158 @@
<title>Enabling Dapper-like Tracing in HBase</title> <title>Enabling Dapper-like Tracing in HBase</title>
<para>
  <link
    xlink:href="https://issues.apache.org/jira/browse/HBASE-6449">HBASE-6449</link> added support
  for tracing requests through HBase, using the open source tracing library, <link
    xlink:href="http://github.com/cloudera/htrace">HTrace</link>. Setting up tracing is quite
  simple; however, it currently requires some very minor changes to your client code (it would not
  be very difficult to remove this requirement). </para>
<section
  xml:id="tracing.spanreceivers">
  <title>SpanReceivers</title>
  <para> The tracing system works by collecting information in structs called 'Spans'. It is up to
    you to choose how you want to receive this information by implementing the
    <classname>SpanReceiver</classname> interface, which defines one method: </para>
  <programlisting><![CDATA[
public void receiveSpan(Span span);
]]></programlisting>
  <para>This method serves as a callback whenever a span is completed. HTrace allows you to use as
    many SpanReceivers as you want so you can easily send trace information to multiple
    destinations. </para>
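  <para>For instance, here is a hedged sketch of a receiver that just prints spans; the
    exact method set of <classname>SpanReceiver</classname> varies a little across HTrace
    versions (some also require a configure method), so treat this as illustrative rather
    than a drop-in class.</para>
  <programlisting><![CDATA[
import java.io.IOException;
import org.htrace.Span;
import org.htrace.SpanReceiver;

public class StdoutSpanReceiver implements SpanReceiver {
  public void receiveSpan(Span span) {
    // Called once per completed span; a real receiver should not block here.
    System.out.println(span.getDescription() + " took "
        + (span.getStopTimeMillis() - span.getStartTimeMillis()) + " ms");
  }
  public void close() throws IOException {
    // Nothing to clean up for stdout.
  }
}
]]></programlisting>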
  <para> Configure what SpanReceivers you'd like to use by putting a comma-separated list of the
    fully-qualified class names of classes implementing <classname>SpanReceiver</classname> in the
    <filename>hbase-site.xml</filename> property:
    <varname>hbase.trace.spanreceiver.classes</varname>. </para>
  <para> HTrace includes a <classname>LocalFileSpanReceiver</classname> that writes all span
    information to local files in a JSON-based format. The
    <classname>LocalFileSpanReceiver</classname> looks in <filename>hbase-site.xml</filename>
    for a <varname>hbase.local-file-span-receiver.path</varname> property with a value describing
    the name of the file to which nodes should write their span information. </para>
  <programlisting><![CDATA[
<property>
  <name>hbase.trace.spanreceiver.classes</name>
  <value>org.htrace.impl.LocalFileSpanReceiver</value>
</property>
<property>
  <name>hbase.local-file-span-receiver.path</name>
  <value>/var/log/hbase/htrace.out</value>
</property>
]]></programlisting>
  <para> HTrace also provides <classname>ZipkinSpanReceiver</classname>, which converts spans to <link
      xlink:href="http://github.com/twitter/zipkin">Zipkin</link> span format and sends them to a
    Zipkin server. In order to use this span receiver, you need to install the jar of
    htrace-zipkin to your HBase's classpath on all of the nodes in your cluster. </para>
  <para>
    <filename>htrace-zipkin</filename> is published to the maven central repository. You could get
    the latest version from there or just build it locally and then copy it out to all nodes,
    change your config to use the zipkin receiver, distribute the new configuration and then
    (rolling) restart. </para>
  <para> Here is an example of the manual setup procedure. </para>
  <screen><![CDATA[
$ git clone https://github.com/cloudera/htrace
$ cd htrace/htrace-zipkin
$ mvn compile assembly:single
$ cp target/htrace-zipkin-*-jar-with-dependencies.jar $HBASE_HOME/lib/
  # copy jar to all nodes...
]]></screen>
  <para>The <classname>ZipkinSpanReceiver</classname> looks in <filename>hbase-site.xml</filename>
    for a <varname>hbase.zipkin.collector-hostname</varname> and
    <varname>hbase.zipkin.collector-port</varname> property with a value describing the Zipkin
    collector server to which span information is sent. </para>
  <programlisting><![CDATA[
<property>
  <name>hbase.trace.spanreceiver.classes</name>
  <value>org.htrace.impl.ZipkinSpanReceiver</value>
</property>
<property>
  <name>hbase.zipkin.collector-hostname</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zipkin.collector-port</name>
  <value>9410</value>
</property>
]]></programlisting>
  <para> If you do not want to use the included span receivers, you are encouraged to write your
    own receiver (take a look at <classname>LocalFileSpanReceiver</classname> for an example). If
    you think others would benefit from your receiver, file a JIRA or send a pull request to <link
      xlink:href="http://github.com/cloudera/htrace">HTrace</link>. </para>
</section>
<section xml:id="tracing.client.modifications"> <section
xml:id="tracing.client.modifications">
<title>Client Modifications</title> <title>Client Modifications</title>
<para> <para> In order to turn on tracing in your client code, you must initialize the module sending
In order to turn on tracing in your client code, spans to receiver once per client process. </para>
you must initialize the module sending spans to receiver <programlisting><![CDATA[
once per client process. private SpanReceiverHost spanReceiverHost;
<programlisting><![CDATA[
private SpanReceiverHost spanReceiverHost;
... ...
Configuration conf = HBaseConfiguration.create(); Configuration conf = HBaseConfiguration.create();
SpanReceiverHost spanReceiverHost = SpanReceiverHost.getInstance(conf); SpanReceiverHost spanReceiverHost = SpanReceiverHost.getInstance(conf);
]]></programlisting> ]]></programlisting>
Then you simply start tracing span before requests you think are interesting, <para>Then you simply start tracing span before requests you think are interesting, and close it
and close it when the request is done. when the request is done. For example, if you wanted to trace all of your get operations, you
For example, if you wanted to trace all of your get operations, change this: </para>
you change this: <programlisting><![CDATA[
<programlisting><![CDATA[ HTable table = new HTable(conf, "t1");
Get get = new Get(Bytes.toBytes("r1"));
Result res = table.get(get);
]]></programlisting>
<para>into: </para>
<programlisting><![CDATA[
TraceScope ts = Trace.startSpan("Gets", Sampler.ALWAYS);
try {
HTable table = new HTable(conf, "t1"); HTable table = new HTable(conf, "t1");
Get get = new Get(Bytes.toBytes("r1")); Get get = new Get(Bytes.toBytes("r1"));
Result res = table.get(get); Result res = table.get(get);
} finally {
ts.close();
}
]]></programlisting> ]]></programlisting>
into: <para>If you wanted to trace half of your 'get' operations, you would pass in: </para>
<programlisting><![CDATA[ <programlisting><![CDATA[
TraceScope ts = Trace.startSpan("Gets", Sampler.ALWAYS); new ProbabilitySampler(0.5)
try {
HTable table = new HTable(conf, "t1");
Get get = new Get(Bytes.toBytes("r1"));
Result res = table.get(get);
} finally {
ts.close();
}
]]></programlisting> ]]></programlisting>
If you wanted to trace half of your 'get' operations, you would pass in: <para>in lieu of <varname>Sampler.ALWAYS</varname> to <classname>Trace.startSpan()</classname>.
<programlisting><![CDATA[ See the HTrace <filename>README</filename> for more information on Samplers. </para>
new ProbabilitySampler(0.5)
]]></programlisting>
in lieu of <varname>Sampler.ALWAYS</varname>
to <classname>Trace.startSpan()</classname>.
See the HTrace <filename>README</filename> for more information on Samplers.
</para>
</section> </section>
<section xml:id="tracing.client.shell"> <section
xml:id="tracing.client.shell">
<title>Tracing from HBase Shell</title> <title>Tracing from HBase Shell</title>
<para> <para> You can use <command>trace</command> command for tracing requests from HBase Shell.
You can use <command>trace</command> command <command>trace 'start'</command> command turns on tracing and <command>trace
for tracing requests from HBase Shell. 'stop'</command> command turns off tracing. </para>
<command>trace 'start'</command> command turns on tracing and <programlisting><![CDATA[
<command>trace 'stop'</command> command turns off tracing. hbase(main):001:0> trace 'start'
<programlisting><![CDATA[ hbase(main):002:0> put 'test', 'row1', 'f:', 'val1' # traced commands
hbase(main):001:0> trace 'start' hbase(main):003:0> trace 'stop'
hbase(main):002:0> put 'test', 'row1', 'f:', 'val1' # traced commands
hbase(main):003:0> trace 'stop'
]]></programlisting> ]]></programlisting>
</para>
<para> <para>
<command>trace 'start'</command> and <command>trace 'start'</command> and <command>trace 'stop'</command> always returns boolean
<command>trace 'stop'</command> always value representing if or not there is ongoing tracing. As a result, <command>trace
returns boolean value representing 'stop'</command> returns false on suceess. <command>trace 'status'</command> just returns if
if or not there is ongoing tracing. or not tracing is turned on. </para>
As a result, <command>trace 'stop'</command> <programlisting><![CDATA[
returns false on suceess. hbase(main):001:0> trace 'start'
<command>trace 'status'</command> => true
just returns if or not tracing is turned on.
<programlisting><![CDATA[
hbase(main):001:0> trace 'start'
=> true
hbase(main):002:0> trace 'status' hbase(main):002:0> trace 'status'
=> true => true
hbase(main):003:0> trace 'stop' hbase(main):003:0> trace 'stop'
=> false => false
hbase(main):004:0> trace 'status' hbase(main):004:0> trace 'status'
=> false => false
]]></programlisting> ]]></programlisting>
</para>
</section> </section>
</appendix> </appendix>

File diff suppressed because it is too large

@ -1,13 +1,15 @@
<?xml version="1.0"?>
<chapter
  xml:id="upgrading"
  version="5.0"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:svg="http://www.w3.org/2000/svg"
  xmlns:m="http://www.w3.org/1998/Math/MathML"
  xmlns:html="http://www.w3.org/1999/xhtml"
  xmlns:db="http://docbook.org/ns/docbook">
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
@ -29,206 +31,221 @@
<title>Upgrading</title>
<para>You cannot skip major versions when upgrading. If you are upgrading from version 0.90.x to
  0.94.x, you must first go from 0.90.x to 0.92.x and then go from 0.92.x to 0.94.x.</para>
<note>
  <para>It may be possible to skip across versions -- for example, go from 0.92.2 straight to
    0.98.0 just following the 0.96.x upgrade instructions -- but we have not tried it so we
    cannot say whether it works or not.</para>
</note>
<para> Review <xref
    linkend="configuration" />, in particular the section on Hadoop version. </para>
<section
  xml:id="hbase.versioning">
  <title>HBase version numbers</title>
  <para>HBase has not walked a straight line where version numbers are concerned. Since we
    came up out of hadoop itself, we originally tracked hadoop versioning. Later we left
    hadoop versioning behind because we were moving at a different rate to that of our
    parent. If you are into the arcane, check out our old wiki page on <link
      xlink:href="http://wiki.apache.org/hadoop/Hbase/HBaseVersions">HBase
      Versioning</link>, which tries to connect the HBase version dots.</para>
  <section
    xml:id="hbase.development.series">
    <title>Odd/Even Versioning or "Development" Series Releases</title>
    <para>Ahead of big releases, we have been putting up preview versions to start the
      feedback cycle turning over earlier. These "Development" Series releases, always
      odd-numbered, come with no guarantees, not even as regards being able to upgrade
      between two sequential releases (we reserve the right to break compatibility across
      "Development" Series releases). Needless to say, these releases are not for
      production deploys. They are a preview of what is coming in the hope that interested
      parties will take the release for a test drive and flag us early if there are
      issues we've missed ahead of our rolling a production-worthy release. </para>
    <para>Our first "Development" Series was the 0.89 set that came out ahead of HBase
      0.90.0. HBase 0.95 is another "Development" Series that portends HBase 0.96.0. </para>
  </section>
  <section
    xml:id="hbase.binary.compatibility">
    <title>Binary Compatibility</title>
    <para>When we say two HBase versions are compatible, we mean that the versions are wire
      and binary compatible. Compatible HBase versions means that clients can talk to
      compatible but differently versioned servers. It means too that you can just swap
      out the jars of one version and replace them with the jars of another, compatible
      version and all will just work. Unless otherwise specified, HBase point versions are
      binary compatible. You can safely do rolling upgrades between binary compatible
      versions; i.e. across point versions: e.g. from 0.94.5 to 0.94.6<footnote>
        <para>See the <link
            xlink:href="http://search-hadoop.com/m/bOOvwHGW981/Does+compatibility+between+versions+also+mean+binary+compatibility%253F&amp;subj=Re+Does+compatibility+between+versions+also+mean+binary+compatibility+">Does
            compatibility between versions also mean binary compatibility?</link>
          discussion on the hbase dev mailing list. </para>
      </footnote>. </para>
  </section>
  <section
    xml:id="hbase.rolling.restart">
    <title>Rolling Upgrade between versions/Binary compatibility</title>
    <para>Unless otherwise specified, HBase point versions are binary compatible. You can do
      a rolling upgrade between hbase point versions; for example, you can go to 0.94.6
      from 0.94.5 by doing a rolling upgrade across the cluster, replacing the 0.94.5
      binary with a 0.94.6 binary. </para>
  </section>
</section>
<section
  xml:id="upgrade0.98">
  <title>Upgrading from 0.96.x to 0.98.x</title>
  <para>A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary
    compatible.</para>
  <para>Additional steps are required to take advantage of some of the new features of 0.98.x,
    including cell visibility labels, cell ACLs, and transparent server side encryption. See
    the <xref
      linkend="security" /> chapter of this guide for more information. Significant
    performance improvements include a change to the write ahead log threading model that
    provides higher transaction throughput under high load, reverse scanners, MapReduce over
    snapshot files, and striped compaction.</para>
  <para>Clients and servers can run with 0.98.x and 0.96.x versions. However, applications may
    need to be recompiled due to changes in the Java API.</para>
</section>
<section>
  <title>Upgrading from 0.94.x to 0.98.x</title>
  <para> A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade path
    follows the same procedures as <xref
      linkend="upgrade0.96" />. Additional steps are required to use some of the new
    features of 0.98.x. See <xref
      linkend="upgrade0.98" /> for an abbreviated list of these features. </para>
</section>
<section xml:id="upgrade0.96"> <section
<title>Upgrading from 0.94.x to 0.96.x</title> xml:id="upgrade0.96">
<subtitle>The Singularity</subtitle> <title>Upgrading from 0.94.x to 0.96.x</title>
<para>You will have to stop your old 0.94.x cluster completely to upgrade. If you are replicating <subtitle>The Singularity</subtitle>
between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown. <para>You will have to stop your old 0.94.x cluster completely to upgrade. If you are
The less WAL files around, the faster the upgrade will run (the upgrade will split any log files it replicating between clusters, both clusters will have to go down to upgrade. Make sure
finds in the filesystem as part of the upgrade process). All clients must be upgraded to 0.96 too. it is a clean shutdown. The less WAL files around, the faster the upgrade will run (the
</para> upgrade will split any log files it finds in the filesystem as part of the upgrade
<para>The API has changed. You will need to recompile your code against 0.96 and you may need to process). All clients must be upgraded to 0.96 too. </para>
adjust applications to go against new APIs (TODO: List of changes). <para>The API has changed. You will need to recompile your code against 0.96 and you may
</para> need to adjust applications to go against new APIs (TODO: List of changes). </para>
  <section>
    <title>Executing the 0.96 Upgrade</title>
    <note>
      <para>HDFS and ZooKeeper should be up and running during the upgrade process.</para>
    </note>
    <para>hbase-0.96.0 comes with an upgrade script. Run <programlisting>$ bin/hbase upgrade</programlisting> to see its usage. The script has two main modes: -check and -execute.</para>
    <section>
      <title>check</title>
      <para>The <emphasis>check</emphasis> step is run against a running 0.94 cluster. Run it from a downloaded 0.96.x binary. The <emphasis>check</emphasis> step looks for the presence of <filename>HFileV1</filename> files. These are unsupported in hbase-0.96.0. To purge them -- have them rewritten as HFileV2 -- you must run a compaction.</para>
      <para>The <emphasis>check</emphasis> step prints stats at the end of its run (grep for “Result:” in the log), printing the absolute path of the tables it scanned, any HFileV1 files found, the regions containing said files (the regions we need to major compact to purge the HFileV1s), and any corrupted files, if any are found. A corrupt file is unreadable, and so is undefined (neither HFileV1 nor HFileV2).</para>
      <para>To run the check step, run <command>$ bin/hbase upgrade -check</command>. Here is sample output:</para>
      <screen>
Tables Processed:
hdfs://localhost:41020/myHBase/.META.
hdfs://localhost:41020/myHBase/usertable
hdfs://localhost:41020/myHBase/TestTable
hdfs://localhost:41020/myHBase/t

Count of HFileV1: 2
HFileV1:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512

Count of corrupted files: 1
Corrupted Files:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
Count of Regions with HFileV1: 2
Regions to Major Compact:
hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af

There are some HFileV1, or corrupt files (files with incorrect major version)
      </screen>
      <para>In the above sample output, there are two HFileV1 files in two regions, and one corrupt file. Corrupt files should probably be removed. The regions that have HFileV1s need to be major compacted. To major compact, start up the hbase shell and review how to compact an individual region. After the major compaction is done, rerun the check step and the HFileV1s should be gone, replaced by HFileV2 instances.</para>
      <para>By default, the check step scans the hbase root directory (defined as hbase.rootdir in the configuration). To scan a specific directory only, pass the <emphasis>-dir</emphasis> option.</para>
      <screen>$ bin/hbase upgrade -check -dir /myHBase/testTable</screen>
      <para>The above command would detect HFileV1s in the /myHBase/testTable directory.</para>
      <para>Once the check step reports all the HFileV1 files have been rewritten, it is safe to proceed with the upgrade.</para>
    </section>
    <section>
      <title>execute</title>
      <para>After the check step shows the cluster is free of HFileV1, it is safe to proceed with the upgrade. Next is the <emphasis>execute</emphasis> step. You must <emphasis>SHUTDOWN YOUR 0.94.x CLUSTER</emphasis> before you can run the <emphasis>execute</emphasis> step. The execute step will not run if it detects running HBase masters or regionservers. <note>
          <para>HDFS and ZooKeeper should be up and running during the upgrade process. If zookeeper is managed by HBase, you can start zookeeper so it is available to the upgrade by running <command>$ ./hbase/bin/hbase-daemon.sh start zookeeper</command></para>
        </note>
      </para>
      <para>The <emphasis>execute</emphasis> upgrade step is made of three substeps.</para>
      <itemizedlist>
        <listitem>
          <para>Namespaces: HBase 0.96.0 has support for namespaces. The upgrade needs to reorder directories in the filesystem for namespaces to work.</para>
        </listitem>
        <listitem>
          <para>ZNodes: All znodes are purged so that new ones can be written in their place using a new protobuf'ed format, and a few are migrated in place: e.g. replication and table state znodes.</para>
        </listitem>
        <listitem>
          <para>WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll split WAL logs as part of migration before we start up on 0.96.0. This WAL splitting runs slower than the native distributed WAL splitting because it is all inside the single upgrade process (so try and get a clean shutdown of the 0.94.0 cluster if you can).</para>
        </listitem>
      </itemizedlist>
      <para>To run the <emphasis>execute</emphasis> step, make sure that first you have copied hbase-0.96.0 binaries everywhere under servers and under clients. Make sure the 0.94.0 cluster is down. Then do as follows:</para>
      <screen>$ bin/hbase upgrade -execute</screen>
      <para>Here is some sample output.</para>
      <programlisting>
Starting Namespace upgrade
Created version file at hdfs://localhost:41020/myHBase with version=7
Migrating table testTable to hdfs://localhost:41020/myHBase/.data/default/testTable
…..
Created version file at hdfs://localhost:41020/myHBase with version=8
Successfully completed NameSpace upgrade.
Starting Znode upgrade
….
Successfully completed Znode upgrade

Starting Log splitting

Successfully completed Log splitting
      </programlisting>
      <para>If the output from the execute step looks good, stop the zookeeper instance you started to do the upgrade:
        <programlisting>$ ./hbase/bin/hbase-daemon.sh stop zookeeper</programlisting>
        Now start up hbase-0.96.0.</para>
    </section>
<section xml:id="s096.migration.troubleshooting"><title>Troubleshooting</title> <title>Troubleshooting</title>
<section xml:id="s096.migration.troubleshooting.old.client"><title>Old Client connecting to 0.96 cluster</title> <section
<para>It will fail with an exception like the below. Upgrade. xml:id="s096.migration.troubleshooting.old.client">
<programlisting>17:22:15 Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF <title>Old Client connecting to 0.96 cluster</title>
<para>It will fail with an exception like the below. Upgrade.</para>
<screen>17:22:15 Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF
17:22:15 * 17:22:15 *
17:22:15 api-compat-8.ent.cloudera.com <20><> <20><><EFBFBD>( 17:22:15 api-compat-8.ent.cloudera.com <20><> <20><><EFBFBD>(
17:22:15 at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60) 17:22:15 at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
@ -239,192 +256,232 @@
17:22:15 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703) 17:22:15 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
17:22:15 at org.apache.hadoop.hbase.client.HBaseAdmin.&amp;init>(HBaseAdmin.java:126) 17:22:15 at org.apache.hadoop.hbase.client.HBaseAdmin.&amp;init>(HBaseAdmin.java:126)
17:22:15 at Client_4_3_0.setup(Client_4_3_0.java:716) 17:22:15 at Client_4_3_0.setup(Client_4_3_0.java:716)
17:22:15 at Client_4_3_0.main(Client_4_3_0.java:63)</programlisting> 17:22:15 at Client_4_3_0.main(Client_4_3_0.java:63)</screen>
</para> </section>
</section> </section>
</section> </section>
</section>
</section> </section>
<section xml:id="upgrade0.94"> <section
<title>Upgrading from 0.92.x to 0.94.x</title> xml:id="upgrade0.94">
<para>We used to think that 0.92 and 0.94 were interface compatible and that you can do a <title>Upgrading from 0.92.x to 0.94.x</title>
rolling upgrade between these versions but then we figured that <para>We used to think that 0.92 and 0.94 were interface compatible and that you can do a
<link xlink:href="https://issues.apache.org/jira/browse/HBASE-5357">HBASE-5357 Use builder pattern in HColumnDescriptor</link> rolling upgrade between these versions but then we figured that <link
changed method signatures so rather than return void they instead return HColumnDescriptor. This xlink:href="https://issues.apache.org/jira/browse/HBASE-5357">HBASE-5357 Use builder
will throw <programlisting>java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V</programlisting> pattern in HColumnDescriptor</link> changed method signatures so rather than return
.... so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them. void they instead return HColumnDescriptor. This will throw</para>
</para> <screen>java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V</screen>
</section> <para>.... so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.</para> </section>
<section xml:id="upgrade0.92"> <section
<title>Upgrading from 0.90.x to 0.92.x</title> xml:id="upgrade0.92">
<subtitle>Upgrade Guide</subtitle> <title>Upgrading from 0.90.x to 0.92.x</title>
<para>You will find that 0.92.0 runs a little differently to 0.90.x releases. Here are a few things to watch out for upgrading from 0.90.x to 0.92.0. <subtitle>Upgrade Guide</subtitle>
<note><title>tl;dr</title> <para>You will find that 0.92.0 runs a little differently to 0.90.x releases. Here are a few
<para> things to watch out for upgrading from 0.90.x to 0.92.0. </para>
If you've not patience, here are the important things to know upgrading. <note>
<orderedlist> <title>tl;dr</title>
<listitem><para>Once you upgrade, you cant go back.</para> <para> If you've not patience, here are the important things to know upgrading. <orderedlist>
</listitem> <listitem>
<listitem><para> <para>Once you upgrade, you cant go back.</para>
MSLAB is on by default. Watch that heap usage if you have a lot of regions.</para> </listitem>
</listitem> <listitem>
<listitem><para> <para> MSLAB is on by default. Watch that heap usage if you have a lot of
Distributed splitting is on by defaul. It should make region server failover faster. regions.</para>
</para></listitem> </listitem>
<listitem><para> <listitem>
Theres a separate tarball for security. <para> Distributed splitting is on by defaul. It should make region server
</para></listitem> failover faster. </para>
<listitem><para> </listitem>
If -XX:MaxDirectMemorySize is set in your hbase-env.sh, its going to enable the experimental off-heap cache (You may not want this). <listitem>
</para></listitem> <para> Theres a separate tarball for security. </para>
</orderedlist> </listitem>
</para> <listitem>
</note> <para> If -XX:MaxDirectMemorySize is set in your hbase-env.sh, its going to
</para> enable the experimental off-heap cache (You may not want this). </para>
</listitem>
<section> </orderedlist>
    <title>You can't go back!</title>
    <para>To move to 0.92.0, all you need to do is shut down your cluster, replace your HBase 0.90.x with HBase 0.92.0 binaries (be sure you clear out all 0.90.x instances) and restart (you cannot do a rolling restart from 0.90.x to 0.92.x -- you must restart). On startup, the <varname>.META.</varname> table content is rewritten, removing the table schema from the <varname>info:regioninfo</varname> column. Also, any flushes done post first startup will write out data in the new 0.92.0 file format, <link xlink:href="http://hbase.apache.org/book.html#hfilev2">HFile V2</link>. This means you cannot go back to 0.90.x once you've started HBase 0.92.0 over your HBase data directory.</para>
  </section>
  <section>
    <title>MSLAB is ON by default</title>
    <para>In 0.92.0, the <link xlink:href="http://hbase.apache.org/book.html#hbase.hregion.memstore.mslab.enabled">hbase.hregion.memstore.mslab.enabled</link> flag is set to true (see <xref linkend="mslab" />). In 0.90.x it was <constant>false</constant>. When it is enabled, memstores will step-allocate memory in MSLAB 2MB chunks even if the memstore has zero or just a few small elements. This is fine usually, but if you had lots of regions per regionserver in a 0.90.x cluster (and MSLAB was off), you may find yourself OOME'ing on upgrade because the <code>thousands of regions * number of column families * 2MB MSLAB (at a minimum)</code> puts your heap over the top. Set <varname>hbase.hregion.memstore.mslab.enabled</varname> to <constant>false</constant> or set the MSLAB size down from 2MB by setting <varname>hbase.hregion.memstore.mslab.chunksize</varname> to something less.</para>
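    <para>For example, a sketch of the relevant <filename>hbase-site.xml</filename> settings; the 1MB chunk size below is illustrative only, pick one that suits your region count:</para>
    <programlisting><![CDATA[
<!-- Either turn MSLAB off entirely... -->
<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>false</value>
</property>
<!-- ...or keep it on but shrink the chunk size from the 2MB default -->
<property>
  <name>hbase.hregion.memstore.mslab.chunksize</name>
  <value>1048576</value>
</property>]]></programlisting>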
</section>
  <section>
    <title>Distributed splitting is on by default</title>
    <para>Previously, WAL logs on crash were split by the Master alone. In 0.92.0, log splitting is done by the cluster (see “HBASE-1364 [performance] Distributed splitting of regionserver commit logs”). This should cut down significantly on the amount of time it takes to split logs and get regions back online again.</para>
  </section>
  <section>
    <title>Memory accounting is different now</title>
    <para>In 0.92.0, <xref linkend="hfilev2" /> indices and bloom filters take up residence in the same LRU used for caching blocks that come from the filesystem. In 0.90.x, the HFile v1 indices lived outside of the LRU, so they took up space even if the index was on a cold file, one that wasn't being actively used. With the indices now in the LRU, you may find you have less space for block caching. Adjust your block cache accordingly. See <xref linkend="block.cache" /> for more detail. The block cache default size has been changed in 0.92.0 from 0.2 (20 percent of heap) to 0.25.</para>
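    <para>A sketch of adjusting the block cache fraction in <filename>hbase-site.xml</filename>; the value shown is the new 0.92.0 default (25 percent of heap), raise or lower it to suit your read workload:</para>
    <programlisting><![CDATA[
<property>
  <name>hfile.block.cache.size</name>
  <value>0.25</value>
</property>]]></programlisting>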
</section>
  <section>
    <title>On the Hadoop version to use</title>
    <para>Run 0.92.0 on Hadoop 1.0.x (or CDH3u3 when it ships). The performance benefits are worth making the move. Otherwise, our Hadoop prescription is as it has been; you need a Hadoop that supports a working sync. See <xref linkend="hadoop" />.</para>
    <para>If running on Hadoop 1.0.x (or CDH3u3), enable local reads. See the <link xlink:href="http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf">Practical Caching</link> presentation for ruminations on the performance benefits of going local (and for how to enable local reads).</para>
  </section>
  <section>
    <title>HBase 0.92.0 ships with ZooKeeper 3.4.2</title>
    <para>If you can, upgrade your ZooKeeper. If you can't, 3.4.2 clients should work against 3.3.X ensembles (HBase makes use of the 3.4.2 API).</para>
  </section>
  <section>
    <title>Online alter is off by default</title>
    <para>In 0.92.0, we've added an experimental online schema alter facility (see <xref linkend="hbase.online.schema.update.enable" />). It's off by default. Enable it at your own risk. Online alter and splitting tables do not play well together, so be sure your cluster is quiescent while using this feature (for now).</para>
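    <para>A sketch of turning the facility on in <filename>hbase-site.xml</filename> (again, at your own risk):</para>
    <programlisting><![CDATA[
<property>
  <name>hbase.online.schema.update.enable</name>
  <value>true</value>
</property>]]></programlisting>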
</section>
  <section>
    <title>WebUI</title>
    <para>The web UI has had a few additions made in 0.92.0. It now shows a list of the regions currently transitioning, recent compactions/flushes, and a process list of running processes (usually empty if all is well and requests are being handled promptly). Other additions include requests by region, a debugging servlet dump, etc.</para>
  </section>
  <section>
    <title>Security tarball</title>
    <para>We now ship with two tarballs: secure and insecure HBase. Documentation on how to set up a secure HBase is on the way.</para>
  </section>
<section xml:id="slabcache"><title>Experimental off-heap cache: SlabCache</title>
<para>
A new cache was contributed to 0.92.0 to act as a solution between using the “on-heap” cache which is the current LRU cache the region servers have and the operating system cache which is out of our control.
To enable <emphasis>SlabCache</emphasis>, as this feature is being called, set “-XX:MaxDirectMemorySize” in hbase-env.sh to the value for maximum direct memory size and specify
<property>hbase.offheapcache.percentage</property> in <filename>hbase-site.xml</filename> with the percentage that you want to dedicate to off-heap cache. This should only be set for servers and not for clients. Use at your own risk.
See this blog post, <link xlink:href="http://www.cloudera.com/blog/2012/01/caching-in-hbase-slabcache/">Caching in Apache HBase: SlabCache</link>, for additional information on this new experimental feature.
</para>
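    <para>Putting those two settings together, a sketch with illustrative sizes only; in <filename>hbase-env.sh</filename> (scoped to regionservers here, since the setting should apply to servers only):</para>
    <screen>export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=5g"</screen>
    <para>and in <filename>hbase-site.xml</filename>:</para>
    <programlisting><![CDATA[
<property>
  <name>hbase.offheapcache.percentage</name>
  <value>0.8</value>
</property>]]></programlisting>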
    <para>This feature has mostly been eclipsed in later HBases. See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-7404">HBASE-7404 Bucket Cache: A solution about CMS, Heap Fragment and Big Cache on HBASE</link>, etc.</para>
</section>
  <section>
    <title>Changes in HBase replication</title>
    <para>0.92.0 adds two new features: multi-slave and multi-master replication. The way to enable this is the same as adding a new peer, so in order to have multi-master you would just run add_peer for each cluster that acts as a master to the other slave clusters. Collisions are handled at the timestamp level, which may or may not be what you want; this needs to be evaluated on a per-use-case basis. Replication is still experimental in 0.92 and is disabled by default; run it at your own risk.</para>
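    <para>For example, a sketch of adding a peer from the shell; the peer id and cluster key (the peer cluster's ZooKeeper quorum, client port, and znode parent) are placeholders for your own:</para>
    <screen>hbase> add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"</screen>
    <para>For multi-master, you would run a similar <command>add_peer</command> on each cluster that should push its edits to the others.</para>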
</section>
  <section>
    <title>RegionServer now aborts if OOME</title>
    <para>On an OOME, we now have the JVM kill -9 the regionserver process so it goes down fast. Previously, a RegionServer might stick around after incurring an OOME, limping along in some wounded state. To disable this facility (we recommend you leave it in place), you'd need to edit the bin/hbase file. Look for the addition of the -XX:OnOutOfMemoryError="kill -9 %p" arguments (see [HBASE-4769] - Abort RegionServer Immediately on OOME).</para>
  </section>
  <section>
    <title>HFile V2 and the “Bigger, Fewer” Tendency</title>
    <para>0.92.0 stores data in a new format, <xref linkend="hfilev2" />. As HBase runs, it will move all your data from HFile v1 to HFile v2 format. This auto-migration will run in the background as flushes and compactions run. HFile V2 allows HBase to run with larger regions/files. In fact, we encourage all HBasers going forward to tend toward Facebook axiom #1: run with larger, fewer regions. If you have lots of regions now -- more than hundreds per host -- you should look into setting your region size up after you move to 0.92.0 (in 0.92.0, the default size is now 1G, up from 256M), and then running the online merge tool (see “HBASE-1621 merge tool should work on online cluster, but disabled table”).</para>
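    <para>A sketch of raising the maximum region size in <filename>hbase-site.xml</filename>; the 10G value below is illustrative:</para>
    <programlisting><![CDATA[
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>]]></programlisting>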
</section>
</section>
<section xml:id="upgrade0.90">
<title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
<para>This version of 0.90.x HBase can be started on data written by
HBase 0.20.x or HBase 0.89.x. There is no need of a migration step.
HBase 0.89.x and 0.90.x does write out the name of region directories
differently -- it names them with a md5 hash of the region name rather
than a jenkins hash -- so this means that once started, there is no
going back to HBase 0.20.x.
</para>
<para>
Be sure to remove the <filename>hbase-default.xml</filename> from
your <filename>conf</filename>
directory on upgrade. A 0.20.x version of this file will have
sub-optimal configurations for 0.90.x HBase. The
<filename>hbase-default.xml</filename> file is now bundled into the
HBase jar and read from there. If you would like to review
the content of this file, see it in the src tree at
<filename>src/main/resources/hbase-default.xml</filename> or
see <xref linkend="hbase_default_configurations" />.
</para>
<para>
Finally, if upgrading from 0.20.x, check your
<varname>.META.</varname> schema in the shell. In the past we would
recommend that users run with a 16kb
<varname>MEMSTORE_FLUSHSIZE</varname>.
Run <code>hbase> scan '-ROOT-'</code> in the shell. This will output
the current <varname>.META.</varname> schema. Check
<varname>MEMSTORE_FLUSHSIZE</varname> size. Is it 16kb (16384)? If so, you will
need to change this (The 'normal'/default value is 64MB (67108864)).
Run the script <filename>bin/set_meta_memstore_size.rb</filename>.
This will make the necessary edit to your <varname>.META.</varname> schema.
Failure to run this change will make for a slow cluster <footnote>
<para>
See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3499">HBASE-3499 Users upgrading to 0.90.0 need to have their .META. table updated with the right MEMSTORE_SIZE</link>
</para> </para>
</footnote> </note>
.
</para> <section>
</section> <title>You cant go back! </title>
</chapter> <para>To move to 0.92.0, all you need to do is shutdown your cluster, replace your hbase
View File

@ -1,13 +1,15 @@
<?xml version="1.0"?> <?xml version="1.0"?>
<chapter xml:id="zookeeper" <chapter
version="5.0" xmlns="http://docbook.org/ns/docbook" xml:id="zookeeper"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns="http://docbook.org/ns/docbook"
xmlns:svg="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:m="http://www.w3.org/1998/Math/MathML" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:html="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg"
xmlns:db="http://docbook.org/ns/docbook"> xmlns:m="http://www.w3.org/1998/Math/MathML"
<!-- xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<!--
/** /**
* Licensed to the Apache Software Foundation (ASF) under one * Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file * or more contributor license agreements. See the NOTICE file
@ -27,474 +29,394 @@
*/ */
--> -->
  <title>ZooKeeper<indexterm>
      <primary>ZooKeeper</primary>
    </indexterm></title>
  <para>A distributed Apache HBase installation depends on a running ZooKeeper cluster. All participating nodes and clients need to be able to access the running ZooKeeper ensemble. Apache HBase by default manages a ZooKeeper "cluster" for you. It will start and stop the ZooKeeper ensemble as part of the HBase start/stop process. You can also manage the ZooKeeper ensemble independent of HBase and just point HBase at the cluster it should use. To toggle HBase management of ZooKeeper, use the <varname>HBASE_MANAGES_ZK</varname> variable in <filename>conf/hbase-env.sh</filename>. This variable, which defaults to <varname>true</varname>, tells HBase whether to start/stop the ZooKeeper ensemble servers as part of HBase start/stop.</para>
  <para>When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration using its native <filename>zoo.cfg</filename> file, or, the easier option is to just specify ZooKeeper options directly in <filename>conf/hbase-site.xml</filename>. A ZooKeeper configuration option can be set as a property in the HBase <filename>hbase-site.xml</filename> XML configuration file by prefacing the ZooKeeper option name with <varname>hbase.zookeeper.property</varname>. For example, the <varname>clientPort</varname> setting in ZooKeeper can be changed by setting the <varname>hbase.zookeeper.property.clientPort</varname> property. For all default values used by HBase, including ZooKeeper configuration, see <xref linkend="hbase_default_configurations" />. Look for the <varname>hbase.zookeeper.property</varname> prefix.<footnote>
      <para>For the full list of ZooKeeper configurations, see ZooKeeper's <filename>zoo.cfg</filename>. HBase does not ship with a <filename>zoo.cfg</filename> so you will need to browse the <filename>conf</filename> directory in an appropriate ZooKeeper download.</para>
    </footnote></para>
  <para>You must at least list the ensemble servers in <filename>hbase-site.xml</filename> using the <varname>hbase.zookeeper.quorum</varname> property. This property defaults to a single ensemble member at <varname>localhost</varname>, which is not suitable for a fully distributed HBase (it binds to the local machine only, and remote clients will not be able to connect).</para>
  <note xml:id="how_many_zks">
    <title>How many ZooKeepers should I run?</title>
    <para>You can run a ZooKeeper ensemble that comprises 1 node only, but in production it is recommended that you run a ZooKeeper ensemble of 3, 5 or 7 machines; the more members an ensemble has, the more tolerant the ensemble is of host failures. Also, run an odd number of machines. In ZooKeeper, an even number of peers is supported, but it is normally not used because an even-sized ensemble requires, proportionally, more peers to form a quorum than an odd-sized ensemble requires. For example, an ensemble with 4 peers requires 3 to form a quorum, while an ensemble with 5 also requires 3 to form a quorum. Thus, an ensemble of 5 allows 2 peers to fail, and is more fault tolerant than the ensemble of 4, which allows only 1 down peer.</para>
    <para>Give each ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk (a dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper servers on separate machines from RegionServers (DataNodes and TaskTrackers).</para>
  </note>
  <para>For example, to have HBase manage a ZooKeeper quorum on nodes <emphasis>rs{1,2,3,4,5}.example.com</emphasis>, bound to port 2222 (the default is 2181), ensure <varname>HBASE_MANAGES_ZK</varname> is commented out or set to <varname>true</varname> in <filename>conf/hbase-env.sh</filename> and then edit <filename>conf/hbase-site.xml</filename> and set <varname>hbase.zookeeper.property.clientPort</varname> and <varname>hbase.zookeeper.quorum</varname>. You should also set <varname>hbase.zookeeper.property.dataDir</varname> to other than the default, as the default has ZooKeeper persist data under <filename>/tmp</filename>, which is often cleared on system restart. In the example below we have ZooKeeper persist to <filename>/usr/local/zookeeper</filename>.</para>
  <programlisting><![CDATA[
<configuration>
  ...
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2222</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The port at which the clients will connect.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
    By default this is set to localhost for local and pseudo-distributed modes
    of operation. For a fully-distributed setup, this should be set to a full
    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
    this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  ...
</configuration>]]></programlisting>
<caution xml:id="zk.version"> <caution
<title>What verion of ZooKeeper should I use?</title> xml:id="zk.version">
<para>The newer version, the better. For example, some folks have been bitten by <title>What verion of ZooKeeper should I use?</title>
<link xlink:href="https://issues.apache.org/jira/browse/ZOOKEEPER-1277">ZOOKEEPER-1277</link>. <para>The newer version, the better. For example, some folks have been bitten by <link
If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by xlink:href="https://issues.apache.org/jira/browse/ZOOKEEPER-1277">ZOOKEEPER-1277</link>. If
enabling <xref linkend="hbase.zookeeper.useMulti"/>" in your <filename>hbase-site.xml</filename>. running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <xref
</para> linkend="hbase.zookeeper.useMulti" />" in your <filename>hbase-site.xml</filename>. </para>
</caution> </caution>
  <caution>
    <title>ZooKeeper Maintenance</title>
    <para>Be sure to set up the data dir cleaner described under <link xlink:href="http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance">Zookeeper Maintenance</link>, else you could have 'interesting' problems a couple of months in; i.e. ZooKeeper could start dropping sessions if it has to run through a directory of hundreds of thousands of logs, which it is wont to do around leader reelection time -- a process rare but run on occasion, whether because a machine is dropped or happens to hiccup.</para>
  </caution>
  <section>
    <title>Using existing ZooKeeper ensemble</title>
    <para>To point HBase at an existing ZooKeeper cluster, one that is not managed by HBase, set <varname>HBASE_MANAGES_ZK</varname> in <filename>conf/hbase-env.sh</filename> to false</para>
    <screen>
  ...
  # Tell HBase whether it should manage its own instance of Zookeeper or not.
  export HBASE_MANAGES_ZK=false</screen>
    <para>Next set ensemble locations and client port, if non-standard, in <filename>hbase-site.xml</filename>, or add a suitably configured <filename>zoo.cfg</filename> to HBase's <filename>CLASSPATH</filename>. HBase will prefer the configuration found in <filename>zoo.cfg</filename> over any settings in <filename>hbase-site.xml</filename>.</para>
    <para>When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the regular start/stop scripts. If you would like to run ZooKeeper yourself, independent of HBase start/stop, you would do the following</para>
    <screen>
${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
</screen>
    <para>Note that you can use HBase in this manner to spin up a ZooKeeper cluster, unrelated to HBase. Just make sure to set <varname>HBASE_MANAGES_ZK</varname> to <varname>false</varname> if you want it to stay up across HBase restarts so that when HBase shuts down, it doesn't take ZooKeeper down with it.</para>
    <para>For more information about running a distinct ZooKeeper cluster, see the ZooKeeper <link xlink:href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html">Getting Started Guide</link>. Additionally, see the <link xlink:href="http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7">ZooKeeper Wiki</link> or the <link xlink:href="http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup">ZooKeeper documentation</link> for more information on ZooKeeper sizing.</para>
  </section>
<section xml:id="zk.sasl.auth"> <section
<title>SASL Authentication with ZooKeeper</title> xml:id="zk.sasl.auth">
<para>Newer releases of Apache HBase (&gt;= 0.92) will <title>SASL Authentication with ZooKeeper</title>
support connecting to a ZooKeeper Quorum that supports <para>Newer releases of Apache HBase (&gt;= 0.92) will support connecting to a ZooKeeper Quorum
SASL authentication (which is available in Zookeeper that supports SASL authentication (which is available in Zookeeper versions 3.4.0 or
versions 3.4.0 or later).</para> later).</para>
<para>This describes how to set up HBase to mutually <para>This describes how to set up HBase to mutually authenticate with a ZooKeeper Quorum.
authenticate with a ZooKeeper Quorum. ZooKeeper/HBase ZooKeeper/HBase mutual authentication (<link
mutual authentication (<link xlink:href="https://issues.apache.org/jira/browse/HBASE-2418">HBASE-2418</link>) is required
xlink:href="https://issues.apache.org/jira/browse/HBASE-2418">HBASE-2418</link>) as part of a complete secure HBase configuration (<link
is required as part of a complete secure HBase configuration xlink:href="https://issues.apache.org/jira/browse/HBASE-3025">HBASE-3025</link>). For
(<link simplicity of explication, this section ignores additional configuration required (Secure HDFS
xlink:href="https://issues.apache.org/jira/browse/HBASE-3025">HBASE-3025</link>). and Coprocessor configuration). It's recommended to begin with an HBase-managed Zookeeper
configuration (as opposed to a standalone Zookeeper quorum) for ease of learning. </para>
For simplicity of explication, this section ignores <section>
additional configuration required (Secure HDFS and Coprocessor <title>Operating System Prerequisites</title>
configuration). It's recommended to begin with an
HBase-managed Zookeeper configuration (as opposed to a
standalone Zookeeper quorum) for ease of learning.
</para>
    <section>
      <title>Operating System Prerequisites</title>

      <para> You need to have a working Kerberos KDC setup. For each <code>$HOST</code> that will
        run a ZooKeeper server, you should have a principal <code>zookeeper/$HOST</code>. For each
        such host, add a service key (using the <code>kadmin</code> or <code>kadmin.local</code>
        tool's <code>ktadd</code> command) for <code>zookeeper/$HOST</code>, copy the resulting
        keytab file to <code>$HOST</code>, and make it readable only to the user that will run
        ZooKeeper on <code>$HOST</code>. Note the location of this file, which we will use below as
        <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename>. </para>
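      <para>As a sketch of what this might look like, the following <code>kadmin</code> session
        creates the principal and keytab for one ZooKeeper host; the hostname, realm, keytab path,
        and service user are placeholders, not values prescribed by this guide:</para>
      <screen>
kadmin: addprinc -randkey zookeeper/host1.example.com@EXAMPLE.COM
kadmin: ktadd -k /etc/zookeeper/zookeeper.keytab zookeeper/host1.example.com@EXAMPLE.COM
$ chown zookeeper /etc/zookeeper/zookeeper.keytab    # user that will run ZooKeeper
$ chmod 400 /etc/zookeeper/zookeeper.keytab          # readable only to that user
      </screen>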
      <para> Similarly, for each <code>$HOST</code> that will run an HBase server (master or
        regionserver), you should have a principal <code>hbase/$HOST</code>. For each host, add a
        keytab file called <filename>hbase.keytab</filename> containing a service key for
        <code>hbase/$HOST</code>, copy this file to <code>$HOST</code>, and make it readable only
        to the user that will run an HBase service on <code>$HOST</code>. Note the location of
        this file, which we will use below as <filename>$PATH_TO_HBASE_KEYTAB</filename>. </para>
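      <para>The steps mirror the ZooKeeper ones above; a hypothetical session for one HBase host
        (hostname and realm again being placeholders) might be:</para>
      <screen>
kadmin: addprinc -randkey hbase/host1.example.com@EXAMPLE.COM
kadmin: ktadd -k /etc/hbase/hbase.keytab hbase/host1.example.com@EXAMPLE.COM
      </screen>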
      <para> Each user who will be an HBase client should also be given a Kerberos principal. This
        principal should usually have a password assigned to it (as opposed to a keytab file, as
        with the HBase servers) which only this user knows. The client principal's
        <code>maxrenewlife</code> should be set high enough that the ticket can be renewed for as
        long as the user's HBase client processes need to run. For example, if a user runs a
        long-running HBase client process that takes at most 3 days, we might create this user's
        principal within <code>kadmin</code> with <code>addprinc -maxrenewlife 3days</code>. The
        ZooKeeper client and server libraries manage their own ticket refreshment by running
        threads that wake up periodically to do the refreshment. </para>
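      <para>Putting this together, a hypothetical client principal (the name and realm are
        examples only) could be created and logged in as follows:</para>
      <screen>
kadmin: addprinc -maxrenewlife 3days hbaseclient@EXAMPLE.COM
$ kinit hbaseclient@EXAMPLE.COM    # populates the ticket cache read by the JAAS Client section below
      </screen>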
      <para>On each host that will run an HBase client (e.g. <code>hbase shell</code>), add the
        following file to the HBase home directory's <filename>conf</filename> directory:</para>
      <programlisting>
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};
      </programlisting>
      <para>We'll refer to this JAAS configuration file as <filename>$CLIENT_CONF</filename>
        below.</para>
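      <para>Because <code>useTicketCache=true</code> makes the client read the ticket cache rather
        than a keytab, you can check that a usable ticket is present before starting a
        client:</para>
      <screen>
$ klist    # should show a valid, renewable TGT for the client principal
      </screen>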
    </section>

    <section>
      <title>HBase-managed Zookeeper Configuration</title>

      <para>On each node that will run a ZooKeeper, a master, or a regionserver, create a <link
          xlink:href="http://docs.oracle.com/javase/1.4.2/docs/guide/security/jgss/tutorials/LoginConfigFile.html">JAAS</link>
        configuration file in the conf directory of the node's <filename>HBASE_HOME</filename>
        directory that looks like the following:</para>
      <programlisting>
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/$HOST";
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="$PATH_TO_HBASE_KEYTAB"
  principal="hbase/$HOST";
};
      </programlisting>
      <para>where the <filename>$PATH_TO_HBASE_KEYTAB</filename> and
        <filename>$PATH_TO_ZOOKEEPER_KEYTAB</filename> files are what you created above, and
        <code>$HOST</code> is the hostname for that node.</para>
      <para>The <code>Server</code> section will be used by the ZooKeeper quorum server, while the
        <code>Client</code> section will be used by the HBase master and regionservers. The path to
        this file should be substituted for the text <filename>$HBASE_SERVER_CONF</filename> in the
        <filename>hbase-env.sh</filename> listing below; the path to this same file should also be
        substituted for the text <filename>$CLIENT_CONF</filename> in that listing.</para>
      <para>Modify your <filename>hbase-env.sh</filename> to include the following:</para>
      <programlisting>
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=true
export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
      </programlisting>
      <para>where <filename>$HBASE_SERVER_CONF</filename> and <filename>$CLIENT_CONF</filename> are
        the full paths to the JAAS configuration files created above.</para>
      <para>Modify your <filename>hbase-site.xml</filename> on each node that will run a ZooKeeper,
        a master, or a regionserver to contain:</para>
      <programlisting><![CDATA[
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>$ZK_NODES</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.authProvider.1</name>
    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
    <value>true</value>
  </property>
</configuration>
]]></programlisting>
      <para>where <code>$ZK_NODES</code> is the comma-separated list of hostnames of the ZooKeeper
        Quorum hosts.</para>
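      <para>For example, for a hypothetical three-node quorum, <code>$ZK_NODES</code> might be
        <code>node1.example.com,node2.example.com,node3.example.com</code>.</para>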
      <para>Start your HBase cluster by running one or more of the following commands on the
        appropriate hosts: </para>
      <screen>
bin/hbase zookeeper start
bin/hbase master start
bin/hbase regionserver start
      </screen>
    </section>

    <section>
      <title>External Zookeeper Configuration</title>

      <para>Add a JAAS configuration file that looks like:</para>
      <programlisting>
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="$PATH_TO_HBASE_KEYTAB"
  principal="hbase/$HOST";
};
      </programlisting>
      <para>where the <filename>$PATH_TO_HBASE_KEYTAB</filename> is the keytab created above for
        HBase services to run on this host, and <code>$HOST</code> is the hostname for that node.
        Put this in the HBase home's configuration directory. We'll refer to this file's full
        pathname as <filename>$HBASE_SERVER_CONF</filename> below.</para>
      <para>Modify your <filename>hbase-env.sh</filename> to include the following:</para>
      <programlisting>
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=false
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
      </programlisting>
      <para>Modify your <filename>hbase-site.xml</filename> on each node that will run a master or
        regionserver to contain:</para>
      <programlisting><![CDATA[
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>$ZK_NODES</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
]]></programlisting>
      <para>where <code>$ZK_NODES</code> is the comma-separated list of hostnames of the ZooKeeper
        Quorum hosts.</para>
      <para>Add a <filename>zoo.cfg</filename> for each ZooKeeper Quorum host containing:</para>
      <programlisting>
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
      </programlisting>
      <para>Also on each of these hosts, create a JAAS configuration file containing:</para>
      <programlisting>
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/$HOST";
};
      </programlisting>
      <para>where <code>$HOST</code> is the hostname of each Quorum host. We will refer to the full
        pathname of this file as <filename>$ZK_SERVER_CONF</filename> below.</para>
      <para>Start your ZooKeeper servers on each ZooKeeper Quorum host with:</para>
      <screen>
SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer start
      </screen>
      <para>Start your HBase cluster by running one or more of the following commands on the
        appropriate nodes: </para>
      <screen>
bin/hbase master start
bin/hbase regionserver start
      </screen>
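      <para>As an optional sanity check (a sketch, assuming the default
        <code>zookeeper.znode.parent</code> of <code>/hbase</code>), you can connect with the
        ZooKeeper command-line client and confirm that HBase's znodes carry SASL-based ACLs rather
        than world-writable ones; if authentication is working, the ACL output should grant
        permissions only to the SASL-authenticated <code>hbase</code> principal:</para>
      <screen>
$ bin/zkCli.sh -server node1.example.com:2181    # hostname is a placeholder
[zk: ...] getAcl /hbase
      </screen>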
    </section>

    <section>
      <title>ZooKeeper Server Authentication Log Output</title>
      <para>If the configuration above is successful, you should see something similar to the
        following in your ZooKeeper server logs:</para>
      <screen>
11/12/05 22:43:39 INFO zookeeper.Login: successfully logged in.
11/12/05 22:43:39 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh thread started.
...
authorizationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN.
11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Setting authorizedID: hbase
11/12/05 22:43:59 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: hbase
      </screen>
    </section>
    <section>
      <title>ZooKeeper Client Authentication Log Output</title>
      <para>On the ZooKeeper client side (HBase master or regionserver), you should see something
        similar to the following:</para>
      <screen>
11/12/05 22:43:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-166-175-249.us-west-1.compute.internal:2181 sessionTimeout=180000 watcher=master:60000
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.166.175.249:2181
11/12/05 22:43:59 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 14851@ip-10-166-175-249
...
11/12/05 22:43:59 INFO zookeeper.Login: TGT expires: Tue Dec 06 22:43:59 UTC 2011
11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:30:37 UTC 2011
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, sessionid = 0x134106594320000, negotiated timeout = 180000
      </screen>
    </section>
    <section>
      <title>Configuration from Scratch</title>

      <para>This has been tested on the current standard Amazon Linux AMI. First set up the KDC
        and principals as described above. Next, check out the code and run a sanity check.</para>
      <screen>
git clone git://git.apache.org/hbase.git
cd hbase
mvn clean test -Dtest=TestZooKeeperACL
      </screen>
      <para>Then configure HBase as described above. Manually edit
        <filename>target/cached_classpath.txt</filename> (see below), then start the
        daemons:</para>
      <screen>
bin/hbase zookeeper &amp;
bin/hbase master &amp;
bin/hbase regionserver &amp;
      </screen>
    </section>
    <section>
      <title>Future improvements</title>

      <section>
        <title>Fix target/cached_classpath.txt</title>
        <para> You must override the standard hadoop-core jar file in
          <code>target/cached_classpath.txt</code> with the version containing the HADOOP-7070
          fix. You can use the following script to do this:</para>
        <screen>
echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt` | sed 's/ //g' > target/tmp.txt
mv target/tmp.txt target/cached_classpath.txt
        </screen>
      </section>

      <section>
        <title>Set JAAS configuration programmatically</title>
        <para>This would avoid the need for a separate Hadoop jar that fixes <link
            xlink:href="https://issues.apache.org/jira/browse/HADOOP-7070">HADOOP-7070</link>.
        </para>
      </section>

      <section>
        <title>Elimination of <code>kerberos.removeHostFromPrincipal</code> and
          <code>kerberos.removeRealmFromPrincipal</code></title>
        <para />
      </section>
    </section>
  </section>
  <!-- SASL Authentication with ZooKeeper -->
</chapter>