Address outstanding disqus comments on the refguide, fix copyright and some wording

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1562307 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack 2014-01-28 23:09:51 +00:00
parent 747dfde8d9
commit 2313e04223
4 changed files with 19 additions and 12 deletions

View File

@@ -39,14 +39,14 @@
 </inlinemediaobject>
 </link>
 </subtitle>
-<copyright><year>2013</year><holder>Apache Software Foundation.
+<copyright><year>2014</year><holder>Apache Software Foundation.
 All Rights Reserved. Apache Hadoop, Hadoop, MapReduce, HDFS, Zookeeper, HBase, and the HBase project logo are trademarks of the Apache Software Foundation.
 </holder>
 </copyright>
 <abstract>
 <para>This is the official reference guide of
 <link xlink:href="http://www.hbase.org">Apache HBase&#153;</link>,
-a distributed, versioned, column-oriented database built on top of
+a distributed, versioned, big data store built on top of
 <link xlink:href="http://hadoop.apache.org/">Apache Hadoop&#153;</link> and
 <link xlink:href="http://zookeeper.apache.org/">Apache ZooKeeper&#153;</link>.
 </para>
@@ -1846,6 +1846,7 @@ rs.close();
 <listitem>Third replica is written on the same rack as the second, but on a different node chosen randomly
 </listitem>
 <listitem>Subsequent replicas are written on random nodes on the cluster
+<footnote><para>See <emphasis>Replica Placement: The First Baby Steps</emphasis> on this page: <link xlink:href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS Architecture</link></para></footnote>
 </listitem>
 </orderedlist>
 Thus, HBase eventually achieves locality for a region after a flush or a compaction.
@@ -1854,7 +1855,7 @@ rs.close();
 in the region, or the table is compacted and StoreFiles are re-written, they will become "local"
 to the RegionServer.
 </para>
-<para>For more information, see <link xlink:href="http://hadoop.apache.org/common/docs/r0.20.205.0/hdfs_design.html#Replica+Placement%3A+The+First+Baby+Steps">HDFS Design on Replica Placement</link>
+<para>For more information, see <emphasis>Replica Placement: The First Baby Steps</emphasis> on this page: <link xlink:href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS Architecture</link>
 and also Lars George's blog on <link xlink:href="http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html">HBase and HDFS locality</link>.
 </para>
 </section>
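For context, the HDFS replica-placement policy that this file's hunks describe and footnote can be sketched as a toy simulation. This is an illustration only, not HDFS code: the rack and node names are made up, and real HDFS placement also weighs node load and availability.

```python
import random

def place_replicas(writer_node, racks, n_replicas):
    """Toy sketch of the HDFS default placement policy:
    1st replica on the writer's node, 2nd on a node in a different
    rack, 3rd on the 2nd replica's rack (different node), and any
    further replicas on random unused nodes in the cluster."""
    rack_of = {node: rack for rack, nodes in racks.items() for node in nodes}
    placed = [writer_node]
    # Second replica: a random node on a rack other than the writer's.
    other_racks = [r for r in racks if r != rack_of[writer_node]]
    second = random.choice(racks[random.choice(other_racks)])
    placed.append(second)
    # Third replica: same rack as the second, but a different node.
    third = random.choice([n for n in racks[rack_of[second]] if n != second])
    placed.append(third)
    # Subsequent replicas: random nodes not already holding a copy.
    remaining = [n for nodes in racks.values() for n in nodes if n not in placed]
    placed += random.sample(remaining, n_replicas - 3)
    return placed

racks = {"rack1": ["n1", "n2", "n3"], "rack2": ["n4", "n5", "n6"]}
print(place_replicas("n1", racks, 3))
```

This is why the surrounding text says locality is recovered after a flush or compaction: the RegionServer is the writer, so the first replica of every new StoreFile block lands on its own node.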

View File

@@ -41,22 +41,27 @@
 <para>This guide describes setup of a standalone HBase instance. It will
 run against the local filesystem. In later sections we will take you through
-how to run HBase on HDFS, a distributed filesystem. This section
-leads you through creating a table, inserting
-rows via the HBase <command>shell</command>, and then cleaning
-up and shutting down your standalone, local filesystem HBase instance. The below exercise
+how to run HBase on Apache Hadoop's HDFS, a distributed filesystem. This section
+shows you how to create a table in HBase, inserting
+rows into your new HBase table via the HBase <command>shell</command>, and then cleaning
+up and shutting down your standalone, local filesystem-based HBase instance. The below exercise
 should take no more than ten minutes (not including download time).
 </para>
 <note xml:id="local.fs.durability"><title>Local Filesystem and Durability</title>
 <para>Using HBase with a LocalFileSystem does not currently guarantee durability.
+The HDFS local filesystem implementation will lose edits if files are not properly
+closed -- which is very likely to happen when experimenting with a new download.
 You need to run HBase on HDFS to ensure all writes are preserved. Running
 against the local filesystem though will get you off the ground quickly and get you
 familiar with how the general system works so lets run with it for now. See
 <link xlink:href="https://issues.apache.org/jira/browse/HBASE-3696"/> and its associated issues for more details.</para></note>
 <note xml:id="loopback.ip.getting.started">
 <title>Loopback IP</title>
-<para>The below advice is for hbase-0.94.0 (and older) versions; we believe this fixed in hbase-0.96.0 and beyond (let us know if we have it wrong) -- there should be no need of modification to
-<filename>/etc/hosts</filename>.</para>
+<note>
+<para><emphasis>The below advice is for hbase-0.94.x and older versions only. We believe this fixed in hbase-0.96.0 and beyond
+(let us know if we have it wrong).</emphasis> There should be no need of the below modification to <filename>/etc/hosts</filename> in
+later versions of HBase.</para>
+</note>
 <para>HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions,
 for example, will default to 127.0.1.1 and this will cause problems for you
 <footnote><para>See <link xlink:href="http://blog.devving.com/why-does-hbase-care-about-etchosts/">Why does HBase care about /etc/hosts?</link> for detail.</para></footnote>.
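The loopback problem this hunk documents boils down to the machine's hostname being mapped to 127.0.1.1 rather than 127.0.0.1 in <filename>/etc/hosts</filename>. A small sketch of the check (illustrative only; the sample file content below is made up, not read from a real host):

```python
def hostname_loopback_entry(etc_hosts_text, hostname):
    """Return the IP that an /etc/hosts-style file maps the given
    hostname to, or None if the hostname does not appear."""
    for line in etc_hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        if hostname in names:
            return ip
    return None

# The problematic layout Ubuntu and some other distributions default to:
sample = """\
127.0.0.1   localhost
127.0.1.1   myhost.example.com myhost
"""
ip = hostname_loopback_entry(sample, "myhost")
print(ip)                 # -> 127.0.1.1
print(ip == "127.0.0.1")  # -> False: the misconfiguration HBase trips over
```

On affected hosts the fix described in the guide is to change that second entry to 127.0.0.1 (for hbase-0.94.x and older, per the note above).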

View File

@@ -250,7 +250,7 @@
 <varname>NONE</varname> for no bloom filters. If
 <varname>ROW</varname>, the hash of the row will be added to the bloom
 on each insert. If <varname>ROWCOL</varname>, the hash of the row +
-column family + column family qualifier will be added to the bloom on
+column family name + column family qualifier will be added to the bloom on
 each key insert.</para>
 <para>See <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</link> and
 <xref linkend="blooms"/> for more information or this answer up in quora,
@@ -529,7 +529,7 @@ htable.close();</programlisting></para>
 <section xml:id="blooms">
 <title>Bloom Filters</title>
 <para>Enabling Bloom Filters can save your having to go to disk and
-can help improve read latencys.</para>
+can help improve read latencies.</para>
 <para><link xlink:href="http://en.wikipedia.org/wiki/Bloom_filter">Bloom filters</link> were developed over in <link
 xlink:href="https://issues.apache.org/jira/browse/HBASE-1200">HBase-1200
 Add bloomfilters</link>.<footnote>
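The ROW vs ROWCOL distinction corrected in the first hunk above is just a question of what material gets hashed into the filter on each insert. A minimal sketch (a Python set stands in for the real bloom filter, so unlike a bloom filter it has no false positives; the separator and hashing choices are illustrative, not HBase's actual encoding):

```python
import hashlib

def bloom_key(row, cf=None, qualifier=None, mode="ROW"):
    """Build the value hashed into the bloom on insert: just the row
    for ROW mode, or row + column family name + qualifier for ROWCOL."""
    if mode == "ROW":
        material = row
    elif mode == "ROWCOL":
        material = row + ":" + cf + ":" + qualifier
    else:
        raise ValueError(mode)
    return hashlib.md5(material.encode()).hexdigest()

row_bloom, rowcol_bloom = set(), set()
row_bloom.add(bloom_key("row1"))
rowcol_bloom.add(bloom_key("row1", "cf", "q1", mode="ROWCOL"))

# A ROW bloom answers "might row1 be in this StoreFile at all?"...
print(bloom_key("row1") in row_bloom)                                # True
# ...while a ROWCOL bloom can also rule out a specific column of row1:
print(bloom_key("row1", "cf", "q2", mode="ROWCOL") in rowcol_bloom)  # False
```

This is why ROWCOL filters can skip more StoreFiles on column-targeted Gets, at the cost of one bloom entry per cell rather than per row.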

View File

@@ -48,7 +48,7 @@
 xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link>.</para>
 <note xml:id="headsup">
-<title>Heads-up</title>
+<title>Heads-up if this is your first foray into the world of distributed computing...</title>
 <para>
 If this is your first foray into the wonderful world of
 Distributed Computing, then you are in for
@@ -65,6 +65,7 @@
 computing has been bound to a single box. Here is one good
 starting point:
 <link xlink:href="http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing">Fallacies of Distributed Computing</link>.
+That said, you are welcome. Its a fun place to be. Yours, the HBase Community.
 </para>
 </note>
 </preface>