HBASE-11340 Remove references to xcievers in documentation (Dima Spivak)

Michael Stack 2014-06-13 10:12:01 -07:00
parent 632301f525
commit 6764275ff0
3 changed files with 24 additions and 24 deletions

View File

@@ -223,10 +223,10 @@ Link detected: yes
     </section>
   </section>
   <section
-    xml:id="casestudies.xceivers">
-    <title>Case Study #4 (xcievers Config)</title>
-    <para> Case study of configuring <code>xceivers</code>, and diagnosing errors from
-      mis-configurations. <link
+    xml:id="casestudies.max.transfer.threads">
+    <title>Case Study #4 (max.transfer.threads Config)</title>
+    <para> Case study of configuring <code>max.transfer.threads</code> (previously known as
+        <code>xcievers</code>) and diagnosing errors from misconfigurations. <link
         xlink:href="http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html">http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html</link>
     </para>
     <para> See also <xref

View File

@@ -501,14 +501,14 @@ Index: pom.xml
     <section
       xml:id="dfs.datanode.max.transfer.threads">
       <title><varname>dfs.datanode.max.transfer.threads</varname><indexterm>
-          <primary>xcievers</primary>
+          <primary>dfs.datanode.max.transfer.threads</primary>
         </indexterm></title>
-      <para>An Hadoop HDFS datanode has an upper bound on the number of files that it will serve
-        at any one time. The upper bound parameter is called <varname>xcievers</varname> (yes,
-        this is misspelled). Again, before doing any loading, make sure you have configured
-        Hadoop's <filename>conf/hdfs-site.xml</filename> setting the <varname>xceivers</varname>
-        value to at least the following:</para>
+      <para>An HDFS datanode has an upper bound on the number of files that it will serve
+        at any one time. Before doing any loading, make sure you have configured
+        Hadoop's <filename>conf/hdfs-site.xml</filename>, setting the
+          <varname>dfs.datanode.max.transfer.threads</varname> value to at least the following:
+      </para>
       <programlisting><![CDATA[
 <property>
   <name>dfs.datanode.max.transfer.threads</name>
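
Note that the hunk above is cut off before the <value> element, so the recommended number is not visible in this view. For reference, a minimal hdfs-site.xml entry might look like the sketch below; the 4096 shown is an illustrative value commonly suggested for HBase clusters, not one taken from this diff.

    <!-- conf/hdfs-site.xml: illustrative value only; size to your cluster's load -->
    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>4096</value>
    </property>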
@@ -518,19 +518,18 @@ Index: pom.xml
       <para>Be sure to restart your HDFS after making the above configuration.</para>
-      <para>Not having this configuration in place makes for strange looking failures. Eventually
-        you'll see a complain in the datanode logs complaining about the xcievers exceeded, but on
-        the run up to this one manifestation is complaint about missing blocks. For example:<footnote>
-          <para>See <link
-            xlink:href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html">Hadoop
-            HDFS: Deceived by Xciever</link> for an informative rant on xceivering.</para>
-        </footnote></para>
-      <para>See also <xref
-        linkend="casestudies.xceivers" />
-      </para>
+      <para>Not having this configuration in place makes for strange-looking failures. One
+        manifestation is a complaint about missing blocks. For example:</para>
       <screen>10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
 blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
 contain current block. Will get new block locations from namenode and retry...</screen>
+      <para>See also <xref linkend="casestudies.max.transfer.threads" /> and note that this
+        property was previously known as <varname>dfs.datanode.max.xcievers</varname> (e.g.
+            <link
+          xlink:href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html">
+          Hadoop HDFS: Deceived by Xciever</link>).
+      </para>
     </section>
   </section>
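
To confirm what a node will actually use after editing hdfs-site.xml, the stock Hadoop CLI can print the configured value. This is a sketch, not part of this commit; it reads the configuration files visible to the shell it runs in, so run it on the DataNode host in question.

    # Print the configured value (falls back to Hadoop's built-in default if unset)
    hdfs getconf -confKey dfs.datanode.max.transfer.threads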

View File

@@ -42,8 +42,9 @@
       example one trick with RegionServers is that they will print some metrics when aborting so
       grepping for <emphasis>Dump</emphasis> should get you around the start of the problem. </para>
     <para> RegionServer suicides are “normal”, as this is what they do when something goes wrong.
-      For example, if ulimit and xcievers (the two most important initial settings, see <xref
-        linkend="ulimit" />) arent changed, it will make it impossible at some point for DataNodes
+      For example, if ulimit and max transfer threads (the two most important initial settings, see
+        <xref linkend="ulimit" /> and <xref linkend="dfs.datanode.max.transfer.threads" />) arent
+      changed, it will make it impossible at some point for DataNodes
       to create new threads that from the HBase point of view is seen as if HDFS was gone. Think
       about what would happen if your MySQL database was suddenly unable to access files on your
       local file system, well its the same with HBase and HDFS. Another very common reason to see
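
As a companion to the ulimit pointer above, a quick sanity check on a Linux DataNode or RegionServer host might look like the following sketch; it is not part of this commit, and <pid> is a placeholder for a real daemon process id.

    # Limits of the current shell (the one that will launch the daemons)
    ulimit -n          # max open files
    ulimit -u          # max user processes

    # Limits of an already-running daemon; substitute a real pid for <pid>
    grep -E 'open files|processes' /proc/<pid>/limits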