diff --git a/src/main/docbkx/case_studies.xml b/src/main/docbkx/case_studies.xml
index 262f0ee821d..7824c7d794f 100644
--- a/src/main/docbkx/case_studies.xml
+++ b/src/main/docbkx/case_studies.xml
@@ -223,10 +223,10 @@ Link detected: yes
-        Case Study #4 (xcievers Config)
-        Case study of configuring xceivers, and diagnosing errors from
-        mis-configurations.
+        Case Study #4 (max.transfer.threads Config)
+        Case study of configuring max.transfer.threads (previously known as
+        xcievers) and diagnosing errors from misconfigurations.
         http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html
         See also
         <varname>dfs.datanode.max.transfer.threads</varname><indexterm>
-          <primary>xcievers</primary>
+          <primary>dfs.datanode.max.transfer.threads</primary>
         </indexterm>
-        An Hadoop HDFS datanode has an upper bound on the number of files that it will serve
-        at any one time. The upper bound parameter is called xcievers (yes,
-        this is misspelled). Again, before doing any loading, make sure you have configured
-        Hadoop's conf/hdfs-site.xml setting the xceivers
-        value to at least the following:
+        An HDFS datanode has an upper bound on the number of files that it will serve
+        at any one time. Before doing any loading, make sure you have configured
+        Hadoop's conf/hdfs-site.xml, setting the
+        dfs.datanode.max.transfer.threads value to at least the following:
+        dfs.datanode.max.transfer.threads
@@ -518,19 +518,18 @@ Index: pom.xml
         Be sure to restart your HDFS after making the above configuration.
-        Not having this configuration in place makes for strange looking failures. Eventually
-        you'll see a complain in the datanode logs complaining about the xcievers exceeded, but on
-        the run up to this one manifestation is complaint about missing blocks. For example:
-        See Hadoop
-        HDFS: Deceived by Xciever for an informative rant on xceivering.
-
-        See also
-
+        Not having this configuration in place makes for strange-looking failures. One
+        manifestation is a complaint about missing blocks. For example:
         10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
-        blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
-        contain current block. Will get new block locations from namenode and retry...
+        blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
+        contain current block. Will get new block locations from namenode and retry...
+        See also and note that this
+        property was previously known as dfs.datanode.max.xcievers (e.g.
+
+        Hadoop HDFS: Deceived by Xciever).
+
+
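
For reference, the conf/hdfs-site.xml block that "value to at least the following" points at would look roughly like the sketch below. The value 4096 is the floor commonly recommended for HBase clusters and is an assumption here, not something the hunk itself specifies; size it to your workload and Hadoop version.

    <!-- conf/hdfs-site.xml: upper bound on concurrent DataNode transfer threads -->
    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <!-- 4096 is an assumed recommended minimum; tune per cluster -->
      <value>4096</value>
    </property>

As the hunk above says, restart HDFS after making the change so the DataNodes pick up the new limit.
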
diff --git a/src/main/docbkx/troubleshooting.xml b/src/main/docbkx/troubleshooting.xml
index b78df82184f..3d480d31fc5 100644
--- a/src/main/docbkx/troubleshooting.xml
+++ b/src/main/docbkx/troubleshooting.xml
@@ -42,8 +42,9 @@
         example one trick with RegionServers is that they will print some metrics when aborting so
         grepping for Dump should get you around the start of the problem.
         RegionServer suicides are “normal”, as this is what they do when something goes wrong.
-        For example, if ulimit and xcievers (the two most important initial settings, see ) aren’t
-        changed, it will make it impossible at some point for DataNodes
+        For example, if ulimit and max transfer threads (the two most important initial settings, see
+        and ) aren’t
+        changed, it will make it impossible at some point for DataNodes
         to create new threads that from the HBase point of view is seen as if HDFS was gone. Think
         about what would happen if your MySQL database was suddenly unable to access files on your
         local file system, well it’s the same with HBase and HDFS. Another very common reason to see
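
The ulimit half of those two settings is usually raised in /etc/security/limits.conf for the account running the HDFS and HBase daemons. A minimal sketch, assuming that account is named hadoop; the 32768/32000 figures are the values commonly suggested alongside HBase and are assumptions to tune per cluster:

    # /etc/security/limits.conf: raise open-file (nofile) and process (nproc) caps
    # "hadoop" is an assumed daemon user; "-" sets both soft and hard limits
    hadoop  -  nofile  32768
    hadoop  -  nproc   32000

The new limits apply on next login; running ulimit -n as that user should then report the raised value.
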