diff --git a/src/docbkx/configuration.xml b/src/docbkx/configuration.xml
index 31a7c13c538..c88bb8fc9b6 100644
--- a/src/docbkx/configuration.xml
+++ b/src/docbkx/configuration.xml
@@ -104,14 +104,19 @@ to ensure well-formedness of your document after an edit session.
HBase is a database. It uses a lot of files all at the same time.
The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
is insufficient (on Mac OS X it is 256). Any significant amount of loading will
- lead you to FAQ: Why do I
- see "java.io.IOException...(Too many open files)" in my logs?.
- You may also notice errors such as
+ lead you to the Troubleshooting entry on "java.io.IOException...(Too many open files)".
+ You may also notice errors such as...
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Do yourself a favor and change the upper bound on the
- number of file descriptors. Set it to north of 10k. See the above
- referenced FAQ for how. You should also up the hbase users'
+ number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily
+ there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the
+ number of ColumnFamilies per region by the average number of StoreFiles per ColumnFamily, and then by
+ the number of regions per RegionServer. For example, assuming a schema with 3 ColumnFamilies per region,
+ an average of 3 StoreFiles per ColumnFamily, and 100 regions per RegionServer, the JVM will open
+ 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.).
+
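+ As a rough sketch of how the limit might be raised on a Linux box that uses pam_limits (the file
+ location, the 32768/32000 values, and the "hadoop" user name below are assumptions -- substitute
+ whatever user your HBase and HDFS daemons actually run as); the nproc limit discussed in the next
+ paragraph is typically raised in the same file:
+
+ # /etc/security/limits.conf -- assumed location on a PAM-based Linux distribution;
+ # "hadoop" is a placeholder for the user running the HBase and HDFS daemons
+ hadoop  -  nofile  32768
+ hadoop  -  nproc   32000
+
+ # then, from a fresh login shell for that user, verify the new limits took effect
+ ulimit -n   # open files
+ ulimit -u   # max user processes
+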
+ You should also up the hbase users'
nproc setting; under load, a low-nproc
setting could manifest as OutOfMemoryError. See Jack Levin's major hdfs issues
diff --git a/src/docbkx/troubleshooting.xml b/src/docbkx/troubleshooting.xml
index 9f93cd92f19..1ba03fe0045 100644
--- a/src/docbkx/troubleshooting.xml
+++ b/src/docbkx/troubleshooting.xml
@@ -602,7 +602,14 @@ java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
java.io.IOException...(Too many open files)
- See the Getting Started section on ulimit and nproc configuration.
+ If you see log messages like this...
+
+2010-09-13 01:24:17,336 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
+Disk-related IOException in BlockReceiver constructor. Cause is java.io.IOException: Too many open files
+ at java.io.UnixFileSystem.createFileExclusively(Native Method)
+ at java.io.File.createNewFile(File.java:883)
+
+ ... see the Getting Started section on ulimit and nproc configuration.
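+
+ To confirm whether the file-descriptor limit is the culprit, it can help to compare the limit the
+ running RegionServer actually inherited against the number of descriptors it currently has open.
+ A minimal sketch for Linux follows; the pgrep pattern on the HRegionServer class name is an
+ assumption about how the daemon was started:
+
+ # find the RegionServer pid (assumes the main class name appears on its command line)
+ RS_PID=$(pgrep -f HRegionServer | head -n1)
+ # effective soft/hard "open files" limits for that process (Linux /proc interface)
+ grep "open files" /proc/$RS_PID/limits
+ # number of descriptors currently in use by that process
+ ls /proc/$RS_PID/fd | wc -l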