HBASE-4409. book. Fixed cycle in config section and wiki with "too many open files" error.
git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1170834 13f79535-47bb-0310-9956-ffa450edef68
parent b048ee1466
commit 6e77e87c66
@@ -104,14 +104,19 @@ to ensure well-formedness of your document after an edit session.
<para>HBase is a database. It uses a lot of files all at the same time.
The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
is insufficient (on Mac OS X it's 256). Any significant amount of loading will
lead you to <link xlink:href="http://wiki.apache.org/hadoop/Hbase/FAQ#A6">FAQ: Why do I
see "java.io.IOException...(Too many open files)" in my logs?</link>.
You may also notice errors such as <programlisting>
lead you to <xref linkend="trouble.rs.runtime.filehandles"/>.
You may also notice errors such as... <programlisting>
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
</programlisting> Do yourself a favor and change the upper bound on the
number of file descriptors. Set it to north of 10k. See the above
referenced FAQ for how. You should also up the hbase users'
number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily
there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the
average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming
that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily,
and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors
(not counting open jar files, config files, etc.)
</para>
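<para>As a concrete illustration only (the user name and value below are assumptions, not
part of a stock install): on Linux distributions that use PAM, the per-user limit is
typically raised in <filename>/etc/security/limits.conf</filename>, provided
<filename>pam_limits.so</filename> is enabled for login sessions. Substitute the user that
runs your HBase and HDFS daemons and a value that fits the math above:
<programlisting>
# /etc/security/limits.conf -- raise the open-file limit for the (assumed) hadoop user
hadoop  -  nofile  32768
</programlisting>
Log in again as that user and run <command>ulimit -n</command> to confirm the new limit
took effect.
</para>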
<para>You should also up the hbase users'
<varname>nproc</varname> setting; under load, a low-nproc
setting could manifest as <classname>OutOfMemoryError</classname>
<footnote><para>See Jack Levin's <link xlink:href="">major hdfs issues</link>
@@ -602,7 +602,14 @@ java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
<section xml:id="trouble.rs.runtime.filehandles">
<title>java.io.IOException...(Too many open files)</title>
<para>
See the Getting Started section on <link linkend="ulimit">ulimit and nproc configuration</link>.
If you see log messages like this...
<programlisting>
2010-09-13 01:24:17,336 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
Disk-related IOException in BlockReceiver constructor. Cause is java.io.IOException: Too many open files
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:883)
</programlisting>
... see the Getting Started section on <link linkend="ulimit">ulimit and nproc configuration</link>.
</para>
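<para>A rough way to confirm the process really is near its limit (a sketch; Linux-only,
and <varname>$PID</varname> is a placeholder for whatever pid your DataNode or RegionServer
is running as):
<programlisting>
$ grep "open files" /proc/$PID/limits   # effective per-process limit
$ ls /proc/$PID/fd | wc -l              # descriptors currently open
</programlisting>
If the second number approaches the first, raise the limit as described in the ulimit
section referenced above.
</para>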
</section>
<section xml:id="trouble.rs.runtime.xceivers">