diff --git a/src/docbkx/getting_started.xml b/src/docbkx/getting_started.xml
index d6c21f16585..32ea93ff0f8 100644
--- a/src/docbkx/getting_started.xml
+++ b/src/docbkx/getting_started.xml
@@ -329,10 +329,10 @@ stopping hbase...............
-          HBase is a database, it uses a lot of files all at the same time.
-          The default ulimit -n -- i.e. user file limit -- of 1024 on *nix systems
-          is insufficient. Any significant amount of loading will lead you to FAQ: Why do I
+          HBase is a database. It uses a lot of files all at the same time.
+          The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
+          is insufficient (on Mac OS X it is 256). Any significant amount of loading will
+          lead you to FAQ: Why do I
           see "java.io.IOException...(Too many open files)" in my logs?.
           You may also notice errors such as
           2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
@@ -343,7 +343,12 @@ stopping hbase...............
           nproc setting; under load, a low-nproc
           setting could manifest as OutOfMemoryError
           See Jack Levin's major hdfs issues
-          note up on the user list..
+          note up on the user list.
+          The need to up system limits is not peculiar to HBase;
+          databases in general require it. See, for example, the section
+          Setting Shell Limits for the Oracle User in
+
+          Short Guide to install Oracle 10 on Linux.
           To be clear, upping the file descriptors and nproc for the user who is
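
For anyone trying the advice in this change locally: below is a minimal sketch of how the file-descriptor and nproc limits might be raised on Linux, assuming the HBase daemons run as a hypothetical user named hadoop and that the system applies pam_limits to login sessions. The user name and the numbers are illustrative only, not values recommended by this patch.

    # Check the limits in the shell that will start HBase
    $ ulimit -n        # open file descriptors (commonly 1024 by default; 256 on Mac OS X)
    $ ulimit -u        # max user processes (nproc)

    # /etc/security/limits.conf -- raise limits for the user running HBase
    # (hypothetical user "hadoop"; example values only)
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768
    hadoop  soft  nproc   32000
    hadoop  hard  nproc   32000

    # On Debian/Ubuntu, pam_limits must be enabled for the settings above to
    # take effect at login, e.g. this line in /etc/pam.d/common-session:
    session required pam_limits.so

The new limits only apply to sessions started after the change, so log in again (or restart the service) and re-run ulimit -n as that user to verify.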