From dd6dca4d949375843b8ac7ec93761de80bdc9ea7 Mon Sep 17 00:00:00 2001
From: Michael Stack
Date: Tue, 5 Apr 2011 18:12:47 +0000
Subject: [PATCH] Pointed at oracle config. doc. for example of how other dbs
 have same issues we do w/ sys configs

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1089149 13f79535-47bb-0310-9956-ffa450edef68
---
 src/docbkx/getting_started.xml | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/src/docbkx/getting_started.xml b/src/docbkx/getting_started.xml
index d6c21f16585..32ea93ff0f8 100644
--- a/src/docbkx/getting_started.xml
+++ b/src/docbkx/getting_started.xml
@@ -329,10 +329,10 @@ stopping hbase...............
-      HBase is a database, it uses a lot of files all at the same time.
-      The default ulimit -n -- i.e. user file limit -- of 1024 on *nix systems
-      is insufficient. Any significant amount of loading will lead you to FAQ: Why do I
+      HBase is a database. It uses a lot of files all at the same time.
+      The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems
+      is insufficient (On mac os x its 256). Any significant amount of loading will
+      lead you to FAQ: Why do I
       see "java.io.IOException...(Too many open files)" in my logs?.
       You may also notice errors such as
       2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
@@ -343,7 +343,12 @@ stopping hbase...............
       nproc setting; under load, a low-nproc setting could manifest as OutOfMemoryError
       See Jack Levin's major hdfs issues
-      note up on the user list..
+      note up on the user list.
+      The requirement that a database requires upping of system limits
+      is not peculiar to HBase. See for example the section
+      Setting Shell Limits for the Oracle User in
+
+      Short Guide to install Oracle 10 on Linux..
       To be clear, upping the file descriptors and nproc for the user who is
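
As a note accompanying the patch (not part of it): the text above warns that the default `ulimit -n` of 1024 on most *nix systems (256 on Mac OS X) is too low for HBase, and that a low nproc setting can surface as OutOfMemoryError. A minimal sketch of how to inspect the current limits from a shell, with the `limits.conf` values and the "hadoop" user name being illustrative assumptions rather than anything the patch prescribes:

```shell
# Show the current per-process open-file limit for this shell
# (commonly 1024 on Linux, 256 on Mac OS X, as the patch notes).
ulimit -n

# Show the max-user-processes limit -- the nproc setting the patch
# says can manifest as OutOfMemoryError when set too low.
ulimit -u

# Hypothetical persistent settings for /etc/security/limits.conf on a
# PAM-based Linux system; "hadoop" is an assumed user running the
# HBase/HDFS daemons, and the values are illustrative only:
#   hadoop  -  nofile  32768
#   hadoop  -  nproc   32000
```

New limits set in `/etc/security/limits.conf` only take effect for fresh login sessions of that user, so the daemons must be restarted from a new session to pick them up.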