diff --git a/src/docbkx/book.xml b/src/docbkx/book.xml
index 638441563b8..4fad1ad5671 100644
--- a/src/docbkx/book.xml
+++ b/src/docbkx/book.xml
@@ -285,7 +285,7 @@ stopping hbase...............
Usually you'll want to use the latest version available except the problematic u18 (u22 is the latest version as of this writing).
- <title>hadoop</title>
+ <title>hadoop<indexterm><primary>Hadoop</primary></indexterm></title>
This version of HBase will only run on Hadoop 0.20.x.
It will not run on hadoop 0.21.x (nor 0.22.x) as of this writing.
HBase will lose data unless it is running on an HDFS that has a durable sync.
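As a sketch of what a durable sync means in practice for the branch-0.20-append era of HDFS this paragraph refers to: append/sync support usually had to be switched on explicitly. The property below is the commonly used name from that era; treat it as an example and confirm it against your Hadoop build's documentation.
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
This would typically go in hdfs-site.xml on every node (and be mirrored in hbase-site.xml so the HBase client side sees it), followed by a cluster restart.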
@@ -343,7 +343,7 @@ be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons.
- <title>ulimit</title>
+ <title>ulimit<indexterm><primary>ulimit</primary></indexterm></title>
HBase is a database; it uses a lot of files at the same time.
The default ulimit -n of 1024 on *nix systems is insufficient.
Any significant amount of loading will lead you to
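To make the ulimit discussion above concrete, here is a minimal sketch of checking and raising the open-file limit on a typical Linux machine; the user name hadoop and the value 32768 are only examples:
  $ ulimit -n
  1024
  # in /etc/security/limits.conf, for the user that runs HDFS/HBase
  hadoop  -  nofile  32768
On PAM-based distributions the pam_limits module must be enabled (for example, 'session required pam_limits.so' in /etc/pam.d/common-session) and the user must log in again before the new limit takes effect.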
@@ -390,7 +390,7 @@ be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons.
- <title>dfs.datanode.max.xcievers</title>
+ <title>dfs.datanode.max.xcievers<indexterm><primary>xcievers</primary></indexterm></title>
A Hadoop HDFS datanode has an upper bound on the number of files
that it will serve at any one time.
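As an illustration of raising that bound, the usual fix in this era is to set the (misspelled, as in Hadoop itself) dfs.datanode.max.xcievers property in each datanode's hdfs-site.xml and restart the datanodes; 4096 is a commonly suggested starting point rather than a tuned value:
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>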
@@ -1060,13 +1060,14 @@ to ensure well-formedness of your document after an edit session.
HBase ships with a reasonable, conservative configuration that will
work on nearly all
machine types that people might want to test with. If you have larger
- machines you might the following configuration options helpful.
+ machines -- where HBase has an 8G or larger heap -- you might find the following configuration options helpful.
+ TODO.
- <title>LZO compression</title>
+ <title>LZO compression<indexterm><primary>LZO</primary></indexterm></title>
You should consider enabling LZO compression. It's
near-frictionless and in almost all cases boosts performance.
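To show what enabling LZO looks like at the table level, a minimal sketch using the HBase shell; the table and column family names are made up, and the hadoop-lzo native libraries must already be installed on every node since they are not shipped with HBase:
  hbase> create 'mytable', {NAME => 'mycf', COMPRESSION => 'LZO'}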
@@ -1886,10 +1887,14 @@ of all regions.
doing: $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.HLog --split hdfs://example.org:9000/hbase/.logs/example.org,60020,1283516293161/
+ Compression Tool
+ See Compression Tool.
+
+
- <title>Compression In HBase</title>
+ <title>Compression In HBase<indexterm><primary>Compression</primary></indexterm></title>
CompressionTest Tool
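As a sketch of running the tool named above: the CompressionTest class is invoked through the hbase script and asked to write and read back a file with a given codec. The host, path, and exact argument order here are from memory, so check the usage message the class prints when run without arguments:
  $ ./bin/hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://namenode:9000/tmp/testfile.txt lzo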
@@ -1947,7 +1952,7 @@ of all regions.
available on the CLASSPATH; in this case it will use native
compressors instead (If the native libs are NOT present,
you will see lots of 'Got brand-new compressor'
- reports in your logs; TO BE FIXED).
+ reports in your logs; see the FAQ).
@@ -1966,7 +1971,7 @@ of all regions.
-
+ Why are logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new compressor' messages?