From d11ddc0622c16b605b2e62f267a0564d2f318d34 Mon Sep 17 00:00:00 2001
From: Doug Meil
Date: Wed, 30 Nov 2011 03:08:14 +0000
Subject: [PATCH] hbase-4898. book.xml (1 correction in no-reducer summary)
 troubleshooting.xml 1 addition for MapReduce related documentation.

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1208230 13f79535-47bb-0310-9956-ffa450edef68
---
 src/docbkx/book.xml            | 8 +++++---
 src/docbkx/troubleshooting.xml | 5 +++++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/docbkx/book.xml b/src/docbkx/book.xml
index b414201190e..83ec9e87c59 100644
--- a/src/docbkx/book.xml
+++ b/src/docbkx/book.xml
@@ -867,9 +867,11 @@ System.out.println("md5 digest as string length: " + sbDigest.length); // ret
   HBase and MapReduce
-  See HBase and MapReduce up in javadocs.
+  See
+  HBase and MapReduce up in javadocs.
   Start there. Below is some additional help.
-  For more information about MapReduce, see the Hadoop MapReduce Tutorial.
+  For more information about MapReduce (i.e., the framework in general), see the
+  Hadoop MapReduce Tutorial.
  Map-Task Splitting
@@ -1117,7 +1119,7 @@ if (!b) {
   HBase MapReduce Summary Without Reducer
   It is also possible to perform summaries without a reducer - if you use HBase as the reducer.
-  There would need to exist an HTable target table for the job summary. The HTable method incrementColumnValue
+  An HBase target table would need to exist for the job summary. The HTable method incrementColumnValue
   would be used to atomically increment values. From a performance perspective, it might make sense to keep a Map
   of keys with their values to be incremented for each map-task, and make one update per key during the
   cleanup method of the mapper. However, your mileage may vary depending on the number of rows to be processed and
diff --git a/src/docbkx/troubleshooting.xml b/src/docbkx/troubleshooting.xml
index a600234fba4..d809fea85d5 100644
--- a/src/docbkx/troubleshooting.xml
+++ b/src/docbkx/troubleshooting.xml
@@ -554,6 +554,11 @@ Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
 at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
   LocalJobRunner means the job is running locally, not on the cluster.
+
+  See
+
+  http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath for more
+  information on HBase MapReduce jobs and classpaths.
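
The no-reducer summary paragraph changed above is easier to follow with code in hand. Below is a minimal sketch of a mapper that accumulates counts in a per-task Map and flushes them with one HTable.incrementColumnValue() call per key in cleanup(), rather than one RPC per input row. The summary table "job_summary", the counter column cf:count, and the grouping column cf:attr are made-up placeholders, not names taken from the patch or the book.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;

// Map-only summary: aggregate in memory per task, then issue one atomic
// increment per distinct key in cleanup() instead of one RPC per input row.
// Assumes the number of distinct keys seen by a task fits in memory, which is
// the "your mileage may vary" caveat in the paragraph above.
public class SummaryWithoutReducerMapper
    extends TableMapper<NullWritable, NullWritable> {

  // Hypothetical summary table and columns -- adjust to the actual schema.
  private static final byte[] SUMMARY_TABLE = Bytes.toBytes("job_summary");
  private static final byte[] CF = Bytes.toBytes("cf");
  private static final byte[] COUNT = Bytes.toBytes("count");

  private final Map<String, Long> counts = new HashMap<String, Long>();
  private HTable summaryTable;

  @Override
  protected void setup(Context context) throws IOException {
    Configuration conf = HBaseConfiguration.create(context.getConfiguration());
    summaryTable = new HTable(conf, SUMMARY_TABLE);
  }

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // Hypothetical grouping column cf:attr on the source table.
    byte[] attr = value.getValue(CF, Bytes.toBytes("attr"));
    if (attr == null) {
      return;
    }
    String key = Bytes.toString(attr);
    Long current = counts.get(key);
    counts.put(key, current == null ? 1L : current + 1L);
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    // One incrementColumnValue() per key, not per row seen by this task.
    for (Map.Entry<String, Long> e : counts.entrySet()) {
      summaryTable.incrementColumnValue(
          Bytes.toBytes(e.getKey()), CF, COUNT, e.getValue());
    }
    summaryTable.close();
  }
}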
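
For the troubleshooting addition about LocalJobRunner and classpaths, here is a companion driver sketch for the mapper above. The classpath-relevant pieces are HBaseConfiguration.create(), which needs hbase-site.xml (and the cluster's mapred configuration) visible on the submitting client's classpath, plus job.setJarByClass() and TableMapReduceUtil.addDependencyJars(), which ship the job and HBase jars to the cluster-side tasks. A client that only sees the default Hadoop configuration will quietly fall back to LocalJobRunner, which is the symptom in the stack trace above. "source_table" and the job name are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class SummaryWithoutReducerDriver {

  public static void main(String[] args) throws Exception {
    // Requires hbase-site.xml and the cluster's mapred config on the client
    // classpath; without the latter the job runs in LocalJobRunner.
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "summary-without-reducer");  // placeholder job name
    job.setJarByClass(SummaryWithoutReducerDriver.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger scanner caching for full-table MR scans
    scan.setCacheBlocks(false);  // don't churn the block cache from MR scans

    // "source_table" is a placeholder for the table being summarized.
    TableMapReduceUtil.initTableMapperJob(
        "source_table", scan,
        SummaryWithoutReducerMapper.class,
        NullWritable.class, NullWritable.class,
        job);

    // Ship HBase and its dependencies with the job so cluster-side tasks can
    // load them; the alternative is adding the output of `hbase classpath`
    // to HADOOP_CLASSPATH when submitting.
    TableMapReduceUtil.addDependencyJars(job);

    job.setNumReduceTasks(0);                           // map-only job
    job.setOutputFormatClass(NullOutputFormat.class);   // mapper writes via HTable directly

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}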