hbase-4897 book.xml (clarified scan example), troubleshooting.xml (mr example)

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1208225 13f79535-47bb-0310-9956-ffa450edef68
Doug Meil 2011-11-30 02:48:58 +00:00
parent 99263fecfc
commit a0cce57e46
2 changed files with 36 additions and 3 deletions

book.xml

@@ -256,8 +256,8 @@ HTable htable = ... // instantiate HTable
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"),Bytes.toBytes("attr"));
-scan.setStartRow( Bytes.toBytes("row"));
-scan.setStopRow( Bytes.toBytes("row" + new byte[] {0})); // note: stop key != start key
+scan.setStartRow(Bytes.toBytes("row")); // start key is inclusive
+scan.setStopRow(Bytes.add(Bytes.toBytes("row"), new byte[] {0})); // stop key is exclusive
for(Result result : htable.getScanner(scan)) {
// process Result instance
}
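
For context, here is a self-contained sketch of the clarified scan (hedged: the table name "testtable" and the cleanup logic are illustrative, not part of the commit, and Bytes.add is used for the stop key because Java string concatenation with a byte[] would append the array's toString() rather than a zero byte):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleRowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable htable = new HTable(conf, "testtable"); // table name is illustrative
    try {
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr"));
      // Start key is inclusive, stop key is exclusive. "row" plus a
      // trailing zero byte is the immediate successor of "row" in byte
      // order, so this scan is scoped to exactly the row "row".
      scan.setStartRow(Bytes.toBytes("row"));
      scan.setStopRow(Bytes.add(Bytes.toBytes("row"), new byte[] { 0 }));
      ResultScanner scanner = htable.getScanner(scan);
      try {
        for (Result result : scanner) {
          // process Result instance
          System.out.println(Bytes.toString(result.getRow()));
        }
      } finally {
        scanner.close(); // release the scanner's server-side resources
      }
    } finally {
      htable.close();
    }
  }
}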

troubleshooting.xml

@@ -523,8 +523,41 @@ hadoop 17789 155 35.2 9067824 8604364 ? S&lt;l Mar04 9855:48 /usr/java/j
</para>
</section>
</section>
<section xml:id="trouble.mapreduce">
<title>MapReduce</title>
<section xml:id="trouble.mapreduce.local">
<title>You Think You're On The Cluster, But You're Actually Local</title>
<para>The following stacktrace happened while using <code>ImportTsv</code>, but the same thing
can happen on any mis-configured job.
<programlisting>
WARN mapred.LocalJobRunner: job_local_0001
java.lang.IllegalArgumentException: Can't read partitions file
at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.&lt;init&gt;(MapTask.java:560)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
at org.apache.hadoop.io.SequenceFile$Reader.&lt;init&gt;(SequenceFile.java:1424)
at org.apache.hadoop.io.SequenceFile$Reader.&lt;init&gt;(SequenceFile.java:1419)
at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
</programlisting>
See the critical portion of the stack? It is this line:
<programlisting>
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
</programlisting>
<code>LocalJobRunner</code> means the job is running locally, not on the cluster. This typically
happens when the Hadoop configuration directory is not on the job's classpath, so
<code>mapred.job.tracker</code> silently defaults to <code>local</code>.
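One way to check where a job will run before submitting it is to inspect the effective
configuration (a sketch, assuming a Hadoop 0.20-era setup in which
<code>mapred.job.tracker</code> defaulting to <code>local</code> selects the
<code>LocalJobRunner</code>):
<programlisting>
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapred.JobConf;

public class WhereWillItRun {
  public static void main(String[] args) {
    // JobConf pulls in mapred-site.xml from the classpath; wrapping
    // HBaseConfiguration.create() picks up hbase-site.xml as well.
    JobConf conf = new JobConf(HBaseConfiguration.create());
    String jobTracker = conf.get("mapred.job.tracker", "local");
    if ("local".equals(jobTracker)) {
      System.out.println("mapred.job.tracker=local: the job would use the LocalJobRunner");
    } else {
      System.out.println("Job would be submitted to the JobTracker at " + jobTracker);
    }
  }
}
</programlisting>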
</para>
</section>
</section>
<section xml:id="trouble.namenode">
<title>NameNode</title>
<para>For more information on the NameNode, see <xref linkend="arch.hdfs"/>.