HBASE-1876 DroppedSnapshotException when flushing memstore after a datanode dies

git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk@822064 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack 2009-10-05 22:37:01 +00:00
parent 3b8bed11a6
commit f370d0ba41
8 changed files with 16 additions and 15 deletions


@@ -1019,7 +1019,6 @@ public class KeyValue implements Writable, HeapSize {
   /**
    * @param column Column minus its delimiter
    * @return True if column matches.
-   * @see #matchingColumn(byte[])
    */
   public boolean matchingColumnNoDelimiter(final byte [] column) {
     int rl = getRowLength();
@@ -1517,7 +1516,7 @@ public class KeyValue implements Writable, HeapSize {
   }
   /**
-   * @param b
+   * @param bb
    * @return A KeyValue made of a byte buffer that holds the key-only part.
    * Needed to convert hfile index members to KeyValues.
    */


@@ -32,7 +32,7 @@ import org.apache.hadoop.hbase.util.Bytes;
  *
  * Each HTablePool acts as a pool for all tables. To use, instantiate an
  * HTablePool and use {@link #getTable(String)} to get an HTable from the pool.
- * Once you are done with it, return it to the pool with {@link #putTable(HTable)}.<p>
+ * Once you are done with it, return it to the pool with {@link #putTable(HTableInterface)}.<p>
  *
  * A pool can be created with a <i>maxSize</i> which defines the most HTable
  * references that will ever be retained for each table. Otherwise the default
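The javadoc above describes the pool's checkout/return cycle. As a rough illustration only (not part of this commit), here is a minimal sketch of that cycle against the 0.20-era client API; the table name "mytable" and row key are invented, and the exact type handed out by getTable (HTable vs. HTableInterface) shifted around this time, which is what the doc fix above reflects.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HTablePoolSketch {
  public static void main(String[] args) throws Exception {
    // One pool serves all tables; maxSize caps the references kept per table.
    HTablePool pool = new HTablePool(new HBaseConfiguration(), 10);
    HTableInterface table = pool.getTable("mytable");   // hypothetical table
    try {
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(r);
    } finally {
      pool.putTable(table);  // return the table to the pool rather than closing it
    }
  }
}
```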


@@ -97,7 +97,6 @@ public class Result implements Writable {
   /**
    * Instantiate a Result from the specified raw binary format.
    * @param bytes raw binary format of Result
-   * @param numKeys number of KeyValues in Result
    */
   public Result(ImmutableBytesWritable bytes) {
     this.bytes = bytes;


@@ -74,8 +74,8 @@ public abstract class CompareFilter implements Filter {
   /**
    * Constructor.
-   * @param rowCompareOp the compare op for row matching
-   * @param rowComparator the comparator for row matching
+   * @param compareOp the compare op for row matching
+   * @param comparator the comparator for row matching
    */
   public CompareFilter(final CompareOp compareOp,
       final WritableByteArrayComparable comparator) {
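CompareFilter itself is abstract; the (compareOp, comparator) pair documented above is what its concrete subclasses take. A small sketch, assuming the RowFilter subclass and BinaryComparator from the same package and an invented row-key bound; neither class is touched by this diff.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowFilterSketch {
  // Build a scan whose filter compares each row key against "row-500"
  // with the LESS_OR_EQUAL compare op.
  static Scan buildScan() {
    Scan scan = new Scan();
    scan.setFilter(new RowFilter(CompareOp.LESS_OR_EQUAL,
        new BinaryComparator(Bytes.toBytes("row-500"))));
    return scan;
  }
}
```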


@@ -32,8 +32,7 @@ import org.apache.hadoop.hbase.util.Bytes;
  * filtering based on the value of a given column. Use it to test if a given
  * regular expression matches a cell value in the column.
  * <p>
- * Only EQUAL or NOT_EQUAL {@link CompareOp} comparisons are valid with this
- * comparator.
+ * Only EQUAL or NOT_EQUAL comparisons are valid with this comparator.
  * <p>
  * For example:
  * <p>
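For illustration only, and assuming the comparator documented above is RegexStringComparator from this package: a sketch of pairing it with ValueFilter under the EQUAL restriction the javadoc calls out. The regex itself is invented.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.ValueFilter;

public class RegexValueScanSketch {
  // Compare each cell value against the regex; only EQUAL or NOT_EQUAL are
  // meaningful here because a regular expression defines no ordering.
  static Scan buildScan() {
    Scan scan = new Scan();
    scan.setFilter(new ValueFilter(CompareOp.EQUAL,
        new RegexStringComparator("^user-[0-9]+$")));
    return scan;
  }
}
```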


@@ -33,9 +33,9 @@ import org.apache.hadoop.hbase.io.HbaseObjectWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 /**
- * This filter is used to filter cells based on value. It takes a {@link #Filter.CompareOp}
+ * This filter is used to filter cells based on value. It takes a {@link CompareFilter.CompareOp}
  * operator (equal, greater, not equal, etc), and either a byte [] value or
- * a {@link #WritableByteArrayComparable}.
+ * a WritableByteArrayComparable.
  * <p>
  * If we have a byte [] value then we just do a lexicographic compare. For
  * example, if passed value is 'b' and cell has 'a' and the compare operator
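A sketch of the lexicographic case described in that javadoc. Rather than guessing at the exact byte[] constructor of this era, it goes through BinaryComparator, which performs the same byte-order compare; the compare op and value 'b' mirror the javadoc's example, and the surrounding scan setup is invented.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ValueFilterSketch {
  // Compare each cell value lexicographically against 'b', as in the
  // javadoc's 'b' vs. 'a' example, using the GREATER compare op.
  static Scan buildScan() {
    Scan scan = new Scan();
    scan.setFilter(new ValueFilter(CompareOp.GREATER,
        new BinaryComparator(Bytes.toBytes("b"))));
    return scan;
  }
}
```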


@@ -20,9 +20,7 @@
 /**Provides row-level filters applied to HRegion scan results during calls to
  * {@link org.apache.hadoop.hbase.client.ResultScanner#next()}.
-<p>Since HBase 0.20.0, {@link org.apache.hadoop.hbase.filter.Filter} is the new
-Interface used filtering. It replaces the deprecated
-{@link org.apache.hadoop.hbase.filter.RowFilterInterface}.
+<p>
 Filters run the extent of a table unless you wrap your filter in a
 {@link org.apache.hadoop.hbase.filter.WhileMatchFilter}.
 The latter returns as soon as the filter stops matching.
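A sketch of the scoping point made in that package doc, assuming PrefixFilter from this package and an invented prefix: wrapping the filter in WhileMatchFilter lets the scan end at the first row that stops matching instead of running the whole extent of the table.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.WhileMatchFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanSketch {
  // Scan rows starting at the prefix; the WhileMatchFilter wrapper stops the
  // scan once the wrapped PrefixFilter no longer matches the current row.
  static Scan buildScan() {
    Scan scan = new Scan(Bytes.toBytes("abc"));
    scan.setFilter(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("abc"))));
    return scan;
  }
}
```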


@@ -69,11 +69,17 @@
 in your hbase-site.xml.</li>
 <li>This is a list of patches we recommend you apply to your running Hadoop cluster:
 <ul>
-<li><a href="https://issues.apache.org/jira/browse/HADOOP-4681">HADOOP-4681 <i>"DFSClient block read failures cause open DFSInputStream to become unusable"</i></a>. This patch will help with the ever-popular, "No live nodes contain current block".
+<li><a href="https://issues.apache.org/jira/browse/HADOOP-4681">HADOOP-4681/HDFS-127 <i>"DFSClient block read failures cause open DFSInputStream to become unusable"</i></a>. This patch will help with the ever-popular, "No live nodes contain current block".
 The hadoop version bundled with hbase has this patch applied. Its an HDFS client
 fix so this should do for usual usage but if your cluster is missing the patch,
 and in particular if calling hbase from a mapreduce job, you may run into this
 issue.
+</li>
+<li><a href="https://issues.apache.org/jira/browse/HDFS-630">HDFS-630 <i>"In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block"</i></a>. Dead datanodes take ten minutes to timeout at namenode.
+Meantime the namenode can still send DFSClients to the dead datanode as host for
+a replicated block. DFSClient can get stuck on trying to get block from a
+dead node. This patch allows DFSClients pass namenode lists of known
+dead datanodes.
 </li>
 </ul>
 </li>