In some situations a caller may know that it is properly managing the
Kerberos ticket it uses to talk to HBase. In those cases AuthUtil may
still attempt renewals, only to fail repeatedly. Provide a configuration
flag that lets such clients tell AuthUtil to simply stop trying.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
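As an illustration, a client that manages its own ticket might opt out as
shown below. The property name is a placeholder, not necessarily the key
this change introduces:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class AuthUtilOptOutSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Placeholder property name, for illustration only: the flag added by
      // this change tells AuthUtil to skip its background renewal attempts
      // when the caller manages the Kerberos ticket itself.
      conf.setBoolean("hbase.client.authutil.automatic.renewal", false);
      // ... build the Connection from this configuration as usual ...
    }
  }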
Depending on which compression codec is used, a short read of the
compressed bytes can cause catastrophic errors that confuse the WAL reader.
This problem can manifest when the reader is actively tailing the WAL for
replication. To avoid these issues when WAL value compression is enabled,
BoundedDelegatingInputStream should assume enough bytes are available to
supply a reader up to its bound. This behavior is valid per the contract
of available(), which provides only an _estimate_ of available bytes, and
is equivalent to IOUtils.readFully but without requiring an intermediate
buffer.
Added TestReplicationCompressedWAL and TestReplicationValueCompressedWAL.
Without the WALCellCodec change TestReplicationValueCompressedWAL will
fail.
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
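A minimal sketch of the available() behavior described above, using a
simplified stand-in class rather than the actual
BoundedDelegatingInputStream:

  import java.io.IOException;
  import java.io.InputStream;

  /**
   * Simplified sketch: delegates to an underlying stream but reports the
   * full remaining bound from available(), so a decompressor reading
   * through it does not see a transient short read while tailing.
   */
  public class BoundedStreamSketch extends InputStream {
    private final InputStream delegate;
    private long remaining; // bytes left until the bound

    public BoundedStreamSketch(InputStream delegate, long limit) {
      this.delegate = delegate;
      this.remaining = limit;
    }

    @Override
    public int read() throws IOException {
      if (remaining <= 0) {
        return -1;
      }
      int b = delegate.read();
      if (b >= 0) {
        remaining--;
      }
      return b;
    }

    @Override
    public int available() throws IOException {
      // Per the available() contract this is only an estimate; assume the
      // full bound can be supplied instead of surfacing a short read.
      return (int) Math.min(Integer.MAX_VALUE, remaining);
    }
  }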
Undo asserts that LZ4 and SNAPPY fail if their native libs are NOT
loaded; as of Hadoop 3.3.1, LZ4 and SNAPPY can work without native libs.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
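For illustration, with Hadoop 3.3.1 on the classpath a round trip through
the stock Lz4Codec like the following is expected to work even when the
native libraries are absent (earlier Hadoop versions would fail here
without native support):

  import java.io.ByteArrayInputStream;
  import java.io.ByteArrayOutputStream;
  import java.io.InputStream;
  import java.io.OutputStream;
  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.io.IOUtils;
  import org.apache.hadoop.io.compress.Lz4Codec;

  public class Lz4RoundTrip {
    public static void main(String[] args) throws Exception {
      Lz4Codec codec = new Lz4Codec();
      codec.setConf(new Configuration());

      byte[] data = "hello wal".getBytes(StandardCharsets.UTF_8);

      // Compress; on Hadoop 3.3.1+ a pure-Java implementation is used when
      // native libs are not loaded instead of throwing.
      ByteArrayOutputStream compressed = new ByteArrayOutputStream();
      try (OutputStream out = codec.createOutputStream(compressed)) {
        out.write(data);
      }

      // Decompress and verify the round trip.
      ByteArrayOutputStream restored = new ByteArrayOutputStream();
      try (InputStream in =
          codec.createInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
        IOUtils.copyBytes(in, restored, 4096, false);
      }
      System.out.println(new String(restored.toByteArray(), StandardCharsets.UTF_8));
    }
  }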
WAL storage can be expensive, especially if the cell values
represented in the edits are large, consisting of blobs or
significant lengths of text. Such WALs might need to be kept around
for a fairly long time to satisfy replication constraints on a space
limited (or space-contended) filesystem.
We have a custom dictionary compression scheme for cell metadata that
is engaged when WAL compression is enabled in site configuration.
This is fine for that application, where we can expect the universe
of values and their lengths in the custom dictionaries to be
constrained. For arbitrary cell values it is better to use one of the
available compression codecs, which are suitable for arbitrary albeit
compressible data.
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
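A hedged example of what enabling this might look like in configuration;
hbase.regionserver.wal.enablecompression is the long-standing dictionary
compression switch, while the value-compression keys shown below are
assumptions that should be checked against the release documentation:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class WalValueCompressionConfigSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Existing switch for dictionary compression of WAL cell metadata.
      conf.setBoolean("hbase.regionserver.wal.enablecompression", true);
      // Assumed property names for the new value compression feature;
      // verify against the shipped documentation before relying on them.
      conf.setBoolean("hbase.regionserver.wal.value.compression.enabled", true);
      conf.set("hbase.regionserver.wal.value.compression.type", "snappy");
    }
  }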
First extract an hbase-coprocessor module used by hbase-client, hbase-server.
This is prerequisite to extracting an hbase-wal module.
M hbase-common/src/main/java/org/apache/hadoop/hbase/Abortable.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/util/SortedList.java
Moved to hbase-common. These are generic types needed by the new
hbase-coprocessor module.
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/Coprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoreCoprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContext.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContextImpl.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ReadOnlyConfiguration.java
Move to hbase-coprocessor.
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java
Moved to hbase-endpoint where they are used.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
Include region name when toString'd.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCoprocessorHost.java
Include WAL name when toString'd.
M hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
Moved a utility used in testing here from CoprocessorHost.
Added a new interface in TableDescriptor that allows users to define the
RSGroup name when creating or modifying a table.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
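A brief sketch of assigning an RSGroup at table creation time; the builder
method name setRegionServerGroup is my reading of this API and should be
verified against the target release:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

  public class CreateTableWithRsGroupSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
            // Assign the table to an existing RSGroup at creation time.
            .setRegionServerGroup("app_group")
            .build();
        admin.createTable(td);
      }
    }
  }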
Adds "hbase.master.executor.merge.dispatch.threads" and defaults to 2.
Also adds additional logging that includes the number of split plans
and merge plans computed for each normalizer run.
(cherry picked from commit 36b4698cea)
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
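For example, raising the new pool size (property name taken from the text
above, value illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class MergeDispatchThreadsConfigSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Defaults to 2 per this change; raise if normalizer-driven merges back up.
      conf.setInt("hbase.master.executor.merge.dispatch.threads", 4);
    }
  }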
Revert of the revert: re-apply HBASE-25449 with one change, renaming the
test HDFS XML configuration file because it was adversely affecting tests
that use MiniDFS.
This reverts commit c218e576fe.
Co-authored-by: Josh Elser <elserj@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
* HBASE-25379 Make retry pause time configurable for regionserver short operation RPC (reportRegionStateTransition/reportProcedureDone)
* HBASE-25379 RemoteProcedureResultReporter also should retry after the configured pause time
* Addressed the review comments
Signed-off-by: Yulin Niu <niuyulin@apache.org>
(cherry picked from commit c96fbf0407)
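A sketch of tuning the pause, using a placeholder property name since the
exact key added by HBASE-25379 is not spelled out here:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class ShortOperationRetryPauseSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Placeholder key for illustration only; consult the HBASE-25379 change
      // for the actual property controlling the pause between
      // reportRegionStateTransition / reportProcedureDone retries.
      conf.setLong("hbase.regionserver.short.operation.retry.pause.ms", 500);
    }
  }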
* Use ContiguousCellFormat as a marker alone
* Commit the new file
* Fix the comparator logic that was an oversight
* Fix the sequenceId check order
* Add a few more static methods that help in scan flows like the query
matcher where we have more columns
* Remove ContiguousCellFormat and ensure compare() can be inlined
* Apply negation as per review comment
* Fix checkstyle issues
* Fix review comments
* Address review comments
Signed-off-by: stack <stack@apache.org>
Signed-off-by: AnoopSamJohn <anoopsamjohn@apache.org>
Signed-off-by: huaxiangsun <huaxiangsun@apache.org>