Exposes process-specific tracing arguments for most of the commands that we can launch from
`bin/hbase`. In addition to serving as an on/off flag, `HBASE_TRACE_OPTS` acts as the site-wide
configuration setting. Additional variables are provided for each applicable command, giving
operators fine-grained control over which processes participate in the tracing system, and to what
degree.
Signed-off-by: Tak Lon (Stephen) Wu <taklwu@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
ZStandard supports initialization of compressors and decompressors with a
precomputed dictionary, which can dramatically improve the compression ratio and speed for
tables with small values. For more details, please see "The Case For Small Data Compression":
https://github.com/facebook/zstd#the-case-for-small-data-compression
Signed-off-by: Duo Zhang <zhangduo@apache.org>
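As a minimal sketch of how a table might opt in to a precomputed dictionary, assuming the
ZSTD codec reads a dictionary location from a column-family-scoped setting (the key name and
HDFS path below are assumptions for illustration, not confirmed by this change):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class ZstdDictionaryTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Column family compressed with ZSTD; the dictionary key and path are
      // illustrative assumptions -- consult the codec documentation for the
      // exact setting and for how the dictionary file is produced.
      ColumnFamilyDescriptorBuilder cf = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("small_values"))
          .setCompressionType(Compression.Algorithm.ZSTD)
          .setConfiguration("hbase.io.compress.zstd.dictionary",
              "hdfs:///dictionaries/small_values.dict");
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("dict_demo"))
          .setColumnFamily(cf.build())
          .build());
    }
  }
}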
This reverts commit bfa4584125.
This is not ready yet. There are some code paths remaining where store
configuration (CompoundConfiguration) is not passed into the block decoding
context. Found with additional integration tests.
A regression was introduced by HBASE-25847, which changed regionInfo#isParentSplit to
regionState#isSplit. After a restart the region state is CLOSED instead of SPLIT, so we
need to check both regionState and regionInfo for split status.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
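A minimal sketch of the combined check described above, assuming a helper that has both the
RegionState and the RegionInfo in hand (the helper itself is hypothetical; the actual fix
lives in the affected chore code):

import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.master.RegionState;

final class SplitStatus {
  private SplitStatus() {
  }

  /**
   * Treat a region as split if either the in-memory region state or the
   * RegionInfo says so: after a restart the state can be CLOSED even though
   * the RegionInfo is still marked as split.
   */
  static boolean isSplit(RegionState regionState, RegionInfo regionInfo) {
    return (regionState != null && regionState.isSplit())
        || (regionInfo != null && regionInfo.isSplit());
  }
}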
Use a hybrid logical clock for timestamping entries.
Using BufferedMutator without an HLC was problematic because we assign client-side
timestamps, and the store loop is fast enough that, on rare occasions, two temporally
adjacent URLs in the set of WARCs are equivalent while the timestamp does not advance,
leading later to a rare false positive CORRUPT finding.
While making these changes, support direct S3N paths as input paths on the command line.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
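A minimal sketch of the clock idea, as a simplified variant that folds the logical component
into the timestamp itself (class and method names are illustrative, not the ones used by the
test):

/**
 * Timestamps track wall-clock time but always advance, so two back-to-back
 * writes can never share a timestamp even when the physical clock stalls.
 */
final class HybridLogicalClock {
  private long lastTimestamp = 0L;

  synchronized long currentTime() {
    long physical = System.currentTimeMillis();
    // If the wall clock has not moved (or moved backwards), borrow from the
    // logical side by bumping the last issued timestamp by one.
    lastTimestamp = Math.max(physical, lastTimestamp + 1);
    return lastTimestamp;
  }
}

Each Put in the store loop would take its timestamp from such a clock rather than from the raw
wall clock, so equivalent adjacent URLs still receive distinct, increasing timestamps.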
We get and retain Compressor instances in HFileBlockDefaultEncodingContext,
and could in theory call Compressor#reinit when setting up the context,
to update compression parameters like level and buffer size, but we do
not plumb through the CompoundConfiguration from the Store into the
encoding context. As a consequence we can only update codec parameters
globally in system site conf files.
Fine-grained configurability is important for algorithms like ZStandard
(ZSTD), which offers more than 20 compression levels: at level 1 it is
almost as fast as LZ4, while at higher levels it uses computationally
expensive techniques that rival LZMA in compression ratio at the cost of
significantly reduced compression throughput. The ZSTD level that should
be set for a given column family or table will vary by use case.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
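Once the store configuration is plumbed through, per-family tuning could look like the sketch
below, assuming the ZSTD codec honors a level key at column-family scope (the key name is an
assumption here, and the settings only take effect once the store's CompoundConfiguration
reaches the encoding context and Compressor#reinit):

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class PerFamilyZstdLevelSketch {
  public static void main(String[] args) {
    // Hot, latency-sensitive family: a fast, low compression level.
    ColumnFamilyDescriptor hot = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("hot"))
        .setCompressionType(Compression.Algorithm.ZSTD)
        .setConfiguration("hbase.io.compress.zstd.level", "1")   // assumed key
        .build();

    // Cold, archival family: trade CPU for a better compression ratio.
    ColumnFamilyDescriptor cold = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("cold"))
        .setCompressionType(Compression.Algorithm.ZSTD)
        .setConfiguration("hbase.io.compress.zstd.level", "19")  // assumed key
        .build();

    System.out.println(hot);
    System.out.println(cold);
  }
}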
This avoids starvation when the archive directory is large and takes a long time
to iterate through.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Anoop Sam John <anoopsamjohn@apache.org>
Signed-off-by: Pankaj <pankajkumar@apache.org>