Created OnheapDecodedCell and OffheapDecodedExtendedCell objects with a duplicate copy of the ByteBuffer's underlying array instead of a reference to the original ByteBuffer
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
(cherry picked from commit c198f23e5e)
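For illustration, a minimal sketch of the copying approach described above (variable
and method names are hypothetical): the cell's bytes are copied out of the source
ByteBuffer rather than retaining a reference to a buffer that may be pooled and recycled.

    import java.nio.ByteBuffer;

    public class CellByteCopy {
      // Copy the value bytes out of the source buffer rather than keeping a
      // reference to it; a pooled buffer may be reused after the read completes.
      static byte[] copyValue(ByteBuffer src, int offset, int length) {
        byte[] copy = new byte[length];
        // Work on a duplicate so the source buffer's position/limit are untouched.
        ByteBuffer dup = src.duplicate();
        dup.position(offset);
        dup.get(copy, 0, length);
        return copy;
      }
    }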
- introduce an optional flag, `hfile.pread.all.bytes.enabled`, for preads that must read the full block bytes along with the next block header
Signed-off-by: Josh Elser <elserj@apache.org>
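For illustration, the flag is an ordinary configuration property and can be enabled
like any other (a sketch using the standard HBase/Hadoop Configuration API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class PreadAllBytesExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Opt in: preads fetch the full block bytes plus the next block header.
        conf.setBoolean("hfile.pread.all.bytes.enabled", true);
      }
    }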
Avoid the pattern where a Random object is allocated, used once or twice, and
then left for GC. This pattern triggers warnings from some static analysis tools
because it leads to poor effective randomness. In a few cases we were
legitimately suffering from this issue; in others a change is still worthwhile to
reduce noise in analysis results.
Use ThreadLocalRandom, which provides good instance reuse, where there is no
requirement to set the seed.
Where useful, relax the use of SecureRandom to plain Random or ThreadLocalRandom,
which are unlikely to block if the system entropy pool is low, when we don't need
cryptographically strong randomness for the use case. The exception to this is
the normalization of Bytes#random to fill byte arrays with randomness: because
Bytes#random may be used to generate key material, it must be backed by
SecureRandom.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
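A sketch contrasting the patterns described above (helper names are hypothetical):

    import java.security.SecureRandom;
    import java.util.concurrent.ThreadLocalRandom;

    public class RandomnessPatterns {
      // Anti-pattern: a throwaway `new Random()` allocated per call.
      // Preferred where no fixed seed is needed: ThreadLocalRandom reuses a
      // per-thread instance and avoids both allocation and contention.
      static int jitter() {
        return ThreadLocalRandom.current().nextInt(100);
      }

      // Key material must come from a cryptographically strong source,
      // mirroring what backing Bytes#random with SecureRandom guarantees.
      private static final SecureRandom RNG = new SecureRandom();

      static byte[] newKey(int len) {
        byte[] key = new byte[len];
        RNG.nextBytes(key);
        return key;
      }
    }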
For the `TraceUtil` methods that accept `Callable` and `Runnable` types, make them generic over a
subclass of `Throwable`. This allows us to consolidate the two method signatures into a single,
more flexible definition.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
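A sketch of the consolidated shape, assuming a functional interface along these lines
(names are illustrative, not the exact TraceUtil API):

    @FunctionalInterface
    interface ThrowingRunnable<T extends Throwable> {
      void run() throws T;
    }

    final class Tracing {
      // One generic method replaces separate Runnable and Callable overloads;
      // the caller's checked exception type flows through unchanged.
      static <T extends Throwable> void trace(String spanName, ThrowingRunnable<T> body) throws T {
        // span start/stop elided
        body.run();
      }
    }

Because the exception type parameter flows through, a body that throws IOException forces
the caller to handle IOException, while a non-throwing body needs no try/catch at all.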
For batch operations, collect and annotate the associated span with the set of all operations
contained in the batch.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
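For illustration, with the OpenTelemetry Java API this amounts to attaching a
multi-valued attribute to the batch span; the attribute name below is an assumption:

    import java.util.List;

    import io.opentelemetry.api.common.AttributeKey;
    import io.opentelemetry.api.trace.Span;

    public class BatchSpanExample {
      static void annotateBatch(Span span, List<String> operations) {
        // Assumed attribute name; records every operation type in the batch.
        span.setAttribute(
          AttributeKey.stringArrayKey("db.hbase.container_operations"), operations);
      }
    }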
Add support for `db.system`, `db.connection_string`, `db.user`.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Huaxiang Sun <huaxiangsun@apache.org>
Co-authored-by: Josh Elser <josh.elser@gmail.com>
Follows the guidance outlined in https://github.com/open-telemetry/opentelemetry-specification/blob/3e380e2/specification/trace/semantic_conventions/database.md
* all table data operations are assumed to be of type CLIENT
* populate `db.name` and `db.operation` attributes
* name table data operation spans as `db.operation` `db.name`:`db.hbase.table`
Note: this implementation deviates from the recommended `db.name`.`db.sql.table` form and instead
uses HBase's native String representation, namespace:tablename.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Tak Lon (Stephen) Wu <taklwu@apache.org>
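A sketch of what such a span looks like when built with the OpenTelemetry Java API
(the namespace, table, and operation values are illustrative):

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.SpanKind;
    import io.opentelemetry.api.trace.Tracer;

    public class DbSemanticConventions {
      public static void main(String[] args) {
        Tracer tracer = GlobalOpenTelemetry.getTracer("example");
        // Span named `db.operation` `db.name`:`db.hbase.table`, kind CLIENT.
        Span span = tracer.spanBuilder("GET default:test_table")
            .setSpanKind(SpanKind.CLIENT)
            .setAttribute("db.system", "hbase")
            .setAttribute("db.name", "default")
            .setAttribute("db.operation", "GET")
            .startSpan();
        span.end();
      }
    }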
ZStandard supports initialization of compressors and decompressors with a
precomputed dictionary, which can dramatically improve both the compression
ratio and the speed of compression for tables with small values. For more
details, please see "The Case For Small Data Compression":
https://github.com/facebook/zstd#the-case-for-small-data-compression
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Conflicts:
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
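For illustration, wiring in a precomputed dictionary might look like the sketch below;
the property name and dictionary location are assumptions, not confirmed API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZstdDictionaryExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property name: point the codec at a dictionary trained
        // (e.g. with `zstd --train`) on sample values from the table.
        conf.set("hbase.io.compress.zstd.dictionary", "hdfs:///dicts/mytable.dict");
      }
    }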
This reverts commit 8ac0b5ed7f.
This is not ready yet. There are some code paths remaining where store
configuration (CompoundConfiguration) is not passed into the block decoding
context. Found with additional integration tests.
ZStandard supports initialization of compressors and decompressors with a
precomputed dictionary, which can dramatically improve both the compression
ratio and the speed of compression for tables with small values. For more
details, please see "The Case For Small Data Compression":
https://github.com/facebook/zstd#the-case-for-small-data-compression
Signed-off-by: Duo Zhang <zhangduo@apache.org>
We get and retain Compressor instances in HFileBlockDefaultEncodingContext,
and could in theory call Compressor#reinit when setting up the context
to update compression parameters like level and buffer size, but we do
not plumb the CompoundConfiguration from the Store through into the
encoding context. As a consequence, we can only update codec parameters
globally in system site conf files.
Fine-grained configurability is important for algorithms like ZStandard
(ZSTD), which offers more than 20 compression levels: at level 1 it is
almost as fast as LZ4, while at higher levels it uses computationally
expensive techniques to rival LZMA in compression ratio, trading off
significantly reduced compression throughput. The ZSTD level that should
be set for a given column family or table will vary by use case.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Conflicts:
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdDecompressor.java
hbase-server/src/test/java/org/apache/hadoop/hbase/io/compress/HFileTestBase.java
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
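Once the store configuration is plumbed through, per-column-family tuning could look
like the following sketch (the level property name follows the ZSTD codec's naming
convention but is an assumption here):

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PerFamilyZstdLevel {
      public static void main(String[] args) {
        // Override the codec level for this family only; other families and
        // tables keep the site-wide default.
        ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("cf"))
            .setCompressionType(Compression.Algorithm.ZSTD)
            .setConfiguration("hbase.io.compress.zstd.level", "6")
            .build();
        System.out.println(cf);
      }
    }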