Commit Graph

13 Commits

Author SHA1 Message Date
Nick Dimiduk 6cdbc46c02 Preparing hbase release 2.5.0RC1; tagging and updates to CHANGES.md and RELEASENOTES.md
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
2022-08-23 14:03:59 +00:00
Andrew Purtell b7158a87ea Preparing development version 2.5.1-SNAPSHOT
Signed-off-by: Andrew Purtell <apurtell@apache.org>
2022-05-31 20:06:32 -07:00
Andrew Purtell 2da2dd917f Preparing hbase release 2.5.0RC0; tagging and updates to CHANGES.md and RELEASENOTES.md
Signed-off-by: Andrew Purtell <apurtell@apache.org>
2022-05-31 20:06:29 -07:00
Andrew Purtell 2e4d9a2adf HBASE-27019 Minor compression performance improvements (#4420)
The TRACE level log statements are expensive enough to warrant removal.
They were useful during development but are now just overhead.

We also unnecessarily create new compressor and decompressor instances
in the reset() methods of the Aircompressor and Lz4 codecs. Remove that
as well; a sketch of the reuse pattern follows this entry.

Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Xiaolin Ha <haxiaolin@apache.org>
2022-05-13 18:31:01 -07:00
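
To illustrate the reset() point above: a minimal sketch of the reuse
pattern in plain Java, using the JDK's Deflater as a stand-in for the
Aircompressor/Lz4 internals. The class and method names are hypothetical
illustrations, not the actual HBase codec code; the point is only that
reset() clears state on the existing instance rather than allocating a
new one.

  import java.io.ByteArrayOutputStream;
  import java.util.zip.Deflater;

  public class ReusableBlockCompressor {

    // Created once and reused across blocks, not re-created in reset().
    private final Deflater delegate = new Deflater(Deflater.BEST_SPEED);

    public byte[] compress(byte[] block) {
      delegate.setInput(block);
      delegate.finish();
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      while (!delegate.finished()) {
        out.write(buf, 0, delegate.deflate(buf));
      }
      return out.toByteArray();
    }

    // Prepare for the next block without allocating a new compressor.
    public void reset() {
      delegate.reset();
    }

    public void close() {
      delegate.end();
    }

    public static void main(String[] args) {
      ReusableBlockCompressor c = new ReusableBlockCompressor();
      System.out.println(c.compress(new byte[1024]).length);
      c.reset(); // reuse the same underlying compressor for the next block
      System.out.println(c.compress(new byte[1024]).length);
      c.close();
    }
  }
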
Duo Zhang 1aea663c6d HBASE-26899 Run spotless:apply 2022-05-01 22:52:40 +08:00
Andrew Purtell d90522163f HBASE-26959 Brotli compression support (#4353)
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
2022-04-22 16:55:23 -07:00
Duo Zhang 5844b53dea HBASE-26802 Backport the log4j2 changes to branch-2 (#4166)
Signed-off-by: Andrew Purtell <apurtell@apache.org>

Conflicts:
	hbase-hadoop-compat/pom.xml
	hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
	hbase-shaded/hbase-shaded-client-byo-hadoop/pom.xml
	hbase-shaded/hbase-shaded-client/pom.xml
	hbase-shaded/hbase-shaded-mapreduce/pom.xml
	hbase-shaded/hbase-shaded-testing-util/pom.xml
	hbase-shaded/pom.xml
	hbase-testing-util/pom.xml
2022-03-11 11:38:37 -08:00
Duo Zhang 71ddf74dda HBASE-26691 Replacing log4j with reload4j for branch-2.x (#4050)
Signed-off-by: Andrew Purtell <apurtell@apache.org>
2022-03-04 12:08:36 -08:00
Andrew Purtell f2c58fcf68 HBASE-26353 Support loadable dictionaries in hbase-compression-zstd (#3787)
ZStandard supports initialization of compressors and decompressors with a
precomputed dictionary, which can dramatically improve compression ratio
and speed for tables with small values (a usage sketch follows this
entry). For more details, please see

  The Case For Small Data Compression
  https://github.com/facebook/zstd#the-case-for-small-data-compression

Signed-off-by: Duo Zhang <zhangduo@apache.org>

Conflicts:
	hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
2021-10-27 07:57:16 -07:00
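
A minimal usage sketch for the dictionary feature described above,
assuming the zstd-jni ZstdCompressCtx / ZstdDecompressCtx API (the
library backing hbase-compression-zstd). The loadTrainedDictionary()
helper is a hypothetical placeholder for a dictionary trained on sample
table values; it is not part of this change.

  import com.github.luben.zstd.ZstdCompressCtx;
  import com.github.luben.zstd.ZstdDecompressCtx;
  import java.nio.charset.StandardCharsets;

  public class ZstdDictionaryExample {
    public static void main(String[] args) {
      byte[] dictionary = loadTrainedDictionary();   // hypothetical placeholder
      byte[] value = "row-00000042:col:payload".getBytes(StandardCharsets.UTF_8);

      try (ZstdCompressCtx cctx = new ZstdCompressCtx()) {
        cctx.setLevel(3);             // compression level, tunable per use case
        cctx.loadDict(dictionary);    // the precomputed dictionary
        byte[] compressed = cctx.compress(value);

        try (ZstdDecompressCtx dctx = new ZstdDecompressCtx()) {
          dctx.loadDict(dictionary);  // the same dictionary is needed to decode
          byte[] restored = dctx.decompress(compressed, value.length);
          System.out.println(new String(restored, StandardCharsets.UTF_8));
        }
      }
    }

    // Placeholder: stands in for a dictionary produced by zstd training.
    private static byte[] loadTrainedDictionary() {
      return "row-00000000:col:payload".getBytes(StandardCharsets.UTF_8);
    }
  }
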
Andrew Purtell 644f820c6c Revert "HBASE-26353 Support loadable dictionaries in hbase-compression-zstd (#3748)"
This reverts commit 8ac0b5ed7f.

This is not ready yet. There are some code paths remaining where store
configuration (CompoundConfiguration) is not passed into the block decoding
context. Found with additional integration tests.
2021-10-21 18:41:59 -07:00
Andrew Purtell 8ac0b5ed7f HBASE-26353 Support loadable dictionaries in hbase-compression-zstd (#3748)
ZStandard supports initialization of compressors and decompressors with a
precomputed dictionary, which can dramatically improve compression ratio
and speed for tables with small values. For more details, please see

  The Case For Small Data Compression
  https://github.com/facebook/zstd#the-case-for-small-data-compression

Signed-off-by: Duo Zhang <zhangduo@apache.org>
2021-10-19 14:12:28 -07:00
Andrew Purtell b6bb18022f HBASE-26316 Per-table or per-CF compression codec setting overrides (#3730)
We get and retain Compressor instances in HFileBlockDefaultEncodingContext,
and could in theory call Compressor#reinit when setting up the context,
to update compression parameters like level and buffer size, but we do
not plumb through the CompoundConfiguration from the Store into the
encoding context. As a consequence we can only update codec parameters
globally in system site conf files.

Fine-grained configurability is important for algorithms like ZStandard
(ZSTD), which offers more than 20 compression levels: at level 1 it is
almost as fast as LZ4, while at higher levels it uses computationally
expensive techniques that rival LZMA in compression ratio but trade off
significantly reduced compression throughput. The ZSTD level that should
be set for a given column family or table will vary by use case; a
sketch of a per-CF setting follows this entry.

Signed-off-by: Viraj Jasani <vjasani@apache.org>

Conflicts:
	hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdDecompressor.java
	hbase-server/src/test/java/org/apache/hadoop/hbase/io/compress/HFileTestBase.java
	hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
	hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
2021-10-19 13:59:04 -07:00
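
For context, a sketch of the kind of per-column-family override this
change enables, using the HBase 2.x descriptor builders. The
hbase.io.compress.zstd.level key is an assumption for illustration only
and is not confirmed by this commit message.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.io.compress.Compression;
  import org.apache.hadoop.hbase.util.Bytes;

  public class PerFamilyCompressionExample {
    public static TableDescriptor build() {
      ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("d"))
          .setCompressionType(Compression.Algorithm.ZSTD)
          // Per-CF codec parameter; key name assumed for illustration.
          .setConfiguration("hbase.io.compress.zstd.level", "6")
          .build();
      return TableDescriptorBuilder
          .newBuilder(TableName.valueOf("example"))
          .setColumnFamily(cf)
          .build();
    }

    public static void main(String[] args) {
      System.out.println(build().getTableName());
    }
  }
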
Andrew Purtell 18b9fa8a3c HBASE-26259 Fallback support to pure Java compression (#3691)
This change introduces provided compression codecs to HBase as new
Maven modules. Each module provides compression codec support that
formerly required the Hadoop native codecs, which in turn rely on
native code integration that may or may not be available on a given
hardware platform or in an operational environment. We now provide
codecs in the HBase distribution for users who, for whatever reason,
cannot or do not wish to deploy the Hadoop native codecs. A
configuration sketch follows this entry.

Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>

Conflicts:
	hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
	hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
	hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestCompressedWAL.java
2021-10-06 13:48:33 -07:00
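
A sketch of how a deployment might point an algorithm at one of the
provided pure Java implementations. The hbase.io.compress.zstd.codec key
is an assumption following the module layout above, and the
implementation class is the ZstdCodec referenced earlier in this log;
both should be checked against the shipped code.

  import org.apache.hadoop.conf.Configuration;

  public class PureJavaCodecConfigExample {
    public static Configuration configure(Configuration conf) {
      // Key name assumed for illustration; the implementation class is
      // provided by the hbase-compression-zstd module.
      conf.set("hbase.io.compress.zstd.codec",
          "org.apache.hadoop.hbase.io.compress.zstd.ZstdCodec");
      return conf;
    }
  }
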