This change introduces provided compression codecs to HBase as
new Maven modules. Each module provides compression codec support
that formerly required the Hadoop native codecs, which in turn rely
on native code integration, which may or may not be available on
a given hardware platform or in an operational environment. We
now provide codecs in the HBase distribution for users who, for
whatever reason, cannot or do not wish to deploy the Hadoop native
codecs.
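A minimal sketch of how a downstream build might use one of the new
modules; the artifactId below is an assumption chosen for illustration,
not a guaranteed module name:

<!-- Hypothetical: depend on a provided codec module so the codec works
     without the Hadoop native libraries. artifactId is assumed. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-compression-aircompressor</artifactId>
  <version>${hbase.version}</version>
</dependency>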
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
* HBASE-25824 IntegrationTestLoadCommonCrawl
This integration test loads successful resource retrieval records from
the Common Crawl (https://commoncrawl.org/) public dataset into an HBase
table and writes records that can be used to later verify the presence
and integrity of those records.
Run like:
./bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Access to the Common Crawl dataset in S3 is made available to anyone by
Amazon AWS, but Hadoop's S3N filesystem still requires valid access
credentials to initialize.
The input path can specify either a directory or a file. The file may
optionally be compressed with gzip. If a directory, the loader expects
the directory to contain one or more WARC files from the Common Crawl
dataset. If a file, the loader expects a list of Hadoop S3N URIs
pointing to S3 locations of one or more WARC files from the Common
Crawl dataset, one URI per line. Lines should be terminated with the
UNIX line terminator.
Included in hbase-it/src/test/resources/CC-MAIN-2021-10-warc.paths.gz
is a list of all WARC files comprising the Q1 2021 crawl archive. There
are 64,000 WARC files in this data set, each containing ~1GB of gzipped
data. The WARC files contain several record types, such as metadata,
request, and response, but we load only the response record types. If
the HBase table schema does not specify compression (the default),
there is roughly a 10x expansion, so loading the full crawl archive
results in a table approximately 640 TB in size.
The hadoop-aws jar will be needed at runtime to instantiate the S3N
filesystem. Use the -files ToolRunner argument to add it.
You can also split the Loader and Verify stages:
Load with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Loader' \
-files /path/to/hadoop-aws.jar \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Verify with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Verify' \
/path/to/tmp/warc-loader-output
Signed-off-by: Michael Stack <stack@apache.org>
- upgrade our default jruby to 9.2.13.0
- this major JRuby version update changes the Ruby compatibility from Ruby 2.3 to Ruby 2.5
- use a custom IRB prompt to convey similar information to before
- update the joni and jcoding dependencies to match this version of jruby-complete
closes #2308
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
Some javadoc invocations require that the annotations we reference can
have any classes they themselves reference resolved. This includes
annotations _they_ have, even though annotations are normally optional.
In some cases this showed up as javax.annotation.meta.TypeQualifierNickname
not found, because some findbugs annotations use it. Other times it was
javax.annotation.concurrent.Immutable not found, because some old guava
versions use it.
(updated for the master branch by doing the config in the reporting section instead of the plugin section)
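For context, a minimal sketch of the kind of reporting-section change
this implies: hand the javadoc plugin an extra dependency such as
jsr305, which contains both javax.annotation.meta.TypeQualifierNickname
and javax.annotation.concurrent.Immutable. Coordinates and version here
are illustrative assumptions, not necessarily the exact change made:

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <configuration>
        <!-- Assumption: jsr305 supplies the javax.annotation.* classes
             that the referenced annotations need at javadoc time. -->
        <additionalDependencies>
          <additionalDependency>
            <groupId>com.google.code.findbugs</groupId>
            <artifactId>jsr305</artifactId>
            <version>3.0.2</version>
          </additionalDependency>
        </additionalDependencies>
      </configuration>
    </plugin>
  </plugins>
</reporting>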
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Addendum: Add jersey-servlet to hadoop3 profile.
Made the hadoop3 profile in the top-level pom match the hadoop2
profile's exclusions, then mostly backed out the previous attempt.
Made the failing test medium-sized so it runs in its own JVM.
Upgrade thrift from 0.12 to 0.13.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Down the jdk8 forked jvm heap from 2800 to 2200 and the jdk11 heap from
3200 to 2200. Down the mvn size from 4G to 3.6G.
Change how many puts are done by TestMultiRespectsLimits because they
made the test's forked heap grow over 2.5G in size.
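For reference, the forked heap comes down to the surefire argLine; a
minimal sketch (the property plumbing in the real pom differs, and
2200m is simply the value named above):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Heap for forked test JVMs; 2200m per the description above. -->
    <argLine>-Xmx2200m</argLine>
  </configuration>
</plugin>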
Signed-off-by: Sean Busbey <busbey@apache.org>
Pass --threads=2 to mvn when yetus runs so there is some parallelism
when dependencies allow.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Minor tweaks required to get passing runs of `-PrunLargeTests`.
* Minimum Hadoop version is 3.2.0 due to
[HADOOP-12760](https://issues.apache.org/jira/browse/HADOOP-12760).
* JDK11 looks like it consumes more memory than JDK8, so failures due
  to OOME seem more common here. Bumping the heap allocated to surefire
  forks allows a better pass rate.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Does what it says on the tin. Bound to the `initialize` phase so that
it runs early in the lifecycle. Uses `<inherited>false</inherited>` so
that the plugin will run only for the base pom's reactor stage and not
for any children.
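A minimal sketch of the shape such a binding takes in the pom;
maven-antrun-plugin and its run goal are stand-ins here, not the plugin
this change actually configures:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <!-- Run only for the base pom's reactor stage, never in children. -->
  <inherited>false</inherited>
  <executions>
    <execution>
      <!-- Bound early in the lifecycle. -->
      <phase>initialize</phase>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>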
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Allow configuring netty thread counts. Enable socket reuse (should not
have any impact).
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
Rename the threads we create in here so they are NOT named the same as
threads created by Hadoop RPC.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/DefaultNettyEventLoopConfig.java
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcClient.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
Allow configuring the eventloopgroup thread count (so it can be
overridden for tests).
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/HttpProxyExample.java
Enable socket reuse.
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
Enable socket reuse and config for how many threads to use.
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
Thread name edit; drop the redundant 'Thread' suffix.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
Make closeable and shut down the executor when called.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
Call close on HFileReplicator
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
HDFS creates lots of threads. Use less of it so there are fewer threads overall.
hbase-server/src/test/resources/hbase-site.xml
hbase-server/src/test/resources/hdfs-site.xml
Constrain resources when running in test context.
hbase-server/src/test/resources/log4j.properties
Enable debug on netty to see netty configs in our log
pom.xml
Add system properties when we launch JVMs to constrain thread counts
in tests (see the sketch after this file list).
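A minimal sketch of such a system-property constraint on forked test
JVMs; io.netty.eventLoopThreads is a standard Netty property, but the
value and the exact set of properties passed are assumptions here:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- Cap Netty's default event loop size in forked test JVMs
           (value is illustrative). -->
      <io.netty.eventLoopThreads>3</io.netty.eventLoopThreads>
    </systemPropertyVariables>
  </configuration>
</plugin>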
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Set the fork count for the first and second parts to 0.5C. Add a bit
of doc on this too, as well as some qualification of our test
categories.
Also adds -T0.5C to MAVEN_ARGS in the hbase personality.
This is causing me issues with parallel test runs, so also allow
setting the surefire reports and temp directories via the command line.
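Roughly, the knobs described above look like the following; the
property names are illustrative assumptions, the point being that the
0.5C fork counts and the report/temp locations can all be overridden
with -D on the command line (e.g. -Dsurefire.reportsDirectory=/other/dir):

<properties>
  <!-- Illustrative names; the real names in the pom may differ. -->
  <surefire.firstPartForkCount>0.5C</surefire.firstPartForkCount>
  <surefire.secondPartForkCount>0.5C</surefire.secondPartForkCount>
  <surefire.reportsDirectory>${project.build.directory}/surefire-reports</surefire.reportsDirectory>
  <surefire.tempDir>surefire</surefire.tempDir>
</properties>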
Signed-off-by: stack <stack@apache.org>
Bump checkstyle version to 8.28, maven-checkstyle-plugin to 3.1.0.
As per HBASE-23242 and the updated checkstyle docs[1], the LineLength
check should be placed under an instance of Checker.
[1] https://checkstyle.sourceforge.io/config_sizes.html#LineLength
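Concretely, that means LineLength sits as a direct child of Checker
rather than under TreeWalker; a minimal checkstyle.xml sketch (the max
value shown is an assumption, not necessarily HBase's limit):

<module name="Checker">
  <!-- Per the updated docs, LineLength is a direct child of Checker. -->
  <module name="LineLength">
    <property name="max" value="100"/>
  </module>
  <module name="TreeWalker">
    <!-- remaining checks stay under TreeWalker -->
  </module>
</module>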
Co-authored-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Jan Hentschel <janh@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Adds a display of the content of 'hbase:meta' to the Master's
table.jsp, when that table is selected. Supports basic pagination,
filtering, &c.
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
- switch to the nexus-staging-maven-plugin for asf-release (see the
  sketch after this list)
- refactor release-build to use mvn deploy and its output
- clean up some tabs in the root pom
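A minimal sketch of what the asf-release profile switch amounts to;
the plugin version and configuration values are assumptions shown for
illustration:

<profile>
  <id>asf-release</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.sonatype.plugins</groupId>
        <artifactId>nexus-staging-maven-plugin</artifactId>
        <version>1.6.8</version>
        <extensions>true</extensions>
        <configuration>
          <!-- Illustrative values for ASF staging. -->
          <serverId>apache.releases.https</serverId>
          <nexusUrl>https://repository.apache.org/</nexusUrl>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>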
Signed-off-by: stack <stack@apache.org>
HBASE-15666 shaded dependencies for hbase-testing-util
Added a new artifact, hbase-shaded-testing-util. It wraps the whole of
hbase-server with its testing dependencies. Users should need only the
following dependency in their pom:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-testing-util</artifactId>
  <version>${hbase.version}</version>
  <scope>test</scope>
</dependency>
Added hbase-shaded-testing-util-tester maven module which ensures
that hbase-shaded-testing-util works with a shaded client.
Signed-off-by: Josh Elser <elserj@apache.org>
These scripts came originally from Spark [1]. They were then
modified to suit the hbase context. Supersedes the old
../make_rc.sh script because what is here is more comprehensive,
doing more steps of the RM process as well as running in a
container so the RM build environment can be a constant.
It:
* Tags release
* Updates RELEASENOTES.md and CHANGES.md.
* Sets version to the release version
* Sets version to next SNAPSHOT version.
* Builds, signs, and hashes all artifacts.
* Generates the API report.
* Pushes release tgzs to the dev dir in the Apache dist area.
* Pushes to repository.apache.org staging.
* Generates a vote email with filled-in fields.
The entry point is the do-release-docker.sh script. Pass -h to
see available options. For example, running the command below will do
all the steps above using the 'rm' dir under Downloads as the workspace:
$ ./do-release-docker.sh -d ~/Downloads/rm
1. https://github.com/apache/spark/tree/master/dev/create-release
Signed-off-by: Peter Somogyi <psomogyi@cloudera.com>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
REST server throws NoClassDefFoundError: javax/annotation/Priority after building with JDK8 and running on JDK8
Signed-off-by: Sean Busbey <busbey@apache.org>
This is a reapply of a reverted commit. This commit includes the
HBASE-22059 amendment and subsequent amendments to HBASE-22052.
See HBASE-22052 for full story.
jersey-core is problematic. It was transitively included from hadoop
and polluted our CLASSPATH with an implementation of a 1.x version
of the javax.ws.rs.core.Response interface from jsr311-api when we
want the javax.ws.rs-api 2.x version.
M hbase-endpoint/pom.xml
M hbase-http/pom.xml
M hbase-mapreduce/pom.xml
M hbase-rest/pom.xml
M hbase-server/pom.xml
M hbase-zookeeper/pom.xml
Remove the redundant version specification (and the odd property
definition already done up in the parent pom).
M hbase-it/pom.xml
M hbase-rest/pom.xml
Exclude jersey-core explicitly.
M hbase-procedure/pom.xml
Remove redundant version and classifier.
M pom.xml
Add jersey-core exclusions to all dependencies that pull it in
except hadoop-minicluster. MR tests fail without jersey-core,
so let it in for the minicluster and then, in the modules, exclude it
where it causes damage, as in hbase-it (see the sketch after this
file list).
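The exclusions themselves follow the usual Maven pattern; a minimal
sketch against hadoop-common (which dependency carries the exclusion
varies by module, and the managed version is omitted):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- Keep the jsr311-api 1.x Response implementation off our
         CLASSPATH; we want the javax.ws.rs-api 2.x version. -->
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>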
* Upgrades to error prone 2.3.3
* Moves to the error prone plugin to support JDK 9+ (see the sketch
  after this list)
* Removes the custom error prone plugin since it had no usage
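For reference, the standard way to run Error Prone as a javac plugin
on JDK 9+; this is a sketch of the general recipe, not necessarily the
exact configuration in this change:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <!-- Run Error Prone inside javac; works on JDK 9 and later. -->
      <arg>-XDcompilePolicy=simple</arg>
      <arg>-Xplugin:ErrorProne</arg>
    </compilerArgs>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.errorprone</groupId>
        <artifactId>error_prone_core</artifactId>
        <version>2.3.3</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>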
Signed-off-by: zhangduo <zhangduo@apache.org>