This is a demonstration of visualizing regions on the cluster. The visualization is a stacked
bar chart showing total storefile size per table per region server: the x-axis is server
names, the y-axis is storefile size, and the bars are stacked per table. The visualization is
generated entirely on the fly within the browser, implemented using Vega-Lite. So far, Vega
appears to handle rendering this visualization for a cluster of over 700 region servers with
approximately 300,000 regions.
Per [0], include an update to the top-level LICENSE.txt. Also update LICENSE files in all binary
distributions (i.e., jars), by way of LICENSE.vm. Vega uses a BSD 3-clause variant without
advertising clause, and as such is a "Category A" license, per [1].
No changes are made to the NOTICE files, as per the existing example of bundling minified
jQuery, which is also under a Category A license.
[0]: https://infra.apache.org/licensing-howto.html
[1]: https://www.apache.org/legal/resolved.html#category-a
Signed-off-by: Andrew Purtell <apurtell@apache.org>
The upgrade is to get the fix in MENFORCER-336, making beanshell evaluation safe for use with `mvn
-T`. Also upgrade extra-enforcer-rules to 1.5.1, as per experience with HBASE-26664.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
This change introduces provided compression codecs to HBase as
new Maven modules. Each module provides compression codec support
that formerly required the Hadoop native codecs, which in turn rely
on native code integration that may or may not be available on
a given hardware platform or in an operational environment. We
now provide codecs in the HBase distribution for users who, for
whatever reason, cannot or do not wish to deploy the Hadoop native
codecs.
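As a minimal sketch of how a user might pick up one of these codecs, assuming one of the new
hbase-compression modules (and any library it bundles) is on the region server classpath; the
table and column family names below are made up for illustration:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateZstdTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Request ZSTD compression on the column family. With a provided codec
      // module on the classpath this no longer depends on the Hadoop native
      // codecs being installed on the host.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
          .setCompressionType(Compression.Algorithm.ZSTD)
          .build())
        .build());
    }
  }
}
```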
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
* HBASE-25824 IntegrationTestLoadCommonCrawl
This integration test loads successful resource retrieval records from
the Common Crawl (https://commoncrawl.org/) public dataset into an HBase
table and writes verification records that can later be used to check
the presence and integrity of the loaded data.
Run like:
./bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Access to the Common Crawl dataset in S3 is made available to anyone by
Amazon AWS, but Hadoop's S3N filesystem still requires valid access
credentials to initialize.
The input path can either specify a directory or a file. The file may
optionally be compressed with gzip. If a directory, the loader expects
the directory to contain one or more WARC files from the Common Crawl
dataset. If a file, the loader expects a list of Hadoop S3N URIs which
point to S3 locations for one or more WARC files from the Common Crawl
dataset, one URI per line. Lines should be terminated with the UNIX line
terminator.
Included in hbase-it/src/test/resources/CC-MAIN-2021-10-warc.paths.gz
is a list of all WARC files comprising the Q1 2021 crawl archive. There
are 64,000 WARC files in this data set, each containing ~1GB of gzipped
data. The WARC files contain several record types, such as metadata,
request, and response, but we only load the response record types. If
the HBase table schema does not specify compression (the default), there
is roughly a 10x expansion. Loading the full crawl archive results in a
table approximately 640 TB in size.
The hadoop-aws jar will be needed at runtime to instantiate the S3N
filesystem. Use the -files ToolRunner argument to add it.
You can also split the Loader and Verify stages:
Load with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Loader' \
-files /path/to/hadoop-aws.jar \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Verify with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Verify' \
/path/to/tmp/warc-loader-output
Signed-off-by: Michael Stack <stack@apache.org>
- upgrade our default jruby to 9.2.13.0
- this major JRuby version update changes the Ruby compatibility from Ruby 2.3 to Ruby 2.5
- use a custom IRB prompt to convey similar information to before
- update the joni and jcoding dependencies to match this version of jruby-complete
Closes #2308
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
Some javadoc invocations require that any classes referenced by the annotations
we use can themselves be resolved. This includes the annotations _they_ carry,
even though annotations are normally optional.
In some cases this showed up as javax.annotation.meta.TypeQualifierNickname
not found, because some findbugs annotations use it. Other times it was
javax.annotation.concurrent.Immutable not found, because some old guava
versions use it.
(updated for the master branch by doing the configuration in the reporting config instead of the plugin config)
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Addendum: Add jersey-servlet to hadoop3 profile.
Made the hadoop3 profile in the top-level pom the same as the hadoop2 profile
when it comes to exclusions. Then mostly backed out the previous attempt.
Made the failing test medium-sized so it runs in its own JVM.
Upgrade thrift from 0.12 to 0.13.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Lower the jdk8 forked jvm heap from 2800 to 2200 and the jdk11 heap from
3200 to 2200. Lower the mvn size from 4G to 3.6G.
Change how many puts TestMultiRespectsLimits does because the previous
count ran the forked heap over 2.5G in size.
Signed-off-by: Sean Busbey <busbey@apache.org>
Pass --threads=2 to mvn when yetus runs so we get some parallelism
when dependencies allow.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Minor tweaks required to get passing runs of `-PrunLargeTests`.
* Minimum Hadoop version is 3.2.0 due to
[HADOOP-12760](https://issues.apache.org/jira/browse/HADOOP-12760).
* JDK11 looks like it consumes more memory than JDK8, so failures due
to OOME seem more common here. Bumping the heap allocated to surefire
forks gives a better pass rate.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Does what it says on the tin. Bound to the `initialize` phase so that it
runs early in the lifecycle. Uses `<inherited>false</inherited>` so that
the plugin runs only for the base pom's reactor stage and not for
any children.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Add the ability to configure netty thread counts. Enable socket reuse
(should not have any impact). An illustrative sketch of these two knobs
follows the file-by-file notes below.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
Rename the threads we create in here so they are NOT named the same as
threads created by Hadoop RPC.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/DefaultNettyEventLoopConfig.java
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcClient.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
Allow configuring the eventloopgroup thread count (so it can be
overridden for tests)
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/HttpProxyExample.java
Enable socket reuse.
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
Enable socket reuse and add config for how many threads to use.
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
Thread name edit; drop the redundant 'Thread' suffix.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
Make it closeable and shut down the executor when close is called.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
Call close on HFileReplicator
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
HDFS creates lots of threads. Use it less so there are fewer threads overall.
hbase-server/src/test/resources/hbase-site.xml
hbase-server/src/test/resources/hdfs-site.xml
Constrain resources when running in test context.
hbase-server/src/test/resources/log4j.properties
Enable debug on netty so we can see netty configs in our log.
pom.xml
Add system properties when we launch JVMs to constrain thread counts in
tests
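For illustration only, and not the code from this patch: in raw Netty terms the two knobs
described above amount to sizing the event loop group from configuration and setting
SO_REUSEADDR on the server bootstrap. The property name used here is hypothetical.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.concurrent.DefaultThreadFactory;
import org.apache.hadoop.conf.Configuration;

public class NettyRpcServerSketch {
  ServerBootstrap configure(Configuration conf) {
    // Hypothetical property name; the real key is whatever the patch introduces.
    // 0 falls through to netty's default sizing (2 * available processors).
    int workers = conf.getInt("hbase.netty.worker.count", 0);
    NioEventLoopGroup eventLoopGroup =
      new NioEventLoopGroup(workers, new DefaultThreadFactory("RpcServer-Netty", true));
    return new ServerBootstrap()
      .group(eventLoopGroup)
      .channel(NioServerSocketChannel.class)
      // Allow fast rebinding of the listen port, e.g. across quick restarts in tests.
      .option(ChannelOption.SO_REUSEADDR, true);
  }
}
```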
Signed-off-by: Duo Zhang <zhangduo@apache.org>