* on `AsyncTable`, both `scan` and `scanAll` methods should result in `SCAN` table operations.
* the span of the `SCAN` table operation should have children representing all the RPC calls
involved in servicing the scan.
* when a user provides a custom implementation of `AdvancedScanResultConsumer`, any spans emitted
from the callback methods should also be tied to the span that represents the `SCAN` table
operation. This is easily done because these callbacks are executed on the RPC thread.
* when a user provides a custom implementation of `ScanResultConsumer`, any spans emitted from the
  callback methods should also be tied to the span that represents the `SCAN` table
  operation. This is accomplished by carefully passing the span instance around after it is
  created (see the sketch after this list).
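The pattern looks roughly like the following. This is a minimal sketch using only the
OpenTelemetry API, not the actual HBase client internals; the class and method names are
illustrative.

  import io.opentelemetry.api.GlobalOpenTelemetry;
  import io.opentelemetry.api.trace.Span;
  import io.opentelemetry.api.trace.Tracer;
  import io.opentelemetry.context.Scope;

  public class ScanSpanSketch {
    private static final Tracer TRACER = GlobalOpenTelemetry.getTracer("scan-span-sketch");

    // The span created when the SCAN table operation starts is handed to
    // whatever code invokes the user's ScanResultConsumer callbacks.
    void invokeCallback(Span scanSpan, Runnable userCallback) {
      // Re-activate the SCAN span before running user code so that any spans
      // the callback emits become children of the SCAN table operation.
      try (Scope ignored = scanSpan.makeCurrent()) {
        Span child = TRACER.spanBuilder("user-callback").startSpan();
        try {
          userCallback.run();
        } finally {
          child.end();
        }
      }
    }
  }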
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Bump httpclient from 4.5.3 to 4.5.13 to avoid a CVE of medium severity in this
dependency.
Newer httpclient versions enable a URI normalization algorithm by default that
rewrites URIs in a way that breaks some forms of valid REST gateway interactions,
so disable it when building the httpclient instance in Client.
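The opt-out looks roughly like the following. A hedged sketch against the httpclient 4.5 API;
the surrounding class is illustrative, not the actual hbase-rest Client code.

  import org.apache.http.client.config.RequestConfig;
  import org.apache.http.impl.client.CloseableHttpClient;
  import org.apache.http.impl.client.HttpClients;

  public class RestHttpClientSketch {
    CloseableHttpClient build() {
      // Opt out of the default URI normalization so REST gateway paths
      // (e.g. ones containing encoded characters) pass through intact.
      RequestConfig requestConfig = RequestConfig.custom()
          .setNormalizeUri(false)
          .build();
      return HttpClients.custom()
          .setDefaultRequestConfig(requestConfig)
          .build();
    }
  }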
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
This is no longer needed since we've transitioned to the shaded Jersey shipped in
hbase-thirdparty. Also drop the supplemental models entry.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
This is a demonstration of visualizing regions on the cluster. The visualization is a stacked
bar chart showing total storefile size per table per region server, with the x-axis being server
names, the y-axis being storefile size, and the bars stacked per table. The visualization is
generated entirely on the fly from within the browser, implemented using Vega-Lite. So far, Vega
appears to handle rendering this visualization for a cluster of over 700 region servers with
approximately 300,000 regions.
Per [0], include an update to the top-level LICENSE.txt. Also update LICENSE files in all binary
distributions (i.e., jars), by way of LICENSE.vm. Vega uses a BSD 3-clause variant without
advertising clause, and as such is a "Category A" license, per [1].
No changes are made to the NOTICE files, as per the existing example of bundling the minified
jQuery, which is also under a Category A license.
[0]: https://infra.apache.org/licensing-howto.html
[1]: https://www.apache.org/legal/resolved.html#category-a
Signed-off-by: Andrew Purtell <apurtell@apache.org>
The upgrade is to get the fix in MENFORCER-336, making beanshell evaluation safe for use with `mvn
-T`. Also upgrade extra-enforcer-rules to 1.5.1, as per experience with HBASE-26664.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
This change introduces provided compression codecs to HBase as
new Maven modules. Each module provides compression codec support
that formerly required the Hadoop native codecs, which in turn rely
on native code integration that may or may not be available on
a given hardware platform or in an operational environment. We
now ship codecs in the HBase distribution for users who, for
whatever reason, cannot or do not wish to deploy the Hadoop native
codecs.
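Selecting one of these codecs is then just a matter of table schema. A minimal sketch using the
standard client API; the family name and the choice of ZSTD here are illustrative.

  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.io.compress.Compression;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CodecSketch {
    // Declare a column family compressed with a provided codec; no Hadoop
    // native library needs to be present on the cluster.
    ColumnFamilyDescriptor compressedFamily() {
      return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
          .setCompressionType(Compression.Algorithm.ZSTD)
          .build();
    }
  }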
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Conflicts:
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestCompressedWAL.java
15/17 commits of HBASE-22120, original commit b714889989
Co-authored-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
12/17 commits of HBASE-22120, original commit 8399293e21
Co-authored-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
10/17 commits of HBASE-22120, original commit f6ff519dd0
Co-authored-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
3/17 commits of HBASE-22120, original commit 57960fa8fa7228d65b1a4adc8e9b5b1a8158824d
Co-authored-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
This integration test loads successful resource retrieval records from
the Common Crawl (https://commoncrawl.org/) public dataset into an HBase
table, and writes verification records that can be used later to verify
the presence and integrity of the loaded data.
Run like:
./bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Access to the Common Crawl dataset in S3 is made available to anyone by
Amazon AWS, but Hadoop's S3N filesystem still requires valid access
credentials to initialize.
The input path can specify either a directory or a file. The file may
optionally be compressed with gzip. If a directory, the loader expects
the directory to contain one or more WARC files from the Common Crawl
dataset. If a file, the loader expects a list of Hadoop S3N URIs which
point to S3 locations for one or more WARC files from the Common Crawl
dataset, one URI per line. Lines should be terminated with the UNIX line
terminator.
Included in hbase-it/src/test/resources/CC-MAIN-2021-10-warc.paths.gz
is a list of all WARC files comprising the Q1 2021 crawl archive. There
are 64,000 WARC files in this data set, each containing ~1GB of gzipped
data. The WARC files contain several record types, such as metadata,
request, and response, but we only load the response record types. If
the HBase table schema does not specify compression (the default), there
is roughly a 10x expansion. Loading the full crawl archive results in a
table approximately 640 TB in size.
The hadoop-aws jar will be needed at runtime to instantiate the S3N
filesystem. Use the -files ToolRunner argument to add it.
You can also split the Loader and Verify stages:
Load with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Loader' \
-files /path/to/hadoop-aws.jar \
-Dfs.s3n.awsAccessKeyId=<AWS access key> \
-Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
/path/to/test-CC-MAIN-2021-10-warc.paths.gz \
/path/to/tmp/warc-loader-output
Verify with:
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Verify' \
/path/to/tmp/warc-loader-output
Signed-off-by: Michael Stack <stack@apache.org>
Conflicts:
pom.xml
First, extract an hbase-coprocessor module used by hbase-client and
hbase-server. This is a prerequisite to extracting an hbase-wal module.
M hbase-common/src/main/java/org/apache/hadoop/hbase/Abortable.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/util/SortedList.java
Moved to hbase-common. It's a generic interface, needed by:
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/Coprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoreCoprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContext.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContextImpl.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ReadOnlyConfiguration.java
Moved to hbase-coprocessor.
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java
Moved to hbase-endpoint where they are used.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
Include region name when toString'd.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCoprocessorHost.java
Include WAL name when toString'd.
M hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
Add utility used in testing here from CoprocessorHost.
- upgrade our default jruby to 9.2.13.0
- this major JRuby version update changes the Ruby compatibility from Ruby 2.3 to Ruby 2.5
- use a custom IRB prompt to convey information similar to what was shown before
- update the joni and jcoding dependencies to match this version of jruby-complete
Closes #2308
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
(cherry picked from commit f0c430aed2)
Some javadoc invocations require that any classes referenced by the
annotations we use can themselves be resolved. This includes annotations
_they_ have, even though annotations are normally optional.
In some cases this showed up as javax.annotation.meta.TypeQualifierNickname
not found, because some findbugs annotations use it. Other times it was
javax.annotation.concurrent.Immutable not found, because some old guava
versions use it.
(updated for the master branch by doing the config in the reporting config instead of the plugin config)
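As a concrete illustration of the failure mode (a sketch; this exact class is not from the
codebase): jsr305's @CheckForNull is itself meta-annotated with @TypeQualifierNickname, so
javadoc runs over code like this may need that class resolvable.

  import javax.annotation.CheckForNull;

  public class Example {
    // javadoc processing this reference may need to resolve
    // javax.annotation.meta.TypeQualifierNickname, because @CheckForNull
    // is meta-annotated with it.
    @CheckForNull
    String maybe() {
      return null;
    }
  }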
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
(cherry picked from commit f0d66273cd)
Second attempt. Made the hadoop3 profile in the top-level pom the same as
the hadoop2 profile when it comes to exclusions. Then mostly backed out the
previous attempt. Made the failing test medium-sized so it runs in its
own JVM.
Pass --threads=2 to mvn when yetus runs to get some parallelism
when dependencies allow.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Lower the jdk8 forked JVM heap from 2800 to 2200 and the jdk11 heap from
3200 to 2200. Lower the mvn size from 4G to 3.6G.
Change how many puts TestMultiRespectsLimits does, because they pushed
the forked heap over 2.5G in size.
Signed-off-by: Sean Busbey <busbey@apache.org>
Minor tweaks required to get passing runs of `-PrunLargeTests`.
* Minimum Hadoop version is 3.2.0 due to
[HADOOP-12760](https://issues.apache.org/jira/browse/HADOOP-12760).
* JDK11 appears to consume more memory than JDK8, so failures due
to OOME are more common here. Bumping the heap allocated to surefire
forks gives a better pass rate.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Does what it says on the tin. Bound to the `initialize` phase so that it
runs early in the lifecycle. Uses `<inherited>false</inherited>` so that
the plugin will run only for the base pom's reactor stage and not for
any children.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Add being able to configure netty thread counts. Enable socket reuse
(should not have any impact).
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
Rename the threads we create here so they are NOT named the same as
threads created by Hadoop RPC.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/DefaultNettyEventLoopConfig.java
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcClient.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
Allow configuring the event loop group thread count (so it can be
overridden for tests); see the sketch after this file list.
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/HttpProxyExample.java
Enable socket reuse.
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
Enable socket reuse and config for how many threads to use.
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
Thread name edit; drop the redundant 'Thread' suffix.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
Make closeable and shutdown executor when called.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
Call close on HFileReplicator
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
HDFS creates lots of threads. Use less of it so there are fewer threads overall.
hbase-server/src/test/resources/hbase-site.xml
hbase-server/src/test/resources/hdfs-site.xml
Constrain resources when running in test context.
hbase-server/src/test/resources/log4j.properties
Enable debug on netty to see netty configs in our log
pom.xml
Add system properties when we launch JVMs to constrain thread counts in
tests
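The event loop thread count override mentioned above can be exercised roughly like this. A
hedged sketch: the property key below is an assumption for illustration, not a name verified
against this change.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class NettyThreadCountSketch {
    Configuration constrainedConf() {
      Configuration conf = HBaseConfiguration.create();
      // Assumed property key: cap the netty event loop worker thread count,
      // e.g. to constrain resources in a test context.
      conf.setInt("hbase.netty.worker.count", 4);
      return conf;
    }
  }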
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Set the fork count for the first and second parts to 0.5C. Add a bit of
doc on this as well as some qualification of our test categories.
Also adds -T0.5C to MAVEN_ARGS in the hbase personality.
This is causing me issues with parallel test runs.
Also allow setting the surefire reports and temp directories via the command line.
Signed-off-by: stack <stack@apache.org>