Jettison versions <= 1.5.0 are subject to CVE-2022-40149 and CVE-2022-40150.
Move jettison.version to 1.5.1.
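In pom terms this is a one-line property bump in the parent pom, roughly (sketch only; the surrounding properties block is elided):

```xml
<properties>
  <!-- 1.5.1 addresses CVE-2022-40149 and CVE-2022-40150 -->
  <jettison.version>1.5.1</jettison.version>
</properties>
```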
Signed-off-by: Duo Zhang <zhangduo@apache.org>
(Forward port from branch-2; simplified by the fact that there
is no hadoop-2.0 profile on master branch)
Make it so our published poms carry the minimum needed to run
an hbase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then they are stripped -- and no build-time or plugin
dependencies, properties, etc. Resultant poms have explicit
hadoop lib versions baked in -- no more being able to choose
hbase with hadoop2 or hadoop3 at downstream build time by setting
'-Dhadoop.profile=X.0'.
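For context, a minimal sketch of the flatten-maven-plugin wiring this
relies on, assuming the plugin's documented 'oss' mode and goal
bindings; the parent pom's actual configuration may differ:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>flatten-maven-plugin</artifactId>
  <configuration>
    <!-- 'oss' keeps what is needed to consume the artifact while
         stripping build-time elements such as profiles and plugins. -->
    <flattenMode>oss</flattenMode>
  </configuration>
  <executions>
    <execution>
      <id>flatten</id>
      <phase>process-resources</phase>
      <goals>
        <goal>flatten</goal>
      </goals>
    </execution>
    <execution>
      <id>flatten.clean</id>
      <phase>clean</phase>
      <goals>
        <goal>clean</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```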
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve a hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the profile in the sub-module makes it so the flatten
plugin can resolve 'hadoop.version' definitively.
(On master there is only the hadoop-3.0 profile.)
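Roughly the shape of the sub-module change; the profile id mirrors the
parent's hadoop-3.0 profile, and the activation and hadoop-common
example shown here are illustrative rather than copied from a
particular module:

```xml
<profiles>
  <profile>
    <id>hadoop-3.0</id>
    <activation>
      <property>
        <name>!hadoop.profile</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <!-- version resolves to ${hadoop.version} via the parent
             profile's dependencyManagement -->
      </dependency>
    </dependencies>
  </profile>
</profiles>
```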
Another spin on the above happens when profiles already exist
in the sub-module but the flatten plugin complains it can't
figure the version of a hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, when moving a hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom
excludes section was missing the local excludes, we added them
up to the parent module so all excluding is done up there in
the parent profile dependencyManagement section.
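A sketch of that split, using hadoop-minikdc as the example; the
exclusion shown is illustrative only, not the full list carried by the
parent pom:

```xml
<!-- Sub-module pom, inside the hadoop-3.0 profile: no version, no
     local excludes. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minikdc</artifactId>
  <scope>test</scope>
</dependency>

<!-- Parent pom, hadoop-3.0 profile dependencyManagement: version and
     exclusions live up here. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minikdc</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```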
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Fix test case failures in org.apache.hadoop.hbase.http.log.TestLogLevel under OpenJDK 17, caused by a missing export of java.security.jgss/sun.security.krb5.
Remove the --illegal-access=permit option, which has been ignored since OpenJDK 17.
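A minimal sketch of the surefire wiring this implies; HBase routes such
JVM flags through pom properties rather than inlining them, and the
ALL-UNNAMED target shown here is an assumption for illustration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Export the internal krb5 package the test needs on JDK 17;
         --illegal-access=permit is no longer passed. -->
    <argLine>--add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED</argLine>
  </configuration>
</plugin>
```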
Signed-off-by: Duo Zhang <zhangduo@apache.org>
- the agent jar dropped the `-all` classifier after 1.8.0
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
- update asciidoctor maven plugin to latest
- update the asciidoctor pdf dependency to latest
- allow the plugin to use its own version of jruby
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
- Update JRuby
- Replace java_kind_of since it has been removed
- Update jcodings / joni to match JRuby
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
When building against Hadoop 3.3.3, or any future version of Hadoop
incorporating reload4j, the new Enforcer rule we have active in
branch-2.5 and up to exclude logging frameworks other than log4j2
will trigger. We need to add exclusions to prevent that from
happening so the build will succeed.
Also exclude leveldbjni-all to avoid a LICENSE file generation error.
Add netty-all to the hadoop-hdfs test context... to fix tests failing
while trying to init the minidfscluster.
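A sketch of the kind of exclusions involved, with hadoop-common
standing in for whichever Hadoop artifact drags the offenders in; the
exact artifact and module placement differ across sub-modules:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- Keep reload4j and its slf4j binding out of the transitive
         graph so the ban on non-log4j2 logging does not trigger. -->
    <exclusion>
      <groupId>ch.qos.reload4j</groupId>
      <artifactId>reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-reload4j</artifactId>
    </exclusion>
    <!-- Avoid the LICENSE file generation error. -->
    <exclusion>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Test-scoped netty-all so the minidfscluster can start. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <scope>test</scope>
</dependency>
```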
Co-authored-by: stack <stack@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
Bump httpclient from 4.5.3 to 4.5.13 to avoid a CVE of medium severity in this
dependency.
Newer httpclient versions enable a URI normalization algorithm by default that
rewrites URIs in a way that breaks some forms of valid REST gateway interactions,
so disable it when building the httpclient instance in Client.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
* on `AsyncTable`, both `scan` and `scanAll` methods should result in `SCAN` table operations.
* the span of the `SCAN` table operation should have children representing all the RPC calls
involved in servicing the scan.
* when a user provides custom implementation of `AdvancedScanResultConsumer`, any spans emitted
from the callback methods should also be tied to the span that represents the `SCAN` table
operation. This is easily done because these callbacks are executed on the RPC thread.
* when a user provides a custom implementation of `ScanResultConsumer`, any spans emitted from the
callback methods should also be tied to the span that represents the `SCAN` table
operation. This is accomplished by carefully passing the span instance around after it is created.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
This is no longer needed since we've transitioned to the shaded Jersey shipped in
hbase-thirdparty. Also drop supplemental models entry.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
This is a demonstration of visualizing regions on the cluster. The visualization is a stacked
bar chart showing total storefile size per table per region server, with the x-axis being server
names, the y-axis being storefile size, and the bars stacked per table. The visualization is
generated entirely on the fly from within the browser, implemented using Vega Lite. So far, Vega
appears to handle rendering this visualization for a cluster of over 700 region servers with
approximately 300,000 regions.
Per [0], include an update to the top-level LICENSE.txt. Also update LICENSE files in all binary
distributions (i.e., jars), by way of LICENSE.vm. Vega uses a BSD 3-clause variant without
advertising clause, and as such is a "Category A" license, per [1].
No changes are made to the NOTICE files, as per the existing example of bundling the minified
JQuery, which is also a Category A license.
[0]: https://infra.apache.org/licensing-howto.html
[1]: https://www.apache.org/legal/resolved.html#category-a
Signed-off-by: Andrew Purtell <apurtell@apache.org>