Make it so our published poms carry the minimum needed to run
an hbase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then they are stripped -- and no build-time plugins,
dependencies, or properties, etc. Resultant poms have explicit
hadoop lib versions baked in -- no more being able to choose
hbase with hadoop2 or hadoop3 at downstream build time by setting
'-Dhadoop.profile=X.0'.
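For reference, a minimal sketch of the kind of flatten-maven-plugin declaration this relies on in the parent pom (configuration values here are illustrative, not necessarily the committed ones):

```xml
<!-- Parent pom.xml (sketch): flatten the pom at build time so the published
     pom carries resolved dependencies and no build-time-only elements. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>flatten-maven-plugin</artifactId>
  <configuration>
    <!-- 'oss' keeps only the elements consumers need to resolve the artifact -->
    <flattenMode>oss</flattenMode>
  </configuration>
  <executions>
    <execution>
      <id>flatten</id>
      <phase>process-resources</phase>
      <goals>
        <goal>flatten</goal>
      </goals>
    </execution>
    <execution>
      <id>flatten.clean</id>
      <phase>clean</phase>
      <goals>
        <goal>clean</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```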
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve a hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the hadoop-2.0 and hadoop-3.0 profiles in the sub-module
makes it so the flatten plugin can figure 'hadoop.version'
definitively.
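Roughly, the added sub-module profiles look like this (a sketch; the actual dependency lists vary per module, and versions stay managed in the parent):

```xml
<!-- Sub-module pom.xml (sketch): declare the hadoop profiles locally so
     the flatten plugin can resolve hadoop.version for these dependencies. -->
<profiles>
  <profile>
    <id>hadoop-2.0</id>
    <activation>
      <property>
        <name>!hadoop.profile</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>hadoop-3.0</id>
    <activation>
      <property>
        <name>hadoop.profile</name>
        <value>3.0</value>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```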
Another spin on the above happens when profiles already exist
in a sub-module but the flatten plugin complains it can't
figure the version of a hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under the
existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, moving a hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom
excludes section was missing the local excludes, we moved them
up to the parent module so all excluding is done up there in
the parent profile dependencyManagement section.
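The sub-module side of that move looks approximately like this (sketch; version and excludes are deliberately absent because they are managed in the parent):

```xml
<!-- Sub-module pom.xml (sketch): hadoop-minikdc moved from the top-level
     <dependencies> block into each existing hadoop profile. -->
<profile>
  <id>hadoop-3.0</id>
  <!-- ... activation as before ... -->
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <scope>test</scope>
      <!-- no version, no exclusions: both come from the parent profile's
           dependencyManagement section -->
    </dependency>
  </dependencies>
</profile>
```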
* hbase-asyncfs/pom.xml
* hbase-endpoint/pom.xml
* hbase-examples/pom.xml
* hbase-http/pom.xml
* hbase-rest/pom.xml
* hbase-server/pom.xml
Move minikdc under profiles so it picks up the appropriate hadoop version
when the flatten plugin runs.
* hbase-hadoop2-compat/pom.xml
Add hadoop2 and hadoop3 profiles and move hadoop-common, etc.
under them so we pick up the appropriate hadoop version when the
flatten plugin runs.
* hbase-mapreduce/pom.xml
Move hadoop dependencies under profiles so the right version is
available when the flatten plugin runs.
* hbase-shaded/hbase-shaded-testing-util/pom.xml
Add profiles for hadoop-2.0 and hadoop-3.0 and move the
hadoop dependencies under them.
* pom.xml
Add the flatten plugin with the flatten profiles enabled.
Add a few excludes on hadoop profiles picked up from sub-modules.
E.g. exclude bouncycastle bcprov-jdk15 when we include minikdc.
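The parent-side entry for that case would look roughly like this (sketch; the hadoop-three.version property name and the bouncycastle groupId are assumptions, not copied from the patch):

```xml
<!-- Parent pom.xml (sketch), inside the hadoop-3.0 profile's
     dependencyManagement: exclusions lifted up from the sub-modules. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minikdc</artifactId>
  <version>${hadoop-three.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```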
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Fix test case failures in org.apache.hadoop.hbase.http.log.TestLogLevel under OpenJDK 17 caused by a missing export of java.security.jgss/sun.security.krb5.
Removed the --illegal-access=permit option, which has been ignored since OpenJDK 17.
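A sketch of the shape of the fix, assuming the export is passed to the test JVM via the surefire argLine (the exact property/plugin wiring in the pom may differ):

```xml
<!-- Sketch: TestLogLevel needs sun.security.krb5 exported on JDK 17;
     --illegal-access=permit is dropped because JDK 17 ignores it. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>--add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED</argLine>
  </configuration>
</plugin>
```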
Signed-off-by: Duo Zhang <zhangduo@apache.org>
(cherry picked from commit e10c15d030)
- the agent jar dropped the `-all` classifier after 1.8.0
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
- Update JRuby
- Replace java_kind_of since it has been removed
- Update jcodings / joni to match JRuby
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
(cherry picked from commit d1149f7e20)
When building against Hadoop 3.3.3, or any future version of Hadoop
incorporating reload4j, the new Enforcer rule we have active in
branch-2.5 and up to exclude logging frameworks other than log4j2
will trigger. We need to add exclusions to prevent that from
happening so the build will succeed.
Also exclude leveldbjni-all to avoid a LICENSE file generation error.
Add netty-all to the hadoop-hdfs test context... to fix tests that fail
trying to init the minidfscluster.
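Shape of the changes, sketched (the artifact the exclusions hang off of, and the exact coordinates, are assumptions based on the description above):

```xml
<!-- Sketch: keep reload4j and its slf4j binding (and leveldbjni-all) out of
     the build so the log4j2-only enforcer rule passes and LICENSE generation
     does not trip over leveldbjni-all. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.reload4j</groupId>
      <artifactId>reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Sketch: netty-all added to the hadoop-hdfs test context so the
     minidfscluster can start during tests. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <scope>test</scope>
</dependency>
```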
Co-authored-by: stack <stack@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
* on `AsyncTable`, both `scan` and `scanAll` methods should result in `SCAN` table operations.
* the span of the `SCAN` table operation should have children representing all the RPC calls
involved in servicing the scan.
* when a user provides a custom implementation of `AdvancedScanResultConsumer`, any spans emitted
  from the callback methods should also be tied to the span that represents the `SCAN` table
  operation. This is easily done because these callbacks are executed on the RPC thread.
* when a user provides a custom implementation of `ScanResultConsumer`, any spans emitted from the
  callback methods should also be tied to the span that represents the `SCAN` table
  operation. This is accomplished by carefully passing the span instance around after it is created.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Bump httpclient from 4.5.3 to 4.5.13 to avoid a CVE of medium severity in this
dependency.
Newer httpclient versions enable a URI normalization algorithm by default that
rewrites URIs in a way that breaks some forms of valid REST gateway interactions,
so disable it when building the httpclient instance in Client.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
This is no longer needed since we've transitioned to the shaded Jersey shipped in
hbase-thirdparty. Also drop supplemental models entry.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
This is a demonstration of visualization of regions on the cluster. The visualization is a stacked
bar chart showing total storefile size per table per region server, with the x-axis being server
names, the y-axis being storefile size, and the bars stacked per table. The visualization is
generated entirely on the fly from within the browser, implemented using Vega Lite. So far, Vega
appears to handle rendering this visualization for a cluster of over 700 region servers with
approximately 300,000 regions.
Per [0], include an update to the top-level LICENSE.txt. Also update LICENSE files in all binary
distributions (i.e., jars), by way of LICENSE.vm. Vega uses a BSD 3-clause variant without
advertising clause, and as such is a "Category A" license, per [1].
No changes are made to the NOTICE files, as per the existing example of bundling the minified
JQuery, which is also a Category A license.
[0]: https://infra.apache.org/licensing-howto.html
[1]: https://www.apache.org/legal/resolved.html#category-a
Signed-off-by: Andrew Purtell <apurtell@apache.org>
The upgrade is to get the fix in MENFORCER-336, making beanshell evaluation safe for use with `mvn
-T`. Also upgrade extra-enforcer-rules to 1.5.1, as per experience with HBASE-26664.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>