Jettison versions <= 1.5.0 are subject to CVE-2022-40149 and CVE-2022-40150.
Move jettison.version to 1.5.1.
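A minimal sketch of the change, assuming the version is driven by a
jettison.version property in the root pom (placement illustrative):

```xml
<properties>
  <!-- Jettison <= 1.5.0 is subject to CVE-2022-40149 / CVE-2022-40150 -->
  <jettison.version>1.5.1</jettison.version>
</properties>
```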
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Make it so our published poms carry the minimum needed to run
an hbase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then they are stripped -- and no build-time or plugin
dependencies, properties, etc. Resultant poms have explicit
hadoop lib versions baked in -- no more being able to choose
hbase with hadoop2 or hadoop3 at downstream build time by setting
'-Dhadoop.profile=X.0'.
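For illustration, a dependency in the published (flattened) pom then
looks something like this, with the version resolved at build time
baked in (version value hypothetical):

```xml
<!-- published pom: no profiles; hadoop.version already resolved -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.2.4</version> <!-- hypothetical resolved hadoop.version -->
</dependency>
```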
Pattern is to add hadoop-2.0 and hadoop-3.0 profiles to sub-modules
that have none when the flatten plugin complains it can't resolve
an hadoop dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the profiles in the sub-module makes it so the flatten plugin
can figure 'hadoop.version' definitively.
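A minimal sketch of the sub-module pattern, assuming profile ids and
activation matching the rest of the build (details illustrative):

```xml
<profiles>
  <profile>
    <id>hadoop-3.0</id>
    <activation>
      <property>
        <name>hadoop.profile</name>
        <value>3.0</value>
      </property>
    </activation>
    <dependencies>
      <!-- version comes from the parent profile's dependencyManagement -->
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
      </dependency>
    </dependencies>
  </profile>
  <!-- a hadoop-2.0 profile mirrors this for the hadoop2 build -->
</profiles>
```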
Another spin on the above happens when profiles already exist
in the sub-module but the flatten plugin complains it can't
figure the version of an hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
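For example, a sketch of the minikdc move (before/after in a
sub-module pom; surrounding elements elided):

```xml
<!-- before: outside any profile, so flatten can't pick a version -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minikdc</artifactId>
  <scope>test</scope>
</dependency>

<!-- after: repeated under each existing hadoop profile -->
<profile>
  <id>hadoop-3.0</id>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>
```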
Sometimes, when moving an hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom's
excludes section was missing the local excludes, we moved them
up to the parent module so all excluding is done up there in
the parent profile's dependencyManagement section.
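A sketch of the parent-side result, assuming the hoisted excludes
land in the profile's dependencyManagement (the bcprov-jdk15 exclude
is the one mentioned for minikdc below):

```xml
<!-- parent pom, inside the hadoop-3.0 profile -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <version>${hadoop.version}</version>
      <exclusions>
        <exclusion>
          <groupId>org.bouncycastle</groupId>
          <artifactId>bcprov-jdk15</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</dependencyManagement>
```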
* hbase-asyncfs/pom.xml
* hbase-endpoint/pom.xml
* hbase-examples/pom.xml
* hbase-http/pom.xml
* hbase-rest/pom.xml
* hbase-server/pom.xml
Move the minikdc under profiles so it picks up the appropriate hadoop
version when the flatten plugin runs.
* hbase-hadoop2-compat/pom.xml
Add hadoop2 and hadoop3 profiles and move hadoop-common, etc.
under them so we pick up the appropriate hadoop version when the
flatten plugin runs.
* hbase-mapreduce/pom.xml
Move hadoop dependencies under profiles so the right version is
available when the flatten plugin runs.
* hbase-shaded/hbase-shaded-testing-util/pom.xml
Add profiles for hadoop-2.0 and hadoop-3.0 and move the
hadoop dependencies under them.
* pom.xml
Add the flatten plugin with the flatten profiles enabled.
Add a few excludes on hadoop profiles picked up from sub-modules.
E.g. exclude bouncycastle bcprov-jdk15 when we include minikdc.
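A minimal sketch of the plugin wiring in the root pom; the
embedBuildProfileDependencies switch is the flatten-maven-plugin
option that inlines dependencies from build-time-active profiles
(exact configuration here is illustrative):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>flatten-maven-plugin</artifactId>
  <configuration>
    <flattenMode>oss</flattenMode>
    <!-- resolve profiles active at build time and inline their
         dependencies; the profiles are then stripped -->
    <embedBuildProfileDependencies>true</embedBuildProfileDependencies>
  </configuration>
  <executions>
    <execution>
      <id>flatten</id>
      <phase>process-resources</phase>
      <goals>
        <goal>flatten</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```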
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Fix test case failures in org.apache.hadoop.hbase.http.log.TestLogLevel under OpenJDK 17 caused by a missing export of java.security.jgss/sun.security.krb5.
Removed the --illegal-access=permit option, which is ignored since OpenJDK 17.
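A sketch of the corresponding test-JVM flag, assuming it is passed
via a surefire argLine (placement illustrative):

```xml
<!-- export sun.security.krb5 so TestLogLevel can use it on JDK 17;
     --illegal-access=permit is dropped, being ignored since JDK 17 -->
<argLine>--add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED</argLine>
```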
Signed-off-by: Duo Zhang <zhangduo@apache.org>
(cherry picked from commit e10c15d030)
- the agent jar dropped the `-all` classifier after 1.8.0
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
When building against Hadoop 3.3.3, or any future version of Hadoop
incorporating reload4j, the new Enforcer rule we have active in
branch-2.5 and up to exclude other logging frameworks besides log4j2
will trigger. We need to add exclusions to prevent that from
happening so the build will succeed.
Also exclude leveldbjni-all to avoid a LICENSE file generation error.
Add netty-all to hadoop-hdfs test context... to fix tests failing
trying to init minidfscluster.
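A sketch of the shape of those changes (exact modules and host
dependencies vary; coordinates are the usual ones for the artifacts
named above):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- keep log4j2 the only logging backend (Enforcer rule) -->
    <exclusion>
      <groupId>ch.qos.reload4j</groupId>
      <artifactId>reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-reload4j</artifactId>
    </exclusion>
    <!-- avoids the LICENSE file generation error -->
    <exclusion>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- hadoop-hdfs test context: lets minidfscluster initialize -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <scope>test</scope>
</dependency>
```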
Co-authored-by: stack <stack@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
* on `AsyncTable`, both `scan` and `scanAll` methods should result in `SCAN` table operations.
* the span of the `SCAN` table operation should have children representing all the RPC calls
involved in servicing the scan.
* when a user provides a custom implementation of `AdvancedScanResultConsumer`, any spans emitted
  from the callback methods should also be tied to the span that represents the `SCAN` table
  operation. This is easily done because these callbacks are executed on the RPC thread.
* when a user provides a custom implementation of `ScanResultConsumer`, any spans emitted from the
  callback methods should also be tied to the span that represents the `SCAN` table
  operation. This is accomplished by carefully passing the span instance around after it is created.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>