Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Tak Lon (Stephen) Wu <taklwu@apache.org>
(cherry picked from commit a0fdfa782b)
(cherry picked from commit 83d450d5b5)
Jettison versions <= 1.5.0 are subject to CVE-2022-40149 and CVE-2022-40150.
Move jettison.version to 1.5.1.
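A minimal sketch of the bump, assuming the property lives in the parent pom's <properties> block:

    <properties>
      <!-- Jettison releases before 1.5.1 are subject to CVE-2022-40149 and CVE-2022-40150 -->
      <jettison.version>1.5.1</jettison.version>
    </properties>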
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Make it so our published poms carry the minimum needed to run
HBase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then the profiles are stripped -- and no build-time or plugin
dependencies, properties, etc. The resulting poms have explicit
hadoop lib versions baked in -- no more choosing between
hadoop2 and hadoop3 at downstream build time by setting
'-Dhadoop.profile=X.0'.
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve a hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the hadoop-2.0 and hadoop-3.0 profiles in the sub-module
makes it so the flatten plugin can resolve 'hadoop.version'
definitively.
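For illustration, a hedged sketch of such a sub-module profile pair (dependency list trimmed; activation follows the usual hadoop.profile convention, with versions managed by the matching profiles in the parent pom):

    <profiles>
      <profile>
        <id>hadoop-2.0</id>
        <activation>
          <property>
            <name>!hadoop.profile</name>
          </property>
        </activation>
        <dependencies>
          <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
          </dependency>
        </dependencies>
      </profile>
      <profile>
        <id>hadoop-3.0</id>
        <activation>
          <property>
            <name>hadoop.profile</name>
            <value>3.0</value>
          </property>
        </activation>
        <dependencies>
          <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
          </dependency>
        </dependencies>
      </profile>
    </profiles>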
Another spin on the above happens when profiles already exist
in a sub-module but the flatten plugin complains it can't
figure the version of a hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, when moving a hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom's
excludes section was missing the local excludes, we moved them
up to the parent module so all excluding is done up there in
the parent profile's dependencyManagement section.
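A hedged sketch of that move; the exclusion is declared once in the parent profile's dependencyManagement and the sub-module dependency stays bare (the ${hadoop-three.version} property name and the exact excludes are assumptions):

    <!-- parent pom, inside the hadoop-3.0 profile's dependencyManagement -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <version>${hadoop-three.version}</version>
      <exclusions>
        <exclusion>
          <groupId>org.bouncycastle</groupId>
          <artifactId>bcprov-jdk15</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- sub-module pom, inside its hadoop-3.0 profile; version and excludes come from the parent -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <scope>test</scope>
    </dependency>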
* hbase-asyncfs/pom.xml
* hbase-endpoint/pom.xml
* hbase-examples/pom.xml
* hbase-http/pom.xml
* hbase-rest/pom.xml
* hbase-server/pom.xml
Move the minikdc dependency under profiles so it picks up the appropriate hadoop
version when the flatten plugin runs.
* hbase-hadoop2-compat/pom.xml
Add hadoop2 and hadoop3 profiles and move hadoop-common, etc.
under them so we pick up the appropriate hadoop version when the
flatten plugin runs.
* hbase-mapreduce/pom.xml
Move hadoop dependencies under profiles so the right version is
available when the flatten plugin runs.
* hbase-shaded/hbase-shaded-testing-util/pom.xml
Add profiles for hadoop-2.0 and hadoop-3.0 and move the
hadoop dependencies under them.
* pom.xml
Add the flatten plugin with the flatten profiles enabled.
Add a few excludes on hadoop profiles picked up from sub-modules.
E.g. exclude bouncycastle bcprov-jdk15 when we include minikdc.
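A hedged sketch of the flatten-maven-plugin wiring in the root pom; embedBuildProfileDependencies is what inlines the dependencies of profiles active at build time, as described above (the exact mode and bindings in the real pom may differ):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>flatten-maven-plugin</artifactId>
      <configuration>
        <flattenMode>oss</flattenMode>
        <!-- resolve the profiles enabled at build time and inline their dependencies -->
        <embedBuildProfileDependencies>true</embedBuildProfileDependencies>
      </configuration>
      <executions>
        <execution>
          <id>flatten</id>
          <phase>process-resources</phase>
          <goals>
            <goal>flatten</goal>
          </goals>
        </execution>
      </executions>
    </plugin>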
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Fix test case failures in org.apache.hadoop.hbase.http.log.TestLogLevel under OpenJDK 17 caused by a missing export of java.security.jgss/sun.security.krb5.
Removed the --illegal-access=permit option, which has been ignored since OpenJDK 17.
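A hedged sketch of the kind of export involved, shown here as a hypothetical surefire argLine fragment (the real build likely routes it through a shared argLine property):

    <!-- open sun.security.krb5 to the test JVM on JDK 17+ -->
    <argLine>--add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED</argLine>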
Signed-off-by: Duo Zhang <zhangduo@apache.org>
(cherry picked from commit e10c15d030)
- the agent jar dropped the `-all` classifier after 1.8.0
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
- Update JRuby
- Replace java_kind_of since it has been removed
- Update jcodings / joni to match JRuby
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
(cherry picked from commit d1149f7e20)
When building against Hadoop 3.3.3, or any future version of Hadoop
incorporating reload4j, the new Enforcer rule we have active in
branch-2.5 and up to exclude logging frameworks other than log4j2
will trigger. We need to add exclusions to prevent that from
happening so the build will succeed.
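A hedged sketch of the kind of exclusion added to a hadoop dependency (which hadoop artifacts carry it varies by module):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <exclusions>
        <exclusion>
          <groupId>ch.qos.reload4j</groupId>
          <artifactId>reload4j</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-reload4j</artifactId>
        </exclusion>
      </exclusions>
    </dependency>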
Also exclude leveldbjni-all to avoid a LICENSE file generation error.
Add netty-all to the hadoop-hdfs test context... to fix tests failing
trying to init MiniDFSCluster.
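A hedged sketch of the test-scoped addition (version management is assumed to come from dependencyManagement):

    <!-- MiniDFSCluster needs netty-all on the test classpath -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <scope>test</scope>
    </dependency>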
Co-authored-by: stack <stack@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>