Make it so our published poms carry the minimum needed to run
an hbase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then the profiles themselves are stripped -- and no build-time
dependencies, plugins, properties, etc. The resultant poms have
explicit hadoop lib versions baked in -- no more choosing hbase
with hadoop2 or hadoop3 at downstream build time by setting
'-Dhadoop.profile=X.0'.
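For example, where the checked-in pom references '${hadoop.version}'
under hadoop-2.0/hadoop-3.0 profiles, the flattened, published pom
carries a plain resolved entry roughly like the following (version
number purely illustrative):

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <!-- resolved from whichever hadoop profile was active at build time -->
    <version>2.10.0</version>
  </dependency>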
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve an hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the hadoop-2.0 and hadoop-3.0 profiles in the sub-module
makes it so the flatten plugin can figure 'hadoop.version'
definitively.
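As a sketch, the sub-module addition looks roughly like this; the
profile ids and activation follow the usual hbase convention, but
treat the details as illustrative rather than the exact diff:

  <profiles>
    <profile>
      <id>hadoop-2.0</id>
      <activation>
        <property>
          <!-- active unless -Dhadoop.profile is set -->
          <name>!hadoop.profile</name>
        </property>
      </activation>
      <dependencies>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-common</artifactId>
          <!-- no version here; the parent profile's dependencyManagement
               supplies it, which is what lets flatten pin 'hadoop.version' -->
        </dependency>
      </dependencies>
    </profile>
    <profile>
      <id>hadoop-3.0</id>
      <activation>
        <property>
          <name>hadoop.profile</name>
          <value>3.0</value>
        </property>
      </activation>
      <dependencies>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-common</artifactId>
        </dependency>
      </dependencies>
    </profile>
  </profiles>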
Another spin on the above happens when profiles already exist
in the sub-module but the flatten plugin is complaining it can't
figure the version on an hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, on moving an hadoop dependency under a profile, there
would be excludes on the local dependency. If the parent pom
excludes section was missing the local excludes, we moved them
up to the parent module so all excluding is done up there in
the parent profile dependencyManagement section.
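A hedged sketch of that move for minikdc; the profile shape is as in
the earlier sketch, and the test scope and bouncycastle coordinates
here are assumptions, not necessarily the exact ones in the diff:

  <!-- sub-module, inside each existing hadoop profile's <dependencies>:
       no version, no excludes -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minikdc</artifactId>
    <scope>test</scope>
  </dependency>

  <!-- parent pom, in the matching hadoop profile's dependencyManagement:
       the version and all excludes live up here -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minikdc</artifactId>
    <version>${hadoop.version}</version>
    <exclusions>
      <exclusion>
        <groupId>org.bouncycastle</groupId>
        <artifactId>bcprov-jdk15</artifactId>
      </exclusion>
    </exclusions>
  </dependency>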
* hbase-asyncfs/pom.xml
* hbase-endpoint/pom.xml
* hbase-examples/pom.xml
* hbase-http/pom.xml
* hbase-rest/pom.xml
* hbase-server/pom.xml
Move the minikdc under profiles so it picks up appropriate hadoop version
when the flatten plugin runs.
* hbase-hadoop2-compat/pom.xml
Add hadoop2 and hadoop3 profiles and move hadoop-common, etc.
under them so we pick up the appropriate hadoop version when the
flatten plugin runs.
* hbase-mapreduce/pom.xml
Move hadoop dependencies under profiles so right version is
available when the flatten plugin runs.
* hbase-shaded/hbase-shaded-testing-util/pom.xml
Add profiles for hadoop-2.0 and hadoop-3.0 and move the
hadoop dependencies under them.
* pom.xml
Add the flatten plugin with the flatten profiles enabled.
Add a few excludes on hadoop profiles picked up from sub-modules.
E.g. exclude bouncycastle bcprov-jdk15 when we include minikdc.
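A minimal sketch of the plugin addition, assuming the stock
org.codehaus.mojo flatten-maven-plugin; the version and flattenMode
shown are illustrative, not necessarily what the build ends up using:

  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>flatten-maven-plugin</artifactId>
    <version>1.2.7</version>
    <configuration>
      <!-- 'oss' mode keeps the metadata repositories require while
           dropping build-time elements from the flattened pom -->
      <flattenMode>oss</flattenMode>
    </configuration>
    <executions>
      <execution>
        <id>flatten</id>
        <phase>process-resources</phase>
        <goals>
          <goal>flatten</goal>
        </goals>
      </execution>
      <execution>
        <id>flatten.clean</id>
        <phase>clean</phase>
        <goals>
          <goal>clean</goal>
        </goals>
      </execution>
    </executions>
  </plugin>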
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Trace table ExecService invocations as table operations. Ensure span relationships for both table
and master invocations.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
This is needed in a couple places in order to test that traces over the IPC layer carry correct
span names, and it's good hygiene anyway.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
First extract an hbase-coprocessor module used by hbase-client and
hbase-server. This is a prerequisite to extracting an hbase-wal module.
M hbase-common/src/main/java/org/apache/hadoop/hbase/Abortable.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
M hbase-common/src/main/java/org/apache/hadoop/hbase/util/SortedList.java
Move to hbase-common. These are generic types needed by the
hbase-coprocessor classes listed next.
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/Coprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/CoreCoprocessor.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContext.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ObserverContextImpl.java
M hbase-coprocessor/src/main/java/org/apache/hadoop/hbase/coprocessor/ReadOnlyConfiguration.java
Move to hbase-coprocessor.
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java
M hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java
Moved to hbase-endpoint where they are used.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
Include region name when toString'd.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCoprocessorHost.java
Include WAL name when toString'd.
M hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
Add a utility used in testing, moved here from CoprocessorHost.
Implements a `ClusterManager` that relies on the new
`ShellExecEndpointCoprocessor` for remote shell command execution.
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Add the ability to configure netty thread counts. Enable socket reuse
(should not have any impact).
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
Rename the threads we create in here so they are NOT named the same
as threads created by Hadoop RPC.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/DefaultNettyEventLoopConfig.java
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcClient.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
Allow configuring the eventloopgroup thread count (so it can be
overridden for tests).
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/HttpProxyExample.java
Enable socket reuse.
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
Enable socket reuse and add config for how many threads to use.
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
Thread name edit; drop the redundant 'Thread' suffix.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
Make it closeable and shut down the executor when close is called.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
Call close on HFileReplicator
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
HDFS creates lots of threads. Use less of it so there are fewer
threads overall.
hbase-server/src/test/resources/hbase-site.xml
hbase-server/src/test/resources/hdfs-site.xml
Constrain resources when running in test context.
hbase-server/src/test/resources/log4j.properties
Enable debug on netty to see netty configs in our log
pom.xml
Add system properties when we launch JVMs to constrain thread counts in
tests
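A hedged sketch of the surefire side of that; io.netty.eventLoopThreads
is a standard netty system property, but the specific properties and
values the commit passes may differ:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
      <systemPropertyVariables>
        <!-- caps the size of netty's default EventLoopGroup in the
             forked test JVMs; value is illustrative -->
        <io.netty.eventLoopThreads>3</io.netty.eventLoopThreads>
      </systemPropertyVariables>
    </configuration>
  </plugin>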
Signed-off-by: Duo Zhang <zhangduo@apache.org>
This is a reapply of a reverted commit. This commit includes the
HBASE-22059 amendment and subsequent amendments to HBASE-22052.
See HBASE-22052 for full story.
jersey-core is problematic. It was transitively included from hadoop
and polluted our CLASSPATH with an implementation of the 1.x version
of the javax.ws.rs.core.Response interface from jsr311-api when we
want the javax.ws.rs-api 2.x version.
M hbase-endpoint/pom.xml
M hbase-http/pom.xml
M hbase-mapreduce/pom.xml
M hbase-rest/pom.xml
M hbase-server/pom.xml
M hbase-zookeeper/pom.xml
Remove redundant version specification (and the odd property
definition already done up in the parent pom).
M hbase-it/pom.xml
M hbase-rest/pom.xml
Exclude jersey-core explicitly.
M hbase-procedure/pom.xml
Remove redundant version and classifier.
M pom.xml
Add jersey-core exclusions to all dependencies that pull it in
except hadoop-minicluster. MR tests fail w/o jersey-core, so let
it in for minicluster and then, in modules, exclude it where it
causes damage, as in hbase-it.
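The exclusion itself is the standard Maven idiom; for example, on an
hadoop dependency (which artifacts carry the exclusion varies by
module, so take this as a sketch):

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <exclusions>
      <!-- jersey-core 1.x drags in the jsr311-api flavor of
           javax.ws.rs.core.Response; we want javax.ws.rs-api 2.x -->
      <exclusion>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-core</artifactId>
      </exclusion>
    </exclusions>
  </dependency>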
BC 1.47 introduced some incompatible API changes which came in via
a new Maven artifact. We don't use any of the changed APIs in HBase.
This also removes some unnecessary dependencies on bcprov in other
modules (presumably they are vestiges).
Signed-off-by: Mike Drob <mdrob@apache.org>
Signed-off-by: Ted Yu <tedyu@apache.org>
Create a helper method HBaseKerberosUtils#setSecuredConfiguration().
TestSecureExport, TestSaslFanOutOneBlockAsyncDFSOutput,
SecureTestCluster and TestThriftSpnegoHttpServer use this new helper
method.
Signed-off-by: tedyu <yuzhihong@gmail.com>
* modify the jar checking script to take args; make hadoop stuff optional
* separate out checking the artifacts that have hadoop vs those that don't.
  * Unfortunately this means we need two modules for checking things
  * put in a safety check that the support script for checking jar contents is maintained in both modules
  * have to carve out an exception for o.a.hadoop.metrics2. :(
* fix duplicated class warning
* clean up dependencies in hbase-server and some modules that depend on it.
* allow Hadoop to have its own htrace where it needs it
* add a precommit check to make sure we're not using old htrace imports
Conflicts:
hbase-backup/pom.xml
hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
Signed-off-by: Mike Drob <mdrob@apache.org>