(Forward port from branch-2; simplified by the fact that there
is no hadoop-2.0 profile on the master branch)
Make it so our published poms carry the minimum needed to run
HBase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then they are stripped -- and no build-time or plugin
dependencies, properties, etc. Resultant poms have explicit
hadoop lib versions baked in -- no more being able to choose
hbase with hadoop2 or hadoop3 at downstream build time by setting
a '-Dhadoop.profile=X.0'.
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve a hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the profile in the sub-module makes it so the flatten
plugin can figure out 'hadoop.version' definitively.
(On master there is only the hadoop-3.0 profile.)
Another spin on the above happens when profiles already exist
in a sub-module but the flatten plugin complains it can't
figure out the version of a hadoop dependency NOT under
profiles. Below, we move the delinquent hadoop dependency under
the existing profiles (minikdc was the usual dependency outside
profiles in sub-modules that flatten complained about).
Sometimes, when moving a hadoop dependency under a profile,
there were excludes on the local dependency. If the parent pom
excludes section was missing the local excludes, we added them
up to the parent module so all excluding is done up there in
the parent profile dependencyManagement section.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
We introduced EnvironmentEdgeManager as a way to inject alternate clocks
for unit tests. In order for this to be effective, all callers that would
otherwise use System.currentTimeMillis() must call
EnvironmentEdgeManager.currentTime() instead, except the implementers of
EnvironmentEdge.
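A minimal sketch of the intended usage, assuming the
EnvironmentEdge/EnvironmentEdgeManager API in hbase-common
(org.apache.hadoop.hbase.util):

  import org.apache.hadoop.hbase.util.EnvironmentEdge;
  import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

  public class EdgeUsageExample {
    // Production code asks the manager for the time.
    static long now() {
      return EnvironmentEdgeManager.currentTime();
    }

    // A test injects a fixed clock; only EnvironmentEdge implementers
    // may call System.currentTimeMillis() directly.
    static void runWithFrozenClock() {
      EnvironmentEdgeManager.injectEdge(new EnvironmentEdge() {
        @Override
        public long currentTime() {
          return 1234567890L; // frozen clock for deterministic tests
        }
      });
      try {
        assert now() == 1234567890L;
      } finally {
        EnvironmentEdgeManager.reset(); // restore the default edge
      }
    }
  }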
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
The PLAIN mechanism test added in the Shade authentication example has
different semantics than the GSSAPI mechanism -- the client reports that
the handshake is done after the original challenge is computed. The
javadoc on SaslClient, however, tells us that we need to wait for a
response from the server before proceeding.
The client, as best as I can see, does not receive any data from HBase;
however, as a result of this bug, the application semantics (e.g.
throwing an exception on an authentication error) do not work as we
intend.
Extra trace logging was also added to debug this, should a similar error
ever happen again with some other mechanism.
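As an illustration of the loop the SaslClient javadoc implies, here is
a hedged sketch using plain javax.security.sasl types; the transport
interface is hypothetical and is not HBase's actual RPC plumbing:

  import javax.security.sasl.SaslClient;
  import javax.security.sasl.SaslException;

  class SaslHandshakeSketch {
    /** Hypothetical transport; HBase's real RPC plumbing differs. */
    interface SaslTransport {
      void writeToken(byte[] token);
      byte[] readToken();            // blocks for the server's reply
      boolean negotiationFinished(); // server signals the exchange is over
    }

    static void negotiate(SaslClient client, SaslTransport io)
        throws SaslException {
      byte[] response = client.hasInitialResponse()
          ? client.evaluateChallenge(new byte[0])
          : new byte[0];
      io.writeToken(response);
      // Even when isComplete() turns true early (as with PLAIN after the
      // first challenge), keep waiting for the server's verdict so an
      // authentication failure surfaces here instead of being dropped.
      while (!io.negotiationFinished()) {
        byte[] challenge = io.readToken();
        if (!client.isComplete()) {
          byte[] reply = client.evaluateChallenge(challenge);
          if (reply != null) {
            io.writeToken(reply);
          }
        }
      }
    }
  }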
Closes #1260
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Move the proto files only used by hbase-examples to hbase-examples, to
show users how to make use of shaded hbase protos when implementing
coprocessors.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
proto files layout:
protobuf/client - client to server messages, client rpc service and protos, used in hbase-client exclusively;
protobuf/rest - hbase-rest messages;
protobuf/rpc - rpc and post-rpc tracing messages;
protobuf/server/coprocessor - coprocessor rpc services;
protobuf/server/coprocessor/example - coprocessor rpc service examples from hbase-examples;
protobuf/server/io - filesystem and hbase-server/io protos;
protobuf/server/master - master rpc services and messages;
protobuf/server/region - region rpc services and messages (except client rpc service, which is in Client.proto);
protobuf/server/rsgroup - rsgroup protos;
protobuf/server/zookeeper - protos for zookeeper and ones used exclusively in hbase-zookeeper module;
protobuf/server - protos used across other server protos;
protobuf/test - protos used in tests;
protobuf/ - protos used across other protos, protos exclusive to hbase-mapreduce and hbase-backup, and other protos.
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
Have client and server use loopback instead of 'localhost'
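A small sketch of the idea (not the actual MiniZooKeeperCluster change):
resolve the loopback interface explicitly rather than relying on how
'localhost' happens to resolve on the host:

  import java.net.InetAddress;
  import java.net.InetSocketAddress;

  class LoopbackExample {
    // Binding/connecting to the loopback interface directly avoids
    // surprises when /etc/hosts maps 'localhost' unexpectedly.
    static InetSocketAddress loopback(int port) {
      InetAddress addr = InetAddress.getLoopbackAddress(); // usually 127.0.0.1
      return new InetSocketAddress(addr, port);
    }
  }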
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Jan Hentschel <janh@apache.org>
Add the ability to configure netty thread counts. Enable socket reuse
(should not have any impact).
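Roughly the kind of wiring this enables, sketched with plain Netty
APIs; the configuration key below is illustrative, not necessarily the
name HBase uses:

  import io.netty.bootstrap.ServerBootstrap;
  import io.netty.channel.ChannelOption;
  import io.netty.channel.nio.NioEventLoopGroup;
  import io.netty.channel.socket.nio.NioServerSocketChannel;
  import org.apache.hadoop.conf.Configuration;

  class NettyServerSetupSketch {
    static ServerBootstrap bootstrap(Configuration conf) {
      // Thread count from configuration; 0 lets Netty pick its default.
      int threads = conf.getInt("example.netty.worker.count", 0); // illustrative key
      NioEventLoopGroup workers = new NioEventLoopGroup(threads);
      return new ServerBootstrap()
          .group(workers)
          .channel(NioServerSocketChannel.class)
          // SO_REUSEADDR lets a restarted server rebind its port quickly.
          .option(ChannelOption.SO_REUSEADDR, true)
          .childOption(ChannelOption.SO_REUSEADDR, true);
    }
  }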
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
Rename the threads we create in here so they are NOT named the same as
threads created by Hadoop RPC.
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/DefaultNettyEventLoopConfig.java
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcClient.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
Allow configuring eventloopgroup thread count (so can override for
tests)
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/HttpProxyExample.java
Enable socket reuse.
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
Enable socket reuse and add config for how many threads to use.
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
Thread name edit; drop the redundant 'Thread' suffix.
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
Make it Closeable and shut down its executor when close is called
(see the sketch after this file list).
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
Call close on HFileReplicator
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
HDFS creates lots of threads. Use less of it so there are fewer threads overall.
hbase-server/src/test/resources/hbase-site.xml
hbase-server/src/test/resources/hdfs-site.xml
Constrain resources when running in test context.
hbase-server/src/test/resources/log4j.properties
Enable debug on netty to see netty configs in our log
pom.xml
Add system properties when we launch JVMs to constrain thread counts in
tests
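The HFileReplicator change above follows the usual Closeable-plus-executor
pattern; a sketch of the shape (not the actual class):

  import java.io.Closeable;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  class CloseableWithExecutorSketch implements Closeable {
    private final ExecutorService exec = Executors.newFixedThreadPool(4);

    @Override
    public void close() {
      exec.shutdown(); // stop accepting new work
      try {
        if (!exec.awaitTermination(30, TimeUnit.SECONDS)) {
          exec.shutdownNow(); // give up on stragglers
        }
      } catch (InterruptedException e) {
        exec.shutdownNow();
        Thread.currentThread().interrupt();
      }
    }
  }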
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Decouple the HBase internals such that someone can implement
their own SASL-based authentication mechanism and plug it into
HBase RegionServers/Masters.
Comes with a design doc in dev-support/design-docs and an example in
hbase-examples known as "Shade", which uses a flat password file
for authenticating users.
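To make the idea concrete, a heavily simplified sketch of checking
PLAIN-style logins against an in-memory copy of a flat password file,
using standard javax.security callbacks; this is only an illustration
of the concept, not the Shade example's actual provider classes:

  import java.util.Map;
  import javax.security.auth.callback.Callback;
  import javax.security.auth.callback.CallbackHandler;
  import javax.security.auth.callback.NameCallback;
  import javax.security.auth.callback.PasswordCallback;
  import javax.security.sasl.AuthorizeCallback;

  // Illustrative only: the Shade example plugs a check like this into
  // HBase's pluggable SASL authentication provider interfaces instead.
  class FlatFileCallbackHandler implements CallbackHandler {
    private final Map<String, char[]> passwords; // user -> password

    FlatFileCallbackHandler(Map<String, char[]> passwords) {
      this.passwords = passwords;
    }

    @Override
    public void handle(Callback[] callbacks) {
      NameCallback nameCb = null;
      PasswordCallback passCb = null;
      AuthorizeCallback authCb = null;
      for (Callback cb : callbacks) {
        if (cb instanceof NameCallback) {
          nameCb = (NameCallback) cb;
        } else if (cb instanceof PasswordCallback) {
          passCb = (PasswordCallback) cb;
        } else if (cb instanceof AuthorizeCallback) {
          authCb = (AuthorizeCallback) cb;
        }
      }
      if (nameCb != null && passCb != null) {
        // Hand back the expected password for the presented user.
        passCb.setPassword(passwords.get(nameCb.getDefaultName()));
      }
      if (authCb != null) {
        authCb.setAuthorized(
          authCb.getAuthenticationID().equals(authCb.getAuthorizationID()));
      }
    }
  }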
Closes #884
Signed-off-by: Wellington Chevreuil <wchevreuil@apache.org>
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Signed-off-by: Reid Chan <reidchan@apache.org>