(Forward port from branch-2; simplified by the fact that there
is no hadoop-2.0 profile on the master branch.)
Make it so our published poms carry the minimum needed to run
an hbase; the published pom has no profiles -- the profiles
specified at build time are resolved, their dependencies inlined,
and then the profiles themselves are stripped -- and no build-time
dependencies, plugins, or properties, etc. The resultant poms have
explicit hadoop lib versions baked in -- downstream builds can no
longer choose hbase with hadoop2 or hadoop3 by setting
a '-Dhadoop.profile=X.0'.
The pattern is to add profiles to sub-modules that have none when
the flatten plugin complains it can't resolve a hadoop
dependency's 'version' (e.g. hadoop-common, hadoop-hdfs).
Adding the profile in the sub-module makes it so the flatten
plugin can figure 'hadoop.version' definitively.
(On master there is only the hadoop-3.0 profile.)
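A minimal sketch of this first pattern as it might land in a sub-module pom (treat the snippet as illustrative of the shape, not the exact diff):

```xml
<!-- Sub-module pom.xml: illustrative sketch of the first pattern.
     Declaring the hadoop dependency under the hadoop-3.0 profile lets the
     flatten plugin resolve 'hadoop.version' definitively at build time. -->
<profiles>
  <profile>
    <id>hadoop-3.0</id>
    <activation>
      <property>
        <name>!hadoop.profile</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```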
Another spin on the above happens when profiles already exist
in the sub-module but the flatten plugin complains it can't
figure the version of a hadoop dependency NOT under the
profiles. In that case we move the delinquent hadoop dependency
under the existing profiles (minikdc was the usual dependency
outside profiles that flatten complained about).
Sometimes the hadoop dependency being moved under a profile
carried local excludes. Where the parent pom's excludes section
was missing those local excludes, we added them up to the parent
module so all excluding is done up there in the parent profile's
dependencyManagement section.
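A hedged sketch of that second pattern, with excludes hoisted to the parent profile so sub-modules only name the artifact (the specific exclusion shown is illustrative, not the exact diff):

```xml
<!-- Parent pom.xml, inside the hadoop-3.0 profile's dependencyManagement.
     Excludes are hoisted here so all excluding happens in one place. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minikdc</artifactId>
      <version>${hadoop.version}</version>
      <exclusions>
        <!-- Illustrative exclude; the actual excludes vary per module. -->
        <exclusion>
          <groupId>org.apache.directory.api</groupId>
          <artifactId>api-all</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</dependencyManagement>
```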
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Avoid the pattern where a Random object is allocated, used once or twice, and
then left for GC. This triggers warnings from some static analysis tools
because it leads to poor effective randomness. In a few cases we were
legitimately suffering from this issue; in others the change is still
worthwhile to reduce noise in analysis results.
Use ThreadLocalRandom, which gives good reuse, wherever there is no
requirement to set the seed.
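A minimal before/after sketch of that change (class and method names are illustrative, not from the patch):

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

class JitterExample { // illustrative, not a class from the patch
    // Anti-pattern: a Random allocated per call, used once, left for GC.
    long jitterBefore(long baseMillis) {
        return baseMillis + new Random().nextInt(1000);
    }

    // Preferred: ThreadLocalRandom keeps one well-seeded instance per
    // thread, so there is no per-call allocation to discard.
    long jitterAfter(long baseMillis) {
        return baseMillis + ThreadLocalRandom.current().nextInt(1000);
    }
}
```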
Where we don't need cryptographically strong randomness for the use case,
relax use of SecureRandom to plain Random or ThreadLocalRandom, which are
unlikely to block if the system entropy pool is low. The exception to this
is the normalized use of Bytes#random to fill byte arrays with randomness:
because Bytes#random may be used to generate key material, it must be
backed by SecureRandom.
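A minimal sketch of a Bytes#random-style helper backed by SecureRandom (the body is illustrative; org.apache.hadoop.hbase.util.Bytes holds the real implementation):

```java
import java.security.SecureRandom;

final class BytesSketch { // stand-in for org.apache.hadoop.hbase.util.Bytes
    // SecureRandom, not Random: callers may use the output as key material.
    private static final SecureRandom RNG = new SecureRandom();

    static void random(byte[] b) {
        RNG.nextBytes(b);
    }
}
```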
Signed-off-by: Duo Zhang <zhangduo@apache.org>
`CallDroppedException` can be thrown via `CallRunner.drop()` by queue implementations that decide to drop calls to groom the RPC call backlog. I believe the LifoCoDel queue does this, and with the Pluggable queue it's possible for third-party queue implementations to use `drop()` for similar reasons. It would be nice for the server to track these exceptions in metrics, since otherwise you might have to do some extra lifting on the client side.
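A hedged sketch of a pluggable queue grooming its backlog with `CallRunner.drop()` (the queue class and shedding policy are hypothetical; `drop()` is the real hook):

```java
import java.util.concurrent.LinkedBlockingDeque;
import org.apache.hadoop.hbase.ipc.CallRunner;

// Hypothetical pluggable call queue that sheds the oldest call when full.
// Each dropped call surfaces to its client as CallDroppedException, which
// the server can now count in metrics.
class ShedOldestCallQueue extends LinkedBlockingDeque<CallRunner> {
    ShedOldestCallQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(CallRunner incoming) {
        while (!super.offer(incoming)) {
            CallRunner oldest = super.poll();
            if (oldest != null) {
                oldest.drop(); // client sees CallDroppedException
            }
        }
        return true;
    }
}
```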
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Reviewed-by: Bryan Beaudreault <bbeaudreault@hubspot.com>
Introduces a new metric that tracks the number of replication sources that are stuck in initialization.
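A rough sketch of how such a gauge might be computed (the interface and method names are illustrative stand-ins, not the actual patch):

```java
import java.util.Collection;

// Illustrative stand-in for the replication source machinery.
interface SourceLike {
    boolean isSourceActive(); // true once initialization has completed
}

class StuckSourcesGaugeSketch {
    // Gauge value: sources that were created but never finished initializing.
    static int sourcesStuckInInit(Collection<? extends SourceLike> sources) {
        int stuck = 0;
        for (SourceLike s : sources) {
            if (!s.isSourceActive()) {
                stuck++;
            }
        }
        return stuck;
    }
}
```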
Signed-off-by: Xu Cang <xucang@apache.org>
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Add a new metric, rpcFullScanRequestCount, to track the number of requests that are full region scans. It can be used to prompt users to check whether such scans are truly intended.
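A hedged sketch of one way to detect a full region scan at request time (the helper is illustrative; a fuller check would also compare the scan range against the region's boundaries):

```java
import org.apache.hadoop.hbase.client.Scan;

class FullScanDetectorSketch {
    // A scan with neither a start row nor a stop row covers the entire
    // region, so requests like this would bump rpcFullScanRequestCount.
    static boolean isFullRegionScan(Scan scan) {
        return scan.getStartRow().length == 0 && scan.getStopRow().length == 0;
    }
}
```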
Signed-off-by: Anoop Sam John <anoopsamjohn@apache.org>
Signed-off-by: Ramkrishna S Vasudevan <ramkrishna@apache.org>
* Admin API getLogEntries() for ring buffer use-cases; so far it provides balancerDecision and slowLogResponse (usage sketch after this list)
* Refactor RPC call for similar use-cases
* Single RPC API getLogEntries() for both Master.proto and Admin.proto
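A hedged client-side usage sketch of the new API (the log type string, limit, and null arguments are illustrative assumptions about Admin#getLogEntries):

```java
import java.util.List;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.LogEntry;
import org.apache.hadoop.hbase.client.ServerType;

class GetLogEntriesSketch {
    // Fetch recent balancer decisions from the master's ring buffer.
    static void printBalancerDecisions(Admin admin) throws Exception {
        List<LogEntry> entries = admin.getLogEntries(
            null,                // server names; assumed optional here
            "BALANCER_DECISION", // ring buffer log type (assumed string)
            ServerType.MASTER,   // which server type to query
            100,                 // max entries to return
            null);               // extra filter params
        for (LogEntry entry : entries) {
            System.out.println(entry.toJsonPrettyPrint());
        }
    }
}
```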
Closes #2261
Signed-off-by: Andrew Purtell <apurtell@apache.org>