Makes MergeTableRegionsProcedure do more than just two regions at a
time. Compatible because MTRP was written with the expectation that
it would one day merge more than two regions at a time.
Changes the hardcoded assumption that the merge parent regions are
recorded as mergeA and mergeB columns on the resultant region. Instead
the merged region can carry N columns, one for each parent merged,
with all column qualifiers beginning with 'merge'.
Most of the code below undoes the assumption that a merge has exactly
two parents.
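For illustration only, a minimal sketch of how a reader can pick up
the merge-parent columns after this change, assuming the 'info'
catalog family and the 'merge' qualifier prefix described above; the
class and method names are invented, not the actual
MergeTableRegionsProcedure code:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: with N merge parents, the merged region's catalog row carries
    // one column per parent, all under qualifiers starting with "merge", so
    // readers collect by prefix instead of looking up exactly mergeA/mergeB.
    public final class MergeQualifiers {
      private static final byte[] CATALOG_FAMILY = Bytes.toBytes("info");
      private static final byte[] MERGE_PREFIX = Bytes.toBytes("merge");

      /** Returns the qualifiers of all merge-parent columns on a meta row. */
      public static List<byte[]> mergeParentQualifiers(Result metaRow) {
        List<byte[]> qualifiers = new ArrayList<>();
        for (Cell cell : metaRow.rawCells()) {
          if (CellUtil.matchingFamily(cell, CATALOG_FAMILY)
              && Bytes.startsWith(CellUtil.cloneQualifier(cell), MERGE_PREFIX)) {
            qualifiers.add(CellUtil.cloneQualifier(cell));
          }
        }
        return qualifiers;
      }
    }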
This is a reapply of a reverted commit. It includes the HBASE-22059
amendment and subsequent amendments to HBASE-22052. See HBASE-22052
for the full story.
jersey-core is problematic. It was included transitively from hadoop
and polluted our CLASSPATH with an implementation of the 1.x
javax.ws.rs.core.Response interface from jsr311-api when we want the
javax.ws.rs-api 2.x version.
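To make the conflict concrete, a hedged illustration (not HBase code;
JaxRs2Caller is an invented name): the fragment below compiles against
the 2.x API, but Response.readEntity(Class) does not exist on the 1.x
jsr311-api Response, so the 1.x class winning on the CLASSPATH shows
up at runtime, typically as a NoSuchMethodError.

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.Response;

    // Needs the JAX-RS 2.x javax.ws.rs.core.Response on the CLASSPATH.
    public final class JaxRs2Caller {
      public static String fetch(String uri) {
        Client client = ClientBuilder.newClient();
        try {
          Response response = client.target(uri).request().get();
          return response.readEntity(String.class); // 2.x-only method
        } finally {
          client.close();
        }
      }
    }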
M hbase-endpoint/pom.xml
M hbase-http/pom.xml
M hbase-mapreduce/pom.xml
M hbase-rest/pom.xml
M hbase-server/pom.xml
M hbase-zookeeper/pom.xml
Remove redundant version specifications (and the odd property
definition already done up in the parent pom).
M hbase-it/pom.xml
M hbase-rest/pom.xml
Exclude jersey-core explicitly.
M hbase-procedure/pom.xml
Remove redundant version and classifier.
M pom.xml
Add jersey-core exclusions to all dependencies that pull it in,
except hadoop-minicluster. MR tests fail without jersey-core, so let
it in for the minicluster and then, in the modules, exclude it where
it causes damage, as in hbase-it.
Also correct how the test does string conversion for region names that include non-printable characters.
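The test change itself is not reproduced here, but the usual safe
conversion is along these lines (a sketch; RegionNameToString is an
invented name), using Bytes.toStringBinary rather than relying on the
platform charset:

    import org.apache.hadoop.hbase.util.Bytes;

    public final class RegionNameToString {
      /** Region names can contain arbitrary bytes; escape them for display. */
      public static String printable(byte[] regionName) {
        // Renders non-printable bytes as \xNN escapes instead of mangling
        // them the way new String(regionName) with the default charset can.
        return Bytes.toStringBinary(regionName);
      }
    }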
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Close the scanner leak in the REST server when a row limit is used.
Example: when using URIs like
/sometable/*?limit=5&startrow=eGlND&endrow=eGlNE
where the scan range has more rows than the limit, the REST server
will start leaking memory.
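For illustration, a minimal sketch of the pattern the fix enforces,
not the REST server's actual resource/generator classes (LimitedScan
and scanWithLimit are invented names): the scanner must be closed even
when the limit stops the scan before the range is exhausted.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public final class LimitedScan {
      public static List<Result> scanWithLimit(Connection conn, TableName table,
          byte[] startRow, byte[] stopRow, int limit) throws IOException {
        Scan scan = new Scan().withStartRow(startRow).withStopRow(stopRow);
        List<Result> results = new ArrayList<>(limit);
        // try-with-resources guarantees the scanner is closed even when we
        // break out early; leaving it open is what leaks on the server side.
        try (Table t = conn.getTable(table);
            ResultScanner scanner = t.getScanner(scan)) {
          for (Result r : scanner) {
            results.add(r);
            if (results.size() >= limit) {
              break; // close() still runs via try-with-resources
            }
          }
        }
        return results;
      }
    }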
Signed-off-by: Andrew Purtell <apurtell@apache.org>
* modify the jar checking script to take args; make hadoop stuff optional
* separate out checking the artifacts that have hadoop vs those that don't.
* * Unfortunately this means we need two modules for checking things
* * put in a safety check that the support script for checking jar contents is maintained in both modules
* * have to carve out an exception for o.a.hadoop.metrics2. :(
* fix duplicated class warning
* clean up dependencies in hbase-server and some modules that depend on it.
* allow Hadoop to have its own htrace where it needs it
* add a precommit check to make sure we're not using old htrace imports
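The precommit hook itself is not shown here; as a hedged illustration
of the rule it enforces (assuming 'old htrace imports' means the
pre-4.x org.apache.htrace.* packages as opposed to
org.apache.htrace.core.*), a throwaway checker could look like this
(HTraceImportCheck is an invented name):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public final class HTraceImportCheck {
      public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.walk(root)) {
          files.filter(p -> p.toString().endsWith(".java"))
               .filter(HTraceImportCheck::usesOldHtrace)
               .forEach(p -> System.out.println("Old htrace import: " + p));
        }
      }

      private static boolean usesOldHtrace(Path p) {
        try (Stream<String> lines = Files.lines(p)) {
          // htrace 3.x classes live in org.apache.htrace.*; htrace 4.x
          // moved them under org.apache.htrace.core.*.
          return lines.anyMatch(l -> l.trim().startsWith("import org.apache.htrace.")
              && !l.trim().startsWith("import org.apache.htrace.core."));
        } catch (IOException e) {
          return false;
        }
      }
    }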