<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <!--
  /**
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements. See the NOTICE file
   * distributed with this work for additional information
   * regarding copyright ownership. The ASF licenses this file
   * to you under the Apache License, Version 2.0 (the
   * "License"); you may not use this file except in compliance
   * with the License. You may obtain a copy of the License at
   *
   *     http://www.apache.org/licenses/LICENSE-2.0
   *
   * Unless required by applicable law or agreed to in writing, software
   * distributed under the License is distributed on an "AS IS" BASIS,
   * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
  ON MVN COMPILE NOT WORKING
  If you are wondering why 'mvn compile' does not work building HBase
  (in particular, if you are doing it for the first time), instead do
  'mvn package'. If you are interested in the full story, see
  https://issues.apache.org/jira/browse/HBASE-6795.
  -->
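  <!-- For example, a typical first full build is run as
         mvn clean package -DskipTests
       (flags vary with what you are after; this is just an illustration). -->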
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>18</version>
    <relativePath/>
    <!-- no parent resolution -->
  </parent>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <packaging>pom</packaging>
  <version>2.0.0-alpha2</version>
  <name>Apache HBase</name>
  <description>
    Apache HBase is the Hadoop database. Use it when you need
    random, realtime read/write access to your Big Data.
    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters
    of commodity hardware.
  </description>
  <url>http://hbase.apache.org</url>
  <inceptionYear>2007</inceptionYear>
  <!-- Set here so we can consistently use the correct name, even on branches with
       an ASF parent pom older than v15. Also uses the url from v18.
    -->
  <licenses>
    <license>
      <name>Apache License, Version 2.0</name>
      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
    </license>
  </licenses>
  <modules>
    <module>hbase-resource-bundle</module>
    <module>hbase-server</module>
    <module>hbase-thrift</module>
    <module>hbase-shell</module>
    <module>hbase-protocol-shaded</module>
    <module>hbase-protocol</module>
    <module>hbase-client</module>
    <module>hbase-hadoop-compat</module>
    <module>hbase-common</module>
    <module>hbase-procedure</module>
    <module>hbase-endpoint</module>
    <module>hbase-it</module>
    <module>hbase-examples</module>
    <module>hbase-prefix-tree</module>
    <module>hbase-assembly</module>
    <module>hbase-testing-util</module>
    <module>hbase-annotations</module>
    <module>hbase-rest</module>
    <module>hbase-checkstyle</module>
    <module>hbase-external-blockcache</module>
    <module>hbase-shaded</module>
    <module>hbase-spark</module>
    <module>hbase-archetypes</module>
    <module>hbase-metrics-api</module>
    <module>hbase-metrics</module>
    <module>hbase-spark-it</module>
  </modules>
  <!-- Add apache snapshots in case we want to use unreleased versions of plugins:
       e.g. surefire 2.18-SNAPSHOT -->
  <pluginRepositories>
    <pluginRepository>
      <id>apache.snapshots</id>
      <url>http://repository.apache.org/snapshots/</url>
    </pluginRepository>
  </pluginRepositories>
  <scm>
    <connection>scm:git:git://git.apache.org/hbase.git</connection>
    <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/hbase.git</developerConnection>
    <url>https://git-wip-us.apache.org/repos/asf?p=hbase.git</url>
  </scm>
  <issueManagement>
    <system>JIRA</system>
    <url>http://issues.apache.org/jira/browse/HBASE</url>
  </issueManagement>
  <ciManagement>
    <system>hudson</system>
    <url>http://hudson.zones.apache.org/hudson/view/HBase/job/HBase-TRUNK/</url>
  </ciManagement>
  <mailingLists>
    <mailingList>
      <name>User List</name>
      <subscribe>user-subscribe@hbase.apache.org</subscribe>
      <unsubscribe>user-unsubscribe@hbase.apache.org</unsubscribe>
      <post>user@hbase.apache.org</post>
      <archive>http://mail-archives.apache.org/mod_mbox/hbase-user/</archive>
      <otherArchives>
        <otherArchive>http://dir.gmane.org/gmane.comp.java.hadoop.hbase.user</otherArchive>
        <otherArchive>http://search-hadoop.com/?q=&amp;fc_project=HBase</otherArchive>
      </otherArchives>
    </mailingList>
    <mailingList>
      <name>Developer List</name>
      <subscribe>dev-subscribe@hbase.apache.org</subscribe>
      <unsubscribe>dev-unsubscribe@hbase.apache.org</unsubscribe>
      <post>dev@hbase.apache.org</post>
      <archive>http://mail-archives.apache.org/mod_mbox/hbase-dev/</archive>
      <otherArchives>
        <otherArchive>http://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel</otherArchive>
        <otherArchive>http://search-hadoop.com/?q=&amp;fc_project=HBase</otherArchive>
      </otherArchives>
    </mailingList>
    <mailingList>
      <name>Commits List</name>
      <subscribe>commits-subscribe@hbase.apache.org</subscribe>
      <unsubscribe>commits-unsubscribe@hbase.apache.org</unsubscribe>
      <archive>http://mail-archives.apache.org/mod_mbox/hbase-commits/</archive>
    </mailingList>
    <mailingList>
      <name>Issues List</name>
      <subscribe>issues-subscribe@hbase.apache.org</subscribe>
      <unsubscribe>issues-unsubscribe@hbase.apache.org</unsubscribe>
      <archive>http://mail-archives.apache.org/mod_mbox/hbase-issues/</archive>
    </mailingList>
    <mailingList>
      <name>Builds List</name>
      <subscribe>builds-subscribe@hbase.apache.org</subscribe>
      <unsubscribe>builds-unsubscribe@hbase.apache.org</unsubscribe>
      <archive>http://mail-archives.apache.org/mod_mbox/hbase-builds/</archive>
    </mailingList>
  </mailingLists>
  <developers>
    <developer>
      <id>acube123</id>
      <name>Amitanand S. Aiyer</name>
      <email>acube123@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>appy</id>
      <name>Apekshit Sharma</name>
      <email>appy@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>anastasia</id>
      <name>Anastasia Braginsky</name>
      <email>anastasia@apache.org</email>
      <timezone>+2</timezone>
    </developer>
    <developer>
      <id>apurtell</id>
      <name>Andrew Purtell</name>
      <email>apurtell@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>anoopsamjohn</id>
      <name>Anoop Sam John</name>
      <email>anoopsamjohn@apache.org</email>
      <timezone>+5</timezone>
    </developer>
    <developer>
      <id>antonov</id>
      <name>Mikhail Antonov</name>
      <email>antonov@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>ashishsinghi</id>
      <name>Ashish Singhi</name>
      <email>ashishsinghi@apache.org</email>
      <timezone>+5</timezone>
    </developer>
    <developer>
      <id>binlijin</id>
      <name>Lijin Bin</name>
      <email>binlijin@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>busbey</id>
      <name>Sean Busbey</name>
      <email>busbey@apache.org</email>
      <timezone>-6</timezone>
    </developer>
    <developer>
      <id>chenheng</id>
      <name>Heng Chen</name>
      <email>chenheng@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>chia7712</id>
      <name>Chia-Ping Tsai</name>
      <email>chia7712@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>ddas</id>
      <name>Devaraj Das</name>
      <email>ddas@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>dimaspivak</id>
      <name>Dima Spivak</name>
      <email>dimaspivak@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>dmeil</id>
      <name>Doug Meil</name>
      <email>dmeil@apache.org</email>
      <timezone>-5</timezone>
    </developer>
    <developer>
      <id>eclark</id>
      <name>Elliott Clark</name>
      <email>eclark@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>elserj</id>
      <name>Josh Elser</name>
      <email>elserj@apache.org</email>
      <timezone>-5</timezone>
    </developer>
    <developer>
      <id>enis</id>
      <name>Enis Soztutar</name>
      <email>enis@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>eshcar</id>
      <name>Eshcar Hillel</name>
      <email>eshcar@apache.org</email>
      <timezone>+2</timezone>
    </developer>
    <developer>
      <id>fenghh</id>
      <name>Honghua Feng</name>
      <email>fenghh@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>garyh</id>
      <name>Gary Helmling</name>
      <email>garyh@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>gchanan</id>
      <name>Gregory Chanan</name>
      <email>gchanan@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jdcryans</id>
      <name>Jean-Daniel Cryans</name>
      <email>jdcryans@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jeffreyz</id>
      <name>Jeffrey Zhong</name>
      <email>jeffreyz@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jerryjch</id>
      <name>Jing Chen (Jerry) He</name>
      <email>jerryjch@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jyates</id>
      <name>Jesse Yates</name>
      <email>jyates@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jgray</id>
      <name>Jonathan Gray</name>
      <email>jgray@fb.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jingchengdu</id>
      <name>Jingcheng Du</name>
      <email>jingchengdu@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>esteban</id>
      <name>Esteban Gutierrez</name>
      <email>esteban@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jmhsieh</id>
      <name>Jonathan Hsieh</name>
      <email>jmhsieh@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>jxiang</id>
      <name>Jimmy Xiang</name>
      <email>jxiang@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>kannan</id>
      <name>Kannan Muthukkaruppan</name>
      <email>kannan@fb.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>karthik</id>
      <name>Karthik Ranganathan</name>
      <email>kranganathan@fb.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>larsgeorge</id>
      <name>Lars George</name>
      <email>larsgeorge@apache.org</email>
      <timezone>+1</timezone>
    </developer>
    <developer>
      <id>larsh</id>
      <name>Lars Hofhansl</name>
      <email>larsh@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>liangxie</id>
      <name>Liang Xie</name>
      <email>liangxie@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>liushaohui</id>
      <name>Shaohui Liu</name>
      <email>liushaohui@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>liyin</id>
      <name>Liyin Tang</name>
      <email>liyin.tang@fb.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>liyu</id>
      <name>Yu Li</name>
      <email>liyu@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>mbautin</id>
      <name>Mikhail Bautin</name>
      <email>mbautin@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>mbertozzi</id>
      <name>Matteo Bertozzi</name>
      <email>mbertozzi@apache.org</email>
      <timezone>0</timezone>
    </developer>
    <developer>
      <id>misty</id>
      <name>Misty Stanley-Jones</name>
      <email>misty@apache.org</email>
      <timezone>+10</timezone>
    </developer>
    <developer>
      <id>ndimiduk</id>
      <name>Nick Dimiduk</name>
      <email>ndimiduk@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>nkeywal</id>
      <name>Nicolas Liochon</name>
      <email>nkeywal@apache.org</email>
      <timezone>+1</timezone>
    </developer>
    <developer>
      <id>nspiegelberg</id>
      <name>Nicolas Spiegelberg</name>
      <email>nspiegelberg@fb.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>octo47</id>
      <name>Andrey Stepachev</name>
      <email>octo47@gmail.com</email>
      <timezone>0</timezone>
    </developer>
    <developer>
      <id>rajeshbabu</id>
      <name>Rajeshbabu Chintaguntla</name>
      <email>rajeshbabu@apache.org</email>
      <timezone>+5</timezone>
    </developer>
    <developer>
      <id>ramkrishna</id>
      <name>Ramkrishna S Vasudevan</name>
      <email>ramkrishna@apache.org</email>
      <timezone>+5</timezone>
    </developer>
    <developer>
      <id>rawson</id>
      <name>Ryan Rawson</name>
      <email>rawson@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>sershe</id>
      <name>Sergey Shelukhin</name>
      <email>sershe@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>ssrungarapu</id>
      <name>Srikanth Srungarapu</name>
      <email>ssrungarapu@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>stack</id>
      <name>Michael Stack</name>
      <email>stack@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>syuanjiang</id>
      <name>Stephen Yuan Jiang</name>
      <email>syuanjiang@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>tedyu</id>
      <name>Ted Yu</name>
      <email>yuzhihong@gmail.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>todd</id>
      <name>Todd Lipcon</name>
      <email>todd@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>toffer</id>
      <name>Francis Liu</name>
      <email>toffer@apache.org</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>virag</id>
      <name>Virag Kothari</name>
      <email>virag@yahoo-inc.com</email>
      <timezone>-8</timezone>
    </developer>
    <developer>
      <id>yangzhe1991</id>
      <name>Phil Yang</name>
      <email>yangzhe1991@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>zghao</id>
      <name>Guanghao Zhang</name>
      <email>zghao@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>zhangduo</id>
      <name>Duo Zhang</name>
      <email>zhangduo@apache.org</email>
      <timezone>+8</timezone>
    </developer>
    <developer>
      <id>zjushch</id>
      <name>Chunhui Shen</name>
      <email>zjushch@apache.org</email>
      <timezone>+8</timezone>
    </developer>
  </developers>
  <build>
    <extensions>
      <extension>
        <groupId>kr.motd.maven</groupId>
        <artifactId>os-maven-plugin</artifactId>
        <version>${os.maven.version}</version>
      </extension>
    </extensions>
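    <!-- The os-maven-plugin extension above detects the build platform and exposes
         properties such as ${os.detected.classifier}, which the protobuf-maven-plugin
         configured below uses to resolve a protoc binary matching the build machine. -->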
    <!-- Plugin versions are inherited from ASF parent pom: https://maven.apache.org/pom/asf/
         For specific version use a property and define it in the parent pom.
      -->
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-javadoc-plugin</artifactId>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-remote-resources-plugin</artifactId>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-shade-plugin</artifactId>
          <version>${maven.shade.version}</version>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-release-plugin</artifactId>
          <configuration>
            <!-- You need this profile. It'll sign your artifacts.
                 I'm not sure if this config. actually works though.
                 I've been specifying -Papache-release on the command-line
              -->
            <releaseProfiles>apache-release</releaseProfiles>
            <!-- This stops our running tests for each stage of maven release.
                 But it builds the test jar. From SUREFIRE-172.
              -->
            <arguments>-Dmaven.test.skip.exec ${arguments}</arguments>
            <goals>${goals}</goals>
            <pomFileName>pom.xml</pomFileName>
          </configuration>
        </plugin>
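        <!-- Illustration only, not a prescribed release procedure: with the configuration
             above, a release is typically driven from the command line along the lines of
               mvn -Papache-release release:prepare release:perform
          -->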
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>${maven.compiler.version}</version>
          <configuration>
            <source>${compileSource}</source>
            <target>${compileSource}</target>
            <showWarnings>true</showWarnings>
            <showDeprecation>false</showDeprecation>
            <useIncrementalCompilation>false</useIncrementalCompilation>
            <compilerArgument>-Xlint:-options</compilerArgument>
          </configuration>
        </plugin>
        <!-- Test oriented plugins -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <version>${surefire.version}</version>
          <dependencies>
            <!-- By default surefire dynamically selects the connector to the unit test
                 tool. We want to always use the same one, as the different connectors can
                 have different bugs and behaviour. -->
            <dependency>
              <groupId>org.apache.maven.surefire</groupId>
              <artifactId>${surefire.provider}</artifactId>
              <version>${surefire.version}</version>
            </dependency>
</dependencies>
          <!-- Generic testing configuration for all packages -->
          <configuration>
            <groups>${surefire.firstPartGroups}</groups>
            <failIfNoTests>false</failIfNoTests>
            <skip>${surefire.skipFirstPart}</skip>
            <forkCount>${surefire.firstPartForkCount}</forkCount>
            <reuseForks>false</reuseForks>
            <testFailureIgnore>${surefire.testFailureIgnore}</testFailureIgnore>
            <forkedProcessTimeoutInSeconds>${surefire.timeout}</forkedProcessTimeoutInSeconds>
            <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
            <systemPropertyVariables>
              <test.build.classes>${test.build.classes}</test.build.classes>
              <!-- For shaded netty, to find the relocated .so.
                   Trick from
                   https://stackoverflow.com/questions/33825743/rename-files-inside-a-jar-using-some-maven-plugin
                   The netty jar has a .so in it. Shading requires rename of the .so and then passing a system
                   property so netty finds the renamed .so and associates it w/ the relocated netty files.
                   The relocated netty is in the hbase-thirdparty dependency. Just set this property globally
                   rather than per module.
                -->
              <org.apache.hadoop.hbase.shaded.io.netty.packagePrefix>org.apache.hadoop.hbase.shaded.</org.apache.hadoop.hbase.shaded.io.netty.packagePrefix>
            </systemPropertyVariables>
            <excludes>
              <!-- Users can add a -D option to skip particular test classes, e.g.:
                   mvn test -Dtest.exclude.pattern=**/TestFoo.java,**/TestBar.java
                -->
              <exclude>${test.exclude.pattern}</exclude>
            </excludes>
          </configuration>
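          <!-- Note: the test run is split in two. The default configuration above runs the
               ${surefire.firstPartGroups} categories; the secondPartTestsExecution below
               runs ${surefire.secondPartGroups}. The surefire.skipFirstPart and
               surefire.skipSecondPart properties control which half actually executes. -->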
          <executions>
            <execution>
              <id>secondPartTestsExecution</id>
              <phase>test</phase>
              <goals>
                <goal>test</goal>
              </goals>
              <configuration>
                <skip>${surefire.skipSecondPart}</skip>
                <testFailureIgnore>${surefire.testFailureIgnore}</testFailureIgnore>
                <reuseForks>false</reuseForks>
                <forkCount>${surefire.secondPartForkCount}</forkCount>
                <groups>${surefire.secondPartGroups}</groups>
                <forkedProcessTimeoutInSeconds>${surefire.timeout}</forkedProcessTimeoutInSeconds>
              </configuration>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-report-plugin</artifactId>
          <version>${surefire.version}</version>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-clean-plugin</artifactId>
          <configuration>
            <filesets>
              <fileset>
                <!-- dfs tests have build dir hardcoded. Clean it as part of
                     clean target -->
                <directory>build</directory>
              </fileset>
            </filesets>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>buildnumber-maven-plugin</artifactId>
          <version>${buildnumber.maven.version}</version>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>${findbugs.maven.version}</version>
          <!-- NOTE: Findbugs 3.0.0 requires jdk7 -->
          <configuration>
            <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
            <findbugsXmlOutput>true</findbugsXmlOutput>
            <xmlOutput>true</xmlOutput>
            <effort>Max</effort>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>build-helper-maven-plugin</artifactId>
          <version>${build.helper.maven.version}</version>
        </plugin>
        <plugin>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>${maven.antrun.version}</version>
        </plugin>
        <plugin>
          <groupId>org.jamon</groupId>
          <artifactId>jamon-maven-plugin</artifactId>
          <version>${jamon.plugin.version}</version>
        </plugin>
        <!-- Make a jar and put the sources in the jar.
             In the parent pom, so submodules will do the right thing. -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
          <executions>
            <execution>
              <id>attach-sources</id>
              <phase>prepare-package</phase>
              <goals>
                <goal>jar-no-fork</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <!-- General configuration for submodules who want to build a test jar -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <executions>
            <execution>
              <phase>prepare-package</phase>
              <goals>
                <!-- This goal will install a -test.jar when we do install
                     See http://maven.apache.org/guides/mini/guide-attached-tests.html
                  -->
                <goal>test-jar</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <skipIfEmpty>true</skipIfEmpty>
            <excludes>
              <exclude>hbase-site.xml</exclude>
              <exclude>hdfs-site.xml</exclude>
              <exclude>log4j.properties</exclude>
              <exclude>mapred-queues.xml</exclude>
              <exclude>mapred-site.xml</exclude>
              <!-- I was seeing this w/o the below addition:
                   [ERROR] Failed to execute goal org.apache.maven.plugins:maven-jar-plugin:2.4:jar (default-jar) on project hbase-protocol-shaded: Error assembling JAR: A zip file cannot include itself -> [Help 1]
                -->
              <exclude>*.jar</exclude>
            </excludes>
          </configuration>
        </plugin>
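        <!-- For example (illustrative snippet, not a dependency declared here), another
             module consumes one of these attached test jars with:
             <dependency>
               <groupId>org.apache.hbase</groupId>
               <artifactId>hbase-server</artifactId>
               <version>${project.version}</version>
               <type>test-jar</type>
               <scope>test</scope>
             </dependency>
          -->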
        <!-- General config for eclipse classpath/settings -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-eclipse-plugin</artifactId>
          <version>${maven.eclipse.version}</version>
        </plugin>
        <!-- This plugin's configuration is used to store Eclipse m2e settings
             only. It has no influence on the Maven build itself. m2e does not
             provide any safeguards against rogue maven plugins that leak
             classloaders, modify random files inside workspace or throw nasty
             exceptions to fail the build.
             Top level doesn't do any specific configuration currently - left
             to modules to decide what they want to bind, sans those plugins
             defined in this pom. -->
        <plugin>
          <groupId>org.eclipse.m2e</groupId>
          <artifactId>lifecycle-mapping</artifactId>
          <version>${lifecycle.mapping.version}</version>
          <configuration>
            <lifecycleMappingMetadata>
              <pluginExecutions>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.jacoco</groupId>
                    <artifactId>jacoco-maven-plugin</artifactId>
                    <versionRange>[0.6.2.201302030002,)</versionRange>
                    <goals>
                      <goal>prepare-agent</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <ignore/>
                  </action>
                </pluginExecution>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-enforcer-plugin</artifactId>
                    <versionRange>[1.0.1,)</versionRange>
                    <goals>
                      <goal>enforce</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <ignore/>
                  </action>
                </pluginExecution>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-remote-resources-plugin</artifactId>
                    <versionRange>[1.5,)</versionRange>
                    <goals>
                      <goal>process</goal>
                      <goal>bundle</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <ignore/>
                  </action>
                </pluginExecution>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>buildnumber-maven-plugin</artifactId>
                    <versionRange>[1.3,)</versionRange>
                    <goals>
                      <goal>create-timestamp</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <execute>
                      <runOnConfiguration>true</runOnConfiguration>
                      <runOnIncremental>true</runOnIncremental>
                    </execute>
                  </action>
                </pluginExecution>
              </pluginExecutions>
            </lifecycleMappingMetadata>
          </configuration>
        </plugin>
2012-06-03 17:59:50 -04:00
<plugin >
<!-- excludes are inherited -->
<groupId > org.apache.rat</groupId>
<artifactId > apache-rat-plugin</artifactId>
2017-06-29 09:37:22 -04:00
<version > ${apache.rat.version}</version>
2012-02-11 17:12:23 -05:00
<configuration >
<excludes >
<exclude > **/*.versionsBackup</exclude>
<exclude > **/*.log</exclude>
<exclude > **/.*</exclude>
<exclude > **/*.tgz</exclude>
<exclude > **/*.orig</exclude>
<exclude > **/8e8ab58dcf39412da19833fcd8f687ac</exclude>
<exclude > **/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c</exclude>
<exclude > **/0000000000000016310</exclude>
<exclude > **/.idea/**</exclude>
<exclude > **/*.iml</exclude>
<exclude > **/CHANGES.txt</exclude>
<exclude > **/generated/**</exclude>
<exclude > **/gen-*/**</exclude>
<!-- No material contents -->
<exclude > conf/regionservers</exclude>
<exclude > **/*.avpr</exclude>
<exclude > **/*.svg</exclude>
<!-- non-standard notice file from jruby included by reference -->
<exclude > **/src/main/resources/META-INF/LEGAL</exclude>
<!-- MIT: https://github.com/asciidoctor/asciidoctor/blob/master/LICENSE.adoc -->
<exclude > **/src/main/asciidoc/hbase.css</exclude>
<!-- MIT http://jquery.org/license -->
<exclude > **/jquery.min.js</exclude>
<!-- vector graphics -->
<exclude > **/*.vm</exclude>
<!-- apache doxia generated -->
<exclude > **/control</exclude>
<exclude > **/conffile</exclude>
<!-- auto-gen docs -->
<exclude > docs/*</exclude>
<exclude > logs/*</exclude>
<!-- exclude source control files -->
<exclude > .git/**</exclude>
<exclude > .svn/**</exclude>
<exclude > **/.settings/**</exclude>
<exclude > **/patchprocess/**</exclude>
<exclude > src/main/site/resources/repo/**</exclude>
<exclude > **/dependency-reduced-pom.xml</exclude>
<exclude > **/rat.txt</exclude>
<!-- exclude the shaded protobuf files -->
<exclude > **/shaded/com/google/protobuf/**</exclude>
<exclude > **/src/main/patches/**</exclude>
</excludes>
</configuration>
</plugin>
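<!-- Example: with the excludes above in place, the license audit can be run on demand with
the rat plugin's check goal, i.e. "mvn apache-rat:check"; by default the report is written
to target/rat.txt (which is itself excluded above). -->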
<plugin >
<artifactId > maven-assembly-plugin</artifactId>
<configuration >
<!-- Defer to the hbase-assembly sub-module. It does all assembly -->
<skipAssembly > true</skipAssembly>
</configuration>
</plugin>
<plugin >
<groupId > org.xolstice.maven.plugins</groupId>
<artifactId > protobuf-maven-plugin</artifactId>
<version > ${protobuf.plugin.version}</version>
<configuration >
<protocArtifact > com.google.protobuf:protoc:${external.protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
<protoSourceRoot > ${basedir}/src/main/protobuf/</protoSourceRoot>
<clearOutputDirectory > false</clearOutputDirectory>
<checkStaleness > true</checkStaleness>
</configuration>
</plugin>
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-checkstyle-plugin</artifactId>
<version > ${maven.checkstyle.version}</version>
<dependencies >
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-checkstyle</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > com.puppycrawl.tools</groupId>
<artifactId > checkstyle</artifactId>
<version > ${checkstyle.version}</version>
</dependency>
</dependencies>
<configuration >
<configLocation > hbase/checkstyle.xml</configLocation>
<suppressionsLocation > hbase/checkstyle-suppressions.xml</suppressionsLocation>
</configuration>
</plugin>
</plugins>
</pluginManagement>
<plugins >
<plugin >
<groupId > org.codehaus.mojo</groupId>
<artifactId > build-helper-maven-plugin</artifactId>
<executions >
<execution >
<id > negate-license-bundles-property</id>
<goals >
<goal > bsh-property</goal>
</goals>
<configuration >
<source > skip.license.check = !${license.bundles.dependencies};</source>
<properties >
<property > skip.license.check</property>
</properties>
</configuration>
</execution>
</executions>
</plugin>
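<!-- The bsh-property execution above derives skip.license.check as the boolean negation of
license.bundles.dependencies, so the enforcer's check-aggregate-license execution below only
runs the LICENSE check for modules that actually bundle their dependencies. -->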
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-enforcer-plugin</artifactId>
<dependencies >
<dependency >
<groupId > org.codehaus.mojo</groupId>
<artifactId > extra-enforcer-rules</artifactId>
<version > ${extra.enforcer.version}</version>
</dependency>
</dependencies>
<!-- version set by parent -->
<executions >
<execution >
<id > min-maven-min-java-banned-xerces</id>
<goals >
<goal > enforce</goal>
</goals>
<configuration >
<rules >
<!-- The earliest maven version we verify builds for via ASF Jenkins -->
<requireMavenVersion >
<version > [${maven.min.version},)</version>
<message > Maven is out of date.
HBase requires at least version ${maven.min.version} of Maven to properly build from source.
You appear to be using an older version. You can use either "mvn -version" or
"mvn enforcer:display-info" to verify what version is active.
See the reference guide on building for more information: http://hbase.apache.org/book.html#build
</message>
</requireMavenVersion>
<!-- The earliest JVM version we verify builds for via ASF Jenkins -->
<requireJavaVersion >
<version > [${java.min.version},)</version>
<message > Java is out of date.
HBase requires at least version ${java.min.version} of the JDK to properly build from source.
You appear to be using an older version. You can use either "mvn -version" or
"mvn enforcer:display-info" to verify what version is active.
See the reference guide on building for more information: http://hbase.apache.org/book.html#build
</message>
</requireJavaVersion>
<bannedDependencies >
<excludes >
<exclude > xerces:xercesImpl</exclude>
</excludes>
<message > We avoid adding our own Xerces jars to the classpath, see HBASE-16340.</message>
</bannedDependencies>
</rules>
</configuration>
</execution>
<execution >
<id > banned-jsr305</id>
<goals >
<goal > enforce</goal>
</goals>
<configuration >
<rules >
<bannedDependencies >
<excludes >
<exclude > com.google.code.findbugs:jsr305</exclude>
</excludes>
<message > We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.</message>
</bannedDependencies>
</rules>
</configuration>
</execution>
<execution >
<id > banned-scala</id>
<goals >
<goal > enforce</goal>
</goals>
<configuration >
<rules >
<bannedDependencies >
<excludes >
<exclude > org.scala-lang:scala-library</exclude>
</excludes>
<message > We don't allow Scala outside of the hbase-spark module, see HBASE-13992.</message>
</bannedDependencies>
</rules>
</configuration>
</execution>
<execution >
<id > banned-hbase-spark</id>
<goals >
<goal > enforce</goal>
</goals>
<configuration >
<rules >
<bannedDependencies >
<excludes >
<exclude > org.apache.hbase:hbase-spark</exclude>
</excludes>
<message > We don't allow other modules to depend on hbase-spark, see HBASE-13992.</message>
</bannedDependencies>
</rules>
</configuration>
</execution>
<execution >
<id > check-aggregate-license</id>
<!-- must check after LICENSE is built at 'generate-resources' -->
<phase > process-resources</phase>
<goals >
<goal > enforce</goal>
</goals>
<configuration >
<rules >
<evaluateBeanshell >
<condition >
File license = new File("${license.aggregate.path}");
// Beanshell does not support try-with-resources,
// so we must close this scanner manually
Scanner scanner = new Scanner(license);
while (scanner.hasNextLine()) {
if (scanner.nextLine().startsWith("ERROR:")) {
scanner.close();
return false;
}
}
scanner.close();
return true;
</condition>
<message >
License errors detected, for more detail find ERROR in
${license.aggregate.path}
</message>
</evaluateBeanshell>
</rules>
<skip > ${skip.license.check}</skip>
</configuration>
</execution>
</executions>
</plugin>
<!-- parent-module only plugins -->
<plugin >
<groupId > org.codehaus.mojo</groupId>
<artifactId > xml-maven-plugin</artifactId>
<version > ${xml.maven.version}</version>
<inherited > false</inherited>
<executions >
<execution >
<!-- Run hbase-default.xml through a stylesheet so we can show it in the docs -->
<goals >
<goal > transform</goal>
</goals>
<phase > site</phase>
</execution>
</executions>
<configuration >
<transformationSets >
<!-- For asciidoc -->
<transformationSet >
<!-- Reaching up and over into common sub-module for hbase-default.xml -->
<dir > ${basedir}/hbase-common/src/main/resources/</dir>
<includes >
<include > hbase-default.xml</include>
</includes>
<stylesheet > ${basedir}/src/main/xslt/configuration_to_asciidoc_chapter.xsl</stylesheet>
<fileMappers >
<fileMapper implementation= "org.codehaus.plexus.components.io.filemappers.RegExpFileMapper" >
<pattern > ^(.*)\.xml$</pattern>
<replacement > $1.adoc</replacement>
</fileMapper>
</fileMappers>
<outputDir > ${basedir}/target/asciidoc</outputDir>
</transformationSet>
</transformationSets>
</configuration>
</plugin>
<!-- Special configuration for findbugs just in the parent so
the filter file location can be more general (see definition in pluginManagement) -->
<plugin >
<groupId > org.codehaus.mojo</groupId>
<artifactId > findbugs-maven-plugin</artifactId>
<executions >
<execution >
<inherited > false</inherited>
<goals >
<goal > findbugs</goal>
</goals>
<configuration >
<excludeFilterFile > ${basedir}/dev-support/findbugs-exclude.xml</excludeFilterFile>
</configuration>
</execution>
</executions>
</plugin>
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-checkstyle-plugin</artifactId>
<dependencies >
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-checkstyle</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > com.puppycrawl.tools</groupId>
<artifactId > checkstyle</artifactId>
<version > ${checkstyle.version}</version>
</dependency>
</dependencies>
<configuration >
<configLocation > hbase/checkstyle.xml</configLocation>
<suppressionsLocation > hbase/checkstyle-suppressions.xml</suppressionsLocation>
</configuration>
</plugin>
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-site-plugin</artifactId>
<version > ${maven.site.version}</version>
<inherited > false</inherited>
<dependencies >
<dependency >
<!-- add support for ssh/scp -->
<groupId > org.apache.maven.wagon</groupId>
<artifactId > wagon-ssh</artifactId>
<version > ${wagon.ssh.version}</version>
</dependency>
</dependencies>
<configuration >
<siteDirectory > ${basedir}/src/main/site</siteDirectory>
<customBundle > ${basedir}/src/main/site/custom/project-info-report.properties</customBundle>
<inputEncoding > UTF-8</inputEncoding>
<outputEncoding > UTF-8</outputEncoding>
</configuration>
</plugin>
<!-- For AsciiDoc docs building -->
<plugin >
<groupId > org.asciidoctor</groupId>
<artifactId > asciidoctor-maven-plugin</artifactId>
<version > ${asciidoctor.plugin.version}</version>
<inherited > false</inherited>
<dependencies >
<dependency >
<groupId > org.asciidoctor</groupId>
<artifactId > asciidoctorj-pdf</artifactId>
<version > ${asciidoctorj.pdf.version}</version>
</dependency>
</dependencies>
<configuration >
<outputDirectory > ${project.reporting.outputDirectory}/</outputDirectory>
<doctype > book</doctype>
<imagesDir > images</imagesDir>
<sourceHighlighter > coderay</sourceHighlighter>
<attributes >
<docVersion > ${project.version}</docVersion>
</attributes>
</configuration>
<executions >
<execution >
<id > output-html</id>
<phase > site</phase>
<goals >
<goal > process-asciidoc</goal>
</goals>
<configuration >
<attributes >
<stylesheet > hbase.css</stylesheet>
</attributes>
<backend > html5</backend>
</configuration>
</execution>
<execution >
<id > output-pdf</id>
<phase > site</phase>
<goals >
<goal > process-asciidoc</goal>
</goals>
<configuration >
<backend > pdf</backend>
<attributes >
<pagenums />
<toc />
<idprefix />
<idseparator > -</idseparator>
</attributes>
</configuration>
</execution>
</executions>
</plugin>
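<!-- Example: both executions above are bound to the site phase, so running "mvn site" on this
parent module renders the asciidoc sources to HTML and to book.pdf under
${project.reporting.outputDirectory}; the antrun execution further below renames the PDF to
apache_hbase_reference_guide.pdf. -->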
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-resources-plugin</artifactId>
<!-- $NO-MVN-MAN-VER$ -->
<inherited > false</inherited>
<executions >
<execution >
<id > copy-htaccess</id>
<goals >
<goal > copy-resources</goal>
</goals>
<phase > site</phase>
<configuration >
<outputDirectory > ${project.reporting.outputDirectory}/</outputDirectory>
<resources >
<resource >
<directory > ${basedir}/src/main/site/resources/</directory>
<includes >
<include > .htaccess</include>
</includes>
</resource>
</resources>
</configuration>
</execution>
<!-- needed to make the redirect above work -->
<execution >
<id > copy-empty-book-dir</id>
<goals >
<goal > copy-resources</goal>
</goals>
<phase > site</phase>
<configuration >
<outputDirectory > ${project.reporting.outputDirectory}/</outputDirectory>
<resources >
<resource >
<directory > ${basedir}/src/main/site/resources/</directory>
<includes >
<include > book/**</include>
</includes>
</resource>
</resources>
</configuration>
</execution>
</executions>
<configuration >
<escapeString > \</escapeString>
</configuration>
</plugin>
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-antrun-plugin</artifactId>
<version > ${maven.antrun.version}</version>
<inherited > false</inherited>
<!-- Rename the book.pdf generated by asciidoctor -->
<executions >
<execution >
<id > rename-pdf</id>
<phase > site</phase>
<configuration >
<target name= "rename file" >
<move file= "${project.reporting.outputDirectory}/book.pdf" tofile= "${project.reporting.outputDirectory}/apache_hbase_reference_guide.pdf" />
</target>
</configuration>
<goals >
<goal > run</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin >
<groupId > org.codehaus.mojo</groupId>
<artifactId > buildnumber-maven-plugin</artifactId>
<executions >
<execution >
<phase > validate</phase>
<goals >
<goal > create-timestamp</goal>
</goals>
</execution>
</executions>
<configuration >
<timestampFormat > yyyy</timestampFormat>
<timestampPropertyName > build.year</timestampPropertyName>
</configuration>
</plugin>
<plugin >
<groupId > org.apache.felix</groupId>
<artifactId > maven-bundle-plugin</artifactId>
<version > ${maven.bundle.version}</version>
<inherited > true</inherited>
<extensions > true</extensions>
</plugin>
<plugin >
<groupId > org.scala-tools</groupId>
<artifactId > maven-scala-plugin</artifactId>
<version > ${maven.scala.version}</version>
</plugin>
</plugins>
</build>
<properties >
<!-- override on command line to have generated LICENSE files include
diagnostic info for verifying notice requirements -->
<license.debug.print.included > false</license.debug.print.included>
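<!-- Example of the override described above: mvn install -Dlicense.debug.print.included=true -->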
<!-- When a particular module bundles its dependencies, this should be true -->
<license.bundles.dependencies > false</license.bundles.dependencies>
<!-- modules that include the logo in their source tree should set true -->
<license.bundles.logo > false</license.bundles.logo>
<!-- modules that include bootstrap in their source tree should set true -->
<license.bundles.bootstrap > false</license.bundles.bootstrap>
<!-- modules that include jquery in their source tree should set true -->
<license.bundles.jquery > false</license.bundles.jquery>
<!-- where to find the generated LICENSE files -->
<license.aggregate.path >
${project.build.directory}/maven-shared-archive-resources/META-INF/LICENSE
</license.aggregate.path>
<tar.name > ${project.build.finalName}.tar.gz</tar.name>
<maven.build.timestamp.format >
yyyy-MM-dd'T'HH:mm
</maven.build.timestamp.format>
<buildDate > ${maven.build.timestamp}</buildDate>
<compileSource > 1.8</compileSource>
<!-- Build dependencies -->
<maven.min.version > 3.0.4</maven.min.version>
<java.min.version > ${compileSource}</java.min.version>
<!-- Dependencies -->
<hadoop-two.version > 2.7.1</hadoop-two.version>
<hadoop-three.version > 3.0.0-alpha4</hadoop-three.version>
<!-- These must be defined here for downstream build tools that don't look at profiles.
They ought to match the values found in our default hadoop profile, which is
currently "hadoop-2.0". See HBASE-15925 for more info. -->
<hadoop.version > ${hadoop-two.version}</hadoop.version>
<compat.module > hbase-hadoop2-compat</compat.module>
<assembly.file > src/main/assembly/hadoop-two-compat.xml</assembly.file>
<!-- end HBASE-15925 default hadoop compatibility values -->
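<!-- Example: a build against the Hadoop 3 line is selected through the hadoop profiles defined
elsewhere in this pom (assuming the usual hadoop.profile activation property), e.g.
mvn clean install -Dhadoop.profile=3.0 -->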
<avro.version > 1.7.7</avro.version>
<commons-cli.version > 1.4</commons-cli.version>
<commons-codec.version > 1.9</commons-codec.version>
<!-- pretty outdated -->
<commons-io.version > 2.5</commons-io.version>
<commons-lang.version > 2.6</commons-lang.version>
<commons-logging.version > 1.2</commons-logging.version>
<commons-math.version > 2.2</commons-math.version>
<disruptor.version > 3.3.6</disruptor.version>
<!-- Do not use versions earlier than 3.2.2 due to a security vulnerability -->
<collections.version > 3.2.2</collections.version>
<httpclient.version > 4.5.3</httpclient.version>
<httpcore.version > 4.4.6</httpcore.version>
<metrics-core.version > 3.2.1</metrics-core.version>
<jackson.version > 2.23.2</jackson.version>
<jaxb-api.version > 2.2.12</jaxb-api.version>
<jetty.version > 9.3.8.v20160314</jetty.version>
<jetty-jsp.version > 9.2.19.v20160908</jetty-jsp.version>
<servlet.api.version > 3.1.0</servlet.api.version>
<wx.rs.api.version > 2.0.1</wx.rs.api.version>
<jersey.version > 2.25.1</jersey.version>
<jetty.jspapi.version > 6.1.14</jetty.jspapi.version>
<jruby.version > 9.1.10.0</jruby.version>
<junit.version > 4.12</junit.version>
<hamcrest.version > 1.3</hamcrest.version>
<htrace.version > 3.2.0-incubating</htrace.version>
<log4j.version > 1.2.17</log4j.version>
<mockito-all.version > 1.10.19</mockito-all.version>
<!-- Internally we use a different version of protobuf. See hbase-protocol-shaded -->
<external.protobuf.version > 2.5.0</external.protobuf.version>
<protobuf.plugin.version > 0.5.0</protobuf.plugin.version>
<thrift.path > thrift</thrift.path>
<thrift.version > 0.9.3</thrift.version>
<zookeeper.version > 3.4.9</zookeeper.version>
<slf4j.version > 1.7.24</slf4j.version>
<clover.version > 4.0.3</clover.version>
<jamon-runtime.version > 2.4.1</jamon-runtime.version>
<jettison.version > 1.3.8</jettison.version>
<!-- This property is for Hadoop's netty. HBase's netty
comes in via hbase-thirdparty hbase-shaded-netty -->
<netty.hadoop.version > 3.6.2.Final</netty.hadoop.version>
<!-- Make sure these joni/jcodings are compatible with the versions used by jruby -->
<joni.version > 2.1.11</joni.version>
<jcodings.version > 1.0.18</jcodings.version>
<spy.version > 2.12.2</spy.version>
<bouncycastle.version > 1.46</bouncycastle.version>
<kerby.version > 1.0.0-RC2</kerby.version>
<commons-crypto.version > 1.0.0</commons-crypto.version>
<curator.version > 2.12.0</curator.version>
<!-- Plugin Dependencies -->
<apache.rat.version > 0.12</apache.rat.version>
<asciidoctor.plugin.version > 1.5.5</asciidoctor.plugin.version>
<asciidoctorj.pdf.version > 1.5.0-alpha.15</asciidoctorj.pdf.version>
<build.helper.maven.version > 3.0.0</build.helper.maven.version>
<buildnumber.maven.version > 1.4</buildnumber.maven.version>
<checkstyle.version > 6.18</checkstyle.version>
<exec.maven.version > 1.6.0</exec.maven.version>
<findbugs-annotations > 1.3.9-1</findbugs-annotations>
<findbugs.maven.version > 3.0.4</findbugs.maven.version>
<jamon.plugin.version > 2.4.2</jamon.plugin.version>
<lifecycle.mapping.version > 1.0.0</lifecycle.mapping.version>
<maven.antrun.version > 1.8</maven.antrun.version>
<maven.bundle.version > 3.3.0</maven.bundle.version>
<maven.checkstyle.version > 2.17</maven.checkstyle.version>
<maven.compiler.version > 3.6.1</maven.compiler.version>
<maven.eclipse.version > 2.10</maven.eclipse.version>
<maven.install.version > 2.5.2</maven.install.version>
<maven.jar.version > 3.0.2</maven.jar.version>
<maven.patch.version > 1.2</maven.patch.version>
<maven.scala.version > 2.15.2</maven.scala.version>
<maven.shade.version > 3.0.0</maven.shade.version>
<maven.site.version > 3.4</maven.site.version>
<maven.source.version > 3.0.1</maven.source.version>
<os.maven.version > 1.5.0.Final</os.maven.version>
<scala.maven.version > 3.2.2</scala.maven.version>
<scalatest.maven.version > 1.0</scalatest.maven.version>
<spotbugs.version > 3.1.0-RC3</spotbugs.version>
<wagon.ssh.version > 2.12</wagon.ssh.version>
<xml.maven.version > 1.0.1</xml.maven.version>
<hbase-thirdparty.version > 1.0.0</hbase-thirdparty.version>
<!-- General Packaging -->
<package.prefix > /usr</package.prefix>
<package.conf.dir > /etc/hbase</package.conf.dir>
<package.log.dir > /var/log/hbase</package.log.dir>
<package.pid.dir > /var/run/hbase</package.pid.dir>
<package.release > 1</package.release>
<final.name > ${project.artifactId}-${project.version}</final.name>
<!-- Intraproject jar naming properties -->
<!-- TODO this is pretty ugly, but works for the moment.
Modules are pretty heavy-weight things, so doing this work isn't too bad. -->
<server.test.jar > hbase-server-${project.version}-tests.jar</server.test.jar>
<common.test.jar > hbase-common-${project.version}-tests.jar</common.test.jar>
<procedure.test.jar > hbase-procedure-${project.version}-tests.jar</procedure.test.jar>
<it.test.jar > hbase-it-${project.version}-tests.jar</it.test.jar>
<annotations.test.jar > hbase-annotations-${project.version}-tests.jar</annotations.test.jar>
<rsgroup.test.jar > hbase-rsgroup-${project.version}-tests.jar</rsgroup.test.jar>
<surefire.version > 2.19.1</surefire.version>
<surefire.provider > surefire-junit47</surefire.provider>
<!-- default: run small & medium, medium with 2 threads -->
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > false</surefire.skipSecondPart>
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 2</surefire.secondPartForkCount>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.SmallTests</surefire.firstPartGroups>
<surefire.secondPartGroups > org.apache.hadoop.hbase.testclassification.MediumTests</surefire.secondPartGroups>
<surefire.testFailureIgnore > false</surefire.testFailureIgnore>
<test.output.tofile > true</test.output.tofile>
<surefire.timeout > 900</surefire.timeout>
<test.exclude.pattern > </test.exclude.pattern>
<!-- default Xmx value is 2800m. Use -Dsurefire.Xmx=xxg to run tests with a different JVM Xmx value -->
<surefire.Xmx > 2800m</surefire.Xmx>
<surefire.cygwinXmx > 2800m</surefire.cygwinXmx>
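<!-- Example: mvn test -Dsurefire.Xmx=4g runs the tests with a 4g heap instead of the 2800m
default; surefire.cygwinXmx is the equivalent knob used by the os.windows profile. -->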
<!-- Mark our test runs with '-Dhbase.build.id' so we can identify a surefire test as ours in a process listing -->
<hbase-surefire.argLine > -enableassertions -Dhbase.build.id=${build.id} -Xmx${surefire.Xmx}
-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
-Djava.awt.headless=true
</hbase-surefire.argLine>
<hbase-surefire.cygwin-argLine > -enableassertions -Xmx${surefire.cygwinXmx}
-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
"-Djava.library.path=${hadoop.library.path};${java.library.path}"
</hbase-surefire.cygwin-argLine>
<!-- Surefire argLine defaults to Linux, cygwin argLine is used in the os.windows profile -->
<argLine > ${hbase-surefire.argLine}</argLine>
<jacoco.version > 0.7.5.201505241946</jacoco.version>
<extra.enforcer.version > 1.0-beta-6</extra.enforcer.version>
<!-- Location of test resources -->
<test.build.classes > ${project.build.directory}/test-classes</test.build.classes>
<maven.build.timestamp.format > yyyy-MM-dd'T'HH:mm:ss'Z'</maven.build.timestamp.format>
<!-- We add this build.id as a flag so we can identify which forked processes belong to our build.
Default is the build start timestamp. On Jenkins, pass in the Jenkins build id by setting
this parameter, i.e. invoke mvn with -Dbuild.id=$BUILD_ID -->
<build.id > ${maven.build.timestamp}</build.id>
</properties>
<!-- Sorted by groups of dependencies then groupId and artifactId -->
<dependencyManagement >
<dependencies >
<!--
Note: There are a few exclusions to prevent duplicate code in different jars from being included:
org.mortbay.jetty:servlet-api, javax.servlet:servlet-api: These are excluded because they are
the same implementations. I chose org.mortbay.jetty:servlet-api-2.5 instead, which is a third
implementation of the same, because Hadoop also uses this version
javax.servlet:jsp-api in favour of org.mortbay.jetty:jsp-api-2.1
-->
<!-- Intra-module dependencies -->
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-annotations</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-annotations</artifactId>
<version > ${project.version}</version>
<type > test-jar</type>
<!-- Was test scope only but if we want to run hbase-it tests, need the annotations test jar -->
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-common</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-common</artifactId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-protocol-shaded</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-protocol</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-procedure</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-procedure</artifactId>
<version > ${project.version}</version>
<type > test-jar</type>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-hadoop-compat</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-hadoop-compat</artifactId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > ${compat.module}</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > ${compat.module}</artifactId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-rsgroup</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-rsgroup</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-server</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-server</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-endpoint</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-shell</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-shell</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-thrift</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-thrift</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-testing-util</artifactId>
<version > ${project.version}</version>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-prefix-tree</artifactId>
<version > ${project.version}</version>
<!-- unfortunately, runtime scope gives Eclipse compile-time access that isn't needed,
however it is apparently needed to run things within Eclipse -->
<scope > runtime</scope>
</dependency>
<dependency >
<artifactId > hbase-examples</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-external-blockcache</artifactId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-it</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-client</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-metrics-api</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-metrics-api</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<dependency >
<artifactId > hbase-metrics</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
</dependency>
<dependency >
<artifactId > hbase-metrics</artifactId>
<groupId > org.apache.hbase</groupId>
<version > ${project.version}</version>
<type > test-jar</type>
<scope > test</scope>
</dependency>
<!-- General dependencies -->
<dependency >
<groupId > com.github.stephenc.findbugs</groupId>
<artifactId > findbugs-annotations</artifactId>
<version > ${findbugs-annotations}</version>
</dependency>
<dependency >
<groupId > org.codehaus.jettison</groupId>
<artifactId > jettison</artifactId>
<version > ${jettison.version}</version>
</dependency>
<dependency >
<groupId > log4j</groupId>
<artifactId > log4j</artifactId>
<version > ${log4j.version}</version>
</dependency>
<!-- Avro dependencies we mostly get transitively, manual version coalescing -->
<dependency >
<groupId > org.apache.avro</groupId>
<artifactId > avro</artifactId>
<version > ${avro.version}</version>
</dependency>
<!-- This is not used by hbase directly. Used by thrift, dropwizard and zk. -->
<dependency >
<groupId > org.slf4j</groupId>
<artifactId > slf4j-api</artifactId>
<version > ${slf4j.version}</version>
</dependency>
<dependency >
<groupId > io.dropwizard.metrics</groupId>
<artifactId > metrics-core</artifactId>
<version > ${metrics-core.version}</version>
</dependency>
<dependency >
<groupId > commons-collections</groupId>
<artifactId > commons-collections</artifactId>
<version > ${collections.version}</version>
</dependency>
<dependency >
<groupId > org.apache.httpcomponents</groupId>
<artifactId > httpclient</artifactId>
<version > ${httpclient.version}</version>
</dependency>
<dependency >
<groupId > org.apache.httpcomponents</groupId>
<artifactId > httpcore</artifactId>
<version > ${httpcore.version}</version>
</dependency>
<dependency >
<groupId > commons-cli</groupId>
<artifactId > commons-cli</artifactId>
<version > ${commons-cli.version}</version>
</dependency>
<dependency >
<groupId > commons-codec</groupId>
<artifactId > commons-codec</artifactId>
<version > ${commons-codec.version}</version>
</dependency>
<dependency >
<groupId > commons-io</groupId>
<artifactId > commons-io</artifactId>
<version > ${commons-io.version}</version>
</dependency>
<dependency >
<groupId > commons-lang</groupId>
<artifactId > commons-lang</artifactId>
<version > ${commons-lang.version}</version>
</dependency>
<dependency >
<groupId > commons-logging</groupId>
<artifactId > commons-logging</artifactId>
<version > ${commons-logging.version}</version>
</dependency>
<dependency >
<groupId > org.apache.commons</groupId>
<artifactId > commons-math</artifactId>
<version > ${commons-math.version}</version>
</dependency>
<dependency >
<groupId > org.apache.zookeeper</groupId>
<artifactId > zookeeper</artifactId>
<version > ${zookeeper.version}</version>
<exclusions >
<exclusion >
<groupId > jline</groupId>
<artifactId > jline</artifactId>
</exclusion>
<exclusion >
<groupId > com.sun.jmx</groupId>
<artifactId > jmxri</artifactId>
</exclusion>
<exclusion >
<groupId > com.sun.jdmk</groupId>
<artifactId > jmxtools</artifactId>
</exclusion>
<exclusion >
<groupId > javax.jms</groupId>
<artifactId > jms</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.thrift</groupId>
<artifactId > libthrift</artifactId>
<version > ${thrift.version}</version>
<exclusions >
<exclusion >
<groupId > org.slf4j</groupId>
<artifactId > slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.jruby</groupId>
<artifactId > jruby-complete</artifactId>
<version > ${jruby.version}</version>
</dependency>
<dependency >
<groupId > org.jruby.jcodings</groupId>
<artifactId > jcodings</artifactId>
<version > ${jcodings.version}</version>
</dependency>
<dependency >
<groupId > org.jruby.joni</groupId>
<artifactId > joni</artifactId>
<version > ${joni.version}</version>
</dependency>
<dependency >
<groupId > org.jamon</groupId>
<artifactId > jamon-runtime</artifactId>
<version > ${jamon-runtime.version}</version>
</dependency>
<!-- REST dependencies -->
<dependency >
<groupId > javax.servlet</groupId>
<artifactId > javax.servlet-api</artifactId>
<version > ${servlet.api.version}</version>
</dependency>
<dependency >
<groupId > javax.ws.rs</groupId>
<artifactId > javax.ws.rs-api</artifactId>
<version > ${wx.rs.api.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-server</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-servlet</artifactId>
<version > ${jetty.version}</version>
<exclusions >
<exclusion >
<groupId > org.eclipse.jetty</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-security</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-http</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-util</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-io</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-jsp</artifactId>
<version > ${jetty-jsp.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-jmx</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-webapp</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > org.eclipse.jetty</groupId>
<artifactId > jetty-util-ajax</artifactId>
<version > ${jetty.version}</version>
</dependency>
<dependency >
<groupId > com.google.protobuf</groupId>
<artifactId > protobuf-java</artifactId>
<version > ${external.protobuf.version}</version>
</dependency>
<dependency >
<groupId > org.glassfish.jersey.containers</groupId>
<artifactId > jersey-container-servlet-core</artifactId>
<version > ${jersey.version}</version>
</dependency>
<dependency >
<groupId > org.glassfish.jersey.media</groupId>
<artifactId > jersey-media-json-jackson1</artifactId>
<version > ${jackson.version}</version>
</dependency>
<dependency >
<groupId > javax.xml.bind</groupId>
<artifactId > jaxb-api</artifactId>
<version > ${jaxb-api.version}</version>
<exclusions >
<exclusion >
<groupId > javax.xml.stream</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > junit</groupId>
<artifactId > junit</artifactId>
<version > ${junit.version}</version>
</dependency>
<dependency >
<groupId > org.hamcrest</groupId>
<artifactId > hamcrest-core</artifactId>
<version > ${hamcrest.version}</version>
2015-01-25 19:48:29 -05:00
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
<version > ${mockito-all.version}</version>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.htrace</groupId>
<artifactId > htrace-core</artifactId>
<version > ${htrace.version}</version>
</dependency>
<dependency >
<groupId > com.lmax</groupId>
<artifactId > disruptor</artifactId>
<version > ${disruptor.version}</version>
</dependency>
<dependency >
<groupId > net.spy</groupId>
<artifactId > spymemcached</artifactId>
<version > ${spy.version}</version>
<optional > true</optional>
</dependency>
<dependency >
<groupId > org.bouncycastle</groupId>
<artifactId > bcprov-jdk16</artifactId>
<version > ${bouncycastle.version}</version>
<scope > test</scope>
</dependency>
<dependency >
<groupId > org.apache.kerby</groupId>
<artifactId > kerb-client</artifactId>
<version > ${kerby.version}</version>
</dependency>
<dependency >
<groupId > org.apache.kerby</groupId>
<artifactId > kerb-simplekdc</artifactId>
<version > ${kerby.version}</version>
</dependency>
<dependency >
<groupId > org.apache.commons</groupId>
<artifactId > commons-crypto</artifactId>
<version > ${commons-crypto.version}</version>
<exclusions >
<exclusion >
<groupId > net.java.dev.jna</groupId>
<artifactId > jna</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.curator</groupId>
<artifactId > curator-recipes</artifactId>
<version > ${curator.version}</version>
</dependency>
<dependency >
<groupId > org.apache.curator</groupId>
<artifactId > curator-framework</artifactId>
<version > ${curator.version}</version>
</dependency>
<dependency >
<groupId > org.apache.curator</groupId>
<artifactId > curator-client</artifactId>
<version > ${curator.version}</version>
<exclusions >
<exclusion >
<groupId > com.google.guava</groupId>
<artifactId > guava</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hbase.thirdparty</groupId>
<artifactId > hbase-shaded-miscellaneous</artifactId>
<version > ${hbase-thirdparty.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase.thirdparty</groupId>
<artifactId > hbase-shaded-netty</artifactId>
<version > ${hbase-thirdparty.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hbase.thirdparty</groupId>
<artifactId > hbase-shaded-protobuf</artifactId>
<version > ${hbase-thirdparty.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
<!-- Dependencies needed by subprojects -->
<dependencies >
<dependency >
<groupId > com.github.stephenc.findbugs</groupId>
<artifactId > findbugs-annotations</artifactId>
<scope > compile</scope>
</dependency>
<dependency >
<groupId > log4j</groupId>
<artifactId > log4j</artifactId>
</dependency>
<!-- Test dependencies -->
<dependency >
<groupId > junit</groupId>
<artifactId > junit</artifactId>
</dependency>
<dependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
</dependency>
</dependencies>
<!--
To publish, use the following settings.xml file ( placed in ~/.m2/settings.xml )
<settings >
<servers >
<server >
<id > apache.releases.https</id>
<username > hbase_committer</username>
<password > ********</password>
</server>
<server >
<id > apache.snapshots.https</id>
<username > hbase_committer</username>
<password > ********</password>
</server>
</servers>
</settings>
$ mvn deploy
(or)
$ mvn -s /my/path/settings.xml deploy
-->
<profiles >
<profile >
<id > rsgroup</id>
<activation >
<property >
<name > !skip-rsgroup</name>
</property>
</activation>
<modules >
<module > hbase-rsgroup</module>
</modules>
</profile>
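<!-- A hedged usage sketch: the rsgroup profile above activates when the skip-rsgroup
     property is NOT defined, so hbase-rsgroup builds by default. Defining the property,
     for example
       $ mvn clean install -Dskip-rsgroup
     drops the module from the build (standard Maven property activation; not an
     hbase-specific switch). -->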
<profile >
<id > build-with-jdk8</id>
<activation >
<jdk > 1.8</jdk>
</activation>
<build >
<pluginManagement >
<plugins >
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-javadoc-plugin</artifactId>
<configuration >
<!-- TODO HBASE-15041 clean up our javadocs so jdk8 linter can be used -->
<additionalparam > -Xdoclint:none</additionalparam>
</configuration>
</plugin>
<plugin >
<groupId > org.codehaus.mojo</groupId>
<artifactId > findbugs-maven-plugin</artifactId>
<version > 3.0.0</version>
<!-- NOTE: Findbugs 3.0.0 requires jdk7 -->
<configuration >
<excludeFilterFile > ${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
<findbugsXmlOutput > true</findbugsXmlOutput>
<xmlOutput > true</xmlOutput>
<effort > Max</effort>
</configuration>
<dependencies >
<dependency >
<groupId > com.github.spotbugs</groupId>
<artifactId > spotbugs</artifactId>
<version > ${spotbugs.version}</version>
</dependency>
</dependencies>
</plugin>
</plugins>
</pluginManagement>
</build>
</profile>
<!-- profile activated by the Jenkins patch testing job -->
<profile >
<id > jenkins.patch</id>
<activation >
<activeByDefault > false</activeByDefault>
<property >
<name > HBasePatchProcess</name>
</property>
</activation>
<properties >
<surefire.rerunFailingTestsCount > 2</surefire.rerunFailingTestsCount>
</properties>
<build >
<plugins >
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-antrun-plugin</artifactId>
<inherited > false</inherited>
<executions >
<execution >
<phase > validate</phase>
<goals >
<goal > run</goal>
</goals>
<configuration >
<tasks >
<echo > Maven Execution Environment</echo>
<echo > MAVEN_OPTS="${env.MAVEN_OPTS}"</echo>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
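<!-- A hedged usage sketch: the jenkins.patch profile above activates when the
     HBasePatchProcess property is defined (the Jenkins patch testing job sets it).
     The same behavior can be reproduced locally with, for example
       $ mvn clean test -DHBasePatchProcess
     which also applies surefire.rerunFailingTestsCount=2 from the profile properties. -->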
<profile >
<id > jacoco</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<build >
<plugins >
<plugin >
<groupId > org.jacoco</groupId>
<artifactId > jacoco-maven-plugin</artifactId>
<version > ${jacoco.version}</version>
<executions >
<execution >
<id > prepare-agent</id>
<goals >
<goal > prepare-agent</goal>
</goals>
</execution>
<execution >
<id > report</id>
<phase > prepare-package</phase>
<goals >
<goal > report</goal>
</goals>
</execution>
</executions>
<configuration >
<systemPropertyVariables >
<jacoco-agent.destfile > target/jacoco.exec</jacoco-agent.destfile>
</systemPropertyVariables>
<excludes >
<exclude > **/generated/**/*.class</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</profile>
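<!-- A hedged usage sketch: the jacoco profile is not active by default and is selected
     explicitly, for example
       $ mvn clean test -Pjacoco
     With the configuration above, the agent is prepared before tests and the coverage
     report goal is bound to the prepare-package phase, excluding generated classes. -->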
<profile >
<id > os.linux</id>
<activation >
<activeByDefault > false</activeByDefault>
<os >
<family > Linux</family>
</os>
</activation>
<properties >
<build.platform > ${os.name}-${os.arch}-${sun.arch.data.model}</build.platform>
</properties>
</profile>
<profile >
<id > os.mac</id>
<activation >
<os >
<family > Mac</family>
</os>
</activation>
<properties >
<build.platform > Mac_OS_X-${sun.arch.data.model}</build.platform>
</properties>
</profile>
<profile >
<id > os.windows</id>
<activation >
<os >
<family > Windows</family>
</os>
</activation>
<properties >
<build.platform > cygwin</build.platform>
<argLine > ${hbase-surefire.cygwin-argLine}</argLine>
</properties>
</profile>
<!-- this profile should be activated for release builds -->
<profile >
<id > release</id>
<build >
<plugins >
<plugin >
<groupId > org.apache.rat</groupId>
<artifactId > apache-rat-plugin</artifactId>
<executions >
<execution >
<phase > package</phase>
<goals >
<goal > check</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-enforcer-plugin</artifactId>
<configuration >
<rules >
<enforceBytecodeVersion >
<maxJdkVersion > ${compileSource}</maxJdkVersion>
<message > HBase has unsupported dependencies.
HBase requires that all dependencies be compiled with version ${compileSource} or earlier
of the JDK to properly build from source. You appear to be using a newer dependency. You can use
either "mvn -version" or "mvn enforcer:display-info" to verify what version is active.
Non-release builds can temporarily build with a newer JDK version by setting the
'compileSource' property (e.g. mvn -DcompileSource=1.8 clean package).
</message>
</enforceBytecodeVersion>
</rules>
</configuration>
</plugin>
</plugins>
</build>
</profile>
<!-- Dependency management profiles for submodules when building against specific hadoop branches. -->
<!-- Submodules that need hadoop dependencies should declare
profiles with activation properties matching the profile here.
Generally, it should be sufficient to copy the first
few lines of the profile you want to match. -->
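<!-- A hedged sketch of such a matching submodule profile (the id and activation mirror
     the default hadoop-2.0 profile declared below; the body is whatever hadoop
     dependencies that submodule needs):
  <profile>
    <id>hadoop-2.0</id>
    <activation>
      <property>
        <name>!hadoop.profile</name>
      </property>
    </activation>
    ... submodule hadoop-two dependencies here ...
  </profile>
-->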
<!-- profile for building against Hadoop 2.0.x
This is the default.
-->
<profile >
<id > hadoop-2.0</id>
<activation >
<property >
<!-- Below formatting for dev-support/generate-hadoopX-poms.sh -->
<!-- h2 --> <name > !hadoop.profile</name>
</property>
</activation>
<modules >
<module > hbase-hadoop2-compat</module>
</modules>
<properties >
<hadoop.version > ${hadoop-two.version}</hadoop.version>
<compat.module > hbase-hadoop2-compat</compat.module>
<assembly.file > src/main/assembly/hadoop-two-compat.xml</assembly.file>
</properties>
<dependencyManagement >
<dependencies >
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-core</artifactId>
<version > ${hadoop-two.version}</version>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-jobclient</artifactId>
<version > ${hadoop-two.version}</version>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-jobclient</artifactId>
<version > ${hadoop-two.version}</version>
<type > test-jar</type>
<scope > test</scope>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-hdfs</artifactId>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > xerces</groupId>
<artifactId > xercesImpl</artifactId>
</exclusion>
</exclusions>
<version > ${hadoop-two.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-hdfs</artifactId>
<version > ${hadoop-two.version}</version>
<type > test-jar</type>
<scope > test</scope>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > xerces</groupId>
<artifactId > xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-auth</artifactId>
<version > ${hadoop-two.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-common</artifactId>
<version > ${hadoop-two.version}</version>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > com.google.code.findbugs</groupId>
<artifactId > jsr305</artifactId>
</exclusion>
<exclusion >
<groupId > junit</groupId>
<artifactId > junit</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-client</artifactId>
<version > ${hadoop-two.version}</version>
</dependency>
<!-- This was marked as test dep in earlier pom, but was scoped compile.
Where do we actually need it? -->
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-minicluster</artifactId>
<version > ${hadoop-two.version}</version>
<exclusions >
<exclusion >
<groupId > commons-httpclient</groupId>
<artifactId > commons-httpclient</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > com.google.code.findbugs</groupId>
<artifactId > jsr305</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-minikdc</artifactId>
<version > ${hadoop-two.version}</version>
<scope > test</scope>
</dependency>
</dependencies>
</dependencyManagement>
</profile>
<!--
profile for building against Hadoop 3.0.0. Activate using:
mvn -Dhadoop.profile=3.0
-->
<profile >
<id > hadoop-3.0</id>
<activation >
<property >
<name > hadoop.profile</name>
<value > 3.0</value>
</property>
</activation>
<modules >
<!-- For now, use hadoop2 compat module -->
<module > hbase-hadoop2-compat</module>
</modules>
<properties >
<hadoop.version > ${hadoop-three.version}</hadoop.version>
<!-- Use this compat module for now. TODO: Make h3 one if we need one -->
<compat.module > hbase-hadoop2-compat</compat.module>
<assembly.file > src/main/assembly/hadoop-two-compat.xml</assembly.file>
</properties>
<dependencyManagement >
<dependencies >
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-core</artifactId>
<version > ${hadoop-three.version}</version>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-jobclient</artifactId>
<version > ${hadoop-three.version}</version>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-mapreduce-client-jobclient</artifactId>
<version > ${hadoop-three.version}</version>
<type > test-jar</type>
<scope > test</scope>
<exclusions >
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-hdfs</artifactId>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > xerces</groupId>
<artifactId > xercesImpl</artifactId>
</exclusion>
</exclusions>
<version > ${hadoop-three.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-hdfs</artifactId>
<version > ${hadoop-three.version}</version>
<type > test-jar</type>
<scope > test</scope>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > xerces</groupId>
<artifactId > xercesImpl</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-auth</artifactId>
<version > ${hadoop-three.version}</version>
<exclusions >
<exclusion >
<groupId > com.google.guava</groupId>
<artifactId > guava</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-common</artifactId>
<version > ${hadoop-three.version}</version>
<exclusions >
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > com.google.code.findbugs</groupId>
<artifactId > jsr305</artifactId>
</exclusion>
<exclusion >
<groupId > junit</groupId>
<artifactId > junit</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-client</artifactId>
<version > ${hadoop-three.version}</version>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-annotations</artifactId>
<version > ${hadoop-three.version}</version>
</dependency>
<!-- This was marked as test dep in earlier pom, but was scoped compile.
Where do we actually need it? -->
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-minicluster</artifactId>
<version > ${hadoop-three.version}</version>
<exclusions >
<exclusion >
<groupId > commons-httpclient</groupId>
<artifactId > commons-httpclient</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet.jsp</groupId>
<artifactId > jsp-api</artifactId>
</exclusion>
<exclusion >
<groupId > javax.servlet</groupId>
<artifactId > servlet-api</artifactId>
</exclusion>
<exclusion >
<groupId > stax</groupId>
<artifactId > stax-api</artifactId>
</exclusion>
<exclusion >
<groupId > io.netty</groupId>
<artifactId > netty</artifactId>
</exclusion>
<exclusion >
<groupId > com.google.code.findbugs</groupId>
<artifactId > jsr305</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency >
<groupId > org.apache.hadoop</groupId>
<artifactId > hadoop-minikdc</artifactId>
<version > ${hadoop-three.version}</version>
<scope > test</scope>
</dependency>
</dependencies>
</dependencyManagement>
</profile>
<!-- profiles for the tests
See as well the properties of the project for the values
when no profile is active. -->
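<!-- A hedged usage sketch: the test profiles below only set surefire.* properties
     (fork counts, which part to skip, and the testclassification categories run in each
     part), so they are selected explicitly, for example
       $ mvn test -P runSmallTests
       $ mvn test -P runDevTests
       $ mvn test -P runAllTests -->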
<profile >
<!-- Use it to launch all tests in the same JVM -->
<id > singleJVMTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups />
</properties>
</profile>
<profile >
<!-- Use it to launch small tests only -->
<id > runSmallTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.SmallTests</surefire.firstPartGroups>
<surefire.secondPartGroups />
</properties>
</profile>
<profile >
<!-- Use it to launch medium tests only -->
<id > runMediumTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.MediumTests</surefire.firstPartGroups>
<surefire.secondPartGroups />
</properties>
</profile>
<profile >
<!-- Use it to launch large tests only -->
<id > runLargeTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.LargeTests</surefire.firstPartGroups>
<surefire.secondPartGroups />
</properties>
</profile>
<profile >
<!-- Use it to launch small & medium tests -->
<id > runDevTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > false</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.SmallTests</surefire.firstPartGroups>
<surefire.secondPartGroups > org.apache.hadoop.hbase.testclassification.MediumTests</surefire.secondPartGroups>
</properties>
</profile>
<profile >
<!-- Use it to launch all tests -->
<id > runAllTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 5</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > false</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.SmallTests</surefire.firstPartGroups>
<surefire.secondPartGroups > org.apache.hadoop.hbase.testclassification.MediumTests,org.apache.hadoop.hbase.testclassification.LargeTests</surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runMiscTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.MiscTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runCoprocessorTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups >
org.apache.hadoop.hbase.testclassification.CoprocessorTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runClientTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.ClientTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runMasterTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.MasterTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runMapredTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.MapredTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runMapreduceTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.MapReduceTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runRegionServerTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups >
org.apache.hadoop.hbase.testclassification.RegionServerTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runVerySlowMapReduceTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 2</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups >
org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runVerySlowRegionServerTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 2</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups >
org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runFilterTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.FilterTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runIOTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.IOTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runRestTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.RestTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runRPCTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.RPCTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runReplicationTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups >
org.apache.hadoop.hbase.testclassification.ReplicationTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runSecurityTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.SecurityTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<id > runFlakeyTests</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<properties >
<surefire.firstPartForkCount > 1</surefire.firstPartForkCount>
<surefire.secondPartForkCount > 1</surefire.secondPartForkCount>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups > org.apache.hadoop.hbase.testclassification.FlakeyTests
</surefire.firstPartGroups>
<surefire.secondPartGroups > </surefire.secondPartGroups>
</properties>
</profile>
<profile >
<!-- Use it to launch tests locally -->
<id > localTests</id>
<activation >
<property >
<name > test</name>
</property>
</activation>
<properties >
<surefire.provider > surefire-junit4</surefire.provider>
<surefire.skipFirstPart > false</surefire.skipFirstPart>
<surefire.skipSecondPart > true</surefire.skipSecondPart>
<surefire.firstPartGroups />
</properties>
</profile>
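<!-- A hedged usage sketch: the localTests profile above triggers whenever the standard
     surefire "test" property is set, for example
       $ mvn test -Dtest=TestSomeClass
     (TestSomeClass is a placeholder). It switches to the plain surefire-junit4 provider
     and leaves surefire.firstPartGroups empty, so only the named test is run. -->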
<!-- Profile for running clover. You need to have a clover license under ~/.clover.license for ${clover.version}
or you can provide the license with -Dmaven.clover.licenseLocation=/path/to/license. Committers can find
the license under https://svn.apache.org/repos/private/committers/donated-licenses/clover/
The report will be generated under target/site/clover/index.html when you run
MAVEN_OPTS="-Xmx2048m" mvn clean package -Pclover site -->
<profile >
<id > clover</id>
<activation >
<activeByDefault > false</activeByDefault>
<property >
<name > clover</name>
</property>
</activation>
<properties >
<maven.clover.licenseLocation > ${user.home}/.clover.license</maven.clover.licenseLocation>
</properties>
<build >
<plugins >
<!-- When Clover is active, we need to add it as a dependency for the javadoc plugin, or
our instrumented classes for the doclet will fail
-->
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-javadoc-plugin</artifactId>
<dependencies >
<dependency >
<groupId > com.atlassian.maven.plugins</groupId>
<artifactId > maven-clover2-plugin</artifactId>
<version > ${clover.version}</version>
</dependency>
</dependencies>
</plugin>
<plugin >
<groupId > com.atlassian.maven.plugins</groupId>
<artifactId > maven-clover2-plugin</artifactId>
<version > ${clover.version}</version>
<configuration >
<includesAllSourceRoots > true</includesAllSourceRoots>
<includesTestSourceRoots > true</includesTestSourceRoots>
<targetPercentage > 50%</targetPercentage>
<generateHtml > true</generateHtml>
<generateXml > true</generateXml>
<excludes >
<exclude > **/generated/**</exclude>
</excludes>
</configuration>
<executions >
<execution >
<id > clover-setup</id>
<phase > process-sources</phase>
<goals >
<goal > setup</goal>
</goals>
</execution>
<execution >
<id > clover</id>
<phase > site</phase>
<goals >
<goal > clover</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile >
<id > errorProne</id>
<activation >
<activeByDefault > false</activeByDefault>
</activation>
<build >
<plugins >
<!-- Turn on error-prone -->
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-compiler-plugin</artifactId>
<version > ${maven.compiler.version}</version>
<configuration >
<compilerId > javac-with-errorprone</compilerId>
<forceJavacCompilerUse > true</forceJavacCompilerUse>
</configuration>
<dependencies >
<dependency >
<groupId > org.codehaus.plexus</groupId>
<artifactId > plexus-compiler-javac-errorprone</artifactId>
<version > 2.5</version>
</dependency>
</dependencies>
</plugin>
</plugins>
</build>
</profile>
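<!-- A hedged usage sketch: error-prone checking is opt-in and enabled by selecting the
     profile above explicitly, for example
       $ mvn clean compile -PerrorProne
     which switches the compiler plugin to the javac-with-errorprone compilerId. -->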
</profiles>
<!-- See http://jira.codehaus.org/browse/MSITE-443 why the settings need to be here and not in pluginManagement. -->
<reporting >
<plugins >
<plugin >
<artifactId > maven-project-info-reports-plugin</artifactId>
<reportSets >
<reportSet >
<reports >
<report > cim</report>
<report > dependencies</report>
<report > dependency-convergence</report>
<report > dependency-info</report>
<report > dependency-management</report>
<report > index</report>
<report > issue-tracking</report>
<report > license</report>
<report > mailing-list</report>
<report > plugin-management</report>
<report > plugins</report>
<report > project-team</report>
<report > scm</report>
<report > summary</report>
</reports>
</reportSet>
</reportSets>
<!-- see src/main/site/site.xml for selected reports -->
<configuration >
<dependencyLocationsEnabled > false</dependencyLocationsEnabled>
</configuration>
</plugin>
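      <!-- The report set above is rendered as part of the site lifecycle. An illustrative run
           (output paths are the plugin defaults, e.g. target/site/dependencies.html):
           mvn clean site
      -->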
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-javadoc-plugin</artifactId>
<reportSets >
<!-- Dev API -->
<reportSet >
<id > devapi</id>
<reports >
<report > aggregate</report>
</reports>
<configuration >
<destDir > devapidocs</destDir>
<name > Developer API</name>
<description > The full HBase API, including private and unstable APIs</description>
<sourceFileExcludes >
<exclude > **/generated/*</exclude>
<exclude > **/protobuf/*</exclude>
<exclude > **/*.scala</exclude>
</sourceFileExcludes>
<excludePackageNames > org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.spark:org.apache.hadoop.hbase.generated*</excludePackageNames>
<show > private</show> <!-- (shows all classes and members) -->
<quiet > true</quiet>
<linksource > true</linksource>
<sourcetab > 2</sourcetab>
<validateLinks > true</validateLinks>
<fixClassComment > true</fixClassComment>
<fixFieldComment > true</fixFieldComment>
<fixMethodComment > true</fixMethodComment>
<fixTags > all</fixTags>
<notimestamp > true</notimestamp>
<!-- Pass some options straight to the javadoc executable since it is easier -->
<additionalJOption > -J-Xmx2G</additionalJOption>
<!-- JDK8 javadoc requires test scope transitive dependencies due to our custom doclet -->
<additionalDependencies >
<additionalDependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
<version > ${mockito-all.version}</version>
</additionalDependency>
<additionalDependency >
<groupId > org.hamcrest</groupId>
<artifactId > hamcrest-core</artifactId>
<version > ${hamcrest.version}</version>
</additionalDependency>
</additionalDependencies>
<inherited > false</inherited>
</configuration>
</reportSet>
<reportSet >
<id > testdevapi</id>
<reports >
<report > test-aggregate</report>
</reports>
<configuration >
<destDir > testdevapidocs</destDir>
<name > Developer Test API</name>
<description > The full HBase API test code, including private and unstable APIs</description>
<sourceFileExcludes >
<exclude > **/generated/*</exclude>
<exclude > **/protobuf/*</exclude>
<exclude > **/*.scala</exclude>
</sourceFileExcludes>
<excludePackageNames > org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.spark:org.apache.hadoop.hbase.generated*</excludePackageNames>
<show > private</show> <!-- (shows all classes and members) -->
<quiet > true</quiet>
<linksource > true</linksource>
<sourcetab > 2</sourcetab>
<validateLinks > true</validateLinks>
<fixClassComment > true</fixClassComment>
<fixFieldComment > true</fixFieldComment>
<fixMethodComment > true</fixMethodComment>
<fixTags > all</fixTags>
<notimestamp > true</notimestamp>
<!-- Pass some options straight to the javadoc executable since it is easier -->
<additionalJOption > -J-Xmx2G</additionalJOption>
<!-- JDK8 javadoc requires test scope transitive dependencies due to our custom doclet -->
<additionalDependencies >
<additionalDependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
<version > ${mockito-all.version}</version>
</additionalDependency>
<additionalDependency >
<groupId > org.hamcrest</groupId>
<artifactId > hamcrest-core</artifactId>
<version > ${hamcrest.version}</version>
</additionalDependency>
</additionalDependencies>
<inherited > false</inherited>
</configuration>
</reportSet>
<!-- User API -->
2014-08-21 04:50:14 -04:00
<reportSet >
<id > userapi</id>
<reports >
<report > aggregate</report>
</reports>
<configuration >
<doclet >
org.apache.hadoop.hbase.classification.tools.IncludePublicAnnotationsStandardDoclet
</doclet>
<docletArtifact >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-annotations</artifactId>
<version > ${project.version}</version>
</docletArtifact>
<useStandardDocletOptions > true</useStandardDocletOptions>
<destDir > apidocs</destDir>
<name > User API</name>
<description > The HBase Application Programmer's API</description>
<excludePackageNames >
org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
</excludePackageNames>
<!-- switch on dependency-driven aggregation -->
<includeDependencySources > false</includeDependencySources>
<dependencySourceIncludes >
<!-- include ONLY dependencies I control -->
<dependencySourceInclude > org.apache.hbase:hbase-annotations</dependencySourceInclude>
</dependencySourceIncludes>
<sourceFilesExclude > **/generated/*</sourceFilesExclude>
<show > protected</show> <!-- (shows only public and protected classes and members) -->
<quiet > true</quiet>
<linksource > true</linksource>
<sourcetab > 2</sourcetab>
<validateLinks > true</validateLinks>
<fixClassComment > true</fixClassComment>
<fixFieldComment > true</fixFieldComment>
<fixMethodComment > true</fixMethodComment>
<fixTags > all</fixTags>
<notimestamp > true</notimestamp>
<!-- Pass some options straight to the javadoc executable since it is easier -->
<additionalJOption > -J-Xmx2G</additionalJOption>
<!-- JDK8 javadoc requires test scope transitive dependencies due to our custom doclet -->
<additionalDependencies >
<additionalDependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
<version > ${mockito-all.version}</version>
</additionalDependency>
<additionalDependency >
<groupId > org.hamcrest</groupId>
<artifactId > hamcrest-core</artifactId>
<version > ${hamcrest.version}</version>
</additionalDependency>
</additionalDependencies>
<inherited > false</inherited>
</configuration>
</reportSet>
<!-- User Test API -->
<reportSet >
<id > testuserapi</id>
<reports >
<report > test-aggregate</report>
</reports>
<configuration >
<doclet >
org.apache.hadoop.hbase.classification.tools.IncludePublicAnnotationsStandardDoclet
</doclet>
<docletArtifact >
<groupId > org.apache.hbase</groupId>
<artifactId > hbase-annotations</artifactId>
<version > ${project.version}</version>
</docletArtifact>
<useStandardDocletOptions > true</useStandardDocletOptions>
<destDir > testapidocs</destDir>
<name > User Test API</name>
<description > Test code for the HBase Application Programmer's API</description>
<excludePackageNames >
org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
</excludePackageNames>
<!-- switch on dependency-driven aggregation -->
<includeDependencySources > false</includeDependencySources>
<dependencySourceIncludes >
<!-- include ONLY dependencies I control -->
<dependencySourceInclude > org.apache.hbase:hbase-annotations</dependencySourceInclude>
</dependencySourceIncludes>
<sourceFilesExclude > **/generated/*</sourceFilesExclude>
<show > protected</show> <!-- (shows only public and protected classes and members) -->
<quiet > true</quiet>
<linksource > true</linksource>
<sourcetab > 2</sourcetab>
<validateLinks > true</validateLinks>
<fixClassComment > true</fixClassComment>
<fixFieldComment > true</fixFieldComment>
<fixMethodComment > true</fixMethodComment>
<fixTags > all</fixTags>
<notimestamp > true</notimestamp>
<!-- Pass some options straight to the javadoc executable since it is easier -->
<additionalJOption > -J-Xmx2G</additionalJOption>
<!-- JDK8 javadoc requires test scope transitive dependencies due to our custom doclet -->
<additionalDependencies >
<additionalDependency >
<groupId > org.mockito</groupId>
<artifactId > mockito-all</artifactId>
<version > ${mockito-all.version}</version>
</additionalDependency>
<additionalDependency >
<groupId > org.hamcrest</groupId>
<artifactId > hamcrest-core</artifactId>
<version > ${hamcrest.version}</version>
</additionalDependency>
</additionalDependencies>
<inherited > false</inherited>
</configuration>
</reportSet>
</reportSets>
</plugin>
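      <!-- The four javadoc report sets above (devapi, testdevapi, userapi, testuserapi) are
           generated by the site lifecycle as well; an illustrative invocation, with output
           expected under target/site/<destDir>, e.g. target/site/apidocs and
           target/site/devapidocs:
           MAVEN_OPTS="-Xmx2048m" mvn clean site
      -->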
<plugin >
<groupId > org.apache.maven.plugins</groupId>
<artifactId > maven-checkstyle-plugin</artifactId>
<version > ${maven.checkstyle.version}</version>
<configuration >
<excludes > target/**</excludes>
<configLocation > hbase/checkstyle.xml</configLocation>
<suppressionsLocation > hbase/checkstyle-suppressions.xml</suppressionsLocation>
</configuration>
</plugin>
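      <!-- The checkstyle report can also be run on its own; an illustrative invocation, which
           assumes hbase/checkstyle.xml and hbase/checkstyle-suppressions.xml are resolvable on
           the plugin classpath as set up elsewhere in the build:
           mvn checkstyle:checkstyle
      -->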
<plugin >
<groupId > org.scala-tools</groupId>
<artifactId > maven-scala-plugin</artifactId>
</plugin>
</plugins>
</reporting>
<distributionManagement >
<site >
<id > hbase.apache.org</id>
<name > HBase Website at hbase.apache.org</name>
<!-- On why this is the tmp dir and not hbase.apache.org, see
https://issues.apache.org/jira/browse/HBASE-7593?focusedCommentId=13555866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555866
-->
<url > file:///tmp</url>
</site>
</distributionManagement>
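  <!-- Illustrative site staging/deploy sequence against the distributionManagement above;
       site:deploy copies the generated site to the file:///tmp URL configured here:
       mvn clean site site:stage
       mvn site:deploy
  -->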
<repositories >
<repository >
<id > project.local</id>
<name > project</name>
<url > file:${project.basedir}/src/main/site/resources/repo</url>
</repository>
</repositories>
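  <!-- project.local is a file-based repository kept in the source tree. An illustrative way to
       add an artifact to it (the file name and coordinates below are hypothetical):
       mvn install:install-file -Dfile=some-artifact.jar -DgroupId=org.example \
         -DartifactId=some-artifact -Dversion=1.0 -Dpackaging=jar \
         -DlocalRepositoryPath=src/main/site/resources/repo -DcreateChecksum=true
  -->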
</project>