From 34c8e0ab16811cc1764940b9a14b0dce3050a078 Mon Sep 17 00:00:00 2001
From: Michael Stack
Table of Contents
@@ -30,94 +30,94 @@
- Constraints are designed to be configurable, so a constraint can be used across different tables, but implement different
+ Constraints are designed to be configurable, so a constraint can be used across different tables, but implement different
  behavior depending on the specific configuration given to that constraint.
- By adding a constraint to a table (see Example Usage), constraints will automatically enabled.
- You also then have the option of to disable (just 'turn off') or remove (delete all associated information) all constraints on a table.
- If you remove all constraints
- (see {@link org.apache.hadoop.hbase.constraint.Constraints#remove(org.apache.hadoop.hbase.HTableDescriptor)},
- you must re-add any {@link org.apache.hadoop.hbase.constraint.Constraint} you want on that table.
- However, if they are just disabled (see {@link org.apache.hadoop.hbase.constraint.Constraints#disable(org.apache.hadoop.hbase.HTableDescriptor)},
+ By adding a constraint to a table (see Example Usage), constraints will automatically be enabled.
+ You also then have the option to disable (just 'turn off') or remove (delete all associated information) all constraints on a table.
+ If you remove all constraints
+ (see {@link org.apache.hadoop.hbase.constraint.Constraints#remove(org.apache.hadoop.hbase.HTableDescriptor)}),
+ you must re-add any {@link org.apache.hadoop.hbase.constraint.Constraint} you want on that table.
+ However, if they are just disabled (see {@link org.apache.hadoop.hbase.constraint.Constraints#disable(org.apache.hadoop.hbase.HTableDescriptor)}),
  all you need to do is enable constraints again, and everything will be turned back on as it was configured.
  Individual constraints can also be enabled, disabled or removed without affecting other constraints.
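  As a quick sketch of these table-wide operations, assuming a shared HTableDescriptor named desc and assuming that a table-wide
  Constraints.enable(HTableDescriptor) call mirrors the disable/remove methods linked above (exception handling is collapsed into a
  plain throws clause):

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.constraint.Constraints;

    public class ToggleConstraintsSketch {
      public static void toggle(HTableDescriptor desc) throws Exception {
        // 'turn off' every constraint on the table, but keep each constraint's settings
        Constraints.disable(desc);
        // turn everything back on exactly as it was configured
        Constraints.enable(desc);
        // or wipe all constraint information from the descriptor entirely;
        // after this, any constraint you still want must be re-added
        Constraints.remove(desc);
      }
    }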
- By default, constraints are disabled on a table.
+ By default, constraints are disabled on a table.
  This means you will not see any slowdown on a table if constraints are not enabled.
- Locking is recommended around each of Constraints add methods:
- {@link org.apache.hadoop.hbase.constraint.Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, Class...)},
- {@link org.apache.hadoop.hbase.constraint.Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, org.apache.hadoop.hbase.util.Pair...)},
+ Locking is recommended around each of the Constraints add methods:
+ {@link org.apache.hadoop.hbase.constraint.Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, Class...)},
+ {@link org.apache.hadoop.hbase.constraint.Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, org.apache.hadoop.hbase.util.Pair...)},
  and {@link org.apache.hadoop.hbase.constraint.Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, Class, org.apache.hadoop.conf.Configuration)}.
  Any changes on a single HTableDescriptor should be serialized, either within a single thread or via external mechanisms.
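  For example, a minimal sketch of that external locking, assuming several threads share one HTableDescriptor and using two
  hypothetical constraint classes (MyConstraint and MyOtherConstraint):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.constraint.Constraints;

    public class SerializedAddSketch {
      // every thread that mutates the descriptor must synchronize on the same monitor
      public static void addAll(HTableDescriptor desc, Configuration conf) throws Exception {
        synchronized (desc) {
          Constraints.add(desc, MyConstraint.class);
          Constraints.add(desc, MyOtherConstraint.class, conf);
        }
      }
    }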
- Note that having a higher priority means that a constraint will run later; e.g. a constraint with priority 1 will run before a
- constraint with priority 2.
+ Note that having a higher priority means that a constraint will run later; e.g. a constraint with priority 1 will run before a
+ constraint with priority 2.
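  As a small illustration, assuming (this is an assumption, not something stated here) that a single add call assigns priorities in
  the order its arguments are listed, and using two hypothetical constraints:

    // RangeConstraint gets the lower priority, so it runs before FormatConstraint
    Constraints.add(desc, RangeConstraint.class, FormatConstraint.class);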
- Since Constraints currently are designed to just implement simple checks (e.g. is the value in the right range), there will
- be no atomicity conflicts.
- Even if one of the puts finishes the constraint first, the single row will not be corrupted and the 'fastest' write will win;
+ Since Constraints are currently designed to implement only simple checks (e.g. is the value in the right range), there will
+ be no atomicity conflicts.
+ Even if one of the puts finishes the constraint first, the single row will not be corrupted and the 'fastest' write will win;
  the underlying region takes care of breaking the tie and ensuring that writes get serialized to the table.
- So yes, this doesn't ensure that we are going to get specific ordering or even a fully consistent view of the underlying data.
+ Note that this does not ensure a specific ordering, or even a fully consistent view, of the underlying data.
  Each constraint should only use local or instance variables, unless it is doing something more advanced.
  Static variables could cause difficulties when checking concurrent writes to the same region, leading to either highly locked situations (decreasing throughput) or a higher probability of errors.
  However, as long as each constraint just uses local variables, each thread interacting with the constraint will execute correctly and efficiently.
- Under the hood, constraints are implemented as a Coprocessor (see {@link org.apache.hadoop.hbase.constraint.ConstraintProcessor}
+
+ Under the hood, constraints are implemented as a Coprocessor (see {@link org.apache.hadoop.hbase.constraint.ConstraintProcessor}
  if you are interested).
- Let's look at one possible implementation of a constraint - an IntegerConstraint(there are also several simple examples in the tests).
+ Let's look at one possible implementation of a constraint - an IntegerConstraint (there are also several simple examples in the tests).
  The IntegerConstraint checks to make sure that the value is a String-encoded int.
  It is really simple to implement this kind of constraint; the only method that needs to be implemented is
{@link org.apache.hadoop.hbase.constraint.Constraint#check(org.apache.hadoop.hbase.client.Put)}:
@@ -141,18 +141,18 @@
} catch (NumberFormatException e) {
throw new ConstraintException("Value in Put (" + p
+ ") was not a String-encoded integer", e);
- } } }
+ } } }
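  For context, a complete check implementation built around that catch block might look roughly like the sketch below. It assumes the
  0.94/0.95-era client API (Put#getFamilyMap() returning lists of KeyValue) and a BaseConstraint-style parent class that supplies the
  Configuration plumbing, so treat it as an illustration rather than the exact IntegerConstraint shipped with the tests:

    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.constraint.BaseConstraint;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.util.Bytes;

    public class IntegerConstraint extends BaseConstraint {
      @Override
      public void check(Put p) throws ConstraintException {
        Map<byte[], List<KeyValue>> familyMap = p.getFamilyMap();
        for (List<KeyValue> kvs : familyMap.values()) {
          for (KeyValue kv : kvs) {
            // try to parse every value in the Put; anything that is not a
            // String-encoded integer triggers the NumberFormatException below
            try {
              Integer.parseInt(Bytes.toString(kv.getValue()));
            } catch (NumberFormatException e) {
              throw new ConstraintException("Value in Put (" + p
                  + ") was not a String-encoded integer", e);
            }
          }
        }
      }
    }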
- Note that all exceptions that you expect to be thrown must be caught and then rethrown as a
- {@link org.apache.hadoop.hbase.exceptions.ConstraintException}. This way, you can be sure that a
- {@link org.apache.hadoop.hbase.client.Put} fails for an expected reason, rather than for any reason.
- For example, an {@link java.lang.OutOfMemoryError} is probably indicative of an inherent problem in
+ Note that all exceptions that you expect to be thrown must be caught and then rethrown as a
+ {@link org.apache.hadoop.hbase.constraint.ConstraintException}. This way, you can be sure that a
+ {@link org.apache.hadoop.hbase.client.Put} fails for an expected reason, rather than for any reason.
+ For example, an {@link java.lang.OutOfMemoryError} is probably indicative of an inherent problem in
  the {@link org.apache.hadoop.hbase.constraint.Constraint}, rather than a failed {@link org.apache.hadoop.hbase.client.Put}.
  If an unexpected exception is thrown (for example, any kind of uncaught {@link java.lang.RuntimeException}),
- constraint-checking will be 'unloaded' from the regionserver where that error occurred.
+ constraint-checking will be 'unloaded' from the regionserver where that error occurred.
  This means no further {@link org.apache.hadoop.hbase.constraint.Constraint Constraints} will be checked on that server
  until it is reloaded. This is done to ensure the system remains as available as possible.
  Therefore, be careful when writing your own Constraint.
@@ -166,14 +166,14 @@
  Constraints.add(desc, IntegerConstraint.class);
- Once we added the IntegerConstraint, constraints will be enabled on the table (once it is created) and
+ Once we have added the IntegerConstraint, constraints will be enabled on the table (once it is created) and
  we will always check to make sure that the value is a String-encoded integer.
-
+
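  A minimal end-to-end sketch of that usage follows; the table name, column family, and the use of HBaseAdmin to create the table
  are illustrative assumptions rather than part of the original example:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.constraint.Constraints;

    public class IntegerConstraintUsageSketch {
      public static void main(String[] args) throws Exception {
        HTableDescriptor desc = new HTableDescriptor("myTable");
        desc.addFamily(new HColumnDescriptor("fam"));
        // registering the constraint on the descriptor also enables constraint checking
        Constraints.add(desc, IntegerConstraint.class);
        // the check only starts running once a table is actually created from this descriptor
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        try {
          admin.createTable(desc);
        } finally {
          admin.close();
        }
      }
    }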
  However, suppose we also write our own constraint, MyConstraint.java.
- First, you need to make sure this class-files are in the classpath (in a jar) on the regionserver where
+ First, you need to make sure the class files are in the classpath (in a jar) on the regionserver where
  that constraint will be run (this could require a rolling restart on the region server - see Caveats above).
- Suppose that MyConstraint also uses a Configuration (see {@link org.apache.hadoop.hbase.constraint.Constraint#getConf()}).
+ Suppose that MyConstraint also uses a Configuration (see {@link org.apache.hadoop.hbase.constraint.Constraint#getConf()}).
  Then adding MyConstraint looks like this:
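  The concrete snippet is not part of this excerpt; a minimal sketch of it, with a purely hypothetical configuration key, could be:

    Configuration conf = new Configuration(false);
    // settings that MyConstraint will later read back through getConf()
    conf.set("myconstraint.some.setting", "some value");
    Constraints.add(desc, MyConstraint.class, conf);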
  Suppose we realize that the {@link org.apache.hadoop.conf.Configuration} for MyConstraint is actually wrong
- when it was added to the table. Note, when it is added to the table, it is not added by reference,
+ when it was added to the table. Note, when it is added to the table, it is not added by reference,
  but is instead copied into the {@link org.apache.hadoop.hbase.HTableDescriptor}.
  Thus, to change the {@link org.apache.hadoop.conf.Configuration} we are using for MyConstraint, we need to do this:
@@ -202,7 +202,7 @@
  Constraints.setConfiguration(desc, MyConstraint.class, conf);
- This will overwrite the previous configuration for MyConstraint, but not change the order of the
+ This will overwrite the previous configuration for MyConstraint, but not change the order of the
  constraint nor whether it is enabled or disabled.
  Note that the same constraint class can be added multiple times to a table without repercussion.
@@ -216,7 +216,7 @@
  This just turns off MyConstraint, but retains the position and the configuration associated with MyConstraint.
- Now, if we want to re-enable the constraint, its just another one-liner:
+ Now, if we want to re-enable the constraint, it's just another one-liner:
  Constraints.enable(desc, MyConstraint.class);
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/HThreadedSelectorServerArgs.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/HThreadedSelectorServerArgs.java
index f19ec5fae45..8c4ab615bf6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/HThreadedSelectorServerArgs.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/HThreadedSelectorServerArgs.java
@@ -19,21 +19,19 @@ package org.apache.hadoop.hbase.thrift;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.thrift.server.TThreadedSelectorServer;
 import org.apache.thrift.transport.TNonblockingServerTransport;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 /**
  * A TThreadedSelectorServer.Args that reads hadoop configuration
  */
 @InterfaceAudience.Private
 public class HThreadedSelectorServerArgs extends TThreadedSelectorServer.Args {
-
-  private static final Logger LOG =
-      LoggerFactory.getLogger(TThreadedSelectorServer.class);
+  private static final Log LOG = LogFactory.getLog(TThreadedSelectorServer.class);
 
   /**
    * Number of selector threads for reading and writing socket
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java
index 8741952a534..423c5c614e9 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java
@@ -65,8 +65,8 @@ import org.junit.BeforeClass;
 import org.junit.Test;
 import org.mockito.Mockito;
 import org.junit.experimental.categories.Category;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 
 /**
  * Standup the master and fake it to test various aspects of master function.
@@ -78,7 +78,7 @@ import org.slf4j.LoggerFactory;
  */
 @Category(MediumTests.class)
 public class TestMasterNoCluster {
-  private static Logger LOG = LoggerFactory.getLogger(TestMasterNoCluster.class);
+  private static final Log LOG = LogFactory.getLog(TestMasterNoCluster.class);
   private static final HBaseTestingUtility TESTUTIL = new HBaseTestingUtility();
 
   @BeforeClass
@@ -240,7 +240,7 @@ public class TestMasterNoCluster {
    * @throws IOException
    * @throws KeeperException
    * @throws InterruptedException
-   * @throws DeserializationException
+   * @throws DeserializationException
    * @throws ServiceException
    */
   @Test (timeout=30000)
diff --git a/pom.xml b/pom.xml
index 1a3e7d869cd..92385c0d039 100644
--- a/pom.xml
+++ b/pom.xml
@@ -461,6 +461,7 @@
 org.apache.maven.plugins maven-release-plugin +2.4.1 -Dmaven.test.skip.exec +pom.xml
@@ -609,6 +611,9 @@
@@ -884,7 +889,9 @@
 prepare-package + test-jar 2.4 2.6 1.1.1 -2.1 +2.2 +3.2.1 +3.0.1 2.1.2 12.0.1 1.8.8
@@ -896,13 +903,13 @@
 1.6.8 4.11 1.50 -1.4.3 1.2.17 1.9.0 2.4.1 1.0.1 0.9.0 3.4.5 +1.6.4 0.0.1-SNAPSHOT 2.6.3 2.3.1
@@ -1045,6 +1052,18 @@
 jettison ${jettison.version} ++ + +log4j +log4j +${log4j.version} ++ org.slf4j +slf4j-api +${slf4j.version} ++ com.yammer.metrics metrics-core
@@ -1055,6 +1074,16 @@
 guava ${guava.version} + +commons-collections +commons-collections +${collections.version} ++ commons-httpclient +commons-httpclient +${httpclient.version} +- commons-cli commons-cli
@@ -1090,11 +1119,6 @@
 commons-math ${commons-math.version} - log4j -log4j -${log4j.version} -- org.apache.zookeeper zookeeper
@@ -1203,16 +1227,6 @@
 jackson-xc ${jackson.version} - -org.slf4j -slf4j-api -${slf4j.version} -- org.slf4j -slf4j-log4j12 -${slf4j.version} -junit
@@ -1438,7 +1462,8 @@
 hadoop-1.1 - !hadoop.profile + +!hadoop.profile
@@ -1446,7 +1471,6 @@
@@ -1507,7 +1531,6 @@
 ${hadoop-one.version} -1.4.3 hbase-hadoop1-compat src/main/assembly/hadoop-one-compat.xml 1.0.4 ${hadoop.version} -1.4.3 hbase-hadoop1-compat src/main/assembly/hadoop-one-compat.xml
@@ -1558,8 +1581,8 @@
 hadoop-2.0 - hadoop.profile -2.0 + +hadoop.profile 2.0
@@ -1567,12 +1590,43 @@
 ${hadoop-two.version} -1.6.1 hbase-hadoop2-compat src/main/assembly/hadoop-two-compat.xml + + +org.apache.hadoop +hadoop-mapreduce-client-core +${hadoop-two.version} ++ +org.apache.hadoop +hadoop-mapreduce-client-jobclient +${hadoop-two.version} ++ +org.apache.hadoop +hadoop-mapreduce-client-jobclient +${hadoop-two.version} +test-jar ++ +org.apache.hadoop +hadoop-hdfs +${hadoop-two.version} ++ +org.apache.hadoop +hadoop-hdfs +${hadoop-two.version} +test-jar ++ org.apache.hadoop +hadoop-auth +${hadoop-two.version} +org.apache.hadoop hadoop-common
@@ -1625,7 +1679,6 @@
 - 1.6.1 3.0.0-SNAPSHOT