Fix logic for:
1) handling an exception while waiting for the reply from the primary replica.
2) handling exceptions from replicas while waiting for a correct response.
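A rough sketch of the intended pattern (names and the completion-service shape are illustrative, not the actual RpcRetryingCallerWithReadReplicas code): ask the primary first, and if it times out or fails, fan out to the replicas and take the first correct response:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ReplicaReadSketch {
  // Wait briefly for the primary; on timeout or failure, fan out to the other
  // replicas and return the first successful result.
  static <T> T readWithReplicas(Callable<T> primary, List<Callable<T>> replicas,
      long primaryTimeoutMs) throws Exception {
    ExecutorService pool = Executors.newCachedThreadPool();
    ExecutorCompletionService<T> cs = new ExecutorCompletionService<>(pool);
    try {
      cs.submit(primary);
      Future<T> f = cs.poll(primaryTimeoutMs, TimeUnit.MILLISECONDS);
      if (f != null) {
        try {
          return f.get();               // primary answered in time
        } catch (ExecutionException e) {
          // primary failed: fall through and try the replicas instead of rethrowing
        }
      }
      for (Callable<T> replica : replicas) {
        cs.submit(replica);
      }
      Throwable last = null;
      for (int i = 0; i < replicas.size(); i++) {
        try {
          return cs.take().get();       // first correct response wins (may also be the late primary)
        } catch (ExecutionException e) {
          last = e.getCause();          // remember the failure, keep waiting on the others
        }
      }
      throw new Exception("no replica returned a correct response", last);
    } finally {
      pool.shutdownNow();
    }
  }
}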
Signed-off-by: Esteban Gutierrez <esteban@apache.org>
This includes:
HBASE-16742 Add chapter for devs on how we do protobufs going forward
HBASE-16741 Amend the generate-protobufs out-of-band build step
to include shading, pulling in protobuf source, and a hook for patching protobuf
Removed ByteStringer from hbase-protocol-shaded. Use the protobuf-3.1.0
trick directly instead. Makes stuff cleaner. All under 'shaded' dir is
now generated.
HBASE-16567 Upgrade to protobuf-3.1.x
Regenerate all protos in this module with protoc3.
Redo ByteStringer to use new pb3.1.0 unsafebytesutil
instead of HBaseZeroCopyByteString
HBASE-16264 Figure how to deal with endpoints and shaded pb: shade our protobufs.
Do it in a manner that makes it so we can still have in our API references to
com.google.protobuf (and in REST). The c.g.p in the API is for Coprocessor Endpoints (CPEPs).
This patch is Tactic #4 from the Shading Doc attached to the referenced issue.
Figuring out an approach took a while because we have Coprocessor Endpoints
mixed in with the core of HBase that are tough to untangle (FIX).
Tactic #4 (the fourth attempt at addressing this issue) is COPY all but
the CPEP .proto files currently in hbase-protocol to a new module named
hbase-protocol-shaded. Generate .protos again in the new location and
then relocate/shade the generated files. Let CPEPs keep on with the
old references at com.google.protobuf.* and
org.apache.hadoop.hbase.protobuf.* but change the hbase core so all
instead refer to the relocated files in their new location at
org.apache.hadoop.hbase.shaded.com.google.protobuf.*.
Let the new module also shade protobufs themselves and change hbase
core to pick up this shaded protobuf rather than directly reference
com.google.protobuf.
This approach allows us to explicitly refer to either the shaded or
non-shaded version of a protobuf class in any particular context (though
usually context dictates one or the other). Core runs on shaded protobuf.
CPEPs continue to use whatever is on the classpath with
com.google.protobuf.* which is pb2.5.0 for the near future at least.
See the above-cited doc for follow-ons and downsides. In short, IDEs will complain
about not being able to find the shaded protobufs since shading happens at package
time; this will be fixed by checking in all generated classes and relocated protobuf in
a follow-on. Also, CPEPs currently suffer an extra copy as they are marshalled from
non-shaded to shaded; to be fixed. Finally, our .protos are duplicated: once
shaded, and once not. A pain, but how else to expose our protos to CPEPs or a
C++ client that wants to talk with HBase AND shade protobuf.
Details:
Add a new hbase-protocol-shaded module. It is a copy of hbase-protocol
with everything relocated from o.a.h.h.* to o.a.h.h.shaded.*. The new module
also includes the relocated pb. It does not include CPEPs; they stay in
their old location.
Add another module hbase-endpoint which has in it all the endpoints
that ship as part of hbase -- at least the ones that are not
entangled with core such as AccessControl and Auth. Move all protos
for these CPEPs here as well as their unit tests (mostly moving a
bunch of stuff out of hbase-server module)
Much of the change looks like this:
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;
In HTable and in HBaseAdmin, regularize the way Callables are used and also hide
protobuf usage as much as possible, moving it up into Callable superclasses or out
to utility classes. Still TODO is adding in retries, etc., but that can wait on
procedures, which will redo all this.
Also in HTable and HBaseAdmin, as well as in HRegionServer and Server, be explicit
when using non-shaded protobuf. Use the full path so it is clear. This is around
endpoint coprocessor registration of services and execution of CPEP methods.
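For illustration, a hedged sketch of that convention (class and method names are hypothetical): the non-shaded protobuf type is spelled out with its full package name at the CPEP registration point, so it cannot be confused with the relocated org.apache.hadoop.hbase.shaded.com.google.protobuf classes used by core:

// Hypothetical sketch only: where a Coprocessor Endpoint service is registered,
// the non-shaded protobuf type is written with its full package name rather than
// imported, so readers can tell at a glance which protobuf is in play.
public class EndpointRegistrationSketch {
  private final java.util.List<com.google.protobuf.Service> coprocessorServices =
      new java.util.ArrayList<>();

  public void registerService(com.google.protobuf.Service service) {
    // CPEPs stay on the classpath protobuf (pb2.5.0 for the near future); core never mixes the two.
    coprocessorServices.add(service);
  }
}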
Shrunk ProtobufUtil by moving methods used by only one CPEP back to that CPEP, either
into its Client class or into a new Util class; e.g. AccessControlUtil.
There are now two versions of ProtobufUtil: a shaded one and a subset
that is used by CPEPs doing non-shaded work.
Made it so hbase-common no longer depends on hbase-protocol (with Matteo's help)
R*Converter classes got moved down under shaded package -- they are for internal
use only. There are no non-shaded versions of these classes.
D hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable
D RetryingCallableBase
Not used anymore, and we have too many tiers of Callables, so removed/cleaned up.
A ClientServiceCallable
Had to add this one. RegionServerCallable was made generic so it could be used
for a few Interfaces (Client and Admin). Then added ClientServiceCallable to
implement RegionServerCallable with the Client Interface.
Changes the namespace_exists? method in the SecurityAdmin ruby code to catch NamespaceNotFoundException
and modifies the Admin.java file to document the exception.
Signed-off-by: Sean Busbey <busbey@apache.org>
New tool to dump existing replication peers, configurations and
queues when using HBase Replication. The tool provides two flags:
--distributed This flag will poll each RS for information about
the replication queues being processed on this RS.
By default this is not enabled and the information
about the replication queues and configuration will
be obtained from ZooKeeper.
--hdfs When --distributed is used, this flag will attempt
to calculate the total size of the WAL files used
by the replication queues. Since it's possible that
multiple peers are configured, this value can be
overestimated.
Signed-off-by: Matteo Bertozzi <matteo.bertozzi@cloudera.com>
This is a revert of a revert; i.e. we are adding back the change, only now
with fixes for the broken unit test. There was a real issue in a test that
went in just at the same time as this commit: I was getting a new nonce on each
retry rather than reusing the one generated for the mutation.
Other changes since the revert are more hiding of RpcController: use an
accessor method rather than always passing in an RpcController.
Walked back retrying operations that used to be single-shot (though a
code comment said they need a retry) because it opens a can of worms where
we'd retry things like a bad column family when we shouldn't (needs
work adding in DoNotRetryIOEs).
Changed name of class from PayloadCarryingServerCallable to
CancellableRegionServerCallable.
Fix javadoc and findbugs warnings.
Fix case of not initializing the ScannerCallable RpcController.
Below is the original commit message:
Remove mention of ServiceException and other protobuf classes from all over the codebase.
Purge TimeLimitedRpcController. Let's just have one override of RpcController.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable.java
Cleanup. Make it clear this is an odd class for async hbase intro.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
Refactor of RegionServerCallable allows me to clean up a bunch of
boilerplate in here and remove protobuf references.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
Purge protobuf references everywhere except a reference to a throw of a
ServiceException in the method checkHBaseAvailable. I deprecated it in favor
of a new 'available' method (the SE is not actually needed).
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
Move the RetryingTimeTracker instance in here from HTable.
Allows me to contain the tracker and remove repeated code in HTable.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java
Cleanup: move setup of the rpc in here rather than have it repeated in HTable.
Allows me to remove protobuf references from a bunch of places.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/FlushRegionCallable.java
Make use of the push of boilerplate up into RegionServerCallable
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/SecureBulkLoadClient.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
Move boilerplate up into superclass.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetryingTimeTracker.java
Cleanup
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/PayloadCarryingRpcController.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java
Factor in TimeLimitedRpcController. Just have one RpcController override.
D hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/TimeLimitedRpcController.java
Removed. Let's have one override of the pb RpcController only.
M hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
(handleRemoteException) added
(toText) added
Purge ServiceException from Callable subclasses by pushing SE handling
up into the parent Callable class (it varies by context but this is the basic
pattern). Allows us to remove a bunch of boilerplate.
Do this in the public-facing classes in particular (though if
an API has SE in it -- which a few do -- this patch leaves those
untouched for now). Make it so HBaseAdmin and HTable have no
direct pb imports (except for the endpoint coprocessor API).
Change a few of the HBaseAdmin calls to be retrying where comments
ask that we do retry rather than one time.
Signed-off-by: stack <stack@apache.org>
hbase-client has the hbase-common test-jar as a dependency in compile scope, while it should be test scope instead.
This patch fixes the bug.
Closes #12
Signed-off-by: Sean Busbey <busbey@apache.org>
TimeRangeTracker is a point of contention when many threads are reading a StoreFile.
Fixes HBASE-16074 "ITBLL fails, reports lost big or tiny families": broken
scanning was a side effect of a cleanup in HBASE-15650 (making
TimeRange construction consistent) that exposed a latent issue in
TimeRange#compare. See HBASE-16074 for more detail.
Also change the HFile Writer constructor so we pass in the TimeRangeTracker, if there is one,
on construction rather than setting it later (the flag and reference were not volatile,
so this could have made for issues in the concurrent case). And make sure the construction
of a TimeRange from a TimeRangeTracker on open of an HFile Reader never makes a
bad minimum value, one that would preclude us from reading any values from the file
(set min to 0).
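A minimal sketch of that guard, assuming a hypothetical helper rather than the actual TimeRangeTracker code:

// Hypothetical helper (not the real TimeRangeTracker code): derive [min, max] bounds
// for a TimeRange, never letting an uninitialized minimum (e.g. Long.MAX_VALUE)
// preclude reading any values from the file.
public class TimeRangeBoundsSketch {
  static long[] toTimeRangeBounds(long trackedMin, long trackedMax) {
    long min = trackedMin;
    if (min > trackedMax) {
      // Tracker was never updated or was badly initialized; warn and fall back to an
      // all-inclusive lower bound so the HFile stays readable.
      System.err.println("WARN: bad initial TimeRange [" + min + "," + trackedMax + "]; resetting min to 0");
      min = 0L;
    }
    return new long[] { min, trackedMax };
  }
}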
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Call through to the next constructor (if minStamp was 0, we'd skip setting
allTime=true). Add asserts that timestamps are not < 0 because it messes
us up if they are (we were already checking for < 0 on construction, but now
assert that passed-in timestamps are not < 0).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
Add constructor override that takes a TimeRangeTracker (set when flushing
but not when compacting)
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
Add override creating an HFile in tmp that takes a TimeRangeTracker
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Add an override for the HFile Writer that takes a TimeRangeTracker. Take it on
construction instead of having it passed by a setter later (the flags and
reference set by the setter were not volatile... could have been a problem
in the concurrent case).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Log WARN if bad initial TimeRange value (and then 'fix' it)
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
A few tests to prove serialization works as expected and that we'll get a bad min if not constructed properly.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
Handle OLDEST_TIMESTAMP explicitly. Don't expect TimeRange to do it.
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
Refactor from junit3 to junit4 and add test for this weird case.
All ReplicationTableBase methods that need to access the Replication Table will block until it is created, though.
Also refactored ReplicationSourceManager so that abandoned queue adoption runs in the background as well, so that it does not block HRegionServer initialization.
Signed-off-by: Elliott Clark <eclark@apache.org>
Building on HBASE-15958.
Provided a ReplicationQueuesClientHBaseImpl that relies on the HBase Replication Table to track WAL queues.
Refactored out a large section of ReplicationQueuesHBaseImpl into a ReplicationTableClient class that handles Replication Table operations.
Signed-off-by: Elliott Clark <eclark@apache.org>
Building on HBASE-15883.
Now implementing the claim queues procedure within an HBase table.
Also added UnitTests to test claimQueue.
Peer tracking will still be performed by ZooKeeper though.
Also modified the queueId tracking procedure so we no longer have to perform scans over the Replication Table.
This does make our queue naming schema slightly different from ReplicationQueuesZKImpl though.
Signed-off-by: Elliott Clark <eclark@apache.org>
- Testing by executing a command covers the exact path users will trigger, so it's better than directly calling library functions in tests. Changed the tests to use @shell.command(:<command>, args) to execute them like a command coming from the shell.
Norm change:
Commands should print the output the user would like to see but, in the end, should also return the relevant value. This way:
- Tests can use returned value to check that functionality works
- Tests can capture stdout to assert particular kind of output user should see.
- We do not print the return value in interactive mode and keep the output clean. See Shell.command() function.
Bugs found due to this change:
- Uncovered bug in major_compact.rb with this approach. It was calling admin.majorCompact() which doesn't exist but our tests didn't catch it since they directly tested admin.major_compact()
- Enabled TestReplicationShell. If it's bad, flaky infra will take care of it.
Change-Id: I5d8af16bf477a79a2f526a5bf11c245b02b7d276
Implemented ReplicationQueuesHBaseImpl that tracks WAL offsets and replication queues in an HBase table.
Only wrote the basic tracking methods, have not implemented claimQueue() or HFileRef methods yet.
Wrote a basic unit test for ReplicationQueuesHBaseImpl that tests the implemented functions on a single Region Server.
Signed-off-by: Elliott Clark <elliott@fb.com>
Signed-off-by: Elliott Clark <eclark@apache.org>
It added this in AsyncProcess#waitForMaximumCurrentTasks:
  synchronized (this.tasksInProgress) {
+   if (tasksInProgress.get() != oldInProgress) break;
    this.tasksInProgress.wait(100);
which added a break out of our waiting loop on any change in the
count of tasks; it seems that what was wanted instead was just to
avoid the wait when there had been movement in the count of completed
tasks.
Reformats waitForMaximumCurrentTasks so it is testable. Adds
test that we indeed wait on the specified parameter.
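A self-contained sketch of the corrected, testable wait loop; the AtomicLong-as-monitor shape follows the snippet above, but the method signature and class are assumptions, not the exact AsyncProcess code:

import java.util.concurrent.atomic.AtomicLong;

public class WaitForTasksSketch {
  // Block until the number of in-flight tasks drops to 'max'. Unlike the broken
  // version, a change in the counter only ends one wait() nap; the loop keeps
  // re-checking until the target is actually reached.
  static void waitForMaximumCurrentTasks(AtomicLong tasksInProgress, long max)
      throws InterruptedException {
    long lastLogged = -1;
    while (true) {
      long current = tasksInProgress.get();
      if (current <= max) {
        return;                          // enough tasks have completed
      }
      if (current != lastLogged) {
        lastLogged = current;            // log/record only when the count moves
      }
      synchronized (tasksInProgress) {
        if (tasksInProgress.get() <= max) {
          return;                        // re-check under the monitor before napping
        }
        tasksInProgress.wait(100);       // task-completion paths call notifyAll()
      }
    }
  }

  // Example completion path: decrement the counter and wake any waiters.
  static void taskFinished(AtomicLong tasksInProgress) {
    tasksInProgress.decrementAndGet();
    synchronized (tasksInProgress) {
      tasksInProgress.notifyAll();
    }
  }
}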
Summary:
Allow TestTimestampFilterSeekHint to provide a seek next hint.
This can be incorrect as it might skip deletes. However it can
make things much much faster.
Test Plan: Added a unit test.
Differential Revision: https://reviews.facebook.net/D55617
Summary:
Currently WAL splitting is broken when a region has been opened multiple times in recent minutes.
Region open and region close write event markers to the WAL. These markers should have the sequence id in them. However, it is currently getting 1. That means that if a region has moved multiple times in the last few minutes, then multiple split log workers will try to create the recovered edits file for sequence id 1. One of the workers will fail and, on failing, will delete the recovered edits, causing all split WAL attempts to fail.
We need to fix that. It appears that the close event with a sequence id of one is coming from region warm-up.
This patch fixes it by making sure the close on warm-up doesn't happen. Also, splitting will ignore any of the events that are already in the logs.
Test Plan: Unit tests pass
Differential Revision: https://reviews.facebook.net/D55557
Further investigation after HBASE-15221 led to findings that
AsyncProcess should have been managing the contents of the region
location cache, appropriately clearing it when necessary (e.g. when an
RPC to a server fails because the server doesn't host that region).
For multi() RPCs, the tableName argument is null since there is no
single table that the updates are destined for. This inadvertently
caused the existing region location cache updates to fail on 1.x
branches. AsyncProcess needs to handle a null tableName
and perform the necessary cache evictions.
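A hedged sketch of that branch (the cache interface here is simplified and illustrative, not the real ClusterConnection API):

// Illustrative sketch only: shows the branch AsyncProcess needs when a multi() RPC
// fails and there is no single table name to key the region location cache update on.
interface LocationCache {
  void updateCachedLocation(String tableName, byte[] row, String server, Throwable error);
  void clearCachesForServer(String server);
}

class FailureHandlerSketch {
  private final LocationCache cache;

  FailureHandlerSketch(LocationCache cache) {
    this.cache = cache;
  }

  void handleFailedServer(String tableName, byte[] row, String server, Throwable error) {
    if (tableName != null) {
      // Normal case: refresh or evict the cached location for this table/row.
      cache.updateCachedLocation(tableName, row, server, error);
    } else {
      // multi() case: no single destination table, so evict everything cached
      // for the failed server instead of silently skipping the update.
      cache.clearCachesForServer(server);
    }
  }
}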
As such, much of the new retry logic in HTableMultiplexer is
unnecessary and is removed with this commit. Getters which were
added as part of testing were left, since they are mostly
harmless and should have no negative impact.
Signed-off-by: stack <stack@apache.org>
When a Put fails due to a NotServingRegionException, the cached location
of that Region is never cleared. Thus, subsequent calls to resubmit
the Put will fail in the same way as the original, never determining
the new location of the Region.
If the Connection is not closed by the user before the Multiplexer
is discarded, it will leak resources.
Signed-off-by: Sean Busbey <busbey@cloudera.com>
Summary:
* Add VersionInfoUtil to determine if a client has a specified version or better
* Add an exception type to say that the response should be chunked
* Add client-side knowledge of retry exceptions
* Add metrics for how often this happens
Test Plan: Added a unit test
Differential Revision: https://reviews.facebook.net/D51771
Summary:
Create and use a copy on write map for region location.
- Create a copy on write map backed by a sorted array.
- Create a test for both comparing each with a jdk provided map.
- Change MetaCache to use the new map.
Test Plan:
- org.apache.hadoop.hbase.client.TestFromClientSide
- TestHCM
Differential Revision: https://reviews.facebook.net/D49545
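A condensed sketch of the copy-on-write idea (not the actual CopyOnWriteArrayMap code): reads do a lock-free binary search over an immutable sorted snapshot, and each write publishes a fresh copy of the array:

import java.util.Arrays;

// Minimal copy-on-write sorted-array map sketch for String keys. Lookups never lock;
// each put copies the array, which suits read-heavy data like region locations.
public class CowSortedArrayMap<V> {
  private static final class Entry<V> {
    final String key; final V value;
    Entry(String key, V value) { this.key = key; this.value = value; }
  }

  private volatile Entry<V>[] entries = newArray(0);

  @SuppressWarnings("unchecked")
  private static <V> Entry<V>[] newArray(int n) { return new Entry[n]; }

  public V get(String key) {
    Entry<V>[] snap = entries;                 // read an immutable snapshot
    int lo = 0, hi = snap.length - 1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      int c = snap[mid].key.compareTo(key);
      if (c == 0) return snap[mid].value;
      if (c < 0) lo = mid + 1; else hi = mid - 1;
    }
    return null;
  }

  public synchronized void put(String key, V value) {
    Entry<V>[] snap = entries;
    int i = 0;                                 // find insertion point in the sorted snapshot
    while (i < snap.length && snap[i].key.compareTo(key) < 0) i++;
    if (i < snap.length && snap[i].key.equals(key)) {
      Entry<V>[] copy = Arrays.copyOf(snap, snap.length);
      copy[i] = new Entry<>(key, value);       // replace in a fresh copy
      entries = copy;
    } else {
      Entry<V>[] copy = newArray(snap.length + 1);
      System.arraycopy(snap, 0, copy, 0, i);
      copy[i] = new Entry<>(key, value);
      System.arraycopy(snap, i, copy, i + 1, snap.length - i);
      entries = copy;                          // publish the new snapshot
    }
  }
}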
Summary:
Use concurrent collections and atomic longs to keep track of edits in the buffered mutator.
This keeps the buffered mutator thread-safe, but now shared buffered mutators are not contending on locks.
Test Plan: Unit Tests.
Differential Revision: https://reviews.facebook.net/D49467
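A toy sketch of the approach (class and method names are assumptions, not the real BufferedMutatorImpl): track the buffered size with an AtomicLong and keep edits in a concurrent queue so writers do not serialize on a monitor:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: buffer mutations without a shared lock on the add path.
public class ConcurrentWriteBufferSketch<M> {
  private static final class Sized<M> {
    final M mutation; final long heapSize;
    Sized(M mutation, long heapSize) { this.mutation = mutation; this.heapSize = heapSize; }
  }

  private final ConcurrentLinkedQueue<Sized<M>> buffer = new ConcurrentLinkedQueue<>();
  private final AtomicLong bufferedSize = new AtomicLong();
  private final long flushThreshold;

  public ConcurrentWriteBufferSketch(long flushThreshold) {
    this.flushThreshold = flushThreshold;
  }

  public void add(M mutation, long heapSize) {
    buffer.add(new Sized<>(mutation, heapSize));          // lock-free enqueue
    if (bufferedSize.addAndGet(heapSize) >= flushThreshold) {
      flush();                                            // only flushers coordinate
    }
  }

  public synchronized void flush() {
    List<M> batch = new ArrayList<>();
    Sized<M> s;
    while ((s = buffer.poll()) != null) {
      batch.add(s.mutation);
      bufferedSize.addAndGet(-s.heapSize);                // account per edit drained
    }
    send(batch);                                          // hand off to the writer
  }

  protected void send(List<M> batch) {
    // placeholder: the real code would submit the batch to AsyncProcess
  }
}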
Summary: Send the list of mutations to AsyncProcess only after we are done adding to the list; otherwise there's a lot of contention
Test Plan: UnitTests.
Differential Revision: https://reviews.facebook.net/D49251
* We now have apidocs, devapidocs, testapidocs, and testdevapidocs
* When using reportSets, the Javadoc plugin ignores the plugin configuration,
so I propagated it to the individual reportSet configuration blocks.
* I was able to remove some of the logic that moved / copied things around
in post-site.
* We now have xref and xref-test (these are superfluous but something needs them
so I left them)
* Added source to Javadocs -- you can click a method or property to browse its source.
More user-friendly than xref maybe.
* You can now get the whole site to build by doing 'mvn clean site site:stage' and
it takes about 10 minutes.
First pass. Plumbs up metrics for each connection instance. Exposes
static information about those connections (zk quorum and root node,
cluster id, user). Exposes basic thread pool utilization gauges for
"meta lookup" and "batch" pools.
* corrects license/notice for source distribution
* adds inception year to correct copyright in generated NOTICE files for jars
* updates project names in poms to use "Apache HBase" instead of "HBase" so jar NOTICE files will be correct
* uses append-resources to include supplemental info on jars with 3rd party works in source
* adds an hbase specific resource bundle for jars that include 3rd party works for binaries
** uses supplemental-model to fill in license gaps
** uses the above and a shade plugin transformation to build proper files for shaded jars.
** uses the above and the assembly plugin to build the proper files for bin assembly
* adds a NOTICE item for things copied out of Hadoop (TODO legal-discuss)
* change pom to use a maven 3 compat version of clover
* add clover to javadoc plugin deps so that instrumented doclet works
* modify IA annotation test to filter out clover instrumentation
* make splitlog counters check for atomiclong before casting
Revert commit of HBASE-13655 Deprecate duplicate getCompression methods in HColumnDescriptor
I committed with bad commit message.
This reverts commit 5732bdb483.
API conflicts and test fixes
Update LoadTestTool.COLUMN_FAMILY -> DEFAULT_COLUMN_FAMILY due to HBASE-11842
Use new 1.0+ api in some tests
Use updated Scanners internal api
Fix to take into account HBASE-13203 - procedure v2 table delete
Conflicts:
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* IA.Public accessible logger instances deprecated
* logger instances modified by tests left in place
* all others made private static final
Signed-off-by: Sean Busbey <busbey@apache.org>
Use the context passed back via ScanResponse that a RegionServer
fills in to denote whether or not more results exist in the
current Region. Add a simple factory to remove a static method
used across both SmallScanner and SmallReverseScanner. Add new
unit tests for both scanner classes to test scans with and
without the new context (as a quick backward-compatibility test).
The RS already returns to the client whether or not it has additional
results to be returned in a subsequent call to scan(), but the ClientScanner
did not use or adhere to this value. Consequently, this can lead to
bugs around moving to the next region too early. A new method was added
to ClientScanner in the name of testability.
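A hedged sketch of how a client-side scanner might consume that per-region context; the decision helper below is illustrative, not the actual ClientScanner method:

// Illustrative only: decide whether to move to the next region using the
// server-provided "more results in region" context rather than guessing from
// an empty batch. A null flag models an older server that sends no context.
final class ScanProgressSketch {
  private ScanProgressSketch() {}

  static boolean shouldMoveToNextRegion(Boolean serverHasMoreResultsInRegion, int resultsReturned) {
    if (serverHasMoreResultsInRegion != null) {
      // Newer servers tell us explicitly; trust them even when the batch was non-empty.
      return !serverHasMoreResultsInRegion;
    }
    // Older servers give no context: fall back to the old heuristic of moving on
    // only when the region returned nothing.
    return resultsReturned == 0;
  }
}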
Encapsulate server-state into RegionServerCallable to avoid
modifying parameterization of callable impls.
Signed-off-by: Andrew Purtell <apurtell@apache.org>
Include some basic tests for the method on a testing cluster.
Also update master page to show an alert when balancer is disabled.
Signed-off-by: Enis Soztutar <enis@apache.org>
Summary:
The current behavior of a region move shuts down a region and then starts it up on another regionserver. This causes increased latency and possibly timeouts until the new region's cache is fully warmed up. We can make a region move less disruptive by warming the cache in the destination region server before shutting down the old region.
See https://issues.apache.org/jira/browse/HBASE-13316
Test Plan:
1. Unit Tests
2. Added test for concurrent moves and warmups
3. Manually tested reads/writes happening with concurrent moves
Subscribers: tedyu
Differential Revision: https://reviews.facebook.net/D35967
Signed-off-by: Elliott Clark <eclark@apache.org>
Adds a number of lifecycle-mapping entries which
prevent errors from showing up in Eclipse on a fresh
import of HBase. For plugins defined in the top-level
pom, the mapping is added there; otherwise, the mapping
is pushed down to the child pom.
Signed-off-by: Sean Busbey <busbey@apache.org>
* move mapreduce version of TableInputFormat tests out of mapred
* add ability to get runnable job via MR test shims
* correct the javadoc example for current APIs.
* add tests that run a job based on extending TableInputFormatBase (as given in the javadocs)
* add tests that run jobs based on the javadocs from 0.98
* fall back to our own Connection if users of the deprecated table configuration have a managed connection.
In our pre-1.0 API, HTable is considered a light-weight object that is consumed by
a single thread at a time. The HTablePool class provided a means of sharing
multiple HTable instances across a number of threads. As an optimization,
HTable managed a "write buffer", accumulating edits and sending a "batch" all
at once. By default the batch was sent as the last step in invocations of
put(Put) and put(List<Put>). The user could disable the automatic flushing of
the write buffer, retaining edits locally and only sending the whole "batch"
once the write buffer has filled or when the flushCommits() method is invoked
explicitly. Explicit or implicit batch writing was controlled by the
setAutoFlushTo(boolean) method. A value of true (the default) had the write
buffer flushed at the completion of a call to put(Put) or put(List<Put>). A
value of false allowed for explicit buffer management. HTable also exposed the
buffer to consumers via getWriteBuffer().
The combination of HTable with setAutoFlushTo(false) and the HTablePool
provided a convenient mechanism by which multiple "Put-producing" threads could
share a common write buffer. Both HTablePool and HTable are deprecated, and
they are officially replaced in the new 1.0 API by Table and BufferedMutator.
Table, which replaces HTable, no longer exposes explicit write-buffer
management. Instead, explicit buffer management is exposed via BufferedMutator.
BufferedMutator is made safe for concurrent use. Where code would previously
retrieve and return HTables from an HTablePool, now that code creates and
shares a single BufferedMutator instance across all threads.
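A short usage sketch of the replacement pattern described above (the table name, column family, and thread count are placeholders): one BufferedMutator created from the Connection and shared by all producer threads, flushed once at the end:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedBufferedMutatorExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         // One BufferedMutator, safe to share across all producer threads.
         BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("example_table"))) {
      ExecutorService pool = Executors.newFixedThreadPool(4);
      for (int t = 0; t < 4; t++) {
        final int id = t;
        pool.submit(() -> {
          Put put = new Put(Bytes.toBytes("row-" + id));
          put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value-" + id));
          try {
            mutator.mutate(put);      // buffered locally; flushed when the buffer fills
          } catch (java.io.IOException e) {
            e.printStackTrace();
          }
          return null;
        });
      }
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
      mutator.flush();                // push anything still buffered before closing
    }
  }
}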