This is a revert of a revert, i.e. it puts back the original change; only I also added
HBaseCommonTestingUtility#getRandomDir from branch-1, since it was
missing in the master branch.
This reverts commit 9afcb4366d.
- processOldArgs() in AbstractHBaseTool makes it possible to change existing tools to use Apache CLI in a backward-compatible manner (see the sketch below).
- Changes ExportSnapshot tool as a proof-of-concept
- Update version of Apache CLI from 1.2 to 1.3.1
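To illustrate the backward-compatibility idea, here is a hedged sketch (not the actual AbstractHBaseTool code; the method body and the --snapshot option are hypothetical) of translating a legacy positional argument into an Apache Commons CLI option before parsing:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
public class LegacyArgsSketch {
  // Hypothetical stand-in for processOldArgs(): rewrite old positional usage
  // ("tool <snapshot> ...") into the new option form before CLI parsing.
  static String[] processOldArgs(List<String> args) {
    List<String> out = new ArrayList<>(args);
    if (!out.isEmpty() && !out.get(0).startsWith("-")) {
      String snapshot = out.remove(0);
      out.add(0, snapshot);
      out.add(0, "--snapshot");
    }
    return out.toArray(new String[0]);
  }
  public static void main(String[] rawArgs) throws ParseException {
    Options options = new Options();
    options.addOption(Option.builder().longOpt("snapshot").hasArg().desc("snapshot to export").build());
    CommandLine cmd = new DefaultParser().parse(options, processOldArgs(Arrays.asList(rawArgs)));
    System.out.println("snapshot = " + cmd.getOptionValue("snapshot"));
  }
}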
Change-Id: I4bb2e7da4f2ed0e12e28621ab16459309fa36d03
Which includes
HBASE-16742 Add chapter for devs on how we do protobufs going forward
HBASE-16741 Amend the generate protobufs out-of-band build step
to include shade, pulling in protobuf source and a hook for patching protobuf
Removed ByteStringer from hbase-protocol-shaded. Use the protobuf-3.1.0
trick directly instead. Makes stuff cleaner. All under 'shaded' dir is
now generated.
HBASE-16567 Upgrade to protobuf-3.1.x
Regenerate all protos in this module with protoc3.
Redo ByteStringer to use new pb3.1.0 unsafebytesutil
instead of HBaseZeroCopyByteString
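For reference, the "protobuf-3.1.0 trick" is, as I understand it, the UnsafeByteOperations API that wraps a byte[] as a ByteString without copying; a minimal sketch:
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;
public class ByteStringWrapSketch {
  public static void main(String[] args) {
    byte[] row = new byte[] { 'r', 'o', 'w', '1' };
    ByteString zeroCopy = UnsafeByteOperations.unsafeWrap(row);   // no copy; do not mutate row afterwards
    ByteString copied = ByteString.copyFrom(row);                 // copying alternative when the array may change
    System.out.println(zeroCopy.size() + " " + copied.size());
  }
}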
HBASE-16264 Figure how to deal with endpoints and shaded pb
Shade our protobufs.
Do it in a manner that makes it so we can still have in our API references to
com.google.protobuf (and in REST). The c.g.p in API is for Coprocessor Endpoints (CPEP)
This patch is Tactic #4 from Shading Doc attached to the referenced issue.
Figuring an approach took a while because we have Coprocessor Endpoints
mixed in with the core of HBase that are tough to untangle (FIX).
Tactic #4 (the fourth attempt at addressing this issue) is to COPY all but
the CPEP .proto files currently in hbase-protocol to a new module named
hbase-protocol-shaded. Generate .protos again in the new location and
then relocate/shade the generated files. Let CPEPs keep on with the
old references at com.google.protobuf.* and
org.apache.hadoop.hbase.protobuf.* but change the hbase core so all
instead refer to the relocated files in their new location at
org.apache.hadoop.hbase.shaded.com.google.protobuf.*.
Let the new module also shade protobufs themselves and change hbase
core to pick up this shaded protobuf rather than directly reference
com.google.protobuf.
This approach allows us to explicitly refer to either the shaded or
non-shaded version of a protobuf class in any particular context (though
usually context dictates one or the other). Core runs on shaded protobuf.
CPEPs continue to use whatever is on the classpath with
com.google.protobuf.* which is pb2.5.0 for the near future at least.
See the above-cited doc for follow-ons and downsides. In short, IDEs will complain
about not being able to find the shaded protobufs since shading happens at package
time; we will fix this by checking in all generated classes and the relocated protobuf in
a follow-on. Also, CPEPs currently suffer an extra copy as requests are marshalled from
non-shaded to shaded form; to be fixed. Finally, our .protos are duplicated: once
shaded, and once not. Painful, but how else to expose our protos to CPEPs or a
C++ client that wants to talk with HBase AND still shade protobuf?
Details:
Add a new hbase-protocol-shaded module. It is a copy of hbase-protocol
with all classes relocated from o.a.h.h. to o.a.h.h.shaded. The new module
also includes the relocated pb. It does not include CPEPs; they stay in
their old location.
Add another module hbase-endpoint which has in it all the endpoints
that ship as part of hbase -- at least the ones that are not
entangled with core such as AccessControl and Auth. Move all protos
for these CPEPs here as well as their unit tests (mostly moving a
bunch of stuff out of the hbase-server module).
Much of the change looks like this:
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos;
In HTable and in HBaseAdmin, regularize the way Callables are used and also hide
protobuf usage as much as possible, moving it up into Callable superclasses or out
to utility classes. Adding in retries, etc., is still TODO, but that can wait on
procedures, which will redo all this.
Also in HTable and HBaseAdmin as well as in HRegionServer and Server, be explicit
when using non-shaded protobuf. Use the full path so it is clear. This is around
endpoint coprocessors registration of services and execution of CPEP methods.
Shrunk ProtobufUtil by moving methods used by only one CPEP back to that CPEP,
either into its Client class or into a new Util class; e.g. AccessControlUtil.
There are actually two versions of ProtobufUtil now: a shaded one, and a subset
that is used by CPEPs doing non-shaded work.
Made it so hbase-common no longer depends on hbase-protocol (with Matteo's help)
R*Converter classes got moved down under shaded package -- they are for internal
use only. There are no non-shaded versions of these classes.
D hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable
D RetryingCallableBase
Not used anymore and we have too many tiers of Callables so removed/cleaned-up.
A ClientServiceCallable
Had to add this one. RegionServerCallable was made generic so it could be used
for a few Interfaces (Client and Admin). Then added ClientServiceCallable to
implement RegionServerCallable with the Client Interface.
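Roughly, the shape is the following (a simplified, hypothetical sketch, not the actual HBase classes): the callable is generic over the stub type, and the client flavor just fixes that type.
// Stand-in for the client-service protobuf stub in this sketch.
interface ClientServiceStub {
}
// Generic over the RPC stub type S so Client- and Admin-facing callables can share code.
abstract class RegionServerCallableSketch<T, S> {
  protected S stub;            // set during prepare(); boilerplate lives in the superclass
  protected final byte[] row;  // used to locate the target region
  RegionServerCallableSketch(byte[] row) {
    this.row = row;
  }
  void prepare(S stub) {
    this.stub = stub;
  }
  // Subclasses implement only the protobuf call.
  abstract T rpcCall() throws Exception;
}
// "ClientServiceCallable" flavor: the generic callable fixed to the client stub type.
abstract class ClientServiceCallableSketch<T>
    extends RegionServerCallableSketch<T, ClientServiceStub> {
  ClientServiceCallableSketch(byte[] row) {
    super(row);
  }
}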
Instead of writing package-info.java with VersionAnnotation, saveVersion.sh now writes Version.java with static members.
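The generated file presumably ends up looking something like this (illustrative shape only; the actual fields and values come from saveVersion.sh):
public class Version {
  public static final String version = "2.0.0-SNAPSHOT";
  public static final String revision = "abcdef0";
  public static final String user = "builder";
  public static final String date = "Mon Oct  3 00:00:00 UTC 2016";
  public static final String url = "git://example.org/hbase";
  public static final String srcChecksum = "0123456789abcdef";
}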
Change-Id: I009f440fa049f409e10cb3f1c3afb483bc2aa876
This is a revert of a revert; i.e. we are adding back the change, only now with
fixes for the broken unit test. It was a real issue in a test that went in at the
same time as this commit: I was getting a new nonce on each
retry rather than getting one per mutation.
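In other words, the fix is to obtain the nonce once per mutation and reuse it on every retry, roughly (hedged sketch; the NonceGenerator shown is just an illustrative interface):
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.LongConsumer;
public class NoncePerMutationSketch {
  interface NonceGenerator { long newNonce(); }
  static final NonceGenerator NONCES = () -> ThreadLocalRandom.current().nextLong();
  static void mutateWithRetries(LongConsumer sendRpc, int maxAttempts) {
    long nonce = NONCES.newNonce();   // one nonce per mutation (the bug was generating it inside the loop)
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        sendRpc.accept(nonce);        // retries carry the SAME nonce so the server can recognize duplicates
        return;
      } catch (RuntimeException retriable) {
        // fall through and retry with the same nonce
      }
    }
  }
  public static void main(String[] args) {
    mutateWithRetries(nonce -> System.out.println("mutate with nonce " + nonce), 3);
  }
}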
Other changes since the revert are more hiding of RpcController: use an
accessor method rather than always passing in an RpcController.
Walked back retrying of operations that used to be single-shot (though a
code comment said they need a retry) because it opens a can of worms where
we retry things like a bad column family when we shouldn't (needs
work adding in DoNotRetryIOEs).
Changed name of class from PayloadCarryingServerCallable to
CancellableRegionServerCallable.
Fix javadoc and findbugs warnings.
Fix case of not initializing the ScannerCallable RpcController.
Below is original commit message:
Remove mention of ServiceException and other protobuf classes from all over the codebase.
Purge TimeLimitedRpcController. Lets just have one override of RpcController.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable.java
Cleanup. Make it clear this is an odd class for async hbase intro.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
Refactor of RegionServerCallable allows me clean up a bunch of
boilerplate in here and remove protobuf references.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
Purge protobuf references everywhere except a reference to a throw of a
ServiceException in method checkHBaseAvailable. I deprecated it in favor
of new available method (the SE is not actually needed)
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
Move the RetryingTimeTracker instance in here from HTable.
Allows me to contain tracker and remove a repeated code in HTable.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java
Clean up: move setup of the rpc in here rather than have it repeat in HTable.
Allows me to remove protobuf references from a bunch of places.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/FlushRegionCallable.java
Make use of the push of boilerplate up into RegionServerCallable
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/SecureBulkLoadClient.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
Move boilerplate up into superclass.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetryingTimeTracker.java
Cleanup
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/PayloadCarryingRpcController.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java
Factor in TimeLimitedRpcController. Just have one RpcController override.
D hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/TimeLimitedRpcController.java
Removed. Lets have one override of pb rpccontroller only.
M hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
(handleRemoteException) added
(toText) added
Purge ServiceException from Callable subclasses by pushing SE handling
up into the parent Callable class (it varies by context but this is the basic
pattern). Allows us to remove a bunch of boilerplate.
Do this in the public-facing classes in particular (though if
an API has SE in it -- which a few do -- this patch leaves those
untouched for now). Make it so HBaseAdmin and HTable have no
direct pb imports (except for the endpoint coprocessor API).
Change a few of the HBaseAdmin calls to be retrying where comments
ask that we retry rather than try just once.
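The basic pattern can be sketched like this (assumed names, not the actual HBase classes): subclasses implement a protobuf-level rpcCall() and the parent converts ServiceException to IOException in exactly one place.
import java.io.IOException;
import com.google.protobuf.ServiceException;
abstract class RetryingCallableSketch<T> {
  // Subclasses do the protobuf work and may throw ServiceException freely.
  protected abstract T rpcCall() throws ServiceException, IOException;
  // The parent owns the boilerplate, analogous to a handleRemoteException() helper.
  public T call() throws IOException {
    try {
      return rpcCall();
    } catch (ServiceException se) {
      Throwable cause = se.getCause();
      throw cause instanceof IOException ? (IOException) cause : new IOException(se);
    }
  }
}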
Signed-off-by: stack <stack@apache.org>
TimeRangeTracker as point of contention when many threads reading a StoreFile
Fixes HBASE-16074 "ITBLL fails, reports lost big or tiny families": scanning was
broken because a side effect of a cleanup in HBASE-15650, made to keep
TimeRange construction consistent, exposed a latent issue in
TimeRange#compare. See HBASE-16074 for more detail.
Also change the HFile Writer constructor so we pass in the TimeRangeTracker, if there is one,
on construction rather than set it later (the flag and reference were not volatile,
so this could have made for issues in the concurrent case). And make sure the construction
of a TimeRange from a TimeRangeTracker on open of an HFile Reader never produces a
bad minimum value, one that would preclude us reading any values from the file
(set min to 0).
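A minimal sketch of the construction guard being described (simplified; see the TimeRange.java notes below for the real change):
// Sketch of an immutable time range [minStamp, maxStamp); 0..Long.MAX_VALUE means "all time".
class TimeRangeSketch {
  private final long minStamp;
  private final long maxStamp;
  private final boolean allTime;
  TimeRangeSketch() {
    this(0L);                       // call through so allTime is computed in one place
  }
  TimeRangeSketch(long minStamp) {
    this(minStamp, Long.MAX_VALUE);
  }
  TimeRangeSketch(long minStamp, long maxStamp) {
    assert minStamp >= 0 && maxStamp >= 0 : "negative timestamps not allowed";
    this.minStamp = minStamp;
    this.maxStamp = maxStamp;
    this.allTime = minStamp == 0L && maxStamp == Long.MAX_VALUE;
  }
  boolean isAllTime() {
    return allTime;
  }
}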
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Call through to the next constructor (if minStamp was 0, we'd skip setting
allTime=true). Add asserts that timestamps are not < 0 because it messes
us up if they are (we were already checking for < 0 on construction; now we also
assert that passed-in timestamps are not < 0).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
Add constructor override that takes a TimeRangeTracker (set when flushing
but not when compacting)
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
Add override creating an HFile in tmp that takes a TimeRangeTracker
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Add an override for the HFile Writer that takes a TimeRangeTracker. Take it on
construction instead of having it passed by a setter later (the flags and
reference set by the setter were not volatile... could have been a problem
in the concurrent case).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Log WARN if bad initial TimeRange value (and then 'fix' it)
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
A few tests to prove serialization works as expected and that we'll get a bad min if not constructed properly.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
Handle OLDEST_TIMESTAMP explicitly. Don't expect TimeRange to do it.
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
Refactor from junit3 to junit4 and add test for this weird case.
Refactor so we use the immutable, unsynchronized TimeRange when doing
time-based checks at read time rather than the heavily synchronized
TimeRangeTracker; let TimeRangeTracker be for write time only.
While in here, changed the Segment code so that an immutable
segment uses TimeRange rather than TimeRangeTracker too.
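The read-side check then runs against a plain immutable range with no synchronization at all; an illustrative overlap test (not the exact HBase signature):
// Sketch: unsynchronized, immutable range used at read time; the heavily
// synchronized tracker stays on the write path only.
final class ReadTimeRange {
  private final long minStamp;   // inclusive
  private final long maxStamp;   // exclusive
  ReadTimeRange(long minStamp, long maxStamp) {
    this.minStamp = minStamp;
    this.maxStamp = maxStamp;
  }
  // True if the two half-open ranges overlap, i.e. the file/segment may hold
  // cells relevant to the scan's time range and must be read.
  boolean includesTimeRange(ReadTimeRange other) {
    return this.minStamp < other.maxStamp && this.maxStamp > other.minStamp;
  }
}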
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Make allTime final.
Add an includesTimeRange method copied from TimeRangeTracker.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Change the names of a few methods so they match the TimeRange methods that do the
same thing.
(getTimeRangeTracker, getTimeRange, toTimeRange) Added as utility methods.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
Change ImmutableSegment so it uses a TimeRange rather than a
TimeRangeTracker... it is read-only. Redo shouldSeek, getMinTimestamp,
updateMetaInfo, and getTimeRangeTracker to use TimeRange-based
implementations instead of TimeRangeTracker implementations.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MutableSegment.java
Implement shouldSeek, getMinTimestamp, updateMetaInfo, and
getTimeRangeTracker using TimeRangeTracker.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
Make methods that were using TimeRangeTracker abstract and instead
have the implementations do these methods how they want either using
TimeRangeTracker when a mutable segment or TimeRange when an immutable
segment.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Change Reader to use TimeRange-based checks instead of
TimeRangeTracker.
Signed-off-by: stack <stack@apache.org>
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
moreRowsMayExistAfterCell: exploit the fact that a Scan can be a Get Scan. Also save compares
if there is no non-default stopRow.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
optimize: add doc on what is being optimized. Also, if a Get Scan, do not
optimize, else we'll keep going after our row is DONE (see the sketch after this list).
Another place to make use of the Get Scan fact is when we are DONE: if a
Get Scan, we can close out the scan.
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
Add tests for Get Scans and optimize around block loading.
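The Get-scan early-out mentioned in the StoreScanner note amounts to something like this sketch (illustrative names only, simplified from the real StoreScanner/ScanQueryMatcher code):
// Sketch: when the scan is really a Get, finishing the requested row means the
// whole scan is done, so return DONE and let the caller close out early instead
// of seeking to the next row.
enum MatchCode { INCLUDE, DONE, SEEK_NEXT_ROW }
class GetScanEarlyOutSketch {
  private final boolean isGetScan;
  GetScanEarlyOutSketch(boolean isGetScan) {
    this.isGetScan = isGetScan;
  }
  MatchCode onRowFinished() {
    // For a Get there is exactly one row of interest; do not "optimize" past it.
    return isGetScan ? MatchCode.DONE : MatchCode.SEEK_NEXT_ROW;
  }
}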
When we read from HDFS, we overread to pick up the next block's header.
Doing this saves a seek as we move through the hfile; we save having to
do an explicit seek just to read the block header every time we need to
read the body. We used to read in the next header as part of the
current block's buffer. This buffer was then what got persisted to the
blockcache; so we were over-persisting: our block plus the next block's
header (33 bytes).
This patch undoes this over-persisting.
Removes support for version 1 blocks (version 2 was added in hbase-0.92.0).
Not needed any more.
There is an open question on whether checksums should be persisted
when caching. The code seems to say no but if cache is SSD backed or
backed by anything that does not do error correction, we'll want
checksums.
Adds loads of documentation.
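The un-over-persisting boils down to trimming the trailing next-block header before handing bytes to the cache; a hedged sketch (the 33-byte constant just mirrors the header size mentioned above):
import java.nio.ByteBuffer;
public class CacheSerializationSketch {
  static final int HEADER_SIZE = 33;   // size of the prefetched next-block header in this sketch
  // Returns a view of onDiskBlock with the prefetched header of the NEXT block
  // (if present) excluded, so only this block's bytes get cached.
  static ByteBuffer trimNextBlockHeader(ByteBuffer onDiskBlock, boolean nextHeaderPresent) {
    ByteBuffer toCache = onDiskBlock.duplicate();
    if (nextHeaderPresent) {
      toCache.limit(toCache.limit() - HEADER_SIZE);   // drop the over-read bytes
    }
    return toCache.slice();
  }
  public static void main(String[] args) {
    ByteBuffer block = ByteBuffer.allocate(128 + HEADER_SIZE);
    System.out.println(trimNextBlockHeader(block, true).remaining());   // prints 128
  }
}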
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
(write) Add writing from a ByteBuff.
M hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java
(toString) Add one so ByteBuff looks like ByteBuffer when you click on
it in IDE
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Remove support for version 1 blocks.
Cleaned up handling of metadata added when we serialize a block to
caches. Metadata is smaller now.
When we serialize (used when caching), do not persist the next block's
header if present.
Removed a bunch of methods, a few of which had overlapping
functionality and others that exposed too much of our internals.
Also removed a bunch of constructors and unified the constructors we
had left over making them share a common init method.
Shut down access to defines that should only be used internally here.
Renamed all to do w/ 'EXTRA' and 'extraSerialization' to instead talk
about metadata saved to caches; was unclear previously what EXTRA was
about.
Renamed static final declarations as all uppercase.
(readBlockDataInternal): Redid it; couldn't make sense of it previously.
Undid the heavy-duty parse of the header by constructing an HFileBlock. Other
cleanups. It's 1/3rd the length it used to be. More to do in here.
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
Make it emit its toString in a format that matches the way we log
elsewhere
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
Capitalize statics.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Verify and cleanup documentation of hfileblock format at head of class.
Explain what 'EXTRA_SERIALIZATION_SPACE' is all about.
Connect how we serialize and deserialize... done in different places
and one way when pulling from HDFS and another when pulling from cache
(TO BE FIXED). Shut down a load of public access.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
Add trace-level logging
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
Make it Closeable
Summary:
Use less contended things for metrics.
For the histogram, which was the largest culprit, we use FastLongHistogram.
For atomic longs, where possible, we now use Counter.
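To illustrate the "less contended counter" idea with a JDK stand-in (LongAdder here plays the role of the Counter class):
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
public class MetricsCounterSketch {
  // Contended under many writer threads: every increment CASes a single cell.
  private final AtomicLong contended = new AtomicLong();
  // Striped internally, so concurrent increments rarely collide; reads sum the stripes.
  private final LongAdder relaxed = new LongAdder();
  void onRequest() {
    contended.incrementAndGet();   // old style
    relaxed.increment();           // new style for hot metrics
  }
  long snapshot() {
    return relaxed.sum();
  }
}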
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D54381
Summary:
Create and use a copy on write map for region location.
- Create a copy on write map backed by a sorted array.
- Create a test for both comparing each with a jdk provided map.
- Change MetaCache to use the new map.
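A minimal sketch of the idea (not the actual CopyOnWriteArrayMap): reads binary-search an immutable snapshot, writes copy the backing arrays and publish a new snapshot.
import java.util.Arrays;
// Reads never lock or copy; each write copies the (small) backing arrays and
// publishes a new immutable snapshot via a volatile field.
final class CowSortedArrayMap<V> {
  private static final class Snapshot<V> {
    final long[] keys;           // sorted
    final Object[] values;
    Snapshot(long[] keys, Object[] values) { this.keys = keys; this.values = values; }
  }
  private volatile Snapshot<V> snap = new Snapshot<>(new long[0], new Object[0]);
  @SuppressWarnings("unchecked")
  V get(long key) {
    Snapshot<V> s = snap;                       // one volatile read, then pure array work
    int i = Arrays.binarySearch(s.keys, key);
    return i >= 0 ? (V) s.values[i] : null;
  }
  synchronized void put(long key, V value) {    // writes are serialized; reads are not
    Snapshot<V> s = snap;
    int i = Arrays.binarySearch(s.keys, key);
    if (i >= 0) {
      Object[] values = s.values.clone();
      values[i] = value;
      snap = new Snapshot<>(s.keys, values);
      return;
    }
    int pos = -(i + 1);
    long[] keys = new long[s.keys.length + 1];
    Object[] values = new Object[s.values.length + 1];
    System.arraycopy(s.keys, 0, keys, 0, pos);
    System.arraycopy(s.values, 0, values, 0, pos);
    keys[pos] = key;
    values[pos] = value;
    System.arraycopy(s.keys, pos, keys, pos + 1, s.keys.length - pos);
    System.arraycopy(s.values, pos, values, pos + 1, s.values.length - pos);
    snap = new Snapshot<>(keys, values);
  }
}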
Test Plan:
- org.apache.hadoop.hbase.client.TestFromClientSide
- TestHCM
Differential Revision: https://reviews.facebook.net/D49545
Summary:
Don't create a user for every call. Instead create one user per connection.
Then inside of SecureHadoopUser cache the groupNames. This allows HBase to cache
empty groups. This is needed since it's much more likely that HBase will be accessed
by a user that's not present in the posix /etc/group file.
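A rough sketch of the caching (hypothetical names; the real change is in User/SecureHadoopUser): group lookups are memoized per user, and an empty result is cached too, so OS-unknown users are only looked up once.
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;
final class GroupCacheSketch {
  private final ConcurrentMap<String, List<String>> groupsByUser = new ConcurrentHashMap<>();
  private final Function<String, List<String>> osLookup;   // expensive /etc/group-style lookup
  GroupCacheSketch(Function<String, List<String>> osLookup) {
    this.osLookup = osLookup;
  }
  List<String> getGroupNames(String user) {
    // computeIfAbsent stores the result even when it is an empty list, so a user
    // absent from the posix group database is looked up only once.
    return groupsByUser.computeIfAbsent(user, u -> {
      List<String> groups = osLookup.apply(u);
      return groups == null || groups.isEmpty() ? Collections.<String>emptyList() : groups;
    });
  }
}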
Test Plan: Unit Tests
Differential Revision: https://reviews.facebook.net/D47751
* corrects license/notice for source distribution
* adds inception year to correct copyright in generated NOTICE files for jars
* updates project names in poms to use "Apache HBase" instead of "HBase" so jar NOTICE files will be correct
* uses append-resources to include supplemental info on jars with 3rd party works in source
* adds an hbase specific resource bundle for jars that include 3rd party works for binaries
** uses supplemental-model to fill in license gaps
** uses the above and a shade plugin transformation to build proper files for shaded jars.
** uses the above and the assembly plugin to build the proper files for bin assembly
* adds a NOTICE item for things copied out of Hadoop (TODO legal-discuss)
* copies ReflectionUtils.logThreadInfo and needed private methods from Hadoop
branch-2, fixes minor issues specific to our use.
* updates HttpServer's use of RU.newInstance to use the HBase version.
Side effect: previously, FilterInitializer instances that happened to also
implement Configurable would have setConfiguration called. Such uses should
instead rely on the mandatory FilterInitializer.initFilter method call.
API conflicts and test fixes
Update LoadTestTool.COLUMN_FAMILY -> DEFAULT_COLUMN_FAMILY due to HBASE-11842
Use new 1.0+ api in some tests
Use updated Scanners internal api
Fix to take into account HBASE-13203 - procedure v2 table delete
Conflicts:
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* IA.Public accessible logger instances deprecated
* logger instances modified by tests left in place
* all others made private static final
Signed-off-by: Sean Busbey <busbey@apache.org>
Added readAsVLong() to deprecate readVLong(), which was throwing IOException. Added a test for readAsVLong().
Signed-off-by: Sean Busbey <busbey@apache.org>
Adds a number of lifecycle-mapping entries which
prevent errors from showing up in Eclipse on a fresh
import of HBase. For plugins defined in the top-level
pom, the mapping is added there; otherwise, the mapping
is pushed down to the child pom.
Signed-off-by: Sean Busbey <busbey@apache.org>
Instead of just blocking the client for 90 seconds when the region gets too
busy, it now sends along region load stats to the client so the client can
know how busy the server is. Currently, it's just the load on the memstore, but
it can be extended for other stats (e.g. cpu, general memory, etc.).
It is then up to the client to decide if it wants to listen to these stats.
By default, the client ignores the stats, but it can easily be toggled to the
built-in exponential back-off or users can plug in their own back-off
implementations.
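As a hedged illustration of what a plugged-in back-off might do with the reported stats (the method and numbers here are made up, not the actual HBase API):
import java.util.concurrent.ThreadLocalRandom;
final class ExponentialClientBackoffSketch {
  private static final long MAX_BACKOFF_MS = 60_000;
  // memstoreLoadPercent: server-reported memstore occupancy (0-100);
  // attempt: retry attempt number starting at 0; returns how long the client should pause.
  long backoffMillis(int memstoreLoadPercent, int attempt) {
    if (memstoreLoadPercent <= 0) {
      return 0;                                   // default behaviour: ignore the stats
    }
    double scale = memstoreLoadPercent / 100.0;   // busier server => longer pause
    long base = (long) (100 * Math.pow(2, attempt) * scale);
    long jitter = ThreadLocalRandom.current().nextLong(1 + base / 10);
    return Math.min(MAX_BACKOFF_MS, base + jitter);
  }
}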
Summary: This diff ports the Preemptive Fast Fail feature to OSS. In multi-threaded clients, we use a feature developed on the 0.89-fb branch called Preemptive Fast Fail, which allows client threads that would otherwise get stuck to fail fast. The idea behind this feature is that we allow, among the hundreds of client threads, one thread to try to establish a connection with the regionserver; if that succeeds, we mark the server as a live node again. Meanwhile, other threads trying to establish a connection to the same server would otherwise just sit in timeouts, which is effectively unfruitful. In those cases we can return appropriate exceptions to those clients instead of letting them retry.
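Sketched with illustrative names (not the actual classes in the patch), the mechanism is: elect exactly one thread per suspect server to probe it, and have every other thread fail fast.
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;
final class FastFailSketch {
  static final class FailureInfo {
    final AtomicBoolean retryInProgress = new AtomicBoolean(false);
  }
  private final ConcurrentMap<String, FailureInfo> failedServers = new ConcurrentHashMap<>();
  // Called before issuing an RPC to the given server.
  void preCall(String server) throws IOException {
    FailureInfo fi = failedServers.get(server);
    if (fi == null) {
      return;                                            // server not marked as failing
    }
    // Only the thread that wins the CAS probes the server; everyone else fails
    // fast instead of piling up on connect timeouts.
    if (!fi.retryInProgress.compareAndSet(false, true)) {
      throw new IOException("Server " + server + " is failing; failing fast");
    }
  }
  // Called after the probing RPC completes.
  void postCall(String server, boolean succeeded) {
    if (succeeded) {
      failedServers.remove(server);                      // mark the node live again
    } else {
      FailureInfo fi = failedServers.computeIfAbsent(server, s -> new FailureInfo());
      fi.retryInProgress.set(false);                     // let another thread probe later
    }
  }
}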
Test Plan: Unit tests
Differential Revision: https://reviews.facebook.net/D24177
Signed-off-by: stack <stack@apache.org>
Fix javadoc warnings.
Fixup findbugs warnings mostly by adding annotations saying 'working as expected'.
In RpcRetryingCallerWithReadReplicas made following change which findbugs spotted:
- if (completed == null) tasks.wait();
+ while (completed == null) tasks.wait();
In RecoverableZooKeeper, made all zk accesses synchronized -- we were only doing it
halfway previously.
In RatioBasedCompactionPolicy we were making a new instance of Random on
each invocation of getNextMajorCompactionTime.
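The Random fix is just hoisting the instance, or using ThreadLocalRandom, instead of allocating a Random per call; a sketch:
import java.util.concurrent.ThreadLocalRandom;
final class CompactionJitterSketch {
  // Before: "new Random()" inside the method, once per invocation.
  long getNextMajorCompactionTime(long periodMs, float jitterRatio) {
    long jitterSpan = (long) (periodMs * jitterRatio);
    // After: one shared, contention-free source of randomness.
    long jitter = jitterSpan <= 0 ? 0 : ThreadLocalRandom.current().nextLong(jitterSpan);
    return periodMs - jitterSpan / 2 + jitter;
  }
}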
When hbase.block.data.cachecompressed=true, DATA (and ENCODED_DATA) blocks are
cached in the BlockCache in their on-disk format. This is different from the
default behavior, which decompresses and decrypts a block before caching.