New tool to dump existing replication peers, configurations and
queues when using HBase Replication. The tool provides two flags:
--distributed  This flag will poll each RS for information about
               the replication queues being processed on that RS.
               By default this is not enabled, and the information
               about the replication queues and configuration will
               be obtained from ZooKeeper.
--hdfs         When --distributed is used, this flag will attempt
               to calculate the total size of the WAL files used
               by the replication queues. Since it's possible that
               multiple peers are configured, this value can be
               an overestimate.
Signed-off-by: Matteo Bertozzi <matteo.bertozzi@cloudera.com>
This is a revert of a revert; i.e. we are adding the change back, only now
with fixes for the broken unit test. It was a real issue in a test that
went in at the same time as this commit: we were getting a new nonce on each
retry rather than getting one nonce for the mutation.
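Purely illustrative sketch of the nonce fix; a toy NonceGenerator interface
stands in for the real client plumbing:

    import java.util.concurrent.ThreadLocalRandom;

    public class NoncePerMutationSketch {
      /** Hypothetical stand-in for the client's nonce generator. */
      interface NonceGenerator {
        long newNonce();
      }

      static final NonceGenerator NONCES = () -> ThreadLocalRandom.current().nextLong();

      // Buggy shape: a fresh nonce per attempt means the server cannot
      // recognize the retry as a replay of the same mutation.
      static void buggyRetryLoop(int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
          long nonce = NONCES.newNonce();   // new nonce every retry -- wrong
          send(nonce);
        }
      }

      // Fixed shape: one nonce per mutation, reused across retries, so the
      // server-side nonce manager can dedupe replayed attempts.
      static void fixedRetryLoop(int maxAttempts) {
        long nonce = NONCES.newNonce();     // one nonce for the mutation
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
          send(nonce);
        }
      }

      static void send(long nonce) {
        System.out.println("sending mutation with nonce " + nonce);
      }

      public static void main(String[] args) {
        buggyRetryLoop(3);
        fixedRetryLoop(3);
      }
    }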
Other changes since the revert further hide RpcController: use an
accessor method rather than always passing in an RpcController.
Walked back retrying of operations that used to be single-shot (though a
code comment said a retry was needed) because it opens a can of worms where
we retry things like a bad column family when we shouldn't (needs
work adding in DoNotRetryIOEs).
Changed the name of the class from PayloadCarryingServerCallable to
CancellableRegionServerCallable.
Fix javadoc and findbugs warnings.
Fix a case where the ScannerCallable RpcController was not initialized.
Below is original commit message:
Remove mention of ServiceException and other protobuf classes from all over the codebase.
Purge TimeLimitedRpcController. Let's just have one override of RpcController.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRegionServerCallable.java
Cleanup. Make it clear this is an odd class for async hbase intro.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
Refactor of RegionServerCallable allows me to clean up a bunch of
boilerplate in here and remove protobuf references.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
Purge protobuf references everywhere except a reference to a throw of a
ServiceException in method checkHBaseAvailable. I deprecated it in favor
of a new available method (the ServiceException is not actually needed).
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
Move the RetryingTimeTracker instance in here from HTable.
Allows me to contain the tracker and remove repeated code in HTable.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java
Clean up: move RPC setup in here rather than have it repeated in HTable.
Allows me to remove protobuf references from a bunch of places.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/FlushRegionCallable.java
Make use of the boilerplate pushed up into RegionServerCallable.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/PayloadCarryingServerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/SecureBulkLoadClient.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
Move boilerplate up into superclass.
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetryingTimeTracker.java
Cleanup
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/PayloadCarryingRpcController.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java
Factor in TimeLimitedRpcController. Just have one RpcController override.
D hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/TimeLimitedRpcController.java
Removed. Let's have only one override of the pb RpcController.
M hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
(handleRemoteException) added
(toText) added
Purge ServiceException from Callable subclasses by pushing SE handling
up into the parent Callable class (it varies by context but this is the
basic pattern). Allows us to remove a bunch of boilerplate.
Do this in the public-facing classes in particular (though if
an API has SE in it -- which a few do -- this patch leaves those
untouched for now). Make it so HBaseAdmin and HTable have no
direct pb imports (except for the endpoint processor API).
Change a few of the HBaseAdmin calls to be retrying where comments
ask that we retry rather than go just the one time.
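Rough sketch of the 'push ServiceException handling up into the parent
Callable' pattern; the class and method names below are simplified
stand-ins, not the actual client classes:

    import java.io.IOException;

    public class CallableSketch {
      /** Stand-in for protobuf's ServiceException. */
      static class ServiceException extends Exception {
        ServiceException(Throwable cause) { super(cause); }
      }

      /** Parent callable: the only place that sees ServiceException. */
      abstract static class ServerCallable<T> {
        public T call() throws IOException {
          try {
            return rpcCall();
          } catch (ServiceException se) {
            // One translation point; subclasses stay free of protobuf imports.
            Throwable cause = se.getCause();
            throw cause instanceof IOException ? (IOException) cause : new IOException(se);
          }
        }
        protected abstract T rpcCall() throws ServiceException;
      }

      /** Subclass only implements the stub invocation. */
      static class GetCallable extends ServerCallable<String> {
        @Override protected String rpcCall() throws ServiceException {
          return "row-contents"; // would call the pb stub here
        }
      }

      public static void main(String[] args) throws IOException {
        System.out.println(new GetCallable().call());
      }
    }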
Signed-off-by: stack <stack@apache.org>
TimeRangeTracker as point of contention when many threads reading a StoreFile
Fixes HBASE-16074 'ITBLL fails, reports lost big or tiny families': broken
scanning because a side effect of a cleanup in HBASE-15650, done to make
TimeRange construction consistent, exposed a latent issue in
TimeRange#compare. See HBASE-16074 for more detail.
Also change the HFile Writer constructor so we pass in the TimeRangeTracker, if there is one,
at construction rather than setting it later (the flag and reference were not volatile,
so this could have made for issues in the concurrent case). And make sure the construction
of a TimeRange from a TimeRangeTracker on open of an HFile Reader never makes a
bad minimum value, one that would preclude us reading any values from a file
(set min to 0).
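A toy illustration (not the real StoreFile writer) of why taking the tracker
at construction beats a later setter when the fields are not volatile:

    public class WriterConstructionSketch {
      static class TimeRangeTracker {
        private long min = Long.MAX_VALUE;
        private long max = Long.MIN_VALUE;
        synchronized void includeTimestamp(long ts) {
          if (ts < min) min = ts;
          if (ts > max) max = ts;
        }
        synchronized long getMin() { return min; }
        synchronized long getMax() { return max; }
      }

      /** Final field: safely published to any thread that sees the Writer. */
      static class Writer {
        private final TimeRangeTracker tracker;   // may be null, e.g. when compacting
        Writer(TimeRangeTracker tracker) {
          this.tracker = tracker;
        }
        // A non-final, non-volatile field set by a setter after construction
        // has no such guarantee: another thread could still observe it as null.
        long minTimestamp() {
          return tracker == null ? 0L : tracker.getMin();
        }
      }

      public static void main(String[] args) {
        TimeRangeTracker trt = new TimeRangeTracker();
        trt.includeTimestamp(42L);
        System.out.println(new Writer(trt).minTimestamp());
        System.out.println(new Writer(null).minTimestamp()); // compaction-like case
      }
    }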
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Call through to the next constructor (previously, if minStamp was 0, we'd skip setting
allTime=true). Add asserts that timestamps are not < 0 because it messes
us up if they are (we were already checking for < 0 on construction, but now we
also assert that passed-in timestamps are not < 0).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
Add constructor override that takes a TimeRangeTracker (set when flushing
but not when compacting)
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
Add override creating an HFile in tmp that takes a TimeRangeTracker
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Add an override for the HFile Writer that takes a TimeRangeTracker. Take it on
construction instead of having it passed by a setter later (the flags and
reference set by the setter were not volatile... could have been a problem
in the concurrent case).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Log WARN if bad initial TimeRange value (and then 'fix' it)
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
A few tests to prove serialization works as expected and that we'll get a bad min if not constructed properly.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
Handle OLDEST_TIMESTAMP explicitly. Don't expect TimeRange to do it.
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
Refactor from junit3 to junit4 and add test for this weird case.
Instead of running the primary test in a separate thread and hoping it finishes in time, just run the test in the primary thread.
Signed-off-by: Elliott Clark <eclark@apache.org>
All ReplicationTableBase methods that need to access the Replication Table will block until it is created, though.
Also refactored ReplicationSourceManager so that abandoned queue adoption is run in the background as well, so that it does not block HRegionServer initialization.
Signed-off-by: Elliott Clark <eclark@apache.org>
Building on HBASE-15958.
Provided a ReplicationQueuesClientHBaseImpl that relies on the HBase Replication Table to track WAL queues.
Refactored out a large section of ReplicationQueuesHBaseImpl into a ReplicationTableClient class that handles Replication Table operations.
Signed-off-by: Elliott Clark <eclark@apache.org>
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
Refactor which makes a Handler type. Put all 'handler' stuff inside this
new type. Also make it so a subclass can provide its own Handler type.
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
Name the handler threads for their type so we can tell if configs are
having an effect.
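Loose sketch of the Handler-as-a-type idea with named threads; the names and
the factory method below are made up for illustration, not the actual
RpcExecutor API:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class HandlerSketch {
      /** All the 'handler' stuff lives in its own type. */
      static class Handler extends Thread {
        private final BlockingQueue<Runnable> queue;
        Handler(String name, BlockingQueue<Runnable> queue) {
          super(name);              // e.g. "DemoPriorityRpcServer.handler=3"
          this.queue = queue;
          setDaemon(true);
        }
        @Override public void run() {
          try {
            while (true) {
              queue.take().run();   // pull a queued call and execute it
            }
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      }

      /** Subclasses would override a factory like this to supply their own Handler type. */
      static Handler getHandler(String name, BlockingQueue<Runnable> q) {
        return new Handler(name, q);
      }

      public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < 3; i++) {
          getHandler("DemoRpcServer.handler=" + i, q).start();
        }
        q.offer(() -> System.out.println("handled by " + Thread.currentThread().getName()));
        Thread.sleep(200);          // give a handler a moment before the demo exits
      }
    }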
Signed-off-by: stack <stack@apache.org>
Invoking 'hbase hfile' inside a servlet raises several concerns. This
patch avoids invoking a separate process, and also adds validation that
the file being read is at least inside the HBase root directory.
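Hedged sketch of the containment check, using plain Path/URI handling rather
than the servlet's actual code:

    import java.net.URI;
    import org.apache.hadoop.fs.Path;

    public class RootDirCheckSketch {
      /** Returns true only if candidate is under rootDir (resolves '..' before comparing). */
      static boolean isInsideRootDir(Path rootDir, Path candidate) {
        URI root = rootDir.toUri().normalize();
        URI file = candidate.toUri().normalize();
        String rootPath = root.getPath().endsWith("/") ? root.getPath() : root.getPath() + "/";
        return file.getPath().startsWith(rootPath);
      }

      public static void main(String[] args) {
        Path root = new Path("/hbase");
        System.out.println(isInsideRootDir(root, new Path("/hbase/data/default/t1/r1/cf/hfile1"))); // true
        System.out.println(isInsideRootDir(root, new Path("/hbase/../etc/passwd")));                // false
      }
    }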
Signed-off-by: Mikhail Antonov <antonov@apache.org>
Building on HBASE-15883.
Now implementing the claim queues procedure within an HBase table.
Also added UnitTests to test claimQueue.
Peer tracking will still be performed by ZooKeeper though.
Also modified the queueId tracking procedure so we no longer have to perform scans over the Replication Table.
This does make our queue naming scheme slightly different from ReplicationQueuesZKImpl, though.
Signed-off-by: Elliott Clark <eclark@apache.org>
Changes how we do accounting of Connections to match how it is done in Hadoop.
Adds a ConnectionManager class. Adds new configurations for this new class.
"hbase.ipc.client.idlethreshold" 4000
"hbase.ipc.client.connection.idle-scan-interval.ms" 10000
"hbase.ipc.client.connection.maxidletime" 10000
"hbase.ipc.client.kill.max", 10
"hbase.ipc.server.handler.queue.size", 100
The new scheme does away with synchronization that purportedly would freeze out
reads while we were cleaning up stale connections (according to HADOOP-9955)
Also adds in a new mechanism for accepting Connections: pull in as many
as we can at a time and add them to a Queue, instead of doing one at a time.
Can help with bursty traffic, according to HADOOP-9956. Removes blocking
while a Reader is busy parsing a request. Adds configuration
"hbase.ipc.server.read.connection-queue.size" with a default of 100 for the
queue size.
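Small sketch reading the new keys with the defaults listed above; only the
key strings and defaults come from this description, the variable names are
made up:

    import org.apache.hadoop.conf.Configuration;

    public class ConnectionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        int idleThreshold = conf.getInt("hbase.ipc.client.idlethreshold", 4000);
        int idleScanMs    = conf.getInt("hbase.ipc.client.connection.idle-scan-interval.ms", 10000);
        int maxIdleTimeMs = conf.getInt("hbase.ipc.client.connection.maxidletime", 10000);
        int killMax       = conf.getInt("hbase.ipc.client.kill.max", 10);
        int handlerQueue  = conf.getInt("hbase.ipc.server.handler.queue.size", 100);
        int readConnQueue = conf.getInt("hbase.ipc.server.read.connection-queue.size", 100);
        System.out.printf("idleThreshold=%d idleScan=%dms maxIdle=%dms killMax=%d handlerQ=%d readQ=%d%n",
            idleThreshold, idleScanMs, maxIdleTimeMs, killMax, handlerQueue, readConnQueue);
      }
    }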
Signed-off-by: stack <stack@apache.org>
Adds HADOOP-9955 'RPC idle connection closing is extremely inefficient'.
Then removes the queue added by HADOOP-9956, at Enis's suggestion.
Implemented ReplicationQueuesHBaseImpl that tracks WAL offsets and replication queues in an HBase table.
Only wrote the basic tracking methods; have not implemented claimQueue() or the HFileRef methods yet.
Wrote a basic unit test for ReplicationQueuesHBaseImpl that tests the implemented functions on a single Region Server.
Signed-off-by: Elliott Clark <elliott@fb.com>
Signed-off-by: Elliott Clark <eclark@apache.org>
@Before and @After to set up/tear down tables, using a @Rule to set the table name based on the test name.
Refactor out copy-pasted code fragments to single function.
(Apekshit)
Change-Id: Ic22e5027cc3952bab5ec30070ed20e98017db65a
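Sketch of the @Rule wiring; the table create/delete helpers are placeholders,
only the JUnit part is the point:

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestName;

    public class TableNameRuleSketch {
      @Rule
      public TestName name = new TestName();

      private String tableName;

      @Before
      public void setUp() {
        // Table name derived from the running test's method name.
        tableName = "test_" + name.getMethodName();
        // createTable(tableName);  // placeholder for the shared helper
      }

      @After
      public void tearDown() {
        // deleteTable(tableName);  // placeholder for the shared helper
      }

      @Test
      public void testSomething() {
        System.out.println("operating on " + tableName);
      }
    }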
Changed UI labels so that queue "size" refers to the size in bytes and queue "length" refers to the number of items in the queue.
Signed-off-by: Elliott Clark <elliott@fb.com>
Refactor so we use the immutable, unsynchronized TimeRange when doing
time-based checks at read time rather than the heavily synchronized
TimeRangeTracker; let TimeRangeTracker be for write-time only.
While in here, changed the Segment code so that an immutable
segment uses TimeRange rather than TimeRangeTracker too.
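Toy illustration of the split (not the HBase classes themselves): the tracker
stays synchronized for writers, readers get an immutable snapshot they can
check without locking:

    public class TimeRangeSnapshotSketch {
      /** Immutable, unsynchronized: safe to share with any number of readers. */
      static final class TimeRange {
        final long min, max;
        TimeRange(long min, long max) { this.min = min; this.max = max; }
        boolean includesTimeRange(TimeRange other) {
          return other.min <= this.max && other.max >= this.min;
        }
      }

      /** Mutable and synchronized: used only on the write path. */
      static final class TimeRangeTracker {
        private long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        synchronized void includeTimestamp(long ts) {
          if (ts < min) min = ts;
          if (ts > max) max = ts;
        }
        synchronized TimeRange toTimeRange() {   // snapshot handed to the read path
          return new TimeRange(min, max);
        }
      }

      public static void main(String[] args) {
        TimeRangeTracker tracker = new TimeRangeTracker();
        tracker.includeTimestamp(100);
        tracker.includeTimestamp(200);
        TimeRange readRange = tracker.toTimeRange();
        System.out.println(readRange.includesTimeRange(new TimeRange(150, 160))); // true
      }
    }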
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Make allTime final.
Add a includesTimeRange method copied from TimeRangeTracker.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Change the name of a few methods so they match the TimeRange methods that do the
same thing.
(getTimeRangeTracker, getTimeRange, toTimeRange) Add utility methods.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
Change ImmutableSegment so it uses a TimeRange rather than a
TimeRangeTracker; it is read-only. Redo shouldSeek, getMinTimestamp,
updateMetaInfo, and getTimeRangeTracker so we use TimeRange-based
implementations instead of TimeRangeTracker implementations.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MutableSegment.java
Implement shouldSeek, getMinTimestamp, updateMetaInfo, and
getTimeRangeTracker using TimeRangeTracker.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
Make the methods that were using TimeRangeTracker abstract and instead
have the implementations do these methods as they see fit, either using
TimeRangeTracker for a mutable segment or TimeRange for an immutable
segment.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Change Reader to use TimeRange-based checks instead of
TimeRangeTracker.
Signed-off-by: stack <stack@apache.org>
Summary: Missing a root index block is worse than missing a data block. We should know the difference
Test Plan: Tested on a local instance. All numbers looked reasonable.
Differential Revision: https://reviews.facebook.net/D55563
Summary:
Allow TimestampsFilter to provide a seek next hint (exercised by TestTimestampFilterSeekHint).
This can be incorrect as it might skip deletes. However it can
make things much much faster.
Test Plan: Added a unit test.
Differential Revision: https://reviews.facebook.net/D55617
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
moreRowsMayExistAfterCell: Exploit the fact that a Scan is a Get Scan. Also save compares
if there is no non-default stopRow.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
optimize: Add doc on what is being optimized. Also, if a Get Scan, do not
optimize, else we'll keep going after our row is DONE.
Another place to make use of the Get Scan fact is when we are DONE: if a
Get Scan, we can close out the scan.
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
Add tests for Get Scans and optimize around block loading.
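Toy sketch of the Get-scan short cut (not StoreScanner itself): once the
single row of a Get scan is DONE, stop instead of seeking onward:

    import java.util.Arrays;
    import java.util.List;

    public class GetScanSketch {
      /** Returns how many cells were examined before the scan finished. */
      static int scan(List<String> cells, String targetRow, boolean isGetScan) {
        int examined = 0;
        for (String cell : cells) {
          examined++;
          boolean rowDone = !cell.startsWith(targetRow + "/");
          if (rowDone && isGetScan) {
            break;   // a Get scan has exactly one row: once it is DONE, close out
          }
          // a normal scan keeps going to the next row here
        }
        return examined;
      }

      public static void main(String[] args) {
        List<String> cells = Arrays.asList("row1/a", "row1/b", "row2/a", "row2/b", "row3/a");
        System.out.println("get scan examined:  " + scan(cells, "row1", true));   // 3
        System.out.println("full scan examined: " + scan(cells, "row1", false));  // 5
      }
    }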
When we read from HDFS, we overread to pick up the next block's header.
Doing this saves a seek as we move through the hfile; we save having to
do an explicit seek just to read the block header every time we need to
read the body. We used to read in the next header as part of the
current block's buffer. This buffer was then what got persisted to the
blockcache; so we were over-persisting: our block plus the next block's
header (33 bytes).
This patch undoes this over-persisting.
Removes support for version 1 blocks (version 2 was added in hbase-0.92.0).
Not needed any more.
There is an open question on whether checksums should be persisted
when caching. The code seems to say no but if cache is SSD backed or
backed by anything that does not do error correction, we'll want
checksums.
Adds loads of documentation.
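Toy ByteBuffer sketch of the 'stop over-persisting' idea; the constant and
slicing are illustrative, not HFileBlock's actual serialization code:

    import java.nio.ByteBuffer;

    public class CacheTrimSketch {
      // Size of the trailing next-block header carried in the read buffer.
      static final int NEXT_BLOCK_HEADER_SIZE = 33;

      /** Returns a view limited to the block itself, dropping the tacked-on next header. */
      static ByteBuffer forCache(ByteBuffer readBuffer, boolean hasNextHeader) {
        ByteBuffer dup = readBuffer.duplicate();
        if (hasNextHeader) {
          dup.limit(dup.limit() - NEXT_BLOCK_HEADER_SIZE);
        }
        return dup.slice();
      }

      public static void main(String[] args) {
        ByteBuffer onRead = ByteBuffer.allocate(64 * 1024 + NEXT_BLOCK_HEADER_SIZE);
        ByteBuffer cached = forCache(onRead, true);
        System.out.println("read: " + onRead.capacity() + " bytes, cached: " + cached.remaining() + " bytes");
      }
    }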
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
(write) Add writing from a ByteBuff.
M hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java
(toString) Add one so ByteBuff looks like ByteBuffer when you click on
it in IDE
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Remove support for version 1 blocks.
Cleaned up handling of metadata added when we serialize a block to
caches. Metadata is smaller now.
When we serialize (used when caching), do not persist the next block's
header if present.
Removed a bunch of methods, a few of which had overlapping
functionality and others that exposed too much of our internals.
Also removed a bunch of constructors and unified the constructors we
had left over making them share a common init method.
Shut down access to defines that should only be used internally here.
Renamed everything to do with 'EXTRA' and 'extraSerialization' to instead talk
about metadata saved to caches; it was unclear previously what EXTRA was
about.
Renamed static final declarations to be all uppercase.
(readBlockDataInternal): Redid it. Couldn't make sense of it previously.
Undid the heavy-duty parse of the header by constructing an HFileBlock. Other
cleanups. It's a third of the length it used to be. More to do in here.
Summary:
Move the config keys to one place.
Make two different config keys: one for default, one for priority.
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D55575
Summary:
Currently WAL splitting is broken when a region has been opened multiple times in recent minutes.
Region open and region close write event markers to the WAL. These markers should have the sequence id in them. However it is currently getting 1. That means that if a region has moved multiple times in the last few minutes, then multiple split log workers will try to create the recovered-edits file for sequence id 1. One of the workers will fail and, on failing, will delete the recovered edits, causing all split WAL attempts to fail.
We need to:
It appears that the close event with a sequence id of one is coming from region warm up.
This patch fixes that by making sure the close on warm up doesn't happen. Also splitting will ignore any of the events that are already in the logs.
Test Plan: Unit tests pass
Differential Revision: https://reviews.facebook.net/D55557
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
Make it emit its toString in a format that matches the way we log
elsewhere.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
Capitalize statics.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Verify and cleanup documentation of hfileblock format at head of class.
Explain what 'EXTRA_SERIALIZATION_SPACE' is all about.
Connect how we serialize and deserialize... it is done in different places,
one way when pulling from HDFS and another when pulling from cache
(TO BE FIXED). Shut down a load of public access.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
Add trace-level logging
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
Make it Closeable
Summary:
Use less-contended things for metrics.
For the histogram, which was the largest culprit, we use FastLongHistogram.
For AtomicLong, where possible, we now use Counter.
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D54381
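Sketch of the low-contention direction, using the JDK's LongAdder as a
stand-in for the Counter class mentioned above (FastLongHistogram itself is
not reproduced here):

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public class LowContentionMetricsSketch {
      // Heavily contended: every increment CASes the same memory location.
      static final AtomicLong CONTENDED = new AtomicLong();

      // Striped internally: concurrent increments mostly touch different cells.
      static final LongAdder RELAXED = new LongAdder();

      public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
          for (int i = 0; i < 1_000_000; i++) {
            CONTENDED.incrementAndGet();
            RELAXED.increment();
          }
        };
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) (threads[i] = new Thread(work)).start();
        for (Thread t : threads) t.join();
        System.out.println("atomic=" + CONTENDED.get() + " adder=" + RELAXED.sum());
      }
    }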
Further investigation after HBASE-15221 led to the finding that
AsyncProcess should have been managing the contents of the region
location cache, appropriately clearing it when necessary (e.g. when an
RPC to a server fails because the server doesn't host that region).
For multi() RPCs, the tableName argument is null since there is no
single table that the updates are destined to. This inadvertently
caused the existing region location cache updates to fail on 1.x
branches. AsyncProcess needs to handle when tableName is null
and perform the necessary cache evictions.
As such, much of the new retry logic in HTableMultiplexer is
unnecessary and is removed with this commit. Getters which were
added as a part of testing were left, since they are mostly
harmless and should have no negative impact.
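Minimal sketch of the null-tableName handling with a toy cache; the real
AsyncProcess logic differs, this just shows the shape of the fix:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class MetaCacheEvictionSketch {
      /** Toy region-location cache keyed by "table,regionStartKey". */
      static final Map<String, String> CACHE = new ConcurrentHashMap<>();

      static void onRpcFailure(String tableName, String regionName, String server) {
        if (tableName == null) {
          // multi() has no single destination table; evict everything cached for the
          // failing server rather than silently doing nothing.
          CACHE.values().removeIf(server::equals);
        } else {
          CACHE.remove(tableName + "," + regionName);
        }
      }

      public static void main(String[] args) {
        CACHE.put("t1,aaa", "rs1:16020");
        CACHE.put("t1,mmm", "rs2:16020");
        CACHE.put("t2,aaa", "rs1:16020");
        onRpcFailure(null, null, "rs1:16020");   // multi() failure against rs1
        System.out.println(CACHE);               // only the rs2 entry remains
      }
    }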
Signed-off-by: stack <stack@apache.org>
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
Cleanup the trace message and include the offset; makes debugging easier.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
Fix incorrect data member javadoc.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Pass along the offset we are checksumming at.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
Add trace logging for debugging and set the end of the prefetch to be the
last data block, not size minus trailer size (there are the root indices
and file info to be skipped).