Refactor so we use the immutable, unsynchronized TimeRange when doing
time-based checks at read time rather than the heavily synchronized
TimeRangeTracker; let TimeRangeTracker be for write time only.
While in here, changed Segment so that an immutable segment uses
TimeRange rather than TimeRangeTracker too.
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
Make allTime final.
Add an includesTimeRange method copied from TimeRangeTracker.
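Not from the patch, just a minimal sketch of what such a containment check could look like on an immutable range; the class name and the inclusive-bounds convention here are assumptions, not the actual org.apache.hadoop.hbase.io.TimeRange code:

```java
/** Illustrative immutable time range; not the real TimeRange class. */
public final class SimpleTimeRange {
  private final long min;          // inclusive lower bound (assumption for this sketch)
  private final long max;          // inclusive upper bound (assumption for this sketch)
  private final boolean allTime;   // covers every timestamp, so every range overlaps it

  public SimpleTimeRange(long min, long max) {
    this.min = min;
    this.max = max;
    this.allTime = (min == 0L && max == Long.MAX_VALUE);
  }

  /** True if the two ranges overlap, i.e. a read restricted to {@code other} may still match. */
  public boolean includesTimeRange(SimpleTimeRange other) {
    return allTime || (other.min <= this.max && other.max >= this.min);
  }
}
```

Because the instance is immutable, read-time callers can share it with no synchronization.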
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
Change the names of a few methods so they match the TimeRange methods
that do the same thing.
Add utility methods (getTimeRangeTracker, getTimeRange, toTimeRange).
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
Change ImmutableSegment so it uses a TimeRange rather than a
TimeRangeTracker; it is read-only. Redo shouldSeek, getMinTimestamp,
updateMetaInfo, and getTimeRangeTracker to use TimeRange-based
implementations instead of TimeRangeTracker implementations.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MutableSegment.java
Implement shouldSeek, getMinTimestamp, updateMetaInfo, and
getTimeRangeTracker using TimeRangeTracker.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
Make the methods that were using TimeRangeTracker abstract and have
the implementations provide them as they see fit: using
TimeRangeTracker for a mutable segment and TimeRange for an immutable
segment.
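A rough sketch of the shape this split could take, with invented names: the abstract parent declares the time-based checks, the mutable child answers them from a synchronized tracker, and the immutable child answers them from plain frozen fields. This is illustrative, not the actual hbase-server classes:

```java
// Illustrative shape only; not the actual Segment hierarchy in hbase-server.
abstract class SegmentSketch {
  /** Each subclass decides how to answer time-based read checks. */
  abstract boolean shouldSeek(long oldestUnexpiredTs);
  abstract long getMinTimestamp();
}

/** Minimal stand-in for the synchronized TimeRangeTracker. */
class TrackerSketch {
  private long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
  synchronized void include(long ts) { min = Math.min(min, ts); max = Math.max(max, ts); }
  synchronized long minSeen() { return min; }
  synchronized long maxSeen() { return max; }
}

/** Mutable: still being written to, so keep using the synchronized tracker. */
class MutableSegmentSketch extends SegmentSketch {
  private final TrackerSketch tracker = new TrackerSketch();
  void updateMetaInfo(long ts) { tracker.include(ts); }
  @Override boolean shouldSeek(long oldestUnexpiredTs) { return tracker.maxSeen() >= oldestUnexpiredTs; }
  @Override long getMinTimestamp() { return tracker.minSeen(); }
}

/** Immutable: freeze the tracker into plain, unsynchronized fields once, at creation. */
class ImmutableSegmentSketch extends SegmentSketch {
  private final long min, max;
  ImmutableSegmentSketch(TrackerSketch tracker) { this.min = tracker.minSeen(); this.max = tracker.maxSeen(); }
  @Override boolean shouldSeek(long oldestUnexpiredTs) { return max >= oldestUnexpiredTs; }
  @Override long getMinTimestamp() { return min; }
}
```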
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
Change Reader to use TimeRange-based checks instead of
TimeRangeTracker.
Signed-off-by: stack <stack@apache.org>
Summary: Missing a root index block is worse than missing a data block. We should know the difference
Test Plan: Tested on a local instance. All numbers looked reasonable.
Differential Revision: https://reviews.facebook.net/D55563
Summary:
Allow TimestampsFilter to provide a seek next hint (sketch below).
This can be incorrect, as it might skip deletes; however it can
make things much, much faster.
Test Plan: Added a unit test.
Differential Revision: https://reviews.facebook.net/D55617
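Not part of the patch: a hedged, self-contained sketch of the seek-hint idea described in the summary above. The class and method names are invented; the real change is to TimestampsFilter's filter/hint methods:

```java
import java.util.TreeSet;

/** Illustrative sketch of the seek-hint computation, not the actual TimestampsFilter code. */
public class TimestampSeekHintSketch {
  // Timestamps the caller is interested in (supplied by the client in the real filter).
  private final TreeSet<Long> allowed = new TreeSet<>();

  public TimestampSeekHintSketch(long... timestamps) {
    for (long ts : timestamps) {
      allowed.add(ts);
    }
  }

  /**
   * Cells within a column are ordered by descending timestamp, so instead of
   * rejecting uninteresting cells one by one we can hint the scanner to seek
   * straight to the next interesting timestamp strictly below the current one.
   * Returns null when no allowed timestamp remains. Skipping ahead like this
   * is what may jump over deletes, hence the correctness caveat above.
   */
  public Long nextTimestampHint(long currentTs) {
    return allowed.floor(currentTs - 1);
  }
}
```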
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
(moreRowsMayExistAfterCell) Exploit the fact that a Scan is a Get Scan.
Also save compares if there is no non-default stopRow.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
(optimize) Add doc on what is being optimized. Also, if a Get Scan, do
not optimize, else we'll keep going after our row is DONE.
Another place to make use of the Get Scan fact is when we are DONE: if
it is a Get Scan, we can close out the scan.
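A hedged sketch of the Get Scan close-out idea just described; the enum values and class names are invented, not the actual StoreScanner/ScanQueryMatcher types:

```java
/** Illustrative control flow only. */
enum ScanStepSketch { CONTINUE, DONE, DONE_SCAN }

class GetScanCloseOutSketch {
  private final boolean isGetScan;   // start row == stop row, i.e. a single-row Get

  GetScanCloseOutSketch(boolean isGetScan) { this.isGetScan = isGetScan; }

  /** Once the single row of a Get Scan is done, there is nothing left to read: close the scan. */
  ScanStepSketch onRowDone(ScanStepSketch step) {
    if (step == ScanStepSketch.DONE && isGetScan) {
      return ScanStepSketch.DONE_SCAN;   // close out the whole scan, not just the row
    }
    return step;
  }
}
```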
M hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
Add tests for Get Scans and optimize around block loading.
When we read from HDFS, we overread to pick up the next block's header.
Doing this saves a seek as we move through the hfile; we save having to
do an explicit seek just to read the block header every time we need to
read the body. We used to read the next header in as part of the
current block's buffer. This buffer was then what got persisted to the
blockcache, so we were over-persisting: our block plus the next block's
header (33 bytes).
This patch undoes this over-persisting.
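As a hedged illustration of undoing the over-persisting (not the patch itself), caching could trim the over-read tail before handing bytes to the cache; the names and the exact mechanism here are assumptions:

```java
import java.nio.ByteBuffer;

/** Illustrative: cache only the block's own bytes, not the trailing next-block header. */
final class BlockCachingSketch {
  static final int NEXT_HEADER_SIZE = 33;   // the size quoted in this message

  /** {@code buf} holds the block and, when {@code hasNextHeader}, the next block's header too. */
  static ByteBuffer bytesToCache(ByteBuffer buf, boolean hasNextHeader) {
    ByteBuffer dup = buf.duplicate();                // don't disturb the reader's buffer
    if (hasNextHeader) {
      dup.limit(dup.limit() - NEXT_HEADER_SIZE);     // drop the over-read tail before persisting
    }
    return dup.slice();                              // what actually goes to the block cache
  }
}
```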
Removes support for version 1 blocks (version 2 was added in
hbase-0.92.0). Not needed any more.
There is an open question on whether checksums should be persisted
when caching. The code seems to say no, but if the cache is SSD-backed
or backed by anything else that does not do error correction, we'll
want checksums.
Adds loads of documentation.
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java
(write) Add writing from a ByteBuff.
M hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java
(toString) Add one so ByteBuff looks like ByteBuffer when you click on
it in an IDE.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Remove support for version 1 blocks.
Cleaned up handling of the metadata added when we serialize a block to
caches. The metadata is smaller now.
When we serialize (used when caching), do not persist the next block's
header if present.
Removed a bunch of methods, a few of which had overlapping
functionality and others that exposed too much of our internals.
Also removed a bunch of constructors and unified the constructors we
had left, making them share a common init method (sketch below).
Shut down access to defines that should only be used internally here.
Renamed everything to do with 'EXTRA' and 'extraSerialization' to
instead talk about metadata saved to caches; it was previously unclear
what EXTRA was about.
Renamed static final declarations to be all uppercase.
(readBlockDataInternal) Redid it; couldn't make sense of it previously.
Undid the heavy-duty parse of the header by constructing an HFileBlock.
Other cleanups. It's a third of the length it used to be. More to do in
here.
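As a hedged illustration of the constructor unification mentioned above (not the real HFileBlock code), several constructors can funnel into a single private init so defaults and validation live in one place:

```java
/** Illustrative only. */
class BlockSketch {
  private long offset;
  private int onDiskSizeWithHeader;

  BlockSketch(long offset, int onDiskSizeWithHeader) {
    init(offset, onDiskSizeWithHeader);
  }

  /** "Copy" constructor shares the same initialization path. */
  BlockSketch(BlockSketch other) {
    init(other.offset, other.onDiskSizeWithHeader);
  }

  /** Single place for defaults and sanity checks. */
  private void init(long offset, int onDiskSizeWithHeader) {
    if (onDiskSizeWithHeader < 0) {
      throw new IllegalArgumentException("onDiskSizeWithHeader=" + onDiskSizeWithHeader);
    }
    this.offset = offset;
    this.onDiskSizeWithHeader = onDiskSizeWithHeader;
  }
}
```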
Summary:
Move the config keys to one place.
Make two different config keys: one for the default, one for priority
(sketch below).
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D55575
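A hedged sketch of the default/priority key split described in the summary above; the key names and default values below are invented for illustration, not the keys the patch actually adds:

```java
import org.apache.hadoop.conf.Configuration;

/** Illustrative config lookup: a priority-specific key falling back to the default key. */
final class HandlerCountConfigSketch {
  static final String DEFAULT_KEY  = "example.handler.count";           // hypothetical key
  static final String PRIORITY_KEY = "example.priority.handler.count";  // hypothetical key

  static int defaultHandlers(Configuration conf) {
    return conf.getInt(DEFAULT_KEY, 30);
  }

  /** Priority handlers use their own key but fall back to the default count when unset. */
  static int priorityHandlers(Configuration conf) {
    return conf.getInt(PRIORITY_KEY, defaultHandlers(conf));
  }
}
```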
Summary:
Currently WAL splitting is broken when a region has been opened multiple times in recent minutes.
Region open and region close write event markers to the WAL. These markers should have the region's sequence id in them; however, it is currently 1. That means that if a region has moved multiple times in the last few minutes, multiple split-log workers will try to create the recovered-edits file for sequence id 1. One of the workers will fail and, on failing, will delete the recovered edits, causing all split-WAL attempts to fail.
It appears that the close event with a sequence id of 1 is coming from region warm-up.
This patch fixes that by making sure the close on warm-up doesn't happen. Also, splitting will ignore any of the events that are already in the logs (sketch below).
Test Plan: Unit tests pass
Differential Revision: https://reviews.facebook.net/D55557
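A hedged, self-contained sketch of the "ignore events already in the logs" idea; the Entry type and the threshold test are assumptions for illustration, not the patch's actual WAL-splitting code:

```java
/** Illustrative skip logic for WAL splitting. */
final class SplitSkipSketch {
  /** Stand-in for a WAL entry: its sequence id and whether it is an open/close event marker. */
  static final class Entry {
    final long seqId;
    final boolean regionEventMarker;
    Entry(long seqId, boolean regionEventMarker) {
      this.seqId = seqId;
      this.regionEventMarker = regionEventMarker;
    }
  }

  /** Replay only real edits that the region has not already persisted. */
  static boolean shouldReplay(Entry e, long lastFlushedSeqId) {
    if (e.regionEventMarker) {
      return false;                    // open/close markers are not user edits to recover
    }
    return e.seqId > lastFlushedSeqId; // edits at or below the flushed id are already safe
  }
}
```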
M hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
Make it emit its toString in a format that matches the way we log
elsewhere.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
Capitalize statics.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
Verify and clean up the documentation of the HFileBlock format at the
head of the class.
Explain what 'EXTRA_SERIALIZATION_SPACE' is all about.
Connect how we serialize and deserialize; it is done in different
places, one way when pulling from HDFS and another when pulling from
cache (TO BE FIXED). Shut down a load of public access.
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
Add trace-level logging
M hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
Make it Closeable
Summary:
Use less-contended constructs for metrics.
For the histograms, which were the largest culprit, we use
FastLongHistogram.
For AtomicLong, where possible, we now use Counter (sketch below).
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D54381
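To illustrate the "less contended" point (not the patch itself), java.util.concurrent.atomic.LongAdder is shown below standing in for the Counter class mentioned above; under heavy concurrent increments it spreads writes across cells instead of all threads spinning on one AtomicLong:

```java
import java.util.concurrent.atomic.LongAdder;

/** Illustrative metric holder using a striped counter instead of a single AtomicLong. */
final class RequestMetricsSketch {
  private final LongAdder requests = new LongAdder();

  void onRequest() {
    requests.increment();   // cheap, mostly uncontended write on the hot path
  }

  long snapshotTotal() {
    return requests.sum();  // aggregation cost is paid only when the metric is read
  }
}
```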
Further investigation after HBASE-15221 led to the finding that
AsyncProcess should have been managing the contents of the region
location cache, appropriately clearing it when necessary (e.g. when an
RPC to a server fails because the server doesn't host that region).
For multi() RPCs, the tableName argument is null since there is no
single table the updates are destined for. This inadvertently
caused the existing region location cache updates to fail on 1.x
branches. AsyncProcess needs to handle the case where tableName is null
and perform the necessary cache evictions.
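A hedged sketch of the null-safe eviction described above; the interface and method names are hypothetical, not the actual AsyncProcess/connection internals:

```java
/** Illustrative null-safe region location cache eviction. */
final class RegionCacheEvictionSketch {
  interface LocationCache {
    void clearCache(byte[] row);                    // evict the row's entry across all tables
    void clearCache(String tableName, byte[] row);  // evict a single table's entry
  }

  /** For multi() there is no single table, so tableName may be null; evict either way. */
  static void evictOnFailure(LocationCache cache, String tableName, byte[] row) {
    if (tableName == null) {
      cache.clearCache(row);
    } else {
      cache.clearCache(tableName, row);
    }
  }
}
```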
As such, much of the new retry logic in HTableMultiplexer is
unnecessary and is removed with this commit. Getters which were
added as a part of testing were left since they are mostly
harmless and should have no negative impact.
Signed-off-by: stack <stack@apache.org>