M TestStressWALProcedureStore.java
Disable test that now runs but fails because of differences in pb 3.1.0.
Signed-off-by: Michael Stack <stack@apache.org>
In some particular deployments, the Replication code believes it has
reached EOF for a WAL prior to successfully parsing all bytes known to
exist in a cleanly closed file.
Consistently, this failure happens due to an InvalidProtobufException
after some number of seeks during our attempts to tail the in-progress
RegionServer WAL. As a work-around, this patch treats cleanly closed
files differently than other execution paths. If an EOF is detected due
to parsing or other errors while there are still unparsed bytes before
the end-of-file trailer, we now reset the WAL to the very beginning and
attempt a clean read-through.
In current testing, a single such reset is sufficient to work around
observed data loss. However, the above change will retry a given WAL file
indefinitely. On each such attempt, a log message like the below will
be emitted at the WARN level:
Processing end of WAL file '{}'. At position {}, which is too far away
from reported file length {}. Restarting WAL reading (see HBASE-15983
for details).
Additionally, this patch adds log detail at the TRACE level about file
offsets seen while handling recoverable errors. It also adds metrics
that measure the use of this recovery mechanism.
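A minimal sketch of the reset-and-retry idea described above, with a
hypothetical WalReader interface standing in for the real HBase reader
API (method names here are illustrative, not the actual classes):

    import java.io.EOFException;
    import java.io.IOException;

    public class WalResetSketch {

      interface WalReader {
        /** Returns false once the end-of-file trailer is reached. */
        boolean readNext() throws IOException;
        long position() throws IOException;
        void seek(long offset) throws IOException;
      }

      /**
       * Reads a cleanly closed WAL through to its trailer. If a parse
       * error or EOF shows up while unparsed bytes remain before the
       * trailer, reset to the beginning and attempt a clean read-through.
       */
      static void readThrough(WalReader reader, long trailerOffset)
          throws IOException {
        while (true) {
          try {
            while (reader.readNext()) {
              // hand the parsed entry to replication here
            }
            return; // reached the end-of-file trailer cleanly
          } catch (EOFException | RuntimeException e) {
            long pos = reader.position();
            if (pos < trailerOffset) {
              // Premature EOF: unparsed bytes remain before the trailer,
              // so reset to the very beginning and retry.
              System.err.printf("Premature EOF at %d (trailer at %d); restarting%n",
                  pos, trailerOffset);
              reader.seek(0L);
              continue;
            }
            throw e; // genuine failure at or past the trailer
          }
        }
      }
    }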
Changes how we do accounting of Connections to match how it is done in Hadoop.
Adds a ConnectionManager class. Adds new configurations for this new class.
"hbase.ipc.client.idlethreshold" 4000
"hbase.ipc.client.connection.idle-scan-interval.ms" 10000
"hbase.ipc.client.connection.maxidletime" 10000
"hbase.ipc.client.kill.max", 10
"hbase.ipc.server.handler.queue.size", 100
The new scheme does away with synchronization that purportedly would freeze
out reads while we were cleaning up stale connections (according to
HADOOP-9955).
Also adds a new mechanism for accepting Connections: pull in as many as we
can at a time, adding them to a Queue, instead of accepting one at a time.
This can help with bursty traffic, according to HADOOP-9956. Removes
blocking while the Reader is busy parsing a request. Adds configuration
"hbase.ipc.server.read.connection-queue.size" with default of 100 for
queue size.
Signed-off-by: stack <stack@apache.org>
Adds HADOOP-9955 RPC idle connection closing is extremely inefficient
Then removes the queue added by HADOOP-9956 at Enis' suggestion
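A rough sketch of the lock-free idle scan, with a stand-in Conn class;
the parameters mirror the configuration keys listed above, and this is
illustrative, not the actual ConnectionManager:

    import java.util.Iterator;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class IdleScanSketch {
      static class Conn {
        volatile long lastContact = System.currentTimeMillis();
        void close() { /* release the socket */ }
      }

      private final ConcurrentLinkedQueue<Conn> connections =
          new ConcurrentLinkedQueue<>();
      private final ScheduledExecutorService scanner =
          Executors.newSingleThreadScheduledExecutor();

      // scanIntervalMs ~ "idle-scan-interval.ms", maxIdleMs ~ "maxidletime",
      // killMax ~ "kill.max" from the configuration keys above.
      void start(long scanIntervalMs, long maxIdleMs, int killMax) {
        scanner.scheduleAtFixedRate(() -> {
          long now = System.currentTimeMillis();
          int killed = 0;
          Iterator<Conn> it = connections.iterator();
          // No global lock: readers keep serving while the scan walks the queue.
          while (it.hasNext() && killed < killMax) {
            Conn c = it.next();
            if (now - c.lastContact > maxIdleMs) {
              c.close();
              it.remove();
              killed++;
            }
          }
        }, scanIntervalMs, scanIntervalMs, TimeUnit.MILLISECONDS);
      }
    }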
Summary: Missing a root index block is worse than missing a data block. We should know the difference.
Test Plan: Tested on a local instance. All numbers looked reasonable.
Differential Revision: https://reviews.facebook.net/D55563
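As an illustration of per-block-type miss accounting, a root index miss
can be reported separately from an ordinary data block miss; the names
below are hypothetical, not the HBase metrics API:

    import java.util.EnumMap;
    import java.util.concurrent.atomic.LongAdder;

    public class MissByTypeSketch {
      enum BlockType { DATA, ROOT_INDEX, INTERMEDIATE_INDEX, BLOOM }

      private final EnumMap<BlockType, LongAdder> misses =
          new EnumMap<>(BlockType.class);

      MissByTypeSketch() {
        for (BlockType t : BlockType.values()) {
          misses.put(t, new LongAdder());
        }
      }

      // A root index miss forces extra seeks for every subsequent read of
      // the file, so it is counted under its own key rather than lumped in
      // with data block misses.
      void recordMiss(BlockType type) {
        misses.get(type).increment();
      }

      long missCount(BlockType type) {
        return misses.get(type).sum();
      }
    }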
Summary:
Use less contended constructs for metrics.
For histograms, which were the largest culprit, we use FastLongHistogram.
For atomic longs, where possible, we now use Counter.
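The low-contention idea can be sketched with
java.util.concurrent.atomic.LongAdder, which stripes updates across cells
much as Counter and FastLongHistogram do; the class below illustrates the
trade-off and is not the patch itself:

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public class CounterSketch {
      private final AtomicLong contended = new AtomicLong(); // single CAS target
      private final LongAdder striped = new LongAdder();     // per-thread cells

      void onRequest() {
        contended.incrementAndGet(); // all threads fight over one cache line
        striped.increment();         // threads mostly hit distinct cells
      }

      long total() {
        return striped.sum(); // sums the cells; cheap relative to update rate
      }
    }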
Test Plan: unit tests
Differential Revision: https://reviews.facebook.net/D54381
Summary:
* Add VersionInfoUtil to determine if a client has a specified version or better
* Add an exception type to say that the response should be chunked
* Add on client knowledge of retry exceptions
* Add on metrics for how often this happens
Test Plan: Added a unit test
Differential Revision: https://reviews.facebook.net/D51771
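The version gate might look like the following sketch; the real
VersionInfoUtil signatures may differ:

    public class VersionGateSketch {
      /** Returns true if the client (major.minor) is at the given version or better. */
      static boolean hasMinimumVersion(int clientMajor, int clientMinor,
                                       int major, int minor) {
        return clientMajor > major
            || (clientMajor == major && clientMinor >= minor);
      }

      public static void main(String[] args) {
        // e.g. only send chunked responses to clients that understand them
        System.out.println(hasMinimumVersion(1, 3, 1, 2)); // true
        System.out.println(hasMinimumVersion(1, 1, 1, 2)); // false
      }
    }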
* make sure the test classifications are in test scope for their use in the hadoop-compat modules
* added a test category for 'metrics related' since that's what all these tests are for
* categorized tests as small,metrics
First pass. Plumbs up metrics for each connection instance. Exposes
static information about those connections (zk quorum and root node,
cluster id, user). Exposes basic thread pool utilization gauges for
"meta lookup" and "batch" pools.
* corrects license/notice for source distribution
* adds inception year to correct copyright in generated NOTICE files for jars
* updates project names in poms to use "Apache HBase" instead of "HBase" so jar NOTICE files will be correct
* uses append-resources to include supplemental info on jars with 3rd party works in source
* adds an hbase specific resource bundle for jars that include 3rd party works for binaries
** uses supplemental-model to fill in license gaps
** uses the above and a shade plugin transformation to build proper files for shaded jars.
** uses the above and the assembly plugin to build the proper files for bin assembly
* adds a NOTICE item for things copied out of Hadoop (TODO legal-discuss)
API conflicts and test fixes
Update LoadTestTool.COLUMN_FAMILY -> DEFAULT_COLUMN_FAMILY due to HBASE-11842
Use the new 1.0+ API in some tests
Use the updated internal Scanner API
Fix to take into account HBASE-13203 - procedure v2 table delete
Conflicts:
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
Summary: Add metrics on how many requests are exceptions and what type.
Test Plan: behold unit tests.
Differential Revision: https://reviews.facebook.net/D37167
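Counting failed requests keyed by exception type can be sketched as
below; the plain map stands in for the real metrics source:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.LongAdder;

    public class ExceptionMetricsSketch {
      private final LongAdder totalExceptions = new LongAdder();
      private final ConcurrentMap<String, LongAdder> byType =
          new ConcurrentHashMap<>();

      void onException(Throwable t) {
        totalExceptions.increment();
        byType.computeIfAbsent(t.getClass().getSimpleName(),
            k -> new LongAdder()).increment();
      }

      long total() { return totalExceptions.sum(); }

      long count(String type) {
        LongAdder a = byType.get(type);
        return a == null ? 0 : a.sum();
      }
    }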
Adds a number of lifecycle-mapping entries which
prevent errors from showing up in Eclipse on a fresh
import of HBase. For plugins defined in the top-level
pom, the mapping is added there; otherwise, the mapping
is pushed down to the child pom.
Signed-off-by: Sean Busbey <busbey@apache.org>
Incompatible changes called out in release notes on jira.
* Cleaned up references to HLog
* Deprecates HLogKey but maintains it for compatibility
- Moves all Writable code from WALKey to HLogKey
* Adds utility code to CoprocessorHost to help with evolving Coprocessor APIs
* RSRpcServices roll WAL call now requests the non-meta LogRoller roll all logs
- rolls actually happen asynchronously
- deprecated old api (and noted incompatible behavior change)
- modified api in new Admin interface to reflect lack of return values.
* Moved WAL user facing API to "WAL"
- only 1 sync offered
- WALTrailer removed from API
* make provider used by the WALFactory configurable.
* Move all WAL requests to use opaque ids instead of paths
* WALProvider provides API details for implementers and handles creation of WALs.
* Refactor WALActionsListener to have a basic implementation.
* turn MetricsWAL into a WALActionsListener.
* tests that need FSHLog implementation details use them directly, others just reference provider + factory
- Some tests moved from Large to Medium based on run time.
* pull out wal disabling into its own no-op class
* update region open to delegate to WALFactory
* update performance test tool to allow for multiple regions
* Removed references to meta-specific wals within wal code
- replaced with generic suffixes
- WALFactory maintains a dedicated WALProvider for meta (and so knows about the distinction)
* maintain backwards compat on HLogPrettyPrinter and mark it deprecated.
- made WALPrettyPrinter IA.Private in favor of `bin/hbase wal`
* move WALUtil stuff that's implementation specific to said implementation
- WALUtil now acts as an integration point between the RegionServer and the WAL code.
Incorporates contributions from v.himanshu.
Signed-off-by: stack <stack@apache.org>
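A simplified sketch of the provider indirection; these interfaces stand
in for HBase's WAL, WALProvider, and WALFactory and are not the real
signatures:

    import java.io.IOException;

    public class WalFactorySketch {
      interface WAL {
        void append(byte[] entry) throws IOException;
        void sync() throws IOException; // only one sync variant is offered
      }

      interface WALProvider {
        WAL getWAL(String opaqueId) throws IOException; // opaque ids, not paths
      }

      // The factory picks providers from configuration and keeps a
      // dedicated provider for meta, mirroring the structure above.
      static class WALFactory {
        private final WALProvider defaultProvider;
        private final WALProvider metaProvider;

        WALFactory(WALProvider defaultProvider, WALProvider metaProvider) {
          this.defaultProvider = defaultProvider;
          this.metaProvider = metaProvider;
        }

        WAL getWAL(String regionId, boolean isMeta) throws IOException {
          return (isMeta ? metaProvider : defaultProvider).getWAL(regionId);
        }
      }
    }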
In HBASE-8370 "Report data block cache hit rates apart from aggregate cache hit rates"
discussion highlights that small changes in caching percentage can make for
large changes in cache contents; e.g. lots of data blocks may be getting
evicted, but because meta blocks -- index and blooms -- are staying in cache
with high hit rates, a large data block eviction may show as a small change
in overall cache percentage (e.g. from 99.99 to 99.11).
Changes the getBlockCacheHitPercent from an int to a double.
Also changes the name of the JMX metric blockCountHitPercent to
blockCacheCountHitPercent.
May be incompatible change.
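A worked example of why the int-valued percentage hid evictions, using
the 99.99 -> 99.11 figures above:

    public class HitPercentSketch {
      static double hitPercentDouble(long hits, long requests) {
        return requests == 0 ? 0.0 : 100.0 * hits / requests;
      }

      static int hitPercentInt(long hits, long requests) {
        return (int) hitPercentDouble(hits, requests); // truncates the fraction
      }

      public static void main(String[] args) {
        long requests = 1_000_000L;
        System.out.println(hitPercentInt(999_900, requests));    // 99
        System.out.println(hitPercentInt(991_100, requests));    // 99 - same!
        System.out.println(hitPercentDouble(999_900, requests)); // 99.99
        System.out.println(hitPercentDouble(991_100, requests)); // 99.11
      }
    }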