With prefetch-on-open enabled, the task doing the prefetching was using
non-positional (i.e. streaming) reads. If the main (non-prefetch) thread
was also using non-positional reads, the two would conflict, because
input streams are not thread-safe for non-positional reads.
In the case of an encrypted filesystem, this could cause JVM crashes and
other failures, as the underlying cipher buffers were freed underneath the
racing threads. In the case of a non-encrypted filesystem, less severe
errors would be thrown. The included unit test reproduces the latter case.
(cherry picked from commit 025ddce868)
Signed-off-by: Todd Lipcon <todd@cloudera.com>
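For context, a minimal sketch of the two read styles, assuming Hadoop's
FSDataInputStream API; the race described above only arises with the
streaming form:

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataInputStream;

    public class ReadStyles {
      // Positional read: carries its own offset, leaves the shared stream
      // position untouched, and is safe to call from multiple threads.
      static int positionalRead(FSDataInputStream in, long pos, byte[] buf) throws IOException {
        return in.read(pos, buf, 0, buf.length);
      }

      // Streaming (non-positional) read: seek() + read() both mutate the shared
      // stream position, so a prefetch thread and the main thread interleaving
      // these calls race on that state.
      static int streamingRead(FSDataInputStream in, long pos, byte[] buf) throws IOException {
        in.seek(pos);
        return in.read(buf, 0, buf.length);
      }
    }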
Also changes modify table operations to help the case where a
ModifyTableProcedure spans two masters, avoiding the sanity checks
propagating back to the client unnecessarily.
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
ModifyTableProcedure is using MoveRegionProcedure in a way
that was unintended in the original implementation. As such,
we have to guard against certain usages of it. We know we can
reopen OPEN regions, but regions in OPENING will similarly
soon be OPEN (thus, we want to reopen those regions too).
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: zhangduo <zhangduo@apache.org>
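A rough illustration of the guard described above, as a hypothetical,
simplified sketch (the enum and helper below are not the actual
ModifyTableProcedure/MoveRegionProcedure code):

    public class ReopenGuard {
      // Hypothetical, trimmed-down region states; the real set in HBase's
      // assignment manager is larger.
      enum RegionState { OPEN, OPENING, CLOSING, CLOSED, SPLIT }

      // Reopen regions that are OPEN now, or OPENING and therefore about to
      // be OPEN; anything else is skipped rather than failing the procedure.
      static boolean shouldReopen(RegionState state) {
        return state == RegionState.OPEN || state == RegionState.OPENING;
      }
    }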
Provides an extra client descriptor to build a second
tarball with a reduced set of dependencies. Not of great
impact now, but it will pave the way for improvements in the future.
Signed-off-by: Sean Busbey <busbey@apache.org>
Conflicts:
hbase-assembly/pom.xml
Conflicts:
hbase-spark/pom.xml
* modify the jar checking script to take args; make hadoop stuff optional
* separate out checking the artifacts that have hadoop vs those that don't.
  * Unfortunately means we need two modules for checking things
  * put in a safety check that the support script for checking jar contents is maintained in both modules
  * have to carve out an exception for o.a.hadoop.metrics2. :(
* fix duplicated class warning
* clean up dependencies in hbase-server and some modules that depend on it.
* allow Hadoop to have its own htrace where it needs it
* add a precommit check to make sure we're not using old htrace imports (see the sketch below)
Conflicts:
hbase-backup/pom.xml
hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
Signed-off-by: Mike Drob <mdrob@apache.org>
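The precommit check itself lives in the build tooling; as a self-contained,
hypothetical illustration of the idea, a scan like the following flags imports
from the pre-4.x htrace packages (the exact banned prefix is an assumption here):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    public class HTraceImportCheck {
      // Old htrace classes lived directly under org.apache.htrace; the newer
      // API moved under org.apache.htrace.core (assumed prefix for this sketch).
      private static final Pattern OLD_IMPORT =
          Pattern.compile("^import\\s+org\\.apache\\.htrace\\.(?!core\\.)");

      public static void main(String[] args) throws IOException {
        try (Stream<Path> files = Files.walk(Paths.get(args[0]))) {
          files.filter(p -> p.toString().endsWith(".java"))
               .forEach(HTraceImportCheck::check);
        }
      }

      private static void check(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
          lines.filter(l -> OLD_IMPORT.matcher(l).find())
               .forEach(l -> System.out.println(file + ": " + l.trim()));
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      }
    }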
Cannot go to latest (8.9) yet due to
https://github.com/checkstyle/checkstyle/issues/5279
* move hbaseanti import checks to checkstyle
* implement a few missing equals checks, and ignore one (see the sketch below)
* fix lots of javadoc errors
Signed-off-by: Sean Busbey <busbey@apache.org>
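For the missing equals checks, the usual fix is pairing equals with hashCode so
checkstyle's EqualsHashCode rule is satisfied; a generic, hypothetical example
(not one of the actual HBase classes):

    import java.util.Arrays;
    import java.util.Objects;

    public final class ExampleKey {
      private final byte[] row;      // hypothetical fields, for illustration only
      private final long timestamp;

      ExampleKey(byte[] row, long timestamp) {
        this.row = row;
        this.timestamp = timestamp;
      }

      @Override
      public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ExampleKey)) return false;
        ExampleKey other = (ExampleKey) o;
        return timestamp == other.timestamp && Arrays.equals(row, other.row);
      }

      // Checkstyle's EqualsHashCode check requires that a class overriding
      // equals also overrides hashCode (and vice versa).
      @Override
      public int hashCode() {
        return Objects.hash(Arrays.hashCode(row), timestamp);
      }
    }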
HBase ITs require JUnit, which requires Hamcrest. Hadoop recently
stopped including Hamcrest in its installation (and thus our inherited
classpath), which means that we need to ship it.
Signed-off-by: Sean Busbey <busbey@apache.org>
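Roughly why JUnit drags Hamcrest along: JUnit 4's assertThat is implemented in
terms of org.hamcrest.Matcher, so even a simple IT assertion like the
hypothetical one below needs hamcrest-core on the classpath:

    import static org.hamcrest.CoreMatchers.containsString;
    import static org.junit.Assert.assertThat;

    import org.junit.Test;

    public class ClasspathExampleTest {
      @Test
      public void matcherNeedsHamcrest() {
        // assertThat(T, Matcher<? super T>) is declared in JUnit 4 but backed by
        // Hamcrest, so hamcrest-core must be shipped alongside junit.
        assertThat("hbase-it", containsString("hbase"));
      }
    }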
Return 401 sooner when the AUTHORIZATION header is missing
The HBase Thrift server was checking for the AUTHORIZATION header and assuming it was always
present, even on the first request. Many clients will not send the AUTHORIZATION header until
a 401 is received. When the header was absent, HBase Thrift was throwing multiple exceptions
and filling the logs with them. This was fixed by returning a 401 immediately when the
AUTHORIZATION header is empty and security is enabled.
Signed-off-by: Josh Elser <elserj@apache.org>
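A minimal sketch of the fix, assuming a servlet-style handler like the Thrift
HTTP server's; the names below are illustrative, not the actual
ThriftHttpServlet code:

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class AuthHeaderCheck {
      // When security is enabled and the client has not yet sent credentials,
      // answer with 401 immediately instead of throwing and logging exceptions.
      static boolean requireAuthHeader(HttpServletRequest request,
                                       HttpServletResponse response,
                                       boolean securityEnabled) throws IOException {
        String auth = request.getHeader("Authorization");
        if (securityEnabled && (auth == null || auth.isEmpty())) {
          response.setHeader("WWW-Authenticate", "Negotiate");
          response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
              "Authorization header missing");
          return false;
        }
        return true; // header present (or security disabled); continue the request
      }
    }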