This adds a Vagrantfile and supporting automation that creates a virtual machine environment
suitable for running the create-release scripts.
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Revert of the revert: re-applies HBASE-25449 with one change,
renaming the test HDFS XML configuration file because it was
adversely affecting tests that use MiniDFS.
This reverts commit c218e576fe.
Co-authored-by: Josh Elser <elserj@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Adding StoreContext, which holds the metadata about the HStore. This
metadata can be used across the HFile writers/readers and other HStore
consumers without needing to pass around the complete store and
expose its internals.
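To illustrate the pattern, here is a minimal sketch of an immutable
context object built with a builder; all field and method names are
invented for illustration and are not the actual HBase API:
```
/**
 * Minimal sketch of the StoreContext idea: an immutable holder for the
 * per-store metadata that HFile writers/readers need, so they no longer
 * have to be handed the whole HStore. All names are illustrative.
 */
public final class StoreContext {
  private final String columnFamilyName; // family this store serves
  private final String storeDirectory;   // where this store's HFiles live
  private final int blockSize;           // HFile block size for writers

  private StoreContext(Builder builder) {
    this.columnFamilyName = builder.columnFamilyName;
    this.storeDirectory = builder.storeDirectory;
    this.blockSize = builder.blockSize;
  }

  public String getColumnFamilyName() { return columnFamilyName; }
  public String getStoreDirectory() { return storeDirectory; }
  public int getBlockSize() { return blockSize; }

  public static final class Builder {
    private String columnFamilyName;
    private String storeDirectory;
    private int blockSize = 64 * 1024;

    public Builder withColumnFamilyName(String name) {
      this.columnFamilyName = name;
      return this;
    }

    public Builder withStoreDirectory(String dir) {
      this.storeDirectory = dir;
      return this;
    }

    public Builder withBlockSize(int size) {
      this.blockSize = size;
      return this;
    }

    public StoreContext build() {
      return new StoreContext(this);
    }
  }
}
```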
Co-authored-by: Abhishek Khanna <akkhanna@amazon.com>
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Zach York <zyork@apache.org>
Exclude hbase-shaded-testing-util*.jar from checkcompatibility; this
jar cannot be unzipped on a case-insensitive filesystem. Added some
debugging output to checkcompatibility to help diagnose cryptic
failures.
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Profiling shows a lot of time spent in the UPDATE SQL statement. Remove the tight loop and let SQL
do a bulk update instead.
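For illustration, a hedged JDBC sketch of the change; the table and
column names are hypothetical, not the actual schema this commit touches:
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BulkUpdateSketch {
  // Before: one UPDATE per row; each execute is a separate round trip
  // and statement execution, which is what the profiler flagged.
  static void perRowUpdate(Connection conn, long[] ids) throws SQLException {
    try (PreparedStatement ps =
        conn.prepareStatement("UPDATE tasks SET state = 'DONE' WHERE id = ?")) {
      for (long id : ids) {
        ps.setLong(1, id);
        ps.executeUpdate();
      }
    }
  }

  // After: one set-based UPDATE lets the database update all matching
  // rows in a single statement.
  static void bulkUpdate(Connection conn) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
        "UPDATE tasks SET state = 'DONE' WHERE state = 'PENDING'")) {
      ps.executeUpdate();
    }
  }
}
```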
Signed-off-by: Huaxiang Sun <huaxiangsun@apache.org>
Signed-off-by: Michael Stack <stack@apache.org>
IntegrationTestImportTsv generates HFiles under the working directory of
the current HDFS user executing the tool, before bulkloading them into HBase.
Assuming you encrypt the HBase root directory within HDFS (using HDFS
Transparent Encryption), you can bulkload HFiles only if they sit in the same
encryption zone in HDFS as the HBase root directory itself.
When IntegrationTestImportTsv is executed against a real distributed cluster
and the working directory of the current user (e.g. /user/hbase) is not in
the same encryption zone as the HBase root directory (e.g. /hbase/data),
you will get an exception:
```
ERROR org.apache.hadoop.hbase.regionserver.HRegion: There was a partial failure
due to IO when attempting to load d :
hdfs://mycluster/user/hbase/test-data/22d8460d-04cc-e032-88ca-2cc20a7dd01c/
IntegrationTestImportTsv/hfiles/d/74655e3f8da142cb94bc31b64f0475cc
org.apache.hadoop.ipc.RemoteException(java.io.IOException):
/user/hbase/test-data/22d8460d-04cc-e032-88ca-2cc20a7dd01c/
IntegrationTestImportTsv/hfiles/d/74655e3f8da142cb94bc31b64f0475cc
can't be moved into an encryption zone.
```
This commit makes the directory where IntegrationTestImportTsv generates
its HFiles configurable.
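A minimal sketch of what such a knob can look like; the property name
below is hypothetical, not necessarily the one this commit introduces:
```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class HFileOutputDirSketch {
  // Resolve the HFile output directory from the job configuration,
  // falling back to the old behaviour (the user's working directory)
  // when the property is unset.
  static Path getHFileOutputDir(Configuration conf, Path defaultWorkingDir) {
    String configured = conf.get("integrationtest.importtsv.generated.hfile.dir");
    return configured == null ? defaultWorkingDir : new Path(configured);
  }
}
```
With this, an operator can point the generated HFiles at a directory inside
the same encryption zone as the HBase root directory.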
Co-authored-by: Mate Szalay-Beko <symat@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Some OOMEs cannot cause the JVM to exit, e.g. "java.lang.OutOfMemoryError: Direct buffer memory" and "java.lang.OutOfMemoryError: unable to create new native thread", as they don't call vmError#next_OnError_command. So abort the HMaster when an uncaught exception occurs in the TimeoutExecutor; the new active HMaster will then resume the suspended procedure.
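Roughly, the approach looks like the following self-contained sketch;
abort() here is a stand-in for HMaster#abort and the thread body is
simulated, so names are illustrative rather than the actual code:
```
public class AbortOnUncaughtSketch {
  // Stand-in for HMaster#abort: log and hard-exit so that a backup
  // master can become active and resume the suspended procedures.
  static void abort(String why, Throwable cause) {
    System.err.println("Aborting: " + why);
    cause.printStackTrace();
    Runtime.getRuntime().halt(1);
  }

  public static void main(String[] args) {
    Thread timeoutExecutor = new Thread(() -> {
      // Simulated failure: this OOME would not make the JVM exit on its own.
      throw new OutOfMemoryError("Direct buffer memory");
    }, "ProcExecTimeout");
    // Without this handler the error would only kill the executor thread,
    // leaving the master alive but with timeout handling silently dead.
    timeoutExecutor.setUncaughtExceptionHandler(
        (t, e) -> abort("Uncaught exception in " + t.getName(), e));
    timeoutExecutor.start();
  }
}
```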
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Pankaj Kumar <pankajkumar@apache.org>
M dev-support/create-release/README.txt
Remove redundant text. Add some extra help around figuring out the
state of gpg-agent.
M dev-support/create-release/do-release.sh
Undo my mistaken commit where I removed the test of gpg signing when running under docker
M dev-support/create-release/release-build.sh
Handle '-h'
M src/main/asciidoc/_chapters/developer.adoc
Point to the README.txt under dev-support/create-release rather than
repeating the text here. Be more insistent about using the scripts.