The code starting at `ZKUtil.dump(ZKWatcher)` is a small mess: it has cyclic dependencies woven
through `ZKUtil`, `ZKWatcher`, and `RecoverableZooKeeper`. It also initializes a static variable in
`ZKUtil` through the factory for `RecoverableZooKeeper` instances. Let's decouple and clean it
up.
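As a generic illustration of the decoupling pattern only (hypothetical names, not the actual
HBase change): instead of a utility class initializing static state as a side effect of a
factory, the ZooKeeper handle is supplied explicitly, so neither side depends on how the other
is constructed.
```java
import java.nio.charset.StandardCharsets;
import java.util.function.Supplier;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical helper: the ZooKeeper handle is injected rather than looked up via
// static state, so there is no hidden initialization order between the classes.
final class ZkDumpHelper {
  private final Supplier<ZooKeeper> zkSupplier;

  ZkDumpHelper(Supplier<ZooKeeper> zkSupplier) {
    this.zkSupplier = zkSupplier;
  }

  String dumpNodeData(String znode) throws KeeperException, InterruptedException {
    // The caller decides how the ZooKeeper client is created and recovered.
    byte[] data = zkSupplier.get().getData(znode, false, null);
    return data == null ? "" : new String(data, StandardCharsets.UTF_8);
  }
}
```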
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Use a hybrid logical clock for timestamping entries.
Using BufferedMutator without an HLC was problematic because we assign client timestamps,
and the store loop is fast enough that, on rare occasion, two temporally adjacent URLs
in the set of WARCs receive the same timestamp; because the timestamp does not advance,
this later leads to a rare false positive CORRUPT finding.
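A minimal sketch of the idea (not HBase's implementation): the clock hands out the maximum of
the wall clock and the last issued value plus one, so two back-to-back writes can never share a
timestamp even when the wall clock has not advanced.
```java
// Simplified hybrid-clock sketch: a full hybrid logical clock also packs a logical
// counter into the low bits, but the monotonicity guarantee is the point here.
final class MonotonicClock {
  private long lastTimestamp = 0L;

  synchronized long currentTime() {
    long wallClock = System.currentTimeMillis();
    // Never re-issue a value or go backwards, even if the wall clock stalls or jumps back.
    lastTimestamp = Math.max(wallClock, lastTimestamp + 1);
    return lastTimestamp;
  }
}
```
With strictly increasing client timestamps, two adjacent stores can no longer collide and
trigger the false positive CORRUPT finding.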
While making changes, support direct S3N paths as input paths on the command line.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
10/17 commits of HBASE-22120, original commit f6ff519dd0
Co-authored-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
* HBASE-25867 Extra doc around ITBLL
Minor edits to a few log messages.
Explain how the '-c' option works when passed to ChaosMonkeyRunner.
Some added notes on ITBLL.
Fix whacky 'R' and 'Not r' thing in Master (shows when you run ITBLL).
In HRS, report hostname and port when it checks in (was debugging an issue
where Master and HRS had different notions of its hostname).
Avoid a dirty FileNotFoundException on startup if the base dir is not yet in place.
* Address Review by Sean
Signed-off-by: Sean Busbey <busbey@apache.org>
This integration test loads successful resource retrieval records from
the Common Crawl (https://commoncrawl.org/) public dataset into an HBase
table and writes records that can be used to later verify the presence
and integrity of those records.
Run like:
```
./bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl \
  -Dfs.s3n.awsAccessKeyId=<AWS access key> \
  -Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
  /path/to/test-CC-MAIN-2021-10-warc.paths.gz \
  /path/to/tmp/warc-loader-output
```
Access to the Common Crawl dataset in S3 is made available to anyone by
Amazon AWS, but Hadoop's S3N filesystem still requires valid access
credentials to initialize.
The input path can either specify a directory or a file. The file may
optionally be compressed with gzip. If a directory, the loader expects
the directory to contain one or more WARC files from the Common Crawl
dataset. If a file, the loader expects a list of Hadoop S3N URIs which
point to S3 locations for one or more WARC files from the Common Crawl
dataset, one URI per line. Lines should be terminated with the UNIX line
terminator.
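For illustration, one way such a gzipped, one-URI-per-line list can be read into Hadoop
`Path`s (the loader's actual parsing may differ):
```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;
import org.apache.hadoop.fs.Path;

final class WarcPathList {
  // Reads a gzipped text file containing one S3N URI per line into a list of Paths.
  static List<Path> read(String pathsGz) throws IOException {
    List<Path> warcPaths = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
        new GZIPInputStream(new FileInputStream(pathsGz)), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        if (!line.trim().isEmpty()) {
          warcPaths.add(new Path(line.trim())); // e.g. an s3n:// URI for one WARC file
        }
      }
    }
    return warcPaths;
  }
}
```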
Included in hbase-it/src/test/resources/CC-MAIN-2021-10-warc.paths.gz
is a list of all WARC files comprising the Q1 2021 crawl archive. There
are 64,000 WARC files in this data set, each containing ~1GB of gzipped
data. The WARC files contain several record types, such as metadata,
request, and response, but we only load the response record types. If
the HBase table schema does not specify compression (by default) there
is roughly a 10x expansion. Loading the full crawl archive results in a
table approximately 640 TB in size.
The hadoop-aws jar will be needed at runtime to instantiate the S3N
filesystem. Use the -files ToolRunner argument to add it.
You can also split the Loader and Verify stages:
Load with:
```
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Loader' \
  -files /path/to/hadoop-aws.jar \
  -Dfs.s3n.awsAccessKeyId=<AWS access key> \
  -Dfs.s3n.awsSecretAccessKey=<AWS secret key> \
  /path/to/test-CC-MAIN-2021-10-warc.paths.gz \
  /path/to/tmp/warc-loader-output
```
Verify with:
```
./bin/hbase 'org.apache.hadoop.hbase.test.IntegrationTestLoadCommonCrawl$Verify' \
  /path/to/tmp/warc-loader-output
```
Signed-off-by: Michael Stack <stack@apache.org>
Conflicts:
pom.xml
IntegrationTestImportTsv generates HFiles under the working directory of the
current hdfs user executing the tool, before bulkloading them into HBase.
Assuming you encrypt the HBase root directory within HDFS (using HDFS
Transparent Encryption), you can bulkload HFiles only if they sit in the same
encryption zone in HDFS as the HBase root directory itself.
When IntegrationTestImportTsv is executed against a real distributed cluster
and the working directory of the current user (e.g. /user/hbase) is not in the
same encryption zone as the HBase root directory (e.g. /hbase/data), you will
get an exception:
```
ERROR org.apache.hadoop.hbase.regionserver.HRegion: There was a partial failure
due to IO when attempting to load d :
hdfs://mycluster/user/hbase/test-data/22d8460d-04cc-e032-88ca-2cc20a7dd01c/
IntegrationTestImportTsv/hfiles/d/74655e3f8da142cb94bc31b64f0475cc
org.apache.hadoop.ipc.RemoteException(java.io.IOException):
/user/hbase/test-data/22d8460d-04cc-e032-88ca-2cc20a7dd01c/
IntegrationTestImportTsv/hfiles/d/74655e3f8da142cb94bc31b64f0475cc
can't be moved into an encryption zone.
```
In this commit I make the location where IntegrationTestImportTsv generates
the HFiles configurable.
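For example (the property name below is hypothetical, used only to show the shape of the
change), the output location can be pointed at a directory inside the same encryption zone as
the HBase root dir:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ItImportTsvConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical key for illustration; check the tool's options for the real one.
    // Point HFile generation at a path under the same encryption zone as hbase.rootdir.
    conf.set("hbase.it.importtsv.generated.hfiles.dir", "/hbase/it-test-hfiles");
  }
}
```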
Co-authored-by: Mate Szalay-Beko <symat@apache.com>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Sometimes when running chaos monkey, I've found that we lose accounting of
region servers. I've taken to a manual process of checking the
reported list against a known reference. It occurs to me that
ChaosMonkey has a known reference, and it can do this accounting for
me.
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Running `ServerKillingChaosMonkey` via `RESTApiClusterManager` for any
duration of time slowly leaks region servers. I see failures on the
RESTApi side go unreported on the ChaosMonkey side. It seems like
`RuntimeException`s are being thrown and lost.
`PolicyBasedChaosMonkey` uses a primitive means of thread management
anyway. Update to use a thread pool, thread groups, and an
uncaughtExceptionHandler.
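A sketch of that wiring (names are illustrative): a named thread group plus a `ThreadFactory`
that installs an `UncaughtExceptionHandler`, so a `RuntimeException` thrown by a chaos task is
logged instead of being silently lost.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ChaosWorkers {
  private static final Logger LOG = LoggerFactory.getLogger(ChaosWorkers.class);

  static ExecutorService newPool(int threads) {
    ThreadGroup group = new ThreadGroup("chaos-monkey");
    AtomicInteger counter = new AtomicInteger();
    ThreadFactory factory = runnable -> {
      Thread t = new Thread(group, runnable, "chaos-worker-" + counter.incrementAndGet());
      // Surface RuntimeExceptions that would otherwise be thrown and lost.
      t.setUncaughtExceptionHandler(
          (thread, e) -> LOG.error("Uncaught exception in {}", thread.getName(), e));
      return t;
    };
    return Executors.newFixedThreadPool(threads, factory);
  }
}
```
Note that tasks need to be run with execute() rather than submit() for the handler to see the
exception; submit() captures failures in the returned Future.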
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
* sometimes API calls return with null/empty response bodies; thus, wrap all
API calls in a retry loop (see the sketch after this list).
* calls that submit work in the form of "commands" now retrieve the
commandId from successful command submission, and track completion
of that command before returning control to the calling context.
* model CM's process state and use that model to guide state
transitions more intelligently. This guards against, for example,
the start command failing with an error message like "Role must be
stopped".
* improve logging levels, and avoid spamming logs with the
side-effects of retries at this and higher contexts.
* include references to API documentation, such as it is.
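A generic sketch of the retry loop mentioned above (names are hypothetical, not the
`RESTApiClusterManager` code): retry a bounded number of times whenever the call comes back
with no usable response.
```java
import java.util.function.Supplier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ApiRetry {
  private static final Logger LOG = LoggerFactory.getLogger(ApiRetry.class);

  static <T> T callWithRetries(Supplier<T> apiCall, int maxAttempts) throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      T response = apiCall.get();
      if (response != null) {
        return response;
      }
      // Keep retry noise at debug level so the logs are not spammed.
      LOG.debug("No usable API response (attempt {}/{}), retrying", attempt, maxAttempts);
      Thread.sleep(1000L * attempt); // simple linear backoff
    }
    throw new RuntimeException("API call returned no usable response after " + maxAttempts + " attempts");
  }
}
```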
Signed-off-by: stack <stack@apache.org>
`RollingBatchRestartRsAction` doesn't handle failure cases when
tracking its list of dead servers. The original author believed that a
failure to restart would result in a retry. However, by removing the
dead server from the failed list, that state is lost, and retry never
occurs. Because this action doesn't ever look back to the current
state of the cluster, relying only on its local state for the current
action invocation, it never realizes the abandoned server is still
dead. Instead, be more careful to only remove the dead server from the
list when the `startRs` invocation claims to have been successful.
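A sketch of the corrected bookkeeping (types and helper names follow the description above but
are illustrative): the dead server stays queued until `startRs` returns without throwing, so a
failed restart actually gets retried.
```java
import java.io.IOException;
import java.util.LinkedList;
import java.util.Queue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class RollingRestartSketch {
  private static final Logger LOG = LoggerFactory.getLogger(RollingRestartSketch.class);

  interface Restarter {
    void startRs(String server) throws IOException; // stand-in for the real start call
  }

  static void restartAll(Iterable<String> deadList, Restarter restarter, long sleepMillis)
      throws InterruptedException {
    Queue<String> deadServers = new LinkedList<>();
    deadList.forEach(deadServers::add);
    while (!deadServers.isEmpty()) {
      String server = deadServers.peek();
      try {
        restarter.startRs(server);
        deadServers.remove(); // drop the server only after the start call succeeded
      } catch (IOException e) {
        LOG.warn("Failed to start {}; leaving it on the dead list to retry", server, e);
        Thread.sleep(sleepMillis);
      }
    }
  }
}
```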
Signed-off-by: stack <stack@apache.org>
Adds `protected abstract Logger getLogger()` to `Action` so that
implementations' names are logged when actions are performed.
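The shape of the change, per the description above (the concrete action shown is illustrative):
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class Action {
  // Each implementation supplies its own Logger, so log lines carry the concrete class name.
  protected abstract Logger getLogger();

  public void perform() throws Exception {
    getLogger().info("Performing action {}", getClass().getSimpleName());
  }
}

class RestartRandomRsAction extends Action {
  private static final Logger LOG = LoggerFactory.getLogger(RestartRandomRsAction.class);

  @Override
  protected Logger getLogger() {
    return LOG;
  }
}
```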
Signed-off-by: stack <stack@apache.org>
Signed-off-by: Jan Hentschel <jan.hentschel@ultratendency.com>
Implements `ClusterManager` that relies on the new
`ShellExecEndpointCoprocessor` for remote shell command execution.
Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Use the correct GSON API for deserializing service responses. Add
simple unit test covering a very limited selection of the overall API
surface area, just enough to ensure deserialization works.
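A minimal example of the kind of Gson deserialization described (the response shape here is
hypothetical):
```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class GsonResponseExample {
  // Hypothetical POJO mirroring one service response body.
  static class CommandStatus {
    int id;
    boolean active;
    boolean success;
    String resultMessage;
  }

  public static void main(String[] args) {
    String jsonBody = "{\"id\":42,\"active\":false,\"success\":true,\"resultMessage\":\"ok\"}";
    Gson gson = new GsonBuilder().create();
    CommandStatus status = gson.fromJson(jsonBody, CommandStatus.class);
    System.out.println(status.id + " success=" + status.success);
  }
}
```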
Signed-off-by: stack <stack@apache.org>
We have this nice description in the javadoc on ITBLL but it's
unformatted and thus illegible. Add some formatting so that it can be
read by humans.
Signed-off-by: Jan Hentschel <janh@apache.org>
Signed-off-by: Josh Elser <elserj@apache.org>
Instead of using the default properties when checking for monkey
properties, we now use the ones already extended with command-line
params.
Change FillDiskCommandAction to try to stop the remote process if the
command failed with an exception.
Signed-off-by: stack <stack@apache.org>
Add monkey actions:
- manipulate network packets with tc (reorder, lose, ...)
- add CPU load
- fill the disk
- corrupt or delete regionserver data files
Extend HBaseClusterManager to allow sudo calls.
Signed-off-by: Josh Elser <elserj@apache.org>
Signed-off-by: Balazs Meszaros <meszibalu@apache.org>
* Add chaos monkey action for suspend/resume region servers
* Add chaos monkey action for graceful rolling restart
* Add these to relevant chaos monkeys
Signed-off-by: Balazs Meszaros <meszibalu@apache.org>
Signed-off-by: Peter Somogyi <psomogyi@apache.org>
Added a new artifact, hbase-shaded-testing-util. It wraps a whole hbase-server
with its testing dependencies. Users should use only the following dependency
in their pom:
```
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-testing-util</artifactId>
  <version>${hbase.version}</version>
  <scope>test</scope>
</dependency>
```
Added the hbase-shaded-testing-util-tester Maven module, which ensures
that hbase-shaded-testing-util works with a shaded client.
Signed-off-by: Josh Elser <elserj@apache.org>