Also set the `DOCKER_IP`
environment variable to localhost on your system, as follows:

```bash
export DOCKER_IP=127.0.0.1
```

Optionally, you can also set `APACHE_ARCHIVE_MIRROR_HOST` to override the `https://archive.apache.org` host. This host is used to download archives such as Hadoop and Kafka when building the Docker images:

```bash
export APACHE_ARCHIVE_MIRROR_HOST=https://example.com/remote-generic-repo
```

## Running tests against automatically brought up Docker containers

> NOTE: This section describes how to run integration tests against Docker containers that are brought up automatically by the commands below.
> If you want to build the Docker images and run the tests separately, see the next section.

To run all tests from a test group using Docker and mvn, run the following command
(the list of test groups can be found at `integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java`):
```bash
mvn verify -P integration-tests -Dgroups=<test_group>
```

To run only a single test using mvn, run the following command:
```bash
mvn verify -P integration-tests -Dgroups=<test_group> -Dit.test=<test_name>
```

The test group should always be set, as certain test setup and cleanup tasks are based on the test group. You can find
the test group for a given test as an annotation in the respective test class.
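
For example, one quick way to see the group annotation on a test class is to grep for it (a sketch; the file path is only illustrative):
```bash
# Show the TestNG group annotations declared in a test class.
grep -n "groups" integration-tests/src/test/java/org/apache/druid/tests/indexer/ITIndexerTest.java
```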

Add `-rf :druid-integration-tests` when running integration tests for the second time or later without changing
the code of core modules in between to skip up-to-date checks for the whole module dependency tree.
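
For instance, a repeat run of a test group could look like this (a sketch combining the options above; `<test_group>` is a placeholder):
```bash
mvn verify -P integration-tests -Dgroups=<test_group> -rf :druid-integration-tests
```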

Integration tests can also be run with either Java 8 or Java 11 by adding `-Djvm.runtime=#` to the mvn command, where `#`
can either be 8 or 11.
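
For example, to run a test group on Java 11 (a sketch using the flag above):
```bash
mvn verify -P integration-tests -Dgroups=<test_group> -Djvm.runtime=11
```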

Druid's configuration (using Docker) can be overridden by providing `-Doverride.config.path=<PATH_TO_FILE>`.
The file must contain one property per line; the key must start with `druid_` and the format should be snake case.
Note that when the Docker containers are brought up through mvn and `-Doverride.config.path` is provided, the additional
Druid routers for the security group integration tests (permissive tls, no client auth tls, custom check tls) will not be started.
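
For illustration, a minimal override file might look like the following (the property shown is only a hypothetical example of the `druid_` snake-case form, not a recommended configuration):
```bash
# Hypothetical example: lowers the worker task capacity from the default of 10.
druid_worker_capacity=5
```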

## Running tests against manually brought up Docker containers

1. Build docker images.

   From the root module, run the following Maven command:
   ```bash
   mvn clean install -pl integration-tests -P integration-tests -Ddocker.run.skip=true -Dmaven.test.skip=true -Ddocker.build.hadoop=true
   ```

   > **NOTE**: `-Ddocker.build.hadoop=true` is optional if you don't run tests against Hadoop.

2. Choose a docker-compose file to start containers.

   There are a few different Docker compose yamls located in the "docker" folder that can be used to start containers for different tests.

   - To start a basic Druid cluster (skip this if running a Druid cluster with override configs):
     ```bash
     docker-compose -f integration-tests/docker/docker-compose.yml up
     ```

   - To start a Druid cluster with override configs:
     ```bash
     OVERRIDE_ENV=<PATH_TO_ENV> docker-compose -f docker-compose.override-env.yml up
     ```

   - To start containers for tests against Hadoop:
     ```bash
     docker-compose -f docker-compose.druid-hadoop.yml up
     ```

   - To start containers for tests against the security group:
     ```bash
     docker-compose -f docker-compose.yml -f docker-compose.security.yml up
     ```

3. Run tests.

   Execute the following command from the root module, where `<test_name>` is the class name of a test, such as ITIndexerTest:
   ```bash
   mvn verify -P integration-tests -pl integration-tests -Ddocker.build.skip=true -Ddocker.run.skip=true -Dit.test=<test_name>
   ```
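
   Similarly, to run a whole test group against the containers you already brought up (a sketch reusing the same skip flags):
   ```bash
   mvn verify -P integration-tests -pl integration-tests -Ddocker.build.skip=true -Ddocker.run.skip=true -Dgroups=<test_group>
   ```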

## Docker Compose files

- docker-compose.base.yml

  Base file that defines all containers for the integration tests.

- docker-compose.yml

  Defines a Druid cluster with the default configuration that is used for running integration tests.

  ```bash
  docker-compose -f docker-compose.yml up

  # DRUID_INTEGRATION_TEST_GROUP is used by the Druid docker containers for the "security" and "query" test groups. Set it as below if you want to run security/query tests.
  DRUID_INTEGRATION_TEST_GROUP=security docker-compose -f docker-compose.yml up
  ```

- docker-compose.override-env.yml

  Defines a Druid cluster with the default configuration plus any additional and/or overridden configurations from the override-env file.

  ```bash
  # OVERRIDE_ENV must contain the path to a Druid configuration file.
  OVERRIDE_ENV=./environment-configs/override-examples/s3 docker-compose -f docker-compose.override-env.yml up
  ```

- docker-compose.security.yml

  Defines three additional Druid router services with permissive tls, no client auth tls, and custom check tls respectively.
  This is meant to be used together with docker-compose.yml or docker-compose.override-env.yml and is only needed for the "security" group integration tests.

  ```bash
  docker-compose -f docker-compose.yml -f docker-compose.security.yml up
  ```

- docker-compose.druid-hadoop.yml

  For starting an Apache Hadoop 2.8.5 cluster with the same setup as the Druid tutorial.

  ```bash
  docker-compose -f docker-compose.druid-hadoop.yml up
  ```

## Tips & tricks for debugging and developing integration tests

### Useful mvn command flags

| Flag | Description |
|:---|---|
| -Ddocker.build.skip=true | Skip building the containers. <br/><br/>If you have not changed Druid, you can skip rebuilding the containers, which saves ~4 minutes. You need to build the Druid containers only once; after that you can skip the docker build step. |
| -Ddocker.run.skip=true | Skip starting the docker containers.<br/><br/>This can save ~3 minutes by skipping building and bringing up the docker containers (Druid, Kafka, Hadoop, MySQL, ZooKeeper, etc.).<br/>Please make sure that you actually do have these containers already running if using this flag.<br/><br/>Additionally, please make sure that the running containers are in the same state that the setup script (run_cluster.sh) would have brought them up in. |
| -Ddocker.build.hadoop=true | Build the Hadoop image when either running integration tests or when building the integration test docker images without running the tests. |
| -Dstart.hadoop.docker=true | Start the Hadoop container when you need to run IT tests that utilize the local Hadoop docker. |
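
For example, a fast re-run against containers that are already up might combine the flags above (a sketch):
```bash
mvn verify -P integration-tests -Dgroups=<test_group> -Ddocker.build.skip=true -Ddocker.run.skip=true
```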

### Debugging Druid while running tests

For your convenience, the Druid processes running inside Docker have debugging enabled at the following ports:

| Process | Remote Debugging Port |
| --- | :---: |
| Router with permissive tls | 5001 |
| Router with no client auth tls | 5002 |
| Router with custom check tls | 5003 |
| Router | 5004 |
| Broker | 5005 |
| Coordinator | 5006 |
| Historical | 5007 |
| Middlemanager | 5008 |
| Overlord | 5009 |

You can use a remote debugger (such as IntelliJ IDEA's Remote Configuration) to attach to the corresponding Druid process at the port listed above.
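
For instance, to attach a command-line debugger to the Broker from the host (a sketch; it assumes the port above is reachable on localhost):
```bash
jdb -attach localhost:5005
```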

Running Tests Using A Quickstart Cluster
-------------------

When writing integration tests, it can be helpful to test against a quickstart
cluster so that you can set up remote debugging within your developer
environment. This section walks you through setting up the integration tests
so that they can run against a [quickstart cluster](../docs/tutorials/index.md) running on your development
machine.

> **NOTE**: Not all features run by default on a quickstart cluster, so it may not make sense to run the entire test suite against this configuration.
>
> Quickstart does not run with ssl, so to trick the integration tests we specify the `*_tls_url` in the config to be the same as the http url.

Make sure you have at least 6GiB of memory available before you run the tests.

The tests rely on files in the test/resources folder to exist under the path /resources,
so create a symlink to make them available:

```bash
ln -s ${DRUID_HOME}/integration-tests/src/test/resources /resources
```

Set the cluster config file environment variable to the quickstart config:
```bash
export CONFIG_FILE=${DRUID_HOME}/integration-tests/quickstart-it.json
```

The test group `quickstart-compatible` has tests that have been verified to work against the quickstart cluster.

If you find some integration tests do not work, look at the docker files to see what setup they perform and
do similar steps to get the test to work.

Then run the tests using a command similar to:
```bash
mvn verify -P int-tests-config-file -Dit.test=<test_name>
# Run all integration tests that have been verified to work against a quickstart cluster.
mvn verify -P int-tests-config-file -Dgroups=quickstart-compatible
```

Running Tests Using A Configuration File for Any Cluster
-------------------

Set the cluster config file environment variable to your cluster config:
```bash
export CONFIG_FILE=<config file name>
```

To run all tests from a test group using mvn, run the following command
(the list of test groups can be found at `integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java`):
```bash
mvn verify -P int-tests-config-file -Dgroups=<test_group>
```

To run only a single test using mvn, run the following command:
```bash
mvn verify -P int-tests-config-file -Dit.test=<test_name>
```

Running a Test That Uses Cloud
-------------------

These can be run by providing "kinesis-index", "kinesis-data-format", "gcs-deep-storage", "s3-deep-storage", or "azure-deep-storage"
to -Dgroups for Amazon Kinesis, Google Cloud Storage, Amazon S3, and Microsoft Azure respectively. Note that only
one group should be run per mvn command.

For all the Cloud Integration tests, the following will also need to be provided:
1) Provide -Doverride.config.path=<PATH_TO_FILE> with your Cloud credentials/configs set. See the
integration-tests/docker/environment-configs/override-examples/ directory for the env vars to provide for each Cloud.
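
For illustration, an S3 override file might contain entries like the following (the property names are illustrative placeholders; check the override-examples directory for the authoritative list):
```bash
druid_storage_type=s3
druid_storage_bucket=<your_bucket>
druid_s3_accessKey=<your_access_key>
druid_s3_secretKey=<your_secret_key>
```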

For Google Cloud Storage, in addition to the above, you will also have to:
1) Provide -Dresource.file.dir.path=<PATH_TO_FOLDER> with a folder that contains the GOOGLE_APPLICATION_CREDENTIALS file

For example, to run an integration test for Google Cloud Storage:
```bash
mvn verify -P integration-tests -Dgroups=gcs-deep-storage -Doverride.config.path=<PATH_TO_FILE> -Dresource.file.dir.path=<PATH_TO_FOLDER> -Ddruid.test.config.cloudBucket=test-bucket -Ddruid.test.config.cloudPath=test-data-folder/
```

Running a Test That Uses Hadoop
-------------------

If using the Docker-based Hadoop container, the required Hadoop setup steps are done automatically.

When running the Hadoop tests, you must set `-Dextra.datasource.name.suffix=''`, due to https://github.com/apache/druid/issues/9788.

Option 1: Run the test using mvn (using the bundled Docker-based Hadoop cluster and building docker images at runtime):
```bash
mvn verify -P integration-tests -Dit.test=ITHadoopIndexTest -Dstart.hadoop.docker=true -Ddocker.build.hadoop=true -Doverride.config.path=docker/environment-configs/override-examples/hdfs -Dextra.datasource.name.suffix=''
```

Option 2: Run the test using mvn (using the bundled Docker-based Hadoop cluster and not building images at runtime):
```bash
mvn verify -P integration-tests -Dit.test=ITHadoopIndexTest -Dstart.hadoop.docker=true -Ddocker.build.skip=true -Doverride.config.path=docker/environment-configs/override-examples/hdfs -Dextra.datasource.name.suffix=''
```

Option 3: Run the test using mvn (using the bundled Docker-based Hadoop cluster, when you have already started all containers):
```bash
mvn verify -P integration-tests -Dit.test=ITHadoopIndexTest -Ddocker.run.skip=true -Ddocker.build.skip=true -Doverride.config.path=docker/environment-configs/override-examples/hdfs -Dextra.datasource.name.suffix=''
```

Option 4: Run the test using mvn (using a config file for an existing Hadoop cluster):
```bash
mvn verify -P int-tests-config-file -Dit.test=ITHadoopIndexTest -Dextra.datasource.name.suffix=''
```

In some test environments, the machine where the tests need to be executed
cannot access the outside internet, so mvn cannot be run. In that case,
do the following instead of running the tests using mvn:

On a machine that can do mvn builds:

```bash
cd druid
mvn clean package
cd integration-tests
mvn dependency:copy-dependencies package
```

Copy the integration-tests directory to the test cluster.

### Set CLASSPATH

```bash
TDIR=<directory containing integration-tests>/target
VER=<version of druid you built>
export CLASSPATH=$TDIR/dependency/*:$TDIR/druid-integration-tests-$VER.jar:$TDIR/druid-integration-tests-$VER-tests.jar
```

### Run the test

```bash
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.test.config.type=configFile -Ddruid.test.config.configFile=<pathname of configuration file> org.testng.TestNG -testrunfactory org.testng.DruidTestRunnerFactory -testclass org.apache.druid.tests.hadoop.ITHadoopIndexTest
```

A test can access different helper and utility classes provided by the test framework.
To mark a test as able to use Guice Dependency Injection,
annotate the test class with the annotation below:

```java
@Guice(moduleFactory = DruidTestModuleFactory.class)
```
This tells the test framework that the test class needs to be constructed using Guice.

A test class can be set to run in parallel (multiple test methods of each class running at the same time) by excluding
the given class/package from the "AllSerializedTests" test tag section and including it in the "AllParallelizedTests"
test tag section in integration-tests/src/test/resources/testng.xml. TestNG uses two parameters, i.e.,
`thread-count` and `data-provider-thread-count`, for parallel test execution, which are both set to 2 for Druid integration tests.

For a test using parallel execution with a data provider, you will also need to set `@DataProvider(parallel = true)`
on the data provider method in your test class. Note that for a test using parallel execution with a data provider, the test
class does not need to be in the "AllParallelizedTests" test tag section, and if it is in the "AllParallelizedTests"
test tag section it will actually be run with `thread-count` times `data-provider-thread-count` threads.
You may want to modify those values for faster execution.
See https://testng.org/doc/documentation-main.html#parallel-running and https://testng.org/doc/documentation-main.html#parameters-dataproviders for details.

Please be mindful when adding tests to the "AllParallelizedTests" test tag that the tests can run in parallel with
other tests from the same class at the same time, i.e. the test does not modify/restart/stop the druid cluster or other dependency containers,
the test does not use excessive memory starving other concurrent tasks, and the test does not modify and/or use data of other tasks.

By default, integration tests are run in the Travis environment on commits made to open PRs, and they are
required to pass for a PR to be eligible to be merged. Here are known issues and limitations of the Druid docker cluster
running in Travis machines that may cause the tests to fail:
- Number of concurrently running tasks. Although the default Druid cluster config sets the maximum number of tasks (druid.worker.capacity) to 10,
the actual maximum can be lower depending on the type of the tasks. For example, running 2 range partitioning compaction tasks with 2 subtasks each
(for a total of 6 tasks) concurrently can cause the cluster to intermittently fail. This can cause the Travis job to become stuck until it times out (50 minutes)
and/or terminates after 10 mins of not receiving new output.