Integration Testing
===================
To run integration tests, you have to specify the Druid cluster the tests should use.

Druid comes with the mvn profile `integration-tests` for setting up Druid running in docker containers, and using that cluster to run the integration tests.

To use a Druid cluster that is already running, use the mvn profile `int-tests-config-file`, which uses a configuration file describing the cluster.
Integration Testing Using Docker
-------------------
For running integration tests using docker there are two approaches. If your platform supports docker natively, you can simply set the `DOCKER_IP` environment variable to localhost and skip to the [Running tests](#running-tests) section.
```
export DOCKER_IP=127.0.0.1
```
The other approach is to use a separate virtual machine to run the docker containers with the help of the `docker-machine` tool.
## Installing Docker Machine
Please refer to the instructions at [https://github.com/druid-io/docker-druid/blob/master/docker-install.md](https://github.com/druid-io/docker-druid/blob/master/docker-install.md).
## Creating the Docker VM
Create a new VM for integration tests with at least 6GB of memory.
```
docker-machine create --driver virtualbox --virtualbox-memory 6000 integration
```
Set the docker environment:
```
eval "$(docker-machine env integration)"
export DOCKER_IP=$(docker-machine ip integration)
export DOCKER_MACHINE_IP=$(docker-machine inspect integration | jq -r .Driver[\"HostOnlyCIDR\"])
```
The final command uses the `jq` tool to read the Driver->HostOnlyCIDR field from the `docker-machine inspect` output. If you don't wish to install `jq`, you will need to set `DOCKER_MACHINE_IP` manually.
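If you prefer not to install `jq`, one possible way to do this (just a sketch) is to find the value in the inspect output and export it yourself:

```
# Locate the HostOnlyCIDR value in the Driver section of the inspect output ...
docker-machine inspect integration | grep HostOnlyCIDR
# ... and export it by hand (the value below is only an example).
export DOCKER_MACHINE_IP=192.168.99.1/24
```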
## Running tests
To run all the tests using docker and mvn, run the following command:
```
mvn verify -P integration-tests
```
To run only a single test using mvn, run the following command:
```
mvn verify -P integration-tests -Dit.test=<test_name>
```
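For example, to run only ITIndexerTest (mentioned again at the end of this document) against the docker cluster:

```
mvn verify -P integration-tests -Dit.test=ITIndexerTest
```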
Running Tests Using A Configuration File for Any Cluster
-------------------
Make sure that you have at least 6GB of memory available before you run the tests.
To run tests on any Druid cluster that is already running, create a configuration file:

```
{
   "broker_host": "<broker_ip>",
   "broker_port": "<broker_port>",
   "router_host": "<router_ip>",
   "router_port": "<router_port>",
   "indexer_host": "<indexer_ip>",
   "indexer_port": "<indexer_port>",
   "coordinator_host": "<coordinator_ip>",
   "coordinator_port": "<coordinator_port>",
   "middlemanager_host": "<middle_manager_ip>",
   "zookeeper_hosts": "<comma-separated list of zookeeper_ip:zookeeper_port>"
}
```

Set the environment variable `CONFIG_FILE` to the name of the configuration file:
```
export CONFIG_FILE=<config file name>
```
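As a minimal sketch, the two steps above could look like this (the file name, hosts, and ports are placeholder values for your own cluster, not defaults you should rely on):

```
# Write the cluster description and point CONFIG_FILE at it.
cat > my_cluster_config.json <<'EOF'
{
   "broker_host": "10.0.0.11",
   "broker_port": "8082",
   "router_host": "10.0.0.11",
   "router_port": "8888",
   "indexer_host": "10.0.0.12",
   "indexer_port": "8090",
   "coordinator_host": "10.0.0.13",
   "coordinator_port": "8081",
   "middlemanager_host": "10.0.0.14",
   "zookeeper_hosts": "10.0.0.15:2181,10.0.0.16:2181"
}
EOF
export CONFIG_FILE=my_cluster_config.json
```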
To run all the tests using mvn, run the following command:
```
mvn verify -P int-tests-config-file
```
To run only a single test using mvn, run the following command:
```
mvn verify -P int-tests-config-file -Dit.test=<test_name>
```
Running a Test That Uses Hadoop
-------------------
The integration test that indexes from Hadoop is not run as part of the integration test run discussed above. This is because Druid test clusters might not, in general, have access to Hadoop. That is the case (for now, at least) when using the docker cluster set up by the `integration-tests` profile, so the Hadoop test has to be run using a cluster specified in a configuration file.

The data file is integration-tests/src/test/resources/hadoop/batch_hadoop.data. Create a directory called batchHadoop1 in the Hadoop file system (anywhere you want) and put batch_hadoop.data into that directory (as its only file).
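For example, with plain HDFS this could look like the following sketch; the /tmp/druid-it parent directory is an arbitrary choice, and whichever directory you use is the one to reference in the configuration property below.

```
# Create the batchHadoop1 directory in HDFS and copy the data file into it
# (run from the root of the druid source tree).
hdfs dfs -mkdir -p /tmp/druid-it/batchHadoop1
hdfs dfs -put integration-tests/src/test/resources/hadoop/batch_hadoop.data /tmp/druid-it/batchHadoop1/
```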
Add this property to the configuration file (see above):
```
"hadoopTestDir": "< name_of_dir_containing_batchHadoop1 > "
```
Run the test using mvn:
```
mvn verify -P int-tests-config-file -Dit.test=ITHadoopIndexTest
```
In some test environments, the machine where the tests need to be executed
cannot access the outside internet, so mvn cannot be run. In that case,
do the following instead of running the tests using mvn:
### Compile Druid and the integration tests
On a machine that can do mvn builds:
```
cd druid
mvn clean package
cd integration-tests
mvn dependency:copy-dependencies package
```
### Put the compiled test code into your test cluster
Copy the integration-tests directory to the test cluster.
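One way to do this, assuming SSH access to the cluster (the host name and destination path are placeholders):

```
# Copy the integration-tests module, including the compiled jars under target/.
rsync -az integration-tests/ user@test-cluster-host:~/druid-integration-tests/
```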
### Set CLASSPATH
```
TDIR=<directory containing integration-tests>/target
VER=<version of druid you built>
export CLASSPATH=$TDIR/dependency/*:$TDIR/druid-integration-tests-$VER.jar:$TDIR/druid-integration-tests-$VER-tests.jar
```
### Run the test
```
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.test.config.type=configFile -Ddruid.test.config.configFile=<pathname of configuration file> org.testng.TestNG -testrunfactory org.testng.DruidTestRunnerFactory -testclass org.apache.druid.tests.hadoop.ITHadoopIndexTest
```
Writing a New Test
-------------------
## What should we cover in integration tests
For every end-user functionality provided by Druid, we should have an integration test verifying its correctness.
## Rules to be followed while writing a new integration test
### Every Integration Test must follow these rules:
1) The name of the test must start with the prefix "IT"
2) A test should be independent of other tests
3) Tests are to be written in TestNG style ([http://testng.org/doc/documentation-main.html#methods](http://testng.org/doc/documentation-main.html#methods))
4) If a test loads some data, it is the responsibility of the test to clean up the data from the cluster
### How to use Guice Dependency Injection in a test
A test can access different helper and utility classes provided by the test framework in order to access the Coordinator, Broker, etc.
To enable Guice Dependency Injection for a test, annotate the test class with the following annotation:
```
@Guice(moduleFactory = DruidTestModuleFactory.class)
```
This tells the test framework that the test class needs to be constructed using Guice.
### Helper Classes provided
1) IntegrationTestingConfig - configuration of the test
2) CoordinatorResourceTestClient - HTTP client for Coordinator endpoints
3) OverlordResourceTestClient - HTTP client for Overlord (indexer) endpoints
4) QueryResourceTestClient - HTTP client for Broker endpoints
### Static Utility classes
1) RetryUtil - provides methods to retry an operation until it succeeds, for a configurable number of attempts
2) FromFileTestQueryHelper - reads queries with expected results from a file, executes them, and verifies the results using ResultVerifier
Refer to ITIndexerTest as an example of how to use dependency injection.
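As a rough sketch of what such a test can look like (the class name ITExampleTest is hypothetical, and the test-framework package names below are written from memory and may not match the current source; use ITIndexerTest as the authoritative reference):

```
import com.google.inject.Inject;
import org.apache.druid.testing.IntegrationTestingConfig;
import org.apache.druid.testing.clients.CoordinatorResourceTestClient;
import org.apache.druid.testing.guice.DruidTestModuleFactory;
import org.testng.annotations.Guice;
import org.testng.annotations.Test;

// Constructed via Guice so the helper clients and the test configuration can be injected.
@Guice(moduleFactory = DruidTestModuleFactory.class)
public class ITExampleTest
{
  @Inject
  private IntegrationTestingConfig config;

  @Inject
  private CoordinatorResourceTestClient coordinatorClient;

  @Test
  public void testSomething()
  {
    // Load some data, use RetryUtil to wait until the Coordinator reports it
    // as available, verify query results (e.g. with FromFileTestQueryHelper),
    // and clean the data up again so other tests are not affected.
  }
}
```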