<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one
  ~ or more contributor license agreements.  See the NOTICE file
  ~ distributed with this work for additional information
  ~ regarding copyright ownership.  The ASF licenses this file
  ~ to you under the Apache License, Version 2.0 (the
  ~ "License"); you may not use this file except in compliance
  ~ with the License.  You may obtain a copy of the License at
  ~
  ~   http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing,
  ~ software distributed under the License is distributed on an
  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  ~ KIND, either express or implied.  See the License for the
  ~ specific language governing permissions and limitations
  ~ under the License.
-->

Integration Testing
===================

To run integration tests, you have to specify the druid cluster the tests should use.

Druid comes with the mvn profile `integration-tests` for setting up druid running in docker containers, and using that cluster to run the integration tests.

To use a druid cluster that is already running, use the mvn profile `int-tests-config-file`, which uses a configuration file describing the cluster.

Integration Testing Using Docker
-------------------

There are two approaches to running the integration tests using docker. If your platform supports docker natively, you can simply set the `DOCKER_IP` environment variable to localhost and skip to the [Running tests](#running-tests) section.

```
export DOCKER_IP=127.0.0.1
```

The other approach is to run the docker containers in a separate virtual machine, with the help of the `docker-machine` tool.

## Installing Docker Machine

Please refer to the instructions at [https://github.com/druid-io/docker-druid/blob/master/docker-install.md](https://github.com/druid-io/docker-druid/blob/master/docker-install.md).

## Creating the Docker VM

Create a new VM for integration tests with at least 6GB of memory.

```
docker-machine create --driver virtualbox --virtualbox-memory 6000 integration
```

Set the docker environment:

```
eval "$(docker-machine env integration)"
export DOCKER_IP=$(docker-machine ip integration)
export DOCKER_MACHINE_IP=$(docker-machine inspect integration | jq -r .Driver[\"HostOnlyCIDR\"])
```

The final command uses the `jq` tool to read the Driver->HostOnlyCIDR field from the `docker-machine inspect` output. If you don't wish to install `jq`, you will need to set `DOCKER_MACHINE_IP` manually.

## Running tests

To run all the tests using docker and mvn, run the following command:

```
mvn verify -P integration-tests
```

To run only a single test using mvn, run the following command:

```
mvn verify -P integration-tests -Dit.test=<test_name>
```

Add `-rf :druid-integration-tests` when running integration tests for the second time or later without changing the code of core modules in between, to skip up-to-date checks for the whole module dependency tree.

Running Tests Using A Configuration File for Any Cluster
-------------------

Make sure that you have at least 6GB of memory available before you run the tests.

To run tests on any druid cluster that is already running, create a configuration file:

```
{
  "broker_host": "<broker_ip>",
  "broker_port": "<broker_port>",
  "router_host": "<router_ip>",
  "router_port": "<router_port>",
  "indexer_host": "<indexer_ip>",
  "indexer_port": "<indexer_port>",
  "coordinator_host": "<coordinator_ip>",
  "coordinator_port": "<coordinator_port>",
  "middlemanager_host": "<middle_manager_ip>",
  "zookeeper_hosts": "<comma-separated list of zookeeper_ip:zookeeper_port>"
}
```

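As a concrete illustration, the snippet below writes such a file and points `CONFIG_FILE` at it. The host and port values are hypothetical, describing a single-machine cluster on Druid's default ports; substitute the values for your own cluster.

```shell
# Hypothetical example: every service on one host, using Druid's default ports.
cat > cluster-config.json <<'EOF'
{
  "broker_host": "localhost",
  "broker_port": "8082",
  "router_host": "localhost",
  "router_port": "8888",
  "indexer_host": "localhost",
  "indexer_port": "8090",
  "coordinator_host": "localhost",
  "coordinator_port": "8081",
  "middlemanager_host": "localhost",
  "zookeeper_hosts": "localhost:2181"
}
EOF
export CONFIG_FILE=cluster-config.json
```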
Set the environment variable CONFIG_FILE to the name of the configuration file:

```
export CONFIG_FILE=<config file name>
```

To run all the tests using mvn, run the following command:

```
mvn verify -P int-tests-config-file
```

To run only a single test using mvn, run the following command:

```
mvn verify -P int-tests-config-file -Dit.test=<test_name>
```

Running a Test That Uses Hadoop
-------------------

The integration test that indexes from hadoop is not run as part of the integration test run discussed above. This is because druid test clusters might not, in general, have access to hadoop. That's the case (for now, at least) when using the docker cluster set up by the integration-tests profile, so the hadoop test has to be run using a cluster specified in a configuration file.

The data file is integration-tests/src/test/resources/hadoop/batch_hadoop.data. Create a directory called batchHadoop1 in the hadoop file system (anywhere you want) and put batch_hadoop.data into that directory (as its only file).

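For example, with an HDFS client configured for your cluster, the directory could be created and populated like this (the `/tmp` parent directory is an arbitrary choice for illustration):

```shell
# Sketch: create batchHadoop1 anywhere in HDFS and copy in the data file
# as its only file. The /tmp parent location is an arbitrary example.
hadoop fs -mkdir -p /tmp/batchHadoop1
hadoop fs -put integration-tests/src/test/resources/hadoop/batch_hadoop.data /tmp/batchHadoop1/
```

With this layout, the directory containing batchHadoop1 would be `/tmp`.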
Add this keyword to the configuration file (see above):

```
"hadoopTestDir": "<name_of_dir_containing_batchHadoop1>"
```

Run the test using mvn:

```
mvn verify -P int-tests-config-file -Dit.test=ITHadoopIndexTest
```

In some test environments, the machine where the tests need to be executed cannot access the outside internet, so mvn cannot be run. In that case, do the following instead of running the tests using mvn:

### Compile druid and the integration tests

On a machine that can do mvn builds:

```
cd druid
mvn clean package
cd integration-tests
mvn dependency:copy-dependencies package
```

### Put the compiled test code into your test cluster

Copy the integration-tests directory to the test cluster.

### Set CLASSPATH

```
TDIR=<directory containing integration-tests>/target
VER=<version of druid you built>
export CLASSPATH=$TDIR/dependency/*:$TDIR/druid-integration-tests-$VER.jar:$TDIR/druid-integration-tests-$VER-tests.jar
```

### Run the test

```
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.test.config.type=configFile -Ddruid.test.config.configFile=<pathname of configuration file> org.testng.TestNG -testrunfactory org.testng.DruidTestRunnerFactory -testclass org.apache.druid.tests.hadoop.ITHadoopIndexTest
```

Writing a New Test
-------------------

## What should we cover in integration tests

For every end-user functionality provided by druid, we should have an integration test verifying its correctness.

## Rules to be followed while writing a new integration test

### Every Integration Test must follow these rules:

1) The name of the test must start with the prefix "IT"
2) A test should be independent of other tests
3) Tests are to be written in TestNG style ([http://testng.org/doc/documentation-main.html#methods](http://testng.org/doc/documentation-main.html#methods))
4) If a test loads some data, it is the responsibility of the test to clean up the data from the cluster

### How to use Guice Dependency Injection in a test

A test can access different helper and utility classes provided by the test framework in order to access the Coordinator, Broker, etc. To mark a test as able to use Guice Dependency Injection, annotate the test class with the following annotation:

```
@Guice(moduleFactory = DruidTestModuleFactory.class)
```

This tells the test framework that the test class needs to be constructed using guice.

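Putting the rules and the annotation together, a minimal sketch of a test class might look like the following. The package, class name, and test method are illustrative only; `DruidTestModuleFactory` and `IntegrationTestingConfig` are the framework classes described in this document, and a real test would also load data, query it, and clean up.

```java
package org.apache.druid.tests.example;

import com.google.inject.Inject;
import org.apache.druid.testing.IntegrationTestingConfig;
import org.apache.druid.testing.guice.DruidTestModuleFactory;
import org.testng.annotations.Guice;
import org.testng.annotations.Test;

// Hypothetical skeleton following the rules above: "IT" name prefix,
// TestNG style, and construction through guice.
@Guice(moduleFactory = DruidTestModuleFactory.class)
public class ITExampleTest
{
  // Injected by the test framework once the class is built through guice.
  @Inject
  private IntegrationTestingConfig config;

  @Test
  public void testExample()
  {
    // Illustrative only: a real test would use `config` (and the helper
    // clients below) to talk to the cluster, then clean up its own data.
  }
}
```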
### Helper Classes provided

1) IntegrationTestingConfig - configuration of the test
2) CoordinatorResourceTestClient - httpclient for coordinator endpoints
3) OverlordResourceTestClient - httpclient for indexer endpoints
4) QueryResourceTestClient - httpclient for broker endpoints

### Static Utility classes

1) RetryUtil - provides methods to retry an operation a configurable number of times until it succeeds
2) FromFileTestQueryHelper - reads queries with expected results from a file, executes them, and verifies the results using ResultVerifier

Refer to ITIndexerTest as an example of how to use dependency injection.