[[TestingFrameworkCheatsheet]]
= Testing

[partintro]

Elasticsearch uses JUnit for testing. It also uses randomness in its tests,
which can be reproduced by setting a seed. The following is a cheatsheet of
options for running the Elasticsearch tests.

== Creating packages

To create a distribution without running the tests, simply run the following:

-----------------------------
./gradlew assemble
-----------------------------

To create a platform-specific build including the x-pack modules, use the
following depending on your operating system:

-----------------------------
./gradlew :distribution:archives:linux-tar:assemble --parallel
./gradlew :distribution:archives:darwin-tar:assemble --parallel
./gradlew :distribution:archives:windows-zip:assemble --parallel
-----------------------------

=== Running Elasticsearch from a checkout

In order to run Elasticsearch from source without building a package, you can
run it using Gradle:

-------------------------------------
./gradlew run
-------------------------------------

==== Launching and debugging from an IDE

If you want to run Elasticsearch from your IDE, the `./gradlew run` task
supports a remote debugging option:

---------------------------------------------------------------------------
./gradlew run --debug-jvm
---------------------------------------------------------------------------

==== Distribution

By default a node is started with the zip distribution. In order to start with
a different distribution use the `-Drun.distribution` argument. For example,
to start the open source distribution:

-------------------------------------
./gradlew run -Drun.distribution=oss
-------------------------------------

==== License type

By default a node is started with the `basic` license type. In order to start
with a different license type use the `-Drun.license_type` argument. In order
to start a node with a trial license execute the following command:

-------------------------------------
./gradlew run -Drun.license_type=trial
-------------------------------------

This enables security and other paid features and adds a superuser with the
username: `elastic-admin` and password: `elastic-password`.

==== Other useful arguments

In order to start a node with a different max heap space add: `-Dtests.heap.size=4G`

In order to disable assertions add: `-Dtests.asserts=false`

In order to set an Elasticsearch setting, provide a setting with the following
prefix: `-Dtests.es.`

=== Test case filtering.

- `tests.class` is a class-filtering shell-like glob pattern
- `tests.method` is a method-filtering glob pattern

Run a single test case (variants)

----------------------------------------------------------
./gradlew test -Dtests.class=org.elasticsearch.package.ClassName
./gradlew test "-Dtests.class=*.ClassName"
----------------------------------------------------------

Run all tests in a package and its sub-packages

----------------------------------------------------
./gradlew test "-Dtests.class=org.elasticsearch.package.*"
----------------------------------------------------

Run any test methods that contain 'esi' (like: ...r*esi*ze...)

-------------------------------
./gradlew test "-Dtests.method=*esi*"
-------------------------------

Run all tests that are waiting for a bugfix (disabled by default)

------------------------------------------------
./gradlew test -Dtests.filter=@awaitsfix
------------------------------------------------
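The two filters can also be combined to target a single method, as the
repetition examples below do; for instance (class and method names here are
placeholders):

------------------------------------------------
# run only methods matching testFoo* in classes matching *.ClassName
./gradlew test "-Dtests.class=*.ClassName" "-Dtests.method=testFoo*"
------------------------------------------------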
=== Seed and repetitions.

Run with a given seed (seed is a hex-encoded long).

------------------------------
./gradlew test -Dtests.seed=DEADBEEF
------------------------------

=== Repeats _all_ tests of ClassName N times.

Every test repetition will have a different method seed (derived from a single
random master seed).

--------------------------------------------------
./gradlew test -Dtests.iters=N -Dtests.class=*.ClassName
--------------------------------------------------

=== Repeats _all_ tests of ClassName N times.

Every test repetition will have exactly the same master (0xdead) and
method-level (0xbeef) seed.

------------------------------------------------------------------------
./gradlew test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.seed=DEAD:BEEF
------------------------------------------------------------------------

=== Repeats a given test N times

(note the filters; individual test repetitions are given suffixes, e.g.
testFoo[0], testFoo[1], etc., so a `tests.method` pattern ending in a glob is
necessary to ensure all iterations are run).

-------------------------------------------------------------------------
./gradlew test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.method=mytest*
-------------------------------------------------------------------------

Repeats N times but skips any tests after the first failure or M initial
failures.

-------------------------------------------------------------
./gradlew test -Dtests.iters=N -Dtests.failfast=true -Dtestcase=...
./gradlew test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...
-------------------------------------------------------------
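These properties compose, which is the usual way to reproduce a failure: pin
the master seed printed by the failing run and filter down to the affected
test (class and method names below are placeholders):

-------------------------------------------------------------
# re-run one test with the exact seeds from a failed run
./gradlew test -Dtests.seed=DEAD:BEEF "-Dtests.class=*.ClassName" "-Dtests.method=testFoo*"
-------------------------------------------------------------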
=== Test groups.

Test groups can be enabled or disabled (true/false). Default value provided
below in [brackets].

------------------------------------------------------------------
./gradlew test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)
------------------------------------------------------------------

=== Load balancing and caches.

By default the tests run on multiple processes using all the available cores
on all available CPUs, not including hyper-threading. If you want to
explicitly specify the number of JVMs you can do so on the command line:

----------------------------
./gradlew test -Dtests.jvms=8
----------------------------

Or in `~/.gradle/gradle.properties`:

----------------------------
systemProp.tests.jvms=8
----------------------------

It's difficult to pick the "right" number here. Hyper-threaded cores don't
count for CPU-intensive tests, you should leave some slack for JVM-internal
threads like the garbage collector, and you have to have enough RAM to handle
each JVM.

=== Test compatibility.

It is possible to provide a version that allows the tests to adapt their
behaviour to older features or bugs that have been changed or fixed in the
meantime.

-----------------------------------------
./gradlew test -Dtests.compatibility=1.0.0
-----------------------------------------

=== Miscellaneous.

Run all tests without stopping on errors (inspect log files).

-----------------------------------------
./gradlew test -Dtests.haltonfailure=false
-----------------------------------------

Run more verbose output (slave JVM parameters, etc.).

----------------------
./gradlew test -verbose
----------------------

Change the default suite timeout to 5 seconds for all tests (note the
exclamation mark).

---------------------------------------
./gradlew test -Dtests.timeoutSuite=5000! ...
---------------------------------------

Change the logging level of ES (not Gradle)

--------------------------------
./gradlew test -Dtests.es.logger.level=DEBUG
--------------------------------

Print all the logging output from the test runs to the command line even if
tests are passing.

------------------------------
./gradlew test -Dtests.output=always
------------------------------

Configure the heap size.

------------------------------
./gradlew test -Dtests.heap.size=512m
------------------------------

Pass arbitrary jvm arguments.

------------------------------
# specify heap dump path
./gradlew test -Dtests.jvm.argline="-XX:HeapDumpPath=/path/to/heapdumps"
# enable gc logging
./gradlew test -Dtests.jvm.argline="-verbose:gc"
# enable security debugging
./gradlew test -Dtests.jvm.argline="-Djava.security.debug=access,failure"
------------------------------

== Running verification tasks

To run all verification tasks, including static checks, unit tests, and
integration tests:

---------------------------------------------------------------------------
./gradlew check
---------------------------------------------------------------------------

Note that this will also run the unit tests and precommit tasks first. If you
want to just run the integration tests (because you are debugging them):

---------------------------------------------------------------------------
./gradlew integTest
---------------------------------------------------------------------------

If you want to just run the precommit checks:

---------------------------------------------------------------------------
./gradlew precommit
---------------------------------------------------------------------------

Some of these checks will require `docker-compose` installed for bringing up
test fixtures. If it's not present those checks will be skipped automatically.

== Testing the REST layer

The available integration tests make use of the Java API to communicate with
the Elasticsearch nodes, using the internal binary transport (port 9300 by
default). The REST layer is tested through specific tests that are shared
between all the official Elasticsearch clients and consist of YAML files that
describe the operations to be executed and the results that need to be tested.

The YAML files support various operators defined in the
link:/rest-api-spec/src/main/resources/rest-api-spec/test/README.asciidoc[rest-api-spec]
and adhere to the
link:/rest-api-spec/README.markdown[Elasticsearch REST API JSON specification].

The REST tests are run automatically when executing the "./gradlew check"
command. To run only the REST tests use the following command:

---------------------------------------------------------------------------
./gradlew :distribution:archives:integ-test-zip:integTest \
  -Dtests.class="org.elasticsearch.test.rest.*Yaml*IT"
---------------------------------------------------------------------------

A specific test case can be run with

---------------------------------------------------------------------------
./gradlew :distribution:archives:integ-test-zip:integTest \
  -Dtests.class="org.elasticsearch.test.rest.*Yaml*IT" \
  -Dtests.method="test {p0=cat.shards/10_basic/Help}"
---------------------------------------------------------------------------

`*Yaml*IT` are the executable test classes that run all the YAML suites
available within the `rest-api-spec` folder.
The REST tests support all the options provided by the randomized runner, plus
the following:

* `tests.rest[true|false]`: determines whether the REST tests are run (the
default) or not.
* `tests.rest.suite`: comma separated paths of the test suites to be run (by
default loaded from /rest-api-spec/test). It is possible to run only a subset
of the tests by providing a sub-folder or even a single yaml file (the default
/rest-api-spec/test prefix is optional when files are loaded from the
classpath) e.g. -Dtests.rest.suite=index,get,create/10_with_id
* `tests.rest.blacklist`: comma separated globs that identify tests that are
blacklisted and need to be skipped
e.g. -Dtests.rest.blacklist=index/*/Index document,get/10_basic/*

Note that the REST tests, like all the integration tests, can be run against an
external cluster by specifying the `tests.cluster` property, which if present
needs to contain a comma separated list of nodes to connect to (e.g.
localhost:9300). A transport client will be created based on that and used for
all the before|after test operations, and to extract the http addresses of the
nodes so that REST requests can be sent to them.

== Testing packaging

The packaging tests use Vagrant virtual machines to verify that installing and
running Elasticsearch distributions works correctly on supported operating
systems. These tests should really only be run in Vagrant VMs because they're
destructive.

. Install VirtualBox and Vagrant.

. (Optional) Install
https://github.com/fgrehm/vagrant-cachier[vagrant-cachier] to squeeze a bit
more performance out of the process:
+
--------------------------------------
vagrant plugin install vagrant-cachier
--------------------------------------

. Validate your installed dependencies:
+
-------------------------------------
./gradlew :qa:vagrant:vagrantCheckVersion
-------------------------------------

. Download and smoke test the VMs with `./gradlew vagrantSmokeTest` or
`./gradlew -Pvagrant.boxes=all vagrantSmokeTest`. The first time you run this
it will download the base images and provision the boxes and immediately quit.
Downloading all the images may take a long time. After the images are already
on your machine, they won't be downloaded again unless they have been updated
to a new version.

. Run the tests with `./gradlew packagingTest`. This will cause Gradle to
build the tar, zip, and deb packages and all the plugins. It will then run the
tests on ubuntu-1404 and centos-7. We chose those two distributions as the
default because they cover deb and rpm packaging and SysVinit and systemd.

You can choose which boxes to test by setting the `-Pvagrant.boxes` project
property. All of the valid options for this property are:

* `sample` - The default, only chooses ubuntu-1404 and centos-7
* List of box names, comma separated (e.g. `oel-7,fedora-28`) - Chooses
exactly the boxes listed.
* `linux-all` - All Linux boxes.
* `windows-all` - All Windows boxes. If there are any Windows boxes which do
not have images available when this value is provided, the build will fail.
* `all` - All boxes we test. If there are any boxes (e.g. Windows) which do
not have images available when this value is provided, the build will fail.
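For example, using the comma-separated form with the two box names shown in
the list above:

------------------------------------------------
# run the packaging tests against exactly these two boxes
./gradlew packagingTest -Pvagrant.boxes='oel-7,fedora-28'
------------------------------------------------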
For a complete list of boxes on which tests can be run, run
`./gradlew :qa:vagrant:listAllBoxes`. For a list of boxes that have images
available from your configuration, run
`./gradlew :qa:vagrant:listAvailableBoxes`.

Note that if you interrupt Gradle in the middle of running these tasks, any
boxes started will remain running and you'll have to stop them manually with
`./gradlew stop` or `vagrant halt`.

All the regular vagrant commands should just work so you can get a shell in a
VM running trusty by running
`vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404`.

These are the Linux flavors supported, all of which we provide images for:

* ubuntu-1404 aka trusty
* ubuntu-1604 aka xenial
* ubuntu-1804 aka bionic beaver
* debian-8 aka jessie
* debian-9 aka stretch, the current debian stable distribution
* centos-6
* centos-7
* fedora-28
* fedora-29
* oel-6 aka Oracle Enterprise Linux 6
* oel-7 aka Oracle Enterprise Linux 7
* sles-12
* opensuse-42 aka Leap

We're missing the following from the support matrix because there aren't high
quality boxes available in vagrant atlas:

* sles-11

=== Testing packaging on Windows

The packaging tests also support Windows Server 2012R2 and Windows Server
2016. Unfortunately we're not able to provide boxes for them in open source
use because of licensing issues. Any Virtualbox image that has WinRM and
Powershell enabled for remote users should work.

Testing on Windows requires the
https://github.com/criteo/vagrant-winrm[vagrant-winrm] plugin.

------------------------------------
vagrant plugin install vagrant-winrm
------------------------------------

Specify the image IDs of the Windows boxes to gradle with the following
project properties. They can be set in `~/.gradle/gradle.properties` like

------------------------------------
vagrant.windows-2012r2.id=my-image-id
vagrant.windows-2016.id=another-image-id
------------------------------------

or passed on the command line like `-Pvagrant.windows-2012r2.id=my-image-id`
`-Pvagrant.windows-2016.id=another-image-id`

These properties are required for Windows support in all gradle tasks that
handle packaging tests. Either or both may be specified.

Remember that to run tests on these boxes, the project property
`vagrant.boxes` still needs to be set to a value that will include them.

If you're running vagrant commands outside of gradle, specify the Windows
boxes with the environment variables

* `VAGRANT_WINDOWS_2012R2_BOX`
* `VAGRANT_WINDOWS_2016_BOX`

=== Testing VMs are disposable

It's important to think of VMs like cattle. If they become lame you just shoot
them and let vagrant reprovision them. Say you've hosed your trusty VM:

----------------------------------------------------
vagrant ssh ubuntu-1404 -c 'sudo rm -rf /bin'; echo oops
----------------------------------------------------

All you've got to do to get another one is

----------------------------------------------
vagrant destroy -f ubuntu-1404 && vagrant up ubuntu-1404 --provider virtualbox
----------------------------------------------

The whole process takes a minute and a half on a modern laptop, two and a half
without vagrant-cachier.

It's possible that some downloads will fail and it'll be impossible to restart
them. This is a bug in vagrant. See the instructions here for how to work
around it:
https://github.com/mitchellh/vagrant/issues/4479

Some vagrant commands will work on all VMs at once:

------------------
vagrant halt
vagrant destroy -f
------------------

`vagrant up` would normally start all the VMs but we've prevented that because
that'd consume a ton of RAM.
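If you've lost track of which VMs are running, the standard Vagrant status
commands (nothing specific to this build) will show you:

------------------
# state of the boxes defined by this project's Vagrantfile
vagrant status
# every Vagrant machine known on this host
vagrant global-status
------------------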
=== Iterating on packaging tests

Running the packaging tests through Gradle can take a while because it will
start and stop the VM each time. You can iterate faster by keeping the VM up
and running the tests directly.

The packaging tests use a random seed to determine which past version to use
for testing upgrades. To use a single past version, fix the test seed when
running the commands below (see the "Seed and repetitions" section above; a
pinned-seed example follows at the end of this section).

First build the packaging tests and their dependencies

--------------------------------------------
./gradlew :qa:vagrant:setupPackagingTest
--------------------------------------------

Then choose the VM you want to test on and bring it up. For example, to bring
up Debian 9 use the gradle command below. Bringing the box up with vagrant
directly may not mount the packaging test project in the right place. Once the
VM is up, ssh into it

--------------------------------------------
./gradlew :qa:vagrant:vagrantDebian9#up
vagrant ssh debian-9
--------------------------------------------

Now inside the VM, start the packaging tests from the terminal. There are two
packaging test projects. The old ones are written with
https://github.com/sstephenson/bats[bats] and only run on Linux. To run them do

--------------------------------------------
cd $PACKAGING_ARCHIVES

# runs all bats tests
sudo bats $BATS_TESTS/*.bats

# you can also pass specific test files
sudo bats $BATS_TESTS/20_tar_package.bats $BATS_TESTS/25_tar_plugins.bats
--------------------------------------------

The new packaging tests are written in Java and run on both Linux and Windows.
On Linux (again, inside the VM)

--------------------------------------------
# run the full suite
sudo bash $PACKAGING_TESTS/run-tests.sh

# run specific test cases
sudo bash $PACKAGING_TESTS/run-tests.sh \
  org.elasticsearch.packaging.test.DefaultZipTests \
  org.elasticsearch.packaging.test.OssZipTests
--------------------------------------------

or on Windows, from a terminal running as Administrator

--------------------------------------------
# run the full suite
powershell -File $Env:PACKAGING_TESTS/run-tests.ps1

# run specific test cases
powershell -File $Env:PACKAGING_TESTS/run-tests.ps1 `
  org.elasticsearch.packaging.test.DefaultZipTests `
  org.elasticsearch.packaging.test.OssZipTests
--------------------------------------------

Note that on Windows boxes when running from inside the GUI, you may have to
log out and back in to the `vagrant` user (password `vagrant`) for the
environment variables that locate the packaging tests and distributions to
take effect, due to how vagrant provisions Windows machines.

When you've made changes you want to test, keep the VM up and reload the tests
and distributions inside by running (on the host)

--------------------------------------------
./gradlew :qa:vagrant:clean :qa:vagrant:setupPackagingTest
--------------------------------------------

Note: starting a Vagrant VM outside of the elasticsearch folder requires
indicating the folder that contains the Vagrantfile using the VAGRANT_CWD
environment variable.
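For example, to pin the upgrade-from version across iterations, fix the seed
when building the packaging tests (`DEADBEEF` is an arbitrary hex-encoded
long, per the seed section above):

--------------------------------------------
# keep the same randomly-chosen past version on every run
./gradlew :qa:vagrant:setupPackagingTest -Dtests.seed=DEADBEEF
--------------------------------------------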
== Testing backwards compatibility

Backwards compatibility tests exist to test upgrading from each supported
version to the current version. To run them all use:

-------------------------------------------------
./gradlew bwcTest
-------------------------------------------------

A specific version can be tested as well. For example, to test bwc with
version 5.3.2 run:

-------------------------------------------------
./gradlew v5.3.2#bwcTest
-------------------------------------------------

Tests are also run for versions that are not yet released but with which the
current version will be compatible. These are automatically checked out and
built from source. See
link:./buildSrc/src/main/java/org/elasticsearch/gradle/VersionCollection.java[VersionCollection]
and link:./distribution/bwc/build.gradle[distribution/bwc/build.gradle] for
more information.

When running `./gradlew check`, minimal bwc checks are also run against
compatible versions that are not yet released.

==== BWC Testing against a specific remote/branch

Sometimes a backward compatibility change spans two versions. A common case is
new functionality that needs a BWC bridge in an unreleased version of a
release branch (for example, 5.x). To test the changes, you can instruct
Gradle to build the BWC version from another remote/branch combination instead
of pulling the release branch from GitHub. You do so using the
`tests.bwc.remote` and `tests.bwc.refspec.BRANCH` system properties:

-------------------------------------------------
./gradlew check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec.5.x=index_req_bwc_5.x
-------------------------------------------------

The branch needs to be available on the remote that the BWC build fetches
from, which is a remote of the repository you run the tests from. Using the
remote is a handy trick to make sure that a branch is available and is up to
date in the case of multiple runs.

Example:

Say you need to make a change to `master` and have a BWC layer in `5.x`. You
will need to:

. Create a branch called `index_req_change` off your remote `${remote}`. This
will contain your change.
. Create a branch called `index_req_bwc_5.x` off `5.x`. This will contain your
bwc layer.
. Push both branches to your remote repository.
. Run the tests with
`./gradlew check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec.5.x=index_req_bwc_5.x`.

==== Skip fetching latest

For some BWC testing scenarios, you want to use the local clone of the
repository without fetching latest. For these use cases, you can set the
system property `tests.bwc.git_fetch_latest` to `false` and the BWC builds
will skip fetching the latest from the remote.
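For example, composing the documented property with `./gradlew check` to run
the BWC checks against whatever is already checked out locally:

-------------------------------------------------
# build BWC versions from the local clone without fetching from the remote
./gradlew check -Dtests.bwc.git_fetch_latest=false
-------------------------------------------------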
== How to write good tests?

=== Base classes for test cases

There are multiple base classes for tests:

- **`ESTestCase`**: The base class of all tests. It is typically extended
directly by unit tests.
- **`ESSingleNodeTestCase`**: This test case sets up a cluster that has a
single node.
- **`ESIntegTestCase`**: An integration test case that creates a cluster that
might have multiple nodes.
- **`ESRestTestCase`**: An integration test that interacts with an external
cluster via the REST API. For instance, YAML tests run via subclasses of
`ESRestTestCase`.

=== Good practices

==== What kind of tests should I write?

Unit tests are the preferred way to test some functionality: most of the time
they are simpler to understand, more reproducible, and unlikely to be affected
by changes that are unrelated to the piece of functionality that is being
tested.

The reason why `ESSingleNodeTestCase` exists is that all our components used
to be very hard to set up in isolation, which had led us to having a number of
integration tests but close to no unit tests. `ESSingleNodeTestCase` is a
workaround for this issue which provides an easy way to spin up a node and get
access to components that are hard to instantiate like `IndicesService`.
Whenever practical, you should prefer unit tests.

Many tests extend `ESIntegTestCase`, mostly because this is how most tests
used to work in the early days of Elasticsearch. However the complexity of
these tests tends to make them hard to debug. Whenever the functionality that
is being tested isn't intimately dependent on how Elasticsearch behaves as a
cluster, it is recommended to write unit tests or REST tests instead.

In short, most new functionality should come with unit tests, and optionally
REST tests to test integration.

==== Refactor code to make it easier to test

Unfortunately, a large part of our code base is still hard to unit test.
Sometimes because some classes have lots of dependencies that make them hard
to instantiate. Sometimes because API contracts make tests hard to write. Code
refactors that make functionality easier to unit test are encouraged. If this
sounds very abstract to you, you can have a look at
https://github.com/elastic/elasticsearch/pull/16610[this pull request] for
instance, which is a good example. It refactors `IndicesRequestCache` in such
a way that:

- it no longer depends on objects that are hard to instantiate such as
`IndexShard` or `SearchContext`,
- time-based eviction is applied on top of the cache rather than internally,
which makes it easier to assert on what the cache is expected to contain at a
given time.

=== Bad practices

==== Use randomized-testing for coverage

In general, randomization should be used for parameters that are not expected
to affect the behavior of the functionality that is being tested. For instance
the number of shards should not impact `date_histogram` aggregations, and the
choice of the `store` type (`niofs` vs `mmapfs`) does not affect the results
of a query. Such randomization helps improve confidence that we are not
relying on implementation details of one component or specifics of some setup.

However it should not be used for coverage. For instance if you are testing a
piece of functionality that enters different code paths depending on whether
the index has 1 shard or 2+ shards, then we shouldn't just test against an
index with a random number of shards: there should be one test for the 1-shard
case, and another test for the 2+ shards case.

==== Abuse randomization in multi-threaded tests

Multi-threaded tests are often not reproducible due to the fact that there is
no guarantee on the order in which operations occur across threads. Adding
randomization to the mix usually makes things worse and should be done with
care.

== Test coverage analysis

Generating test coverage reports for Elasticsearch is currently not possible
through Gradle. However, it _is_ possible to gain insight into code coverage
using IntelliJ's built-in coverage analysis tool that can measure coverage
upon executing specific tests. Eclipse may also be able to do the same using
the EclEmma plugin.

Test coverage reporting used to be possible with JaCoCo when Elasticsearch was
using Maven as its build system. Since the switch to Gradle though, this is no
longer possible, seeing as the code currently used to build Elasticsearch does
not allow JaCoCo to recognize its tests. For more information on this, see the
discussion in
https://github.com/elastic/elasticsearch/issues/28867[issue #28867].
== Debugging remotely from an IDE

If you want to run Elasticsearch and be able to remotely attach the process
for debugging purposes from your IDE, you can start Elasticsearch using
`ES_JAVA_OPTS`:

---------------------------------------------------------------------------
ES_JAVA_OPTS="-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=y" ./bin/elasticsearch
---------------------------------------------------------------------------

Read your IDE documentation for how to attach a debugger to a JVM process.

== Building with extra plugins

Additional plugins may be built alongside Elasticsearch, where their
dependency on Elasticsearch will be substituted with the local Elasticsearch
build. To add your plugin, create a directory called `elasticsearch-extra` as
a sibling of `elasticsearch`. Check out your plugin underneath
`elasticsearch-extra` and the build will automatically pick it up. You can
verify the plugin is included as part of the build by checking the projects of
the build.

---------------------------------------------------------------------------
./gradlew projects
---------------------------------------------------------------------------

== Environment misc

There is a known issue with the macOS localhost resolution strategy that can
cause some integration tests to fail. This is because integration tests have
timings for cluster formation, discovery, etc. that can be exceeded if name
resolution takes a long time. To fix this, make sure you have your computer
name (as returned by `hostname`) inside `/etc/hosts`, e.g.:

....
127.0.0.1       localhost       ElasticMBP.local
255.255.255.255 broadcasthost
::1             localhost       ElasticMBP.local
....

== Benchmarking

For changes that might affect the performance characteristics of Elasticsearch
you should also run macrobenchmarks. We maintain a macrobenchmarking tool
called https://github.com/elastic/rally[Rally] which you can use to measure
the performance impact. It comes with a set of default benchmarks that we also
https://elasticsearch-benchmarks.elastic.co/[run every night]. To get started,
please see https://esrally.readthedocs.io/en/stable/[Rally's documentation].
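As a minimal sketch (assuming Rally is installed via `pip3 install esrally`
and that the `geonames` default track suits your change; the version shown is
only an example, and flags may change, so check Rally's documentation for the
current invocation):

---------------------------------------------------------------------------
# downloads the named Elasticsearch release, runs the geonames track
# against it, and prints a summary report
esrally --distribution-version=6.5.1 --track=geonames
---------------------------------------------------------------------------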