[[testing-framework]]
== Java Testing Framework
[[testing-intro]]
Testing is a crucial part of any application, and because information retrieval is itself a complex topic, setting up a testing infrastructure that uses elasticsearch should not add further complexity. This is the main reason why we publish an additional artifact with each release, which allows you to use the same testing infrastructure we use in the elasticsearch core. The testing framework allows you to set up clusters with multiple nodes in order to check that your code covers everything needed to run in a cluster, and it saves you from writing complex code yourself to start, stop or manage several test nodes in a cluster. In addition there is another very important feature called randomized testing, which you get for free as it is part of the elasticsearch test infrastructure.
[[why-randomized-testing]]
=== Why randomized testing?
The key concept of randomized testing is not to use the same input values for every test case, while still being able to reproduce a run in case of a failure. This allows you to test with vastly different input variables in order to make sure that your implementation is actually independent of the provided test data.
All of the tests are run using a custom JUnit runner, the `RandomizedRunner` provided by the randomized-testing project. If you are interested in the implementation being used, check out the http://labs.carrotsearch.com/randomizedtesting.html[RandomizedTesting webpage].
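The core idea - that a random but reproducible run boils down to a single logged seed - can be illustrated with plain `java.util.Random`. This is a simplified standalone sketch, not the actual `RandomizedRunner` implementation, which manages seeds per suite, test and thread:

[source,java]
--------------------------------------------------
import java.util.Random;

public class SeedDemo {

    // Generate pseudo-random "test input" from a seed. The same seed
    // always yields the same input, so a failing run can be replayed.
    static int[] randomInput(long seed) {
        Random random = new Random(seed);
        int[] values = new int[5];
        for (int i = 0; i < values.length; i++) {
            values[i] = random.nextInt(100);
        }
        return values;
    }

    public static void main(String[] args) {
        long seed = System.nanoTime();      // a fresh seed for every run...
        System.out.println("seed=" + seed); // ...logged so a failure can be reproduced
        int[] first = randomInput(seed);
        int[] replay = randomInput(seed);   // same seed -> identical input
        System.out.println(java.util.Arrays.equals(first, replay)); // prints "true"
    }
}
--------------------------------------------------

This is why the seed printed on a test failure is all you need to re-run the test with exactly the same randomized data.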
[[using-elasticsearch-test-classes]]
=== Using the elasticsearch test classes
First, you need to include the testing dependency in your project, along with the elasticsearch dependency you have already added. If you use maven and its `pom.xml` file, it looks like this:
[source,xml]
--------------------------------------------------
<dependencies>
  <dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-test-framework</artifactId>
    <version>${lucene.version}</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>${elasticsearch.version}</version>
    <scope>test</scope>
    <type>test-jar</type>
  </dependency>
</dependencies>
--------------------------------------------------
Replace `${elasticsearch.version}` and `${lucene.version}` with the elasticsearch version you are targeting and its accompanying lucene release.
We provide a few classes that you can inherit from in your own test classes which provide:
* pre-defined loggers
* randomized testing infrastructure
* a number of helper methods
[[unit-tests]]
=== Unit tests
If your test is a well isolated unit test which doesn't need a running elasticsearch cluster, you can use the `ESTestCase`. If you are testing lucene features, use `ESTestCase`, and if you are testing concrete token streams, use the `ESTokenStreamTestCase` class. Those specific classes execute additional checks which ensure that no resource leaks occur after the test has run.
[[integration-tests]]
=== Integration tests
These kinds of tests require firing up a whole cluster of nodes before the tests can actually be run. Compared to unit tests they are obviously way more time consuming, but the test infrastructure tries to minimize the cost by only restarting the whole cluster if this is configured explicitly.
The class your tests have to inherit from is `ESIntegTestCase`. By inheriting from this class, you will no longer need to start elasticsearch nodes manually in your test, although you might need to ensure that at least a certain number of nodes are up. The integration test behaviour can be heavily configured by specifying different system properties on test runs. See the `TESTING.asciidoc` documentation in the https://github.com/elastic/elasticsearch/blob/master/TESTING.asciidoc[source repository] for more information.
[[number-of-shards]]
==== Number of shards
The number of shards used for indices created during integration tests is randomized between `1` and `10` unless overwritten upon index creation via index settings.
The rule of thumb is not to specify the number of shards unless needed, so that each test runs with a different shard count every time. Alternatively you can override the `numberOfShards()` method. The same applies to the `numberOfReplicas()` method.
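Conceptually, the framework picks a value in the inclusive range [1, 10] for each test. The following standalone sketch illustrates the range (plain `java.util.Random` stands in for the framework's per-test seeded random source):

[source,java]
--------------------------------------------------
import java.util.Random;

public class ShardRandomization {

    // Pick a shard count between 1 and 10 (inclusive), the same
    // range the test framework randomizes over for test indices.
    static int randomNumberOfShards(Random random) {
        return 1 + random.nextInt(10); // nextInt(10) yields 0..9
    }

    public static void main(String[] args) {
        Random random = new Random(); // the real framework uses the per-test seed
        for (int i = 0; i < 5; i++) {
            System.out.println("index_" + i + " -> " + randomNumberOfShards(random) + " shards");
        }
    }
}
--------------------------------------------------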
[[helper-methods]]
==== Generic helper methods
There are a couple of helper methods in `ESIntegTestCase`, which will make your tests shorter and more concise.
[horizontal]
`refresh()`:: Refreshes all indices in a cluster
`ensureGreen()`:: Ensures a green health cluster state, waiting for relocations. Waits the default timeout of 30 seconds before failing.
`ensureYellow()`:: Ensures a yellow health cluster state, also waits for 30 seconds before failing.
`createIndex(name)`:: Creates an index with the specified name
`flush()`:: Flushes all indices in a cluster
`flushAndRefresh()`:: Combines `flush()` and `refresh()` calls
`forceMerge()`:: Waits for all relocations and force merges all indices in the cluster to one segment.
`indexExists(name)`:: Checks if given index exists
`admin()`:: Returns an `AdminClient` for administrative tasks
`clusterService()`:: Returns the cluster service Java class
`cluster()`:: Returns the test cluster class, which is explained in the next paragraphs
[[test-cluster-methods]]
==== Test cluster methods
The `InternalTestCluster` class is the heart of the cluster functionality in a randomized test and allows you to configure a specific setting or replay certain types of outages to check how your custom code reacts.
[horizontal]
`ensureAtLeastNumNodes(n)`:: Ensure at least the specified number of nodes is running in the cluster
`ensureAtMostNumNodes(n)`:: Ensure at most the specified number of nodes is running in the cluster
`getInstance()`:: Get a guice instantiated instance of a class from a random node
`getInstanceFromNode()`:: Get a guice instantiated instance of a class from a specified node
`stopRandomNode()`:: Stop a random node in your cluster to mimic an outage
`stopCurrentMasterNode()`:: Stop the current master node to force a new election
`stopRandomNonMaster()`:: Stop a random non-master node to mimic an outage
`buildNode()`:: Create a new elasticsearch node
`startNode(settings)`:: Create and start a new elasticsearch node
[[changing-node-settings]]
==== Changing node settings
If you want to ensure a certain configuration for the nodes which are started as part of the `ESIntegTestCase`, you can override the `nodeSettings()` method:
[source,java]
-----------------------------------------
public class MyTests extends ESIntegTestCase {

    @Override
    protected Settings nodeSettings(int nodeOrdinal) {
        return Settings.builder().put(super.nodeSettings(nodeOrdinal))
                       .put("node.mode", "network")
                       .build();
    }
}
-----------------------------------------
[[accessing-clients]]
==== Accessing clients
In order to execute any actions, you have to use a client. You can use the `ESIntegTestCase.client()` method to get a random client. This client can be a `TransportClient` or a `NodeClient` - and usually you do not need to care, as long as the action gets executed. There are several more methods for client selection inside the `InternalTestCluster` class, which can be accessed using the `ESIntegTestCase.internalCluster()` method.
[horizontal]
`iterator()`:: An iterator over all available clients
`masterClient()`:: Returns a client which is connected to the master node
`nonMasterClient()`:: Returns a client which is not connected to the master node
`clientNodeClient()`:: Returns a client, which is running on a client node
`client(String nodeName)`:: Returns a client to a given node
`smartClient()`:: Returns a smart client
[[scoping]]
==== Scoping
By default the tests are run with a unique cluster per test suite. Of course all indices and templates are deleted between each test. However, sometimes you need to start a new cluster for each test - for example, if you load a certain plugin, but you do not want to load it for every test.
You can use the `@ClusterScope` annotation at class level to configure this behaviour:
[source,java]
-----------------------------------------
@ClusterScope(scope=TEST, numDataNodes=1)
public class CustomSuggesterSearchTests extends ESIntegTestCase {
    // ... tests go here
}
-----------------------------------------
The above sample configures the test to use a new cluster for each test method. The default scope is `SUITE` (one cluster for all
test methods in the test). The `numDataNodes` setting allows you to only start a certain number of data nodes, which can speed up test
execution, as starting a new node is a costly and time consuming operation and might not be needed for every test.
By default, the testing infrastructure will randomly start dedicated master nodes. If you want to disable dedicated masters
you can set `supportsDedicatedMasters=false` in a similar fashion to the `numDataNodes` setting. If dedicated master nodes are not used,
data nodes will be allowed to become masters as well.
[[changing-node-configuration]]
==== Changing plugins via configuration
As elasticsearch is using JUnit 4, using the `@Before` and `@After` annotations is not a problem. However you should keep in mind that they do not have any effect on your cluster setup, as the cluster is already up and running when those methods are run. So in case you want to configure settings - like loading a plugin on node startup - before the node is actually running, you should override the `nodePlugins()` method from the `ESIntegTestCase` class and return the plugin classes each node should load.
[source,java]
-----------------------------------------
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
    return pluginList(CustomSuggesterPlugin.class);
}
-----------------------------------------
[[randomized-testing]]
=== Randomized testing
The code snippets you have seen so far did not show any trace of randomized testing features, as they are carefully hidden under the hood. However, when you are writing your own tests, you should make use of these features as well. Before starting with that, you should know how to repeat a failed test with the same setup that made it fail. Luckily this is quite easy, as the whole mvn call is logged together with failed tests, which means you can simply copy and paste that line and re-run the test.
[[generating-random-data]]
==== Generating random data
The next step is to convert your test using static test data into a test using randomized test data. The kind of data you could randomize varies a lot with the functionality you are testing against. Take a look at the following examples (note that this list could go on for pages, as a distributed system has many, many moving parts):
* Searching for data using arbitrary UTF8 signs
* Changing your mapping configuration, index and field names with each run
* Changing your response sizes/configurable limits with each run
* Changing the number of shards/replicas when creating an index
So, how do you create random data? The most important thing to know is that you should never instantiate your own `Random` instance, but use the one provided by `RandomizedTest`, from which all elasticsearch-dependent test classes inherit.
[horizontal]
`getRandom()`:: Returns the random instance, which can be recreated when calling the test with specific parameters
`randomBoolean()`:: Returns a random boolean
`randomByte()`:: Returns a random byte
`randomShort()`:: Returns a random short
`randomInt()`:: Returns a random integer
`randomLong()`:: Returns a random long
`randomFloat()`:: Returns a random float
`randomDouble()`:: Returns a random double
`randomInt(max)`:: Returns a random integer between 0 and max
`between()`:: Returns a random integer within the supplied range
`atLeast()`:: Returns a random integer of at least the specified integer
`atMost()`:: Returns a random integer of at most the specified integer
`randomLocale()`:: Returns a random locale
`randomTimeZone()`:: Returns a random timezone
`randomFrom()`:: Returns a random element from a list/array
In addition, there are a couple of helper methods allowing you to create random ASCII and Unicode strings, see methods beginning with `randomAscii`, `randomUnicode`, and `randomRealisticUnicode` in the random test class. The latter tries to create more realistic unicode strings by not being arbitrarily random.
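At their core, `randomAscii`-style helpers boil down to picking random characters from a fixed alphabet. The following is a minimal standalone sketch of the idea, not the framework's actual implementation:

[source,java]
--------------------------------------------------
import java.util.Random;

public class RandomStrings {

    private static final String ASCII_LETTERS =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Build a random ASCII string of the given length by repeatedly
    // picking a random character from the alphabet.
    static String randomAsciiOfLength(Random random, int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ASCII_LETTERS.charAt(random.nextInt(ASCII_LETTERS.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // In a real test the Random would come from the framework's seed,
        // so the generated string is reproducible on failure.
        System.out.println(randomAsciiOfLength(new Random(), 12));
    }
}
--------------------------------------------------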
If you want to debug a specific problem with a specific random seed, you can use the `@Seed` annotation to configure a specific seed for a test. If you want to run a test more than once, instead of starting the whole test suite over and over again, you can use the `@Repeat` annotation with an arbitrary value. Each iteration then gets run with a different seed.
[[assertions]]
=== Assertions
As many elasticsearch tests check for similar output, like the number of hits, the first hit, or specific highlighting, a couple of predefined assertions have been created. Those have been put into the `ElasticsearchAssertions` class. There are also specific geo assertions in `ElasticsearchGeoAssertions`.
[horizontal]
`assertHitCount()`:: Checks hit count of a search or count request
`assertAcked()`:: Ensures a request has been acknowledged by the master
`assertSearchHits()`:: Asserts a search response contains specific ids
`assertMatchCount()`:: Asserts a matching count from a percolation response
`assertFirstHit()`:: Asserts the first hit hits the specified matcher
`assertSecondHit()`:: Asserts the second hit hits the specified matcher
`assertThirdHit()`:: Asserts the third hit hits the specified matcher
`assertSearchHit()`:: Assert a certain element in a search response hits the specified matcher
`assertNoFailures()`:: Asserts that no shard failures have occurred in the response
`assertFailures()`:: Asserts that shard failures have happened during a search request
`assertHighlight()`:: Assert specific highlights matched
`assertSuggestion()`:: Assert for specific suggestions
`assertSuggestionSize()`:: Assert for specific suggestion count
`assertThrows()`:: Assert a specific exception has been thrown
Common matchers
[horizontal]
`hasId()`:: Matcher to check for a search hit id
`hasType()`:: Matcher to check for a search hit type
`hasIndex()`:: Matcher to check for a search hit index
`hasScore()`:: Matcher to check for a certain score of a hit
`hasStatus()`:: Matcher to check for a certain `RestStatus` of a response
Usually, you would combine assertions and matchers in your test like this:
[source,java]
----------------------------
SearchResponse searchResponse = client().prepareSearch() ...;
assertHitCount(searchResponse, 4);
assertFirstHit(searchResponse, hasId("4"));
assertSearchHits(searchResponse, "1", "2", "3", "4");
----------------------------