================
This commit extends the `CompletionSuggester` with context
information. For example, such context information can be
a simple string representing a category, restricting the
suggestions to that category.
Two base implementations of this context information have
been set up in this commit:
- a Category Context
- a Geo Context
The mapping for this context information is specified
within a context field of the completion field that should
use this kind of information.
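A minimal sketch of what such a completion mapping might look like when built with `XContentBuilder`; the field name (`suggest`), the context name (`color`) and the exact context keys are illustrative assumptions based on the description above, not the definitive mapping syntax.

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

public class ContextMappingSketch {

    // Completion field that carries a simple category context; the field name
    // ("suggest"), the context name ("color") and the context keys are
    // illustrative assumptions, not the actual mapping syntax.
    public static XContentBuilder completionFieldWithCategoryContext() throws IOException {
        return jsonBuilder()
                .startObject()
                    .startObject("properties")
                        .startObject("suggest")
                            .field("type", "completion")
                            .startObject("context")            // context information lives here
                                .startObject("color")
                                    .field("type", "category") // the Category Context
                                .endObject()
                            .endObject()
                        .endObject()
                    .endObject()
                .endObject();
    }
}
```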
Per single main ping request we maintain the received ping response per node; each node-level ping response is mapped into that structure. If a previous node-level ping request already set the response for a node, it gets overwritten, since we give higher value to the latest response. This change makes sure that this overwrite doesn't happen if the previous response has a master set and the current response doesn't. Otherwise a node would lose the fact that another node has elected itself as master, and the result would be multiple master nodes in a single cluster (see the sketch below).
Closes #5413
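A minimal sketch of the intended guard, using hypothetical types and names (`PingResponse`, `masterNodeId`), not the actual discovery code:

```java
import java.util.HashMap;
import java.util.Map;

public class PingResponseGuardSketch {

    // Hypothetical stand-in for a node-level ping response.
    static class PingResponse {
        final String nodeId;
        final String masterNodeId; // null if the responding node has no master set
        PingResponse(String nodeId, String masterNodeId) {
            this.nodeId = nodeId;
            this.masterNodeId = masterNodeId;
        }
    }

    private final Map<String, PingResponse> responsesPerNode = new HashMap<>();

    // The latest response usually wins, but a response that carries an elected
    // master must never be replaced by one that doesn't; otherwise the fact that
    // a node elected itself as master would be lost and the cluster could end up
    // with multiple masters.
    void onPingResponse(PingResponse response) {
        PingResponse previous = responsesPerNode.get(response.nodeId);
        if (previous != null && previous.masterNodeId != null && response.masterNodeId == null) {
            return; // keep the response that knows about a master
        }
        responsesPerNode.put(response.nodeId, response);
    }
}
```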
It can happen that not all shards are ready, so we won't get a total failure, but we do need to check that at least one failure occurred. Also checked the message of the failure.
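A hedged sketch of that kind of check against a `SearchResponse` (the expected message fragment is a placeholder):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.ShardSearchFailure;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.greaterThan;

public class PartialFailureAssertionSketch {

    // Not all shards may be ready, so a total failure cannot be required;
    // instead require at least one shard failure and inspect its message.
    static void assertAtLeastOneFailure(SearchResponse response, String expectedMessagePart) {
        assertThat(response.getFailedShards(), greaterThan(0));
        for (ShardSearchFailure failure : response.getShardFailures()) {
            assertThat(failure.reason(), containsString(expectedMessagePart));
        }
    }
}
```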
If randomization brings up a single shard per index in this test
we might run our searches against only one index, which causes the
subsequent assertions to fail. That's why we need to wait until everything is allocated.
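One way to do that wait, sketched against the cluster health API (the test itself may rely on test-framework helpers such as `ensureGreen` instead):

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;

public class WaitForAllocationSketch {

    // Blocks until every shard of the given indices is allocated (green),
    // so subsequent searches hit all the indices the assertions expect.
    static void waitUntilAllocated(Client client, String... indices) {
        ClusterHealthResponse health = client.admin().cluster()
                .prepareHealth(indices)
                .setWaitForGreenStatus()
                .get();
        if (health.isTimedOut()) {
            throw new AssertionError("timed out waiting for shards to be allocated");
        }
    }
}
```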
The delete snapshot operation on a running snapshot should cancel the snapshot execution. However, it only interrupts the snapshot once the snapshot files currently being copied have been fully copied, which might take a long time for large files.
Closes #5242
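A hedged illustration of the idea behind the fix, checking an abort flag per copied chunk instead of only between files; names and types are hypothetical, not the actual repository code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class AbortableCopySketch {

    // Hypothetical abort flag flipped by the delete-snapshot operation.
    private final AtomicBoolean aborted = new AtomicBoolean(false);

    public void abort() {
        aborted.set(true);
    }

    // Checking the flag once per chunk lets a delete interrupt the snapshot in
    // the middle of a large file instead of only after the file is fully copied.
    void copySnapshotFile(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[64 * 1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            if (aborted.get()) {
                throw new IOException("snapshot was aborted while copying a file");
            }
            out.write(buffer, 0, read);
        }
    }
}
```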
- pre_zone_adjust_large_interval was not parsed properly
- added tests for pre_zone and pre_zone_adjust_large_interval
- changed DateHistogram#getBucketByKey(String) to support date formats in addition to numeric strings (see the sketch below)
- added randomized testing for fetching the bucket by key in date_histogram tests
- added missing "format" support in DateHistogramBuilder
Closes #5375
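A hedged usage sketch of the getBucketByKey change; the aggregation name and the date key are placeholders, and the exact date format accepted depends on the field's format:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogram;

public class DateHistogramLookupSketch {

    // Fetches a bucket by a formatted date string instead of a numeric key;
    // "histo" and the date value are placeholders for illustration.
    static long docCountForDay(SearchResponse response) {
        DateHistogram histogram = response.getAggregations().get("histo");
        DateHistogram.Bucket bucket = histogram.getBucketByKey("2014-03-10T00:00:00.000Z");
        return bucket == null ? 0 : bucket.getDocCount();
    }
}
```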
Introduced two levels of randomization for the number of shards (between 1 and 10) when running tests:
1) through the existing random index template, which now sets a random number of shards that is shared across all the indices created in the same test method unless overwritten
2) through `createIndex` and `prepareCreate` methods, similar to what happens using the `indexSettings` method; this changes for every `createIndex` or `prepareCreate` unless overwritten (and overrides the index template as far as the number of shards is concerned)
Added the following facilities to deal with the random number of shards:
- `getNumShards` to retrieve the number of shards of a given existing index, useful when doing comparisons based on the number of shards, so we can avoid specifying a static number. The method returns an object containing the number of primaries, the number of replicas and the total number of shards for the existing index (see the sketch after this list)
- added `assertFailures`, which checks that a shard failure happened during a search request, either a partial failure or a total one (all shards failed). It also checks the error code and the error message related to the failure. This is needed because, without knowing the number of shards upfront, simulating errors can lead to either partial failures (the search returns partial results and failures) or total failures (the search returns an error)
- added common methods similar to `indexSettings`, to be used in combination with the `createIndex` and `prepareCreate` methods to explicitly control the second level of randomization: `numberOfShards`, `minimumNumberOfShards` and `maximumNumberOfShards`. Also added `numberOfReplicas`, even though the number of replicas is not randomized (no default is specified, but it can be overwritten by tests)
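A hedged sketch of the shape of the object returned by `getNumShards`; the field names are assumptions used for illustration only:

```java
// Hypothetical shape of the value returned by getNumShards(index), as described
// above; field names are illustrative assumptions, not the real class.
public class NumShardsSketch {

    public final int numPrimaries;
    public final int numReplicas;
    public final int totalNumShards;

    public NumShardsSketch(int numPrimaries, int numReplicas) {
        this.numPrimaries = numPrimaries;
        this.numReplicas = numReplicas;
        // every primary has numReplicas additional copies, so:
        this.totalNumShards = numPrimaries * (numReplicas + 1);
    }
}
```

A test would then compare, for example, a search response's total shard count against `totalNumShards` instead of a hard-coded number.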
Tests that specified the number of shards have been reviewed and the results follow:
- removed number_of_shards in node settings, ignored anyway as it would be overwritten by both mechanisms above
- removed the specific number of shards where not needed
- removed manual shard randomization where present, replaced with the ordinary one that's now available
- adapted tests that didn't need a specific number of shards to the new random behaviour
- fixed a couple of test bugs (e.g. 3 levels parent child test could only work on a single shard as the routing key used for grand-children wasn't correct)
- also done some cleanup, shared code through shard size facets and aggs tests and used common methods like `assertAcked`, `ensureGreen`, `refresh`, `flush` and `refreshAndFlush` where possible
- made sure that `indexSettings()` is always used as a basis when using `prepareCreate` to inject specific settings (see the sketch after this list)
- converted `indexRandom(false, ...)` + `refresh` to `indexRandom(true, ...)`
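A hedged sketch of the pattern behind the last two points; the helpers are declared as abstract stand-ins so the snippet compiles on its own, and their signatures are assumptions based on this description rather than the actual test framework API.

```java
import java.util.List;

// Stand-ins for the test-framework helpers named above; the names come from
// this commit message, the signatures are assumptions for illustration only.
abstract class RandomShardsPatternSketch {

    interface SettingsBuilder {
        SettingsBuilder put(String key, Object value);
    }

    abstract SettingsBuilder indexSettings();                         // randomized defaults, e.g. number of shards
    abstract void prepareCreate(String index, SettingsBuilder settings);
    abstract void indexRandom(boolean forceRefresh, List<?> docs);

    void createAndIndex(List<?> docs) {
        // Always start from indexSettings() so the randomized defaults are kept,
        // then add only the settings this particular test really needs.
        prepareCreate("test", indexSettings().put("index.refresh_interval", -1));

        // indexRandom(true, ...) already refreshes, so the separate refresh()
        // call that used to follow indexRandom(false, ...) is no longer needed.
        indexRandom(true, docs);
    }
}
```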