The global cluster is created in a static block and shared across all tests in the same JVM. The `buildTestCluster` method can't be called with `Scope.GLOBAL`, hence its mention was removed from the method as it might be misleading. The only two scopes supported by `buildTestCluster` are `SUITE` and `TEST`.
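For illustration, a minimal sketch of how a test requests a suite- or test-scoped cluster via the `ClusterScope` annotation (class and test names here are made up; the annotation and enum are assumed to be the 1.x test framework ones):

```java
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;
import org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;

// Requests a cluster that lives for this suite only; Scope.TEST would create a
// fresh cluster per test method. Scope.GLOBAL is not handled by buildTestCluster,
// which is why the shared global cluster is built separately in a static block.
@ClusterScope(scope = Scope.SUITE)
public class MySuiteScopedTests extends ElasticsearchIntegrationTest {
    // test methods go here
}
```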
Merging the accumulated work from the feature/improve_zen branch. Here are the highlights of the changes:
__Testing infra__
- Networking:
  - all symmetric partitioning
  - dropping packets
  - hard disconnects
  - Jepsen Tests
- Single node service disruptions:
  - Long GC / Halt
  - Slow cluster state updates
- Discovery settings
- Easy to setup unicast with partial host list
__Zen Discovery__
- Pinging after master loss (no local elects)
- Fixes the split brain issue: #2488
- Batching join requests
- More resilient joining process (wait on a publish from master)
Closes #7493
The previous implementation used a marker interface and had no explicit failure callback for the case where the update task was run on a non-master node (i.e., the master stepped down after the task was submitted). That led to a couple of instanceof checks.
This approach moves ClusterStateUpdateTask from an interface to an abstract class, which allows adding a flag to indicate whether it should only run on master nodes (defaults to true). It also adds an explicit onNoLongerMaster callback to allow different error handling for that case. This also removes the need for the NoLongerMaster.
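A simplified sketch of the shape described above (an illustration of the described change, not the actual org.elasticsearch.cluster source):

```java
import org.elasticsearch.cluster.ClusterState;

// Illustrative only: the task becomes an abstract class with a master-only flag
// and an explicit callback for losing the master role, instead of a marker
// interface plus instanceof checks.
public abstract class ClusterStateUpdateTask {

    /** Compute the new cluster state based on the current one. */
    public abstract ClusterState execute(ClusterState currentState) throws Exception;

    /** Called when the task failed for any other reason. */
    public abstract void onFailure(String source, Throwable t);

    /**
     * Whether this task must only run while the local node is still master.
     * Defaults to true, matching the behaviour described in the commit message.
     */
    public boolean runOnlyOnMaster() {
        return true;
    }

    /** Called instead of onFailure when the node is no longer master. */
    public void onNoLongerMaster(String source) {
        onFailure(source, new IllegalStateException("no longer master, source: " + source));
    }
}
```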
Closes#7511
We currently have two ways to randomize the number of shards and replicas: a random index template, which stays the same for all indices created under the same scope, and the overridable `indexSettings` method, called by `createIndex` and `prepareCreate`, which returns different values each time.
Now that the `randomIndexTemplate` method is no longer static, we can easily apply the same logic to both. Especially for the number of replicas, we used to have slightly different behaviours: more than one replica was only rarely used through the random index template, and that logic now gets applied to the `indexSettings` method too (which might speed up the tests a bit).
Side note: `randomIndexTemplate` had its own logic that didn't depend on `numberOfReplicas` or `maximumNumberOfReplicas`, which was causing backwards compatibility test failures: in some cases too many copies of the data were requested, which could not be allocated to older nodes, so the write consistency quorum could not be met and indexing timed out.
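A schematic sketch of the kind of shared randomization described above (the method name `indexSettings` comes from the text; the bounds, probabilities and helper class are assumptions for illustration):

```java
import java.util.Random;

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Illustrative only: a single randomization routine that could back both the
// random index template and the overridable indexSettings() method, with more
// than one replica chosen only rarely.
public class RandomIndexSettingsSketch {

    private final Random random = new Random();

    public Settings indexSettings() {
        ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
        // between 1 and 10 primary shards (bounds are an assumption for this sketch)
        builder.put("index.number_of_shards", 1 + random.nextInt(10));
        // rarely (here ~10% of the time) allow more than one replica
        int replicas = random.nextInt(10) == 0 ? 1 + random.nextInt(2) : random.nextInt(2);
        builder.put("index.number_of_replicas", replicas);
        return builder.build();
    }
}
```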
Closes #7522
Settings that are not default for _size, _index and _timestamp were only built into
toXContent if these fields were actually enabled.
_timestamp, _index and _size can be dynamically enabled or disabled,
therefore the settings must be kept even if the field is disabled.
(Dynamic enabling/disabling was intended, see TimestampFieldMapper.merge(..)
and SizeMappingTests#testThatDisablingWorksWhenMerging,
but actually never worked, see below.)
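For illustration, a mapping like the following (built here with `XContentBuilder`; the class name and field values are a made-up example) has a disabled `_timestamp` carrying a non-default `store` setting, and that setting must survive serialization so the field can later be re-enabled:

```java
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class DisabledTimestampMappingExample {
    public static void main(String[] args) throws Exception {
        // A made-up mapping: _timestamp is disabled but carries a non-default
        // "store" setting. Before this fix, toXContent would emit _timestamp:{}
        // for such a mapping, silently dropping the non-default setting.
        XContentBuilder mapping = XContentFactory.jsonBuilder()
                .startObject()
                    .startObject("type1")
                        .startObject("_timestamp")
                            .field("enabled", false)
                            .field("store", true)
                        .endObject()
                    .endObject()
                .endObject();
        System.out.println(mapping.string());
    }
}
```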
To avoid _timestamp being overwritten by a default mapping,
this commit also adds a check to mapping merging: if the type is already
in the mapping, the default is not applied anymore.
(See
SimpleTimestampTests#testThatUpdatingMappingShouldNotRemoveTimestampConfiguration.)
As a side effect, this fixes
- overwriting of parameters of the _source field by default mappings
(see DefaultSourceMappingTests)
- dynamic enabling and disabling of _timestamp and _size
(see SimpleTimestampTests#testThatTimestampCanBeSwitchedOnAndOff and
SizeMappingIntegrationTests#testThatTimestampCanBeSwitchedOnAndOff)
Tests:
Enable UpdateMappingOnClusterTests#test_doc_valuesInvalidMappingOnUpdate again.
The missing settings in the mapping for _timestamp, _index and _size caused the
failure: when creating a mapping which has non-default settings and the
field disabled, empty field mappings were still built from the type mappers.
When creating such a mapping, the mapping source on the master and on the rest of the cluster
can be out of sync for some time:
1. The master creates the index with source _timestamp:{_store:true}; the mapper classes are in a correct state, but the source is _timestamp:{}.
2. The other nodes update the mapping and refresh the source, which then completely misses _timestamp.
3. After a while the source is refreshed on the master as well, and the _timestamp:{} vanishes there too.
The test UpdateMappingOnClusterTests#test_doc_valuesInvalidMappingOnUpdate failed
because the cluster state was sampled from the master between steps 1 and 3: the
randomized testing infrastructure injected a default mapping with disabled _size and _timestamp
fields whose settings are not the defaults.
The test
TimestampMappingTests#testThatDisablingFieldMapperDoesNotReturnAnyUselessInfo
must be removed because it actually expected the timestamp mapping to drop its
parameters when it was disabled.
Closes #7137
The root endpoint returns basic information about this node, like its name, the ES version, etc. The cluster name is an important piece of information that belongs in that list.
Closes #7524
The reverse_nested aggregator requires that the emitted doc ids are always in ascending order. This is already enforced at the scorer level,
but it also needs to be enforced at the nested aggregator level, otherwise incorrect counts are the result.
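A tiny standalone sketch of that invariant (illustrative code, not the actual NestedAggregator; `bufferedChildDocs` is a hypothetical buffer of matching child doc ids):

```java
// Illustrates the ordering invariant described above: child doc ids must be
// handed to sub-aggregators (such as reverse_nested) in ascending order.
public class AscendingDocIdCheck {

    /** Throws if the given doc ids are not strictly ascending. */
    public static void emitInOrder(int[] bufferedChildDocs) {
        int lastDocId = -1;
        for (int docId : bufferedChildDocs) {
            if (docId <= lastDocId) {
                throw new IllegalStateException(
                        "doc ids must be emitted in ascending order: " + docId + " after " + lastDocId);
            }
            // sub-aggregator collection would happen here
            lastDocId = docId;
        }
    }
}
```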
Closes #7505
Closes #7514
During a test run we have a global shared cluster and potentially a suite level or even a test level cluster running. All of those share the same node name pattern (node_#). This can be confusing if you're debugging discovery-related tests where nodes from the different clusters potentially interact (and reject each other). This commit gives each cluster type a unique prefix to make tracing and log filtering simpler.
Closes #7518
Comparisons for the BigArrays breaker used "greater than" instead of
"greater than or equal", which was never an issue before because the
test size never landed exactly on a page boundary. A test with an exactly
divisible size (4mb exactly in this case) caused the accounted bytes to
be equal to, but not exceed, the limit, so the breaker never tripped.
The limit should be smaller than the amount the test increments the breaker by anyway.
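A minimal standalone sketch of that boundary condition (the numbers and variable names are illustrative, not taken from the breaker code):

```java
// Shows why a ">" comparison never trips on an allocation that lands exactly
// on the limit, while ">=" would.
public class BreakerBoundarySketch {
    public static void main(String[] args) {
        long limit = 4L * 1024 * 1024; // a 4mb breaker limit
        long used  = 4L * 1024 * 1024; // accounted bytes end up exactly on the limit

        boolean tripsWithGreaterThan    = used > limit;  // false: breaker never breaks
        boolean tripsWithGreaterOrEqual = used >= limit; // true: boundary case is caught

        System.out.println("'>'  trips: " + tripsWithGreaterThan);
        System.out.println("'>=' trips: " + tripsWithGreaterOrEqual);
    }
}
```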
Today we have logic that removes a shard from the index service if
the shard has changed, i.e. from replica to primary, or if its recovery
source vanished, etc. This could cause shards to not be allocated at
all on a node, causing delete requests to time out since we were waiting
for shards on nodes that got dropped due to an IndexShardMissingException.
Closes #7509
1) One issue reported by a user is due to the truncation of the geohash string. Added a JUnit test for this scenario.
2) Another suspect piece of code was the `toAutomaton` method, which only merged the first of possibly many precisions into the result.
Closes #7368
When we corrupt a file in the snapshot/restore case we have to corrupt
a per-segment file. The .del file might change with the commit / flush
that is triggered by the snapshot operation.
This commit makes the default number of shards for the .scripts index `1`, and it also
forces auto_expand_replicas to `1-all`. This change means that GET requests that load
scripts from the index should always use the local copy of the scripts index, preventing any network traffic or remote calls
on script GET.
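For illustration, the change described above corresponds to the following index-level settings (the wrapper class is hypothetical; only the setting names and values come from the description):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical holder for the .scripts index settings described above: a single
// primary shard and replicas auto-expanded to every node, so script GETs can
// always be served from a local copy.
public class ScriptIndexSettingsSketch {
    public static Map<String, String> scriptIndexSettings() {
        Map<String, String> settings = new HashMap<>();
        settings.put("index.number_of_shards", "1");
        settings.put("index.auto_expand_replicas", "1-all");
        return settings;
    }
}
```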
By default the heap dump is written to target/JX/pidXYZ.hprof.
In order to keep heap dumps when a new test run is started, they
should be written to the log folder, which is not cleared by a new
test run.
The heap dump location can be set with -Dtests.heapdump.path=/path/to/heapdump
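A small sketch of how that property presumably feeds the JVM's standard heap dump flags (the -XX options are stock HotSpot flags; the fallback value and wiring shown here are assumptions, since the commit message does not show the build changes):

```java
// Illustrative only: resolves the heap dump directory from the system property
// mentioned above and turns it into the usual HotSpot flags.
public class HeapDumpPathSketch {
    public static void main(String[] args) {
        // fallback location is an assumption for this sketch
        String path = System.getProperty("tests.heapdump.path", "target");
        String[] jvmArgs = {
                "-XX:+HeapDumpOnOutOfMemoryError",
                "-XX:HeapDumpPath=" + path
        };
        for (String arg : jvmArgs) {
            System.out.println(arg);
        }
    }
}
```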
Closes #7452