This assertion module also injects an AssertingIndexSearcher that
checks whether our queries comply with the Lucene specification,
which is important for future updates and changes in the upstream project.
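
A rough sketch of the kind of wrapping involved, assuming Lucene's
test-framework AssertingIndexSearcher and its (Random, IndexReader)
constructor; this is illustrative only, not the actual module code:

    import java.util.Random;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.AssertingIndexSearcher;
    import org.apache.lucene.search.IndexSearcher;

    class AssertingSearcherFactory {
        static IndexSearcher wrap(Random random, IndexReader reader) {
            // AssertingIndexSearcher (lucene-test-framework) verifies that
            // queries and scorers follow the contracts Lucene expects
            return new AssertingIndexSearcher(random, reader);
        }
    }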
There was a small window of time in which the transport response handler's handleException method could be invoked twice. As far as I can tell, this happened when a node disconnect event was processed just after the request was registered and, in the meantime, a "Node not connected" error was thrown. The TransportService#sendRequest method would then invoke the transport response handler's handleException method regardless of whether it had already been invoked. As a result, a single request failure led to two retries being executed.
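
A minimal sketch of the kind of guard that prevents a handler from being notified twice; the class and method names here are illustrative, not the actual TransportService code:

    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustrative only: a handler wrapper that delivers handleException at
    // most once, even if both the "Node not connected" failure path and the
    // disconnect listener try to notify it.
    abstract class OnceOnlyResponseHandler {

        private final AtomicBoolean handled = new AtomicBoolean(false);

        public final void handleException(Exception e) {
            // only the first failure path wins; later invocations are ignored
            if (handled.compareAndSet(false, true)) {
                onFailure(e);
            }
        }

        protected abstract void onFailure(Exception e);
    }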
The mpercolate api has an assert that tripped when more than the expected number of shard level responses were returned. This was caused by the issue described above: for a single shard level request we received multiple responses, which broke the total expected response count. The reduce could also start prematurely, resulting in an incorrect final response (e.g. an incorrect total count). For example: with two shards in total, shard 0 responds twice; the second shard 0 response arrives just before the shard 1 response, and the reduce starts without the shard 1 response.
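
The failure mode is easiest to see with a counter sketch (hypothetical names, not the percolator code): if the same shard is counted twice, the reduce fires before all shards have actually responded.

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative counter: with 2 expected shard responses, a duplicate
    // response for shard 0 pushes the counter to the expected total and
    // starts the reduce before shard 1 has answered.
    class ShardResponseCounter {

        private final int expectedResponses;
        private final AtomicInteger received = new AtomicInteger();

        ShardResponseCounter(int expectedResponses) {
            this.expectedResponses = expectedResponses;
        }

        void onShardResponse() {
            if (received.incrementAndGet() == expectedResponses) {
                reduce(); // fires too early if any shard responded twice
            }
        }

        void reduce() {
            // merge the shard level responses into the final response
        }
    }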
The master node processes changes to the cluster state, and part of that processing is publishing the cluster state to the other nodes. It does not wait for the other nodes to process the new cluster state before it moves on to the next cluster state processing job.
This is fine: we support out of order cluster state events using versioning, and nodes can handle those cases. It does, however, lead to suboptimal API semantics. For example, when issuing a cluster health request and waiting for green state, the master node will report back once the cluster is green based on its own cluster state, but that "green" state might not have been received by all other nodes yet.
Add a discovery.zen.publish_timeout setting, defaulting to 5s. This makes a best effort to ensure that all nodes process a new cluster state within a window of time.
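
For example, the window can be widened in elasticsearch.yml (the 10s value here is just an illustration of overriding the default):

    discovery.zen.publish_timeout: 10s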
closes #3736
If a request hasn't been acknowledged, there is no guarantee that any node holds the up-to-date cluster state (not even the master yet, as execution is asynchronous).
The average and sum comparators basically share the same code, which
is copy-pasted today. We can pull this into a base class, which reduces
code duplication and prevents copy-paste bugs.
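
A minimal sketch of the shape of such a base class; the names are illustrative and do not match the actual comparator classes:

    // Both comparators accumulate a sum and a count; only value() differs.
    abstract class DoubleValueComparatorBase {

        protected double sum;
        protected long count;

        void collect(double value) {
            sum += value;
            count++;
        }

        abstract double value();
    }

    class SumComparator extends DoubleValueComparatorBase {
        @Override
        double value() {
            return sum;
        }
    }

    class AvgComparator extends DoubleValueComparatorBase {
        @Override
        double value() {
            return count == 0 ? Double.NaN : sum / count;
        }
    }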
While providing reproducible tests for an async system that uses
randomized components is a hard task, we should at least try to
re-establish the environment of a failing test as much as possible.
This commit allows re-establishing the shared 'TestCluster' by
resetting the cluster to a predefined shared state before each test.
Before this commit, a test executed in isolation was likely using
an entirely different node environment than the failing test, since
the 'TestCluster' kept intermediate nodes started by other tests around.
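
Conceptually, each test now starts from the same shared state; a rough sketch of the pattern (hypothetical helper names, not the real TestCluster API):

    import org.junit.Before;

    // Illustrative pattern only: bring the shared cluster back to its
    // predefined state before every test method runs.
    public abstract class SharedClusterTestBase {

        @Before
        public void resetSharedCluster() {
            cluster().reset(); // e.g. remove extra nodes, wipe indices/templates
        }

        // hypothetical handle to the shared cluster
        protected abstract SharedCluster cluster();

        interface SharedCluster {
            void reset();
        }
    }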
use service id for pid name
disable filtering on *.exe (caused corruption)
rename exe names and add more options to .bat
start/stop operations are now supported (and expected to be called) by service.bat (see the usage sketch below)
add more variables from the env to customize default behavior prior to installing the service
add manager option
fixes regarding batch flow
specify service id in description
minor readability improvement
include .exe only in ZIP archive
rename x64 service id to make it work out of the box
add elasticsearch as a service for Windows platforms
based on Apache Commons Daemon
supports both x64 and x86
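
Typical usage from the bin directory looks roughly like this (option names follow the shipped script and may differ slightly):

    service.bat install
    service.bat start
    service.bat stop
    service.bat manager
    service.bat remove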
Compared to setting node.local to true, it is nicer to support node.mode with values of local or network.
Note, node.local is still supported.
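
For example, in elasticsearch.yml:

    node.mode: local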
closes #3713