We have an optimization that compares the routing/metadata version of cluster states and reuses the current object if the versions are equal. This can cause rare failures during recovery from a minimum_master_nodes breach when using the "new light rejoin" mechanism and simulated network disconnects. The failure happens when the current master updates its state, fails to broadcast it to the other nodes due to the disconnect, and then steps down. The new master starts from a previous version and continues to update it. When the old master rejoins, the versions of the two states can be equal even though their content differs.
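A minimal sketch of the fragile check, with simplified names (the actual code differs): version equality stands in for content equality, which no longer holds once two masters have produced states independently.

```java
// Simplified sketch: reuse the previously built routing table when the
// incoming cluster state carries the same version number.
RoutingTable routingTable = newState.routingTable();
if (previousState.routingTable().version() == routingTable.version()) {
    // Unsafe after a master flip: the old and the new master may both
    // have produced version N with different content.
    routingTable = previousState.routingTable();
}
```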
Also improved DiscoveryWithNetworkFailuresTests to simulate this failure, along with other improvements.
Closes #6466
When a node steps down from being master (because, for example, minimum_master_nodes is breached), it may still have
cluster state update tasks queued up. Most (but not all) of them should no longer be executed, as the node
no longer has the authority to do so. Other cluster state updates, like electing the current node as master, should be
executed even if the current node is no longer master.
This commit makes sure that, by default, a `ClusterStateUpdateTask` is not executed if the node is no longer master. Tasks
that should run on non-master nodes are changed to implement a new interface called `ClusterStateNonMasterUpdateTask`.
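A sketch of the default gate, with simplified names (the exact code may differ):

```java
// Skip queued update tasks once the local node is no longer master,
// unless the task is explicitly marked to run on non-master nodes.
if (!currentState.nodes().localNodeMaster()
        && !(task instanceof ClusterStateNonMasterUpdateTask)) {
    logger.debug("skipping cluster state update task [{}], local node is no longer master", source);
    return;
}
```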
Closes #6230
Only the previous master node has been removed, so only shards allocated to that node need to be failed.
This would have happened later anyhow, when AllocationService#reroute is invoked (for example, when a cluster setting changes or on another cluster event),
but by cleaning the routing table proactively, the stale routing table is fixed sooner, and therefore the shards
that are inaccessible anyhow (because the node they were on has left the cluster) get reassigned sooner.
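A minimal sketch of the proactive cleanup, using hypothetical helper names:

```java
// Hypothetical sketch: fail only the shards that were allocated to the
// departed master instead of waiting for the next reroute trigger.
List<ShardRouting> failedShards = new ArrayList<>();
RoutingNode departed = currentState.routingNodes().node(departedMasterId);
if (departed != null) {
    for (ShardRouting shard : departed) {
        failedShards.add(shard); // unreachable anyway, the node is gone
    }
}
// handing failedShards to the allocation service rebuilds the routing
// table immediately (plumbing omitted)
```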
The comparison and read code in the BlobStoreIndexShardRepository
used the physicalName and name in reverse order. This caused
SnapshotBackwardsCompatibilityTest to fail.
This reverts commit 636af40da1
The "optimized" encoders/decoders have been unreliable and error prone.
Also, fix LZFCompressor.compress to use LZFEncoder.safeEncode, which
creates a new safe encoder, instead of using a shared encoder (which
is not threadsafe).
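The fixed call is roughly the following (safeEncode is the compress-lzf entry point that allocates its encoder state per call instead of sharing it):

```java
import com.ning.compress.lzf.LZFEncoder;

// Thread-safe compression path: no shared ChunkEncoder instance.
public byte[] compress(byte[] data, int offset, int length) {
    return LZFEncoder.safeEncode(data, offset, length);
}
```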
Closes #7468
Due to the additional safety added in #7351, we now compute a strong hash for
.si and segments_N files, which is compared during snapshot / restore.
Old snapshots don't have this hash, which can cause unnecessary copying
of large amounts of data. This commit adds the ability to fetch this
hash from the blob store if needed.
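A sketch of the fallback computation, with assumed names: stream the blob and build the missing hash so legacy snapshots can be compared against the local store.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefBuilder;

// Assumed-name sketch: read the blob's bytes to build the missing hash
// for a legacy .si or segments_N snapshot file.
static BytesRef hashLegacyFile(InputStream in, int size) throws IOException {
    BytesRefBuilder builder = new BytesRefBuilder();
    builder.grow(size);
    builder.setLength(size);
    int read = 0;
    while (read < size) {
        int r = in.read(builder.bytes(), read, size - read);
        if (r < 0) {
            throw new EOFException("unexpected end of blob");
        }
        read += r;
    }
    return builder.toBytesRef();
}
```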
Closes #7434
Today we have a small window in which a searcher can be acquired while the
engine is still starting up. If we are fast enough, this causes an NPE that
triggers a shard failure. This commit handles this situation
gracefully.
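A sketch of the graceful path, with assumed names:

```java
// Assumed-name sketch: signal a recoverable condition instead of letting
// a null searcher manager surface as an NPE and fail the shard.
public Searcher acquireSearcher(String source) {
    SearcherManager manager = this.searcherManager; // still null during startup
    if (manager == null) {
        throw new EngineClosedException(shardId); // callers know how to handle this
    }
    // ... acquire from the manager as usual
}
```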
Closes #7455
- The _all field was never merged when the mapping was updated, and no conflict was reported.
- The _all field accepted the doc_values format although it is always tokenized (see the sketch below).
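A sketch of the tightened merge hook, under assumed method shapes:

```java
// Assumed shape: report doc_values on _all as a merge conflict (the
// field is always tokenized) instead of silently accepting it, and
// actually apply the remaining merged settings.
@Override
public void merge(Mapper mergeWith, MergeContext mergeContext) {
    AllFieldMapper other = (AllFieldMapper) mergeWith;
    if (other.hasDocValues()) { // accessor name is illustrative
        mergeContext.addConflict("[_all] field cannot have doc_values, it is always tokenized");
    }
    super.merge(mergeWith, mergeContext);
}
```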
Relates to #777
Closes #7377
It's risky to have our own snapshot of Java 8's ConcurrentHashMap:
unless we keep the sources in sync over time (and OpenJDK's version
had already diverged), then we won't get bug/performance fixes. Users
can choose to upgrade to Java 8 to see the improvements of CHM.
Closes #7392
Closes #7296