We have an optimization that compares the routing/metadata versions of cluster states and tries to reuse the current object if the versions are equal. This can cause rare failures during recovery from a minimum_master_nodes breach when using the "new light rejoin" mechanism and simulated network disconnects. It happens when the current master updates its state, fails to broadcast it to the other nodes due to the disconnect, and then steps down. The new master starts from a previous version and continues to update it. When the old master rejoins, the versions of its state can be equal to the new master's, but the content is different.
Also improved DiscoveryWithNetworkFailuresTests to simulate this failure, along with other improvements.
Closes #6466
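For illustration, a minimal sketch of a safer reuse check implied by the fix above, assuming simplified ClusterState accessors; the shouldReuse helper is hypothetical, not the actual method:

```java
// Sketch only: shouldReuse is a hypothetical helper.
boolean shouldReuse(ClusterState current, ClusterState incoming) {
    // Version equality alone is unsafe: after a minimum_master_nodes
    // breach, two masters may have produced different states that
    // carry the same version number.
    return current.version() == incoming.version()
            && current.nodes().masterNodeId() != null
            && current.nodes().masterNodeId().equals(incoming.nodes().masterNodeId());
}
```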
When a node steps down from being master (because, for example, minimum_master_nodes is breached), it may still have
cluster state update tasks queued up. Most (but not all) are tasks that should no longer be executed, as the node
no longer has the authority to do so. Other cluster state updates, like electing the current node as master, should be
executed even if the current node is no longer master.
This commit makes sure that, by default, a `ClusterStateUpdateTask` is not executed if the node is no longer master. Tasks
that should run on non-master nodes are changed to implement a new interface called `ClusterStateNonMasterUpdateTask`.
Closes #6230
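A minimal sketch of the dispatch logic this implies; the marker interface name follows the commit, while the surrounding runner code is hypothetical:

```java
// The marker interface named above: tasks implementing it may run
// even when the local node is not the master.
interface ClusterStateNonMasterUpdateTask {}

// Hypothetical excerpt of the task runner enforcing the new default.
void runTask(ClusterStateUpdateTask task, ClusterState currentState) {
    if (!currentState.nodes().localNodeMaster()
            && !(task instanceof ClusterStateNonMasterUpdateTask)) {
        return; // node lost its master role while the task was queued
    }
    // ... execute the task against currentState ...
}
```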
Only the previous master node has been removed, so only shards allocated to that node will get failed.
This would have happened anyhow later on, when AllocationService#reroute is invoked (for example when a cluster setting changes or on another cluster event),
but by cleaning the routing table proactively, the stale routing table is fixed sooner, and therefore the shards
that are not accessible anyhow (because the node these shards were on has left the cluster) will get reassigned sooner.
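A rough sketch of that proactive cleanup, with an assumed failure hook; only shards on the departed node are touched:

```java
// Hypothetical sketch: iterating a RoutingNode is real API, but
// failShard here is an assumed name for the failure hook.
RoutingNode node = routingNodes.node(removedNodeId);
if (node != null) {
    for (ShardRouting shard : node) {
        failShard(routingNodes, shard); // mark for reassignment elsewhere
    }
}
```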
Switches TranslogStreams to check a header in the file to determine the
translog format, delegating to the version-specific stream.
Version 1 of the translog format writes a header using Lucene's
CodecUtil at the beginning of the file and appends a checksum for each
translog operation written.
Also refactors much of the translog operations code, such as merging
.hasNext() and .next() in FsChannelSnapshot.
Relates to #6554
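A sketch of what a version-1 header plus per-operation checksum could look like, using Lucene's CodecUtil and a CRC32; the codec name, layout, and the operations array are assumptions, not the exact on-disk format:

```java
import java.io.IOException;
import java.util.zip.CRC32;
import org.apache.lucene.codecs.CodecUtil;
import org.apache.lucene.store.DataOutput;

// Sketch: "translog" codec name and record layout are assumed.
void writeTranslog(DataOutput out, byte[][] operations) throws IOException {
    // Version-1 header, written once at the start of the file; a reader
    // can check it to pick the version-specific stream.
    CodecUtil.writeHeader(out, "translog", 1);
    for (byte[] opBytes : operations) {
        // Length-prefixed operation followed by a CRC32 checksum.
        CRC32 crc = new CRC32();
        crc.update(opBytes, 0, opBytes.length);
        out.writeVInt(opBytes.length);
        out.writeBytes(opBytes, 0, opBytes.length);
        out.writeInt((int) crc.getValue());
    }
}
```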
These caches have no advantage compared to the default node cache. Additionally,
the soft cache makes use of soft references, which make fielddata loading quite
unpredictable, in addition to putting more pressure on the garbage collector.
The `none` cache is still there because of tests. There is no other good
reason to use it.
LongFieldDataBenchmark has been removed because the refactoring exposed a
compilation error in this class, which seems not to have been working for a
long time. In addition, it is not as useful now that we are progressively
moving more fields to doc values.
Closes #7443
This also fixes has_parent filters with a nested empty bool filter
(see test SimpleChildQuerySearchTests#test6722; the test should actually expect
either 0 results when searching for has_parent "test" or one result when
searching for has_parent "foo").
Closes #7240
Closes #7347
Complex explanations were always read back as plain Explanations. Depending
on whether the response was streamed or not, the explanation was
therefore backed by a ComplexExplanation or by a regular
Explanation.
Closes #7257
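A minimal sketch of type-preserving serialization for the above; the stream calls are simplified to common primitives, and the null handling of the match flag is glossed over:

```java
// Hypothetical write/read pair: a type marker lets the reader
// reconstruct a ComplexExplanation as such.
void writeExplanation(StreamOutput out, Explanation e) throws IOException {
    boolean complex = e instanceof ComplexExplanation;
    out.writeBoolean(complex); // type marker
    if (complex) {
        // simplified: getMatch() may be null in real code
        out.writeBoolean(((ComplexExplanation) e).getMatch());
    }
    out.writeFloat(e.getValue());
    out.writeString(e.getDescription());
}

Explanation readExplanation(StreamInput in) throws IOException {
    Explanation e;
    if (in.readBoolean()) { // was a ComplexExplanation
        ComplexExplanation ce = new ComplexExplanation();
        ce.setMatch(in.readBoolean());
        e = ce;
    } else {
        e = new Explanation();
    }
    e.setValue(in.readFloat());
    e.setDescription(in.readString());
    return e;
}
```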
Added the ability to register template filters that are applied when a new index is created. The default filter, which checks whether the template pattern matches the index name, always runs first; additional filters can be registered so that templates can be filtered out based on custom logic.
Took the chance to add the handy source(Object... source) method to PutIndexTemplateRequest and the corresponding builder.
Closes #7459
Closes #7454
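A sketch of what such a filter could look like; the interface shape here is an assumption inferred from the description above:

```java
// Hypothetical filter shape: return true if the template should be
// applied to the index being created.
public interface IndexTemplateFilter {
    boolean apply(CreateIndexClusterStateUpdateRequest request,
                  IndexTemplateMetaData template);
}

// The default, always-first filter: match the template's pattern
// against the name of the index being created.
IndexTemplateFilter nameMatch = (request, template) ->
        Regex.simpleMatch(template.template(), request.index());
```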
This change forces the GetRequest used when a script is loaded from an index
to use preference("_local") and threaded(false), to prevent the script service from
forking a thread for GetRequests.
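For illustration, the request setup could look as follows; the index/type names are placeholders, and operationThreaded is assumed to be the setter behind the threaded(false) mentioned above:

```java
// Illustrative values only: ".scripts"/"groovy"/scriptId are assumptions.
GetRequest request = new GetRequest(".scripts", "groovy", scriptId);
request.preference("_local");     // resolve against the local shard copy
request.operationThreaded(false); // don't fork a thread for this request
```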
The comparison and read code in BlobStoreIndexShardRepository
used the physicalName and name in reverse order. This caused
SnapshotBackwardsCompatibilityTest to fail.
This reverts commit 636af40da1
The "optimized" encoders/decoders have been unreliable and error prone.
Also, fix LZFCompressor.compress to use LZFEncoder.safeEncode, which
creates a new safe encoder, instead of using a shared encoder (which
is not threadsafe).
Closes #7468
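A sketch of the thread-safe path, assuming the compress-lzf safeEncode signature:

```java
import com.ning.compress.lzf.LZFEncoder;

// safeEncode creates a fresh encoder per call instead of reusing a
// shared ChunkEncoder, which is not thread-safe.
static byte[] compress(byte[] data, int offset, int length) {
    return LZFEncoder.safeEncode(data, offset, length);
}
```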
Due to the additional safety added in #7351, we now compute a strong hash for
.si and segments_N files, which are compared during snapshot/restore.
Old snapshots don't have this hash, which can cause unnecessary copying
of large amounts of data. This commit adds the ability to fetch this
hash from the blob store if needed.
Closes #7434
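A rough sketch of the fallback; readBlob and hashFile stand in for the real snapshot/blob-store APIs and are hypothetical names:

```java
// Hypothetical sketch: if an old snapshot's metadata has no hash for a
// .si or segments_N file, compute it from the blob so the comparison
// can still skip unchanged files instead of re-copying them.
BytesRef hash = fileInfo.metadata().hash();
if (hash.length == 0) {
    try (InputStream in = readBlob(fileInfo.physicalName())) {
        hash = hashFile(in, fileInfo.length());
    }
}
```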
Today we have a small window where a searcher can be acquired while the
engine is still starting up. This causes an NPE, triggering a shard
failure, if we are fast enough. This commit handles this situation
gracefully.
Closes #7455
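A minimal sketch of the graceful guard, with assumed field and exception names:

```java
// Hypothetical sketch: fail fast with a clear exception instead of an
// NPE when a searcher is requested before the engine finished starting.
Searcher acquireSearcher(String source) {
    SearcherManager manager = this.searcherManager; // null while starting up
    if (manager == null) {
        throw new EngineClosedException(shardId); // assumed exception type
    }
    // ... acquire and wrap a searcher from the manager ...
}
```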
- The _all field was never merged when the mapping was updated, and no conflict was reported
- The _all field accepted the doc_values format although it is always tokenized
Relates to #777
Closes #7377
It's risky to have our own snapshot of Java 8's ConcurrentHashMap:
unless we keep the sources in sync over time (and OpenJDK's version
had already diverged), we won't get bug/performance fixes. Users
can choose to upgrade to Java 8 to get the improvements of CHM.
Closes #7392
Closes #7296