When a search on a shard on a remote node fails, and a replica exists on the local node, the search is then executed on the network thread. This is problematic since we need to execute it on the actual search thread pool, but it can also explain #4519, where the get executes on the network thread and waits to send the get request until the network thread it is using is freed (a deadlock).
fixes #4526
Note, this also re-enables the geo shape fetch test, as this fix should solve it as well.
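For illustration, a self-contained sketch of the deadlock pattern (plain Java with illustrative names, not the actual Elasticsearch code): a callback runs inline on the single network thread and blocks on work that can only run on that same thread; handing the work off to a separate pool is what breaks the cycle.

    import java.util.concurrent.*;

    public class NetworkThreadDeadlockSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService networkThread = Executors.newSingleThreadExecutor();
            ExecutorService searchPool = Executors.newFixedThreadPool(4);

            // BROKEN: runs inline on the network thread, queues more work on the
            // same single thread, then blocks waiting for it -> deadlock:
            // networkThread.submit(() -> networkThread.submit(() -> "reply").get()).get();

            // FIXED: hand the work off to a dedicated search pool and return,
            // so the network thread stays free to send and receive
            Future<Future<String>> handoff = networkThread.submit(
                    () -> searchPool.submit(() -> "reply"));
            System.out.println(handoff.get().get());

            networkThread.shutdown();
            searchPool.shutdown();
        }
    }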
Add a new index-level setting, index.codec.bloom.load (defaults to true), that controls whether bloom filters are loaded or not. This setting can be updated on a live index using the update settings API.
Note though, when this setting is updated, a fresh Lucene index is reopened, potentially causing associated caches to be dropped.
closes #4525
Note, this change also disables returning the Lucene RAM usage stats, due to a bug in Lucene. Relates to #4512.
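A minimal sketch of flipping the setting on a live index with the Java client of this era (the client instance and index name are assumptions):

    import org.elasticsearch.client.Client;
    import org.elasticsearch.common.settings.ImmutableSettings;

    public class BloomLoadSettingSketch {
        // disable loading of bloom filters on a live index; note this reopens
        // the Lucene index, so associated caches may be dropped
        public static void disableBloomLoading(Client client, String index) {
            client.admin().indices().prepareUpdateSettings(index)
                    .setSettings(ImmutableSettings.settingsBuilder()
                            .put("index.codec.bloom.load", false)
                            .build())
                    .execute().actionGet();
        }
    }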
The shards in the set are mutated after they are added to the set, so their hashcodes no longer match the buckets they were stored under. For this reason an identity hashset was used before, but the downside is that its iteration order is not deterministic. We can just use a list instead, since shard removal is a very rare action and the list is very small, so iteration is fast.
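A self-contained illustration with a hypothetical mutable Shard class (not the actual routing code): once an element's hashcode changes after insertion, a hash-based set can no longer find it, while a list is unaffected.

    import java.util.*;

    public class MutatedElementSketch {
        static final class Shard {
            int state; // mutated after insertion in the real code
            Shard(int state) { this.state = state; }
            @Override public int hashCode() { return state; }
            @Override public boolean equals(Object o) {
                return o instanceof Shard && ((Shard) o).state == state;
            }
        }

        public static void main(String[] args) {
            Shard shard = new Shard(0);
            Set<Shard> set = new HashSet<>();
            set.add(shard);           // stored under the bucket for hash 0
            shard.state = 1;          // hashcode no longer fits
            System.out.println(set.remove(shard)); // false: lookup misses

            List<Shard> list = new ArrayList<>();
            list.add(shard);
            shard.state = 2;          // harmless: a list never consults hashCode
            System.out.println(list.remove(shard)); // true: equals-based scan
        }
    }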
The memory used by the Lucene index (term dictionary, bloom filters, ...) can now be reported per segment using the segments API, and via the segments flag on node/indices stats.
closes #4512
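A sketch of reading the new numbers with the Java client of this era (the client instance and index name are assumptions); the segments API (prepareSegments) exposes the same figures per individual segment:

    import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;
    import org.elasticsearch.client.Client;

    public class SegmentsMemorySketch {
        public static long luceneMemoryInBytes(Client client, String index) {
            IndicesStatsResponse stats = client.admin().indices()
                    .prepareStats(index)
                    .setSegments(true)   // the segments flag on indices stats
                    .execute().actionGet();
            return stats.getTotal().getSegments().getMemoryInBytes();
        }
    }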
A previous change introduced an identity hashset that has non-deterministic iteration order, which kills the reproducibility of our unit tests when they fail. This patch adds back deterministic allocations.
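A self-contained illustration of why: an identity hashset orders elements by identity hash (effectively object address), which can differ between JVM runs, while a list preserves insertion order, so a seeded test failure replays the same way every time.

    import java.util.*;

    public class IterationOrderSketch {
        public static void main(String[] args) {
            Set<Object> identitySet =
                    Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());
            List<Object> list = new ArrayList<>();
            for (int i = 0; i < 5; i++) {
                Object o = new Object();
                identitySet.add(o);
                list.add(o);
            }
            System.out.println(identitySet); // order may differ between runs
            System.out.println(list);        // always insertion order
        }
    }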
Make sure to evict an existing node that has the same transport address as a new node that joins. This can happen, for example, when there is a bug in a cluster state event handler that causes the "old" node not to be evicted, or when load on the master node delays processing of the "old" node's departure.
closes #4503
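A minimal sketch of the join-handling rule (hypothetical Node type, not the actual cluster state code): a joining node that reuses the transport address of a node still in the list means the old entry is stale and must be evicted first.

    import java.util.*;

    public class EvictStaleNodeSketch {
        static final class Node {
            final String id;
            final String transportAddress;
            Node(String id, String transportAddress) {
                this.id = id;
                this.transportAddress = transportAddress;
            }
        }

        static void handleJoin(List<Node> nodes, Node joining) {
            Iterator<Node> it = nodes.iterator();
            while (it.hasNext()) {
                Node existing = it.next();
                // same address, different node id: a leftover entry from a
                // previous incarnation that was never removed -> evict it
                if (existing.transportAddress.equals(joining.transportAddress)
                        && !existing.id.equals(joining.id)) {
                    it.remove();
                }
            }
            nodes.add(joining);
        }
    }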
Currently, to find a replica for a primary that is allocated, we run through all shards in the cluster, while RoutingNodes already has a data structure keyed by shard ID for this. We should look it up directly rather than scanning linearly. This improves shard allocation performance by 5x.
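A sketch of the difference with hypothetical types (not the actual RoutingNodes code): the linear variant scans every shard in the cluster, while the keyed variant resolves the copies of one shard ID in a single map lookup.

    import java.util.*;

    public class ShardLookupSketch {
        // before: linear scan over every (shardId, nodeId) pair in the cluster
        static List<String> findCopiesLinear(List<String[]> allShards, String shardId) {
            List<String> copies = new ArrayList<>();
            for (String[] shard : allShards) {
                if (shard[0].equals(shardId)) {
                    copies.add(shard[1]);
                }
            }
            return copies;
        }

        // after: direct lookup in a structure keyed by shard ID
        static List<String> findCopiesDirect(Map<String, List<String>> byShardId,
                                             String shardId) {
            List<String> copies = byShardId.get(shardId);
            return copies == null ? Collections.<String>emptyList() : copies;
        }
    }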
This is an extreme case, exposed by a bug we had in the local gateway allocation, producing a cluster state that doesn't include a node in the nodes list yet still has a shard in the routing table pointing at the non-existent node. Then, when a node on the same box comes back, it causes the local shard data to be deleted because it thinks the shard is fully allocated on other nodes.
fixes #4502
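A hypothetical guard for the invariant violated here (illustrative names, not the actual gateway code): only treat a shard as fully allocated elsewhere, and thus safe to delete locally, if every active copy sits on a node that actually exists in the nodes list.

    import java.util.*;

    public class SafeLocalDeleteSketch {
        static boolean canDeleteLocalCopy(List<String> activeCopyNodeIds,
                                          Set<String> liveNodeIds) {
            for (String nodeId : activeCopyNodeIds) {
                // routing table points at a non-existent node: inconsistent
                // cluster state, so keep the local shard data
                if (!liveNodeIds.contains(nodeId)) {
                    return false;
                }
            }
            return !activeCopyNodeIds.isEmpty();
        }
    }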