We already fail the shard in the `onFailure` method if the replica
operation throws an exception. The additional check that was added recently
bypasses the cluster state observer, which causes replicas to fail
if the mappings are not yet present.
When a node was a data-only node, the index state was not written.
If such a node then connected to a master that did not have the index
in its cluster state, for example because the master was restarted and
its data folder was lost, the indices were not imported as dangling
but were deleted instead.
This commit makes sure that the index state is also written on data nodes
if they have at least one shard of the index allocated.
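As a rough sketch of the condition (hypothetical names, not the actual code):

```java
import java.util.Set;

// Illustrative sketch only; names are hypothetical, not the actual Elasticsearch code.
final class DanglingIndexStateSketch {

    /** Persist index metadata on a data node once it holds at least one shard of the index. */
    static boolean shouldWriteIndexState(boolean isDataNode, Set<String> locallyAllocatedIndices, String index) {
        // Writing the state lets the index be re-imported as "dangling" if a freshly
        // restarted master no longer knows about it.
        return isDataNode && locallyAllocatedIndices.contains(index);
    }
}
```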
closes#8823
closes#9952
This commit moves the translog creation into the InternalEngine
to ensure the transaction log is created after we have acquired the write
lock on the index. This also prevents races when ShadowEngines are shutting
down due to node restarts while another node takes over a transaction log
that is not yet fully synced.
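A minimal sketch of the ordering this enforces, with purely hypothetical lock and translog placeholders:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration only: create the transaction log strictly after the engine holds
// the index write lock, so a shutting-down ShadowEngine and a node taking over
// cannot both touch a half-synced translog.
final class EngineStartupSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void start() {
        lock.writeLock().lock();      // 1. acquire the write lock on the index
        try {
            createTranslog();         // 2. only then create the transaction log
        } finally {
            lock.writeLock().unlock();
        }
    }

    private void createTranslog() {
        // hypothetical placeholder for the actual translog creation
    }
}
```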
If the collect method was called with a bucketOrd > 0, the arrays holding the state for the aggregation would be grown, but the initial values for bucketOrds > 0 were all set to Double.NEGATIVE_INFINITY. For the bottom, posLeft and negLeft values this meant that no collected document could ever change the value, since NEGATIVE_INFINITY is always less than every other value.
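To illustrate the fix (a sketch only, not the actual aggregator code): slots that track a minimum have to be initialized to POSITIVE_INFINITY when the arrays are grown, and only slots that track a maximum to NEGATIVE_INFINITY.

```java
import java.util.Arrays;

// Sketch of the initialization issue, not the actual aggregator code.
final class BoundsStateSketch {
    double[] tops = new double[0];     // track maxima
    double[] bottoms = new double[0];  // track minima

    void grow(int newSize) {
        int oldSize = tops.length;
        tops = Arrays.copyOf(tops, newSize);
        bottoms = Arrays.copyOf(bottoms, newSize);
        Arrays.fill(tops, oldSize, newSize, Double.NEGATIVE_INFINITY);
        // The bug was filling these with NEGATIVE_INFINITY as well, so
        // Math.min(bottoms[ord], value) could never change bottoms[ord].
        Arrays.fill(bottoms, oldSize, newSize, Double.POSITIVE_INFINITY);
    }

    void collect(int ord, double value) {
        tops[ord] = Math.max(tops[ord], value);
        bottoms[ord] = Math.min(bottoms[ord], value);
    }
}
```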
Closes#10804
This change removes the multiple implementations of the different admin interfaces and centralizes them in AbstractClient. It also makes sure *all* executions of actions now go through a single AbstractClient#execute method, which takes care of copying headers and wrapping the listener.
This also has the side benefit of removing all the code around the different possible clients, and removes quite a bit of code (most of the + lines are actually removal of generics and such).
This change also changes how TransportClient is constructed: it now requires a Builder to create it. This is a breaking change and is noted in the migration guide.
Yet another step towards simplifying the action infra...
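As a hedged sketch of the new construction style (exact builder and settings method names may differ between versions):

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;

// Sketch only; exact method names may differ depending on the version.
Settings settings = Settings.builder()
        .put("cluster.name", "my-cluster")
        .build();
TransportClient client = TransportClient.builder()
        .settings(settings)
        .build();
```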
Currently, when all copies of a shard are lost, we reach out to all
other nodes to see whether they have a copy of the data. For a shared
filesystem, though, we can assume that each node has a copy of the data
available, so we return a state version of at least 0 for each node.
This feature is set using the dynamic index setting
`index.shared_filesystem.recover_on_any_node`, which defaults to
`false`.
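For illustration, enabling it when building index settings could look roughly like this (client API details may vary):

```java
import org.elasticsearch.common.settings.Settings;

// Sketch only; the setting is dynamic, so it can also be changed via the
// update index settings API on an existing index.
Settings indexSettings = Settings.builder()
        .put("index.shared_filesystem.recover_on_any_node", true)
        .build();
```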
Fixes#10932
It appears the previous failure (-Dtests.seed=D9EF60095522804F) is just an accumulation of
floating point error differences between expected and actual results. This makes the tests less
stringent by requiring closeTo(0.1) instead of the previous 0.00001.
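For illustration, the relaxed assertion looks roughly like this (Hamcrest's closeTo matcher; `actual` and `expected` are placeholders):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.closeTo;

// closeTo(expected, error): tolerate up to 0.1 of accumulated floating point
// error instead of the previous 0.00001.
assertThat(actual, closeTo(expected, 0.1));
```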
This commit adds a counter for IndexShard that keeps track of how many write operations
are currently in flight on a shard. The counter is incremented whenever a write request is
submitted in TransportShardReplicationOperationAction and decremented when it is finished.
On a primary it stays incremented while replicas are being processed.
The counter is an instance of AbstractRefCounted. Once this counter reaches 0,
every new write operation is rejected with an IndexClosedException.
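A simplified, self-contained sketch of the ref-counting idea (not the actual AbstractRefCounted-based implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration of the counting scheme, not the actual Elasticsearch code.
final class InFlightOperationsSketch {
    // Starts at 1 for the "shard is open" reference; close() releases it.
    private final AtomicInteger refs = new AtomicInteger(1);

    /** Called when a write request is submitted; rejected once the shard is closed. */
    void incrementOperationCounter() {
        while (true) {
            int current = refs.get();
            if (current == 0) {
                throw new IllegalStateException("shard is closed"); // stands in for IndexClosedException
            }
            if (refs.compareAndSet(current, current + 1)) {
                return;
            }
        }
    }

    /** Called when the write operation (including replica processing on a primary) finishes. */
    void decrementOperationCounter() {
        refs.decrementAndGet();
    }

    /** Releases the initial reference; once the count hits 0, new operations are rejected. */
    void close() {
        refs.decrementAndGet();
    }
}
```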
closes#10610
Today we wait 30 seconds for shards to flush and close and then simply exit the process.
This is often not desired, and by default we should wait long enough for shards to
close. This commit adds a default timeout of one day, which simplifies the code
and gives us _enough_ time to shut down.
Closes#10680
If the initial recovery is skipped, all uncommitted changes are lost. This
must be enforced, otherwise primary and replica will go out of sync once the
primary is started after a restore and a replica recovers from it, applying the
still-referenced transaction logs.
Instead of failing the Engine for a shared filesystem, this change
allows a "soft close" of the Engine, where only the IndexWriter is
closed so that the replica can open an IndexWriter using the same
filesystem directory/mount.
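At the Lucene level, a hedged sketch of what the soft close amounts to (not the actual engine code; Lucene APIs vary by version):

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Sketch only: a "soft close" releases the IndexWriter (and its write lock) but
// does not tear down the shard, so a writer can be opened on the same shared
// directory from another node.
final class SoftCloseSketch {
    static void softClose(IndexWriter writer) throws Exception {
        writer.close(); // releases the Lucene write lock on the shared directory
    }

    static IndexWriter reopenOnOtherNode(String sharedPath) throws Exception {
        Directory dir = FSDirectory.open(Paths.get(sharedPath)); // same mount, different node
        return new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    }
}
```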
Fixes#10469