This change removes the multiple implementations of different admin interfaces and centralizes them in AbstractClient. It also makes sure *all* action executions now go through a single AbstractClient#execute method, which takes care of copying headers and wrapping the listener.
This also has the side benefit of removing all the code around the different possible clients, and removes quite a bit of code overall (most of the diff is actually the removal of generics and such).
This commit also changes how TransportClient is constructed, requiring a Builder to create it; this is a breaking change and is noted in the migration guide.
Yet another step towards simplifying the action infra...
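For reference, construction now goes through the builder; a minimal sketch (host and port are placeholders, and settings can also be passed to the builder):

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// the client can no longer be constructed directly; the Builder is required
TransportClient client = TransportClient.builder().build();
// connecting to a node is unchanged
client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
```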
Currently, when all copies of a shard are lost, we reach out to all
other nodes to see whether they have a copy of the data. For a shared
filesystem, though, we can assume that each node has a copy of the data
available, so we return a state version of at least 0 for each node.
This feature is set using the dynamic index setting
`index.shared_filesystem.recover_on_any_node`, which defaults to
`false`.
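For illustration, flipping the setting at runtime could look like the sketch below (the index name is a placeholder, `client` is assumed to be a connected client, and the builder name matches the 1.x-era `ImmutableSettings` API):

```java
import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;

// enable recovery-on-any-node for an existing index (the setting is dynamic)
client.admin().indices().prepareUpdateSettings("my_index")
        .setSettings(settingsBuilder()
                .put("index.shared_filesystem.recover_on_any_node", true)
                .build())
        .get();
```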
Fixes#10932
It appears the previous failure (-Dtests.seed=D9EF60095522804F) is just an accumulation of
floating point error in the differences between expected and actual results. This makes the
tests less stringent by requiring closeTo(0.1) instead of the previous 0.00001.
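Concretely, the assertions now allow a wider delta, e.g. (variable names here are placeholders):

```java
import static org.hamcrest.Matchers.closeTo;
import static org.junit.Assert.assertThat;

// tolerate accumulated floating point error between expected and actual
assertThat(actual, closeTo(expected, 0.1)); // previously a delta of 0.00001
```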
This commit adds a counter for IndexShard that keeps track of how many write operations
are currently in flight on a shard. The counter is incremented whenever a write request is
submitted in TransportShardReplicationOperationAction and decremented when it is finished.
On a primary it stays incremented while replicas are being processed.
The counter is an instance of AbstractRefCounted. Once this counter reaches 0,
each write operation will be rejected with an IndexClosedException.
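Roughly, the scheme looks like the sketch below (simplified; the class and method names are illustrative, and the real code throws IndexClosedException rather than IllegalStateException):

```java
import org.elasticsearch.common.util.concurrent.AbstractRefCounted;

// one counter per shard; every in-flight write operation holds one reference
final class ShardOperationCounter extends AbstractRefCounted {

    ShardOperationCounter() {
        super("index-shard-operations-counter");
    }

    @Override
    protected void closeInternal() {
        // last reference released: the shard has been closed
    }

    void incrementOperationCounter() {
        if (tryIncRef() == false) {
            // the real code throws IndexClosedException here
            throw new IllegalStateException("shard is closed, rejecting write");
        }
    }

    void decrementOperationCounter() {
        decRef();
    }
}
```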
Closes #10610
Today we wait 30 seconds for shards to flush and close and then simply exit the process.
This is often not desired, and by default we should wait long enough for shards to
close. This commit adds a default timeout of one day, which simplifies the code
and gives us _enough_ time to shut down.
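An illustrative sketch of the new wait (names are made up; the real logic lives in the node shutdown path):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// wait "long enough" for shards to close before exiting; the default moves
// from 30 seconds to one day
void awaitShardsClosed(CountDownLatch shardsClosed) throws InterruptedException {
    if (shardsClosed.await(1, TimeUnit.DAYS) == false) {
        System.err.println("timed out waiting for shards to close");
    }
}
```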
Closes #10680
If the initial recovery is skipped, all uncommitted changes are lost. This
must be enforced, otherwise primary and replica will go out of sync once the
primary is started after a restore and a replica recovers from it, applying the
still-referenced transaction logs.
Instead of failing the Engine for a shared filesystem, this change
allows a "soft close" of the Engine, where only the IndexWriter is
closed so that the replica can open an IndexWriter using the same
filesystem directory/mount.
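As a rough sketch of the idea (the method name is made up; the real change lives in the Engine):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

// "soft close": release only the IndexWriter, and with it the write lock on
// the shared directory, so another node can open a writer on the same mount;
// everything else (store, translog, readers) stays open
void softClose(IndexWriter indexWriter) throws IOException {
    indexWriter.close();
}
```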
Fixes #10469
This refcounting doesn't work for shadow replicas since we open
the same translog file from more than one node while running a rolling
restart. This functionality is also superseded by our filesystem abstraction,
which detects file leaks under the hood.
Today, we rely on the user to set request listener threading to true when on the client side in order not to block the IO threads on heavy operations. This proves to be very trappy for users and ends up creating problems that are very hard to debug.
Instead, we can do the right thing, and automatically thread listeners that are used from the client when the client is a node client or a transport client.
This change also removes the ability to set request level listener threading, in the effort of simplifying the code path and reasoning around when something is threaded and when it is not.
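Conceptually, the client wraps the caller's listener so callbacks are dispatched to a thread pool instead of running on an IO thread; a minimal sketch (not the exact internal class):

```java
import java.util.concurrent.Executor;
import org.elasticsearch.action.ActionListener;

// wraps a caller-supplied listener so its callbacks run on a dedicated
// executor (e.g. the LISTENER thread pool) instead of an IO thread
final class DispatchingListener<T> implements ActionListener<T> {

    private final ActionListener<T> delegate;
    private final Executor executor;

    DispatchingListener(ActionListener<T> delegate, Executor executor) {
        this.delegate = delegate;
        this.executor = executor;
    }

    @Override
    public void onResponse(T response) {
        executor.execute(() -> delegate.onResponse(response));
    }

    @Override
    public void onFailure(Throwable e) {
        executor.execute(() -> delegate.onFailure(e));
    }
}
```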
Closes #10940
This removes Elasticsearch's filter cache and uses Lucene's instead. It has some
implications:
- custom cache keys (`_cache_key`) are unsupported
- decisions are made internally and can't be overridden by users (`_cache`)
- not only can filters be cached, but also any query that does not need scores
- parent/child queries can now be cached; however, cached entries are only
valid for the current top-level reader, so in practice caching will likely
only help on read-only indices
- the cache deduplicates filters, which plays nicer with large keys (e.g. `terms`)
- better stats: we already had ram usage and evictions, but now also hit count,
miss count, lookup count, number of cached doc id sets and current number of
doc id sets in the cache
- dynamically changing the filter cache size is not supported anymore
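For reference, wiring Lucene's cache into a searcher looks roughly like this (cache sizes are arbitrary and `reader` is assumed to be an open IndexReader):

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.LRUQueryCache;
import org.apache.lucene.search.QueryCache;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

// cache at most 1000 queries using at most 32MB of RAM
QueryCache cache = new LRUQueryCache(1000, 32 * 1024 * 1024);
IndexSearcher searcher = new IndexSearcher(reader);
searcher.setQueryCache(cache);
// the policy decides internally what is worth caching (frequency, cost, ...)
searcher.setQueryCachingPolicy(new UsageTrackingQueryCachingPolicy());
```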
Internally, an important change is that it removes the NoCacheFilter infrastructure
in favour of making Query.rewrite specialize the query for the current reader so
that it will only be cached on this reader (look for IndexCacheableQuery).
Note that consuming filters with the query API (createWeight/scorer) instead of
the filter API (getDocIdSet) is important for parent/child queries because
otherwise a QueryWrapperFilter(ParentQuery) would run the wrapped query per
segment, while relations might span segments.
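The gist of the rewrite trick, sketched (illustrative, not the actual IndexCacheableQuery source; assumes a Lucene 5-era Query with a public clone()):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;

// a query that rewrite() binds to the current top-level reader, so that a
// cache entry keyed on the rewritten query never matches under another reader
abstract class ReaderBoundQuery extends Query {

    private IndexReader reader; // null until rewritten

    @Override
    public Query rewrite(IndexReader reader) throws IOException {
        if (this.reader != reader) {
            // return a copy bound to the current reader
            ReaderBoundQuery rewritten = (ReaderBoundQuery) clone();
            rewritten.reader = reader;
            return rewritten;
        }
        return super.rewrite(reader);
    }

    @Override
    public boolean equals(Object obj) {
        // include the bound reader so equality (and thus cache hits) is per-reader
        return super.equals(obj) && reader == ((ReaderBoundQuery) obj).reader;
    }

    @Override
    public int hashCode() {
        return 31 * super.hashCode() + System.identityHashCode(reader);
    }
}
```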