We have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.
With the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the latter. It ends up working properly only because we have a registered (deprecated) query called "query", which used to allow wrapping a filter in a query, but this has the following downsides:
1) prevents us from removing the deprecated "query" query
2) we end up supporting a top-level query that is not wrapped in a "query" element (pre-1.0 syntax, iirc, that shouldn't be supported anymore)
This commit finally removes the "query" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format, those have been fixed too.
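For illustration, here is a minimal, self-contained sketch of the two entry points (the class and method names are hypothetical and a plain Map stands in for the parsed request body; this is not the actual parser code). Count, cat count and delete by query receive a whole query source, so they should go through the top-level entry point:

```java
import java.util.Map;

// Hypothetical sketch, not the real Elasticsearch parser API.
final class QueryParsingSketch {

    // Whole query source: expects the wrapping "query" element, e.g.
    //   { "query": { "term": { "user": "kimchy" } } }
    @SuppressWarnings("unchecked")
    static Map<String, Object> parseTopLevelQuery(Map<String, Object> source) {
        Object inner = source.get("query");
        if (inner == null) {
            throw new IllegalArgumentException("request body must contain a [query] element");
        }
        return parseInnerQuery((Map<String, Object>) inner);
    }

    // Inner query only: the caller is already positioned inside the "query"
    // element, e.g.  { "term": { "user": "kimchy" } }
    static Map<String, Object> parseInnerQuery(Map<String, Object> query) {
        // The real code dispatches on the query name ("term", "match", ...);
        // for this sketch we just hand the inner object back.
        return query;
    }
}
```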
Closes #13326
Closes #14304
This commit makes QueryCache and SearcherWrapper registration public;
otherwise plugins can't access those extension points due to security restrictions.
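As a hedged illustration only, plugin-side registration would look roughly like the following; the hook and method names (onModule, registerQueryCache, setSearcherWrapper) and the MyQueryCache/MySearcherWrapper implementations are assumptions, not a verified API:

```java
// Rough sketch of plugin-side registration; names are assumptions.
// The point is that the registration methods must be public for a plugin's
// class loader to invoke them under the security manager.
public class MyCachePlugin extends Plugin {

    public void onModule(IndexModule indexModule) {
        // Hypothetical registration calls; with package-private visibility
        // these would not be accessible from plugin code.
        indexModule.registerQueryCache("my-query-cache", MyQueryCache::new);
        indexModule.setSearcherWrapper(MySearcherWrapper::new);
    }
}
```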
This commit moves all the registration etc. from IndexCacheModule into
IndexModule. As a side effect, in order to remove a circular dependency between
IndicesService and IndicesWarmer, this commit also cleans up IndicesWarmer and
separates the Engine from the warmer.
This changes how `min_score` is implemented, both for `function_score` and for the
search request parameter, so that the check whether the minimum score is met happens
in the `matches()` phase of a `TwoPhaseIterator`.
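As a rough sketch of the mechanism (written against a recent Lucene API; this is not the actual Elasticsearch class), a minimum-score check expressed as the `matches()` phase of a `TwoPhaseIterator` looks like this:

```java
import java.io.IOException;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.TwoPhaseIterator;

// Sketch: all docs of the wrapped query are candidates (the approximation),
// and matches() confirms a candidate only if it scores at least minScore.
final class MinScoreTwoPhaseIterator extends TwoPhaseIterator {
    private final Scorer scorer;   // scorer of the wrapped query
    private final float minScore;  // threshold from `min_score`

    MinScoreTwoPhaseIterator(Scorer scorer, float minScore) {
        super(scorer.iterator());
        this.scorer = scorer;
        this.minScore = minScore;
    }

    @Override
    public boolean matches() throws IOException {
        // Only confirm the current candidate if it scores high enough.
        return scorer.score() >= minScore;
    }

    @Override
    public float matchCost() {
        // Rough estimate of the per-document cost of calling matches().
        return 1000f;
    }
}
```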
Today, if a shard is marked as inactive after a heavy indexing period,
large merges are very likely. Yet those merges are never committed,
since the time-based flush has been removed, and unless the shard becomes active
again we won't revisit it.
At the same time, inactive shards have very likely been sync-flushed before, so we need
to maintain the sync id if possible, which this change tries to do on a per-shard basis.
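A hedged, self-contained sketch of the per-shard decision (the Shard interface and method names are hypothetical stand-ins for the real IndexShard/Engine internals):

```java
// Illustrative only: commit pending merges when a shard goes inactive,
// but prefer a synced flush so an existing sync id is not lost.
interface Shard {
    boolean hasUncommittedChanges();  // e.g. merges produced after the last commit
    boolean hasSyncId();              // last commit carries a sync-flush id
    boolean translogIsEmpty();        // no indexing since that sync flush
    void attemptSyncedFlush();        // flush that preserves/renews the sync id
    void flush();                     // plain flush, drops the sync id
}

final class InactiveShardFlusher {
    static void onShardInactive(Shard shard) {
        if (shard.hasUncommittedChanges() == false) {
            return;                          // nothing to commit, keep the commit as is
        }
        if (shard.hasSyncId() && shard.translogIsEmpty()) {
            shard.attemptSyncedFlush();      // keep the shard eligible for fast recovery
        } else {
            shard.flush();                   // commit the merges; the sync id cannot be kept
        }
    }
}
```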
The test spins up 3 nodes, waits for a master to be elected and starts network disruptions. If those kick in too early, we may end up with a cluster that still has a "state not recovered" block, which causes failures during cleanup. http://build-us-00.elastic.co/job/es_core_master_oracle_6/3034/
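One way to avoid that race, sketched with integration-test helpers as I understand them (ensureStableCluster and the setup below are assumptions, not necessarily the actual fix in this change):

```java
// Hedged sketch: make sure a master is elected and the cluster state is
// recovered before any network disruption is installed, so cleanup never runs
// against a cluster that still carries the "state not recovered" block.
public void testMasterElectionUnderDisruption() throws Exception {
    internalCluster().startNodes(3);
    ensureStableCluster(3);  // master elected, state recovered, all 3 nodes joined
    // ... only now set up and start the network disruption ...
}
```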