The nested scope is now set by any nested feature, so that nested queries and filters inside it know about their context and can construct the right parent filter.
Removed the LateBindingParentFilter workaround in the nested query parser in favour of the nested scope maintained in the query parse context.
Due to this change, nested queries and filters can now also be used in nested sorting and inner hits, because those features now use the nested scope as well.
This change doesn't fix the usage of nested filters in nested and reverse_nested aggregations. The `nested` filter shouldn't be used inside these aggregations; the `nested` and `reverse_nested` aggs should be used instead to query at the right level. In a separate change, a `nested` filter inside a `nested` or `reverse_nested` aggregation should result in a parse error.
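For example (a minimal sketch; index, path and field names are made up), a `nested` query can now wrap another `nested` query on a deeper path, and the inner query picks up its parent filter from the nested scope:

```
curl -XGET 'localhost:9200/my_index/_search' -d '{
  "query": {
    "nested": {
      "path": "comments",
      "query": {
        "nested": {
          "path": "comments.replies",
          "query": { "match": { "comments.replies.text": "elasticsearch" } }
        }
      }
    }
  }
}'
```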
Closes #9305
Removed the existing `pre_zone` and `post_zone` options in `date_histogram` in favor of
the simpler `time_zone` option. Previously, specifying different values for these could
lead to confusing scenarios where ES would return bucket keys that are not UTC.
Now `time_zone` is the only option: the calculation of date buckets takes place in the
preferred time zone, but after rounding the bucket key values are converted back to UTC.
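For example (a minimal sketch; index and field names are made up), buckets are calculated in the `+01:00` time zone but the returned keys are UTC:

```
curl -XGET 'localhost:9200/logs/_search' -d '{
  "aggs": {
    "per_day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day",
        "time_zone": "+01:00"
      }
    }
  }
}'
```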
Closes #9062
Closes #9637
When asking for term statistics, generating term vectors on the fly or with
`dfs` set to `true`, some requests may take a while, so it is useful to know
exactly how long they took.
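A minimal request sketch (assuming the 1.x `_termvector` endpoint; index, type and id are made up) where term statistics are requested and the time taken can be slow enough to be worth reporting:

```
curl -XGET 'localhost:9200/my_index/my_type/1/_termvector?term_statistics=true'
```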
Closes #9583
This paves the way for more shared code between the `InternalEngine` and
`ShadowEngine` by way of the abstract `Engine` class. No actual
functionality has been changed.
Negative settings for interval in date_histogram could lead to OOM errors in conjunction
with min_doc_count=0. This fix raises exceptions in the histogram builder and the
TimeZoneRounding classes so that the query fails before this can happen.
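A request like the following (a sketch with made-up index and field names) now fails with an exception instead of potentially exhausting the heap:

```
curl -XGET 'localhost:9200/logs/_search' -d '{
  "aggs": {
    "per_interval": {
      "date_histogram": {
        "field": "timestamp",
        "interval": -1000,
        "min_doc_count": 0
      }
    }
  }
}'
```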
Closes #9634
Closes #9690
Today we sometimes have to transfer files without verifying the checksum,
i.e. if the file had an old adler32 checksum but was written using random access
such that we can only verify the file's length. We will likely
not detect corruptions there, and with the new checks during recovery finalization
we might run into corrupt index exceptions in that stage. This causes
the primary to be failed as well, since we don't handle the exception today. This commit
adds better handling and a test for this scenario.
When an index is deleted we wait on all nodes to ack the delete. Data nodes are expected to ack both the removal of the index from their IndicesService and the deletion of the store from disk. At the moment all nodes send this ack, which causes wrong counting on the master side. On top of this, we currently log an unneeded WARN message when client nodes try to acquire locks but do not have a data folder.
Relates to #9605
Closes #9672
Groovy was disabled by default, but we turn it on in our test infra. We can then declare support for it, so that script-related tests get executed as part of the REST tests suite.
With #9629 we introduced REST spec validation, which barfs whenever the REST spec doesn't follow the defined conventions. That said, we sometimes execute tests against previous branches and tags whose spec needs fixing, but we can't go back and fix them. We now support the `-Dtests.rest.validate_spec` system property, which allows turning off REST spec validation (enabled by default) so that we can still run tests against old branches/tags.
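For example (the exact invocation depends on how the REST tests are run; the relevant bit is the system property):

```
mvn test -Dtests.rest.validate_spec=false
```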
When multiple fields under object fields share the same name, accessing
by short name is ambiguous. This removes support for short names,
always requiring the full name when used in queries.
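For example (a sketch with made-up index and field names), if both the `author` and `editor` object fields contain a `name` sub-field, the short name `name` is ambiguous and the full path must be used:

```
curl -XGET 'localhost:9200/books/_search' -d '{
  "query": { "match": { "author.name": "smith" } }
}'
```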
closes #8872
Enabling GC logging now works by setting the environment variable ES_GC_LOG_FILE
to the full path of the GC log file. Missing directories will be created as needed.
The ES_USE_GC_LOGGING environment variable is no longer used.
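For example (the log file path is just an illustration):

```
export ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log
bin/elasticsearch
```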
Closes #8471
Closes #8479
This moves the rule so it is made available in the test.jar. In
addition, you can now specify the exception that triggers a rerun
of the test, in order to make the rule reusable for others.
Also ensured that the NettyTransportTest frees all resources inside
its test method instead of in pre/post run methods, as those
are still called only once, even though a failed test might be repeated.
At the moment we sometimes submit generic runnables, which makes life slightly harder when generating the pending task list, which has to account for them. This commit adds an abstract TimedPrioritizedRunnable class which should always be used instead. This class also automatically measures time in queue, which is needed for the pending task reporting.
Relates to #8077
Closes #9354
Closes #9671
Using '_cat/segments' or the indices segments api without matching any index
now returns an empty result instead of throwing an IndexMissingException.
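For example (the index pattern is made up), both of these now return an empty response instead of an error when nothing matches:

```
curl -XGET 'localhost:9200/_cat/segments/does-not-exist-*'
curl -XGET 'localhost:9200/does-not-exist-*/_segments'
```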
Closes #9219
Aggregators now return a new collector instance per segment, like Lucene 5 does
with its oal.search.Collector API. This is important for us because things like
whether a field is single- or multi-valued are only known at the segment
level.
In order to do that I had to change aggregators to notify their sub aggregators
of new incoming segments (pretty much in the spirit of #6477), whereas everything
used to be centralized in the AggregationContext class. While this might slow
down deeply nested aggregation trees a bit, it also makes the `children`
aggregation and the `breadth_first` collection mode much better options, since
they can now replay only what they need instead of having to replay the
whole aggregation tree.
I also took advantage of this big refactoring to remove some abstractions that
were not really required like ValuesSource.MetaData or BucketAnalysisCollector.
I also split Aggregator into Aggregator and AggregatorBase in order to
separate the Aggregator API from implementation helpers.
Close #9544
Whenever we have an api that supports GET with a body, we always support the POST method too, as well as providing the body as a query_string parameter called `source`. Our REST spec should reflect this convention. Fixed them and introduced a hard check at parse time in our Java REST tests runner, which will cause the tests to fail if the spec is not compliant.
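For example, the following three requests are equivalent under this convention (shown against `_search` for illustration):

```
curl -XGET 'localhost:9200/_search' -d '{"query": {"match_all": {}}}'
curl -XPOST 'localhost:9200/_search' -d '{"query": {"match_all": {}}}'
curl -G 'localhost:9200/_search' --data-urlencode 'source={"query": {"match_all": {}}}'
```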
Closes #9629
As a CliTool command could potentially also delete files, the
CheckFileCommand needs to check if those files exist, before
trying to get permissions/owners/groups from that path.
When using the CLI tool infrastructure, a command can potentially write
a new file. In case it overwrites an existing one, you may want to ensure
that the permissions, the owner and the group are kept the same and do not
accidentally change when overwriting those files.
This PR introduces a command that allows you to execute this check per path.
It also adds a new testing dependency, namely jimfs, which allows you to create
in-memory filesystems with certain properties (like supporting posix permissions
or not), so that you can test those features without having to run the
tests on a specific operating system.
The FileSystemUtils class has a helper method to create files with
a .new suffix in case the file that should be created already
exists. If you install plugins and those have configuration files,
even without changes, you will end up with tons of .new files.
This commit checks the file size and sha-256 sum, and only creates
a .new file if those differ.
On CI machines node recovery sometimes takes up to 2 seconds. When that happens, an update cluster state task gets stuck behind the recovery and tests fail with a 1 second timeout. This commit makes sure that we wait for the recovery to complete before starting the clock.
This has been very trappy. Rather than continue to allow the buggy behavior
of having upgrade/optimize requests sidestep the single-shard-per-node
limits that optimize is supposed to be subject to, this removes
the ability to run upgrade/optimize asynchronously.
closes #9638