Change bucket key_as_string to reflect the `time_zone` parameter. Currently `time_zone`
shifts the bucket boundaries to the given time zone, but keys are still displayed in UTC, so e.g.
daily buckets in the "+01:00" time zone have a key_as_string like "2014-01-01T23:00:00Z". With this
change the default is to format these dates according to the local time zone, so the
above bucket key would be "2014-01-02T00:00:00+01:00".
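A minimal sketch of the behaviour, assuming a hypothetical `logs` index with a `timestamp` date field:

```sh
# Hypothetical index/field names; daily buckets computed in the +01:00 time zone.
curl -XGET 'localhost:9200/logs/_search?search_type=count&pretty' -d '{
  "aggs": {
    "per_day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day",
        "time_zone": "+01:00"
      }
    }
  }
}'
# Before this change the buckets were keyed as e.g. "2014-01-01T23:00:00Z";
# now key_as_string is formatted in the local time zone, e.g. "2014-01-02T00:00:00+01:00".
```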
Closes #9710
Closes #9744
Today we trash everything that has been indexed but not flushed to disk
if the engine is closed. This might not be desired if we are shutting down a
node for a restart / upgrade or if we close / archive an index. In such a
case we would like to flush the transaction log and commit everything to
disk. This commit adds a flag to the close method that is set on close
and shutdown but not when we remove the shard due to relocations.
We are using repository ids with spaces in our `pom.xml`. Although it's not forbidden, a common practice is to avoid spaces in ids.
This commit changes codehaus snapshots and lucene snapshots to a consistent naming (using a dash, all lowercase).
We also add a name which is used by Maven when displaying information about the repository.
This naming is also consistent with the [elasticsearch-parent project](https://github.com/elasticsearch/elasticsearch-parent) which will be used in the near future in the 1.x and master branches.
**Important note**: If you have trouble compiling elasticsearch or a plugin using `mvn compile` and hit an `Access denied to: [URL_HERE], ReasonPhrase: Forbidden. -> [Help 1]` error, you can remove the related maven files:
```sh
find ~/.m2/repository -name _remote.repositories -exec rm -v {} \;
find ~/.m2/repository -name _maven.repositories -exec rm -v {} \;
```
Another option is to tell Maven not to use those files with `--llr`:
```sh
mvn compile --llr
```
The nested scope is set by any nested feature, so that sub nested queries and filters know about their context and can construct the right parent filter.
Removed the LateBindingParentFilter workaround in the nested query parser in favour of the nested scope maintained in the query parse context.
Due to this change nested queries and filters can now also be included in nested sorting and inner hits, because those features now use the nested scope as well (see the sketch below).
This change doesn't fix the usage of nested filters in `nested` and `reverse_nested` aggregations. The `nested` filter shouldn't be used inside these aggregations; instead the `nested` and `reverse_nested` aggs should be used to query on the right level. In a follow-up change, a `nested` filter inside a `nested` or `reverse_nested` aggregation should result in a parse error.
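As mentioned above, nested filters can now be used inside nested sorting. A minimal sketch, assuming a hypothetical `products` index where `offers` is a nested field containing a further nested `offers.history` field:

```sh
# Hypothetical index, fields and sort options; a nested filter inside nested sorting,
# which the nested scope change makes possible.
curl -XGET 'localhost:9200/products/_search?pretty' -d '{
  "query": { "match_all": {} },
  "sort": [
    {
      "offers.price": {
        "order": "asc",
        "nested_path": "offers",
        "nested_filter": {
          "nested": {
            "path": "offers.history",
            "filter": { "term": { "offers.history.active": true } }
          }
        }
      }
    }
  ]
}'
```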
Closes #9305
Another InternalHistogram instance can be passed into the method along with the buckets and the name, and will be used to set all the other options such as minDocCount, formatter, order, etc.
Removed the existing `pre_zone` and `post_zone` options in `date_histogram` in favor of
the simpler `time_zone` option. Previously, specifying different values for these could
lead to confusing scenarios where ES would return bucket keys that are not UTC.
Now `time_zone` is the only option, setting the calculation of date buckets to take place in the
preferred time zone; after rounding, the bucket key values are converted back to UTC.
Closes #9062
Closes #9637
When asking for term statistics, generating term vectors on the fly or with
`dfs` set to `true`, some requests may take a while, so it is useful to know
exactly how long.
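For example (hypothetical index and document; assuming term statistics and `dfs` can be passed as URL parameters to the term vectors endpoint):

```sh
# Request term statistics with distributed frequencies; the response now also
# reports how long the request took.
curl -XGET 'localhost:9200/twitter/tweet/1/_termvector?term_statistics=true&dfs=true&pretty'
```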
Closes #9583
This paves the way for more shared code between the `InternalEngine` and
`ShadowEngine` by way of the abstract `Engine` class. No actual
functionality has been changed.
Negative settings for interval in date_histogram could lead to OOM errors in conjunction
with min_doc_count=0. This fix raises exceptions in the histogram builder and the
TimeZoneRounding classes so that the query fails before this can happen.
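A sketch of a request that now fails up front instead of potentially exhausting memory (index, field and the exact form of the negative interval are illustrative):

```sh
# Hypothetical names; a negative interval combined with min_doc_count=0 is now rejected
# with an exception instead of trying to build an unbounded number of empty buckets.
curl -XGET 'localhost:9200/logs/_search?search_type=count&pretty' -d '{
  "aggs": {
    "broken": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "-1d",
        "min_doc_count": 0
      }
    }
  }
}'
```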
Closes #9634
Closes #9690
Today we sometimes have to transfer files without verifying the checksum,
i.e. if the file had an old adler32 checksum but was written using random access
such that we can only verify the file's length. We will likely
not detect corruptions there, and with the new checks during recovery finalization
we might run into corrupt index exceptions at that stage. This causes
the primary to be failed as well since we don't handle the exception today. This commit
adds better handling and a test for this scenario.
When an index is deleted we wait on all nodes to ack the delete. Data nodes are expected to both ack the removal of the index from their IndicesService and also the deletion of the store from disk. At the moment all nodes send this ack, which causes wrong counting on the master side. On top of this, we currently have an unneeded WARN message in the logs when client nodes try to acquire locks but do not have a data folder.
Relates to #9605
Closes #9672
Groovy was disabled by default, but we turn it on in our test infra. We can then declare support for it so that we execute script-related tests as part of the REST test suite.
With #9629 we introduced REST spec validation, which barfs whenever the REST spec doesn't follow the defined conventions. That said, we sometimes execute tests against previous branches and tags which have specs that need fixing, but we can't go back and fix them. We now support the `-Dtests.rest.validate_spec` system property, which allows turning off REST spec validation (enabled by default) so that we can still run tests against old branches/tags.
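For example, when running the REST tests against an older branch (the rest of the Maven command line is whatever you normally use to run them):

```sh
# Disable REST spec validation for this run.
mvn test -Dtests.rest.validate_spec=false
```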