Date math rounding currently works by rounding the date up or down based
on the scope of the rounding. For example, if you have the date
`2009-12-24||/d` it will round down to the inclusive lower end
`2009-12-24T00:00:00.000` and round up to the non-inclusive date
`2009-12-25T00:00:00.000`.
The range endpoint semantics work as follows:
* `gt` - round D down, and use > that value
* `gte` - round D down, and use >= that value
* `lt` - round D down, and use < that value
* `lte` - round D up, and use <= that value
There are 2 problems with these semantics:
* `lte` ends up including the upper value, which should be non-inclusive
* `gt` only excludes the beginning of the date, not the entire rounding scope
This change makes the range endpoint semantics symmetrical. First, it
changes the parser to round up and down using the first (same as before)
and last (1 ms less than before) values of the rounding scope. This
makes both rounded endpoints inclusive. The range endpoint semantics
are then as follows:
* `gt` - round D up, and use > that value
* `gte` - round D down, and use >= that value
* `lt` - round D down, and use < that value
* `lte` - round D up, and use <= that value
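A minimal sketch of the new, symmetric rounding in plain `java.time` (the standalone class and names are illustrative, not the actual date-math parser):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class DateMathRounding {
    public static void main(String[] args) {
        // Rounding scope of "2009-12-24||/d" is one whole day.
        LocalDateTime down = LocalDate.parse("2009-12-24").atStartOfDay();
        LocalDateTime up = down.plusDays(1).minusNanos(1_000_000); // last ms of the day

        System.out.println(down); // 2009-12-24T00:00        -> used by gte and lt
        System.out.println(up);   // 2009-12-24T23:59:59.999 -> used by gt and lte
    }
}
```

With both rounded endpoints inclusive, `gt 2009-12-24||/d` now excludes the whole day and `lte 2009-12-24||/d` no longer leaks into the next one.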
Closes #8424, closes #8556
We speak of the term vectors of a document, where each field has an associated
stored term vector. Since by default we are requesting all the term vectors of
a document, the HTTP request endpoint should rather be called `_termvectors`
instead of `_termvector`. The usage of `_termvector` is now deprecated, as are
the transport client calls `termVector` and `prepareTermVector`.
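A hedged usage sketch of the renamed call (the builder options shown are assumptions about the 1.x Java API, not taken from this change):

```java
// Deprecated endpoint and client call:
//   GET /my_index/my_type/1/_termvector
//   client.prepareTermVector("my_index", "my_type", "1").get();

// New plural forms:
//   GET /my_index/my_type/1/_termvectors
client.prepareTermVectors("my_index", "my_type", "1")
      .setPositions(true) // assumed builder options, shown for illustration
      .setOffsets(true)
      .get();
```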
Closes #8484
Today there is a race condition between the actual deletion of
the shard and the release of the lock in the store. This race can cause
rare imports of dangling indices if the cluster state update loop
tries to import the dangling index in that particular window. This commit
adds more safety to the import of dangling indices and removes the race
condition by holding on to the lock during store closing until the listener
has been notified.
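A minimal sketch of the ordering fix with a plain `ReentrantLock` (illustrative names, not the actual Store/IndicesService API):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ShardDeleteOrdering {
    private final ReentrantLock shardLock = new ReentrantLock();

    void deleteShard(Runnable onShardDeleted) {
        shardLock.lock();
        try {
            // ... delete the shard files here ...
            onShardDeleted.run(); // listener notified while the lock is still held
        } finally {
            // only now can the cluster state update loop acquire the lock,
            // so it can no longer observe a half-deleted shard as dangling
            shardLock.unlock();
        }
    }
}
```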
Since v1.3.0 and issue #6201, the default values in code and documentation have been updated to 85% and 90% for the low and high watermarks. However, the related javadoc still contains the initial values; this commit fixes that.
make the "es090" postings format read-only, just to support old segments. There is a test version that subclasses it with write-capability for testing.
Closes #8571
Today we hold on to search context references that are not cleaned
up for a while, until a reaper thread trashes them once they time out.
This commit removes all pending contexts when the index is closed, releasing
resources and file handles immediately.
In order to implement #8551 correctly without causing problems with relocating
shards, we need to be informed when an index is actually deleted. This commit adds
more callbacks to the listener and makes deleteIndex a dedicated method on IndicesService.
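A hypothetical shape of the extra callbacks (method names are assumptions, not the actual listener interface):

```java
// Hypothetical listener: implementations can now tell an index that is merely
// closed (e.g. after shard relocation) apart from one that is actually deleted.
public interface IndexLifecycleListener {
    void beforeIndexClosed(String index);
    void afterIndexClosed(String index);
    void beforeIndexDeleted(String index); // new: fired only on real deletion
    void afterIndexDeleted(String index);  // new
}
```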
Added a JUnit test that recreates the error, and fixed FilterParser to default to
a MatchAllDocsFilter when the requested filter clause is left empty.
Also added a fix and test for the Filters (with an "s") aggregation.
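A minimal sketch of the fallback, assuming the inner parse yields `null` for an empty clause (the parse step is illustrative; `MatchAllDocsFilter` is the class named above):

```java
// An empty `filter` clause used to yield a null filter and blow up later;
// defaulting to match-all makes the empty clause match every document.
Filter filter = parseInnerFilter(parser); // illustrative parse step
if (filter == null) {
    filter = new MatchAllDocsFilter();
}
```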
Closes #8438
_all reports a conflict since #7377. However, it was not checked whether _all
was actually configured in the updated mapping. Therefore, whenever _all
was disabled, a mapping could not be updated unless _all was added back to the
updated mapping.
Also, the enabled setting is now always added to the mapping whenever enabled was set explicitly.
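A hypothetical sketch of the missing guard (the names are illustrative, not the actual mapper merge code):

```java
// Only compare the _all settings when the incoming mapping actually configures
// _all; a mapping update that does not mention _all must not be a conflict.
if (mergeWith.allEnabledSet() && mergeWith.allEnabled() != current.allEnabled()) {
    conflicts.add("mapper [_all] enabled is " + current.allEnabled()
            + ", cannot be changed to " + mergeWith.allEnabled());
}
```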
Closes #8423, closes #8426
This aggregation creates an anonymous fielddata instance that takes geo points
and turns them into a geo hash encoded as a long. A bug was introduced in 1.4
because of a fielddata refactoring: the fielddata instance tries to populate
an array with values without first making sure that it is large enough.
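A self-contained sketch of the fix: grow the backing array before writing to it, instead of assuming it is already large enough:

```java
import java.util.Arrays;

public class GrowBeforeWrite {
    public static void main(String[] args) {
        long[] values = new long[1]; // deliberately starts too small
        int count = 0;
        for (long geohash : new long[] {0x5f3ab1L, 0x5f3ab2L, 0x5f3ab3L}) {
            if (count == values.length) {
                values = Arrays.copyOf(values, values.length * 2); // grow first
            }
            values[count++] = geohash; // the buggy code wrote without growing
        }
        System.out.println(count + " geo hashes stored");
    }
}
```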
Close #8507
Since aggregators are only called on documents that match the query, they never
get called on deleted documents, so by specifying `null` as live docs, we very
likely remove a BitsFilteredDocIdSet layer.
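In Lucene terms, assuming `BitsFilteredDocIdSet.wrap` keeps its documented behavior of returning the set unchanged when the accepted-docs bits are `null`:

```java
// With non-null live docs the DocIdSet is wrapped in an extra filtering layer;
// with null it is returned as-is and that layer disappears.
DocIdSet filtered  = BitsFilteredDocIdSet.wrap(docIdSet, reader.getLiveDocs());
DocIdSet unwrapped = BitsFilteredDocIdSet.wrap(docIdSet, null); // no extra layer
```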
Close #8540
Add the randomized locale and timezone to the repro command line.
The carrot runner currently randomizes both locale and timezone, but
these are not set in the Maven reproduce line. Since they aren't
even printed, we have no idea which locale and timezone the tests
actually ran with.
Percolator queries and index alias filters are parsed once and reused for as long as they exist on a node. If they contain time-based range filters with a `now` expression, then the alias filters and percolator queries are incorrect from the moment they are constructed (depending on the date rounding).
If a range filter or range query is constructed as part of adding a percolator query or an index alias filter, it now gets wrapped in a special query or filter wrapper that defers the resolution of `now` to the last possible moment, as opposed to parse time. In the case of the range filter, a special resolvable filter makes sure that `now` is resolved when the DocIdSet is pulled, and in the case of the range query, `now` is resolved at query rewrite time. Both occur at the time the range filter or query is used, as opposed to when it is constructed during parsing.
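A minimal self-contained sketch of the deferral idea (the class and names are illustrative, not the actual wrappers): capture a supplier of `now` at parse time and only invoke it when the filter or query actually runs:

```java
import java.util.function.LongSupplier;

class NowRelativeRange {
    private final LongSupplier nowInMillis;  // resolved lazily, not at parse time
    private final long lookbackMillis;       // e.g. 3_600_000 for "now-1h"

    NowRelativeRange(LongSupplier nowInMillis, long lookbackMillis) {
        this.nowInMillis = nowInMillis;
        this.lookbackMillis = lookbackMillis;
    }

    boolean matches(long timestamp) {
        long now = nowInMillis.getAsLong(); // evaluated at execution time
        return timestamp >= now - lookbackMillis;
    }
}
```

A percolator query or alias filter parsed once this way keeps matching `now-1h` correctly days later, because `now` is re-read on every execution.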
Closes #8474, closes #8534
Today recovery sources are not cancelled when a shard is closed. The recovery target
is already cancelled when shards are closed, but we should also clean up and cancel
the source side, since it holds on to shard locks / references until it's closed.
During node shutdown we have a race condition between processing cluster state updates (creating shards) and closing down the index service. This may cause shards to leak and not be closed properly.
This commit removes the concurrency in shard closing as InternalIndexService.removeShard has been synchronized for a long time now.
On the other hand, the commit restores the parallel shutdown of indices that was lost in 7e1d8a6ca3.
Closes #8557
When `auto_import_dangled` was set to yes, dangling indices were deleted by mistake,
because a RemoveDanglingIndices runnable was added
for every dangling index, without considering the auto_import_dangled
setting.
PR #8464 came with bugs in the example provided.
First, the current log file is not compressed, so it should not end with `.gz`.
Second, the conversion pattern was stripping all the log content and printing only the log date.
Third, the current log filename was hardcoded to `elasticsearch` instead of the cluster name.
Also make LogConfigurator#ALLOWED_SUFFIXES package private so that it can be used in LoggingConfigurationTests, now that it's in the same package as the class that it tests.
Add a few randomized aspects to LoggingConfigurationTests.
Make sure that files such as logging.yml.rpmnew or logging.yml.bak are not loaded as logging configuration.
Only files that start with the "logging." prefix and end with a ".yaml", ".yml", ".json", or ".properties" suffix get loaded.
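A self-contained sketch of the filename check described above (the helper class is illustrative, not the actual LogConfigurator code):

```java
public class LoggingConfigFilenames {
    static final String[] ALLOWED_SUFFIXES = {".yml", ".yaml", ".json", ".properties"};

    static boolean isLoggingConfig(String fileName) {
        if (!fileName.startsWith("logging.")) {
            return false;
        }
        for (String suffix : ALLOWED_SUFFIXES) {
            if (fileName.endsWith(suffix)) {
                return true; // logging.yml, logging.json, ...
            }
        }
        return false; // logging.yml.rpmnew, logging.yml.bak, ...
    }

    public static void main(String[] args) {
        System.out.println(isLoggingConfig("logging.yml"));        // true
        System.out.println(isLoggingConfig("logging.yml.rpmnew")); // false
    }
}
```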
Closes #7457
Interrupting a thread while it blocks on an NIO read / write operation
can cause a file to be closed due to the interrupt. This can have unpredictable
effects when files are held open by index readers etc., so we should prevent interruptions
across the board if possible.
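The underlying JDK behavior, as a hedged sketch (the wrapper class is illustrative): a thread interrupted while blocked in a `FileChannel` operation gets `ClosedByInterruptException`, and the channel is closed for every user, not just the interrupted thread:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;

class InterruptUnsafeRead {
    static int read(FileChannel channel, ByteBuffer buffer) throws IOException {
        try {
            return channel.read(buffer);
        } catch (ClosedByInterruptException e) {
            // the JDK already closed the channel as a side effect of the
            // interrupt -- any index reader sharing it is now broken too
            throw e;
        }
    }
}
```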
Closes #8494
We need to register those data paths, otherwise we might miss paths that
need to get cleaned when using the local gateway etc., which can otherwise
cause imports of dangling indices.
This commit fixes the issue caused by the restore process deleting all legacy checksum files at the end of the restore. Instead, it now keeps the latest version of the checksums intact. The issue manifested itself in losing the checksums for all legacy files restored into a post-1.3.0 cluster, which in turn caused unnecessary snapshotting of files that didn't change.
Fixes #8119
Now each error is reported in the bulk response rather than causing the entire bulk to fail.
Added a JUnit test, but note that the use of the TransportClient means the error manifests differently from a REST-based request: instead of a NullPointerException, the whole bulk request failed with a RoutingMissingException. Changed TransportBulkAction to catch this exception and treat it the same as the existing logic for an ElasticsearchParseException: the individual bulk request items are flagged and reported individually rather than failing the whole bulk request.
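A minimal sketch of the per-item behavior (the types and helpers are hypothetical, not the actual TransportBulkAction code):

```java
// One failing item is flagged in its slot of the response array;
// the loop keeps going instead of failing the whole bulk request.
for (int i = 0; i < items.size(); i++) {
    try {
        responses[i] = executeItem(items.get(i));
    } catch (Exception e) { // e.g. the RoutingMissingException case above
        responses[i] = failureResponseFor(items.get(i), e);
    }
}
```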
Closes #8365