Since v1.3.0 and issue #6201, the default values in code and documentation have been updated to 85% and 90% for the low and high watermarks. However, the related javadoc still contains the initial values; this commit fixes that.
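For reference, a minimal sketch of the two settings in question, set explicitly via the 1.x settings API (the keys are the real cluster settings; the surrounding class is illustrative):

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class WatermarkDefaults {
    public static void main(String[] args) {
        // The disk-based shard allocation watermarks whose defaults are now 85% / 90%.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.routing.allocation.disk.watermark.low", "85%")
                .put("cluster.routing.allocation.disk.watermark.high", "90%")
                .build();
        System.out.println(settings.get("cluster.routing.allocation.disk.watermark.high"));
    }
}
```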
Make the "es090" postings format read-only, just to support old segments. There is a test version that subclasses it with write capability for testing.
Closes #8571
Today we hold on to search context references if they are not cleaned up for a while, until a reaper thread trashes them once they have timed out. This commit removes all pending contexts once the index is closed, so that resources and file handles are released immediately.
In order to implement #8551 correctly without causing problems with relocating shards, we need to be informed when an index is actually deleted. This commit adds more callbacks to the listener and makes deleteIndex a dedicated method on IndicesService.
Added a JUnit test that recreates the error and fixed FilterParser to default to a MatchAllDocsFilter if the requested filter clause is left empty.
Also added a fix and test for the Filters (with an "s") aggregation.
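A minimal sketch of the fallback, assuming the ES 1.x MatchAllDocsFilter helper (the method name is illustrative, not the actual FilterParser code):

```java
import org.apache.lucene.search.Filter;
import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;

// Treat an empty filter clause as match-all instead of returning null
// and failing later.
static Filter orMatchAll(Filter parsedFilter) {
    return parsedFilter != null ? parsedFilter : new MatchAllDocsFilter();
}
```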
Closes #8438
_all reports a conflict since #7377. However, it was not checked whether _all was actually configured in the updated mapping. Therefore, whenever _all was disabled, a mapping could not be updated unless _all was added back to the updated mapping.
Also, always add the enabled setting to the mapping whenever enabled was set explicitly.
Closes #8423
Closes #8426
This aggregation creates an anonymous fielddata instance that takes geo points
and turns them into a geo hash encoded as a long. A bug was introduced in 1.4
because of a fielddata refactoring: the fielddata instance tries to populate
an array with values without first making sure that it is large enough.
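A sketch of the general fix pattern, using Lucene's ArrayUtil to size the array before writing (field and encoder names are hypothetical):

```java
import org.apache.lucene.util.ArrayUtil;

long[] values = new long[0];

void setDocument(int docId, int valueCount) {
    // Ensure capacity before populating; this is the check the
    // fielddata refactoring had lost.
    values = ArrayUtil.grow(values, valueCount);
    for (int i = 0; i < valueCount; i++) {
        values[i] = geoHashAsLong(docId, i); // hypothetical encoder
    }
}
```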
Close #8507
Since aggregators are only called on documents that match the query, they never get called on deleted documents, so by specifying `null` as live docs we very likely remove a BitsFilteredDocIdSet layer.
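Concretely, with the Lucene 4.x Filter API (sketch only):

```java
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;

// Pass null instead of context.reader().getLiveDocs(): deleted documents
// are already excluded by the query, and a null acceptDocs lets Lucene
// skip the BitsFilteredDocIdSet wrapper.
DocIdSet docIdSet = filter.getDocIdSet(context, null);
```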
Close #8540
Add locale and timezone to the repro command line.
The carrot runner currently randomizes both locale and timezone, but these are not set in the maven reproduce line. Since they aren't even printed, we have no idea which locale/timezone the tests actually ran with.
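For illustration, a reproduce line carrying the two extra properties might look like this (seed and values are made up):

```
mvn test -Dtests.seed=1A2B3C4D -Dtests.locale=tr-TR -Dtests.timezone=Asia/Tokyo
```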
Percolator queries and index alias filters are parsed once and reused for as long as they exist on a node. If they contain time-based range filters with a `now` expression, then the alias filters and percolator queries are going to be incorrect from the moment they are constructed (depending on the date rounding).
If a range filter or range query is constructed as part of adding a percolator query or an index alias filter, then it gets wrapped in a special query or filter wrapper that defers the resolution of `now` to the last possible moment instead of parse time. In the case of the range filter, a special resolvable filter makes sure that `now` is resolved when the DocIdSet is pulled, and in the case of the range query, `now` is resolved at query rewrite time. Both occur at the time the range filter or query is used, as opposed to when it is constructed during parse time.
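A minimal sketch of the deferred-resolution idea (class and factory names are hypothetical, not the actual implementation):

```java
import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;

// Rebuild the wrapped range filter when the DocIdSet is pulled, so `now`
// is evaluated at execution time rather than at parse time.
public class ResolvableNowFilter extends Filter {
    interface FilterFactory { Filter create(); } // re-parses the range with a fresh `now`

    private final FilterFactory factory;

    public ResolvableNowFilter(FilterFactory factory) {
        this.factory = factory;
    }

    @Override
    public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException {
        return factory.create().getDocIdSet(context, acceptDocs);
    }
}
```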
Closes #8474
Closes #8534
Today recovery sources are not cancelled if a shard is closed. The recovery target is already cancelled when shards are closed, but we should also clean up and cancel the source side, since it holds on to shard locks / references until it's closed.
During node shutdown we have a race condition between processing cluster state updates (creating shards) and closing down the index service. This may cause shards to leak and not be closed properly.
This commit removes the concurrency in shard closing, as InternalIndexService.removeShard has been synchronized for a long time now.
On the other hand, the commit restores the parallel shutdown of indices that was lost in 7e1d8a6ca3.
Closes #8557
When auto_import_dangled was set to yes, dangling indices were deleted by mistake, because a RemoveDanglingIndices runnable was added for every dangling index, without considering the auto_import_dangled setting.
PR #8464 came with a bug in the example provided.
First, the current log file is not compressed, so it should not end with `.gz`.
Second, the conversion pattern was stripping all the log content and printing only the log date.
Third, the current log filename was hardcoded to `elasticsearch` instead of the cluster name.
Also make LogConfigurator#ALLOWED_SUFFIXES package-private so that it can be used in LoggingConfigurationTests, now that it's in the same package as the class it tests.
Add a few randomized aspects to LoggingConfigurationTests.
Make sure that files such as logging.yml.rpmnew or logging.yml.bak are not loaded as logging configuration.
Only files that start with the "logging." prefix and end with a ".yaml", ".yml", ".json" or ".properties" suffix get loaded.
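A simplified sketch of that check (the real constant lives in LogConfigurator; the method name is illustrative):

```java
static final String[] ALLOWED_SUFFIXES = {".yml", ".yaml", ".json", ".properties"};

static boolean isLoggingConfig(String fileName) {
    if (!fileName.startsWith("logging.")) {
        return false;
    }
    for (String suffix : ALLOWED_SUFFIXES) {
        if (fileName.endsWith(suffix)) {
            return true; // logging.yml is loaded ...
        }
    }
    return false; // ... but logging.yml.bak or logging.yml.rpmnew is not
}
```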
Closes #7457
Interrupting a thread that is blocked in an NIO read / write operation can cause a file to be closed due to the interrupt. This can have unpredictable effects when files are held open by index readers etc., so we should prevent interruptions across the board if possible.
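A minimal illustration of the failure mode (plain JDK, not ES code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        final FileChannel channel = FileChannel.open(Paths.get("some.file"), StandardOpenOption.READ);
        Thread reader = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    channel.read(ByteBuffer.allocate(1024));
                } catch (ClosedByInterruptException e) {
                    // The interrupt closed the channel underneath us; every
                    // other user of this channel is now broken too.
                } catch (IOException e) {
                    // unrelated I/O failure
                }
            }
        });
        reader.start();
        reader.interrupt(); // triggers ClosedByInterruptException in the reader
        reader.join();
        System.out.println("channel still open? " + channel.isOpen()); // false
    }
}
```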
Closes #8494
We need to register those data paths, otherwise we might miss paths that need to get cleaned when using the local gateway etc., which can otherwise cause imports of dangling indices.
This commit fixes an issue caused by the restore process deleting all legacy checksum files at the end of the restore. Instead, it now keeps the latest version of the checksums intact. The issue manifests itself in losing checksums for all legacy files restored into a post-1.3.0 cluster, which in turn causes unnecessary snapshotting of files that didn't change.
Fixes #8119
Now each error is reported in the bulk response rather than causing the entire bulk to fail.
Added a JUnit test, but the use of TransportClient means the error manifests differently from a REST-based request: instead of a NullPointerException, the whole bulk request failed with a RoutingMissingException. Changed TransportBulkAction to catch this exception and treat it the same as the existing logic for an ElasticsearchParseException: the individual bulk request items are flagged and reported individually rather than failing the whole bulk request.
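A schematic sketch of that per-item handling (helper names and item types are hypothetical; the two exception classes are real ES types):

```java
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.RoutingMissingException;

// Inside TransportBulkAction's loop over bulk items: convert a per-item
// failure into a per-item response instead of failing the whole request.
for (int i = 0; i < items.size(); i++) {
    BulkItem item = items.get(i); // hypothetical item type
    try {
        resolveRoutingFor(item); // hypothetical per-item validation step
    } catch (ElasticsearchParseException | RoutingMissingException e) {
        markItemFailed(responses, i, item, e); // hypothetical: record the failure
        continue; // the remaining bulk items still execute
    }
    execute(item); // hypothetical
}
```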
Closes #8365