Load tests showed that SerialMS has trouble keeping up with
the merges under high load. We should switch back to CMS
until we have a better story for balancing merge
threads / effort across shards on a single node.
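As a hedged illustration (the setting key and the `client` instance are assumptions for this sketch, not taken from this change), selecting the concurrent scheduler per index could look roughly like this:

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Sketch: pin an index to the concurrent merge scheduler instead of SerialMS.
// The setting name below is illustrative; check the merge module docs for the exact key.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("index.merge.scheduler.type", "concurrent")
        .build();
client.admin().indices().prepareCreate("my_index")
        .setSettings(settings)
        .execute().actionGet();
```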
Closes#5817
If a new field was introduced in the local mapping service
during percolation, then those changes should be propagated
to the cluster state on the master as well.
Closes#5776
This method had some workarounds for bugs that seem to be fixed
in Java 7 [1]. There seem to be other problems on shared file-systems,
which are not really supported by Lucene anyway, or rather not
recommended. Yet the current solution, which interrupts a static thread
reference, is too dangerous given all the usage of NIO across
Elasticsearch.
[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4742723
This method basically creates as many files as possible, in a
brute-force manner, to find out the process limit. The number of
possible problems with this approach would exceed the number of lines
left in this commit message.
This commit uses a JMX-based alternative to print the process limit.
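A rough sketch of the JMX-based approach (illustrative only, assuming a JVM that exposes `com.sun.management.UnixOperatingSystemMXBean`):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Sketch: read the max file descriptor count from JMX instead of
// brute-forcing it by opening files until the limit is hit.
public class MaxFileDescriptors {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            long max = ((com.sun.management.UnixOperatingSystemMXBean) os).getMaxFileDescriptorCount();
            System.out.println("max file descriptors: " + max);
        } else {
            System.out.println("max file descriptors: not available on this JVM/OS");
        }
    }
}
```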
This prevents executing the bulk's internal auto-create indices logic
and ensures that this internal request never creates an index
automatically.
This fixes a bug where the TTL purger thread ran after the actual
index it was purging had already been closed / deleted, which
re-created that index.
Closes#5766
This is related to LUCENE-5570, where fsync creates a 0-byte file
if the file does not exist. This commit adds the patched Lucene
version using Java 7 APIs, as well as a note to replace this method
with the upcoming IOUtils#fsync in Lucene 4.8.
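A hedged sketch of what such a Java 7 based fsync can look like (not the actual patched method; class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative utility, not the actual patched Lucene method.
final class FsyncSketch {
    // fsync an existing file without creating a 0-byte file when it is missing:
    // opening with only WRITE (no CREATE) throws NoSuchFileException instead of
    // silently creating the file, which is the LUCENE-5570 pitfall.
    static void fsync(Path path) throws IOException {
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.WRITE)) {
            channel.force(true);
        } catch (NoSuchFileException ex) {
            // nothing to sync - the file does not exist
        }
    }
}
```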
This commit cleans up FsImmutableBlobContainer#writeBlob to make
use of Java 7 auto-closing (try-with-resources) and ensures that
the directory the blob was written to is fsynced as well, if possible.
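A minimal sketch of the pattern (assumed names, not the actual FsImmutableBlobContainer code): write the blob with try-with-resources, then fsync the parent directory where the platform allows it.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Illustrative only, not the actual blob container code.
final class BlobWriteSketch {
    static void writeBlob(Path blobPath, InputStream input) throws IOException {
        // try-with-resources makes sure the stream is closed on all paths
        try (InputStream in = input) {
            Files.copy(in, blobPath, StandardCopyOption.REPLACE_EXISTING);
        }
        // fsync the parent directory so the new directory entry is durable, where the
        // platform allows opening a directory for reading (not the case on Windows)
        try (FileChannel dir = FileChannel.open(blobPath.getParent(), StandardOpenOption.READ)) {
            dir.force(true);
        } catch (IOException e) {
            // best effort - directory fsync is not supported everywhere
        }
    }
}
```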
For resources that have their lifetime effectively defined by the search
context they are attached to, it is convenient to use the search context to
schedule the release of such resources.
This commit changes aggregations to use this mechanism and also introduces
a `Lifetime` object that can be used to define how long the object should
live (see the sketch below):
- COLLECTION: if the object only needs to live during collection time; this is
what SearchContext.addReleasable would have chosen before this change
(used for p/c queries),
- SEARCH_PHASE: for resources that only need to live during the current search
phase (DFS, QUERY or FETCH),
- SEARCH_CONTEXT: for resources that need to live until the context is
destroyed.
Aggregators are currently registered with SEARCH_CONTEXT. The reason is that
when using the DFS_QUERY_THEN_FETCH search type, they are allocated during the
DFS phase but only used during the QUERY phase. However, we should fix this in
order to only allocate them during the QUERY phase and use SEARCH_PHASE as a
lifetime.
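A rough sketch of the shape of this API (assumed signatures; the actual SearchContext code may differ):

```java
// Illustrative only: how a releasable might be tied to a given lifetime.
enum Lifetime {
    COLLECTION,     // released once collection is done
    SEARCH_PHASE,   // released at the end of the current phase (DFS, QUERY or FETCH)
    SEARCH_CONTEXT  // released when the search context itself is destroyed
}

// Hypothetical usage inside an aggregator or query:
// searchContext.addReleasable(bigArray, Lifetime.SEARCH_PHASE);
```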
Close#5703
Java 7's AutoCloseable allows resources to be managed more nicely using
try-with-resources statements. Since the semantics of our Releasable interface
are very close to those of Closeable, let's switch to it.
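For example (a sketch, not the actual interface definition), once Releasable extends Closeable it can be used directly in try-with-resources:

```java
import java.io.Closeable;

// Sketch: Releasable as a Closeable so it plays nicely with try-with-resources.
interface Releasable extends Closeable {
    @Override
    void close(); // narrowed: no checked IOException
}

class ExampleUsage {
    static void useReleasable(Releasable resource) {
        try (Releasable r = resource) {
            // ... use the resource; it is released automatically on exit
        }
    }
}
```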
Close#5689
When a document create is executed and its id is auto-generated (based on UUID), we know that the document will not exist in the index, so there is no need to try to look up the version from the index.
For many cases, like logging, where ids are auto-generated, this can improve indexing performance, specifically for lightweight documents where analysis is not a big part of the execution.
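For example (Java client sketch; the index/type names are made up and `client` is an existing Client instance), indexing without an explicit id lets the id be auto-generated, so the engine can skip the version lookup on the create path:

```java
// Sketch: no id is provided, so a UUID-based id is auto-generated and the
// version lookup in the index can be skipped for the create.
client.prepareIndex("logs", "event")          // no id argument
      .setCreate(true)                        // create semantics
      .setSource("{\"message\":\"hello\"}")
      .execute().actionGet();
```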
Due to `async_merge` defaulting to `true`, we never ran
the merge policy on a segment flush, which prevented the
pending merges from being updated and caused actual
pending merges not to contribute to the merge decision.
This commit also removes the `index.async.merge` setting, which is
actually misleading since we have taken care of merges not being
executed on the indexing threads on a different level (the merge
scheduler) since 1.1.
This commit also adds an additional check for when to run a refresh,
since solely relying on the dirty flag might leave merges un-refreshed,
which can cause search slowdowns and higher memory consumption.
Closes#5779
When we load sparse single-valued data, we automatically assign a missing value to represent a document which has none. We try to find a value that will not increase the number of bits needed to represent the data. If that missing value happens to be 0, we did not properly initialize the value array.
This commit solves this problem and also cleans up the code even more to make spotting such issues easier in the future.
The AppendingDeltaPackedLongBuffer uses delta compression in a paged fashion. For data which is roughly monotonic this results in a reduced memory footprint.
By default we use the storage format expected to use the least memory. You can force a choice using a new field data setting `memory_storage_hint`, which can be set to `ORDINALS`, `PACKED` or `PAGED`.
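A hedged mapping sketch (field name, mapping layout and the lowercase value are illustrative assumptions) of hinting the storage format through the new field data setting:

```java
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Sketch: hint that the "rank" field's data should use the paged storage format.
XContentBuilder mapping = XContentFactory.jsonBuilder()
    .startObject()
      .startObject("my_type")
        .startObject("properties")
          .startObject("rank")
            .field("type", "long")
            .startObject("fielddata")
              .field("memory_storage_hint", "paged")
            .endObject()
          .endObject()
        .endObject()
      .endObject()
    .endObject();
```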
Closes#5706
Add an API endpoint at /_bench for submitting, listing, and aborting
search benchmarks. This API can be used for timing search requests,
subject to various user-defined settings.
Benchmark results provide summary and detailed statistics on such
values as min, max, and mean time. Values are reported per-node so that
it is easy to spot outliers. Slow requests are also reported.
Long running benchmarks can be viewed with a GET request, or aborted
with a POST request.
Benchmark results are optionally stored in an index for subsequent
analysis.
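For instance (a sketch using plain `HttpURLConnection`; the host/port are assumptions), listing currently running benchmarks boils down to a GET on the endpoint:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative client: list currently running benchmarks via GET /_bench.
public class ListBenchmarks {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/_bench");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```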
Closes#5407
In some cases, when we have a lot of docs spread across lots of shards,
recovery takes longer than 1m, causing the tests to fail before all
shards are recovered. This commit raises the timeout in this test to
a maximum of 5m, even though it is rarely needed.
This commit also adds an assertion to ElasticsearchAssertions that
ensures that the cluster health requests are not hitting a timeout.
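A hedged sketch of what such an assertion can look like (helper class and message are illustrative, not the actual ElasticsearchAssertions code):

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import static org.junit.Assert.assertFalse;

// Illustrative helper, not the actual ElasticsearchAssertions method.
final class HealthAssertions {
    // Fail loudly if the cluster health request itself timed out, so a timeout is
    // not silently misread as "all shards recovered".
    static void assertNoTimeout(ClusterHealthResponse response) {
        assertFalse("cluster health request timed out", response.isTimedOut());
    }
}
```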
After a shard fails on a node we assign a new replica on another node. This is important in order to avoid failing again due to node-specific problems. In the rare case where two different replicas of the same shard failed in a short time span, we may fail to do so and assign one of them back to the node it is currently on. This happens if both shard-failed events are processed within the same batch on the master.
Closes#5725