The returned sets are only used for iterating. Therefore we might
as well return a list, since this guarantees order.
This is the same effect as in
https://github.com/elasticsearch/elasticsearch/pull/7698
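A minimal illustration of the point (hypothetical names, not the actual code): callers only iterate over the result, so returning a List instead of a Set keeps the iteration order deterministic regardless of the JVM's hashing settings.

```java
// Hypothetical sketch: a HashSet's iteration order can change with JVM hashing
// settings (e.g. jdk.map.althashing.threshold), while a List preserves the
// insertion order that callers iterate over.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class Fields {
    static List<String> asOrderedList(Collection<String> fields) {
        return new ArrayList<>(fields);
    }
}
```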
The test SimpleIndexQueryParserTests#testQueryStringFieldsMatch
failed on openjdk 1.7.0_65 with
<jdk.map.althashing.threshold>0</jdk.map.althashing.threshold>
closes #7709
As pointed out in #7487, DocLookup is a variable that is accessible to all scripts
for one doc while the query is executed. But the _score, and therefore the scorer,
depends on the current context, that is, which part of the query is currently being executed.
Instead of setting the scorer on DocLookup
and having the script access DocLookup to get the score, the Scorer should just
be set explicitly for each script.
DocLookup should not have any reference to a scorer.
This was similarly discussed in #7043.
This dependency caused a stack overflow when running script score in combination with an
aggregation on _score. Also, the wrong scorer was called when nesting several script scores.
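A minimal sketch of the direction described above (the interfaces and names are illustrative, not the actual Elasticsearch classes): the scorer for the currently executing part of the query is handed to each script directly, instead of being shared through DocLookup.

```java
// Illustrative only: each script gets the scorer for its own context set
// explicitly, so nested script scores and aggregations on _score no longer
// go through a shared DocLookup reference.
import org.apache.lucene.search.Scorer;

interface LeafScript {
    void setScorer(Scorer scorer); // scorer for the currently executing query part
    Object run();
}

class ScriptScoring {
    static float scoreWith(LeafScript script, Scorer contextScorer) {
        script.setScorer(contextScorer); // set per script, not on DocLookup
        return ((Number) script.run()).floatValue();
    }
}
```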
closes #7487
closes #7819
Incorrect usage of XContentParser.hasTextCharacters() can result in a NumberFormatException, as well as other possible issues, in the template query parser and phrase suggest parsers.
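A hedged sketch of the distinction (hypothetical helper, real XContentParser methods): hasTextCharacters() only says whether the current text is backed by a char[] buffer, not whether the token is a string, so the token type has to be checked instead.

```java
// Illustrative: decide how to read the value from the token type, not from
// hasTextCharacters(); the latter can be false for a perfectly valid string
// token, which would otherwise get routed into number parsing.
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentParser;

class ValueReader {
    static String readString(XContentParser parser) throws IOException {
        XContentParser.Token token = parser.currentToken();
        if (token == XContentParser.Token.VALUE_STRING) {
            return parser.text();
        }
        throw new IllegalArgumentException("expected a string value but got " + token);
    }
}
```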
Fixes #7875
Today we only run the check 10% of the time, and the test doesn't fail when
corruption is detected.
I think it's better to always run and fail the test, so we can catch
any possible resiliency bugs in Lucene/Elasticsearch causing corruption.
For known tests that create corrupted indices, it's easy to set
MockFSDirectoryService.CHECK_INDEX_ON_CLOSE to false...
Closes #7730
On the page http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html
even on a huge monitor the text is wrapped as follows:
```
mapping:
ipod, i-pod, i pod => ipod, i-pod, i pod
mapping:
ipod, i-pod, i pod => ipod
```
So one might think that "mapping:" is not inside the comment but part of the syntax. The lines are less than 80 characters, though, so the problem is probably in the page layout, and there may be other pages in the reference where text is also wrapped in an undesirable way.
Closes #7739
With local transport, or any transport that doesn't necessarily send
a notification when connections are closed, we might miss a node
disconnection and the request handler hangs forever / until the timeout
kicks in. This window only exists during shutdown and is likely
unproblematic in practice, but tests might run into this problem when
local transport is used.
Today, when executing an action (mainly when using the Java API), a listener threaded flag can be set to true in order to execute the listener on a different thread pool. Today, this thread pool is the generic thread pool, which is cached. This can create problems for Java clients (mainly) around potential thread explosion.
Introduce a new thread pool called listener, that is fixed sized and defaults to half the available cores, capped at 10, and use it where listeners are executed.
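A rough sketch of the default sizing described above (illustrative, not the actual code):

```java
// Fixed "listener" pool: half the available processors, capped at 10 threads.
class ListenerPoolSize {
    static int defaultSize() {
        int halfProcs = Runtime.getRuntime().availableProcessors() / 2;
        return Math.min(10, Math.max(1, halfProcs));
    }
}
```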
relates to #5152
closes #7837
Shutting down thread pools and executor services is done in a very similar
fashion across the codebase. This commit streamlines the process by
adding a terminate method to ThreadPool.
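A hedged sketch of the kind of helper this introduces (names are illustrative): shut the executor down, wait for termination, and force a shutdown if the timeout elapses.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class Terminate {
    // Returns true if the service terminated within the timeout.
    static boolean terminate(ExecutorService service, long timeout, TimeUnit unit) {
        service.shutdown();
        try {
            if (service.awaitTermination(timeout, unit)) {
                return true;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        service.shutdownNow(); // last resort: interrupt whatever is still running
        return false;
    }
}
```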
The parameter `percent_terms_to_match` (percentage of terms that must match in
the generated query) was wrongly applied to the top-level boolean query. This
would lead to either zero results or all results being returned. This commit ensures that
the parameter is indeed applied to the query of generated terms.
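For reference, a usage sketch via the Java API (assuming the 1.x MoreLikeThisQueryBuilder setter name): the percentage constrains how many of the generated terms must match, not the top-level boolean query.

```java
import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class MltExample {
    static MoreLikeThisQueryBuilder build() {
        return QueryBuilders.moreLikeThisQuery("body")
                .likeText("lucene and elasticsearch")
                .percentTermsToMatch(0.3f); // 30% of the generated terms must match
    }
}
```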
Closes #7754
Lucene will soon release official 4.10.1, but by upgrading sooner we can 1) sidestep the false failures due to the 1.8.0_20 JVM hotspot bug (has caused a number of false failures in recent Jenkins tests), 2) make sure none of the Lucene changes in 4.10.1 are problematic.
Closes #7844
We currently use the same internal request when we need to free the search context after a search and a scroll. The two original requests though diverged after #6933, as `SearchRequest` implements `IndicesRequest` while `SearchScrollRequest` and `ClearScrollRequest` don't. That said, with #7319 we made `SearchFreeContextRequest` implement `IndicesRequest` by making it hold the original indices taken from the original request, which are null if the free context originated from a scroll or a clear scroll call, and that is why original indices are optional there.
This commit introduces a separate free context request and transport action for scroll, which doesn't hold original indices. The new action is only used against nodes that expose it, the previous action name will be used for nodes older than 1.4.0.Beta1.
As a result, in 1.4 we have a new `indices:data/read/search[free_context/scroll]` action that is equivalent to the previous `indices:data/read/search[free_context]` whose request implements now `IndicesRequest` and holds the original indices coming from the original request. The original indices in the latter requests can only be null during a rolling upgrade (already existing version checks make sure that serialization is bw compatible), when some nodes are still < 1.4.
Closes #7856
Today all threads are allowed to leak a suite. This is tricky since
it essentially allows resource leaks by default, where for instance
test-private TransportClients will never get closed and consume
resources, influencing other tests. It also hides threads that
are not fully under Elasticsearch's control, like the Lucene
TimeLimitingCollector thread. This commit restricts the threads
that can leak a suite to the threads spawned from test clusters
and fixes several places that leaked threads.
Closes #7833
These tests rely on the fact that all files stay the same after
the corruption, and if we run into a translog-based flush we might
use a new / different delete file, causing the test to fail.
Today, due to how netty works (both on the http layer and the transport layer), and even though the buffers sent over to netty are paged (CompositeChannelBuffer), it ends up re-copying the whole buffer into another heap buffer (bad), and then sending it over directly to sun.nio, which allocates a full thread-local direct buffer to send it (which can be repeated if the whole message is not sent in one go).
This is problematic for very large messages: aside from the extra temporary heap usage, the large direct buffers will stay around and not be released by the JVM.
This change forces the use of gathering when building a CompositeChannelBuffer, which results in netty using the sun.nio write method that accepts an array of ByteBuffer (so no extra heap copying), and also reduces the amount of direct memory allocated for large messages.
See the doc on NettyUtils#DEFAULT_GATHERING for more info.
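A hedged sketch of the idea, assuming the Netty 3.x ChannelBuffers overload that takes a gathering flag: composing the paged buffers with gathering enabled lets the write path hand an array of ByteBuffers to NIO instead of copying everything into one heap buffer first.

```java
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

class GatheringCompose {
    static ChannelBuffer compose(ChannelBuffer... pages) {
        // gathering=true keeps the composite as separate ByteBuffers on write
        return ChannelBuffers.wrappedBuffer(true, pages);
    }
}
```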
closes #7811
Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk was received. Yet, this might hide bugs
in the compression layer if payloads are truncated. We should indicate
whether the last chunk is sent, to make sure we validate checksums
accordingly if possible.
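Purely as an illustration (hypothetical names, not the actual recovery classes), the chunk message would carry an explicit last-chunk flag instead of the receiver inferring completion from the file length:

```java
// Hypothetical sketch: the receiver verifies the checksum once lastChunk is true,
// so a truncated payload can no longer masquerade as a completed file.
class FileChunk {
    final String fileName;
    final byte[] content;
    final boolean lastChunk;

    FileChunk(String fileName, byte[] content, boolean lastChunk) {
        this.fileName = fileName;
        this.content = content;
        this.lastChunk = lastChunk;
    }
}
```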
Closes #7830
As of version 1.0, FilterBuilders and QueryBuilders are no longer part of the org.elasticsearch.index.query.xcontent package.
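For example, with the 1.x packages the builders are imported directly from org.elasticsearch.index.query:

```java
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;

class BuilderExample {
    static Object filteredMatchAll() {
        return QueryBuilders.filteredQuery(
                QueryBuilders.matchAllQuery(),
                FilterBuilders.termFilter("user", "kimchy"));
    }
}
```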
Closes #7701.
(cherry picked from commit 32d4200)