We use the number of processors to choose default thread pool sizes and the number of networking workers (for HTTP and transport). Bound it to at most 32 by default as a safety measure against creating too many threads.
This relates to #3478, where we set the default cap to 24, but 32 is probably a better default.
Closes #3545
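For illustration, a minimal sketch of the bounding logic using plain JDK calls; the class, constant, and method names are hypothetical, not the actual Elasticsearch code:
```
// Hypothetical sketch: cap the processor count used for sizing default
// thread pools and networking workers.
public final class ProcessorBound {
    static final int MAX_DEFAULT_PROCESSORS = 32; // the safety cap from this change

    static int boundedNumberOfProcessors() {
        return Math.min(MAX_DEFAULT_PROCESSORS, Runtime.getRuntime().availableProcessors());
    }

    public static void main(String[] args) {
        // On a 64-core machine this prints 32; on an 8-core machine, 8.
        System.out.println("processors used for defaults = " + boundedNumberOfProcessors());
    }
}
```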
Terminate phrase searches early in the FastVectorHighlighter if the max phrase window is exceeded, to prevent very long-running phrase extraction when phrase terms are high-frequency. See LUCENE-5182.
Closes #3543
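A hedged sketch of the early-termination idea; the limit constant and loop are illustrative, not the actual FastVectorHighlighter internals:
```
import java.util.List;

final class PhraseExtractionSketch {
    // Hypothetical cap corresponding to the "max phrase window" above.
    private static final int MAX_PHRASE_WINDOW = 1024;

    /** Stops scanning term positions once the window budget is spent. */
    static int extract(List<Integer> termPositions) {
        int scanned = 0;
        for (int pos : termPositions) {
            if (scanned >= MAX_PHRASE_WINDOW) {
                break; // terminate early instead of scanning every position
            }
            scanned++; // a real implementation would match phrase candidates at 'pos'
        }
        return scanned;
    }
}
```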
The blocking thread pool type is not recommended: it will usually end up blocking the IO thread while executing, and other operations will then stall as well.
Closes #3531
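A plain-JDK illustration of the underlying problem (not Elasticsearch code): with a bounded queue, a blocking submit parks the submitting thread, so an IO thread that submits this way stalls everything else it serves.
```
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingSubmitDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1);
        queue.put(() -> {}); // fills the queue's single slot
        System.out.println("second put() now blocks the calling thread...");
        queue.put(() -> {}); // nobody drains the queue, so this parks forever --
                             // the fate of an IO thread submitting to a full
                             // "blocking" pool
    }
}
```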
If a statically cached MatchAllDocsQuery instance is used, for instance through `query_string` with a boost like `*:*^2.0`, the globally shared instance is modified, since queries are not immutable and their boost can change at any time. Holding on to modifiable queries is risky and should not be done in a global scope.
This commit also adds tests for constant scores from the `constant_score` query.
Closes #3521
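A small Lucene 4.x demonstration of the pitfall; `MATCH_ALL` stands in for the static cached instance:
```
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

public class SharedQueryPitfall {
    // The globally cached instance that every caller shares.
    static final Query MATCH_ALL = new MatchAllDocsQuery();

    public static void main(String[] args) {
        Query q = MATCH_ALL;  // e.g. the result of parsing `*:*^2.0`
        q.setBoost(2.0f);     // mutates the shared instance in place...
        System.out.println(MATCH_ALL.getBoost()); // ...now prints 2.0 for everyone
        // The fix is to hand out a fresh MatchAllDocsQuery per request instead
        // of holding a modifiable Query in global scope.
    }
}
```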
If a field is not mapped, the phrase suggester should return empty results, or skip candidate generation if the field is not in the index, rather than failing hard with an IllegalArgumentException: some shards might not have a value in a certain field.
Closes #3469
From #3469.
When running suggest on empty shards, the request fails with an error like:
```
"failures" : [ {
"status" : 400,
"reason" : "ElasticSearchIllegalArgumentException[generator field [title] doesn't exist]"
} ]
```
We should ignore empty shards.
Closes #3473.
The write-lock timeout on the index writer is 1s by default. Given the default lock poll interval of 1s, this gives an upper bound of two obtain attempts for a write lock (one immediately and at most one more after the single poll interval that fits into the timeout), which might not be enough under load.
This commit cleans up the exception handling and adds more warn-level logging related to obtaining locks under load.
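In Lucene 4.x terms the knobs look roughly like this; the 5000 ms value is only an example, not necessarily what the commit chose:
```
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Lock;
import org.apache.lucene.util.Version;

public class WriteLockTimeoutSketch {
    public static void main(String[] args) {
        // Lock.obtain(timeout) polls every Lock.LOCK_POLL_INTERVAL ms (1000 by
        // default): with a 1000 ms timeout that is one try at t=0 and one at
        // t=1000, i.e. at most two obtain attempts before the lock fails.
        System.out.println("poll interval ms = " + Lock.LOCK_POLL_INTERVAL);

        IndexWriterConfig conf =
                new IndexWriterConfig(Version.LUCENE_43, new StandardAnalyzer(Version.LUCENE_43));
        conf.setWriteLockTimeout(5000); // example: up to ~6 obtain attempts under load
    }
}
```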
Also:
* more cluster state related debug logging
* the get cluster state API didn't populate the cluster state version in the response (it was always 0)
* added logging when testing ends and before indices cleanup starts
Often it's hard to tell which test methods were already executed when a particular test fails. This commit adds a log line when a new test is started, to better differentiate which log output belongs to which test.
This commit also moves the reproduce line to the logging output to gain timestamps for the failure.
* Fuzzy Suggester parameter names are now easier to understand:
  * non_prefix_length became prefix_length
  * min_prefix_length became min_length
* Instead of specifying search_analyzer and index_analyzer, using analyzer for both is now supported
* CompletionSuggester now uses the CharsRef spare instead of excessive toString() calls
The ClusterState can hold an 'invalid' local 'DiscoveryNode' during node startup, and rare race conditions can cause NPEs if such an 'invalid' 'DiscoveryNode' is serialized.
Closes #3515
* The _percolator type now always has the _id field enabled (index=not_analyzed, store=no)
* During shard initialization the query ids are now fetched from field data; before, ids were fetched from stored values
* Moved the internal percolator query map storage from Text to HashedBytesRef based keys
For machines with lots of cores, i.e. >= 48, the number of threads created by default might cause unnecessary memory pressure on the system and can even lead to an OOM where the system is not able to create any native threads anymore. This commit limits the number of available CPUs used for thread pool initialization to at most 24 cores.
Closes #3478
It will eventually time out (with the default 5-minute timeout), but we should handle it properly, and also properly propagate the failure.
Closes #3513
The '"side" : "back"' parameter is no longer supported on EdgeNGramTokenizer if the mapping is created with 0.90.2 / Lucene 4.3. The 'EdgeNgramTokenFilter' handles this automatically by wrapping the 'EdgeNgramTokenFilter' in a 'ReverseTokenFilter', but with tokenizers this workaround is not possible. This commit also adds a more verbose error message explaining how to work around this limitation.
Closes #3489
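A sketch of the token-filter workaround using Lucene 4.3-era APIs (constructor signatures here are from memory and may differ slightly): reverse the tokens, take front edge n-grams, then reverse again to emulate the removed back side.
```
import java.io.Reader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
import org.apache.lucene.analysis.reverse.ReverseStringFilter;
import org.apache.lucene.util.Version;

public final class BackEdgeNGrams {
    /** Emulates '"side" : "back"' with the token filter instead of the tokenizer. */
    static TokenStream build(Reader input) {
        TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_43, input);       // "hello"
        ts = new ReverseStringFilter(Version.LUCENE_43, ts);                      // "olleh"
        ts = new EdgeNGramTokenFilter(ts, EdgeNGramTokenFilter.Side.FRONT, 1, 3); // "o", "ol", "oll"
        return new ReverseStringFilter(Version.LUCENE_43, ts);                    // "o", "lo", "llo"
    }
}
```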