_wait_for_completion defaults to false. If set to true then the API will
wait for all the tasks that it finds to stop running before returning. You
can use the timeout parameter to prevent it from waiting forever. If you
don't set a timeout parameter it'll default to 30 seconds.
Also adds a log message to REST tests if any tasks overrun the test. This
is just a log (instead of a test failure) because the cluster runs lots of
tasks on its own and those shouldn't cause the test to fail; fetching disk
usage from the other nodes is one example.
Switches the request to getter/setter style methods as we're going that
way in the Elasticsearch code base. Reindex is all getter/setter style.
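Roughly, the getter/setter style with the defaults described above looks like the sketch
below (class and member names are illustrative, not the actual request class, and
`java.time.Duration` merely stands in for the timeout type):

```java
import java.time.Duration;

// Illustrative sketch only: mirrors the defaults described above,
// not the actual Elasticsearch request class.
public class ListTasksRequestSketch {

    // The API returns immediately by default rather than waiting for tasks to finish.
    private boolean waitForCompletion = false;

    // Upper bound on the wait when waitForCompletion is true; defaults to 30 seconds.
    private Duration timeout = Duration.ofSeconds(30);

    public boolean getWaitForCompletion() {
        return waitForCompletion;
    }

    public ListTasksRequestSketch setWaitForCompletion(boolean waitForCompletion) {
        this.waitForCompletion = waitForCompletion;
        return this;
    }

    public Duration getTimeout() {
        return timeout;
    }

    public ListTasksRequestSketch setTimeout(Duration timeout) {
        this.timeout = timeout;
        return this;
    }
}
```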
Closes #16906
Currently the message stays in the `UnassignedInfo` for the shard; however,
when diagnosing an issue it would be very useful to know the exact point in
time at which the cancellation happened.
Relates to debugging #16357
Apparently Lucene 6 is much pickier about corrupting files that are not
fsynced, which is why this test sometimes failed after the Lucene 6 upgrade.
This commit reduces the maximum number of threads required in the
bootstrap check. This limit can be reduced since the generic thread pool
is no longer unbounded.
Relates #17003
The generic thread pool was previously configured to be able to create
an unlimited number of threads. The thinking is that tasks that are
submitted to its work queue must execute and should not block waiting
for a worker. However, in cases of heavy load this can lead to an explosion
in the number of threads and even to a feedback loop that exacerbates the
problem; it can also bump into OS limits on the number of threads that can
be created.
This commit limits the number of threads in the generic thread pool to
four times the bounded number of processors.
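As a minimal sketch of that sizing rule (the helper and the processor bound of 32 below
are assumptions for illustration, not the actual thread pool code):

```java
// Hypothetical helper illustrating the cap described above: at most four
// threads per bounded processor instead of an unbounded pool.
public final class GenericPoolSizing {

    static int genericPoolMaxThreads(int availableProcessors, int processorBound) {
        // Bound the processor count first, then allow up to four threads per bounded processor.
        int boundedProcessors = Math.min(availableProcessors, processorBound);
        return 4 * boundedProcessors;
    }

    public static void main(String[] args) {
        int procs = Runtime.getRuntime().availableProcessors();
        // The bound of 32 here is only an assumption for the sketch.
        System.out.println("generic pool max threads: " + genericPoolMaxThreads(procs, 32));
    }
}
```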
Relates #17003
This merge commit merges #16682, which adds the ability to define an index-global
default similarity while at the same time preventing built-in similarities from being
overridden for new indices. Old indices were able to do this in the past and should not
be punished for it, even though it is no longer possible.
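For illustration, configuring an index-global default similarity boils down to an index
setting along these lines (a sketch; the setting key and value shown here are assumptions
based on the description above, not confirmed by it):

```java
import java.util.Map;

// Sketch only: the setting key/value are assumptions for illustration.
public final class DefaultSimilaritySketch {

    public static void main(String[] args) {
        // Index-wide default similarity for a new index.
        Map<String, String> indexSettings = Map.of(
            "index.similarity.default.type", "classic"
        );
        // Redefining a built-in similarity name (e.g. "BM25") is rejected for
        // new indices; only old indices keep that ability.
        indexSettings.forEach((key, value) -> System.out.println(key + ": " + value));
    }
}
```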
Closes #16594
Closes #16682
This removes the need for accessing the SearchContext when parsing Sort elements
to queries. After applying the patch only a QueryShardContext is needed.
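Schematically, the change narrows the context that sort parsing depends on; in the sketch
below all types are placeholders for the real classes, and only the shape of the
signatures is the point:

```java
// Placeholder types standing in for the real Elasticsearch classes.
class SearchContextPlaceholder {}
class QueryShardContextPlaceholder {}
class SortBuilderPlaceholder {}
class SortFieldPlaceholder {}

interface SortParsingBefore {
    // Previously, building the Lucene sort required the full SearchContext.
    SortFieldPlaceholder build(SortBuilderPlaceholder sort, SearchContextPlaceholder context);
}

interface SortParsingAfter {
    // After the patch, only the shard-level QueryShardContext is needed.
    SortFieldPlaceholder build(SortBuilderPlaceholder sort, QueryShardContextPlaceholder context);
}
```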
Relates to #15178
Move some test methods from AnalyzeActionIT to RestAnalyzeActionTest
Allow the explain param to be passed as a string as long as it can be parsed (see the sketch below)
Fix wrong param name in rest-api-spec
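A minimal sketch of the lenient explain parsing (a hypothetical helper, not the actual
REST parsing code): accept the value as a boolean or as a string that parses to one.

```java
// Hypothetical helper: accept `explain` as a Boolean or as a parseable string.
public final class ExplainParamSketch {

    static boolean parseExplain(Object value) {
        if (value instanceof Boolean) {
            return (Boolean) value;
        }
        if (value instanceof String) {
            String s = ((String) value).trim();
            if (s.equalsIgnoreCase("true")) {
                return true;
            }
            if (s.equalsIgnoreCase("false")) {
                return false;
            }
        }
        throw new IllegalArgumentException("explain must be a boolean or \"true\"/\"false\", got: " + value);
    }

    public static void main(String[] args) {
        System.out.println(parseExplain("true"));        // true
        System.out.println(parseExplain(Boolean.FALSE)); // false
    }
}
```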
Closes #16925
Add support for alpha versions
Elasticsearch 5.0 will come with alpha versions, which are not supported
in the current version scheme. This commit adds support for alpha versions
starting with ES 5.0.0 in a backwards compatible way.
We removed leniency from version parsing, which caught problems with
-SNAPSHOT suffixes on plugin properties. This commit removes the -SNAPSHOT
from both the ES and the extension version and adds tests to ensure we can
parse older versions that allowed -SNAPSHOT in a BWC way.
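As a rough illustration of the strings involved (the regex below is a sketch, not the
actual Version parsing code): 5.0.0 pre-releases carry an alpha qualifier, while older
versions with a -SNAPSHOT suffix must stay parseable.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of the version strings involved; the real parsing in
// Elasticsearch's Version class differs in detail.
public final class VersionParsingSketch {

    // major.minor.patch with an optional alpha qualifier (new in 5.0.0)
    // and an optional legacy -SNAPSHOT suffix kept for backwards compatibility.
    private static final Pattern VERSION =
        Pattern.compile("(\\d+)\\.(\\d+)\\.(\\d+)(-alpha\\d+)?(-SNAPSHOT)?");

    public static void main(String[] args) {
        for (String version : new String[] {"5.0.0-alpha1", "2.3.1", "2.3.1-SNAPSHOT"}) {
            Matcher matcher = VERSION.matcher(version);
            System.out.println(version + " -> parses: " + matcher.matches());
        }
    }
}
```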
Date range aggregations used to be unable to use a `time_zone` parameter, e.g. to have it
applied in date math roundings like `now/d` (see #10130 as an example). After the aggregation
refactoring, the time_zone parameter has been pulled up to ValuesSourceAggregatorBuilder
and can now be used in date range aggregations as well.
This change adds randomized time zone settings to the existing IT tests to verify that the
`time_zone` parameter is honored when calculating the bucket boundaries. It also moves
the DateRangeTests from module-groovy/messy back to core as DateRangeIT, sharing
common script mocks with DateHistogramIT, and adds documentation for the
`time_zone` parameter to the date range aggregation docs.
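The effect of the `time_zone` parameter on date math rounding such as `now/d` can be
illustrated with plain java.time (a conceptual sketch of the rounding, not the
aggregation code itself):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.temporal.ChronoUnit;

// Conceptual sketch: rounding "now" down to the start of the day ("now/d")
// yields different instants depending on the time zone, which is what the
// time_zone parameter controls for the bucket boundaries.
public final class DateMathRoundingSketch {

    static Instant roundDownToDay(Instant now, ZoneId zone) {
        return now.atZone(zone).truncatedTo(ChronoUnit.DAYS).toInstant();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println("UTC now/d -> " + roundDownToDay(now, ZoneId.of("UTC")));
        System.out.println("CET now/d -> " + roundDownToDay(now, ZoneId.of("Europe/Paris")));
    }
}
```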
Closes #10130
Internally, the put pipeline API uses this information in the node info API to validate that all processors specified in a pipeline exist on all nodes in the cluster.
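A minimal sketch of that validation (hypothetical types and names; the real check runs
against the ingest info reported per node): every processor type used in the pipeline
must be available on every node.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: given the processor types reported per node, accept a
// pipeline only if every processor it uses exists on every node in the cluster.
public final class PipelineValidationSketch {

    static void validate(List<String> pipelineProcessors, Map<String, Set<String>> processorsPerNode) {
        for (String processor : pipelineProcessors) {
            for (Map.Entry<String, Set<String>> node : processorsPerNode.entrySet()) {
                if (node.getValue().contains(processor) == false) {
                    throw new IllegalArgumentException(
                        "processor [" + processor + "] is not installed on node [" + node.getKey() + "]");
                }
            }
        }
    }

    public static void main(String[] args) {
        validate(List.of("set", "grok"),
                 Map.of("node-1", Set.of("set", "grok"), "node-2", Set.of("set", "grok")));
        System.out.println("all pipeline processors are available on every node");
    }
}
```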