Today we have a push model for registering basically anything. All our extension points
are defined on modules, which we pass in to plugins. This is harder to maintain and adds
unnecessary dependencies on the modules themselves. This change moves towards a pull model
where the plugin offers getter methods that return its extensions. A pull model will also
help in the future if we need to pass dependencies to the extension points, since those
can easily be declared as arguments of the getter methods.
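A minimal sketch of the difference, with purely illustrative names (the real extension points and plugin interfaces differ):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical extension point, for illustration only.
interface HighlighterExtension {}

public interface ExamplePlugin {
    // Pull model: the module calls this getter instead of pushing a registry
    // into the plugin. If an extension point later needs dependencies, they
    // can simply become arguments of this method.
    default List<HighlighterExtension> getHighlighters() {
        return Collections.emptyList();
    }
}
```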
Currently the error messages for failing tests in the TimeZoneRoundingTests test
suite are hard to read because they usually report the actual and expected dates
as epoch milliseconds in UTC (e.g. "Expected: <1414270860000L> but: was
<1414270800000L>").
This change introduces a new Matcher that can be used for equality checks on
long dates but reports any mismatch both as a date string formatted according to
some time zone and as the raw long value, so you get messages like
"Expected: 2014-10-26T00:01:00.000+03:00 [1414270860000] but: was
2014-10-26T00:00:00.000+03:00 [1414270800000]".
This change also cleans up some helper methods and generally simplifies a few test
cases. Otherwise it shouldn't affect either the scope of the tests or anything
about the rounding implementation itself.
They have been implemented in https://issues.apache.org/jira/browse/LUCENE-7289.
Ranges are implemented so that the accuracy loss only occurs at index time,
which means that if you are searching for values between A and B, the query will
match exactly all documents whose value rounded to the closest half-float point
is between A and B.
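For illustration, the index-time rounding can be reproduced with Lucene's HalfFloatPoint helpers (assuming the API added in LUCENE-7289):

```java
import org.apache.lucene.document.HalfFloatPoint;

public class HalfFloatRounding {
    public static void main(String[] args) {
        float original = 0.123456f;
        // At index time the value is rounded to the nearest half-float:
        short sortableBits = HalfFloatPoint.halfFloatToSortableShort(original);
        float rounded = HalfFloatPoint.sortableShortToHalfFloat(sortableBits);
        System.out.println(original + " is indexed as " + rounded);
        // A range query between A and B then matches exactly those documents
        // whose rounded value lies in [A, B]; no further loss at query time.
    }
}
```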
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull-based
model where script plugins have to implement a dedicated interface,
`ScriptPlugin`, and define simple getters returning instances rather than
classes.
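Roughly, the shape of the new interface looks like this (stub types stand in for the real Elasticsearch classes; treat this as a sketch, not the exact signatures):

```java
import java.util.Collections;
import java.util.List;

// Stand-ins for the real Elasticsearch types, so the sketch is self-contained.
interface ScriptEngineService {}
interface NativeScriptFactory {}
class Settings {}

public interface ScriptPlugin {
    // Pull model: plugins hand back ready-made instances rather than classes
    // that a Guice module would have to construct.
    default ScriptEngineService getScriptEngineService(Settings settings) {
        return null; // this plugin provides no script engine
    }

    default List<NativeScriptFactory> getNativeScripts() {
        return Collections.emptyList();
    }
}
```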
This interface used to have dedicated methods to prevent calling the execute
methods. These methods are unnecessary, as the checks can simply be
done inside the execute methods themselves. This simplifies the interface
as well as its usage.
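An illustrative sketch of the simplification (names here are hypothetical):

```java
// Before: the interface exposed a separate guard method that every caller had
// to remember to check before invoking execute. After: the guard is folded
// into execute itself, shrinking the interface.
interface Job {
    void execute();
}

class GuardedJob implements Job {
    private volatile boolean allowed = true;

    @Override
    public void execute() {
        // The check that used to be a dedicated interface method is now
        // simply the first statement of execute().
        if (!allowed) {
            throw new IllegalStateException("execution not allowed");
        }
        // ... do the actual work ...
    }
}
```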
With this commit we exclude certain HTTP requests that are needed to inspect the cluster
from HTTP request limiting, to ensure these commands are processed even in critical
memory conditions.
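The idea, as a purely illustrative sketch (paths and names are hypothetical, not the actual code):

```java
public class RequestLimitExemption {
    static boolean isSubjectToLimiting(String path) {
        // Cluster inspection endpoints bypass the in-flight request limit so
        // they keep working under memory pressure.
        return !(path.startsWith("/_cluster/health") || path.startsWith("/_cat"));
    }

    public static void main(String[] args) {
        System.out.println(isSubjectToLimiting("/_cluster/health")); // false: always processed
        System.out.println(isSubjectToLimiting("/myindex/_search")); // true: may be limited
    }
}
```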
Relates #17951, relates #18145, closes #18833
When installing plugins, we first try the Elastic download service for
official plugins, then try maven coordinates, and finally try the
argument as a URL. This can lead to confusing error messages about
unknown protocols when, e.g., an official plugin name is misspelled. This
change adds a heuristic for determining whether the argument in the final
case is in fact a URL that we should try, and gives a simplified error
message when it is definitely not a URL.
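A sketch of this kind of heuristic (illustrative, not the exact code):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class PluginArgCheck {
    // Only treat the argument as a URL if it parses and carries an explicit scheme.
    static boolean looksLikeUrl(String candidate) {
        try {
            return new URI(candidate).getScheme() != null; // http, https, file, ...
        } catch (URISyntaxException e) {
            return false; // definitely not a URL: report a simpler error instead
        }
    }

    public static void main(String[] args) {
        System.out.println(looksLikeUrl("https://example.org/plugin.zip")); // true
        System.out.println(looksLikeUrl("analysis-icuu")); // false (misspelled plugin name)
    }
}
```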
closes #17226
The search preference _prefer_node allows specifying a single node to
prefer when routing a request. This functionality can be enhanced by
permitting multiple nodes to be preferred. This commit replaces the
search preference _prefer_node with the search preference _prefer_nodes,
which covers the old behavior when a single node is specified and
otherwise adds the ability to prefer multiple nodes.
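For example (the node IDs here are illustrative), a request can now be routed with `?preference=_prefer_nodes:node-1,node-2`, while the single-entry form `?preference=_prefer_nodes:node-1` reproduces the old _prefer_node behavior.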
Relates #18872
This caches FieldStats at the field level. For one-off requests or for
a few indices this doesn't save anything, but with 30 indices, 5 shards,
1 replica, and 100 parallel requests it is about twice as fast as not
caching. I expect lots of usage won't see much benefit from this, but
pointing Kibana at a cluster with many indexes and shards will be
faster.
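A minimal sketch of the caching idea (class and names are hypothetical, not the actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

interface FieldStats {} // stand-in for the real per-field stats object

public class FieldStatsCache {
    private final Map<String, FieldStats> cache = new ConcurrentHashMap<>();

    // With many shards and parallel requests, recomputing identical per-field
    // stats dominates, so computing each field's stats only once pays off.
    public FieldStats get(String field, Function<String, FieldStats> compute) {
        return cache.computeIfAbsent(field, compute);
    }
}
```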
Closes #18717
This adds a get task API that supports GET /_tasks/${taskId} and
removes that responsibility from the list tasks API. The get task
API supports wait_for_completion just as the list tasks API does,
but doesn't support any of the list tasks API's filters. In exchange,
it supports falling back to the .results index when the task isn't
running any more. Like any good GET API, it 404s when it doesn't
find the task.
We then change reindex, update-by-query, and delete-by-query to
persist the task result when wait_for_completion=false. This leads
to the neat behavior that, once you start a reindex with
wait_for_completion=false, you can fetch the result of the task by
using the get task API and see the result when it has finished.
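For example (the task id here is made up): `POST /_reindex?wait_for_completion=false` returns a task id, and `GET /_tasks/oTUltX4IQMOUUVeiohTt8A:12345` then returns the stored result once the reindex has finished.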
We also rename the .results index to .tasks.
There are edge cases where rounding a date to a certain interval using a time
zone with DST shifts can currently cause the rounded date to be larger than the
original date. This happens when rounding a date shortly after a DST start and
the rounded date falls into the DST gap.
Here is an example for the CET time zone, where local time is set forward by one
hour at 2016-03-27T02:00:00+01:00 to 2016-03-27T03:00:00.000+02:00:
The date 2016-03-27T03:01:00.000+02:00 (1459040460000), which is just after the
DST change, is first converted to local time (1459047660000). If we then apply
interval rounding for a 14m interval in local time, this takes us to
1459047240000, which unfortunately falls into the DST gap. When converting
this back to UTC, Joda provides options to either throw an exception on illegal
dates like this or to correct them by adjusting the date to the new time zone
offset. We currently do the latter, but this converts the illegal date to
2016-03-27T03:54:00.000+02:00 (1459043640000), giving us a date that is
larger than the original date we wanted to round.
This change fixes this by using the "strict" option of `convertLocalToUTC()`
to detect rounded dates that fall into the DST gap. If this happens, we use
the time of the DST change as the interval start instead.
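A sketch of the detection using Joda-Time's strict conversion (simplified relative to the actual rounding code):

```java
import org.joda.time.DateTimeZone;
import org.joda.time.IllegalInstantException;

public class DstGapCheck {
    public static void main(String[] args) {
        DateTimeZone cet = DateTimeZone.forID("CET");
        long roundedLocal = 1459047240000L; // local rounding result from the example above
        try {
            long utc = cet.convertLocalToUTC(roundedLocal, true); // strict mode
            System.out.println("rounded instant: " + utc);
        } catch (IllegalInstantException e) {
            // The rounded local time does not exist; the fix uses the moment
            // of the DST shift as the interval start instead.
            System.out.println("rounded date falls into the DST gap");
        }
    }
}
```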
Even before this change, intervals around DST shifts like this can be shorter
than the desired interval. This happens when the requested interval width
doesn't completely fit into the remaining time span before the DST shift.
For example, using a 14m interval in UTC+1 (CET before DST starts) leads to the
following valid rounding values around the time the DST shift happens:
2016-03-27T01:30:00+01:00
2016-03-27T01:44:00+01:00
2016-03-27T01:58:00+01:00
2016-03-27T02:12:00+01:00
2016-03-27T02:26:00+01:00
...
while the rounding values in UTC+2 (CET after DST start) are placed like this
around the same time:
2016-03-27T02:40:00+02:00
2016-03-27T02:54:00+02:00
2016-03-27T03:08:00+02:00
2016-03-27T03:22:00+02:00
...
From this we can see that when we switch from UTC+1 to UTC+2 at 02:00, the last
rounding value in UTC+1 is at 01:58 and the first valid one in UTC+2 is at
03:08, so even if we decided to put all the dates in between into one rounding
interval, it would only cover 10 minutes. With this change we choose to use the
moment of the DST shift as an additional interval separator, leaving us with a
2-minute interval [01:58,02:00) before the shift and an 8-minute interval
[03:00,03:08) after the shift.
This change also adds tests for the above example and adds randomization to the
existing TimeIntervalRounding tests.
By default, the number of concurrent searches an msearch request executes is
capped at the number of nodes multiplied by the default size of the search
threadpool. This default can be overridden using the newly added
`max_concurrent_searches` parameter.
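For example (the value here is illustrative): `GET /_msearch?max_concurrent_searches=8` limits the request to at most eight concurrent searches.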
Previously, the msearch API executed all searches concurrently. When many large
msearch requests were executed, this could lead to some searches being rejected
while other searches in the same msearch request succeeded.
The goal of this change is to avoid exhausting the search threadpool in this way.
Closes #17926
Writeable is better for immutable objects like TimeValue.
Switch to writeZLong, which takes up less space than the original
writeLong in the majority of cases. Since we expect negative
TimeValues, we shouldn't use writeVLong.
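For illustration, zig-zag encoding (the idea behind writeZLong) handles negative values far better than a plain varint; the helper below is a sketch, not the actual StreamOutput code:

```java
public class ZigZag {
    // Zig-zag maps small-magnitude longs, negative or positive, to small
    // unsigned numbers that need few variable-length bytes:
    // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
    static long encode(long v) {
        return (v << 1) ^ (v >> 63);
    }

    static long decode(long v) {
        return (v >>> 1) ^ -(v & 1);
    }

    public static void main(String[] args) {
        // A negative value like -1 becomes 1 and fits in a single varint byte,
        // whereas a naive varint encoding of -1 would take ten bytes.
        System.out.println(encode(-1L)); // 1
        System.out.println(decode(encode(-123456L))); // -123456
    }
}
```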