With this commit we add a benchmarks project that contains the necessary build
infrastructure and an example benchmark. It is added as a separate project to avoid
interfering with the regular build too much (especially sanity checks) and to keep
the microbenchmarks isolated.
Microbenchmarks are generated with `gradle :benchmarks:jmhJar` and can be run with
`gradle :benchmarks:jmh`.
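For illustration, a minimal JMH benchmark of the kind the project builds into the jmh jar
could look like this (package and class names are invented, not the actual example benchmark):

```java
package org.elasticsearch.benchmark; // illustrative package name

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {

    // JMH picks up methods annotated with @Benchmark; returning the result
    // prevents the JIT from eliminating the measured work as dead code.
    @Benchmark
    public String concat() {
        return "prefix-" + System.nanoTime();
    }
}
```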
We intentionally do not use the
[jmh-gradle-plugin](https://github.com/melix/jmh-gradle-plugin) as it causes all
sorts of problems (dependencies are not properly excluded, not all JMH parameters
can be set) and it adds another abstraction layer that is not needed.
Closes #18242
This interface used to have dedicated methods to prevent calling the execute
methods. These methods are unnecessary as the checks can simply be
done inside the execute methods themselves. This simplifies the interface
as well as its usage.
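A purely hypothetical sketch of the pattern (the names are invented, not the actual interface):

```java
// Hypothetical sketch only; the real interface and its checks differ.
public class ExecuteCheckSketch {

    interface Request {}

    // Before: a dedicated method guarded the execute call, and every caller
    // had to remember to invoke it first.
    interface BeforeStyle {
        void ensureCanExecute(Request request);
        void executeRequest(Request request);
    }

    // After: the check is folded into the execute method itself, so it
    // cannot be skipped and the interface shrinks.
    interface AfterStyle {
        void executeRequest(Request request); // performs the check internally
    }
}
```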
With this commit we exclude certain HTTP requests that are needed to inspect the cluster
from HTTP request limiting to ensure these requests are processed even under critical
memory conditions.
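The idea, roughly sketched (the exempt paths below are hypothetical, not the actual whitelist):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: requests on a small whitelist of cluster-inspection
// endpoints bypass the in-flight request limit; everything else is accounted.
public class RequestLimitExemptionSketch {

    private static final List<String> EXEMPT_PATHS =
        Arrays.asList("/_cluster/health", "/_cluster/stats", "/_nodes/stats");

    static boolean isExempt(String path) {
        return EXEMPT_PATHS.stream().anyMatch(path::startsWith);
    }

    public static void main(String[] args) {
        System.out.println(isExempt("/_cluster/health"));  // true: always processed
        System.out.println(isExempt("/my-index/_search")); // false: subject to limiting
    }
}
```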
Relates #17951, relates #18145, closes #18833
When installing plugins, we first try the elastic download service for
official plugins, then try maven coordinates, and finally try the
argument as a URL. This can lead to confusing error messages about
unknown protocols when, e.g., an official plugin name is misspelled. This
change adds a heuristic for determining if the argument in the final
case is in fact a URL that we should try, and gives a simplified error
message in the case where it is definitely not a URL.
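A rough approximation of such a heuristic (invented code, not what the change actually implements):

```java
// Hypothetical approximation: only treat the argument as a URL if it
// plausibly looks like one; otherwise report a simpler error message.
public class PluginUrlHeuristicSketch {

    static boolean looksLikeUrl(String arg) {
        return arg.contains("://") || arg.startsWith("file:") || arg.contains("/");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeUrl("https://example.org/plugin.zip")); // true
        System.out.println(looksLikeUrl("anaylsis-icu")); // false: likely a misspelled plugin name
    }
}
```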
closes #17226
The search preference _prefer_node allows specifying a single node to
prefer when routing a request. This functionality can be enhanced by
permitting multiple nodes to be preferred. This commit replaces the
search preference _prefer_node with the search preference _prefer_nodes,
which subsumes the former when given a single node and otherwise
adds the ability to prefer several nodes.
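With the Java client this might be used roughly as follows (a sketch assuming the usual
`prepareSearch`/`setPreference` setters; the index name and node ids are placeholders):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

public class PreferNodesExample {

    // Prefer routing the search to one of the listed nodes when possible.
    static SearchResponse searchPreferringNodes(Client client) {
        return client.prepareSearch("my-index")
            .setPreference("_prefer_nodes:node-id-1,node-id-2")
            .get();
    }
}
```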
Relates #18872
This will let things that don't depend on :test:framework, like the
client, use it.
Also skip initializing the classes we check: since we're not executing
them, we don't care about their initialization behavior.
This makes the naming conventions check pretty close to instant
from a "human eye" perspective.
We weren't doing it before because we weren't starting the plugins.
Now we are.
The hardest part of this was handling the files the tests expect
to be on the filesystem. extraConfigFiles was broken.
This caches FieldStats at the field level. For one-off requests or for
a few indices this doesn't save anything, but with 30 indices,
5 shards, 1 replica, and 100 parallel requests this is about twice as fast
as not caching. I expect lots of usage won't see much benefit from this,
but pointing Kibana at a cluster with many indices and shards will be
faster.
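The caching idea, roughly sketched with invented names (the real cache is keyed and
invalidated differently):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: compute stats per field once and reuse the result
// across concurrent requests instead of recomputing it for every request.
public class FieldStatsCacheSketch<T> {

    private final Map<String, T> cache = new ConcurrentHashMap<>();

    T getOrCompute(String field, Function<String, T> compute) {
        return cache.computeIfAbsent(field, compute);
    }
}
```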
Closes #18717
Groovy does some crazy capturing when using closures inside a loop. In
this case, it somehow decided the local loop variable would be
modified, and so each closure was getting a wrapped value that would be
updated on each loop iteration, until all the closures pointed at the
last value. This change fixes the loop to extract the object to be used by
the closures.
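Groovy specifics aside, the pitfall and the fix can be illustrated with a Java analogue,
where a mutable holder stands in for the variable Groovy decided to wrap:

```java
import java.util.ArrayList;
import java.util.List;

public class ClosureCaptureSketch {
    public static void main(String[] args) {
        // Broken pattern: every closure shares the same mutable holder, so all
        // of them end up observing the value from the last loop iteration.
        int[] holder = new int[1];
        List<Runnable> broken = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            holder[0] = i;
            broken.add(() -> System.out.print(holder[0] + " "));
        }
        broken.forEach(Runnable::run); // prints: 2 2 2
        System.out.println();

        // Fix, mirroring the commit: extract the per-iteration value into a
        // fresh local so each closure captures its own copy.
        List<Runnable> fixed = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int copy = i;
            fixed.add(() -> System.out.print(copy + " "));
        }
        fixed.forEach(Runnable::run); // prints: 0 1 2
    }
}
```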
This adds a get task API that supports GET /_tasks/${taskId} and
removes that responsibility from the list tasks API. The get task
API supports wait_for_completion just as the list tasks API does
but doesn't support any of the list task API's filters. In exchange,
it supports falling back to the .results index when the task isn't
running any more. Like any good GET API it 404s when it doesn't
find the task.
Then we change reindex, update-by-query, and delete-by-query to
persist the task result when wait_for_completion=false. This leads
to the neat behavior that, once you start a reindex with
wait_for_completion=false, you can fetch the result of the task by
using the get task API and see the result when it has finished.
Also rename the .results index to .tasks.
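A rough sketch of that workflow over plain HTTP (host, port, and the task id are
placeholders; the id would come from the response to the original request):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: fetch the stored result of a task that was started with
// wait_for_completion=false by asking the get task API for it.
public class GetTaskExample {
    public static void main(String[] args) throws Exception {
        String taskId = "oTUltX4IQMOUUVeiohTt8A:12345"; // placeholder task id
        URL url = new URL("http://localhost:9200/_tasks/" + taskId);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        if (connection.getResponseCode() == 404) {
            System.out.println("no such task"); // like any good GET API
        } else {
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                reader.lines().forEach(System.out::println);
            }
        }
    }
}
```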
There are edge cases where rounding a date to a certain interval using a time
zone with DST shifts can currently cause the rounded date to be bigger than the
original date. This happens when rounding a date closely after a DST start and
the rounded date falls into the DST gap.
Here is an example for CET time zone, where local time is set forward by one
hour at 2016-03-27T02:00:00+01:00 to 2016-03-27T03:00:00.000+02:00:
The date 2016-03-27T03:01:00.000+02:00 (1459040460000) which is just after the
DST change is first converted to local time (1459047660000). If we then apply
interval rounding for a 14m interval in local time, this takes us to
1459047240000, which unfortunately falls into the DST gap. When converting
this back to UTC, Joda provides options to throw exceptions on illegal dates
like this, or to correct them by adjusting the date to the new time zone offset.
We currently do the latter, but this leads to converting this illegal date back
to 2016-03-27T03:54:00.000+02:00 (1459043640000), giving us a date that is
larger than the original date we wanted to round.
This change fixes this by using the "strict" option of `convertLocalToUTC()`
to detect rounded dates that fall into the DST gap. If this happens, we can use
the time of the DST change instead as the interval start.
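The Joda-Time behavior the fix relies on can be demonstrated in isolation, reusing the CET
example above:

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.IllegalInstantException;

public class DstGapDemo {
    public static void main(String[] args) {
        DateTimeZone cet = DateTimeZone.forID("CET");
        // Local time 2016-03-27T02:54:00 does not exist in CET: at 02:00 the
        // clocks jump straight to 03:00. Expressed as "local millis" this is
        // the 1459047240000 value from the example above.
        long roundedLocal = new DateTime(2016, 3, 27, 2, 54, 0, DateTimeZone.UTC).getMillis();

        // Lenient conversion silently adjusts the instant to the new offset,
        // which is how the rounded date can end up after the original date.
        long lenient = cet.convertLocalToUTC(roundedLocal, false);
        System.out.println(new DateTime(lenient, cet)); // 2016-03-27T03:54:00.000+02:00

        // Strict conversion throws for instants inside the gap, letting the
        // rounding code detect the case and fall back to the DST shift instead.
        try {
            cet.convertLocalToUTC(roundedLocal, true);
        } catch (IllegalInstantException e) {
            System.out.println("rounded value falls into the DST gap");
        }
    }
}
```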
Even before this change, intervals around DST shifts like this can be shorter
than the desired interval. This, for example, happens when the requested
interval width doesn't completely fit into the remaining time span when the DST
shift happens. For example, using a 14m interval in UTC+1 (CET before DST
starts) leads to the following valid rounding values around the time where DST
happens:
2016-03-27T01:30:00+01:00
2016-03-27T01:44:00+01:00
2016-03-27T01:58:00+01:00
2016-03-27T02:12:00+01:00
2016-03-27T02:26:00+01:00
...
while the rounding values in UTC+2 (CET after DST start) are placed like this
around the same time:
2016-03-27T02:40:00+02:00
2016-03-27T02:54:00+02:00
2016-03-27T03:08:00+02:00
2016-03-27T03:22:00+02:00
...
From this we can see that when we switch from UTC+1 to UTC+2 at 02:00 the last
rounding value in UTC+1 is at 01:58 and the first valid one in UTC+2 is at
03:08, so even if we decide to put all the dates in between into one rounding
interval, it will only cover 10 minutes. With this change we choose to use the
moment of the DST shift as an additional interval separator, leaving us with a 2min
interval from [01:58,02:00) before the shift and an 8min interval from
[03:00,03:08) after the shift.
This change also adds tests for the above example and adds randomization to the
existing TimeIntervalRounding tests.