Elasticsearch 1.x used to implicitly round up the upper bounds of range queries when
they were inclusive, so that e.g. `[2016-09-18 TO 2016-09-20]` would actually run
`[2016-09-18T00:00:00.000Z TO 2016-09-20T23:59:59.999Z]` and include dates like
`2016-09-20T15:32:44`. This behaviour was lost in the cleanups of #8889.
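As a rough illustration only (not the actual DateMathParser code), the rounded-up bound for a UTC day-resolution date can be computed like this:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Hypothetical sketch: round an inclusive, day-resolution upper bound up to the
// last millisecond of that day, mirroring the 1.x behaviour described above.
public class RoundUpExample {
    public static void main(String[] args) {
        LocalDate upper = LocalDate.parse("2016-09-20");
        long roundedUpMillis = upper.plusDays(1)
                .atStartOfDay(ZoneOffset.UTC)
                .toInstant()
                .toEpochMilli() - 1;
        // prints 2016-09-20T23:59:59.999Z
        System.out.println(Instant.ofEpochMilli(roundedUpMillis));
    }
}
```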
Closes #20579
The shards preference on a search request enables specifying a list of
shards to hit, and then a secondary preference (e.g., "_primary") can be
added. Today, the separator between the shards list and the secondary
preference is ';'. Unfortunately, this is also a valid separator for URL
query parameters. This means that a preference like "_shards:0;_primary"
will be parsed into two URL parameters: "_shards:0" and "_primary". With
the recent change to strict URL parsing, the second parameter will be
rejected, as "_primary" is not a valid URL parameter on a search
request. This means that this feature has never worked (unless the ';'
is escaped, but no one does that because our docs do not mention it, and
there was no indication from Elasticsearch that it did not work). This
commit changes the separator to '|'.
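For illustration only (this is not the actual preference parsing code), the new separator lets the shard list be split off unambiguously:

```java
// Minimal sketch, assuming a preference string of the form "_shards:<ids>|<secondary>".
public class PreferenceSplitExample {
    public static void main(String[] args) {
        String preference = "_shards:0,1|_primary"; // hypothetical input
        int separator = preference.indexOf('|');
        String shards = separator >= 0 ? preference.substring(0, separator) : preference;
        String secondary = separator >= 0 ? preference.substring(separator + 1) : null;
        System.out.println(shards + " -> " + secondary); // _shards:0,1 -> _primary
    }
}
```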
Relates #20786
This change proposes the removal of all non-TCP transport implementations. The
mock transport, which has roughly the same performance as TCP (or at least is not
noticeably slower), can be used by default to run tests instead of the local transport.
This is a master-only change; a deprecation notice for 5.x will be committed as a
separate change.
Previous to this change, the DateMathParser accepted a Callable<Long> to use for accessing the now value. The implementations of this callable would fall back on System.currentTimeMillis() if no context object was provided. This is no longer necessary for two reasons:
1. We should not fall back to System.currentTimeMillis(), as a context should always be provided. This ensures consistency between shards for the now value in all cases.
2. We should use a LongSupplier rather than requiring an implementation of Callable. This means that we can just pass in context::nowInMillis for this parameter and do not have to implement anything (sketched below).
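A small sketch of the difference; the `Context` class and the `parseWith*` methods below are hypothetical stand-ins, not the DateMathParser API:

```java
import java.util.concurrent.Callable;
import java.util.function.LongSupplier;

public class NowSupplierExample {
    // Hypothetical context that always provides a fixed "now" value.
    static class Context {
        long nowInMillis() { return 1474243200000L; }
    }

    // Before: callers had to wrap the context in a Callable and deal with checked exceptions.
    static long parseWithCallable(Callable<Long> now) throws Exception {
        return now.call();
    }

    // After: callers can simply pass a method reference.
    static long parseWithSupplier(LongSupplier now) {
        return now.getAsLong();
    }

    public static void main(String[] args) throws Exception {
        Context context = new Context();
        System.out.println(parseWithCallable(() -> context.nowInMillis()));
        System.out.println(parseWithSupplier(context::nowInMillis));
    }
}
```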
This commit improves the shard decision container class in the following
ways:
1. Renames UnassignedShardDecision to ShardAllocationDecision, so that
the class can be used for general shard decisions, not just unassigned
shard decisions.
2. Changes ShardAllocationDecision to have the final decision as a Type
instead of a Decision, because all the information needed from the final
decision is contained in `Type`.
3. Uses cached instances of ShardAllocationDecision for NO and THROTTLE
decisions when no explanation is needed (which is the common case when
executing reroutes as opposed to using the explain API); see the sketch below.
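A hedged sketch of the caching idea in point 3; the class shape and method names are illustrative, not the actual Elasticsearch API:

```java
public class ShardAllocationDecisionSketch {
    enum Type { YES, NO, THROTTLE }

    // Shared instances reused whenever no explanation is requested.
    private static final ShardAllocationDecisionSketch CACHED_NO =
            new ShardAllocationDecisionSketch(Type.NO, null);
    private static final ShardAllocationDecisionSketch CACHED_THROTTLE =
            new ShardAllocationDecisionSketch(Type.THROTTLE, null);

    private final Type finalDecision;
    private final String explanation; // null when no explanation is needed

    private ShardAllocationDecisionSketch(Type finalDecision, String explanation) {
        this.finalDecision = finalDecision;
        this.explanation = explanation;
    }

    static ShardAllocationDecisionSketch no(boolean explain, String explanation) {
        return explain ? new ShardAllocationDecisionSketch(Type.NO, explanation) : CACHED_NO;
    }

    static ShardAllocationDecisionSketch throttle(boolean explain, String explanation) {
        return explain ? new ShardAllocationDecisionSketch(Type.THROTTLE, explanation) : CACHED_THROTTLE;
    }

    Type getFinalDecision() { return finalDecision; }
    String getExplanation() { return explanation; }
}
```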
Today SearchContext exposes the current context as a thread local, which makes any kind of sane interface design very hard. This PR removes the thread local entirely and instead passes the relevant context wherever it is needed. This simplifies state management dramatically and will allow for a much leaner SearchContext interface down the road.
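To illustrate the contrast (with hypothetical names, not the real SearchContext API): state reached through a thread local versus the same state passed explicitly to the code that needs it:

```java
public class ContextPassingSketch {
    static class SearchContextLike {
        final int size;
        SearchContextLike(int size) { this.size = size; }
    }

    // Before: any code anywhere could reach for the "current" context.
    static final ThreadLocal<SearchContextLike> CURRENT = new ThreadLocal<>();

    static int sizeFromThreadLocal() {
        return CURRENT.get().size; // hidden dependency, hard to reason about or test
    }

    // After: the dependency is visible in the method signature.
    static int sizeFromArgument(SearchContextLike context) {
        return context.size;
    }
}
```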
As the wise man @ywelsch said: currently, when we batch cluster state update tasks by the same executor, we only remove the first task from the pending task queue. That means that other tasks for the same executor are left in the queue. When those are dequeued, they will trigger another run for the same executor. This can give unfair precedence to future tasks of the same executor, even if they weren't batched in the first run. Take this queue for example (all with equal priority):
```
T1 (executor 1)
T2 (executor 1)
T3 (executor 2)
T4 (executor 2)
T5 (executor 1)
T6 (executor 1)
```
If T1 & T2 are picked up first (when T5 & T6 are not yet queued), one would expect T3 & T4 to run second. However, since T2 is still in the queue, it will trigger execution of T5 & T6.
The fix is easy: ignore processed tasks when extracting them from the queue.
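A minimal sketch of that fix, assuming a simplified task queue (this is not the real PrioritizedEsThreadPoolExecutor code):

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BatchedTaskQueueSketch {
    static class Task {
        final String executor;
        boolean processed;
        Task(String executor) { this.executor = executor; }
    }

    // Take the next unprocessed task and batch it with every pending task that
    // shares its executor, marking all of them as processed so that later
    // dequeues of those tasks do not trigger another run.
    static List<Task> nextBatch(Deque<Task> pending) {
        Task first;
        do {
            first = pending.poll();
            if (first == null) {
                return List.of();
            }
        } while (first.processed); // the fix: ignore tasks already run in an earlier batch

        List<Task> batch = new ArrayList<>();
        first.processed = true;
        batch.add(first);
        for (Task task : pending) {
            if (!task.processed && task.executor.equals(first.executor)) {
                task.processed = true;
                batch.add(task);
            }
        }
        return batch;
    }
}
```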
Closes #20768
Today the script is compiled when creating the aggregator, meaning that scripts will be
compiled as many times as there are buckets. Instead it should be compiled when
creating the factory so that scripts are compiled only once regardless of the
number of buckets.
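A hedged sketch of the idea (class and method names are illustrative, not the Elasticsearch aggregation API): compile in the factory and hand the compiled form to every per-bucket aggregator:

```java
public class CompileOnceSketch {
    interface CompiledScript { double run(double value); }

    // Stand-in for an expensive compilation step.
    static CompiledScript compile(String source) {
        return value -> value * 2;
    }

    static class AggregatorFactory {
        private final CompiledScript compiled;
        AggregatorFactory(String scriptSource) {
            this.compiled = compile(scriptSource); // compiled exactly once
        }
        Aggregator createAggregator() {
            return new Aggregator(compiled); // reused for every bucket
        }
    }

    static class Aggregator {
        private final CompiledScript compiled;
        Aggregator(CompiledScript compiled) { this.compiled = compiled; }
        double collect(double value) { return compiled.run(value); }
    }
}
```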
TRA (TransportReplicationAction) currently resolves incoming requests to IndexShards in order to acquire operation locks on them. There is no need for all subclasses to have to go through the same IndicesService/IndexService song and dance. Also, doing it once means we don't need to worry about edge cases where the shard is removed while a TRA is in flight.
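The shape of that refactoring, sketched with hypothetical names (not the actual TransportReplicationAction code): the base class resolves the shard once and passes it to subclasses:

```java
import java.util.function.Function;

public class ResolveShardOnceSketch {
    interface IndexShardLike { void acquireOperationLock(); }

    abstract static class ReplicationActionBase {
        private final Function<String, IndexShardLike> shardResolver;

        ReplicationActionBase(Function<String, IndexShardLike> shardResolver) {
            this.shardResolver = shardResolver;
        }

        final void execute(String shardId) {
            IndexShardLike shard = shardResolver.apply(shardId); // resolved once, up front
            shard.acquireOperationLock();
            shardOperation(shard); // subclasses receive the already-resolved shard
        }

        protected abstract void shardOperation(IndexShardLike shard);
    }
}
```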
LongGCDisruption suspends and resumes node threads but respects several
`unsafe` class name patterns where it's unsafe to suspend. For instance,
log4j uses a global lock, so we can't suspend a thread that is currently
calling into log4j. The same is true for the security manager: like log4j,
it is a resource shared between the test and the node that is _suspended_.
This change adds `java.lang.SecurityManager` to the unsafe patterns.
This prevents the test framework from deadlocking if a node's thread is suspended
while it's calling into the security manager, which uses synchronized maps etc.
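A minimal sketch of that safety check, assuming a simplified pattern list (this is not the actual LongGCDisruption code): inspect a thread's stack and refuse to suspend it if any frame belongs to a class matching an unsafe pattern:

```java
public class UnsafeSuspendCheckSketch {
    private static final String[] UNSAFE_CLASS_PATTERNS = {
            "org.apache.logging.log4j.", // hypothetical pattern for log4j
            "java.lang.SecurityManager"
    };

    static boolean safeToSuspend(Thread thread) {
        for (StackTraceElement frame : thread.getStackTrace()) {
            for (String pattern : UNSAFE_CLASS_PATTERNS) {
                if (frame.getClassName().startsWith(pattern)) {
                    return false; // suspending here could deadlock on a shared lock
                }
            }
        }
        return true;
    }
}
```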
This commit improves the logic flow of BalancedShardsAllocator in
preparation for separating out components of this class to be used
in the cluster allocation explain APIs. In particular, this commit:
1. Adds a minimum value for the index/shard balance factor settings (0.0)
2. Makes the Balancer data structures immutable and pre-calculated at
construction time.
3. Removes difficult-to-follow labeled blocks / GOTOs
4. Improves the logic for skipping over the same replica set when one of
the replicas received a NO decision
5. Separates the decision making logic for a single shard from the logic
to iterate over all unassigned shards.
* Add parametrized retries for dnf install
Given that dnf doesn't do retries, installation of openjdk can sometimes
be affected by checksum or network issues with mirrors offered by
metalink.
Allow setting the number of retries through the parameter
`install_command_retries`.
* Insert delay between package install retries
Fedora's metalink occasionally returns broken mirrors. Pausing for a few
seconds between retries increases the chance of receiving a different
list of mirrors from metalink and of the package installation succeeding.
This causes the snippets to be tested during the build and gives the
reader helpful links to open the docs in console or copy them as curl
commands.
Relates to #18160
Before this change, the processing of the ranges in the date range (and
other range type) aggregations was done when the Aggregator was created.
This meant that the SearchContext did not know that `now` had been used in
a range until after the decision to cache was made.
This change moves the processing of the ranges to the aggregation builders
so that the search context is made aware that `now` has been used before
it decides whether the request should be cached.
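A hedged sketch of the ordering change, with hypothetical names (not the real builder or context classes): resolve the ranges first, recording whether `now` was used, and only then decide cacheability:

```java
public class NowAwareCachingSketch {
    static class SearchContextLike {
        private boolean nowUsed;
        void markNowUsed() { nowUsed = true; }
        boolean cacheable() { return !nowUsed; } // requests that depend on `now` are not cached
    }

    // Resolve a range bound at build time; if the expression uses `now`, the
    // context learns about it before any caching decision is made.
    static long resolveBound(String expression, SearchContextLike context, long nowInMillis) {
        if (expression.contains("now")) {
            context.markNowUsed();
            return nowInMillis;
        }
        return Long.parseLong(expression);
    }
}
```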
This commit adds a "did you mean" feature to the strict REST params error
message. This works by comparing any unconsumed parameters to all of the
consumed parameters, computing the Levenshtein distance between those
parameters, and taking any consumed parameters that are close to an
unconsumed parameter as candidates for the "did you mean" suggestion.
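For illustration only (not the actual RestRequest code), a sketch of the suggestion logic with an assumed distance cutoff:

```java
import java.util.ArrayList;
import java.util.List;

public class DidYouMeanSketch {
    // Classic dynamic-programming Levenshtein distance.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Keep consumed parameters that are close to the unconsumed one.
    static List<String> candidates(String unconsumed, List<String> consumed) {
        List<String> suggestions = new ArrayList<>();
        for (String candidate : consumed) {
            if (levenshtein(unconsumed, candidate) <= 2) { // hypothetical cutoff
                suggestions.add(candidate);
            }
        }
        return suggestions;
    }

    public static void main(String[] args) {
        System.out.println(candidates("sizee", List.of("size", "from", "sort"))); // [size]
    }
}
```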
* Fix pluralization in strict REST params message
This commit fixes the pluralization in the strict REST parameters error
message so that the word "parameter" is not unconditionally written as
"parameters" even when there is only one unrecognized parameter.
* Strengthen strict REST params did you mean test
This commit adds an unconsumed parameter that is too far from every
consumed parameter to have any candidate suggestions.
Relates #20747
This commit changes the strict REST parameters message to say that
unconsumed parameters are unrecognized rather than unused. Additionally,
the test is beefed up to include two unused parameters.
Relates #20745