Several shard-level operations that previously broadcast a request
per shard were converted to broadcast a request per node. This commit
converts the upgrade action to this new model as well.
Closes #13204
Switch to HTTPS by default for all hardcoded plugin URLs.
If users want to install via HTTP, they can still specify an HTTP
URL manually.
Closes #12748
The current netty multiport tests bind on localhost and then try to connect
to 127.0.0.1, which may fail if localhost resolves to IPv6 by default.
This change randomly chooses between 127.0.0.1, localhost, and ::1 (if available)
for binding and then uses that host throughout the test.
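As a rough illustration, a helper along these lines could pick the host once per test (the class and method names here are hypothetical, not the actual test code):

```java
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Hypothetical helper: pick one loopback host and use it for both bind and connect.
class LoopbackHosts {

    static String randomLoopbackHost(Random random) throws SocketException {
        List<String> hosts = new ArrayList<>();
        hosts.add("127.0.0.1");
        hosts.add("localhost");
        if (ipv6LoopbackAvailable()) {
            hosts.add("::1");
        }
        return hosts.get(random.nextInt(hosts.size()));
    }

    // "::1" is only offered if an IPv6 loopback address is actually configured.
    private static boolean ipv6LoopbackAvailable() throws SocketException {
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress address : Collections.list(nif.getInetAddresses())) {
                if (address instanceof Inet6Address && address.isLoopbackAddress()) {
                    return true;
                }
            }
        }
        return false;
    }
}
```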
The path that a shard is allocated on is not taken into account when
we decide to move a shard away from a node because it passed a watermark.
Even worse, we potentially moved away (relocated) a shard that was not even
allocated on that disk but on another disk of the node in question. This commit
adds a ShardRouting -> dataPath mapping to ClusterInfo that makes it possible to
identify which disk each shard is allocated on.
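Conceptually, the new mapping looks roughly like this (a simplified sketch with hypothetical types and method names, not the actual ClusterInfo code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of carrying a per-shard data path in the cluster info,
// so deciders can tell which disk a given shard copy actually lives on.
class ClusterInfoSketch {

    // key: an identifier for the shard copy (shard id + node), value: its data path
    private final Map<String, String> shardToDataPath;

    ClusterInfoSketch(Map<String, String> shardToDataPath) {
        this.shardToDataPath = Collections.unmodifiableMap(new HashMap<>(shardToDataPath));
    }

    // Deciders can use this to check the usage of the disk the shard is actually on.
    String getDataPath(String shardCopyId) {
        return shardToDataPath.get(shardCopyId);
    }
}
```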
Relates to #13106
Added some comments about changes that we might be able to make later on in the refactoring. Also exposed handleTermsLookup directly in the context, removed the unused similarityService getter, and corrected the simpleMatchToIndexNames call according to changes that happened on master (the method variant that accepts types will go away; types are ignored anyway).
The reason behind this change is that we will soon remove the index member from QueryParseContext, as parsing will be independent of the index it happens against.
DummyQueryParser now properly implements fromXContent, and the corresponding builder supports converting to the corresponding Lucene query through its toQuery method.
This allows us to remove the string argument from the parseQuery method and the need to strip the first few tokens from the query, as the selected parser only knows about its own portion of the DSL.
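The shape of the parser/builder split is roughly the following (a heavily simplified, hypothetical sketch; it only assumes Lucene's Query and MatchAllDocsQuery on the classpath):

```java
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

// Simplified sketch: the builder knows how to turn itself into a Lucene query.
class DummyQueryBuilder {
    Query toQuery() {
        // in this sketch the dummy query simply matches everything
        return new MatchAllDocsQuery();
    }
}

// Simplified sketch: the parser consumes only its own portion of the DSL and
// returns the corresponding builder, so no upfront token stripping is needed.
class DummyQueryParser {
    DummyQueryBuilder fromXContent(/* parser positioned inside the dummy query object */) {
        return new DummyQueryBuilder();
    }
}
```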
```sh
bin/plugin install lmenezes/elasticsearch-kopf/develop
-> Installing lmenezes/elasticsearch-kopf/develop...
Trying http://download.elastic.co/lmenezes/elasticsearch-kopf/elasticsearch-kopf-develop.zip ...
Trying http://search.maven.org/remotecontent?filepath=lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://oss.sonatype.org/service/local/repositories/releases/content/lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip ...
Downloading .................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip checksums if available ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
```
This happens because we no longer have ElasticsearchWrapperException here but standard Java exceptions.
Closes #13196.
Currently, many shard-level operations are transported with a request
per shard via TransportBroadcastAction. These shard-level requests are
then submitted to unbounded execution queues for asynchronous execution
on the receiving node. This transport mechanism and stuffing of the
execution queues can be problematic on large clusters. A better
mechanism would be to aggregate the shard-level requests, transport
them via a single request per node, and execute the shard-level
operations serially on the receiving node.
This commit introduces TransportNodeBroadcastAction, which is the
high-level mechanism for transporting the shard-level operations in a
single request per node. The shard-level operations are executed
serially on the receiving node and per-node shard-level results are
aggregated into a single response per node. These node-level results
are then aggregated into a single response to the initial request.
One item of note is a new mechanism for registering request handlers.
This mechanism enables registrants to provide a callback for
instantiating new instances of the request class. Doing this enables
the inner class to be instantiated with the context of its outer class.
This is done so that a single NodeRequest class can be defined rather
than defining a class per operation.
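A rough sketch of that registration mechanism (hypothetical class and action names, not the actual transport API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class TransportRequest {}

// Hypothetical registry: registrants pass a callback that creates new request instances.
class RequestHandlerRegistry {
    private final Map<String, Supplier<? extends TransportRequest>> factories = new HashMap<>();

    <R extends TransportRequest> void register(String action, Supplier<R> requestFactory) {
        factories.put(action, requestFactory);
    }

    // Called when a request for the given action arrives and needs to be deserialized.
    TransportRequest newRequest(String action) {
        return factories.get(action).get();
    }
}

class SomeNodeBroadcastAction {

    // Inner (non-static) class: instances need an enclosing action instance.
    class NodeRequest extends TransportRequest {}

    void registerRequestHandler(RequestHandlerRegistry registry) {
        // The lambda captures `this`, so every new NodeRequest is created with the
        // context of its outer action; the same NodeRequest class can therefore be
        // reused rather than defining one per operation.
        registry.register("example/node_broadcast[n]", () -> new NodeRequest());
    }
}
```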
Closes #7990
```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
</plugin>
```
Release Notes - Maven Enforcer - Version 1.4.1
========
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12317520&version=12330766
Bugs:
-----
* [MENFORCER-222] - RequireSameVersion rule regression between 1.3 and 1.4
* [MENFORCER-224] - Regression from 1.3.1 to 1.4 with bannedDependencies rule
* [MENFORCER-229] - Ban Distribution Management documentation example doesn't work
* [MENFORCER-237] - Resources Link to codehaus is wrong
Improvements:
------------
* [MENFORCER-223] - Upgrade mrm-maven-plugin to 1.0-beta-2
* [MENFORCER-227] - Document nullability with @Nonnull on EnforcerRule API
* [MENFORCER-233] - Upgrade maven-invoker-plugin to 2.0.0
* [MENFORCER-235] - Use maven-fluido-skin 1.4
* [MENFORCER-236] - Upgrade maven-assembly-plugin version from 2.4 to 2.5.5 in integration test
* [MENFORCER-238] - Upgrade plexus-utils to 3.0.22
Today we sum up the disk usage for the allocation decider, which is broken since
we don't stripe across multiple data paths. Each shard has its own private path
now, but the allocation deciders still treat all paths as one big disk. This commit
allows allocation deciders to access the least used and most used path to make
better allocation decisions in canRemain and canAllocate calls.
Yet this commit doesn't fix all the issues, since we still can't tell which shard
can remain and which can't. That problem is out of scope for this commit and will be solved
in a follow-up commit.
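Schematically, the information exposed to the deciders could look like this (hypothetical types; the real code lives in ClusterInfo and the disk-based deciders):

```java
// Simplified sketch of per-path disk usage exposed to allocation deciders.
class DiskUsage {
    final String path;
    final long totalBytes;
    final long freeBytes;

    DiskUsage(String path, long totalBytes, long freeBytes) {
        this.path = path;
        this.totalBytes = totalBytes;
        this.freeBytes = freeBytes;
    }

    double freeDiskAsPercentage() {
        return 100.0 * freeBytes / totalBytes;
    }
}

// Instead of one summed-up figure per node, deciders can now consult the
// least used and most used data path when evaluating canAllocate / canRemain.
class NodeDiskUsage {
    final DiskUsage leastUsedPath;
    final DiskUsage mostUsedPath;

    NodeDiskUsage(DiskUsage leastUsedPath, DiskUsage mostUsedPath) {
        this.leastUsedPath = leastUsedPath;
        this.mostUsedPath = mostUsedPath;
    }
}
```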
Relates to #13106
Previously, multiple clusters in the same JVM reused the same port ranges, leading to potentially big gaps in port selection, which in turn causes unicast-based discovery to fail, missing the other node within the default 5-port range.
Also, the previous logic had HTTP use a range that is assigned to other JVMs.
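As an illustration, per-cluster port ranges could be derived along these lines (hypothetical numbers and helper, not the actual test infrastructure):

```java
// Hypothetical sketch: give every cluster in every test JVM its own contiguous
// port range, so the first few unicast ping ports actually belong to that cluster.
class PortRanges {

    private static final int BASE_PORT = 9300;        // assumed transport base port
    private static final int PORTS_PER_JVM = 1000;    // assumed slice per test JVM
    private static final int PORTS_PER_CLUSTER = 100; // assumed slice per cluster

    static int transportBasePort(int jvmOrdinal, int clusterOrdinal) {
        return BASE_PORT + jvmOrdinal * PORTS_PER_JVM + clusterOrdinal * PORTS_PER_CLUSTER;
    }

    // HTTP gets its own slice within the cluster's range instead of borrowing
    // a range that belongs to another JVM.
    static int httpBasePort(int jvmOrdinal, int clusterOrdinal) {
        return transportBasePort(jvmOrdinal, clusterOrdinal) + PORTS_PER_CLUSTER / 2;
    }
}
```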
There are some leftovers around the usage of NamedWriteableRegistry in our branch compared to master, just because the registry was added here first, then backported to master, and something didn't go completely right during the final merge.
We default the value to be 20x the value of the ping timeout; however, we only use the legacy ping timeout setting's value for the calculation.
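The intended calculation is roughly the following (a sketch with hypothetical setting handling; the actual setting names are elided here):

```java
import java.util.concurrent.TimeUnit;

// Sketch: the default should be 20x whichever ping timeout is configured,
// falling back through the legacy setting and an assumed built-in default.
class JoinTimeoutDefault {

    static long defaultJoinTimeoutMillis(Long pingTimeoutMillis, Long legacyPingTimeoutMillis) {
        long pingTimeout;
        if (pingTimeoutMillis != null) {
            pingTimeout = pingTimeoutMillis;
        } else if (legacyPingTimeoutMillis != null) {
            pingTimeout = legacyPingTimeoutMillis;
        } else {
            pingTimeout = TimeUnit.SECONDS.toMillis(3); // assumed default ping timeout
        }
        return 20 * pingTimeout;
    }
}
```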
Closes #13162