Currently, we do not allow reads on shards which are in POST_RECOVERY, which
unfortunately can cause search failures on shards which just recovered if there are no replicas (#9421).
The reason why we did not allow reads on shards that are in POST_RECOVERY is
that after relocation a shard might miss a refresh if the node that executed the
refresh is behind with cluster state processing. If that happens, a user might execute
index/refresh/search but still not find the document that was indexed.
#13068 changed how refresh works to make sure that shards cannot miss a refresh this
way, by sending refresh requests the same way that we send write requests.
This commit changes IndexShard to allow reads in POST_RECOVERY now.
In addition, it adds two tests:
- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)
- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY
Closes #9421
DateHistogramIT has suite scope and therefore must take care of cleaning up after
each test. Otherwise we cannot run tests several times with tests.iters.
The implementation this commit replaces was almost k-NN regression with
k=2, but had two bugs: (a) it depended on the empirical raw estimates
being in strictly non-decreasing order for the binary search (which they
are not); and (b) it weighted the biases positively with increased
distance from the corresponding raw estimate.
“HyperLogLog in Practice” leaves the choice of exact algorithm here
fairly vague, just noting: “We use k-nearest neighbor interpolation to
get the bias for a given raw estimate (for k = 6).” The majority of
other open source HyperLogLog++ implementations appear to use k-NN
regression with uniform weights (and generally k = 6). Uniform
weighting does decrease variance, but also introduces bias at the domain
extrema. This problem, plus the use of the word “interpolation” in the
original paper, suggests (inverse) distance-weighted k-NN, as
implemented here.
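For concreteness, a minimal sketch of inverse distance-weighted k-NN over the paper's (rawEstimate, bias) tables; the method name and the brute-force neighbor search are illustrative, not the committed implementation:

```java
import java.util.Arrays;

// Hedged sketch: weight the k nearest tabulated biases by 1/distance.
static double estimateBias(double rawEstimate, double[] rawEstimates, double[] biases, int k) {
    int n = rawEstimates.length;
    Integer[] order = new Integer[n];
    for (int i = 0; i < n; i++) {
        order[i] = i;
    }
    // Sort indices by distance to the query. Unlike the old binary search,
    // this makes no assumption that the table is in non-decreasing order.
    Arrays.sort(order, (a, b) -> Double.compare(
            Math.abs(rawEstimates[a] - rawEstimate),
            Math.abs(rawEstimates[b] - rawEstimate)));
    double weightSum = 0, weighted = 0;
    for (int i = 0; i < Math.min(k, n); i++) {
        double d = Math.abs(rawEstimates[order[i]] - rawEstimate);
        if (d == 0) {
            return biases[order[i]]; // exact table hit
        }
        // Closer neighbors get *larger* weights; bug (b) had this backwards.
        double w = 1.0 / d;
        weightSum += w;
        weighted += w * biases[order[i]];
    }
    return weighted / weightSum;
}
```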
Since #13068, refresh and flush requests go to the primary first and are then replicated.
One difference from before, though, is that if a shard is not available (INITIALIZING, for example)
we wait a little for an indexing request, but for refresh we don't, and just give up immediately.
Before, refresh requests were simply sent to the shards regardless of their state.
In tests we sometimes create an index, issue an indexing request, refresh and
then get the document. But we do not wait until all nodes know that all primaries have been assigned.
Now potentially one node can be one cluster state behind and not know yet that
the shards have been started. If the refresh is executed through this node then the
refresh request will silently fail on shards that are already started, because from
the node's perspective they are still initializing. As a consequence, documents
that are expected to be available in the test are not.
Example test failures are here: http://build-us-00.elastic.co/job/elasticsearch-20-oracle-jdk7/395/
This commit changes the timeout to 1m (the default) to make sure we don't miss shards
when we refresh. This will trigger the same retry mechanism as for indexing requests.
We still have to decide whether this change of behavior is acceptable.
See #13238
From a user perspective, the main benefit of this upgrade is that the new
Lucene53Codec has disk-based norms. The Elasticsearch directory has been fixed
to load these norms through mmap instead of nio.
Other changes include the removal of `max_thread_states`, the fact that
PhraseQuery and BooleanQuery are now immutable, and that deleted docs are now
applied on top of the Scorer API.
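For illustration, the new immutable style in Lucene 5.3: queries are assembled through builders and can no longer be mutated after construction (a hedged sketch, not code from this change):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;

// Build a phrase and wrap it in a boolean query; there is no add()/setSlop()
// on the finished query objects anymore.
PhraseQuery phrase = new PhraseQuery.Builder()
        .add(new Term("body", "quick"))
        .add(new Term("body", "fox"))
        .setSlop(1)
        .build();
BooleanQuery query = new BooleanQuery.Builder()
        .add(phrase, Occur.MUST)
        .build();
```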
This change introduces a couple of `AwaitsFix` annotations, but I don't think that should
hold us back from merging.
This allows reducing privileges with doPrivileged to work;
otherwise it will fail with an NPE.
In general, if some code wants to do that, let it. The null
check is needed, even though the documentation for ProtectionDomain(CodeSource, PermissionCollection)
is more than a bit misleading: "the current Policy will not be consulted".
Additionally, add a defensive check for the location, since the docs
there are even more confusing: https://bugs.openjdk.java.net/browse/JDK-8129972
The JDK policy implementation has both of these checks.
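A sketch of the pattern this enables, running a block with *reduced* privileges by intersecting the caller's permissions with an empty set (illustrative, not the fixed code itself):

```java
import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.Permissions;
import java.security.PrivilegedAction;
import java.security.ProtectionDomain;

// null CodeSource plus empty permissions: per the (misleading) docs,
// "the current Policy will not be consulted" for this domain.
AccessControlContext noPermissions = new AccessControlContext(
        new ProtectionDomain[] {
                new ProtectionDomain(null, new Permissions())
        });
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
    // code here runs with no permissions, regardless of the caller's grants
    return null;
}, noPermissions);
```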
Several shard-level operations that previously broadcasted a request
per shard were converted to broadcast a request per node. This commit
converts upgrade action to this new model as well.
Closes #13204
Switch to use HTTPS by default for all hardcoded plugin URLs.
If users want to install via HTTP, they can still specify an HTTP
URL manually.
Closes #12748
The current netty multiport tests bind on localhost and then try to connect
to 127.0.0.1, which may fail if localhost resolves to IPv6 by default.
This change randomly chooses between 127.0.0.1, localhost and ::1 (if available) for
binding and then uses that address throughout the test.
The path that a shard is allocated on is not taken into account when
we decide to move a shard away from a node because it passed a watermark.
Even worse, we potentially moved away (relocated) a shard that was not even
allocated on that disk but on another disk on the node in question. This commit
adds a ShardRouting -> dataPath mapping to ClusterInfo that allows identifying
which disk each shard is allocated on.
Relates to #13106
The field type tests for mappings had a huge hole: checkCompatibility
was not tested directly at all! I had meant for this to happen in a
follow-up after #8871, and was relying on existing mapping tests.
However, there were a number of issues.
This change reworks the field type tests to be able to check that all settable
properties on a field type work with checkCompatibility. It fixes a
handful of small bugs in various field types. In particular, analyzer
comparison was just wrong: it compared reference equality for the
search analyzer instead of comparing the analyzer name. There was also no check
for the search quote analyzer.
Closes #13112
```sh
bin/plugin install lmenezes/elasticsearch-kopf/develop
-> Installing lmenezes/elasticsearch-kopf/develop...
Trying http://download.elastic.co/lmenezes/elasticsearch-kopf/elasticsearch-kopf-develop.zip ...
Trying http://search.maven.org/remotecontent?filepath=lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://oss.sonatype.org/service/local/repositories/releases/content/lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip ...
Downloading ....................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip checksums if available ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
```
This happens because we no longer have an ElasticsearchWrapperException here, but standard Java exceptions.
Closes #13196.
Currently, many shard-level operations are transported with a request
per shard via TransportBroadcastAction. These shard-level requests are
then submitted to unbounded execution queues for asynchronous execution
on the receiving node. This transport mechanism and stuffing of the
execution queues can be problematic on large clusters. A better
mechanism would be to aggregate the shard-level requests, transport
them via a single request per node, and execute the shard-level
operations serially on the receiving node.
This commit introduces TransportNodeBroadcastAction which is the
high-level mechanism for transporting the shard-level operations in a
single request per node. The shard-level operations are executed
serially on the receiving node and per-node shard-level results are
aggregated into a single response per node. These node-level results
are then aggregated into a single response to the initial request.
One item of note is a new mechanism for registering request handlers.
This mechanism enables registrants to provide a callback for
instantiating new instances of the request class. Doing this enables
the inner class to be instantiated with the context of its outer class.
This is done so that a single NodeRequest class can be defined rather
than defining a class per operation.
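As an illustration, a minimal sketch of callback-based registration; the names are hypothetical, not the actual transport API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Instead of reflectively instantiating the request class, the registrant
// hands over a factory, so a non-static inner class can be created with the
// context of its outer instance.
class RequestRegistry {
    private final Map<String, Supplier<?>> factories = new HashMap<>();

    <R> void register(String action, Supplier<R> requestFactory) {
        factories.put(action, requestFactory);
    }

    Object newRequest(String action) {
        return factories.get(action).get(); // invokes the registrant's callback
    }
}

// Usage inside some transport action (hypothetical action name); the lambda
// captures 'this', which plain newInstance() could never do:
//   registry.register("indices:admin/upgrade[n]", () -> new NodeRequest());
```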
Closes #7990
Today we sum up the disk usage for the allocation decider, which is broken since
we don't stripe across multiple data paths. Each shard has its own private path
now, but the allocation deciders still treat all paths as one big disk. This commit
allows allocation deciders to access the least used and most used path to make
better allocation decisions upon canRemain and canAllocate calls.
Yet, this commit doesn't fix all the issues, since we still can't tell which shard
can remain and which can't. That problem is out of scope in this commit and will be solved
in a followup commit.
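For illustration, a sketch of how a decider can now consult per-path usage; the accessor names reflect my understanding of the new ClusterInfo API and should be treated as assumptions:

```java
import org.elasticsearch.cluster.ClusterInfo;
import org.elasticsearch.cluster.DiskUsage;
import org.elasticsearch.cluster.routing.RoutingNode;

// canAllocate-style check: even the path with the most available space must
// clear the watermark before we place a new shard on this node.
boolean canAllocate(RoutingNode node, ClusterInfo clusterInfo, long watermarkFreeBytes) {
    DiskUsage mostAvailable = clusterInfo.getNodeMostAvailableDiskUsages().get(node.nodeId());
    return mostAvailable != null && mostAvailable.getFreeBytes() > watermarkFreeBytes;
}
```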
Relates to #13106
Previously, multiple clusters in the same JVM reused the same port ranges, leading to potentially big gaps in port selection, which in turn causes unicast-based discovery to fail, missing another node in the default 5-port range.
Also, the previous logic had HTTP use a range that is assigned to other JVMs.
We default the value to 20x the value of the ping timeout; however, we only use the legacy ping timeout setting's value for the calculation.
Closes #13162
This commit removes and now forbids all uses of
com.google.common.collect.Lists across the codebase. This is the first
of many steps in the eventual removal of Guava as a dependency.
lessThanOrEqualTo is more appropriate than lessThan when comparing _ttl,
because in rare cases, when tests run very fast, the ttl you fetch will
still equal the one you sent.
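The race in miniature, with hypothetical values:

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.lessThanOrEqualTo;

// On a fast machine, index + get can happen within the same millisecond,
// so the fetched ttl has not counted down yet and equals what was sent.
long sentTtl = 100_000L;
long fetchedTtl = 100_000L;
assertThat(fetchedTtl, lessThanOrEqualTo(sentTtl)); // lessThan() would flake here
```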
Some listeners may need to do work before a shard's path is
accessed (such as creating the directory in a plugin), so the listener
should be called before anything happens (as its name implies).
detect_noop is pretty cheap and noop updates comparatively expensive, so this
feels like a sensible default.
Also had to do some testing and documentation around how _ttl works with
detect_noop.
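For reference, a hedged sketch using the Java client (the setter names reflect my understanding of the API and should be treated as assumptions):

```java
import org.elasticsearch.client.Client;

// With detect_noop on by default, opting out becomes the explicit case:
void updateWithoutNoopDetection(Client client) {
    client.prepareUpdate("index", "type", "1")
            .setDoc("{\"body\":\"same value as before\"}")
            .setDetectNoop(false) // restore the old always-write behavior
            .get();
}
```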
Closes #11282
As discussed in #11744, this is the last step to unify parsing of boost and _name. Those fields are supported only in the long version of queries, while we sometimes parse them when we shouldn't, inconsistently.
Closes #11744, closes #12966
Since we no longer stripe shards across data paths, we need a way
to access the information about which path a shard is allocated on,
to eventually make better allocation decisions based on disk usage etc.
This commit exposes the shard paths as part of the shard stats.
Relates to #13106
Until a couple of hours ago we expected the position_offset_gap to default
to 0 in 2.0 and 100 in 2.1. We decided it was worth backporting that new
default to 2.0. So now that it's backported we need to teach 2.1 that 2.0
also defaults to 100.
Closes #7268
This is much more fiddly than you'd expect it to be because of the way
position_offset_gap is applied in StringFieldMapper. Instead of setting
the default to 100, it's simpler to make sure that all the analyzers default
to 100 and that StringFieldMapper doesn't override the default unless the
user specifies something different. Unless the index was created before
2.1, in which case the old default of 0 has to take effect.
Also, position_offset_gaps less than 0 aren't allowed at all.
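A sketch of the analyzer-side default described above (illustrative, not the mapper code): the gap between multiple values of a field comes from the analyzer, so setting it there means StringFieldMapper no longer has to override anything.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardTokenizer;

Analyzer analyzer = new Analyzer() {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        return new TokenStreamComponents(new StandardTokenizer());
    }

    @Override
    public int getPositionIncrementGap(String fieldName) {
        // the new default: positions jump by 100 between values, so
        // low-slop phrase queries cannot match across value boundaries
        return 100;
    }
};
```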
New tests verify that:
1. the new default doesn't match phrases across values with reasonably low
slop (5)
2. the new default does match phrases across values with reasonably high
slop (50)
3. you can override the value and phrases work as you'd expect
4. if you leave the value undefined in the mapping and define it on a
custom analyzer, the value from the custom analyzer shines through
Closes #7268
This commit changes the startup behavior of Elasticsearch to throw an
exception if duplicate settings keys are detected in the Elasticsearch
configuration file.
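A minimal sketch of the detection, assuming settings are flattened to key/value pairs while the file is parsed:

```java
import java.util.HashMap;
import java.util.Map;

// Map#put returns the previous value, so a non-null result means the key
// already appeared earlier in the configuration file.
Map<String, String> seen = new HashMap<>();
String key = "node.name";   // hypothetical parsed pair
String value = "node-1";
if (seen.put(key, value) != null) {
    throw new IllegalStateException("duplicate settings key [" + key + "] found");
}
```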
Closes #13079
Currently, when an entire type is disabled, our document parser will end
parsing on the first field of the document. This blows up the recently
added check that parsing did not silently skip any tokens (i.e. whether
there was garbage left over).
This change fixes the parser to correctly skip the entire document when
the type is disabled.
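The fix in a nutshell; 'enabled' and the surrounding parse loop are hypothetical stand-ins for the real document parser state:

```java
if (enabled == false) {
    // consume the entire object so the "no tokens silently skipped" check
    // sees a fully parsed document instead of leftover garbage
    parser.skipChildren(); // XContentParser: skips to the matching END_OBJECT
    return;
}
```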
Closes #13017
This commit consolidates the logic in
RoutingTable#allActiveShardsGrouped and
RoutingTable#allAssignedShardsGrouped into a single method that merely
applies a predicate to each ShardRouting.
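A hedged sketch of the consolidation, with types simplified to plain lists (the real code returns a GroupShardsIterator):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// One helper filters by predicate; the two public entry points become
// thin wrappers around it.
static List<ShardRouting> allSatisfyingPredicate(List<ShardRouting> shards,
                                                 Predicate<ShardRouting> predicate) {
    List<ShardRouting> result = new ArrayList<>();
    for (ShardRouting shard : shards) {
        if (predicate.test(shard)) {
            result.add(shard);
        }
    }
    return result;
}

// allActiveShardsGrouped   ~ allSatisfyingPredicate(shards, ShardRouting::active)
// allAssignedShardsGrouped ~ allSatisfyingPredicate(shards, ShardRouting::assignedToNode)
```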
Closes #13081
The `rewrite` option was removed from the parser in
commit da5fa6c4 and won't parse anymore; however, we still
have a setter for it in the builder that gets rendered out
when used and potentially leads to parsing errors. This PR
removes the setter for the unsupported `rewrite` option.
This change makes modules added by plugins come before others, as it was
before #12783. The order of configuration, and thereby binding, happens
in the order modules are received, and without this change, some plugins
can get *insane* guice errors (500mb stack trace).
The setting `plugin.types` is currently used to load plugins from the
classpath. This is necessary in tests, as well as the transport client.
This change removes the setting, and replaces it with the ability to
directly add plugins when building a transport client, as well as
infrastructure in the integration tests to specify which plugin classes
should be loaded on each node.
* makes most classes final and package private
* removes duplicate and confusing multiple entry points
* adds javadocs to some classes like JarHell, Security
* adds a public class BootstrapInfo that exposes any stats
needed by outside code.
When we create a plugin's classloader, we should allow it to register things in
the Lucene SPI (the registry of Tokenizers, TokenFilters, CharFilters, Codecs,
PostingsFormats, DocValuesFormats).
Plugins should be able to do this so they can extend Lucene.
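A sketch of the idea: build the plugin's loader, then ask Lucene to rescan META-INF/services through it. The reload* methods are Lucene's SPI hooks (the analysis factories have similar ones); the jar URLs here are a placeholder.

```java
import java.net.URL;
import java.net.URLClassLoader;
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.DocValuesFormat;
import org.apache.lucene.codecs.PostingsFormat;

URL[] jars = new URL[0]; // the plugin's jar URLs (omitted here)
ClassLoader loader = URLClassLoader.newInstance(jars, Codec.class.getClassLoader());
// Rescan the service files visible through the new loader so plugin-provided
// codecs and formats become available by name.
Codec.reloadCodecs(loader);
PostingsFormat.reloadPostingsFormats(loader);
DocValuesFormat.reloadDocValuesFormats(loader);
```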
We check whether a directory is present and then open a directory stream on
it. Yet the directory can be concurrently deleted, which is OK, but we fail
with a hard exception. This change tries to open the directory directly (listing via stream)
and catches NoSuchFileException | FileNotFoundException.
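The pattern, sketched: skip the exists() pre-check (it races with deletion) and treat "the directory vanished" as an empty listing.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

static void listIfPresent(Path dir) throws IOException {
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
        for (Path file : stream) {
            // process each entry
        }
    } catch (NoSuchFileException | FileNotFoundException e) {
        // concurrently deleted: nothing to list, and that's fine
    }
}
```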
This commit enforces that at most a single settings file is found. If
multiple settings files are found, a SettingsException will be thrown.
Closes #13042
This commit fixes an issue that was causing Elasticsearch to silently
ignore settings files that contain garbage. The underlying issue was
swallowing a SettingsException under the assumption that the only
reason an exception could be thrown was the settings file
not existing (in this case the IOException would be the cause of the
swallowed SettingsException). This assumption is mistaken, as an
IOException could also be thrown due to an access error or a read
error. Additionally, a SettingsException could be thrown exactly
because garbage was found in the settings file. We should instead
explicitly check that the settings file exists, and bomb on an
exception thrown for any reason.
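The corrected flow, sketched; 'environment' and 'settingsBuilder' stand in for the real plumbing, and loadFromPath is assumed from the settings builder API:

```java
import java.nio.file.Files;
import java.nio.file.Path;

Path config = environment.configFile().resolve("elasticsearch.yml");
if (Files.exists(config)) {
    // the file demonstrably exists, so any exception thrown while loading it
    // (garbage content, access error, read error) now propagates instead of
    // being swallowed
    settingsBuilder.loadFromPath(config);
}
```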
Closes #13028
Today we are very verbose when rendering exceptions on the REST layer.
Yet, this isn't necessarily very easy to read, and it is way too much information most
of the time. This change suppresses the stacktrace rendering by default, but instead
adds a `rest.suppressed` logger that logs the suppressed stacktrace, or rather the entire
exception, on the node that renders the exception.
The log message looks like this:
```
[2015-08-19 16:21:58,427][INFO ][rest.suppressed ] /test/_search/ Params: {index=test}
[test] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:551)
```
Currently we implicitly enforce a version format on the java.version
property for plugins via JarHell.checkJavaVersion. We should explicitly
enforce this version format and specify it in the documentation.
This commit adds an explicit enforcement of the version format on the
java.version property for plugins and updates the documentation for
plugin-descriptor.properties accordingly.
Closes #13009
This commit improves Java version comparison in JarHell.
The first improvement is the addition of a method to check the version
format of a target version string. This method will reject target
version strings that are not a sequence of nonnegative decimal integers
separated by “.”s, possibly with leading zeros (0*[0-9]+(\.[0-9]+)?).
This version format is the version format used for Java specification
versioning (cf. Java Product Versioning, 1.5.1 Specification Versioning
and the Javadocs for java.lang.Package.)
The second improvement is a clean method for checking that a target
version is compatible with the runtime version of the JVM. This is done
using the system property java.specification.version and comparing the
versions lexicographically. This method of comparison has been tested on
JDK 9 builds that include JEP-220 (the Project Jigsaw JEP concerning
modular runtime images) and JEP-223 (the version string JEP). The class
that encapsulates the methods for parsing and comparing versions is
written in a way that can easily be converted to use the Version class
from JEP-223 if that class is ultimately incorporated into mainline JDK
9.
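Both improvements, sketched (class and method names are hypothetical; the format regex is the one quoted above):

```java
import java.util.regex.Pattern;

class JavaVersionCheck {
    // nonnegative decimal integers separated by ".", possibly with leading zeros
    private static final Pattern VERSION = Pattern.compile("^0*[0-9]+(\\.[0-9]+)?$");

    static boolean isValidFormat(String target) {
        return VERSION.matcher(target).matches();
    }

    // Compare dot-separated versions component-wise, e.g. a plugin's target
    // version against the runtime's java.specification.version.
    static int compare(String a, String b) {
        String[] as = a.split("\\."), bs = b.split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) {
                return Integer.compare(ai, bi);
            }
        }
        return 0;
    }
}
```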
Closes #12441
This commit changes the behavior when validating index templates to
accumulate all validation errors before reporting failure to the user.
This addresses a usability issue when creating index templates.
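The accumulate-then-report pattern, sketched; the two checks are illustrative, not the actual template validations:

```java
import java.util.ArrayList;
import java.util.List;

String name = "my template";   // hypothetical inputs
String template = "_logs-*";
List<String> errors = new ArrayList<>();
if (name.contains(" ")) {
    errors.add("name must not contain a space");
}
if (template.startsWith("_")) {
    errors.add("template pattern must not start with '_'");
}
if (errors.isEmpty() == false) {
    // one failure carrying every problem, instead of one round-trip per error
    throw new IllegalArgumentException("template validation failed: " + String.join("; ", errors));
}
```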
Closes #12900
Aggregation.subPath() always threw an ArrayStoreException because we were trying to pass a List into System.arraycopy(). This change fixes that bug and adds a test to prevent regression.
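The bug in a nutshell: System.arraycopy's parameters are typed Object, so passing a List compiles but always throws ArrayStoreException at runtime (values here are illustrative):

```java
import java.util.Arrays;
import java.util.List;

List<String> path = Arrays.asList("agg1", "agg2", "agg3");
String[] subPath = new String[path.size() - 1];
// broken: System.arraycopy(path, 1, subPath, 0, subPath.length); // ArrayStoreException
// fixed: copy from an actual array
System.arraycopy(path.toArray(new String[0]), 1, subPath, 0, subPath.length);
```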
Today we only guess how big the shard that we are allocating on a node will be.
Yet, we have this information on the master, but it's not available on the data nodes
when we pick a data path for the shard. We use a rather simple heuristic based on
existing shard sizes on the node, which might be completely bogus. This change adds
the expected shard size to the ShardRouting for RELOCATING and INITIALIZING shards,
to be used on the actual node to find the best data path for the shard.
Closes #11271
Try to strike a balance between usability and debuggability.
A 2MB stacktrace is not gonna work... we print the message, find the
"likely root cause" and identify it as such, limit it to 30 stack
frames, and filter out guice. When frames are truncated we identify
that too, and always inform the user that the full exception is available in the
logs.
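The trimming, sketched (the guice package filter is an assumption about what "filter out guice" means concretely):

```java
import java.util.ArrayList;
import java.util.List;

static List<StackTraceElement> trim(Throwable likelyRootCause) {
    List<StackTraceElement> kept = new ArrayList<>();
    for (StackTraceElement frame : likelyRootCause.getStackTrace()) {
        if (frame.getClassName().startsWith("org.elasticsearch.common.inject.")) {
            continue; // guice frames add bulk, not signal
        }
        kept.add(frame);
        if (kept.size() == 30) {
            break; // truncated: the caller should say so and point at the logs
        }
    }
    return kept;
}
```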
Closes #13029
Multicast has known issues (see #12999 and #12993). This change moves
multicast into a plugin and deprecates it in the docs. It also allows
for plugging in multiple zen ping implementations.
Closes #13019
If the machine doesn't support IPv6, or the user disabled it
with -Djava.net.preferIPv4Stack, or the user disabled it with
ES_USE_IPV4, we will still try to bind a socket to ::1 out of the box,
and we will see lots of exceptions logged to the console.
It is harmless noise, but not a great experience.
This commit removes all support for reverse host name resolution from
InetSocketTransportAddress. This class now only returns IP addresses.
Closes #13014
Fix unicast discovery to work when a host has multiple addresses.
Ban dangerous methods in java.net with forbidden APIs.
Fix ipv6 bugs and formatting of network addresses everywhere.
Closes #12999, closes #12993
Squashed commit of the following:
commit 6c1aa001d091c5cf25212a53dc701fb704337f1e
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 14:25:43 2015 -0400
Fix these to be correct with addresses just in case
commit 648215627e84abf58a71400e7dc9ae775efb71d6
Merge: d00561b 41d8fbe
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 13:23:09 2015 -0400
Merge branch 'master' into unicast_all_the_way_down
commit d00561b76fd1aa5850699f7901f3dae3d4d402b7
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:38:50 2015 +0200
limit local ports to 5 in UnicastZenPing
commit e2e15c594006746cbe24432694294a71cc99deb8
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:32:47 2015 -0400
fix port limiting
commit 10153cb7adadda81a1f482445e703836b65cf5e2
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:18:37 2015 -0400
don't serialize scopeids: that's broken
commit 2aa63d43db2baec68a2e9bc227cfeb85dfeb4f83
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:06:51 2015 +0200
restore @Network
commit c840f1d1ef438826ae1ecfd5e45942a0e30dc9c0
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:02:30 2015 +0200
Use NetworkAddress.formatAddress where applicable in plugins
commit 374ce878852b35d626b7a29c8c4773545b0e9ddd
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 15:34:06 2015 +0200
Use NetworkAddress.formatAddress where applicable
commit e7a606d63f1bc43c1b62b6e17adf707c76d43a15
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 10:17:57 2015 +0200
Add @Multicast annotation to disable multicast tests by default.
We only run multicast tests now when we explicitly state it. A working
multicast env is required which is not always the case.
commit 2d7d2d0347179696ab41f71f048b13305014c85b
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:51:28 2015 +0200
Remove extra check for local mode in InternalTestCluster
commit dda59ac39aa136d4687b9274c2692cd77f8b8f66
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:37:03 2015 +0200
Handle node mode across entire test cluster
We used static methods reading sys properties to define the node mode
per cluster. this had lots of problems when tests couldn't cope with
mixed or only local mode. Now we are passing it down to the cluster from the test
which allows to @SuppressNetworkMode / @SupressLocalMode on the test to force
consistent node configurations.
commit 058197b7a408318995c88ce7f6762e32348de0de
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:19:14 2015 -0400
really ban InetSocketAddress's trappy method and break build and go to sleep, sorry
commit ac8779185aee1e17e6f5a81766290fdfc9c603ba
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:16:52 2015 -0400
Ban methods that might surprisingly cause DNS lookups
commit e64fe3dff2b11503e5f2831eb9863d64f56c5538
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:59:05 2015 -0400
Add unit test
commit f15434f20fb1a3691b1cc16028597d8fae937e05
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:39:02 2015 -0400
fix ipv6 formatting bugs
commit 05c2c74098052c75fbb79ea1818a295ef2e03e30
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:12:05 2015 -0400
format addresses correctly so I can actually read what comes out of our logs and stats apis
commit 4f9389dcf1e8925f23153c5eb271b4ce2294dbaf
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 21:26:52 2015 -0400
ban dangerous methods in java.net
commit 6aacd4d9925f324903d1d099a6cf5f862aeaf677
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 20:59:24 2015 -0400
ban lenient method
commit f466a842c60163d1f4554bdce8a4163edb534c2c
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:29:00 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 0de007a33b33fb68cf85cd86db4ca4f8ce10bbc9
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:10:07 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 539f6ca6e5137e0d496239adc8684688dedcc824
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:02:01 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 004c2881b25467f332acc8c9f9e92b1f0f9d314e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:51:45 2015 -0400
Fix multinode
commit 54113af325ce31571811c49fdaae89d5687be4ba
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:36:45 2015 -0400
fix integration tests
commit 0156a77a56319d6b9737ec6a531992052e50bd59
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 23:32:18 2015 +0200
enable multicast in MulticastZenPingIT.java
commit 1791caa35da853ce0122485fa3fd4674c671ec6e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:23:16 2015 -0400
Fix constant
commit 22820b53e0b2dc9fd47145c2bc29ce912a8fd484
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:59:09 2015 +0200
give it some extra ids for local transport crazyness
commit b2138fafa94a8a085813fd48356df63e57ade5b3
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:51:42 2015 +0200
pass on local addresses from configured transport rather than hard code IP addresses
commit 1bf5de1f457b081e0ce262b57d2b55d39c434156
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:04:31 2015 +0200
fix PluggableTransportModuleIT.java to use local disco and detach port limit for node local disco
commit b6706eddfa04c43947c16551359ae98a463d34aa
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 14:16:03 2015 -0400
Default to unicast discovery, with default host list of 127.0.0.1, [::1]
In some cases this may contain duplicates, although it's a misconfiguration;
let's not bind to multiple ports. It's no problem for us to dedup, this code
doesn't need to be super fast since it's used only for logic around bind/publish
The cardinality aggregation is a metric aggregation and therefore cannot accept sub-aggregations. It was previously possible to create a REST request with a cardinality aggregation that had sub-aggregations. Now such a request will throw an error in the response.
Closes #12988
We have some settings that prevent host name resolution, which should
be respected by InetSocketTransportAddress#getHost() so that it only resolves if
allowed or desired.
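The guard, sketched; the boolean stands in for whatever setting disables resolution:

```java
import java.net.InetSocketAddress;

static String getHost(InetSocketAddress address, boolean resolveAllowed) {
    if (resolveAllowed) {
        return address.getHostName(); // may trigger a reverse DNS lookup
    }
    return address.getAddress().getHostAddress(); // plain IP, never resolves
}
```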
At the moment, when installing from a URL, a user provides the plugin name on
the command line, like:
* bin/plugin install [plugin-name] --url [url]
This can lead to problems when picking an already existing name from another
plugin, and can potentially overwrite plugins already installed under that name.
Thus, this PR introduces a mandatory `name` property in the plugin descriptor
file which replaces the name formerly provided by the user.
With the addition of the `name` property to the plugin descriptor file, the user
no longer needs to specify the plugin name when installing from a file
or URL. Because of this, all arguments to the `plugin install` command are now
treated as either a symbolic name, a URL or a file, without the need to specify
this with an explicit option.
The new syntax for `plugin install` is now:
bin/plugin install [name or url]
* downloads official plugin
bin/plugin install analysis-kuromoji
* downloads github plugin
bin/plugin install lmenezes/elasticsearch-kopf
* install from URL or file
bin/plugin install http://link.to/foo.zip
bin/plugin install file:/path/to/foo.zip
If the argument does not parse to a valid URL, it is assumed to be a name and the
download location is resolved like before. Regardless of the source location of
the plugin, it is extracted to a temporary directory and the `name` property from
the descriptor file is used to determine the final install location.
Relates to #12715
It's an important exception to serialize, and we see it often in tests
etc., but then it's wrapped in a NotSerializableExceptionWrapper, which is
odd. This commit adds native support for this exception.
The cluster configuration allows setting up a cluster for use with unicast discovery. This means that nodes have to have pre-calculated known addresses which can be used to populate the unicast hosts setting. Despite repeated attempts to select unused ports, we still see test failures where the node can not bind to its assigned port due to it already being in use (mostly on CentOS). This commit changes it to allow each node to have a pre-set mutually exclusive range of ports, and adds all those ports to the unicast hosts list. That's OK because we know the node will only bind to one of those.
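The idea, sketched with made-up values: every port in every node's exclusive range goes into the unicast hosts list, so whichever port the node actually binds to is guaranteed to be covered.

```java
import java.util.ArrayList;
import java.util.List;

static List<String> unicastHosts(int nodeOrdinal, int portsPerNode, int basePort) {
    List<String> hosts = new ArrayList<>();
    int start = basePort + nodeOrdinal * portsPerNode; // exclusive range per node
    for (int port = start; port < start + portsPerNode; port++) {
        hosts.add("127.0.0.1:" + port);
    }
    return hosts;
}
```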
We are in the process of getting rid of guava, and this removes a major
use. The replacement is mostly Collections.emptyList(), Arrays.asList
and Collections.unmodifiableList. While it is questionable whether we
need the last one (as these are usually placed in final members), we can
continue to refactor later by removing unnecessary wrappings.
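The three replacements side by side:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

List<String> empty = Collections.emptyList();        // instead of Lists.newArrayList()
List<String> fixed = Arrays.asList("a", "b", "c");   // instead of Lists.newArrayList("a", "b", "c")
List<String> readOnly = Collections.unmodifiableList(
        new ArrayList<>(fixed));                     // defensive copy, often redundant in final members
```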
Dynamic date fields mapped from dates of the form "yyyy-MM-dd"
automatically receive the millisecond parser epoch_millis as an
alternative parsing format. However, dynamic date fields mapped from
dates of the form "yyyy/MM/dd" do not. This is a bug, since the migration
documentation currently specifies that a dynamically added date field,
by default, includes the epoch_millis format. This commit adds
epoch_millis as an alternative parser to dynamic date fields mapped from
dates of the form "yyyy/MM/dd".
Closes #12873