This commit adds a link to the minimum master nodes section of the
important settings docs from the Zen discovery docs to clarify the
meaning and importance of setting minimum master nodes to a quorum of
master-eligible nodes.
Relates #24311
It looks like auto-complete gave us a nasty surprise here with
Logger#equals being invoked instead of Logger#error, swallowing a log
message at the absolute worst-possible level. This commit fixes the
invocation.
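For illustration only, a contrived sketch of this kind of mistake (the class and message below are made up, not the actual call site): Logger#equals compiles because it accepts any Object, but it merely returns a boolean and nothing is ever logged.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggerTypoExample {
    private static final Logger logger = LogManager.getLogger(LoggerTypoExample.class);

    void onFatalError(Exception e) {
        // BUG: auto-complete picked equals, which silently returns a boolean
        logger.equals("fatal error in some component");
        // intended invocation: actually logs the message and the exception
        logger.error("fatal error in some component", e);
    }
}
```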
`tests.enable_mock_modules` is a documented but unrespected / unused
option to disable all mock modules / plugins during test runs. This
basically side-steps mock assertions like check-index on shard closing,
which can speed up test execution dramatically on nodes with slow disks, etc.
Relates to #24304
Workday recently open-sourced our internal Scala wrapper for the Elasticsearch REST API. We plan to continue maintaining the library and use it in our products. Thought it would be a good idea to link it here in case anyone else is interested in using it!
This commit adds a compileTemplate method to the ScriptService.
Eventually this will be used to easily cut over all consumers to a new
TemplateService.
Relates #16314
Replaces LambdaMetaFactory with LambdaBootstrap, a custom solution for lambdas in Painless that uses a design similar to LambdaMetaFactory but allows for custom adaptation of types, which recent changes to LambdaMetaFactory no longer allow.
The `count` value in the stats aggregation represents a simple doc count
that doesn't require a formatted version. We didn't render an "as_string"
version for count in the rest response, so the method should also be
removed in favour of just using String.valueOf(getCount()) if a string
version of the count is needed.
Closes #24287
There was a bug in the calculation of the shards that a snapshot must
wait on, due to their relocating or initializing, before the snapshot
can proceed safely to snapshot the shard data. In this bug, an
incorrect key was used to look up the index of the waiting shards,
with the result that each index would have at most one shard in the
waiting state causing the snapshot to pause. This could be problematic
if there is more than one shard in the relocating or initializing
state, as the snapshot would start prematurely, thinking it is only
waiting on one relocating or initializing shard when in fact there
could be more. While not a common case and likely rare in practice, it
is still problematic.
This commit fixes the issue by ensuring the correct key is used to look
up the waiting indices map as it is being built up, so that the list of
waiting shards for each index (those shards that are relocating or
initializing) is aggregated for a given index instead of overwritten.
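As a rough sketch of the aggregation fix (the types below are simplified stand-ins, not the actual SnapshotsInProgress code), waiting shards for the same index must be appended to that index's existing list under the correct key rather than replacing it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WaitingIndicesExample {
    /** Groups waiting shard ids by index name, aggregating instead of overwriting. */
    static Map<String, List<Integer>> waitingIndices(Map<Integer, String> shardIdToIndexName) {
        final Map<String, List<Integer>> waiting = new HashMap<>();
        for (Map.Entry<Integer, String> entry : shardIdToIndexName.entrySet()) {
            // keyed by the index name; using the wrong key here is what left at
            // most one waiting shard per index
            waiting.computeIfAbsent(entry.getValue(), index -> new ArrayList<>())
                   .add(entry.getKey());
        }
        return waiting;
    }
}
```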
I just spent ages debugging a script I wrote after following the documentation. It was not clear to me that _index is not defined when using Painless; had it been mentioned on this page, I would have saved myself a lot of time.
If the user explicitly configured path.data to include
default.path.data, then we should not fail the node if we find indices
in default.path.data. This commit addresses this.
Relates #24285
This commit fixes the hash code for AliasFilter as the previous
implementation was neglecting to take into consideration the fact that
the aliases field is an array and thus a deep hash code of it should be
computed rather than a shallow hash code on the reference.
Relates #24286
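To illustrate the difference (the fields below are simplified stand-ins, not the actual AliasFilter implementation), an array field needs Arrays.hashCode and Arrays.equals so that its contents, rather than its reference, contribute to hashCode and equals:

```java
import java.util.Arrays;
import java.util.Objects;

public final class AliasFilterExample {
    private final String[] aliases;
    private final String filter;

    AliasFilterExample(String[] aliases, String filter) {
        this.aliases = aliases;
        this.filter = filter;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        final AliasFilterExample that = (AliasFilterExample) o;
        return Arrays.equals(aliases, that.aliases) && Objects.equals(filter, that.filter);
    }

    @Override
    public int hashCode() {
        // Objects.hash(aliases, filter) would hash only the array reference
        return Objects.hash(Arrays.hashCode(aliases), filter);
    }
}
```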
The tribe node was being shut down by the test while a publishing round (that adds the tribe node to a cluster) was not yet completed (i.e. the node itself
knows that it became part of the cluster, and the test shuts the tribe node down, but another node has not applied the cluster state yet). This made
that node hang while trying to connect to the node that was shutting down (due to connect_timeout being 30 seconds), delaying publishing for 30
seconds, and subsequently tripping an assertion when another tribe instance wanted to join.
Relates to #23695
In #24251 we fix an issue with stored search templates that
this test would have discovered: stored search templates cause
the node to refuse to start. Technically a "restart" test would
have caught this as well and would have caught it more quickly.
But we already *have* an upgrade test and we don't have restart tests.
And testing this on upgrade is a good thing too.
When parsing StoredSearchScript we were adding a content type option that was forbidden (by a check that threw an exception) by the parser that's used to parse the template when we read it from the cluster state. This was stopping Elasticsearch from starting after stored search templates had been added.
This change no longer adds the content type option to the StoredScriptSource object when parsing from the put search template request. This is safe because the StoredScriptSource content is always JSON when it's stored in the cluster state, since we do a conversion to JSON before this point.
Also removes the check for the content type in the options when parsing StoredScriptSource, so users who already have stored scripts can start Elasticsearch.
Closes #24227
It seems that Wildfly 10 can not be made to start in a fully-functional
form on JDK 9, so this commit skips running the Wildfly integration
tests on JDK 9.
This commit fixes an issue when deleting the plugin directory while
executing the remove plugin command. Namely, we take out a file
descriptor on the plugin directory to traverse its contents to obtain
the list of files to delete. We leaked this file descriptor. On
Unix-based filesystems this is not a problem: deleting the plugin
directory takes effect immediately, even with the descriptor still
open. On Windows though, a delete is not executed until the last file
descriptor is closed. Since we leaked this file descriptor, the plugin
was not actually deleted. This led to failures in tests that tried to
clean up left-behind temporary directories, but these test failures
were just exposing this bug. This commit fixes this issue by ensuring
that we close the file descriptor to the plugin directory when we are
finished with it.
Relates #24266
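A minimal sketch of the fix, assuming the command is built on the java.nio.file APIs (this is not the actual RemovePluginCommand code): the directory stream holds a handle on the plugin directory, so it must be closed before the directory itself can be deleted on Windows.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ClosePluginDirExample {
    /** Deletes a flat plugin directory without leaking the directory handle. */
    static void deletePluginDir(Path pluginDir) throws IOException {
        final List<Path> contents = new ArrayList<>();
        // try-with-resources closes the file descriptor on the directory itself
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginDir)) {
            for (Path file : stream) {
                contents.add(file);
            }
        }
        for (Path file : contents) {
            Files.delete(file); // assumes no nested directories for this sketch
        }
        Files.delete(pluginDir);
    }
}
```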
ScriptService has two executable methods, one which takes a
CompiledScript, which is similar to search, and one that takes a raw
Script and both compiles and returns an ExecutableScript for it. The
latter is not needed, and the call sites which used one or the other
were mixed. This commit removes the extra executable method in favor of
callers first calling compile, then executable.
The unwrap method was leftover from supporting JavaScript and Python. Since
those languages are removed in 6.0, this commit removes the unwrap
feature from scripts.
Before #22488 when an index couldn't be created during a `_bulk`
operation we'd do all the *other* actions and return the index
creation error on each failing action. In #22488 we accidentally
changed it so that we now reject the entire bulk request if a single
action cannot create an index that it must create to run. This
reverts to the old behavior while still keeping the nicer
error messages. Instead of failing the entire request we now only
fail the portions of the request that can't work because the index
doesn't exist.
Closes #24028
For the Windows service, JAVA_HOME should be set to the path to the
JDK. We should make this clear in the docs to help users avoid
frustrating startup problems.
Relates #24260
Today when removing a plugin, we attempt to move the plugin directory to
a temporary directory and then delete that directory from the
filesystem. We do this to avoid a plugin being in a half-removed
state. We previously tried an atomic move, and fell back to a non-atomic
move if that failed. Atomic moves can fail on union filesystems when the
plugin directory is not in the top layer of the
filesystem. Interestingly, the regular move can fail as well. This is
because when the JDK is executing such a move, it first tries to rename
the source directory to the target directory and if this fails with
EXDEV (as in the case of an atomic move failing), it falls back to
copying the source to the target, and then attempts to rmdir the
source. The bug here is that the JDK never deleted the contents of the
source so the rmdir will always fail (except in the case of an empty
directory).
Given all this silliness, we were inspired to find a different
strategy. The strategy is simple. We will add a marker file to the
plugin directory that indicates the plugin is in a state of
removal. This file will be the last file out the door during removal. If
this file exists during startup, we fail startup.
Relates #24252
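A rough sketch of the marker-file strategy (the marker name and checks below are illustrative assumptions, not the actual Elasticsearch implementation):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class RemovalMarkerExample {
    private static final String MARKER = ".removing";

    /** Marks the plugin directory, deletes its contents, and removes the marker last. */
    static void removePlugin(Path pluginDir) throws IOException {
        final Path marker = pluginDir.resolve(MARKER);
        if (Files.notExists(marker)) {
            Files.createFile(marker);
        }
        final List<Path> contents = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginDir)) {
            for (Path file : stream) {
                if (file.equals(marker) == false) {
                    contents.add(file);
                }
            }
        }
        for (Path file : contents) {
            Files.delete(file); // assumes a flat plugin directory for this sketch
        }
        Files.delete(marker); // the marker is the last file out the door
        Files.delete(pluginDir);
    }

    /** At startup, a leftover marker means a previous removal did not complete. */
    static void failIfHalfRemoved(Path pluginDir) {
        if (Files.exists(pluginDir.resolve(MARKER))) {
            throw new IllegalStateException(
                    "found a half-removed plugin at [" + pluginDir + "]; remove it manually and restart");
        }
    }
}
```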
Today we might promote a primary and recover from store where after translog
recovery the local checkpoint is still behind the maximum sequence ID seen.
To fill the holes in the sequence ID history this PR adds a utility method
that fills up all missing sequence IDs up to the maximum seen sequence ID
with no-ops.
Relates to #10708
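For intuition, a stripped-down sketch of the gap-filling idea (the signature below is made up; the real utility operates inside the engine):

```java
import java.util.function.LongConsumer;

public class FillSeqNoGapsExample {
    /**
     * Issues a no-op for every sequence number above the local checkpoint and
     * up to the maximum sequence number seen, so the history has no holes.
     */
    static void fillSeqNoGaps(long localCheckpoint, long maxSeqNo, LongConsumer noOpWriter) {
        for (long seqNo = localCheckpoint + 1; seqNo <= maxSeqNo; seqNo++) {
            noOpWriter.accept(seqNo);
        }
    }
}
```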
An important use case for our users is deploying our clients inside of
application containers like Wildfly. Sometimes, we make changes that
unintentionally break this use case. We need to know, before we ship a
release, whether we have broken such use cases. As Wildfly is one of the
bigger application containers, this commit starts by adding an
integration test that deploys an application using the transport client
to Wildfly and ensures that all is well. Future work can add similar
integration tests for the low-level and high-level REST clients.
Relates #24147
The plugin cli currently resides inside the elasticsearch jar. This
commit moves it into a plugin-cli jar. This change alone is a no-op;
it does not change anything about what is loaded at runtime. But it will
allow easier testing (with fixtures in the future to test ES or maven
installation), as well as eventually not loading these classes when
starting elasticsearch.
This commit adds a XContentParserUtils.parseTypedKeysObject() method
that can be used to parse named XContent objects identified by a field
name containing a type identifier, a delimiter and the name of the
object to parse.
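As a toy illustration of the naming scheme (the delimiter and names below are examples for this sketch, not necessarily what XContentParserUtils expects), a field name such as "sterms#my_terms" carries the type identifier and the object name separated by a delimiter:

```java
public class TypedKeyExample {
    /** Splits a typed field name like "sterms#my_terms" into {type, name}. */
    static String[] splitTypedKey(String fieldName, String delimiter) {
        final int pos = fieldName.indexOf(delimiter);
        if (pos <= 0) {
            throw new IllegalArgumentException("cannot parse typed key [" + fieldName + "]");
        }
        return new String[] { fieldName.substring(0, pos), fieldName.substring(pos + delimiter.length()) };
    }

    public static void main(String[] args) {
        final String[] parts = splitTypedKey("sterms#my_terms", "#");
        System.out.println(parts[0] + " / " + parts[1]); // prints: sterms / my_terms
    }
}
```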