This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only OSS code.
This commit moves the checks that JAVAX_HOME (where X is the Java version
number) exists to the end of Gradle's configuration phase, and bases them
on whether the tasks needing the Java home are actually configured to execute.
Relates #29519
The test was using a parameter on GET /_cluster/health that older nodes
do not understand. Yet, we do not even need to make this call here; we
can simply ensure green for the index.
This test is failing because of the addition of a call to GET
/_cluster/health with the parameter wait_for_no_initializing_shards set
to true. As older versions of Elasticsearch do not understand this
parameter, the request fails and so does the test. This commit marks the
test as awaiting a fix.
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require these be configured when merely compiling Elasticsearch; the
requirement that they be strictly set to appropriate values should only
be enforced if these tasks are going to be executed. To address this, we
lazily configure these tasks.
Today we have JAVA_HOME for the compiler Java home and RUNTIME_JAVA_HOME
for the test Java home. However, when we compile BWC nodes and run them,
neither of these Java homes might be the version that was suitable for
that BWC node (e.g., 5.6 requires JDK 8 to compile and to run). This
commit adds support for the environment variables JAVA\d+_HOME and uses
the appropriate Java home based on the version of the node being
started. We even do this for reindex-from-old, which requires JDK 7 for
those very old nodes. Note that these environment variables are not
required when not running BWC tests, but they are strictly required when
running them.
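For illustration, a minimal sketch of the lookup in plain Java rather than the actual Gradle build logic (the class and method names here are invented):

```java
import java.util.Optional;

// Illustrative only: look up the Java home configured for a given major
// version via an environment variable such as JAVA7_HOME or JAVA8_HOME.
// The real build errors out only if the variable is missing and a task
// that needs it is actually going to execute.
public class JavaHomeLookup {

    static Optional<String> javaHomeFor(int majorVersion) {
        return Optional.ofNullable(System.getenv("JAVA" + majorVersion + "_HOME"));
    }

    public static void main(String[] args) {
        // e.g., a 5.6 BWC node needs JDK 8, reindex-from-old needs JDK 7
        System.out.println(javaHomeFor(8)
            .orElseThrow(() -> new IllegalStateException("JAVA8_HOME must be set to run this BWC task")));
    }
}
```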
Some features have been deprecated since `6.0`, like the `_parent` field or the
ability to have multiple types per index. Removing them allows us to delete
quite a lot of code, which in turn will hopefully make it easier to proceed
with the removal of types.
I found the following bugs:
- The 6.0 logic for conjunctions didn't work when there were only `match_all`
queries in MUST/FILTER clauses as they didn't propagate the `matchAllDocs`
flag.
- Some queries still had the same issue as `BooleanQuery` used to have with
duplicate terms (see #28353), e.g. `MultiPhraseQuery`.
Closes #29376
Today we have a silent batch mode in the install plugin command when
standard input is closed or there is no tty. It appears that
historically this was useful when running tests where we want to accept
plugin permissions without having to acknowledge them. Now that we have
an explicit batch mode flag, this use case no longer applies. The motivation
for removing this now is that there is another place where silent batch
mode arises and that is when a user attempts to install a plugin inside
a Docker container without keeping standard input open and attaching a
tty. In this case, the install plugin command will treat the situation
as a silent batch mode and therefore the user will never have the chance
to acknowledge the additional permissions required by a plugin. This
commit removes this silent batch mode in favor of using the --batch flag
when running tests and requiring the user to take explicit action to
acknowledge the additional permissions (either by leaving standard input
open and attaching a tty, or by passing the --batch flag themselves).
Note that with this change the user will now see a null pointer
exception when they try to install a plugin in a Docker container
without keeping standard input open and attaching a tty. This will be
addressed in an immediate follow-up, but because the implications of
that change are larger, they should be handled separately from this one.
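As a rough sketch (not the actual Terminal or InstallPluginCommand code; the names are illustrative), the behavioral change amounts to no longer inferring batch mode from a missing console:

```java
// Illustrative sketch of the change in behavior, not the real implementation.
public class BatchModeSketch {

    /** Old behavior: silently fall back to batch mode when there is no console/tty. */
    static boolean oldIsBatch(boolean batchFlag) {
        return batchFlag || System.console() == null;
    }

    /** New behavior: only the explicit --batch flag enables batch mode. */
    static boolean newIsBatch(boolean batchFlag) {
        return batchFlag;
    }

    public static void main(String[] args) {
        boolean batchFlag = args.length > 0 && "--batch".equals(args[0]);
        System.out.println("old: " + oldIsBatch(batchFlag) + ", new: " + newIsBatch(batchFlag));
    }
}
```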
The vagrant test plugin adds tasks for the groovy packaging tests,
which run after the bats packaging test tasks. Rename the 'bats'
configuration to 'packaging' and remove the option to inherit
archives from this configuration.
I did a little digging. It looks like IOException is thrown when the other
side closes its connection while we're waiting on our buffer to fill up. We
totally expect that in this test. It feels to me like we should throw a
`ConnectionClosedException` but upstream does not agree:
https://issues.apache.org/jira/browse/HTTPASYNC-134
While we *could* catch the exception and transform it ourselves, that
seems like a bigger change than is merited at this point.
Closes #29136
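The workaround in the test, roughly (an illustrative sketch, not the actual test code):

```java
import java.io.IOException;

// Illustrative sketch: the test tolerates an IOException as an expected
// outcome when the other side closes its connection, instead of relying on
// a ConnectionClosedException that upstream will not throw.
public class ExpectRemoteClose {

    interface RemoteRead {
        void read() throws IOException;
    }

    static void readExpectingRemoteClose(RemoteRead read) {
        try {
            read.read();
        } catch (IOException e) {
            // expected: the remote closed the connection while we were waiting
            // for our buffer to fill up
        }
    }

    public static void main(String[] args) {
        readExpectingRemoteClose(() -> { throw new IOException("connection reset by peer"); });
        System.out.println("IOException treated as an expected close");
    }
}
```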
Currently we store the indices specified in the request URL together with the
rest of the ranking evaluation specification in RankEvalSpec. This is not ideal
since, for example, the indices are not rendered to xContent and so cannot be
parsed back. Instead we should keep them in RankEvalRequest.
This commit (which will be reverted soon) adds logging on the output of
starting Wildfly. This is needed to debug an issue with Wildfly not
starting in CI.
* Decouple XContentBuilder from BytesReference
This commit removes all mentions of `BytesReference` from `XContentBuilder`.
This is needed so that we can completely decouple the XContent code and move it
into its own dependency.
While this change appears large, it boils down to two main changes: moving
`.bytes()` and `.string()` out of XContentBuilder itself into the static methods
`BytesReference.bytes` and `Strings.toString` respectively. The rest of the
change is code reacting to these changes (the majority of it in tests).
Relates to #28504
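To illustrate the call-site change, a hand-written sketch (package locations reflect the codebase as I understand it at the time, so treat them as an assumption):

```java
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Hand-written illustration of the new call sites, not copied from the diff.
public class BuilderCallSites {
    public static void main(String[] args) throws Exception {
        XContentBuilder builder = XContentFactory.jsonBuilder()
            .startObject()
            .field("field", "value")
            .endObject();

        // before: String json = builder.string();
        // after:
        String json = Strings.toString(builder);
        // similarly, builder.bytes() becomes BytesReference.bytes(builder)
        System.out.println(json);
    }
}
```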
I have long wanted an actual test that dying with dignity works. It is
tricky because if dying with dignity works, it means the test JVM dies
which is usually an abnormal condition. And anyway, how does one force a
fatal error to be thrown? I was motivated to investigate this again by
the fact that I missed a backport to one branch leading to an issue
where Elasticsearch would not successfully die with dignity. And now we
have a solution: we install a plugin that throws an out of memory error
when it receives a request. We hack the standalone test infrastructure
to prevent this from failing the test. To do this, we bypass the
security manager and remove the PID file for the node; this tricks the
test infrastructure into thinking that it does not need to stop the
node. We also bypass seccomp so that we can fork jps to make sure that
Elasticsearch really died. And to be extra paranoid, we parse the logs
of the dead Elasticsearch process to make sure it died with
dignity. Never forget.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
This commit makes the controller spawner also look under modules. It
also fixes a bug in module security policy loading where the module is a
meta plugin.
The rest test Gradle plugin already uses the zip distribution by
default, so specifying it explicitly is not necessary. These are
leftovers from before zip was the default for rest tests.
Add support for version and version_type in ingest pipelines
Add support for setting the document version and version type in the set
processor of an ingest pipeline.
Nodes reuse task ids after a restart, so in some rare circumstances
the same task id might be assigned to the reindexing task stored by the
old cluster and to the new task that is trying to retrieve the task
results. As a result, the get task request can time out waiting on
itself. Since we already waited for the task to finish before restarting
the cluster, waiting for the task here doesn't make sense to start
with.
Fixes #28732
* Remove log4j dependency from elasticsearch-core
This removes the log4j dependency from our elasticsearch-core project. It was
originally necessary only for our jar classpath checking. It is now replaced by
a `Consumer<String>` so that the es-core dependency doesn't have external
dependencies.
The parts of #28191 which were moved in conjunction (like `ESLoggerFactory` and
`Loggers`) have been moved back where appropriate, since they are not required
in the core jar.
This is tangentially related to #28504
* Add javadocs for `output` parameter
* Change @code to @link
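A minimal sketch of the shape of that decoupling (the method here is illustrative, not the exact JarHell signature):

```java
import java.util.function.Consumer;

// Illustrative only: the classpath check reports its diagnostic lines through
// a Consumer<String> supplied by the caller instead of logging via log4j.
public class ClasspathCheckSketch {

    /**
     * @param output consumer receiving human-readable diagnostic lines
     */
    static void checkClasspath(Consumer<String> output) {
        output.accept("java.class.path: " + System.getProperty("java.class.path"));
        // ... duplicate-class detection would go here ...
    }

    public static void main(String[] args) {
        // a caller that has a logger can simply pass logger::debug; here we print
        checkClasspath(System.out::println);
    }
}
```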
Previously a user could set a custom config path to a relative directory
using ES_PATH_CONF. In a previous change related to enabling GC logging
by default, we forced the working directory for Elasticsearch to be
ES_HOME. This had the impact of causing all relative paths to be
relative to ES_HOME, against the intent of the user. This commit
addresses this by making ES_PATH_CONF absolute before we switch the
working directory to ES_HOME.
Relates #28700
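Conceptually, the fix resolves the configured path against the original working directory before the directory switch (an illustrative sketch only; the actual change lives in the startup scripts):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch only: make a possibly relative ES_PATH_CONF absolute
// while the process is still in the directory the user started it from.
public class ConfPathSketch {
    public static void main(String[] args) {
        String esPathConf = System.getenv().getOrDefault("ES_PATH_CONF", "config");
        Path absolute = Paths.get(esPathConf).toAbsolutePath().normalize();
        System.out.println("using config path: " + absolute);
    }
}
```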
When we submit a task to a thread pool for asynchronous execution, we
are returned a future. Since we submitted to go asynchronous, these
futures are not inspected for failure (we would have to block a thread
to do that). While we have on-failure handlers for exceptions that are
thrown during execution, we do not handle throwables that are not
exceptions and these end up silently lost. This commit adds a check
after the runnable returns that inspects the status of the future. If an
unhandled throwable occurred during execution, this throwable is
propagated out where it will land in the uncaught exception handler.
Relates #28667
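A hedged sketch of the idea (names and structure are mine, not the actual Elasticsearch threading code):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Illustrative only: after a submitted task has run, inspect its future so
// that non-Exception throwables (e.g. Errors) are not silently lost.
public class InspectFutureSketch {

    static void rethrowUnhandled(Future<?> future) {
        if (future.isDone() == false) {
            return; // nothing to inspect yet
        }
        try {
            future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (cause instanceof Error) {
                // propagate so it reaches the uncaught exception handler
                throw (Error) cause;
            }
            // plain exceptions are assumed to be handled by on-failure callbacks
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> { /* task body */ });
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
        rethrowUnhandled(future);
        System.out.println("no unhandled throwable");
    }
}
```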
* Move more XContent.createParser calls to non-deprecated version
This moves more of the callers to pass in the DeprecationHandler.
Relates to #28504
* Use parser's deprecation handler where available
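Roughly, the non-deprecated call shape looks like this (a hand-written example; class and constant names are as I recall them from that era of the codebase, so treat them as an assumption):

```java
import org.elasticsearch.common.xcontent.DeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

// Hand-written illustration; the handler passed depends on the caller
// (tests often throw on deprecated usage, production code logs, and the
// parser's own handler is reused where one is available).
public class ParserSketch {
    public static void main(String[] args) throws Exception {
        String json = "{\"field\":\"value\"}";
        try (XContentParser parser = XContentType.JSON.xContent().createParser(
                NamedXContentRegistry.EMPTY,
                DeprecationHandler.THROW_UNSUPPORTED_OPERATION,
                json)) {
            System.out.println(parser.map());
        }
    }
}
```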
[TEST] packaging: function to collect debug info
Sometimes when packaging tests fail in CI the test logs aren't enough to
tell what went wrong. This routine helps collect more info about the
state of the Elasticsearch installation at failure time.
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously that was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
This pull request replaces the jvm-example plugin (from the jvm/site plugins era) with two new plugins: a custom-settings plugin that shows how to register and use custom settings (including secured settings) in a plugin, and a rest-handler plugin that shows how to register a rest handler.
The two plugins now reside in the plugins/examples project. They can serve as sample plugins for users; special attention has been paid to documentation. The packaging tests have been adapted to use the custom-settings plugin.
Our rest client throws exceptions in funny ways that might cause
`suppressed` exceptions to be eaten. This works around that in the
reindex-from-old tests so we don't stomp on a real failure. It is fairly
dirty, so we should work on fixing the high level rest client.
Relates to #25453
The current install_plugin() does not play well with meta plugins because
it always checks for the plugin's descriptor file.
This commit changes the install_plugin() so that it only runs the install plugin
command and lets the caller verify that the required files are correctly installed.
It also adds an install_meta_plugin() function to install meta plugins.
Currently the rest response of the ranking evaluation API wraps everything inside an
enclosing `rank_eval` object. This is redundant since it is clear from the API
call and it doesn't provide any other useful information. This change removes
it.
This commit lessens the burden on configuring settings.gradle when new
projects are added. In particular, this makes it trivial to move a
plugin to a module (or vice versa).
This commit modifies the build to require JDK 9 for
compilation. Henceforth, we will compile with a JDK 9 compiler targeting
JDK 8 as the class file format. Optionally, RUNTIME_JAVA_HOME can be set
as the runtime JDK used for running tests. To enable this change, we
separate the meaning of the compiler Java home versus the runtime Java
home. If the runtime Java home is not set (via RUNTIME_JAVA_HOME) then
we fall back to using JAVA_HOME as the runtime Java home. This enables:
- developers only have to set one Java home (JAVA_HOME)
- developers can set an optional Java home (RUNTIME_JAVA_HOME) to test
on the minimum supported runtime
- we can test compiling with JDK 9 running on JDK 8 and compiling with
JDK 9 running on JDK 9 in CI
Currently if the Lucene version is `X.Y.Z-snapshot-{gitrev}`, then we will
expect the docs to have `X.Y.Z-snapshot` as a Lucene version. I would like
to change it to `X.Y.Z` so that this doesn't need changing when we move from a
snapshot to a final release.
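A tiny sketch of the proposed normalization (illustrative, not the actual docs test code):

```java
// Illustrative only: strip a -snapshot-{gitrev} suffix so the docs can simply
// reference the bare X.Y.Z Lucene version.
public class LuceneVersionForDocs {

    static String docsVersion(String luceneVersion) {
        return luceneVersion.replaceFirst("-snapshot-[0-9a-f]+$", "");
    }

    public static void main(String[] args) {
        // hypothetical example input
        System.out.println(docsVersion("7.3.0-snapshot-98a6b3d")); // 7.3.0
    }
}
```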
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and some of Loggers are
moved as JarHell depends on them.
The configuration of the upgraded cluster task was missing a dependency
on the stopping of the second old node in the cluster. In some cases
(e.g., --parallel) Gradle would then try to run the configuration of a
node in the upgraded cluster before it had even configured the old nodes
in the cluster.
Relates #28036
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:
|____elasticsearch/
| |____ <plugin1> <-- The plugin files for plugin1 (the content of the elasticsearch directory)
| |____ <plugin2> <-- The plugin files for plugin2
| |____ meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:
description: simple summary of the meta plugin.
name: the meta plugin name
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:
|_____ plugins
| |____ <name_of_the_meta_plugin>
| | |____ meta-plugin-descriptor.properties
| | |____ <plugin1>
| | |____ <plugin2>
If the sub plugins contain a config or a bin directory, they are copied into a sub-folder inside the meta plugin config/bin directory.
|_____ config
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
|_____ bin
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).
It is also not possible to remove a single sub-plugin inside a meta plugin; only full removal of the meta plugin is allowed.
Closes #27316
We have a packaging test that tries to install all plugins, and then
asserts that all expected plugins are installed. The expected plugins
are derived from the list of plugins in the plugins sub-project. The
plugin transport-nio was recently added here, but explicit commands to
install and remove this plugin were never added. This commit addresses
this.