Docker informed us that for official multi-arch Docker builds, there
needs to be a single Dockerfile and build context that can be used for
each supported architecture. Therefore, rework the build to move the
relevant architecture logic into the Dockerfile, and merge the aarch64
/ x64 docker context builds.
* Simplify java home verification
At one time, all uses of java home were found through the getJavaHome
utility method on BuildPlugin. That was changed many refactorings ago,
yet the complex support for registering a required java home version,
which fails at configuration time, still exists. The only remaining use
of grabbing java home is within bwc tests, and it must happen at
runtime, since that is when we have the checkout and know what version
is needed.
This commit consolidates the java home finding method into a utility
unassociated with BuildPlugin.
* fix checkstyle
* address feedback
Firstly, backport the use of tini as the Docker entrypoint. This was supposed
to have been done following #50277, but was missed. It isn't a direct backport
as this branch will continue using root as the initial Docker user.
Secondly, backport #55491 to use the official checksums when downloading tini.
After #53562, the `geo_shape` field mapper is registered within
a module. This opens the door for introducing a new `geo_shape`
field mapper into the Spatial Plugin that has doc-values support.
This is very much an extension of server's GeoShapeFieldMapper,
but with the addition of the doc values implementation.
We believe there's no longer a need to be able to disable basic-license
features completely using the "xpack.*.enabled" settings. If users don't
want to use those features, they simply don't need to use them. Having
such features always available lets us build more complex features that
assume basic-license features are present.
This commit deprecates settings of the form "xpack.*.enabled" for
basic-license features, excluding "security", which is a special case.
It also removes deprecated settings from integration tests and unit
tests where they're not directly relevant; e.g. monitoring and ILM are
no longer disabled in many integration tests.
In bash, checking whether an env variable exists uses the -z test
against the stringified env var, so the test really checks whether the
env var is empty, not necessarily whether it is undefined. We use this
to test whether JAVA_HOME is set, to determine whether the bundled jdk
should be used. On Windows, this test is an actual "undefined" check.
This commit brings the behavior of the two systems in sync, opting to
allow an empty JAVA_HOME on Windows to indicate the bundled jdk should
be used.
closes #55134
Following elastic/ml-cpp#1135 there are now Linux binaries
for both x86_64 and aarch64. The code that finds the
correct binaries to ship with each distribution was
including both on every Linux distribution. This change
alters that logic to consider the architecture as well
as the operating system.
Also, there is no need to disable ML on aarch64 now that
we have the native binaries available. ML is still not
supported on aarch64, but the processes at least run up
and work at a superficial level.
Backport of #55256
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0, we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
need to ourselves use the same task avoidance mechanisms to not trigger
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
This directory interferes with notarization and removing it before we
notarize allows us to have a properly notarized
distribution. Conceptually, this directory is only needed when building
a distribution to be installed by Installer (a so-called "pkg"). Since
we are not building such distributions, and this directory interferes
with notarization, we choose to exclude it here. We do this here, rather
than in our notarization process, to ensure that what we run through CI
for testing is also what we ship to the world.
Backport of #55073.
We added tasks to build an ARM distribution and Docker image, but didn't
provide any way to run packaging tests against them. Add extra loops on
the possible Architecture values, and skip tasks that can't be run on
the current Architecture.
Apply the :distribution:archives naming convention to some of the Docker
sub-projects, so that we have a more consistent naming scheme.
Also, we've seen some examples of Docker packaging tests failing sporadically
when they try to clean up the temp directory, citing a not-empty
directory. Ensure that any running container is removed before cleaning
up the temp dir, in an effort to avoid this problem.
This commit includes a number of changes to reduce overall build
configuration time. These optimizations include:
- Removing the usage of the 'nebula.info-scm' plugin. This plugin
leverages jgit to read various pieces of VCS information. This
is mostly overkill and we have our own minimal implementation for
determining the current commit id.
- Removing unnecessary build dependencies such as perforce and jgit
now that we don't need them. This reduces our classpath considerably.
- Expanding the usage of lazy task creation, particularly in our
distribution projects. The archives and packages projects create
lots of tasks with very complex configuration. Avoiding the creation
of these tasks at configuration time gives us a nice boost.
Guava was removed from Elasticsearch many years ago, but remnants of it
remain due to transitive dependencies. When a dependency pulls guava
into the compile classpath, devs can inadvertently begin using methods
from guava without realizing it. This commit moves guava to a runtime
dependency in the modules where it is needed.
Note that one special case is the html sanitizer in watcher. The third
party dep uses guava in the PolicyFactory class signature. However, only
calling a method on the PolicyFactory actually causes the class to be
loaded; a reference alone does not trigger compilation to look at the
class implementation. There we utilize a MethodHandle for invoking the
relevant method at runtime, where guava will continue to exist.
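A minimal sketch of the MethodHandle approach described above (the class and method names are illustrative placeholders, not the actual watcher code):
```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Resolve and invoke the method reflectively so the guava-typed signature is
// never referenced at compile time; guava is only needed at runtime.
public class SanitizerBridge {
    public static String sanitize(Object policyFactory, String html) throws Throwable {
        MethodHandle handle = MethodHandles.publicLookup().findVirtual(
            policyFactory.getClass(),
            "sanitize",
            MethodType.methodType(String.class, String.class));
        return (String) handle.invoke(policyFactory, html);
    }
}
```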
Now that JDK 14 is available, and we are bundling it, this commit
enables us to run with helpful null pointer exceptions, which will be a
great aid in debugging.
Change how we format exceptions to only wrap them as necessary. While
the config's overall philosophy is to put items one-per-line when
wrapping, in practice this is a little cumbersome for exception lists.
This commit switches to using an onlyIf to determine whether a Docker
image build task should execute. This is preferred since it means that
the determination is performed at task execution time, rather than
during configuration.
This commit introduces aarch64 packaging, including bundling an aarch64
JDK distribution. We had to make some interesting choices here:
- ML binaries are not compiled for aarch64, so for now we disable ML on
aarch64
- depending on underlying page sizes, we have to disable class data
sharing
* Add REST API for ComponentTemplate CRUD
This adds the Put/Get/DeleteComponentTemplate APIs that allow inserting, retrieving, and removing
ComponentTemplateMetadata into the cluster state metadata.
These APIs are currently only available behind a feature flag system property -
`es.itv2_feature_flag_registered`.
Relates to #53101
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Handle special chars in JAVA_HOME in elasticsearch-service.bat (#52676)
* Test case for windows service where JAVA_HOME path contains spaces (#53028)
Co-authored-by: Muhammad Shaheer Akram <41253927+shaheerakr@users.noreply.github.com>
* Smarter copying of the rest specs and tests (#52114)
This PR addresses the unnecessary copying of the rest specs and allows
for better semantics for which specs and tests are copied. By default
the rest specs will get copied if the project applies
`elasticsearch.standalone-rest-test` or `esplugin` and the project
has rest tests or you configure the custom extension `restResources`.
This PR also removes the need for dozens of places where the x-pack
specs were copied by supporting copying of the x-pack rest specs too.
The plugin/task introduced here can also copy the rest tests to the
local project through a similar configuration.
The new plugin/task allows a user to minimize the surface area of
which rest specs are copied. Each project can be configured to include
only a subset of the specs (or tests). Configuring a project to only
copy the specs when actually needed should help with build cache hit
rates since we can better define what is actually in use.
However, project level optimizations for build cache hit rates are
not included with this PR.
Also, with this PR you can no longer use the includePackaged flag on
the integTest task.
The following items are included in this PR:
* new plugin: `elasticsearch.rest-resources`
* new tasks: CopyRestApiTask and CopyRestTestsTask - performs the copy
* new extension 'restResources'
```
restResources {
  restApi {
    includeCore 'foo', 'bar' // will include the core specs that start with foo and bar
    includeXpack 'baz'       // will include x-pack specs that start with baz
  }
  restTests {
    includeCore 'foo', 'bar' // will include the core tests that start with foo and bar
    includeXpack 'baz'       // will include the x-pack tests that start with baz
  }
}
```
When installing plugins from remote sources, either the Elastic download
service, or maven, a checksum file is downloaded and checked against the
downloaded zip. The current format for official plugins is to use a
sha512 checksum which includes the zip filename. This format matches
that from sha512sum, and allows using the --check argument there to
verify the checksum manually. However, when generating checksum files
with maven and gradle, the filename is not included.
This commit relaxes the requirement that the filename exist within the
sha512 checksum file for maven plugins. We continue to strictly enforce
that official plugins have the existing format of the checksum file.
closes #52413
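A hedged sketch of the relaxed parsing, assuming a checksum line that is either a bare hex digest (maven/gradle output) or the sha512sum format `<digest>  <filename>`; the method and parameter names are illustrative:
```java
class ChecksumLineParser {
    // Returns the hex digest from a .sha512 file, tolerating a missing filename
    // for non-official (maven) plugins while still requiring it for official ones.
    static String parseSha512Line(String line, boolean officialPlugin, String expectedFilename) {
        String[] fields = line.trim().split("\\s+");
        if (officialPlugin && (fields.length != 2 || fields[1].equals(expectedFilename) == false)) {
            throw new IllegalArgumentException("Invalid checksum file format for official plugin");
        }
        return fields[0];
    }
}
```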
Backport of #52525.
Closes #52503. Implement a list of `_FILE` env vars that will be used to
populate env vars with file content, instead of processing all `_FILE`
vars in the environment.
Reading the startup scripts does not elucidate how JVM options are
applied. Instead, the reader must consult the source for the JVM options
parser. This commit adds some transparency around this process so that
it is easier to understand, from reading the startup scripts, how the
final JVM options used to start Elasticsearch are constructed.
* Set default ES_PATH_CONF for package scriptlets
Our packages use scriptlets to create or update the Elasticsearch
keystore as necessary when installing or upgrading an Elasticsearch
package. If these scriptlets don't work as expected, Elasticsearch may
try and fail to create or upgrade the keystore at startup time. This
will prevent Elasticsearch from starting up at all.
These scriptlets use the Elasticsearch keystore command-line tools. Like
most of our command-line tools, the keystore tools will by default get
their value for ES_PATH_CONF from a system configuration file:
/etc/sysconfig/elasticsearch for RPMs, /etc/default/elasticsearch for
debian packages. Previously, if the user removed ES_PATH_CONF from that
system configuration file (perhaps thinking that it is obsolete when
the same variable is also defined in the systemd unit file), the
keystore command-line tools would fail. Scriptlet errors do not seem to
cause the installation to fail, and for RPMs the error message is easy
to miss in command output.
This commit adds a line of bash to scriptlets that will set ES_PATH_CONF
to a default when ES_PATH_CONF is unset, rather than only when the
system configuration file is missing.
Back when the distribution launchers were compiled to target JDK 7, we
did not have access to the String#join method to space-delimit JVM
options. Since the launchers now target the same minimum JDK as
Elasticsearch itself, we now have access to this method and can replace
the use of spaceDelimitJvmOptions with String#join. This commit does
that.
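For illustration, the replacement boils down to something like the following (names are illustrative):
```java
import java.util.List;

class JvmOptionsJoiner {
    // Space-delimit the JVM options, replacing the old spaceDelimitJvmOptions helper.
    static String join(List<String> jvmOptions) {
        return String.join(" ", jvmOptions);
    }
}
```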
This commit prepares the JvmOptionsParser to be more unit testable by
refactoring the class to have some input that it pulls from external
sources passed in as arguments. We do not change any functionality in
this commit, nor add any unit tests, we are only preparing the way.
Now that the FIPS 140 security provider is simply a test dependency
we don't need the thirdPartyAudit exceptions, but plugin-cli and
transport-netty4 do need jarHell disabled as they use the non-FIPS
BouncyCastle security provider as a test dependency too.
This commit introduces the ability to override JVM options by adding
custom JVM options files to a jvm.options.d directory. This simplifies
administration of Elasticsearch by not requiring administrators to keep
the root jvm.options file in sync with changes that we make to it.
Instead, they are not expected to modify this file at all, but to
supply their own files in jvm.options.d. In Docker installations, this
means they can bind mount this directory in. In future versions of
Elasticsearch, we can consider removing the root jvm.options file
(instead, providing all options there as system JVM options).
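A minimal sketch of how such an override directory can be expanded, assuming the files are read in sorted order and appended after the root jvm.options; this is not the actual parser implementation, just an illustration:
```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class JvmOptionsFiles {
    // Read the root jvm.options, then append any *.options files found in
    // jvm.options.d, processed in sorted order so overrides are predictable.
    static List<String> collect(Path configDir) throws IOException {
        List<String> lines = new ArrayList<>(Files.readAllLines(configDir.resolve("jvm.options")));
        Path optionsDir = configDir.resolve("jvm.options.d");
        if (Files.isDirectory(optionsDir)) {
            try (Stream<Path> files = Files.list(optionsDir)) {
                files.filter(p -> p.getFileName().toString().endsWith(".options"))
                     .sorted()
                     .forEach(p -> {
                         try {
                             lines.addAll(Files.readAllLines(p));
                         } catch (IOException e) {
                             throw new UncheckedIOException(e);
                         }
                     });
            }
        }
        return lines;
    }
}
```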
* Reload secure settings with password (#43197)
If a password is not set, we assume an empty string to be
compatible with previous behavior.
Only allow the reload to be broadcast to other nodes if TLS is
enabled for the transport layer.
* Add passphrase support to elasticsearch-keystore (#38498)
This change adds support for keystore passphrases to all subcommands
of the elasticsearch-keystore cli tool and adds a subcommand for
changing the passphrase of an existing keystore.
The work to read the passphrase in Elasticsearch when
loading the keystore will be addressed in a different PR.
Subcommands of elasticsearch-keystore can handle (open and create)
passphrase protected keystores
When reading a keystore, a user is prompted for a passphrase
only if the keystore is passphrase protected.
When creating a keystore, a user is allowed (default behavior) to create one with an
empty passphrase
Passphrase can be set to be empty when changing/setting it for an
existing keystore
Relates to: #32691
Supersedes: #37472
* Restore behavior for force parameter (#44847)
Turns out that the behavior of `-f` for the add and add-file sub
commands, where it would also forcibly create the keystore if it
didn't exist, was by design, although undocumented.
This change restores that behavior, auto-creating a keystore that
is not password protected if the force flag is used. The force
OptionSpec is moved to the BaseKeyStoreCommand as we will presumably
want to maintain the same behavior in any other command that takes
a force option.
* Handle pwd protected keystores in all CLI tools (#45289)
This change ensures that `elasticsearch-setup-passwords` and
`elasticsearch-saml-metadata` can handle a password protected
elasticsearch.keystore.
For setup passwords the user would be prompted to add the
elasticsearch keystore password upon running the tool. There is no
option to pass the password as a parameter as we assume the user is
present in order to enter the desired passwords for the built-in
users.
For saml-metadata, we prompt for the keystore password at all times
even though we'd only need to read something from the keystore when
there is a signing or encryption configuration.
* Modify docs for setup passwords and saml metadata cli (#45797)
Adds a sentence in the documentation of `elasticsearch-setup-passwords`
and `elasticsearch-saml-metadata` to describe that users would be
prompted for the keystore's password when running these CLI tools,
when the keystore is password protected.
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Elasticsearch keystore passphrase for startup scripts (#44775)
This commit allows a user to provide a keystore password on Elasticsearch
startup, but only prompts when the keystore exists and is encrypted.
The entrypoint in Java code is standard input. When the Bootstrap class is
checking for secure keystore settings, it checks whether or not the keystore
is encrypted. If so, we read one line from standard input and use this as the
password. For simplicity's sake, we allow a maximum passphrase length of 128
characters. (This is an arbitrary limit and could be increased or eliminated.
It is also enforced in the keystore tools, so that a user can't create a
password that's too long to enter at startup.)
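As an illustration of that startup flow, a hedged sketch of reading a single passphrase line from standard input with the length limit enforced (names and error handling are assumptions, not the actual Bootstrap code):
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

class PassphraseInput {
    // Read a single line from standard input and enforce the (arbitrary)
    // maximum passphrase length mentioned above.
    static char[] read(InputStream stdin, int maxLength) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(stdin, StandardCharsets.UTF_8));
        String line = reader.readLine();
        if (line == null) {
            throw new IOException("no passphrase provided on standard input");
        }
        if (line.length() > maxLength) {
            throw new IllegalArgumentException("passphrase exceeds maximum length of " + maxLength);
        }
        return line.toCharArray();
    }
}
```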
In order to provide a password on standard input, we have to account for four
different ways of starting Elasticsearch: the bash startup script, the Windows
batch startup script, systemd startup, and docker startup. We use wrapper
scripts to reduce systemd and docker to the bash case: in both cases, a
wrapper script can read a passphrase from the filesystem and pass it to the
bash script.
In order to simplify testing the need for a passphrase, I have added a
has-passwd command to the keystore tool. This command can run silently, and
exit with status 0 when the keystore has a password. It exits with status 1 if
the keystore doesn't exist or exists and is unencrypted.
A good deal of the code-change in this commit has to do with refactoring
packaging tests to cleanly use the same tests for both the "archive" and the
"package" cases. This required not only moving tests around, but also adding
some convenience methods for an abstraction layer over distribution-specific
commands.
* Adjust docs for password protected keystore (#45054)
This commit adds relevant parts in the elasticsearch-keystore
sub-commands reference docs and in the reload secure settings API
doc.
* Fix failing Keystore Passphrase test for feature branch (#50154)
One problem with the passphrase-from-file tests, as written, is that
they would leave a SystemD environment variable set when they failed,
and this setting would cause elasticsearch startup to fail for other
tests as well. By using a try-finally, I hope that these tests will fail
more gracefully.
It appears that our Fedora and Ubuntu environments may be configured to
store journald information under /var rather than under /run, so that it
will persist between boots. Our destructive tests that read from the
journal need to account for this in order to avoid trying to limit the
output we check in tests.
* Run keystore management tests on docker distros (#50610)
* Add Docker handling to PackagingTestCase
Keystore tests need to be able to run in the Docker case. We can do this
by using a DockerShell instead of a plain Shell when Docker is running.
* Improve ES startup check for docker
Previously we were checking truncated output for the packaged JDK as
an indication that Elasticsearch had started. With new preliminary
password checks, we might get a false positive from ES keystore
commands, so we have to check specifically that the Elasticsearch
class from the Bootstrap package is what's running.
* Test password-protected keystore with Docker (#50803)
This commit adds two tests for the case where we mount a
password-protected keystore into a Docker container and provide a
password via a Docker environment variable.
We also fix a logging bug where we were logging the identifier for an
array of strings rather than the contents of that array.
* Add documentation for keystore startup prompting (#50821)
When a keystore is password-protected, Elasticsearch will prompt at
startup. This commit adds documentation for this prompt for the archive,
systemd, and Docker cases.
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Warn when unable to upgrade keystore on debian (#51011)
For Red Hat RPM upgrades, we warn if we can't upgrade the keystore. This
commit brings the same logic to the code for Debian packages. See the
posttrans file that gets executed for RPMs.
* Restore handling of string input
Adds tests that were mistakenly removed. One of these tests proved
we were not handling the stdin (-x) option correctly when no
input was added. This commit restores the original approach of
reading stdin one char at a time until there is no more input (-1, \r, \n)
instead of using readLine(), which might return null (see the sketch below).
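A hedged sketch of that reading approach (names are illustrative, not the actual keystore CLI code):
```java
import java.io.IOException;
import java.io.InputStream;

class StdinValueReader {
    // Read one character at a time until EOF (-1), '\r' or '\n', rather than
    // relying on readLine(), which may return null when no input was piped in.
    static String read(InputStream stdin) throws IOException {
        StringBuilder value = new StringBuilder();
        int c;
        while ((c = stdin.read()) != -1 && c != '\r' && c != '\n') {
            value.append((char) c);
        }
        return value.toString();
    }
}
```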
* Apply spotless reformatting
* Use '--since' flag to get recent journal messages
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
It seems to me that we might be able to use journalctl's "--since" flag to
retrieve only log messages from the last run, and that this might be
less likely to fail due to race conditions in file deletion.
Unfortunately, it looks as if the "--since" flag has a granularity of
one second. I've added a two-second sleep to make sure that there's a
sufficient gap between the test that will read from journald and the
test before it.
* Use new journald wrapper pattern
* Update version added in secure settings request
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Ioannis Kakavas <ikakavas@protonmail.com>
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
In 2bb31fe (v0.6.0!) we added DEBUG-level logging to the default config of
action loggers "for easier debugging". This change to the default config lives
on to this day. It does not obviously make debugging any easier any more, but
it does result in a good deal of log noise sometimes. This commit removes this
special case from the default config.
Closes #51198
This change alters the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require any
additional configuration, either in CI workers or locally, and is
controlled by specifying -Dtests.fips.enabled=true
Adding back accidentally removed jvm option that is required to enforce
start of the week = Monday in IsoCalendarDataProvider.
Adding a `feature` to the yml test in order to skip running it on JDK8
commit that removed it 398c802
commit that backports SystemJvmOptions c4fbda3
relates 7.x backport of code that enforces CalendarDataProvider use #48349
Backport of #50927.
Closes #49653. When using _FILE environment variables to supply values
to Elasticsearch, follow symlinks when checking that file permissions
are secure.
When installing multiple plugins at once, this commit changes the
behavior to report installed plugins as we go. In the case of failure,
we emit a message that we are rolling back any plugins that were
installed successfully, and also that they were successfully rolled
back. In the case a plugin is not successfully rolled back, we report
this clearly too, alerting the user that there might still be state on
disk they would have to clean up.
This commit allows the plugin installer to install multiple plugins in a
single invocation. The installation is treated as a transaction, so
that either all of the plugins are installed successfully, or none of
the plugins are installed.
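A sketch of the all-or-nothing behaviour described above; the install/remove callbacks stand in for the real installer steps and are assumptions:
```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class TransactionalInstaller {
    // Install each plugin in turn; if any installation fails, remove the ones
    // that already succeeded so no partial state is left on disk.
    static void installAll(List<String> pluginIds, Consumer<String> install, Consumer<String> remove) {
        List<String> installed = new ArrayList<>();
        try {
            for (String id : pluginIds) {
                install.accept(id);            // may throw on failure
                installed.add(id);
                System.out.println("-> Installed " + id);
            }
        } catch (RuntimeException e) {
            System.err.println("Rolling back plugins installed so far: " + installed);
            installed.forEach(remove);         // best-effort cleanup of on-disk state
            throw e;
        }
    }
}
```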
We renamed README.textile to README.asciidoc but a bunch of tests and
the package build itself still pointed at the old name. This switches
them to the new name.
A previous commit taught Elasticsearch packages to respect ES_PATH_CONF
during installs. Missed in that commit was respecting ES_PATH_CONF on
upgrades. This commit does that. Additionally, while ES_PATH_CONF is not
currently used in pre-install, this commit makes the preinst script
respect it too, in case we do use it in the future.
Backport of #49612.
The current Docker entrypoint script picks up environment variables and
translates them into -E command line arguments. However, since any tool
executed via `docker exec` doesn't run the entrypoint, this results in
a poorer user experience.
Therefore, refactor the env var handling so that the -E options are
generated in `elasticsearch-env`. These have to be appended to any
existing command arguments, since some CLI tools have subcommands and
-E arguments must come after the subcommand.
Also extract the support for `_FILE` env vars into a separate script, so
that it can be called from more than one place (the behaviour is
idempotent).
Finally, add noop -E handling to CronEvalTool for parity, and support
`-E` in MultiCommand before subcommands.
We respect ES_PATH_CONF everywhere except package install. This commit
addresses this by respecting ES_PATH_CONF when installing the RPM/Debian
packages.
This commit tweaks the workaround introduced in #49211 to support
Gradle 6.0. In the workaround, we specifically override the address
the Gradle daemon binds to by passing the desired address via the
OPENSHIFT_IP environment variable. This works fine for builds using
Gradle 6.0, but for older Gradle versions this causes issues with
inter-daemon communication, specifically when we build BWC branches
not on Gradle 6.0. The fix here is to strip that environment variable
out when building the target BWC branch if that branch is on an
older Gradle version.
This is all temporary and will be removed when this bug fix is released
in Gradle 6.1.
Closes #50025
This upgrade required a few significant changes. Firstly, the build
scan plugin has been renamed, and changed to be a Settings plugin rather
than a project plugin so the declaration of this has moved to our
settings.gradle file. Second, we were using a rather old version of the
Nebula ospackage plugin for building deb and rpm packages, the migration
to the latest version required some updates to get things working as
expected as we had some workarounds in place that are no longer
applicable with the latest bug fixes.
(cherry picked from commit 87f9c16e2f8870e3091062cde37b43042c3ae1c5)
The docker build task depends on the docker context being built, but it
was not explicitly set up as an input. This commit adds the task as an
input to the docker build.
relates #49613
Backport of #49079. Reimplement a number of the tests from
elastic/elasticsearch-docker.
There is also one Docker image fix here, which is that two of the provided
config files had different file permissions to the rest. I've fixed this
with another RUN chmod while building the image, and adjusted the
corresponding packaging test.
This commit sets an output marker file for the docker build tasks so
that they can be tracked as up to date. It also fixes the docker build
context task to omit the build date as an input property, which always
left the task out of date.
relates #49359
Backport of #47573.
Closes #43603. Allow environment variables to be passed to ES in a Docker
container via a file, by setting an environment variable with the `_FILE`
suffix that points to the file with the intended value of the env var.
The Apache Commons Daemon has some helpful features for Java
applications, like nice little text boxes for min heap, max heap, and
thread stack size. Our elasticsearch-service.bat script parses those
values out of the ES_JAVA_OPTS environment variable and provides them to
the Apache Commons Daemon invocation command in order to provide
sensible defaults. However, we failed to remove those values from the
ES_JAVA_OPTS environment variable, which meant they ended up in the
"Java Options" text box and would, from there, override whatever the
user put in the specific boxes for heap size or thread stack size.
This commit modifies the loop that parses ES_JAVA_OPTS to construct a
new environment variable containing only the values that aren't parsed
out for heap size or thread stack size, then uses that new environment
variable in the commons daemon invocation command.
JDK 14 has removed CMS. This commit restricts the support for CMS to JDK
8 through JDK 13, and defaults to G1 GC on JDK 14. We will revisit all
defaults in the future, but this ensures that we run with a
properly-configured garbage collector on JDK 14+.
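For illustration only, the version gate amounts to the following; this is a hedged sketch, not the actual JVM options mechanism:
```java
import java.util.List;

class GcOptions {
    // CMS is only kept for JDK 8 through 13; later JDKs default to G1.
    static List<String> forJavaMajorVersion(int javaMajorVersion) {
        if (javaMajorVersion >= 8 && javaMajorVersion <= 13) {
            return List.of("-XX:+UseConcMarkSweepGC");
        }
        return List.of("-XX:+UseG1GC");
    }
}
```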
This fixes a regression introduced in #42042. The logic here was
mistakenly inverted such that we only run these tests in a FIPS JVM
which is the opposite of what we intend.
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
This commit fixes the names of the UBI-based Docker build contexts to
lift the ubi component of the name into the archive base name, instead
of the classifier.
Backport of #46599 and #47640. Add packaging tests for Docker.
* Introduce packaging tests for Docker (#46599)
Closes #37617. Add packaging tests for our Docker images, similar to what
we have for RPMs or Debian packages. This works by running a container and
probing it e.g. via `docker exec`. Test can also be run in Vagrant, by
exporting the Docker images to disk and loading them again in VMs. Docker
is installed via `Vagrantfile` in a selection of boxes.
* Only define Docker pkg tests if Docker is available (#47640)
Closes #47639, and unmutes tests that were muted in b958467.
The Docker packaging tests were being defined irrespective of whether
Docker was actually available in the current environment. Instead,
implement exclude lists so that in environments where Docker is not
available, no Docker packaging tests are defined. For CI hosts, the build
checks `.ci/dockerOnLinuxExclusions`. The Vagrant VMs can define the
extension property `shouldTestDocker` to opt in to packaging
tests.
As part of this, define a separate utility class for checking Docker,
and call that instead of defining checks in-line in BuildPlugin.groovy
This commit fixes the name of the UBI-based Docker image build contexts
to include "7" (to set us up for the future where we are likely to have
a ubi8-based image).
This commit registers the UBI-based Docker image projects in the build
so that their assemble tasks are executed when the top-level assemble
task is executed.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
This commit simplifies the JDK copy specification in the archives build
so that it's a single line as opposed to an if/else with a repeated
body. This approach reduces the maintenance cost of this code.
* Always pass user-specified MaxDirectMemorySize
We had been testing whether a user had passed a value for
MaxDirectMemorySize by parsing the output of "java -XX:PrintFlagsFinal
-version". If MaxDirectMemorySize equals zero, we set it to half of max
heap. The problem is that on Windows with JDK 8, a JDK bug incorrectly
truncates values over 4g and returns multiples of 4g as zero. In order
to always respect the user-defined settings, we need to check our input
to see if an "-XX:MaxDirectMemorySize" value has been passed.
* Always warn for Windows/jdk8 ergo issue
Even if a user has set MaxDirectMemorySize, they aren't future-proof for
this JDK bug. With this change, we issue a general warning for the
windows/JDK8 issue, and a specific warning if MaxDirectMemorySize is
unset.
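A hedged sketch of the check described above (the method name is illustrative):
```java
import java.util.List;

class DirectMemoryCheck {
    // Inspect the user-provided JVM options directly rather than trusting
    // -XX:+PrintFlagsFinal output, which the JDK 8 bug on Windows can truncate.
    static boolean userSetMaxDirectMemorySize(List<String> userJvmOptions) {
        return userJvmOptions.stream().anyMatch(option -> option.startsWith("-XX:MaxDirectMemorySize="));
    }
}
```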
After some consideration, we are electing to make "ubi" part of the
image name instead of part of the tag. This commit implements that
change for the Elasticsearch UBI-based Docker images.
This commit bumps the bundled JDK to 13.0.1+9. Since AdoptOpenJDK did
not release 13.0.1+9 for Windows, this commit also enables that the
bundled JDK version can vary by platform.
This commit moves JVM options that we are setting on behalf of the user,
and that we do not expect them to fiddle with, out of the jvm.options
configuration file and into the JVM options parser. In this way, we
discourage fiddling with these settings, but more importantly, we ensure
that as we evolve or add to these settings, a user will pick the changes
up instead of being left behind because they have a modified jvm.options
file and do not pick up any new options that come with the distribution.
Our JVM ergonomics extract max heap size from JDK PrintFlagsFinal output.
On JDK 8, there is a system-dependent bug where memory sizes are cast to
32-bit integers. On affected systems (namely, Windows), when 1/4 of physical
memory is more than the maximum integer value, the output of PrintFlagsFinal
will be inaccurate. In the pathological case, where the max heap size would
be a multiple of 4g, the test will fail.
The practical effect of this bug, beyond test failures, is that we may set
MaxDirectMemorySize to an incorrect value on Windows. This commit adds a
warning about this situation during startup.
This commit removes the option to change the netty system properties to
reenable the direct buffer pooling. It also removes the need for us to
disable the buffer pooling in the system properties file. Instead, we
programmatically create an allocator that is used by our networking
layer.
This commit does introduce an Elasticsearch property which allows the
user to fallback on the netty default allocator. If they choose this
option, they can configure the default allocator how they wish using the
standard netty properties.
This test has been failing very frequently on Windows checks for 7.4. We've also seen it for 7.x as well as for the new 7.5 tests. I'm muting during investigation so we can see if this is masking any other issues.
* Convert RunTask to use testclusters, remove ClusterFormationTasks
This PR adds a new RunTask and a way for it to start a
testclusters cluster out of band and block on it to replace
the old RunTask that used ClusterFormationTasks.
With this we can now remove ClusterFormationTasks.
* Use version-specific distribution folders so we don't need to clean up (#46539)
* Retry deleting distro dir on windows
When restarting the cluster we clean up old distribution files that might
still be in use by the OS.
Windows closes resources of dead processes asynchronously, so we do a couple of
retries to get around it.
Closes #46014
* Avoid having to delete the distro folder.
* Remove the use of ClusterFormationTasks form RestTestTask (#47022)
This PR removes a use-case of the ClusterFormationTasks and converts a
project that flew under the radar so far.
There's probably more clean-up possible here, but for now the goal is
to be able to remove that code after `RunTask` is also updated.
* Migrate some 7.x only projects
Compilation was accidentally broken here when a backport used code from
JDK 9, which is not supported in 7.x. This commit addresses this by
using JDK 8 compatible APIs.
This commit moves the ES_TMPDIR substitution that we do for JVM options
into the JVM options parser itself. This solves a problem where the fact
that we do not make the substitution before ergonomics parsing can
lead to the JVM that we start for computing the ergonomic values failing
to start. Additionally, moving this substitution here enables us to
simplify the shell scripts since we do not need to implement it there
twice, once for Bash and once for Windows.
This PR makes the necessary adaptations to the tests and adds a PowerShell script to
invoke the OS tests on GCP instances connected as CI workers.
Also noticed that logs were not being produced by the tests and that these were not using log4j, so fixed that too.
One of the difficulties in working on these tests was that the tests just stalled with no indication of where the problem is.
To ease the debugging, after Process Explorer suggested that the tests are running some commands, we now have multiple timeouts: one for the tests (which will generate a thread dump) and one for individual commands (that bails with the command being run and the output and error so far) to make it easier to see what went wrong.
The tests were blocking because apparently the pipes to the sub-process were not closing, thus the threads were blocking on them and we were blocking indefinitely on the join. I'm not sure why this doesn't happen in vagrant, but we now properly deal with it.
Since the bundled jdk was added to Elasticsearch, there are now 2 ways
java can be missing. Either JAVA_HOME is set but does not exist, or the
bundled jdk does not exist. This commit improves the error messages in
those two cases, and also ensures our tests cover both cases.
Looks like there's an issue with aufs used in debian 8.
Adding `tsflags=nodocs` works around this issue and also results in
smaller image files.
Closes #47097 and elastic/infra#14780
This is the Java side of https://github.com/elastic/ml-cpp/pull/593
with a fallback so that ml-cpp bundles with either the
new or old directory structure work for the time being.
A few days after merging the C++ changes a followup to
this change will be made that removes the fallback.
Relates to #47007. The `gradle-ospackage-plugin` plugin doesn't
properly support symlinks on Windows.
This PR changes the way we configure tasks to prevent building these
packages as part of a windows check.
G1 GC was set up to use an `InitiatingHeapOccupancyPercent` of 75. This
could leave used memory at a very high level for an extended duration,
triggering the real memory circuit breaker even at low activity levels.
The value is a threshold for old generation usage relative to total heap
size and thus it should leave room for the new generation. Default in
G1 is to allow up to 60 percent for new generation and this could mean that the
threshold was effectively at 135% heap usage. GC would still kick in of course and
eventually enough mixed collections would take place such that adaptive adjustment
of IHOP kicks in.
The JVM has adaptive setting of the IHOP, but this does not kick in
until it has sampled a few collections. A newly started, relatively
quiet server with primarily new generation activity could thus
experience heap above 95% frequently for a duration.
The changes here are two-fold:
1. Use 30% default for IHOP (the JVM default of 45 could still mean
105% heap usage threshold and did not fully ensure not to hit the
circuit breaker with low activity)
2. Set G1ReservePercent=25. This is used by the adaptive IHOP mechanism,
meaning old/mixed GC should kick in no later than at 75% heap. This
ensures IHOP stays compatible with the real memory circuit breaker also
after being adjusted by adaptive IHOP.
This PR adds some restrictions around testfixtures to make sure the same service (as defined in docker-compose.yml) is not shared between multiple projects.
Sharing would break running with --parallel.
Projects can still share fixtures as long as each has its own service within.
This is still useful to share some of the setup and configuration code of the fixture.
Projects now also have to specify a service name when calling useCluster to refer to a specific service.
If this is not the case all services will be claimed and the fixture can't be shared.
For this reason fixtures have to explicitly specify if they are using themselves (fixture and tests in the same project).
This commit disables caching of BWC snapshot distributions in the "trunk" (aka master) branch.
Since the previous major release branches move quickly we rarely get cache hits for these
tasks, and the artifacts themselves are very large. This means the overhead here is high and
savings basically zero. We conditionally disable task output caching in this scenario in CI to
avoid excessive build cache overhead as well as causing too much churn in the cache itself, which
would lead to lots of cache entry evictions.
This commit teaches the build how to bundle AdoptOpenJDK with our
artifacts, and switches to AdoptOpenJDK as the bundled JDK. We keep the
functionality to also bundle Oracle OpenJDK distributions.
We currently configure io.netty.allocator.numDirectArenas to be 0 in the
jvm ergonomics class. This is a config that we always want to set, so it
makes sense to move it to jvm.options.
This adds a pipeline aggregation that calculates the cumulative
cardinality of a field. It does this by iteratively merging in the
HLL sketch from consecutive buckets and emitting the cardinality up
to that point.
This is useful for things like finding the total "new" users that have
visited a website (as opposed to "repeat" visitors).
This is a Basic+ aggregation and adds a new Data Science plugin
to house it and future advanced analytics/data science aggregations.
In the Sys V init scripts, we check for Java. This is not needed, since
the same check happens in elasticsearch-env when starting up. Having
this duplicate check has bitten us in the past, where we made a change
to the logic in elasticsearch-env, but missed updating it here. Since
there is no need for this duplicate check, we remove it from the Sys V
init scripts.
Most of our CLI tools use the Terminal class, which previously did not provide methods for writing to standard output. When all output goes to standard out, there are two basic problems. First, errors and warnings are "swallowed" in pipelines, making it hard for a user to know when something's gone wrong. Second, errors and warnings are intermingled with legitimate output, making it difficult to pass the results of interactive scripts to other tools.
This commit adds a second set of print commands to Terminal for printing to standard error, with errorPrint corresponding to print and errorPrintln corresponding to println. This leaves it to developers to decide which output should go where. It also adjusts existing commands to send errors and warnings to stderr.
Usage is printed to standard output when it's correctly requested (e.g., bin/elasticsearch-keystore --help) but goes to standard error when a command is invoked incorrectly (e.g. bin/elasticsearch-keystore list-with-a-typo | sort).
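A minimal sketch of the split between the two streams; this is not the actual Terminal API, just an illustration of the idea:
```java
import java.io.PrintWriter;

class SimpleTerminal {
    private final PrintWriter out = new PrintWriter(System.out, true);
    private final PrintWriter err = new PrintWriter(System.err, true);

    // Legitimate output: safe to pipe into other tools.
    void println(String message) {
        out.println(message);
    }

    // Errors and warnings: still visible when stdout is piped elsewhere.
    void errorPrintln(String message) {
        err.println(message);
    }
}
```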
* Add input and output tracking of built bwc versions
This PR adds tracking of the bwc version's git hash as input and all the
expected files as output.
The effect is that `gradlew` is not called at all when the git hash
doesn't change and the version was already built.
Previously gradlew would be called for the bwc version and it would have
to configure the project and go through up-to-date checks to figure out
that nothing changed.
This helps when working on bwc tests locally and needing to run the test
multiple times.
This should also help in CI not re-build bwc versions across different
runs.
* Enable caching of bwc builds
This commit addresses an issue when trying to using Elasticsearch on
systems with Sys V init and the bundled JDK was not being used. Instead,
we were still inadvertently trying to fallback on the path. This commit
removes that fallback as that is against our intentions for 7.x where we
only support the bundled JDK or an explicit JDK via JAVA_HOME.
This commit makes the gitRevision property a lazy loaded value by
returning an Object implementing toString(). The Dockerfile template is
also changed to use groovy templates instead of the mavenfilter hack, so
converting to String will not happen until runtime.
Elasticsearch does not grant Netty reflection access to get Unsafe. The
only mechanism that currently exists to free direct buffers in a timely
manner is to use Unsafe. This leads to the occasional scenario, under
heavy network load, that direct byte buffers can slowly build up without
being freed.
This commit disables Netty direct buffer pooling and moves to a strategy
of using a single thread-local direct buffer for interfacing with sockets.
This will reduce the memory usage from networking. Elasticsearch
currently derives very little value from direct buffer usage (TLS,
compression, Lucene, Elasticsearch handling, etc all use heap bytes). So
this seems like the correct trade-off until that changes.
Prior to this PR we always checked out the latest bwc branches and had
an external mechanism to store the bwc versions used for every CI run so
we could both reproduce those builds and run additional tests using the
same combination.
This adds complexities in setting up and maintaining CI and makes it
difficult to set up multi jobs.
This change replaces that mechanism with a time based approach
that looks at the commit date of the current revision and picks the
newest on the bwc branch that's still older than that.
It also makes sure there are no merge commits in this interval.
This new behavior is meant to be enabled in CI only, for everything
except PR checks, which will still use the last available bwc revision.
This commit applies a normalization process to environment paths, both
in how they are stored internally and in their settings values. This
normalization is done via two means:
- we make the paths absolute
- we remove redundant name elements from the path (what Java calls
"normalization")
This change ensures that when we compare and refer to these paths within
the system, we are using a common ground. For example, prior to the
change if the data path was relative, we would not compare it correctly
to paths from disk usage. This is because the paths in disk usage were
being made absolute.
The org.label-schema labels on Docker images have been superseded by
pre-defined OCI annotations. However, there is still a lot of tooling in
use that relies on the org.label-schema, so we do not want to drop
them. This commit adds values for the org.opencontainers.image
pre-defined annotation keys. Additionally, we correct an issue with the
label used to represent the license, to use the org.label-schema.license
label. While this label was never accepted into the org.label-schema
specification (because the specification was superseded, not because the
label was explicitly rejected), there are containers out there using this
label. In particular, our base image uses it, so we need to override it;
otherwise we inherit the base image's value and end up mis-reporting the license.
Today our systemd service defaults to a service type of simple. This
means that systemd assumes Elasticsearch is ready as soon as the
ExecStart (bin/elasticsearch) process is forked off. This means that the
service appears ready long before it actually is, so before it is ready
to receive requests. It also means that services that want to depend on
Elasticsearch being ready to start cannot, as there is no reliable
mechanism to determine this. This commit changes the service type to
notify. This requires that Elasticsearch sends a notification message
via the libsystemd sd_notify method. This commit does that by using JNA to
invoke this native method. Additionally, we use this integration to also
notify systemd when we are stopping.
The field has to be defined in log4j2.properties and should be an
escaped JSON string for now (it is broken JSON at the moment). This should later be refactored into a JSON array
of strings.
* Mute failing test
tracked in #44552
* mute EvilSecurityTests
tracking in #44558
* Fix line endings in ESJsonLayoutTests
* Mute failing ForecastIT test on windows
Tracking in #44609
* mute BasicRenormalizationIT.testDefaultRenormalization
tracked in #44613
* fix mute testDefaultRenormalization
* Increase busyWait timeout; windows is slow
* Mute failure unconfigured node name
* mute x-pack internal cluster test windows
tracking #44610
* Mute JvmErgonomicsTests on windows
Tracking #44669
* mute SharedClusterSnapshotRestoreIT testParallelRestoreOperationsFromSingleSnapshot
Tracking #44671
* Mute NodeTests on Windows
Tracking #44256
In https://github.com/elastic/elasticsearch/pull/41913 setting up the
temp dir for ES was moved from the env script to individual cli scripts.
However, moving it to the windows service cli was missed. This commit
restores setting up the temp dir for the windows service control script.
Today when checksumming a plugin zip during plugin install, we read all
of the bytes of the zip into memory at once. When trying to run the
plugin installer on a small heap (say, 64 MiB), this can lead to the
plugin installer running out of memory when checksumming large
plugins. This commit addresses this by reading the plugin bytes in 8 KiB
chunks, thus using a constant amount of memory independent of the size
of the plugin.
This commit adds some default CLI JVM options to control the heap size
and the garbage collector used for the CLI tools. We do this because
otherwise the JVM will default to large initial and max heap sizes based
on the RAM visible to the JVM (which could be all the physical RAM on
the machine if not run in a container-aware JVM). This commit therefore
sets the initial heap size to 4m, the max heap size to 64m, the garbage
collector to the serial collector, and leaves this user-configurable by
honoring ES_JAVA_OPTS last.
This is a refactor of the current JSON logging to make it more open for extensions
and to support custom ES log messages used in DeprecationLogger, IndexingSlowLog and SearchSlowLog.
We want to include x-opaque-id in deprecation logs. The easiest way to have this as an additional JSON field instead of part of the message is to create a custom DeprecatedMessage (extends ESLogMessage).
These messages are regular log4j messages with a text, but also carry a map of fields which can then populate the log pattern. The logic for this lives in ESJsonLayout and ESMessageFieldConverter.
A similar approach can be used to refactor IndexingSlowLog and SearchSlowLog JSON logs to contain fields previously only present as an escaped JSON string in the message field.
closes #41350
backport #41354
This change makes the process of verifying the signature of
official plugins FIPS 140 compliant by defaulting to use the
BouncyCastle FIPS provider and adding a dependency on bcpg-fips,
which implements parts of OpenPGP in a FIPS compliant manner.
In already FIPS 140 enabled environments that use the
BouncyCastle FIPS provider, the bcfips dependency is redundant,
but it doesn't cause an issue as it is added only to the classpath
of the cli-tools.
This is a backport of #44224
When using gradle run by itself, this uses the default distro with a
basic license and enables security. There is a setup command to create
an elastic-admin user, but only when the license is a trial license. Now
that security is available with the basic license, we should always run
this command when using the default distribution.
We initially added `requireDocker` for a way for tasks to say that they
absolutely must have it, like the build docker image tasks.
Projects using the test fixtures plugin are not in this boat, as the
intent with these is that they will be skipped if docker and docker-compose
are not available.
Before this change we were lenient, the docker image build would succeed
but produce nothing. The implementation was also confusing as it was not
immediately obvious this was the case due to all the indirection in the
code.
The reason we have this leniency is that when we added the docker image
build, docker was a fairly new requirement for us, and we didn't have
it deployed in CI widely enough nor had CI configured to prefer workers
with docker when possible. We are in a much better position now.
The other reason was other stack teams running `./gradlew assemble`
in their respective CI and the possibility of breaking them if docker is
not installed. We have been advocating for building specific distros for
some time now, and I will also send out an additional notice.
The PR also removes the use of `requireDocker` from tests that actually
use test fixtures and are ok without it, and fixes a bug in test
fixtures that would cause incorrect configuration and allow some tasks
to run when docker was not available and they shouldn't have.
Closes #42680 and #42829; see also #42719
Enable audit logs in docker by creating console appenders for audit loggers.
Also rename the field @timestamp to timestamp and add a field type with value audit.
The docker build now contains two log4j configurations, for the oss and default versions. The build now allows overriding the default configuration.
Also changed the format of the timestamp from ISO8601 to include the time zone, as per this discussion #36833 (comment)
closes #42666
backport #42671
Before this change we would recurse to cache bwc versions.
This proved to be problematic due to the number of steps it was
generating taking too long.
Also this required tricky maintenance to break the recursion for old
branches we don't really care about.
With this change we now cache specific branches only.
Previously we used LoggedExec for running the internal bwc builds.
However, this had bad performance implications as all the output was
buffered into memory, thus we changed back to normal Exec. This commit
adds a `spoolOutput` setting to LoggedExec which can be used for
commands with large amounts of output, and switches the bwc builds to
use this flag.
The elasticsearch-cli helper script does not use the tempdir created by
elasticsearch-env, yet the env script still creates it. This can lead to
lots of temp directories being created when running cli scripts in an
automated fashion. This commit passes a fake tmpdir to the env script to
avoid creation.
closes #34445
This commit adds deletion of the bin directory to postrm cleanup. While
the package's bin files are cleaned up by the package manager, plugins
may have created subdirectories under bin. We already cleanup plugins,
but not the extra bin dirs their installation created.
closes #18109
Java 8 presents the JVM options slightly differently when displaying via
-XX:+PrintFlagsFinal. This commit adapts the JVM options parser for this
possibility.
Relates #42009
This commit removes manual parsing of JVM options when calculating
ergonomics. This is to avoid a situation that we parse values
differently than the JVM would. In fact, we already have a bug along
these lines today. It is possible to start the JVM with the same flag
multiple times on the command line. In this case, the last value
wins. For example, -Xmx1g -Xmx2g would start the JVM with a heap size of
two gigabytes. Our JVM ergonomics ignores this possibility and instead
the first value wins!
Our strategy to avoid manual parsing of the JVM options is to start the
Java command line parser (without actually starting a JVM) by invoking
java with the same command line flags as presented and request that the
JVM tell us what values it would start with. This ensures that we have
the correct values when making ergonomic decisions.
Moreover, our previous strategy also ignored ES_JAVA_OPTS, which could
override the heap size as well, leading to incorrect ergonomic
choices. This commit addresses this issue too.
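A hedged sketch of the strategy described above: re-invoke java with the user's flags plus -XX:+PrintFlagsFinal so the launcher reports the final values it would use, without starting Elasticsearch itself. The class and method names are illustrative, and parsing of the output is left out:
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class FinalJvmFlags {
    // Run `java <user options> -XX:+PrintFlagsFinal -version` and capture the
    // flag dump the launcher prints for those options.
    static List<String> print(String javaBinary, List<String> userJvmOptions)
            throws IOException, InterruptedException {
        List<String> command = new ArrayList<>();
        command.add(javaBinary);
        command.addAll(userJvmOptions);
        command.add("-XX:+PrintFlagsFinal");
        command.add("-version");
        Process process = new ProcessBuilder(command).start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        process.waitFor();
        return lines;
    }
}
```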
The deb package has been updated several times in the past to contain
overrides in order to pass lintian inspection. However, there have never
been any tests to ensure we do not fallback to failure. This commit
updates the overrides file given things that have changed since 2.x like
adding ML and bundling the jdk.
closes #17185
We currently download 3 variants of the same version of the jdk for
bundling into the distributions. Additionally, the vagrant images do
their own downloading. This commit moves the jdk downloading into a
utility gradle plugin. This will be used in a future PR by the packaging
tests.
The new plugin exposes a "jdks" project extension which allows creating
named jdks. Once the jdk version and platform are set for a named jdk,
the jdk object may be used as a lazy String for the jdk home path, or a
file collection for copying.
testclusters now detects from settings that security is enabled.
If a user is not specified using the DSL introduced in this PR, a default one is created.
The appropriate wait conditions are used, authenticating with the first user defined in the DSL (or the default user).
An example DSL to create a user is `user username: "test_user", password: "x-pack-test-password", role: "superuser"`; all keys are optional and default to the values shown in this example.
We have faked some Ivy repositories on a few artifact locations. Today
when Gradle attempts to resolve these artifacts, it follows its default
strategy to search for Gradle metadata, then Maven POM files, then Ivy
descriptors, and finally will fallback to looking directly for the
artifact. This wastes time on remote network calls that will 404 anyway
since these metadata resources will not exist for these fake Ivy
repositories. This commit overrides the Gradle strategy to look directly
for artifacts.
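For illustration, overriding the metadata lookup in a build script might look like the following Gradle Kotlin DSL sketch; the repository URL and artifact pattern are placeholders, and only the `metadataSources { artifact() }` part reflects the strategy described above.

```kotlin
// Sketch of an Ivy repository that skips metadata and resolves artifacts directly.
repositories {
    ivy {
        url = uri("https://example.org/fake-ivy-repo")               // placeholder
        patternLayout {
            artifact("[organisation]/[module]-[revision].[ext]")     // placeholder pattern
        }
        metadataSources {
            artifact()   // look directly for the artifact; no Gradle/Maven/Ivy metadata
        }
    }
}
```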
When Elasticsearch is run from a package installation, the running
process does not have permissions to write to the keystore. This is
because of the root:root ownership of /etc/elasticsearch. This is why we
create the keystore if it does not exist during package installation. If
the keystore needs to be upgraded, that is currently done by the running
Elasticsearch process. Yet, as just mentioned, the Elasticsearch process
would not have permissions to do that during runtime. Instead, this
needs to be done during package upgrade. This commit adds an upgrade
command to the keystore CLI for this purpose, and that is invoked during
package upgrade if the keystore already exists. This ensures that we are
always on the latest keystore format before the Elasticsearch process is
invoked, and therefore no upgrade would be needed then. While this bug
has always existed, we have not heard of reports of it in practice. Yet,
this bug becomes a lot more likely with a recent change to the format of
the keystore to remove the distinction between file and string entries.
We use Bouncy Castle to verify signatures when installing official
plugins. This leads to illegal access warnings because Bouncy Castle
accesses the Sun security provider constructor. This commit adds an
add-opens flag to suppress this illegal access.
This commit bumps the bundled JDK to version 12.0.1. Note that we had to
add a new pattern here as Oracle has changed the source of the
builds. This commit will be backported to 6.7 in a different form to
bump the bundled JDK in the Docker images too.
We had been obtaining JDK distributions from download.java.net. This
site is now presenting a certificate that does not list
download.java.net as a SAN. Therefore, with host verification, the build
cannot use this site. This commit switches to using download.oracle.com
which appears to be an alternative name for the same CNAME
download.oracle.com.edgekey.net. This allows our builds to resume.
hamcrest has some improvements in newer versions, like FileMatchers
that make assertions about file existence cleaner. This commit upgrades
to the latest version of hamcrest so we can start using new and improved
matchers.
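As a small illustration, not code from the Elasticsearch test suite, an assertion using hamcrest's FileMatchers might look like this in Kotlin; the function name and path are placeholders.

```kotlin
import java.io.File
import org.hamcrest.MatcherAssert.assertThat
import org.hamcrest.Matchers.greaterThan
import org.hamcrest.io.FileMatchers.aFileWithSize
import org.hamcrest.io.FileMatchers.anExistingFile

// Sketch: file assertions that read cleanly with the newer FileMatchers.
fun assertDistributionWasBuilt(distribution: File) {
    assertThat(distribution, anExistingFile())
    assertThat(distribution, aFileWithSize(greaterThan(0L)))
}
```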
The pid dir for both systemd and init.d is already managed by those
respective systems (tmpfiles.d and the init script, respectively). Since
the /var/run dir is often mounted as tmpfs, it does not make sense to
have the elasticsearch pid dir added by the package installation. This
commit removes that empty dir from deb and rpm.
This commit adds a filter to the files included from modules to only
include platform specific files relevant to the distribution being
built. For example, the deb files on linux would now only include linux
ML binaries, and not windows or macos files.
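As a hedged illustration of that kind of filtering (Gradle Kotlin DSL), a copy spec for a Linux distribution might exclude other platforms' native directories; the task name, paths, and the `platform/<os>-<arch>` layout are assumptions for the example.

```kotlin
// Sketch: keep only Linux platform files when assembling a Linux distribution.
tasks.register<Copy>("copyLinuxModules") {
    from("build/modules")                                   // placeholder source
    into(layout.buildDirectory.dir("linux-dist/modules"))   // placeholder destination
    exclude("**/platform/windows-*/**", "**/platform/darwin-*/**")
}
```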
* fix the packer cache script
This PR disabled the explicit pull since it seems this always tries to
work with a registry.
Functionality will not be affected since we will still build the images
on pull.
Instead of allowing docker-compose to rebuild it, with this change we tag
the image with a test label and use that in the testing, as this is
simpler than dealing with a dynamically generated docker-compose file.
This commit changes the bwc builds from a single task for a branch to a
task for each bwc artifact. This reduces the bwc build time when only a
specific artifact is needed; for example, when running cluster restart
tests on a Mac, the Windows artifacts and rpm/debs are not needed.
This commit fixes an issue when the artifact used to build the Docker
image is sourced from artifacts.elastic.co. In particular, the artifact
was not downloaded to the proper location.
* Replace usages of RandomizedTestingTask with built-in Gradle Test (#40978)
This commit replaces the existing RandomizedTestingTask and supporting code with Gradle's built-in JUnit support via the Test task type. Additionally, the previous workaround to disable all tasks named "test" and create new unit testing tasks named "unitTest" has been removed, so that the "test" task now runs unit tests as per the normal Gradle Java plugin conventions (see the sketch after this list).
(cherry picked from commit 323f312bbc829a63056a79ebe45adced5099f6e6)
* Fix forking JVM runner
* Don't bump shadow plugin version
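For reference, here is a minimal Gradle Kotlin DSL sketch, not the actual Elasticsearch build logic, of configuring the built-in Test task once the custom task type and the separate "unitTest" task are gone; the specific settings shown are illustrative.

```kotlin
// Sketch: the standard `test` task from the Java plugin, configured directly.
tasks.named<Test>("test") {
    useJUnit()
    maxParallelForks = 2
    testLogging {
        events("failed", "skipped")
    }
}
```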
We previously found a bug in the JVM where AVX-512 instructions could
cause the JVM to crash with a segmentation fault. This bug impacted JDK
9 and JDK 10, but was most prominent on JDK 10 because AVX-512 was
enabled there by default. In JDK 11, this bug is reported fixed so this
commit restricts the disabling of AVX-512 to JDK 10 only. Since we no
longer support JDK 10 for any versions that this commit will be
integrated into (7.1, 8.0), we simply remove the disabling of this flag
from the JVM options.
This commit deprecates versions of Java prior to Java 11. This commit
will cause a warning to be printed to standard error when any command
line tool is invoked, or when Elasticsearch is started. Additionally, we
log a deprecation message when Elasticsearch is started.
* Add notice for bundled jdk
This commit adds the license/notice for the bundled openjdk.
* Fix package notices
While yum does retry retrieving files 10 times by default [1], slow
network fetches governed by `minrate` cause immediate aborts without
being retried.
Wrap yum commands in a 10-iteration retry loop.
[1] http://man7.org/linux/man-pages/man5/yum.conf.5.html
Backport of #40349
On windows, JAVA_HOME is currently resolved when the windows service is started. However, this is contrary to what our documentation states. This commit moves resolution to service install. This has the side effect of making java existence checking optional in elasticsearch-env.bat, since the rest of the service commands do not require java.
closes#30720
This change removes the use of hardcoded port values for the
idp-fixture in favor of the mapped ephemeral ports. This should prevent
failures due to port conflicts in CI.
* Revert "Configure TMP for test nodes on Windows (#39959)"
This reverts commit 97562a874fcb1f29fb05272ab860a0307e97d1aa.
* Configure a tmp dir without spaces
* Pass on TMP instead of changing it
Now that we have the bundled JDK in the Docker images, we should use
them as opposed to procuring a JDK ourselves. This commit replaces the
JDK in the Docker image with the bundled JDK.
This commit adds cd $ES_HOME to elasticsearch-env and removes it from
elasticsearch. This way, both elasticsearch and elasticsearch-cli are
executed with the working directory set to $ES_HOME. The need for the
fix arose from the following bug:
1. Explicitly set path.data to relative to ES_HOME path in
elasticsearch.yml.
2. Run elasticsearch from any directory. Elasticsearch is able to
correctly start.
3. Stop elasticsearch.
4. Run elasticsearch-node unsafe-bootstrap, not from ES_HOME directory.
It will fail with an exception.
This commit fixes the issue and adds a new test.
Also, tests numbered >= 100 are renamed because alphabetical ordering
does not work for them.
(cherry picked from commit 2ffc29306ff7366efc598e7b4dd2ce528895cd3a
with fixes by #40083 and #40118)
This commit adds a variant for every official distribution that omits
the bundled jdk. The "no-jdk" naming is conveyed through the package
classifier, alongside the platform. Package tests are also added for
each new distribution.
This breaks on Windows, where the TMP dir defaults to C:\Windows and
startup fails with a permission error.
I tried to create a tmp dir and pass it in the `TMP` env var, but it led
to a class not found error, and since testclusters is already
independent of the calling environment I stopped there.
The posix_spawn method of launching a process from Java
goes via an intermediate process called jspawnhelper
which lives in the lib directory rather than the bin
directory and hence got missed by the original chmod
loop. This change adds jspawnhelper as a special case.
It's the only program that's in the lib directory in a
macOS JDK 11.