Closes #63869. Perform `docker pull` explicitly instead of as part of
`docker build`, and wrap it in a retry loop. This is an attempt to make
the build more resilient to transient errors.
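For illustration, a minimal Java sketch of the retry idea (the image name, attempt count, and back-off are placeholders, not the actual build code):
```
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class DockerPullRetry {
    // Pull the image up front so a transient registry failure does not fail `docker build`.
    static void pull(String image, int maxAttempts) throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Process process = new ProcessBuilder("docker", "pull", image)
                .inheritIO() // stream docker's output into the build log
                .start();
            if (process.waitFor() == 0) {
                return; // pull succeeded
            }
            if (attempt < maxAttempts) {
                TimeUnit.SECONDS.sleep(10L * attempt); // back off before the next attempt
            }
        }
        throw new IOException("docker pull failed after " + maxAttempts + " attempts: " + image);
    }
}
```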
This commit converts build code that downloads distributions or other
artifacts to use the new no-kpi subdomain, and removes the formerly used
no-kpi header.
* Add tests for using ES_JAVA_OPTS with windows service
* Relocate ES_JAVA_OPTS delimiter munging
* Don't use equals for -Xmx and -Xms args
* Write newlines in temporary configs
This commit adjusts the defaults for the tiered data roles so that they
are enabled by default, or if the node has the legacy data role. This
ensures that the default experience is that the tiered data roles are
enabled.
To fully specify the behavior for the tiered data roles:
- starting a new node with the defaults: enabled
- starting a new node with node.roles configured: enabled if and only
if the tiered data roles are explicitly configured, independently
of the node having the data role
- starting a new node with node.data enabled: enabled unless the
tiered data roles are explicitly disabled
- starting a new node with node.data disabled: disabled unless the
tiered data roles are explicitly enabled
* Fix concurrent modification on task realization
* Use TaskProvider instead of relying on tasks in distribution setup
* Port more task references in :distribution to task provider
* Fix NullPointerException in distribution setup
* Cleanup on integtest distribution setup (#62937)
- Simplify build task and archive base name calculation
- Move integ test zip project only setup into integ test zip build script
* Fix merge
* Wire local unreleased bwc versions more efficiently for tests (#62473)
For testing against the local distribution we already avoid the packaging/unpackaging
cycle of es distributions when setting up test clusters. This PR adopts the use of the
expanded (extracted) distributions for unreleased bwc versions (versions that are checked out
from a branch and built from source in :distribution:bwc:minor / :distribution:bwc:bugfix).
This makes the setup of bwc based cross version tests a bit faster by avoiding
the unpackaging overhead. We still assemble both in the bwcBuild tasks at the moment,
which will be addressed in a later issue.
This reworks the :distribution:bwc project:
- Convert all the custom logic from build script logic (groovy) into gradle binary plugins (java)
- Tried to make the bwc setup logic a bit more readable
- Add basic functional test coverage for the bwc logic this PR tweaked.
- Extracted a general internal BWC Git plugin out of the bwc setup plugin to improve maintenance
and testability
- Changed the InternalDistributionPlugin to resolve the extracted distro instead of relying
on unpacking the distribution archive
* Fix java8 incompatibility
* Fix extension calculation for 6.8.* distribution
We switched to adoptopenjdk from oracle jdk to rely on the notarization
found in adoptopenjdk on MacOS. However, that notarization still had
issues, and we currently do our own notarization of the entire
distribution, including the jdk. The recent bump to jdk 15 has revealed
openjdk to be lax in maintaining support for older systems. Since the
notarization is no longer an issue, this PR moves the bundled jdk back
to Oracle, in order to continue supporting those older systems affected
by adoptopenjdk 15.
relates #62709
Referencing a project instance during task execution is discouraged by
Gradle and should be avoided, e.g. it is incompatible with Gradle's
incubating configuration cache. Instead, there are services available to handle
archive and filesystem operations in task actions.
Brings us one step closer to #57918
Java 15 requires at least glibc 2.14, but we support older Linux OSs that ship with older versions. Rather than continue to ship Java 14, which is now EOL and therefore unsupported, ES will detect this situation and print a helpful message, instead of the cryptic error that would otherwise be printed. Users on older OSs will have to set JAVA_HOME instead of using the bundled JVM.
This doesn't affect v8.0.0 because these older Linux OSs will not be supported, and all the supported ones have glibc 2.14.
- Extract distribution archives defaults into plugin
- Added basic test coverage
- Avoid packaging/unpackaging cycle when relying on locally built distributions
- Provide DSL for setting up distribution archives
- Cleanup archives build script
This commit removes `integTest` task from all es-plugins.
Most relevant projects have been converted to use yamlRestTest, javaRestTest,
or internalClusterTest in prior PRs.
A few projects needed to be adjusted to allow complete removal of this task:
* x-pack/plugin - converted to use yamlRestTest and javaRestTest
* plugins/repository-hdfs - kept the integTest task, but use `rest-test` plugin to define the task
* qa/die-with-dignity - convert to javaRestTest
* x-pack/qa/security-example-spi-extension - convert to javaRestTest
* multiple projects - remove the integTest.enabled = false (yay!)
related: #61802
related: #60630
related: #59444
related: #59089
related: #56841
related: #59939
related: #55896
The windows service script does a little munging of the parsed JVM
option string, converting whitespaces to semicolons. We recently added
an optional Java 14 JDK flag to our system JVM flags. On earlier JDKs,
the windows service batch script would encounter a double whitespace
when this option was missing and convert it into double semicolons.
Double semicolons, in turn, don't work in the arguments to the windows
service command, and led to a lot of JVM options being dropped,
including "es.path.conf", which is required for startup.
This commit puts in a double defense. First, it removes any empty-string
options from the system options list in the Java Options Parser code.
Second, it munges out double semicolons if they do appear in the parsed
option output.
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
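As a rough Java sketch of the two defenses described above (the names are illustrative, not the actual JvmOptionsParser or batch-script code):
```
import java.util.List;
import java.util.stream.Collectors;

public class WindowsServiceOptions {
    // First defense: drop empty-string entries from the system JVM options,
    // so an absent optional flag never produces a double whitespace.
    static List<String> withoutEmptyOptions(List<String> options) {
        return options.stream().filter(option -> option.isEmpty() == false).collect(Collectors.toList());
    }

    // Second defense: if a double whitespace slipped through and was converted to
    // double semicolons, munge them back down to a single delimiter.
    static String collapseDoubleSemicolons(String semicolonDelimited) {
        while (semicolonDelimited.contains(";;")) {
            semicolonDelimited = semicolonDelimited.replace(";;", ";");
        }
        return semicolonDelimited;
    }
}
```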
This commit adds external test modules. These are modules meant for
external systems to test edge cases in elasticsearch, but only within
snapshots. They are not meant to be used in production, so protections
are also added from their accidental inclusion in release builds.
Note that this commit does not actually add any new modules, it only
adds the infrastructure for the new modules, under
`test/external-modules`.
PR #61474 reworked deprecation logging to rely more heavily on log4j. Unfortunately,
the changes required to log4j's configuration were not applied to the version we ship
with the Docker image.
We leverage artifact transforms now when downloading and unpacking elasticsearch distributions.
This has the benefit of
- handcrafted extract tasks on the root project are not required.
- The general tight coupling to the root project has been removed.
- The overall number of configurations required to handle a distribution has been reduced
- ElasticsearchDistribution has been simplified by making Extracted an ordinary Configuration
Downloaded and unpacked external distributions are reused in later builds by being cached
in the Gradle user home.
DistributionDownloadPlugin functional tests have been extended and ported
to DistributionDownloadPluginFuncTest.
* Fix ElasticsearchNode#getDistributionFiles (#61219)
Fixes #61647
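To make the artifact transform approach described above concrete, a generic sketch of an unzip transform (class and output names are illustrative, not the build's actual implementation):
```
import org.gradle.api.artifacts.transform.InputArtifact;
import org.gradle.api.artifacts.transform.TransformAction;
import org.gradle.api.artifacts.transform.TransformOutputs;
import org.gradle.api.artifacts.transform.TransformParameters;
import org.gradle.api.file.FileSystemLocation;
import org.gradle.api.provider.Provider;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// A transform like this replaces handcrafted extract tasks; Gradle stores the
// unpacked output in the user home so later builds can reuse it.
public abstract class UnzipTransform implements TransformAction<TransformParameters.None> {

    @InputArtifact
    public abstract Provider<FileSystemLocation> getInputArtifact();

    @Override
    public void transform(TransformOutputs outputs) {
        File zip = getInputArtifact().get().getAsFile();
        File targetDir = outputs.dir(zip.getName() + "-extracted");
        try (ZipInputStream in = new ZipInputStream(new FileInputStream(zip))) {
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                File target = new File(targetDir, entry.getName());
                if (entry.isDirectory()) {
                    target.mkdirs();
                } else {
                    target.getParentFile().mkdirs();
                    Files.copy(in, target.toPath());
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```
Registration would then map an archive artifact type to an extracted one via dependencies.registerTransform(UnzipTransform.class, ...), so requesting the extracted attribute on a configuration triggers the unpacking lazily.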
Backport of #61474.
Part of #46106. Simplify the implementation of deprecation logging by
relying of log4j more completely, and implementing additional behaviour
through custom appenders and filters.
- Added test coverage
- Removes build script cluttering
- Splits archive building and archive checking logic
- Only rely on boost for now for ML licenses (TBD)
- Use Gradle build-in untar and unzip support
* Handle dynamic versions in func tests assertions
Closes #60864. Tweak the JDK directories' permissions in the ES
Docker image so that ES can run under a different user and group.
These changes assume that the image is being run with bind-mounted
config, data and logs directories, and reads and writes to these
locations will still fail when both the UID and GID are not the
default. Everything should be OK when running with the default GID
of zero, however.
Backport of #60742.
This PR resurrects support for building Docker images based on one of
Red Hat's UBI images. It also adds support for running the existing
Docker tests against the image. The image is named
`elasticsearch-ubi8:<version>`.
I also changed the Docker build file to use enums instead of strings in a lot
of places, for added rigour.
We leverage artifact transforms now when downloading and unpacking elasticsearch distributions.
This has the benefit of
- handcrafted extract tasks on the root project are not required.
- the general tight coupling to the root project has been removed.
- the overall number of configurations required to handle a distribution has been reduced
- ElasticsearchDistribution has been simplified by making Extracted an ordinary Configuration
downloaded and unpacked external distributions are reused in later builds by being cached
in the Gradle user home.
DistributionDownloadPlugin functional tests have been extended and ported
to DistributionDownloadPluginFuncTest.
* Fix java8 compliant Path calculation
Closes #60853. After upgrading to CentOS 8, the behaviour of chroot has
subtly changed. Now we have to explicitly set the GID in order to get
the previous behaviour of creating files with GID 0.
* Merge test runner task into RestIntegTest (#60261)
* Merge test runner task into RestIntegTest
* Reorganizing Standalone runner and RestIntegTest task
* Rework general test task configuration and extension
* Fix merge issues
* use former 7.x common test configuration
- Replace immediate task creations by using task avoidance api
- One step closer to #56610
- Still many tasks are created during configuration phase. Tackled in separate steps
* Split internal distribution handling into separate internal plugin (#60295)
* Provide a proper failure if an unexpected non-JDK-bundled bwc version is requested
This commit adds compatibility testing of our JDBC driver against
different Elasticsearch versions. Although we are really testing the
forwards compatibility nature of the JDBC driver we model the testing
the same as we do existing BWC tests, that is, with the current branch
fetching the earlier versions of the artifact that is to be tested. In
this case, that's the JDBC driver itself.
Because the tests include the JDBC driver jar on its classpath we had
to change the packaging of the driver jar in order to avoid jarhell and
other conflicting dependency issues when using an old JDBC driver with
later branches. For this we simply relocate all driver dependencies in
the shadow jar under a "shadowed" package. This allows the JDBC driver
to use the correct version of Elasticsearch libs classes, while the
tests themselves use their versions. Since this required a change to the
driver jar, compatibility testing can only go back as far as that version,
which at the time of this commit is 7.8.1.
For systemd, while we are starting up, we notify the system every 15
seconds that we are still in the middle of starting up. However, if
initial startup before plugin initialization is slower than 15 seconds,
we won't ever get the chance to run the first timeout extension. This
commit sets the initial timeout to 75 seconds, up from the default 30
seconds used by systemd.
closes #60140
- Fix duplicate path deprecation by removing duplicate test resources
- fix deprecated non annotated input property in LazyPropertyList
- fix deprecated usage of AbstractArchiveTask.version
- Resolve correct test resources
- Fixes how libs in distribution are resolved
- Required minor rework on common repository setup to allow distribution projects
to resolve thirdparty artifacts
- Use Default configurations when resolving tools for distribution packaging
- Related to #57920
This commit creates a new Gradle plugin to provide a separate task name
and source set for running YAML based REST tests. The only project
converted to use the new plugin in this PR is distribution/archives/integ-test-zip,
for which the testing has been moved to :rest-api-spec, since that makes the most
sense and avoids a small but awkward change to the distribution plugin.
The remaining cases in modules, plugins, and x-pack will be handled in followups.
This plugin is distinctly different from the plugin introduced in #55896 since
the YAML REST tests are intended to be black box tests over HTTP. As such they
should not (by default) have access to the classpath for that which they are testing.
The YAML based REST tests will be moved to separate source sets (yamlRestTest).
Which source set is the target for the test resources depends on whether this
new plugin is applied. If it is not applied, the resources default to the test source
set.
Further, this introduces a breaking change for plugin developers that
use the YAML testing framework. They will now need to either use the new source set
and matching task, or configure the rest resources to use the old "test" source set that
matches the old integTest task. (The former should be preferred).
As part of this change (which is also breaking for plugin developers) the
rest resources plugin has been removed from the build plugin and now requires
either explicit application or application via the new YAML REST test plugin.
Plugin developers should be able to fix the breaking changes to the YAML tests
by adding apply plugin: 'elasticsearch.yaml-rest-test' and moving the YAML tests
under a yamlRestTest folder (instead of test)
* Replace compile configuration usage with api (#58451)
- Use java-library instead of plugin to allow api configuration usage
- Remove explicit references to runtime configurations in dependency declarations
- Make test runtime classpath input for testing convention
- required since the java-library plugin will by default not have built the jar file
- the jar file is now an explicit input of the task and gradle will ensure it is properly built
* Fix compile usages in 7.x branch
The plugin installer currently checks the fake plugins installed contain
a single jar file. This commit adds another test for a plugin which
contains multiple jar files, ensuring all jars exist in the installed
plugin.
* Remove usage of deprecated testCompile configuration
* Replace testCompile usage by testImplementation
* Make testImplementation non transitive by default (as we did for testCompile)
* Update CONTRIBUTING about using testImplementation for test dependencies
* Fail on testCompile configuration usage
* Move classes from build scripts to buildSrc
- move Run task
- move duplicate SanEvaluator
* Remove :run workaround
* Some little cleanup on build scripts on the way
In #51459 DEBUG-level logging was removed from the default log4j
configuration. However, our docker build has its own log4j configuration
which was missed in that change. This commit removes the same from the
docker log4j configuration.
relates #51459
relates #51198
This is another part of the breakup of the massive BuildPlugin. This PR
moves the code for configuring publications to a separate plugin. Most
of the time these publications are jar files, but this also supports the
zip publication we have for integ tests.
Docker informed us that for official multi-arch Docker builds, there
needs to be a single Dockerfile and build context that can be used for
each supported architecture. Therefore, rework the build to move the
relevant architecture logic into the Dockerfile, and merge the aarch64
/ x64 docker context builds.
* Simplify java home verification
At one time, all uses of java home were found through the getJavaHome
utility method on BuildPlugin. However, that was changed many
refactorings ago, but the complex support for registering a needed java home
version, which fails at configuration time, still exists. The only
remaining use of grabbing java home is within bwc tests, and must be at
runtime since that is when we have the checkout and know what version is
needed.
This commit consolidates the java home finding method into a utility
unassociated with BuildPlugin.
* fix checkstyle
* address feedback
Firstly, backport the use of tini as the Docker entrypoint. This was supposed
to have been done following #50277, but was missed. It isn't a direct backport
as this branch will continue using root as the initial Docker user.
Secondly, backport #55491 to use the official checksums when downloading tini.
After #53562, the `geo_shape` field mapper is registered within
a module. This opens the door for introducing a new `geo_shape`
field mapper into the Spatial Plugin that has doc-values support.
This is very much an extension of server's GeoShapeFieldMapper,
but with the addition of the doc values implementation.
We believe there's no longer a need to be able to disable basic-license
features completely using the "xpack.*.enabled" settings. If users don't
want to use those features, they simply don't need to use them. Having
such features always available lets us build more complex features that
assume basic-license features are present.
This commit deprecates settings of the form "xpack.*.enabled" for
basic-license features, excluding "security", which is a special case.
It also removes deprecated settings from integration tests and unit
tests where they're not directly relevant; e.g. monitoring and ILM are
no longer disabled in many integration tests.
In bash, checking whether an env variable exists uses the -z test
against a stringified env var, so the test is actually whether the
env var is empty, not necessarily whether it is undefined. We use this to test
whether JAVA_HOME is set, to determine whether the bundled jdk should be
used. On Windows, this test is an actual "undefined" check. This commit
brings the behavior on the two systems in sync, opting to allow an empty
JAVA_HOME on Windows to indicate the bundled jdk should be used.
closes #55134
Following elastic/ml-cpp#1135 there are now Linux binaries
for both x86_64 and aarch64. The code that finds the
correct binaries to ship with each distribution was
including both on every Linux distribution. This change
alters that logic to consider the architecture as well
as the operating system.
Also, there is no need to disable ML on aarch64 now that
we have the native binaries available. ML is still not
supported on aarch64, but the processes at least run up
and work at a superficial level.
Backport of #55256
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0, we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
need to ourselves use the same task avoidance mechanisms to not trigger
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
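A minimal sketch of the task-avoidance pattern referred to here, using a plain DefaultTask as a stand-in for the real forbidden-apis task type:
```
import org.gradle.api.DefaultTask;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.TaskProvider;

public class LazyForbiddenApisWiring implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // register() defers creation and configuration until the task is actually
        // needed, unlike create(), which realizes the task at configuration time.
        TaskProvider<DefaultTask> forbiddenApisMain = project.getTasks()
            .register("forbiddenApisMain", DefaultTask.class, task ->
                task.setDescription("Placeholder for the forbidden-apis check"));
        // Wiring through the TaskProvider keeps the task unrealized until execution.
        project.getTasks().named("check").configure(check -> check.dependsOn(forbiddenApisMain));
    }
}
```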
This directory interferes with notarization and removing it before we
notarize allows us to have a properly notarized
distribution. Conceptually, this directory is only needed when building
a distribution to be installed by Installer (a so-called "pkg"). Since
we are not building such distributions, and this directory interferes
with notarization, we choose to exclude it here. We do this here, rather
than in our notarization process, to ensure that what we run through CI
for testing is also what we ship to the world.
Backport of #55073.
We added tasks to build an ARM distribution and Docker image, but didn't
provide any way to run packaging tests against them. Add extra loops on
the possible Architecture values, and skip tasks that can't be run on
the current Architecture.
Apply the :distribution:archives naming convention to some of the Docker
sub-projects, so that we have a more consistent naming scheme.
Also, we've seen some examples of Docker packaging tests failing sporadically
when they try to clean up the temp directory, citing a not-empty
directory. Ensure that any running container is removed before cleaning
up the temp dir, in an effort to avoid this problem.
This commit includes a number of changes to reduce overall build
configuration time. These optimizations include:
- Removing the usage of the 'nebula.info-scm' plugin. This plugin
leverages jgit to read various pieces of VCS information. This
is mostly overkill and we have our own minimal implementation for
determining the current commit id.
- Removing unnecessary build dependencies such as perforce and jgit
now that we don't need them. This reduces our classpath considerably.
- Expanding the usage lazy task creation, particularly in our
distribution projects. The archives and packages projects create
lots of tasks with very complex configuration. Avoiding the creation
of these tasks at configuration time gives us a nice boost.
Guava was removed from Elasticsearch many years ago, but remnants of it
remain due to transitive dependencies. When a dependency pulls guava
into the compile classpath, devs can inadvertently begin using methods
from guava without realizing it. This commit moves guava to a runtime
dependency in the modules where it is needed.
Note that one special case is the html sanitizer in watcher. The third
party dep uses guava in the PolicyFactory class signature. However, only
calling a method on the PolicyFactory actually causes the class to be
loaded, a reference alone does not trigger compilation to look at the
class implementation. There we utilize a MethodHandle for invoking the
relevant method at runtime, where guava will continue to exist.
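A hedged sketch of the MethodHandle pattern mentioned here (the `sanitize` name and signature are illustrative; the point is that a reflective lookup at runtime, rather than a compile-time reference, is what touches the method whose class pulls in guava):
```
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class SanitizerInvoker {
    // Resolve the method at runtime, when guava is present on the runtime
    // classpath, instead of referencing it at compile time.
    static String sanitize(Object policyFactory, String html) throws Throwable {
        MethodHandle sanitize = MethodHandles.publicLookup().findVirtual(
            policyFactory.getClass(),
            "sanitize",
            MethodType.methodType(String.class, String.class)
        );
        return (String) sanitize.invoke(policyFactory, html);
    }
}
```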
Now that JDK 14 is available, and we are bundling it, this commit
enables us to run with helpful null pointer exceptions, which will be a
great aid in debugging.
Change how we format exceptions to only wrap them as necessary. While
the config's overall philosophy is to put items one-per-line when
wrapping, in practice this is a little cumbersome for exception lists.
This commit switches to using an onlyIf to determine if a build Docker
image task execution should occur. This is preferred since it means that
the determination is performed at task execution time, rather than
during configuration.
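A sketch of the idea (the task name and the Docker availability check are hypothetical):
```
import org.gradle.api.Plugin;
import org.gradle.api.Project;

public class DockerBuildOnlyIfPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().named("buildDockerImage").configure(task ->
            // The predicate runs at task execution time, not during configuration,
            // so the check no longer slows down or fails project configuration.
            task.onlyIf(t -> new java.io.File("/var/run/docker.sock").exists()));
    }
}
```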
This commit introduces aarch64 packaging, including bundling an aarch64
JDK distribution. We had to make some interesting choices here:
- ML binaries are not compiled for aarch64, so for now we disable ML on
aarch64
- depending on underlying page sizes, we have to disable class data
sharing
* Add REST API for ComponentTemplate CRUD
This adds the Put/Get/DeleteComponentTemplate APIs that allow inserting, retrieving, and removing
ComponentTemplateMetadata into the cluster state metadata.
These APIs are currently only available behind a feature flag system property -
`es.itv2_feature_flag_registered`.
Relates to #53101
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Handle special chars in JAVA_HOME in elasticsearch-service.bat (#52676)
* Test case for windows service where JAVA_HOME path contains spaces (#53028)
Co-authored-by: Muhammad Shaheer Akram <41253927+shaheerakr@users.noreply.github.com>
* Smarter copying of the rest specs and tests (#52114)
This PR addresses the unnecessary copying of the rest specs and allows
for better semantics for which specs and tests are copied. By default
the rest specs will get copied if the project applies
`elasticsearch.standalone-rest-test` or `esplugin` and the project
has rest tests or you configure the custom extension `restResources`.
This PR also removes the need for dozens of places where the x-pack
specs were copied by supporting copying of the x-pack rest specs too.
The plugin/task introduced here can also copy the rest tests to the
local project through a similar configuration.
The new plugin/task allows a user to minimize the surface area of
which rest specs are copied. Each project can be configured to include
only a subset of the specs (or tests). Configuring a project to only
copy the specs when actually needed should help with build cache hit
rates since we can better define what is actually in use.
However, project level optimizations for build cache hit rates are
not included with this PR.
Also, with this PR you can no longer use the includePackaged flag on the
integTest task.
The following items are included in this PR:
* new plugin: `elasticsearch.rest-resources`
* new tasks: CopyRestApiTask and CopyRestTestsTask - performs the copy
* new extension 'restResources'
```
restResources {
  restApi {
    includeCore 'foo', 'bar' // will include the core specs that start with foo and bar
    includeXpack 'baz'       // will include x-pack specs that start with baz
  }
  restTests {
    includeCore 'foo', 'bar' // will include the core tests that start with foo and bar
    includeXpack 'baz'       // will include the x-pack tests that start with baz
  }
}
```
When installing plugins from remote sources, either the Elastic download
service, or maven, a checksum file is downloaded and checked against the
downloaded zip. The current format for official plugins is to use a
sha512 checksum which includes the zip filename. This format matches
that from sha512sum, and allows using the --check argument there to
verify the checksum manually. However, when generating checksum files
with maven and gradle, the filename is not included.
This commit relaxes the requirement that the filename exist within the
sha512 checksum file for maven plugins. We continue to strictly enforce that
official plugins have the existing format of the file.
closes #52413
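For illustration, a simplified sketch of the relaxed check (not the actual plugin installer code):
```
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;

public class ChecksumFileCheck {
    // Official plugins must use the sha512sum format "<hash>  <zip filename>";
    // maven plugins may supply a bare "<hash>" with no filename.
    static String expectedSha512(Path checksumFile, String zipName, boolean officialPlugin) throws Exception {
        String[] fields = Files.readAllLines(checksumFile).get(0).trim().split("\\s+");
        if (officialPlugin && (fields.length != 2 || fields[1].equals(zipName) == false)) {
            throw new IllegalStateException("invalid checksum file for official plugin");
        }
        return fields[0].toLowerCase(Locale.ROOT);
    }
}
```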
Backport of #52525.
Closes #52503. Implement a list of `_FILE` env vars that will be used to
populate env vars with file content, instead of processing all `_FILE`
vars in the environment.
Reading the startup scripts does not elucidate how JVM options are
applied. Instead, the reader must consult the source for the JVM options
parser. This commit adds some transparency around this process so that
it is easier to understand, when reading the startup scripts, how the final JVM
options to start Elasticsearch are constructed.
* Set default ES_PATH_CONF for package scriptlets
Our packages use scriptlets to create or update the Elasticsearch
keystore as necessary when installing or upgrading an Elasticsearch
package. If these scriptlets don't work as expected, Elasticsearch may
try and fail to create or upgrade the keystore at startup time. This
will prevent Elasticsearch from starting up at all.
These scriptlets use the Elasticsearch keystore command-line tools. Like
most of our command-line tools, the keystore tools will by default get
their value for ES_PATH_CONF from a system configuration file:
/etc/sysconfig/elasticsearch for RPMs, /etc/default/elasticsearch for
debian packages. Previously, if the user removed ES_PATH_CONF from that
system configuration file (perhaps thinking that it is obsolete when
the same variable is also defined in the systemd unit file), the
keystore command-line tools would fail. Scriptlet errors do not seem to
cause the installation to fail, and for RPMs the error message is easy
to miss in command output.
This commit adds a line of bash to scriptlets that will set ES_PATH_CONF
to a default when ES_PATH_CONF is unset, rather than only when the
system configuration file is missing.
Back when the distribution launchers were compiled to target JDK 7, we
did not have access to the String#join method to space-delimit JVM
options. Since the launchers now target the same minimum JDK as
Elasticsearch itself, we now have access to this method and can replace
the use of spaceDelimitJvmOptions with String#join. This commit does
that.
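The replacement amounts to roughly a one-liner:
```
import java.util.List;

public class JvmOptionsJoining {
    // Now that the launchers target the same minimum JDK as Elasticsearch,
    // String#join replaces the hand-rolled spaceDelimitJvmOptions helper.
    static String spaceDelimited(List<String> jvmOptions) {
        return String.join(" ", jvmOptions);
    }
}
```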
This commit prepares the JvmOptionsParser to be more unit testable by
refactoring the class to have some input that it pulls from external
sources passed in as arguments. We do not change any functionality in
this commit, nor add any unit tests, we are only preparing the way.
Now that the FIPS 140 security provider is simply a test dependency
we don't need the thirdPartyAudit exceptions, but plugin-cli and
transport-netty4 do need jarHell disabled as they use the non fips
BouncyCastle security provider as a test dependency too.
This commit introduces the ability to override JVM options by adding
custom JVM options files to a jvm.options.d directory. This simplifies
administration of Elasticsearch by not requiring administrators to keep
the root jvm.options file in sync with changes that we make to the root
jvm.options file. Instead, they are not expected to modify this file but
instead supply their own in jvm.options.d. In Docker installations, this
means they can bind mount this directory in. In future versions of
Elasticsearch, we can consider removing the root jvm.options file
(instead, providing all options there as system JVM options).
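A minimal sketch of how the expansion directory could be picked up (the file suffix and ordering here are assumptions, not the exact JvmOptionsParser behaviour):
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class JvmOptionsFiles {
    // Read the root jvm.options first, then any *.options files from jvm.options.d
    // in lexicographic order, so administrators override rather than edit the root file.
    static List<Path> optionsFiles(Path configDir) throws IOException {
        List<Path> files = new ArrayList<>();
        files.add(configDir.resolve("jvm.options"));
        Path overrides = configDir.resolve("jvm.options.d");
        if (Files.isDirectory(overrides)) {
            try (Stream<Path> stream = Files.list(overrides)) {
                stream.filter(p -> p.getFileName().toString().endsWith(".options"))
                      .sorted()
                      .forEach(files::add);
            }
        }
        return files;
    }
}
```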
* Reload secure settings with password (#43197)
If a password is not set, we assume an empty string to be
compatible with previous behavior.
Only allow the reload to be broadcast to other nodes if TLS is
enabled for the transport layer.
* Add passphrase support to elasticsearch-keystore (#38498)
This change adds support for keystore passphrases to all subcommands
of the elasticsearch-keystore cli tool and adds a subcommand for
changing the passphrase of an existing keystore.
The work to read the passphrase in Elasticsearch when
loading the keystore will be addressed in a different PR.
Subcommands of elasticsearch-keystore can handle (open and create)
passphrase protected keystores
When reading a keystore, a user is prompted for a passphrase
only if the keystore is passphrase protected.
When creating a keystore, a user is allowed (default behavior) to create one with an
empty passphrase
Passphrase can be set to be empty when changing/setting it for an
existing keystore
Relates to: #32691
Supersedes: #37472
* Restore behavior for force parameter (#44847)
It turns out that the behavior of `-f` for the add and add-file sub
commands, where it would also forcibly create the keystore if it
didn't exist, was by design, although undocumented.
This change restores that behavior, auto-creating a keystore that
is not password protected if the force flag is used. The force
OptionSpec is moved to the BaseKeyStoreCommand as we will presumably
want to maintain the same behavior in any other command that takes
a force option.
* Handle pwd protected keystores in all CLI tools (#45289)
This change ensures that `elasticsearch-setup-passwords` and
`elasticsearch-saml-metadata` can handle a password protected
elasticsearch.keystore.
For setup passwords the user would be prompted to add the
elasticsearch keystore password upon running the tool. There is no
option to pass the password as a parameter as we assume the user is
present in order to enter the desired passwords for the built-in
users.
For saml-metadata, we prompt for the keystore password at all times
even though we'd only need to read something from the keystore when
there is a signing or encryption configuration.
* Modify docs for setup passwords and saml metadata cli (#45797)
Adds a sentence in the documentation of `elasticsearch-setup-passwords`
and `elasticsearch-saml-metadata` to describe that users would be
prompted for the keystore's password when running these CLI tools,
when the keystore is password protected.
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Elasticsearch keystore passphrase for startup scripts (#44775)
This commit allows a user to provide a keystore password on Elasticsearch
startup, but only prompts when the keystore exists and is encrypted.
The entrypoint in Java code is standard input. When the Bootstrap class is
checking for secure keystore settings, it checks whether or not the keystore
is encrypted. If so, we read one line from standard input and use this as the
password. For simplicity's sake, we allow a maximum passphrase length of 128
characters. (This is an arbitrary limit and could be increased or eliminated.
It is also enforced in the keystore tools, so that a user can't create a
password that's too long to enter at startup.)
In order to provide a password on standard input, we have to account for four
different ways of starting Elasticsearch: the bash startup script, the Windows
batch startup script, systemd startup, and docker startup. We use wrapper
scripts to reduce systemd and docker to the bash case: in both cases, a
wrapper script can read a passphrase from the filesystem and pass it to the
bash script.
In order to simplify testing the need for a passphrase, I have added a
has-passwd command to the keystore tool. This command can run silently, and
exit with status 0 when the keystore has a password. It exits with status 1 if
the keystore doesn't exist or exists and is unencrypted.
A good deal of the code-change in this commit has to do with refactoring
packaging tests to cleanly use the same tests for both the "archive" and the
"package" cases. This required not only moving tests around, but also adding
some convenience methods for an abstraction layer over distribution-specific
commands.
* Adjust docs for password protected keystore (#45054)
This commit adds relevant parts in the elasticsearch-keystore
sub-commands reference docs and in the reload secure settings API
doc.
* Fix failing Keystore Passphrase test for feature branch (#50154)
One problem with the passphrase-from-file tests, as written, is that
they would leave a SystemD environment variable set when they failed,
and this setting would cause elasticsearch startup to fail for other
tests as well. By using a try-finally, I hope that these tests will fail
more gracefully.
It appears that our Fedora and Ubuntu environments may be configured to
store journald information under /var rather than under /run, so that it
will persist between boots. Our destructive tests that read from the
journal need to account for this in order to avoid trying to limit the
output we check in tests.
* Run keystore management tests on docker distros (#50610)
* Add Docker handling to PackagingTestCase
Keystore tests need to be able to run in the Docker case. We can do this
by using a DockerShell instead of a plain Shell when Docker is running.
* Improve ES startup check for docker
Previously we were checking truncated output for the packaged JDK as
an indication that Elasticsearch had started. With new preliminary
password checks, we might get a false positive from ES keystore
commands, so we have to check specifically that the Elasticsearch
class from the Bootstrap package is what's running.
* Test password-protected keystore with Docker (#50803)
This commit adds two tests for the case where we mount a
password-protected keystore into a Docker container and provide a
password via a Docker environment variable.
We also fix a logging bug where we were logging the identifier for an
array of strings rather than the contents of that array.
* Add documentation for keystore startup prompting (#50821)
When a keystore is password-protected, Elasticsearch will prompt at
startup. This commit adds documentation for this prompt for the archive,
systemd, and Docker cases.
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Warn when unable to upgrade keystore on debian (#51011)
For Red Hat RPM upgrades, we warn if we can't upgrade the keystore. This
commit brings the same logic to the code for Debian packages. See the
posttrans file, which gets executed for RPMs.
* Restore handling of string input
Adds tests that were mistakenly removed. One of these tests proved
we were not handling the stdin (-x) option correctly when no
input was added. This commit restores the original approach of
reading stdin one char at a time until there is no more (-1, \r, \n)
instead of using readline(), which might return null.
* Apply spotless reformatting
* Use '--since' flag to get recent journal messages
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
It seems to me that we might be able to use journald's "--since" flag to
retrieve only log messages from the last run, and that this might be
less likely to fail due to race conditions in file deletion.
Unfortunately, it looks as if the "--since" flag has a granularity of
one second. I've added a two-second sleep to make sure that there's a
sufficient gap between the test that will read from journald and the
test before it.
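For example, the packaging-test side of this could look roughly like the following (the unit name and timestamp format are assumptions):
```
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class JournaldSince {
    // Only fetch journal messages written after the recorded cursor time,
    // instead of trying to physically clear journald's files between tests.
    static String[] journalctlCommand(LocalDateTime cursorTime) {
        String since = cursorTime.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        return new String[] { "journalctl", "-u", "elasticsearch.service", "--since", since };
    }
}
```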
* Use new journald wrapper pattern
* Update version added in secure settings request
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Ioannis Kakavas <ikakavas@protonmail.com>
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
In 2bb31fe (v0.6.0!) we added DEBUG-level logging to the default config of
action loggers "for easier debugging". This change to the default config lives
on to this day. It does not obviously make debugging any easier any more, but
it does result in a good deal of log noise sometimes. This commit removes this
special case from the default config.
Closes #51198
This change alters the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require an
additional configuration either in CI workers or locally and is
controlled by specifying -Dtests.fips.enabled=true
Adding back accidentally removed jvm option that is required to enforce
start of the week = Monday in IsoCalendarDataProvider.
Adding a `feature` to the yml test in order to skip running it in JDK8.
Commit that removed it: 398c802
Commit that backports SystemJvmOptions: c4fbda3
Relates: 7.x backport of the code that enforces CalendarDataProvider use, #48349
Backport of #50927.
Closes #49653. When using _FILE environment variables to supply values
to Elasticsearch, follow symlinks when checking that file permissions
are secure.
When installing multiple plugins at once, this commit changes the
behavior to report installed plugins as we go. In the case of failure,
we emit a message that we are rolling back any plugins that were
installed successfully, and also that they were successfully rolled
back. In the case a plugin is not successfully rolled back, we report
this clearly too, alerting the user that there might still be state on
disk they would have to clean up.
This commit allows the plugin installer to install multiple plugins in a
single invocation. The installation will be treated as a transaction, so
that all of the plugins are installed successfully, or none of the plugins
are installed.
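A rough sketch of the all-or-nothing behaviour described in these two changes (the Installer interface is a stand-in for the real plugin installer internals):
```
import java.util.ArrayList;
import java.util.List;

public class TransactionalPluginInstall {
    interface Installer {
        void install(String pluginId) throws Exception;
        void remove(String pluginId) throws Exception;
    }

    // Either every plugin installs successfully, or the ones that did are rolled back.
    static void installAll(Installer installer, List<String> pluginIds) throws Exception {
        List<String> installed = new ArrayList<>();
        try {
            for (String id : pluginIds) {
                installer.install(id);
                installed.add(id);
                System.out.println("-> Installed " + id);
            }
        } catch (Exception installFailure) {
            System.out.println("Rolling back plugins that were installed: " + installed);
            for (String id : installed) {
                try {
                    installer.remove(id);
                    System.out.println("-> Rolled back " + id);
                } catch (Exception rollbackFailure) {
                    // Report clearly: there may still be state on disk to clean up by hand.
                    System.out.println("-> FAILED to roll back " + id + ": " + rollbackFailure.getMessage());
                }
            }
            throw installFailure;
        }
    }
}
```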
We renamed README.textile to README.asciidoc but a bunch of tests and
the package build itself still pointed at the old name. This switches
them to the new name.
A previous commit taught Elasticsearch packages to respect ES_PATH_CONF
during installs. Missed in that commit was respecting ES_PATH_CONF on
upgrades. This commit does that. Additionally, while ES_PATH_CONF is not
currently used in pre-install, this commit adds respect to the preinst
script in case we do in the future.
Backport of #49612.
The current Docker entrypoint script picks up environment variables and
translates them into -E command line arguments. However, since any tool
executed via `docker exec` doesn't run the entrypoint, this results in
a poorer user experience.
Therefore, refactor the env var handling so that the -E options are
generated in `elasticsearch-env`. These have to be appended to any
existing command arguments, since some CLI tools have subcommands and
-E arguments must come after the subcommand.
Also extract the support for `_FILE` env vars into a separate script, so
that it can be called from more than once place (the behaviour is
idempotent).
Finally, add noop -E handling to CronEvalTool for parity, and support
`-E` in MultiCommand before subcommands.
We respect ES_PATH_CONF everywhere except package install. This commit
addresses this by respecting ES_PATH_CONF when installing the RPM/Debian
packages.
This commit tweaks the workaround introduced in #49211 to support
Gradle 6.0. In the workaround, we specifically override the address
the Gradle daemon binds to by passing the desired address via the
OPENSHIFT_IP environment variable. This works fine for builds using
Gradle 6.0, but for older Gradle versions this causes issues with
inter-daemon communication, specifically when we build BWC branches
not on Gradle 6.0. The fix here is to strip that environment variable
out when building the target BWC branch if that branch is on an
older Gradle version.
This is all temporary and will be removed when this bug fix is released
in Gradle 6.1.
Closes #50025
This upgrade required a few significant changes. Firstly, the build
scan plugin has been renamed, and changed to be a Settings plugin rather
than a project plugin so the declaration of this has moved to our
settings.gradle file. Second, we were using a rather old version of the
Nebula ospackage plugin for building deb and rpm packages, the migration
to the latest version required some updates to get things working as
expected as we had some workarounds in place that are no longer
applicable with the latest bug fixes.
(cherry picked from commit 87f9c16e2f8870e3091062cde37b43042c3ae1c5)
The docker build task depends on the docker context being built, but it
was not explicitly setup as an input. This commit adds the task as an
input to the docker build.
relates #49613
Backport of #49079. Reimplement a number of the tests from
elastic/elasticsearch-docker.
There is also one Docker image fix here, which is that two of the provided
config files had different file permissions to the rest. I've fixed this
with another RUN chmod while building the image, and adjusted the
corresponding packaging test.
This commit sets an output marker file for the docker build tasks so
that they can be tracked as up to date. It also fixes the docker build
context task to omit the build date as an input property, which always
left the task out of date.
relates #49359
Backport of #47573.
Closes #43603. Allow environment variables to be passed to ES in a Docker
container via a file, by setting an environment variable with the `_FILE`
suffix that points to the file with the intended value of the env var.
The Apache Commons Daemon has some helpful features for Java
applications, like nice little text boxes for min heap, max heap, and
thread stack size. Our elasticsearch-service.bat script parses those
values out of the ES_JAVA_OPTS environment variable and provides them to
the Apache Commons Daemon invocation command in order to provide
sensible defaults. However, we failed to remove those values from the
ES_JAVA_OPTS environment variable, which meant they ended up in the
"Java Options" text box and would, from there, override whatever the
user put in the specific boxes for heap size or thread stack size.
This commit modifies the loop that parses ES_JAVA_OPTS to construct a
new environment variable containing only the values that aren't parsed
out for heap size or thread stack size, then uses that new environment
variable in the commons daemon invocation command.
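The fix is essentially a filtering pass; a hedged Java rendering of the batch-script logic:
```
import java.util.ArrayList;
import java.util.List;

public class DaemonOptionsSplit {
    // Heap and thread stack settings feed the daemon's dedicated fields, so they are
    // removed from the generic "Java Options" value rather than being passed twice.
    static List<String> remainingOptions(List<String> esJavaOpts) {
        List<String> remaining = new ArrayList<>();
        for (String option : esJavaOpts) {
            if (option.startsWith("-Xms") || option.startsWith("-Xmx") || option.startsWith("-Xss")) {
                continue; // handled by the min heap / max heap / thread stack size fields
            }
            remaining.add(option);
        }
        return remaining;
    }
}
```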
JDK 14 has removed CMS. This commit restricts the support for CMS to JDK
8 through JDK 13, and defaults to G1 GC on JDK 14. We will revisit all
defaults in the future, but this ensures that we run with a
properly-configured garbage collector on JDK 14+.
This fixes a regression introduced in #42042. The logic here was
mistakenly inverted such that we only run these tests in a FIPS JVM
which is the opposite of what we intend.
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
This commit fixes the names of the UBI-based Docker build contexts to
lift the ubi component of the name into the archive base name, instead
of the classifier.
Backport of #46599 and #47640. Add packaging tests for Docker.
* Introduce packaging tests for Docker (#46599)
Closes #37617. Add packaging tests for our Docker images, similar to what
we have for RPMs or Debian packages. This works by running a container and
probing it e.g. via `docker exec`. Tests can also be run in Vagrant, by
exporting the Docker images to disk and loading them again in VMs. Docker
is installed via `Vagrantfile` in a selection of boxes.
* Only define Docker pkg tests if Docker is available (#47640)
Closes #47639, and unmutes tests that were muted in b958467.
The Docker packaging tests were being defined irrespective of whether
Docker was actually available in the current environment. Instead,
implement exclude lists so that in environments where Docker is not
available, no Docker packaging tests are defined. For CI hosts, the build
checks `.ci/dockerOnLinuxExclusions`. The Vagrant VMs can define the
extension property `shouldTestDocker` to opt in to packaging
tests.
As part of this, define a separate utility class for checking Docker,
and call that instead of defining checks in-line in BuildPlugin.groovy
This commit fixes the name of the UBI-based Docker image build contexts
to include "7" (to set us up for the future where we are likely to have
a ubi8-based image).
This commit registers the UBI-based Docker image projects in the build
so that their assemble tasks are executed when the top-level assemble
task is executed.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
This commit simplifies the JDK copy specification in the archives build
so that it's a single line as opposed to an if/else with a repeated
body. This approach reduces the maintenance cost of this code.