Backport of #51526.
Previously the formatter was breaking simple if/else statements (i.e.
those without braces) onto separate lines, which could be fragile because
the formatter cannot also introduce braces. Instead, keep such expressions
on the same line.
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
This change updates the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When the runtime Java is 11 or higher, using the BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit tests and internal
clusters, and the relevant jars are copied explicitly to the lib
directory for testclusters used in REST tests.
- When the runtime Java is 8, using the BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require any
additional configuration, either on CI workers or locally, and is
controlled by specifying -Dtests.fips.enabled=true.
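As a minimal sketch of the `==` override semantics (file paths and the class below are illustrative, not part of the build):

```java
import java.security.Provider;
import java.security.Security;

// Run with (paths are placeholders):
//   java -Djava.security.properties==/path/to/fips_java.security \
//        -Djava.security.policy==/path/to/fips_java.policy VerifyFipsProviders
// The double '=' makes the JVM replace its default security properties and
// policy rather than append to them.
public class VerifyFipsProviders {
    public static void main(String[] args) {
        // Expect BCFIPS (and BCJSSE on Java 11+) at the head of the list when
        // the JVM has been configured in FIPS approved mode.
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + ": " + provider.getInfo());
        }
    }
}
```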
If a worktree is used, say for 7.x, and packaging tests are run, the
build within the VM will fail due to the parent checkout not being
accessible. This is because the path of the worktree is for the host
system, not the VM. This commit makes the git info unknown, just as if
the .git directory did not exist.
When building clusters for integration tests, today we install plugins
sequentially. We recently introduced the ability to install plugins in a
single invocation of the install plugin command. Using this can save
substantial time starting up JVMs. This commit changes the build
infrastructure to install multiple plugins at once when building
clusters for integration tests.
For the docs integration tests in particular, where we install many
plugins, this change makes a substantial difference. On my laptop, prior
to this change, installing the plugins sequentially took 115
seconds. After this change, it takes 14 seconds.
Tests in BuildPluginIT copy the workspace but exclude the build
directories based on whether the directory string representation
includes `/build/` or not. This check is problematic if the directory
of the project has a parent directory also named `build`. The change in
this commit checks to see if the path relative to the project directory
has any path parts equal to `build`.
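A sketch of that check (class and method names are illustrative, not the actual test code):

```java
import java.nio.file.Path;

final class BuildDirFilter {
    // Returns true only if some path element *under* the project directory is
    // literally named "build"; a parent directory named "build" no longer trips it.
    static boolean isInBuildDir(Path projectDir, Path file) {
        Path relative = projectDir.relativize(file);
        for (Path part : relative) {  // iterate name elements, not the string form
            if ("build".equals(part.toString())) {
                return true;
            }
        }
        return false;
    }
}
```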
Backport / reimplementation of #50786 on 7.x.
Opt-in `buildSrc` for automatic formatting. This required a config tweak
in order to pick up all the Java sources, and as a result more files are
now found in the Enrich plugin that were previously missed.
I also moved the 2 Java files in `buildSrc/src/main/groovy` into the Java
directory, which required some follow-up changes.
The packed-refs support was using the original .git path; it now uses
the real .git directory after the reference from the worktree has been followed.
Relates #47464
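A sketch of the resolution step, assuming a plain-text reading of git's on-disk format (not the actual plugin code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

final class GitDirResolver {
    // In a worktree, ".git" is a plain file containing "gitdir: /real/path";
    // packed-refs lives under the real git directory, not the worktree path.
    static Path resolveGitDir(Path dotGit) throws IOException {
        if (Files.isDirectory(dotGit)) {
            return dotGit;  // ordinary checkout: .git is the git directory itself
        }
        String firstLine = Files.readAllLines(dotGit).get(0);
        return Paths.get(firstLine.substring("gitdir: ".length()).trim());
    }
}
```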
This commit adds a cliSetup command that can be used to run arbitrary
bin scripts to set up a test cluster. This is the same as what was
previously called setupCommands in cluster formation tasks.
closes #50382
This commit adds a special run.datadir system property that may be
passed to `./gradlew run` which sets the root data directory used by the
task. While normally overriding the data path is not allowed for test
clusters, it is useful when experimenting with the run task.
closes #50338
This commit ensures the global info plugin is applied, which supplies
the isInternal flag used to determine whether distro download looks for
bwcVersions.
relates #50230
The testclusters shutdown code was killing child processes
of the ES JVM before the ES JVM. This causes any running
ML jobs to be recorded as failed, as the ES JVM notices that
they have disconnected from it without being told to stop,
as they would if they crashed. In many test suites this
doesn't matter because the test cluster will never be
restarted, but in the case of upgrade tests it makes it
impossible to test what happens when an ML job is running
at the time of the upgrade.
This change reverses the order of killing the ES process
tree such that the parent processes are killed before their
children. A list of children is stored before killing the
parent so that they can subsequently be killed (if they
don't exit by themselves as a side effect of the parent
dying).
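A minimal sketch of the reversed ordering using the JDK's ProcessHandle API (Java 9+); the class and method names are illustrative, not the actual testclusters code:

```java
import java.util.List;
import java.util.stream.Collectors;

final class ClusterShutdown {
    // Kill the parent before its children, capturing the list of children
    // first: once the parent dies they are re-parented and can no longer be
    // found by walking down from it.
    static void stop(ProcessHandle esProcess) {
        List<ProcessHandle> children = esProcess.children().collect(Collectors.toList());
        esProcess.destroy();            // orderly stop, so ML jobs are told to exit
        esProcess.onExit().join();
        for (ProcessHandle child : children) {
            if (child.isAlive()) {      // many exit by themselves when the parent dies
                child.destroy();
            }
        }
    }
}
```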
Backport of #50175
This commit sets xpack.security.ssl.diagnose.trust to false in all
the nodes of our TestClusters when running integTest. This is needed
in 7.x because setting xpack.security.ssl.diagnose.trust to true
wraps SunJSSE TrustManager with our own DiagnosticTrustManager and
this is not allowed when SunJSSE is in FIPS mode.
An alternative would be to set `xpack.security.fips.enabled` to
true which would also implicitly disable
xpack.security.ssl.diagnose.trust but would have additional effects
(would require that we set PBKDF2 as the password hashing algorithm in
all test clusters, would prohibit using JKS keystores in nodes even
if the relevant tests have been muted in FIPS mode, etc.)
The testclusters registry is a singleton extension element added to the
root project which tracks which test clusters are used throughout the
multi-project build. But having the same name as the extension used to
configure test clusters within each subproject breaks using a single
project for an external plugin. This commit renames the registry
extension to make it unique.
closes #49787
This commit goes from using a JvmArgumentProvider to using the normal
Test task APIs for passing the `HeapDumpOnOutOfMemoryError` JVM
argument. This makes it simpler for subprojects, such as lang-painless,
to override this setting if necessary.
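A minimal sketch of the plain Test task API in question, written against Gradle's Java API (the plugin class is illustrative):

```java
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.testing.Test;

// Passing the flag through the ordinary jvmArgs API means a subproject such
// as lang-painless can simply reconfigure jvmArgs on its own Test tasks.
public class HeapDumpConventionPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().withType(Test.class).configureEach(test ->
            test.jvmArgs("-XX:+HeapDumpOnOutOfMemoryError"));
    }
}
```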
Closes #49117
(cherry picked from commit e97c38ff8e862abdc1d7816c66f9869ed216031f)
We have a long history of advancing the required compiler to the newest
JDK. JDK 13 has been with us for a while, but we were blocked from
upgrading since Gradle was not compatible with JDK 13. With the
advancement in our project to Gradle 6 which supports JDK 13, we can now
advance our minimum compiler version. This commit updates the minimum
compiler version to JDK 13.
The elasticsearch-node tools allow manipulating the on-disk cluster state. The tool is currently
unaware of plugins and will therefore drop custom metadata from the cluster state once the
state is written out again (as it skips over the custom metadata that it can't read). This commit
preserves unknown customs when editing on-disk metadata through the elasticsearch-node
command-line tools.
This upgrade required a few significant changes. Firstly, the build
scan plugin has been renamed, and changed to be a Settings plugin rather
than a project plugin so the declaration of this has moved to our
settings.gradle file. Second, we were using a rather old version of the
Nebula ospackage plugin for building deb and rpm packages; migrating
to the latest version required some updates to get things working as
expected, as we had workarounds in place that are no longer
applicable with the latest bug fixes.
(cherry picked from commit 87f9c16e2f8870e3091062cde37b43042c3ae1c5)
Move BuildParams class to 'minimumRuntime' source set to retain compatibility
with build-tools for builds using a Java 8 runtime.
Closes #49766
(cherry picked from commit 1059f823acdfa7a2f1f9bff21c7256dae4f3e23c)
When external plugin authors use build-tools, their integ tests depend
on the integ-test-zip artifact. However, this dependency was broken in
7.5.0 by accidentally removing the `@zip` qualifier on the maven
dependency, which works around the fact that the pom for the integ-test-zip
claims the artifact is a pom instead of zip packaging. This commit
restores the workaround of using `@zip` until the pom can be fixed.
closes #49787
This rewrites long sort as a `DistanceFeatureQuery`, which can
efficiently skip non-competitive blocks and segments of documents.
Depending on the dataset, the speedups can be 2 - 10 times.
The optimization can be disabled with setting the system property
`es.search.rewrite_sort` to `false`.
Optimization is skipped when an index has 50% or more data with
the same value.
Optimization is done through:
1. Rewriting sort as `DistanceFeatureQuery` which can
efficiently skip non-competitive blocks and segments of documents.
2. Sorting segments according to the primary numeric sort field (#44021).
This allows skipping non-competitive segments.
3. Using collector manager.
When we optimize sort, we sort segments by their min/max value.
As a collector expects to have segments in order,
we cannot use a single collector for sorted segments.
Instead we use a collector manager, where a dedicated collector
is created for every segment.
4. Using Lucene's shared TopFieldCollector manager (see the sketch
after this list). This collector manager is able to exchange the
minimum competitive score between collectors, which allows us to
efficiently skip whole segments that don't contain competitive scores.
5. When an index is force-merged to a single segment, #48533 (interleaving
old and new segments) allows for this optimization as well,
as blocks with non-competitive docs can be skipped.
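A rough sketch of point 4 against Lucene 8's public API (the sort field name and hit counts are assumptions for illustration, not the actual Elasticsearch code):

```java
import java.io.IOException;

import org.apache.lucene.search.CollectorManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopFieldCollector;
import org.apache.lucene.search.TopFieldDocs;

final class SharedSortExample {
    // The shared manager lets per-segment collectors exchange the minimum
    // competitive value, so whole non-competitive segments can be skipped.
    static TopFieldDocs topDocs(IndexSearcher searcher, Query query) throws IOException {
        Sort sort = new Sort(new SortField("timestamp", SortField.Type.LONG));
        CollectorManager<TopFieldCollector, TopFieldDocs> manager =
            TopFieldCollector.createSharedManager(sort, 10, null, 1000);
        return searcher.search(query, manager);
    }
}
```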
Backport for #48804
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
Add a mirror of the maven repository of the shibboleth project
and upgrade opensaml and related dependencies to the latest
available version.
Resolves: #44947
This commit introduces a workaround for an issue related to our recent
notarization of distributions starting with the 6.8.5 release. An
unintended side effect of notarization was that the file entries of the
release tar all have a `./` prefix in the path. This causes a number of
issues, not least of which is that our Gradle extract tasks end up
copying an empty fileset to the destination directory. The workaround
here is simply to remove the leading `./` path segment from each file
when performing the extraction. For more details see this issue:
https://github.com/elastic/elasticsearch/issues/49417
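A sketch of the stripping logic using commons-compress (illustrative, not the actual Gradle task code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.GZIPInputStream;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

final class StripDotSlashExtractor {
    // Extracts a .tar.gz, dropping the leading "./" that notarization added
    // to every entry so files land directly under the destination directory.
    static void extract(Path tarGz, Path dest) throws IOException {
        try (InputStream in = new GZIPInputStream(Files.newInputStream(tarGz));
             TarArchiveInputStream tar = new TarArchiveInputStream(in)) {
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                String name = entry.getName();
                if (name.startsWith("./")) {
                    name = name.substring(2);
                }
                if (name.isEmpty()) {
                    continue;  // the bare "./" root entry
                }
                Path target = dest.resolve(name);
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(tar, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```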
The test task is configured to use the runtime java version, but there
are issues with the version of groovy used by gradle pre 6.0. In order
to workaround this, we use the Gradle JDK to execute the build-tools
tests.
Closes #49404, closes #49253
Tasks intending to use a particular java home provided by JAVA<N>_HOME
use the getJavaHome method, which verifies the given java home is
available, or will be if the task will run. However, the verification
logic was broken, in addition to unnecessarily delaying retrieving the
java home until runtime. This commit fixes the verification logic to run
either at configuration time, delaying verification, or at runtime, which
immediately checks whether the java home is available.
closes #49153
This PR adds build configuration to use the `jdk-download` plugin with
unit tests when no runtime java is configured externally.
It's a first part in a longer chain of changes described in #40531.
This commit upgrades the JDK that is bundled with Windows from 13+33 to
13.0.1+9. Our other platforms have previously been upgraded but Windows
was delayed because the artifacts were not available at the time that we
made the previous upgrade.
Relates #48587
The bats tests require several distributions to all be built into a
single directory. The addition of docker packaging tests now cause the
bats tests to depend on docker, even though docker is not used there.
This commit filters out docker distributions from those that bats
depends on.
The RunTask is responsible for logging output from nodes to the console
and also stays active since we want the cluster to keep running.
However, the implementation of the logging and waiting resulted in a
spin loop that continually polls for data to have been written to one
of the nodes' output files. On my laptop, this causes an idle
invocation of `gradle run` to consume an entire core.
The JDK provides a method to be notified of changes to files through
the use of a WatchService. While a WatchService based implementation
for logging and waiting works, a delay of up to ten seconds is
encountered when running on macOS. This is due to the lack of a native
WatchService implementation that uses kqueue or FSEvents; the current
WatchService implementation in the JDK uses polling with a default
interval of ten seconds. While the interval can be changed
programmatically, it is not an acceptable solution due to the need to
access the com.sun.nio.file.SensitivityWatchEventModifier enum, which
is in an internal package.
The change in this commit instead introduces a check to see if any data
was available to read and log. If no data is available in any of the
node output files, the thread sleeps for 100ms. This is enough time to
prevent consuming large amounts of cpu while still providing output to
the console in a timely fashion.
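A sketch of the check-then-sleep loop (reader setup and the termination condition of the real task are omitted; names are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.List;

final class NodeOutputLogger {
    // Forwards node output to the console; sleeps only when no node had data,
    // keeping output timely without the busy spin that burned an entire core.
    static void follow(List<BufferedReader> nodeOutputs) throws IOException, InterruptedException {
        while (true) {  // the real task also stops when the cluster shuts down
            boolean sawData = false;
            for (BufferedReader reader : nodeOutputs) {
                while (reader.ready()) {
                    System.out.println(reader.readLine());
                    sawData = true;
                }
            }
            if (sawData == false) {
                Thread.sleep(100);  // yield the cpu until more output arrives
            }
        }
    }
}
```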
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
Backport of #48450.
Make a number of changes so that code in the `server` directory is more
resilient to automatic formatting. This covers:
* Reformatting multiline JSON to embed whitespace in the strings
* Move some comments around so they aren't auto-formatted to a strange
place. This also required moving some `&&` and `||` operators from the
end-of-line to start-of-line.
* Add helper method `reformatJson()`, to strip whitespace from a JSON
document using XContent methods. This is sometimes necessary where
a test is comparing some machine-generated JSON with an expected
value.
Also, `HyperLogLogPlusPlus.java` is now excluded from formatting because it
contains large data tables that don't reformat well with the current settings,
and changing the settings would be worse for the rest of the codebase.
Backport of #48898.
We no longer configure distributions for prior versions for Docker. This
is because doing so prompts Gradle to try and resolve the Docker
dependencies, which doesn't work as they can't be downloaded via Ivy
(configured in DistributionDownloadPlugin). Since we need these for the
BATS upgrade tests, and those tests only cover .rpm and .deb, it's OK to
omit creating such distributions in the first place. We may need to
revisit this in the future, to allow upgrade testing using Docker
containers.
Backport of #48883.
Per elastic/infra#15864, the Elasticsearch CI images are failing due to
a packer_cache failure. This is because Gradle is trying to resolve
a `.docker` file through the Ivy repository, which doesn't work. Disable
the Docker tests again until we figure out the way forward.
Backport of #46599 and #47640. Add packaging tests for Docker.
* Introduce packaging tests for Docker (#46599)
Closes #37617. Add packaging tests for our Docker images, similar to what
we have for RPMs or Debian packages. This works by running a container and
probing it e.g. via `docker exec`. Test can also be run in Vagrant, by
exporting the Docker images to disk and loading them again in VMs. Docker
is installed via `Vagrantfile` in a selection of boxes.
* Only define Docker pkg tests if Docker is available (#47640)
Closes #47639, and unmutes tests that were muted in b958467.
The Docker packaging tests were being defined irrespective of whether
Docker was actually available in the current environment. Instead,
implement exclude lists so that in environments where Docker is not
available, no Docker packaging tests are defined. For CI hosts, the build
checks `.ci/dockerOnLinuxExclusions`. The Vagrant VMs can define the
`shouldTestDocker` extension property to opt in to packaging tests.
As part of this, define a separate utility class for checking Docker,
and call that instead of defining checks in-line in BuildPlugin.groovy.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
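A minimal sketch of the idea (field and method names are illustrative, not the actual BuildParams API):

```java
public final class BuildParams {
    private static Boolean isSnapshotBuild;
    private static Boolean isInternal;

    // Called once by the global build info plugin during configuration.
    public static void init(boolean snapshot, boolean internal) {
        isSnapshotBuild = snapshot;
        isInternal = internal;
    }

    public static boolean isSnapshotBuild() {
        return value("isSnapshotBuild", isSnapshotBuild);
    }

    public static boolean isInternal() {
        return value("isInternal", isInternal);
    }

    // Fails loudly on uninitialized or eager access instead of returning null.
    private static <T> T value(String name, T value) {
        if (value == null) {
            throw new IllegalStateException("Build parameter '" + name
                + "' was accessed before it was initialized");
        }
        return value;
    }
}
```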
This commit eliminates some custom logic we have in place for post-hoc
cleanup of POM files generated by Gradle. There were two main issues this
logic was meant to address:
First, for dependencies marked as `transitive = false`, Gradle by
default creates a "wildcard" exclusion in the generated POM file. It
turns out that Ivy didn't handle these types of exclusions well, even
though they are perfectly valid and dealt with by Gradle and Maven as
expected. We've since confirmed that this issue is indeed resolved in
the most recent Ivy release (2.5.0-rc1) so going forward the suggestion
to folks consuming Elasticsearch dependencies with Ivy will be to use
this version.
Second, earlier versions of Gradle would incorrectly assign compile
dependencies to the "runtime" scope in the published POM file. This could
cause issues if the dependencies were indeed needed at compile time
because their APIs were exposed. This has since been fixed and these
dependencies are correctly marked as "compile" scope in the POM.
Since these two issues have been resolved in their respective projects
we can eliminate this logic and all the supporting code, such as having
to create lots of "internal" configurations for tracking transitive
dependencies.
This commit simplifies and standardizes our usage of the Gradle Shadow
plugin to conform more to plugin conventions. The custom "bundle" plugin
has been removed as it's not necessary and performs the same function
as the Shadow plugin's default behavior with existing configurations.
Additionally, this removes the creation of a "nodeps" artifact,
which is unnecessary because by default project dependencies will in
fact use the non-shadowed JAR unless explicitly depending on the
"shadow" configuration.
Finally, we've cleaned up the logic used for unit testing, so we are
now correctly testing against the shadow JAR when the plugin is applied.
This better represents a real-world scenario for consumers and provides
better test coverage for incorrectly declared dependencies.
(cherry picked from commit 3698131109c7e78bdd3a3340707e1c7b4740d310)
This commit bumps the bundled JDK to 13.0.1+9. Since AdoptOpenJDK did
not release 13.0.1+9 for Windows, this commit also enables the
bundled JDK version to vary by platform.
Reverting the change that introduced IsoLocal.ROOT and introducing
IsoCalendarDataProvider, which defaults the start of the week to Monday
and requires a minimum of 4 days in the first week of a year. This
extension uses the Java SPI mechanism and provides defaults for
Locale.ROOT only.
It requires the JVM property java.locale.providers to be set to SPI,COMPAT.
closes #41670
backport #48209
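A sketch of such a provider in the spirit of IsoCalendarDataProvider (simplified; the actual class may differ):

```java
import java.util.Calendar;
import java.util.Locale;
import java.util.spi.CalendarDataProvider;

// Registered via META-INF/services/java.util.spi.CalendarDataProvider and
// only consulted when the JVM runs with -Djava.locale.providers=SPI,COMPAT.
public class IsoCalendarDataProvider extends CalendarDataProvider {

    @Override
    public int getFirstDayOfWeek(Locale locale) {
        return Calendar.MONDAY;  // ISO-8601: weeks start on Monday
    }

    @Override
    public int getMinimalDaysInFirstWeek(Locale locale) {
        return 4;                // ISO-8601: first week needs at least 4 days
    }

    @Override
    public Locale[] getAvailableLocales() {
        return new Locale[] { Locale.ROOT };  // defaults for Locale.ROOT only
    }
}
```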
* Always publish a build scan in CI
This PR changes the build scan configuration to always publish a build
scan when running in our CI.
We should already be passing these env vars into the Vagrant VM so this
will make it produce a build scan too.
The old properties to accept build scan ToS on the public server are
thus no longer relevant and will be cleaned up from the Jenkins config
once this is merged.
* Pass env vars to vagrant VM
* Enable running in parallel in the VM
* Add job name and build number as custom values
The classpath for some projects could outgrow the max allowed command
line on Windows. Using an env var is not foolproof, but gives more
breathing room.
This commit removes the option to change the netty system properties to
re-enable direct buffer pooling. It also removes the need for us to
disable the buffer pooling in the system properties file. Instead, we
programmatically create an allocator that is used by our networking
layer.
This commit does introduce an Elasticsearch property which allows the
user to fall back on the netty default allocator. If they choose this
option, they can configure the default allocator how they wish using the
standard netty properties.
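A sketch of the selection (the property name and the heap-only choice below are assumptions for illustration, not the exact Elasticsearch logic):

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

final class AllocatorChoice {
    // Honor the escape hatch; otherwise build an allocator ourselves,
    // with 'false' meaning "do not prefer direct buffers".
    static ByteBufAllocator create() {
        if (Boolean.parseBoolean(System.getProperty("es.unsafe.use_netty_default_allocator"))) {
            return ByteBufAllocator.DEFAULT;  // netty decides, per its own properties
        }
        return new UnpooledByteBufAllocator(false);
    }
}
```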
Before this change one needed to re-start debugging several times, as we
launched multiple JVMs in debug mode.
With this option the IDE can re-launch and listen for
connections again, leading to a more pleasant experience.
The distribution download plugin which handles finding built
distributions for testing currently only knows how to find locally built
snapshots. When an external Elasticsearch plugin uses build-tools, these
snapshots do not exist. This commit extends the download plugin so it
pulls from the Elastic snapshots service when used outside of the
Elasticsearch repository.
closes #47123
* GlobalBuildInfo plugin searches packed references
In recent versions of Git, references may be packed in a packed-refs
file. If this happens, Gradle will need to look in that file to find
build information.
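A sketch of looking a ref up in that file, following git's documented packed-refs format (not the actual plugin code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class PackedRefs {
    // Lines starting with '#' are comments and '^' lines are peeled tag
    // objects; the rest are "<sha> <refname>". Returns null if not packed.
    static String find(Path gitDir, String refName) throws IOException {
        Path packedRefs = gitDir.resolve("packed-refs");
        if (Files.exists(packedRefs) == false) {
            return null;
        }
        for (String line : Files.readAllLines(packedRefs)) {
            if (line.startsWith("#") || line.startsWith("^")) {
                continue;
            }
            String[] parts = line.split(" ", 2);
            if (parts.length == 2 && parts[1].equals(refName)) {
                return parts[0];
            }
        }
        return null;  // caller falls back to the loose ref file
    }
}
```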
We fixed warnings related to task input and outputs in #45098.
This particular input was not considered: a warning was present for it,
and Gradle didn't use it as part of the task inputs.
As soon as we fixed that, Gradle started considering it an input and
enforced that it exists.
With this change we make it optional, as the task can work both with
and without this directory.
In order to work with external elasticsearch plugins, some parts of
build-tools need to know when the current build is part of the
elasticsearch project or an external build. We identify these "internal"
builds by a marker in our buildSrc classpath. However, some build-tools
integ tests need to override this flag to test both the external and
internal behavior.
This commit moves the storage of the flag indicating whether we are
running in an internal build to global build info. This will allow
testkit projects to override the flag.
The global build info plugin prints high level information about the
build, including the test seed. However, BuildPlugin sets up the test
seed, which creates an odd implicit dependency on it. This commit moves
the initialization of the testSeed property into the global build info.
This commit adds a thread filter for gradle unit tests which omits
threads gradle may create that we have no control over shutting down.
The current example of this is for project.exec which gradle pools.
closes #47417
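A sketch of such a filter using the randomized-testing API (the name-prefix check is illustrative):

```java
import com.carrotsearch.randomizedtesting.ThreadFilter;

public class GradleThreadsFilter implements ThreadFilter {
    @Override
    public boolean reject(Thread t) {
        // Returning true means "ignore this thread" in leak detection; Gradle
        // owns these pooled threads (e.g. from project.exec) and shuts them
        // down itself.
        return t.getName().startsWith("Exec process");
    }
}
```

Test classes can then opt in via `@ThreadLeakFilters(filters = GradleThreadsFilter.class)`.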
* Convert RunTask to use testclusters, remove ClusterFormationTasks
This PR adds a new RunTask and a way for it to start a
testclusters cluster out of band and block on it to replace
the old RunTask that used ClusterFormationTasks.
With this we can now remove ClusterFormationTasks.
* Use version-specific distribution folders so we don't need to clean up (#46539)
* Retry deleting distro dir on windows
When restarting the cluster we clean up old distribution files that might
still be in use by the OS.
Windows closes resources of dead processes asynchronously, so we do a
couple of retries to get around it.
Closes #46014
* Avoid having to delete the distro folder.
* Remove the use of ClusterFormationTasks from RestTestTask (#47022)
This PR removes a use-case of the ClusterFormationTasks and converts a
project that flew under the radar so far.
There's probably more clean-up possible here, but for now the goal is
to be able to remove that code after `RunTask` is also updated.
* Migrate some 7.x only projects
* Bwc testclusters all (#46265)
Convert all bwc projects to testclusters
* Fix bwc versions config
* WIP fix rolling upgrade
* Fix bwc tests on old versions
* Fix rolling upgrade
This PR makes the necessary adaptations to the tests and adds a PowerShell script to
invoke the OS tests on GCP instances connected as CI workers.
Also noticed that logs were not being produced by the tests and that these were not using log4j, so fixed that too.
One of the difficulties in working on these tests was that they just stalled with no indication of where the problem was.
To ease debugging, after Process Explorer suggested that the tests were running some commands, we now have multiple timeouts: one for the tests (which will generate a thread dump) and one for individual commands (which bails with the command being run and the output and error so far) to make it easier to see what went wrong.
The tests were blocking because apparently the pipes to the sub-process were not closing, so the threads were blocking on them and we were blocking indefinitely on the join. I'm not sure why this doesn't happen in Vagrant, but we now properly deal with it.