This commit migrates the RestIntegTestTask from groovy to Java.
No changes to logic should be included, however the following changes
are needed:
* Move Fixture interface to Java (Java cannot depend on Groovy classes)
* Support lazy evaluation of non-input System parameters (cannot use Groovy strings; see the sketch after this list)
* Use constants for system property names
* Remove dead System property pass through code (the build plugin does this already)
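Because the Java version of the task cannot use lazy Groovy GStrings, non-input system
parameters have to be resolved at execution time another way. A minimal sketch of that
idea using Gradle's CommandLineArgumentProvider (the property name and supplier are
illustrative, not the actual task code):
```
// Hypothetical stand-in for a value that only becomes known at execution time
// (for example the address of a started test cluster).
def clusterUriSupplier = { -> 'localhost:9200' }

tasks.named('integTest').configure {
  // asArguments() is invoked when the test JVM is forked, so the system
  // property value is resolved lazily rather than at configuration time.
  jvmArgumentProviders.add({
    ['-Dtests.rest.cluster=' + clusterUriSupplier()]
  } as CommandLineArgumentProvider)
}
```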
Drop a nasty regex in our checkstyle config that I wrote a long time ago
in favor of a checkstyle extension. This is better because:
* It is faster. It saves a little more than a minute across the entire
build.
* It is easier to read. Who knew 100 lines of Java would be easier to
read than a regex, but it is.
* It has tests.
Backport of #54276.
Move and rename formatter config file, so that it is easier for
Eclipse users to import.
Also switch to an opt-out list for formatting. Instead of explicitly
listing projects that should be formatted, list projects
that should not be formatted. This means that any new projects will
automatically be formatted and checked.
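A rough sketch of what the opt-out looks like in the build (assuming Spotless is the
formatter in use and already on the build classpath; the excluded project path is
illustrative):
```
// Projects listed here are NOT formatted; everything else is opted in, so any
// new project is automatically formatted and checked.
def projectPathsToExclude = [':example:legacy-unformatted']

subprojects { proj ->
  if (projectPathsToExclude.contains(proj.path) == false) {
    proj.apply plugin: 'com.diffplug.gradle.spotless'
  }
}
```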
Xpack license state contains a helper method to determine whether
security is disabled due to license level defaults. Most code needs to
know whether security is enabled, not disabled, but this method exists
so that the security being explicitly disabled can be distinguished from
license level defaulting to disabled. However, in the case that security
is explicitly disabled, the handlers in question are never registered,
so within those handlers security cannot have been explicitly disabled,
and thus we can share a single method to know whether licensing is enabled.
This commit removes the configuration time vs execution time distinction
with regards to certain BuildParams properties. Because of the cost of
determining Java versions for configured JDK locations, we deferred
this until execution time. This had two main downsides. First, we had
to implement all this build logic in tasks, which required a bunch of
additional plumbing and complexity. Second, because some information
wasn't known during configuration time, we had to nest any build logic
that depended on this in awkward callbacks.
We now defer to the JavaInstallationRegistry recently added in Gradle.
This utility uses a much more efficient method for probing Java
installations than our jrunscript implementation. This, combined with
optimizations to avoid probing the current JVM and deferring some
evaluation via Providers when probing installations for BWC builds, means
we can maintain effectively the same configuration-time performance
while removing a bunch of complexity and runtime cost (snapshotting
inputs for the GenerateGlobalBuildInfoTask was very expensive). The end
result should be a much more responsive build execution in almost all
scenarios.
(cherry picked from commit ecdbd37f2e0f0447ed574b306adb64c19adc3ce1)
This commit introduces aarch64 packaging, including bundling an aarch64
JDK distribution. We had to make some interesting choices here:
- ML binaries are not compiled for aarch64, so for now we disable ML on
aarch64
- depending on underlying page sizes, we have to disable class data
sharing
When depending on lucene snapshots we point maven at our own s3 backed
repository. However, in this case lucene packages should only be
retrieved from this location, and no other packages should ever be found
in that repo. This commit makes the maven repository exclusive to lucene
packages.
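In Gradle terms this relies on an exclusive content filter on the repository, roughly
along these lines (the URL is illustrative):
```
repositories {
  exclusiveContent {
    forRepository {
      maven {
        name = 'lucene-snapshots'
        url = 'https://example.org/lucene-snapshots' // illustrative URL
      }
    }
    filter {
      // Lucene artifacts resolve only from this repository, and this
      // repository is never consulted for any other group.
      includeGroup 'org.apache.lucene'
    }
  }
}
```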
Prior to this commit Watcher explicitly copied tests between two
projects with a copy task. This commit removes the explicit copy in favor
of adding the Watcher tests to the available restResources that may be
copied between projects.
This is how inter-project dependencies should be modeled. However, only
Watcher is included here since it is (currently) the only project with
inter-project test dependencies.
Re-applies the change from #53523 along with test fixes.
closes #53626, closes #53624, closes #53622, closes #53625
Co-authored-by: Nik Everett <nik9000@gmail.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Jake Landis <jake.landis@elastic.co>
The multiline regex rule used to detect docs code snippets greater than
76 characters in length has considerable cost. For the high level
rest client project alone, this was taking upwards of 3 minutes. This
commit updates the rule regex pattern to use non-greedy matching when
appropriate which results in a lot fewer backtracks and about a 70%
reduction in execution time on the high level rest client module.
This commit upgrades the jackson-databind dependency to
2.8.11.6. Additionally, we revert a previous change that put
ingest-geoip on the version of jackson-databind from the version
properties file. This is because upgrading ingest-geoip to a later
version of jackson-databind also requires an upgrade to the geoip2
dependency which is currently blocked. Therefore, if we can get to a
point where we otherwise upgrade our Jackson dependencies, we do not
want ingest-geoip to automatically come along with it.
The jdk and distribution download plugins create fake ivy repositories,
and use group based repository filtering to ensure no other artifacts
try to resolve against the fake repos. Currently this works by adding a
blanket exclude to all repositories for the given group name. This
commit changes to using the new exclusiveContent feature in
Gradle to do the exclusion.
Lucene 8.5.0 release candidates are imminent. This commit upgrades master to use
the latest snapshot to check that there are no last-minute bugs or regressions.
This commit ensures that for external builds
(e.g. plugin development) the REST tests that are
copied are properly filtered to only include the API
by default.
The code prior to this change resulted in including both
the API and tests, because copy.include defaulted to an
empty list, as the stream is empty unless
explicitly configured.
Related to #52114, fixes #53183
A recent PR #52114 introduced two new tasks to copy the REST api and tests.
A couple of bugs were found in that initial PR that prevent the incremental
build from working as expected.
The pattern match for an empty string is equivalent to match all, but it was
coded as match none. Fixed with explicit checks against empty patterns.
The fileCollection.plus return value was ignored. Fixed by changing how the
input's fileTree is constructed.
If a project has a src/test/resources directory but tests are being copied
without a rest-api-spec/test directory, the copy could result in a no-op.
This was masked by the other bugs and fixed by minor changes to the logic
that determines whether a project has tests.
We embed the :reaper project jar in the build-tools jar so we can spawn
a reaper process at build runtime. Due to this, the jar technically
isn't part of the test runtime classpath, but for input snapshotting
purposes, we should be treating it as such. Instead, because it lives
in META-INF, Gradle treats it as a normal file, which in practice means
its hash changes on every build (timestamps, etc).
This commit changes our input snapshotting strategy such that instead
we explicitly add the jar as an input to any test tasks using Gradle's
runtime classpath normalization strategy (ignore timestamps, jar entry
order, etc) and ignore the file in META-INF. This ensures that we can
properly cache test results for build-tools, while still ensuring that
changes to the :reaper project trigger reexecution of tests.
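A minimal sketch of that strategy (the jar's location is an assumption, not the actual
path in the repository):
```
normalization {
  runtimeClasspath {
    // don't fingerprint the raw file sitting under META-INF
    ignore 'META-INF/reaper.jar'
  }
}

tasks.withType(Test).configureEach {
  // register the embedded jar explicitly, with classpath normalization so
  // timestamps and entry order inside the jar don't invalidate the cache
  inputs.files('src/main/resources/META-INF/reaper.jar')
        .withPropertyName('reaperJar')
        .withNormalizer(ClasspathNormalizer)
}
```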
* Smarter copying of the rest specs and tests (#52114)
This PR addresses the unnecessary copying of the rest specs and allows
for better semantics for which specs and tests are copied. By default
the rest specs will get copied if the project applies
`elasticsearch.standalone-rest-test` or `esplugin` and the project
has rest tests or you configure the custom extension `restResources`.
This PR also removes the need for dozens of places where the x-pack
specs were copied by supporting copying of the x-pack rest specs too.
The plugin/task introduced here can also copy the rest tests to the
local project through a similar configuration.
The new plugin/task allows a user to minimize the surface area of
which rest specs are copied. Each project can be configured to include
only a subset of the specs (or tests). Configuring a project to only
copy the specs when actually needed should help with build cache hit
rates since we can better define what is actually in use.
However, project level optimizations for build cache hit rates are
not included with this PR.
Also, with this PR you can no longer use the includePackaged flag on
the integTest task.
The following items are included in this PR:
* new plugin: `elasticsearch.rest-resources`
* new tasks: CopyRestApiTask and CopyRestTestsTask - performs the copy
* new extension 'restResources'
```
restResources {
  restApi {
    includeCore 'foo', 'bar'   // will include the core specs that start with foo and bar
    includeXpack 'baz'         // will include x-pack specs that start with baz
  }
  restTests {
    includeCore 'foo', 'bar'   // will include the core tests that start with foo and bar
    includeXpack 'baz'         // will include the x-pack tests that start with baz
  }
}
```
We explicitly set the path for the temporary directory to use in test
tasks, but today this path is a relative path, relative to the current
working directory of the test task. The fact that we are using a
relative path here appears to be legacy, simply leftover from the days
of the Maven build. An absolute path is preferred here, since it's
explicit and we do not have to rely on everyone resolving the path
properly relative to the working directory.
Today we set the test temporary directory explicitly by controlling
java.io.tmpdir. Yet, we do not guarantee this directory exists, instead
relying on a test base class (LuceneTestCase) to create this directory
when it initializes. However, some of our tests do not rely on our test
framework, and thus do not have access to LuceneTestCase, instead
relying on RandomizedRunner directly. We should not be relying on the
temporary directory being implicitly created, instead guaranteeing that
it exists before test execution starts. This commit does that by
creating the test temporary directory before the test task executes (via
a doFirst).
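A sketch of the resulting configuration (the directory layout is illustrative):
```
tasks.withType(Test).configureEach { test ->
  // an absolute path, so nothing has to resolve it against the working directory
  File testTmpDir = file("${buildDir}/testrun/${test.name}/temp")
  systemProperty 'java.io.tmpdir', testTmpDir.absolutePath
  doFirst {
    // guarantee the directory exists before any test code runs
    testTmpDir.mkdirs()
  }
}
```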
This commit converts the sysv init tests from bats tests into the java
packaging tests. Since it is the last oss specific test, the bats oss
test task is also removed.
relates #46005
When docker-compose is required for a test fixture but is not
available, we log a warning to this effect. This ends up being
noise during configuration, especially when working locally. This
commit changes the logging level of these messages to debug.
- Enable SunJGSS provider for Kerberos tests
- Handle the fact that the decrypt method in KeyStoreWrapper might
not throw immediately when the GCM cipher is from BouncyCastle FIPS
and we end up with a DataInputStream that has reached its end.
- Disable tests, jarHell, testingConventions for ingest attachment
plugin. We don't support this plugin (and document this) in FIPS
mode.
- Don't attempt to install ingest-attachment in smoke-test-plugins
Due to a typo in the version regex pattern, only the last digit of a major
version number is taken into consideration. So Docker's version 17.0.1 is parsed
as 7.0.1.
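Illustrative sketch of the bug (not the actual pattern from the build):
```
def broken = ~/(\d)\.(\d+)\.(\d+)/    // a single digit captured for the major version
def fixed  = ~/(\d+)\.(\d+)\.(\d+)/   // one or more digits

assert ('17.0.1' =~ broken)[0][1] == '7'   // major version silently truncated
assert ('17.0.1' =~ fixed)[0][1]  == '17'
```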
While we use `== false` as a more visible form of boolean negation
(instead of `!`), the true case is implied and the true value does not
need to be explicitly checked. This commit converts cases that have slipped
into the code checking for `== true`.
Backport of #51526.
Previously the formatter was breaking simple if/else statements (i.e.
without braces) onto separate lines, which could be fragile because the
formatter cannot also introduce braces. Instead, keep such expressions
on the same line.
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
This change reworks the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require an
additional configuration either in CI workers or locally and is
controlled by specifying -Dtests.fips.enabled=true
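A rough sketch of the test-task side of this configuration (the file names and
locations are assumptions):
```
if (Boolean.parseBoolean(System.getProperty('tests.fips.enabled', 'false'))) {
  tasks.withType(Test).configureEach {
    // the leading '==' makes the JVM replace, rather than append to, its
    // default security properties and policy
    systemProperty 'java.security.properties', "==${file('fips_java.security').absolutePath}"
    systemProperty 'java.security.policy', "==${file('fips_java.policy').absolutePath}"
  }
}
```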
If a worktree is used, say for 7.x, and packaging tests are run, the
build within the VM will fail due to the parent checkout not being
accessible. This is because the path of the worktree is for the host
system, not the VM. This commit makes the git info unknown, just as if
the .git directory did not exist.
When building clusters for integration tests, today we install plugins
sequentially. We recently introduced the ability to install plugins in a
single invocation of the install plugin command. Using this can save
substantial time starting up JVMs. This commit changes the build
infrastructure to install multiple plugins at once when building
clusters for integration tests.
For the docs integration tests in particular, where we install many
plugins, this change makes a substantial difference. On my laptop, prior
to this change, installing the plugins sequentially took 115
seconds. After this change, it takes 14 seconds.
Tests in BuildPluginIT copy the workspace but exclude the build
directories based on whether the directory string representation
includes `/build/` or not. This check is problematic if the directory
of the project has a parent directory also named `build`. The change in
this commit checks to see if the path relative to the project directory
has any path parts equal to `build`.
Backport / reimplementation of #50786 on 7.x.
Opt `buildSrc` in to automatic formatting. This required a config tweak
in order to pick up all the Java sources, and as a result more files that
were previously missed are now found in the Enrich plugin.
I also moved the 2 Java files in `buildSrc/src/main/groovy` into the Java
directory, which required some follow-up changes.
The packed-refs support was using the original .git path; it is changed to use
the real .git directory after the reference from the worktree has been followed.
Relates #47464
This commit adds a cliSetup command that can be used to run arbitrary
bin scripts to setup a test cluster. This is the same as what was
previously called setupCommands in cluster formation tasks.
closes #50382
This commit adds a special run.datadir system property that may be
passed to `./gradlew run` which sets the root data directory used by the
task. While normally overriding the data path is not allowed for test
clusters, it is useful when experimenting with the run task.
closes #50338
This commit ensures the global info plugin is applied, which supplies
the isInternal flag used to determine whether distro download looks for
bwcVersions.
relates #50230
The testclusters shutdown code was killing child processes
of the ES JVM before the ES JVM. This causes any running
ML jobs to be recorded as failed, as the ES JVM notices that
they have disconnected from it without being told to stop,
as they would if they crashed. In many test suites this
doesn't matter because the test cluster will never be
restarted, but in the case of upgrade tests it makes it
impossible to test what happens when an ML job is running
at the time of the upgrade.
This change reverses the order of killing the ES process
tree such that the parent processes are killed before their
children. A list of children is stored before killing the
parent so that they can subsequently be killed (if they
don't exit by themselves as a side effect of the parent
dying).
Backport of #50175
This commit sets xpack.security.ssl.diagnose.trust to false in all
the nodes of our TestClusters when running integTest. This is needed
in 7.x because setting xpack.security.ssl.diagnose.trust to true
wraps SunJSSE TrustManager with our own DiagnosticTrustManager and
this is not allowed when SunJSSE is in FIPS mode.
An alternative would be to set `xpack.security.fips.enabled` to
true which would also implicitly disable
xpack.security.ssl.diagnose.trust but would have additional effects
(would require that we set PBKDF2 for password hashing algorithm in
all test clusters, would prohibit using JKS keystores in nodes even
if relevant tests have been muted in FIPS mode etc.)
The testclusters registry is a singleton extension element added to the
root project which tracks which test clusters are used throughout the
multi-project build. But having the same name as the extension used to
configure test clusters within each subproject breaks using a single
project for an external plugin. This commit renames the registry
extension to make it unique.
closes #49787
This commit goes from using a JvmArgumentProvider to using the normal
Test task APIs for passing the `HeapDumpOnOutOfMemoryError` JVM
argument. This makes it simpler for subprojects, such as lang-painless
to override this setting if necessary.
Closes #49117
(cherry picked from commit e97c38ff8e862abdc1d7816c66f9869ed216031f)
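In practice the flag simply goes through the Test task's jvmArgs, roughly:
```
tasks.withType(Test).configureEach {
  // plain task state, so a subproject such as lang-painless can reconfigure
  // or drop it without dealing with a JvmArgumentProvider
  jvmArgs '-XX:+HeapDumpOnOutOfMemoryError'
}
```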
We have a long history of advancing the required compiler to the newest
JDK. JDK 13 has been with us for a while, but we were blocked from
upgrading since Gradle was not compatible with JDK 13. With the
advancement in our project to Gradle 6 which supports JDK 13, we can now
advance our minimum compiler version. This commit updates the minimum
compiler version to JDK 13.
The elasticsearch-node tools allow manipulating the on-disk cluster state. The tool is currently
unaware of plugins and will therefore drop custom metadata from the cluster state once the
state is written out again (as it skips over the custom metadata that it can't read). This commit
preserves unknown customs when editing on-disk metadata through the elasticsearch-node
command-line tools.
This upgrade required a few significant changes. Firstly, the build
scan plugin has been renamed, and changed to be a Settings plugin rather
than a project plugin, so the declaration of this has moved to our
settings.gradle file. Second, we were using a rather old version of the
Nebula ospackage plugin for building deb and rpm packages, the migration
to the latest version required some updates to get things working as
expected as we had some workarounds in place that are no longer
applicable with the latest bug fixes.
(cherry picked from commit 87f9c16e2f8870e3091062cde37b43042c3ae1c5)
Move BuildParams class to 'minimumRuntime' source set to retain compatibility
with build-tools for builds using a Java 8 runtime.
Closes #49766
(cherry picked from commit 1059f823acdfa7a2f1f9bff21c7256dae4f3e23c)
When external plugin authors use build-tools, their integ tests depend
on the integ-test-zip artifact. However, this dependency was broken in
7.5.0 by accidentally removing the `@zip` qualifier on the maven
dependency, which works around the fact the pom for the integ-test-zip
claims the artifact is a pom instead of zip packaging. This commit
restores the workaround of using `@zip` until the pom can be fixed.
closes #49787
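For reference, the workaround amounts to a dependency notation along these lines
(coordinates are illustrative):
```
dependencies {
  // the '@zip' qualifier pins the artifact extension, overriding the 'pom'
  // packaging declared in the integ-test-zip pom
  testRuntimeOnly "org.elasticsearch.distribution.integ-test-zip:elasticsearch:${version}@zip"
}
```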
This rewrites long sort as a `DistanceFeatureQuery`, which can
efficiently skip non-competitive blocks and segments of documents.
Depending on the dataset, the speedups can be 2 - 10 times.
The optimization can be disabled with setting the system property
`es.search.rewrite_sort` to `false`.
Optimization is skipped when an index has 50% or more data with
the same value.
Optimization is done through:
1. Rewriting sort as `DistanceFeatureQuery` which can
efficiently skip non-competitive blocks and segments of documents.
2. Sorting segments according to the primary numeric sort field (#44021).
This allows skipping non-competitive segments.
3. Using collector manager.
When we optimize sort, we sort segments by their min/max value.
As a collector expects to have segments in order,
we cannot use a single collector for sorted segments.
We use a collectorManager, where for every segment a dedicated collector
will be created.
4. Using Lucene's shared TopFieldCollector manager
This collector manager is able to exchange minimum competitive
score between collectors, which allows us to efficiently skip
the whole segments that don't contain competitive scores.
5. When index is force merged to a single segment, #48533 interleaving
old and new segments allows for this optimization as well,
as blocks with non-competitive docs can be skipped.
Backport for #48804
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
Add a mirror of the maven repository of the shibboleth project
and upgrade opensaml and related dependencies to the latest
available version.
Resolves: #44947
This commit introduces a workaround for an issue related to our recent
notarization of distributions starting with the 6.8.5 release. An
unintended side effect of notarization was that the file entries of the
release tar all have a `./` prefix in the path. This causes a number of
issues, not least of which is that our Gradle extract tasks end up
copying an empty fileset to the destination directory. The workaround
here is simply to remove the leading `./` path segment from each file
when performing the extraction. For more details see this issue:
https://github.com/elastic/elasticsearch/issues/49417
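The extraction workaround looks roughly like this (archive name and destination are
illustrative):
```
task extractDistribution(type: Copy) {
  from(tarTree(resources.gzip('elasticsearch-dist.tar.gz'))) {
    eachFile { details ->
      // drop the leading './' segment introduced by notarization
      details.relativePath = new RelativePath(true, details.relativePath.segments.drop(1) as String[])
    }
  }
  into "${buildDir}/extracted"
}
```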
The test task is configured to use the runtime java version, but there
are issues with the version of groovy used by gradle pre 6.0. In order
to workaround this, we use the Gradle JDK to execute the build-tools
tests.
Closes #49404, closes #49253
Tasks intending to use a particular java home provided by JAVA<N>_HOME
use the getJavaHome method, which verifies the given java home is
available, or will be if the task will run. However, the verification
logic was broken, in addition to unnecessarily delaying retrieving the
java home until runtime. This commit fixes the verification logic to run
either at config time, deferring verification, or at runtime, which
immediately checks if java home is available.
closes #49153
This PR adds build configuration to use the `jdk-download` plugin with
unit tests when no runtime java is configured externally.
It's a first part in a longer chain of changes described in #40531.
This commit upgrades the JDK that is bundled with Windows from 13+33 to
13.0.1+9. Our other platforms have previously been upgraded but Windows
was delayed because the artifacts were not available at the time that we
made the previous upgrade.
Relates #48587
The bats tests require several distributions to all be built into a
single directory. The addition of docker packaging tests now cause the
bats tests to depend on docker, even though docker is not used there.
This commit filters out docker distributions from those that bats
depends on.
The RunTask is responsible for logging output from nodes to the console
and also stays active since we want the cluster to keep running.
However, the implementation of the logging and waiting resulted in a
spin loop that continually polls for data to have been written to one
of the nodes' output files. On my laptop, this causes an idle
invocation of `gradle run` to consume an entire core.
The JDK provides a method to be notified of changes to files through
the use of a WatchService. While a WatchService based implementation
for logging and waiting works, a delay of up to ten seconds is
encountered when running on macOS. This is due to the lack of a native
WatchService implementation that uses kqueue or FSEvents; the current
WatchService implementation in the JDK uses polling with a default
interval of ten seconds. While the interval can be changed
programmatically it is not an acceptable solution due to the need to
access the com.sun.nio.file.SensitivityWatchEventModifier enum, which
is in an internal package.
The change in this commit instead introduces a check to see if any data
was available to read and log. If no data is available in any of the
node output files, the thread sleeps for 100ms. This is enough time to
prevent consuming large amounts of cpu while still providing output to
the console in a timely fashion.
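The shape of the resulting loop, sketched with illustrative names for the RunTask
internals:
```
// 'nodeOutputFiles', 'clusterIsRunning' and 'logAnyNewOutput' stand in for the
// real RunTask state and helpers.
void followLogs(List<File> nodeOutputFiles, Closure<Boolean> clusterIsRunning, Closure<Boolean> logAnyNewOutput) {
  while (clusterIsRunning()) {
    boolean wroteOutput = nodeOutputFiles.any { file -> logAnyNewOutput(file) }
    if (wroteOutput == false) {
      // nothing new to print: back off instead of spinning on the CPU
      Thread.sleep(100)
    }
  }
}
```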
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
Backport of #48450.
Make a number of changes so that code in the `server` directory is more
resilient to automatic formatting. This covers:
* Reformatting multiline JSON to embed whitespace in the strings
* Move some comments around so they aren't auto-formatted to a strange
place. This also required moving some `&&` and `||` operators from the
end-of-line to start-of-line.
* Add helper method `reformatJson()`, to strip whitespace from a JSON
document using XContent methods. This is sometimes necessary where
a test is comparing some machine-generated JSON with an expected
value.
Also, `HyperLogLogPlusPlus.java` is now excluded from formatting because it
contains large data tables that don't reformat well with the current settings,
and changing the settings would be worse for the rest of the codebase.
Backport of #48898.
We no longer configure distributions for prior versions for Docker. This
is because doing so prompts Gradle to try and resolve the Docker
dependencies, which doesn't work as they can't be downloaded via Ivy
(configured in DistributionDownloadPlugin). Since we need these for the
BATS upgrade tests, and those tests only cover .rpm and .deb, it's OK to
omit creating such distributions in the first place. We may need to
revisit this in the future, to allow upgrade testing using Docker
containers.
Backport of #48883.
Per elastic/infra#15864, the Elasticsearch CI images are failing due to
a packer_cache failure. This is because Gradle is trying to resolve
a `.docker` file through the Ivy repository, which doesn't work. Disable
the Docker tests again until we figure out the way forward.
Backport of #46599 and #47640. Add packaging tests for Docker.
* Introduce packaging tests for Docker (#46599)
Closes #37617. Add packaging tests for our Docker images, similar to what
we have for RPMs or Debian packages. This works by running a container and
probing it e.g. via `docker exec`. Test can also be run in Vagrant, by
exporting the Docker images to disk and loading them again in VMs. Docker
is installed via `Vagrantfile` in a selection of boxes.
* Only define Docker pkg tests if Docker is available (#47640)
Closes #47639, and unmutes tests that were muted in b958467.
The Docker packaging tests were being defined irrespective of whether
Docker was actually available in the current environment. Instead,
implement exclude lists so that in environments where Docker is not
available, no Docker packaging tests are defined. For CI hosts, the build
checks `.ci/dockerOnLinuxExclusions`. The Vagrant VMs can define the
extension property `shouldTestDocker` to opt in to packaging
tests.
As part of this, define a separate utility class for checking Docker,
and call that instead of defining checks in-line in BuildPlugin.groovy.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
This commit eliminates some custom logic we have in place for post-hoc
cleanup of POM files generated by Gradle. There were two main issues this
logic was meant to address:
First, for dependencies marked as `transitive = false`, Gradle by
default creates a "wildcard" exclusion in the generated POM file. It
turns out that Ivy didn't handle these types of exclusions well, even
though they are perfectly valid and dealt with by Gradle and Maven as
expected. We've since confirmed that this issue is indeed resolved in
the most recent Ivy release (2.5.0-rc1) so going forward the suggestion
to folks consuming Elasticsearch dependencies with Ivy will be to use
this version.
Second, earlier versions of Gradle would incorrectly assign compile
dependencies to the "runtime" scope in the published POM file. This could
cause issues if the dependencies were indeed needed at compile time
because their APIs were exposed. This has since been fixed and these
dependencies are correctly marked as "compile" scope in the POM.
Since these two issues have been resolved in their respective projects
we can eliminate this logic and all the supporting code, such as having
to create lots of "internal" configurations for tracking transitive
dependencies.
This commit simplifies and standardizes our usage of the Gradle Shadow
plugin to conform more to plugin conventions. The custom "bundle" plugin
has been removed as it's not necessary and performs the same function
as the Shadow plugin's default behavior with existing configurations.
Additionally, this removes unnecessary creation of a "nodeps" artifact,
which is unnecessary because by default project dependencies will in
fact use the non-shadowed JAR unless explicitly depending on the
"shadow" configuration.
Finally, we've cleaned up the logic used for unit testing, so we are
now correctly testing against the shadow JAR when the plugin is applied.
This better represents a real-world scenario for consumers and provides
better test coverage for incorrectly declared dependencies.
(cherry picked from commit 3698131109c7e78bdd3a3340707e1c7b4740d310)
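On the consumer side, the behavior being relied on is roughly this (project paths are
illustrative): a project dependency resolves the plain JAR by default, and only an
explicit request for the `shadow` configuration pulls the shadowed JAR, which is why a
separate "nodeps" artifact is redundant.
```
dependencies {
  implementation project(':some-lib')                                   // plain, non-shadowed JAR
  implementation project(path: ':some-lib', configuration: 'shadow')   // explicitly the shadowed JAR
}
```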
This commit bumps the bundled JDK to 13.0.1+9. Since AdoptOpenJDK did
not release 13.0.1+9 for Windows, this commit also enables the
bundled JDK version to vary by platform.
Reverting the change introducing IsoLocal.ROOT and introducing IsoCalendarDataProvider, which defaults the start of the week to Monday and requires a minimum of 4 days in the first week of a year. This extension uses the Java SPI mechanism and applies these defaults for Locale.ROOT only.
It requires the JVM property java.locale.providers to be set to SPI,COMPAT.
closes #41670
backport #48209
* Always publish a build scan in CI
This PR changes the build scan configuration to always publish a build
scan when running in our CI.
We should already be passing these env vars into the Vagrant VM so this
will make it produce a build scan too.
The old properties to accept build scan ToS on the public server are
thus no longer relevant and will be cleaned up from the Jenkins config
once this is merged.
* Pass env vars to vagrant VM
* Enable running in parallel in the VM
* Add job name and build number as custom values
The classpath for some projects could outgrow the maximum allowed command
line length on Windows. Using an env var is not foolproof, but gives more
breathing room.
This commit removes the option to change the netty system properties to
reenable the direct buffer pooling. It also removes the need for us to
disable the buffer pooling in the system properties file. Instead, we
programmatically create an allocator that is used by our networking
layer.
This commit does introduce an Elasticsearch property which allows the
user to fallback on the netty default allocator. If they choose this
option, they can configure the default allocator how they wish using the
standard netty properties.
Before this change one needed to re-start debugging several times, as we
launched multiple JVMs in debug mode.
With this option the IDE can re-launch and listen for
connections again, leading to a more pleasant experience.
The distribution download plugin which handles finding built
distributions for testing currently only knows how to find locally built
snapshots. When an external Elasticsearch plugin uses build-tools, these
snapshots do not exist. This commit extends the download plugin so it
pulls from the Elastic snapshots service when used outside of the
Elasticsearch repository.
closes #47123
* GlobalBuildInfo plugin searches packed references
In recent versions of Git, references may be packed in a packed-refs
file. If this happens, Gradle will need to look in that file to find
build information.
We fixed warnings related to task inputs and outputs in #45098.
This particular input was not considered; a warning was present for it
and Gradle didn't use it as part of task inputs.
As soon as we fixed it Gradle started considering it an input and
enforced that it exists.
With this change we make it optional, as the task can work both with and
without this directory.
In order to work with external elasticsearch plugins, some parts of
build-tools need to know when the current build is part of the
elasticsearch project or an external build. We identify these "internal"
builds by a marker in our buildSrc classpath. However, some build-tools
integ tests need to override this flag to test both the external and
internal behavior.
This commit moves the storage of the flag indicating whether we are
running in an internal build to global build info. This will allow
testkit projects to override the flag.