This commit moves log statements related to classification of naming
convention checks for tests to debug level. At info level they emit an
enormous amount of output in CI, while these are not generally useful
for debugging normal build failures.
Commit #36786 updated docs and strings to reference transport.port instead of
transport.tcp.port. However, this breaks backwards compatibility tests
as the tests rely on string configurations and transport.port does not
exist prior to 6.6. This commit reverts the places where we reference
transport.tcp.port for tests. This work will need to be reintroduced in
a backwards compatible way.
Lucene 7.6 uses a smaller encoding for LatLonShape. This commit forks the LatLonShape classes to Elasticsearch's local lucene package. These classes will be removed on the release of Lucene 7.6.
This is related to #36652. In 7.0 we plan to deprecate a number of
settings that make reference to the concept of a tcp transport. We
mostly just have a single transport type now (based on tcp). Settings
should only reference tcp if they are referring to socket options. This
commit updates the settings in the docs and removes string usages of
the old settings. Additionally, it adds a missing remote compress setting
to the docs.
Includes the following:
* Reversion of doc-values changes in LUCENE-8374; we are interested in seeing if this
has an effect on benchmarks for node-stats and index-stats
* More improvements to docvalues updates
- fix up to date checks to ignore elasticsearch jars. We were not scanning them but these still triggered a rebuild.
- add tests to assert correct behavior and up to date checks.
- make the task less verbose with `-i` and include the output only on errors.
* Upgrade plugin to latest and expose udp
* Explicit check for windows
* Rename the properties for the port numbers
* Tasks for pre and post container actions
Our Docker build uses a multi-stage Docker build. This requires Docker
version 17.05 or greater. Without an explicit check here, the build
fails in a mysterious way such as "invalid reference format" that is
hard to track down (Google searches for "Docker invalid reference
format" do not turn up anything useful). This commit refactors our
existing Docker checks, and adds a new one for the minimum Docker
version.
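A minimal sketch of such a version check, assuming we can shell out to the docker client; the task name and error text are illustrative, not the actual build code:
```
task checkDockerVersion {
    doLast {
        def output = new ByteArrayOutputStream()
        exec {
            commandLine 'docker', 'version', '--format', '{{.Server.Version}}'
            standardOutput = output
        }
        // multi-stage builds need Docker 17.05+
        def version = output.toString().trim()
        def (major, minor) = version.tokenize('.').take(2)*.toInteger()
        if (major < 17 || (major == 17 && minor < 5)) {
            throw new GradleException("building the Docker images requires Docker 17.05+, found ${version}")
        }
    }
}
```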
This commit changes the Docker assemble tasks so that they attempt to
build Docker if the Docker binaries exist, or if build.docker is set to
true. If the Docker binaries do not exist, or if build.docker is set to
false, then no attempt is made to build the Docker images.
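The gating logic amounts to roughly the following sketch; the binary paths and the task-name match are assumptions, build.docker is the property named above:
```
boolean dockerBinaryExists = ['/usr/bin/docker', '/usr/local/bin/docker']
        .any { new File(it).exists() }
// an explicit -Pbuild.docker=true/false always wins over auto-detection
boolean buildDocker = project.hasProperty('build.docker')
        ? project.property('build.docker').toString().toBoolean()
        : dockerBinaryExists
tasks.matching { it.name.startsWith('assembleDocker') }.all {
    onlyIf { buildDocker }
}
```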
Moves all remaining (rolling-upgrade and mixed-version) REST tests to use Zen2. To avoid adding
extra configuration, it relies on Zen2 being set as the default discovery type. This required a few
smaller changes in other tests. I've removed AzureMinimumMasterNodesTests which tests Zen1
functionality and dates from a time when host providers were not configurable and each cloud
plugin had its own discovery.type, subclassing the ZenDiscovery class. I've also adapted a few tests
which were unnecessarily adding addTestZenDiscovery = false for the same legacy reasons. Finally,
this also moves the unconfigured-node-name REST test to Zen2, testing the auto-bootstrapping
functionality in development mode when no discovery configuration is provided.
Closes #34820
With this change we allow for no tests being run in a randomized testing
task, and forbid empty testing tasks in the testing conventions task.
We will no longer have to disable the task if all tests are muted.
* Improve logged exec output readability
- Split error and out streams and log them separately
- Log everything in a single call to prevent interference from
other log messages
Includes:
LUCENE-8594: DV update are broken for updates on new field
LUCENE-8590: Optimize DocValues update datastructures
LUCENE-8593: Specialize single value numeric DV updates
Relates #36286
This commit moves back to use explicit dependsOn for test tasks on
check. Not all tasks extending RandomizedTestingTask should be run by
check directly.
This commit fixes an oops when pushing a change to add the building of
the Docker images. A change that was made for testing was accidentally
left behind. This commit addresses that.
This commit introduces the building of the Docker images as bona fide
packaging formats alongside our existing archive and packaging
distributions. This build is migrated from a dedicated repository, and
converted to Gradle in the process.
Currently we use Math.random() in a few places in the tests which makes these
tests not reproducible with the random seed mechanism that comes with
ESTestCase. The change removes those instances.
Closes #35435
- make it easier to add additional testing tasks with the proper configuration and add some where they were missing.
- mute or fix failing tests
- add a check as part of testing conventions to find classes not included in any testing task.
The logic in the dockerComposeSupported method currently returns false
even when docker and docker compose are available on the build machine.
This change updates the check to see if docker compose is available in
one of the two paths and allows the `tests.fixture.enabled` property to
disable the tests even if docker compose is available.
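A sketch of the updated check under those assumptions (the two install paths and the method shape are illustrative):
```
boolean dockerComposeSupported() {
    if ('false' == System.getProperty('tests.fixture.enabled', 'true')) {
        return false // the property wins even when docker compose is installed
    }
    return ['/usr/local/bin/docker-compose', '/usr/bin/docker-compose']
            .any { new File(it).exists() }
}
```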
This commit upgrades netty. This will close #35360. Netty started
throwing an IllegalArgumentException if a CompositeByteBuf is
created with < 2 components. Netty4Utils was updated to reflect this
change.
This commit removes padding from the java version in the output of
compiler/runtime/gradle java versions. This value is only ever 1 or 2
digits, and the trailing information (hotspot/jvm vendor info) is separated
by the java home path from the other versions, so there isn't a visual
reason for needing exact vertical alignment.
Sometimes the test fixtures plugin did not correctly disable tasks
from the build plugin as it should.
The plugin manager and tasks both use named domain object collections so
the previous code should have worked.
I did not have time to track it down, but suspect there's some race
condition in Gradle causing this. The plugin manager is still incubating.
Since the tasks are on the classpath even if the plugin is not applied, we
don't really need to involve the plugin at all.
Closes #36041
Our `REPRODUCE WITH` line wasn't working for tests with `(` or `)` in
the method name. This isn't super common, but happens sometimes for our
yml based rest tests. This is because randomized runner's globbing
mechanism didn't escape `(` or `)`. I filed this upstream at
https://github.com/randomizedtesting/randomizedtesting/issues/271
and Dawid kindly fixed the issue. This upgrades to pick up the fix.
Closes #35692
In #35259 we switched the default number of VMs to fork for unit tests to
the number of physical CPU cores. But because we could only get an accurate
count on machines with a normal `/proc` filesystem, macOS machines did not
pick up the new default. Given that macOS is a huge portion of developer
machines, we'd like to get the right default there. This does that.
It also moves the default-finding process from happening once per testing
task to happening once at startup. This seems like a good choice in general,
but a very good choice for macOS because we have to run a command to get
the count.
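A sketch of the probe; hw.physicalcpu is the macOS sysctl key, and the /proc/cpuinfo parsing here is simplified relative to the real implementation:
```
int defaultTestForks() {
    if (System.getProperty('os.name').contains('Mac')) {
        // no /proc on macOS, so shell out once at startup for the physical core count
        return 'sysctl -n hw.physicalcpu'.execute().text.trim().toInteger()
    }
    File cpuinfo = new File('/proc/cpuinfo')
    if (cpuinfo.exists()) {
        // count distinct (physical id, core id) pairs to get physical cores
        def cores = [] as Set
        String physicalId = ''
        cpuinfo.eachLine { line ->
            if (line.startsWith('physical id')) { physicalId = line.split(':')[1].trim() }
            if (line.startsWith('core id')) { cores << physicalId + '/' + line.split(':')[1].trim() }
        }
        if (!cores.isEmpty()) { return cores.size() }
    }
    // fall back to logical processors if we cannot tell
    return Runtime.runtime.availableProcessors()
}
```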
This inserts newlines in order to reduce line lengths in the
o.e.action.admin.cluster package to 140 characters or less. This
also removes the checkstyle suppressions for affected files.
Relates #34884, #34923
- The current version was hard coded in the test, causing it to fail as
we removed the qualifier
- also applied the base plugin to the root project to get the `clean`
task to work as expected. This was preventing the failure from
reproducing locally.
* Manage dependencies for test clusters
Create a configuration and add the distribution to it automatically.
A task is created and added as a dependency to any task that uses a test
cluster.
The task extracts all the zip archives (only zip is supported for now)
in the configuration.
We do this only once because most tests use the same distribution,
and we can thus avoid extracting it multiple times.
With this we will be able to start the node from the same files, which
will most of the time live in OS caches, or be copy-on-write if the
configuration requires it. A sketch of the idea follows this list.
* Switch to jcenter
Jcenter supposedly operates on a CDN.
It should be faster and more reliable.
It's currently the default in Android Studio
(it switched from mavenCentral).
This change is safe because jcenter is a superset of mavenCentral.
* update comment
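Here is roughly what the shared-configuration idea mentioned above looks like; the configuration, task names, and distribution coordinates are illustrative, not the actual implementation:
```
def distroConfig = configurations.create('elasticsearchDistro')
dependencies.add('elasticsearchDistro',
        "org.elasticsearch.distribution.zip:elasticsearch:${version}@zip")

// extracted once, shared by every task that uses a test cluster
task extractElasticsearchDistro(type: Sync) {
    from { distroConfig.collect { zipTree(it) } }
    into "${buildDir}/testclusters/distro"
}
tasks.matching { it.name == 'integTest' }.all {
    dependsOn extractElasticsearchDistro
}
```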
* DISCOVERY: 0s Initial State Timeout in Tests
* Don't wait for initial state even with a single node, otherwise the loop writing the discovery file causes that single node to wait
for its own transport.ports file for 30s.
* Closes #35456
* DISCOVERY: Fix RollingUpgradeTests
* Don't manually manage min master nodes if not necessary
* Remove some dead code
* Allow for manually supplying list of seed nodes
* Closes #35178
With this change, `Version` no longer carries information about the qualifier,
we still need a way to show the "display version" that does have both
qualifier and snapshot. This is now stored by the build and read from `META-INF`.
* Add missing up-to-date configuration
The source properties file and the dynamic elasticsearch version
(set based on properties) were missing from the task outputs, leading
to the task being incorrectly considered up to date.
Closes #35204
* Introduce property to set version qualifier
- VersionProperties.elasticsearch is now a string which can have qualifier
and snapshot too
- The Version class in the build no longer cares about snapshot and
qualifier.
This further applies the pattern set in #34125 to reduce copy-and-paste
in the multi-document CRUD portion of the High Level REST Client docs.
It also adds line wraps to snippets that are too wide to fit into the box
when rendered in the docs, following up on the work started in #34163.
* DISCOVERY: Use Realistic Num. of Min Master Nodes
* With all 3 nodes starting in parallel, 2 nodes can win a master election and
start waiting for nodes to join. We wait for 30s in the
rest tests for the cluster health endpoint to show 3 nodes which breaks if
one of the master nodes itself waits for 30s for joining nodes, waiting for 5 seconds
fixes the issue and allows us to run with `minimum master nodes < node count`
* relates #33675
* Fix linelength suppressions in index.fielddata
* Some lines that were too long were dead code => Removed them and all code that became dead because of it
* Relates #34884
- we already require Java 11 to build, yet we target the minimum
supported version in build-tools (currently 8)
- this is because we have some checks that are executed in a new JVM
which could be running the minimum version.
- For everything else it would be nice to be able to use new features,
like the new process API.
With this change, we selectively compile the few classes that need an
older target version and move everything over to Java 10.
Unfortunately the current Gradle version does not support 11 as a target
version yet.
With this change, we apply the common test config automatically to all
newly created tasks instead of opting in specifically.
For plugin authors using the plugin externally this means that the
configuration will be applied to their RandomizedTestingTasks as well.
The purpose of the task is to simplify setup and make it easier to
change projects that use the `test` task but actually run integration
tests to use a task called `integTest` for clarity, but also because
we may want to configure and run them differently.
E.g. using different levels of concurrency.
- Restrict visibility of Aggregators and Factories
- Move PipelineAggregatorBuilders up a level so it is consistent with
AggregatorBuilders
- Checkstyle line length fixes for a few classes
- Minor odds/ends (swapping to method references, formatting, etc)
This commit adds the ability for docs tests to add a tear down
snippet. This snippet will be converted to a tear down section of the
generated REST tests.
In the docs tests, we have pre-defined setups in the build.gradle file,
and we can also define test setup sections within the doc page
itself. Alas, these two are incompatible in that if you try to use a
pre-defined setup alongside a test setup section, the pre-defined setup
will be silently ignored. This commit enables pre-defined setup sections
to be used together with test setup sections. The ordering here is that
pre-defined setup sections will be executed first, followed by the test
setup section.
The `term` and `phrase` suggesters have different options to filter candidates
based on their frequencies. The `popular` mode for instance filters candidate
terms that occur in less docs than the original term. However when we compute this threshold
we use the total term frequency of a term instead of the document frequency. This is not in line
with the actual filtering which is always based on the document frequency. This change fixes
this discrepancy and clarifies the meaning of the different frequencies in use in the suggesters.
It also ensures that the threshold doesn't overflow the maximum allowed value (Integer.MAX_VALUE).
Closes #34282
Applies our line length guidance for all classes in the server in `lucene`
directories *except* `XMoreLikeThis`. The only long line in
`XMoreLikeThis` says "remove this when we upgrade to Lucene 5". Given
that we're on Lucene 8, this is a little terrifying and deserves another
look.
This drops checkstyle suppressions that refer to files that don't exist
since those suppressions don't do anything other than make us feel bad.
It also updates some suppressions to more closely match the path to the
file that they suppress. These suppressions are still needed but didn't
pass the "the file exists" test because they weren't precise. It is just
easier on future-me if they are precise.
We generate tests from our documentation, including assertions about the
responses returned by a particular API. But sometimes we *can't* assert
that the response is correct because of some deficiency in our tooling.
Previously we marked the response `// NOTCONSOLE` to skip it, but this
is kind of odd because `// NOTCONSOLE` is really to mark snippets that
are json but aren't requests or responses. This introduces a new
construct to skip response assertions:
```
// TESTRESPONSE[skip:reason we skipped this]
```
Now that JDK 11 is GA, we would like to switch our 6.x and master branches to
the JDK 11 compiler. This commit makes this change, as well as removes
JDK 10 from the CI configuration.
This slightly reworks the expert script plugin example so it fits on the
page when the docs are rendered. The box in which it is rendered is not
very wide so it took a bit of twisting to make it readable.
Mainly this fixes a warning by replacing the unchecked `new ActionListener`
with the checked `new ActionListener<Response>`, and it also fixes the line
length violations in this class.
We wrap code in `// tag` and `// end` comments to include it in our docs. Our
current docs style wraps code snippets in a box that is only wide enough
for 76 characters and adds a horizontal scroll bar for wider snippets
which makes the snippet much harder to read. This adds a checkstyle check
that looks for java code that is included in the docs and is wider than
that 76 characters so all snippets fit into the box. It solves many of
the failures that this catches but suppresses many more. I will clean
those up in a follow up change.
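For reference, the markers the new check scans between look like this (a hypothetical Groovy snippet; the tag names are illustrative):
```
// tag::create-settings[]
def settings = ['cluster.name': 'docs-demo'] // kept under 76 columns
// end::create-settings[]
println settings
```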
This change adds the OneStatementPerLineCheck to our checkstyle precommit
checks. This rule restricts the number of statements per line to one. The
reasoning behind this is that it is very difficult to read multiple statements on
one line. People seem to mostly use it in short lambdas and switch statements in
our code base, but just going through the changes already uncovered some actual
problems in randomization in test code, so I think it's worth it.
Wraps all lines in our test framework at 140 characters because that is
our standard line length and removes all of the checkstyle suppressions
for the test framework.
Drops most of `ModuleTestCase` because it isn't used and we're moving
away from using guice in the way that it wants to test anyway. Also
switches a few classes that extend it but don't use it to extend
`ESTestCase` instead.
Gradle can sometimes emit mixed log lines due to how we spawn things in
separate processes. This commit changes the jarhell integ test to only
look for the exception and message, instead of including the outer part
about the exception in thread main.
closes #33774
Make sure that all java files have a package declaration and that all of
the package declarations line up with the directory structure. This
would have caught the bug that I caused in
190ea9a6de and fixed in
b6d68bd805.
This commit changes the sanity check which ensures the test task was
properly replaced with randomized testing to have a per project check,
instead of a global one. The previous global check assumed all test
tasks within the root project and below should be randomized testing,
but that is not the case for a multi project in which only one project
is an elasticsearch plugin. While the new check is not able to emit all
of the failed replacements in one error message, the efficacy of the
check remains.
This commit switches the joda time backcompat in scripting to use
augmentation over ZonedDateTime. The augmentation methods provide
compatibility with the missing methods between joda's DateTime and
java's ZonedDateTime. Due to getDayOfWeek returning an enum in the java
API, ZonedDateTime is wrapped so that the method can return int like the
joda time does. The java time api version is renamed to
getDayOfWeekEnum, which will be kept through 7.x for compatibility while
users switch back to getDayOfWeek once joda compatibility is removed.
* LeafCollector.setScorer() now takes a Scorable
* Scorers may not have null Weights
* IndexWriter.getFlushingBytes() reports how much memory is being used by IW threads writing to disk
Today the FilterRoutingTests take the belt-and-braces approach of excluding
some node attribute values and including some others. This means that we don't
really test that both inclusion and exclusion work correctly: as long as one of
them works as expected then the test will pass. This change improves these
tests by only using one approach at once, demonstrating that both do indeed
work, and adds tests for various other scenarios too.
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name logging is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
This change collapses all metrics aggregations classes into a single package `org.elasticsearch.aggregations.metrics`.
It also restricts the visibility of some classes (aggregators and factories) that should not be used outside of the package.
Relates #22868
The main benefit of the upgrade for users is the search optimization for top scored documents when the total hit count is not needed. However this optimization is not activated in this change, there is another issue opened to discuss how it should be integrated smoothly.
Some comments about the change:
* Tests that can produce negative scores have been adapted but we need to forbid them completely: #33309
Closes #32899
Solves all of the xpack line length suppressions and then merges the
remainder of the xpack checkstyle_suppressions.xml file into the core
checkstyle_suppressions.xml file. At this point that just means the
antlr generated files for sql.
It also adds an exclusion to the line length tests for javadocs that
are just a URL. We have one such javadoc and breaking up the line would
make the link difficult to use.
This commit introduces the formal notion of a private setting. This
enables us to register some settings that we had previously not
registered as fully-fledged settings to avoid them being exposed via
APIs such as the create index API. For example, we had hacks in the
codebase to allow index.version.created to be passed around inside of
settings objects, but it was not registered as a setting so that if a user
tried to use the setting on any API then they would get an
exception. This prevented users from setting index.version.created on
index creation, or updating it via the index settings API. By
introducing private settings, we can continue to reject these attempts,
yet now we can represent these settings as actual settings. In this
change, we register index.version.created as an actual setting. We do
not cut over all settings that we had been treating as private in this
pull request, as it is already quite large due to moving some tests around
to account for the fact that some tests need to be able to set the
index.version.created. This can be done in a follow-up change.
Exclude classes meant for newer versions than the one we are auditing against, as those classes won't be found. There's no reason to exclude JDK classes from newer versions; with this PR, we will not extract them in the first place.
The new implementation is functionally equivalent to the old, ant-based one.
It parses the task's standard error to get the missing classes and violations in the same way.
I considered re-using ForbiddenApisCliTask but Gradle makes it hard to build inheritance with tasks that have task actions, since the order of the task actions can't be controlled.
This inheritance isn't fully desired either, as the third party audit task is much more opinionated and we don't want to expose some of the configuration.
We could probably extract a common base class without any task actions, but that's probably more trouble than it's worth.
Closes #31715
This commit removes the setting of the fork options maximum memory size
in our build plugin and instead adds the value in the gradle.properties
file to be alongside the value set in jvmArgs.
This change is necessary when using parallel compilation as 512m is not
sufficient for parallel compilation on some machines.
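For illustration, this is roughly the configuration that was removed from the plugin; the memory value now lives in gradle.properties next to org.gradle.jvmargs instead:
```
// previously configured by the build plugin for every project:
allprojects {
    tasks.withType(JavaCompile) {
        options.fork = true
        options.forkOptions.memoryMaximumSize = '512m' // too small for parallel compilation
    }
}
```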
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
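Declaring what gets bundled is similarly explicit now; a sketch (coordinates illustrative):
```
dependencies {
    // goes into the shadow jar
    bundle "org.apache.lucene:lucene-core:${versions.lucene}"
    // stays a normal external dependency
    compile "org.elasticsearch:elasticsearch-core:${version}"
}
```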
Elasticsearch versions earlier than 6.4.0 cannot properly run in a
FIPS 140 JVM. This commit ensures that we use a non-FIPS JVM for
nodes that we spin up in BWC tests even when we're testing FIPS.
* Scripted metric aggregations: add deprecation warning and system property to control legacy params
Scripted metric aggregation params._agg/_aggs are replaced by state/states context variables. By default the old params are still present, and a deprecation warning is emitted when Scripted Metric Aggregations are used. A new system property can be used to disable the legacy params. This functionality will be removed in a future revision.
* Fix minor style issue and docs test failure
* Disable deprecated params._agg/_aggs in tests and revise tests to use state/states instead
* Add integration test covering deprecated scripted metrics aggs params._agg/_aggs access
* Disable deprecated params._agg/_aggs in docs integration tests and revise stored scripts to use state/states instead
* Revert unnecessary migrations doc change
A relevant note should be added in the changes destined for 7.0; this PR is going to be backported to 6.x.
* Replace deprecated _agg param bwc integration test with a couple of unit tests
* Fix compatibility test after merge
* Rename backwards compatibility system property per code review feedback
* Tweak deprecation warning text per review feedback
Add tests for build-tools to make sure example plugins build stand-alone using it.
This will catch issues such as referencing files from the buildSrc directly, breaking external uses of build-tools.
* Implement Version in java
- This allows us to move all files from .groovy to .java
- Will prevent Eclipse from getting tangled up in this setup
- Makes it possible to use Version from Java
* PR review comments
* Cluster formation plugin with reference counting
```
> Task :plugins:ingest-user-agent:listElasticSearchClusters
Starting cluster: myTestCluster
* myTestCluster: /home/alpar/work/elastic/elasticsearch/plugins/ingest-user-agent/foo
Asked to unClaimAndStop myTestCluster, since cluster still has 1 claim it will not be stopped
> Task :plugins:ingest-user-agent:testme UP-TO-DATE
Stopping myTestCluster, since no of claims is 0
```
- Meant to auto manage the clusters lifecycle
- Add integration test for cluster formation
* Fix rebase
* Change to `useCluster` method on task
* Add a task to run forbiddenapis using cli
Add a task that offers an equivalent check to the forbidden APIs plugin,
but runs it using the forbiddenAPIs CLI instead.
This isn't wired into precommit yet, and doesn't work for projects
that require specific signatures, etc. It's meant to show how this can
be used. The next step is to make a custom task type and configure it
based on the project extension from the plugin and make some minor
adjustments to some build scripts, as we can't be 100% compatible with
that, at least due to how additional signatures are passed.
Notes:
- there's no `--target` for the CLI version so we have to pass in
specific bundled signature names
- the cli task already wires to `runtimeJavaHome`
- no equivalent for `failOnUnsupportedJava = false` but doesn't seem to
be a problem. Tested with Java 8 to 11
- there's no way to pass additional signatures as URL, these will have
to be on the file system, and can't be resources on the cp unless we
rely on how forbiddenapis is implemented and mimic them as bundled
signatures.
- the average of 3 runs is 4% slower using the CLI for :server.
( `./gradlew clean :server:forbiddenApis` vs `./gradlew clean
:server:forbiddenApisCli`)
- up-to-date checks don't work on the cli task yet, that will happen
with the custom task.
See also: #31715
The upcoming ML log structure finder functionality will use these
libraries, and it makes sense to use the same versions that are
being used elsewhere in Elasticsearch. This is especially true
with icu4j, which is pretty big.
Enhance the reproduction line with info about JDKs.
Provide the ability to control compiler and Java versions just by
passing a property. The actual java home comes from the
`JAVA<major>_HOME` env vars that we already require.
This works better with the Gradle daemon as well.
Output is also changed a bit.
for `-Druntime.java=8 -Dcompiler.java=9`:
```
=======================================
Elasticsearch Build Hamster says Hello!
Gradle Version : 4.9
OS Info : Linux 4.17.8-1-ARCH (amd64)
Compiler JDK Version : 11 (Oracle Corporation 11-ea [OpenJDK 64-Bit Server VM 11-ea+22])
Runtime JDK Version : 11 (Oracle Corporation 11-ea [OpenJDK 64-Bit Server VM 11-ea+22])
Gradle JDK Version : 10 (Oracle Corporation 10.0.1 [OpenJDK 64-Bit Server VM 10.0.1+10])
Compiler java.home : /home/alpar/opt/jdk-11-ea22/
Runtime java.home : /home/alpar/opt/jdk-11-ea22/
Gradle java.home : /usr/lib/jvm/java-10-openjdk
Random Testing Seed : EA858533191E8DFB
=======================================
```
Without configuration:
```
=======================================
Elasticsearch Build Hamster says Hello!
=======================================
Gradle Version : 4.9
OS Info : Linux 4.17.8-1-ARCH (amd64)
JDK Version : 10 (Oracle Corporation 10.0.1 [OpenJDK 64-Bit Server VM 10.0.1+10])
JAVA_HOME : /usr/lib/jvm/java-10-openjdk
Random Testing Seed : 4BD5B2A839C8FCA1
=======================================
```
Here's what a reproduction line looks like (test made to fail):
```
./gradlew :modules:lang-painless:test -Dtests.seed=2DA2379065A4EEAB -Dtests.class=org.elasticsearch.painless.AdditionTests -Dtests.method="testInt" -Dtests.security.manager=true -Dtests.locale=es-PE -Dtests.timezone=WET -Dcompiler.java=10 -Druntime.java=10
```
This commit adds two pieces. The first is a small set of documentation providing
instructions on how to get setup to run context examples. This will require a download
similar to how Kibana works for some of the examples. The second is an ingest processor
example using the downloaded data. More examples will follow as ideally one per PR.
This also adds a set of tests to individually test each script as a unit test.
Currently, snippets in lists cannot be rendered correctly as console commands because list continuation requires a '+' line. This change allows snippets to have a line continuation between the snippet and the // CONSOLE comment.
This commit adds the elastic repo, alongside maven central, to any
plugin using BuildPlugin. This is necessary now because the default
distribution, which the test framework uses by default, is now only
hosted on elastic maven. While inside the elasticsearch build this does
not matter, those that build external plugins with our build-tools could
have tests fail to find the distribution dependency.
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The es setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and its value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow-up.
When we added the `java-gradle-plugin` to `buildSrc` it added a second
task to generate the pom that duplicates the publishing work that we
configure in `BuildPlugin`. Not only does it duplicate the pom, it
creates a pom that is missing things like `name` and `description` which
are required for publishing to maven central.
This change disables the duplicate pom generation.
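The fix amounts to something like the following; the task name is the conventional one Gradle generates for the `pluginMaven` publication and may differ:
```
tasks.matching { it.name == 'generatePomFileForPluginMavenPublication' }.all {
    enabled = false // BuildPlugin already produces the real pom
}
```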
These are collected from a number of open PRs and are required to
improve existing tests and to write more readable future tests.
I am extracting them to their own PR hoping to be able to merge and use
them sooner.
* Determine the minimum gradle version based on the wrapper
This is restrictive and forces users of the plugin to move together with
us, but without integration tests it's close to impossible to make sure
that the claimed compatibility is really there.
If we do want to offer more flexibility, we should add those tests
first. A sketch of the wrapper-based check follows this list.
* Track gradle version in individual file
* PR review
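As promised above, deriving the minimum Gradle version from the wrapper might look like this; the file path and regex are assumptions:
```
import org.gradle.util.GradleVersion

Properties wrapperProps = new Properties()
file('gradle/wrapper/gradle-wrapper.properties').withInputStream { wrapperProps.load(it) }
// e.g. distributionUrl=https\://services.gradle.org/distributions/gradle-4.10-all.zip
def matcher = wrapperProps.getProperty('distributionUrl') =~ /gradle-(\d+(\.\d+)+)-/
String minimumGradleVersion = matcher[0][1]
if (GradleVersion.current() < GradleVersion.version(minimumGradleVersion)) {
    throw new GradleException("build-tools requires Gradle ${minimumGradleVersion}+")
}
```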
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
Throw an exception for doc['field'].value
if this document is missing a value for the field.
After deprecation changes have been backported to 6.x,
make this a default behaviour in 7.0
Closes #29286
In 1.x and 2.x, plugins were published to maven and the plugin
installer downloaded them from there. This was later changed to install
from the download service, and in 5.0 plugin zips were no longer
published to maven. However, the build still currently produces an
unused pom file. This is troublesome in the special case when the main
jar of a plugin needs to be published (and thus needs a pom file of
the same name).
closes #31946
This commit moves additional unit test runners from being dependencies
of the test task to dependencies of check. Without this change,
reproduce lines are incorrect due to the additional test runner not
matching any of the reproduce class/method info.
closes #31964
Moves the customizations to the build to produce nice shadow jars and
javadocs into common build code, mostly BuildPlugin with a little into
the root build.gradle file. This means that any project that applies the
shadow plugin will automatically be set up just like the high level rest
client:
* The non-shadow jar will not be built
* The shadow jar will not have a "classifier"
* Tests will run against the shadow jar
* Javadoc will include all of the shadowed classes
* Service files in `META-INF/services` will be merged
We have been encountering name mismatches between API defined in our
REST spec and method names that have been added to the high-level REST
client. We should check this automatically to prevent further mismatches,
and correct all the current ones.
This commit adds a test for this and corrects the issues found by it.
With this commit we disable the real-memory circuit breaker in REST
tests as this breaker is based on real memory usage over which we have
no (full) control in tests and the REST client is not yet ready to retry
on circuit breaker exceptions.
This is only meant as a temporary measure to avoid spurious test
failures while we ensure that the REST client can handle those
situations appropriately.
Closes #32050
Relates #31767
Relates #31986
Relates #32074
Implement buildSrc Version in java
- This allows us to move all files from .groovy to .java
- Will prevent Eclipse from getting tangled up in this setup
- Makes it possible to use Version from Java
Recreates the rest of the bats packaging tests for the tar distribution
in the java packaging test project, with support for both tar and zip
packaging, both oss and default flavors, and on Linux and Windows. Most
tests are followed fairly closely, some have either been dropped if
unnecessary or folded into others if convenient.
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
Closes #29286
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format`
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix JAVA version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* fix painless array tests
closes #31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on the compile java version, not the runtime one
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
* remove explicit wrapper task
It's created by Gradle and triggers a deprecation warning
Simplify configuration
* Upgrade shadow plugin to get rid of Gradle deprecation
* Move compile configuration to base plugin
Solves Gradle deprecation warning from earlier Gradle versions
* Enable stable publishing in the Gradle build
* Replace usage of deprecated property
* bump Gradle version in build compare
* Remove deprecation warnings to prepare for Gradle 5
Gradle replaced `project.sourceSets.main.output.classesDir` of type
`File` with `project.sourceSets.main.output.classesDirs` of type
`FileCollection`
(see [SourceSetOutput](https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/java/org/gradle/api/tasks/SourceSetOutput.java))
Build output is now stored in a per-language folder.
There are a few places where we use that, here's these and how it's
fixed:
- Randomized Test execution
- look in all test folders (pass the multi-dir configuration to the
ant runner)
- DRY the task configuration by introducing `basedOn` for
`RandomizedTestingTask` DSL
- Extend the naming convention test to support passing in multiple
directories
- Fix the standalone test plugin; the dirs were not passed through.
Checked with a debugger, the statement had no effect due to a
missing `=`.
Closes #30354
* Only check Java tests, PR feedback
- Name checker was run for Groovy tests that don't adhere to the same
conventions, causing the check to fail
- implement PR feedback
* Replace `add` with `addAll`
This worked because the list is passed to `project.files` that does the
right thing.
* Revert "Only check Java tests, PR feedback"
This reverts commit 9bd9389875d8b88aadb50df57a45cd0d2b073241.
* Remove `basedOn` helper
* Bring some changes back
Previous revert accidentally reverted too much
* Fix negation
* add back public
* revert name check changes
* Revert "revert name check changes"
This reverts commit a2800c0b363168339ea65e2a79ec8256e5883e6d.
* Pass all dirs to name check
Only run on Java for build-tools; this is safe because it's a self test.
It needs more work before we can pass in the Groovy classes as well, as
these inherit from `GroovyTestCase`
* remove self tests from name check
The self tests complicate the task setup and disable real checks on
build-tools.
With this change there are no more self tests, and the build-tools tests
adhere to the conventions.
The self test will be replaced by gradle test kit, thus the addition of
the Gradle plugin builder plugin.
* First test to run a Gradle build
* Add tests that replace the name check self test
* Clean up integ test base class
* Always run tests
* Align with test naming conventions
* Make integ. test case inherit from unit test case
The check requires this
* Remove `import static org.junit.Assert.*`
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current one does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include number of non conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730, replace the task taking
into account the task providers.
Closes #31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
Skips tests that require xpack if we run the doc build without xpack. So
this should work:
```
./gradlew -p docs check -Dtests.distribution=oss-zip
```
This is implemented by detecting parts of the doc that look like:
```
[testenv="basic"]
```
Relates to #30665
* remove left-over comment
* make use of the property for plugins
* skip installing modules if these exist in the distribution
* Log the distribution being run
* Don't allow running with integ-tests-zip passed externally
* top level x-pack/qa can't run with oss distro
* Add support for matching objects in lists
Makes it possible to have a key that points to a list and assert that a
certain object is present in the list. All keys have to be present and
values have to match. The objects in the source list may have additional
fields.
example:
```
match: { 'nodes.$master.plugins': { name: ingest-attachment } }
```
* Update plugin and module tests to work with other distributions
Some of the tests expected that the integration tests will always be run
with the `integ-test-zip` distribution so that there will be no other
plugins loaded.
With this change, we check for the presence of the plugin without
assuming exclusivity.
* Allow modules to run on other distros as well
To match the behavior of tests.distribution
* Add and use a new `contains` assertion
Replaces the previous changes that caused `match` to do a partial match.
* Implement PR review comments
This switches the docs tests from the `oss-zip` distribution to the
`zip` distribution so they have xpack installed and configured with the
default basic license. The goal is to be able to merge the
`x-pack/docs` directory into the `docs` directory, marking the x-pack
docs with some kind of marker. This is the first step in that process.
This also enables `-Dtests.distribution` support for the `docs`
directory so you can run the tests against the `oss-zip` distribution
with something like
```
./gradlew -p docs check -Dtests.distribution=oss-zip
```
We can set up Jenkins to run both.
Relates to #30665
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
with no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
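The artifact-name-to-license-name mapping referred to above is declared per project roughly like this; the coordinates are illustrative, and the `mapping from:/to:` form is the one the dependencyLicenses task accepts:
```
dependencyLicenses {
    mapping from: /lucene-.*/, to: 'lucene'
    mapping from: /opensaml-.*/, to: 'shibboleth'
}
// the commit copies this same mapping over to the dependenciesInfo task
```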
Most of our license file names strip the version off the artifact name
when deducing the license filename. However, the version on the GCS SDK
(google-api-services-storage) does not match the usual format and
instead starts with a "v". This means that the license filename for this
license ended up carrying the version and we should not do that. This
commit adjusts the regex that deduces the license filename to account for
this case, and adjusts the google-api-services-storage license files
accordingly.
This commit enhances the license detection that we have for various
licenses. Here we improve the detection for all licenses (especially the
Apache 2.0 License), the BSD 2-clause license, the MIT (with
attribution) license, and we add detection for the BSD 3-clause
license. One way that we achieved this improvement is by changing how
the license files are read so that rather than reading them as a
multi-line string which ended up represented as "[line1, line2, line3,
...]" internally, we read the full bytes of the license text and replace
all whitespace with a single space so the license text is now loaded as
"line1 line2 line3". For the MIT license we add the actual license text
and remove the "MIT" string as not all copies of the license clearly
indicate that the text is the MIT license. We take a similar strategy
for the BSD-2 and BSD-3 clause licenses. With this change, we reduce the
number of "custom" licenses in the codebase from 31 to 2. The two
remaining appear to be truly custom licenses, not carrying licenses
identifiable by SPDX. A follow-up will address "unknown" licenses.
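The normalization described above is essentially (path hypothetical):
```
String text = new File('licenses/example-LICENSE.txt').getText('UTF-8')
// collapse all whitespace so multi-line license text compares as one line
String normalized = text.replaceAll(/\s+/, ' ').trim()
```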
Use all running nodes as unicast seeds in the rolling restart tests to
avoid a race between pinging and the tests. Without this if the tests
are too fast then when a new node comes up and pings its single
configured seed node that node *might* not have a ping from the other
running node.
This commit adds a new writeBlobAtomic() method to the BlobContainer
interface that can be implemented by repository implementations which
support atomic write operations.
When the BlobContainer implementation does not provide a specific
implementation of writeBlobAtomic(), then the writeBlob() method is used.
Related to #30680
This snapshot includes:
- LUCENE-8341: Record soft deletes in SegmentCommitInfo, which will resolve #30851
- LUCENE-8335: Enforce soft-deletes field up-front
Today when executing REST tests we take full responsibility for cluster
configuration. Yet, there are use cases for bringing your own cluster to
the REST tests. This commit is a small first step towards that effort by
skipping creating the cluster if the tests.rest.cluster and tests.cluster
system properties are set. In this case, the user takes full
responsibility for configuring the cluster as expected by the REST
tests. This step is by no means meant to be perfect or complete, only a
baby step.
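The gate itself is simple; a sketch using the two system properties named above:
```
boolean externalCluster = System.getProperty('tests.rest.cluster') != null &&
                          System.getProperty('tests.cluster') != null
if (externalCluster == false) {
    // configure and start the managed test cluster as before; otherwise the
    // user-supplied cluster is assumed to match the REST tests' expectations
}
```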
This commit ensures the upgrade_is_oss indicator for
the packaging tests is always deleted before each run. It works by
moving the check on version which skips the task into the doFirst block,
replacing the onlyIf.
closes #30682
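A sketch of the shape of that change; the task name and version condition are illustrative stand-ins:
```
boolean versionSupportsOss = true // illustrative stand-in for the real version check
task cleanUpgradeIndicator {
    doFirst {
        // always delete the stale indicator before each run
        delete "${buildDir}/upgrade_is_oss"
        // the version check moved here from onlyIf: skip the rest, not the delete
        if (versionSupportsOss == false) {
            throw new StopExecutionException()
        }
    }
}
```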
Ports the first couple tests for archive distributions from the old bats
project to the new java project that includes windows platforms,
consolidating them into one test method that tests that the
distributions can be extracted and their contents verified. Includes the
zip distributions which were not tested in the bats project.
The new snapshot includes LUCENE-8324 which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve the failing tests in the CorruptedFileIT suite.
Closes #30741
Closes #30577
* disable annotation processor for docs
Could not find evidence that the log4j annotation processor is used.
The compiler flag enables the Gradle 5.0 behavior
Closes #30476
* Disable annotation processors for all tests
* remove redundant `-proc:none` already handled by required plugins
* Revert unintentional changes
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
Adds windows server 2012r2 and 2016 vagrant boxes to packaging tests.
They can only be used if IDs for their images are specified, which are
passed to gradle and then to vagrant via env variables. Adds options
to the project property `vagrant.boxes` to choose between linux and
windows boxes.
Bats tests are run only on linux boxes, and portable packaging tests run
on all boxes. Platform tests are only run on linux boxes since they are
not being maintained.
For #26741
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
This commit adds the ability to specify a plugin from maven for a
test cluster to use. Currently, only local projects may be used as
plugins, except when testing bwc, where the coordinates of the project
are used. However, that assumes all projects always keep the same
coordinates, or are even still plugins, which is no longer the case for
x-pack. The full cluster and rolling restart tests are changed to use
this new method when pulling x-pack versions before 6.3.0.
We have a pile of documentation describing how to rebuild the built in
language analyzers and, previously, our documentation testing framework
made sure that the examples successfully built *an* analyzer but they
didn't assert that the analyzer built by the documentation matches the
built in analyzer. Unsurprisingly, some of the examples aren't quite
right.
This adds a mechanism that tests the analyzers built by the docs.
The mechanism is fairly simple and brutal but it seems to be working:
build a hundred random unicode sequences and send them through the
`_analyze` API with the rebuilt analyzer and then again through the
built in analyzer. Then make sure both APIs return the same results.
Each of these calls to `_analyze` takes about 20ms on my laptop which
seems fine.
This is a follow-up to our previous change to only fork javac if needed
to respect the Java compiler home (via JAVA_HOME). This commit makes the
same change for groovyc: we only fork groovyc if the JDK for Gradle is
not the JDK specified for the compiler (via JAVA_HOME).
We started forking javac to avoid GC overhead when running builds. Yet,
we do not seem to have this problem anymore and not forking leads to a
substantial speed improvement. This commit stops forking javac.
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
Uses a filter on the copy task for the eclipse settings files to
replace the token @@LICENSE_HEADER_TEXT@@ with the correct license
header from the new buildSrc/src/main/resources/license-headers
directory
xpack core contains a fork of `Cron` from quartz whose javadoc has a
`<table>` with non-html5 compatible stuff. This html5ifies the table and
switches the `:x-pack:plugin:core` project to building javadoc with
HTML5.
[test] add java packaging test project
Adds a project for building and running packaging tests written in java
for portability. The vagrant tasks use jars on the packagingTest
configuration, which are built in the same project. No tests are added
yet.
Corresponding changes are not made to :x-pack:qa:vagrant because the
java packaging tests will all be consolidated into one project.
For #26741
`javadoc` will switch from defaulting to html4 to html5 in "a future
release". We should get ahead of it so we're not surprised. Also, HTML5
is the future! Er, the present. Anyway, this follows up from #30220 to
make the Javadoc for two of the four remaining projects HTML5
compatible.
This *mostly* silences `javadoc`'s warning about defaulting to
generating html4 files by enabling generation of html5 files for the
projects for which that works. It didn't work in a half dozen projects,
about half of which I've fixed in this PR, entirely by replacing
`<tt>thing</tt>` with `{@code thing}`.
There are a few remaining projects that contain javadoc with invalid
html5. I'll fix those projects in a followup.
Add the oss tar distribution to the packaging test plugin. Test the oss
tar distribution in the core packaging tests, and the non-oss tar
distribution in the x-pack packaging tests.
Today we update index settings directly via IndexService instead of the
cluster state in IndexServiceTests. However, those changes will be lost
if there is a cluster state update. In general, we should update index
settings via the client and limit direct usage to special tests only.
This commit replaces direct usages with the client's updateSettings API.
Closes#24491
Today when forking setup commands we do not set JAVA_HOME. This means
that we might not use a version of Java compatible with the version of
Java the command is expecting to run on (for example, 5.6 nodes would
expect JDK 8, and this is true even for their setup commands). This
commit sets JAVA_HOME when configuring setup command tasks.
This commit moves the apache and elastic license files into a new
root level `licenses` directory and rewrites the top level LICENSE.txt
to clarify the repository has a mix of apache and elastic licensed code.
With the switch to X-Pack as a module, we lost production of POMs for
the JARs that we publish, and did not have a license/notice file in the
zip archives nor the exploded module. This commit ensures that we
generate these POMs, and license/notice files.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.
This commit moves the checks that JAVAX_HOME (where X is the java
version number) exists to the end of gradle's configuration phase, and
runs them only if the tasks needing that java home are going to execute.
relates #29519
This commit is a minor cleanup of a code block in NodeInfo.groovy. We
remove an unused variable, make the formatting of the code consistent,
and cast a property that is typed as an Object to a String to avoid an
annoying IDE warning.
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require these be configured when merely compiling Elasticsearch; the
requirement that they be strictly set to appropriate values should only
be enforced if these tasks are going to be executed. To address this, we
lazily configure these tasks.
Today we have a nodeVersion property on the NodeInfo class that we use
to carry around information about a standalone node that we will start
during tests. This property is a String which we usually end up parsing
to a Version anyway to do various checks on it. This can end up
happening a lot during configuration so it would be more efficient and
safer to have this already be strongly-typed as a Version and parsed
from a String only once for each instance of NodeInfo. Therefore, this
commit makes NodeInfo#nodeVersion strongly-typed as a Version.
There are some scenarios where the license on a source file is one that
is compatible with our projects yet we do not want to add the license to
the list of approved license headers (to keep the number of files with
that compatible license contained). This commit adds the ability to
exclude a file from the license check.
Today we have JAVA_HOME for the compiler Java home and RUNTIME_JAVA_HOME
for the test Java home. However, when we compile BWC nodes and run them,
neither of these Java homes might be the version that was suitable for
that BWC node (e.g., 5.6 requires JDK 8 to compile and to run). This
commit adds support for the environment variables JAVA\d+_HOME and uses
the appropriate Java home based on the version of the node being
started. We even do this for reindex-from-old which requires JDK 7 for
these very old nodes. Note that these environment variables are not
required if not running BWC tests, and they are strictly required if
running BWC tests.
This commit introduces built in support for adding files to the
keystore when configuring the integration test cluster for a project.
In order to use this support, simply add `keystoreFile` followed by the
secure setting name and the path to the source file inside the
integTestCluster closure for a project. The built in support will
handle the creation of the keystore and the addition of the file to the
keystore.
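For illustration, a minimal sketch of the described closure; the secure setting name and source path here are placeholders, not part of this change:
```
integTestCluster {
  // copies the file into the cluster's keystore under the given
  // secure setting name (both values are hypothetical)
  keystoreFile 'my.secure.setting', 'src/test/resources/secret.file'
}
```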
This change moves the -ea and -esa options that enable assertions for
test nodes before the cluster-specific JVM arguments on the Java command
line. This opens up the possibility for the cluster-specific JVM
arguments to disable assertions for one particular package or class,
which can be useful in BWC testing where incorrect assertions cannot be
removed from released versions of the product.
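As a hedged sketch of what this ordering enables (the jvmArgs property and the package name are assumptions for illustration):
```
integTestCluster {
  // because -ea/-esa now come first on the command line, a later,
  // more specific flag can disable assertions for one package
  jvmArgs = '-da:org.example.broken.assertions...'
}
```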
Today we have a silent batch mode in the install plugin command when
standard input is closed or there is no tty. It appears that
historically this was useful when running tests where we want to accept
plugin permissions without having to acknowledge them. Now that we have
an explicit batch mode flag, this use-case is removed. The motivation
for removing this now is that there is another place where silent batch
mode arises and that is when a user attempts to install a plugin inside
a Docker container without keeping standard input open and attaching a
tty. In this case, the install plugin command will treat the situation
as a silent batch mode and therefore the user will never have the chance
to acknowledge the additional permissions required by a plugin. This
commit removes this silent batch mode in favor of using the --batch flag
when running tests and requiring the user to take explicit action to
acknowledge the additional permissions (either by leaving standard input
open and attaching a tty, or by passing the --batch flag themselves).
Note that with this change the user will now see a null pointer
exception when they try to install a plugin in a Docker container
without keeping standard input open and attaching a tty. This will be
addressed in an immediate follow-up, but because the implications of
that change are larger, they should be handled separately from this one.
Correctly set up classpath/dependencies and fix the checkstyle task that was partly broken because of the delayed setup of Java9 sourcesets. This also cleans packaging of META-INF. It also prepares the forbiddenapis 2.6 upgrade.
relates #29292
* Begin moving XContent to a separate lib/artifact
This commit moves a large portion of the XContent code from the `server` project
to the `libs/xcontent` project. For the pieces that have been moved, some
helpers have been duplicated to allow them to be decoupled from ES helper
classes. In addition, `Booleans` and `CheckedFunction` have been moved to the
`elasticsearch-core` project.
This decoupling is a move so that we can eventually make things like the
high-level REST client not rely on the entire ES jar, only the parts it needs.
There are some pieces that are still not decoupled, in particular some of the
XContent tests still remain in the server project, this is because they test a
large portion of the pluggable xcontent pieces through
`XContentElasticsearchException`. They may be decoupled in future work.
Additionally, there may be more pieces that we want to move to the xcontent lib
in the future that are not part of this PR; this is a starting point.
Relates to #28504
The sysprop repos.mavenLocal may be used to add the local .m2 maven
repository for testing snapshots of locally built dependencies.
Unfortunately this has to be checked in two different places (they cannot
be shared, due to buildSrc being built essentially as a separate
project), and the casing of the string sysprop lookups did not align.
This commit fixes BuildPlugin's checking of repos.mavenLocal to use the
correct casing (camelCase, to match the gradle dsl element).
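A minimal sketch of the corrected lookup, using the camelCase spelling in both places:
```
// buildSrc and the main build must both use this exact casing
if (System.getProperty('repos.mavenLocal') != null) {
  repositories {
    mavenLocal()
  }
}
```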
When a module or plugin register that it has a client JAR, we copy
artifacts like the Javadoc and sources JARs as the JARs for the client
as well (with -client added to the name). I previously had to disable
the Javadoc task on JDK 10 due to a bug in bin/javadoc. After JDK 10
went GA without a fix for this bug, I added workaround to fix the
Javadoc task on JDK 10. However, I made a mistake reverting the
previously skipped Javadocs tasks and missed that one that copies the
Javadoc JAR for client JARs. This commit fixes that issue.
The vagrant test plugin adds tasks for the groovy packaging tests,
which run after the bats packaging test tasks. Rename the 'bats'
configuration to 'packaging' and remove the option to inherit
archives from this configuration.
This commit reenables the Javadoc tasks on JDK 10. To reenable these
tasks, we have to workaround a bug in JDK 10 which trips on some deeply
nested anonymous classes that we have in the codebase (and are fine
as-is, this is not a problem with this code). The workaround is to
remove the compiled classes from the classpath. This has been reported
upstream and the workaround was suggested there (see the code comment).
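A sketch of the workaround, assuming it amounts to filtering the compiled classes out of the Javadoc classpath:
```
tasks.withType(Javadoc) {
  // drop this project's compiled class directories so javadoc does not
  // trip over the deeply nested anonymous classes (JDK 10 bug)
  classpath = classpath.filter { f ->
    !(f in sourceSets.main.output.classesDirs.files)
  }
}
```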
We need to configure the Java 9 checkstyle task to depend on the
checkstyle configuration task or the task could run before the
checkstyle conf has been copied leading to runtime failures. We have to
do this after projects have been evaluated because the configuration of
these tasks can occur before the Java 9 source set has been added to a
project.
This commit fixes the directory name bundled plugins are added under
within a meta plugin to be the configured name of the bundled plugin,
instead of the project name.
This commit creates the copyRestSpec task for rest integ tests
immediately on creation of the RestIntegTestTask instead of lazily in
afterEvaluate. This allows other projects to add additional rest specs
to be copied, instead of needing to create another parallel copy task.
Adds support for triple quoted strings to the documentation test
generator. Kibana's CONSOLE tool has supported them for a year but we
were unable to use them in Elasticsearch's docs because the process that
converts example snippets into tests couldn't handle this. This change
adds code to convert them into standard JSON so we can pass them to
Elasticsearch.
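For illustration, a hypothetical snippet like:
```
GET /_analyze
{
  "text": """line one
line two"""
}
```
is converted into standard JSON along these lines before being run as a test:
```
GET /_analyze
{
  "text": "line one\nline two"
}
```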
This commit enhances the error messages reported when JAVA_HOME and
RUNTIME_JAVA_HOME are not correctly set to point towards the minimum
compiler and minimum runtime JDKs that are expected by the builds. The
previous error message would say:
Java 1.9 or above is required to build Elasticsearch
which is confusing if the user does have a JDK 9 installation and is
even the version that they have on their path yet they have JAVA_HOME
pointing to another JDK installation. The error message reported after
this change is:
the environment variable JAVA_HOME must be set to a JDK installation directory for Java 1.9 but is [/usr/java/jdk-8] corresponding to [1.8]
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation that some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not server
Elasticsearch. This is a good thing, this was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
Log4j2 provides a wide range of logging methods. Our code typically only uses a subset of them. In particular, uses of the methods trace|debug|info|warn|error|fatal(Object) or trace|debug|info|warn|error|fatal(Object, Throwable) have all been wrong, leading to not properly logging the provided message. To prevent these issues in the future, the corresponding Logger methods have been blacklisted.
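A sketch of what such a blacklist can look like in forbidden-apis signature format (the mechanism here is an assumption; only `info` is shown, the real list covers all six levels):
```
org.apache.logging.log4j.Logger#info(java.lang.Object)
org.apache.logging.log4j.Logger#info(java.lang.Object, java.lang.Throwable)
```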
This allows us to remove another dependency in the decoupling of the XContent
code. Rather than move this class over or decouple it, it can simply be removed.
Relates tangentially to #28504
Running any randomized testing task within Elasticsearch currently fails
if a project has zero tests. This was supposed to be overrideable, but
it was always set to 'fail', and the system property to override was
passed down to the test runner, but never read there. This commit
changes the ifNoTests setting passed to the randomized runner to be
read from system properties, continuing to default to 'fail'.
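A sketch of the corrected plumbing; the system property name is an assumption:
```
// read the override in the build instead of hardcoding 'fail'
ifNoTests = System.getProperty('tests.ifNoTests', 'fail')
```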
This commit fixes the test progress logging to not produce an NPE when
there are no tests run. The onQuit method is always called, but onStart
would not be called if no tests match the test patterns.
It is only a comment, but it can confuse those reading the code.
Used 6.0 as an arbitrary elasticsearch.version value since it is the version that required Java 8.
This commit specifies that the working directory of the destroy task for
destroying test VMs is the root of the build. This is necessary because,
if the build was run from a sub-directory, the Vagrant command would
otherwise not be able to locate the Vagrantfile for the VMs in question.
Today we do not destroy Vagrant boxes before tests. This is because
constantly reprovisioning these boxes is time-consuming. Yet, not
destroying these boxes can lead to state being left around that impacts
subsequent test runs. To address this, we now always destroy these boxes
before tests and provide a flag to set if this is not desired while
iterating locally.
Applying the rest test gradle plugin already uses the zip distribution
by default, so specifying it explicitly is not necessary. These are
leftovers from before zip was the default for rest tests.
With this commit we reduce the maximum amount of memory that the javac
compiler can use from 1g to 512mb. While the build would succeed even
with 256mb, it influences compile time slightly negatively.
We have measured that the runtime overhead stays tolerable by running
the following command five times under repeatable conditions (i.e. we
execute `./gradlew clean`, then drop the caches and TRIM the disk):
```
./gradlew compileGroovy compileJava compileJava9Java\
compileTestGroovy compileTestJava compileGroovy
```
The results in seconds (as reported by Gradle) are:
* 1gb: avg: 253s, min: 252s, max: 256s
* 512mb: avg: 257s, min: 256s, max: 259s
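The setting itself would look something like this in Gradle (a sketch; the exact location in our build may differ):
```
tasks.withType(JavaCompile) {
  // cap the forked javac heap; 512mb measured only ~4s slower than 1gb
  options.forkOptions.memoryMaximumSize = '512m'
}
```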
This commit moves the distribution specific tasks into the respective
archives and packages builds. The collocation of common and distribution
specific tasks make it much easier to reason about what is expected in a
particular distribution.
This commit adds intermediate gradle projects for archive based
distributions (zip, tar) and package based distributions (rpm, deb). The
grouping allows the common distribution build file to be considerably
shorter and clearly separated from the common zip/tar and rpm/deb
configuration.
This directory was removed from plugins in #28589, but docs still
referenced it. This commit cleans up the plugin author docs to no longer
refer to it.
We need to investigate why startup is taking so long in CI for
standalone tests. This commit adds logging for bootstrap and network
code that is executed before the node starts initializing in case this
is the source of the trouble.
Relates #28659
By default we wait for 30 seconds for nodes to start. Looking at CI,
even for passing builds, the startup time is frequently over 20 seconds
(grep for `#wait (Thread[Task worker for ':',5,main]) completed` in any
console log), which is close to the threshold. This commit sets a
default timeout of 120 seconds instead, so that REST tests are less
likely to hit a timeout.
This commit removes the extra layer of all plugin files existing under
"elasticsearch" within plugin zips. This simplifies building plugin zips
and removes the need for special logic of modules vs plugins.
The build.snapshot was mistakenly passed in to every snapshot version,
so when release tests were run, these versions were mistaken as released
entities and could not be found in maven, because they do not
exist. This fix removes that bug in logic, and always makes them proper
snapshots. This has the benefit of cleaning up the VersionUtilsTests
because they no longer rely on different sets of versions to check
against, which was also a bug.
We suspect a build failure might be due to a startup timeout, but there is
insufficient information in the logs. This change enhances the information
available so we will be better-informed next time.
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
When elasticsearch was originally moved to gradle, the "provided" equivalent in maven had to be done through a plugin. Since then, gradle added the "compileOnly" configuration. This commit removes the provided plugin and replaces all uses with compileOnly.
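A minimal before/after sketch (coordinates are placeholders):
```
dependencies {
  // previously: provided 'org.example:some-api:1.0' (via the plugin)
  compileOnly 'org.example:some-api:1.0'
}
```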
Tests use the (internal) gradle progress logger. Since around Gradle
4.0, loggers can have children loggers which each get their own line of
updating output. This commit improves the test status output to give
each jvm its own status line, as well as lines for suite and test
counts. It also fixes an issue where the progress logger was not marked
as completed, which caused its last output to never be cleared.
Gradle 4.5 now hides immutable task dependencies. We previously copied
the existing dependencies from the builtin test task to the
randomizedtesting task. This commit adds testClasses as an extra
dependency of the randomizedtesting task, to ensure the classes are
built.
java.time has the functionality needed to deal with timezones with varying
offsets correctly, but it also has a bunch of methods that silently let you
forget about the hard cases, which raises the risk that we'll quietly do the
wrong thing at some point in the future.
This change adds the trappy methods to the list of forbidden methods to try and
help stop this from happening.
It also fixes the only use of these methods in the codebase so far:
IngestDocument#deepCopy() used ZonedDateTime.of() which may alter the offset of
the given time in cases where the offset is ambiguous.
This commit conditionally adds the --illegal-access=warn flag when tests
are run with java 9. Currently, testing on java 9 triggers a warning
about illegal access from mockito. While that should be fixed (by
updating to a newer mockito base for securemock), the stderr warning we
get is only the first one. Thankfully that is the only one, but this
change will enable finding all such illegal accesses in the future.
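A sketch of the conditional flag; the task wiring shown here is an assumption:
```
tasks.withType(Test) {
  if (JavaVersion.current().isJava9Compatible()) {
    // surface every illegal reflective access, not just the first one
    jvmArgs '--illegal-access=warn'
  }
}
```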
This commit allows for configuration of the amount of time we wait for a node to start up. This is needed as some QA tests with plugins and tribe time out when starting the tribe
node. Even regular node startup time with plugins is starting to approach the limit of 30 seconds.
Additionally, this commit provides better feedback when a wait has failed. The code checks to
ensure all expected files exist for each node; if they do not then we consider the wait task as
having failed. Prior to this change, when one or more files were missing, the build would
continue and attempt to execute the wait condition that typically makes an HTTP request to the
cluster. The output of this type of failure does include which files exist and which do not but
this change makes it clearer that the actual HTTP call did not time out, but the failure was before
the call was even made.
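For illustration, assuming the cluster configuration exposes the wait time as a property (the property name is an assumption):
```
integTestCluster {
  // give slow CI workers more headroom before declaring startup failed
  nodeStartupWaitSeconds = 120
}
```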
Prints the warning level logs from the integration testing cluster if
there is a failure. We'd attempted to print those logs but I believe our
regex was missing a space.
This commit adds pom generation to meta plugins by using the same hacks
that PluginBuildPlugin already uses to get around "pom" type poms (ie
zip files).
Sometimes modules/plugins depend on locally built elasticsearch jars.
This means not only that the jar is constantly changing (so no need for
a sha check), but also that the license falls under the Elasticsearch
license, and there is no need to keep another copy. This commit updates
the dependencies checked by dependencyLicenses to exclude those that are
built by elasticsearch.
Integ test clusters should use the plugin method of ClusterConfiguration
to install plugins. Without it, meta plugins install based on the name
of the project directory, rather than the actual configured plugin name.
This commit fixes that, and also corrects the distribution used to be
the default integ-test-zip, to match that of PluginBuildPlugin. This
ensures plugins are tested in isolation by default.
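For illustration, the recommended wiring (the project path is a placeholder):
```
integTestCluster {
  // install by project path so the configured plugin name is used,
  // not the project directory name
  plugin ':plugins:my-plugin'
}
```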
We use the --release flag which is only available starting in JDK
9. Since Gradle could be running on JDK 8 without forking the compiler,
compilation will occur with the Java home of Gradle. This commit adds a
fork flag to the compiler Java home so that we use the right compiler.
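A sketch of the fix, assuming JAVA_HOME points at the JDK 9 compiler:
```
tasks.withType(JavaCompile) {
  options.compilerArgs << '--release' << '8'
  // fork so compilation uses JAVA_HOME's javac even when Gradle itself
  // runs on JDK 8, where --release is unsupported
  options.fork = true
  options.forkOptions.javaHome = file(System.getenv('JAVA_HOME'))
}
```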
Relates #28300
This commit fixes the copying of files from bundled plugin zips.
Previously all files within each zip would be flattened under the
bundled plugin name, and the original directories would exist at the top
level of the plugin.
The build-tools project (ie buildSrc) has groovy code. The -source
option was set to java 8, but this is not correct on java 9, and
actually causes a warning about not setting boot classpath. This commit
adds the --release option to groovy compilation, just like is done for
java compilation.
This commit adds a gradle plugin to ease development of meta plugins.
Applying the plugin will generate the meta plugin properties based on
the es_meta_plugin configuration object, which includes name and
description. The plugins to include within the meta plugin are
configured through the `plugins` list. An integ test task is also
automatically added.
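The DSL described above would look roughly like this (values are placeholders):
```
es_meta_plugin {
  name = 'my-meta-plugin'
  description = 'A meta plugin bundling two example plugins'
  plugins = ['plugin1', 'plugin2']
}
```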
This commit introduces the ability for the core Elasticsearch JAR to be
a multi-release JAR containing code that is compiled for JDK 8 and code
that is compiled for JDK 9. At runtime, a JDK 8 JVM will ignore the JDK
9 compiled classfiles, and a JDK 9 JVM will use the JDK 9 compiled
classfiles instead of the JDK 8 compiled classfiles. With this work, we
utilize the new JDK 9 API for obtaining the PID of the running JVM,
instead of relying on a hack.
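A hedged sketch of what the multi-release packaging entails (the java9 source set name is an assumption):
```
jar {
  // flag the jar so JDK 9+ JVMs look under META-INF/versions/9
  manifest.attributes('Multi-Release': 'true')
  into('META-INF/versions/9') {
    from sourceSets.java9.output
  }
}
```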
For now, we want to keep IDEs on JDK 8 so when the build is in an IDE we
ignore the JDK 9 source set (as otherwise the IDE would give compilation
errors). However, with this change, running Gradle from the command-line
now requires JAVA_HOME and JAVA_9_HOME to be set. This will require
follow-up work in our CI infrastructure and our release builds to
accommodate this change.
Relates #28051
This commit modifies the build to require JDK 9 for
compilation. Henceforth, we will compile with a JDK 9 compiler targeting
JDK 8 as the class file format. Optionally, RUNTIME_JAVA_HOME can be set
as the runtime JDK used for running tests. To enable this change, we
separate the meaning of the compiler Java home versus the runtime Java
home. If the runtime Java home is not set (via RUNTIME_JAVA_HOME) then
we fallback to using JAVA_HOME as the runtime Java home. This enables:
- developers only have to set one Java home (JAVA_HOME)
- developers can set an optional Java home (RUNTIME_JAVA_HOME) to test
on the minimum supported runtime
- we can test compiling with JDK 9 running on JDK 8 and compiling with
JDK 9 running on JDK 9 in CI
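In practice this means something like (paths are placeholders):
```
$ export JAVA_HOME=/path/to/jdk9           # compiler (required)
$ export RUNTIME_JAVA_HOME=/path/to/jdk8   # runtime (optional)
$ ./gradlew check
```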
Currently the code which disables transitive dependencies assumes all
deps are a "module dependency" in gradle. But a jar file dep is not.
This commit relaxes the closure signature to allow any dependency and
only enforce the transitive disabling for module dependencies.
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:
|____elasticsearch/
| |____ <plugin1> <-- The plugin files for plugin1 (the content of the elasticsearch directory)
| |____ <plugin2> <-- The plugin files for plugin2
| |____ meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:
description: simple summary of the meta plugin.
name: the meta plugin name
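For example, the descriptor contents might look like (values are placeholders):
```
description=A meta plugin bundling two example plugins
name=my_meta_plugin
```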
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:
|_____ plugins
| |____ <name_of_the_meta_plugin>
| | |____ meta-plugin-descriptor.properties
| | |____ <plugin1>
| | |____ <plugin2>
If the sub plugins contain a config or a bin directory, they are copied in a sub folder inside the meta plugin config/bin directory.
|_____ config
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
|_____ bin
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).
It is also not possible to remove a sub-plugin inside a meta plugin, only full removal of the meta plugin is allowed.
Closes#27316
* Fix license SPDX identifiers (CDDL)
* Fix license type for Custom URL:
* If the license is identified but not listed as an SPDX identifier, the character `;` is used after the identifier to set the license URL.
Gradle will no longer be needed in the test VMs as the Gradle wrapper
can be used to run the platform tests. This commit updates the platform
tests to use the Gradle wrapper, and removes installing Gradle in the VM
which will speed up VM provisioning.
Relates #28105
When finding the commit hash for the build to place in the JAR manifest
(which is used to identify the build), the scm-info plugin assumes that
GIT_COMMIT is the commit for this build. That assumption is wrong, this
build could be a sub-build of another build that GIT_COMMIT belongs
to. If GIT_COMMIT is set, we ignore the commit hash calculated by
scm-info and calculate the hash ourselves.
Relates #28082
This commit adds the infrastructure to plugin building and loading to
allow one plugin to extend another. That is, one plugin may extend
another by the "parent" plugin allowing itself to be extended through
java SPI. When all plugins extending a plugin are finished loading, the
"parent" plugin has a callback (through the ExtensiblePlugin interface)
allowing it to reload SPI.
This commit also adds an example plugin which uses this newly
implemented extensibility (adding to the painless whitelist).
* Adds task dependenciesInfo to BuildPlugin to generate a CSV file with dependencies information (name,version,url,license)
* Adds `ConcatFilesTask.groovy` to concatenates multiple files into one
* Adds task `:distribution:generateDependenciesReport` to concatenate `dependencies.csv` files into a single file (`es-dependencies.csv` by default)
# Examples:
$ gradle dependenciesInfo :distribution:generateDependenciesReport
## Use `csv` system property to customize the output file path
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dcsv=/tmp/elasticsearch-dependencies.csv
## When branch is not master, use `build.branch` system property to generate correct licenses URLs
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dbuild.branch=6.x -Dcsv=/tmp/elasticsearch-dependencies.csv
We disabled the Javadoc task on JDK 10 due to an apparent bug in Javadoc
generation on JDK 10. However, the client JAR task sets up its own
Javadoc task for client JARs (because it merely copies the non-client
Javadoc JAR). This commit disables that task too, since the Javadocs for
the non-client JAR will not exist.
Relates #27962
There appears to be a bug in JDK 10 for generating Javadocs with some
nested anonymous classes. This commit disables these on JDK 10 until the
upstream issue is resolved.
Relates #27952
JDK 10 has gone fully-modular. This means that:
- when targeting JDK 8 with a JDK 10 compiler, we have to use the full
profile
- when targeting JDK 10 with a JDK 10 compiler, we have to use
--add-modules java.base
Today we only target JDK 8 (our minimum compatibility version) so we
need to change the compiler flags conditional on using a JDK 10
compiler. This commit does that.
Relates #27884
When running the release tests, we set build.snapshot to false and this
causes all version numbers to not have "-SNAPSHOT". This is true even
for the tips of the branches (e.g., currently 5.6.6 on the 5.6
branch). Yet, if we do not set snapshot to false, then we would still be
trying to find artifacts with "-SNAPSHOT" appended which would not have
been built since build.snapshot is false. To fix this, we have to push
build.snapshot into the version logic.
Relates #27778
We have tests that manually unpackage the RPM and Debian package
distributions and start a cluster manually (not from the service) and
run a basic suite of integration tests against them. This is problematic
because it is not how the packages are intended to be used (instead,
they are intended to be installed using the package installation tools,
and started as services) and so violates assumptions that we make about
directory paths. This commit removes these integration tests, instead
relying on the packaging tests to ensure the packages are not
broken. Additionally, we add a sanity check that the package
distributions can be unpackaged. Finally, with this change we can remove
some leniency from elasticsearch-env about checking for the existence of
the environment file; that leniency was there solely for these
integration tests.
Relates #27725
The problem here is that splitting was using a method that intentionally
trims whitespace (the method is really meant to be used for splitting
parameters where whitespace should be trimmed like list
settings). However, for routing values whitespace should not be trimmed
because we allow routing with leading and trailing spaces. This commit
switches the parsing of these routing values to a method that does not
trim whitespace.
Relates #27712
This new snapshot mostly brings a change to TopFieldCollector which can now
early terminate collection when trackTotalHits is `false`.
As a follow-up, we should replace our usage of
`EarlyTerminatingSortingCollector` with this new option.
Gradle 4.2 introduced a feature for safer handling of stale output files. Unfortunately, due to the way some of our tasks are written, this broke execution of our REST tests tasks. The reason for this is that the extract task (which extracts the ES distribution) would clean its output directory, and thereby also remove the empty cwd subdirectory which was created by the clean task. The reason why Gradle would remove the directory is that the earlier running clean task would programmatically create the empty cwd directory, but not make Gradle aware of this fact, which would result in Gradle believing that it could just safely clean the directory.
This commit explicitly defines a task to create the cwd subdirectory, and marks the directory as output of the task, so that the subsequent extract task won't eagerly clean it. It thereby restores full compatibility of the build with Gradle 4.2 and above.
This commit also removes the @Input annotation on the waitCondition closure of AntFixture, which conflicted with the extended input/output validation of Gradle 4.3.
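A sketch of the idea, with hypothetical task and path names:
```
task createCwd {
  // declare the directory as a task output so Gradle's stale-output
  // cleanup will not silently delete it
  def cwdDir = file("${buildDir}/cluster/cwd")
  outputs.dir cwdDir
  doLast {
    cwdDir.mkdirs()
  }
}
```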
The main highlight of this new snapshot is that it introduces the opportunity
for queries to opt out of caching. In case a query opts out of caching, not only
will it never be cached, but also no compound query that wraps it will be
cached.
The build currently does not work with Gradle 4.3.1 as the Gradle team stopped publishing the gradle-logging dependency to jcenter, starting with 4.3.1 (not sure why). There are two options:
- Add the repository managed by Gradle team (https://repo.gradle.org/gradle/libs-releases-local) to our build
- Use an older version (4.3) of the dependency when running with Gradle 4.3.1.
To avoid depending on another external repo, I've chosen solution 2. Note that this solution could break on future versions, but as this is a compileOnly dependency, and the interface we use has been super stable since forever, I don't envision this being an issue (and it would be easily detected by a breaking build).
PR #26911 set minimum_master_nodes from number_of_nodes to (number_of_nodes / 2) + 1 in our REST tests. This has led to test failures (see #27233) as the REST tests only configure the first node in its unicast.hosts pinging list (see explanation here: #27233 (comment)). Until we have a proper fix for this, I'm reverting the change in #26911.
Gradle 5.0 will remove support for colons in configuration and task
names. This commit fixes this for our build by removing all current uses
of colons in configuration and task names.
Relates #27305
While it's not possible to upgrade the Jackson dependencies
to their latest versions yet (see #27032 (comment) for more)
it's still possible to upgrade to the latest 2.8.x version.
An upstream Gradle change has broken us starting on version 4.2. This
commit blacklists these versions until we can either find a workaround,
or the upstream issue is addressed.
Relates #27087
Upgrade to Jackson 2.9.2 and also use a boolean `closed` flag to
indicate that a FastStringReader instance is closed, so that length
is still correctly reported after the reader is closed.
The rolling-upgrade test was only writing the "minimum_master_nodes" setting to the configuration file of the old nodes, but not the upgraded ones.
Also changes the value of "minimum_master_nodes" from "number_of_nodes" to "(number_of_nodes / 2) + 1".
Update provides:
* Adds support for Gradle 4.0 (without deprecation warning)
* Full Java 9 support through ASM 6.0
Version 2.4 was buggy (it broke Gradle dependencies on SourceSets), but this version fixes it.
Today we have all non-plugin mappers in core. I'd like to start moving those
that neither map to json datatypes nor are very frequently used like `date` or
`ip` to a module.
This commit creates a new module called `mappers-extra` and moves the
`scaled_float` and `token_count` mappers to it. I'd like to eventually move
`range` fields there but it's more complicated due to their intimate
relationship with range queries.
Relates #10368
This commit adds a dependency to the install module task on the task
that builds the module. This is needed for standalone integration
tests that require other modules to be installed. Without this, we do
not have a guarantee that the module is bundled.
Javadoc linking between projects currently relies on
projectSubstitutions. However, that is an extension variable that is not
part of BuildPlugin. This commit moves the javadoc linking into the root
build.gradle, alongside where projectSubstitutions are defined.
There is a bug in Log4j on JDK 9 for walking the stack to find where a
log line is coming from. This bug is impacting some of our testing, so
this commit marks these tests as skippable only on JDK 9 until the bug
is fixed upstream.
Relates #26467
The current script service has a script compilation limit for a one
minute window. This is set to a small default value of 15. Instead of
increasing that default value, this commit introduces a new setting
that allows configuring a rate per time unit, so that the script service can deal with bursts better.
The new setting is named `script.max_compilations_rate`,
requires a nonnegative number and a positive time value.
The default is `75/5m`, which is equivalent to the existing 15 per minute.
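In elasticsearch.yml the new setting looks like:
```
script.max_compilations_rate: 75/5m
```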
When changing how the config path is configured (from a command-line
flag to an environment variable) we had to add BWC code in the build so
that we could form clusters with 5.x nodes in them. Now that this branch
has moved to 7.x, we no longer need to be BWC with 5.x for starting
nodes. This commit removes this dead BWC code.
Relates #26446
When starting a node in standalone tests, we sometimes use a wrapper
script as opposed to starting Elasticsearch directly (this is used for
backgrounding). On Windows, the path to this wrapper script can be
exceptionally long, exceeding the length of a batch script that cmd.exe
will invoke without whining. This commit replaces using the full path
name for this wrapper script by the short name for the wrapper script.
Additionally, the data, configuration, and jvm.options paths can also
end up being too long so we shorten those too. Care has to be taken with
the data directory because we usually rely on the node creating it on
startup but doing that will not be compatible with getting the short
name as that requires the directory already existing. Therefore, we
create that directory on-demand immediately before actually resolving
the short name.
Relates #26444
At present, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDEs
during the brief time it was used. We do not know exactly how many
users would need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes#26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
We currently add the apache license/notice for elasticsearch to any
plugin that uses our ES plugin gradle plugin. However, each plugin
should be able to use their own license. This commit adds licenseFile
and noticeFile properties to the root of projects using BuildPlugin,
which are added to the jar files for each such project.
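A sketch of the new properties in a plugin's build.gradle (paths are placeholders):
```
// point BuildPlugin at this plugin's own license and notice
licenseFile = rootProject.file('licenses/MY-PLUGIN-LICENSE.txt')
noticeFile = rootProject.file('NOTICE.txt')
```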
In some cases our Windows builds fail due to long path names that arise
from a combination of long build job names plus long sub-project
names. While newer versions of Windows can handle long paths, invoking
batch scripts longer than 260 characters via cmd.exe is still
problematic. This leads to failing integration tests because we can not
run the commands to install plugins, create the keystore, and start the
node. This commit handles this by converting all paths on Windows used
to start an Elasticsearch node to short path names.
Relates #26365
This commit removes the keystore creation on elasticsearch startup, and
instead adds a plugin property which indicates the plugin needs the
keystore to exist. It does still make sure the keystore.seed exists on
ES startup, but through an "upgrade" method that Bootstrap calls when
loading the keystore.
closes#26309
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadoc are not that useful unless they have links to the Elasticsearch classes in the latter case, and to the low-level REST client in the sniffer javadoc. This commit adds those links.
All of the snippets in our docs marked with `// TESTRESPONSE` are
checked against the response from Elasticsearch but, due to the
way they are implemented they are actually parsed as YAML instead
of JSON. Luckily, all valid JSON is valid YAML! Unfortunately
that means that invalid JSON has snuck into the examples!
This adds a step during the build to parse them as JSON and fail
the build if they don't parse.
But no! It isn't quite that simple. The displayed text of some of
these responses looks like:
```
{
...
"aggregations": {
"range": {
"buckets": [
{
"to": 1.4436576E12,
"to_as_string": "10-2015",
"doc_count": 7,
"key": "*-10-2015"
},
{
"from": 1.4436576E12,
"from_as_string": "10-2015",
"doc_count": 0,
"key": "10-2015-*"
}
]
}
}
}
```
Note the `...` which isn't valid json but we like it anyway and want
it in the output. We use substitution rules to convert the `...`
into the response we expect. That yields a response that looks like:
```
{
"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,
"aggregations": {
"range": {
"buckets": [
{
"to": 1.4436576E12,
"to_as_string": "10-2015",
"doc_count": 7,
"key": "*-10-2015"
},
{
"from": 1.4436576E12,
"from_as_string": "10-2015",
"doc_count": 0,
"key": "10-2015-*"
}
]
}
}
}
```
That is what the tests consume but it isn't valid JSON! Oh no! We don't
want to go update all the substitution rules because that'd be huge and,
ultimately, wouldn't buy much. So we quote the `$body.took` bits before
parsing the JSON.
Note the responses that we use for the `_cat` APIs are all converted into
regexes and there is no expectation that they are valid JSON.
Closes#26233
The secure settings tool reads from stdIn and we use a closure to
provide a value for this. Yet, we evaluate the value too late and end up
with the last provided value for all keys.
The environment variable CONF_DIR was previously inconsistently used in
our packaging to customize the location of Elasticsearch configuration
files. The importance of this environment variable has increased
starting in 6.0.0 as it's now used consistently to ensure Elasticsearch
and all secondary scripts (e.g., elasticsearch-keystore) all use the
same configuration. The name CONF_DIR is there for legacy reasons yet
it's too generic. This commit renames CONF_DIR to ES_PATH_CONF.
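For example, both of these then read the same config directory:
```
$ ES_PATH_CONF=/etc/elasticsearch ./bin/elasticsearch
$ ES_PATH_CONF=/etc/elasticsearch ./bin/elasticsearch-keystore create
```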
Relates #26197
The build was ignoring suffixes like "beta1" and "rc1" on the version numbers which was causing the backwards compatibility packaging tests to fail because they expected to be upgrading from 6.0.0 even though they were actually upgrading from 6.0.0-beta1. This adds the suffixes to the information that the build scrapes from Version.java. It then uses those suffixes when it resolves artifacts build from the bwc branch and for testing.
Closes#26017
Compiling all of the elasticsearch classes in one jvm, which is shared with
all of the loaded classes of gradle, can trip gc overhead limits. This
commit re-enables forking javac.
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.
Closes#25893
Closes#25870
Some REST tests can rapid-fire script compilations that exceed the
default script compilations per minute. Rather than subjecting ourselves
to spurious failures because of the limit being too low, we opt for a
larger limit here.
With Gradle 4.1 and newer JDK versions, we can finally invoke Gradle directly using a JDK9 JAVA_HOME without requiring a JDK8 to "bootstrap" the build. As the thirdPartyAudit task runs within the JVM that Gradle runs in, it needs to be adapted now to be JDK9 aware.
This commit also changes the `JavaCompile` tasks to only fork if necessary (i.e. when Gradle's JVM and JAVA_HOME's JVM differ).
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or lookup field data in different ways. Yet, we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing etc.
This change also removes the `recycler` option from `ClearIndicesCacheRequest` this option is a no-op and should have been removed long ago.
Stored fields were still being accessed for nested inner hits even if the _source was not requested.
This was done to figure out the id of the root document. However this is already known higher up the stack.
So instead this change adds the id to the nested search context, so that it is no longer required to be fetched via the stored fields.
In case the _source is large and no source is requested, hot threads like these would still appear:
```
100.3% (501.3ms out of 500ms) cpu usage by thread 'elasticsearch[AfXKKfq][search][T#6]'
2/10 snapshots sharing following 22 elements
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:352)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
and:
```
8/10 snapshots sharing following 27 elements
org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:135)
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:138)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.fillBuffer(CompressingStoredFieldsReader.java:531)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.readBytes(CompressingStoredFieldsReader.java:550)
org.apache.lucene.store.DataInput.readBytes(DataInput.java:87)
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:350)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.
Closes#25208
Using `sh` means we used whatever default the system has, which is `dash` on
Ubuntu, even though our startup script is written for bash (see the shebang).
This commit introduces the elasticsearch-env script. The purpose of this
script is threefold:
- vastly simplify the various scripts used in Elasticsearch
- provide a script that can be included in other scripts in the
Elasticsearch ecosystem (e.g., plugins)
- correctly establish the environment for all scripts (e.g., so that
users can run `elasticsearch-keystore` from a package distribution
without having to worry about setting `CONF_DIR` first, otherwise the
keystore would be created in the wrong location)
Relates #25815
This commit removes the environment variable ES_JVM_OPTIONS that allows
the jvm.options file to sit separately from the rest of the config
directory. Instead, we use the CONF_DIR environment variable for custom
configuration location just as we do for the other configuration files.
Relates #25679
This commit does two things:
- bumps the version from 6.0.0-alpha3 to 6.0.0-beta1
- renames the 6.0.0-alpha3 version constant to 6.0.0-beta1
Relates #25621
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.
Relates #25600
This commit introduces a nio based tcp transport into framework for
testing.
Currently Elasticsearch uses a simple blocking tcp transport for
testing purposes (MockTcpTransport). This diverges from production
where our current transport (netty) is non-blocking.
The point of this commit is to introduce a testing variant that more
closely matches the behavior of production instances.