* Improve logged exec output readability
- Split error and out streams and log them separately
- Log everything in a single call to prevent interference from
other log messages
Includes:
LUCENE-8594: DV update are broken for updates on new field
LUCENE-8590: Optimize DocValues update datastructures
LUCENE-8593: Specialize single value numeric DV updates
Relates #36286
This commit moves back to using explicit dependsOn for test tasks on
check. Not all tasks extending RandomizedTestingTask should be run by
check directly.
This commit fixes an oops when pushing a change to add the building of
the Docker images. A change that was made for testing was accidentally
left behind. This commit addresses that.
This commit introduces the building of the Docker images as bonafide
packaging formats alongside our existing archive and packaging
distributions. This build is migrated from a dedicated repository, and
converted to Gradle in the process.
Currently we use Math.random() in a few places in the tests, which makes these
tests not reproducible with the random seed mechanism that comes with
ESTestCase. This change removes those instances.
Closes#35435
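For illustration only, here is the kind of change this implies in a test that extends ESTestCase (the class and variable names below are made up; `randomIntBetween` is one of the seeded helpers the framework provides):
```java
import org.elasticsearch.test.ESTestCase;

// Hypothetical test used only to illustrate the change.
public class ShardCountTests extends ESTestCase {

    public void testPicksRandomShardCount() {
        // Not reproducible: Math.random() ignores -Dtests.seed
        // int shards = (int) (Math.random() * 5) + 1;

        // Reproducible: tied to the test seed printed in the REPRODUCE WITH line
        int shards = randomIntBetween(1, 5);
        assertTrue(shards >= 1 && shards <= 5);
    }
}
```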
- make it easier to add additional testing tasks with the proper configuration and add some where they were missing.
- mute or fix failing tests
- add a check as part of testing conventions to find classes not included in any testing task.
The logic in the dockerComposeSupported method currently returns false
even when docker and docker compose are available on the build machine.
This change updates the check to see if docker compose is available in
one of the two paths and allows the `tests.fixture.enabled` property to
disable the tests even if docker compose is available.
This commit upgrades netty. This will close #35360. Netty started
throwing an IllegalArgumentException if a CompositeByteBuf is
created with < 2 components. Netty4Utils was updated to reflect this
change.
This commit removes padding from the java version in the output of the
compiler/runtime/gradle java versions. This value is only ever 1 or 2
digits, and the trailing information (hotspot/jvm vendor info) is separated
from the other versions by the java home path, so there isn't a visual
reason for needing exact vertical alignment.
Sometimes the test fixtures plugin did not correctly disable tasks
from the build plugin as it should.
The plugin manager and tasks both use named domain object collections so
the previous code should have worked.
I did not have time to track it down, but suspect there's some race
condition in Gradle causing this. The plugin manager is still incubating.
Since the tasks are on the classpath even if the plugin is not applied, we
don't really need to involve the plugin at all.
Closes#36041
Our `REPRODUCE WITH` line wasn't working for tests with `(` or `)` in
the method name. This isn't super common, but happens sometimes for our
yml based rest tests. This is because randomized runner's globbing
mechanism didn't escape `(` or `)`. I filed this upstream at
https://github.com/randomizedtesting/randomizedtesting/issues/271
and Dawid kindly fixed the issue. This upgrades to pick up the fix.
Closes#35692
In #35259 we switched the default number of VMs to fork for unit tests to
the number of physical CPU cores. But because we could only get an accurate
count on machines with a normal `/proc` filesystem, macOS machines did not
pick up the new default. Given that macOS is a huge portion of developer
machines, we'd like to get the right default there. This does that.
It also moves the default-finding process from happening once per testing
task to happening once at startup. This seems like a good choice in general,
but a very good choice for macOS because we have to run an external command
to get the count.
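A minimal sketch of the macOS part, assuming `sysctl -n hw.physicalcpu` is the command used to obtain the count (the real logic lives in the build plugin and may differ in detail):
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

final class PhysicalCores {
    /** Physical core count on macOS via sysctl, or -1 if it cannot be determined. */
    static int macOsPhysicalCores() {
        try {
            Process process = new ProcessBuilder("sysctl", "-n", "hw.physicalcpu").start();
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(process.getInputStream()))) {
                String line = reader.readLine();
                return line == null ? -1 : Integer.parseInt(line.trim());
            }
        } catch (Exception e) {
            return -1;
        }
    }
}
```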
This inserts newlines in order to reduce line lengths in the
o.e.action.admin.cluster package to 140 characters or less. This
also removes the checkstyle suppressions for affected files.
Relates #34884, #34923
- The current version was hard coded in the test, causing it to fail as
we removed the qualifier
- also applied the base plugin to the root project to get the `clean`
task to work as expected. This was preventing the failure from
reproducing locally.
* Manage dependencies for test clusters
Create a configuration and add the distribution to it automatically.
A task is created and added as a dependency to any task that uses a test
cluster.
The task extracts all the zip archives (only zip is supported for now)
in the configuration.
We do this only once because most tests use the same distribution,
and thus we can avoid extracting it multiple times.
With this we will be able to start the node from the same files, which
will most of the time live in OS caches, or be copied on write if the
configuration requires it.
* Switch to jcenter
Jcenter supposedly operates on a CDN.
It should be faster and more reliable.
It's currently the default in Android Studio
(it switched from mavenCentral).
This change is safe because jcenter is a superset of mavenCentral.
* update comment
* DISCOVERY: 0s Initial State Timeout in Tests
* Don't wait for initial state even with a single node, otherwise the loop writing the discovery file causes that single node to wait
for its own transport.ports file for 30s.
* Closes#35456
* DISCOVERY: Fix RollingUpgradeTests
* Don't manually manage min master nodes if not necessary
* Remove some dead code
* Allow for manually supplying list of seed nodes
* Closes#35178
With this change, `Version` no longer carries information about the qualifier,
but we still need a way to show the "display version" that does have both
the qualifier and snapshot. This is now stored by the build and read from `META-INF`.
* Add missing up-to-date configuration
The source properties file and the dynamic elasticsearch version
(set based on properties) were missing from task outputs, leading
to the task being incorrectly considered up to date.
Closes#35204
* Introduce property to set version qualifier
- VersionProperties.elasticsearch is now a string which can have qualifier
and snapshot too
- The Version class in the build no longer cares about snapshot and
qualifier.
This further applies the pattern set in #34125 to reduce copy-and-paste
in the multi-document CRUD portion of the High Level REST Client docs.
It also adds line wraps to snippets that are too wide to fit into the box
when rendered in the docs, following up on the work started in #34163.
* DISCOVERY: Use Realistic Num. of Min Master Nodes
* With all 3 nodes starting in parallel, 2 nodes can win a master election and
start waiting for nodes to join. The rest tests wait for 30s on the
cluster health endpoint to show 3 nodes, which breaks if one of the master
nodes itself waits for 30s for joining nodes. Waiting for 5 seconds instead
fixes the issue and allows us to run with `minimum master nodes < node count`
* relates #33675
* Fix linelength suppressions in index.fielddata
* Some lines that were too long were dead code => Removed them and all code that became dead because of it
* Relates #34884
- we already require Java 11 to build, yet we target the minimum
supported version in build-tools (currently 8)
- this is because we have some checks that are executed in a new JVM
which could be running the minimum version.
- For everything else it would be nice to be able to use new features,
like the new process API.
With this change, we selectively compile the few classes that need an
older target version and move everything over to Java 10.
Unfortunately the current Gradle version does not support 11 as a target
version yet.
With this change, we apply the common test config automatically to all
newly created tasks instead of opting in specifically.
For plugin authors using the plugin externally this means that the
configuration will be applied to their RandomizedTestingTasks as well.
The purpose of the task is to simplify setup and make it easier to
change projects that use the `test` task but actually run integration
tests to use a task called `integTest` for clarity, but also because
we may want to configure and run them differently.
E.g., using different levels of concurrency.
- Restrict visibility of Aggregators and Factories
- Move PipelineAggregatorBuilders up a level so it is consistent with
AggregatorBuilders
- Checkstyle line length fixes for a few classes
- Minor odds/ends (swapping to method references, formatting, etc)
This commit adds the ability for docs tests to add a tear down
snippet. This snippet will be converted to a tear down section of the
generated REST tests.
In the docs tests, we have pre-defined setups in the build.gradle file,
and we can also define test setup sections within the doc page
itself. Alas, these two are incompatible in that if you try to use a
pre-defined setup alongside a test setup section, the pre-defined setup
will be silently ignored. This commit enables pre-defined setup sections
to be used together with test setup sections. The ordering here is that
pre-defined setup sections will be executed first, followed by the test
setup section.
The `term` and `phrase` suggesters have different options to filter candidates
based on their frequencies. The `popular` mode, for instance, filters candidate
terms that occur in fewer docs than the original term. However, when we compute this threshold
we use the total term frequency of a term instead of the document frequency. This is not in line
with the actual filtering, which is always based on the document frequency. This change fixes
this discrepancy and clarifies the meaning of the different frequencies in use in the suggesters.
It also ensures that the threshold doesn't overflow the maximum allowed value (Integer.MAX_VALUE).
Closes#34282
Applies our line length guidance to all classes in the server's `lucene`
directories *except* `XMoreLikeThis`. The only long line in
`XMoreLikeThis` says "remove this when we upgrade to Lucene 5". Given
that we're on Lucene 8, this is a little terrifying and deserves another
look.
This drops checkstyle suppressions that refer to files that don't exist
since those suppressions don't do anything other than make us feel bad.
It also updates some suppressions to more closely match the path to the
file that they suppress. These suppressions are still needed but didn't
pass the "the file exists" test because they weren't precise. It is just
easier on future-me if they are precise.
We generate tests from our documentation, including assertions about the
responses returned by a particular API. But sometimes we *can't* assert
that the response is correct because of some deficiency in our tooling.
Previously we marked the response `// NOTCONSOLE` to skip it, but this
is kind of odd because `// NOTCONSOLE` is really to mark snippets that
are json but aren't requests or responses. This introduces a new
construct to skip response assertions:
```
// TESTRESPONSE[skip:reason we skipped this]
```
Now that JDK 11 is GA, we want to switch our 6.x and master branches to
the JDK 11 compiler. This commit makes this change, and also removes
JDK 10 from the CI configuration.
This slightly reworks the expert script plugin example so it fits on the
page when the docs are rendered. The box in which it is rendered is not
very wide so it took a bit of twisting to make it readable.
Mainly this fixes a warning by replacing the unchecked `new ActionListener`
with the checked `new ActionListener<Response>`, and it also fixes the line
length violations in this class.
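Roughly, the change is of this shape (the response type below is just a stand-in for illustration):
```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;

class ListenerExample {
    // Before: `new ActionListener() { ... }` (raw type, unchecked warning).
    // After: parameterized listener, no warning.
    ActionListener<ClusterHealthResponse> listener = new ActionListener<ClusterHealthResponse>() {
        @Override
        public void onResponse(ClusterHealthResponse response) {
            // handle the response
        }

        @Override
        public void onFailure(Exception e) {
            // handle the failure
        }
    };
}
```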
We wrap code in `// tag` and `// end` comments to include it in our docs. Our
current docs style wraps code snippets in a box that is only wide enough
for 76 characters and adds a horizontal scroll bar for wider snippets
which makes the snippet much harder to read. This adds a checkstyle check
that looks for java code that is included in the docs and is wider than
that 76 characters so all snippets fit into the box. It solves many of
the failures that this catches but suppresses many more. I will clean
those up in a follow up change.
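For context, a snippet is delimited in the Java source roughly like this (the tag name here is invented):
```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

class DocSnippetExample {
    RestHighLevelClient example() {
        // tag::create-client
        RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
        // end::create-client
        return client;
    }
}
```
The lines between the `// tag` and `// end` comments are what end up in the rendered docs, and they are what the new check measures against the 76 character limit.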
This change adds the OneStatementPerLineCheck to our checkstyle precommit
checks. This rule restricts the number of statements per line to one. The
reasoning behind this is that it is very difficult to read multiple statements on
one line. People seem to mostly use it in short lambdas and switch statements in
our code base, but just going through the changes already uncovered some actual
problems in randomization in test code, so I think it's worth it.
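To illustrate what the check flags (made-up example):
```java
class OneStatementPerLineExample {
    int flagged() {
        int hits = 0; int misses = 0; // flagged: two statements on one line
        return hits + misses;
    }

    int passes() {
        int hits = 0;   // one statement per line
        int misses = 0;
        return hits + misses;
    }
}
```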
Wraps all lines in our test framework at 140 characters because that is
our standard line length and removes all of the checkstyle suppressions
for the test framework.
Drops most of `ModuleTestCase` because it isn't used and we're moving
away from using guice in the way that it wants to test anyway. Also
switches a few classes that extend it but don't use it to extend
`ESTestCase` instead.
Gradle can sometimes emit mixed log lines due to how we spawn things in
separate processes. This commit changes the jarhell integ test to only
look for the exception and message, instead of including the outer part
about the exception in thread main.
closes#33774
Make sure that all java files have a package declaration and that all of
the package declarations line up with the directory structure. This
would have caught the bug that I caused in
190ea9a6de and fixed in
b6d68bd805.
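A sketch of the idea behind the check, assuming nothing about how it is actually implemented in the build:
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Illustrative only: does the package declaration of a .java file match its
// location under the source root?
final class PackageMatchesDirectory {
    private static final Pattern PACKAGE = Pattern.compile("^package\\s+([\\w.]+)\\s*;");

    static boolean matches(Path sourceRoot, Path javaFile) throws IOException {
        Optional<String> declared;
        try (Stream<String> lines = Files.lines(javaFile)) {
            declared = lines.map(PACKAGE::matcher)
                .filter(Matcher::find)
                .map(m -> m.group(1))
                .findFirst();
        }
        String expected = sourceRoot.relativize(javaFile.getParent())
            .toString()
            .replace(java.io.File.separatorChar, '.');
        return declared.isPresent() && declared.get().equals(expected);
    }
}
```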
This commit changes the sanity check which ensures the test task was
properly replaced with randomized testing to have a per project check,
instead of a global one. The previous global check assumed all test
tasks within the root project and below should be randomized testing,
but that is not the case for a multi project in which only one project
is an elasticsearch plugin. While the new check is not able to emit all
of the failed replacements in one error message, the efficacy of the
check remains.
This commit switches the joda time backcompat in scripting to use
augmentation over ZonedDateTime. The augmentation methods provide
compatibility with the missing methods between joda's DateTime and
java's ZonedDateTime. Due to getDayOfWeek returning an enum in the java
API, ZonedDateTime is wrapped so that the method can return an int as
joda time does. The java time API version is renamed to
getDayOfWeekEnum, which will be kept through 7.x for compatibility while
users switch back to getDayOfWeek once joda compatibility is removed.
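A rough sketch of the wrapping idea (the wrapper name and constructor here are invented; the real augmentation lives in the scripting layer):
```java
import java.time.DayOfWeek;
import java.time.ZonedDateTime;

/** Illustrative wrapper providing the joda-style int accessor next to the java.time enum. */
final class DateTimeWrapper {
    private final ZonedDateTime dt;

    DateTimeWrapper(ZonedDateTime dt) {
        this.dt = dt;
    }

    /** Joda-compatible: day of week as an int, Monday == 1. */
    int getDayOfWeek() {
        return dt.getDayOfWeek().getValue();
    }

    /** The java.time accessor under a new name so scripts can still reach the enum. */
    DayOfWeek getDayOfWeekEnum() {
        return dt.getDayOfWeek();
    }
}
```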
* LeafCollector.setScorer() now takes a Scorable
* Scorers may not have null Weights
* IndexWriter.getFlushingBytes() reports how much memory is being used by IW threads writing to disk
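For example, a collector written against the first of these changes now receives a `Scorable` (a sketch against the Lucene 8 APIs as I understand them, not code from the upgrade itself):
```java
import java.io.IOException;
import org.apache.lucene.search.Scorable;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

/** Sums the scores of collected documents using the Scorable-based API. */
class ScoreSummingCollector extends SimpleCollector {
    private Scorable scorer;
    double totalScore;

    @Override
    public void setScorer(Scorable scorer) throws IOException {
        this.scorer = scorer; // previously took a Scorer
    }

    @Override
    public void collect(int doc) throws IOException {
        totalScore += scorer.score();
    }

    @Override
    public ScoreMode scoreMode() {
        return ScoreMode.COMPLETE;
    }
}
```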
Today the FilterRoutingTests take the belt-and-braces approach of excluding
some node attribute values and including some others. This means that we don't
really test that both inclusion and exclusion work correctly: as long as one of
them works as expected then the test will pass. This change improves these
tests by only using one approach at once, demonstrating that both do indeed
work, and adds tests for various other scenarios too.
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name in logging is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
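A minimal sketch of the "read on every print" idea using Lucene's `SetOnce` (names here are illustrative, not the actual logging converter):
```java
import org.apache.lucene.util.SetOnce;

/** Holds the node name once known; reports "unknown" until then. */
final class NodeNameHolder {
    private final SetOnce<String> nodeName = new SetOnce<>();

    void nodeNameResolved(String name) {
        nodeName.set(name); // SetOnce throws if set twice
    }

    /** Called for every log line rather than once at pattern build time. */
    String forLogLine() {
        String name = nodeName.get();
        return name == null ? "unknown" : name;
    }
}
```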
This change collapses all metrics aggregations classes into a single package `org.elasticsearch.aggregations.metrics`.
It also restricts the visibility of some classes (aggregators and factories) that should not be used outside of the package.
Relates #22868
The main benefit of the upgrade for users is the search optimization for top scored documents when the total hit count is not needed. However this optimization is not activated in this change, there is another issue opened to discuss how it should be integrated smoothly.
Some comments about the change:
* Tests that can produce negative scores have been adapted but we need to forbid them completely: #33309
Closes #32899
Solves all of the xpack line length suppressions and then merges the
remainder of the xpack checkstyle_suppressions.xml file into the core
checkstyle_suppressions.xml file. At this point that just means the
antlr generated files for sql.
It also adds an exclusion to the line length tests for javadocs that
are just a URL. We have one such javadoc and breaking up the line would
make the link difficult to use.
This commit introduces the formal notion of a private setting. This
enables us to register some settings that we had previously not
registered as fully-fledged settings to avoid them being exposed via
APIs such as the create index API. For example, we had hacks in the
codebase to allow index.version.created to be passed around inside of
settings objects, but was not registered as a setting so that if a user
tried to use the setting on any API then they would get an
exception. This prevented users from setting index.version.created on
index creation, or updating it via the index settings API. By
introducing private settings, we can continue to reject these attempts,
yet now we can represent these settings as actual settings. In this
change, we register index.version.created as an actual setting. We do
not cut over all settings that we had been treating as private in this
pull request, as it is already quite large due to moving some tests around
to account for the fact that some tests need to be able to set the
index.version.created. This can be done in a follow-up change.
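As a sketch of what registering such a setting could look like (the exact property name is assumed here for illustration, as is the use of a simple string setting):
```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

// Illustrative only: a first-class Setting that is index-scoped but marked
// private, so user attempts to set it on index creation or via the settings
// API are still rejected.
final class PrivateSettingExample {
    static final Setting<String> INDEX_VERSION_CREATED =
        Setting.simpleString("index.version.created",
            Property.IndexScope, Property.PrivateIndex);
}
```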
Exclude classes meant for newer versions than what we are auditing against, as those classes won't be found. There's no reason to exclude JDK classes from newer versions; with this PR, we will not extract them in the first place.
The new implementation is functionally equivalent to the old, Ant-based one.
It parses the task's standard error to get the missing classes and violations in the same way.
I considered re-using ForbiddenApisCliTask but Gradle makes it hard to build inheritance with tasks that have task actions, since the order of the task actions can't be controlled.
This inheritance isn't fully desired either, as the third party audit task is much more opinionated and we don't want to expose some of the configuration.
We could probably extract a common base class without any task actions, but that is probably more trouble than it's worth.
Closes#31715
This commit removes the setting of the fork options maximum memory size
in our build plugin and instead adds the value in the gradle.properties
file to be alongside the value set in jvmArgs.
This change is necessary when using parallel compilation as 512m is not
sufficient for parallel compilation on some machines.
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
Elasticsearch versions earlier than 6.4.0 cannot properly run in a
FIPS 140 JVM. This commit ensures that we use a non-FIPS JVM for
nodes that we spin up in BWC tests even when we're testing FIPS.
* Scripted metric aggregations: add deprecation warning and system property to control legacy params
Scripted metric aggregation params._agg/_aggs are replaced by state/states context variables. By default the old params are still present, and a deprecation warning is emitted when Scripted Metric Aggregations are used. A new system property can be used to disable the legacy params. This functionality will be removed in a future revision.
* Fix minor style issue and docs test failure
* Disable deprecated params._agg/_aggs in tests and revise tests to use state/states instead
* Add integration test covering deprecated scripted metrics aggs params._agg/_aggs access
* Disable deprecated params._agg/_aggs in docs integration tests and revise stored scripts to use state/states instead
* Revert unnecessary migrations doc change
A relevant note should be added in the changes destined for 7.0; this PR is going to be backported to 6.x.
* Replace deprecated _agg param bwc integration test with a couple of unit tests
* Fix compatibility test after merge
* Rename backwards compatibility system property per code review feedback
* Tweak deprecation warning text per review feedback
Add tests for build-tools to make sure example plugins build stand-alone using it.
This will catch issues such as referencing files from the buildSrc directly, breaking external uses of build-tools.
* Implement Version in java
- This allows moving all .java files out of .groovy.
- Will prevent eclipse from getting tangled up in this setup
- Makes it possible to use Version from Java
* PR review comments
* Cluster formation plugin with reference counting
```
> Task :plugins:ingest-user-agent:listElasticSearchClusters
Starting cluster: myTestCluster
* myTestCluster: /home/alpar/work/elastic/elasticsearch/plugins/ingest-user-agent/foo
Asked to unClaimAndStop myTestCluster, since cluster still has 1 claim it will not be stopped
> Task :plugins:ingest-user-agent:testme UP-TO-DATE
Stopping myTestCluster, since no of claims is 0
```
- Meant to auto manage the clusters lifecycle
- Add integration test for cluster formation
* Fix rebase
* Change to `useCluster` method on task
* Add a task to run forbiddenapis using cli
Add a task that offers an equivalent check to the forbidden APIs plugin,
but runs it using the forbiddenAPIs CLI instead.
This isn't wired into precommit yet, and doesn't work for projects
that require specific signatures, etc. It's meant to show how this can
be used. The next step is to make a custom task type and configure it
based on the project extension from the plugin, and make some minor
adjustments to some build scripts, as we can't be 100% compatible with
it, at least due to how additional signatures are passed.
Notes:
- there's no `--target` for the CLI version so we have to pass in
specific bundled signature names
- the cli task already wires to `runtimeJavaHome`
- no equivalent for `failOnUnsupportedJava = false` but doesn't seem to
be a problem. Tested with Java 8 to 11
- there's no way to pass additional signatures as URLs; these will have
to be on the file system, and can't be resources on the classpath unless we
rely on how forbiddenapis is implemented and mimic them as bundled
signatures.
- the average of 3 runs is 4% slower using the CLI for :server.
(`./gradlew clean :server:forbiddenApis` vs `./gradlew clean
:server:forbiddenApisCli`)
- up-to-date checks don't work on the cli task yet, that will happen
with the custom task.
See also: #31715
The upcoming ML log structure finder functionality will use these
libraries, and it makes sense to use the same versions that are
being used elsewhere in Elasticsearch. This is especially true
with icu4j, which is pretty big.
Enhance reproduction line with info about jdks
Provide the ability to control the compiler and java versions just by
passing a property. The actual java home comes from the
`JAVA<major>_HOME` env vars that we already require.
This works better with the Gradle daemon as well.
Output is also changed a bit.
for `-Druntime.java=8 -Dcompiler.java=9`:
```
=======================================
Elasticsearch Build Hamster says Hello!
Gradle Version : 4.9
OS Info : Linux 4.17.8-1-ARCH (amd64)
Compiler JDK Version : 11 (Oracle Corporation 11-ea [OpenJDK 64-Bit Server VM 11-ea+22])
Runtime JDK Version : 11 (Oracle Corporation 11-ea [OpenJDK 64-Bit Server VM 11-ea+22])
Gradle JDK Version : 10 (Oracle Corporation 10.0.1 [OpenJDK 64-Bit Server VM 10.0.1+10])
Compiler java.home : /home/alpar/opt/jdk-11-ea22/
Runtime java.home : /home/alpar/opt/jdk-11-ea22/
Gradle java.home : /usr/lib/jvm/java-10-openjdk
Random Testing Seed : EA858533191E8DFB
=======================================
```
Without configuration:
```
=======================================
Elasticsearch Build Hamster says Hello!
=======================================
Gradle Version : 4.9
OS Info : Linux 4.17.8-1-ARCH (amd64)
JDK Version : 10 (Oracle Corporation 10.0.1 [OpenJDK 64-Bit Server VM 10.0.1+10])
JAVA_HOME : /usr/lib/jvm/java-10-openjdk
Random Testing Seed : 4BD5B2A839C8FCA1
=======================================
```
Here's what a reproduction line looks like (test made to fail):
```
./gradlew :modules:lang-painless:test -Dtests.seed=2DA2379065A4EEAB -Dtests.class=org.elasticsearch.painless.AdditionTests -Dtests.method="testInt" -Dtests.security.manager=true -Dtests.locale=es-PE -Dtests.timezone=WET -Dcompiler.java=10 -Druntime.java=10
```
This commit adds two pieces. The first is a small set of documentation providing
instructions on how to get setup to run context examples. This will require a download
similar to how Kibana works for some of the examples. The second is an ingest processor
example using the downloaded data. More examples will follow, ideally one per PR.
This also adds a set of tests to individually test each script as a unit test.
Currently, snippets in lists cannot be rendered correctly as console commands because a console command requires the line continuation '+'. This allows snippets to have a line continuation between the snippet and the // CONSOLE comment.
This commit adds the elastic repo, alongside maven central, to any
plugin using BuildPlugin. This is necessary now because the default
distribution, which the test framework uses by default, is now only
hosted on elastic maven. While inside the elasticsearch build this does
not matter, those that build external plugins with our build-tools could
have tests fail to find the distribution dependency.
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The es setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and the value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow up.
When we added the `java-gradle-plugin` to `buildSrc` it added a second
task to generate the pom that duplicates the publishing work that we
configure in `BuildPlugin`. Not only does it duplicate the pom, it
creates a pom that is missing things like `name` and `description` which
are required for publishing to maven central.
This change disables the duplicate pom generation.
These are collected from a number of open PRs and are required to
improve existing tests and write more readable future tests.
I am extracting them to their own PR hoping to be able to merge and use
them sooner.
* Determine the minimum gradle version based on the wrapper
This is restrictive and forces users of the plugin to move together with
us, but without integration tests it's close to impossible to make sure
that the claimed compatibility is really there.
If we do want to offer more flexibility, we should add those tests
first.
* Track gradle version in individual file
* PR review
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
Throw an exception for doc['field'].value
if this document is missing a value for the field.
After deprecation changes have been backported to 6.x,
make this a default behaviour in 7.0
Closes#29286
In 1.x and 2.x, plugins were published to maven and the plugin
installer downloaded them from there. This was later changed to install
from the download service, and in 5.0 plugin zips were no longer
published to maven. However, the build still currently produces an
unused pom file. This is troublesome in the special case when the main
jar of a plugin needs to be published (and thus needs a pom file of
the same name).
closes#31946
This commit moves additional unit test runners from being dependencies
of the test task to dependencies of check. Without this change,
reproduce lines are incorrect due to the additional test runner not
matching any of the reproduce class/method info.
closes#31964
Moves the customizations to the build to produce nice shadow jars and
javadocs into common build code, mostly BuildPlugin with a little into
the root build.gradle file. This means that any project that applies the
shadow plugin will automatically be set up just like the high level rest
client:
* The non-shadow jar will not be built
* The shadow jar will not have a "classifier"
* Tests will run against the shadow jar
* Javadoc will include all of the shadowed classes
* Service files in `META-INF/services` will be merged
We have been encountering name mismatches between API defined in our
REST spec and method names that have been added to the high-level REST
client. We should check this automatically to prevent further mismatches,
and correct all the current ones.
This commit adds a test for this and corrects the issues found by it.
With this commit we disable the real-memory circuit breaker in REST
tests as this breaker is based on real memory usage over which we have
no (full) control in tests and the REST client is not yet ready to retry
on circuit breaker exceptions.
This is only meant as a temporary measure to avoid spurious test
failures while we ensure that the REST client can handle those
situations appropriately.
Closes#32050
Relates #31767
Relates #31986
Relates #32074
Implement buildSrc Version in java
- This allows moving all .java files out of .groovy.
- Will prevent eclipse from getting tangled up in this setup
- Makes it possible to use Version from Java
Recreates the rest of the bats packaging tests for the tar distribution
in the java packaging test project, with support for both tar and zip
packaging, both oss and default flavors, and on Linux and Windows. Most
tests are followed fairly closely, some have either been dropped if
unnecessary or folded into others if convenient.
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
Closes#29286
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format `
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix JAVA version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* fix painless array tests
closes#31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on the compile java version, not the runtime
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
* remove explicit wrapper task
It's created by Gradle and triggers a deprecation warning
Simplify configuration
* Upgrade shadow plugin to get rid of Gradle deprecation
* Move compile configuration to base plugin
Solves Gradle deprecation warning from earlier Gradle versions
* Enable stable publishing in the Gradle build
* Replace usage of deprecated property
* bump Gradle version in build compare
* Remove deprecation warnings to prepare for Gradle 5
Gradle replaced `project.sourceSets.main.output.classesDir` of type
`File` with `project.sourceSets.main.output.classesDirs` of type
`FileCollection`
(see [SourceSetOutput](https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/java/org/gradle/api/tasks/SourceSetOutput.java))
Build output is now stored in a per-language folder.
There are a few places where we use that; here they are and how they were
fixed:
- Randomized Test execution
- look in all test folders (pass the multi-dir configuration to the
ant runner)
- DRY the task configuration by introducing `basedOn` for
`RandomizedTestingTask` DSL
- Extend the naming convention test to support passing in multiple
directories
- Fix the standalone test plugin: the dirs were not passed through;
checked with a debugger and the statement had no effect due to a
missing `=`.
Closes#30354
* Only check Java tests, PR feedback
- The name checker was run for Groovy tests that don't adhere to the same
conventions, causing the check to fail
- implement PR feedback
* Replace `add` with `addAll`
This worked because the list is passed to `project.files` that does the
right thing.
* Revert "Only check Java tests, PR feedback"
This reverts commit 9bd9389875d8b88aadb50df57a45cd0d2b073241.
* Remove `basedOn` helper
* Bring some changes back
The previous revert accidentally reverted too much
* Fix negation
* add back public
* revert name check changes
* Revert "revert name check changes"
This reverts commit a2800c0b363168339ea65e2a79ec8256e5883e6d.
* Pass all dirs to name check
Only run on Java for build-tools, this is safe because it's a self test.
It needs more work before we could pass in the Groovy classes as well as
these inherit from `GroovyTestCase`
* remove self tests from name check
The self tests complicate the task setup and disable real checks on
build-tools.
With this change there are no more self tests, and the build-tools tests
adhere to the conventions.
The self test will be replaced by gradle test kit, thus the addition of
the Gradle plugin builder plugin.
* First test to run a Gradle build
* Add tests that replace the name check self test
* Clean up integ test base class
* Always run tests
* Align with test naming conventions
* Make integ. test case inherit from unit test case
The check requires this
* Remove `import static org.junit.Assert.*`
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include number of non conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes#31324. The issue has full context in comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730, replace the task taking
the task providers into account.
Closes#31324.
* Only apply the build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
Skips tests that require xpack if we run the doc build without xpack. So
this should work:
```
./gradlew -p docs check -Dtests.distribution=oss-zip
```
This is implemented by detecting parts of the doc that look like:
```
[testenv="basic"]
```
Relates to #30665
* remove left-over comment
* make sure of the property for plugins
* skip installing modules if these exist in the distribution
* Log the distribution being run
* Don't allow running with integ-tests-zip passed externally
* top level x-pack/qa can't run with oss distro
* Add support for matching objects in lists
Makes it possible to have a key that points to a list and assert that a
certain object is present in the list. All keys have to be present and
values have to match. The objects in the source list may have additional
fields.
example:
```
match: { 'nodes.$master.plugins': { name: ingest-attachment } }
```
* Update plugin and module tests to work with other distributions
Some of the tests expected that the integration tests would always be run
with the `integ-test-zip` distribution so that there would be no other
plugins loaded.
With this change, we check for the presence of the plugin without
assuming exclusivity.
* Allow modules to run on other distros as well
To match the behavior of tests.distribution
* Add and use a new `contains` assertion
Replaces the previous changes that caused `match` to do a partial match.
* Implement PR review comments
This switches the docs tests from the `oss-zip` distribution to the
`zip` distribution so they have xpack installed and configured with the
default basic license. The goal is to be able to merge the
`x-pack/docs` directory into the `docs` directory, marking the x-pack
docs with some kind of marker. This is the first step in that process.
This also enables `-Dtests.distribution` support for the `docs`
directory so you can run the tests against the `oss-zip` distribution
with something like
```
./gradlew -p docs check -Dtests.distribution=oss-zip
```
We can set up Jenkins to run both.
Relates to #30665
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
with no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
Most of our license file names strip the version off the artifact name
when deducing the license filename. However, the version on the GCS SDK
(google-api-services-storage) does not match the usual format and
instead starts with a vee. This means that the license filename for this
license ended up carrying the version and we should not do that. This
commit adjusts the regex that deduces the license filename to account for
this case, and adjusts the google-api-services-storage license files
accordingly.
This commit enhances the license detection that we have for various
licenses. Here we improve the detection for all licenses (especially the
Apache 2.0 License), the BSD 2-clause license, the MIT (with
attribution) license, and we add detection for the BSD 3-clause
license. One way that we achieved this improvement is by changing how
the license files are read so that rather than reading them as a
multi-line string which ended up represented as "[line1, line2, line3,
...]" internally, we read the full bytes of the license text and replace
all whitespace with a single space so the license text is now loaded as
"line1 line2 line3". For the MIT license we add the actual license text
and remove the "MIT" string as not all copies of the license clearly
indicate that the text is the MIT license. We take a similar strategy
for the BSD-2 and BSD-3 clause licenses. With this change, we reduce the
number of "custom" licenses in the codebase from 31 to 2. The two
remaining appear to be truly custom licenses, not carrying licenses
identifiable by SPDX. A follow-up will address "unknown" licenses.
Use all running nodes as unicast seeds in the rolling restart tests to
avoid a race between pinging and the tests. Without this if the tests
are too fast then when a new node comes up and pings its single
configured seed node that node *might* not have a ping from the other
running node.
This commit adds a new writeBlobAtomic() method to the BlobContainer
interface that can be implemented by repository implementations which
support atomic write operations.
When the BlobContainer implementation does not provide a specific
implementation of writeBlobAtomic(), then the writeBlob() method is used.
Related to #30680
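A rough sketch of the shape of this change, with simplified method signatures (the real BlobContainer methods take more parameters):
```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative only, not the actual org.elasticsearch.common.blobstore.BlobContainer.
interface BlobContainerSketch {

    void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException;

    /**
     * Writes the blob atomically where the implementation supports it; otherwise
     * this default falls back to the non-atomic writeBlob().
     */
    default void writeBlobAtomic(String blobName, InputStream inputStream, long blobSize)
            throws IOException {
        writeBlob(blobName, inputStream, blobSize);
    }
}
```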
This snapshot includes:
- LUCENE-8341: Record soft deletes in SegmentCommitInfo which will resolve #30851
- LUCENE-8335: Enforce soft-deletes field up-front
Today when executing REST tests we take full responsibility for cluster
configuration. Yet, there are use cases for bringing your own cluster to
the REST tests. This commit is a small first step towards that effort by
skipping creating the cluster if the tests.rest.cluster and test.cluster
system properties are set. In this case, the user takes full
responsibility for configuring the cluster as expected by the REST
tests. This step is by no means meant to be perfect or complete, only a
baby step.
This commit ensures the upgrade_is_oss indicator for
the packaging tests is always deleted before each run. It works by
moving the check on version which skips the task into the doFirst block,
replacing the onlyIf.
closes#30682
Ports the first couple tests for archive distributions from the old bats
project to the new java project that includes windows platforms,
consolidating them into one test method that tests that the
distributions can be extracted and their contents verified. Includes the
zip distributions which were not tested in the bats project.
The new snapshot includes LUCENE-8324 which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve failed tests in the CorruptedFileIT suite.
Closes #30741
Closes #30577
* disable annotation processor for docs
Could not find evidence that the log4j annotation processor is used.
The compiler flag enables the Gradle 5.0 behavior
Closes#30476
* Disable annotation processors for all tests
* remove redundant `-proc:none` already handled by required plugins
* Revert unintentional changes
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
Adds windows server 2012r2 and 2016 vagrant boxes to packaging tests.
They can only be used if IDs for their images are specified, which are
passed to gradle and then to vagrant via env variables. Adds options
to the project property `vagrant.boxes` to choose between linux and
windows boxes.
Bats tests are run only on linux boxes, and portable packaging tests run
on all boxes. Platform tests are only run on linux boxes since they are
not being maintained.
For #26741
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
This commit adds the ability to specify a plugin from maven for a
test cluster to use. Currently, only local projects may be used as
plugins, except when testing bwc, where the coordinates of the project
are used. However, that assumes all projects always keep the same
coordinates, or are even still plugins, which is no longer the case for
x-pack. The full cluster and rolling restart tests are changed to use
this new method when pulling x-pack versions before 6.3.0.
We have a pile of documentation describing how to rebuild the built in
language analyzers and, previously, our documentation testing framework
made sure that the examples successfully built *an* analyzer but they
didn't assert that the analyzer built by the documentation matches the
built in analyzer. Unsurprisingly, some of the examples aren't quite
right.
This adds a mechanism that tests the analyzers built by the docs.
The mechanism is fairly simple and brutal but it seems to be working:
build a hundred random unicode sequences and send them through the
`_analyze` API with the rebuilt analyzer and then again through the
built in analyzer. Then make sure both APIs return the same results.
Each of these calls to `_analyze` takes about 20ms on my laptop which
seems fine.
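Roughly what the comparison does, as a sketch (the `analyze` helper below is hypothetical; the real tests go through the `_analyze` API of the docs test framework):
```java
import java.util.List;
import java.util.Random;

class RebuiltAnalyzerCheck {
    interface AnalyzeApi {
        // hypothetical helper: call _analyze and return the produced tokens
        List<String> analyze(String analyzerName, String text);
    }

    static void assertSameTokens(AnalyzeApi api, Random random,
                                 String rebuiltAnalyzer, String builtInAnalyzer) {
        for (int i = 0; i < 100; i++) {
            String text = randomUnicode(random, 20);
            List<String> rebuilt = api.analyze(rebuiltAnalyzer, text);
            List<String> builtIn = api.analyze(builtInAnalyzer, text);
            if (false == rebuilt.equals(builtIn)) {
                throw new AssertionError("analyzers disagree on [" + text + "]");
            }
        }
    }

    private static String randomUnicode(Random random, int length) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < length; i++) {
            int codePoint;
            do {
                codePoint = random.nextInt(Character.MAX_CODE_POINT + 1);
            } while (codePoint >= Character.MIN_SURROGATE && codePoint <= Character.MAX_SURROGATE);
            builder.appendCodePoint(codePoint);
        }
        return builder.toString();
    }
}
```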
This is a follow-up to our previous change to only fork javac if needed
to respect the Java compiler home (via JAVA_HOME). This commit makes the
same change for groovyc: we only fork groovyc if the JDK for Gradle is
not the JDK specified for the compiler (via JAVA_HOME).
We started forking javac to avoid GC overhead when running builds. Yet,
we do not seem to have this problem anymore and not forking leads to a
substantial speed improvement. This commit stops forking javac.
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
Uses a filter on the copy task for the eclipse settings files to
replace the token @@LICENSE_HEADER_TEXT@@ with the correct license
header from the new buildSrc/src/main/resources/license-headers
directory
xpack core contains a fork of `Cron` from quartz whose javadoc has a
`<table>` with non-html5 compatible stuff. This html5ifies the table and
switches the `:x-pack:plugin:core` project to building javadoc with
HTML5.
[test] add java packaging test project
Adds a project for building and running packaging tests written in java
for portability. The vagrant tasks use jars on the packagingTest
configuration, which are built in the same project. No tests are added
yet.
Corresponding changes are not made to :x-pack:qa:vagrant because the
java packaging tests will all be consolidated into one project.
For #26741
`javadoc` will switch from defaulting to html4 to html5 in "a future
release". We should get ahead of it so we're not surprised. Also, HTML5
is the future! Er, the present. Anyway, this follows up from #30220 to
make the Javadoc for two of the four remaining projects HTML5
compatible.
This *mostly* silences `javadoc`'s warning about defaulting to
generating html4 files by enabling generating html5 file for the
projects for which that works. It didn't work in a half dozen projects,
about half of which I've fixed in this PR, entirely by replacing
`<tt>thing</tt>` with `{@code thing}`.
There are a few remaining projects that contain javadoc with invalid
html5. I'll fix those projects in a followup.
Add the oss tar distribution to the packaging test plugin. Test the oss
tar distribution in the core packaging tests, and the non-oss tar
distribution in the x-pack packaging tests.
Today we update index settings directly via IndexService instead of the
cluster state in IndexServiceTests. However, those changes will be lost
if there is a cluster state update. In general, we should update index
settings via client and limit the direct usage in only special tests.
This commit replaces direct usages by the updateSettings api of client.
Closes#24491
Today when forking setup commands we do not set JAVA_HOME. This means
that we might not use a version of Java compatible with the version of
Java the command is expecting to run on (for example, 5.6 nodes would
expect JDK 8, and this is true even for their setup commands). This
commit sets JAVA_HOME when configuring setup command tasks.
This commit moves the apache and elastic license files into a new
root level `licenses` directory and rewrites the top level LICENSE.txt
to clarify the repository has a mix of apache and elastic licensed code.
With the switch to X-Pack as a module, we lost production of POMs for
the JARs that we publish, and did not have a license/notice file in the
zip archives nor the exploded module. This commit ensures that we
generate these POMs, and license/notice files.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.
This commit moves the checks on JAVAX_HOME (where X is the java version
number) existing to the end of gradle's configuration phase, and based
on whether the tasks needing the java home are configured to execute.
relates #29519
This commit is a minor cleanup of a code block in NodeInfo.groovy. We
remove an unused variable, make the formatting of the code consistent,
and cast a property that is typed as an Object to a String to avoid an
annoying IDE warning.
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require these be configured when merely compiling Elasticsearch, the
requirement that they be strictly set to appropriate values should only
be enforced if these tasks are going to be executed. To address this, we
lazy configure these tasks.