Adds support for triple quoted strings to the documentation test
generator. Kibana's CONSOLE tool has supported them for a year but we
were unable to use them in Elasticsearch's docs because the process that
converts example snippets into tests couldn't handle this. This change
adds code to convert them into standard JSON so we can pass them to
Elasticsearch.
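For illustration, a minimal Groovy sketch of the kind of conversion involved (the helper name and regex here are ours, not the actual build code):
```
// Hypothetical sketch: rewrite CONSOLE-style triple quoted strings into
// ordinary JSON strings by escaping backslashes, quotes, and newlines.
static String convertTripleQuotedStrings(String snippet) {
    // (?s) lets '.' match newlines so the quoted body may span lines
    snippet.replaceAll(/(?s)"""(.*?)"""/) { String whole, String body ->
        String escaped = body
                .replace('\\', '\\\\') // escape backslashes first
                .replace('"', '\\"')   // then embedded quotes
                .replace('\n', '\\n')  // fold newlines into \n escapes
        '"' + escaped + '"'
    }
}

String input = '''{"query": """multi
line "quoted" text"""}'''
assert convertTripleQuotedStrings(input) == '{"query": "multi\\nline \\"quoted\\" text"}'
```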
This commit enhances the error messages reported when JAVA_HOME and
RUNTIME_JAVA_HOME are not correctly set to point towards the minimum
compiler and minimum runtime JDKs that are expected by the builds. The
previous error message would say:
Java 1.9 or above is required to build Elasticsearch
which is confusing if the user does have a JDK 9 installation, and it is
even the version on their path, yet JAVA_HOME points to another JDK
installation. The error message reported after
this change is:
the environment variable JAVA_HOME must be set to a JDK installation directory for Java 1.9 but is [/usr/java/jdk-8] corresponding to [1.8]
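A rough Groovy sketch of the shape of that check (the findJavaVersion helper and variable names are hypothetical, not the actual build code):
```
import org.gradle.api.GradleException

String javaHome = System.getenv('JAVA_HOME')
String expected = '1.9'
String actual = findJavaVersion(javaHome) // hypothetical helper that probes the JDK
if (actual != expected) {
    throw new GradleException("the environment variable JAVA_HOME must be set " +
            "to a JDK installation directory for Java ${expected} but is " +
            "[${javaHome}] corresponding to [${actual}]")
}
```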
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation where some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not on
server Elasticsearch. This is a good thing; it was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
Log4j2 provides a wide range of logging methods. Our code typically only uses a subset of them. In particular, uses of the methods trace|debug|info|warn|error|fatal(Object) or trace|debug|info|warn|error|fatal(Object, Throwable) have all been wrong, leading to not properly logging the provided message. To prevent these issues in the future, the corresponding Logger methods have been blacklisted.
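An illustration of the trap (not code from this change): the Object overloads wrap the argument in an ObjectMessage, so parameter substitution never happens.
```
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.Logger

Logger logger = LogManager.getLogger('example')

// Trappy: resolves to info(Object); the argument is logged via toString(),
// so the "{}" placeholder survives verbatim into the log output.
Object message = 'processed [{}] items'
logger.info(message)

// Correct: the String overloads perform parameter substitution.
logger.info('processed [{}] items', 42)
```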
This allows us to remove another dependency in the decoupling of the XContent
code. Rather than move this class over or decouple it, it can simply be removed.
Relates tangentially to #28504
Running any randomized testing task within Elasticsearch currently fails
if a project has zero tests. This was supposed to be overrideable, but
it was always set to 'fail', and the system property to override was
passed down to the test runner, but never read there. This commit
changes the value of the ifNoTests setting passed to the randomized
runner to be read from system properties, continuing to default to 'fail'.
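A minimal sketch of the fix, assuming a hypothetical system property name:
```
// Read the setting in the build itself (the old property was forwarded to
// the runner but never read there) and keep 'fail' as the default.
String ifNoTests = System.getProperty('tests.ifNoTests', 'fail')
```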
This commit fixes the test progress logging to not produce an NPE when
there are no tests run. The onQuit method is always called, but onStart
would not be called if no tests match the test patterns.
It is only a comment, but can confuse those reading the code
Used 6.0 as an arbitrary elasticsearch.version value since it is the version that required Java 8
This commit specifies that the working directory of the destroy task for
destroying test VMs is the root of the build. This is necessary because,
if the build was run from a sub-directory, the Vagrant command would not
be able to locate the Vagrantfile for the VMs in question.
Today we do not destroy Vagrant boxes before tests. This is because
constantly reprovisioning these boxes is time-consuming. Yet, not
destroying these boxes can lead to state being left around that impacts
subsequent test runs. To address this, we now always destroy these boxes
before tests and provide a flag to set if this is not desired while
iterating locally.
Applying the rest test gradle plugin already uses the zip distribution
by default, so specifying it explicitly is not necessary. These are
leftovers from before zip was the default for rest tests.
With this commit we reduce the maximum amount of memory that the javac
compiler can use from 1g to 512mb. While the build would succeed even
with 256mb, that slightly increases compile time.
We have measured that the runtime overhead stays tolerable by running
the following command five times under repeatable conditions (i.e. we
execute `./gradlew clean`, then drop the caches and TRIM the disk):
```
./gradlew compileGroovy compileJava compileJava9Java\
compileTestGroovy compileTestJava compileGroovy
```
The results in seconds (as reported by Gradle) are:
* 1gb: avg: 253s, min: 252s, max: 256s
* 512mb: avg: 257s, min: 256s, max: 259s
This commit moves the distribution specific tasks into the respective
archives and packages builds. The collocation of common and distribution
specific tasks makes it much easier to reason about what is expected in a
particular distribution.
This commit adds intermediate gradle projects for archive based
distributions (zip, tar) and package based distributions (rpm, deb). The
grouping allows the common distribution build file to be considerably
shorter and clearly separated from the common zip/tar and rpm/deb
configuration.
This directory was removed from plugins in #28589, but docs still
referenced it. This commit cleans up the plugin author docs to no longer
refer to it.
We need to investigate why startup is taking so long in CI for
standalone tests. This commit adds logging for bootstrap and network
code that is executed before the node starts initializing in case this
is the source of the trouble.
Relates #28659
By default we wait for 30 seconds for nodes to start. Looking at CI,
even for passing builds, the startup time is frequently over 20 seconds
(grep for `#wait (Thread[Task worker for ':',5,main]) completed` in any
console log), which is close to the threshold. This commit sets a
default timeout of 120 seconds instead, so that REST tests are less
likely to hit a timeout.
This commit removes the extra layer of all plugin files existing under
"elasticsearch" within plugin zips. This simplifies building plugin zips
and removes the need for special logic of modules vs plugins.
The build.snapshot was mistakenly passed in to every snapshot version,
so when release tests were run, these versions were mistaken for released
entities and could not be found in maven, because they do not
exist. This fix removes that bug in logic, and always makes them proper
snapshots. This has a benefit of cleaning up the VersionUtilsTests
because they no longer rely on different sets of versions to check
against, which was also a bug.
We suspect a build failure might be due to a startup timeout, but there is
insufficient information in the logs. This change enhances the information
available so we will be better-informed next time.
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic,
whereas previously the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
When elasticsearch was originally moved to gradle, the "provided" equivalent in maven had to be done through a plugin. Since then, gradle added the "compileOnly" configuration. This commit removes the provided plugin and replaces all uses with compileOnly.
Tests use the (internal) gradle progress logger. Since around gradle
4.0, loggers can have child loggers which each get their own line of
updating output. This commit improves the test status output to give
each jvm its own status line, as well as lines for suite and test
counts. It also fixes an issue where the progress logger was not marked
as completed, which caused its last output to never be cleared.
Gradle 4.5 now hides immutable task dependencies. We previously copied
the existing dependencies from the builtin test task to the
randomizedtesting task. This commit adds testClasses as an extra
dependency of the randomizedtesting task, to ensure the classes are
built.
java.time has the functionality needed to deal with timezones with varying
offsets correctly, but it also has a bunch of methods that silently let you
forget about the hard cases, which raises the risk that we'll quietly do the
wrong thing at some point in the future.
This change adds the trappy methods to the list of forbidden methods to try and
help stop this from happening.
It also fixes the only use of these methods in the codebase so far:
IngestDocument#deepCopy() used ZonedDateTime.of() which may alter the offset of
the given time in cases where the offset is ambiguous.
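A small illustration of the ambiguity (Europe/Berlin repeats 02:30 when the clocks fall back):
```
import java.time.LocalDateTime
import java.time.ZoneId
import java.time.ZonedDateTime

ZoneId zone = ZoneId.of('Europe/Berlin')

// 2017-10-29T02:30 happens twice in this zone. The trappy factory
// silently picks the earlier offset (+02:00), whatever the caller meant.
ZonedDateTime guessed = ZonedDateTime.of(LocalDateTime.of(2017, 10, 29, 2, 30), zone)
assert guessed.offset.id == '+02:00'

// A deep copy built via the instant preserves the original offset instead.
ZonedDateTime original = guessed.withLaterOffsetAtOverlap()
ZonedDateTime copy = ZonedDateTime.ofInstant(original.toInstant(), zone)
assert copy == original && copy.offset.id == '+01:00'
```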
This commit conditionally adds the --illegal-access=warn flag when tests
are run with java 9. Currently, testing on java 9 triggers a warning
about illegal access from mockito. While that should be fixed (by
updating to a newer mockito base for securemock), the default stderr
warning covers only the first illegal access encountered. Thankfully
that is currently the only one, but this change will enable finding all
such illegal accesses in the future.
This commit allows for configuration of the amount of time we wait for a
node to start up. This is needed as some QA tests with plugins and tribe
nodes time out when starting the tribe node. Even regular node startup
time with plugins is starting to approach the limit of 30 seconds.
Additionally, this commit provides better feedback when a wait has failed. The code checks to
ensure all expected files exist for each node; if they do not then we consider the wait task as
having failed. Prior to this change, when there were one or more missing files the build would
continue and attempt to execute the wait condition that typically makes an HTTP request to the
cluster. The output of this type of failure does include which files exist and which do not but
this change makes it clearer that the actual HTTP call did not time out, but the failure was before
the call was even made.
Prints the warning level logs from the integration testing cluster if
there is a failure. We'd attempted to print those logs but I believe our
regex was missing a space.
This commit adds pom generation to meta plugins by using the same hacks
that PluginBuildPlugin already uses to get around "pom" type poms (ie
zip files).
Sometimes modules/plugins depend on locally built elasticsearch jars.
This means not only that the jar is constantly changing (so no need for
a sha check), but also that the license falls under the Elasticsearch
license, and there is no need to keep another copy. This commit updates
the dependencies checked by dependencyLicenses to exclude those that are
built by elasticsearch.
Integ test clusters should use the plugin method of ClusterConfiguration
to install plugins. Without it, meta plugins install based on the name
of the project directory, rather than the actual configured plugin name.
This commit fixes that, and also corrects the distribution used to be
the default integ-test-zip, to match that of PluginBuildPlugin. This
ensures plugins are tested in isolation by default.
We use the --release flag which is only available starting in JDK
9. Since Gradle could be running on JDK 8 without forking the compiler,
compilation will occur with the Java home of Gradle. This commit adds a
fork flag to the compiler Java home so that we use the right compiler.
Relates #28300
This commit fixes the copying of files from bundled plugin zips.
Previously all files within each zip would be flattened under the
bundled plugin name, and the original directories would exist at the top
level of the plugin.
The build-tools project (ie buildSrc) has groovy code. The -source
option was set to java 8, but this is not correct on java 9, and
actually causes a warning about not setting boot classpath. This commit
adds the --release option to groovy compilation, just like is done for
java compilation.
This commit adds a gradle plugin to ease development of meta plugins.
Applying the plugin will generate the meta plugin properties based on
the es_meta_plugin configuration object, which includes name and
description. The plugins to include within the meta plugin are
configured through the `plugins` list. An integ test task is also
automatically added.
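A hypothetical usage sketch based on this description (the plugin id and exact DSL are illustrative):
```
apply plugin: 'elasticsearch.es-meta-plugin'

es_meta_plugin {
    name = 'my-meta-plugin'
    description = 'Bundles two related example plugins'
    plugins = ['example-plugin-a', 'example-plugin-b']
}
```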
This commit introduces the ability for the core Elasticsearch JAR to be
a multi-release JAR containing code that is compiled for JDK 8 and code
that is compiled for JDK 9. At runtime, a JDK 8 JVM will ignore the JDK
9 compiled classfiles, and a JDK 9 JVM will use the JDK 9 compiled
classfiles instead of the JDK 8 compiled classfiles. With this work, we
utilize the new JDK 9 API for obtaining the PID of the running JVM,
instead of relying on a hack.
For now, we want to keep IDEs on JDK 8 so when the build is in an IDE we
ignore the JDK 9 source set (as otherwise the IDE would give compilation
errors). However, with this change, running Gradle from the command-line
now requires JAVA_HOME and JAVA_9_HOME to be set. This will require
follow-up work in our CI infrastructure and our release builds to
accommodate this change.
Relates #28051
This commit modifies the build to require JDK 9 for
compilation. Henceforth, we will compile with a JDK 9 compiler targeting
JDK 8 as the class file format. Optionally, RUNTIME_JAVA_HOME can be set
as the runtime JDK used for running tests. To enable this change, we
separate the meaning of the compiler Java home versus the runtime Java
home. If the runtime Java home is not set (via RUNTIME_JAVA_HOME) then
we fallback to using JAVA_HOME as the runtime Java home. This enables:
- developers only have to set one Java home (JAVA_HOME)
- developers can set an optional Java home (RUNTIME_JAVA_HOME) to test
on the minimum supported runtime
- we can test compiling with JDK 9 running on JDK 8 and compiling with
JDK 9 running on JDK 9 in CI
Currently the code which disables transitive dependencies assumes all
deps are a "module dependency" in gradle. But a jar file dep is not.
This commit relaxes the closure signature to allow any dependency and
only enforce the transitive disabling for module dependencies.
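A sketch of the relaxed shape (not the exact build code):
```
import org.gradle.api.artifacts.Configuration
import org.gradle.api.artifacts.Dependency
import org.gradle.api.artifacts.ModuleDependency

// Accept any Dependency; only module dependencies have a notion of
// transitivity, so file (jar) dependencies now pass through untouched.
static void disableTransitiveDependencies(Configuration configuration) {
    configuration.dependencies.all { Dependency dep ->
        if (dep instanceof ModuleDependency) {
            dep.transitive = false
        }
    }
}
```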
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:
|____elasticsearch/
| |____ <plugin1> <-- The plugin files for plugin1 (the content of the elasticsearch directory)
| |____ <plugin2> <-- The plugin files for plugin2
| |____ meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:
description: simple summary of the meta plugin.
name: the meta plugin name
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:
|_____ plugins
| |____ <name_of_the_meta_plugin>
| | |____ meta-plugin-descriptor.properties
| | |____ <plugin1>
| | |____ <plugin2>
If the sub plugins contain a config or a bin directory, they are copied into a sub-folder inside the meta plugin config/bin directory.
|_____ config
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
|_____ bin
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).
It is also not possible to remove a sub-plugin inside a meta plugin, only full removal of the meta plugin is allowed.
Closes #27316
* Fix license SPDX identifiers (CDDL)
* Fix license type for Custom URL:
* If the license is identified but not listed as an SPDX identifier, the character `;` is used after the identifier to set the license URL.
Gradle will no longer be needed in the test VMs as the Gradle wrapper
can be used to run the platform tests. This commit updates the platform
tests to use the Gradle wrapper, and removes installing Gradle in the VM
which will speed up VM provisioning.
Relates #28105
When finding the commit hash for the build to place in the JAR manifest
(which is used to identify the build), the scm-info plugin assumes that
GIT_COMMIT is the commit for this build. That assumption is wrong: this
build could be a sub-build of another build that GIT_COMMIT belongs
to. If GIT_COMMIT is set, we ignore the commit hash calculated by
scm-info and calculate the hash ourselves.
Relates #28082
This commit adds the infrastructure to plugin building and loading to
allow one plugin to extend another. That is, one plugin may extend
another, with the "parent" plugin allowing itself to be extended through
Java SPI. When all plugins extending a plugin are finished loading, the
"parent" plugin has a callback (through the ExtensiblePlugin interface)
allowing it to reload SPI.
This commit also adds an example plugin which uses as-yet implemented
extensibility (adding to the painless whitelist).
* Adds task dependenciesInfo to BuildPlugin to generate a CSV file with dependencies information (name,version,url,license)
* Adds `ConcatFilesTask.groovy` to concatenate multiple files into one
* Adds task `:distribution:generateDependenciesReport` to concatenate `dependencies.csv` files into a single file (`es-dependencies.csv` by default)
Examples:
```
$ gradle dependenciesInfo :distribution:generateDependenciesReport

# Use `csv` system property to customize the output file path
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dcsv=/tmp/elasticsearch-dependencies.csv

# When branch is not master, use `build.branch` system property to generate correct licenses URLs
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dbuild.branch=6.x -Dcsv=/tmp/elasticsearch-dependencies.csv
```
We disabled the Javadoc task on JDK 10 due to an apparent bug in Javadoc
generation on JDK 10. However, the client JAR task sets up its own
Javadoc task for client JARs (because it merely copies the non-client
Javadoc JAR). This commit disables that task too, since the Javadocs for
the non-client JAR will not exist.
Relates #27962
There appears to be a bug in JDK 10 for generating Javadocs with some
nested anonymous classes. This commit disables these on JDK 10 until the
upstream issue is resolved.
Relates #27952
JDK 10 has gone fully-modular. This means that:
- when targeting JDK 8 with a JDK 10 compiler, we have to use the full
profile
- when targeting JDK 10 with a JDK 10 compiler, we have to use
--add-modules java.base
Today we only target JDK 8 (our minimum compatibility version) so we
need to change the compiler flags conditional on using a JDK 10
compiler. This commit does that.
Relates #27884
When running the release tests, we set build.snapshot to false and this
causes all version numbers to not have "-SNAPSHOT". This is true even
for the tips of the branches (e.g., currently 5.6.6 on the 5.6
branch). Yet, if we do not set snapshot to false, then we would still be
trying to find artifacts with "-SNAPSHOT" appended which would not have
been built since build.snapshot is false. To fix this, we have to push
build.snapshot into the version logic.
Relates #27778
We have tests that manually unpackage the RPM and Debian package
distributions and start a cluster manually (not from the service) and
run a basic suite of integration tests against them. This is problematic
because it is not how the packages are intended to be used (instead,
they are intended to be installed using the package installation tools,
and started as services) and so violates assumptions that we make about
directory paths. This commit removes these integration tests, instead
relying on the packaging tests to ensure the packages are not
broken. Additionally, we add a sanity check that the package
distributions can be unpackaged. Finally, with this change we can remove
some leniency from elasticsearch-env about checking for the existence of
the environment file which the leniency was there solely for these
integration tests.
Relates #27725
The problem here is that splitting was using a method that intentionally
trims whitespace (the method is really meant to be used for splitting
parameters where whitespace should be trimmed like list
settings). However, for routing values whitespace should not be trimmed
because we allow routing with leading and trailing spaces. This commit
switches the parsing of these routing values to a method that does not
trim whitespace.
Relates #27712
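An illustration of the difference (plain Groovy, not the actual parsing code):
```
// A trimming split is right for list settings but wrong for routing
// values, where " my-routing " and "my-routing" must stay distinct.
String routing = ' a , b '
assert routing.split(',')*.trim() == ['a', 'b']       // trims whitespace
assert (routing.split(',') as List) == [' a ', ' b '] // preserves whitespace
```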
This new snapshot mostly brings a change to TopFieldCollector which can now
early terminate collection when trackTotalHits is `false`.
As a follow-up, we should replace our usage of
`EarlyTerminatingSortingCollector` with this new option.
Gradle 4.2 introduced a feature for safer handling of stale output files. Unfortunately, due to the way some of our tasks are written, this broke execution of our REST tests tasks. The reason for this is that the extract task (which extracts the ES distribution) would clean its output directory, and thereby also remove the empty cwd subdirectory which was created by the clean task. The reason why Gradle would remove the directory is that the earlier running clean task would programmatically create the empty cwd directory, but not make Gradle aware of this fact, which would result in Gradle believing that it could just safely clean the directory.
This commit explicitly defines a task to create the cwd subdirectory, and marks the directory as output of the task, so that the subsequent extract task won't eagerly clean it. It thereby restores full compatibility of the build with Gradle 4.2 and above.
This commit also removes the @Input annotation on the waitCondition closure of AntFixture, which conflicted with the extended input/output validation of Gradle 4.3.
The main highlight of this new snapshot is that it introduces the opportunity
for queries to opt out of caching. In case a query opts out of caching, not only
will it never be cached, but also no compound query that wraps it will be
cached.
The build currently does not work with Gradle 4.3.1 as the Gradle team stopped publishing the gradle-logging dependency to jcenter, starting with 4.3.1 (not sure why). There are two options:
- Add the repository managed by Gradle team (https://repo.gradle.org/gradle/libs-releases-local) to our build
- Use an older version (4.3) of the dependency when running with Gradle 4.3.1.
To avoid depending on another external repo, I've chosen solution 2. Note that this solution could break on future versions, but as this is a compileOnly dependency, and the interface we use has been super stable since forever, I don't envision this being an issue (and it would be easily detected by a breaking build).
PR #26911 set minimum_master_nodes from number_of_nodes to (number_of_nodes / 2) + 1 in our REST tests. This has led to test failures (see #27233) as the REST tests only configure the first node in its unicast.hosts pinging list (see explanation here: #27233 (comment)). Until we have a proper fix for this, I'm reverting the change in #26911.
Gradle 5.0 will remove support for colons in configuration and task
names. This commit fixes this for our build by removing all current uses
of colons in configuration and task names.
Relates #27305
While it's not possible to upgrade the Jackson dependencies
to their latest versions yet (see #27032 (comment) for more)
it's still possible to upgrade to the latest 2.8.x version.
An upstream Gradle change has broken us starting on version 4.2. This
commit blacklists these versions until we can either find a workaround,
or the upstream issue is addressed.
Relates #27087
Upgrade to Jackson 2.9.2 and also use a boolean `closed` flag to
indicate that a FastStringReader instance is closed, so that length
is still correctly reported after the reader is closed.
The rolling-upgrade test was only writing the "minimum_master_nodes" setting to the configuration file of the old nodes, but not the upgraded ones.
Also changes the value of "minimum_master_nodes" from "number_of_nodes" to "(number_of_nodes / 2) + 1".
Update provides:
* Adds support for Gradle 4.0 (without deprecation warning)
* Full Java 9 support through ASM 6.0
Version 2.4 was buggy (it broke Gradle dependencies on SourceSets), but this version fixes it.
Today we have all non-plugin mappers in core. I'd like to start moving those
that neither map to json datatypes nor are very frequently used like `date` or
`ip` to a module.
This commit creates a new module called `mapper-extras` and moves the
`scaled_float` and `token_count` mappers to it. I'd like to eventually move
`range` fields there but it's more complicated due to their intimate
relationship with range queries.
Relates #10368
This commit adds a dependency to the install module task on the task
that builds the module. This is needed for standalone integration
tests that require other modules to be installed. Without this, we do
not have a guarantee that the module is bundled.
Javadoc linking between projects currently relies on
projectSubstitutions. However, that is an extension variable that is not
part of BuildPlugin. This commit moves the javadoc linking into the root
build.gradle, alongside where projectSubstitutions are defined.
There is a bug in Log4j on JDK 9 for walking the stack to find where a
log line is coming from. This bug is impacting some of our testing, so
this commit marks these tests as skippable only on JDK 9 until the bug
is fixed upstream.
Relates #26467
The current script service has a script compilation limit for a one
minute window. This is set to a small default value of 15. Instead of
increasing that default value, this commit introduces a new setting
that allows configuring a rate per time unit, so that the script service can deal with bursts better.
The new setting is named `script.max_compilations_rate`,
requires a nonnegative number and a positive time value.
The default is `75/5m`, which is equivalent to the existing 15 per minute.
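A sketch of how such a rate value decomposes (hypothetical parsing, not the actual implementation):
```
// "75/5m" means at most 75 compilations for each 5 minute window.
def (String countPart, String timePart) = '75/5m'.tokenize('/')
assert (countPart as int) == 75
assert timePart == '5m'
```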
When changing how the config path is configured (from a command-line
flag to an environment variable) we had to add BWC code in the build so
that we could form clusters with 5.x nodes in them. Now that this branch
has moved to 7.x, we no longer need to be BWC with 5.x for starting
nodes. This commit removes this dead BWC code.
Relates #26446
When starting a node in standalone tests, we sometimes use a wrapper
script as opposed to starting Elasticsearch directly (this is used for
backgrounding). On Windows, the path to this wrapper script can be
exceptionally long, exceeding the length of a batch script that cmd.exe
will invoke without whining. This commit replaces using the full path
name for this wrapper script by the short name for the wrapper script.
Additionally, the data, configuration, and jvm.options paths can also
end up being too long so we shorten those too. Care has to be taken with
the data directory because we usually rely on the node creating it on
startup but doing that will not be compatible with getting the short
name as that requires the directory already existing. Therefore, we
create that directory on-demand immediately before actually resolving
the short name.
Relates #26444
At present, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDEs
during the brief time it was used. We do not know exactly how many
users will need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes #26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
We currently add the apache license/notice for elasticsearch to any
plugin that uses our ES plugin gradle plugin. However, each plugin
should be able to use their own license. This commit adds licenseFile
and noticeFile properties to the root of projects using BuildPlugin,
which are added to jar files for that project.
In some cases our Windows builds fail due to long path names that arise
from a combination of long build job names plus long sub-project
names. While newer versions of Windows can handle long paths, invoking
batch scripts longer than 260 characters via cmd.exe is still
problematic. This leads to failing integration tests because we can not
run the commands to install plugins, create the keystore, and start the
node. This commit handles this by converting all paths on Windows used
to start an Elasticsearch node to short path names.
Relates #26365
This commit removes the keystore creation on elasticsearch startup, and
instead adds a plugin property which indicates the plugin needs the
keystore to exist. It does still make sure the keystore.seed exists on
ES startup, but through an "upgrade" method that is called when the
keystore is loaded in Bootstrap.
closes #26309
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadoc are not that useful unless they have links to the Elasticsearch classes in the latter case, and to the low-level REST client in the sniffer javadoc. This commit adds those links.
All of the snippets in our docs marked with `// TESTRESPONSE` are
checked against the response from Elasticsearch but, due to the
way they are implemented they are actually parsed as YAML instead
of JSON. Luckily, all valid JSON is valid YAML! Unfortunately
that means that invalid JSON has snuck into the examples!
This adds a step during the build to parse them as JSON and fail
the build if they don't parse.
But no! It isn't quite that simple. The displayed text of some of
these responses looks like:
```
{
...
"aggregations": {
"range": {
"buckets": [
{
"to": 1.4436576E12,
"to_as_string": "10-2015",
"doc_count": 7,
"key": "*-10-2015"
},
{
"from": 1.4436576E12,
"from_as_string": "10-2015",
"doc_count": 0,
"key": "10-2015-*"
}
]
}
}
}
```
Note the `...` which isn't valid json but we like it anyway and want
it in the output. We use substitution rules to convert the `...`
into the response we expect. That yields a response that looks like:
```
{
"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,
"aggregations": {
"range": {
"buckets": [
{
"to": 1.4436576E12,
"to_as_string": "10-2015",
"doc_count": 7,
"key": "*-10-2015"
},
{
"from": 1.4436576E12,
"from_as_string": "10-2015",
"doc_count": 0,
"key": "10-2015-*"
}
]
}
}
}
```
That is what the tests consume but it isn't valid JSON! Oh no! We don't
want to go update all the substitution rules because that'd be huge and,
ultimately, wouldn't buy much. So we quote the `$body.took` bits before
parsing the JSON.
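A sketch of that quoting step (the regex is illustrative, not the build's own code):
```
static String quoteBodyReferences(String response) {
    // Wrap bare $body... references in quotes so the substituted
    // response parses as JSON.
    response.replaceAll(/:\s*(\$body[\w.]+)/, ': "$1"')
}

assert quoteBodyReferences('{"took": $body.took,"timed_out": false}') ==
        '{"took": "$body.took","timed_out": false}'
```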
Note the responses that we use for the `_cat` APIs are all converted into
regexes and there is no expectation that they are valid JSON.
Closes #26233
The secure settings tool reads from stdin and we use a closure to
provide a value for this. Yet, we evaluate the value too late and end up
with the last provided value for all keys.
The environment variable CONF_DIR was previously inconsistently used in
our packaging to customize the location of Elasticsearch configuration
files. The importance of this environment variable has increased
starting in 6.0.0 as it's now used consistently to ensure Elasticsearch
and all secondary scripts (e.g., elasticsearch-keystore) all use the
same configuration. The name CONF_DIR is there for legacy reasons yet
it's too generic. This commit renames CONF_DIR to ES_PATH_CONF.
Relates #26197
The build was ignoring suffixes like "beta1" and "rc1" on the version numbers which was causing the backwards compatibility packaging tests to fail because they expected to be upgrading from 6.0.0 even though they were actually upgrading from 6.0.0-beta1. This adds the suffixes to the information that the build scrapes from Version.java. It then uses those suffixes when it resolves artifacts build from the bwc branch and for testing.
Closes #26017
Compiling all of elasticsearch classes in one jvm, which is shared with
all of the loaded classes of gradle, can trip gc overhead limits. This
commit re-enables forking javac.
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.
Closes #25893, closes #25870
Some REST tests can rapid-fire script compilations that exceed the
default script compilations per minute. Rather than subjecting ourselves
to spurious failures because of the limit being too low, we opt for a
larger limit here.
With Gradle 4.1 and newer JDK versions, we can finally invoke Gradle directly using a JDK9 JAVA_HOME without requiring a JDK8 to "bootstrap" the build. As the thirdPartyAudit task runs within the JVM that Gradle runs in, it needs to be adapted now to be JDK9 aware.
This commit also changes the `JavaCompile` tasks to only fork if necessary (i.e. when Gradle's JVM and JAVA_HOME's JVM differ).
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or lookup field data in different ways. Yet, we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing etc.
This change also removes the `recycler` option from `ClearIndicesCacheRequest`; this option is a no-op and should have been removed long ago.
Stored fields were still being accessed for nested inner hits even if the _source was not requested.
This was done to figure out the id of the root document. However this is already known higher up the stack.
So instead this change adds the id to the nested search context, so that it is no longer required to be fetched via the stored fields.
When the _source is large and no source is requested, hot threads like these would still appear:
```
100.3% (501.3ms out of 500ms) cpu usage by thread 'elasticsearch[AfXKKfq][search][T#6]'
2/10 snapshots sharing following 22 elements
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:352)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
and:
```
8/10 snapshots sharing following 27 elements
org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:135)
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:138)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.fillBuffer(CompressingStoredFieldsReader.java:531)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.readBytes(CompressingStoredFieldsReader.java:550)
org.apache.lucene.store.DataInput.readBytes(DataInput.java:87)
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:350)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.
Closes #25208
Using `sh` means we used whatever default the system has, which is `dash` on
Ubuntu, even though our startup script is written for bash (see the shebang).
This commit introduces the elasticsearch-env script. The purpose of this
script is threefold:
- vastly simplify the various scripts used in Elasticsearch
- provide a script that can be included in other scripts in the
Elasticsearch ecosystem (e.g., plugins)
- correctly establish the environment for all scripts (e.g., so that
users can run `elasticsearch-keystore` from a package distribution
without having to worry about setting `CONF_DIR` first, otherwise the
keystore would be created in the wrong location)
Relates #25815
This commit removes the environment variable ES_JVM_OPTIONS that allows
the jvm.options file to sit separately from the rest of the config
directory. Instead, we use the CONF_DIR environment variable for custom
configuration location just as we do for the other configuration files.
Relates #25679
This commit does two things:
- bumps the version from 6.0.0-alpha3 to 6.0.0-beta1
- renames the 6.0.0-alpha3 version constant to 6.0.0-beta1
Relates #25621
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.
Relates #25600
This commit introduces a nio based tcp transport into framework for
testing.
Currently Elasticsearch uses a simple blocking tcp transport for
testing purposes (MockTcpTransport). This diverges from production
where our current transport (netty) is non-blocking.
The point of this commit is to introduce a testing variant that more
closely matches the behavior of production instances.
Today we load plugins reflectively, looking for constructors that
conform to specific signatures. This commit tightens the reflective
operations here, not allowing plugins to have ambiguous constructors.
Relates #25405
This commit removes path.conf as a valid setting and replaces it with a
command-line flag for specifying a non-default path for configuration.
Relates #25392
You can continue a test started in a previous snippet by marking the
next snippet with `// TEST[continued]`. The trouble is, if you mark the
first snippet in a file with `// TEST[continued]` you'd get difficult
to predict behavior because you'd continue the test started in another
file. This will usually create a test that fails the build but it isn't
easy to track down what you did wrong. This commit catches this
scenario up front and fails the build with a useful error message.
Similarly, if you put `// TEST[continued]` directly after a
`// TESTSETUP` section then the docs tests will fail to run but the
error message does not point you to the `// TEST[continued]` snippet.
This commit catches this scenario up front as well and fails the build
with a useful error message.
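A sketch of the shape of these up-front checks in the snippet parser (the field names and message texts are hypothetical):
```
import org.gradle.api.InvalidUserDataException

// Fail fast with a pointer at the offending snippet instead of letting
// the docs tests die later in a confusing way.
void checkContinued(snippet, previousSnippet) {
    if (snippet.continued && previousSnippet == null) {
        throw new InvalidUserDataException(
                "$snippet: // TEST[continued] cannot be on the first snippet in a file")
    }
    if (snippet.continued && previousSnippet.testSetup) {
        throw new InvalidUserDataException(
                "$snippet: // TEST[continued] cannot immediately follow // TESTSETUP")
    }
}
```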
The following token filters were moved: stemmer, stemmer_override, kstem, dictionary_decompounder, hyphenation_decompounder, reverse, elision and truncate.
Relates to #23658
Most notable changes:
- better update concurrency: LUCENE-7868
- TopDocs.totalHits is now a long: LUCENE-7872
- QueryBuilder does not remove the boolean query around multi-term synonyms:
LUCENE-7878
- removal of Fields: LUCENE-7500
For the `TopDocs.totalHits` change, this PR relies on the fact that the encodings
of vInts and vLongs are compatible: you can write and read with either of them as
long as the value can be represented by a positive int.
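A small demonstration of the claim, re-implemented in plain Groovy (this mirrors the 7-bits-plus-continuation-bit scheme; it is not Lucene's code):
```
static List<Byte> writeVInt(int i) {
    List<Byte> out = []
    while ((i & ~0x7F) != 0) {
        out << (byte) ((i & 0x7F) | 0x80) // low 7 bits plus continuation bit
        i = i >>> 7
    }
    out << (byte) i
    return out
}

static List<Byte> writeVLong(long i) {
    List<Byte> out = []
    while ((i & ~0x7FL) != 0) {
        out << (byte) ((i & 0x7F) | 0x80)
        i = i >>> 7
    }
    out << (byte) i
    return out
}

// Any value that fits in a positive int serializes to identical bytes.
assert writeVInt(1_000_000) == writeVLong(1_000_000L)
```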
Removes the `assemble` task from the `build` task when we have
removed `assemble` from the project. We removed `assemble` from
projects that aren't published so our releases will be faster. But
That broke CI because CI builds with `gradle precommit build` and,
it turns out, that `build` includes `check` and `assemble`. With
this change CI will only run `check` for projects without an
`assemble`.
Removes the `assemble` task from projects that are not published.
This should speed up `gradle assemble` by skipping projects that
don't need to be built. Which is useful because `gradle assemble`
is how we cut releases.
This snapshot has faster range queries on range fields (LUCENE-7828), more
accurate norms (LUCENE-7730) and the ability to use fake term frequencies
(LUCENE-7854).
This change removes the `postings` highlighter. This highlighter has been removed from Lucene master (7.x) because it behaves
exactly like the `unified` highlighter when index_options is set to `offsets`:
https://issues.apache.org/jira/browse/LUCENE-7815
It also makes the `unified` highlighter the default choice for highlighting a field (if `type` is not provided).
The strategy used internally by this highlighter remains the same as before: it checks `term_vectors` first, then `postings` and ultimately it re-analyzes the text.
Ultimately it rewrites the docs so that the options that the `unified` highlighter cannot handle are clearly marked as such.
There are a few features that the `unified` highlighter is not able to handle, which is why the other highlighters (`plain` and `fvh`) are still available.
I'll open separate issues for these features and we'll deprecate the `fvh` and `plain` highlighters when full support for these features has been added to the `unified`.
Pins the random testing seed at build start rather than letting
it vary with every randomized testing invocation. This is useful
for projects where random decisions in one randomized testing run
can affect the outcome of a second randomized testing run such as
the full cluster restart tests.
The goal isn't for tests to be able to assume that random decision
will be the same in both tests. It is more to make sure that the
seed printed when a test fails reproduces the appropriate random
decisions. And pinning the seed at startup should do just that.
This works by taking the key passed as a system property if one
is passed, otherwise picking a random long and getting it into
appropriate key format. The build just calls
`new Random().nextLong()` to get the seed while randomized testing
uses a Murmur3 hash of `System.nanoTime`.
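A sketch of the pinning (the property name follows randomized-testing convention; the formatting is illustrative):
```
// Honor an explicitly passed seed; otherwise pick one long at build
// start and format it as the upper-case hex key the runner expects.
String testSeed = System.getProperty('tests.seed') ?:
        Long.toUnsignedString(new Random().nextLong(), 16).toUpperCase(Locale.ROOT)
```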
This adds an option to `ClusterConfiguration` to preserve the
`shared` directory when starting up a new cluster and switches
the `qa:full-cluster-restart` tests to use it rather than
disable the clean shared task.
Relates to #24846
We're using Vagrant in more places now than before. This commit includes a plugin that verifies
the Vagrant and Virtualbox installations for projects that depend on them. This shared code
should fix up the errors we've seen from CI builds relating to the new Kerberos fixture.
This commit adds a new `branchConsistency` task which will run in CI
once a day, instead of on every commit. This allows `verifyVersions` to
not break immediately once a new version is released in maven.
openSUSE-13 has reached [EOL](https://en.opensuse.org/Lifetime).
Replace openSUSE-13 with openSUSE-42 (Leap) for packaging tests and
update docs.
Relates #25001
Removes the need for the `_UNRELEASED` suffix on versions by detecting if a version should be unreleased or not based on the versions around it. This should make it simpler to automate the task of adding a new version label.
These tests spin up two nodes of an older version of Elasticsearch,
create some stuff, shut down the nodes, start the current version,
and verify that the created stuff works.
You can run `gradle qa:full-cluster-restart:check` to run these
tests against the head of the previous branch of Elasticsearch
(5.x for master, 5.4 for 5.x, etc) or you can run
`gradle qa:full-cluster-restart:bwcTest` to run this test against
all "index compatible" versions, one after the other. For master
this is every released version in the 5.x.y version *and* the tip
of the 5.x branch.
I'd love to add more to these tests in the future but these
currently just cover the functionality of the `create_bwc_index.py`
script and start to cover the assertions in the
`OldIndexBackwardsCompatibilityIT` test.
When transitive dependencies are disabled for a dependency, gradle adds a
wildcard exclusion to the generated pom. However, some external tools
like ivy have bugs with wildcards. This commit adds back the explicit
generation of transitive excludes, and removes the gradle generated
exclusions element from the pom.
closes #24490
Adds a "magic" key to the yaml testing stash mostly for use with
documentation tests. When unstashing an object, `$_path` is the
path into the current position in the object you are unstashing.
This means that in docs tests you can use
`// TESTRESPONSE[s/somevalue/$body.${_path}/]` to mean "replace
`somevalue` with whatever is the response in the same position."
Compare how you must carefully mock out all the numbers in the profile
response without this change:
```
// TESTRESPONSE[s/"id": "\[2aE02wS1R8q_QFnYu6vDVQ\]\[twitter\]\[1\]"/"id": $body.profile.shards.0.id/]
// TESTRESPONSE[s/"rewrite_time": 51443/"rewrite_time": $body.profile.shards.0.searches.0.rewrite_time/]
// TESTRESPONSE[s/"score": 51306/"score": $body.profile.shards.0.searches.0.query.0.breakdown.score/]
// TESTRESPONSE[s/"time_in_nanos": "1873811"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.time_in_nanos/]
// TESTRESPONSE[s/"build_scorer": 2935582/"build_scorer": $body.profile.shards.0.searches.0.query.0.breakdown.build_scorer/]
// TESTRESPONSE[s/"create_weight": 919297/"create_weight": $body.profile.shards.0.searches.0.query.0.breakdown.create_weight/]
// TESTRESPONSE[s/"next_doc": 53876/"next_doc": $body.profile.shards.0.searches.0.query.0.breakdown.next_doc/]
// TESTRESPONSE[s/"time_in_nanos": "391943"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.0.time_in_nanos/]
// TESTRESPONSE[s/"score": 28776/"score": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.score/]
// TESTRESPONSE[s/"build_scorer": 784451/"build_scorer": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.build_scorer/]
// TESTRESPONSE[s/"create_weight": 1669564/"create_weight": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.create_weight/]
// TESTRESPONSE[s/"next_doc": 10111/"next_doc": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.next_doc/]
// TESTRESPONSE[s/"time_in_nanos": "210682"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.1.time_in_nanos/]
// TESTRESPONSE[s/"score": 4552/"score": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.score/]
// TESTRESPONSE[s/"build_scorer": 42602/"build_scorer": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.build_scorer/]
// TESTRESPONSE[s/"create_weight": 89323/"create_weight": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.create_weight/]
// TESTRESPONSE[s/"next_doc": 2852/"next_doc": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.next_doc/]
// TESTRESPONSE[s/"time_in_nanos": "304311"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.time_in_nanos/]
// TESTRESPONSE[s/"time_in_nanos": "32273"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.children.0.time_in_nanos/]
```
To how you can cavalierly mock all the numbers at once with this change:
```
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
```
When Elasticsearch dies during a standalone REST test we might leave a
dirty PID file laying around. We tried to log about this, but the log
messages contained references to undefined variables so we simply died
instead of providing a helpful message to run clean. This commit
addresses this issue.
Now that we generate the versions list from Versions.java we can
drop the list of versions maintained for vagrant testing. One nice
thing that the vagrant testing did was to check if the list of
versions was out of date. This moves that test to the core
project.
This commit changes the rolling upgrade test to create a set of rest
test tasks per wire compat version. The most recent wire compat version
is always tested with the `integTest` task, and all versions can be
tested with `bwcTest`.
This commit renames all rest test files to use the .yml extension
instead of .yaml. This way the extension used within all of
elasticsearch for yaml is consistent.
This is almost exclusively for docs test which frequently match the
entire response. This allow something like:
```
- set: {nodes.$master.http.publish_address: host}
- match:
$body:
{
"nodes": {
$host: {
... stuff in here ...
}
}
}
```
This should make it possible for the docs tests to work with
unpredictable keys.
The disruption tests sit in a single test suite which causes these tests
to be single-threaded. We can split this test suite into multiple suites
(logically, of course) enabling them to be run in parallel reducing the
total run time of all integration tests in core. This commit splits the
discovery with service disruptions test suite into three suites
- master disruptions
- discovery disruptions
- cluster disruptions
The last one could probably be better named; it is meant to represent
performing actions in the cluster (indexing, failing a shard, etc.)
while a disruption is taking place.
Relates #24662
* Add parent-join module
This change adds a new module named `parent-join`.
The goal of this module is to provide a replacement for the `_parent` field but as a first step this change only moves the `has_child`, `has_parent` queries and the `children` aggregation to this module.
These queries and aggregations are no longer in core but they are deployed by default as a module.
Relates #20257
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
This re-adds the commit that was reverted in 952feb58e4
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
This starts breaking up the `UpdateHelper.prepare` method so that each piece can
be individually unit tested. No actual functionality has changed.
Note however, that I did add a TODO about `ctx.op` leniency, which I'd love to
remove as a separate PR if desired.
Previously you could set the system property tests.es.path.data and
start the run task with a custom data directory. A change in core to
detect duplicate settings broke this. That change should stay, but we
can work around this easily by only setting path.data to the data
directory if tests.es.path.data is not set.
We start the test JVMs with various options. There are two that we
should remove, they do not make sense.
- we require Java 8 yet there was a conditional build option for Java 7
- we do not set MaxDirectMemorySize in our default JVM options, we
should not in the test JVMs either
Relates #24501
Use fedora-25 Vagrant box in VagrantTestPlugin, which was missing from
9a3ab3e800, causing packaging test
failures.
Additionally update TESTING.asciidoc
* Fix JVM test argline
The argline was being overridden by '-XX:-OmitStackTraceInFastThrow'
which led to test failures that were expecting the JVM to be in a
certain state based on the value of tests.jvm.argline but they were not
since these arguments were never passed to the JVM. Additionally, we
need to respect the provided JVM argline if it is already provided with
a flag for OmitStackTraceInFastThrow. This commit fixes this by only
setting OmitStackTraceInFastThrow if it is not already set.
* Add comment
* Fix comment
* More elaborate comment
* Sigh
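A sketch of the resulting guard (property name per the description; the logic is illustrative, not the exact build code):
```
// Only take a stance on the flag when the provided argline does not
// already do so, respecting the user's setting.
String argline = System.getProperty('tests.jvm.argline', '')
if (!argline.contains('OmitStackTraceInFastThrow')) {
    argline += ' -XX:-OmitStackTraceInFastThrow'
}
```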
TransportService and RemoteClusterService are closely coupled already today
and to simplify remote cluster integration down the road it can be a direct
dependency of TransportService. This change moves RemoteClusterService into
TransportService with the goal to make it a hidden implementation detail
of TransportService in followup changes.
This adds `-XX:-OmitStackTraceInFastThrow` to the JVM arguments
which *should* prevent the JVM from omitting stack traces on
common exception sites. Even though these sites are common, we'd
still like the exceptions to debug them.
This also adds the flag when running tests and adapts some tests
that had workarounds for the absence of the flag.
Closes #24376
This commit removes a few duplicates from the list of checkstyle
suppressions that accumulated while auto-generating the list of
suppressions based on the files that currently exceed the 140-column
limit.
The list of checkstyle suppressions was auto-generated based on the list
of files that currently violate the 140-column limit. An errant entry
was committed that did no harm, but this commit removes it.
We are back to having a 140-column limit. While at some point we will
apply auto-formatting tools to the code base, that is a down-the-road
thing and adjusting the suppressions now shaves minutes off the build
time.
Relates #24408
Separates cluster state publishing from applying cluster states:
- ClusterService is split into two classes MasterService and ClusterApplierService. MasterService has the responsibility to calculate cluster state updates for actions that want to change the cluster state (create index, update shard routing table, etc.). ClusterApplierService has the responsibility to apply cluster states that have been successfully published and invokes the cluster state appliers and listeners.
- ClusterApplierService keeps track of the last applied state, but MasterService is stateless and uses the last cluster state that is provided by the discovery module to calculate the next prospective state. The ClusterService class is still kept around, which now just delegates actions to ClusterApplierService and MasterService.
- The discovery implementation is now responsible for managing the last cluster state that is used by the consensus layer and the master service. It also exposes the initial cluster state which is used by the ClusterApplierService. The discovery implementation is also responsible for adding the right cluster-level blocks to the initial state.
- NoneDiscovery has been renamed to TribeDiscovery as it is exclusively used by TribeService. It adds the tribe blocks to the initial state.
- ZenDiscovery is synchronized on state changes to the last cluster state that is used by the consensus layer and the master service, and does not submit cluster state update tasks anymore to make changes to the disco state (except when becoming master).
Control flow for cluster state updates is now as follows:
- State updates are sent to MasterService
- MasterService gets the latest committed cluster state from the discovery implementation and calculates the next cluster state to publish
- MasterService submits the new prospective cluster state to the discovery implementation for publishing
- Discovery implementation publishes cluster states to all nodes and, once the state is committed, asks the ClusterApplierService to apply the newly committed state.
- ClusterApplierService applies state to local node.
Creates a new task `namingConventionsMain` that runs on the
`buildSrc` and `test:framework` projects and fails the build if
any of the classes in the main artifacts are named like tests or
are non-abstract subclasses of ESTestCase.
It also fixes the three tests that would cause it to fail.
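A rough sketch of the naming half of the check, in build script terms (illustrative only; the real implementation is a dedicated task in buildSrc and also inspects class hierarchies for non-abstract ESTestCase subclasses):

```groovy
task namingConventionsMainSketch {
    doLast {
        // fail if any class in the main output is named like a test
        fileTree(sourceSets.main.output.classesDir)
                .matching { include '**/*Tests.class', '**/*Test.class' }
                .each { File clazz ->
            throw new GradleException("main sources contain test-like class: ${clazz.name}")
        }
    }
}
```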
Start moving built-in analysis components into the new analysis-common
module. The goals of this project are:
1. Remove core's dependency on lucene-analyzers-common.jar which should
shrink the dependencies for transport client and high level rest client.
2. Prove that analysis plugins can do all the "built in" things by moving all
"built in" behavior to a plugin.
3. Force tests not to depend on any oddball analyzer behavior. If tests
need anything more than the standard analyzer they can use the mock
analyzer provided by Lucene's test infrastructure.
This commit adds a call to jstack to see where each node is stuck when
starting up, if a timeout occurs. This also decreases the timeout back
to 30 seconds.
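The jstack call is along these lines (a sketch; it assumes the node's pid has been written to a pid file, and the path shown is illustrative):

```groovy
// dump the stacks of a node that failed to start within the timeout
File pidFile = new File('build/cluster/integTest node0/es.pid') // illustrative path
String pid = pidFile.text.trim()
println "node with pid ${pid} failed to start in time; dumping stacks:"
println(['jstack', pid].execute().text)
```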
We want to upgrade to Lucene 7 ahead of time in order to be able to check whether it causes any trouble to Elasticsearch before Lucene 7.0 gets released. From a user perspective, the main benefit of this upgrade is the enhanced support for sparse fields, whose resource consumption is now a function of the number of docs that have a value rather than the total number of docs in the index.
Some notes about the change:
- it includes the deprecation of the `disable_coord` parameter of the `bool` and `common_terms` queries: Lucene has removed support for coord factors
- it includes the deprecation of the `index.similarity.base` expert setting, since it was only useful to configure coords and query norms, which have both been removed
- two tests have been marked with `@AwaitsFix` because of #23966, which we intend to address after the merge
After splitting integ tests into cluster configuration and the test
runner task, we still have dependencies of the test runner added as deps
of the cluster. This commit adds dependencies directly to the cluster,
so that the runner can have other dependencies independent of what is
needed for the cluster.
This new version of jna is rebuilt from the official release of jna, but
with native libs linked against older glibc in order to support all
platforms elasticsearch supports.
closes #23640
This reverts the line limit change in #23623 - this PR doesn't touch the suppression file since we are moving towards automatic code formatting which makes it mainly obsolete.
Adds the option for a plugin to specify extra directories containing notices
and licenses files to be incorporated into the overall notices file that is
generated for the plugin.
This can be useful, for example, where the plugin has a non-Java dependency
that itself incorporates many 3rd party components.
The buffer limit should have been configurable already, but the factory constructor was package private, so it was truly configurable only from the org.elasticsearch.client package. Also the HttpAsyncResponseConsumerFactory interface was package private, so it could only be implemented from the org.elasticsearch.client package.
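With the factory accessible, a client can raise the default limit along these lines (a sketch; the 200MB figure is just an example), and the factory can then be passed to the client methods that accept a consumer factory:

```groovy
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory

// allow response bodies of up to 200MB to be buffered on the heap
HeapBufferedResponseConsumerFactory consumerFactory =
        new HeapBufferedResponseConsumerFactory(200 * 1024 * 1024)
```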
Closes #23958
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995
This commit puts all the classes in the repository-s3 plugin into a
single package. In addition to simplifying the plugin, it will make it
easier to test, as things that should be package private can still be
used directly from tests in the same package.
The method Boolean#getBoolean is dangerous. It is too easy to mistakenly
invoke this method thinking that it is parsing a string as a
boolean. However, what it actually does is get a system property with
the specified string, and then attempts to use usual crappy boolean
parsing in the JDK to parse that system property as boolean with
complete leniency (it parses every input value into either true or
false); that is, this method amounts to invoking
Boolean#parseBoolean(String) on the result of
System#getProperty(String). Boo. This commit bans usage of this method.
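A quick, runnable illustration of the trap:

```groovy
// Boolean#getBoolean reads a system property; it does not parse its argument
System.setProperty('my.flag', 'true')
assert Boolean.getBoolean('my.flag')        // true: looked up the property
assert Boolean.getBoolean('true') == false  // false: no property named "true" exists
assert Boolean.parseBoolean('true')         // what callers almost always wanted
```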
Relates #23864
Now that we are on gradle 3.3, we can take advantage of a fix that was
made in 2.14 which properly handles disabling transitive dependencies in
pom generation. As it stood, we actually ended up generating two
exclusions sections in the generated pom. This is yet another example of
why we need validation on the pom files with our generation here, but I
leave that for another day because I still don't know a good way to do
it.
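This matters for declarations like the following, where gradle must translate `transitive = false` into a single wildcard exclusion in the generated pom (the coordinates are placeholders):

```groovy
dependencies {
    compile("org.example:some-lib:1.0") {
        // should yield exactly one exclusions section in the generated pom
        transitive = false
    }
}
```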
This moves `updateReplicaRequest` to `createPrimaryResponse` and separates the
translog updating to be a separate function so that the function purpose is more
easily understood (and testable).
It also separates the logic for `MappingUpdatePerformer` into two functions,
`updateMappingsIfNeeded` and `verifyMappings` so they don't do too much in a
single function. This allows finer-grained error testing for when a mapping
fails to parse or be applied.
Finally, it separates parsing and version validation for
`executeIndexRequestOnReplica` into a separate
method (`prepareIndexOperationOnReplica`) and adds a test for it.
Relates to #23359
This commit adds a build listener to the integ test runner which will
print out an excerpt of the logs from the integ test cluster if the test
run fails. There are future improvements that can be made (for example,
to dump excerpts from dependent clusters like in tribe node tests), but
this is a start, and it would help with simple rest test failures where we
currently don't know what actually happened on the node.
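The listener hangs off the task graph, roughly as sketched here (illustrative only; the task name and log path are assumptions):

```groovy
gradle.taskGraph.afterTask { Task task, TaskState state ->
    if (task.name == 'integTestRunner' && state.failure != null) {
        File log = new File(task.project.buildDir, 'cluster/integTest node0/elasticsearch.log')
        if (log.exists()) {
            println "=== excerpt from ${log} ==="
            List<String> lines = log.readLines()
            lines.subList(Math.max(0, lines.size() - 50), lines.size()).each { println it }
        }
    }
}
```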
This will allow us to get rid of deprecation warnings that appear when
using 3.3, and also get rid of extra logic for 2.13 required because of
the progress logger.
Search took time uses an absolute clock to measure elapsed time, and
then tries to deal with the complexities of using an absolute clock for
this purpose. Instead, we should use a high-precision monotonic relative
clock that is designed exactly for measuring elapsed time. This commit
modifies the search infrastructure to use a relative clock for measuring
took time, but still provides an absolute clock for the components of
search that require a real clock (e.g., index name expression
resolution, etc.).
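The pattern in miniature (the sleep stands in for executing the search):

```groovy
import java.util.concurrent.TimeUnit

long relativeStartNanos = System.nanoTime()            // monotonic, for elapsed time
long absoluteStartMillis = System.currentTimeMillis()  // wall clock, for components that need real time
Thread.sleep(25)                                       // stands in for running the search
long tookMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - relativeStartNanos)
println "took ${tookMillis}ms (started at epoch ${absoluteStartMillis})"
```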
Relates #23662
This commit creates a keystore and adds settings to it during the
cluster formation for integration tests. Users can define a
`keyStoreSetting` in build files for settings that need to be placed in
the keystore.
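Usage in a project's build file looks something like this (the setting name and value are placeholders):

```groovy
integTestCluster {
    keyStoreSetting 'example.secure.setting', 'example-value'
}
```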
This commit moves the checkstyle rule of max line length from 140
characters to 100 characters. We whitelist all existing violations and
will address them in follow-ups.
Relates #23623
In Gradle 3.4, the buildSrc plugin seems to be packaged into a jar before it is accessed by the rest of the build, so the signatures file for the third-party audit task cannot be accessed as a file:
getClass().getResource('/forbidden/third-party-audit.txt') then points to a file entry in a JAR, which cannot be loaded directly as a File object. This commit changes the third-party audit task to pass the content of the signatures file as a String instead.
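Reading the resource through a stream sidesteps the problem, since a stream works whether the resource is a loose file or a jar entry; the gist:

```groovy
// works both for a file on disk and for an entry inside buildSrc's jar
String signatures = getClass()
        .getResourceAsStream('/forbidden/third-party-audit.txt')
        .getText('UTF-8')
```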
While trying to improve the failure output in #23547, the stderr was
also captured from jrunscript. This was under the assumption that stderr
is only written to in case of an error. However, with java 9, when
JAVA_TOOL_OPTIONS are set, they are output to stderr. And our CI sets
JAVA_TOOL_OPTIONS for some reason. This commit fixes the jrunscript call
to use a separate buffer for stderr.
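Separating the buffers looks roughly like this (a sketch; the script expression is illustrative):

```groovy
def out = new ByteArrayOutputStream()
def err = new ByteArrayOutputStream()
// keep JAVA_TOOL_OPTIONS noise on stderr away from the stdout we parse
def proc = ['jrunscript', '-e',
            'print(java.lang.System.getProperty("java.specification.version"))'].execute()
proc.waitForProcessOutput(out, err)
println "detected java version: ${out.toString().trim()}"
```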
A previous change to the multi-search request execution to avoid stack
overflows regressed on limiting the number of concurrent search requests
from a batched multi-search request. In particular, the replacement of
the tail-recursive call with a loop could asynchronously fire off all of
the remaining search requests in the batch while max concurrent search
requests are already executing. This commit attempts to address this
issue by taking a more careful approach to the initial problem of
recursive calls. The cause of the initial problem was the possibility
of individual requests completing on the same thread that invoked the
search action execution. This can happen, for example, when an
individual request does not resolve to any shards. To address this
problem, when an individual request completes we check whether it
completed on the same thread that fired off the request; if so, we
loop, and otherwise we safely recurse. Sadly, there was a unit test to check that the
maximum number of concurrent search requests was not exceeded, but that
test was broken while modifying the test to reproduce a case that led to
the possibility of stack overflow. As such, we randomize whether or not
search actions execute on the same thread as the thread that invoked the
action.
Relates #23538
This commit improves the output when jrunscript fails to include the
full output of the command. It also makes the quoting that is needed for
windows only happen on windows (which worked on java 8, but for some
reason does not work with java 9).
This commit upgrades to the newest version of the randomized runner. There
is a new check that ensures the working directory
for each child jvm is empty. By default, this check will fail the test
run. However, for elasticsearch, we default to wiping the directory. For
example, if you previously told the runner not to wipe the directory in
order to investigate a failure, the wipe option will delete that data
when the test is re-run.
While the esplugin extension already had an input for the base notice
file of the plugin, the NoticeTask did not actually know how to use
that, and always used the base notice file from Elasticsearch.
This commit adds an `ignoreSha` configuration to the `dependencyLicense`
task, which allows skipping the sha check for a given dependency jar.
This is useful for locally built jars, whose checksums constantly change.
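Usage looks like this, following the task name used above (the jar name is a placeholder):

```groovy
dependencyLicense {
    // skip the sha check for a jar we build locally
    ignoreSha 'my-locally-built-lib'
}
```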
Adds a common base class for testing subclasses of
`InternalSingleBucketAggregation`. They are so similar they
call into question the utility of having all of these classes.
Maybe we could just use `InternalSingleBucketAggregation` in
all those cases.... But for now, let's test the classes!
Relates to #22278
This commit changes the build hash to be the string "Unknown" when for
some reason this build hash is not available. This aligns the value with
the value we use when the hash is not available from the jar
manifest. This situation can occur when running tests from a worktree,
which is not currently handled correctly by JGit, the upstream
dependency that is used to acquire the hash. This causes problems when
running tests locally because the warning header pattern expects a hash
or the string "Unknown". While the warning header pattern could be changed
to allow "N/A" as well, it seemed more sensible to align the value here
with the value used when the hash is not available from the jar manifest.
Relates #23421
This change adds a new test task, platformTest, which runs `gradle test
integTest` within a vagrant host. This will allow us to still test on
all the supported platforms, but be able to standardize on the tools
provided in the host system, for example, with a modern version of git
that can allow #22946.
In order to have sufficient memory and cpu to run the tests, the
vagrantfile has been updated to use 8GB and 4 cpus by default. This can
be customized with the `VAGRANT_MEMORY` and `VAGRANT_CPUS` environment
variables. Also, to save time in showing this can work, it currently uses
the same Vagrantfile the packaging tests do. There are a lot of cleanups
we can do to how the gradle-vagrant tasks work, including generating the
Vagrantfile altogether, but I think this is fine for now as the same
machines that would run platformTest run packagingTest, and they are
relatively beefy machines, so the higher memory and cpu for them, with
either task, should not be an issue.
This commit fixes the reproduce line output when the vagrant packagingTest
fails. Before, only `gradle packagingTest` would be output, and the
seed and list of versions were swallowed by groovy with an ancillary
failure (due to the `+` being on the wrong line for a string
continuation). With the new reproduce line, it is now output next to
the task right after failure, contains the actual task (specific to the
box that fails), and contains the seed. It also no longer contains the
upgrade versions list, as the seed is used to determine which of those
to use, and the same file would be read when testing a failure on a
particular git commit. Finally, this also ties bats test setup directly
to packagingTest, instead of to the vagrant up command.
When a rest integ test has multiple nodes, each node is supposed to not
start configuring itself until the first node has been started, so that
the unicast host information can be written. However, this was never
explicitly set up to occur; we were just very lucky that the current
gradle version and the stability of the code always produced a task graph
that had node0 starting first. With the recent refactorings to integ
tests, the order has changed. This commit fixes the ordering by adding
an explicit dependency between the first node and the other nodes.
This commit moves the LICENSE.txt and NOTICE.txt files for each plugin
to be alongside the other plugin files, inside the elasticsearch subdir.
This ensures those files are installed alongside the plugin.
Gradle's finalizedBy on tasks only ensures one task runs after another,
but not immediately after. This is problematic for our integration tests
since it allows multiple projects' integ test clusters to be running
simultaneously. While this has not been a problem thus far (gradle 2.13
happened to keep the finalizedBy tasks close enough that no clusters
were running in parallel), with gradle 3.3 the task graph generation has
changed, and numerous clusters may be running simultaneously, causing
memory pressure, and thus generally slower tests, or even failure if the
system has a limited amount of memory (eg in a vagrant host).
This commit reworks how integ tests are configured. It adds an
`integTestCluster` extension to gradle which is equivalent to the current
`integTest.cluster` and moves the rest test runner task to
`integTestRunner`. The `integTest` task is then just a dummy task,
which depends on the cluster runner task, as well as the cluster stop
task. This means running `integTest` in one project will both run the
rest tests, and shut down the cluster, before running `integTest` in
another project.
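After the rework, a project configures the cluster and the runner separately, along these lines (the settings shown are placeholders):

```groovy
integTestCluster {
    numNodes = 2
    setting 'node.attr.example', 'true'
}

integTestRunner {
    systemProperty 'tests.example.flag', 'true'
}
```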
In #23253 we added the ability to incrementally reduce search results.
This change exposes the parameter that controls the batch size and therefore
the memory consumption of a large search request.
This commit enforces the requirement of Content-Type for the REST layer and removes the deprecated methods in transport
requests and their usages.
While doing this, it turns out that there are many places where *Entity classes are used from the apache http client
libraries and many of these usages did not specify the content type. The methods that do not specify a content type
explicitly have been added to forbidden apis to prevent more of these from entering our code base.
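Concretely, entity construction now has to name the content type, for example (a representative snippet; the request body is a placeholder):

```groovy
import org.apache.http.entity.ContentType
import org.apache.http.nio.entity.NStringEntity

// constructors that do not take a content type are among those now forbidden
def entity = new NStringEntity('{"query":{"match_all":{}}}', ContentType.APPLICATION_JSON)
```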
Relates #19388
These images have been rebuilt to be preloaded with java 8 installed.
This change re-enables these systems. It also removes some redundancy in
the rpm checks I found while testing the new images, and fixes a
potential issue with generated resources in plugins where a stale dir
can cause junk to get into the distribution.
This commit adds the elasticsearch LICENSE.txt to all plugins that are
released with elasticsearch, as well as a generated NOTICE.txt specific
to the dependencies of each plugin.
When configuring which repositories to pull from, we currently add
mavenLocal() when the `repos.mavenLocal` flag is set. However, this is
only done in normal projects, but not the special buildSrc project. This
change adds that support. Note that this was not possible before gradle
2.13, as there was a bug which prevented sys props from reaching the
buildSrc project (https://issues.gradle.org/browse/GRADLE-2475).
However, we already require 2.13+.
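The gist of the check, now applied to buildSrc as well (a sketch):

```groovy
// honor the same flag in buildSrc that normal projects already honor
if (Boolean.parseBoolean(System.getProperty('repos.mavenLocal'))) {
    repositories {
        mavenLocal()
    }
}
```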
This is related to #22116. This commit adds calls that require
SocketPermission connect to forbidden APIs.
The following calls are now forbidden:
- java.net.URL#openStream()
- java.net.URLConnection#connect()
- java.net.URLConnection#getInputStream()
- java.net.Socket#connect(java.net.SocketAddress)
- java.net.Socket#connect(java.net.SocketAddress, int)
- java.nio.channels.SocketChannel#open(java.net.SocketAddress)
- java.nio.channels.SocketChannel#connect(java.net.SocketAddress)
Now that debian is disabled, we are seeing similar failures with fedora
not able to install java. This commit temporarily disables fedora until
it is once again stable.
This changes the way that replica failures are handled such that not all
failures will cause the replica shard to be failed or marked as stale.
In some cases such as refresh operations, or global checkpoint syncs, it is
"okay" for the operation to fail without the shard being failed (because no data
is out of sync). In these cases, instead of failing the shard we should simply
fail the operation, and, in the event it is a user-facing operation, return a
5xx response code including the shard-specific failures.
This was accomplished by having two forms of the `Replicas` proxy, one that is
for non-write operations that does not fail the shard, and one that is for write
operations that will fail the shard when an operation fails.
Relates to #10708
Debian 8 has been having issues with the openjdk package dependencies
being broken. This commit comments out debian-8 from the boxes which
packaging tests will run on CI.
This is related to #22116. Core no longer needs `SocketPermission`
`connect`.
This permission is relegated to these modules/plugins:
- transport-netty4 module
- reindex module
- repository-url module
- discovery-azure-classic plugin
- discovery-ec2 plugin
- discovery-gce plugin
- repository-azure plugin
- repository-gcs plugin
- repository-hdfs plugin
- repository-s3 plugin
And for tests:
- mocksocket jar
- rest client
- httpcore-nio jar
- httpasyncclient jar
This commit upgrades the checkstyle configuration from version 5.9 to
version 7.5, the latest version as of today. The main enhancement
obtained via this upgrade is better detection of redundant modifiers.
Relates #22960
This change switches to using jrunscript, instead of jjs, for detecting
version properties of java; unlike jjs, jrunscript is available on java
versions prior to 8.
closes #22898
Currently, stored scripts use a namespace of (lang, id) to be put, get, deleted, and executed. This is not necessary since the lang is stored with the stored script. A user should only have to specify an id to use a stored script. This change makes that possible while keeping backwards compatibility with the previous namespace of (lang, id). Anywhere the previous namespace is used, deprecation warnings will be logged.
The new behavior is the following:
When a user specifies a stored script, that script will be stored under both the new namespace and old namespace.
Take for example script 'A' with lang 'L0' and data 'D0'. If we add script 'A' to the empty set, the scripts map will be {"A" -> D0, "A#L0" -> D0}. If a script 'A' with lang 'L1' and data 'D1' is then added, the scripts map will be {"A" -> D1, "A#L1" -> D1, "A#L0" -> D0}.
When a user deletes a stored script, that script will be deleted from both the new namespace (if it exists) and the old namespace.
Take for example a scripts map with {"A" -> D1, "A#L1" -> D1, "A#L0" -> D0}. If a script specified by id 'A' and lang null is removed, the scripts map will be {"A#L0" -> D0}. To remove the final script, the deprecated namespace must be used, so an id 'A' and lang 'L0' would need to be specified.
When a user gets/executes a stored script, if the new namespace is used then the script will be retrieved/executed using only 'id', and if the old namespace is used then the script will be retrieved/executed using 'id' and 'lang'.
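The bookkeeping above can be traced with a plain map (mirroring the example exactly):

```groovy
def scripts = [:]
// put script 'A' with lang 'L0' and data 'D0'
scripts['A'] = 'D0'; scripts['A#L0'] = 'D0'
// put script 'A' with lang 'L1' and data 'D1'
scripts['A'] = 'D1'; scripts['A#L1'] = 'D1'
assert scripts == ['A': 'D1', 'A#L1': 'D1', 'A#L0': 'D0']
// delete by id 'A' with lang null: removes the new-namespace entry and its
// old-namespace counterpart for the current lang
scripts.remove('A'); scripts.remove('A#L1')
assert scripts == ['A#L0': 'D0']
```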
This commit introduces sequence-number-based recovery. When a replica
has fallen out of sync, rather than performing a file-based recovery we
first attempt to replay operations since the last local checkpoint on
the replica. To do this, at the start of recovery the replica tells the
primary what its local checkpoint is. The primary will then wait for all
operations between that local checkpoint and the current maximum
sequence number to complete; this is to ensure that there are no gaps in
the operations that will be replayed from the primary to the
replica. This is a best-effort attempt as we currently have no
guarantees on the primary that these operations will be available; if we
are not able to replay all operations in the desired range, we just
fall back to file-based recovery. Later work will strengthen the
guarantees.
Relates #22484
This is related to #22116. URLRepository requires SocketPermission
connect. This commit introduces a new module called "repository-url"
where URLRepository will reside. With the new module, permissions can
be removed from core.
Add unit tests for `TopHitsAggregator` and convert some snippets in
docs for `top_hits` aggregation to `// CONSOLE`.
Relates to #22278
Relates to #18160
These files should have been removed in an earlier commit. This commit also simplifies usage of ProgressLoggerWrapper by using Groovy delegation instead of explicit delegation.