We start the test JVMs with various options. There are two that we
should remove because they no longer make sense:
- we require Java 8, yet there was a conditional build option for Java 7
- we do not set MaxDirectMemorySize in our default JVM options, so we
should not set it in the test JVMs either
Relates #24501
Use the fedora-25 Vagrant box in VagrantTestPlugin; it was missing from
commit 9a3ab3e800, causing packaging test
failures.
Additionally update TESTING.asciidoc
* Fix JVM test argline
The argline was being overridden by '-XX:-OmitStackTraceInFastThrow',
which led to failures in tests that expected the JVM to be in a certain
state based on the value of tests.jvm.argline; those arguments were
never actually passed to the JVM. Additionally, we need to respect a
provided JVM argline that already contains a flag for
OmitStackTraceInFastThrow. This commit fixes this by only setting
OmitStackTraceInFastThrow if it is not already set, as sketched below.
* Add comment
* Fix comment
* More elaborate comment
* Sigh
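A minimal Gradle sketch of the resulting logic; the task and property handling here is illustrative, not the exact build code:
```
// Respect a user-provided tests.jvm.argline; only add the flag ourselves
// when the argline does not already take a position on it.
tasks.withType(Test) {
    String argline = System.getProperty('tests.jvm.argline', '')
    if (!argline.isEmpty()) {
        jvmArgs argline.tokenize(' ')
    }
    if (!argline.contains('OmitStackTraceInFastThrow')) {
        jvmArgs '-XX:-OmitStackTraceInFastThrow'
    }
}
```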
TransportService and RemoteClusterService are closely coupled already today
and to simplify remote cluster integration down the road it can be a direct
dependency of TransportService. This change moves RemoteClusterService into
TransportService with the goal to make it a hidden implementation detail
of TransportService in followup changes.
This adds `-XX:-OmitStackTraceInFastThrow` to the JVM arguments
which *should* prevent the JVM from omitting stack traces on
common exception sites. Even though these sites are common, we'd
still like the stack traces so we can debug them.
This also adds the flag when running tests and adapts some tests
that had workarounds for the absence of the flag.
Closes #24376
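For context, a runnable Groovy sketch of the JVM behavior this flag controls; the iteration count at which the JIT starts omitting traces is implementation-dependent:
```
import groovy.transform.CompileStatic

@CompileStatic
class FastThrowDemo {
    static void main(String[] args) {
        Object o = null
        for (int i = 0; i < 1000000; i++) {
            try {
                o.hashCode() // implicit NPE from compiled bytecode
            } catch (NullPointerException e) {
                // With the default +OmitStackTraceInFastThrow, the JIT
                // eventually throws a preallocated NPE with no stack trace.
                if (e.stackTrace.length == 0) {
                    println "stack trace omitted after $i throws"
                    return
                }
            }
        }
        println "stack traces kept (e.g. run with -XX:-OmitStackTraceInFastThrow)"
    }
}
```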
This commit removes a few duplicates from the list of checkstyle
suppressions that accumulated while auto-generating the list of
suppressions based on the files that currently exceed the 140-column
limit.
The list of checkstyle suppressions was auto-generated based on the list
of files that currently violate the 140-column limit. An errant entry
was committed that did no harm, but this commit removes it.
We are back to having a 140-column limit. While at some point we will
apply auto-formatting tools to the code base, that is a down-the-road
thing and adjusting the suppressions now shaves minutes off the build
time.
Relates #24408
Separates cluster state publishing from applying cluster states:
- ClusterService is split into two classes MasterService and ClusterApplierService. MasterService has the responsibility to calculate cluster state updates for actions that want to change the cluster state (create index, update shard routing table, etc.). ClusterApplierService has the responsibility to apply cluster states that have been successfully published and invokes the cluster state appliers and listeners.
- ClusterApplierService keeps track of the last applied state, but MasterService is stateless and uses the last cluster state that is provided by the discovery module to calculate the next prospective state. The ClusterService class is still kept around, which now just delegates actions to ClusterApplierService and MasterService.
- The discovery implementation is now responsible for managing the last cluster state that is used by the consensus layer and the master service. It also exposes the initial cluster state which is used by the ClusterApplierService. The discovery implementation is also responsible for adding the right cluster-level blocks to the initial state.
- NoneDiscovery has been renamed to TribeDiscovery as it is exclusively used by TribeService. It adds the tribe blocks to the initial state.
- ZenDiscovery is synchronized on state changes to the last cluster state that is used by the consensus layer and the master service, and does not submit cluster state update tasks anymore to make changes to the disco state (except when becoming master).
Control flow for cluster state updates is now as follows:
- State updates are sent to MasterService
- MasterService gets the latest committed cluster state from the discovery implementation and calculates the next cluster state to publish
- MasterService submits the new prospective cluster state to the discovery implementation for publishing
- Discovery implementation publishes cluster states to all nodes and, once the state is committed, asks the ClusterApplierService to apply the newly committed state.
- ClusterApplierService applies state to local node.
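A toy Groovy sketch of that flow; the class and method names are simplified stand-ins, not the real Elasticsearch signatures:
```
class ClusterState { long version }

class ClusterApplierService {
    ClusterState lastApplied
    void apply(ClusterState committed) {
        lastApplied = committed // also invokes appliers and listeners
    }
}

interface Discovery {
    ClusterState committedState()                        // last committed state
    void publish(ClusterState next, Closure onCommitted) // publish, then notify
}

class MasterService { // stateless: reads the committed state for each update
    Discovery discovery
    ClusterApplierService applier
    void submitUpdate(Closure<ClusterState> task) {
        ClusterState next = task(discovery.committedState()) // prospective state
        discovery.publish(next) { ClusterState committed ->
            applier.apply(committed) // apply on the local node
        }
    }
}
```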
Creates a new task `namingConventionsMain`, that runs on the
`buildSrc` and `test:framework` projects and fails the build if
any of the classes in the main artifacts are named like tests or
are non-abstract subclasses of ESTestCase.
It also fixes the three tests that would cause it to fail.
Start moving built in analysis components into the new analysis-common
module. The goal of this project is:
1. Remove core's dependency on lucene-analyzers-common.jar which should
shrink the dependencies for transport client and high level rest client.
2. Prove that analysis plugins can do all the "built in" things by moving all
"built in" behavior to a plugin.
3. Force tests not to depend on any oddball analyzer behavior. If tests
need anything more than the standard analyzer they can use the mock
analyzer provided by Lucene's test infrastructure.
This commit adds a call to jstack to see where each node is stuck when
starting up, if a timeout occurs. This also decreases the timeout back
to 30 seconds.
We want to upgrade to Lucene 7 ahead of time in order to be able to check whether it causes any trouble to Elasticsearch before Lucene 7.0 gets released. From a user perspective, the main benefit of this upgrade is the enhanced support for sparse fields, whose resource consumption is now a function of the number of docs that have a value rather than the total number of docs in the index.
Some notes about the change:
- it includes the deprecation of the `disable_coord` parameter of the `bool` and `common_terms` queries: Lucene has removed support for coord factors
- it includes the deprecation of the `index.similarity.base` expert setting, since it was only useful to configure coords and query norms, which have both been removed
- two tests have been marked with `@AwaitsFix` because of #23966, which we intend to address after the merge
After splitting integ tests into cluster configuration and the test
runner task, we still have dependencies of the test runner added as deps
of the cluster. This commit adds dependencies directly to the cluster,
so that the runner can have other dependencies independent of what is
needed for the cluster.
This new version of jna is rebuilt from the official release of jna, but
with native libs linked against older glibc in order to support all
platforms elasticsearch supports.
closes #23640
This reverts the line limit change in #23623 - this PR doesn't touch the suppression file since we are moving towards automatic code formatting which makes it mainly obsolete.
Adds the option for a plugin to specify extra directories containing notices
and licenses files to be incorporated into the overall notices file that is
generated for the plugin.
This can be useful, for example, where the plugin has a non-Java dependency
that itself incorporates many 3rd party components.
The buffer limit should have been configurable already, but the factory constructor is package private so it is truly configurable only from the org.elasticsearch.client package. Also the HttpAsyncResponseConsumerFactory interface was package private, so it could only be implemented from the org.elasticsearch.client package.
Closes #23958
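With the factory public, raising the buffer limit might look like this, sketched against the 5.x-era low level client API (verify the signatures for your version):
```
import org.apache.http.HttpHost
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory
import org.elasticsearch.client.RestClient

// Allow responses of up to 500MB to be buffered on the heap.
def consumerFactory =
        new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(500 * 1024 * 1024)
RestClient client = RestClient.builder(new HttpHost('localhost', 9200)).build()
def response = client.performRequest('GET', '/_search',
        Collections.emptyMap(), null, consumerFactory)
client.close()
```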
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995
This commit puts all the classes in the repository-s3 plugin into a
single package. In addition to simplifying the plugin, it will make it
easier to test as things that should be package private will not be
difficult to use inside tests alone.
The method Boolean#getBoolean is dangerous. It is too easy to mistakenly
invoke this method thinking that it is parsing a string as a
boolean. However, what it actually does is get a system property with
the specified string, and then attempts to use the usual crappy boolean
parsing in the JDK to parse that system property as boolean with
complete leniency (it parses every input value into either true or
false); that is, this method amounts to invoking
Boolean#parseBoolean(String) on the result of
System#getProperty(String). Boo. This commit bans usage of this method.
Relates #23864
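A quick Groovy illustration of the trap:
```
// Boolean.getBoolean reads a *system property*; it never parses its argument.
assert !Boolean.getBoolean('true')      // false: no system property named "true"

System.setProperty('my.flag', 'TRUE')
assert Boolean.getBoolean('my.flag')    // true: property exists, parsed leniently

assert Boolean.parseBoolean('true')     // this is the actual string parser
```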
Now that we are on gradle 3.3, we can take advantage of a fix that was
made in 2.14 which properly handles disabling transitive dependencies in
pom generation. As it was, we actually ended up generating two
exclusions sections in the generated pom. This is yet another example of
why we need validation on the pom files with our generation here, but I
leave that for another day because I still don't know a good way to do
it.
This moves `updateReplicaRequest` to `createPrimaryResponse` and separates the
translog updating to be a separate function so that the function purpose is more
easily understood (and testable).
It also separates the logic for `MappingUpdatePerformer` into two functions,
`updateMappingsIfNeeded` and `verifyMappings` so they don't do too much in a
single function. This allows finer-grained error testing for when a mapping
fails to parse or be applied.
Finally, it separates parsing and version validation for
`executeIndexRequestOnReplica` into a separate
method (`prepareIndexOperationOnReplica`) and adds a test for it.
Relates to #23359
This commit adds a build listener to the integ test runner which will
print out an excerpt of the logs from the integ test cluster if the test
run fails. There are future improvements that can be made (for example,
to dump excerpts from dependent clusters like in tribe node tests), but
this is a start, and will help with simple rest test failures where we
currently don't know what actually happened on the node.
This will allow us to get rid of deprecation warnings that appear when
using 3.3, and also get rid of extra logic for 2.13 required because of
the progress logger.
Search took time uses an absolute clock to measure elapsed time, and
then tries to deal with the complexities of using an absolute clock for
this purpose. Instead, we should use a high-precision monotonic relative
clock that is designed exactly for measuring elapsed time. This commit
modifies the search infrastructure to use a relative clock for measuring
took time, but still provides an absolute clock for the components of
search that require a real clock (e.g., index name expression
resolution, etc.).
Relates #23662
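The difference, in a minimal Groovy sketch (the search call is a stand-in):
```
import java.util.concurrent.TimeUnit

def doSearch = { Thread.sleep(25) } // stand-in for the actual search phase

// Monotonic relative clock: designed for elapsed time, immune to NTP
// adjustments and manual clock changes.
long startNanos = System.nanoTime()
doSearch()
long tookMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos)
println "took ${tookMillis}ms"

// An absolute clock like System.currentTimeMillis() can jump backwards or
// forwards between the two reads, yielding negative or inflated took times.
```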
This commit creates a keystore and adds settings to it during the
cluster formation for integration tests. Users can define a
`keyStoreSetting` in build files for settings that need to be placed in
the keystore.
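In a build file this might look like the following; the setting names and values are illustrative:
```
integTestCluster {
    // placed into the node's keystore during cluster formation
    keyStoreSetting 's3.client.default.access_key', 'test-access-key'
    keyStoreSetting 's3.client.default.secret_key', 'test-secret-key'
}
```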
This commit moves the checkstyle rule of max line length from 140
characters to 100 characters. We whitelist all existing violations and
will address them in follow-ups.
Relates #23623
In Gradle 3.4, the buildSrc plugin seems to be packaged into a jar before it is accessed by the rest of the build and the signatures file for the third-party audit task cannot be accessed as
getClass().getResource('/forbidden/third-party-audit.txt') then points to a file entry in a JAR, which cannot be loaded directly as a File object. This commit changes the third-party audit task to pass the content of the signatures file as a String instead.
While trying to improve the failure output in #23547, the stderr was
also captured from jrunscript. This was under the assumption that stderr
is only written to in case of an error. However, with java 9, when
JAVA_TOOL_OPTIONS are set, they are output to stderr. And our CI sets
JAVA_TOOL_OPTIONS for some reason. This commit fixes the jrunscript call
to use a separate buffer for stderr.
A previous change to the multi-search request execution to avoid stack
overflows regressed on limiting the number of concurrent search requests
from a batched multi-search request. In particular, the replacement of
the tail-recursive call with a loop could asynchronously fire off all of
the remaining search requests in the batch while max concurrent search
requests are already executing. This commit attempts to address this
issue by taking a more careful approach to the initial problem of
recursive calls. The cause of the initial problem was the possibility
of individual requests completing on the same thread that invoked the
search action execution. This can happen, for example, in cases when an
individual request does not resolve to any shards. To address this
problem, when an individual request completes we check if it completed
on the same thread that fired off the request. In this case, we loop, and
otherwise safely recurse. Sadly, there was a unit test to check that the
maximum number of concurrent search requests was not exceeded, but that
test was broken while modifying the test to reproduce a case that led to
the possibility of stack overflow. As such, we randomize whether or not
search actions execute on the same thread as the thread that invoked the
action.
Relates #23538
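The pattern, as a schematic Groovy sketch (not the actual implementation):
```
// Each request is a closure that accepts a completion callback and may
// complete either synchronously (same thread) or asynchronously.
void executeNext(Iterator<Closure> requests) {
    while (requests.hasNext()) {
        Closure request = requests.next()
        Thread dispatching = Thread.currentThread()
        boolean completedOnSameThread = false
        request {
            if (Thread.currentThread() == dispatching) {
                completedOnSameThread = true // let the outer loop continue
            } else {
                executeNext(requests) // fresh stack: recursing is safe here
            }
        }
        if (!completedOnSameThread) {
            return // an async callback has taken over the iteration
        }
    }
}
```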
This commit improves the output when jrunscript fails to include the
full output of the command. It also makes the quoting that is needed for
windows only happen on windows (which worked on java 8, but for some
reason does not work with java 9).
This commit upgrades to the newest version of randomized runner. There
is a new additional check that allows ensuring the working directory
for each child jvm is empty. By default, this check will fail the test
run. However, for elasticsearch, we default to wiping the directory. For
example, if you previously told the runner not to wipe the directory in
order to investigate a failure, the wipe option will delete this data
upon re-running the test.
While the esplugin extension already had an input for the base notice
file of the plugin, the NoticeTask did not actually know how to use
that, and always used the base notice file from Elasticsearch.
This commit adds an `ignoreSha` configuration to the `dependencyLicense`
task, which allows skipping the sha check for a given dependency jar.
This is useful for locally built jars, which will constantly change.
Adds a common base class for testing subclasses of
`InternalSingleBucketAggregation`. They are so similar they
call into question the utility of having all of these classes.
We maybe could just use `InternalSingleBucketAggregation` in
all those cases.... But for now, let's test the classes!
Relates to #22278
This commit changes the build hash to be the string "Unknown" when for
some reason this build hash is not available. This aligns the value with
the value we use when the hash is not available from the jar
manifest. This situation can occur when running tests from a worktree
which is not currently handled correctly by JGit, the upstream
dependency that is used to acquire the hash. This causes problems when
running tests locally because the warning header pattern expects a hash
or the string "Unknown". While the warning header pattern could be changed to
allow "N/A" as well, it seemed more sensible to align the value here
with the value when the hash is not available from the jar manifest.
Relates #23421
This change adds a new test task, platformTest, which runs `gradle test
integTest` within a vagrant host. This will allow us to still test on
all the supported platforms, but be able to standardize on the tools
provided in the host system, for example, with a modern version of git
that can allow #22946.
In order to have sufficient memory and cpu to run the tests, the
vagrantfile has been updated to use 8GB and 4 cpus by default. This can
be customized with the `VAGRANT_MEMORY` and `VAGRANT_CPUS` environment
variables. Also, to save time to show this can work, it currently uses
the same Vagrantfile the packaging tests do. There are a lot of cleanups
we can do to how the gradle-vagrant tasks work, including generating
Vagrantfile altogether, but I think this is fine for now as the same
machines that would run platformTest run packagingTest, and they are
relatively beefy machines, so the higher memory and cpu for them, with
either task, should not be an issue.
This commit fixes the reproduce line output when the vagrant packagingTest
fails. Previously only `gradle packagingTest` would be output; the
seed and list of versions were swallowed by groovy with an ancillary
failure (due to the `+` being on the wrong line for a string
continuation). With the new reproduce line, it is now output next to
the task right after failure, contains the actual task (specific to the
box that fails), and contains the seed. It also no longer contains the
upgrade versions list, as the seed is used to determine which of those
to use, and the same file would be read when testing a failure on a
particular git commit. Finally, this also ties bats test setup directly
to packagingTest, instead of to the vagrant up command.
When a rest integ test has multiple nodes, each node is supposed to not
start configuring itself until the first node has been started, so that
the unicast host information can be written. However, this was never
explicitly set up to occur, and we were just very lucky that the current
gradle version and stability of the code always produced a task graph
that had node0 starting first. With the recent refactorings to integ
tests, the order has changed. This commit fixes the ordering by adding
an explicit dependency between the first node and the other nodes.
This commit moves the LICENSE.txt and NOTICE.txt files for each plugin
to be alongside the other plugin files, inside the elasticsearch subdir.
This ensures those files are installed alongside the plugin.
Gradle's finalizedBy on tasks only ensures one task runs after another,
but not immediately after. This is problematic for our integration tests
since it allows multiple projects' integ test clusters to run
simultaneously. While this has not been a problem thus far (gradle 2.13
happened to keep the finalizedBy tasks close enough that no clusters
were running in parallel), with gradle 3.3 the task graph generation has
changed, and numerous clusters may be running simultaneously, causing
memory pressure, and thus generally slower tests, or even failure if the
system has a limited amount of memory (eg in a vagrant host).
This commit reworks how integ tests are configured. It adds an
`integTestCluster` extension to gradle which is equivalent to the current
`integTest.cluster` and moves the rest test runner task to
`integTestRunner`. The `integTest` task is then just a dummy task,
which depends on the cluster runner task, as well as the cluster stop
task. This means running `integTest` in one project will both run the
rest tests, and shut down the cluster, before running `integTest` in
another project.
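After this change a project's build file can configure the cluster and the runner separately; a hypothetical example:
```
integTestCluster {
    numNodes = 2
    setting 'node.attr.color', 'blue' // illustrative cluster setting
}

integTestRunner {
    systemProperty 'tests.some.flag', 'true' // illustrative runner config
}
```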
In #23253 we added the ability to incrementally reduce search results.
This change exposes the parameter to control the batch size and therefore
the memory consumption of a large search request.
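Hypothetical usage via the low level REST client; the host and index are placeholders, the parameter name is per this change:
```
import org.apache.http.HttpHost
import org.elasticsearch.client.RestClient

RestClient client = RestClient.builder(new HttpHost('localhost', 9200)).build()
// Reduce shard results in batches of 64 instead of buffering all of them.
def response = client.performRequest('GET', '/my-index/_search',
        ['batched_reduce_size': '64'])
client.close()
```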
This commit enforces the requirement of Content-Type for the REST layer and removes the deprecated methods in transport
requests and their usages.
While doing this, it turns out that there are many places where *Entity classes are used from the apache http client
libraries and many of these usages did not specify the content type. The methods that do not specify a content type
explicitly have been added to forbidden apis to prevent more of these from entering our code base.
Relates #19388
These images have been rebuilt to be preloaded with java 8 installed.
This change re-enables the systems. It also removes some redundancy in
the rpm checks I found while testing the new images, and fixes a
potential issue with generated resources in plugins where a stale dir
can cause junk to get into the distribution.
This commit adds the elasticsearch LICENSE.txt to all plugins that
are released with elasticsearch, as well as a generated NOTICE.txt specific
to the dependencies of each plugin.
When configuring which repositories to pull from, we currently add
mavenLocal() when the `repos.mavenLocal` flag is set. However, this is
only done in normal projects, but not the special buildSrc project. This
change adds that support. Note that this was not possible before gradle
2.13, as there was a bug which prevented sys props from reaching the
buildSrc project (https://issues.gradle.org/browse/GRADLE-2475).
However, we already require 2.13+.
This is related to #22116. This commit adds calls that require
SocketPermission connect to forbidden APIs.
The following calls are now forbidden:
- java.net.URL#openStream()
- java.net.URLConnection#connect()
- java.net.URLConnection#getInputStream()
- java.net.Socket#connect(java.net.SocketAddress)
- java.net.Socket#connect(java.net.SocketAddress, int)
- java.nio.channels.SocketChannel#open(java.net.SocketAddress)
- java.nio.channels.SocketChannel#connect(java.net.SocketAddress)
Now that debian is disabled, we are seeing similar failures with fedora
not able to install java. This commit temporarily disables fedora until
it is once again stable.
This changes the way that replica failures are handled such that not all
failures will cause the replica shard to be failed or marked as stale.
In some cases such as refresh operations, or global checkpoint syncs, it is
"okay" for the operation to fail without the shard being failed (because no data
is out of sync). In these cases, instead of failing the shard we should simply
fail the operation, and, in the event it is a user-facing operation, return a
5xx response code including the shard-specific failures.
This was accomplished by having two forms of the `Replicas` proxy, one that is
for non-write operations that does not fail the shard, and one that is for write
operations that will fail the shard when an operation fails.
Relates to #10708
Debian 8 has been having issues with the openjdk package dependencies
being broken. This commit comments out debian-8 from the boxes on which
packaging tests will run on CI.
This is related to #22116. Core no longer needs `SocketPermission`
`connect`.
This permission is relegated to these modules/plugins:
- transport-netty4 module
- reindex module
- repository-url module
- discovery-azure-classic plugin
- discovery-ec2 plugin
- discovery-gce plugin
- repository-azure plugin
- repository-gcs plugin
- repository-hdfs plugin
- repository-s3 plugin
And for tests:
- mocksocket jar
- rest client
- httpcore-nio jar
- httpasyncclient jar
This commit upgrades the checkstyle configuration from version 5.9 to
version 7.5, the latest version as of today. The main enhancement
obtained via this upgrade is better detection of redundant modifiers.
Relates #22960
This change switches to using jrunscript, instead of jjs, for detecting
version properties of java; jrunscript is also available on java versions prior to 8.
closes #22898
Currently, stored scripts use a namespace of (lang, id) to be put, get, deleted, and executed. This is not necessary since the lang is stored with the stored script. A user should only have to specify an id to use a stored script. This change makes that possible while keeping backwards compatibility with the previous namespace of (lang, id). Anywhere the previous namespace is used will log deprecation warnings.
The new behavior is the following:
When a user specifies a stored script, that script will be stored under both the new namespace and old namespace.
Take for example script 'A' with lang 'L0' and data 'D0'. If we add script 'A' to the empty set, the scripts map will be ["A" -> D0, "A#L0" -> D0]. If a script 'A' with lang 'L1' and data 'D1' is then added, the scripts map will be ["A" -> D1, "A#L1" -> D1, "A#L0" -> D0].
When a user deletes a stored script, that script will be deleted from both the new namespace (if it exists) and the old namespace.
Take for example a scripts map with {"A" -> D1, "A#L1" -> D1, "A#L0" -> D0}. If a script is removed specified by an id 'A' and lang null then the scripts map will be {"A#L0" -> D0}. To remove the final script, the deprecated namespace must be used, so an id 'A' and lang 'L0' would need to be specified.
When a user gets/executes a stored script, if the new namespace is used then the script will be retrieved/executed using only 'id', and if the old namespace is used then the script will be retrieved/executed using 'id' and 'lang'.
This commit introduces sequence-number-based recovery. When a replica
has fallen out of sync, rather than performing a file-based recovery we
first attempt to replay operations since the last local checkpoint on
the replica. To do this, at the start of recovery the replica tells the
primary what its local checkpoint is. The primary will then wait for all
operations between that local checkpoint and the current maximum
sequence number to complete; this is to ensure that there are no gaps in
the operations that will be replayed from the primary to the
replica. This is a best-effort attempt as we currently have no
guarantees on the primary that these operations will be available; if we
are not able to replay all operations in the desired range, we just
fall back to file-based recovery. Later work will strengthen the
guarantees.
Relates #22484
This is related to #22116. URLRepository requires SocketPermission
connect. This commit introduces a new module called "repository-url"
where URLRepository will reside. With the new module, permissions can
be removed from core.
Add unit tests for `TopHitsAggregator` and convert some snippets in
docs for `top_hits` aggregation to `// CONSOLE`.
Relates to #22278
Relates to #18160
These files should have been removed in an earlier commit. This commit also simplifies usage of ProgressLoggerWrapper by using the Groovy delegation instead of using explicit delegation.
move "es." internal headers to separate metadata set in ElasticsearchException and stop returning them as response headers
Closes #17593
* [TEST] remove ESExceptionTests, move its methods to ElasticsearchExceptionTests or ExceptionSerializationTests
Instead of using Gradle-version specific compilation options, use distinct source sets. This also allows compilation of buildSrc/build-tools under IDEs that
don't understand the version-specific compilation options.
Relates to #22669
This changes build files so that building Elasticsearch works with both Gradle 2.13 as well as higher versions of Gradle (tested 2.14 and 3.3), enabling a smooth transition from Gradle 2.13 to 3.x.
* Upgrade to Lucene 6.4.0
`ValueSource`s are now converted to `DoubleValueSource`s using the Lucene adapter made for the migration to the new API in 6.4.0.
* S3 repository: Deprecate specifying credentials through env vars and sys props
This is a follow up to #22479, where storing credentials secure way was
added.
This commit adds a MessyRestTestPlugin to the gradle build. It extends
StandaloneRestTestPlugin. The main piece of functionality that it adds
is to copy plugin-metadata from dependencies into the
generated-resources for the current test source. This is necessary to
ensure that permissions for dependencies are applied when running the
tests.
A current limitation is that the permissions are applied differently
than in the distribution sources. When permissions are granted to all
dependencies for a module or plugin, the permissions are granted to all
dependencies on the classpath for tests besides a few hardcoded
exclusions:
- es core
- es test framework
- lucene test framework
- randomized runner
- junit library
This changes build files so that building Elasticsearch works with both Gradle 2.13 as well as higher versions of Gradle (tested 2.14 and 3.3), enabling a smooth transition from Gradle 2.13 to 3.x.
This PR removes all leniency in the conversion of Strings to booleans: "true"
is converted to the boolean value `true`, "false" is converted to the boolean
value `false`. Everything else raises an error.
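The rule is easy to state in code; a minimal sketch:
```
boolean parseStrictBoolean(String value) {
    if (value == 'true') return true
    if (value == 'false') return false
    throw new IllegalArgumentException(
        "Failed to parse value [$value]: only [true] or [false] are allowed")
}

assert parseStrictBoolean('true')
assert !parseStrictBoolean('false')
// parseStrictBoolean('1') now throws instead of silently mapping to a boolean
```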
Changes the error message when `action.auto_create_index` or
`index.mapper.dynamic` forbids automatic creation of an index
from `no such index` to one of:
* `no such index and [action.auto_create_index] is [false]`
* `no such index and [index.mapper.dynamic] is [false]`
* `no such index and [action.auto_create_index] contains [-<pattern>] which forbids automatic creation of the index`
* `no such index and [action.auto_create_index] ([all patterns]) doesn't match`
This should make it more clear *why* there is `no such index`.
Closes #22435
Today we have quite some abstractions that are essentially providing a simple
dispatch method to the plugins defining a `HttpServerTransport`. This commit
removes `HttpServer` and `HttpServerAdaptor` and introduces a simple `Dispatcher` functional
interface that delegates to `RestController` by default.
Relates to #18482
The IndexingOperationListener interface did not provide any
information about the shard id when a document was indexed.
This commit adds the shard id as the first parameter to all methods
in the IndexingOperationListener.
It is no longer needed. It used to contain a lot of strings
used by serialization but those have since been removed. Now
it is just another thing to pass around that we don't really
need.
Currently, such tasks are only created for default boxes (centos-7, ubuntu-1404) and not all boxes and this can be misleading for developers who want to debug testing scripts on non-default boxes.
The RestHighLevelClient class takes as an argument a low level client instance, RestClient. The first method added is ping, which returns true if the call to HEAD / went ok and false if an IOException was thrown. Any other exception gets bubbled up.
There are two kinds of tests, a unit test (RestHighLevelClientTests) that verifies the interaction between high level and low level client, and an integration test (MainActionIT) which relies on an externally started es cluster to send requests to.
Randomized runner uses a flag, tests.asserts, which we have previously
not used, but is used in lucene for disabling assertions. This change
modifies the gradle configuration to look for this flag and pass through
to the test runner to determine whether -ea and -esa are added to the
java commandline for tests.
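A minimal sketch of the gradle side; the property handling is illustrative:
```
tasks.withType(Test) {
    // tests.asserts defaults to true; setting it to false drops -ea/-esa.
    if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) {
        jvmArgs '-ea', '-esa'
    }
}
```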
This integrates the mocksocket jar with elasticsearch tests. Mocksocket wraps actions requiring SocketPermissions in doPrivilege blocks. This will eventually allow SocketPermissions to be assigned to the mocksocket jar as opposed to the entire elasticsearch codebase.
This commit removes a leftover checkstyle suppression for a source file
that was temporarily forked into the codebase to hack around a bug in
Log4j. When that source file was removed, the suppression was left
behind.
The backwards compatibility tests rely on gradle's built-in mechanisms for resolving dependencies
to get the zip of the older version we test against. By default, this will cache snapshots for
24 hours, which can lead to unexpected failures in CI. This change makes the special configurations
for backwards compatibility always update their snapshots by setting the amount of time to cache
to 0 seconds.
This commit adds a test for applying logging levels in hierarchical
order, and addresses an issue with restoring the logging levels at the
end of a test or suite.
When starting a standalone cluster, we do not enable assertions. This is
problematic because it means that we miss opportunities to catch
bugs. This commit enables assertions for standalone integration tests,
and fixes a couple bugs that were uncovered by enabling these.
Relates #22334
If we conditionally do random things, e.g. initialize a node only after the first test, we have to make sure that we unconditionally create a new seed calling random.nextLong(), then initialize the node under a private randomness context. This makes sure that any random usage through Randomness.get() will retrieve the proper random instance through RandomizedContext.current().getRandom(). When running under private randomness, the context will return the Random instance that was created with the provided seed (forked from the main random instance) rather than the main Random that's exposed to tests as well. Otherwise tests become non repeatable because that initialization part happens only before the first executed test.
Moved field values `toXContent` logic to `GetField` (from `GetResult`), which outputs its own fields, and can also parse them now. Also added `fromXContent` to `GetResult` and `GetResponse`.
The start object and end object for `GetResponse` output have been moved to `GetResult#toXContent`, from the corresponding rest action. This makes it possible to have `toXContent` and `fromXContent` completely symmetric, as parsing requires looping till an end object is found which is weird when the corresponding `toXContent` doesn't print that out.
This also introduces the foundation for testing retrieval of _source and stored field values.
Sequence BWC logic consists of two elements:
1) Wire level BWC using stream versions.
2) A change to the global checkpoint maintenance semantics.
For the sequence number infra to work with mixed version clusters, we have to consider the situation where the primary is on an old node and replicas are on new ones (i.e., the replicas will receive operations without seq#) and also the reverse (i.e., the primary sends operations to a replica but the replica can't process the seq# and respond with a local checkpoint). A new primary with an old replica is rare because we do not allow a replica to recover from a new primary. However, it can occur if the old primary failed and a new replica was promoted, or during primary relocation where the source primary is treated as a replica until the master starts the target.
1) Old Primary & New Replica - this case is easy as it is taken care of by the wire level BWC. All incoming requests will have their seq# set to `UNASSIGNED_SEQ_NO`, which doesn't confuse the local checkpoint logic (keeping it at `NO_OPS_PERFORMED`)
2) New Primary & Old Replica - this one is trickier as the global checkpoint service currently takes all in-sync replicas into consideration for the global checkpoint calculation. In order to deal with old replicas, we change the semantics to say all *new-node* in-sync replicas. That means the replicas on old nodes don't count for the global checkpointing. In this state the seq# infra is not fully operational (you can't search on it, because copies may miss it) but it is maintained on shards that can support it. The old replicas will have to go through a file-based recovery at some point and will get the seq# information at that point. There is still an edge case where a new primary fails and an old replica takes over. I'll discuss this one with @ywelsch as I prefer to avoid it completely.
This PR also re-enables the BWC tests which were disabled. As such it had to fix any BWC issues that had crept in. Most notably an issue with the removal of the `timestamp` field in #21670.
The commit also includes a fix for the default value of the seq number field in replicated write requests (it was 0 but should be -2), which surfaced some other minor bugs that are fixed as well.
Last - I added some debugging tools like more sane node names and forcing replication requests to implement a `toString`.
If you write a yaml test with a `warnings` section in a `do` block
that doesn't also have a corresponding `skip` section for `warnings`
then client test runners that don't support `warnings` will fail.
This causes the elasticsearch build to fail so we catch these errors
earlier.
Related to #21811
Changes the build to recognize `NORELEASE` as well as `NOCOMMIT` to
mean the same thing as `norelease` and `nocommit` respectively. This
is useful because people have been using them that way but haven't
realized that only the lowercase versions worked.
This also explicitly forbids silly things like `NoReLeAsE` and
`noCOMMIT`, failing the build and telling you to spell them properly.
Set lucene version to 6.4.0-snapshot-ec38570 and update all the sha1s/license
Fix invalid combo after upgrade in query_string query. split_on_whitespace=false is disallowed if auto_generate_phrase_queries=true
Adapt the expectations of some tests to the new format of the Lucene explain output
REST tests use the default OOTB low/high disk watermarks of 85%/90%, which can make some tests fail if run on a machine with a fuller disk. This commit changes the watermarks in the same way as in IntegTestCase so that they're essentially ignored.
Add indices and filter information to search shards api output
The search shards api returns info about which shards are going to be hit by executing a search with provided parameters: indices, routing, preference. Indices can also be aliases, which can also hold filters. The output includes an array of shards and a summary of all the nodes the shards are allocated on. This commit adds a new indices section to the search shards output that includes one entry per index, where each index can be associated with an optional filter in case the index was hit through a filtered alias.
This is relevant since we have moved parsing of alias filters to the coordinating node.
Relates to #20916
Today there is no way to get notified if a node is disconnected. Client code
must poll the TransportClient constantly to detect that a node is not connected
anymore in order to react and add new nodes or notify alerting etc. For instance
if a hostname gets resolved to an IP but that host is disconnected clients want
to reconnect by resolving the hostname again which is a common situation in cloud
environments.
Closes #21424
This commit adds the ability to support running with plugins in tests that make use of
backwards compatibility nodes. This can be used to test rolling upgrades with plugins
to ensure they do not cause issues during a rolling upgrade of elasticsearch.
In #21348 the command executed to run the packaging tests has been changed to "sudo -E bats ...", forcing all environment variables from the vagrant user to be passed to the `sudo` command. This breaks a test on opensuse-13 (the one where it checks that elasticsearch cannot be started when `java` is not found) because the user's entire PATH is passed to the sudo command.
This commit restores the previous behavior while allowing only necessary testing environment variables to be passed using a /etc/sudoers.d file.
JDK9 removed pathname canonicalization when constructing FilePermission objects, which breaks some of the FilePermissions added by Elasticsearch. This commit adds the system property jdk.io.permissionsUseCanonicalPath which makes JDK9 behave like JDK8 w.r.t. FilePermission objects (see #21534).
Today when a node starts, we create dynamic socket permissions based on
the configured HTTP ports and transport ports. If no ports are
configured, we use the default port ranges. When a tribe node starts, a
tribe node creates an internal node client for connecting to each remote
cluster. If neither an explicit HTTP port nor transport ports were
specified, the default port ranges are large enough for the tribe node
and its internal node clients. If an explicit HTTP port or transport
port was specified for the tribe node, then socket permissions for those
ports will be created, but not for the internal node clients. Whether
the internal node clients have explicit ports specified, or attempt to
bind within the default range, socket permissions for these will not
have been created and the internal node clients will hit a permissions
issue when attempting to bind. This commit addresses this issue by also
accounting for tribe nodes when creating the dynamic socket
permissions. Additionally, we add our first real integration test for
tribe nodes.
JDK9 removed pathname canonicalization when constructing FilePermission objects, which breaks some of the FilePermissions added by
Elasticsearch. This commit adds the system property jdk.io.permissionsUseCanonicalPath which makes JDK9 behave like JDK8 w.r.t. FilePermissions (see
https://github.com/elastic/elasticsearch/issues/21534).
This commit changes the current :elasticsearch:qa:vagrant build file and transforms it into a Gradle plugin in order to reuse it in other projects.
Most of the code from the build.gradle file has been moved into the VagrantTestPlugin class. To avoid duplicated VMs when running vagrant tests, the Gradle plugin sets the following environment variables before running vagrant commands:
VAGRANT_CWD: absolute path to the folder that contains the Vagrantfile
VAGRANT_PROJECT_DIR: absolute path to the Gradle project that uses the VagrantTestPlugin
The VAGRANT_PROJECT_DIR is used to share project folders and files with the vagrant VM. These folders and files are exported when running the task `gradle vagrantSetUp` which:
- collects all project archives dependencies and copies them into `${project.buildDir}/bats/archives`
- copies all project bats testing files from 'src/test/resources/packaging/tests' into `${project.buildDir}/bats/tests`
- copies all project bats utils files from 'src/test/resources/packaging/utils' into `${project.buildDir}/bats/utils`
It is also possible to inherit and grab the archives/tests/utils files from project dependencies using the plugin configuration:
apply plugin: 'elasticsearch.vagrant'
esvagrant {
inheritTestUtils true|false
inheritTestArchives true|false
inheritTests true|false
}
dependencies {
// Inherit Bats test utils from :qa:vagrant project
bats project(path: ':qa:vagrant', configuration: 'bats')
}
The folders `${project.buildDir}/bats/archives`, `${project.buildDir}/bats/tests` and `${project.buildDir}/bats/utils` are then exported to the vagrant VMs and mapped to the BATS_ARCHIVES, BATS_TESTS and BATS_UTILS environment variables.
The following Gradle tasks have also been renamed:
* gradle vagrantSetUp
This task copies all the necessary files to the project build directory (was `prepareTestRoot`)
* gradle vagrantSmokeTest
This task starts the VMs and echoes a "Hello world" within each VM (was: `smokeTest`)
We currently have a lot of log messages in our CI output like
```
[thirdPartyAudit] WARNING: The referenced class 'org.noggit.JSONParser' cannot be loaded. Please fix the classpath!
[thirdPartyAudit] WARNING: The referenced class 'org.noggit.JSONParser' cannot be loaded. Please fix the classpath!
[thirdPartyAudit] WARNING: The referenced class 'org.noggit.JSONParser' cannot be loaded. Please fix the classpath!
... repeated 1827281 times ...
```
This changes these messages to be logged at the DEBUG level, so they
will not show up by default.
This change simply makes the ant timestamp echo used while waiting on
the integ test cluster log at the info level instead of warn (the
default) so that it is only output when running with gradle --info, or
when the wait condition fails.
Today we still have a leftover from older percolators where lucene
query instances were created ahead of time and rewritten later.
This `LateParsingQuery` was resolving `now()` when it was really used, which we
don't need anymore. As a side-effect this failed to execute some highlighting
queries when they got rewritten, since at that point access to `now` is not
permitted anymore to prevent bugs when queries get cached.
Closes #21295
Dependencies are currently marked as non-transitive in generated POM files by adding a wildcard (*) exclusion. This breaks compatibility with the dependency manager Apache Ivy as it incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). To stay compatible with the current release of Ivy this commit uses explicit excludes for each transitive artifact instead to ensure that the main artifact is not excluded. This should be revisited when we upgrade Gradle to a higher version as the current one (2.13) as Gradle automatically translates non-transitive dependencies to * excludes in 2.14+.
Since we now validate all consumed request parameters, users can't specify
`_cat/nodes?full_id=true|false` anymore since this parameter is consumed late.
This commit adds a test for this parameter and consumes it before the request is processed.
Closes #21266
Setting `discovery.initial_state_timeout: 0s` to make `discovery.zen.minimum_master_nodes: N`
work reliably can cause issues in clusters that rely on state recovery once the cluster is available.
This change makes the use of `discovery.zen.minimum_master_nodes` optional for clusters where this behavior is desirable.
Lucene 6.3 is expected to be released in the next weeks so it'd be good to give
it some integration testing. I had to upgrade randomized-testing too so that
both Lucene and Elasticsearch are on the same version.
Today we only use a single node to send requests to when we run REST tests.
In some cases we have more than one node (ie. in the BWC case) where we should
send requests to all nodes in a round-robin fashion. This change passes all
available node endpoints to the rest test.
Additionally, this change adds the setting of `discovery.zen.minimum_master_nodes`
to the cluster formation forcing the nodes to wait for all other nodes until the cluster
is formed. This allows for a more realistic master election and allows all master-eligible
nodes to become master, while before the first node in the cluster always became the master.
This also adds logging to each test run to log the master node's version and the minimum node
version in the cluster to help debugging BWC test failures.
This fixes our cluster formation task to run REST tests against a mixed version cluster.
Yet, due to some limitations in our test framework `indices.rollover` tests are currently
disabled for the BWC case since they select the current master as the merge node which
happens to be a BWC node and we can't relocate all shards to it since the primaries are on
a higher version node. This will be fixed in a followup.
Closes #21142
Note: This has been cherry-picked from 5.0 and fixes several rest tests
as well as a BWC break in `OsStats.java`
Today the request interceptor can't support async calls since the response
of the async call would execute on a different thread ie. a client or listener
thread. This means in turn that the intercepted handler is not executed on the
thread it was supposed to run on and therefore can, if it's executing blocking
operations, potentially deadlock an entire server.
* Move all zen discovery classes into o.e.discovery.zen
This collapses sub packages of zen into zen. These all had just a couple
classes each, and there is really no reason to have the subpackages.
* fix checkstyle
When running `gradle run`, a developer usually intends to get a running
instance as if they had run elasticsearch from the command line. This is
different than the isolated environment we use for integration testing
plugins. This change switches the run task to use the zip distribution,
so that all modules included in the normal distribution are included.
Cleaning up a few remaining occurrences of using junit's ExpectedException rule in
favor of using LuceneTestCase#expectThrows(), which is more concise and versatile.
This change adds an overloaded `XContentMapValues#filter` method that returns
a function enclosing the compiled automatons that can be reused across filter
calls. This for instance prevents compiling automatons over and over again when
hits are filtered or in the SourceFieldMapper for each document.
Closes #20839
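Sketched usage of the new overload; the signature is as described above, and the include/exclude patterns are illustrative:
```
import java.util.function.Function
import org.elasticsearch.common.xcontent.support.XContentMapValues

// Compile the include/exclude automatons once...
Function<Map<String, Object>, Map<String, Object>> filter =
        XContentMapValues.filter(['user.*'] as String[], ['user.secret'] as String[])

// ...then reuse them for every filtered hit or document.
def hits = [[user: [name: 'alice', secret: 'hunter2']]]
for (Map<String, Object> source : hits) {
    println filter.apply(source) // [user:[name:alice]]
}
```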
Settings updates are important to be able to help and administer a cluster in distress. We shouldn't block them due to circuit breakers. An extreme example is where we are actually trying to increase an unreasonably low setting for the circuit breaker itself.
See https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+g1gc/242/
Instead provide services where they are needed. The class worked
well as a temporary measure to ease removal of guice from the index
level but now we can remove it entirely.
-1 @Inject annotation
This commit upgrades the Log4j 2 dependency to version 2.7 and removes
some hacks that we had in place to work around bugs in Log4j 2 version
2.6.2.
Relates #20805
Today we hold on to all possible tokenizers, tokenfilters etc. when we create
an index service on a node. This was mainly done to allow the `_analyze` API to
directly access all these primitives. We fixed this in #19827 and can now get rid of
the AnalysisService entirely and replace it with a simple map like class. This
ensures we don't create a gazillion long living objects that are entirely useless since
they are never used in most of the indices. Also those objects might consume a considerable
amount of memory since they might load stopwords or synonyms etc.
Closes #19828
Today we define a cluster wait condition to try to wait at least a
certain number of nodes when running integration tests. Alas, the wait
condition is incorrect because wait_for_nodes>=${numNodes} will be split
by parameter parsing on the equals sign so the request looks like it has
a parameter named wait_for_nodes>. The fact that REST param parsing is
lenient leads to this being undiscovered. This commit fixes this issue.
Relates #20601
Adds an integration test for the file-based discovery plugin
to test that the plugin operates correctly and uses the hosts
configured in `unicast_hosts.txt` with a real cluster
Closes #20459
We have a "HUGE HACK" that allows us to publish zip artifacts to
Sonatype's OSS repository without javadoc and source jars. We don't
include those jars because the zip is just a repackaging of the
core and module jars for which we already publish the javadoc and
source jars. So we have a hack to publish the zip artifact when the
pom says the project is of type 'pom'.
The build currently depends on the presence of a Git remote named origin to determine the URL that is used in the generated POM file. As this is best-effort anyhow and only required by Maven Central, this commit allows the build to run even if a Git remote with the name "origin" is missing.
With the switch to Log4j 2 throughout our code base, the logger usage checker was temporarily disabled. This commit
adapts the checks to work with Log4j 2 and re-enables the Gradle checks.
Closes #20243
I'm not sure why we need this pom instead of the pom generated by
nebula, but if we are going to have it then we need to populate it
with appropriate stuff like project name, description, and url.
Gradle appears to have a bug in maven publishing which will not match the
artifactId of a generated pom with the artifact id it puts in the file.
This adds back a copy hack from the original pom file name to the client
pom file name (which we had before, until #20403 inadvertently
removed it).
This commit adds a new test TribeIT#testClusterStateNodes() to verify that the tribe node correctly reflects the nodes of the remote clusters it is connected to.
It also changes the existing tests so that they really use two remote clusters now.
* Build: Remove old maven deploy support
This change removes the old maven deploy that we have in parallel to
maven-publish, and makes maven-publish fully work with publishing to
maven local. Using `gradle publishToMavenLocal` should be used to
publish to .m2.
Note that there is an unfortunate hack that means for
zip artifacts we must first create/publish a dummy pom file, and then
follow that with the real pom file. It would be nice to have the pom
file contains packaging=zip, but maven central then requires sources and
javadocs. But our zips are really just attached artifacts, so we already
set the packaging type to pom for our zip files. This change just works
around a limitation of the underlying maven publishing library which
silently skips attached artifacts when the packaging type is set to pom.
relates #20164, closes #20375
* Remove unnecessary extra spacing
We were using maven snapshots during heavy development, but this should
not be something generally available (we should never release depending
on a snapshot version in maven). This change removes the snapshot repo.
If we ever need it temporarily for some reason, we can add it if/when
it is necessary.
relates #20559
This tracks the snippets that probably should be converted to
`// CONSOLE` or `// TESTRESPONSE` and fails the build if the list
of files with such snippets doesn't match the list in `docs/build.gradle`.
Configuring the list looks like
```
/* List of files that have snippets that probably should be converted to
* `// CONSOLE` and `// TESTRESPONSE` but have yet to be converted. Try and
* only remove entries from this list. When it is empty we'll remove it
* entirely and have a party! There will be cake and everything.... */
buildRestTests.expectedUnconvertedCandidates = [
'plugins/discovery-azure-classic.asciidoc',
...
'reference/search/suggesters/completion-suggest.asciidoc',
]
```
This list is in `build.gradle` because we expect it to be fairly
temporary. In a few months we'll have converted all of the docs and won't
need it any more.
From now on if you add new docs that contain a snippet that shows an
interaction with elasticsearch you have three choices:
1. Stick `// CONSOLE` on the interactions and `// TESTRESPONSE` on the
responses. The build (specifically (`gradle docs:check`) will test that
these interactions "work". If there isn't a `// TESTRESPONSE` snippet
then "work" just means "Elasticsearch responds with a 200-level response
code and no `WARNING` headers. This is way better than nothing.
2. Add `// NOTCONSOLE` if the snippet isn't actually interacting with
Elasticsearch. This should only be required for stuff like javascript
source code or `curl` against an external service like AWS or GCE. The
snippet will not get "OPEN IN CONSOLE" or "COPY AS CURL" buttons or be
tested.
3. Add `// TEST[skip:reason]` under the snippet. This will just skip the
snippet in the test phase. This should really be reserved for snippets
where we can't test them because they require an external service that
we don't have at testing time.
Please, please, please, please don't add more things to the list. After
all, it says there'll be cake when we remove it entirely!
Relates to #18160
This PR introduces backward compatibility index tests to test the rolling upgrade process amongst Elasticsearch instances within the same major version. The test executes in three phases. In the first phase, we form a cluster of 2 ES instances on an old version. In the second phase, we keep one of the nodes from the old cluster, kill the other node, but preserve its data directory and start an instance of the current version of ES using the same data directory as the killed instance. In the third phase, we kill the other old version ES instance from the first phase and launch a new instance, using the same data directory as the killed instance. Therefore, during phase 3, we have fully migrated and have all current versions of ES running. In each phase, we run REST tests that index documents and search them, ensuring at each stage that the documents from the previous phase are still there.
Note that because we haven't released a GA yet of 5.0, the tests currently don't start an old version cluster in the first phase. Once GA is released, this will be changed to make the backward compatibility version 5.0, while the current version in the cluster will be 5.x.
In some test setups we do not want to stop nodes
automatically between tasks, as we want some of the nodes from
the previous task to continue running in the next task. This
commit enables a cluster configuration setting to not stop
nodes automatically after a task runs, but instead the creator
of the test task must stop the running nodes explicitly in a
cleanup phase.
When forming a test cluster, we wait for the cluster health to indicate the
necessary nodes have formed a cluster. This check was an
exact value (equality) check. However, if we are trying to
connect the nodes in the cluster to nodes from a previously
formed cluster (of the same name), then we will have more
nodes returned by the cluster health check than the current
task's configured number of nodes. Hence, this check needs
to be a >= check. This commit fixes it.
The JDK project is in the process of modifying the command-line flags
for various JDK tools (http://openjdk.java.net/jeps/293). In particular,
the release flag on javac has changed from -release to --release. This
commit adapts the build process to this change.
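As a hedged illustration (Gradle snippet; the version value is made up for the example), the adaptation amounts to emitting the double-dash form:
```
// Hedged sketch: pass the renamed flag to javac through Gradle; the
// version value here is illustrative only.
tasks.withType(JavaCompile) {
    options.compilerArgs += ['--release', '8']
}
```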
Relates #20420
SearchParseElement is renamed to FetchSubPhaseParser and moved to the search.fetch package. Its parse method no longer gets the SearchContext as an argument, only the XContentParser, and the return type is what gets parsed (the fetch sub phase context, which we may as well rename later).
The parser is then the one that initializes the FetchSubPhaseContext. SearchService retrieves the parser by name, calls parse against it, and stores the result of parsing by name. There is no need for FetchSubPhase.ContextFactory anymore, so it can be removed. A hedged sketch of what the parser contract could look like follows.
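A minimal sketch, assuming the shape described above (the generic bound and the checked exception are guesses, not the actual code):
```
// Hedged sketch of the refactored parser contract; names other than
// FetchSubPhaseParser itself are assumptions.
interface FetchSubPhaseParser<T extends FetchSubPhaseContext> {
    // parses only its own XContent section; no SearchContext is passed,
    // and the parsed fetch sub phase context is the return value
    T parse(XContentParser parser) throws IOException
}
```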
and be much more stingy about what we consider a console candidate.
* Add `// CONSOLE` to check-running
* Fix version in some snippets
* Mark groovy snippets as groovy
* Fix versions in plugins
* Fix language marker errors
* Fix language parsing in snippets
This adds support for snippets whose language is written like
`[source, txt]` and `["source","js",subs="attributes,callouts"]`.
This also makes language required for snippets which is nice because
then we can be sure we can grep for snippets in a particular language.
This commit removes a line-length violation in RemovePluginCommand.java
and removes this file from the list of files for which the line-length
check is suppressed.
Log4j has a bug where it does not handle a security exception that can
be thrown when it is rendering a stack trace. This commit intentionally
introduces jar hell with the ThrowableProxy class to work around this
bug until a fix is released.
Relates #20306
If Elasticsearch controls the ID values as well as the document
versions, we can optimize the code that adds / appends the documents
to the index. Essentially we can skip the version lookup for all
documents unless the same document is delivered more than once.
At the Lucene level we can simply call IndexWriter#addDocument instead
of #updateDocument, but at the Engine level we need to ensure that we
deoptimize the case once we see the same document more than once.
This is done as follows:
1. Mark every request with a timestamp. This is done once on the first node that
receives a request and is fixed for this request. This can be even the
machine local time (see why later). The important part is that retry
requests will have the same value as the original one.
2. In the engine we make sure we keep the highest seen timestamp of "retry" requests.
This is updated while the retry request has its doc id lock. Call this `maxUnsafeAutoIdTimestamp`.
3. When an "optimized" request arrives at the engine, it compares its timestamp with the
current `maxUnsafeAutoIdTimestamp` (but doesn't update it). If the request
timestamp is higher, it is safe to execute it as optimized (no retry request with the same
timestamp has been run before). If not, we fall back to "non-optimized" mode and run the request as a retry one,
updating the `maxUnsafeAutoIdTimestamp` unless it's already been updated to a higher value.
A minimal sketch of this check follows.
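A minimal sketch, assuming a single shared field (the real code has to update it atomically under the doc id lock); only `maxUnsafeAutoIdTimestamp` is a name from the description above, the rest is illustrative:
```
// Hedged sketch of the deoptimization check described above.
class AutoIdAppendOnlyCheck {
    volatile long maxUnsafeAutoIdTimestamp = Long.MIN_VALUE

    boolean canOptimize(long requestTimestamp, boolean isRetry) {
        if (isRetry) {
            // remember the retry's timestamp so any request carrying the
            // same (or a lower) timestamp is treated as potentially unsafe
            maxUnsafeAutoIdTimestamp = Math.max(maxUnsafeAutoIdTimestamp, requestTimestamp)
            return false // retries always take the safe updateDocument path
        }
        // addDocument is only safe if no retry with an equal or higher
        // timestamp has been seen before this request
        return requestTimestamp > maxUnsafeAutoIdTimestamp
    }
}
```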
Relates to #19813
* master:
Increase visibility of deprecation logger
Skip transport client plugin installed on JDK 9
Explicitly disable Netty key set replacement
percolator: Fail indexing percolator queries containing either a has_child or has_parent query.
Make it possible for Ingest Processors to access AnalysisRegistry
Allow RestClient to send array-based headers
Silence rest util tests until the bogusness can be simplified
Remove unknown HttpContext-based test as it fails unpredictably on different JVMs
Tests: Improve rest suite names and generated test names for docs tests
Add support for a RestClient base path
Rest test suites are currently only the directory above the yaml test
file. That is confusing when there is more than one directory level
containing yaml tests, as there are in generated docs tests. This
change makes rest tests use the full relative path to the rest test root
as the suite name, and also makes the test names for docs tests a little
clearer (that they are testing an example from a specific line number,
instead of just the line number as an opaque test name).
While removing an index isn't actually an alias action, if we add
an alias action that deletes an index then we can delete an index
and add an alias with the same name as the index atomically, in
the same cluster state update.
Closes#20064
This changes Elasticsearch to automatically downgrade `text` and
`keyword` fields into appropriate `string` fields when changing the
mapping of indexes imported from 2.x. This allows users to use the
modern, documented syntax against 2.x indexes. It also makes it clear
that reindexing in order to recreate the index in 5.0 is required for
any long lived indexes. This change is useful for the times when you
can't reindex (the cluster is just starting, not stable enough for reindex) or
shouldn't (the index will only live 90 days or something).
Adds an explicit recoverySource field to ShardRouting that characterizes the type of recovery to perform:
- fresh empty shard copy
- existing local shard copy
- recover from peer (primary)
- recover from snapshot
- recover from other local shards on same node (shrink index action)
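A hedged sketch of those types as an enum (constant names are assumptions based on the list above):
```
// Hedged sketch; constant names are guesses from the description.
enum RecoverySourceType {
    EMPTY_STORE,     // fresh empty shard copy
    EXISTING_STORE,  // existing local shard copy
    PEER,            // recover from the primary on another node
    SNAPSHOT,        // recover from a snapshot
    LOCAL_SHARDS     // recover from other local shards on the same node (shrink)
}
```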
This was previously broken because run and integTest used the same
configuration name. This change makes the configuration name prefixed by
the task the cluster is created for.
In 1e91f3b we disabled annotation processors globally. However, some
projects like JMH need annotation processing, so we add the ability to
selectively enable annotation processing for certain projects by
setting an external property in the corresponding Gradle build script.
Note that `javac` would allow setting a specific annotation processor
with the command line option `-processor`. However, due to a bug in
Gradle we cannot use this option and need to enable all annotation
processors.
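A minimal sketch of that opt-in, assuming a hypothetical property name:
```
// Hedged sketch; 'enableAnnotationProcessing' is a hypothetical
// property name, not necessarily the one the build uses.
tasks.withType(JavaCompile) {
    if (project.hasProperty('enableAnnotationProcessing') == false) {
        // keep annotation processing disabled unless the project opts in
        options.compilerArgs << '-proc:none'
    }
}
```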
This commit sets external nodes for integration tests to default to
using 512m of heap. This can be overridden using tests.heap.size (a
system property that we already use elsewhere for setting the size of
the heap for the test runner) or using tests.jvm.argline (this last one
takes precedence).
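A minimal sketch of that precedence (illustrative only; the real wiring in the build differs):
```
// Hedged sketch of the fallback logic described above.
String argline = System.getProperty('tests.jvm.argline', '')
if (argline.contains('-Xmx') == false) {
    // tests.heap.size applies only when the argline doesn't set a heap itself
    argline += ' -Xmx' + System.getProperty('tests.heap.size', '512m')
}
```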
Squashes all the subpackages of `org.elasticsearch.rest.action` down to
the following:
* `o.e.rest.action.admin` - Administrative actions
* `o.e.rest.action.cat` - Actions that make tables for `grep`ing
* `o.e.rest.action.document` - Actions that act on documents
* `o.e.rest.action.ingest` - Actions that act on ingest pipelines
* `o.e.rest.action.search` - Actions that search
I'm tempted to merge `search` into `document` but the `document`
package feels fairly complete as is and `Suggest` isn't actually always
about documents either....
I'm also tempted to merge `ingest` into `admin.cluster` because the
latter contains the actions for dealing with stored scripts.
I've moved the `o.e.rest.action.support` into `o.e.rest.action`.
I've also added `package-info.java`s to all packages in `o.e.rest`. I
figure if the package is too small to deserve a `package-info.java` file
then it is too small to deserve to be a package....
Also fixes checkstyle in all moved classes.
We have 1074 snippets that look like they should be converted to
`// CONSOLE`. At least that is what `gradle docs:listConsoleCandidates`
says. This adds `// NOTCONSOLE` to explicitly mark snippets that
*shouldn't* be converted to `// CONSOLE`. After marking the blindingly
obvious ones this cuts the remaining snippet count to 1032.
This commit defaults the max local storage nodes to one. The motivation
for this change is that a default value greater than one is dangerous,
as users sometimes end up unknowingly starting a second node and then
think that they have encountered data loss.
Relates #19964
I also reduced the visibility of a couple of classes and renamed/consolidated some
test classes for consistency, e.g. removing the `Simple` prefix or using the
`<Type>FieldMapperTests` convention for testing field mappers.
Some unused annotation processors caused build-time failures. Instead,
we should just be explicit about which annotation processors we will use
(if any) with additional command-line flags.
Relates #19919
This change works around a known issue with the maven-publish
gradle plugin: all deps are marked in the generated pom as runtime. With
this change, they are set back to compile time. This also simplifies the
transitive dependency exclusions to work the same way as they were fixed in
gradle 2.14 (wildcard exclusions).
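A hedged sketch of the fix-up (the publication name and XML navigation are assumptions):
```
// Hedged sketch; 'nebula' as the publication name is an assumption.
publishing.publications.nebula.pom.withXml {
    asNode().dependencies[0].dependency.each { dep ->
        // maven-publish marks everything runtime; flip it back to compile
        dep.scope[0].setValue('compile')
    }
}
```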
closes#19835
This commit updates Jackson to the 2.8.1 version, which is more strict when it comes to building objects. It also adds the snakeyaml dependency that was previously shaded in the jackson libs.
It also closes#18076
Adds `warnings` syntax to the yaml test that allows you to expect
a `Warning` header that looks like:
```
- do:
warnings:
- '[index] is deprecated'
- quotes are not required because yaml
- but this argument is always a list, never a single string
- no matter how many warnings you expect
get:
index: test
type: test
id: 1
```
These are accessible from the docs with:
```
// TEST[warning:some warning]
```
This should help to force you to update the docs if you deprecate
something. You *must* add the warnings marker to the docs or the build
will fail. While you are there you *should* update the docs to add
deprecation warnings visible in the rendered results.
The `loggerUsageCheck` can only run on directories that exist. It was
checking whether or not the directories exist before they were built
and then deciding to do no work, but only when building in a cleaned
environment, which CI does but people rarely do locally.
After #13834 many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
The work started with #19280, #19302 and #19336, and this PR moves the remaining messy tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
It also moves the IndexLookupIT test back (even though it has a good chance of being removed soon) and fixes its tests.
It also changes AbstractQueryTestCase to use custom script plugins in tests.
closes#13837
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.cluster` into
`org.elasticsearch.rest.action.admin.cluster`.
Also fixes line length in these packages.
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.indices` into
`org.elasticsearch.rest.action.admin.indices`.
It also adds a `package-info.java` file explaining what the files in
the package *do*.
Also fixes line length in these packages. It makes a single non-checkstyle
change: implementing `ToXContent` on `GetIndexTemplatesResponse`. I did
this because it was the right thing to do and it fixed a line length
violation.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html, and
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
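A minimal sketch of building the value (hypothetical helper; the real code may encode path parts differently):
```
// Hedged sketch of building the absolute-path Location value.
String locationHeader(String index, String type, String id) {
    // URLEncoder does form encoding; used here only for illustration
    '/' + [index, type, id].collect { URLEncoder.encode(it, 'UTF-8') }.join('/')
}
assert locationHeader('test', 'test', '1') == '/test/test/1'
```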
Closes#19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
After #13834 many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
Maven central requires a project url. The recent change to make poms for
plugin client jars broke that because we no longer use nebula publishing
for plugin pom generation. This change adds back the url to the pom.
This change removes the multiple ways that plugins can be added to the
integ test cluster. It also removes the use of the default
configuration, and instead adds a zip configuration to all plugins. This
will enable using project substitutions with plugins, which must be done
with the default configuration.
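A hedged sketch of the per-plugin configuration ('bundlePlugin' as the zip-producing task is an assumption):
```
// Hedged sketch; adds a dedicated 'zip' configuration per plugin and
// publishes the plugin bundle through it.
configurations {
    zip
}
artifacts {
    zip bundlePlugin
}
```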
Moving the dependent jars instead of copying breaks downstream builds
that rely on the jars existing for compilation. This commit modifies
these moves to be copies.
This change adds a flag which can be set in the esplugin closure in
build.gradle for plugins and modules which contain pieces that must be
published to maven, for use in the transport client. The jar/pom and
source/javadoc jars are moved to a new name that has the suffix
"-client".
I enabled this for the two modules that I know definitely need this;
there may be more. One open question is which groupId to use for the
generated pom.
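A hedged sketch of what setting the flag could look like (the flag name is an assumption):
```
// Hedged sketch; 'hasClientJar' is an assumed name for the new flag.
esplugin {
    description 'module whose jar/pom must also be published for the transport client'
    hasClientJar = true
}
```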
closes#19411
This moves all netty related code into modules/transport-netty. The module is built as a zip file as well as a JAR to serve as a dependency for the transport client. For the time being this is required, otherwise we have no network based impl. for transport client users. This might be subject to change as we move forward with the http client.
* Clean up the generics around significant terms aggregation results
* Reduce code duplicated between `SignificantLongTerms` and
`SignificantStringTerms` by creating `InternalMappedSignificantTerms`
and moving common things there where possible.
* Migrate to `NamedWriteable`
* Line length fixes while I was there
This exposes a method to start an action and return a task from
`NodeClient`. This allows reindex to use the injected `Client` rather
than requiring injection of `TransportAction`s.
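A hedged sketch of how reindex might call such a method (the method name and signature are assumptions about the shape, not the exact API):
```
// Hedged sketch; 'executeLocally' and its parameters are assumptions.
Task task = nodeClient.executeLocally(SomeAction.INSTANCE, request, taskListener)
// the returned task can be tracked or cancelled without injecting the
// TransportAction itself
```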