This PR introduces backward compatibility index tests to test the rolling upgrade process among Elasticsearch instances within the same major version. The test executes in three phases. In the first phase, we form a cluster of 2 ES instances on an old version. In the second phase, we keep one of the nodes from the old cluster, kill the other node but preserve its data directory, and start an instance of the current version of ES using the same data directory as the killed instance. In the third phase, we kill the remaining old-version ES instance from the first phase and launch a new current-version instance, again reusing the data directory of the killed instance. Thus, during phase 3, the migration is complete and every running instance is on the current version of ES. In each phase, we run REST tests that index documents and search them, ensuring at each stage that the documents from the previous phase are still there.
Note that because we haven't yet released a GA of 5.0, the tests currently don't start an old-version cluster in the first phase. Once the GA is released, this will be changed to make the backward compatibility version 5.0, while the current version in the cluster will be 5.x.
The rolling upgrade tests require that nodes are not stopped
automatically between tasks, as we want some of the nodes from
the previous task to continue running in the next task. This
commit enables a cluster configuration setting to not stop
nodes automatically after a task runs; instead, the creator
of the test task must stop the running nodes explicitly in a
cleanup phase.
When starting nodes to form a
cluster, we wait for the cluster health to indicate that the
necessary nodes have formed a cluster. This check was an
exact (equality) check. However, if we are trying to
connect the nodes in the cluster to nodes from a previously
formed cluster (of the same name), then the cluster health
check will return more nodes than the current task's
configured number of nodes. Hence, this check needs
to be a >= check. This commit fixes it.
Today when starting Elasticsearch without a Log4j 2 configuration file,
we end up throwing an array index out of bounds exception. This is
because we are passing no configuration files to Log4j. Instead, we
should present a useful error message to the user. This commit modifies
the Log4j configuration setup to throw a user exception if no Log4j
configuration files are present in the config directory.
Relates #20493
BATS upgrade tests fail on the master branch because they try to install 2.x versions to upgrade from instead of 5.x versions, and since #18554 we should only test upgrades from 5.0.0-alpha4 versions.
This commit changes the vagrant tests so that they try to list all the previous releases from version N-1. If nothing is found, they fetch the current version and run the upgrade tests with it. This works nicely with the current master 6.0.0-alpha1-SNAPSHOT. Once 5.0.0 is released, the tests should run against it.
When uninstalling or upgrading elasticsearch using the RPM package, some empty directories remain on the filesystem:
/usr/share/elasticsearch/bin
/usr/share/elasticsearch/lib
/usr/share/elasticsearch/modules
/usr/share/elasticsearch/modules/foo
Having empty directories in modules can prevent elasticsearch from starting after an upgrade: the plugins service expects to find a plugin-descriptor.properties file in every subdirectory of modules.
This PR cleans things up a bit so that these empty directories are removed on upgrade/removal, as they were in 2.x.
The Log4j shutdown hack test tests that a hack we have in place to
work around a bug in Log4j during shutdown is effective. Log4j can use
JMX to control logging levels, but we disable this through the use of a
system property log4j2.disable.jmx (mainly because there is no need for
this feature, but it also means granting additional security
permissions). The bug in Log4j is that during shutdown, it neglects to
check whether or not its usage of JMX is disabled and so it attempts to
unregister management beans, leading to a permissions violation. The
test works by attempting to shut down Log4j and thus triggering the bad
code path. With the Log4j hack in place, we have introduced jar hell so
that it's our code running instead of code from the Log4j jar. Our code
correctly checks that the usage of JMX is disabled and thus does not
trip on a permissions violation. The test was a little complicated in
that it attempted to grant just the minimal permissions needed for Log4j
to do its thing, but this can sometimes lead to other unwanted
permissions violations because the permissions put in place are more
restrictive than necessary. This commit simplifies this situation by
rewriting the test to only deny Log4j the sole permission needed to
trigger the bug.
Relates #20476
When upgrading elasticsearch using the RPM package, the scripts directory is removed if it is empty, but it is not recreated by the upgraded package. The service then fails to start because the scripts directory is missing.
Today when setting the logging level via the command-line or an API
call, the expectation is that the logging level should trickle down the
hierarchy to descendant loggers. However, this is not necessarily the
case. For example, if loggers x and x.y are already configured then
setting the logging level on x will not descend to x.y. This is because
the logging config for x.y has already been forked from the logging
config for x. Therefore, we must explicitly descend the hierarchy when
setting the logging level and that is what this commit does.
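A minimal sketch of the idea (a hypothetical helper, not the actual Elasticsearch code): when setting the level on logger x, also walk any already-forked descendant configs such as x.y.

```java
import java.util.Map;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.LoggerConfig;

public final class LogLevels {
    /**
     * Sets the level on the named logger and on every descendant logger
     * whose config has already been forked from it.
     */
    public static void setLevelWithDescendants(String name, Level level) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        for (Map.Entry<String, LoggerConfig> entry : context.getConfiguration().getLoggers().entrySet()) {
            // descend into already-configured loggers like "x.y" when name is "x"
            if (entry.getKey().equals(name) || entry.getKey().startsWith(name + ".")) {
                entry.getValue().setLevel(level);
            }
        }
        // propagate the config change to all live loggers
        context.updateLoggers();
    }
}
```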
Relates #20463
This commit introduces a new plugin for file-based unicast hosts
discovery. This allows specifying the unicast hosts participating
in discovery through a `unicast_hosts.txt` file located in the
`config/discovery-file` directory. The plugin will use the hosts
specified in this file as the set of hosts to ping during discovery.
The format of the `unicast_hosts.txt` file is to have one host/port
entry per line. The hosts file is read and parsed every time
discovery makes ping requests, thus a new version of the file that
is published to the config directory will automatically be picked
up.
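For example, a `unicast_hosts.txt` might contain the following (hosts and ports are illustrative; an entry without an explicit port presumably falls back to the default transport port):

```
10.10.10.5
10.10.10.6:9305
host.example.com:9301
```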
Closes #20323
Today we add a prefix when logging within Elasticsearch. This prefix
contains the node name, and index and shard-level components if
appropriate.
Due to some implementation details of Log4j 2, this does not work for
integration tests; instead what we see is the node name for the last
node to start up. The implementation detail here is that in Log4j 2 there is
only one logger for a name, message factory pair, and the key derived
from the message factory is the class name of the message factory. So,
when the last node starts up and starts setting prefixes on its message
factories, it will impact the loggers for the other nodes.
Additionally, the prefixes are lost when logging an exception. This is
due to another implementation detail in Log4j 2. Namely, since we log
exceptions using a parameterized message, Log4j 2 decides that this
means that we do not want to use the message factory that we have
provided (the prefix message factory) and so logs the exception without
the prefix.
This commit fixes both of these issues.
Relates #20429
This commit adds a -q/--quiet option to Elasticsearch so that it does not log anything to the console and closes the stdout & stderr streams. This is useful for SystemD to avoid duplicate logs in both journalctl and /var/log/elasticsearch/elasticsearch.log, while still allowing the JVM to print error messages to stdout/stderr if needed.
closes #17220
The plugin command now displays the version of the plugin, but it was
being compared to an expected string without the version. This removes
the version from the string being compared.
The evil logger tests rely on external configuration. This configuration
is shared between these tests which means that changing the
configuration for one test can cause an unrelated test to fail. In
particular, removing the appenders on the root logger so that inherited
loggers in one test do not have a console and file appender by default
breaks tests that were expecting the root logger to have these
appenders. This commit separates these configs so that these tests are
not subject to this problem.
Log4j has a bug where, on shutdown, it ignores that JMX might be
disabled and proceeds to attempt to access JMX anyway, leading to a
security exception that would not have occurred had it respected
that JMX is disabled. This commit intentionally introduces jar hell
with the Server class to work around this bug until a fix is released.
Relates #20389
Previously we would disable console logging in certain circumstances
(for example, if Elasticsearch is not in the foreground, or if
Elasticsearch is in the foreground but an exception was thrown during
bootstrap). This commit makes this handling work with Log4j 2. This will
prevent users from seeing double bootstrap check failure messages.
Relates #20387
Previous versions of Elasticsearch permitted unquoted JSON field names even though this is against the JSON spec. This leniency was disabled by default in the 5.x series of Elasticsearch but a backwards compatibility layer was added via a system property with the intention of removing this layer in 6.0.0. This commit removes this backwards compatibility layer.
Relates #20388
The 5.x series of Elasticsearch emits a warning if any of the old
logging configuration formats are present. This commit removes that
warning.
Relates #20386
By default, when an exception causes the JVM to terminate, the stack
trace is printed. In the case of failing bootstrap checks, this stack
trace is useless to the user, and might even distract them from seeing
that the bootstrap checks failed for reasons under their control. With
this commit, we cause the stack trace for a failing bootstrap check to
be truncated.
We also modify some methods to not declare that they throw the top-level
checked exception type Exception, but instead explicitly declare the
exceptions that they throw. These exceptions are caught and wrapped in a
BootstrapException so that we can percolate only two checked exception
types out of Bootstrap#init: BootstrapException and
NodeValidationException.
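A sketch of the pattern under these assumptions (class and method names here are illustrative, not the actual Bootstrap code):

```java
import java.io.IOException;

final class BootstrapSketch {
    // illustrative stand-in for the BootstrapException introduced by the commit
    static final class BootstrapException extends Exception {
        BootstrapException(Throwable cause) { super(cause); }
    }

    // a step that declares a specific checked exception instead of Exception
    static void setupLogging() throws IOException {
        // placeholder body
    }

    // callers now handle exactly one checked wrapper type instead of Exception
    static void init() throws BootstrapException {
        try {
            setupLogging();
        } catch (IOException e) {
            throw new BootstrapException(e);
        }
    }
}
```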
Relates #19989
The logging configuration tests write to log files which are deleted at
the end of the test. If these files are not closed, some operating
systems will complain when these deletes are performed. This commit
ensures that the logging system is properly shutdown so that these files
can be properly deleted.
The evil logging tests write to log files which are deleted at the end
of the test. If these files are not closed, some operating systems will
complain when these deletes are performed. This commit ensures that the
logging system is properly shutdown so that these files can be properly
deleted.
This commit expands on the message printed when config files are
preserved when removing a plugin to give the user an indication of the
reason the config files are preserved.
When removing a plugin with a config directory, we preserve the config
directory. This is because the workflow for upgrading a plugin involves
removing and then installing the plugin again and losing the plugin
config in this case would be terrible. This commit causes a message
regarding this to be printed in case the user wants to manually delete
these files.
* master:
Avoid NPE in LoggingListener
Randomly use Netty 3 plugin in some tests
Skip smoke test client on JDK 9
Revert "Don't allow XContentBuilder#writeValue(TimeValue)"
[docs] Remove coming in 2.0.0
Don't allow XContentBuilder#writeValue(TimeValue)
[doc] Remove leftover from CONSOLE conversion
Parameter improvements to Cluster Health API wait for shards (#20223)
Add 2.4.0 to packaging tests list
Docs: clarify scale is applied at origin+offest (#20242)
When Netty 4 was introduced, it was not the default network
implementation. Some tests were constructed to randomly use Netty 4
instead of the default network implementation. When Netty 4 was made the
default implementation, these tests were not updated. Thus, these tests
are randomly choosing between the default network implementation (Netty
4) and Netty 4. This commit updates these tests to reverse the role of
Netty 3 and Netty 4 so that the randomization is choosing between Netty
3 and the default (again, now Netty 4).
Relates #20265
This commit adds an assumption to SmokeTestClientIT tests on JDK 9. The
underlying issue is that Netty attempts to access sun.nio.ch but this
package is not exported from java.base on JDK 9. This throws an uncaught
InaccessibleObjectException causing the test to fail. This assumption
can be removed when Netty 4.1.6 is released as it will include a fix for
this scenario.
Relates #20260
This commit enables CLI tools to have console logging. For the CLI
tools, we skip configuring the logging infrastructure via the config
file, and instead set the level only via a system property.
This commit fixes failing evil logging configuration tests. The test for
resolving multiple configuration files was failing after
9a58fc2348 removed some of the
configuration needed for this test. The solution is to revert the removal
of that configuration, but remove additivity from the test logger to
prevent the evil logger tests from failing.
This commit defaults the max local storage nodes to one. The motivation
for this change is that a default value greater than one is dangerous,
as users sometimes end up unknowingly starting a second node and then
think that they have encountered data loss.
Relates #19964
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This commit enforces a limit on the number of
scripts that can be compiled within a minute via a new dynamic
setting, `script.max_compilations_per_minute`, which defaults to 15.
If more dynamic scripts are sent, a user will get the following
exception:
```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "i",
        "node" : "a5V1eXcZRYiIk8lecjZ4Jw",
        "reason" : {
          "type" : "general_script_exception",
          "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
          "caused_by" : {
            "type" : "circuit_breaking_exception",
            "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
            "bytes_wanted" : 0,
            "bytes_limit" : 0
          }
        }
      }
    ],
    "caused_by" : {
      "type" : "general_script_exception",
      "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    }
  },
  "status" : 500
}
```
This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.
Relates to #19396
This commit fixes a test bug in
EvilJNANativesTests#testSetMaximumNumberOfThreads. Namely, the test was
not checking whether or not the value from /proc/self/limits was equal
to "unlimited" before attempting to parse as a long. This commit fixes
that error.
Today when we load the Netty plugins, we indirectly cause several Netty
classes to initialize. This is because we attempt to load some classes
by name, and loading these classes is done in a way that triggers a long
chain of class initializers within Netty. We should not do this: it can
lead to log messages before the logger is loaded, and it leads to
initialization in cases where the classes would never be needed (for
example, Netty 3 class initialization is never needed if Netty 4 is
used, and vice versa). This commit avoids this early initialization of
these classes by removing the need for the early loading.
Relates #19819
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
Closes #19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
We throw IOException, which is the exception that is going to be thrown in 99% of the cases. A more generic exception can happen, and if it is a runtime one we just let it bubble up as is; otherwise we wrap it into a runtime one so that callers are not required to catch Exception everywhere, which seems odd.
Also adjusted javadocs for all performRequest methods.
The new method accepts the usual parameters (method, endpoint, params, entity and headers) plus a response listener and an async response consumer. Shortcut methods are also added that make params, entity and the async response consumer optional.
There are a few relevant api changes as a consequence of the move to the async client that affect sync methods:
- Response doesn't implement Closeable anymore, responses don't need to be closed
- performRequest throws Exception rather than just IOException, as that is the exception that we get from the FutureCallback#failed method in the async http client
- ssl configuration is a bit simpler: one only needs to call setSSLStrategy from a custom HttpClientConfigCallback, which doesn't end up overriding any other default around connection pooling (this used to happen with the sync client and made ssl configuration more complex)
Relates to #19055
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes #19412
This change removes the multiple ways that plugins can be added to the
integ test cluster. It also removes the use of the default
configuration, and instead adds a zip configuration to all plugins. This
will enable using project substitutions with plugins, which must be done
with the default configuration.
This removes the wait for yellow cluster health after index
creation in the REST tests, as we no longer need it due
to index creation now waiting for active shard copies
before returning (by default, it waits for the primary of
each shard, which is the same as ensuring yellow health).
Relates #19450
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.
Relates #19439
Currently custom headers that should be passed through rest requests are
registered by depending on the RestController in guice and calling a
registration method. This change moves that registration to a getter for
plugins, and makes the RestController take the set of headers on
construction.
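A sketch of the pull-based shape this implies (the class and method names here are illustrative, not the exact 5.x plugin hook):

```java
import java.util.Collection;
import java.util.Collections;

// sketch: a plugin declares its pass-through headers via a getter instead
// of registering them against an injected RestController
public class MyPlugin /* extends Plugin implements ActionPlugin */ {
    public Collection<String> getRestHeaders() {
        return Collections.singleton("X-My-Custom-Header");
    }
}
```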
Today `node.mode` and `node.local` serve almost the same purpose: they
are a shortcut for `discovery.type` and `transport.type`. If `node.local: true`
or `node.mode: local` is set, elasticsearch will start in _local_ mode, which means
only nodes within the same JVM are discovered and a non-network-based transport
is used. The _local_ mode is only really used in tests or if nodes are embedded.
For both embedding and tests, explicit configuration via `discovery.type` and `transport.type`
should be preferred.
This change removes all the usage of these settings and by default doesn't
configure a default transport implementation, since netty is now a module. Yet, to make
the user experience flawless, plugins or modules can set a `http.type.default` and
`transport.type.default`. Plugins set this via `PluginService#additionalSettings()`,
which enforces _set-once_ semantics and prevents node startup if set multiple times. This means
that our distributions will just start up with the netty transport, since it's packaged as a
module, unless `transport.type` or `http.transport.type` is explicitly set.
This change also uncovered a bunch of bugs, since several NamedWriteables were not registered if a
transport client was used. Now that we don't rely on the `node.mode` leniency, which is inherited,
and instead use explicit settings, `TransportClient` uses `AssertingLocalTransport`, which detects these problems since it serializes all messages.
Closes #16234
This moves all netty related code into modules/transport-netty. The module is built as a zip file as well as a JAR to serve as a dependency for the transport client. For the time being this is required, otherwise we have no network-based impl. for transport client users. This might be subject to change as we move forward with the http client.
Some tests still started http implicitly or missed configuring the transport clients correctly.
This commit fixes all remaining tests and adds a dependency on `transport-netty` from
`qa/smoke-test-http` and `modules/reindex` since they need an http server running on the nodes.
This also moves all required permissions for netty into its module and out of core.
The callback replaces the ability to fully replace the http client instance. By doing that, one used to lose any default that the RestClient had set for the underlying http client. Given that you'd usually override only one or two things, like a couple of timeout values, the ssl factory, or the default credentials provider, it is not user friendly if doing so means users end up replacing the whole http client instance and losing any default set by us.
This change adds a createComponents() method to Plugin implementations
which they can use to return already constructed components/services.
Eventually this should be just services ("components" don't really do
anything), but for now it allows any object so that preconstructed
instances by plugins can still be bound to guice. Over time we should
add basic services as arguments to this method, but for now I have left
it empty so as to not presume what is a necessary service.
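A sketch of what a plugin might look like under this model (signature simplified; the real method receives several services as arguments, and the class names here are illustrative):

```java
import java.util.Collection;
import java.util.Collections;

// sketch: the plugin constructs its service itself and hands the
// instance back, instead of binding a class in guice
public class MyPlugin /* extends Plugin */ {
    public Collection<Object> createComponents(/* services elided */) {
        MyService service = new MyService();
        // a preconstructed instance, which can still be bound to guice
        return Collections.singleton(service);
    }

    static final class MyService {
        // hypothetical component
    }
}
```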
this commit moves most of the http related integ tests out into their own
`qa/smoke-test-http` project, where most of the tests can run against the external cluster.
This commit migrates the Vagrant box for Fedora for the packaging tests
from Fedora 22 to Fedora 24, as Fedora 22 reached end-of-life upon the
release of Fedora 24.
Relates #19308
The top-level class Throwable represents all errors and exceptions in
Java. This hierarchy is divided into Error and Exception, the former
being serious problems that applications should not try to catch and the
latter representing exceptional conditions that an application might
want to catch and handle. This commit renames
org.elasticsearch.cli.UserError to org.elasticsearch.UserException to
make its name consistent with where it falls in this hierarchy.
Relates #19254
Node IDs are currently randomly generated during node startup. That means they change every time the node is restarted. While this doesn't matter for ES proper, it makes it hard for external services to track nodes. Another, more minor, side effect is that indexing the output of, say, the node stats API results in creating new fields due to node IDs being used as keys.
The first approach I considered was to use the node's published address as the base for the id. We already [treat nodes with the same address as the same](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java#L387), so this is a simple change (see [here](https://github.com/elastic/elasticsearch/compare/master...bleskes:node_persistent_id_based_on_address)). While this is simple and works for probably most cases, it is not perfect. For example, if after a node restart the node is not able to bind to the same port (because it's not yet freed by the OS), the node will still change identity. Also, in environments where the host IP can change due to a host restart, identity will not be the same.
Due to those limitations, I opted to go with a different approach where the node id is persisted in the node's data folder. This has the upside of connecting the id to the node's data. It also means that the host can be adapted in any way (replace network cards, attach storage to a new VM).
It does however also have downsides - we now run the risk of two nodes having the same id, if someone copies or clones a data folder from one node to another. To mitigate this I changed the semantics of the protection against multiple nodes with the same address to be stricter - it will now reject the incoming join if a node exists with the same id but a different address. Note that if the existing node doesn't respond to pings (i.e., it's not alive) it will be removed and the new node will be accepted when it tries another join.
Last, and most importantly, this change requires that *all* nodes persist data to disk. This is a change from current behavior where only data & master nodes store local files. This is the main reason for marking this PR as breaking.
Other less important notes:
- DummyTransportAddress is removed as we need a unique network address per node. Use `LocalTransportAddress.buildUnique()` instead.
- I renamed `node.add_lid_to_custom_path` to `node.add_lock_id_to_custom_path` to avoid confusion with the node ID which is now part of the `NodeEnvironment` logic.
- I removed the `version` parameter from `MetaDataStateFormat#write`; it wasn't really used and was just in the way :)
- TribeNodes are special in the sense that they do start multiple sub-nodes (previously known as client nodes). Those sub-nodes do not store local files but derive their ID from the parent node id, so they are generated consistently.
Today throughout the codebase, catch throwable is used with reckless
abandon. This is dangerous because the throwable could be a fatal
virtual machine error resulting from an internal error in the JVM, or an
out of memory error or a stack overflow error that leaves the virtual
machine in an unstable and unpredictable state. This commit removes
catch throwable from the codebase and removes the temptation to use it
by modifying listener APIs to receive instances of Exception instead of
the top-level Throwable.
Relates #19231
As discussed at https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595, we know that the current `discovery-azure` plugin only works with Azure Classic VMs / Services (which are now legacy).
The proposal here is to rename `discovery-azure` to `discovery-azure-classic`, in case some users are using it, and to deprecate it for 5.0.
Closes #19144.
As some plugins are becoming big now, it is hard for the user to know if the plugin
is being downloaded or nothing is happening.
This commit adds a progress bar during download, which can be disabled by using the `-q`
parameter.
In addition this updates to jimfs 1.1, which allows us to test the batch mode, as adding
security policies is now supported due to having jimfs:// protocol support in URL stream
handlers.
This commit adds randomization for the packaging upgrade test. In
particular, we extract a list of the released version of Elasticsearch
from Maven Central and randomize the selection of the version to upgrade
from. The randomization is repeatable, and supports the tests.seed
property. Specific versions can be tested by setting the property
tests.packaging.upgrade.from.versions.
Relates #19033
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull-based
model where script plugins have to implement a dedicated interface,
`ScriptPlugin`, and define simple getters returning instances rather than
classes.
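A sketch of the pull-based model (the types here are stand-ins; treat the method name as illustrative of the `ScriptPlugin` getter):

```java
// sketch: the plugin hands back a constructed instance rather than a class
public class MyScriptPlugin /* extends Plugin implements ScriptPlugin */ {

    interface ScriptEngineService { /* stand-in for the real interface */ }

    public ScriptEngineService getScriptEngineService(/* Settings settings */) {
        return new ScriptEngineService() { }; // an instance, not a class name
    }
}
```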
The database files have doubled in size compared to the previous files being used.
For this reason the database files are now gzip compressed, which required using
`GZIPInputStream` when loading database files.
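Reading a gzip-compressed database file is straightforward with the JDK; a minimal sketch (the file name is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPInputStream;

public final class DatabaseLoader {
    public static InputStream open(Path databaseFile) throws IOException {
        // wrap the raw stream so callers transparently read decompressed bytes
        return new GZIPInputStream(Files.newInputStream(databaseFile));
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = open(Paths.get("GeoLite2-City.mmdb.gz"))) {
            System.out.println("first byte: " + in.read());
        }
    }
}
```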
Previously Elasticsearch used $DATA_DIR/$CLUSTER_NAME/nodes for the path
where data is stored, this commit changes that to be $DATA_DIR/nodes.
On startup, if the old folder structure is detected it will be used.
This behavior will be removed in Elasticsearch 6.0
Resolves #17810
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests; these tests were already covered by the rest tests.
Added an ingest test plugin in the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved the reindex ingest tests to a qa module.
Closes #18490
We have 3 evil tests for jarhell. They have been failing in java 9
because of how evil they are. The first checks the leniency we add for
jarhell in the jdk itself. This is unnecessary, since if the leniency
wasn't there, we would already be failing all jarhell checks. The second
checks that the compile version is compatible with the jdk. This is
simpler since we don't need to fake the java version: we know 1.7 should
be compatible with both java 8 and 9, so we can use that as a constant.
Finally the last test checks if the java version system property is
broken. This is simply something we should not check; we have to trust
that java specifies it correctly, and again, if it were broken, all
jarhell checks would be broken.
Lucene SuppressForbidden is marked lucene.internal and should not be
used outside of Lucene. This commit removes the uses of this class
within Elasticsearch. Instead,
org.elasticsearch.common.SuppressForbidden should be used, which was
already the case in most places.
This commit fixes an issue with the plugins directory being a symbolic
link. Namely, the install plugins command attempts to always create the
plugins directory just in case it does not exist. The JDK method used
here guarantees that the directory is created, and an exception is not
thrown if the directory could not be created because it already
exists. The problem is that this JDK method does not respect symlinks, so
its internal existence check fails and it proceeds to attempt to create
the directory, but the directory creation fails because the symlink
exists. This is documented as not being an issue. We work around it by
checking if there is a symlink where we expect the plugins directory to
be, and only attempting to create the directory if not. We add a unit test
verifying that plugin installation to a symlinked plugins directory works as expected.
This commit removes the ability to specify a custom plugins
path. Instead, the plugins path will always be a subdirectory called
"plugins" off of the home directory.
- now you can specify a list of grok patterns to match your field with,
and the first one to successfully match wins (see the example below).
- only non-null captures will be inserted into your matched document.
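For example, a grok processor definition with multiple candidate patterns might look like this (field name and patterns illustrative):

```json
{
  "grok" : {
    "field" : "message",
    "patterns" : [
      "%{IP:client_ip} %{WORD:method}",
      "%{GREEDYDATA:raw}"
    ]
  }
}
```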
Fixes #17903.
This removes the ScriptMode class entirely, which was an enum with two
options (ON and OFF) which essentially boiled down to true and false.
Now the boolean values are used instead.
Today when parsing settings during bootstrap, we add a system property
for every Elasticsearch setting. Additionally, settings can be set via
system properties. This commit simplifies this situation.
- settings are no longer propagated to system properties
- system properties can not be used to set settings
- the "es." prefix on settings is no longer required (nor permitted)
- test logging has a dedicated system property (tests.logger.level)
Relates #18198
The plugin script parses command-line options looking for Java system
properties and extracts these arguments to pass to the java command when
starting the JVM. Since elasticsearch-plugin allows arbitrary user
arguments to the JVM via ES_JAVA_OPTS, this parsing is unnecessary. This
commit removes this unnecessary parsing.
Relates #18207
This commit modifies the packaging tests to account for the fact that
rpm behaves differently with respect to preserving directories marked as
"CONFIG | NOREPLACE" on older versions versus newer versions. Older
versions will leave the directory as-is while newer versions will append
the suffix ".rpmsave" to the directory name.
Relates #18216
This change makes the vagrant tasks extend LoggedExec, so that the
entire vagrant output can be dumped on failure (and completely logged
when using --info). It should help for debugging issues like #18122.
Exit with a proper exit code (1) and an error message if the elasticsearch
executable binary does not exist or has insufficient permissions to
execute.
Squashed commit of the following:
commit 9768d316303418ba4f9c96d3f87c376048a1b1bc
Author: Puru <tuladharpuru@gmail.com>
Date: Fri Apr 22 23:26:47 2016 +0545
Fixed ES_HOME typo
commit 79a2b0394297f8b02b6f71b71ba35ff79f1a684e
Author: Puru <tuladharpuru@gmail.com>
Date: Sun Apr 10 11:00:24 2016 +0545
Improve elasticsearch startup script test
Added improvement as per conversation in https://github.com/elastic/elasticsearch/pull/17082#issuecomment-206459613
commit 7be38e1fefd4baa6ccdbdc14745c00f6dc052e0c
Author: Puru <tuladharpuru@gmail.com>
Date: Wed Mar 23 13:23:52 2016 +0545
Add elasticsearch startup script test
The test ensures that elasticsearch startup script exists and is executable.
commit d10eed5c08260fa9c158a4487bbb3103a8d867ed
Author: Puru <tuladharpuru@gmail.com>
Date: Wed Mar 23 12:30:25 2016 +0545
Fixed IF syntax and failure message
commit 6dc66f616545572485b4d43bee05a4cbbf1bed72
Author: Puru <tuladharpuru@gmail.com>
Date: Sat Mar 12 11:08:11 2016 +0545
Fix exit code
Exit with a proper exit code (1) and an error message if the elasticsearch executable binary does not exist or has insufficient permissions to execute.
This changes our packaging to be explicit about the permissions of files
and directories in the tar.gz, rpm, and deb packages. This is to protect
against a user having an incorrectly set umask when installing.
Additionally, plugins that are installed now have their permissions set
by the plugin installation so that plugins that may have been packaged
with incorrect permissions are secured.
Resolves #17634
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
Closes #17835
In Elasticsearch 5.0.0, by default unquoted field names in JSON will be
rejected. This can cause issues, however, for documents that were
already indexed with unquoted field names. To alleviate this, a system
property has been added that can be enabled so migration can occur.
This system property will be removed in Elasticsearch 6.0.0
Resolves #17674
This commit adds a new configuration file jvm.options to centralize and
simplify management of JVM options. This separates the configuration of
the JVM from the packaging scripts (bin/elasticsearch*, bin/service.bat,
and init.d/elasticsearch) simplifying end-user operational management of
custom JVM options.
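For example, a jvm.options file carries one JVM argument per line, with `#` starting a comment (values illustrative):

```
# heap size
-Xms2g
-Xmx2g
# GC settings
-XX:+UseConcMarkSweepGC
```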
This change makes specifying which boxes to run vagrant tests on a
little easier. Previously there were two tasks, checkPackages and
checkPackagesAllDistros. With this change, there is a single
packagingTest task. The boxes to run on are specified using the
gradle property vagrant.boxes, which can be easily specified on the
command line, or in a gradle properties file. There are also two
alias names, 'sample' for a yum and apt box, and 'all' for all boxes.
This commit ensures that the data, logs, and config directories have the
proper ownership after the packages are installed. Additionally, this
commit ensures that the configs in /etc/elasticsearch are preserved
after removal of the RPM package.
Debian asks during installation if the configuration file should be updated.
This is asked via a prompt and thus hangs.
This adds an option to always update to the newer config file, so automated
installation keeps working.
Change version; this required a minor fix in the RPM building.
In case of an alpha/beta version, the release will contain alpha/beta,
as the RPM version cannot contain dashes/tildes.
This commit adds a bootstrap check on Linux and OS X for the max size of
virtual memory (address space) available to the user running the
Elasticsearch process.
Closes #16935
This commit sets up the default filesystem used during install plugins
tests. A hack is needed to handle the temporary directory because the
system property "java.io.tmpdir" will have been initialized to a value
that is sensible for the default filesystem, but not necessarily to a
value that makes sense for the mock filesystem in use during the
tests. This property is restored after each test.
This commit refactors the unit tests for installing plugins to test
against mock filesystems (as well as the native filesystem) for better
test coverage. This commit also adds tests that cover the POSIX
attributes handling when installing plugins (e.g., ensuring that the
plugins directory has the right permissions, the bin directory has
execute permissions, and the config directory has the same owner and
group as its parent).
Today, certain bootstrap properties are set and read via system
properties. This action-at-a-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to start up if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-a-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.
Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
This change adds the infrastructure to run the rest tests on a multi-node
cluster that uses 2 different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released, to ensure the infrastructure works.
Relates to #14406, closes #17072
When unzipping a plugin zip, the zip entries are resolved relative to
the directory being unzipped into. However, there are currently no
checks that the entry name was not absolute, or relatively points
outside of the plugin dir. This change adds a check for those two cases.
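A minimal sketch of such a check (the standard guard for this class of bug, not necessarily the exact code in the change):

```java
import java.io.IOException;
import java.nio.file.Path;

public final class ZipPaths {
    /**
     * Resolves a zip entry name against the target directory, rejecting
     * absolute entries and entries that escape the directory via "..".
     */
    public static Path resolveEntry(Path targetDir, String entryName) throws IOException {
        Path resolved = targetDir.resolve(entryName).normalize();
        if (!resolved.startsWith(targetDir.normalize())) {
            throw new IOException("zip entry [" + entryName + "] resolves outside of [" + targetDir + "]");
        }
        return resolved;
    }
}
```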
Indexing failures have caused the reindex http request to fail for a while
now. Both search and indexing failures cause it to abort. But search
failures didn't cause a non-200 response code from the http api. This
fixes that.
Also slips in a fix to some infrequently failing rest tests.
Closes #16037
Currently, the cluster service is tightly coupled to the transport service by both managing node connections and requiring the bound address in order to create the local disco node. This commit introduces a new NodeConnectionsService which is in charge of node connection management and makes it possible to remove all network related calls from the cluster service. The local DiscoNode is now created by DiscoveryNodeService and is set on both the cluster service and the transport service during node start up.
Closes #16788, closes #16872
This commit simplifies and consolidates the two different
implementations of terminals used in tests. There is now a single
MockTerminal which captures output, and allows accessing as one large
string (with unix style \n as newlines), as well as configuring
input.
The big win here is catching tests that are incorrectly named and will
be skipped by gradle, providing a false sense of security.
The whole thing takes about 10 seconds on my Macbook Air, not counting
compiling the test classes, which seems worth it. Because this runs as
a gradle task with proper UP-TO-DATE handling it can be skipped if the
tests haven't been changed, which should save some time.
I chose to keep this in test:framework rather than a new subproject of
buildSrc because it depends on ESIntegTestCase and doesn't introduce any
additional dependencies.
DiscoveryService was a bridge into the discovery universe. This is unneeded and we can just access discovery directly or do things in a different way.
One of those different ways is not having a dedicated discovery implementation for each of our discovery plugins but rather reusing ZenDiscovery.
UnicastHostProviders are now classified by discovery type, removing unneeded checks on plugins.
Closes #16821
Today we might start a node and some of the paths might not have the
required permissions. This commit goes through all data directories as
well as index, shard and state directories and ensures we have write access.
To make this work across all operating systems, we try to write a real file
and remove it again in each of those directories.
All we do is check the cancelled flag and stop the request at a few key
points.
Adds the cancellation cause to the status so any request that is cancelled
but doesn't die can be seen in the task list.
One of our tests leaked a system property here, since we failed after applying some
system properties in BootstrapCLIParser. This is not a huge deal in production since
we exit the JVM if we fail on that. Yet, for correctness, we should only apply them if
we manage to parse them all.
This also caused a test failure lately on CI but on an unrelated test:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/314/console
This processor is useful when all elements of a json array need to be processed in the same way.
This avoids having to define a processor for each element in an array.
It is also very likely that the number of elements inside a json array is unknown.
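For example, a foreach processor that uppercases every element of an array field might look like this (a sketch of the ingest syntax; the exact config keys in this initial version may differ):

```json
{
  "foreach" : {
    "field" : "values",
    "processor" : {
      "uppercase" : {
        "field" : "_ingest._value"
      }
    }
  }
}
```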
Renamed `ingest-with-mustache` to `smoke-test-ingest-with-all-dependencies`
Also renamed `ingest-disabled` to `smoke-test-ingest-disabled` so that the name is more in line with other qa smoke test modules.
Identifying when a plugin id is maven coordinates is currently done by
checking if the plugin id contains 2 colons. However, a valid url could
have 2 colons, for example when a port is specified. This change adds
another check, ensuring the plugin id with maven coordinates does not
contain a slash, which only a url would have.
closes #16376
This commit enables strict settings validation on node startup. All settings
passed to elasticsearch, whether through system properties, yaml files or any other
way to pass settings, must be registered and valid. Settings that are unknown, i.e. due to
typos or due to deprecation or removal, will cause the node to NOT start up. Plugins
have to declare all their settings via `SettingsModule#registerSetting`, and settings for
plugins that are not installed must be removed.
This commit also removes the ability to specify the node's name via `-Des.name` or just `name` in the
configuration files. The node name must be prefixed with the node prefix, as in `node.name: Boom`. Leftover
usage of `name` will also cause startup to fail.
Cli tools currently catch all exceptions and only print the exception
message, except when a special system property is set. Even with this
flag set, certain exceptions, like IOException, are captured and their
stack trace is always lost.
This change adds a UserError class, which can be used by cli tools to
specify a message to the user, as well as an exit status. All other
exceptions are propagated out of main, so java will exit with non-zero
and print the stack trace.
The plugin cli currently is extremely lenient, allowing most errors to
simply be logged. This can lead to either corrupt installations (eg
partially installed plugins), or confused users.
This change rewrites the plugin cli to have almost no leniency.
Unfortunately it was not possible to remove all leniency, due in
particular to how config files are handled.
The following functionality was simplified:
* The format of the name argument to install a plugin is now an official
plugin name, maven coordinates, or a URL.
* Checksum files are required, and only checked, for official plugins
and maven plugins. Checksums are also only SHA1.
* Downloading no longer uses a separate thread, and no longer has a timeout.
* Installation, and removal, attempts to be atomic. This only truly works
when no config or bin files exist.
* config and bin directories are verified before copying is attempted.
* Permissions and user/group are no longer set on config and bin files.
We rely on the users umask.
* config and bin directories must only contain files, no subdirectories.
* The code is reorganized so each command is a separate class. These
classes already existed, but were embedded in the plugin cli class, as
an extra layer between the cli code and the code running for each command.
The tribe node needs to pass its path.conf on to the inner tribe clients. If we don't do this, and some path.conf is set when starting the tribe node, that path.conf will be ignored and the inner tribe clients will try to read elsewhere, where they most likely don't have permissions to read from.
Closes #16253, closes #16258
This commit converts the script mode settings to the new settings
infrastructure. This is a major refactoring of the handling of script
mode settings. This refactoring is necessary because these settings are
determined at runtime based on the registered script engines and the
registered script contexts.
The rest test framework, because it used to be tightly integrated with
ESIntegTestCase, currently expects the addresses for the test cluster to
be passed using the transport protocol port. However, it only uses this
to then find the http address.
This change makes ESRestTestCase extend from ESTestCase instead of
ESIntegTestCase, and changes the sysprop used to tests.rest.cluster,
which now takes the http address.
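For example, pointing the tests at an externally started cluster might look like this (the gradle task name is illustrative; only the sysprop name comes from this change):

```
gradle integTest -Dtests.rest.cluster=localhost:9200
```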
closes #15459
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
This creates a reindex plugin with a very basic implementation that is
very like delete-by-query. Next we'll integrate it with the task management
work, but for now this works.
Rather than failing the request, when a node with node.ingest set to false receives an index or bulk request with a pipeline id, it should try to redirect the request to another node with node.ingest set to true. If there are no nodes with ingest set to true based on the current cluster state, an exception will be returned and the request will fail. Note that in case there are no ingest nodes and a bulk has a pipeline id specified only for a subset of its index requests, the whole bulk will fail.
Now that the ingest infra is part of es core we can remove some code that was required by the plugin and have a better integration with es core. We allow to specify the pipeline id in bulk and index as a request parameter, we have a REST filter that parses it and adds it to the relevant action request. That is not required anymore, as we can add this logic to RestIndexAction and RestBulkAction directly, no need for a filter. Also, we can allow to specify a pipeline id for each index requests in a bulk request. The small downside of this is that the ingest filter has to go over each item of a bulk request, all the time, to figure out whether they have a pipeline id.
CRUD and simulate apis work now fine, every node has the pipelines in memory, but node.ingest disables ingestion, meaning that any index or bulk request with a pipeline id is going to fail
We will keep this abstraction as it's convenient; otherwise IngestDocument would depend on ScriptService directly, and would explicitly rely on mustache, which is not even part of core. Better to have the interface in core, and the impl as part of the ingest plugin, which relies on mustache, shipped with core by default.
The append processor allows appending one or more values to an existing list, adding a new list with the provided values if the field doesn't exist yet, or converting an existing scalar into a list and adding the provided values to the newly created list.
This required adapting the behaviour of IngestDocument#appendFieldValue, which also gained support for templating.
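For example (field name and values illustrative):

```json
{
  "append" : {
    "field" : "tags",
    "value" : ["production", "web"]
  }
}
```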
Closes #14324
This change adds a Fixture class for use by gradle. A Fixture is an
external process that integration tests will use. It can be added as a
dependsOn for integTest, and will automatically be shutdown upon success
or failure, as well as relevant information dumped on failure. There is
also an example fixture in this change.
Added ingest wide template infrastructure to IngestDocument
Added a TemplateService interface that the ingest framework uses
Added a TemplateService implementation that the ingest plugin provides that delegates to the ES' script service
Cut SetProcessor over to use the template infrastructure for the `field` and `value` settings.
Removed the MetaDataProcessor
Removed dependency on mustache library
Added qa ingest mustache rest test so that the ingest and mustache integration can be tested.
This fixes the `lenient` parameter to be `missingClasses`. I will remove this boolean and we can handle them via the normal whitelist.
It also adds a check for sheisty classes (jar hell with the jdk).
This is inspired by the lucene "sheisty" classes check, but it has false positives. This check is more evil, it validates every class file against the extension classloader as a resource, to see if it exists there. If so: jar hell.
This jar hell is a problem for several reasons:
1. causes insanely-hard-to-debug problems (like bugs in forbidden-apis)
2. hides problems (like internal api access)
3. the code you think is executing, is not really executing
4. security permissions are not what you think they are
5. brings in unnecessary dependencies
6. its jar hell
The more difficult problems are stuff like jython, where these classes are simply 'uberjarred' directly in, so you can't just fix them by removing a bogus dependency. And there is a legit reason for them to do that: they want to support java 1.4.
This change removes hardcoded ports from cluster formation. It passes
port 0 for http and transport, and then uses a special property to have
the node log the ports used for http and transport (just for tests).
This does not yet work for multi node tests. This brings us one step
closer to working with --parallel.
This commit removes and now forbids all uses of
Collections#shuffle(List) and Random#<init>() across the codebase. The
rationale for removing and forbidding these methods is to increase test
reproducibility. As these methods use non-reproducible seeds, production
code and tests that rely on these methods contribute to the
non-reproducibility of tests.
Instead of Collections#shuffle(List) the method
Collections#shuffle(List, Random) can be used. All that is required then
is a reproducible source of randomness. Consequently, the utility class
Randomness has been added to assist in creating reproducible sources of
randomness.
Instead of Random#<init>(), Random#<init>(long) with a reproducible seed
or the aforementioned Randomness class can be used.
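A minimal sketch of the reproducible variants:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public final class ReproducibleShuffle {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        long seed = 42L; // a reproducible seed, e.g. derived from tests.seed
        Collections.shuffle(list, new Random(seed)); // instead of Collections.shuffle(list)
        Random random = new Random(seed);            // instead of new Random()
        System.out.println(list + " " + random.nextInt(10));
    }
}
```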
Closes #15287
The NodeBuilder is currently used to construct a Node. However, this is
really just yet-another-builder that wraps around a Settings.Builder
with a couple of convenience methods. But there are very few uses of these
convenience methods. This change removes NodeBuilder, in favor of just
using the Node constructor.
The tribe node creates one local client node for each cluster it
connects to. Refactorings in #13383 broke this so that each local client
node now tries to load the full elasticsearch.yml that the real tribe
node uses.
This change fixes the problem by adding a TribeClientNode which is a
subclass of Node. The Environment the node uses is now passed in (in
place of Settings), and the TribeClientNode simply does not use
InternalSettingsPreparer.prepareEnvironment.
The tests around tribe nodes are not great. The existing tests pass, but
I also manually tested by creating 2 local clusters, and configuring and
starting a tribe node. With this I was able to see in the logs the tribe
node connecting to each cluster.
closes #13383
We currently use the full suite of packaged rest tests for each
distribution. We also used to run rest tests within core integ tests,
but this stopped working when we split out the test-framework, since the
test files are in there.
This change simplifies the code to run packaged rest tests just once,
for the integ-test-zip, and removes the unused rest tests from
test-framework. Distributions rest tests now check that all modules
were loaded.
The current mechanism for adding plugins to the integTest cluster is to
have a FileCollection. This works well for the integTests for a single
plugin, which automatically adds itself to be installed. However, for qa
tests where many plugins may be installed, and from other projects, it
is cumbersome to add configurations, dependencies and dependsOn
statements over and over. This simplifies installing a plugin from
another project by moving this common setup into the cluster
configuration code.
This change adds back the multi node smoke test, as well as making the
cluster formation for any test allow multiple nodes. The main changes in
cluster formation are abstracting out the node specific configuration to
a helper struct, as well as making a single wait task that waits for all
nodes after their start tasks have run. The output on failure was also
improved to log which node's info is being printed.
Currently we use the "gradle project attachment plugin" to support
building elasticsearch as part of another project. However, this plugin
has a number of issues, a large part of which is requiring consistent
use of the projectsPrefix.
This change removes projectsPrefix, and adds support for a special
extra-plugins directory in the root of elasticsearch. Any projects
checked out within this directory will be automatically added to
elasticsearch.
* Forbid System.setProperties & co in forbidden APIs.
* Ban property write access at runtime with security manager.
Plugins that need to modify system properties will need to request permission in their plugin-security.policy
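A hedged sketch of what a plugin would now have to do (the property name and policy grant below are illustrative, not taken from any real plugin):
```java
import java.security.AccessController;
import java.security.PrivilegedAction;

final class WritesSystemProperty {
    static void enableFlag() {
        // Requires a matching grant in the plugin's plugin-security.policy, e.g.:
        //   permission java.util.PropertyPermission "es.example.flag", "write";
        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
            System.setProperty("es.example.flag", "true");
            return null;
        });
    }
}
```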
Closes #14726
Squashed commit of the following:
commit 5b591e98570e3fa481b2816a44063b98bff36ddf
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:54:08 2015 -0500
add assumption for self-signing in PluginManagerTests
commit ed11e5371b6f71591dc41c6f60d033502cfcf029
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:20:59 2015 -0500
show error output from integ test startup
commit d8b187a10e95d89a0e775333dcbe1aaa903fb376
Author: Robert Muir <rmuir@apache.org>
Date: Thu Nov 12 22:14:11 2015 -0500
fix gradle check under jigsaw
Just suck in the system policy, so it's compatible with any version of Java.
It means it also respects configuration (e.g. for monitoring agents)
Closes #14704
Many other improvements:
* Use spaces in ES path
* Use space in path for plugin file installation
* Use a different cwd than ES home
* Use jps to ensure the process being stopped is actually elasticsearch
* Stop ES if pid file already exists
* Delete pid file when successfully killed
Also, refactored the cluster formation code to be a little more organized.
Closes #14464
This gets the tar and tar_plugins tests working in gradle. It does so by
adding a subproject, qa/vagrant, which adds the following tasks:
Verification
------------
checkPackages - Check the packages against a representative sample of the
linux distributions we have in our Vagrantfile
checkPackagesAllDistros - Check the packages against all the linux
distributions we have in our Vagrantfile
Package Verification
--------------------
checkCentos6 - Run packaging tests against centos-6
checkCentos7 - Run packaging tests against centos-7
checkDebian8 - Run packaging tests against debian-8
checkFedora22 - Run packaging tests against fedora-22
checkOel7 - Run packaging tests against oel-7
checkOpensuse13 - Run packaging tests against opensuse-13
checkSles12 - Run packaging tests against sles-12
checkUbuntu1204 - Run packaging tests against ubuntu-1204
checkUbuntu1404 - Run packaging tests against ubuntu-1404
checkUbuntu1504 - Run packaging tests against ubuntu-1504
Vagrant
-------
smokeTestCentos6 - Smoke test the centos-6 VM
smokeTestCentos7 - Smoke test the centos-7 VM
smokeTestDebian8 - Smoke test the debian-8 VM
smokeTestFedora22 - Smoke test the fedora-22 VM
smokeTestOel7 - Smoke test the oel-7 VM
smokeTestOpensuse13 - Smoke test the opensuse-13 VM
smokeTestSles12 - Smoke test the sles-12 VM
smokeTestUbuntu1204 - Smoke test the ubuntu-1204 VM
smokeTestUbuntu1404 - Smoke test the ubuntu-1404 VM
smokeTestUbuntu1504 - Smoke test the ubuntu-1504 VM
vagrantHaltCentos6 - Shutdown the vagrant VM running centos-6
vagrantHaltCentos7 - Shutdown the vagrant VM running centos-7
vagrantHaltDebian8 - Shutdown the vagrant VM running debian-8
vagrantHaltFedora22 - Shutdown the vagrant VM running fedora-22
vagrantHaltOel7 - Shutdown the vagrant VM running oel-7
vagrantHaltOpensuse13 - Shutdown the vagrant VM running opensuse-13
vagrantHaltSles12 - Shutdown the vagrant VM running sles-12
vagrantHaltUbuntu1204 - Shutdown the vagrant VM running ubuntu-1204
vagrantHaltUbuntu1404 - Shutdown the vagrant VM running ubuntu-1404
vagrantHaltUbuntu1504 - Shutdown the vagrant VM running ubuntu-1504
vagrantSmokeTest - Smoke test some representative distros from the Vagrantfile
vagrantSmokeTestAllDistros - Smoke test all distros from the Vagrantfile
vagrantUpCentos6 - Startup a vagrant VM running centos-6
vagrantUpCentos7 - Startup a vagrant VM running centos-7
vagrantUpDebian8 - Startup a vagrant VM running debian-8
vagrantUpFedora22 - Startup a vagrant VM running fedora-22
vagrantUpOel7 - Startup a vagrant VM running oel-7
vagrantUpOpensuse13 - Startup a vagrant VM running opensuse-13
vagrantUpSles12 - Startup a vagrant VM running sles-12
vagrantUpUbuntu1204 - Startup a vagrant VM running ubuntu-1204
vagrantUpUbuntu1404 - Startup a vagrant VM running ubuntu-1404
vagrantUpUbuntu1504 - Startup a vagrant VM running ubuntu-1504
It does not make the "check" task depend on "checkPackages" so running the
vagrant tests is still optional. They are slow and depend on vagrant and
virtualbox.
The Package Verification tasks are useful for testing individual distros.
The Vagrant tasks are listed in `gradle tasks` primarily for discoverability.
This change removes the leftover pom files. A couple of files were left for
reference, namely in qa tests that have not yet been migrated (vagrant
and multinode). The deb and rpm assemblies also still exist for
reference when finishing their setup in gradle.
See #13930
There are three ways `@Test` was used. Way one:
```java
@Test
public void flubTheBlort() {
```
This way was always replaced with:
```java
public void testFlubTheBlort() {
```
Or, maybe with a better method name if I was feeling generous.
Way two:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
methodThatThrows();
}
```
This way of using `@Test` is actually pretty OK, but to get the tools to ban
`@Test` entirely it can't be used. Instead:
```java
public void testFoo() {
try {
methodThatThrows();
fail("Expected IllegalArgumentException");
} catch (IllegalArgumentException e) {
assertThat(e.getMessage(), containsString("something"));
}
}
```
This is longer but tests more than the old ways and is much more precise.
Compare:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
some();
copy();
and();
pasted();
methodThatThrows();
code(); // <---- This was left here by mistake and is never called
}
```
to:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
some();
copy();
and();
pasted();
try {
methodThatThrows();
fail("Expected IllegalArgumentException");
} catch (IllegalArgumentException e) {
assertThat(e.getMessage(), containsString("something"));
}
}
```
The final use of `@Test` is:
```java
@Test(timeout=1000)
public void testFoo() {
methodThatWasSlow();
}
```
This is the most insidious use of `@Test` because it's tempting but tragically
flawed. Its flaws are:
1. Hard and fast timeouts can look like they are asserting that something is
fast, and they even do an OK job of it when you compare timings on the same
machine, but as soon as you take them to another machine they become invalid.
On a slow VM both the new and old methods fail. On a super-fast machine both
the slower and faster ways succeed.
2. Tests often contain slow `assert` calls, so the performance of tests isn't
a reliable predictor of the performance of non-test code.
3. These timeouts are rude to debuggers because the test just drops out from
under the debugger after the timeout.
Confusingly, timeouts are useful in tests because it'd be rude for a broken
test to cause CI to abort the whole build after it hits a global timeout. But
those timeouts should be very very long "backstop" timeouts and aren't useful
assertions about speed.
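For illustration, a suite-level backstop might look like this sketch (using randomizedtesting's `@TimeoutSuite`; the class and method names are made up):
```java
import com.carrotsearch.randomizedtesting.RandomizedTest;
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;

// A 20 minute suite-level timeout: it only fires if the suite hangs and makes
// no claim about how fast anything is.
@TimeoutSuite(millis = 20 * 60 * 1000)
public class BackstopExampleTests extends RandomizedTest {
    public void testSomethingPotentiallySlow() {
        // real assertions go here; no per-test speed assertion
    }
}
```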
For all its flaws `@Test(timeout=1000)` doesn't have a good replacement __in__
__tests__. Nightly benchmarks like http://benchmarks.elasticsearch.org/ are
useful here because they run on the same machine but they aren't quick to check
and it takes lots of time to figure out the regressions. Sometimes it's useful
to compare dueling implementations but that requires keeping both
implementations around. All in all we don't have a satisfactory answer to the
question "what do you replace `@Test(timeout=1000)` with?". So we handle each
occurrence on a case by case basis.
For files with `@Test` this also:
1. Removes excess blank lines. They don't help anything.
2. Removes underscores from method names. Those would fail any code style
checks we ever care to run and don't add to readability. Since I did this manually
I didn't do it consistently.
3. Makes sure all test method names start with `test`. Some used to end in `Test` or start
with `verify` or `check` and they were picked up using the annotation. Without the
annotation they always need to start with `test`.
4. Organizes imports using the rules we generate for Eclipse. For the most part
this just removes `*` imports which is a win all on its own. It was "required"
to quickly remove `@Test`.
5. Removes unneeded casts. This is just a setting I have enabled in Eclipse and
forgot to turn off before I did this work. It probably isn't hurting anything.
6. Removes trailing whitespace. Again, another Eclipse setting I forgot to turn
off that doesn't hurt anything. Hopefully.
7. Swaps some tests that override superclass tests to make them empty with
`assumeTrue` so that the reasoning for the skips is logged in the test run and
it doesn't "look like" that thing is being tested when it isn't.
8. Adds an oxford comma to an error message.
The total test count doesn't change. I know. I counted.
```bash
git checkout master && mvn clean && mvn install | tee with_test
git checkout no_test_annotation && mvn clean && mvn install | tee not_test
grep 'Tests summary' with_test > with_test_summary
grep 'Tests summary' not_test > not_test_summary
diff with_test_summary not_test_summary
```
These differ somewhat because some tests are skipped based on the random seed.
The total shouldn't differ. But it does!
```
1c1
< [INFO] Tests summary: 564 suites (1 ignored), 3171 tests, 31 ignored (31 assumptions)
---
> [INFO] Tests summary: 564 suites (1 ignored), 3167 tests, 17 ignored (17 assumptions)
```
These are the core unit tests. So we dig further:
```bash
cat with_test | perl -pe 's/\n// if /^Suite/;s/.*\n// if /IGNOR/;s/.*\n// if /Assumption #/;s/.*\n// if /HEARTBEAT/;s/Completed .+?,//' | grep Suite > with_test_suites
cat not_test | perl -pe 's/\n// if /^Suite/;s/.*\n// if /IGNOR/;s/.*\n// if /Assumption #/;s/.*\n// if /HEARTBEAT/;s/Completed .+?,//' | grep Suite > not_test_suites
diff <(sort with_test_suites) <(sort not_test_suites)
```
The four tests with lower test counts all extend `AbstractQueryTestCase`
and all have a method that looks like this:
```java
@Override
public void testToQuery() throws IOException {
assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0);
super.testToQuery();
}
```
It looks like this method was being double counted on master and isn't anymore.
Closes #14028
This commit makes sure that the plugin script looks at the user, group and permissions of the elasticsearch bin dir and copies them over to the plugin bin subdirectory, whatever they are, so that they are properly set up depending on how elasticsearch was installed. We also make sure that execute permissions are added for files (we already did this before).
Relates to #11016. Closes #14088
Depending on how elasticsearch is installed, we have two scenarios to take into account that relate to user, group and permissions assigned to the config directory:
1) deb/rpm package: /etc/elasticsearch is root:elasticsearch 750 and the plugin script is run from root user
2) tar/zip archive: es config dir is most likely elasticsearch:elasticsearch and the plugin script is most likely run from elasticsearch user
When the plugin script copies over the plugin config dir within the es config dir, it should take care of setting the proper user, group and permissions, which vary depending on how elasticsearch was installed in the first place. Should be root:elasticsearch 750 if installed from a package, or elasticsearch:elasticsearch if installed from an archive.
This commit makes sure that the plugin script looks at the user, group and permissions of the config dir and copies them over to the plugin config subdirectory, whatever they are, so that they are properly set up depending on how elasticsearch was installed in the first place. We also make sure that execute permissions are left untouched for files.
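A minimal java.nio.file sketch of the idea (helper and variable names are illustrative, not the actual plugin script code):
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFileAttributes;

final class CopyConfigAttributes {
    // Mirror owner, group, and permissions of the es config dir onto the
    // plugin's config subdirectory (e.g. root:elasticsearch 750 for deb/rpm).
    static void copyAttributes(Path esConfigDir, Path pluginConfigDir) throws IOException {
        PosixFileAttributes attrs = Files.readAttributes(esConfigDir, PosixFileAttributes.class);
        PosixFileAttributeView view =
                Files.getFileAttributeView(pluginConfigDir, PosixFileAttributeView.class);
        view.setOwner(attrs.owner());
        view.setGroup(attrs.group());
        view.setPermissions(attrs.permissions());
    }
}
```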
Relates to #11016. Closes #14048
When generating the rpm and deb packages we now set the proper group (elasticsearch) and permissions (750) on the conf dir (default /etc/elasticsearch). Same for the scripts subdirectory.
Expanded the assert_file bash function to also optionally check the group of files, so we can actually test that the group was set correctly.
Relates to #11016. Closes #14017
It is rarely used and was not consistently handled by different distributions anyway.
This commit also adds a test for specifying CONF_DIR when installing plugins and
starting elasticsearch.
Relates to #12712 and #12954. Closes #5329. Closes #13715
Package installation already creates the plugin directory, so when a plugin
is installed it prints the additional line
Plugins directory [/tmp/elasticsearch/plugins] does not exist. Creating...
The plugin cli tool configures logging with whatever is in logging.yml.
If a file appender is configured for any of the logs, this will cause the creation
of an empty log file. If a plugin was, for example, installed as root, it will
create empty logs at es.home/logs.
This is problematic when, for example, plugins are installed as root and es is run
as a service. Logs will then be created in /usr/share/elasticsearch/logs
and cannot later be removed by, for example, dpkg -r or --purge.
To avoid this, configure the logger to use an appender that writes to the same
output that the plugin cli tool does. This allows other components that are
called from the plugin cli tool to write to the same terminal
by using the logging mechanism already in place.
The logging conf is not read at all by the plugin cli tool.
As a side effect, the logging level for components that are called
from the plugin command such as the jar hell check can now be configured
with -Des.logger.level which makes it easier to debug the jar hell check.
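A minimal Log4j 1.x sketch of the idea (not the actual appender this commit adds; the class and method names are illustrative):
```java
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

final class CliLogging {
    static void configureForCli(String levelName) {
        Logger root = Logger.getRootLogger();
        root.removeAllAppenders(); // drop any file appenders from logging.yml
        root.addAppender(new ConsoleAppender(new PatternLayout("%m%n"),
                ConsoleAppender.SYSTEM_OUT)); // write to the CLI's own terminal
        root.setLevel(Level.toLevel(levelName)); // e.g. from -Des.logger.level
    }
}
```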
Installs jayatana in vivid, emulates its on-login actions when starting
elasticsearch and verifies that elasticsearch turns off jayatana.
Relates to #13813
Before this commit the tests always ran bin/plugin as root, which is somewhat
unrealistic and causes trouble (log files owned by root instead of
elasticsearch). After this commit `bin/plugin` runs as root when elasticsearch
is installed via the repository and as elasticsearch otherwise, which is much
more realistic.
This also adds an extra timeout to starting elasticsearch, which is required
when all the plugins are installed. And it fixes up a problem with logging
elasticsearch's log if elasticsearch doesn't start, which came up multiple
times while debugging this problem.
Also adds docs recommending running `bin/plugin` as the user that owns the
Elasticsearch files or root if installed with the packages.
Closes #13557
This adds SuSe Linux Enterprise Server 12 to the list of tested VMs.
SLES 12 uses systemd, so the current RPM works
out of the box.
SLES 12, however, is already quite old and does not ship with Java 8, so this
required adding an openSUSE repo.
Fix the vagrant tests after azure was split into 3 plugins. The tests
need to list all the plugins and some dependencies so we can make sure each
plugin can be installed and uninstalled.
Until now we had a cloud-azure plugin which provided 3 distinct features:
* discovery on Azure
* snapshot/restore on Azure
* SMB store
This commit splits the plugin by feature so people can use either one or the other or both features.
Doc is updated accordingly.
Right now we execute some debian-isms in the init.d tests. This switches to
trying both the debian and centos ways to stop services from starting
automatically.
The AWS plugin was broken into discovery-ec2 and repository-s3 so we can't
test the old plugin and must test the new ones.
Fixed some wording issues in test names.
This changes the packaging tests to start Elasticsearch with all plugins
installed and checks `_cat/plugins?h=c` against the list of plugins in
the plugins directory. If the list differs, error! So it proves that the
plugins can be installed using bin/plugin as shipped in the rpm and deb
packages.
Closes #13254
There are two other obvious ways to implement the "packages don't start
elasticsearch" checks but when you work through them they aren't as nice
as the implementation of the checks that we use now. This just adds
documentation to that effect.
We don't want either the deb or rpm package to start elasticsearch as soon
as it is installed, nor do we want the package to register elasticsearch to
start on restart. That action is reserved for the administrator. This adds
tests for that.
Closes #13122
To do this we:
1. All the rpm based distros we test support Java 8. We just ask to install
it.
2. There is a ppa that works for the Ubuntus. We just add that for them.
3. Debian Jessie has Java 8 in its backports. We just add that repository.
4. Debian Wheezy doesn't have Java 8 easily accessible so we drop it. We
could add it back with Oracle Java 8 at a later date but that will take a
few more backflips and won't support things like vagrant-cachier.
This required a ton of rebuilding of vagrant boxes so it also fixes:
1. apt-get update is run too frequently
2. Lots of weird warning messages are spit out of apt-get
3. Switch from the chef-provided images to those provided by boxcutter. The
chef images have left vagrant atlas!
Closes #13366
Until now we had a cloud-aws plugin which provided 2 distinct features:
* discovery on EC2
* snapshot/restore on S3
This commit splits the plugin by feature so people can use either one or the other or both features.
Doc is updated accordingly.
As we log a lot, we hit a default limit:
```
The test or suite printed 9450 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
```
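If a suite is intentionally chatty, one way to opt out is Lucene's suite-level annotation, sketched below (the bugUrl text and class names are illustrative):
```java
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;

// Disable the sysout-limit check for a suite that legitimately logs a lot.
@SuppressSysoutChecks(bugUrl = "this suite intentionally logs verbose output")
public class ChattyExampleTests extends LuceneTestCase {
    public void testSomethingChatty() {
        // output-heavy test body
    }
}
```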
(cherry picked from commit 0cb325d)
This commit adds a new smoke test for testing the client as an end Java user.
It starts a cluster in the `pre-integration-test` phase, then executes the client operations defined as JUnit tests within the `integration-test` phase, and then stops the external cluster in the `post-integration-test` phase.
You can also run test classes from your IDE.
* Start an external node on your machine with `bin/elasticsearch` (note that you can test Java API regressions if you run an older or newer node version)
* Run the JUnit test. By default, it will run tests on `localhost:9300` but you can change this setting using the system property `tests.cluster`. It also expects the default `cluster.name` (`elasticsearch`); see the sketch below.
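A hedged sketch of resolving that target with the 2.x-era transport client (the class name is made up):
```java
import java.net.InetSocketAddress;

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

final class ClusterTarget {
    static Client connect() {
        // -Dtests.cluster overrides the default localhost:9300 target
        String address = System.getProperty("tests.cluster", "localhost:9300");
        int colon = address.lastIndexOf(':');
        String host = address.substring(0, colon);
        int port = Integer.parseInt(address.substring(colon + 1));
        return TransportClient.builder().build()
                .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress(host, port)));
    }
}
```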
This commit also starts adding [snippets as defined by Maven](https://maven.apache.org/guides/mini/guide-snippet-macro.html) to help keep the Java reference guide automatically synchronized with the current code.
Our documentation builder tool does not support snippets yet, but we will most likely support them at some point.
The shaded version of elasticsearch was built at the very beginning to avoid dependency conflicts in a specific case where:
* People use elasticsearch from Java
* People need to embed the elasticsearch jar within their own application (as it's today the only way to get a `TransportClient`)
* People also embed in their application another (most of the time older) version of a dependency we use in elasticsearch, such as Guava, Joda, Jackson...
This conflict issue can be solved within the projects themselves by either upgrading the dependency to the version provided by elasticsearch or by shading the elasticsearch project and relocating the conflicting packages.
Example
-------
As an example, let's say you want to use within your project `Joda 2.1` but elasticsearch `2.0.0-beta1` provides `Joda 2.8`.
Let's say you also want to run all that with the shield plugin.
Create a new maven project or module with:
```xml
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<elasticsearch.version>2.0.0-beta1</elasticsearch.version>
</properties>
<dependencies>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>shield</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
</dependencies>
```
And now shade and relocate all packages which conflict with your own application:
```xml
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<relocations>
<relocation>
<pattern>org.joda</pattern>
<shadedPattern>fr.pilato.thirdparty.joda</shadedPattern>
</relocation>
</relocations>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
You can now create a shaded version of elasticsearch + shield by running `mvn clean install`.
In your project, you can now depend on:
```xml
<dependency>
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.1</version>
</dependency>
```
Then build your TransportClient as usual:
```java
TransportClient client = TransportClient.builder()
.settings(Settings.builder()
.put("path.home", ".")
.put("shield.user", "username:password")
.put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
)
.build();
client.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("localhost", 9300)));
// Index some data
client.prepareIndex("test", "doc", "1").setSource("foo", "bar").setRefresh(true).get();
SearchResponse searchResponse = client.prepareSearch("test").get();
```
If you want to use your own version of Joda, then import for example `org.joda.time.DateTime`. If you want to access the shaded version (not recommended though), import `fr.pilato.thirdparty.joda.time.DateTime`.
You can run a simple test to make sure that both classes can live together within the same JVM:
```java
CodeSource codeSource = new org.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("unshaded = " + codeSource);
codeSource = new fr.pilato.thirdparty.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("shaded = " + codeSource);
```
It will print:
```
unshaded = (file:/path/to/joda-time-2.1.jar <no signer certificates>)
shaded = (file:/path/to/es-shaded-1.0-SNAPSHOT.jar <no signer certificates>)
```
This PR also removes the fully-loaded module.
By the way, the project can now build with Maven 3.3.3, so we can relax our maven policy a bit.
This cleans up deb, rpm, systemd, and sysvinit tests:
1. Move skip_not_rpm, skip_not_dpkg, etc to the setup() methods for faster
runtime and cleaner code.
2. Removed lots of needless invocations of `run`
3. Created install_package for use in the systemd and sysvinit tests.
4. Removed lots of needless stderr to stdout redirects.
Closes #13075
Related to #13074
Virtualbox is the default virtualization provider for vagrant but folks
override that from time to time. If they do then the build will fail because
the boxes used by the build don't usually support non-virtualbox providers.
Closes #13217
commons-lang really is only used by some core classes to join strings or modify arrays.
It's not worth carrying the dependency. This commit removes the dependency on commons-lang
entirely.
Now that we are using short names for artifactId (see #12879), we no longer need to transform long names `elasticsearch-pluginname` into short names `pluginname` in the ant script when we install a plugin.
Also modify convert-plugin-name
Clean up remaining plugins with old format
And fix vagrant tests
In #12853 we actually introduced a test regression. Now as we wait for yellow instead of green, we might have some pending tasks.
This commit simplifies all that and only checks the number of nodes within the cluster.
In plugins, we are using inconsistent naming. We use `elasticsearch-cloud-aws` as the artifactId, which generates a jar file called `elasticsearch-cloud-aws-VERSION.jar`.
But when you want to install the plugin, you will end up with a shorter name for the plugin `cloud-aws`.
```
bin/plugin install cloud-aws
```
This commit changes that and uses consistent names for `artifactId`, and thus `finalName`.
Also changed maven names.
Indeed, we check within the test suite that we have no unassigned shards.
But when the test starts on my machine I get:
```
[elasticsearch] [2015-08-13 12:03:18,801][INFO ][org.elasticsearch.cluster.routing.allocation.decider] [Kehl of Tauran] low disk watermark [85%] exceeded on [eLujVjWAQ8OHdhscmaf0AQ][Jackhammer] free: 59.8gb[12.8%], replicas will not be assigned to this node
```
```
2> REPRODUCE WITH: mvn verify -Pdev -Dskip.unit.tests -Dtests.seed=2AE3A3B7B13CE3D6 -Dtests.class=org.elasticsearch.smoketest.SmokeTestMultiIT -Dtests.method="test {yaml=smoke_test_multinode/10_basic/cluster health basic test, one index}" -Des.logger.level=ERROR -Dtests.assertion.disabled=false -Dtests.security.manager=true -Dtests.heap.size=512m -Dtests.locale=ar_YE -Dtests.timezone=Asia/Hong_Kong -Dtests.rest.suite=smoke_test_multinode
FAILURE 38.5s | SmokeTestMultiIT.test {yaml=smoke_test_multinode/10_basic/cluster health basic test, one index} <<<
> Throwable #1: java.lang.AssertionError: expected [2xx] status code but api [cluster.health] returned [408 Request Timeout] [{"cluster_name":"prepare_release","status":"yellow","timed_out":true,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":3,"active_shards":3,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":3,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}]
```
We no longer check for unassigned shards, and we wait for `yellow` status instead of `green`.
Closes #12852.
This moves the `murmur3` field to the `mapper-murmur3` plugin and fixes its
defaults so that values will not be indexed by default, as the only purpose
of this field is to speed up `cardinality` aggregations on high-cardinality
string fields, which only requires doc values.
I also removed the `rehash` option from the `cardinality` aggregation as it
doesn't bring much value (rehashing is cheap) and allowed removing the
coupling between the `cardinality` aggregation and the `murmur3` field.
Close #12874
1. Move `clean_before_test` to the first test so it's more explicit.
2. Move `skip_not_tar_gz` to setup because it was run first in every test.
3. Remove calls to `run` that only check the status. It's simpler to just
execute the command. It's better because std-out will be captured and replayed
on error.
4. Switch from `su` to `sudo` because `su` was breaking `bats`'s error
reporting.
In the bats test ES_CLEAN_BEFORE_TEST was used to clean the environment
before running the tests. Unfortunately the tests don't work unless you
specify it every time. This removes that option and always runs the clean.
Build fails with maven 3.3.1 and 3.3.3. To reproduce, install one of the 3.3.x versions of maven and run `mvn clean verify` in the root directory of the project. The build will fail in the QA: Smoke Test Shaded Jar module with the following error:
```
Started J0 PID(99979@flea.local).
Suite: org.elasticsearch.shaded.test.ShadedIT
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testJodaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.06s | ShadedIT.testJodaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:3A9404F1F69FD80]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testGuavaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testGuavaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:C2502FD54D83433D]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testjsr166eIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testjsr166eIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:35593286F4269392]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /Users/Shared/Jenkins/Home/workspace/elasticsearch-master/qa/smoke-test-shaded/target/J0/temp/org.elasticsearch.shaded.test.ShadedIT_2F4D23A7462CF921-001
2> NOTE: test params are: codec=CheapBastard, sim=DefaultSimilarity, locale=, timezone=Asia/Baku
2> NOTE: Mac OS X 10.10.4 x86_64/Oracle Corporation 1.8.0_25 (64-bit)/cpus=8,threads=1,free=482137936,total=514850816
2> NOTE: All tests run in this JVM: [ShadedIT]
Completed [1/1] in 6.61s, 5 tests, 3 failures <<< FAILURES!
Tests with failures:
- org.elasticsearch.shaded.test.ShadedIT.testJodaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testGuavaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testjsr166eIsNotOnTheCP
```
Please note that the build doesn't fail with maven 3.2.x, and it doesn't fail if the mvn command is executed inside the qa/smoke-test-shaded directory. Only when the build is started from the root directory can the error above be observed.
The reason is the shaded version, which depends on elasticsearch core.
When Maven builds the module alone, elasticsearch core is not added to the dependency tree.
```sh
mvn dependency:tree -pl :smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
But if the shade plugin is involved during the build, it modifies the `projectArtifactMap`:
```sh
mvn dependency:tree -pl org.elasticsearch.distribution.shaded:elasticsearch,:smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | \- org.elasticsearch:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] | +- com.google.guava:guava:jar:18.0:compile
[INFO] | +- com.carrotsearch:hppc:jar:0.7.1:compile
[INFO] | +- joda-time:joda-time:jar:2.8:compile
[INFO] | +- org.joda:joda-convert:jar:1.2:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.5.3:compile
[INFO] | | \- org.yaml:snakeyaml:jar:1.12:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.5.3:compile
[INFO] | +- io.netty:netty:jar:3.10.3.Final:compile
[INFO] | +- com.ning:compress-lzf:jar:1.0.2:compile
[INFO] | +- com.tdunning:t-digest:jar:3.0:compile
[INFO] | +- org.hdrhistogram:HdrHistogram:jar:2.1.6:compile
[INFO] | +- org.apache.commons:commons-lang3:jar:3.3.2:compile
[INFO] | +- commons-cli:commons-cli:jar:1.3.1:compile
[INFO] | \- com.twitter:jsr166e:jar:1.1.0:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
A fix could consist of fixing something on the Maven side. Probably something changed in a recent version and introduced this "issue", though it might not really be an issue but rather a fix.
There are two workarounds:
1) manually exclude elasticsearch core from the shaded version in the smoke-test-shaded module and manually add each lucene lib needed by elasticsearch
2) add a new `elasticsearch-lucene` (lucene) POM module which simply declares all needed lucene libs in subprojects (such as the smoke tester one).
I chose the latter.
Closes #12791.
This adds the infrastructure to run integration tests with more than one node.
* it adds relevant macros and targets to integration-tests.xml to start unicast nodes
* there is a qa/smoke-test-multinode project that simulates such a setup
this commit is solely the infrastructure and doesn't hook up any projects to use this.
For reliability and stability reasons this should be used with care and only if it's really
needed.
Closes #12718
the default classloader. It had all kinds of leniency in how the
classname was found, and simply cannot work with plugins having isolated
classloaders.
This change removes that method. Some of the uses of it were for custom
extension points, like custom repository or discovery types. A lot were
just there to plug in mock implementations for tests. For the settings
that were legitimate, all now support plugins adding the given setting
via onModule. For those that were specific to tests for mocks, they now
use Classes.loadClass (a helper around Class.forName). This is a
temporary measure until (in a future PR) tests can change the
implementation via package private statics.
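A sketch of what such a helper amounts to (the real signature may differ):
```java
final class Classes {
    // A thin wrapper around Class.forName that loads from an explicit loader.
    @SuppressWarnings("unchecked")
    static <T> Class<? extends T> loadClass(ClassLoader loader, String className) {
        try {
            return (Class<? extends T>) Class.forName(className, true, loader);
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("failed to load class [" + className + "]", e);
        }
    }
}
```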
I also removed a number of unnecessary intermediate modules, added a
"jvm-example" plugin that can be filled in in the future as a smoke test
for breaking plugins, and gave some documentation to the "spawn" modules
interface.
Closes #12643. Closes #12656
If we run the tests as a reactor build we reference the dependencies
before they are shaded. This causes problems since we verify that unshaded versions
of transitive dependencies are not present. This commit moves the verification tests
into the integration test that always runs with the shaded version of the jar.
This creates a module in qa called vagrant that can be run if you have
vagrant and virtualbox installed and will run the packaging tests in trusty
and centos-7.0. You can ask it to run tests in other linuxes. This is the full
list:
* precise aka Ubuntu 12.04
* trusty aka Ubuntu 14.04
* vivid aka Ubuntu 15.04
* wheezy aka Debian 7, the current debian oldstable distribution
* jessie aka Debian 8, the current debian stable distribution
* centos-6
* centos-7
* fedora-22
* oel-7
There is lots of documentation on how to do this in the TESTING.asciidoc.
Closes #12611
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes #12752. Closes #12754
this commit adds a simple integration test that starts a
node from a shaded jar, indexes a doc and retrieves it. It
also has some basic unit tests that try to load shaded classes and ensure
that their counterpart is not in the classpath.
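A minimal sketch of that kind of unit test check (package and class names are illustrative; it assumes the shaded jar relocates Guava):
```java
public class ShadedClasspathCheck {
    public void testGuavaIsNotOnTheClasspath() {
        try {
            Class.forName("com.google.common.collect.ImmutableList");
            throw new AssertionError("unshaded Guava should not be on the classpath");
        } catch (ClassNotFoundException expected) {
            // expected: the shaded jar relocates this package
        }
    }
}
```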
Closes #12711
This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
We have a smoke_test_plugins.py, but it's a bit slow, not integrated
into our build, etc.
I converted this into an integration test. It is definitely uglier
but more robust and fast (e.g. 20 seconds to verify).
There is also some refactoring of existing integ test logic, like printing
out the commands we execute and stuff.