In addition to being a big security problem, setAccessible is a risk
for Java 9 migration. We need to clean up our code so we can ban it,
and eventually enforce the ban with the security manager for
third-party code too, or we may run into problems.
Instead of using setAccessible, use the correct modifier (e.g. public).
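A hypothetical before/after sketch of the kind of change this asks for (the class and field here are made up):
```java
import java.lang.reflect.Field;

class Example {
    private final String internalState = "computed";

    // Before: reflection plus setAccessible to reach a private member.
    static String viaReflection(Example e) throws Exception {
        Field f = Example.class.getDeclaredField("internalState");
        f.setAccessible(true); // the call we want to ban
        return (String) f.get(e);
    }

    // After: give the member an accessor with the correct modifier and call it directly.
    public String internalState() {
        return internalState;
    }
}
```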
TODO: ban in tests
TODO: ban in security manager at runtime
This commit removes and now forbids all uses of
com.google.common.collect.ImmutableSortedMap across the codebase. This
is one of many steps in the eventual removal of Guava as a dependency.
Relates #13224
This commit replaces:
* com.google.common.util.concurrent.ListenableFuture
* com.google.common.util.concurrent.SettableFuture
* com.google.common.util.concurrent.Futures
* com.google.common.util.concurrent.MoreExecutors
And forbids their usage via forbidden APIs. This is one of
many steps in the eventual removal of Guava as a dependency.
Relates to #13224
This commit removes and now forbids all uses of
com.google.common.collect.Queues across the codebase. This is one of
many steps in the eventual removal of Guava as a dependency.
Relates #13224
This commit removes and now forbids all uses of
com.google.common.collect.ImmutableSortedSet across the codebase. This
is one of many steps in the eventual removal of Guava as a dependency.
Relates #13224
This commit removes and now forbids all uses of
com.google.common.base.Preconditions#checkNotNull across the codebase.
This is one of many steps in the eventual removal of Guava as a
dependency.
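The commit does not spell out the replacement, but the usual JDK substitution is java.util.Objects#requireNonNull; a minimal sketch:
```java
import java.util.Objects;

class NullChecks {
    static String describe(String name) {
        // was: com.google.common.base.Preconditions.checkNotNull(name, "name must not be null");
        Objects.requireNonNull(name, "name must not be null");
        return "name: " + name;
    }
}
```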
Relates #13224
The semantics of the `boost` parameter for `function_score` have changed,
because Lucene now requires that query boosts and top-level boosts
are applied the same way.
This commit removes and now forbids all uses of
com.google.common.collect.Sets across the codebase. This is one of many
steps in the eventual removal of Guava as a dependency.
Relates #13224
Adds a node attribute to all test runs and uses the attribute to test
`_cat/nodeattrs`.
Note that it's quite possible to create an impressively slow regex while
doing this, so you have to be careful. See the comment in the commit for more if curious.
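The details live in the commit comment; as a generic illustration (not taken from the commit) of how an innocent-looking pattern over whitespace-separated cat output can blow up:
```java
import java.util.regex.Pattern;

class SlowRegexDemo {
    public static void main(String[] args) {
        // A nested quantifier like (\s*\S+)* backtracks combinatorially when the overall
        // match fails (no trailing "x" in the input), so the attempt effectively hangs
        // once the line gets moderately long.
        Pattern slow = Pattern.compile("(\\s*\\S+)*x");
        String catLine = "node-0 127.0.0.1 127.0.0.1 some_attr some_value";
        System.out.println(slow.matcher(catLine).matches());
    }
}
```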
Closes #12558
There are a few tests that currently use the statically generated
backcompat indexes. This change moves them to a shared location, so they
no longer have to build a path based on the package name of the old
index tests.
This commit removes and now forbids all uses of
com.google.common.collect.Maps across the codebase. This is one of many
steps in the eventual removal of Guava as a dependency.
Relates #13224
This commit removes and now forbids all uses of
com.google.common.base.Throwables across the codebase.
For uses of com.google.common.base.Throwables#getStackTraceAsString,
use org.elasticsearch.ExceptionsHelper#stackTrace.
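A minimal sketch of the substitution named above (assuming ExceptionsHelper#stackTrace takes a Throwable and returns the rendered trace as a String):
```java
import org.elasticsearch.ExceptionsHelper;

class StackTraces {
    // was: com.google.common.base.Throwables.getStackTraceAsString(t)
    static String render(Throwable t) {
        return ExceptionsHelper.stackTrace(t);
    }
}
```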
Relates #13224
This script allows us to ensure that artifacts have been pushed to
a repository after running `mvn deploy`. This will allow us to
check that all of our artifacts have been deployed to sonatype and
the S3 bucket.
Basically it takes the contents of a local mvn repository and
runs HTTP HEAD requests against all artifacts. It also checks that
the length returned by the Content-Length header is the same as the
size of the artifact locally.
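A minimal sketch of the per-artifact check described above (a sketch only, not the actual script):
```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

class ArtifactCheck {
    // True if the remote artifact exists and its Content-Length matches the local file size.
    static boolean verify(String remoteUrl, Path localArtifact) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(remoteUrl).openConnection();
        conn.setRequestMethod("HEAD");
        try {
            return conn.getResponseCode() == 200
                    && conn.getContentLengthLong() == Files.size(localArtifact);
        } finally {
            conn.disconnect();
        }
    }
}
```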
As elasticsearch is marked as provided, we don't need to explicitly exclude it from the assembly descriptor.
Today we get the following warning for all plugins:
```
[INFO] --- maven-assembly-plugin:2.5.5:single (default) @ repository-s3 ---
[INFO] Reading assembly descriptor: /path/to/plugin-assembly.xml
[WARNING] The following patterns were never triggered in this artifact exclusion filter:
o 'org.elasticsearch:elasticsearch'
[INFO] Building zip: /path/to/target/releases/repository-s3-3.0.0-SNAPSHOT.zip
[INFO]
```
It now gives:
```
[INFO] --- maven-assembly-plugin:2.5.5:single (default) @ repository-s3 ---
[INFO] Reading assembly descriptor: /path/to/plugin-assembly.xml
[INFO] Building zip: /path/to/target/releases/repository-s3-3.0.0-SNAPSHOT.zip
[INFO]
```
This commit removes and now forbids all uses of
com.google.common.base.Strings across the codebase.
For uses of com.google.common.base.Strings.isNullOrEmpty, use
org.elasticsearch.common.Strings.isNullOrEmpty.
For uses of com.google.common.base.Strings.padStart use
org.elasticsearch.common.Strings.padStart.
For uses of com.google.common.base.Strings.nullToEmpty use
org.elasticsearch.common.Strings.coalesceToEmpty.
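A minimal sketch of these substitutions (assuming the org.elasticsearch.common.Strings methods keep Guava's signatures):
```java
import org.elasticsearch.common.Strings;

class StringHelpers {
    static boolean empty(String s) {
        return Strings.isNullOrEmpty(s);      // was com.google.common.base.Strings.isNullOrEmpty
    }

    static String padded(String s) {
        return Strings.padStart(s, 5, '0');   // was com.google.common.base.Strings.padStart
    }

    static String nonNull(String s) {
        return Strings.coalesceToEmpty(s);    // was com.google.common.base.Strings.nullToEmpty
    }
}
```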
Relates #13224
This commit removes and now forbids all uses of
com.google.common.base.Predicate and com.google.common.base.Predicates
across the codebase. This is one of the many steps in the eventual
removal of Guava as a dependency. This was enabled by #13314.
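The commit does not spell out the replacement, but the usual JDK substitution looks like this:
```java
import java.util.function.Predicate;

class NonEmpty {
    // Guava's Predicate#apply becomes java.util.function.Predicate#test, usually as a lambda.
    static final Predicate<String> NON_EMPTY = s -> s != null && !s.isEmpty();

    static boolean check(String value) {
        return NON_EMPTY.test(value);
    }
}
```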
Relates #13224
This commit removes and now forbids all uses of
com.google.common.base.Objects across the codebase. This is a small
step in the eventual removal of Guava as a dependency.
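The commit does not list the call sites, but the typical JDK substitution for Guava's Objects helpers looks like this (the Point class is made up):
```java
import java.util.Objects;

class Point {
    final int x;
    final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) {
            return false;
        }
        Point other = (Point) o;
        // was: com.google.common.base.Objects.equal(...)
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        // was: com.google.common.base.Objects.hashCode(x, y)
        return Objects.hash(x, y);
    }
}
```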
Relates #13224
This commit adds a new smoke test for exercising the client as an end Java user would.
It starts a cluster in the `pre-integration-test` phase, executes the client operations defined as JUnit tests within the `integration-test` phase, and stops the external cluster in the `post-integration-test` phase.
You can also run test classes from your IDE.
* Start an external node on your machine with `bin/elasticsearch` (note that you can test Java API regressions if you run an older or newer node version)
* Run the JUnit test. By default, it will run tests on `localhost:9300` but you can change this setting using system property `tests.cluster`. It also expects the default `cluster.name` (`elasticsearch`).
This commit also starts adding [snippets as defined by Maven](https://maven.apache.org/guides/mini/guide-snippet-macro.html) to help keep the Java reference guide automatically synchronized with the current code.
Our documentation build tool does not support these snippets yet, but we will most likely support them at some point.
The shaded version of elasticsearch was built at the very beginning to avoid dependency conflicts in a specific case where:
* People use elasticsearch from Java
* People need to embed the elasticsearch jar within their own application (as it's currently the only way to get a `TransportClient`)
* People also embed in their application another (often older) version of a dependency that elasticsearch uses, such as Guava, Joda, Jackson...
This conflict can be solved within the projects themselves either by upgrading the dependency to the version provided by elasticsearch, or by shading the elasticsearch project and relocating the conflicting packages.
Example
-------
As an example, let's say you want to use within your project `Joda 2.1` but elasticsearch `2.0.0-beta1` provides `Joda 2.8`.
Let's say you also want to run all that with shield plugin.
Create a new maven project or module with:
```xml
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <elasticsearch.version>2.0.0-beta1</elasticsearch.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <version>${elasticsearch.version}</version>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.plugin</groupId>
        <artifactId>shield</artifactId>
        <version>${elasticsearch.version}</version>
    </dependency>
</dependencies>
```
Now shade and relocate all packages which conflict with your own application:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <relocations>
                            <relocation>
                                <pattern>org.joda</pattern>
                                <shadedPattern>fr.pilato.thirdparty.joda</shadedPattern>
                            </relocation>
                        </relocations>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```
You can now create a shaded version of elasticsearch + shield by running `mvn clean install`.
In your project, you can then depend on:
```xml
<dependency>
    <groupId>fr.pilato.elasticsearch.test</groupId>
    <artifactId>es-shaded</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.1</version>
</dependency>
```
Then build your TransportClient as usual:
```java
TransportClient client = TransportClient.builder()
        .settings(Settings.builder()
                .put("path.home", ".")
                .put("shield.user", "username:password")
                .put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
        )
        .build();
client.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("localhost", 9300)));

// Index some data
client.prepareIndex("test", "doc", "1").setSource("foo", "bar").setRefresh(true).get();
SearchResponse searchResponse = client.prepareSearch("test").get();
```
If you want to use your own version of Joda, then import for example `org.joda.time.DateTime`. If you want to access the shaded version (not recommended though), import `fr.pilato.thirdparty.joda.time.DateTime`.
You can run a simple test to make sure that both classes can live together within the same JVM:
```java
CodeSource codeSource = new org.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("unshaded = " + codeSource);
codeSource = new fr.pilato.thirdparty.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("shaded = " + codeSource);
```
It will print:
```
unshaded = (file:/path/to/joda-time-2.1.jar <no signer certificates>)
shaded = (file:/path/to/es-shaded-1.0-SNAPSHOT.jar <no signer certificates>)
```
This PR also removes the fully-loaded module.
By the way, the project now builds with Maven 3.3.3, so we can relax our maven policy a bit.
To support spaces in both the command as well as its arguments, cmd needs
to be called like this:
cmd /C ""c:\a b\c.bat" "argument 1" "argument2""
ant was running
cmd /C "c:\a b\c.bat" "argument 1" "argument2"
which Windows preprocesses to
cmd /C c:\a b\c.bat" "argument 1" "argument2
which made it appear as though ant was not quoting properly (it sort of
was).
This commit removes and now forbids all uses of
com.google.common.collect.Lists across the codebase. This is the first
of many steps in the eventual removal of Guava as a dependency.
We had a file in dev-tools/ElasticSearch.launch which tried to launch
elasticsearch but failed somewhat epically because of the security manager
and files having moved. This recreates it with
`-Des.security.manager.enabled=false` to get it working again. It's not as nice
as testing with the security manager in place, but it's better than waiting
minutes for maven to package and start up elasticsearch.
The script now allows running all required steps at once, or alternatively
prints out manual instructions for running the steps individually. It also has
flags and options to run debug builds from a local checkout.
Currently we implicitly enforce a version format on the java.version
property for plugins via JarHell.checkJavaVersion. We should explicitly
enforce this version format and specify it in the documentation.
This commit adds an explicit enforcement of the version format on the
java.version property for plugins and updates the documentation for
plugin-descriptor.properties accordingly.
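A sketch of the kind of check this adds (the exact pattern and error message live in JarHell and the descriptor docs; the regex below is an assumption):
```java
import java.util.regex.Pattern;

class PluginDescriptorChecks {
    // Assumed format: dot-separated non-negative integers, e.g. "1.8" passes, "1.8.0_25" fails.
    private static final Pattern JAVA_VERSION_FORMAT = Pattern.compile("\\d+(\\.\\d+)*");

    static void checkJavaVersionFormat(String javaVersion) {
        if (JAVA_VERSION_FORMAT.matcher(javaVersion).matches() == false) {
            throw new IllegalArgumentException("java.version is not a valid version format: " + javaVersion);
        }
    }
}
```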
Closes #13009
Multicast has known issues (see #12999 and #12993). This change moves
multicast into a plugin, and deprecates it in the docs. It also allows
for plugging in multiple zen ping implementations.
closes #13019
Fix unicast discovery to work when a host has multiple addresses.
Ban dangerous methods in java.net with forbidden APIs.
Fix ipv6 bugs and formatting of network addresses everywhere.
Closes #12999
Closes #12993
Squashed commit of the following:
commit 6c1aa001d091c5cf25212a53dc701fb704337f1e
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 14:25:43 2015 -0400
Fix these to be correct with addresses just in case
commit 648215627e84abf58a71400e7dc9ae775efb71d6
Merge: d00561b 41d8fbe
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 13:23:09 2015 -0400
Merge branch 'master' into unicast_all_the_way_down
commit d00561b76fd1aa5850699f7901f3dae3d4d402b7
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:38:50 2015 +0200
limit local ports to 5 in UnicastZenPing
commit e2e15c594006746cbe24432694294a71cc99deb8
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:32:47 2015 -0400
fix port limiting
commit 10153cb7adadda81a1f482445e703836b65cf5e2
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:18:37 2015 -0400
don't serialize scopeids: that's broken
commit 2aa63d43db2baec68a2e9bc227cfeb85dfeb4f83
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:06:51 2015 +0200
restore @Network
commit c840f1d1ef438826ae1ecfd5e45942a0e30dc9c0
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:02:30 2015 +0200
Use NetworkAddress.formatAddress where applicable in plugins
commit 374ce878852b35d626b7a29c8c4773545b0e9ddd
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 15:34:06 2015 +0200
Use NetworkAddress.formatAddress where applicable
commit e7a606d63f1bc43c1b62b6e17adf707c76d43a15
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 10:17:57 2015 +0200
Add @Multicast annotation to disable multicast tests by default.
We only run multicast tests now when we explicitly state it. A working
multicast env is required which is not always the case.
commit 2d7d2d0347179696ab41f71f048b13305014c85b
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:51:28 2015 +0200
Remove extra check for local mode in InternalTestCluster
commit dda59ac39aa136d4687b9274c2692cd77f8b8f66
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:37:03 2015 +0200
Handle node mode across entire test cluster
We used static methods reading sys properties to define the node mode
per cluster. This had lots of problems when tests couldn't cope with
mixed or only local mode. Now we are passing it down to the cluster from the test,
which allows @SuppressNetworkMode / @SupressLocalMode on the test to force
consistent node configurations.
commit 058197b7a408318995c88ce7f6762e32348de0de
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:19:14 2015 -0400
really ban InetSocketAddress's trappy method and break build and go to sleep, sorry
commit ac8779185aee1e17e6f5a81766290fdfc9c603ba
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:16:52 2015 -0400
Ban methods that might surprisingly cause DNS lookups
commit e64fe3dff2b11503e5f2831eb9863d64f56c5538
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:59:05 2015 -0400
Add unit test
commit f15434f20fb1a3691b1cc16028597d8fae937e05
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:39:02 2015 -0400
fix ipv6 formatting bugs
commit 05c2c74098052c75fbb79ea1818a295ef2e03e30
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:12:05 2015 -0400
format addresses correctly so I can actually read what comes out of our logs and stats apis
commit 4f9389dcf1e8925f23153c5eb271b4ce2294dbaf
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 21:26:52 2015 -0400
ban dangerous methods in java.net
commit 6aacd4d9925f324903d1d099a6cf5f862aeaf677
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 20:59:24 2015 -0400
ban lenient method
commit f466a842c60163d1f4554bdce8a4163edb534c2c
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:29:00 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 0de007a33b33fb68cf85cd86db4ca4f8ce10bbc9
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:10:07 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 539f6ca6e5137e0d496239adc8684688dedcc824
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:02:01 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 004c2881b25467f332acc8c9f9e92b1f0f9d314e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:51:45 2015 -0400
Fix multinode
commit 54113af325ce31571811c49fdaae89d5687be4ba
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:36:45 2015 -0400
fix integration tests
commit 0156a77a56319d6b9737ec6a531992052e50bd59
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 23:32:18 2015 +0200
enable multicast in MulticastZenPingIT.java
commit 1791caa35da853ce0122485fa3fd4674c671ec6e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:23:16 2015 -0400
Fix constant
commit 22820b53e0b2dc9fd47145c2bc29ce912a8fd484
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:59:09 2015 +0200
give it some extra ids for local transport crazyness
commit b2138fafa94a8a085813fd48356df63e57ade5b3
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:51:42 2015 +0200
pass on local addresses from configured transport rather than hard code IP addresses
commit 1bf5de1f457b081e0ce262b57d2b55d39c434156
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:04:31 2015 +0200
fix PluggableTransportModuleIT.java to use local disco and detach port limit for node local disco
commit b6706eddfa04c43947c16551359ae98a463d34aa
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 14:16:03 2015 -0400
Default to unicast discovery, with default host list of 127.0.0.1, [::1]
At the moment, when installing from a URL, a user provides the plugin name on
the command line like:
* bin/plugin install [plugin-name] --url [url]
This can lead to problems when picking an already existing name from another
plugin, and can potentially overwrite plugins already installed with that name.
Thus, this PR introduces a mandatory `name` property to the plugin descriptor
file which replaces the name formerly provided by the user.
With the addition of the `name` property to the plugin descriptor file, the user
does not need to specify the plugin name any longer when installing from a file
or URL. Because of this, all arguments to the `plugin install` command are now
treated as either a symbolic name, a URL, or a file, without the need to specify
this with an explicit option.
The new syntax for `plugin install` is now:
bin/plugin install [name or url]
* downloads official plugin
bin/plugin install analysis-kuromoji
* downloads github plugin
bin/plugin install lmenezes/elasticsearch-kopf
* install from URL or file
bin/plugin install http://link.to/foo.zip
bin/plugin install file:/path/to/foo.zip
If the argument does not parse to a valid URL, it is assumed to be a name and the
download location is resolved like before. Regardless of the source location of
the plugin, it is extracted to a temporary directory and the `name` property from
the descriptor file is used to determine the final install location.
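A rough sketch of the described resolution logic (not the actual implementation; the name-based helper below is a hypothetical stand-in):
```java
import java.net.MalformedURLException;
import java.net.URL;

class PluginInstallArgument {
    // Treat the argument as a URL (including file: URLs) if it parses,
    // otherwise fall back to the existing name-based download resolution.
    static String resolveDownloadLocation(String argument) {
        try {
            return new URL(argument).toString();
        } catch (MalformedURLException e) {
            return resolveByName(argument);
        }
    }

    // Hypothetical stand-in for the official/github name resolution mentioned above.
    static String resolveByName(String name) {
        return "resolved-by-name:" + name;
    }
}
```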
Relates to #12715
This causes a FileSystemException when trying to retrieve the FileStore for a path,
and makes Files.isWritable incorrectly return false.
Squashed commit of the following:
commit d2cc0d966f3bc94aa836b316a42b3c5724bc01ef
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 15:49:48 2015 -0400
fixes for the non-bogus comments
commit 6e0a272f5f8ef7358654ded8ff4ffc31831fa5c7
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 15:30:43 2015 -0400
Fix isWritable too
commit 2a8764ca118fc4c950bfc60d0b97de873e0e82ad
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 14:49:50 2015 -0400
try to workaround filestore bug
Now that we are using short names for artifactId (see #12879), we no longer need to transform long names (`elasticsearch-pluginname`) to short names (`pluginname`) in the ant script when we install a plugin.
Also modify convert-plugin-name
Clean up remaining plugins with the old format
And fix the vagrant tests
This method is very confusing, and if it's used it's likely the wrong thing
with respect to the actual bound / published address. This change discourages
its use and removes all usage. It is replaced with the actual published address
most of the time.
When elasticsearch is configured by interface (or default: loopback interfaces),
bind to all addresses on the interface rather than an arbitrary one.
If the publish address is not specified, default it from the bound addresses
based on the following sort ordering:
* ipv4/ipv6 (java.net.preferIPv4Stack, defaults to true)
* ordinary addresses
* site-local addresses
* link local addresses
* loopback addresses
Only one address is published, and multicast is still always over ipv4; these
need to be future improvements. A sketch of the selection ordering follows.
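This is a rough illustration only, not the actual implementation; the java.net.preferIPv4Stack handling is reduced to a boolean flag:
```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.util.Arrays;
import java.util.Comparator;

class PublishAddressSort {

    // Rough rank within a protocol family: ordinary (global) addresses first,
    // then site-local, then link-local, with loopback last.
    static int scopeRank(InetAddress address) {
        if (address.isLoopbackAddress()) {
            return 3;
        } else if (address.isLinkLocalAddress()) {
            return 2;
        } else if (address.isSiteLocalAddress()) {
            return 1;
        } else {
            return 0;
        }
    }

    // Pick a publish address from the bound addresses using the ordering described above.
    static InetAddress pickPublishAddress(InetAddress[] boundAddresses, boolean preferIPv4) {
        InetAddress[] sorted = boundAddresses.clone();
        Arrays.sort(sorted, Comparator
                .<InetAddress>comparingInt(a -> (a instanceof Inet4Address) == preferIPv4 ? 0 : 1)
                .thenComparingInt(PublishAddressSort::scopeRank));
        return sorted[0];
    }
}
```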
Closes #12906
Closes #12915
Squashed commit of the following:
commit 7e60833312f329a5749f9a256b9c1331a956d98f
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 14:45:33 2015 -0400
fix java 7 compilation oops
commit c7b9f3a42058beb061b05c6dd67fd91477fd258a
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 14:24:16 2015 -0400
Cleanup/fix logic around custom resolvers
commit bd7065f1936e14a29c9eb8fe4ecab0ce512ac08e
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 13:29:42 2015 -0400
Add some unit tests for utility methods
commit 0faf71cb0ee9a45462d58af3d1bf214e8a79347c
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:11:48 2015 -0400
localhost all the way down
commit e198bb2bc0d1673288b96e07e6e6ad842179978c
Merge: b55d092 b93a75f
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:05:02 2015 -0400
Merge branch 'master' into network_cleanup
commit b55d092811d7832bae579c5586e171e9cc1ebe9d
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:03:03 2015 -0400
fix docs, fix another bug in multicast (publish host = bad here!)
commit 88c462eb302b30a82585f95413927a5cbb7d54c4
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:50:49 2015 -0400
remove nocommit
commit 89547d7b10d68b23d7f24362e1f4782f5e1ca03c
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:49:35 2015 -0400
fix http too
commit 9b9413aca8a3f6397b5031831f910791b685e5be
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:06:02 2015 -0400
Fix transport / interface code
Next up: multicast and then http
In order to have consistent deploys across several repositories,
we should deploy to sonatype first, then mirror those contents,
and then upload to s3.
This means the aws wagon is not needed anymore.
mvn has a versions:set plugin that can be easily invoked and does not
require the python script to parse the files and hope that there are no
other snapshot mentions.
This is purely for maintenance reasons, since it is easier to see if we can drop
certain staging URLs if we have the version next to the hash.
I also removed the gpg passphrase from the example URL, since it's better to get prompted.
In order to reflect our RC release process, we need to separate
the prepare_release script into two separate scripts.
One script now updates the documentation. That one can be executed
anytime and needs to be pushed after that.
The other script updates the version in Version.java and all pom.xml
files, but does not commit anymore. This allows creating a non-snapshot
version locally, running mvn deploy, pushing the artifacts into S3 and,
upon successful tests, simply releasing them on sonatype. This also allows
for updates, because the S3 snapshot will include the commitId in its repo
as already pushed before.
If core plugins are to be renamed to not include the "elasticsearch-"
prefix, then we need a way of telling the license checker which
JAR files to ignore.
Refactored a part out of the release script, so the user can
change the version locally, as well as move the documentation
and change Version.java.
The background of this change is to have a very simple release
process that puts stuff into a staging environment, so the beta
release can be tested, before it is officially released.
This means the build_release script can be removed soon.
On our Jenkins instances the ${path.home} and createTempDir() locations
have different parents, so the custom index locations are not within
the ${path.shared_data} directory. This is a hack to fix it until we can
find a way to unify the createTempDir() and `path.shared_data` settings
inside the tests.
Previously we had additional.args as an argument to the startup-elasticsearch macrodef, and this was
being used to set some additional elasticsearch settings. This adds the ability to specify additional
arguments back, using an element called additional-args.
This adds the infrastructure to run integration tests with more than one node.
* it adds relevant macros and targets to integration-tests.xml to start unicast nodes
* there is a qa/smoke-test-multinode project that simulates such a setup
This commit is solely the infrastructure and doesn't hook up any projects to use it.
For reliability and stability reasons this should be used with care and only if it's really
needed.
Closes #12718
Adds an explicit description to the RPM package so it doesn't inherit the description from the POM.
Closes #12550
Also, modified descriptions for deb and rpm packages to be the same and to reference the documentation rather than listing features that are out of date.
This creates a module in qa called vagrant that can be run if you have
vagrant and virtualbox installed and will run the packaging tests in trusty
and centos-7.0. You can ask it to run tests in other linuxes. This is the full
list:
* precise aka Ubuntu 12.04
* trusty aka Ubuntu 14.04
* vivid aka Ubuntu 15.04
* wheezy aka Debian 7, the current debian oldstable distribution
* jessie aka Debian 8, the current debian stable distribution
* centos-6
* centos-7
* fedora-22
* oel-7
There is lots of documentation on how to do this in the TESTING.asciidoc.
Closes #12611
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes #12752
Closes #12754
This commit adds a simple integration test that starts a
node from a shaded jar, indexes a doc and retrieves it. It
also has some basic unit tests that try to load shaded classes and ensure
that their counterpart is not in the classpath.
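A minimal sketch of that classpath check (the relocated package name below is hypothetical, and the real tests likely use the test framework rather than a main method):
```java
class ShadedClasspathCheck {
    public static void main(String[] args) throws Exception {
        // The relocated (shaded) class should load...
        Class.forName("org.elasticsearch.common.collect.ImmutableMap");
        // ...while the original third-party package must not be on the classpath.
        try {
            Class.forName("com.google.common.collect.ImmutableMap");
            throw new AssertionError("unshaded guava leaked onto the classpath");
        } catch (ClassNotFoundException expected) {
            System.out.println("classpath is clean");
        }
    }
}
```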
Closes #12711
We have a smoke_test_plugins.py, but it's a bit slow, not integrated
into our build, etc.
I converted this into an integration test. It is definitely uglier
but more robust and fast (e.g. about 20 seconds to verify).
There is also some refactoring of the existing integ test logic, like printing
out the commands we execute and such.
As the script now deploys to S3 and several things in master have
changed, this script needs to reflect the latest changes
* An unsigned RPM is built by default, so that users of older
RPM based distros can download and use that RPM by default
* In addition a signed RPM is built, which is used for the repositories
* Paths for the new distributions have been fixed
* The check for the number of jars has been removed, as this is done
as part of the license checking in `mvn verify`
* Checksum generation has been removed, as this is done as part of the
mvn build
* Publishing artifacts to S3 has been removed
* Repository creation script has been updated
This can happen for a number of reasons, including bugs.
Today you will get a super crappy failure, telling you a .pid file
was not found... you can go look in target/integ-tests/elasticsearch-xxx/logs
and examine the log file, but that's kind of a pain and not easy if it's a Jenkins
server.
Instead we can fail like this:
```
start-external-cluster-with-plugin:
[echo] Installing plugin elasticsearch-example-jvm-plugin...
[mkdir] Created dir: /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/temp
[exec] -> Installing elasticsearch-example-jvm-plugin...
[exec] Plugins directory [/home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/elasticsearch-2.0.0-SNAPSHOT/plugins] does not exist. Creating...
[exec] Trying file:/home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/releases/elasticsearch-example-jvm-plugin-2.0.0-SNAPSHOT.zip ...
[exec] Downloading ...........DONE
[exec] PluginInfo{name='example-jvm-plugin', description='Demonstrates all the pluggable Java entry points in Elasticsearch', site=false, jvm=true, classname=org.elasticsearch.plugin.example.ExampleJvmPlugin, isolated=true, version='2.0.0-SNAPSHOT'}
[exec] Installed example-jvm-plugin into /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/elasticsearch-2.0.0-SNAPSHOT/plugins/example-jvm-plugin
[echo] Starting up external cluster...
[echo] [2015-08-04 22:02:55,130][INFO ][org.elasticsearch.node ] [smoke_tester] version[2.0.0-SNAPSHOT], pid[4321], build[e2a47d8/2015-08-05T00:50:08Z]
[echo] [2015-08-04 22:02:55,130][INFO ][org.elasticsearch.node ] [smoke_tester] initializing ...
[echo] [2015-08-04 22:02:55,259][INFO ][org.elasticsearch.plugins] [smoke_tester] loaded [uber-plugin], sites []
[echo] [2015-08-04 22:02:55,260][ERROR][org.elasticsearch.bootstrap] Exception
[echo] java.lang.NullPointerException
[echo] at org.elasticsearch.common.settings.Settings$Builder.put(Settings.java:1051)
[echo] at org.elasticsearch.plugins.PluginsService.updatedSettings(PluginsService.java:208)
[echo] at org.elasticsearch.node.Node.<init>(Node.java:148)
[echo] at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157)
[echo] at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177)
[echo] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:272)
[echo] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25.178 s
[INFO] Finished at: 2015-08-04T22:03:14-05:00
[INFO] Final Memory: 32M/515M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run (integ-setup) on project elasticsearch-example-jvm-plugin: An Ant BuildException has occured: The following error occurred while executing this line:
[ERROR] /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/dev-tools/ant/integration-tests.xml:176: The following error occurred while executing this line:
[ERROR] /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/dev-tools/ant/integration-tests.xml:142: ES instance did not start
```
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, and also prevents restoring such indices from a snapshot into an already running cluster.
Closes #11857