Versions can be tricky with Linux vendors and such. To help debug any possible issues, we should output more detailed version information.
Today:
```
[elasticsearch] java.lang.RuntimeException: Java version: 1.7.0_55 suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
[elasticsearch] Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
[elasticsearch] If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
[elasticsearch] Upgrading is preferred, this workaround will result in degraded performance.
[elasticsearch] at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
[elasticsearch] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
[elasticsearch] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
With patch:
```
java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_40 [Java HotSpot(TM) 64-Bit Server VM 24.0-b56] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes use of it correctly;
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster as we'll now be able to pass needsScores=false to Query.createWeight.
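For illustration, a minimal sketch of the shape such an API can take (the interface and class names below are made up for the example, not the actual Elasticsearch script interfaces):
```
// Illustrative sketch only; the real script interfaces differ.
interface ScoredScript {
    /** Return false only if the script is guaranteed not to read the score. */
    boolean needsScores();

    double run(double score);
}

// An expression-like engine can inspect the compiled script and answer precisely;
// engines that cannot tell simply return true and keep the old behavior.
class ExpressionLikeScript implements ScoredScript {
    private final boolean referencesScore;

    ExpressionLikeScript(boolean referencesScore) {
        this.referencesScore = referencesScore;
    }

    @Override
    public boolean needsScores() {
        return referencesScore;
    }

    @Override
    public double run(double score) {
        return referencesScore ? score * 2.0 : 42.0;
    }
}
```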
Adding a named query that is null can lead to a NullPointerException
when copying the named queries. This is due to an implementation detail
in QueryParseContext.copyNamedQueries. In particular, this method uses
com.google.common.collect.ImmutableMap.copyOf. A documented requirement
of ImmutableMap is that none of its entries has a null key or a null
value. Therefore, we should not add such queries to the namedQueries
map. This does not change any behavior, since Map.get returns null
anyway if no entry with the given key exists.
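As a small self-contained illustration of the failure mode and the guard (assuming Guava on the classpath; the class and variable names are made up, not the actual QueryParseContext code):
```
import com.google.common.collect.ImmutableMap;

import java.util.HashMap;
import java.util.Map;

public class NamedQueryGuard {
    public static void main(String[] args) {
        Map<String, Object> namedQueries = new HashMap<>();

        Object parsedQuery = null; // a named query that parsed to null
        // Guard: do not add null queries, so that copyOf below cannot trip over them.
        if (parsedQuery != null) {
            namedQueries.put("my_query", parsedQuery);
        }

        // Without the guard this throws NullPointerException, because
        // ImmutableMap forbids null keys and null values.
        Map<String, Object> copy = ImmutableMap.copyOf(namedQueries);
        System.out.println(copy); // {}
    }
}
```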
Closes #12683
This change improves the `function_score` query to not compute scores at all
when they are not needed, and to not compute scores on the underlying query
when the combine function is to replace the score with the scores of the
functions.
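A simplified, non-Lucene sketch of the second point (the enum and method names are made up): when the combine function replaces the query score, the score of the underlying query never has to be computed at all.
```
import java.util.function.DoubleSupplier;

// Illustrative sketch only, not the actual FunctionScoreQuery implementation.
enum CombineMode { MULTIPLY, REPLACE }

class FunctionScoreSketch {
    static double combine(CombineMode mode, DoubleSupplier queryScore, double functionScore) {
        if (mode == CombineMode.REPLACE) {
            return functionScore;                        // query score is never computed
        }
        return queryScore.getAsDouble() * functionScore; // only computed when needed
    }

    public static void main(String[] args) {
        double score = combine(CombineMode.REPLACE,
                () -> { throw new AssertionError("should not be called"); }, 3.0);
        System.out.println(score); // 3.0
    }
}
```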
This commit makes it possible to serialize arbitrary objects by having them extend Writeable. When reading them though, we need to be able to identify which object we have to create, based on its name. This is useful for queries once we move to parsing on the coordinating node, as well as with aggregations and so on.
Introduced a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through the writeNamedWriteable and readNamedWriteable methods. A new NamedWriteableRegistry is also introduced, where named writeable prototypes need to be registered so that we can retrieve the proper instance of the writeable given its name and then de-serialize it by calling readFrom on it.
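A stripped-down sketch of the idea, using plain java.io streams instead of the actual StreamInput/StreamOutput classes (all names below are illustrative):
```
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the concept, not the actual NamedWriteable classes.
interface SimpleNamedWriteable {
    String getName();
    void writeTo(DataOutput out) throws IOException;
    SimpleNamedWriteable readFrom(DataInput in) throws IOException;
}

class SimpleNamedWriteableRegistry {
    private final Map<String, SimpleNamedWriteable> prototypes = new HashMap<>();

    void register(SimpleNamedWriteable prototype) {
        prototypes.put(prototype.getName(), prototype);
    }

    void write(DataOutput out, SimpleNamedWriteable writeable) throws IOException {
        out.writeUTF(writeable.getName()); // name first, so the reader knows what to build
        writeable.writeTo(out);
    }

    SimpleNamedWriteable read(DataInput in) throws IOException {
        String name = in.readUTF();
        SimpleNamedWriteable prototype = prototypes.get(name);
        if (prototype == null) {
            throw new IOException("unknown named writeable [" + name + "]");
        }
        return prototype.readFrom(in); // the prototype produces a fully read new instance
    }
}
```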
Closes #12393
We have a smoke_test_plugins.py, but it's a bit slow, not integrated
into our build, etc.
I converted this into an integration test. It is definitely uglier,
but more robust and fast (around 20 seconds to verify).
This also refactors some of the existing integration test logic, such as
printing out the commands we execute.
In order to avoid extra reroutes, `RoutingService` should avoid
scheduling a reroute of any shards where the delay is negative. To make
sure that we don't encounter a race condition between the
GatewayAllocator thinking a shard is delayed and RoutingService thinking
it is not, the GatewayAllocator will update the RoutingService with the
last time it checked in order to use a consistent "view" of the delay.
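A hypothetical sketch of the scheduling decision (none of these names are the real RoutingService/GatewayAllocator members): both sides derive the remaining delay from the same timestamp, and a non-positive remainder means no reroute gets scheduled.
```
// Hypothetical sketch, not the actual RoutingService code.
final class DelayedRerouteSketch {
    private volatile long lastAllocatorCheckNanos; // updated by the allocator

    void noteAllocatorCheck(long nanos) {
        lastAllocatorCheckNanos = nanos;
    }

    void maybeScheduleReroute(long unassignedAtNanos, long configuredDelayNanos) {
        long elapsed = lastAllocatorCheckNanos - unassignedAtNanos;
        long remaining = configuredDelayNanos - elapsed;
        if (remaining > 0) {
            scheduleRerouteIn(remaining); // still delayed: reroute once the delay expires
        }
        // A non-positive remaining delay means the shard is no longer delayed,
        // so no extra reroute is scheduled here.
    }

    private void scheduleRerouteIn(long nanos) {
        // placeholder for the real scheduling logic
    }
}
```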
Resolves #12456
Relates to #12515 and #12456
When postFilter generates a token that is identical to the input term,
DirectCandidateGenerator should not preFilter this token. If postFilter
and preFilter are the same analyzer instance, it would fail with:
"TokenStream contract violation: close() call missing"
This is a forward port of @nomoa's #12670
Today we try to detect if there is an existing index in the directory
and, if not, we create one. This can be tricky and error-prone since we
rely on the filesystem without taking the context into account when the
engine gets created. We know in all situations whether the index should be
created, so we can just use this information and rely on the Lucene index
writer to barf if we hit a situation where we can't append to an index even
though we should be able to.
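Roughly, the idea at the Lucene level (an illustrative sketch assuming Lucene 5.x-style APIs, not the actual engine code):
```
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.Directory;

import java.io.IOException;

// Illustrative sketch: the caller says whether the index should be created, and
// IndexWriter itself fails if we try to append to a directory with no index in it.
class IndexOpenSketch {
    IndexWriter open(Directory directory, boolean create) throws IOException {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        config.setOpenMode(create ? OpenMode.CREATE : OpenMode.APPEND);
        // OpenMode.APPEND throws IndexNotFoundException when no index exists,
        // so there is no need to probe the filesystem up front.
        return new IndexWriter(directory, config);
    }
}
```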
Today we fail to throw / rethrow a recovery exception if it happens during
the finalization of phase 1 and the source files are not affected. Even worse,
this can cause data loss if the reason for the exception is a failure to
delete a corruption marker or a similar pre-existing corruption, since we continue
with the recovery and mark the target shard as started, which in turn opens
an engine with an empty index.
This makes the output of EsThreadPoolExecutor#toString less pretty, but
we no longer have funky hacks that rely on the specific format of the
toString produced by ThreadPoolExecutor, which isn't part of its API and
could change with any JVM version and break the output.
Improving the toString allows for nicer error reporting. Also cleaned up
the way that EsRejectedExecutionException notices that it was rejected
from a shutdown thread pool. I left javadocs about how it's not 100% correct
but good enough for most uses.
The improved toString on EsThreadPoolExecutor means every one of them needs
a name. In most cases the name to use is obvious. In tests I use the name
of the test method, and in real thread pools I use the name of the thread
pool. In non-ThreadPool executors I use the thread's name.
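A simplified sketch of the approach (not the actual EsThreadPoolExecutor implementation): carry an explicit name and build toString from the executor's own accessors rather than parsing ThreadPoolExecutor's unspecified toString output.
```
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Simplified sketch only; the real EsThreadPoolExecutor differs.
class NamedExecutorSketch extends ThreadPoolExecutor {
    private final String name;

    NamedExecutorSketch(String name, int poolSize) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        this.name = name;
    }

    @Override
    public String toString() {
        // Built from the executor's public API, not from ThreadPoolExecutor#toString.
        return "NamedExecutorSketch[name=" + name
                + ", queue size=" + getQueue().size()
                + ", active threads=" + getActiveCount()
                + ", completed tasks=" + getCompletedTaskCount() + "]";
    }
}
```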
Closes #9732
In order to support the URL notation including a user/pass combination
(like http://user:pass@host/plugin.zip), the auth info needs to be added
manually.
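For illustration, a minimal sketch of adding the auth info by hand (assuming Java 8's Base64; this is not the actual plugin manager code):
```
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch: java.net.URL parses the user:pass part but does not send it,
// so the Authorization header has to be added by hand.
class BasicAuthDownloadSketch {
    HttpURLConnection connect(URL url) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        String userInfo = url.getUserInfo(); // "user:pass" or null
        if (userInfo != null) {
            String encoded = Base64.getEncoder()
                    .encodeToString(userInfo.getBytes(StandardCharsets.UTF_8));
            connection.setRequestProperty("Authorization", "Basic " + encoded);
        }
        return connection;
    }
}
```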
Today this is "unofficial" as conf/scripts, but some people
want to share scripts across different nodes and so on. Because
they cannot configure it, they are forced to use dirty hacks
like symbolic links, which isnt going to work: we aren't going
to recursively scan conf/ and add permissions to all link targets
underneath it, thats crazy.
I really hate adding yet another configuration knob here, but
users resorting to using symlinks are going to be frustrated,
and do things in a more insecure way.
The URL to download the main elasticsearch plugins did not match
what the S3 wagon is supposed to write to.
In addition this PR adds support for snapshots to access the
snapshot S3 bucket, so we can possibly download snapshot versions
of plugins.
The format of the URLs stems from #12270.
Closes #12632
As the script now deploys to S3 and several things in master have
changed, this script needs to reflect the latest changes:
* An unsigned RPM is built by default, so that users of older
RPM based distros can download and use that RPM by default
* In addition a signed RPM is built, that is used for the repositories
* Paths for the new distributions have been fixed
* The check for the number of jars has been removed, as this is done
as part of the license checking in `mvn verify`
* Checksum generation has been removed, as this is done as part of the
mvn build
* Publishing artifacts to S3 has been removed
* Repository creation script has been updated
This can happen for a number of reasons, including bugs.
Today you will get a super crappy failure telling you a .pid file
was not found... you can go look in target/integ-tests/elasticsearch-xxx/logs
and examine the log file, but that's kind of a pain and not easy if it's a
Jenkins server.
Instead we can fail like this:
```
start-external-cluster-with-plugin:
[echo] Installing plugin elasticsearch-example-jvm-plugin...
[mkdir] Created dir: /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/temp
[exec] -> Installing elasticsearch-example-jvm-plugin...
[exec] Plugins directory [/home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/elasticsearch-2.0.0-SNAPSHOT/plugins] does not exist. Creating...
[exec] Trying file:/home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/releases/elasticsearch-example-jvm-plugin-2.0.0-SNAPSHOT.zip ...
[exec] Downloading ...........DONE
[exec] PluginInfo{name='example-jvm-plugin', description='Demonstrates all the pluggable Java entry points in Elasticsearch', site=false, jvm=true, classname=org.elasticsearch.plugin.example.ExampleJvmPlugin, isolated=true, version='2.0.0-SNAPSHOT'}
[exec] Installed example-jvm-plugin into /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/integ-tests/elasticsearch-2.0.0-SNAPSHOT/plugins/example-jvm-plugin
[echo] Starting up external cluster...
[echo] [2015-08-04 22:02:55,130][INFO ][org.elasticsearch.node ] [smoke_tester] version[2.0.0-SNAPSHOT], pid[4321], build[e2a47d8/2015-08-05T00:50:08Z]
[echo] [2015-08-04 22:02:55,130][INFO ][org.elasticsearch.node ] [smoke_tester] initializing ...
[echo] [2015-08-04 22:02:55,259][INFO ][org.elasticsearch.plugins] [smoke_tester] loaded [uber-plugin], sites []
[echo] [2015-08-04 22:02:55,260][ERROR][org.elasticsearch.bootstrap] Exception
[echo] java.lang.NullPointerException
[echo] at org.elasticsearch.common.settings.Settings$Builder.put(Settings.java:1051)
[echo] at org.elasticsearch.plugins.PluginsService.updatedSettings(PluginsService.java:208)
[echo] at org.elasticsearch.node.Node.<init>(Node.java:148)
[echo] at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157)
[echo] at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177)
[echo] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:272)
[echo] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25.178 s
[INFO] Finished at: 2015-08-04T22:03:14-05:00
[INFO] Final Memory: 32M/515M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run (integ-setup) on project elasticsearch-example-jvm-plugin: An Ant BuildException has occured: The following error occurred while executing this line:
[ERROR] /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/dev-tools/ant/integration-tests.xml:176: The following error occurred while executing this line:
[ERROR] /home/rmuir/workspace/elasticsearch/plugins/jvm-example/target/dev-tools/ant/integration-tests.xml:142: ES instance did not start
```
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, as well as prevents restoring such indices from a snapshot into an already running cluster.
Closes #11857
Currently, when we delete files belonging to deleted snapshots, we issue one delete command to the underlying snapshot store at a time. Some repositories can benefit from bulk deletes of multiple files.
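A hypothetical sketch of what such an API could look like (not the actual repository interfaces): repositories that can batch deletes override the bulk method, while everyone else falls back to per-file deletes.
```
import java.io.IOException;
import java.util.List;

// Hypothetical sketch only, not the actual snapshot repository API.
interface BlobStoreSketch {
    void deleteBlob(String name) throws IOException;

    default void deleteBlobs(List<String> names) throws IOException {
        for (String name : names) {
            deleteBlob(name); // default: one delete command per file
        }
    }
}
```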
Closes #12533