Commit Graph

6415 Commits

Author SHA1 Message Date
javanna 1b496d09c3 [TEST] moved custom query parser tests to proper location 2015-06-08 15:50:43 +02:00
Boaz Leskes 10adb71445 Recovery: fix recovered translog ops stat counting when retrying a batch
#11363 introduced retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws our recovery stats off, as it may count ops twice.
2015-06-08 15:32:06 +02:00
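
A minimal sketch of the counting pitfall this commit message describes, in generic Java; all names here are illustrative stand-ins, not the actual recovery code. Incrementing the stat per attempt double-counts a batch that is retried after a mapping-update wait; counting once, after the batch succeeds, does not:

```java
import java.util.List;

// Illustrative only: the op type, apply(), and the exception are stand-ins.
class TranslogReplay {
    int recoveredOps = 0; // the stat that was being double-counted

    void replayBatch(List<String> batch) {
        while (true) {
            try {
                for (String op : batch) {
                    apply(op);
                }
                recoveredOps += batch.size(); // count once, only after the whole batch succeeded
                return;
            } catch (IllegalStateException mappingNotReady) {
                waitForMappingUpdate(); // then retry the batch; nothing was counted yet
            }
        }
    }

    void apply(String op) { /* replay one translog operation */ }
    void waitForMappingUpdate() { /* block until the mapping update is applied */ }
}
```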
javanna 7f673fbdfd Merge branch 'master' into feature/query-refactoring
Conflicts:
	core/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java
	core/src/test/java/org/elasticsearch/index/query/guice/MyJsonQueryParser.java
	core/src/test/java/org/elasticsearch/index/query/plugin/PluginJsonQueryParser.java
2015-06-08 15:14:13 +02:00
javanna 2ef0fcfd6a Plugins: one single (global) way to register custom query parsers
There are different ways to register custom query parsers through plugins; a couple of them work per index via index settings, which is probably even too flexible. There are also three different ways to add a global custom query parser, through either IndicesQueriesModule or IndicesQueriesRegistry. This commit consolidates the registration of custom query parsers via IndicesQueriesModule#addQuery(Class<? extends QueryParser>). The complexity of supporting parsers per index is not needed, hence it was removed. The other ways of registering global custom parsers are also dropped in favour of the one mentioned above.

Closes #11481
2015-06-08 12:19:53 +02:00
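
A minimal plugin sketch of the consolidated registration point. IndicesQueriesModule#addQuery is named in the commit message itself, but the package imports, the plugin base class of that era, and MyQueryParser are assumptions for illustration:

```java
// Package locations are assumptions based on the 1.x-era source layout.
import org.elasticsearch.indices.query.IndicesQueriesModule;
import org.elasticsearch.plugins.AbstractPlugin;

public class MyQueryPlugin extends AbstractPlugin {
    @Override
    public String name() {
        return "my-query-plugin";
    }

    @Override
    public String description() {
        return "registers a custom query parser globally";
    }

    // Invoked by the plugin service for each module; after this commit this is
    // the single remaining way to register a global custom query parser.
    public void onModule(IndicesQueriesModule module) {
        module.addQuery(MyQueryParser.class); // hypothetical QueryParser implementation
    }
}
```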
Colin Goodheart-Smithe f336cea35e Scripting: Execute Scripting Engine before searching for inner templates in template query
The search template and template query did not run the template through the script engine before searching for an inner template. This meant that parsing for the inner template failed, because the template was not always valid JSON (if it contained mustache code) when it was parsed to find the inner template. This has been fixed, and tests were added to check for the failing behaviour.

Tests are from https://github.com/elastic/elasticsearch/pull/8393
2015-06-08 10:44:58 +01:00
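
A self-contained Java sketch of the ordering this commit fixes; the render method below is a naive stand-in for the mustache script engine, not the actual API. The point is that a template containing mustache code is not valid JSON until it has been rendered, so rendering must happen before JSON parsing:

```java
import java.util.Map;

public class RenderBeforeParse {
    // Naive stand-in for mustache rendering: substitute {{name}} placeholders.
    static String render(String template, Map<String, String> params) {
        String out = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            out = out.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        String template = "{\"match\": {\"text\": \"{{query_string}}\"}}";
        // Parsing `template` directly can fail once mustache sections such as
        // {{#cond}}...{{/cond}} appear; the rendered string is plain JSON.
        String json = render(template, Map.of("query_string", "some words"));
        System.out.println(json); // {"match": {"text": "some words"}}
    }
}
```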
Christoph Büscher b282ac6192 Moving parser tests and base classes to correct location 2015-06-08 09:00:49 +02:00
jaymode 78630e03a2 make prompt placeholders consistent with existing placeholders
In #10918, we introduced the prompt placeholders. These had a different format
than our existing placeholders. This changes the prompt placeholders to follow the
format of the existing placeholders.

Relates to #11455
2015-06-06 10:41:07 -04:00
Christoph Büscher f1426abe3a Merge branch 'master' into feature/query-refactoring 2015-06-06 14:38:04 +02:00
Simon Willnauer 4c981ff4bf [BUILD] Don't shade core artifacts
This commit adds an additional jar that is shaded and keeps all the
artifacts that are used by default on the server-side unshaded. Users
that need a shaded jar can now use the `shaded` classifier to pull
in the shaded, minimized jar instead. Including the shaded jar in a
downstream project looks like this:

```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <classifier>shaded</classifier>
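  <!-- a <version> element is also required here unless it is inherited from a parent POM -->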
</dependency>
```
2015-06-05 21:52:09 +02:00
Boaz Leskes 6aa27a16c6 GatewayAllocator: reset rerouting flag after error
After asynchronously fetching shard information, the gateway allocator issues a reroute via a cluster state update task. #11421 introduced an optimization trying to avoid submitting unneeded reroutes when results for many shards come in together. This is done by having a rerouting flag, indicating a pending reroute is coming and thus any new incoming shard info doesn't need to issue a reroute. This flag wasn't reset upon an error in the reroute update task. Most notably, if a master node had to step down due to a min_master_node violation, it could reject an ongoing reroute. Failing to reset the flag caused it to skip any future reroute when the node became master again.

Closes #11519
2015-06-05 21:21:09 +02:00
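
A generic sketch of the flag pattern and the fix; all names are illustrative, not the actual GatewayAllocator code. The flag suppresses redundant reroute submissions, so it must be cleared on failure as well, or the node never submits another reroute:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class RerouteScheduler {
    private final AtomicBoolean reroutePending = new AtomicBoolean();

    interface Listener {
        void onSuccess();
        void onFailure(Exception e);
    }

    // Called whenever new shard info arrives; keeps at most one reroute in flight.
    void maybeSubmitReroute() {
        if (!reroutePending.compareAndSet(false, true)) {
            return; // a reroute is already pending, no need to submit another
        }
        submitClusterStateUpdate(new Listener() {
            @Override
            public void onSuccess() {
                reroutePending.set(false);
            }

            @Override
            public void onFailure(Exception e) {
                // The fix: reset the flag here too. Before, a rejected task
                // (e.g. after losing mastership) left the flag set forever,
                // silently skipping every future reroute.
                reroutePending.set(false);
            }
        });
    }

    void submitClusterStateUpdate(Listener listener) { /* enqueue the task */ }
}
```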
Igor Motov 1d02212b1c Snapshot/Restore: blob store shouldn't try deleting the write.lock file at the end of the restore process
Since we are creating write.lock earlier now, the blob store shouldn't attempt to delete this file during cleanup at the end of the restore process. The file is locked, so the blob store doesn't succeed, and it generates a lot of useless warnings: "failed to delete file [write.lock] during snapshot cleanup".

Closes #11517
2015-06-05 08:54:21 -10:00
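
An illustrative sketch of the cleanup filter (all names assumed, not the actual blob store code): the restore cleanup simply skips the locked write.lock file instead of attempting a delete that can only fail and log warnings:

```java
import java.io.IOException;
import java.util.List;

class RestoreCleanup {
    interface BlobContainer {
        void deleteBlob(String name) throws IOException; // assumed minimal interface
    }

    // Delete files left over from the restore, skipping the Lucene write.lock:
    // it was created earlier and is held open, so deleting it cannot succeed.
    static void cleanupLeftovers(BlobContainer container, List<String> leftovers) throws IOException {
        for (String name : leftovers) {
            if ("write.lock".equals(name)) {
                continue; // locked by the engine; a delete attempt only logs warnings
            }
            container.deleteBlob(name);
        }
    }
}
```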
Christoph Büscher b1a566fdf2 moving files that do not exist on master to core after the introduction of the multi-module build 2015-06-05 16:26:23 +02:00
Christoph Büscher 0f856cc7f9 Merge branch 'master' into feature/query-refactoring 2015-06-05 16:21:21 +02:00
Simon Willnauer 29d06605c0 add core module 2015-06-05 13:12:05 +02:00
Simon Willnauer 15a6244834 create core module 2015-06-05 13:12:03 +02:00