All the ordinary test operations are based on the ImmutableTestCluster base class and are executed via the transport client. This will be used especially for the REST tests once they are migrated to the standard randomized runner.
Added a new httpAddresses method to ImmutableTestCluster to be able to retrieve the HTTP addresses to connect to for the REST tests. Both versions look inside the cluster to figure out which nodes are available for HTTP calls and their addresses.
The external cluster is used as the global cluster if the tests.cluster system property is available. The property needs to contain a comma-separated list of available elasticsearch nodes to connect to (e.g. localhost:9300,localhost:9301).
Only a subset of the integration tests can currently be run successfully against the external cluster: specifically, the ones that don't modify the cluster layout (i.e. don't require cluster() functionality but rely only on immutableCluster()). Also, at least two data nodes are required, otherwise the ensureGreen calls cannot succeed.
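For illustration, a minimal sketch of how an external-cluster impl might connect using the 1.x transport client (the `ignore_cluster_name` setting is an assumption of this sketch, not part of the change):
```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ExternalClusterSupport {
    // parse -Dtests.cluster=localhost:9300,localhost:9301 and connect to each node
    public static TransportClient connect(String spec) {
        TransportClient client = new TransportClient(ImmutableSettings.settingsBuilder()
                .put("client.transport.ignore_cluster_name", true) // assumption for this sketch
                .build());
        for (String node : spec.split(",")) {
            String[] hostPort = node.split(":");
            client.addTransportAddress(new InetSocketTransportAddress(
                    hostPort[0], Integer.parseInt(hostPort[1])));
        }
        return client;
    }
}
```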
Closes#5630
The way that SearchServiceTransportAction executes actions on a local node
today would propagate exceptions thrown in onResult to onFailure.
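A minimal sketch of the problematic pattern (names are illustrative, not the actual code):
```java
// illustrative listener interface, not the actual SearchServiceTransportAction types
interface ResultListener<T> {
    void onResult(T result);
    void onFailure(Throwable t);
}

class Delivery {
    // if onResult itself throws, the same listener is wrongly notified a second time
    static <T> void deliver(ResultListener<T> listener, T result) {
        try {
            listener.onResult(result);
        } catch (Throwable t) {
            listener.onFailure(t); // wrong: the result was already handed to the listener
        }
    }
}
```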
Close#5629
An infrastructure that allows simulating different network topology failures, including two basic ones: failure to send requests, and unresponsive nodes.
closes#5631
Today the cluster state is not associated with the cluster_name, which is odd
since it's pretty much its only valid identifier. Any node can send a cluster
state to another node today even if it's not in the same cluster. These situations
can happen rarely during tests, or even in the wild by accident.
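A hypothetical sketch of the kind of guard this association enables (not the actual handler code):
```java
import org.elasticsearch.ElasticsearchIllegalStateException;
import org.elasticsearch.cluster.ClusterName;

class IncomingClusterStateGuard {
    // hypothetical: reject cluster states coming from a different cluster
    static void validate(ClusterName local, ClusterName incoming) {
        if (!local.equals(incoming)) {
            throw new ElasticsearchIllegalStateException(
                    "received cluster state from a node belonging to a different cluster");
        }
    }
}
```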
- removed an abstraction layer that handles the values source (consolidated values source with field data source)
- better handling of value parser/formatter in range & histogram aggs
- the buckets key will now be shown by default in range agg
Simplify the REST response class hierarchy by using BytesReference for content and handling JSONP internally in the respective channel that sends the response.
Also, handle the future case where bytes might be releasable, when we start to potentially recycle bytes output stream.
A bunch of minor fixes are included here, mostly due to wrongly parsed
mappings. Also, relying on assertions resulted in an NPE because they
are disabled in the distribution.
Closes#5525
The new base class contains all the immutable methods for a cluster, which will be extended by the future ExternalCluster impl that relies on an external cluster and won't be adding nodes etc. but only sending requests to an existing cluster whose layout never changes
Closes#5620
Before a bulk request is executed, missing indices are created by default.
If this fails, the whole request is failed.
This patch changes the behaviour to not fail the whole request, but rather only
the requests targeting the indices whose creation failed.
Closes#4987
If a node fails to forward a master node operation to the current master, it will schedule a retry using a listener for cluster state changes. Once the listener is in place (and future changes are guaranteed to be observed), it double checks that nothing has changed during the addition of the listener. This check was previously changed to use cluster state versions (see #5499). This is, however, not a reliable solution, as master elections (which change the master) do not increment the cluster state version and could thus be missed. This commit changes the check to use reference equality, making it stricter.
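A minimal sketch of the stricter check (illustrative, not the actual code):
```java
void addListenerAndDoubleCheck(ClusterService clusterService, ClusterStateListener listener) {
    // capture the state before registering, then compare by identity: a re-election can
    // produce a new ClusterState instance without bumping the cluster state version
    ClusterState observed = clusterService.state();
    clusterService.add(listener);
    if (clusterService.state() != observed) {
        // something changed while the listener was being registered: retry immediately
    }
}
```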
Closes#5548
Although the high-severity bugs were mostly static variables that were not final,
the analysis also found potential NPEs and bugs like `x ^= x;` (which always zeroes
`x`) or equality checks on objects that don't share a common interface.
Close#5571
For around 10% of the calls that support the GET method, the request body is now sent as the `source` parameter with the GET method (no randomized method in this case).
This way we can find bugs like #5556 (missing support for source param).
The `field_value_factor` function uses the value of a field in the
document to influence the score.
A query that looks like:
```
{
    "query": {
        "function_score": {
            "query": {"match": { "body": "foo" }},
            "functions": [
                {
                    "field_value_factor": {
                        "field": "popularity",
                        "factor": 1.1,
                        "modifier": "square"
                    }
                }
            ],
            "score_mode": "max",
            "boost_mode": "sum"
        }
    }
}
```
Would have the score modified by:
`square(1.1 * doc['popularity'].value)`
Closes#5519
Allow configuring at the index level which blocks can optionally be applied, using the tribe.blocks.indices prefix settings.
Allow controlling what happens when a conflict is detected on index names coming from several clusters, using the tribe.on_conflict setting. The default remains "any", but "drop" and "prefer_[tribeName]" are now also supported.
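A hedged configuration sketch (the tribe names and the exact blocks key are assumptions based on the prefixes above):
```java
// sketch: tribe node settings using the new options
Settings settings = ImmutableSettings.settingsBuilder()
        .put("tribe.t1.cluster.name", "cluster_one")
        .put("tribe.t2.cluster.name", "cluster_two")
        .put("tribe.on_conflict", "prefer_t1")        // "any" (default), "drop" or "prefer_[tribeName]"
        .put("tribe.blocks.indices.write", "block_*") // hypothetical key under the tribe.blocks.indices prefix
        .build();
```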
closes#5501
We have had a couple of bugs because these methods were used without paying
attention to the fact that they might return a negative value when provided with MIN_VALUE.
There is one common and legitimate usage of this method: performing
a modulo operation that always returns a positive number. This use-case
has been extracted to MathUtils.mod.
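A sketch of what such a helper looks like (the actual MathUtils.mod may differ in details):
```java
// always returns a value in [0, m) for positive m, even for Integer.MIN_VALUE,
// where Math.abs(v) % m could go negative
public static int mod(int v, int m) {
    int r = v % m;
    return r < 0 ? r + m : r;
}
```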
Close#5562
* Only run the test with one node, to make sure only a single purger thread is running.
* Instead of generating the _timestamp upon indexing in ES, define the _timestamp in the documents.
* Verify the responses from the index requests.
This is the first step towards making it possible to have a different impl of TestCluster (e.g. based on an external cluster) that has the same methods but different implementations for them (e.g. it might use the REST API to do the same things instead of the Java API)
Closes#5542
The current implementations of BytesReference.Helper.bytesEquals and
bytesHashCode materialize a byte[] when passed an instance that returns `false`
in `hasArray()`. They should instead do a byte-by-byte comparison/hashcode
computation without materializing the array.
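A sketch of the array-free variant (simplified; the real helper may compare in chunks):
```java
import org.elasticsearch.common.bytes.BytesReference;

class BytesReferenceCompare {
    // compare two BytesReference values byte by byte, without copying into a byte[]
    static boolean slowBytesEquals(BytesReference a, BytesReference b) {
        if (a.length() != b.length()) {
            return false;
        }
        for (int i = 0; i < a.length(); i++) {
            if (a.get(i) != b.get(i)) {
                return false;
            }
        }
        return true;
    }
}
```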
Close#5517
TransportMasterNodeOperationAction forwards incoming requests to the currently known master node. If that fails due to a connection error, a cluster state listener is added in order to try again when a new master is elected. After the listener is in place, a check is made to see whether the master has changed *while* the listener was being added, so that change is not missed. The check was not enough, as it may be that the same master was re-elected (for example, after a network hiccup) and the check would therefore miss the re-election event. In these cases, the request would time out unjustly. This commit changes this check to be stricter and retries if the cluster state version changed during the addition of the listener.
Closes#5499
When updating the min_master_nodes setting via the Cluster Settings API, the change is propagated to all nodes. The current master node also updates the ElectMasterService and validates that it still sees enough master-eligible nodes and that its election is still valid. Other master-eligible nodes do not go through this validation (good) but also didn't update the ElectMasterService with the new settings. The result is that if the current master goes away, the next election is not done with the latest setting.
Note - min_master_nodes set in the elasticsearch.yml file is processed correctly
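For reference, a sketch of the dynamic update this concerns (standard cluster settings update API; the value is illustrative):
```java
// update minimum_master_nodes on a running cluster
client.admin().cluster().prepareUpdateSettings()
        .setTransientSettings(ImmutableSettings.settingsBuilder()
                .put("discovery.zen.minimum_master_nodes", 3)
                .build())
        .get();
```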
Closes#5494
Currently we close the store, and therefore the underlying directory,
when the engine / shard is closed, i.e. during relocation etc. We also
just close it while there are still searches going on and/or we are
recovering from it. The recoveries might fail, which is ok, but searches
etc. might still be in flight (e.g. pending fetch phases).
The contract of the Directory doesn't prevent reading from a stream
that was already opened before the Directory was closed, but from a
system-boundary perspective, and given the lifecycles that we test,
waiting until all resources are released seems to be the right thing to do.
Additionally, it also helps to make sure everything is closed
properly before the directory itself is closed.
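A minimal ref-counting sketch of the idea (not the actual Store implementation):
```java
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedStore {
    private final AtomicInteger refCount = new AtomicInteger(1); // the shard holds the initial reference

    void incRef() {
        refCount.incrementAndGet(); // a search or recovery starts using the store
    }

    void decRef() {
        if (refCount.decrementAndGet() == 0) {
            closeDirectory(); // only the last release actually closes the underlying directory
        }
    }

    private void closeDirectory() { /* close the underlying Lucene Directory */ }
}
```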
Note: this commit adds Object#wait & Object#notify/notifyAll to the forbidden APIs
Closes#5432
The default mustache engine was using HTML escaping which breaks queries
if used with JSON etc. This commit adds escaping for:
```
\b Backspace (ascii code 08)
\f Form feed (ascii code 0C)
\n New line
\r Carriage return
\t Tab
\v Vertical tab
\" Double quote
\\ Backslash
```
Closes#5473
Adds a new API endpoint at /_recovery as well as to the Java API. The
recovery API allows one to see the recovery status of all shards in the
cluster. It will report on percent complete, recovery type, and which
files are copied.
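A hedged Java API sketch (the request builder name follows the 1.x convention; details may differ):
```java
// fetch the recovery status of all shards of an index via the new API
RecoveryResponse response = client.admin().indices().prepareRecoveries("my_index").get();
```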
Closes#4637
By default the date_histogram/histogram returns all the buckets within the range of the data itself; that is, the documents with the smallest values (of the field on which the histogram is computed) determine the min bucket (the bucket with the smallest key), and the documents with the highest values determine the max bucket (the bucket with the highest key). Often, when requesting empty buckets (min_doc_count: 0), this causes confusion, specifically when the data is also filtered.
To understand why, let's look at an example:
Let's say you're filtering your request to get all docs from the last month, and in the date_histogram agg you'd like to slice the data per day. You also specify min_doc_count: 0 so that you still get empty buckets for the days to which no document belongs. By default, if the first document in this last month happens to fall on the first day of the **second week** of the month, the date_histogram will **not** return empty buckets for all the days prior to that second week. The reason is that by default the histogram aggregations only start building buckets when they encounter documents (hence missing all the days of the first week in our example).
With extended_bounds, you can now "force" the histogram aggregations to start building buckets at a specific min value and also keep on building buckets up to a max value (even if there are no documents anymore). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if min_doc_count is greater than 0).
Note that (as the name suggests) extended_bounds is **not** filtering buckets. Meaning, if the min bound is higher than the values extracted from the documents, the documents will still dictate what the min bucket is (and the same goes for extended_bounds.max and the max bucket). For filtering buckets, one should nest the histogram agg under a range filter agg with the appropriate min/max.
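A hedged sketch of the example above via the Java API (builder method names assumed from the feature description):
```java
// a fixed one-month window of daily buckets, including empty days
SearchResponse response = client.prepareSearch("logs")
        .setQuery(QueryBuilders.rangeQuery("timestamp").gte("2014-03-01").lt("2014-04-01"))
        .addAggregation(AggregationBuilders.dateHistogram("per_day")
                .field("timestamp")
                .interval(DateHistogram.Interval.DAY)
                .minDocCount(0)
                .extendedBounds("2014-03-01", "2014-03-31")) // assumed signature
        .get();
```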
Closes#5224
The test currently uses ensureGreen to guarantee all shard allocations have happened. This only guarantees it from the cluster perspective, and in some cases the nodes are not fast enough to apply the changes (which is what the test is about).
Default precision was computed based on the number of MULTI_BUCKET parents
instead of PER_BUCKET.
The ordinals-based execution mode was almost always used although ordinals
might have non-negligible memory usage compared to the counters.
Close#5452
* moved `geo_point` parsing to GeoUtils
* cleaned up `gzipped.json` for bulktest
* merged `GeoPointFieldMapper` and `GeoPoint` parsing methods
Closes#5390
Instead of using byte arrays, pass the BytesReference to the actual translog file, and use the new copyTo(channel) method to write. This improves performance by avoiding a potential conversion of the data to a byte array.
closes#5463
Today, we use ConcurrentMergeScheduler, and this can be painful since it is concurrent on a shard level, with a max of 3 threads doing concurrent merges. If there are several shards being indexed, then there will be a minor explosion of threads trying to do merges, all being throttled by our merge throttling.
Moving to serial merge scheduler will still maintain concurrency of merges across shards, as we have the merge thread pool that schedules those merges. It will just be a serial one on a specific shard.
Also, on serial merge scheduler, we now have a limit of how many merges it will do at one go, so it will let other shards get their fair chance of merging. We use the pending merges on IW to check if merges are needed or not for it.
Note, that if a merge is happening, it will not block due to a sync on the maybeMerge call at indexing (flush) time, since we wrap our merge scheduler with the EnabledMergeScheduler, where maybeMerge is not activated during indexing, only with explicit calls to IW#maybeMerge (see Merges).
closes#5447
Two changes:
1. In the StupidBackoffScorer only look for the trigram if there is a bigram.
2. Cache the frequencies in WordScorer so we don't look them up again and
again and again. This is implemented by wrapping the TermsEnum in a special
purpose wrapper that really only works in context of the WordScorer.
This provides a pretty substantial speedup when there are many candidates.
Closes#5395
* Index one at a time only rarely if doing more than 300.
* When launching async actions, take some care to make sure you don't already
have more than 150 other async actions in flight.
* When indexing in bulk split into chunks of 1000 documents.
Seen during CI tests: it can happen that the download service is not available for some reason.
This fix makes each test which requires internet access (annotated with @Network) check beforehand that the download service we are testing is still working.
It won't fail the test but will mark the test as `Ignored` in case of failure.
If the async sendPingsHandler throws an unexpected exception, the
corresponding latch is never counted down. This might only happen
during node shutdown but can still cause starvation or test failures.
If we want to have a full picture of versions running in a cluster, we need to add a `_cat/plugins` endpoint.
Response could look like:
```sh
% curl es2:9200/_cat/plugins?v
node component version type url desc
es1 mapper-attachments 1.7.0 j Adds the attachment type allowing to parse difference attachment formats
es1 lang-javascript 1.4.0 j JavaScript plugin allowing to add javascript scripting support
es1 analysis-smartcn 1.9.0 j Smart Chinese analysis support
es1 marvel 1.1.0 j/s http://localhost:9200/_plugins/marvel Elasticsearch Management & Monitoring
es1 kopf 0.5.3 s http://localhost:9200/_plugins/kopf kopf - simple web administration tool for ElasticSearch
es2 mapper-attachments 2.0.0.RC1 j Adds the attachment type allowing to parse difference attachment formats
es2 lang-javascript 2.0.0.RC1 j JavaScript plugin allowing to add javascript scripting support
es2 analysis-smartcn 2.0.0.RC1 j Smart Chinese analysis support
```
Closes#4824.
Fixed minor logging discrepancies introduced with randomized shard count.
Added logging to recoverWhileUnderLoadWithNodeShutdown.
Added logging to ElasticsearchIntegrationTest.allowNodes to indicate which nodes were excluded.
recoverWhileRelocating's shard settings were potentially ignored (depending on key order in hashmaps)
When you set a BulkProcessor with a bulk actions size of 100, it executes the bulk after 101 documents.
```java
BulkProcessor.builder(client(), listener).setBulkActions(100).setConcurrentRequests(1).setName("foo").build();
```
Same for size: if you set the bulk size to 1024 bytes, it will actually execute the bulk after 1025 bytes.
This patch fixes it.
Closes#4265.
We have the default QueryWrapperFilter as well as our custom one, where
our wrapper is explicitly marked as no_cache such that it will never
be included in a cache. This was not consistently used and caused several
problems during tests where p/c related queries were used as filters
and ended up in the cache. This commit adds the QueryWrapperFilter
ctor to the forbidden APIs to enforce the query instance checks.
Some mappers do not support having externalValue() set, so plugin developers can't use it while building their own mappers.
Support added in this PR for:
* `BinaryFieldMapper`
* `BooleanFieldMapper`
* `GeoPointFieldMapper`
* `GeoShapeFieldMapper`
Closes#4986.
Relates to #4154.
Due to bugs in the JVM (specifically on OSX), running zen discovery tests causes "socket close" failures on receive on the multicast socket, and under some JVM versions even crashes. This happens because of the creation of multiple multicast sockets within the same VM. In practice, in our tests, we use the same settings, so we can share the same multicast socket across multiple channels.
This change creates an abstraction called MulticastChannel, that can be shared, with ref counting. Today, the shared option is only enabled under OSX.
closes#5410
A missing call to ArrayUtil.grow prevented the array that stores the values
from growing in case the number of values returned by the script was higher
than the original size of the array.
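A sketch of the missing growth step (ArrayUtil is Lucene's org.apache.lucene.util.ArrayUtil; names are illustrative):
```java
// grow the backing array before appending, instead of assuming the original size
double[] append(double[] values, int count, double nextScriptValue) {
    values = ArrayUtil.grow(values, count + 1); // this call was missing
    values[count] = nextScriptValue;
    return values;
}
```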
Close#5414
Note that the standard `atLeast` implementation now has Integer.MAX_VALUE as its upper bound, thus it behaves differently from what we expect in our tests, as we never expect the upper bound to be that high.
Added our own `atLeast` to `AbstractRandomizedTest` so that it has the expected behaviour with a reasonable upper bound.
See https://github.com/carrotsearch/randomizedtesting/issues/131
Significance is related to the changes in document frequency observed between everyday use in the corpus and
frequency observed in the result set. The asciidocs include extensive details on the applications of this feature.
Closes#5146
Lucene's RamUsageEstimator.sizeOf(Object) is too expensive.
Query size estimation will be enabled when a cheaper way of estimating query size can be found.
Closes#5372
Relates to #5339
During snapshot finalization the snapshot file is overwritten. If we try to read the snapshot file at that moment, we can get back an empty or incomplete snapshot. This change adds a retry mechanism for such failures.
The test was shutting down nodes even if some of the indices had only a
single shard. This could leave basically no shard active that
could serve the docs and caused random failures. This commit fixes the
test to allocate enough shards such that we never need to resize
the cluster, which also makes the test faster.
This aggregation computes unique term counts using the hyperloglog++ algorithm
which uses linear counting to estimate low cardinalities and hyperloglog on
higher cardinalities.
Since this algorithm works on hashes, it is useful for high-cardinality fields
to store the hash of values directly in the index, which is the purpose of
the new `murmur3` field type. This is less necessary on low-cardinality
string fields because the aggregator is smart enough to only compute the hash
once per unique value per segment thanks to ordinals, or on numeric fields
since hashing them is very fast.
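A hedged Java API sketch of the new aggregation (names follow the 1.x builders; imports elided):
```java
// approximate distinct count of a high-cardinality field
SearchResponse response = client.prepareSearch("index")
        .setSearchType(SearchType.COUNT)
        .addAggregation(AggregationBuilders.cardinality("unique_users").field("user_id"))
        .get();
Cardinality unique = response.getAggregations().get("unique_users");
long approxCount = unique.getValue(); // approximate, per hyperloglog++
```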
Close#5426
Aggregations were still using CacheRecycler on the reduce phase. They are now
using page-based recycling for both the aggregation phase and the reduce phase.
Close#4929
Due to a regression, edit distances > 2 threw exceptions after unifying
the fuzziness factor in Elasticsearch `1.0`. This commit brings back the
expected behavior.
Closes#5292
Introduced two levels of randomization for the number of replicas when running tests:
1) through the existing random index template, which now sets a random number of replicas (either 0 or 1) that is shared across all the indices created in the same test method unless overwritten
2) through the createIndex and prepareCreate methods, between 0 and the number of data nodes available, similar to what happens using the indexSettings method, which changes for every createIndex or prepareCreate unless overwritten (overrides the index template for what concerns the number of replicas)
Added the following facilities to deal with the random number of replicas:
- made it possible to retrieve how many data nodes are available in the `TestCluster`
- added common methods similar to indexSettings, to be used in combination with createIndex and prepareCreate method and explicitly control the second level of randomization: numberOfReplicas, minimumNumberOfReplicas and maximumNumberOfReplicas
Tests that specified the number of replicas have been reviewed:
- removed manual replicas randomization where present, replaced with ordinary one that's now available
- adapted tests that didn't need a specific number of replicas to the new random behaviour
- also done some more cleanup, used common methods like assertAcked, ensureGreen, refresh, flush and refreshAndFlush where possible
This commit extends the `CompletionSuggester` with context
information. For example, such a context can be a simple
string representing a category, narrowing the suggestions
down to that category.
Base implementations of these contexts have been set up in
this commit:
- a Category Context
- a Geo Context
All the mappings for these contexts are specified within a
context field of the completion field that should use this
kind of information.
Per single main ping request we maintain the received ping response per node. Each node-level ping response is mapped into that. If a response has already been set for a node from a previous node-level ping request, it is overwritten; we give higher value to the latest response. This change makes sure this doesn't happen if the previous response has a master set and the current response hasn't. Otherwise a node would lose the fact that another node has elected itself as master, and the result would be multiple master nodes in a single cluster.
Closes#5413
It can happen that not all shards are ready, thus we won't have a total failure, but we do need to check that we have at least one failure. Also checked the message of the failure.
If randomization brings up a single shard per index in this test,
we might run our searches on only one index, which causes the assertions
to fail afterwards. That's why we need to wait until everything is allocated.
The delete snapshot operation on a running snapshot should cancel the snapshot execution. However, it interrupts the snapshot only when currently running snapshot files are completely copied, which might take a long time for large files.
Closes#5242
- pre_zone_adjust_large_interval was not parsed properly
- added tests for pre_zone and pre_zone_adjust_large_interval
- changed DateHistogram#getBucketByKey(String) to support date formats (next to numeric strings)
- added randomized testing for fetching the bucket by key in date_histogram tests
- added missing "format" support in DateHistogramBuilder
Closes#5375
Introduced two levels of randomization for the number of shards (between 1 and 10) when running tests:
1) through the existing random index template, which now sets a random number of shards that is shared across all the indices created in the same test method unless overwritten
2) through `createIndex` and `prepareCreate` methods, similar to what happens using the `indexSettings` method, which changes for every `createIndex` or `prepareCreate` unless overwritten (overwrites index template for what concerns the number of shards)
Added the following facilities to deal with the random number of shards:
- `getNumShards` to retrieve the number of shards of a given existing index, useful when doing comparisons based on the number of shards and we can avoid specifying a static number. The method returns an object containing the number of primaries, number of replicas and the total number of shards for the existing index
- added `assertFailures` that checks that a shard failure happened during a search request, either partial failure or total (all shards failed). Checks also the error code and the error message related to the failure. This is needed as without knowing the number of shards upfront, when simulating errors we can run into either partial (search returns partial results and failures) or total failures (search returns an error)
- added common methods similar to `indexSettings`, to be used in combination with the `createIndex` and `prepareCreate` methods to explicitly control the second level of randomization: `numberOfShards`, `minimumNumberOfShards` and `maximumNumberOfShards`. Added also `numberOfReplicas`, even though the number of replicas is not randomized (default not specified, but it can be overwritten by tests)
Tests that specified the number of shards have been reviewed and the results follow:
- removed number_of_shards in node settings, ignored anyway as it would be overwritten by both mechanisms above
- removed the specific number of shards when not needed
- removed manual shards randomization where present, replaced with ordinary one that's now available
- adapted tests that didn't need a specific number of shards to the new random behaviour
- fixed a couple of test bugs (e.g. 3 levels parent child test could only work on a single shard as the routing key used for grand-children wasn't correct)
- also done some cleanup, shared code through shard size facets and aggs tests and used common methods like `assertAcked`, `ensureGreen`, `refresh`, `flush` and `refreshAndFlush` where possible
- made sure that `indexSettings()` is always used as a basis when using `prepareCreate` to inject specific settings
- converted indexRandom(false, ...) + refresh to indexRandom(true, ...)
There are cases where nodes should not connect to another node. For example, client nodes do not connect to other client nodes. When executing a nodes-level API, we should have a nicer failure indicating that, and not log it by default, as it's not a real failure.
Supports sorting on sub-aggs down the current hierarchy. This is supported as long as the aggregations in the specified order path are of a single-bucket type, where the last aggregation in the path points to either a single-bucket aggregation or a metrics one. If it's a single-bucket aggregation, the sort is applied on the document count in the bucket (i.e. doc_count); if it is a metrics type, the sort is applied on the pointed-out metric (in case of a single-metric aggregation, such as avg, the sort is applied on the single metric value).
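A hedged sketch of such an order path via the Java API (aggregation names are illustrative):
```java
// order the genre buckets by the avg_rating metric nested under a single-bucket filter agg
SearchResponse response = client.prepareSearch("index")
        .addAggregation(AggregationBuilders.terms("genres")
                .field("genre")
                .order(Terms.Order.aggregation("published_only>avg_rating", false)) // descending
                .subAggregation(AggregationBuilders.filter("published_only")
                        .filter(FilterBuilders.termFilter("published", true))
                        .subAggregation(AggregationBuilders.avg("avg_rating").field("rating"))))
        .get();
```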
NOTE: this commit adds a constraint on what should be considered a valid aggregation name. Aggregations names must be alpha-numeric and may contain '-' and '_'.
Closes#5253
Today, even though our merge policy doesn't return new merge specs on SEGMENT_FLUSH, merge on the scheduler is still called at flush time, and can cause merges to stall indexing during merges, both for the concurrent merge scheduler (the default) and the serial merge scheduler. This behavior becomes worse when throttling kicks in (today at 20mb per sec).
In order to solve it (outside of Lucene for now), we wrap the merge scheduler with an EnableMergeScheduler, where, on the thread level, using a thread local, the call to merge can be enabled/disabled.
A Merges helper class is added where all explicit merges operations should go through. If the scheduler is the enabled one, it will enable merges before calling the relevant explicit method call. In order to make sure Merges is the only class that calls the explicit merge calls, the IW variant of them is added to the forbidden APIs list.
closes#5319
Each plugin is now loaded in its own classloader to prevent class
conflicts when loading different versions of the same library. It
is enabled by default and is configurable through the
`plugins.isolation` setting. Additionally, each plugin can
change its own isolation through the `isolation` property in
`es-plugin.properties`; if not specified, the global setting in
ES applies.
Closes#5261
The BigArrays utility class is useful to generate arrays of various sizes: when
small, arrays are allocated directly on the heap, while larger arrays are
paged and recycle their pages through PageCacheRecycler. We already
have tracking for pages, but this is not triggered very often since it only
happens on large amounts of data, while our tests work on small amounts of data
in order to be fast.
Tracking arrays directly helps make sure that we never forget to release them.
This pull request also improves testing by:
- putting random content in the arrays upon release: this makes sure that
consumers don't use these arrays anymore when they are released as their
content may be subject to use for another purpose since pages are recycled
- putting random content in the arrays upon creation and resize when
`clearOnResize` is `false`.
The major difference with `master` is that the `BigArrays` class is now
instantiable, injected via Guice and usually available through the
`SearchContext`. This way, it can be mocked for tests.
It was being invoked once per reader and parent type combination resulting in more memory being reported to the circuit breaker than actually being used in field data.
Lucene 4.7 supports a setter for the `filler_token` that is
inserted if there are gaps in the token stream. This change exposes
this setting.
Closes#4307
The `ShardOperationFailedException` is now created within `TransportIndexReplicationAction` passing in the current shard id as a constructor argument.
Also replaced `AtomicReferenceArray<Object>` with `AtomicReferenceArray<ShardActionResult>`, where `ShardActionResult` wraps the `ShardResponse` or the failure, containing all the needed info.
seed from the main master seed. Removed shared cluster's seed entirely.
The problem here is that if you don't give the cluster's seed then test times
fluctuate oddly, even for a fixed -Dtests.seed=... This shouldn't be the
case -- ideally, a test run with the same master seed should reproduce
pretty much with the same execution time (and internal logic, obviously).
From the code point of view "global" variables are indeed a problem
because JUnit has no notion of before-suite hooks. And RandomizedRunner
doesn't support context accesses in static class initializers (this is
intentional because there is no way to determine when such initializers
will be executed). A workaround is to move such static global variables to
lazily-initialized methods and invoke them (once) in @BeforeClass hooks.
The thread-local recycler requires obtain and recycle to be called on the same thread, while other recyclers do not. Also, it can create heavy recycler usage since it depends on the threads it is being used on. The concurrent / pinned-thread based one (the default) is by far better than the pure thread-local one, since it more easily bounds the recycled elements while still allowing obtain and recycle to be mixed across threads.
We will end up using the paged recyclers more and more, for example in our networking output buffer, where obtaining happens on one thread while recycling can potentially occur on another thread (the callback thread). Since binding both calls to the same thread is not really needed, and our best implementation supports going across threads, there is no real need to impose this restriction.
Some of the highlighters require term extraction to be implemented in
order to work. BlendedTermQuery doesn't implement the trivial extraction.
Closes#5246
- introduce additional destroy() callback that allows better control
over internals of recycled data
- introduced AbstractRecyclerC as superclass for Recycler factories
(Adrien) with empty destroy() by default
- added tests for destroy()
- cleaned up Recycler tests (reduce copy&paste)
A Field instance can map to multiple actual fields when using wildcard expressions. Each actual field should use the proper highlighter depending on the available data structure (e.g. term_vectors), while we currently select the highlighter for the first field and keep using the same one for all the fields that match the wildcard expression.
Modified also how the PercolateContext sets the forceSource option, in a global manner now rather than per field.
Closes#5175
When starting elasticsearch as the wrong linux user, a `NullPointerException` could be generated when `PluginsService` tries to list the available plugins in the `./plugins` dir.
To reproduce:
* create a plugins directory with `rwx` rights for root user only
* launch elasticsearch from another account (elasticsearch for example)
It was supposed to be fixed with #4186, but sadly it's not :-(
Closes#5195.
In #4052 we added support for highlighting multi term queries using the postings highlighter. That worked only for top-level queries though, and not for multi term queries that are nested for instance within a bool query, or filtered query, or a constant score query.
The way we make this work is by walking the query structure and temporarily overriding the query rewrite method with a method that allows for multi terms extraction.
Closes#5102
Fixes#5128
Removed Java 7 specific Locale functions, added "coming[1.1.0]" to the documentation,
and added a LocaleUtils utility class for dealing with Locale functions.
When running tests for site plugins, it can happen that the REST service is not fully started and thus not immediately ready to serve HTTP requests, which results in a `503 Service Unavailable` error.
This patch allows up to 5 seconds before failing the test.
Adds support for storing mustache-based query templates that can later be filled
with query parameter values at execution time. Templates may be quoted or
non-quoted, or reference templates stored in config/scripts/*.mustache by file
name.
See docs/reference/query-dsl/queries/template-query.asciidoc for templating
examples.
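A hedged Java API sketch (the TemplateQueryBuilder constructor shape is an assumption based on the feature description):
```java
// fill a mustache-templated match query with parameters at execution time
Map<String, Object> params = new HashMap<String, Object>();
params.put("param_gender", "male");
SearchResponse response = client.prepareSearch("index")
        .setQuery(new TemplateQueryBuilder("{\"match\": {\"gender\": \"{{param_gender}}\"}}", params))
        .get();
```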
Implementation detail: mustache itself is being shaded as it depends directly on
guava - so having it marked optional but included in the final distribution
raises chances of version conflicts downstream.
Fixes#4879
`_exists_` and `_missing_` lack the field name expansion that `exists` and
`missing` have, which allows those filters to work on `object` fields.
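For illustration, a usage sketch via the query string (field name is illustrative):
```java
// after this change, _exists_ expands object fields like the exists filter does
SearchResponse response = client.prepareSearch("index")
        .setQuery(QueryBuilders.queryString("_exists_:address")) // matches docs with any address.* sub-field
        .get();
```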
Close#5142