Commit Graph

8643 Commits

Author SHA1 Message Date
Igor Motov fe46ef393b Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS (#25941)
The failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commit ensures that the snapshot shard failure reason is preserved properly and adds a workaround for reading old snapshot files where this information might not have been preserved.
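A minimal sketch of the backwards-compatible read involved, using plain java.io streams and hypothetical names (the real code uses Elasticsearch's own serialization infrastructure, and the version cut-over here is assumed):

```java
import java.io.DataInput;
import java.io.IOException;

// Hypothetical sketch: a shard failure entry whose "reason" may be absent in
// snapshot files written by older versions.
class ShardFailureEntry {
    final String shardId;
    final String reason;

    ShardFailureEntry(String shardId, String reason) {
        this.shardId = shardId;
        // Workaround for old snapshot files: never leave the reason null.
        this.reason = reason != null ? reason : "unknown failure (written by an old version)";
    }

    static ShardFailureEntry read(DataInput in, int formatVersion) throws IOException {
        String shardId = in.readUTF();
        String reason = null;
        if (formatVersion >= 2 && in.readBoolean()) { // version 2 is an assumed cut-over
            reason = in.readUTF();
        }
        return new ShardFailureEntry(shardId, reason);
    }
}
```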

Closes #25878
2017-07-28 12:28:02 -04:00
Martijn van Groningen 7c3735bdc4
percolator: Store the QueryBuilder's Writeable representation instead of its XContent representation.
The Writeable representation is cheaper to parse, which will benefit percolate performance and throughput.

The query builder's binary format now has the same bwc guarantees as the xcontent format.

Added a qa test that verifies that percolator queries written in older versions are still readable by the current version.
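A rough illustration of why a binary round trip beats re-parsing XContent, using a toy query type and plain java.io in place of Elasticsearch's Writeable infrastructure (all names hypothetical):

```java
import java.io.*;

// A toy "term query" serialized as a compact binary stream: reading it back
// is a couple of readUTF calls, with no tokenizer or field-name dispatch.
class TermQuerySketch {
    final String field;
    final String value;

    TermQuerySketch(String field, String value) {
        this.field = field;
        this.value = value;
    }

    byte[] writeTo() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(field);
            out.writeUTF(value);
        }
        return bytes.toByteArray();
    }

    static TermQuerySketch readFrom(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return new TermQuerySketch(in.readUTF(), in.readUTF());
        }
    }
}
```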
2017-07-28 12:24:10 +02:00
Yannick Welsch 1a01514081 Move tribe to a module (#25778)
This commit moves tribe to a module, stripping core from the tribe functionality.
2017-07-28 11:23:50 +02:00
Jim Ferenczi 562c3744ca Merge FunctionScoreQuery and FiltersFunctionScoreQuery (#25889)
This change merges the functionality of the FiltersFunctionScoreQuery in the FunctionScoreQuery.
It also ensures that an exception is thrown when the computed score is equal to Float.NaN or Float.NEGATIVE_INFINITY.
These scores are invalid for TopDocsCollectors that rely on score comparison.
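A sketch of such a guard (hypothetical helper, not the actual FunctionScoreQuery code):

```java
final class ScoreValidation {
    // Scores that are NaN or negative infinity break the comparison-based
    // ordering that top-docs collectors rely on, so fail loudly instead.
    static float ensureValidScore(double score) {
        if (Double.isNaN(score) || score == Double.NEGATIVE_INFINITY) {
            throw new IllegalStateException("function score query returned an invalid score: " + score);
        }
        return (float) score;
    }
}
```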

Fixes #15709
Fixes #23628
2017-07-28 09:22:20 +02:00
Jason Tedor 1492ccd7ae Fix environment-aware command tests
This commit fixes tests for environment-aware commands. A previous
change added a check that es.path.conf is not null. The problem is that
this system property is not being set in tests so this check trips every
single time. To fix this, we move the check into a method that can be
overridden, and then override this method in relevant places in tests to
avoid having to set the property in tests. We also add a test that this
check works as expected.
2017-07-28 14:37:04 +09:00
Jason Tedor c1ee65f990 Remove unused imports from EnvironmentAwareCommand
This commit removes two unused imports from EnvironmentAwareCommand that
were left behind after a previous change.
2017-07-28 12:20:18 +09:00
Jason Tedor 8639bf4a1a Pass config path as a system property
A previous change made it possible for users to configure the
configuration path via the command-line option --path.conf. However, a
subsequent change has made it so that we expect users to set the
configuration path via the environment variable CONF_DIR. To enable
this, we now pass the value of CONF_DIR as the value for the
command-line option --path.conf. This has two problems:
 - the presence of --path.conf always being on the command line breaks
   other flags like --help for multi-commands
 - the scripts for which --help is not broken say that you can pass
   --path.conf, but this is a lie since passing it will make it appear
   twice in the command-line arguments, breaking the script

Since --path.conf is no longer the way that we want users to set the
configuration path, we should remove the --path.conf option. However, we
still need a way to get the configuration path from the scripts to the
running Java process. To do this, we now pass the configuration path as
a system property. This keeps it off the script command line fixing the
above problems.

The only remaining question (that I can see) is whether or not to
respect -Des.path.conf=<some path> if the user sets this in their
jvm.options or via ES_JAVA_OPTS. I think that we should not (as has
been our tradition): es.path.home and es.path.conf are special and
should be set by our scripts only, so users should not be setting them
at all, and we should not take any effort to respect these flags if the
user tries to use them otherwise.
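A sketch of what resolving the path from a system property can look like on the Java side (hypothetical helper; only the property name es.path.conf comes from the discussion above):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

final class ConfigPath {
    // The startup scripts pass -Des.path.conf=<path>; it never appears as a
    // --path.conf command-line argument, so flags like --help keep working.
    static Path resolveConfigPath() {
        String esPathConf = System.getProperty("es.path.conf");
        if (esPathConf == null) {
            throw new IllegalStateException("es.path.conf must be set by the startup scripts");
        }
        return Paths.get(esPathConf);
    }
}
```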

Relates #25943
2017-07-28 12:15:22 +09:00
Yannick Welsch efd79882a2 Allow build to directly run under JDK 9 (#25859)
With Gradle 4.1 and newer JDK versions, we can finally invoke Gradle directly using a JDK9 JAVA_HOME without requiring a JDK8 to "bootstrap" the build. As the thirdPartyAudit task runs within the JVM that Gradle runs in, it now needs to be adapted to be JDK9-aware.

This commit also changes the `JavaCompile` tasks to only fork if necessary (i.e. when Gradle's JVM and JAVA_HOME's JVM differ).
2017-07-27 16:14:04 +02:00
Yannick Welsch 020ba41c5d Close translog view after primary-replica resync (#25862)
The translog view was being closed too early, possibly causing a failed resync. Note: The bug only affects unreleased code.

Relates to #24841
2017-07-27 14:36:51 +02:00
Yannick Welsch 620536f850 Release operation permit on thread-pool rejection (#25930)
At the shard level we use an operation permit to coordinate between regular shard operations and special operations that need exclusive access. In ES versions < 6, the operation requiring exclusive access was invoked during primary relocation, but in ES versions >= 6 this exclusive access is also used when a replica learns about a new primary or when a replica is promoted to primary.

These special operations requiring exclusive access delay regular operations from running by adding them to a queue; after the exclusive access finishes, these operations are released and need to be put back on the original thread-pool they were running on. In the presence of thread pool rejections, the current implementation had two issues:

- it would not properly release the operation permit when hitting a rejection (i.e. when calling ThreadedActionListener.onResponse from IndexShardOperationPermits.acquire).
- it would not invoke the onFailure method of the action listener when the shard was closed, and just log a warning instead (see ThreadedActionListener.onFailure), which would ultimately lead to the replication task never being cleaned up (see #25863).

This commit fixes both issues by introducing a custom threaded action listener that is permit-aware and properly deals with rejections.
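A minimal sketch of the rejection-safe pattern, using a plain Semaphore and ExecutorService in place of the shard's permit and thread-pool machinery (all names hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.function.Consumer;

final class PermitAwareExecution {
    // Run the operation on the target executor while holding a permit; the
    // permit must be released exactly once, whether the task runs or the
    // executor rejects it.
    static void runUnderPermit(Semaphore permit, ExecutorService executor,
                               Runnable operation, Consumer<Exception> onFailure)
            throws InterruptedException {
        permit.acquire();
        try {
            executor.execute(() -> {
                try {
                    operation.run();
                } finally {
                    permit.release(); // normal path: release after the op ran
                }
            });
        } catch (RejectedExecutionException e) {
            permit.release();    // the task will never run, so release here
            onFailure.accept(e); // and tell the caller, rather than just logging
        }
    }
}
```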

Closes #25863
2017-07-27 14:15:00 +02:00
Adrien Grand 1cd5e3413d Caching a MinDocQuery can lead to wrong results. (#25909)
Queries are supposed to be cacheable per segment, yet matches of this query
also depend on how many documents exist on previous segments.
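To see why, consider a simplified sketch of the match condition (hypothetical, not the actual MinDocQuery code): whether a document matches depends on the segment's docBase, which changes whenever the composition of earlier segments changes, and that is exactly what a per-segment cache cannot see.

```java
final class MinDocMatch {
    // A "min doc" query matches documents whose global doc id is at least
    // minGlobalDoc; the same in-segment doc id can flip between matching and
    // not matching purely because docBase moved.
    static boolean matches(int docIdInSegment, int segmentDocBase, int minGlobalDoc) {
        return segmentDocBase + docIdInSegment >= minGlobalDoc;
    }
}
```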
2017-07-27 11:19:20 +02:00
Adrien Grand 876c7e0400 Fix random score generation when no seed is provided. (#25908)
It fixes random score generation to ensure that you will not always get the
same scores on a read-only index by integrating the seed into the score
computation when using doc ids. It also removes `ctx.docBase` from the formula
since it might change over time if deletes are compacted while scores are
supposed to be cacheable per segment.
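A sketch of the general idea (hypothetical hash, not the actual formula): the seed is folded into a per-document hash, and nothing segment-relative like docBase enters the computation.

```java
final class RandomScoreSketch {
    // Mix the seed with the doc id and map the result to [0, 1).
    static float randomScore(int seed, int docId) {
        int h = seed ^ docId;
        h = (h ^ (h >>> 16)) * 0x45d9f3b; // integer finalizer-style mixing
        h = (h ^ (h >>> 16)) * 0x45d9f3b;
        h = h ^ (h >>> 16);
        return (h & 0x00FFFFFF) / (float) (1 << 24);
    }
}
```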
2017-07-27 11:17:56 +02:00
Martijn van Groningen edad7b4737
Add support for selecting percolator query candidate matches containing range queries.
Extracts ranges from range queries on byte, short, integer, long, half_float, scaled_float, float, double, date and ip fields.
byte, short, integer and date ranges are normalized to Lucene's LongRange.
half_float and float are normalized to Lucene's DoubleRange.

When extracting range queries, the QueryAnalyzer computes the width of the range. This width is used to determine
which range should be preferred in a conjunction query. The QueryAnalyzer prefers the smaller ranges, because these
ranges tend to match fewer documents.
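A sketch of that width heuristic, with a hypothetical representation of an extracted range:

```java
import java.util.Comparator;
import java.util.List;

final class RangeSelection {
    // An extracted numeric range, normalized to longs for comparison.
    record ExtractedRange(String field, long from, long to) {
        long width() {
            return to - from;
        }
    }

    // In a conjunction, prefer the narrowest range: it is expected to match
    // the fewest documents.
    static ExtractedRange preferNarrowest(List<ExtractedRange> conjuncts) {
        return conjuncts.stream()
                .min(Comparator.comparingLong(ExtractedRange::width))
                .orElseThrow();
    }
}
```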

Closes #21040
2017-07-26 21:25:45 +02:00
Simon Willnauer b72c71083c Cleanup IndexFieldData visibility (#25900)
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or to look up field data in different ways. Yet, we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing etc.

This change also removes the `recycler` option from `ClearIndicesCacheRequest`; this option is a no-op and should have been removed long ago.
2017-07-26 20:03:42 +02:00
Tim Brooks 6d02b45f10 Support client-only mode for NioTransport (#25839)
Currently, NioTransport does not start normal socket selectors and the
client when the network server setting is set to false. This commit
makes it so that the client will be started even when the network server
is not enabled.

Additionally, it randomly introduces the NioTransport as an option for
the MockTransportClient throughout tests.
2017-07-26 10:27:15 -05:00
Boaz Leskes 03eb1460ad MasterNodeChangePredicate should use the node instance to detect master change (#25877)
This predicate is used to deal with the intricacies of detecting when a master is re-elected or when a node rejoins an existing master. The current implementation is based on nodeIds, which is fine if the master really changes. If the nodeId is equal, the code falls back to detecting an increment in the cluster state version, which happens when a node is re-elected or when the node rejoins. Sadly this doesn't cover the case where the same node is elected after a full restart of all master nodes. In that case we recover the cluster state from disk but the version is reset back to 0. To fix this, the check should be done based on ephemeral IDs, which are reset on restart.
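A sketch of the fixed check, with a hypothetical NodeIdentity type standing in for the discovery node:

```java
final class MasterChangeSketch {
    // Ephemeral IDs are regenerated on every node restart, so they also catch
    // the "same node re-elected after a full restart" case that node IDs and
    // cluster state versions miss.
    record NodeIdentity(String nodeId, String ephemeralId) {}

    static boolean masterChanged(NodeIdentity previous, NodeIdentity current) {
        if (previous == null || current == null) {
            return previous != current;
        }
        return !previous.ephemeralId().equals(current.ephemeralId());
    }
}
```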

Fixes #25471
2017-07-26 17:02:42 +02:00
Luca Cavanna d8203f19fd Remove XContentHelper#toString(ToXContent) in favour of Strings#toString(ToXContent) (#25866)
These two methods do the same thing. The subtle difference between the two is that the former prints out pretty-printed content by default while the latter doesn't. There are way more usages of the latter throughout the codebase, hence I kept that variant, although I do think that it would be much better to print out prettified content by default from a `toString`. That breaks quite a few tests so I didn't make that change yet.

Also XContentHelper#toString was outdated as it didn't check the ToXContent#isFragment method to decide whether a new anonymous object has to be created or not. It would simply fail with any ToXContentObject.
2017-07-26 16:00:59 +02:00
Boaz Leskes 26e82610b7 testWaitForPendingSeqNo didn't properly wait for all pending ops to "stuck"
The test only waited for one op to be stuck. In rare occasions the other ops were still in flight when recovery captured a translog snapshot throwing doc count off.
2017-07-26 13:49:15 +02:00
Boaz Leskes 015424d9f4 Test: indexOnReplicaWithGaps should randomly add a gap at the end
This confuses the assertions because if it's the only gap, it looks like one operation fewer is indexed and there are no gaps at all.
2017-07-26 13:28:39 +02:00
Michael Basnight 9d10dbea39 Fix rest client causing jarHell for gradle 3.5+ (#25892)
The configuration removed from the runtime configuration did not
properly remove the deps jar from gradle versions > 3.3. The rest client
now removes both the 3.3 and 3.3+ configurations so this works on both
versions of gradle.

Closes #25884
Relates #25208
2017-07-26 11:25:25 +02:00
Simon Willnauer ca4f77039c Remove unused member in IndicesService 2017-07-26 10:20:40 +02:00
Simon Willnauer 4baf7a9e50 Remove unnecessary imports 2017-07-26 10:08:23 +02:00
Simon Willnauer 634ce90dc0 Respect cluster alias in `_index` aggs and queries (#25885)
Today when we aggregate on the `_index` field the cross cluster search
alias is not taken into account. Neither is it respected when we search
on the field. This change adds support for cluster alias when the cluster
alias is present on the `_index` field.

Closes #25606
2017-07-26 09:16:52 +02:00
Scott Somerville 2f8def11b5 Coerce decimal strings for whole number types by truncating the decimal part (#25835)
This change makes it so you can index a value like "1.0" or "1.1" into whole
number field types like byte and integer. Without this change, the above
values would have resulted in an error, even with coerce set to true.
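A sketch of the coercion rule (hypothetical helper; the real mapper code has more cases, e.g. per-type range checks):

```java
final class CoerceSketch {
    // With coerce enabled, "1.0" and "1.1" both become 1 for whole-number
    // field types: parse as a double, then truncate the decimal part.
    static long coerceToWholeNumber(String value) {
        double parsed = Double.parseDouble(value);
        return (long) parsed;
    }
}
```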

Closes #25819
2017-07-26 08:21:42 +02:00
Lee Hinman 6e79062078 Add 5.5.2 Version and 5.5.1 BWC indices 2017-07-25 10:58:39 -06:00
Adrien Grand 315319b763 Remove assertion about deviation when casting to a float. (#25806)
We cannot guarantee that the result of computations will be in the float range,
since it depends on the data and how scores are computed. We already use doubles
as intermediate representations and cast to a float as a final step, which is
the right thing to do. Small doubles will just be rounded to zero, there is not
much we can or should do about it.

Closes #25330
2017-07-25 15:07:45 +02:00
Martijn van Groningen a9ae52e78b
inner hits: Only access stored fields when needed
Stored fields were still being accessed for nested inner hits even if the _source was not requested.
This was done to figure out the id of the root document. However, this is already known higher up the stack.
So instead, this change adds the id to the nested search context, so that it no longer needs to be fetched via the stored fields.

If the _source is large and no source is requested, then hot threads like these would still appear:

```
100.3% (501.3ms out of 500ms) cpu usage by thread 'elasticsearch[AfXKKfq][search][T#6]'
     2/10 snapshots sharing following 22 elements
       org.apache.lucene.store.DataInput.skipBytes(DataInput.java:352)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
       org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
       org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
       org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
       org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
       org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
       org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
       org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```

and:

```
8/10 snapshots sharing following 27 elements
       org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:135)
       org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:138)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.fillBuffer(CompressingStoredFieldsReader.java:531)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.readBytes(CompressingStoredFieldsReader.java:550)
       org.apache.lucene.store.DataInput.readBytes(DataInput.java:87)
       org.apache.lucene.store.DataInput.skipBytes(DataInput.java:350)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
       org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
       org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
       org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
       org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
       org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
       org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
       org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
       org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
       org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
2017-07-25 12:10:59 +02:00
Yannick Welsch 7e08753bd2 [TEST] Set proper version on InputStream 2017-07-25 10:11:06 +02:00
Boaz Leskes cd508555f9 Engine.close should only return when resources are freed (#25852)
Currently Engine.close can return immediately if the engine is already in the process of shutting down (due to a concurrent close call or an engine failure). This is a shame because some of our testing infra wants to do things like checking the index. This commit changes the logic to make sure that all calls to close wait until resources are freed. Failing the engine is still non-blocking.

Fixes #25817
2017-07-25 08:08:44 +02:00
Michael Basnight e816ef89a2 Shade external dependencies in the rest client jar
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using the shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.

Closes #25208
2017-07-24 12:55:43 -05:00
Jim Ferenczi 4a9995145c [Docs]: Clarify query_string parser splits on operator 2017-07-24 18:36:16 +02:00
Boaz Leskes 17714acb9e add debug logging to SpecificMasterNodesIT
Chasing https://github.com/elastic/elasticsearch/issues/25471

Also beefed up tests in TransportMasterNodeActionTests trying to simulate possible failures
2017-07-24 18:33:44 +02:00
Jim Ferenczi 3a59b6a16c Context suggester should filter doc values field (#25858)
The context suggester extracts the context field values from the document but it does not filter out doc values fields coming from the keyword field.
This change filters out doc values fields when building the context values.

Fixes #25404
2017-07-24 17:45:01 +02:00
Jim Ferenczi 2f8f440e80 #25851: Fix ParentFieldMapper.toXContent to print eager_global_ordinals only when it is set to false 2017-07-24 15:03:05 +02:00
Jim Ferenczi d73e17c103 SpanNearQueryBuilder should return the inner clause when a single clause is provided (#25856)
This change handles the case where a SpanNearQueryBuilder tries to create a query with a single clause.
This is not allowed by the SpanNearQuery, so instead of throwing an exception when the weight is built, this change builds and returns
the single inner clause from toQuery.
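A sketch of the rewrite (hypothetical types; the real code builds Lucene SpanQuery objects):

```java
import java.util.List;

final class SpanNearSketch {
    interface Query {}
    record SpanNear(List<Query> clauses, int slop) implements Query {}

    // A single-clause span-near is rejected by SpanNearQuery (per the commit
    // above), so the builder simply returns its inner clause instead of
    // failing later at weight creation.
    static Query toQuery(List<Query> clauses, int slop) {
        if (clauses.size() == 1) {
            return clauses.get(0);
        }
        return new SpanNear(clauses, slop);
    }
}
```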

Fixes #25630
2017-07-24 13:24:29 +02:00
Jim Ferenczi 93b04fb7bd The default _parent field should not try to load global ordinals (#25851)
The default _parent field tries to load global ordinals because it is created with eager_global_ordinals=true.
This leads to an IllegalStateException because this field does not have doc_values.
This change explicitly sets eager_global_ordinals to false in order to avoid the ISE on startup.

Fixes #25849
2017-07-24 13:07:19 +02:00
Boaz Leskes c72fc55283 adapt testDoubleDeliveryReplicaAppendingOnly to #25827 2017-07-22 08:46:21 +02:00
Boaz Leskes d21ad9b652 fix compilation 2017-07-21 20:16:58 +02:00
Boaz Leskes ab1636d547 Engine - do not index operations with seq# lower than the local checkpoint into lucene (#25827)
When a replica processes out of order operations, it can drop some due to version comparisons. In the past that would have resulted in a VersionConflictException being thrown and the operation was totally ignored. With the seq# push, we started storing these operations in the translog (but not indexing them into lucene) in order to have complete op histories to facilitate ops based recoveries. This in turn had the undesired effect that deleted docs may be resurrected during recovery in some extreme edge situations (see a complete explanation below). This PR contains a simple fix, which is also an optimization for the recovery process: incoming operations that have a seq# lower than the current local checkpoint (i.e., have already been processed) should not be indexed into lucene. Note that sometimes we can also skip storing them in the translog, but this is not required for the fix and is more complicated.

This is the equivalent of #25592

## More details on resurrected ops 

Consider two operations: 
 - Index d1, seq no 1
 - Delete d1, seq no 3

On a replica they come out of order:
 - Translog gen 1 contains:
    - delete (seqNo 3)
 - Translog gen 2 contains:
    - index (seqNo 1) (wasn't indexed into lucene, but put into the translog)
    - another operation (seqNo 10)
 - Translog gen 3 
    - another op (seqNo 9)
 - Engine commits with:
    - local checkpoint 9
    - refers to gen 2 

If this replica becomes a primary:
    - Local recovery will replay translog gen 2 and up, causing index #1 to be re-indexed.
    - Even if recovery will start at gen 3, the translog retention policy will cause file based recovery to replay the entire translog. If it happens to start at gen 2 (but not 1), we will run into the same problem.

#### Some context - out of order delivery involving deletes:

On normal operations, this relies on the gc_deletes setting. We assume that the setting represents an upper bound on the time between the index and the delete operation. The index operation will be detected as stale based on the tombstone map in the LiveVersionMap.

Recovery presents a challenge as it can replay an old index operation that was in the translog and override a delete operation that was done when the engine was opened (and is not part of the replayed snapshot). To deal with this situation, we disable GC deletes (i.e. retain all deletes) for the duration of recoveries. This means that the delete operation will be remembered and the index operation ignored.

Both of the above scenarios (local recovery + peer recovery) create a situation where the delete operation is never replayed. It is thus "lost", as lucene doesn't remember it happened and our LiveVersionMap is not populated with it.

#### Solution:

Note that both local and peer recovery represent a scenario where we replay translog ops on top of an existing lucene index, potentially with ongoing indexing. Therefore we can treat them the same.

The local checkpoint in Lucene represents a marker indicating that all operations below it were performed on the index. This is the only form of "memory" that we have that relates to deletes. If we can achieve the following:
1) All ops below the local checkpoint are not indexed into lucene.
2) All ops above the local checkpoint are indexed.

It will mean that all variants are covered: (i# == index op seq#, d# == delete op seq#, lc == local checkpoint in commit)
1) i# < d# <= lc - document is already deleted in lucene and stays that way.
2) i# <= lc < d# - delete is replayed on index - document is deleted
3) lc < i# < d# - index is replayed and then delete - document is deleted.

More formally - we want to make sure that for all ops o1 and o2 that were performed on the primary, if o2 is processed on a shard before o1, o1 will be dropped. We have the following scenarios:

1) If both o1 and o2 are not included in the replayed snapshot and are above it (i.e., have a higher seq#), they fall under the gc deletes assumption.
2) If o1 is part of the replayed snapshot but o2 is above it:
	- if o2 arrives first, o1 must arrive due to the recovery and potentially via replication as well. Since gc deletes is disabled, we are guaranteed to know of o2's existence.
3) If both o2 and o1 are part of the replayed snapshot:
	- we fall under the same scenarios as #2 - disabling GC deletes ensures we know of o2 if it arrives first.
4) If o1 falls before the snapshot and o2 is either part of the snapshot or higher:
	- Since the snapshot is guaranteed to contain all ops that are not part of lucene and are above the lc in the commit used, this means that o1 is part of lucene and o1 < local checkpoint. This means it won't be processed and we're not in the scenario we're discussing.
5) If o2 falls before the snapshot but o1 is part of it:
	- by the same reasoning above, o2 is < local checkpoint. Since o1 < o2, we also get o1 < local checkpoint and this will be dropped.


#### Implementation:

For local recovery, we can filter the ops we read of the translog and avoid replaying them. For peer recovery this is tricky as we do want to send the operations in order to have some history on the target shard. Filtering operations on the engine level (i.e., not indexing to lucene if op seq# <= lc) would work for both.
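A minimal sketch of that engine-level plan (hypothetical names):

```java
final class RecoveryPlanSketch {
    enum OpPlan { PROCESS_NORMALLY, SKIP_LUCENE_KEEP_TRANSLOG }

    // Ops at or below the local checkpoint were already applied to lucene;
    // record them in the translog for history, but do not index them again.
    static OpPlan planFor(long opSeqNo, long localCheckpoint) {
        return opSeqNo <= localCheckpoint
                ? OpPlan.SKIP_LUCENE_KEEP_TRANSLOG
                : OpPlan.PROCESS_NORMALLY;
    }
}
```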
2017-07-21 17:19:54 +02:00
Jim Ferenczi c3784326eb Refactor field expansion for match, multi_match and query_string query (#25726)
This commit changes the way we handle field expansion in the `match`, `multi_match` and `query_string` queries.
 The main changes are:

- For exact field names, the new behavior is to rewrite to a MatchNoDocsQuery when the field name is not found in the mapping.

- For partial field names (with `*` suffix), the expansion is done only on `keyword`, `text`, `date`, `ip` and `number` field types. Other field types are simply ignored.

- For all fields (`*`), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.

- The `*` notation can also be used to set the `default_field` option on the `query_string` query. This should replace the need for the extra option `use_all_fields`, which is deprecated in this change.

This commit also rewrites a simple `*` query to a MatchAllDocsQuery when all fields are requested (Fixes #25556).

The same change should be done on `simple_query_string` for completeness.

The `use_all_fields` option in `query_string` is also deprecated in this change; `default_field` should be set to `*` instead.

Relates #25551
2017-07-21 16:52:57 +02:00
Boaz Leskes 47f92d7c62 testRejectingJoinWithIncompatibleVersion(WithUnrecoveredState) should use immediate priorities
That will prevent race conditions with the join task that were causing failures.
2017-07-21 16:43:18 +02:00
Yannick Welsch a2624dfcef Move primary term from ReplicationRequest to ConcreteShardRequest (#25822)
Removes the primary term from the replication request and pushes it into the transport envelope. This makes it possible to remove the term from the ReplicationOperation universe. The primary term that is to be used for a replication operation is now determined in the reroute phase when the node decides to execute a primary action (and validated once the primary action gets to execute). This makes it possible to validate that the primary action was sent to the correct primary shard instance that it was meant to be sent to (currently we only validate primary actions using the allocation id, which can be reused for failed and reallocated primaries).
2017-07-21 15:57:42 +02:00
Yannick Welsch d6a8984be6 Make sure shard is not closed when updating local checkpoint
If a primary shard is relocated, and then subsequently closed, there is a short window where ReplicationOperation could access the
closed shard (engine is not shut down yet) and, because it does not know that the shard was relocated, try to update the local
checkpoint, tripping an assertion in GlobalCheckPointTracker that a local checkpoint cannot be updated if it's not in primary mode.
2017-07-21 14:27:39 +02:00
Simon Willnauer 682abb90ee [TEST] Rename variable to make it less confusing 2017-07-21 13:02:33 +02:00
Yannick Welsch fd57101952 Make sure shard is not closed when accessing ReplicationGroup 2017-07-21 11:45:24 +02:00
Simon Willnauer 0e3ad522a2 Rewrite search requests on the coordinating nodes (#25814)
This change rewrites search requests on the coordinating node before
we send requests to the individual shards. This will reduce the rewrite load
and object creation for each rewrite on the executing nodes, and will fetch
resources only once instead of N times (once per shard) for queries like the `terms`
query with index lookups (the percolator and geo-shape queries behave similarly).

Relates to #25791
2017-07-21 09:38:38 +02:00
Simon Willnauer 0d0c103451 First increment shard stats before notifying and potentially sending response (#25818)
When we skip a shard we should first increment the skip and successful shard
counters before we notify the super class about a skipped shard, which could
send back the result before we increment the stats.
2017-07-21 08:46:10 +02:00
Ryan Ernst cfdfa4705e Bump the min compat version to 5.6.0 (#25805)
This commit increases the min compat version for 6.0 to 5.6.0. This is
already what is being tested by gradle, but the code was out of sync.
2017-07-20 13:02:07 -07:00
Ryan Ernst 8ab0d10387 Add compatibility versions to main action response (#25799)
This commit adds the min wire/index compat versions to the main action
output. Not only will this make the expected compatibility more
transparent, but it also allows testing which version others think the
compat versions are, similar to how we test the lucene version.
2017-07-20 13:01:41 -07:00
Boaz Leskes 7488877d1a Validate a joining node's version with version of existing cluster nodes (#25808)
When a node tries to join a cluster, it goes through a validation step to make sure the node is compatible with the cluster. Currently we validate that the node can read the cluster state and that it is compatible with the indexes of the cluster. This PR adds validation that the joining node's version is compatible with the versions of existing nodes. Concretely, we check that (a sketch follows the list below):

1) The node's min compatible version is higher or equal to any node in the cluster (this prevents a too-new node from joining)
2) The node's version is higher or equal to the min compat version of all cluster nodes (this prevents a too old join where, for example, the master is on 5.6, there's another 6.0 node in the cluster and a 5.4 node tries to join).
3) The node's major version is at least as high as the lowest node in the cluster. This is important as we use the minimum version in the cluster to stop executing bwc code for operations that require multiple nodes. If the nodes are already operating in "new cluster mode", we should prevent nodes from the previous major from joining (even if they are wire level compatible). This does mean that if you have a very unlucky partition during the upgrade which partitions all old nodes, which are also a minority / data nodes only, they may not be able to re-join the cluster. We feel this edge case risk is well worth the simplification it brings to BWC layers only going one way. This restriction only holds if the cluster state has been recovered (i.e., the cluster has properly formed).

 Also, the node join validation can now selectively fail specific nodes (previously the entire batch was failed). This is an important preparation for a follow-up PR where we plan to have a rejected joining node die with dignity.
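A hedged sketch of the three version checks enumerated above, with a hypothetical Version record standing in for the real version type:

```java
import java.util.Collection;

final class JoinValidationSketch {
    record Version(int major, int minor) {
        boolean onOrAfter(Version other) {
            return major != other.major ? major > other.major : minor >= other.minor;
        }
    }

    static void validateJoiningNode(Version nodeVersion, Version nodeMinCompat,
                                    Collection<Version> clusterVersions,
                                    Version clusterMinCompat) {
        // 1) a too-new node may not join: every existing node must be on or
        //    after the joining node's minimum compatible version.
        for (Version existing : clusterVersions) {
            if (!existing.onOrAfter(nodeMinCompat)) {
                throw new IllegalStateException("joining node is too new for existing node " + existing);
            }
        }
        // 2) a too-old node may not join: it must be on or after the minimum
        //    compatible version across all cluster nodes.
        if (!nodeVersion.onOrAfter(clusterMinCompat)) {
            throw new IllegalStateException("joining node is below the cluster's minimum compatible version");
        }
        // 3) the joining node's major must be at least the lowest major in
        //    the cluster, so bwc layers only ever go one way.
        boolean olderMajorThanAll = !clusterVersions.isEmpty()
                && clusterVersions.stream().allMatch(v -> v.major() > nodeVersion.major());
        if (olderMajorThanAll) {
            throw new IllegalStateException("joining node has an older major version than all cluster nodes");
        }
    }
}
```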
2017-07-20 20:11:29 +02:00