Commit Graph

1142 Commits

Author SHA1 Message Date
Adrien Grand 55ad318541 Reduce the overhead of timeouts and low-level search cancellation. (#25776)
Setting a timeout or enforcing low-level search cancellation used to make us
wrap the collector and check either the current time or whether the search
task was cancelled for every collected document. This can be significant
overhead on cheap queries that match many documents.

This commit changes the approach to wrap the bulk scorer rather than the
collector and exponentially increase the interval between two consecutive
checks in order to reduce the overhead of those checks.
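To illustrate the approach, here is a minimal, self-contained sketch of checking cancellation only at exponentially growing intervals (the class name, starting interval and cap are illustrative, not the values used by the actual bulk scorer wrapper):

```
import java.util.function.BooleanSupplier;

/** Illustrative sketch: run per-document work but only check for cancellation
 *  at exponentially growing intervals, capping the interval to bound latency. */
final class CheckedCollectLoop {
    static void collect(int totalDocs, BooleanSupplier cancelled, Runnable perDoc) {
        int interval = 1 << 6;           // start by checking every 64 documents
        final int maxInterval = 1 << 22; // never check less often than every ~4M documents
        int collectedSinceCheck = 0;
        for (int doc = 0; doc < totalDocs; doc++) {
            perDoc.run();
            if (++collectedSinceCheck >= interval) {
                if (cancelled.getAsBoolean()) {
                    throw new RuntimeException("search cancelled or timed out");
                }
                collectedSinceCheck = 0;
                if (interval < maxInterval) {
                    interval <<= 1;      // exponentially back off the checks
                }
            }
        }
    }
}
```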
2017-07-19 14:15:53 +02:00
Boaz Leskes 1e1f8e6376 Validate a joining node's version with version of existing cluster nodes (#25770)
When a node tries to join a cluster, it goes through a validation step to make sure the node is compatible with the cluster. Currently we validate that the node can read the cluster state and that it is compatible with the indices of the cluster. This PR adds validation that the joining node's version is compatible with the versions of existing nodes. Concretely we check that:

1) Every node in the cluster is on a version higher than or equal to the joining node's minimum compatible version (this prevents a too-new node from joining)
2) The node's version is higher than or equal to the minimum compatible version of all cluster nodes (this prevents a too-old join where, for example, the master is on 5.6, there's another 6.0 node in the cluster, and a 5.4 node tries to join).
3) The node's major version is at least as high as that of the lowest node in the cluster. This is important as we use the minimum version in the cluster to stop executing bwc code for operations that require multiple nodes. If the nodes are already operating in "new cluster mode", we should prevent nodes from the previous major from joining (even if they are wire-level compatible). This does mean that if you have a very unlucky partition during the upgrade which cuts off all old nodes while they are a minority / data-only nodes, they may not be able to re-join the cluster. We feel this edge-case risk is well worth the simplification it brings by having BWC layers only go one way. (The three checks are sketched below.)

 Also, the node join validation can now selectively fail specific nodes (previously the entire batch was failed). This is an important preparation for a follow up PR where we plan to have a rejected joining node die with dignity.
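A hedged sketch of the three numbered checks above (using a simplified stand-in for Elasticsearch's Version class; names and exception messages are illustrative):

```
import java.util.List;

/** Illustrative sketch of the three join-time checks described above.
 *  JoinVersion is a simplified stand-in for Elasticsearch's Version class. */
record JoinVersion(int version, int minCompatVersion, int major) {}

final class JoinValidationSketch {
    static void ensureVersionCompatibility(JoinVersion joining, List<JoinVersion> clusterNodes) {
        if (clusterNodes.isEmpty()) {
            return; // nothing to validate against
        }
        for (JoinVersion node : clusterNodes) {
            // 1) a too-new node must not join: every existing node must still be able to talk to it
            if (node.version() < joining.minCompatVersion()) {
                throw new IllegalStateException("joining node is too new for node on version " + node.version());
            }
            // 2) a too-old node must not join: it must satisfy every existing node's minimum compatible version
            if (joining.version() < node.minCompatVersion()) {
                throw new IllegalStateException("joining node is too old for node on version " + node.version());
            }
        }
        // 3) once the cluster has fully moved to a new major, nodes of the previous major may not come back
        int minClusterMajor = clusterNodes.stream().mapToInt(JoinVersion::major).min().getAsInt();
        if (joining.major() < minClusterMajor) {
            throw new IllegalStateException("cluster no longer contains nodes of major " + joining.major());
        }
    }
}
```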
2017-07-19 12:57:29 +02:00
Lee Hinman 610ba7e427 Register data node stats from info carried back in search responses (#25430)
* Register data node stats from info carried back in search responses

This is part of #24915, where we now calculate the EWMA of service time for
tasks in the search threadpool, and send that as well as the current queue size
back to the coordinating node. The coordinating node now tracks this information
for each node in the cluster.

This information will be used in the future for determining the best replica a
search request should be routed to. This change has no user-visible difference.
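For reference, an exponentially weighted moving average of service time can be tracked with a single mutable value; a minimal sketch (the alpha value and locking are illustrative, not necessarily what the search threadpool uses):

```
/** Illustrative EWMA accumulator: newAverage = alpha * sample + (1 - alpha) * oldAverage. */
final class EwmaSketch {
    private final double alpha;   // smoothing factor in (0, 1]; higher reacts faster to new samples
    private double average;
    private boolean seeded;

    EwmaSketch(double alpha) {
        this.alpha = alpha;
    }

    synchronized void addSample(double serviceTimeNanos) {
        if (seeded == false) {
            average = serviceTimeNanos;   // first observation seeds the average
            seeded = true;
        } else {
            average = alpha * serviceTimeNanos + (1 - alpha) * average;
        }
    }

    synchronized double getAverage() {
        return average;
    }
}
```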

* Move response time timing into ResponseListenerWrapper

* Move ResponseListenerWrapper to ActionListener instead of SearchActionListener

Also removes the logger

* Move `requestIndex` back to private

* De-guice-ify ResponseCollectorService \o/

* Undo all changes to SearchQueryThenFetchAsyncAction

* Remove unneeded response collector from TransportSearchAction

* Undo all changes to SearchDfsQueryThenFetchAsyncAction

* Completely rewrite the inside of ResponseCollectorService's record keeping

* Documentation and cleanups for ResponseCollectorService

* Add unit test for collection of queue size and service time

* Fix Guice construction error

* Add basic unit tests for ResponseCollectorService

* Fix version constant for the master merge

* Fix test compilation after master merge

* Add a test for node removal on cluster changed event

* Remove integration test as there are now unit tests

* Rename ResponseListenerWrapper -> SearchExecutionStatsCollector

* Fix line-length

* Make classes private and final where appropriate

* Pass nodeId into SearchExecutionStatsCollector and use only ActionListener

* Get nodeId from connection so searchShardTarget can be private

* Remove threadpool from SearchContext, get it from IndexShard instead

* Add missing import

* Use BiFunction for responseWrapper rather than passing in collector service
2017-07-17 11:04:51 -06:00
Adrien Grand 264088f1c4 Deprecate the `_default_` mapping. (#25652)
Now that indices cannot have more than one type, this feature does not buy anything
anymore.

Closes #25500
2017-07-17 15:37:59 +02:00
Martijn van Groningen 8003171a0c
Move more token filters to analysis-common module
The following token filters were moved: arabic_normalization, german_normalization, hindi_normalization, indic_normalization, persian_normalization, scandinavian_normalization, serbian_normalization, sorani_normalization and cjk_width

Relates to #23658
2017-07-17 08:29:44 +02:00
Boaz Leskes a6bea1bf97 testMockFailToSendNoConnectRule should wait for connection close to bubble up and disconnect the node
#25521 changed channel closing to be handled asynchronously for anything but a transport stop. This means it may take a while between
calling `connection.close()` and the node being removed from the `connectedNodes` list (but the connection is immediately unusable).

Fixes #25686
2017-07-15 09:28:17 +02:00
Yannick Welsch 8f0b357651 Let primary own its replication group (#25692)
Currently replication and recovery are both coordinated through the latest cluster state available on the ClusterService as well as through the GlobalCheckpointTracker (to have consistent local/global checkpoint information), making it difficult to understand the relation between recovery and replication, and requiring some tricky checks in the recovery code to coordinate between the two. This commit makes the primary the single owner of its replication group, which simplifies the replication model and allows us to clean up corner cases we have in our recovery code. It also reduces the dependencies in the code, so that neither RecoverySourceXXX nor ReplicationOperation need access to the latest state on ClusterService anymore. Finally, it gives us the property that in-sync shard copies won't receive global checkpoint updates which are above their local checkpoint (relates #25485).
2017-07-14 13:52:53 +02:00
Luca Cavanna ec66d655b5 Rename client artifacts (#25693)
It was brought up that our current client artifacts have generic names like 'rest' that may cause conflicts with other artifacts.

This commit renames:

- rest -> elasticsearch-rest-client
- sniffer -> elasticsearch-rest-client-sniffer
- rest-high-level -> elasticsearch-rest-high-level-client

A couple of small changes are also preparing the high level client for its first release.

Closes #20248
2017-07-13 09:44:25 +02:00
Simon Willnauer b7bc790428 Use a non default port range in MockTransportService
We already use a per-JVM port range in MockTransportService. Yet,
it's possible that if we are executing in the JVM with ordinal 0,
other clusters reuse ports from the mock transport service while some tests
try to simulate disconnects etc. By using a non-default port range (starting at 10300)
we prevent internal test clusters from reusing any of the mock implementation's ports.

Relates to #25301
2017-07-12 22:29:21 +02:00
Simon Willnauer e81804cfa4 Add a shard filter search phase to pre-filter shards based on query rewriting (#25658)
Today if we search across a large number of shards we hit every shard. Yet, it's quite
common to search across an index pattern for time-based indices where filtering will exclude
all results outside a certain time range, e.g. `now-3d`. While the search can potentially hit
hundreds of shards, the majority of the shards might yield 0 results since there is no document
within this date range. Kibana, for instance, does this regularly but used `_field_stats`
to narrow down the indices it needs to query. Now with the deprecation of `_field_stats` and its upcoming removal, a single dashboard in Kibana can potentially turn into searches hitting hundreds or thousands of shards, and that can easily cause search rejections even though most of the requests are very likely super cheap and only need a query rewrite to early-terminate with 0 results.

This change adds a pre-filter phase for searches that can, if the number of shards is higher than the `pre_filter_shard_size` threshold (defaults to 128 shards), fan out to the shards
and check if the query can potentially match any documents at all. While false positives are possible, a negative response means that no matches are possible. These requests are not subject to rejection and can greatly reduce the number of shards a request needs to hit. The approach here is preferable to the Kibana approach with field stats since it correctly handles aliases and uses the correct threadpools to execute these requests. Further, it's completely transparent to the user and improves the scalability of Elasticsearch in general on large clusters.
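Conceptually the new phase boils down to the following (a sketch only; `canMatch` stands in for the lightweight shard-level check introduced by this PR):

```
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/** Illustrative sketch: only if the shard count exceeds the threshold, run a cheap
 *  "can this shard possibly match?" check and drop shards that cannot match. */
final class PreFilterSketch {
    static <S> List<S> preFilter(List<S> shards, int preFilterShardSize, Predicate<S> canMatch) {
        if (shards.size() <= preFilterShardSize) {
            return shards; // small searches skip the extra round trip entirely
        }
        List<S> result = new ArrayList<>();
        for (S shard : shards) {
            // canMatch may return false positives but never false negatives,
            // so dropping shards here is always safe
            if (canMatch.test(shard)) {
                result.add(shard);
            }
        }
        return result;
    }
}
```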
2017-07-12 22:19:20 +02:00
Tim Brooks a3ade99fcf Fix BytesReferenceStreamInput#skip with offset (#25634)
There is a bug when a call to `BytesReferenceStreamInput` skip is made
on a `BytesReference` that has an initial offset. The offset for the
current slice is added to the current index and then subtracted from the
length. This introduces the possibility of a negative number of bytes to
skip. This happens inside a loop, which leads to an infinite loop.

This commit correctly subtracts the current slice index from the
slice.length. Additionally, the `BytesArrayTests` are modified to test
instances that include an offset.
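The arithmetic of the fix in isolation (an illustrative sketch of a sliced stream, not the actual `BytesReferenceStreamInput`): the number of bytes that can be skipped must be computed from the slice-relative index, not from the absolute offset.

```
/** Illustrative sketch of skipping over a "slice" that starts at a non-zero offset
 *  inside a larger buffer. The bug described above effectively computed
 *  length - (offset + index) instead of length - index, which can go negative. */
final class SliceSkipper {
    private final int sliceOffset;   // where the slice starts in the backing buffer
    private final int sliceLength;   // length of the slice itself
    private int sliceIndex;          // read position, relative to the slice start

    SliceSkipper(int sliceOffset, int sliceLength) {
        this.sliceOffset = sliceOffset;
        this.sliceLength = sliceLength;
    }

    /** Skip up to n bytes within this slice and return how many were skipped. */
    int skip(int n) {
        // correct: remaining bytes are measured against the slice-relative index
        int skippable = Math.min(n, sliceLength - sliceIndex);
        sliceIndex += skippable;
        return skippable; // never negative, so a caller looping on skip() terminates
    }

    /** Absolute position in the backing buffer. */
    int absolutePosition() {
        return sliceOffset + sliceIndex;
    }
}
```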
2017-07-11 09:54:29 -05:00
Simon Willnauer 98c91a3bd0 Limit the number of concurrent shard requests per search request (#25632)
This is a protection mechanism to prevent a single search request from
hitting a large number of shards in the cluster concurrently. If a search is
executed against all indices in the cluster this can easily overload the cluster
causing rejections etc. which is not necessarily desirable. Instead this PR adds
a per request limit of `max_concurrent_shard_requests` that throttles the number of
concurrent initial phase requests to `256` by default. This limit can be increased per request
and protects single search requests from overloading the cluster. Subsequent PRs can introduce
additional improvements, e.g. limiting this at the `_msearch` level, making defaults a factor of
the number of nodes, or sorting shard iterators such that we gain the best concurrency across nodes.
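A hedged sketch of the throttling idea (a plain queue-based limiter; the real implementation interleaves this with the asynchronous fan-out logic):

```
import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative limiter: at most maxConcurrent shard requests run at once;
 *  further requests are queued and released as earlier ones complete. */
final class ShardRequestThrottle {
    private final int maxConcurrent;
    private int running;
    private final Queue<Runnable> pending = new ArrayDeque<>();

    ShardRequestThrottle(int maxConcurrent) {  // e.g. 256 by default per this PR
        this.maxConcurrent = maxConcurrent;
    }

    synchronized void submit(Runnable shardRequest) {
        if (running < maxConcurrent) {
            running++;
            shardRequest.run();                // in the real code this is an async send
        } else {
            pending.add(shardRequest);
        }
    }

    synchronized void onShardResponse() {      // call when a shard request completes
        Runnable next = pending.poll();
        if (next != null) {
            next.run();                        // keep the in-flight count at the limit
        } else {
            running--;
        }
    }
}
```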
2017-07-11 16:23:10 +02:00
Simon Willnauer ec1afe30ea Ensure remote cluster alias is preserved in inner hits aggs (#25627)
We lost the cluster alias due to some special casing in inner hits
and due to the fact that we didn't pass on the alias to the shard request.
This change ensures that we have the cluster alias present on the shard to
ensure all SearchShardTarget reads preserve the alias.

Relates to #25606
2017-07-11 11:34:06 +02:00
Tim Brooks b22bbf94da Avoid blocking on channel close on network thread (#25521)
Currently when we close a channel in Netty4Utils.closeChannels we
block until the closing is complete. This introduces the possibility
that a network selector thread will block while waiting until a
separate network selector thread closes a channel.

For instance: T1 closes channel 1 (which is assigned to a T1 selector).
Channel 1's close listener executes the closing of the node. That
means that T1 now tries to close channel 2. However, channel 2 is
assigned to a selector that is running on T2. T1 now must wait until T2
closes that channel at some point in the future.

This commit addresses this by adding a boolean to closeChannels
indicating if we should block on close. We only set this boolean to true
if we are closing down the server channels at shutdown. This call is
never made from a network thread. When we call the closeChannels method
with that boolean set to false, we do not block on close.
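The shape of that API change, as an illustrative sketch (the real code closes Netty channels; `Channel` here is a stand-in interface):

```
import java.util.List;
import java.util.concurrent.CountDownLatch;

/** Illustrative sketch: close a set of channels, optionally waiting for the closes
 *  to complete. Blocking must only be requested from non-network threads. */
final class ChannelCloserSketch {
    interface Channel {
        void closeAsync(Runnable onClosed); // fire the close and notify when done
    }

    static void closeChannels(List<Channel> channels, boolean blocking) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(channels.size());
        for (Channel channel : channels) {
            channel.closeAsync(latch::countDown);
        }
        if (blocking) {
            latch.await(); // safe only off the network threads, e.g. at node shutdown
        }
    }
}
```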
2017-07-10 10:50:51 -05:00
Colin Goodheart-Smithe 3a5a54e83e Collapses package structure for some bucket aggs (#25579)
This change collapses some of the packages for the bucket aggregations into their parent packages. This was done for the following aggregations:
* The variants of the range aggregation (geo_distance, date and ip) were moved into the `o.e.s.a.bucket.range` package
* The `o.e.s.a.bucket.terms.support` package was removed and the classes were moved to `o.e.s.a.bucket.terms`
* The filter aggregation was moved to `o.e.s.a.bucket.filter`

Since this PR is already relatively large with only the above changes, subsequent PRs will do similar operations on the relevant metric and pipeline aggregations.

Relates to #22868
2017-07-10 15:08:15 +01:00
Boaz Leskes 09378f48e4 Add a scheduled translog retention check (#25622)
We currently check whether translog files can be trimmed whenever we create a new translog generation or close a view. However, #25294 added a long translog retention period (12h, max 512MB by default), which means translog files may need to be cleaned up long after any indexing activity that would trigger flushes or the creation of new translog files has stopped. We therefore need a scheduled background check to clean up those files once they are no longer needed.
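A sketch of what such a scheduled background check looks like in general (plain `ScheduledExecutorService` here; the real implementation uses Elasticsearch's threadpool scheduling, and `trimTranslog` is a hypothetical stand-in for the actual trim call):

```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative sketch: periodically check whether translog files can be trimmed,
 *  independent of any indexing activity. */
final class TranslogRetentionCheckSketch implements AutoCloseable {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    TranslogRetentionCheckSketch(Runnable trimTranslog, long intervalMinutes) {
        // trimTranslog is a stand-in for the engine's "trim unreferenced translog files" call
        scheduler.scheduleWithFixedDelay(trimTranslog, intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```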

Relates to #10708
2017-07-10 10:28:39 +02:00
Jason Tedor c084542731 Bump version to 6.0.0-beta1
This commit does two things:
 - bumps the version from 6.0.0-alpha3 to 6.0.0-beta1
 - renames the 6.0.0-alpha3 version constant to 6.0.0-beta1

Relates #25621
2017-07-09 18:12:50 -04:00
Jason Tedor bc22c1c286 Add disk threshold settings validation
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.
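The cross-settings constraint amounts to an ordering check; a minimal sketch assuming percentage-based watermarks (the real settings also accept absolute byte values, and the message wording is illustrative):

```
/** Illustrative sketch: validate that low <= high <= flood_stage for percentage watermarks. */
final class DiskWatermarkValidatorSketch {
    static void validate(double lowPercent, double highPercent, double floodStagePercent) {
        if (lowPercent > highPercent) {
            throw new IllegalArgumentException(
                "low disk watermark [" + lowPercent + "%] more than high disk watermark [" + highPercent + "%]");
        }
        if (highPercent > floodStagePercent) {
            throw new IllegalArgumentException(
                "high disk watermark [" + highPercent + "%] more than flood stage disk watermark [" + floodStagePercent + "%]");
        }
    }
}
```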

Relates #25600
2017-07-07 19:54:36 -04:00
Nik Everett 794257c421 Drop current from the list of released versions (#25187)
It hasn't been released....
2017-07-07 15:59:57 -04:00
Yannick Welsch baa87db5d1 Harden global checkpoint tracker
This commit refactors the global checkpoint tracker to make it more
resilient. The main idea is to make it more explicit what state is
actually captured and how that state is updated through
replication/cluster state updates etc. It also fixes the issue where the
local checkpoint information is not being updated when a shard becomes
primary. The primary relocation handoff becomes very simple too: we can
just copy over the internal state verbatim.

Relates #25468
2017-07-07 14:04:28 -04:00
Lee Hinman 8aa0a5c111 Improve REST error handling when endpoint does not support HTTP verb, add OPTIONS support (#24437)
* Improved REST endpoint exception handling, see #15335

Also improved OPTIONS http method handling to better conform with the
http spec.

* Tidied up formatting and comments

See #15335

* Tests for #15335

* Cleaned up comments, added section number

* Swapped out tab indents for space indents

* Test class now extends ESSingleNodeTestCase

* Capture RestResponse so it can be examined in test cases

Simple addition to surface the RestResponse object so we can run tests
against it (see issue #15335).

* Refactored class name, included feedback

See #15335.

* Unit test for REST error handling enhancements

Randomizing unit test for enhanced REST response error handling. See
issue #15335 for more details.

* Cleaned up formatting

* New constructor to set HTTP method

Constructor added to support RestController test cases.

* Refactored FakeRestRequest, streamlined test case.

* Cleaned up conflicts

* Tests for #15335

* Added functionality to ignore or include path wildcards

See #15335

* Further enhancements to request handling

Refactored executeHandler to prioritize explicit path matches. See
#15335 for more information.

* Cosmetic fixes

* Refactored method handlers

* Removed redundant import

* Updated integration tests

* Refactoring to address issue #17853

* Cleaned up test assertions

* Fixed edge case if OPTIONS method randomly selected as invalid method

In this test, an OPTIONS method request is valid, and should not return
a 405 error.

* Remove redundant static modifier

* Hook the multiple PathTrie attempts into RestHandler.dispatchRequest

* Add missing space

* Correctly retrieve new handler for each Trie strategy

* Only copy headers to threadcontext once

* Fix test after REST header copying moved higher up

* Restore original params when trying the next trie candidate

* Remove OPTIONS for invalidHttpMethodArray so a 405 is guaranteed in tests

* Re-add the fix I already added and got removed during merge :-/

* Add missing GET method to test

* Add documentation to migration guide about breaking 404 -> 405 changes

* Explain boolean response, pull into local var

* fixup! Explain boolean response, pull into local var

* Encapsulate multiple HTTP methods into PathTrie<MethodHandlers>

* Add PathTrie.retrieveAll where all matching modes can be retrieved

Then TrieMatchingMode can be package private and not leak into RestController

* Include body of error with 405 responses to give hint about valid methods

* Fix missing usageService handler addition

I accidentally removed this :X

* Initialize PathTrieIterator modes with Arrays.asList

* Use "== false" instead of !

* Missing paren :-/
2017-07-07 09:01:23 -06:00
Adrien Grand 40bb1663ee Index ids in binary form. (#25352)
Indexing ids in binary form should help with indexing speed since we would
have to compare fewer bytes upon sorting, should help with memory usage of
the live version map since keys will be shorter, and might help with disk
usage depending on how efficient the terms dictionary is at compressing
terms.

Since we can only expect base64 ids in the auto-generated case, this PR tries
to use an encoding that makes the binary id equal to the base64-decoded id in
the majority of cases (253 out of 256). It also specializes numeric ids, since
this seems to be common when content stored in Elasticsearch comes
from another database that uses e.g. auto-increment ids.

Another option could be to require base64 ids all the time. It would make things
simpler but I'm not sure users would welcome this requirement.

This PR should bring some benefits, but I expect it to be mostly useful when
coupled with something like #24615.
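The gist of the encoding decision, as an illustrative sketch (not the actual Uid encoding; the one-byte prefix and the lack of a round-trip check are simplifications):

```
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Illustrative sketch: prefer a compact binary form for ids that are valid base64
 *  (the auto-generated case), otherwise fall back to UTF-8 bytes. The one-byte
 *  prefix used to tell the two encodings apart is hypothetical. */
final class IdEncoderSketch {
    private static final byte BASE64 = 0x00;
    private static final byte UTF8 = 0x01;

    static byte[] encode(String id) {
        try {
            // auto-generated ids are URL-safe base64, so their decoded form is ~25% shorter
            byte[] decoded = Base64.getUrlDecoder().decode(id);
            return withPrefix(BASE64, decoded);
        } catch (IllegalArgumentException notBase64) {
            // arbitrary user-provided ids keep their UTF-8 representation
            return withPrefix(UTF8, id.getBytes(StandardCharsets.UTF_8));
        }
    }

    private static byte[] withPrefix(byte prefix, byte[] payload) {
        byte[] result = new byte[payload.length + 1];
        result[0] = prefix;
        System.arraycopy(payload, 0, result, 1, payload.length);
        return result;
    }
}
```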

Closes #18154
2017-07-07 14:22:47 +02:00
Martijn van Groningen 6db708ef75
Move more token filters to analysis-common module
The following token filters were moved: common grams, limit token, pattern capture and pattern replace.

Relates to #23658
2017-07-07 10:02:52 +02:00
Simon Willnauer 1f67d079b1 Validate `transport.profiles.*` settings (#25508)
Transport profiles unfortunately have never been validated. Yet, it's very
easy to make a mistake when configuring profiles which will most likely stay
undetected since we don't validate the settings but allow almost everything
based on the wildcard in `transport.profiles.*`. This change removes the
settings-subset-based parsing of profiles and instead uses concrete affix settings
for the profiles, which makes it easier to fall back to higher-level settings since
the fallback settings are present when the profile setting is parsed. Previously, it was
unclear in the code which setting was used, i.e. whether the profile settings (with the prefix
removed) or the global node settings. There is no distinction anymore since we don't pull
prefix-based settings.
2017-07-07 09:40:59 +02:00
Simon Willnauer 38a1df7da1 Use a port range per JVM in MockTransportService (#25565)
Some tests use MockTransportService to do network-based testing.
Yet, we run tests in multiple JVMs, which means
concurrent tests could claim a port that another JVM just released,
and if that test tries to simulate a disconnect it might be smart
enough to re-connect depending on what is tested. To reduce the risk,
since this is very hard to debug, we use a different default
port range per JVM unless the incoming settings override it.
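The mechanics are simple; a hedged sketch (the property name `tests.jvm.ordinal` and the exact arithmetic are assumptions for illustration, not necessarily what the test framework uses):

```
/** Illustrative sketch: derive a non-overlapping port range per test JVM so that
 *  concurrently running JVMs cannot grab each other's recently released ports. */
final class MockTransportPortRangeSketch {
    static int basePortForThisJvm() {
        // hypothetical property carrying the JVM ordinal assigned by the test runner
        int jvmOrdinal = Integer.parseInt(System.getProperty("tests.jvm.ordinal", "0"));
        // 100 ports per JVM, starting above the default Elasticsearch transport range
        return 10300 + (jvmOrdinal % 100) * 100;
    }

    static String portRangeSetting() {
        int base = basePortForThisJvm();
        return base + "-" + (base + 99);   // e.g. "10300-10399" for ordinal 0
    }
}
```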

Closes #25301
2017-07-06 09:14:52 +02:00
Simon Willnauer 6e5cc424a8 Switch indices read-only if a node runs out of disk space (#25541)
Today when we run out of disk all kinds of crazy things can happen
and nodes become hard to maintain once they have run out of disk.
While we try to move shards away if we hit watermarks this might not
be possible in many situations. Based on the discussion in #24299
this change monitors disk utilization and adds a flood-stage watermark
that causes all indices allocated on a node hitting the flood-stage
mark to be switched read-only (with the option to be deleted). This allows users to react to the low-disk
situation while subsequent write requests will be rejected. Users can switch
individual indices back to read-write once the situation is sorted out. There is no
automatic read-write switch once the node has enough space again. This requires
user interaction.

The flood-stage watermark is set to `95%` utilization by default.

Closes #24299
2017-07-05 22:18:23 +02:00
Christoph Büscher 3185eaece8 QueryBuilders should implement ToXContentObject (#25530)
All query builders are written as self-contained xContent objects, so we should mark
them accordingly using ToXContentObject. This also makes it possible to use
things like XContentHelper#toXContent to render query builders in tests.
2017-07-05 09:50:10 +02:00
Christoph Büscher f576c987ce Remove QueryParseContext (#25486)
QueryParseContext is currently only used as a wrapper for an XContentParser, so
this change removes it entirely and changes the appropriate APIs that use it so
far to only accept a parser instead.
2017-07-03 17:30:40 +02:00
Simon Willnauer 5a7c8bb04e Cleanup network / transport related settings (#25489)
This commit makes the use of the global network settings explicit instead
of implicit within NetworkService. It cleans up several places where we fall
back to the global settings while we should have used tcp or http ones.

In addition this change also removes unnecessary settings classes
2017-07-02 10:16:50 +02:00
James Baiera 74f4a14d82 Upgrading HDFS Repository Plugin to use HDFS 2.8.1 Client (#25497)
Hadoop 2.7.x libraries fail when running on JDK9 due to the version string changing to a single 
character. On Hadoop 2.8, this is no longer a problem, and it is unclear on whether the fix will be 
backported to the 2.7 branch. This commit upgrades our dependency of Hadoop for the HDFS 
Repository to 2.8.1.
2017-06-30 17:57:56 -04:00
Tim Brooks cac2eec7d2 Add NioTransport threads to thread name checks (#25477)
We have various assertions that check we never block on transport
threads. This commit adds the thread names for the NioTransport to
these assertions.

With this change I had to fix two places where we were calling blocking
methods from the transport threads.
2017-06-29 15:16:07 -05:00
Tim Brooks dd5d165da1 Prevent channel enqueue after selector close (#25478)
This commit adds additional protection to `ESSelector` and its
implementations to ensure that channels are not enqueued after the
selector is closed.

After a channel has been added to the queue, we check that the selector
is open. If it is not, then we remove the channel from the queue. If the
channel is removed successfully, we throw an `IllegalStateException`.
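The guard described here is the usual add-then-recheck-then-undo pattern for racing against close; a minimal sketch:

```
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Illustrative sketch: reject enqueues that race with selector close by re-checking
 *  the open flag after the channel has been added to the queue. */
final class SelectorQueueSketch<C> {
    private final Queue<C> newChannels = new ConcurrentLinkedQueue<>();
    private volatile boolean open = true;

    void scheduleForRegistration(C channel) {
        newChannels.add(channel);
        if (open == false && newChannels.remove(channel)) {
            // the selector closed concurrently and will never drain this channel
            throw new IllegalStateException("selector is closed");
        }
    }

    void close() {
        open = false;
        // the closing thread drains and fails any channels still in the queue
        for (C channel; (channel = newChannels.poll()) != null; ) {
            // notify the channel that registration failed (omitted in this sketch)
        }
    }
}
```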
2017-06-29 14:02:50 -05:00
Tim Brooks 6c58f0c4e6 Handle ping correctly in NioTransport (#25462)
Our current TCPTransport logic assumes that we do not pass pings to
the TCPTransport level.

This commit fixes an issue where NioTransport was passing pings to
TCPTransport and leading to exceptions.
2017-06-29 11:03:51 -05:00
Christoph Büscher acade2b40a Tests: Remove platform specific assertion in NioSocketChannelTests
This check depends on the language settings on the system the
test runs on, e.g. it fails on Ubuntu with LANG=de_DE.UTF-8.
2017-06-29 17:32:51 +02:00
Christoph Büscher 927111c91d Remove QueryParseContext from parsing QueryBuilders (#25448)
Currently QueryParseContext is only a thin wrapper around an XContentParser that
adds little functionality of its own. It provides helpers for long-deprecated
field names which can be removed and two helper methods that can be made static
and moved to other classes. This is a first step in helping to remove
QueryParseContext entirely.
2017-06-29 17:10:20 +02:00
Tim Brooks cad57959e1 Remove finicky exception message assertion
In SimpleNioTransportTests we assert that an IOException has a certain
message. This message does not appear to be dependable (and might
change based on platform).

Our other transport tests (mock and netty) do not make this assertion.
Instead they only assert on our application exception message. This
commit removes the IOException message assertion and retains the
ConnectTransportException message assertion.
2017-06-28 14:16:04 -05:00
Tim Brooks 5f8be0e090 Introduce NioTransport into framework for testing (#24262)
This commit introduces a nio-based tcp transport into the framework for
testing.

Currently Elasticsearch uses a simple blocking tcp transport for
testing purposes (MockTcpTransport). This diverges from production
where our current transport (netty) is non-blocking.

The point of this commit is to introduce a testing variant that more
closely matches the behavior of production instances.
2017-06-28 10:51:20 -05:00
Yannick Welsch 5a4a47332c Use a single method to update shard state
This commit refactors index shard to provide a single method for
updating the shard state on an incoming cluster state update.

Relates #25431
2017-06-28 09:48:47 -04:00
Jason Tedor 5a9fc8aa2a Remove path.conf setting
This commit removes path.conf as a valid setting and replaces it with a
command-line flag for specifying a non-default path for configuration.

Relates #25392
2017-06-26 15:18:29 -04:00
Martijn van Groningen a34f5fa812
Move more token filters to analysis-common module
The following token filters were moved: stemmer, stemmer_override, kstem, dictionary_decompounder, hyphenation_decompounder, reverse, elision and truncate.

Relates to #23658
2017-06-26 09:02:16 +02:00
Ryan Ernst 1583f81047 Test: Allow merging mock secure settings (#25387)
While real secure settings (ie an ES keystore) cannot be merged
together, mocked secure settings can and need to be sometimes merged.
This commit adds a merge method to allow tests to merge together
multiple instances of secure settings.
2017-06-25 10:19:51 -07:00
Martijn van Groningen 9c511bc447
test: Replace OldIndexBackwardsCompatibilityIT#testOldClusterStates with a full cluster restart qa test
OldIndexBackwardsCompatibilityIT#testOldClusterStates tested whether global and index metadata could be read from the data directory;
 this can also be tested in a full cluster restart qa test that checks the cluster state via the api.

 Relates to #24939
2017-06-23 09:54:05 +02:00
Boaz Leskes d963882053 Enable a long translog retention policy by default (#25294)
#25147 added the translog deletion policy but didn't enable it by default. This PR enables a default retention of 512MB (the same as the maximum size of the current translog) and an age of 12 hours (i.e., after 12 hours all translog files will be deleted). This increases the chance of having an ops-based recovery, even if the primary flushed or the replica was offline for a few hours.

In order to see which parts of the translog are committed into lucene the translog stats are extended to include information about uncommitted operations.

Views now include all translog ops and guarantee, as before, that those will not go away. Snapshotting a view allows filtering out generations that are not relevant based on a specific sequence number.

Relates to #10708
2017-06-22 17:08:14 +02:00
Adrien Grand 44e9c0b947 Upgrade to lucene-7.0.0-snapshot-ad2cb77. (#25349)
Most notable changes:
 - better update concurrency: LUCENE-7868
 - TopDocs.totalHits is now a long: LUCENE-7872
 - QueryBuilder does not remove the boolean query around multi-term synonyms:
   LUCENE-7878
 - removal of Fields: LUCENE-7500

For the `TopDocs.totalHits` change, this PR relies on the fact that the encodings
of vInts and vLongs are compatible: you can write with one and read with the other as
long as the value can be represented by a positive int.
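The compatibility claim rests on both encodings writing 7 bits per byte with a continuation bit; a small self-contained demonstration in plain Java (not Lucene's DataOutput):

```
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

/** Illustrative demo: a vInt and a vLong of the same non-negative int value
 *  produce identical bytes, because both write 7 bits per byte with a continuation bit. */
final class VarIntDemo {
    static byte[] writeVInt(int i) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((i & ~0x7F) != 0) {
            out.write((i & 0x7F) | 0x80);
            i >>>= 7;
        }
        out.write(i);
        return out.toByteArray();
    }

    static byte[] writeVLong(long l) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((l & ~0x7FL) != 0) {
            out.write((int) (l & 0x7F) | 0x80);
            l >>>= 7;
        }
        out.write((int) l);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        for (int value : new int[] {0, 1, 127, 128, 300, Integer.MAX_VALUE}) {
            if (Arrays.equals(writeVInt(value), writeVLong(value)) == false) {
                throw new AssertionError("encodings differ for " + value);
            }
        }
        System.out.println("vInt and vLong encodings agree for non-negative int values");
    }
}
```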
2017-06-22 12:35:33 +02:00
Boaz Leskes 7013cbd927 Update MockTransportService to the age of Transport.Connection (#25320)
MockTransportServices allows us to simulate network disruptions in our testing infra. Sadly it wasn't updated to the state of the art in Transport land. This PR brings it up to speed. Specifically:

1) Opening a connection is now also blocked (before only node connections were blocked)
2) Simplifies things using the latest connection based notification between TcpTransport and TransportService for when a disconnect happens.
3) By 2, it fixes a race condition where we may fail to respond to a sent request when it is sent concurrently with the closing of a connection. The old code relied on a node based bridge between tcp transport and transport service. Sadly, the following doesn't work any more:

```
   if (transport.nodeConnected(node)) {
            // this is a connected node, disconnecting from it will bubble up the exception
            transport.disconnectFromNode(node); <-- this may now be a noop and it doesn't mean that the transport service was notified of the disconnect between the nodeConnected check and here.
   } else {
            throw new ConnectTransportException(node, reason, e);
   }
```
2017-06-21 10:27:57 +02:00
Simon Willnauer 86a544de3b Ensure we never read from a closed MockSecureSettings object (#25322)
If secure settings are closed after the node has been constructed
no key-store access is permitted. We should also try to be as close as possible
to the real behavior if we mock secure settings. This change also adds
the same behavior as bootstrap has to InternalTestCluster to ensure we fail
if we try to read from secure settings after the node has been constructed.
2017-06-21 08:14:38 +02:00
Simon Willnauer 5abb7c4bec Use IndexMetaData settings as a basis for new index settings (#25310)
In MockFSDirectory we should use the actual index's settings to build
a new IndexMetaData settings object instead of the node settings.

Relates to #25297
2017-06-20 15:44:19 +02:00
Nik Everett 3261586cac Tweak reindex cancel logic and add many debug logs (#25256)
I'm still trying to hunt down rare failures in the cancelation tests
for reindex and friends. Here is the latest:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+5.x+multijob-unix-compatibility/os=ubuntu/876/console

It doesn't show much, other than that one of the tasks didn't kill
itself when asked to cancel.

So I'm going a bit crazy with debug logging so that the next time this
comes up I can trace exactly what happened.

Additionally, this tweaks the logic around how rethrottles were
performed around cancel. Previously we set the `requestsPerSecond`
to `0` when we cancelled the task. That was the "old way" to set them
to inifity which was the intent. This switches that from `0` to
`Float.MAX_VALUE` which is the "new way" to set the `requestsPerSecond`
to infinity. I don't know that this is much better, but it feels better.
2017-06-19 18:46:42 -04:00
Jay Modi 1a6491bc54 Test: do not copy secure settings when creating random directory service (#25297)
In tests, we sometimes create a random directory service and as part of that the IndexSettings get
built again. When we build them again, we need to make sure we do not set the secure settings on
the new IndexMetaData object that gets created as the node settings already have the secure
settings and the index settings and node settings will be combined. If both have secure settings,
the settings builder will throw an AlreadySetException.
2017-06-19 14:52:32 -06:00
Yannick Welsch 1a20760d79 Simplify IndexShard indexing and deletion methods (#25249)
Indexing or deleting documents through the IndexShard interface is quite complex and error-prone. It requires multiple calls, e.g. first prepareIndexOnPrimary, then do some checks if mapping updates have occurred, then do the actual indexing using index(...) etc. Currently each consumer of the interface (local recovery, peer recovery, replication) has additional custom checks built around it to deal with mapping updates, some of which are even inconsistent. This commit aims at reducing the complexity by exposing a simpler interface on IndexShard. There are no more prepare*** methods and the mapping complexity is also hidden, but still giving callers a possibility to implement custom logic to deal with mapping updates.
2017-06-19 20:11:54 +02:00