Commit Graph

1354 Commits

Author SHA1 Message Date
Tim Brooks ce45e29be7
Remove manual tracking of registered channels (#27445)
This is related to #27260. Currently, every ESSelector keeps track of
all channels that are registered with it. ESSelector is just an
abstraction over a raw java nio selector. The java nio selector already
tracks its own selection keys. This commit removes our tracking and
relies on the java nio selector tracking.
2017-11-17 16:20:09 -07:00
David Turner 08a257327f
Remove newline from log message (#27425)
It leads to harder-to-parse logs that look like this:

```
  1> [2017-11-16T20:46:21,804][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,812][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,820][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,966][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
```
2017-11-17 14:12:06 +00:00
Tim Brooks f761a0e0e4
Remove unneeded Throwable handling in nio (#27412)
This is related to #27260. In the nio transport work we do not want to
catch or handle `Throwable`. There are a few places where we still have
exception handlers that accept `Throwable`. This commit removes those cases.
2017-11-16 18:24:06 -07:00
David Turner 9766b858d0
Prepare for bump to 6.0.1 on the master branch (#27391)
An assortment of fixes, particularly to version number calculations, in preparation for the bump to 6.0.1.
2017-11-16 18:38:54 +00:00
Tim Brooks 80ef9bbdb1
Remove parameterization from TcpTransport (#27407)
This commit is a follow up to the work completed in #27132. Essentially
it transitions two more methods (sendMessage and getLocalAddress) from
Transport to TcpChannel. With this change, there is no longer a need for
TcpTransport to be aware of the specific type of channel a transport
returns. So that class is no longer parameterized by channel type.
2017-11-16 11:19:36 -07:00
Tim Brooks 35a5922927
Delete unneeded nio client (#27408)
This is a follow up to #27132. As that PR greatly simplified the
connection logic inside a low level transport implementation, much of
the functionality provided by the NioClient class is no longer
necessary. This commit removes that class.
2017-11-16 09:22:40 -07:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-bucket aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation in composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance, the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg":
      "terms": {
        "field": "field2"
      }
  }
}
````
... which retrieves the top N terms for `field1` and, for each top term in `field1`, the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1` and `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
	"field1": {
          "terms": {
              "field": "field1"
            }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
        }
      },
    }
  }
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, i.e. the values of the composite key to aggregate after.
For instance, the following aggregation will aggregate all composite keys that sort after `arizona, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "alabama", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
	"field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
	}
      }
    }
  }
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance, the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can terminate early on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Tim Brooks ca11085bb6
Add TcpChannel to unify Transport implementations (#27132)
Right now our different transport implementations must duplicate
functionality in order to stay compliant with the requirements of
TcpTransport. They must all implement common logic to open channels,
close channels, keep track of channels for eventual shutdown, etc.

Additionally, there is a weird and complicated relationship between
Transport and TransportService. We eventually want to start merging
some of the functionality between these classes.

This commit starts moving towards a world where TransportService retains
all the application logic and channel state. Transport implementations
in this world will only be tasked with returning a channel when one is
requested, calling transport service when a channel is accepted from
a server, and starting / stopping itself.

Specifically this commit changes how channels are opened and closed. All
Transport implementations now return a channel type that must comply with
the new TcpChannel interface. This interface has the methods necessary
for TcpTransport to completely manage the lifecycle of a channel. This
includes setting the channel up, waiting for connection, adding close
listeners, and eventually closing.
2017-11-15 12:38:39 -07:00
Luca Cavanna 382da0f227
REST spec: Validate that api name matches file name that contains it (#27366)
This commit validates that each spec json file contains an API that has the same name as the file
2017-11-14 14:53:00 +01:00
Simon Willnauer 2299c70371
Allow affix settings to specify dependencies (#27161)
We use affix settings to group settings / values under a certain namespace.
In some cases, such as login information, a setting is only valid if
one or more other settings are present. For instance `x.test.user` is only valid
if there is an `x.test.passwd` present and vice versa. This change allows specifying
such a dependency to prevent settings updates that leave settings in an inconsistent
state.
2017-11-13 12:06:36 +01:00
Simon Willnauer a34c2f0b8d
Ensure external refreshes will also refresh internal searcher to minimize segment creation (#27253)
We cut over to internal and external IndexReader/IndexSearcher in #26972 which uses
two independent searcher managers. This has the downside that refreshes of the external
reader will never clear the internal version map, which in turn will trigger additional
and potentially unnecessary segment flushes since memory must be freed. Under heavy
indexing load with low refresh intervals this can cause excessive segment creation which
causes high GC activity and significantly increases the required segment merges.

This change adds a dedicated external reference manager that delegates refreshes to the
internal reference manager that then `steals` the refreshed reader from the internal
reference manager for external usage. This ensures that external and internal readers
are consistent on an external refresh. As a side effect this also releases old segments
referenced by the internal reference manager which can potentially hold on to already merged
away segments until it is refreshed due to a flush or indexing activity.
2017-11-09 08:40:22 +00:00
Tim Brooks dc86b4c2ed
Decouple `ChannelFactory` from Tcp classes (#27286)
* Decouple `ChannelFactory` from Tcp classes

This is related to #27260. Currently `ChannelFactory` is tightly coupled
to classes related to the Elasticsearch TCP binary protocol. This commit
modifies the factory to be able to construct HTTP or other protocol
channels.
2017-11-08 14:30:00 -07:00
Jason Tedor d5451b2037
Die with dignity while merging
If an out of memory error is thrown while merging, today we quietly
rewrap it into a merge exception and the out of memory error is
lost. Instead, we need to rethrow out of memory errors, and in fact any
fatal error here, and let those go uncaught so that the node is torn
down. This commit causes this to be the case.

Relates #27265
2017-11-06 17:55:11 -05:00
Jason Tedor 766d29e7cf
Correctly encode warning headers
The warning headers have a fairly limited set of valid characters
(cf. quoted-text in RFC 7230). While we have assertions that we adhere
to this set of valid characters, ensuring that our warning messages do
not violate the specification, we were neglecting the possibility that
arbitrary user input would trickle into these warning headers. Thus,
missing here were tests for these situations and encoding of characters
that appear outside the set of valid characters. This commit addresses
this by encoding any characters in a deprecation message that are not
from the set of valid characters.

Relates #27269
2017-11-06 13:20:30 -05:00
Simon Willnauer bd7efa908a Add ability to split shards (#26931)
This change adds a new `_split` API that allows splitting indices into a new
index with a power of two more shards than the source index.  This API works
alongside the `_shrink` API but doesn't require any shard relocation before
indices can be split.

The split operation is conceptually an inverse `_shrink` operation since we
initialize the index with a _synthetic_ number of routing shards that are used
for the consistent hashing at index time. Compared to indices created with
earlier versions this might produce slightly different shard distributions but
has no impact on the per-index backwards compatibility.  For now, the user is
required to prepare an index to be splittable by setting the
`index.number_of_routing_shards` at index creation time.  The setting allows the
user to prepare the index to be splittable in factors of
`index.number_of_routing_shards`, i.e. if the index is created with
`index.number_of_routing_shards: 16` and `index.number_of_shards: 2` it can be
split into `4, 8, 16` shards. This is an intermediate step until we can make
this the default. This also allows us to safely backport this change to 6.x.
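
For illustration, a minimal request sequence might look like this (index names are hypothetical, and the source index typically has to be made read-only before splitting):

```
# create a source index prepared for splitting (2 shards, routing factor 16)
PUT /my_source_index
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_routing_shards": 16
  }
}

# the source index must be read-only before it can be split
PUT /my_source_index/_settings
{
  "index.blocks.write": true
}

# split into a new index with 4 shards (a valid factor of the routing shards)
POST /my_source_index/_split/my_target_index
{
  "settings": {
    "index.number_of_shards": 4
  }
}
```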

The `_split` operation is implemented internally as a DeleteByQuery on the
lucene level that is executed while the primary shards execute their initial
recovery. Subsequent merges that are triggered due to this operation will not be
executed immediately. All merges will be deferred until the shards are started
and will then be throttled accordingly.

This change is intended for the 6.1 feature release but will not support pre-6.1
indices to be split unless these indices have been shrunk before. In that case
these indices can be split backwards into their original number of shards.
2017-11-06 11:37:55 +01:00
Tanguy Leroux 43e7a4a349
Upgrade to Jackson 2.8.10 (#27230)
While it's not possible to upgrade the Jackson dependencies 
to their latest versions yet (see #27032 (comment) for more) 
it's still possible to upgrade to the latest 2.8.x version.
2017-11-06 10:20:05 +01:00
Jim Ferenczi 429275a773
Remove ElasticsearchQueryCachingPolicy (#27190)
We have a hidden setting called `index.queries.cache.term_queries` that disables caching of term queries in the query cache.
However, term queries are no longer cached in the Lucene UsageTrackingQueryCachingPolicy since version 6.5.
This makes the ES policy useless but also makes it impossible to re-enable caching for term queries.
Since this change appeared in Lucene 6.5, the setting has been a no-op since version 5.4 of Elasticsearch.
The change in this PR removes the setting and the custom policy.
2017-11-06 08:26:24 +01:00
David Roberts 749c3ec716
Remove the single argument Environment constructor (#27235)
Only tests should use the single argument Environment constructor.  To
enforce this the single arg Environment constructor has been replaced with
a test framework factory method.

Production code (beyond initial Bootstrap) should always use the same
Environment object that Node.getEnvironment() returns.  This Environment
is also available via dependency injection.
2017-11-04 13:25:09 +00:00
kel 0f21262b36 Do not create directories if repository is readonly (#26909)
For FsBlobStore and HdfsBlobStore, if the repository is read only, the blob store should be aware of the readonly setting and not create directories if they don't exist.

Closes #21495
2017-11-03 13:10:50 +01:00
Jason Tedor d6d830ff0b
Fix logic detecting unreleased versions
When partitioning version constants into released and unreleased
versions, today we have a bug in finding the last unreleased
version. Namely, consider the following version constants on the 6.x
branch: ..., 5.6.3, 5.6.4, 6.0.0-alpha1, ..., 6.0.0-rc1, 6.0.0-rc2,
6.0.0, 6.1.0. In this case, our convention dictates that: 5.6.4, 6.0.0,
and 6.1.0 are unreleased. Today we correctly detect that 6.0.0 and 6.1.0
are unreleased, and then we say the previous patch version is unreleased
too. The problem is that the logic to remove that previous patch version is
broken: it does not skip alphas/betas/RCs which have been released. This
commit fixes this by skipping backwards over pre-release versions when
finding the previous patch version to remove.

Relates #27206
2017-11-01 13:01:45 -04:00
Colin Goodheart-Smithe 99aca9cdfc
Enhances exists queries to reduce need for `_field_names` (#26930)
* Enhances exists queries to reduce need for `_field_names`

Before this change we wrote the names of all the fields in a document to a `_field_names` field and then implemented exists queries as a term query on this field. The problem with this approach is that it bloats the index and also affects indexing performance.

This change adds a new method `existsQuery()` to `MappedFieldType` which is implemented by each sub-class. For most field types if doc values are available a `DocValuesFieldExistsQuery` is used, falling back to using `_field_names` if doc values are disabled. Note that only fields where no doc values are available are written to `_field_names`.
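
The user-facing query is unchanged; for reference, a minimal `exists` query (index and field names are hypothetical):

```
GET /my-index/_search
{
  "query": {
    "exists": {
      "field": "user"
    }
  }
}
```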

Closes #26770

* Addresses review comments

* Addresses more review comments

* implements existsQuery explicitly on every mapper

* Reinstates ability to perform term query on `_field_names`

* Added bwc depending on index created version

* Review Comments

* Skips tests that are not supported in 6.1.0

These values will need to be changed after backporting this PR to 6.x
2017-11-01 10:46:59 +00:00
kel c3e2bdf20c Raise IllegalArgumentException if query validation failed (#26811)
Closes #26799
2017-10-31 12:17:27 +01:00
Adrien Grand 3812d3cb43
TopHitsAggregator must propagate calls to `setScorer`. (#27138)
It is required in order to work correctly with bulk scorer implementations
that change the scorer during the collection process. Otherwise sub collectors
might call `Scorer.score()` on the wrong scorer.

Closes #27131
2017-10-31 09:59:06 +01:00
Jason Tedor a566942219
Refactor internal engine
This commit is a minor refactoring of internal engine to move hooks for
generating sequence numbers into the engine itself. As such, we refactor
tests that relied on this hook to use the new hook, and remove the hook
from the sequence number service itself.

Relates #27082
2017-10-30 13:10:20 -04:00
Ryan Ernst 2a8452b513 Reindex: Fix headers in reindex action (#26937)
The headers passed to reindex were skipped except for the last one. This
commit fixes the copying of the headers, as well as adds a base test
case for rest client builders to access the headers within the built
rest client.

relates #22976
2017-10-25 16:37:01 -07:00
olcbean 981b7f4d39 Make yaml test runner stricter by enforcing `required` for paths and parameters (#27035)
Until now the yaml test runner was only verifying that the provided path parts and parameters are supported.
With this PR, the yaml test runner also checks that all required path parts and parameters are provided.
2017-10-25 19:36:42 +00:00
Luca Cavanna 8caf7d4ff8 Decouple BulkProcessor from ThreadPool (#26727)
Introduce a minimal thread scheduler as a base class for `ThreadPool`. Such a class can be used from the `BulkProcessor` to schedule retries and the flush task. This allows removing the `ThreadPool` dependency from `BulkProcessor`, which required providing settings that contain `node.name` and also needed log4j for logging. Instead, it now needs a `Scheduler` that is much lighter and gets automatically created and shut down on close.

Closes #26028
2017-10-25 10:30:23 +02:00
Lee Hinman fcfbdf1f37 Expose adaptive replica selection stats in /_nodes/stats API
This exposes the collected metrics we store for ARS in the nodes stats, as well
as the computed rank of nodes. Each node exposes its perspective about the
cluster.

Here's an example output (with `?human`):

```json
...
"adaptive_selection" : {
  "_k6v1-wERxyUd5ke6s-D0g" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "7.8ms",
    "avg_service_time_ns" : 7896963,
    "avg_response_time" : "9ms",
    "avg_response_time_ns" : 9095598,
    "rank" : "9.1"
  },
  "VJiCUFoiTpySGmO00eWmtQ" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "1.3ms",
    "avg_service_time_ns" : 1330240,
    "avg_response_time" : "4.5ms",
    "avg_response_time_ns" : 4524154,
    "rank" : "4.5"
  },
  "DHNGTdzyT9iiaCpEUsIAKA" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "2.1ms",
    "avg_service_time_ns" : 2113164,
    "avg_response_time" : "6.3ms",
    "avg_response_time_ns" : 6375810,
    "rank" : "6.4"
  }
}
...
```
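
Assuming `adaptive_selection` is the metric name under node stats, the output above can presumably be fetched with:

```
GET /_nodes/stats/adaptive_selection?human
```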
2017-10-24 08:58:42 -06:00
Tim Brooks 277637f42f Do not set SO_LINGER on server channels (#26997)
Right now we are attempting to set SO_LINGER to 0 on server channels
when we are stopping the tcp transport. This is not a supported socket
option and throws an exception. This also prevents the channels from
being closed.

This commit 1. doesn't set SO_LINGER for server channels, 2. checks
that it is a supported option in nio, and 3. changes the log message
to warn for server channel close exceptions.
2017-10-13 13:06:38 -06:00
Jason Tedor 393e73612e Fix formatting in channel close test
This commit fixes the indentation in the transport test case for a
channel closing while connecting.
2017-10-10 13:39:45 -04:00
Jason Tedor 4c06b8f1d2 Check for closed connection while opening
While opening a connection to a node, a channel can subsequently
close. If this happens, a future callback whose purpose is to close all
other channels and disconnect from the node will fire. However, this
future will not be ready to close all the channels because the
connection will not be exposed to the future callback yet. Since this
callback is run once, we will never try to disconnect from this node
again and we will be left with a closed channel. This commit adds a
check that all channels are open before exposing the channel and throws
a general connection exception. In this case, the usual connection retry
logic will take over.

Relates #26932
2017-10-10 13:34:51 -04:00
Simon Willnauer cdd7c1e6c2 Return List instead of an array from settings (#26903)
Today we return a `String[]` that requires copying values for every
access. Yet, we already store the setting as a list so we can simply
return the unmodifiable list directly. This makes list / array access in settings
a much cheaper operation especially if lists are large.
2017-10-09 09:52:08 +02:00
Nhat bf4c3642b2 remove _primary and _replica shard preferences (#26791)
The shard preferences _primary, _replica and their variants were useful
for asynchronous replication. However, with the current implementation, they
are no longer useful and should be removed.

Closes #26335
2017-10-08 11:03:06 -04:00
Jason Tedor 470e5e7cfc Add additional low-level logging handler ()
* Add additional low-level logging handler

We have the trace handler, which is useful for recording sent messages,
but there are times when it would be useful to have more low-level
logging about the events occurring on a channel. This commit adds a
logging handler that can be enabled by setting the log level of
org.elasticsearch.transport.netty4.ESLoggingHandler to trace. It
provides trace logging of low-level channel events and includes some
information about the request/response read/write events on the channel
as well.
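
For example, the handler could presumably be enabled at runtime with a dynamic logger setting (the same level can also be configured in the logging configuration):

```
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport.netty4.ESLoggingHandler": "trace"
  }
}
```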

* Remove imports

* License header

* Remove redundant

* Add test

* More assertions
2017-10-05 12:10:58 -04:00
Martijn van Groningen b27e408ed2
Removed void token filter entries and added two tests 2017-10-05 13:25:05 +02:00
Md. Abdulla-Al-Sun a40c474e10
Added Bengali Analyzer to Elasticsearch with respect to the lucene update (PR#238) 2017-10-05 13:25:05 +02:00
Boaz Leskes 2a04118e88 Promote common rest test utility methods to ESRestTestCase
We have duplicates in some classes and I was about to create one more.
2017-10-05 10:08:10 +02:00
Simon Willnauer 00dfdf50cf Represent lists as actual lists inside Settings (#26878)
Today we represent each value of a list setting with its own dedicated key
that ends with the index of the value in the list. Aside from the obvious
weirdness this has several issues, especially if lists are massive, since it
causes massive runtime penalties when validating settings. For example, a list of 100k
words will literally cause a create index call to time out and, in turn, a massive
slowdown on all subsequent validation runs.

With this change we use a simple string list to represent the list. This change
also forbids adding a setting that ends with a .0, which was internally used to
detect a list setting.  Once this has been rolled out for an entire major
version all the internal .0 handling can be removed since all settings will be
converted.
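
Roughly, the change in representation looks like this (setting name used only as an example):

```
# before: each list element stored under its own dotted key
discovery.zen.ping.unicast.hosts.0: host1
discovery.zen.ping.unicast.hosts.1: host2

# after: the value is kept as an actual list
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
```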

Relates to #26723
2017-10-05 09:27:08 +02:00
Martijn van Groningen dca787ed8a
upgrade to Lucene 7.1.0 snapshot version 2017-10-05 09:06:56 +02:00
Simon Willnauer d1533e2397 Remove Settings#getAsMap() (#26845)
Since `#getAsMap` exposes the internal representation we are trying to remove it
step by step. This commit cleans up some xcontent writing as well as
usage in tests.
2017-10-04 01:21:38 -06:00
Boaz Leskes a18bd9caa2 Increase ESRestTestCase.waitForClusterStateUpdatesToFinish time out to 30s
It is set to 10 sec but sometimes it takes the cluster longer to settle.
2017-10-03 12:24:36 +02:00
Tim Brooks d80ad7f097 Check channel is open before setting SO_LINGER (#26857)
This commit fixes #26855. Right now we set SO_LINGER to 0 if we are
stopping the transport. This can throw a ChannelClosedException if the
raw channel is already closed. We have a number of scenarios where it is
possible this could be called with a channel that is already closed.
This commit fixes the issue by checking that the channel is not closed
before attempting to set the socket option.
2017-10-02 15:09:52 -06:00
Tim Brooks 9ae7a80ba5 Move raw selector usage into ESSelector (#26825)
Currently we only log generic messages about errors in logs from the
nio event handler. This means that we do not know which channel had
issues connecting, reading, writing, etc.

This commit changes the logs to include the local and remote addresses
and profile for a channel.
2017-10-01 17:59:57 -06:00
Simon Willnauer 7b8d036ab5 Replace group map settings with affix setting (#26819)
We use group settings historically instead of using a prefix setting which is more restrictive and type safe. The majority of the use cases need to access a key/value map based on the _leaf node_ of the setting, i.e. the setting `index.tag.*` might be used to tag an index with `index.tag.test=42` and `index.tag.staging=12`, which then would be turned into a `{"test": 42, "staging": 12}` map. The group settings would always use `Settings#getAsMap`, which loses type information and uses the internal representation of the settings. Using prefix settings now allows accessing such a map in a type-safe and native way.
2017-09-30 14:27:21 +02:00
Tim Brooks bf403ae028 Add information about nio channels in logs (#26806)
Currently we only log generic messages about errors in logs from the
nio event handler. This means that we do not know which channel had
issues connecting, reading, writing, etc.

This commit changes the logs to include the local and remote addresses
and profile for a channel.
2017-09-28 17:11:26 -06:00
Simon Willnauer 25d6778d31 Add comment to TCP transport impls why we set SO_LINGER on close 2017-09-28 13:07:01 +02:00
Armin Braun af06231d4c #26701 Close TcpTransport on RST in some Spots to Prevent Leaking TIME_WAIT Sockets (#26764)
#26701 Added option to RST instead of FIN to TcpTransport#closeChannels
2017-09-26 19:58:11 +00:00
Simon Willnauer a506ba8602 Remove `Settings#put(Map<String,String>)` (#26785)
`Map<String,String>` is basically erasing the type while other methods on
the `Settings.Builder` are type safe and have corresponding `get` methods.
2017-09-26 12:15:20 +02:00
Simon Willnauer aab4655e63 Unify Settings xcontent reading and writing (#26739)
This change adds a fromXContent method to Settings that allows reading
the xcontent that is produced by toXContent. It also replaces the entire settings
loader infrastructure and removes the structured map representation. Future PRs will
also tackle the `getAsMap` that exposes the internal representation of settings for
better encapsulation.
2017-09-25 13:23:01 +02:00
Jason Tedor f35d1de502 Introduce global checkpoint background sync
It is the exciting return of the global checkpoint background
sync. Long, long ago, in a snapshot version far, far away, we had and only
had a global checkpoint background sync. This sync would fire
periodically and send the global checkpoint from the primary shard to
the replicas so that they could update their local knowledge of the
global checkpoint. Later in time, as we sped ahead towards finalizing
the initial version of sequence IDs, we realized that we need the global
checkpoint updates to be inline. This means that on a replication
operation, the primary shard would piggy back the global checkpoint with
the replication operation to the replicas. The replicas would update
their local knowledge of the global checkpoint and reply with their
local checkpoint. However, this could allow the global checkpoint on the
primary to advance again and the replicas would fall behind in their
local knowledge of the global checkpoint. If another replication
operation never fired, then the replicas would be permanently behind. To
account for this, we added one more sync that would fire when the
primary shard fell idle. However, this has problems:
 - the shard idle timer defaults to five minutes, a long time to wait
   for the replicas to learn of the new global checkpoint
 - if a replica missed the sync, there was no follow-up sync to catch
   them up
 - there is an inherent race condition where the primary shard could
   fall idle mid-operation (after having sent the replication request to
   the replicas); in this case, there would never be a background sync
   after the operation completes
 - tying the global checkpoint sync to the idle timer was never natural

To fix this, we add two additional changes for the global checkpoint to
be synced to the replicas. The first is that we add a post-operation
sync that only fires if there are no operations in flight and there is a
lagging replica. This gives us a chance to sync the global checkpoint to
the replicas immediately after an operation so that they are always kept
up to date. The second is that we add back a global checkpoint
background sync that fires on a timer. This timer fires every thirty
seconds, and is not configurable (for simplicity). This background sync
is smarter than what we had previously in the sense that it only sends a
sync if the global checkpoint on at least one replica is lagging that of
the primary. When the timer fires, we can compare the global checkpoint
on the primary to its knowledge of the global checkpoint on the replicas
and only send a sync if there is a shard behind.

Relates #26591
2017-09-21 15:34:13 -04:00
James Baiera c760eec054 Add permission checks before reading from HDFS stream (#26716)
Add checks for special permissions before reading hdfs stream data. Also adds test from 
readonly repository fix. MiniHDFS will now start with an existing repository with a single snapshot 
contained within. Readonly Repository is created in tests and attempts to list the snapshots 
within this repo.
2017-09-21 11:55:07 -04:00
Michael Basnight f385e0cf26 Add bad_request to the rest-api-spec catch params (#26539)
This adds another request to the catch params. It also makes sure that
the generic request param does not allow 400 either.
2017-09-14 14:24:03 -05:00
Christoph Büscher c7c6443b10 [Docs] "The the" is a great band, but ... (#26644)
Removing several occurrences of this typo in the docs and javadocs, seems to be
a common mistake. Corrections turn up once in a while in PRs, better to correct
some of this in one sweep.
2017-09-14 15:08:20 +02:00
Adrien Grand 93da7720ff Move non-core mappers to a module. (#26549)
Today we have all non-plugin mappers in core. I'd like to start moving those
that neither map to json datatypes nor are very frequently used like `date` or
`ip` to a module.

This commit creates a new module called `mappers-extra` and moves the
`scaled_float` and `token_count` mappers to it. I'd like to eventually move
`range` fields there but it's more complicated due to their intimate
relationship with range queries.

Relates #10368
2017-09-13 17:58:53 +02:00
Simon Willnauer 42f3129d7b Allow plugins to validate cluster-state on join (#26595)
Today we don't have a pluggable way to validate if the cluster state
is compatible with the node that joins. We already apply some checks for index
compatibility that prevent nodes from joining a cluster with indices they don't support,
but for plugins this isn't possible. This change adds a cluster state validator that
allows plugins to prevent a join if the cluster-state is incompatible.
2017-09-12 15:32:33 +02:00
Ryan Ernst 5c35bff1c3 Test: Remove leftover static bwc test case (#26584)
This test case was leftover from the static bwc tests. There was still
one use for checking we do not load old indices, but this PR moves the
legacy code needed for that directly into the test. I also opened a
follow up issue to completely remove the unsupported test: #26583.
2017-09-11 15:38:30 -07:00
Jason Tedor b2e4bfa0a7 Snapshot fallback should consider build.snapshot
When determining if a build is a snapshot build, we look for a field in
the JAR manifest. However, when running tests, we are not running with a
compiled core Elasticsearch JAR; we are running with the compiled core
classes on the classpath. We have a fallback for this, we always assume
such a situation is a snapshot build. However, when running builds with
-Dbuild.snapshot=false, this is not the case. As such, we need to
fallback to the value of build.snapshot. However, there are cases where
we are not running with a compiled core Elasticsearch JAR (e.g., when
the transport client is embedded in a web container) so we should only
do this fallback if we are in tests. To verify we are in tests, we check
if randomized runner is on the classpath.

Relates #26554
2017-09-11 07:42:11 -04:00
Martijn van Groningen b391425da1
Added support to the percolate query to percolate multiple documents
The percolator will add a `_percolator_document_slot` field to all percolator
hits to indicate which document it matched. This number matches
the order in which the documents have been specified in the percolate query.

Also improved the support for multiple percolate queries in a search request.
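
A sketch of what a multi-document percolate query might look like (index and field names are hypothetical):

```
GET /queries-index/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "documents": [
        { "message": "first candidate document" },
        { "message": "second candidate document" }
      ]
    }
  }
}
```

Matching hits should then carry the `_percolator_document_slot` field identifying which of the supplied documents matched.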
2017-09-08 17:28:39 +02:00
Tim Brooks c1a20f7e48 Merge tsa with ts (#26369)
We currently have a weird relationship between Transport,
TransportService, and TransportServiceAdaptor. At some point I think
that we would like to collapse these all into one concept as we only
support TCP transports.

This commit moves in that direction by eliminating the adaptor and just
passing the transport service to the transport.
2017-09-05 09:15:56 -06:00
Boaz Leskes 2fd4af82e4 Move `UNASSIGNED_SEQ_NO` and `NO_OPS_PERFORMED` to SequenceNumbers (#26494)
Where they better belong.
2017-09-04 16:31:00 +02:00
Alexander Reelsen 80d0a32f8e ScriptService: Replace max compilation per minute setting with max compilation rate (#26399)
The current script service has a script compilation limit for a one
minute window. This is set to a small default value of 15. Instead of
increasing that default value, this commit introduces a new setting 
that allows configuring a rate per time unit, so that the script service can deal with bursts better.

The new setting is named `script.max_compilations_rate`,
requires a nonnegative number and a positive time value.

The default is `75/5m`, which is equivalent to the existing 15 per minute.
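
For example, in `elasticsearch.yml` (the value here is arbitrary):

```
# allow up to 150 compilations per 5 minute window (default is 75/5m)
script.max_compilations_rate: 150/5m
```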
2017-09-01 10:15:27 +02:00
Lee Hinman c3da66d021 Implement adaptive replica selection (#26128)
* Implement adaptive replica selection

This implements the selection algorithm described in the C3 paper for
determining which copy of the data a query should be routed to.

By using the service time EWMA, response time EWMA, and queue size EWMA we
calculate the score of a node by piggybacking these metrics with each search
request.

Since Elasticsearch lacks the "broadcast to every copy" behavior that Cassandra
has (as mentioned in the C3 paper) to update metrics after a node has been
highly weighted, this implementation adjusts a node's response stats using the
average of the its own and the "best" node's metrics. This is so that a long GC
or other activity that may cause a node's rank to increase dramatically does not
permanently keep a node from having requests routed to it, instead it will
eventually lower its score back to the realm where it is a potential candidate
for new queries.

This feature is off by default and can be turned on with the dynamic setting
`cluster.routing.use_adaptive_replica_selection`.
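
A minimal sketch of enabling it, assuming the dynamic setting can be applied via the cluster settings API:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.use_adaptive_replica_selection": true
  }
}
```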

Relates to #24915, however instead of `b=3` I used `b=4` (after benchmarking)

* Randomly use adaptive replica selection for internal test cluster

* Use an action name *prefix* for retrieving pending requests

* Add unit test for replica selection

* don't use adaptive replica selection in SearchPreferenceIT

* Track client connections in a SearchTransportService instead of TransportService

* Bind `entry` pieces in local variables

* Add javadoc link to C3 paper and javadocs for stat adjustments

* Bind entry's key and value to local variables

* Remove unneeded actionNamePrefix parameter

* Use conns.longValue() instead of cached Long

* Add comments about removing entries from the map

* Pull out bindings for `entry` in IndexShardRoutingTable

* Use .compareTo instead of manually comparing

* add assert for connections not being null and gte to 1

* Copy map for pending search connections instead of "live" map

* Increase the number of pending search requests used for calculating rank when chosen

When a node gets chosen, this increases the number of search counts for the
winning node so that it will not be as likely to be chosen again for
non-concurrent search requests.

* Remove unused HashMap import

* Rename rank -> rankShardsAndUpdateStats

* Rename rankedActiveInitializingShardsIt -> activeInitializingShardsRankedIt

* Instead of precalculating winning node, use "winning" shard from ranked list

* Sort null ranked nodes before nodes that have a rank
2017-08-30 20:55:11 -06:00
Tal Levy ed151d829d Migrate Search requests to use Writeable reading strategies (#26428)
Migrates many SearchRequest objects to use Writeable conventions and rejects usage of `readFrom` in these new classes.
2017-08-30 11:00:33 -07:00
Sergey Galkin c075323522 Refactor create index service to be unit testable
This commit refactors MetaDataCreateIndexService so that it is unit
testable.

Relates #25961
2017-08-29 16:55:44 -04:00
Michael Basnight cfd14cd2b8 Revert shading for the low level rest client (#26367)
At present, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDEs
during the brief time it was used. We did not know exactly how many
users would need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.

Closes #26328

This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
2017-08-25 14:13:12 -05:00
Nik Everett b3edd11aa0 Allow plugins to plug rescore implementations (#26368)
This allows plugins to plug rescore implementations into
Elasticsearch. While this is a fairly expert thing to do I've
done my best to point folks to the QueryRescorer as one that at
least documents the tradeoffs that it makes. I've attempted to
limit the API surface area by removing `SearchContext` from the
exposed interface, instead exposing just the IndexSearcher and
`QueryShardContext`. I also tried to make some of the class names
more consistent and do some general cleanup while I was there.

I entertained the notion of moving the `QueryRescorer` to a module.
After all, it'd be a wonderful test to prove that you can plug
rescore implementation into Elasticsearch if the only built in
rescore implementation is in the module. But I decided against it
because the new module would require a client jar and it'd require
moving some more things around. I think if we really want to do
it, we should do it as a followup.

I did, on the other hand, create an "example" rescore plugin which
should both be a nice example for anyone wanting to plug in their
own rescore implementation and serves as a good integration test
to make sure that you can indeed plug one in.

Closes #26208
2017-08-25 13:46:57 -04:00
Yannick Welsch 0390c76f0a Remove reinitShadowPrimary (#26349)
With shadow replicas gone, there is no need to have this method anymore.
2017-08-25 10:37:51 +09:30
Tal Levy 6ab4b6b0ac revamp TransportRequest handlers to support Writeable (#26315)
This PR begins the long journey to deprecating Streamable.

The idea here is to add additional method signatures that
support Writeable.Reader, so that objects extending TransportMessage can be
migrated to implement Writeable instead of Streamable.

One example conversion is done in this PR: SimulatePipelineRequest.
2017-08-22 15:47:05 -07:00
Yannick Welsch 3d8feff66e Use Java 9 FilePermission model (#26302)
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
2017-08-22 11:22:00 +09:30
Ryan Ernst 96b0d3e0cc Script: Convert script query to a dedicated script context (#26003)
This commit converts script query to use a new FilterScript context. The
new context returns a boolean, so the error that would have previously
happened at runtime if a non-boolean was returned would now happen at
script compilation. Also, the leniency of supporting returning a number
and 0 mapping to false, non-zero to true is gone, but it was never
documented. With the new context, compilation will now also fail if
special variables such as ctx are used, with the failure occurring at
compilation time instead of at runtime.
2017-08-18 15:18:35 -07:00
Tim Brooks 5d7a78fcdb Use PlainListenableActionFuture for CloseFuture (#26242)
Right now we use a custom future for the CloseFuture associated with a
channel. This is because we need special unwrapping logic to ensure that
exceptions from a future failure are a certain type (opposed to an
UncategorizedException). However, the current version is limiting
because we can only attach one listener.

This commit changes the CloseFuture to extend the
PlainListenableActionFuture. This change allows us to attach multiple
listeners.
2017-08-18 13:38:38 -05:00
Luca Cavanna 1309dfd44d Add links to external classes in clients javadoc (#25998)
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadoc are not that useful unless they have links to the Elasticsearch classes in the latter case, and to the low-level REST client in the sniffer javadoc. This commit adds those links.
2017-08-17 21:03:47 +02:00
Colin Goodheart-Smithe a975f4e5d6 Moves more classes over to ToXContentObject/Fragment (#26234)
* Moves more classes over to ToXContentObject/Fragment

* review comments
2017-08-16 15:40:40 +01:00
Simon Willnauer a9169e536b Several internal improvements to internal test cluster infra (#26214)
This change adds several random test infrastructure improvements that caused
issues in on-going developments but are generally useful. For instance, it is impossible
to restart a node with a secure setting source since we close it after the node is started.
This change makes it cloneable such that we can reuse it for a restart.
2017-08-15 17:42:15 +02:00
Martijn van Groningen 1146a35870
Move more token filters to analysis-common module
The following token filters were moved: arabic_stem, brazilian_stem, czech_stem, dutch_stem, french_stem, german_stem and russian_stem.

Relates to #23658
2017-08-11 17:39:24 +02:00
Andy Bristol 7e3cd6a019 reindex: automatically choose the number of slices (#26030)
In reindex APIs, when using the `slices` parameter to choose the number of slices, adds the option to specify `slices` as "auto" which will choose a reasonable number of slices. It uses the number of shards in the source index, up to a ceiling. If there is more than one source index, it uses the smallest number of shards among them.

This gives users an easy way to use slicing in these APIs without having to make decisions about how to configure it, as it provides a good-enough configuration for them out of the box. This may become the default behavior for these APIs in the future.
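
For example (index names are hypothetical):

```
POST /_reindex?slices=auto
{
  "source": { "index": "source_index" },
  "dest": { "index": "dest_index" }
}
```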
2017-08-11 08:25:25 -07:00
Simon Willnauer 6f82b0c6e2 Allow `ClusterState.Custom` to be created on initial cluster states (#26144)
Today we have a `null` invariant on all `ClusterState.Custom`. This makes
several code paths complicated and requires complex state handling in some cases.
This change allows registering a custom supplier that is used to initialize the
initial cluster state with these transient customs.
2017-08-11 09:51:49 +02:00
Nik Everett 99ac7beb8e Teach the build about betas and rcs (#26066)
The build was ignoring suffixes like "beta1" and "rc1" on the version numbers which was causing the backwards compatibility packaging tests to fail because they expected to be upgrading from 6.0.0 even though they were actually upgrading from 6.0.0-beta1. This adds the suffixes to the information that the build scrapes from Version.java. It then uses those suffixes when it resolves artifacts build from the bwc branch and for testing.

Closes #26017
2017-08-10 14:30:00 -04:00
Colin Goodheart-Smithe dfbaf90951 Adds ToXContentFragment (#25771)
* Adds ToXContentFragment

This interface is meant for objects that implement `ToXContent` but are not complete objects. It is basically the opposite of `ToXContentObject`. It means that it will be easier to track the migration of classes over to the fragment/not fragment ToXContent model as it will be clear which classes are not migrated. When no classes directly implement `ToXContent` we can make `ToXContent` package private to be sure that all new classes must implement `ToXContentObject` or `ToXContentFragment`.

* review comments

* more review comments

* javadocs

* iter

* Adds tests

* iter

* adds toString test for aggs

* improves tests following review comments

* iter

* iter
2017-08-09 15:53:30 +01:00
Simon Willnauer 82fa531ab4 Remove `_index` fielddata hack if cluster alias is present (#26082)
We introduced a hack in #25885 to respect the cluster alias if available on the `_index` field. This is important if aggregations or other field data related operations are executed. Yet, we added a small hack that duplicated an implementation detail from the `_index` field data builder to make this work. This change adds a necessary but simple API change that allows us to remove the hack and only have a single implementation.
2017-08-08 09:24:24 +02:00
Adrien Grand f0cba4fce5 Add a scripted similarity. (#25831)
The goal of this similarity is to help users who would like to keep the
functionality of the `tf-idf` similarity that we want to remove, or to allow
for specific use cases (disabling idf, disabling tf, disabling length norm,
etc.) without having to build a custom plugin and familiarize themselves with the low-level
Lucene API.
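
A rough sketch of how such a similarity might be configured; the `scripted` type and the script variables (`query.boost`, `doc.freq`, `doc.length`, `term.docFreq`, `field.docCount`) are assumptions here:

```
PUT /my-index
{
  "settings": {
    "index": {
      "similarity": {
        "scripted_tfidf": {
          "type": "scripted",
          "script": {
            "source": "double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount + 1.0) / (term.docFreq + 1.0)) + 1.0; double norm = 1 / Math.sqrt(doc.length); return query.boost * tf * idf * norm;"
          }
        }
      }
    }
  }
}
```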
2017-08-08 08:55:12 +02:00
Martijn van Groningen 99d79d5a0f
tests: do not generate random unicode strings for field names, but instead random alpha ascii strings
Should fix build failures like this one:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.0+multijob-unix-compatibility/
2017-08-07 15:09:01 +02:00
Luca Cavanna 14ba36977e [TEST] prevent yaml tests from using raw requests (#26044)
Raw requests are supported only by the java yaml test runner and were introduced to test docs snippets. Some yaml tests ended up using them (see #23497) which causes failures for other language clients. This commit migrates those yaml tests to Java tests that send requests through the Java low-level REST client, and also moves the ability to send raw requests to a special client that's only available when testing docs snippets.

Closes #25694
2017-08-07 11:02:16 +02:00
Boaz Leskes e11cbed534 Adding a refresh listener to a recovering shard should be a noop (#26055)
When `refresh=wait_for` is set on an indexing request, we register a listener on the shards that is called during the next refresh. During the translog recovery phase, when the engine is open, we have a window of time when indexing operations succeed and they can add their listeners. Those listeners will only be called when the recovery finishes as we do not refresh during recoveries (unless the indexing buffer is full). Next to being a bad user experience, it can also cause deadlocks with an ongoing peer recovery that may wait for those operations to mark the replica in sync (details below).

To fix this, this PR changes refresh listeners to be a noop when the shard is not yet serving reads (implicitly covering the recovery period). It doesn't matter anyway. 
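
For context, such an indexing request looks roughly like this (index, type and document are hypothetical):

```
PUT /my-index/my-type/1?refresh=wait_for
{
  "user": "kimchy"
}
```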

Deadlock with recovery:

When finalizing a peer recovery we mark the peer as "in sync". To do so we wait until the peer's local checkpoint is at least as high as the global checkpoint. If an operation with `refresh=wait_for` is added as a listener on that peer during recovery, it is not completed from the perspective of the primary. The primary may then wait for it to complete before advancing the local checkpoint for that peer. Since that peer is not considered in sync, the global checkpoint on the primary can be higher, causing a deadlock. The operation waits for recovery to finish and a refresh to happen. Recovery waits on the operation.
2017-08-04 19:51:15 +02:00
Tim Brooks 0401df81e0 Revert "Tests: Disable NIO transport mechanism in tests"
This reverts commit c24dbec6f5.
2017-08-02 09:59:07 -05:00
Colin Goodheart-Smithe 87c6e63e73 Adds mutate function to various tests (#25999)
* Adds mutate function to various tests

Relates to #25929

* fix test

* implements mutate function for all single bucket aggs

* review comments

* convert getMutateFunction to mutateIInstance
2017-08-02 11:38:31 +01:00
Alexander Reelsen c24dbec6f5 Tests: Disable NIO transport mechanism in tests
Due to test instability the new transport mechanism is
always disabled and does not randomly pick the new IO
transport.
2017-08-02 11:18:12 +02:00
Adrien Grand 58feb5efa0 Fix `_exists_` in query_string on empty indices. (#25993)
It currently fails if there are no mappings yet.
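
For example, a request of this form against an index with no mappings used to fail (index and field names are hypothetical):

```
GET /empty-index/_search
{
  "query": {
    "query_string": {
      "query": "_exists_:title"
    }
  }
}
```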

Closes #25956
2017-08-02 10:06:34 +02:00
Luca Cavanna e2d25c3c89 [TEST] Remove duplicated main response unit test (#25855)
Also move MainResponseTests to extend AbstractStreamableXContentTestCase
2017-08-02 08:42:38 +02:00
Tim Brooks 58d2dcc54f Ensure send listener is called on IOException
Currently there is an issue where the send listener is not called in the
nio transport when an exception is thrown during channel flush. This
leads to memory leaks. This commit ensures that the listener is called.
2017-08-01 22:30:04 -05:00
Tim Brooks 0f4f49496f Use nio transport in test clusters (#25986)
This commit adds the nio transport as an option in place of the mock tcp
transport for tests. Each test will only use one transport type. The
transport type is decided by a random boolean generated inside of the
`ESTestCase` class.
2017-08-01 16:19:31 -05:00
Ryan Ernst 072281d5aa Update version to 7.0.0-alpha1 (#25876)
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.

Closes #25893
Closes #25870
2017-08-01 15:47:48 -04:00
Luca Cavanna 4d589afbc2 AbstractQueryBuilder to no longer extend ToXContentBytes (#25948)
ToXContentToBytes is used as a base class that adds toString and buildAsBytes method implementations to classes that implement ToXContent. With the ongoing cleanups, this class is limited and doesn't add a lot of value, given that buildAsBytes can be replaced with XContentHelper.toXContent and toString can be replaced with Strings.toString(this).

The plan would be to remove ToXContentToBytes entirely, and AbstractQueryBuilder is the first place where we can remove its usage.
2017-07-31 17:38:24 +02:00
Boaz Leskes 9d10ffd547 Goodbye, Translog Views (#25962)
During peer recoveries, we need to copy over lucene files and replay the operations they miss from the source translog. Guaranteeing that translog files are not cleaned up has seen many iterations over time. Back in the old 1.0 days, recoveries went through the Engine and actively prevented both translog cleaning and lucene commits. We then moved to a notion called Translog Views, which allowed the recovery code to "acquire" a view into the translog which is then guaranteed to be kept around until the view is closed. The Engine code was free to commit lucene and do whatever it wanted without coordinating with recoveries. Translog file deletion logic was based on reference counting on the file level. Those counters were incremented when a view was acquired but also when the view was used to create a `Snapshot` that allowed you to read operations from the files. At some point we removed the file based counting complexity in favor of constructs on the Translog level that just keep track of "open" views and the minimum translog generation they refer to. To do so, Views had to be kept around until the last snapshot that was made from them was consumed. This was fine in recovery code but led to [a subtle bug](https://github.com/elastic/elasticsearch/pull/25862) in the [Primary Replica Resyncer](https://github.com/elastic/elasticsearch/pull/25862). 

Concurrently, we have developed the notion of a `TranslogDeletionPolicy` which is responsible for the liveness aspect of translog files. This class makes it very simple to take translog Snapshots into account when keeping translog files around, allowing people that just need a snapshot to just take a snapshot and not worry about views and such. Recovery code which actually does need a view can now prevent trimming by acquiring a simple retention lock (a `Closable`). This removes the need for the notion of a View.
2017-07-31 17:29:43 +02:00
Colin Goodheart-Smithe 7740cb54a5 Improves AbstractWireSerializingTestCase equals test (#25910)
* Improves AbstractWireSerializingTestCase  equals test

`AbstractWireSerializingTestCase.testEqualsAndHashcode()` now uses `EqualsHashcodeTestUtils` to perform the hashCode and equals checks. To support this `AbstractWireSerializingTestCase` has two new methods: `getCopyFunction()` and `getMutateFunction` which are used when calling `EqualsHashcodeTestUtils`

* Adds TODO

* Makes equivalent changes to AbstractStreamableTestCase

* corrects javadoc error
2017-07-31 14:46:58 +01:00
Martijn van Groningen 0b776a1de0
Move more token filters to analysis-common module
The following token filters were moved: delimited_payload_filter, keep, keep_types, classic, apostrophe, decimal_digit, fingerprint, min_hash and scandinavian_folding.

Relates to #23658
2017-07-31 15:15:04 +02:00
Martijn van Groningen 7c3735bdc4
percolator: Store the QueryBuilder's Writable representation instead of its XContent representation.
The Writeable representation is less heavy to parse and that will benefit percolate performance and throughput.

The query builder's binary format now has the same bwc guarantees as the xcontent format.

Added a qa test that verifies that percolator queries written in older versions are still readable by the current version.
2017-07-28 12:24:10 +02:00
Yannick Welsch 1a01514081 Move tribe to a module (#25778)
This commit moves tribe to a module, stripping core from the tribe functionality.
2017-07-28 11:23:50 +02:00
Jason Tedor 1492ccd7ae Fix environment-aware command tests
This commit fixes tests for environment-aware commands. A previous
change added a check that es.path.conf is not null. The problem is that
this system property is not being set in tests so this check trips every
single time. To fix this, we move the check into a method that can be
overridden, and then override this method in relevant places in tests to
avoid having to set the property in tests. We also add a test that this
check works as expected.
2017-07-28 14:37:04 +09:00
Simon Willnauer b72c71083c Cleanup IndexFieldData visibility (#25900)
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or look up field data in different ways. Yet, we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing etc.

This change also removes the `recycler` option from `ClearIndicesCacheRequest`; this option is a no-op and should have been removed long ago.
2017-07-26 20:03:42 +02:00
Tim Brooks 6d02b45f10 Support client-only mode for NioTransport (#25839)
Currently, NioTransport does not start normal socket selectors and the
client when the network server setting is set to false. This commit
makes it so that the client will be started even when the network server
is not enabled.

Additionally, it randomly introduces the NioTransport as an option for
the MockTransportClient throughout tests.
2017-07-26 10:27:15 -05:00
Luca Cavanna d8203f19fd Remove XContentHelper#toString(ToXContent) in favour of Strings#toString(ToXContent) (#25866)
These two methods do the same thing. The subtle difference between the two is that the former prints out pretty printed content by default while the latter doesn't. There are way more usages of the latter throughout the codebase hence I kept that variant although I do think that it would be much better to print out prettified content by default from a `toString`. That breaks quite some tests so I didn't make that change yet.

Also XContentHelper#toString was outdated as it didn't check the ToXContent#isFragment method to decide whether a new anonymous object has to be created or not. It would simply fail with any ToXContentObject.
2017-07-26 16:00:59 +02:00
Simon Willnauer 634ce90dc0 Respect cluster alias in `_index` aggs and queries (#25885)
Today when we aggregate on the `_index` field the cross cluster search
alias is not taken into account. Neither is it respected when we search
on the field. This change adds support for cluster alias when the cluster
alias is present on the `_index` field.

Closes #25606
2017-07-26 09:16:52 +02:00
Tim Brooks 2d22bad53f Simplify selector close method (#25838)
Currently we have an option to interrupt the selector thread on close.
This option is not needed as we do not call this method and we should
not be blocking on the network thread. Instead we only ever need to call
wakeup() on the raw selector.
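A minimal sketch of the non-blocking close, using plain `java.nio`; everything except `Selector#wakeup` is illustrative:

```java
import java.nio.channels.Selector;
import java.util.concurrent.atomic.AtomicBoolean;

final class SelectorCloserSketch {
    private final Selector rawSelector;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    SelectorCloserSketch(Selector rawSelector) {
        this.rawSelector = rawSelector;
    }

    // Instead of interrupting the selection thread, mark the selector as closed and
    // wake it up; the event loop observes the flag on its next iteration and shuts down.
    void close() {
        if (closed.compareAndSet(false, true)) {
            rawSelector.wakeup();
        }
    }

    boolean isClosed() {
        return closed.get();
    }
}
```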
2017-07-25 10:52:15 -05:00
Michael Basnight e816ef89a2 Shade external dependencies in the rest client jar
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.

Closes #25208
2017-07-24 12:55:43 -05:00
Tim Brooks 0a4b38b60c Close raw channel when bind / connect fails (#25840)
Currently we are failing to close socket channels when the initial bind
or connect operation fails. This leaves the file descriptor hanging
around. This closes the channel when an exception occurs during bind or
connect.
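A self-contained sketch of the pattern, using plain `java.nio`; the real change applies the same idea inside the nio transport's channel factory:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

final class ChannelOpenSketch {
    // If configuring or connecting the freshly opened channel throws, close it so
    // the underlying file descriptor is not leaked.
    static SocketChannel openAndConnect(InetSocketAddress address) throws IOException {
        SocketChannel channel = SocketChannel.open();
        try {
            channel.configureBlocking(false);
            channel.connect(address);
            return channel;
        } catch (IOException | RuntimeException e) {
            channel.close();
            throw e;
        }
    }
}
```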
2017-07-22 13:55:33 -05:00
Tim Brooks c7a7c69b2b Simplify NioChannel creation and closing process (#25504)
Currently an NioChannel is created and it is UNREGISTERED. At some point
it is registered with a selector. From that point on, the channel can
only be closed by the selector. The fact that a channel might not be
associated with a selector has significant implications for concurrency
and the channel shutdown process. The only thing that is simplified by
allowing channels to be in a state independent of a selector is some
testing scenarios.

This PR modifies channels so that they are given a selector at creation
time and are always associated with that selector. Only that selector
can close that channel. This simplifies the channel lifecycle and
closing intricacies.
2017-07-21 11:55:23 -05:00
Yannick Welsch a2624dfcef Move primary term from ReplicationRequest to ConcreteShardRequest (#25822)
Removes the primary term from the replication request and pushes it into the transport envelope. This makes it possible to remove the term from the ReplicationOperation universe. The primary term that is to be used for a replication operation is now determined in the reroute phase when the node decides to execute a primary action (and validated once the primary action gets to execute). This makes it possible to validate that the primary action was sent to the correct primary shard instance that it was meant to be sent to (currently we only validate primary actions using the allocation id, which can be reused for failed and reallocated primaries).
2017-07-21 15:57:42 +02:00
Boaz Leskes 7488877d1a Validate a joining node's version with version of existing cluster nodes (#25808)
When a node tries to join a cluster, it goes through a validation step to make sure the node is compatible with the cluster. Currently we validate that the node can read the cluster state and that it is compatible with the indexes of the cluster. This PR adds validation that the joining node's version is compatible with the versions of existing nodes. Concretely we check that (see the sketch at the end of this message):

1) The node's min compatible version is higher or equal to any node in the cluster (this prevents a too-new node from joining)
2) The node's version is higher or equal to the min compat version of all cluster nodes (this prevents a too old join where, for example, the master is on 5.6, there's another 6.0 node in the cluster and a 5.4 node tries to join).
3) The node's major version is at least as high as the lowest node in the cluster. This is important as we use the minimum version in the cluster to stop executing bwc code for operations that require multiple nodes. If the nodes are already operating in "new cluster mode", we should prevent nodes from the previous major from joining (even if they are wire level compatible). This does mean that if you have a very unlucky partition during the upgrade which partitions all old nodes which are also a minority / data nodes only, they may not be able to re-join the cluster. We feel this edge case risk is well worth the simplification it brings to BWC layers only going one way. This restriction only holds if the cluster state has been recovered (i.e., the cluster has properly formed).

 Also, the node join validation can now selectively fail specific nodes (previously the entire batch was failed). This is an important preparation for a follow up PR where we plan to have a rejected joining node die with dignity.
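A rough, hedged sketch of the kind of wire-compatibility checks described above, using `org.elasticsearch.Version`; the real validation runs on the master during node join and also covers the major-version rule:

```java
import org.elasticsearch.Version;

final class JoinVersionValidationSketch {
    static void ensureVersionCompatibility(Version joining, Version minInCluster, Version maxInCluster) {
        // A too-new node must not join: every existing node has to be wire compatible with it.
        if (minInCluster.before(joining.minimumCompatibilityVersion())) {
            throw new IllegalStateException("node version [" + joining + "] is not supported; the cluster "
                + "contains nodes with version [" + minInCluster + "] below its minimum compatible version");
        }
        // A too-old node must not join: it has to be wire compatible with the newest existing node.
        if (joining.before(maxInCluster.minimumCompatibilityVersion())) {
            throw new IllegalStateException("node version [" + joining + "] is not supported; it is below "
                + "the minimum compatible version of [" + maxInCluster + "]");
        }
        // The major-version check (rule 3 above) would additionally compare joining.major against the
        // lowest major version in the cluster once the cluster state has been recovered.
    }
}
```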
2017-07-20 20:11:29 +02:00
Simon Willnauer 5e629cfba0 Ensure query resources are fetched asynchronously during rewrite (#25791)
The `QueryRewriteContext` used to provide a client object that can
be used to fetch geo-shapes, terms or documents for percolation. Unfortunately
all client calls used to be blocking, which can have a significant impact on the
rewrite phase since each call occupies an entire search thread until the resource is
received. In the case that the index the resource is fetched from isn't on the local
node this can have significant impact on query throughput.

Note: this doesn't fix MLT since it fetches stuff in doQuery, which is a different beast. Yet, it is a huge step in the right direction.
2017-07-20 15:37:50 +02:00
Boaz Leskes 9989ac69a4 Revert "Validate a joining node's version with version of existing cluster nodes (#25770)"
This reverts commit 1e1f8e6376.
2017-07-19 17:34:53 +02:00
Simon Willnauer 4d78935df7 Introduce a new Rewriteable interface to streamline rewriting (#25788)
Today we have duplicated code that is quite complicated to iterate
over rewriteables (`QueryBuilders` mainly). This change introduces a
`Rewriteable` interface that allows sharing the code that does the rewriting as
well as encapsulation and composition of queries.
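A hedged sketch of the shape of such an interface and the shared fixed-point rewrite loop; the real interface takes a `QueryRewriteContext`, here replaced by a placeholder `Context` to keep the sketch self-contained:

```java
import java.io.IOException;

interface RewriteableSketch<T extends RewriteableSketch<T>> {

    /** Rewrite this instance into a (possibly) simpler one, or return {@code this} if nothing changed. */
    T rewrite(Context context) throws IOException;

    /** Shared loop that callers previously duplicated: rewrite until a fixed point is reached. */
    static <T extends RewriteableSketch<T>> T rewrite(T original, Context context) throws IOException {
        T previous = original;
        T current = original.rewrite(context);
        int rounds = 0;
        while (current != previous) {
            if (++rounds > 16) { // guard against rewrite cycles
                throw new IllegalStateException("too many rewrite rounds");
            }
            previous = current;
            current = current.rewrite(context);
        }
        return current;
    }

    interface Context {}
}
```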
2017-07-19 15:06:49 +02:00
Adrien Grand 55ad318541 Reduce the overhead of timeouts and low-level search cancellation. (#25776)
Setting a timeout or enforcing low-level search cancellation used to make us
wrap the collector and check either the current time or whether the search
task was cancelled for every collected document. This can be significant
overhead on cheap queries that match many documents.

This commit changes the approach to wrap the bulk scorer rather than the
collector and exponentially increase the interval between two consecutive
checks in order to reduce the overhead of those checks.
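A hedged sketch of the wrapping scorer, assuming Lucene's `BulkScorer` API; the initial and maximum check intervals are illustrative:

```java
import java.io.IOException;
import java.util.function.BooleanSupplier;

import org.apache.lucene.search.BulkScorer;
import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.util.Bits;

final class CancellableBulkScorerSketch extends BulkScorer {
    private static final int INITIAL_INTERVAL = 1 << 12;
    private static final int MAX_INTERVAL = 1 << 22;

    private final BulkScorer in;
    private final BooleanSupplier shouldAbort; // timeout or task-cancellation check

    CancellableBulkScorerSketch(BulkScorer in, BooleanSupplier shouldAbort) {
        this.in = in;
        this.shouldAbort = shouldAbort;
    }

    @Override
    public int score(LeafCollector collector, Bits acceptDocs, int min, int max) throws IOException {
        int interval = INITIAL_INTERVAL;
        while (min < max) {
            if (shouldAbort.getAsBoolean()) {
                throw new RuntimeException("search cancelled or timed out");
            }
            // Score a chunk, then double the chunk size so cheap queries pay almost nothing.
            int chunkEnd = (int) Math.min((long) min + interval, max);
            min = in.score(collector, acceptDocs, min, chunkEnd);
            interval = Math.min(interval << 1, MAX_INTERVAL);
        }
        return min;
    }

    @Override
    public long cost() {
        return in.cost();
    }
}
```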
2017-07-19 14:15:53 +02:00
Boaz Leskes 1e1f8e6376 Validate a joining node's version with version of existing cluster nodes (#25770)
When a node tries to join a cluster, it goes through a validation step to make sure the node is compatible with the cluster. Currently we validate that the node can read the cluster state and that it is compatible with the indexes of the cluster. This PR adds validation that the joining node's version is compatible with the versions of existing nodes. Concretely we check that:

1) The node's min compatible version is higher or equal to any node in the cluster (this prevents a too-new node from joining)
2) The node's version is higher or equal to the min compat version of all cluster nodes (this prevents a too old join where, for example, the master is on 5.6, there's another 6.0 node in the cluster and a 5.4 node tries to join).
3) The node's major version is at least as high as the lowest node in the cluster. This is important as we use the minimum version in the cluster to stop executing bwc code for operations that require multiple nodes. If the nodes are already operating in "new cluster mode", we should prevent nodes from the previous major from joining (even if they are wire level compatible). This does mean that if you have a very unlucky partition during the upgrade which partitions all old nodes which are also a minority / data nodes only, they may not be able to re-join the cluster. We feel this edge case risk is well worth the simplification it brings to BWC layers only going one way.

 Also, the node join validation can now selectively fail specific nodes (previously the entire batch was failed). This is an important preparation for a follow up PR where we plan to have a rejected joining node die with dignity.
2017-07-19 12:57:29 +02:00
Lee Hinman 610ba7e427 Register data node stats from info carried back in search responses (#25430)
* Register data node stats from info carried back in search responses

This is part of #24915, where we now calculate the EWMA of service time for
tasks in the search threadpool, and send that as well as the current queue size
back to the coordinating node. The coordinating node now tracks this information
for each node in the cluster.

This information will be used in the future for determining the best replica to which a
search request should be routed (a rough EWMA sketch appears at the end of this message). This change has no user-visible difference.

* Move response time timing into ResponseListenerWrapper

* Move ResponseListenerWrapper to ActionListener instead of SearchActionListener

Also removes the logger

* Move `requestIndex` back to private

* De-guice-ify ResponseCollectorService \o/

* Undo all changes to SearchQueryThenFetchAsyncAction

* Remove unneeded response collector from TransportSearchAction

* Undo all changes to SearchDfsQueryThenFetchAsyncAction

* Completely rewrite the inside of ResponseCollectorService's record keeping

* Documentation and cleanups for ResponseCollectorService

* Add unit test for collection of queue size and service time

* Fix Guice construction error

* Add basic unit tests for ResponseCollectorService

* Fix version constant for the master merge

* Fix test compilation after master merge

* Add a test for node removal on cluster changed event

* Remove integration test as there are now unit tests

* Rename ResponseListenerWrapper -> SearchExecutionStatsCollector

* Fix line-length

* Make classes private and final where appropriate

* Pass nodeId into SearchExecutionStatsCollector and use only ActionListener

* Get nodeId from connection so searchShardTarget can be private

* Remove threadpool from SearchContext, get it from IndexShard instead

* Add missing import

* Use BiFunction for responseWrapper rather than passing in collector service
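As referenced above, a minimal sketch of the exponentially weighted moving average bookkeeping; the smoothing factor and class shape are illustrative, not the actual ResponseCollectorService internals:

```java
final class ServiceTimeEwmaSketch {
    private final double alpha; // smoothing factor in (0, 1]; larger values weight recent samples more
    private double average;
    private boolean initialized = false;

    ServiceTimeEwmaSketch(double alpha) {
        this.alpha = alpha;
    }

    synchronized void addServiceTime(double serviceTimeNanos) {
        if (initialized == false) {
            average = serviceTimeNanos; // seed with the first observation
            initialized = true;
        } else {
            average = alpha * serviceTimeNanos + (1 - alpha) * average;
        }
    }

    synchronized double getAverage() {
        return average;
    }
}
```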
2017-07-17 11:04:51 -06:00
Adrien Grand 264088f1c4 Deprecate the `_default_` mapping. (#25652)
Now that indices cannot have types anymore, this feature does not buy anything
anymore.

Closes #25500
2017-07-17 15:37:59 +02:00
Martijn van Groningen 8003171a0c
Move more token filters to analysis-common module
The following token filters were moved: arabic_normalization, german_normalization, hindi_normalization, indic_normalization, persian_normalization, scandinavian_normalization, serbian_normalization, sorani_normalization and cjk_width.

Relates to #23658
2017-07-17 08:29:44 +02:00
Boaz Leskes a6bea1bf97 testMockFailToSendNoConnectRule should wait for connection close to bubble up and disconnect the node
#25521 changed channel closing to be handled async on anything but transport stop. This means it may take a while between
calling `connection.close()` and the node being removed from the `connectedNodes` list (but the connection is immediately unusable).

Fixes #25686
2017-07-15 09:28:17 +02:00
Yannick Welsch 8f0b357651 Let primary own its replication group (#25692)
Currently replication and recovery are both coordinated through the latest cluster state available on the ClusterService as well as through the GlobalCheckpointTracker (to have consistent local/global checkpoint information), making it difficult to understand the relation between recovery and replication, and requiring some tricky checks in the recovery code to coordinate between the two. This commit makes the primary the single owner of its replication group, which simplifies the replication model and allows us to clean up corner cases we have in our recovery code. It also reduces the dependencies in the code, so that neither RecoverySourceXXX nor ReplicationOperation need access to the latest state on ClusterService anymore. Finally, it gives us the property that in-sync shard copies won't receive global checkpoint updates which are above their local checkpoint (relates #25485).
2017-07-14 13:52:53 +02:00
Luca Cavanna ec66d655b5 Rename client artifacts (#25693)
It was brought up that our current client artifacts have generic names like 'rest' that may cause conflicts with other artifacts.

This commit renames:

- rest -> elasticsearch-rest-client
- sniffer -> elasticsearch-rest-client-sniffer
- rest-high-level -> elasticsearch-rest-high-level-client

A couple of small changes are also preparing the high level client for its first release.

Closes #20248
2017-07-13 09:44:25 +02:00
Simon Willnauer b7bc790428 Use a non default port range in MockTransportService
We already use a per JVM port range in MockTransportService. Yet,
it's possible that if we are executing in the JVM with ordinal 0 that
other clusters reuse ports from the mock transport service and some tests
try to simulate disconnects etc. By using a non-default port range (starting at 10300)
we prevent internal test clusters from reusing any of the mock implementation's ports

Relates to #25301
2017-07-12 22:29:21 +02:00
Simon Willnauer e81804cfa4 Add a shard filter search phase to pre-filter shards based on query rewriting (#25658)
Today if we search across a large amount of shards we hit every shard. Yet, it's quite
common to search across an index pattern for time based indices but filtering will exclude
all results outside a certain time range ie. `now-3d`. While the search can potentially hit
hundreds of shards, the majority of the shards might yield 0 results since there is no document
within this date range. Kibana for instance does this regularly but used `_field_stats`
to optimize the indexes it needs to query. Now with the deprecation of `_field_stats` and its upcoming removal, a single dashboard in kibana can potentially turn into searches hitting hundreds or thousands of shards, and that can easily cause search rejections even though most of the requests are very likely super cheap and only need a query rewriting to early terminate with 0 results.

This change adds a pre-filter phase for searches that can, if the number of shards is higher than the `pre_filter_shard_size` threshold (defaults to 128 shards), fan out to the shards
and check if the query can potentially match any documents at all. While false positives are possible, a negative response means that no matches are possible. These requests are not subject to rejection and can greatly reduce the number of shards a request needs to hit. The approach here is preferable to the kibana approach with field stats since it correctly handles aliases and uses the correct threadpools to execute these requests. Further it's completely transparent to the user and improves scalability of elasticsearch in general on large clusters.
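A hedged sketch of tuning the threshold on a request, assuming the `setPreFilterShardSize` setter this change introduces on `SearchRequest`:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

final class PreFilterExampleSketch {
    static SearchRequest recentLogsSearch() {
        SearchRequest request = new SearchRequest("logs-*");
        request.source(new SearchSourceBuilder()
            .query(QueryBuilders.rangeQuery("@timestamp").gte("now-3d")));
        // The can-match pre-filter phase only kicks in when the request would hit more
        // shards than this threshold; 128 is the default mentioned above.
        request.setPreFilterShardSize(128);
        return request;
    }
}
```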
2017-07-12 22:19:20 +02:00
Tim Brooks a3ade99fcf Fix BytesReferenceStreamInput#skip with offset (#25634)
There is a bug when a call to `BytesReferenceStreamInput` skip is made
on a `BytesReference` that has an initial offset. The offset for the
current slice is added to the current index and then subtracted from the
length. This introduces the possibility of a negative number of bytes to
skip. This happens inside a loop, which leads to an infinite loop.

This commit correctly subtracts the current slice index from the
slice.length. Additionally, the `BytesArrayTests` are modified to test
instances that include an offset.
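A self-contained toy of the arithmetic; the field names are placeholders, not the actual `BytesReferenceStreamInput` internals:

```java
final class SkipArithmeticDemo {
    // Bytes remaining in a slice of length sliceLength when we are indexWithinSlice bytes into it.
    static int remainingInSlice(int sliceOffset, int sliceLength, int indexWithinSlice, boolean buggy) {
        // The buggy variant adds the slice's offset to the index before subtracting from the
        // length, which can go negative and make the enclosing skip loop spin forever.
        int index = buggy ? sliceOffset + indexWithinSlice : indexWithinSlice;
        return sliceLength - index;
    }

    public static void main(String[] args) {
        // Slice starts at offset 5 of the backing array, has length 10, and we are 7 bytes in.
        System.out.println(remainingInSlice(5, 10, 7, true));  // -2: negative, the bug
        System.out.println(remainingInSlice(5, 10, 7, false)); //  3: the fix
    }
}
```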
2017-07-11 09:54:29 -05:00
Simon Willnauer 98c91a3bd0 Limit the number of concurrent shard requests per search request (#25632)
This is a protection mechanism to prevent a single search request from
hitting a large number of shards in the cluster concurrently. If a search is
executed against all indices in the cluster this can easily overload the cluster
causing rejections etc. which is not necessarily desirable. Instead this PR adds
a per request limit of `max_concurrent_shard_requests` that throttles the number of
concurrent initial phase requests to `256` by default. This limit can be increased per request
and protects single search requests from overloading the cluster. Subsequent PRs can introduce
additional improvements, e.g. limiting this at the `_msearch` level, making the default a factor of
the number of nodes, or sorting shard iterators such that we gain the best concurrency across nodes.
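A hedged sketch of adjusting the limit per request, assuming the `setMaxConcurrentShardRequests` setter described above:

```java
import org.elasticsearch.action.search.SearchRequest;

final class ShardConcurrencyExampleSketch {
    static SearchRequest wideSearch() {
        SearchRequest request = new SearchRequest("*");
        // 256 concurrent initial-phase shard requests is the default; raise it only if the
        // cluster can absorb the extra fan-out.
        request.setMaxConcurrentShardRequests(512);
        return request;
    }
}
```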
2017-07-11 16:23:10 +02:00
Simon Willnauer ec1afe30ea Ensure remote cluster alias is preserved in inner hits aggs (#25627)
We lost the cluster alias due to some special casing in inner hits
and due to the fact that we didn't pass on the alias to the shard request.
This change ensures that we have the cluster alias present on the shard to
ensure all SearchShardTarget reads preserve the alias.

Relates to #25606
2017-07-11 11:34:06 +02:00
Tim Brooks b22bbf94da Avoid blocking on channel close on network thread (#25521)
Currently when we close a channel in Netty4Utils.closeChannels we
block until the closing is complete. This introduces the possibility
that a network selector thread will block while waiting until a
separate network selector thread closes a channel.

For instance: T1 closes channel 1 (which is assigned to a T1 selector).
Channel 1's close listener executes the closing of the node. That
means that T1 now tries to close channel 2. However, channel 2 is
assigned to a selector that is running on T2. T1 now must wait until T2
closes that channel at some point in the future.

This commit addresses this by adding a boolean to closeChannels
indicating if we should block on close. We only set this boolean to true
if we are closing down the server channels at shutdown. This call is
never made from a network thread. When we call the closeChannels method
with that boolean set to false, we do not block on close.
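A hedged sketch of the resulting method shape, assuming Netty's `Channel`/`ChannelFuture` API; this mirrors the description rather than the exact Netty4Utils code:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;

final class CloseChannelsSketch {
    static void closeChannels(List<Channel> channels, boolean blocking) throws IOException {
        List<ChannelFuture> futures = new ArrayList<>();
        for (Channel channel : channels) {
            futures.add(channel.close()); // always initiate the close
        }
        if (blocking == false) {
            return; // on a network thread: never wait for another selector to finish the close
        }
        IOException closeFailure = null;
        for (ChannelFuture future : futures) {
            future.awaitUninterruptibly(); // only safe at shutdown, off the network threads
            if (future.isSuccess() == false && future.cause() != null) {
                if (closeFailure == null) {
                    closeFailure = new IOException("failed to close channels");
                }
                closeFailure.addSuppressed(future.cause());
            }
        }
        if (closeFailure != null) {
            throw closeFailure;
        }
    }
}
```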
2017-07-10 10:50:51 -05:00
Colin Goodheart-Smithe 3a5a54e83e Collapses package structure for some bucket aggs (#25579)
This change collapses some of the packages for the bucket aggregations into their parent packages. This was done for the following aggregations:
* The variants of the range aggregation (geo_distance, date and ip) were moved into the `o.e.s.a.bucket.range` package
* The `o.e.s.a.bucket.terms.support` package was removed and the classes were moved to `o.e.s.a.bucket.terms`
* The filter aggregation was moved to `o.e.s.a.bucket.filter`

Since this PR is already relatively large with only the above changes subsequent PRs will do similar operations on relevant metric and pipeline aggregations

Relates to #22868
2017-07-10 15:08:15 +01:00
Boaz Leskes 09378f48e4 Add a scheduled translog retention check (#25622)
We currently check whether translog files can be trimmed whenever we create a new translog generation or close a view. However #25294 added a long translog retention period (12h, max 512MB by default), which means translog files should potentially be cleaned up long after there isn't any indexing activity to trigger flushes/the creation of new translog files. We therefore need a scheduled background check to clean up those files once they are no longer needed.

Relates to #10708
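A hedged sketch of the index-level retention settings the background check honours; the values simply restate the defaults mentioned above:

```java
import org.elasticsearch.common.settings.Settings;

final class TranslogRetentionExampleSketch {
    static Settings retentionDefaults() {
        return Settings.builder()
            // keep translog files for up to 12 hours ...
            .put("index.translog.retention.age", "12h")
            // ... or until they exceed 512MB, whichever limit is hit first
            .put("index.translog.retention.size", "512mb")
            .build();
    }
}
```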
2017-07-10 10:28:39 +02:00
Jason Tedor c084542731 Bump version to 6.0.0-beta1
This commit does two things:
 - bumps the version from 6.0.0-alpha3 to 6.0.0-beta1
 - renames the 6.0.0-alpha3 version constant to 6.0.0-beta1

Relates #25621
2017-07-09 18:12:50 -04:00
Jason Tedor bc22c1c286 Add disk threshold settings validation
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.

Relates #25600
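A hedged sketch of the invariant being validated: the three watermarks must stay ordered (low <= high <= flood stage), so updating them together keeps the cross-settings validation satisfied:

```java
import org.elasticsearch.common.settings.Settings;

final class DiskWatermarkExampleSketch {
    static Settings orderedWatermarks() {
        return Settings.builder()
            .put("cluster.routing.allocation.disk.watermark.low", "85%")
            .put("cluster.routing.allocation.disk.watermark.high", "90%")
            .put("cluster.routing.allocation.disk.watermark.flood_stage", "95%")
            .build();
    }
}
```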
2017-07-07 19:54:36 -04:00
Nik Everett 794257c421 Drop current from the list of released versions (#25187)
It hasn't been released....
2017-07-07 15:59:57 -04:00
Yannick Welsch baa87db5d1 Harden global checkpoint tracker
This commit refactors the global checkpoint tracker to make it more
resilient. The main idea is to make it more explicit what state is
actually captured and how that state is updated through
replication/cluster state updates etc. It also fixes the issue where the
local checkpoint information is not being updated when a shard becomes
primary. The primary relocation handoff becomes very simple too, we can
just verbatim copy over the internal state.

Relates #25468
2017-07-07 14:04:28 -04:00
Lee Hinman 8aa0a5c111 Improve REST error handling when endpoint does not support HTTP verb, add OPTIONS support (#24437)
* Improved REST endpoint exception handling, see #15335

Also improved OPTIONS http method handling to better conform with the
http spec.

* Tidied up formatting and comments

See #15335

* Tests for #15335

* Cleaned up comments, added section number

* Swapped out tab indents for space indents

* Test class now extends ESSingleNodeTestCase

* Capture RestResponse so it can be examined in test cases

Simple addition to surface the RestResponse object so we can run tests
against it (see issue #15335).

* Refactored class name, included feedback

See #15335.

* Unit test for REST error handling enhancements

Randomizing unit test for enhanced REST response error handling. See
issue #15335 for more details.

* Cleaned up formatting

* New constructor to set HTTP method

Constructor added to support RestController test cases.

* Refactored FakeRestRequest, streamlined test case.

* Cleaned up conflicts

* Tests for #15335

* Added functionality to ignore or include path wildcards

See #15335

* Further enhancements to request handling

Refactored executeHandler to prioritize explicit path matches. See
#15335 for more information.

* Cosmetic fixes

* Refactored method handlers

* Removed redundant import

* Updated integration tests

* Refactoring to address issue #17853

* Cleaned up test assertions

* Fixed edge case if OPTIONS method randomly selected as invalid method

In this test, an OPTIONS method request is valid, and should not return
a 405 error.

* Remove redundant static modifier

* Hook the multiple PathTrie attempts into RestHandler.dispatchRequest

* Add missing space

* Correctly retrieve new handler for each Trie strategy

* Only copy headers to threadcontext once

* Fix test after REST header copying moved higher up

* Restore original params when trying the next trie candidate

* Remove OPTIONS for invalidHttpMethodArray so a 405 is guaranteed in tests

* Re-add the fix I already added and got removed during merge :-/

* Add missing GET method to test

* Add documentation to migration guide about breaking 404 -> 405 changes

* Explain boolean response, pull into local var

* fixup! Explain boolean response, pull into local var

* Encapsulate multiple HTTP methods into PathTrie<MethodHandlers>

* Add PathTrie.retrieveAll where all matching modes can be retrieved

Then TrieMatchingMode can be package private and not leak into RestController

* Include body of error with 405 responses to give hint about valid methods

* Fix missing usageService handler addition

I accidentally removed this :X

* Initialize PathTrieIterator modes with Arrays.asList

* Use "== false" instead of !

* Missing paren :-/
2017-07-07 09:01:23 -06:00
Adrien Grand 40bb1663ee Index ids in binary form. (#25352)
Indexing ids in binary form should help with indexing speed since we would
have to compare fewer bytes upon sorting, should help with memory usage of
the live version map since keys will be shorter, and might help with disk
usage depending on how efficient the terms dictionary is at compressing
terms.

Since we can only expect base64 ids in the auto-generated case, this PR tries
to use an encoding that makes the binary id equal to the base64-decoded id in
the majority of cases (253 out of 256). It also specializes numeric ids, since
this seems to be common when content that is stored in Elasticsearch comes
from another database that uses e.g. auto-increment ids.

Another option could be to require base64 ids all the time. It would make things
simpler but I'm not sure users would welcome this requirement.

This PR should bring some benefits, but I expect it to be mostly useful when
coupled with something like #24615.

Closes #18154
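A self-contained toy of the idea: when an id is valid URL-safe base64 (as auto-generated ids are), index the decoded bytes, which are shorter than the UTF-8 form; the fallback and exact encoding rules of the PR are more nuanced than this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

final class BinaryIdDemo {
    static byte[] encodeId(String id) {
        try {
            // 4 base64 characters encode 3 bytes, so the decoded id is ~25% smaller and
            // needs fewer byte comparisons when sorting in the terms dictionary.
            return Base64.getUrlDecoder().decode(id);
        } catch (IllegalArgumentException notBase64) {
            // Ids that are not valid base64 fall back to their UTF-8 bytes.
            return id.getBytes(StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        System.out.println(encodeId("vp5wUGEBu3cJ0HMkpVgn").length); // 15 bytes instead of 20 characters
        System.out.println(encodeId("user:42").length);              // 7 UTF-8 bytes (':' is not base64)
    }
}
```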
2017-07-07 14:22:47 +02:00
Martijn van Groningen 6db708ef75
Move more token filters to analysis-common module
The following token filters were moved: common grams, limit token, pattern capture and pattern replace.

Relates to #23658
2017-07-07 10:02:52 +02:00
Simon Willnauer 1f67d079b1 Validate `transport.profiles.*` settings (#25508)
Transport profiles unfortunately have never been validated. Yet, it's very
easy to make a mistake when configuring profiles which will most likely stay
undetected since we don't validate the settings but allow almost everything
based on the wildcard in `transport.profiles.*`. This change removes the
settings-subset based parsing of profiles and instead uses concrete affix settings
for the profiles, which makes it easier to fall back to higher-level settings since
the fallback settings are present when the profile setting is parsed. Previously, it was
unclear in the code which setting was used, i.e. whether the profile settings (with prefixes
removed) or the global node setting. There is no distinction anymore since we don't pull
prefix-based settings.
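A hedged sketch of what one concrete affix setting per profile key looks like, assuming the `Setting.affixKeySetting` API; the exact setting objects in the change may differ:

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

final class ProfileSettingsSketch {
    // Matches transport.profiles.<name>.port for any profile name, so every profile's
    // port is parsed and validated against the same definition.
    static final Setting.AffixSetting<String> PROFILE_PORT = Setting.affixKeySetting(
        "transport.profiles.", "port",
        key -> Setting.simpleString(key, Property.NodeScope));
}
```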
2017-07-07 09:40:59 +02:00
Simon Willnauer 38a1df7da1 Use a port range per JVM in MockTransportService (#25565)
Some tests use MockTransportService to do network based testing.
Yet, we run tests in multiple JVMs, which means
concurrent tests could claim a port that another JVM just released,
and if that test tries to simulate a disconnect it might be smart
enough to re-connect depending on what is tested. To reduce the risk,
since this is very hard to debug, we use a different default
port range per JVM unless the incoming settings override it.

Closes #25301
2017-07-06 09:14:52 +02:00
Simon Willnauer 6e5cc424a8 Switch indices read-only if a node runs out of disk space (#25541)
Today when we run out of disk all kinds of crazy things can happen,
and nodes become hard to maintain once out of disk is hit.
While we try to move shards away if we hit watermarks this might not
be possible in many situations. Based on the discussion in #24299
this change monitors disk utilization and adds a flood-stage watermark
that causes all indices that are allocated on a node hitting the flood-stage
mark to be switched read-only (with the option to be deleted). This allows users to react on the low disk
situation while subsequent write requests will be rejected. Users can switch
individual indices read-write once the situation is sorted out. There is no
automatic read-write switch once the node has enough space. This requires
user interaction.

The flood-stage watermark is set to `95%` utilization by default.

Closes #24299
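A hedged sketch of the manual recovery step, since there is no automatic read-write switch; the block setting shown is the one the flood-stage watermark applies:

```java
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest;
import org.elasticsearch.common.settings.Settings;

final class FloodStageRecoverySketch {
    // After freeing disk space, the user clears the read-only/allow-delete block;
    // Elasticsearch does not lift it automatically.
    static UpdateSettingsRequest unblock(String index) {
        return new UpdateSettingsRequest(index)
            .settings(Settings.builder()
                .putNull("index.blocks.read_only_allow_delete")
                .build());
    }
}
```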
2017-07-05 22:18:23 +02:00
Christoph Büscher 3185eaece8 QueryBuilders should implement ToXContentObject (#25530)
All query builders are written as self-contained xContent objects, so we should mark
them accordingly using ToXContentObject. This also makes it possible to use
things like XContentHelper#toXContent to render query builders in tests.
2017-07-05 09:50:10 +02:00
Christoph Büscher f576c987ce Remove QueryParseContext (#25486)
QueryParseContext is currently only used as a wrapper for an XContentParser, so
this change removes it entirely and changes the appropriate APIs that use it so
far to only accept a parser instead.
2017-07-03 17:30:40 +02:00
Simon Willnauer 5a7c8bb04e Cleanup network / transport related settings (#25489)
This commit makes the use of the global network settings explicit instead
of implicit within NetworkService. It cleans up several places where we fall
back to the global settings when we should have used tcp or http ones.

In addition this change also removes unnecessary settings classes
2017-07-02 10:16:50 +02:00
James Baiera 74f4a14d82 Upgrading HDFS Repository Plugin to use HDFS 2.8.1 Client (#25497)
Hadoop 2.7.x libraries fail when running on JDK9 due to the version string changing to a single 
character. On Hadoop 2.8, this is no longer a problem, and it is unclear whether the fix will be
backported to the 2.7 branch. This commit upgrades our dependency of Hadoop for the HDFS 
Repository to 2.8.1.
2017-06-30 17:57:56 -04:00
Tim Brooks cac2eec7d2 Add NioTransport threads to thread name checks (#25477)
We have various assertions that check we never block on transport
threads. This commit adds the thread names for the NioTransport to
these assertions.

With this change I had to fix two places where we were calling blocking
methods from the transport threads.
2017-06-29 15:16:07 -05:00
Tim Brooks dd5d165da1 Prevent channel enqueue after selector close (#25478)
This commit adds additional protection to `ESSelector` and its
implementations to ensure that channels are not enqueued after the
selector is closed.

After a channel has been added to the queue, we check that the selector
is open. If it is not, then we remove the channel from the queue. If the
channel is removed successfully, we throw an `IllegalStateException`.
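A self-contained sketch of the check-after-enqueue pattern described above; the queue and flag are illustrative stand-ins for the selector's internals:

```java
import java.nio.channels.SocketChannel;
import java.util.concurrent.ConcurrentLinkedQueue;

final class SelectorQueueSketch {
    private final ConcurrentLinkedQueue<SocketChannel> newChannels = new ConcurrentLinkedQueue<>();
    private volatile boolean open = true;

    void scheduleForRegistration(SocketChannel channel) {
        newChannels.offer(channel);
        // Re-check after adding: if the selector was closed concurrently, undo the enqueue
        // and signal the caller rather than leaving the channel stranded in the queue.
        if (open == false && newChannels.remove(channel)) {
            throw new IllegalStateException("selector is already closed");
        }
    }

    void close() {
        open = false;
        // ... the selector thread drains newChannels and closes anything left unregistered
    }
}
```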
2017-06-29 14:02:50 -05:00
Tim Brooks 6c58f0c4e6 Handle ping correctly in NioTransport (#25462)
Our current TCPTransport logic assumes that we do not pass pings to
the TCPTransport level.

This commit fixes an issue where NioTransport was passing pings to
TCPTransport and leading to exceptions.
2017-06-29 11:03:51 -05:00
Christoph Büscher acade2b40a Tests: Remove platform specific assertion in NioSocketChannelTests
This check depends on the language settings on the system the
test runs on, e.g. it fails on Ubuntu with LANG=de_DE.UTF-8.
2017-06-29 17:32:51 +02:00
Christoph Büscher 927111c91d Remove QueryParseContext from parsing QueryBuilders (#25448)
Currently QueryParseContext is only a thin wrapper around an XContentParser that
adds little functionality of its own. It provides helpers for long-deprecated
field names, which can be removed, and two helper methods that can be made static
and moved to other classes. This is a first step in helping to remove
QueryParseContext entirely.
2017-06-29 17:10:20 +02:00
Tim Brooks cad57959e1 Remove finicky exception message assertion
In SimpleNioTransportTests we assert that an IOException has a certain
message. This message does not appear to be dependable (and might
change based on platform).

Our other transport tests (mock and netty) do not make this assertion.
Instead they only assert on our application exception message. This
commit removes the IOException message assertion and retains the
ConnectTransportException message assertion.
2017-06-28 14:16:04 -05:00
Tim Brooks 5f8be0e090 Introduce NioTransport into framework for testing (#24262)
This commit introduces a nio based tcp transport into framework for
testing.

Currently Elasticsearch uses a simple blocking tcp transport for
testing purposes (MockTcpTransport). This diverges from production
where our current transport (netty) is non-blocking.

The point of this commit is to introduce a testing variant that more
closely matches the behavior of production instances.
2017-06-28 10:51:20 -05:00
Yannick Welsch 5a4a47332c Use a single method to update shard state
This commit refactors index shard to provide a single method for
updating the shard state on an incoming cluster state update.

Relates #25431
2017-06-28 09:48:47 -04:00