Commit Graph

1354 Commits

Author SHA1 Message Date
Nhat Nguyen 84fd39f5bb
Separate acquiring safe commit and last commit (#28271)
Previously we introduced a new parameter to `acquireIndexCommit` to
allow acquiring either a safe commit or the last commit. However, with
the new parameter, callers could provide a nonsensical combination: flush
first but acquire the safe commit. This commit separates the
acquireIndexCommit method into two different methods to avoid that
problem. Moreover, this change should also improve readability.

Relates #28038
2018-02-16 21:25:58 -05:00
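As a rough illustration of the split described above (a hedged sketch with placeholder types; the real Engine API differs in return types and error handling):

```java
import java.io.Closeable;

// Illustrative only: one method with a flushFirst/safe flag combination becomes two
// explicit entry points, so "flush first but acquire the safe commit" cannot be expressed.
interface CommitAcquirer {
    interface IndexCommitRef extends Closeable {}   // placeholder for the acquired commit handle

    // Acquire the most recent commit; optionally flush first so the commit is fresh.
    IndexCommitRef acquireLastIndexCommit(boolean flushFirst);

    // Acquire the safe commit; no flush flag is offered because it would not apply here.
    IndexCommitRef acquireSafeIndexCommit();
}
```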
Luca Cavanna ebe5e8e635
REST high-level client: encode path parts (#28663)
The REST high-level client now supports encoding of path parts, so that, for instance, documents with valid ids containing characters that need to be encoded as part of URLs (`#` etc.) are properly supported. We also make sure that each path part can contain `/` by encoding it properly too.

Closes #28625
2018-02-15 17:22:45 +01:00
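A minimal sketch of the idea (not the client's actual implementation; `buildEndpoint`/`encodePart` are hypothetical helpers): percent-encode each path part so ids containing `#` or `/` stay a single path segment.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EndpointBuilder {
    static String buildEndpoint(String... pathParts) {
        StringBuilder endpoint = new StringBuilder();
        for (String part : pathParts) {
            endpoint.append('/').append(encodePart(part));
        }
        return endpoint.toString();
    }

    private static String encodePart(String part) {
        try {
            // URLEncoder targets form encoding, so map '+' back to a percent-encoded space
            return URLEncoder.encode(part, "UTF-8").replace("+", "%20");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        // "some#id" becomes "some%23id" and "a/b" becomes "a%2Fb", so each id stays one segment
        System.out.println(buildEndpoint("index", "type", "some#id"));
        System.out.println(buildEndpoint("index", "type", "a/b"));
    }
}
```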
olcbean 02fc16f10e Add Cluster Put Settings API to the high level REST client (#28633)
Relates to #27205
2018-02-15 17:21:45 +01:00
Boaz Leskes beb55d148a
Simplify the Translog constructor by always expecting an existing translog (#28676)
Currently the Translog constructor is capable both of opening an existing translog and creating a
new one (deleting existing files). This PR separates these two into separate code paths: the
constructor opens existing files, and a dedicated static method creates an empty translog.
2018-02-15 09:24:09 +01:00
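A hedged sketch of the resulting shape (illustrative names only, not the actual Translog code): the constructor only opens an existing translog, while a static factory creates an empty one and then opens it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class TranslogLike {
    private final Path location;

    // Opening path: expects the translog files to already exist.
    TranslogLike(Path location) throws IOException {
        if (Files.exists(location) == false) {
            throw new IllegalStateException("expected an existing translog at " + location);
        }
        this.location = location;
    }

    // Creation path: a dedicated static method writes the initial (empty) files, then opens them.
    static TranslogLike createEmpty(Path location) throws IOException {
        Files.createDirectories(location);
        // ... write the empty generation and checkpoint files here ...
        return new TranslogLike(location);
    }
}
```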
Lee Hinman b59b1cf59d
Move more XContent.createParser calls to non-deprecated version (#28672)
* Move more XContent.createParser calls to non-deprecated version

Part 2

This moves more of the callers to pass in the DeprecationHandler.

Relates to #28504

* Use parser's deprecation handler where appropriate

* Use logging handler in test that uses deprecated field on purpose
2018-02-14 11:24:48 -07:00
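As a rough illustration of the call shape these commits move toward (assuming the 6.x-era `XContent.createParser` overload that takes a `DeprecationHandler`; the registry and handler choices here are just examples):

```java
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;

public class CreateParserExample {
    public static void main(String[] args) throws Exception {
        // The deprecated overload picked a handler for you; the non-deprecated one makes
        // the caller decide how deprecated constructs are surfaced.
        try (XContentParser parser = JsonXContent.jsonXContent.createParser(
                NamedXContentRegistry.EMPTY,
                LoggingDeprecationHandler.INSTANCE,   // or the enclosing parser's handler
                "{\"field\":\"value\"}")) {
            System.out.println(parser.map());
        }
    }
}
```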
Lee Hinman 7c1f5f5054
Move more XContent.createParser calls to non-deprecated version (#28670)
* Move more XContent.createParser calls to non-deprecated version

This moves more of the callers to pass in the DeprecationHandler.

Relates to #28504

* Use parser's deprecation handler where available
2018-02-14 09:01:40 -07:00
Michael Basnight 920dff7053
Add released major logic to version utils (#28644)
VersionUtils did not previously have logic that removed the last major's
minor snapshot if there was a next bugfix and maintenance bugfix
release. This adds the logic and fixes some broken assumptions in tests
as well.

relates #28505
2018-02-12 18:33:45 -06:00
Michael Basnight 4e0c1463d5
Fix build.snapshot bug in version collection (#28641)
The build.snapshot was mistakenly passed in to every snapshot version,
so when release tests were run, these versions were mistaken for released
entities and could not be found in Maven, because they do not
exist. This fix removes that bug in the logic and always makes them proper
snapshots. This has a benefit of cleaning up the VersionUtilsTests
because they no longer rely on different sets of versions to check
against, which was also a bug.
2018-02-12 14:56:07 -06:00
Martijn van Groningen c19d84012e
iter 2018-02-12 13:50:48 +01:00
Martijn van Groningen 21e5ee6551
[TEST] Changed how stash dumps are logged in yaml tests in case of failures
Currently if a yaml test has a teardown and a test is failing then
a stash dump of a request in the teardown is logged instead of
a stash dump of a request in the test itself.

By handling the logging of stash dumps separately for setup, tests and
teardown yaml sections we shouldn't miss the stash dump of request/response
that is actually causing the yaml test to fail.
2018-02-12 13:50:48 +01:00
Boaz Leskes 4aece92b2c
IndexShardOperationPermits: shouldn't use new Throwable to capture stack traces (#28598)
This is a follow-up to #28567, changing the method used to capture stack traces, as requested
during the review. Instead of creating a throwable, we explicitly capture the stack trace of the
current thread. This should Make Jason Happy Again ™️.
2018-02-12 10:33:13 +01:00
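A minimal illustration of the two capture styles; the commit above switches from allocating a throwable to asking the current thread directly (class and variable names here are just for the example).

```java
public class StackCaptureExample {
    public static void main(String[] args) {
        // Old approach: a Throwable records the stack at construction time.
        StackTraceElement[] viaThrowable = new Throwable().getStackTrace();

        // New approach: capture the current thread's stack explicitly.
        StackTraceElement[] viaThread = Thread.currentThread().getStackTrace();

        System.out.println("throwable frames: " + viaThrowable.length);
        System.out.println("thread frames:    " + viaThread.length);
    }
}
```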
Michael Basnight e0bea70070
Generalize BWC logic (#28505)
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.

This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.

Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.

This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
2018-02-09 14:55:10 -06:00
Boaz Leskes ba59cf1262
Capture stack traces while issuing IndexShard operations permits to ease debugging (#28567)
Today we acquire a permit from the shard to coordinate between indexing operations, recoveries and other state transitions. When we leak a permit it's practically impossible to find who the culprit is. This PR adds stack trace capturing for each permit so we can identify which part of the code is responsible for acquiring the unreleased permit. This code is only active when assertions are active.

The output is something like:
```
java.lang.AssertionError: shard [test][1] on node [node_s0] has pending operations:
--> java.lang.RuntimeException: something helpful 2
	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:322)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
	at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.base/java.lang.Thread.run(Thread.java:844)

--> java.lang.RuntimeException: something helpful
	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:311)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
	at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.base/java.lang.Thread.run(Thread.java:844)

```
2018-02-08 22:59:02 +01:00
Tim Brooks 16f7e00514
Improve testTransportStatsWithException test (#28554)
This commit modifies the transport stats with exception test to remove
the requirement that we calculate the published address size when
comparing bytes received. This is tricky and is currently broken, as we
also place the address string in the transport exception; however, we do
not adjust the bytes for that.

The solution in this commit is to just serialize the transport exception
in the test and use that for the calculation.
2018-02-07 14:31:42 -07:00
Lee Hinman eebff4d2b3
Use non deprecated xcontenthelper (#28503)
* Move to non-deprecated XContentHelper.createParser(...)

This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.

Relates to #28449

Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.

* Remove the deprecated (and now non-needed) createParser method
2018-02-05 16:18:18 -07:00
Lee Hinman 3ddea8d8d2
Start switching to non-deprecated ParseField.match method (#28488)
This commit switches all the modules and server test code to use the
non-deprecated `ParseField.match` method, passing in the parser's deprecation
handler or the logging deprecation handler when a parser is not available (like
in tests).

Relates to #28449
2018-02-02 10:10:13 -07:00
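A hedged sketch of the call-shape change (assuming the 6.x-era `ParseField.match(String, DeprecationHandler)` overload and `XContentParser.getDeprecationHandler()`); the field name and helper methods are made up for the example.

```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.XContentParser;

class ParseFieldMatchExample {
    static final ParseField SIZE_FIELD = new ParseField("size");

    static boolean matches(XContentParser parser, String currentFieldName) {
        // Inside a parser: reuse the parser's own deprecation handler.
        return SIZE_FIELD.match(currentFieldName, parser.getDeprecationHandler());
    }

    static boolean matchesInTest(String currentFieldName) {
        // In tests where no parser is available: fall back to the logging handler.
        return SIZE_FIELD.match(currentFieldName, LoggingDeprecationHandler.INSTANCE);
    }
}
```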
Yannick Welsch 031415a5f6
Replicate writes only to fully initialized shards (#28049)
The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This commit changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
2018-02-02 11:13:07 +01:00
Luca Cavanna d860971572
REST high-level client: add support for split and shrink index API (#28425)
Relates to #27205
2018-02-01 16:37:01 +01:00
Jim Ferenczi dd40b984c4
Add a shallow copy method to aggregation builders (#28430)
This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder, replacing the factoriesBuilder and metaData.
This method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.

Relates #27782
2018-02-01 09:22:32 +01:00
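An illustrative sketch of the pattern (not the exact AggregationBuilder signature): rewriting sub-aggregations must not mutate the shared parent, so the parent is shallow-copied with the rewritten factories builder and the same metadata.

```java
import java.util.Map;

abstract class BuilderSketch {
    // Returns a copy of this builder with the given factories builder and metadata.
    abstract BuilderSketch shallowCopy(Object factoriesBuilder, Map<String, Object> metaData);

    BuilderSketch rewrite(Object rewrittenFactories, Map<String, Object> metaData) {
        // A new parent instance is returned instead of mutating `this`.
        return shallowCopy(rewrittenFactories, metaData);
    }
}
```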
Jason Tedor 1b3d529bef Introduce secure security manager to project
This commit migrates SecureSM, our secure security manager
implementation, from its own repository to being a sub-project of
Elasticsearch.
2018-01-31 18:23:28 -05:00
Nhat Nguyen 5e0be61774
Add logging to index commit deletion policy (#28448)
This helps us figure out which index commit an engine
started with or used in peer recovery.

Relates #28405
2018-01-31 11:09:49 -05:00
markharwood 77d2dd203e
Search - add allow_partial_search_results flag with default setting true (#28440)
Adds an allow_partial_search_results flag to search requests with a default setting of true.
When false, the search will return an error if it times out, has partial failures, or has missing shards,
rather than returning partial search results. A cluster-level setting provides the default for search requests with no flag.

Closes #27435
2018-01-31 15:51:29 +00:00
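A hedged per-request example (assuming the `SearchRequest#allowPartialSearchResults` setter this commit describes; the index name is made up):

```java
import org.elasticsearch.action.search.SearchRequest;

class PartialResultsExample {
    static SearchRequest strictSearch() {
        SearchRequest request = new SearchRequest("my-index");
        // Fail the whole search on timeouts, partial failures or missing shards,
        // instead of returning partial results; the cluster-level default applies otherwise.
        request.allowPartialSearchResults(false);
        return request;
    }
}
```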
Jim Ferenczi 7edb978256
RandomDocumentPicks#randomFieldName can produce invalid field name (#28419)
This change makes sure that this function does not create field names that end with a '.'; more precisely, it only allows
alphanumeric characters to compose the leaf field name.

Closes #27373
2018-01-31 09:21:09 +01:00
Yannick Welsch 9dd0886265
Fix NullPointerException in MockUncasedHostProvider (#28424)
The MockUncasedHostProvider accesses nodes that are not fully built yet, where TransportService.getNode() returns null, which means that the null entries end up in the list of seedNodes that UnicastZenPing then uses.
2018-01-30 10:44:19 +01:00
Simon Willnauer 43d1dcb919
Add a method that ensures that the cluster is yellow and has no initializing shards (#28416) 2018-01-29 20:46:30 +01:00
Nik Everett 66ff1b2a59
Tests: Wipe cluster settings after every test (#28410)
Cluster settings shouldn't leak into the next test.

I played with failing the test if it left any settings over, but that
felt like it added more ceremony than it was worth. The advantage is
that any test that intentionally wants to leave settings in place after
the test would fail and require looking at; but, so far as I can tell, we
don't have any such tests.
2018-01-29 11:47:04 -05:00
Ryan Ernst 3dd833ca0a
Plugins: Use one confirmation of all meta plugin permissions (#28366)
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
2018-01-26 15:44:44 -08:00
olcbean 9db23e48cd Add Indices Aliases API to the high level REST client (#27876)
Relates to #27205
2018-01-25 14:34:06 +01:00
Colin Goodheart-Smithe 75116a23cc
Adds test name to MockPageCacheRecycler exception (#28359)
This change adds the test name to the exceptions thrown by the MockPageCacheRecycler and MockBigArrays. Also, if there is more than one page/array that is not released, it will add the first one as the cause of the thrown exception and the others as suppressed exceptions.

Relates to #21315
2018-01-25 08:13:33 +00:00
Alexander Reelsen a87714aafc
Settings: Introduce settings updater for a list of settings (#28338)
This introduces a settings updater that allows specifying a list of
settings. Whenever one of those settings changes, the whole block of
settings is passed to the consumer.

This also fixes an issue with affix settings, when used in combination
with group settings, which could result in no settings being found when
used to get a setting for a namespace.

Lastly, logging has been slightly changed so that filtered settings now
only log the setting key.

Another bug has been fixed for the mock log appender, which did not
work when checking for the exact message.

Closes #28047
2018-01-24 09:47:17 +01:00
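A self-contained illustration of the concept (not Elasticsearch's actual scoped-settings API): a consumer registered for a list of keys receives the whole block of current values whenever any one of those keys changes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Consumer;

class BlockSettingsUpdater {
    private final List<String> keys;
    private final Consumer<Map<String, String>> consumer;

    BlockSettingsUpdater(List<String> keys, Consumer<Map<String, String>> consumer) {
        this.keys = keys;
        this.consumer = consumer;
    }

    void apply(Map<String, String> oldSettings, Map<String, String> newSettings) {
        boolean changed = keys.stream()
                .anyMatch(k -> Objects.equals(oldSettings.get(k), newSettings.get(k)) == false);
        if (changed) {
            Map<String, String> block = new HashMap<>();
            keys.forEach(k -> block.put(k, newSettings.get(k)));
            consumer.accept(block);   // the consumer always sees the full block of settings
        }
    }
}
```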
Christoph Büscher ba9e2e44cb
[Test] Re-Add integer_range and date_range field types for query builder tests (#28171)
The tests for those field types were removed in #26549 because the range mapper
was moved to a module, but later this mapper was moved back to core in #27854.
This change adds back those two field types to the general setup in
AbstractQueryTestCase and adds some specifics to the RangeQueryBuilder and
TermsQueryBuilder tests. It also adds back an integration test in SearchQueryIT that
had been removed but can be kept now that the mapper is back in core.

Relates to #28147
2018-01-23 13:08:54 +01:00
Luca Cavanna 0c83ee2a5d
Trim down usages of `ShardOperationFailedException` interface (#28312)
In many cases we use the `ShardOperationFailedException` interface to abstract an exception that can only be of one type, namely `DefaultShardOperationFailedException`. There is no need to use the interface in such cases; the concrete type should be used instead. That has the additional advantage of simplifying the parsing of such exceptions back from REST responses for the high-level REST client.
2018-01-22 15:51:46 +01:00
kel 452c36c552 Calculate sum in Kahan summation algorithm in aggregations (#27807) (#27848) 2018-01-22 12:42:56 +01:00
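For reference, a compact sketch of Kahan (compensated) summation, the technique named in the commit above; the demo values are only an illustration of why plain addition loses small increments.

```java
public class KahanSum {
    public static double sum(double[] values) {
        double sum = 0.0;
        double compensation = 0.0;                     // running round-off error
        for (double value : values) {
            double corrected = value - compensation;
            double newSum = sum + corrected;           // low-order bits of corrected may be lost here
            compensation = (newSum - sum) - corrected; // algebraically zero, numerically the lost part
            sum = newSum;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] values = new double[1001];
        values[0] = 1e16;
        for (int i = 1; i < values.length; i++) {
            values[i] = 1.0;                           // each 1.0 is below the rounding step at 1e16
        }
        // Plain left-to-right addition stays at 1.0E16, silently dropping every increment;
        // the compensated sum lands essentially on the exact value 1.0E16 + 1000.
        System.out.println(sum(values));
    }
}
```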
Adrien Grand 700d9ecc95
Remove the `update_all_types` option. (#28288)
This option is not useful in 7.x since no indices may have more than one type
anymore.
2018-01-22 12:03:07 +01:00
Tim Brooks a6a57a71d3
Implement socket and server ChannelContexts (#28275)
This commit is related to #27260. Currently we have a channel context that
implements reading and writing logic for socket channels, exception
contexts to handle exceptions, and accepting contexts to handle accepted
channels. This PR introduces a ChannelContext that handles close and
exception handling for all channel types. Additionally, it has
implementers that provide specific functionality for socket channels
(reading and writing) and for server channels (accepting).
2018-01-18 13:06:40 -07:00
Tim Brooks 20fb7a6d87
Modify Abstract transport tests to use impls (#28270)
There are a number of tests in `AbstractSimpleTransportTestCase` that
create `MockTcpTransport` impls. This commit modifies two of these tests
to use the transport implementation that is being tested.
2018-01-18 10:59:42 -07:00
Tim Brooks 4ea9ddb7d3
Unify nio read / write channel contexts (#28160)
This commit is related to #27260. Right now we have separate read and
write contexts for implementing specific protocol logic. However, some
protocols require a closer relationship between read and write
operations than is allowed by our current model. An example is HTTP,
which might require a write if a problem with request parsing is
encountered.

Additionally, some protocols require close messages to be sent when a
channel is shut down. This is also problematic in our current model,
where we assume that channels should simply be queued for close and
forgotten.

This commit transitions to a single ChannelContext which implements
all read, write, and close logic for protocols. It is the job of the
context to tell the selector when to close the channel. A channel can
still be manually queued for close with a selector. This is how server
channels are closed for now, and this route allows timeout mechanisms on
normal channel closes to be implemented.
2018-01-17 09:44:21 -07:00
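A rough sketch of the unified shape described above (illustrative names only, not the actual nio library API): one context owns read, write and close handling for a channel, and it alone decides when to ask the selector to close it.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

interface ChannelContextSketch {
    int read() throws IOException;          // protocol-specific read handling
    void queueWrite(ByteBuffer buffer);     // protocols may queue a write in response to a read
    void flushWrites() throws IOException;
    void handleException(Exception e);
    void closeChannel();                    // may first send a protocol-level close message
}
```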
Alexander Reelsen d32cb8089b
Tests: Decrease log level for adding a header value (#28246)
This logging message adds considerable noise to many REST tests, if you
are using something like HTTP basic auth in every API call or set any custom
header.

The log level moves from info to debug, so it can still be seen if wanted.
2018-01-17 09:14:44 +01:00
Jim Ferenczi bd11e6c441
Fix NPE on composite aggregation with sub-aggregations that need scores (#28129)
The composite aggregation defers the collection of sub-aggregations to a second pass that visits documents only if they
appear in the top buckets. However, the scorer for sub-aggregations is not set on this second pass, which generates an NPE if any sub-aggregation
tries to access the score. This change creates a scorer for the second pass and makes sure that sub-aggs can use it safely to check the score of
the collected documents.
2018-01-15 18:30:38 +01:00
Tim Brooks ee7eac8dc1
`MockTcpTransport` to connect asynchronously (#28203)
The method `initiateChannel` on `TcpTransport` is explicit in that
channels can be connected asynchronously. All production implementations
do connect asynchronously. Only the blocking `MockTcpTransport`
connects in a synchronous manner. This avoids testing some of the
blocking code in `TcpTransport` that waits on connections to complete.
Additionally, it requires a more extensive method signature than
required for other transports.

This commit modifies the `MockTcpTransport` to make these connections
asynchronously on a different thread. Additionally, it simplifies the
`initiateChannel` method signature.
2018-01-15 10:20:30 -07:00
Tim Brooks 3895add2ca
Introduce elasticsearch-core jar (#28191)
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and parts of Loggers are
moved, as JarHell depends on them.
2018-01-15 09:59:01 -07:00
Igor Motov c75ac319a6
Add ability to associate an ID with tasks (#27764)
Adds support for capturing the X-Opaque-Id header from a REST request and storing its value in the tasks that this request started. It works for all user-initiated tasks (not only search).

Closes #23250

Usage:
```
$ curl -H "X-Opaque-Id: imotov" -H "foo:bar" "localhost:9200/_tasks?pretty&group_by=parents"
{
  "tasks" : {
    "7qrTVbiDQKiZfubUP7DPkg:6998" : {
      "node" : "7qrTVbiDQKiZfubUP7DPkg",
      "id" : 6998,
      "type" : "transport",
      "action" : "cluster:monitor/tasks/lists",
      "start_time_in_millis" : 1513029940042,
      "running_time_in_nanos" : 266794,
      "cancellable" : false,
      "headers" : {
        "X-Opaque-Id" : "imotov"
      },
      "children" : [
        {
          "node" : "V-PuCjPhRp2ryuEsNw6V1g",
          "id" : 6088,
          "type" : "netty",
          "action" : "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis" : 1513029940043,
          "running_time_in_nanos" : 67785,
          "cancellable" : false,
          "parent_task_id" : "7qrTVbiDQKiZfubUP7DPkg:6998",
          "headers" : {
            "X-Opaque-Id" : "imotov"
          }
        },
        {
          "node" : "7qrTVbiDQKiZfubUP7DPkg",
          "id" : 6999,
          "type" : "direct",
          "action" : "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis" : 1513029940043,
          "running_time_in_nanos" : 98754,
          "cancellable" : false,
          "parent_task_id" : "7qrTVbiDQKiZfubUP7DPkg:6998",
          "headers" : {
            "X-Opaque-Id" : "imotov"
          }
        }
      ]
    }
  }
}
```
2018-01-12 15:34:17 -05:00
Nhat Nguyen 626c3d1fda
Primary send safe commit in file-based recovery (#28038)
Today a primary shard transfers the most recent commit point to a 
replica shard in a file-based recovery. However, the most recent commit
may not be a "safe" commit; this causes a replica shard not having a
safe commit point until it can retain a safe commit by itself.

This commit collapses the snapshot deletion policy into the combined
deletion policy and modifies the peer recovery source to send a safe
commit.

Relates #10708
2018-01-11 10:39:12 -05:00
Jason Tedor 2c24ac7426
Set watermarks in single-node test cases
We set the watermarks to low values in other test cases to prevent test
failures on nodes with low disk space (if the disk space is too low, the
test will fail anyway but we should not prematurely fail). This commit
sets the watermarks in the single-node test cases to avoid test failures
in such situations.

Relates #28134
2018-01-09 12:51:50 -05:00
Jim Ferenczi 36729d1c46
Add the ability to bundle multiple plugins into a meta plugin (#28022)
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:

|____elasticsearch/
| |____   <plugin1> <-- The plugin files for plugin1 (the content of the elastisearch directory)
| |____   <plugin2>  <-- The plugin files for plugin2
| |____   meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:

description: simple summary of the meta plugin.
name: the meta plugin name
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:

|_____ plugins
| |____   <name_of_the_meta_plugin>
| | |____   meta-plugin-descriptor.properties
| | |____   <plugin1>
| | |____   <plugin2>
If the sub-plugins contain a config or a bin directory, they are copied into a sub-folder inside the meta plugin config/bin directory.

|_____ config
| |____   <name_of_the_meta_plugin>
| | |____   <plugin1>
| | |____   <plugin2>

|_____ bin
| |____   <name_of_the_meta_plugin>
| | |____   <plugin1>
| | |____   <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).

It is also not possible to remove a sub-plugin inside a meta plugin; only full removal of the meta plugin is allowed.

Closes #27316
2018-01-09 18:28:43 +01:00
Tanguy Leroux bba591bea0
Consistent updates of IndexShardSnapshotStatus (#28130)
This commit changes IndexShardSnapshotStatus so that the Stage is updated
coherently with any required information. It also provides an asCopy()
method that returns the status of an IndexShardSnapshotStatus at a given
point in time, ensuring that all information is coherent.

Closes #26480
2018-01-09 14:01:57 +01:00
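An illustrative sketch of the pattern (much simplified, not the actual IndexShardSnapshotStatus class): the stage and its related fields are updated under one lock, and asCopy() returns an immutable point-in-time view so readers never observe a half-updated status.

```java
class ShardSnapshotStatusSketch {
    enum Stage { INIT, STARTED, DONE, FAILURE }

    private Stage stage = Stage.INIT;
    private long startTime;
    private String failure;

    synchronized void moveToStarted(long startTime) {
        this.stage = Stage.STARTED;
        this.startTime = startTime;   // updated together with the stage
    }

    synchronized void moveToFailed(String failure) {
        this.stage = Stage.FAILURE;
        this.failure = failure;
    }

    synchronized Copy asCopy() {
        return new Copy(stage, startTime, failure);
    }

    static final class Copy {          // immutable snapshot of the status at one point in time
        final Stage stage;
        final long startTime;
        final String failure;

        Copy(Stage stage, long startTime, String failure) {
            this.stage = stage;
            this.startTime = startTime;
            this.failure = failure;
        }
    }
}
```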
olcbean fd45a46ce8 Deprecate `isShardsAcked()` in favour of `isShardsAcknowledged()` (#27819)
Several responses include the shards_acknowledged flag (indicating whether the
requisite number of shard copies started before the completion of the operation)
and there are two different getters used: isShardsAcknowledged() and isShardsAcked().

This PR deprecates isShardsAcked() in favour of isShardsAcknowledged() in
CreateIndexResponse, RolloverResponse and CreateIndexClusterStateUpdateResponse.

Closes #27784
2018-01-08 10:57:45 +01:00
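A simplified sketch of the deprecation pattern described above (the response class here is heavily reduced): the old getter delegates to the new one so existing callers keep working.

```java
class CreateIndexResponseSketch {
    private final boolean shardsAcknowledged;

    CreateIndexResponseSketch(boolean shardsAcknowledged) {
        this.shardsAcknowledged = shardsAcknowledged;
    }

    public boolean isShardsAcknowledged() {
        return shardsAcknowledged;
    }

    /** @deprecated use {@link #isShardsAcknowledged()} instead */
    @Deprecated
    public boolean isShardsAcked() {
        return isShardsAcknowledged();
    }
}
```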
Jason Tedor eaa636d4bb Clarify reproduce info on Windows
This commit corrects the test failure reproduction line on Windows.

Relates #28104
2018-01-06 22:49:14 -05:00
Jason Tedor d712f581ca
Fix reproduction info to point to Gradle wrapper
With the Gradle wrapper in place, we should point the reproduction info
to specify using the Gradle wrapper too.

Relates #28104
2018-01-06 08:47:23 -05:00
Tim Brooks 38701fb6ee
Create nio-transport plugin for NioTransport (#27949)
This is related to #27260. This commit moves the NioTransport from
:test:framework to a new nio-transport plugin. Additionally, supporting
tcp decoding classes are moved to this plugin. Generic byte reading and
writing contexts are moved to the nio library.

Additionally, this commit adds a basic MockNioTransport to
:test:framework that is a TcpTransport implementation for testing that
is driven by nio.
2018-01-05 09:41:29 -07:00