Commit Graph

7250 Commits

Author SHA1 Message Date
Jay Modi b04f8fe159 prevent NPE when calling GetResponse#getSourceAsBytesRef
This commit checks for a null BytesReference as the value for `source`
in GetResult#sourceRef and simply returns null. Previously this would
have resulted in an NPE. While this does seem internal at first glance, it can affect
user code as a GetResponse could trigger this when the document is missing.

Additionally, the CompressorFactory#uncompressIfNeeded now requires a
non-null argument.
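
A minimal method-level sketch of the guard described above, assuming a simplified accessor; the real method lives in GetResult and this is illustrative only:

```
// Sketch of the null guard described above; `source` is a BytesReference field.
// Simplified relative to the real GetResult#sourceRef.
public BytesReference sourceRef() {
    if (source == null) {
        return null; // missing document: nothing to uncompress, no NPE
    }
    try {
        this.source = CompressorFactory.uncompressIfNeeded(this.source);
        return this.source;
    } catch (IOException e) {
        throw new ElasticsearchParseException("failed to decompress source", e);
    }
}
```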
2017-01-09 09:19:05 -05:00
Nik Everett f4884e0726 Replace SearchExtRegistry with namedObject (#22492)
This is one of the last things in `SearchRequestParsers`.
2017-01-09 08:35:54 -05:00
Yannick Welsch 8741691511 Fix primary relocation for shadow replicas (#22474)
The recovery process started during primary relocation of shadow replicas accesses the engine on the source shard after it's been closed, which results in the source shard failing itself.
2017-01-08 12:18:52 +01:00
Nik Everett 12923ef896 Close and flush refresh listeners on shard close
Right now closing a shard looks like it strands refresh listeners,
causing tests like
`delete/50_refresh/refresh=wait_for waits until changes are visible in search`
to fail. Here is a build that fails:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+multi_cluster_search+multijob-darwin-compatibility/4/console

This attempts to fix the problem by implementing `Closeable` on
`RefreshListeners` and rejecting listeners when closed. More importantly
the act of closing the instance flushes all pending listeners
so we shouldn't have any stranded listeners on close.
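
A hedged sketch of the close-and-flush pattern described above; the class below is a stand-in, not the actual RefreshListeners implementation:

```
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Stand-in for the pattern: registrations are rejected after close, and closing
// fires every pending listener so none are stranded.
class ClosableListenerRegistry implements Closeable {
    private final List<Runnable> pending = new ArrayList<>();
    private boolean closed = false;

    synchronized boolean add(Runnable listener) {
        if (closed) {
            listener.run(); // "reject" by completing immediately instead of stranding it
            return false;
        }
        pending.add(listener);
        return true;
    }

    @Override
    public synchronized void close() {
        closed = true;
        pending.forEach(Runnable::run); // flush all pending listeners
        pending.clear();
    }
}
```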

Because it was needed for testing, this also adds the number of
pending listeners to the `CommonStats` object and all APIs to which
that flows: `_cat/nodes`, `_cat/indices`, `_cat/shards`, and
`_nodes/stats`.
2017-01-06 20:03:32 -05:00
Ali Beyad b0c009ae76 Gracefully handles pre 2.x compressed snapshots
In pre 2.x versions, if the repository was set to compress snapshots,
then snapshots would be compressed with the LZF algorithm.  In 5.x,
Elasticsearch no longer supports the LZF compression algorithm.  This
presents an issue when retrieving snapshots in a repository or upgrading
repository data to the 5.x version, because Elasticsearch throws an
exception when it tries to read snapshot metadata that was
compressed using LZF.

This commit gracefully handles the situation by introducing a new
incompatible-snapshots blob to the repository.  For any pre-2.x snapshot
that cannot be read, that snapshot is removed from the list of active
snapshots, because the snapshot could not be restored anyway.  Instead,
the snapshot is recorded in the incompatible-snapshots blob.  When
listing snapshots, both active snapshots and incompatible snapshots will
be listed, with incompatible snapshots showing an `INCOMPATIBLE` state.
Any attempt to restore an incompatible snapshot will result in an
exception.
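
A hedged sketch of the listing behaviour described above, with made-up names; unreadable snapshot metadata moves the snapshot to the incompatible list rather than failing the whole operation:

```
import java.util.ArrayList;
import java.util.List;

// Illustrative only; the real logic lives in the blob store repository code.
class SnapshotListing {
    interface MetadataReader { void read(String snapshotId) throws Exception; }

    static List<String> compatibleSnapshots(List<String> all, MetadataReader reader,
                                            List<String> incompatible) {
        List<String> compatible = new ArrayList<>();
        for (String id : all) {
            try {
                reader.read(id);      // pre-2.x LZF-compressed metadata fails here
                compatible.add(id);
            } catch (Exception e) {
                incompatible.add(id); // recorded in the incompatible-snapshots blob
            }
        }
        return compatible;
    }
}
```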
2017-01-06 17:52:10 -05:00
javanna ded694fc83 Make StatusToXContent extend ToXContentObject and rename it to StatusToXContentObject
This also allows making RestToXContentListener require ToXContentObject rather than ToXContent
2017-01-06 23:31:48 +01:00
javanna 8edf59c9e7 Make DocWriteResponse a ToXContentObject
This also involved changing index, delete, and update so that bulk can print them out in its own format.
2017-01-06 23:31:48 +01:00
javanna d5510701a0 Make SearchResponse a ToXContentObject 2017-01-06 23:31:48 +01:00
javanna 3393d1b409 Make TermVectorsResponse a ToXContentObject 2017-01-06 23:31:48 +01:00
javanna 45d4938fcc Migrate some more responses to ToXContentObject 2017-01-06 23:31:48 +01:00
javanna f4aab0138d introduce ToXContentObject interface
`ToXContentObject` extends `ToXContent` without adding new methods to it, while making it possible to mark classes that output complete xcontent objects and to distinguish them from classes that require an anonymous object to be started and ended externally.

Ideally ToXContent would be renamed to ToXContentFragment, but that would be a huge change in our codebase, hence we simply document the fact that toXContent outputs fragments, with no guarantee that the output is valid on its own without an external ancestor.
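
The shape of the new interface, as described above; illustrative only, the real interface lives in the common xcontent package:

```
// Marker interface: implementing it signals that toXContent() emits a complete object,
// including startObject()/endObject(), rather than a fragment.
public interface ToXContentObject extends ToXContent {
}
```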

Relates to #16347
2017-01-06 23:31:48 +01:00
Ryan Ernst cd6e3f4cea Merge branch 'master' into keystore 2017-01-06 09:32:08 -08:00
Ryan Ernst 42ebfe7bdb fix NPE 2017-01-06 09:11:07 -08:00
Tim B b9c2c2f6f0 Move IfConfig.logIfNecessary call into bootstrap (#22455)
This is related to #22116. A logIfNecessary() call makes a call to
NetworkInterface.getInterfaceAddresses() requiring SocketPermission
connect privileges. By moving this to bootstrap the logging call can be
made before installing the SecurityManager.
2017-01-06 11:10:53 -06:00
Simon Willnauer 79093e1663 Ensure shrunk indices carry over version information from its source (#22469)
Today when an index is shrunk the version information is not carried over
from the source to the target index. This can cause major issues like mapping
incompatibilities, for instance if an index from a previous major version is shrunk.

This commit ensures that all version information from the source index is preserved
when a shrunk index is created.

Closes #22373
2017-01-06 16:36:43 +01:00
Ryan Ernst eb596d7270 more renames 2017-01-06 01:03:45 -08:00
Ryan Ernst 6e406aed2d addressing more PR comments 2017-01-06 00:49:18 -08:00
javanna 5daf46286e remove ParseFieldMatcher usages from InternalSearchHit 2017-01-05 19:33:04 +01:00
javanna 975fee402a remove ParseFieldMatcher usages from suggesters 2017-01-05 19:33:04 +01:00
javanna 13dcb8ccbe remove ParseFieldMatcher usages from IngestMetadata 2017-01-05 19:33:04 +01:00
javanna d60e9bddd0 remove ParseFieldMatcher usages from IndexGraveyard 2017-01-05 19:33:04 +01:00
javanna d87a30647b remove ParseFieldMatcher usages from SearchAfterBuilder 2017-01-05 19:33:04 +01:00
javanna 723bdc4549 remove ParseFieldMatcher usages from FetchSourceContext 2017-01-05 19:33:04 +01:00
javanna 6102523033 remove ParseFieldMatcher usages from Script parsing code 2017-01-05 19:33:04 +01:00
javanna 6b9a8db069 fix unchecked generics warnings in ObjectParser 2017-01-05 19:33:04 +01:00
javanna 1f7960aa52 ObjectParser to no longer require ParseFieldMatcherSupplier as its Context
ParseFieldMatcher as well as ParseFieldMatcherSupplier will soon be removed, hence the ObjectParser's context doesn't need to be a ParseFieldMatcherSupplier anymore. That will allow removing ParseFieldMatcherSupplier's implementations, little by little.
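
A hedged sketch of what this enables: an ObjectParser whose context is an arbitrary class rather than a ParseFieldMatcherSupplier. The Item and ItemContext types here are made up for illustration:

```
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;

// Illustrative declaration; only the generic bound changed, the ObjectParser API is unchanged.
class Item {
    String name;
    void setName(String name) { this.name = name; }
}

class ItemContext { /* any parse-time state; no ParseFieldMatcherSupplier required */ }

class ItemParsing {
    static final ObjectParser<Item, ItemContext> PARSER = new ObjectParser<>("item", Item::new);
    static {
        PARSER.declareString(Item::setName, new ParseField("name"));
    }
}
```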
2017-01-05 19:33:04 +01:00
javanna 9394792392 remove unused ParseFieldMatcher imports/arguments 2017-01-05 19:33:04 +01:00
Yannick Welsch 182e8115de [TEST] Fix IndexRecoveryIT.testDisconnectsDuringRecovery
The test currently checks that the recovering shard is not failed when it is not a primary relocation that has moved past the finalization step.
Checking if it has moved past that step is done by intercepting the request between the replication source and the target and checking if it has seen
the WAIT_FOR_CLUSTERSTATE action, as this is the next action called after finalization. This action can, however, occur only after the shard was
already failed, and thus trip the assertion. This commit changes the check to look out for the FINALIZE action, independently of whether it succeeded or not.
2017-01-05 19:12:21 +01:00
Yannick Welsch cfc106d721 Don't close store under CancellableThreads (#22434)
#22325 changed the recovery retry logic to use unique recovery ids. The change also introduced an issue, however, which made it possible for the shard store to be closed under CancellableThreads, triggering assertions in the node locking logic. This commit limits the use of CancellableThreads only to the part where we wait on the old recovery target to be closed.
2017-01-05 18:11:58 +01:00
Adrien Grand 97f3a9bd79 Relax LiveVersionMapTests.testRamBytesUsed.
With Java 9's new restrictions we cannot compute RAM usage as accurately as
before. See https://issues.apache.org/jira/browse/LUCENE-7595.
2017-01-05 11:33:24 +01:00
Simon Willnauer a5daa5d3a2 Execute low level handshake in #openConnection (#22440)
Today we execute the low level handshake on the TCP layer in #connectToNode.
If #openConnection is used directly, which is an expert-level operation, no handshake is executed,
which allows connecting to nodes that are not necessarily compatible. This change
moves the handshake to #openConnection to prevent bypassing this logic.
2017-01-05 07:32:53 +01:00
Ryan Ernst bf51522788 Add 5.3 version 2017-01-04 15:00:43 -08:00
Ali Beyad 9d422c1c34 IndicesService handles all exceptions during index deletion (#22433)
Previously, we could run into a situation where attempting to delete an
index due to a cluster state update would cause an unhandled exception
to bubble up to the ClusterService and cause the cluster state applier
to fail. The result of this situation is that the cluster state never
gets updated on the ClusterService because the exception happens before
all cluster state appliers have completed and the ClusterService only
updates the cluster state once all cluster state appliers have
successfully completed.

All other methods on IndicesService properly handle all exceptions and
not just IOExceptions, but there were two instances with respect to
index deletion where only IOExceptions were handled by the
IndicesService. If any other exception occurred during these delete
operations, the exception would be bubbled up to the ClusterService,
causing the aforementioned issues.

This commit ensures all methods in IndicesService properly capture all
types of Exceptions, so that the ClusterService manages to update the
cluster state, even in the presence of shard creation/deletion failures.
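
A minimal standalone sketch of the broadened handling described above, with illustrative names rather than the actual IndicesService code:

```
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative: the delete path now catches and logs any Exception, not just IOException,
// so a failure never propagates up to the cluster state applier.
class IndexDeleter {
    private static final Logger LOGGER = Logger.getLogger(IndexDeleter.class.getName());

    void deleteIndexStoreSafely(String index, Runnable deletion) {
        try {
            deletion.run();      // the actual store deletion work
        } catch (Exception e) {  // previously only IOException was handled here
            LOGGER.log(Level.WARNING, "failed to delete index store for [" + index + "]", e);
        }
    }
}
```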

Note that failing to update the cluster state in the presence of such
exceptions can have many unintended consequences, one of them being
the tripping of the assertion in IndicesClusterStateService#removeUnallocatedIndices
where the assumption is that if there is an IndexService to remove with
an unassigned shard, then the index must exist in the cluster state, but if
the cluster state was never updated due to the aforementioned exceptions,
then the cluster state will not have the index in question.
2017-01-04 13:16:49 -06:00
Adrien Grand f8998fece5 Upgrade to lucene-6.4.0-snapshot-084f7a0. (#22413) 2017-01-04 19:03:52 +01:00
Jim Ferenczi 360ce532eb Implement stats for geo_point and geo_shape field (#22391)
Currently `geo_point` and `geo_shape` fields are treated as `text` fields by the field stats API and we
try to extract the min/max values with MultiFields.getTerms.
This is ok in master because a `geo_point` field is always a Point field, but it can cause problems in 5.x (and 2.x) because the legacy
`geo_point` fields are indexed as terms.
As a result the min and max are extracted and then printed in the FieldStats output using BytesRef.utf8ToString,
which can throw an IndexOutOfBoundsException since they are not valid UTF-8 strings.
This change ensures that we never try to extract min/max information from a `geo_point` field.
It does not add a new type for geo points in the field stats API, so we'll continue to use `text` for this kind of field.
This PR is targeted to master even though we could only commit this change to 5.x. I think it's cleaner to have it in master too before we make any decision on
https://github.com/elastic/elasticsearch/pull/21947.

Fixes #22384
2017-01-04 10:42:22 +01:00
Ryan Ernst 4e4a40df7a feedback 2017-01-03 15:42:38 -08:00
Jason Tedor c6ddff757e Cleanup some comments in IndexShard.java
This commit cleans up the comments in IndexShard related to sequence numbers, making
them uniform in their formatting and taking advantage of the line-length
limit of 140 characters.
2017-01-03 15:10:38 -05:00
Jason Tedor 9a65d2008e Cleanup comments in GlobalCheckpointService.java
This commit cleans up the comments in GlobalCheckpointService, making
them uniform in their formatting and taking advantage of the line-length
limit of 140 characters.
2017-01-03 12:23:33 -05:00
Jason Tedor 64888ab1d3 Cleanup comments in SequenceNumbersService.java
This commit cleans up the comments in SequenceNumbersService, making
them uniform in their formatting and taking advantage of the line-length
limit of 140 characters.
2017-01-03 12:06:43 -05:00
Ali Beyad 6f242920d3 [TEST] only check node decisions if not in the AWAITING_INFO state 2017-01-03 11:39:40 -05:00
javanna ee4dde46d3 Remove ParseFieldMatcher usages from Aggregator 2017-01-03 15:52:32 +01:00
javanna 8b8ff8b9e2 Remove ParseFieldMatcher usages from SearchService 2017-01-03 15:52:32 +01:00
javanna 6329a98a97 Remove ParseFieldMatcher usages from SearchContext 2017-01-03 15:52:32 +01:00
javanna 77f4152a18 Remove ParseFieldMatcher usages from a couple of Rest Actions 2017-01-03 15:52:32 +01:00
javanna 0d67891a64 Remove ParseFieldMatcher usages from QueryParsers#parseRewriteMethod 2017-01-03 15:52:32 +01:00
javanna 40540b3f3f Remove unused QueryParsers#setRewriteMethod 2017-01-03 15:52:32 +01:00
javanna 648ed46f01 Remove ParseFieldMatcher usages from MoreLikeThisQueryBuilder & MultiMatchQueryBuilder 2017-01-03 15:52:32 +01:00
javanna c06d00dce1 Remove ParseFieldMatcher usage from SearchRequest 2017-01-03 15:52:32 +01:00
javanna 45c67b5ee5 Remove ParseFieldMatcher usage from AggregatorParsers 2017-01-03 15:52:32 +01:00
javanna 6f4faf5233 Remove ParseFieldMatcher usage from AllocationCommands 2017-01-03 15:52:32 +01:00
javanna 8f297ec42c Remove ParseFieldMatcher usage from ParseFieldRegistry 2017-01-03 15:52:32 +01:00
javanna 41c7d3e092 Remove ParseFieldMatcher usage from Mappers 2017-01-03 15:52:32 +01:00
Jason Tedor c5a8fd9719 Cleanup some whitespace in LocalCheckpointService.java
This commit just fixes a couple whitespace formatting issues in
o/e/i/s/LocalCheckpointService.java.
2017-01-03 09:30:31 -05:00
Jason Tedor f086d1d3db Cleanup comments in LocalCheckpointService.java
This commit cleans up the comments in LocalCheckpointService, making
them uniform in their formatting and taking advantage of the line-length
limit of 140 characters.
2017-01-03 09:29:01 -05:00
Christoph Büscher a773d46c69 Remove deprecated `minimum_number_should_match` in BoolQueryBuilder
After deprecating getters and setters and the query DSL parameter in 5.x,
support for `minimum_number_should_match` can be removed entirely. Also
consolidated comments with the ones on 5.x branch and added an entry to the
migration docs.
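
After the removal, only the `minimum_should_match` spelling remains. A hedged usage sketch with the standard Java query builders; field names and values here are arbitrary:

```
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class MinimumShouldMatchExample {
    // Replaces the removed minimum_number_should_match / minimumNumberShouldMatch variants.
    static BoolQueryBuilder atLeastOneTag() {
        return QueryBuilders.boolQuery()
                .should(QueryBuilders.termQuery("tag", "elasticsearch"))
                .should(QueryBuilders.termQuery("tag", "lucene"))
                .minimumShouldMatch("1");
    }
}
```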
2017-01-03 15:14:33 +01:00
Daniel Mitterdorfer 1ed64f0551 Eliminate unnecessary declaration of IOException
With this commit we remove the declaration of IOException from
assertWarnings and modify all call sites.

Checked with @javanna
2017-01-03 12:36:28 +01:00
Christoph Büscher 16d79842ac Remove getters and setters for "minimumNumberShouldMatch" in BoolQueryBuilder
Currently we have getters and setters for both "minimumNumberShouldMatch" and
"minimumShouldMatch", which both access the same internal value
(minimumShouldMatch). Since we only document the `minimum_should_match`
parameter for the query DSL, I think we can deprecate the other getters and
setters for 5.x and remove with 6.0, also deprecating the
`minimum_number_should_match` query DSL parameter.
2017-01-03 11:29:04 +01:00
Tim Vernum 6ad5486e6b Implement Comparable in Version (#22378)
Supports using streams to calculate min/max of a collection of Versions, etc.
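
What implementing Comparable enables, sketched with the standard stream API; the version strings are arbitrary:

```
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import org.elasticsearch.Version;

class VersionRange {
    public static void main(String[] args) {
        List<Version> versions = Arrays.asList(
                Version.fromString("5.1.1"), Version.fromString("2.4.1"), Version.fromString("5.0.0"));
        // min/max now work because Version implements Comparable<Version>
        Version oldest = versions.stream().min(Comparator.naturalOrder()).get();
        Version newest = versions.stream().max(Comparator.naturalOrder()).get();
        System.out.println(oldest + " .. " + newest);
    }
}
```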
2017-01-03 12:20:17 +11:00
Ali Beyad 38427c1df0 [TEST] don't wait for all cluster info in the explain API, just assert
an upper and lower bound
2017-01-02 18:31:17 -05:00
Ali Beyad 49298c16a9 [TEST] fix explain API awaiting info explanation check 2017-01-02 18:18:09 -05:00
Ali Beyad 47907b7093 [TEST] fix explain API test to allow for either awaiting info state or
no valid shard copy
2017-01-02 15:24:01 -05:00
Ali Beyad 20ab4be59f Cluster Explain API uses the allocation process to explain shard allocation decisions (#22182)
This PR completes the refactoring of the cluster allocation explain API and improves it in the following two high-level ways:

 1. The explain API now uses the same allocators that the AllocationService uses to make shard allocation decisions. Prior to this PR, the explain API would run the deciders against each node for the shard in question, but this was not executed on the same code path as the allocators, so many shard allocation scenarios were not captured.

 2. The APIs have changed, both on the Java and JSON level, to accurately capture the decisions made by the system. The APIs also now report on shard moving and rebalancing decisions, whereas the previous API did not report decisions for moving shards which cannot remain on their current node or rebalancing shards to form a more balanced cluster.

Note: this change affects plugin developers who may have a custom implementation of the ShardsAllocator interface. The method weighShards has been removed and no longer has any utility. In order to support the new explain API, however, a custom implementation of ShardsAllocator must now implement ShardAllocationDecision decideShardAllocation(ShardRouting shard, RoutingAllocation allocation), which provides a decision and explanation for allocating a single shard. For implementations that do not support explaining a single shard allocation via the cluster allocation explain API, this method can simply throw an UnsupportedOperationException.
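
A hedged sketch for plugin authors based on the description above; the import paths follow the 5.x package layout and should be treated as assumptions:

```
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision;
import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;

public class MyShardsAllocator implements ShardsAllocator {
    @Override
    public void allocate(RoutingAllocation allocation) {
        // custom allocation logic goes here
    }

    @Override
    public ShardAllocationDecision decideShardAllocation(ShardRouting shard, RoutingAllocation allocation) {
        // this allocator opts out of the cluster allocation explain API
        throw new UnsupportedOperationException("explain is not supported by this allocator");
    }
}
```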
2017-01-02 12:28:32 -06:00
javanna a3918ad094 Remove unused ParseFieldMatcher#match method 2016-12-31 09:24:44 +01:00
javanna cd6b569286 Remove some usages of ParseFieldMatcher in favour of using ParseField directly
Relates to #19552
Relates to #22130
2016-12-31 09:24:44 +01:00
Igor Motov f985638bba Add a generic way of checking version before serializing custom cluster object
In #22313 we added a check that prevents the SnapshotDeletionsInProgress custom cluster state objects from being sent to older elasticsearch nodes. This commit makes this check generic and available to other cluster state custom objects if needed.
2016-12-30 14:27:09 -05:00
javanna 74acffaae9 fix compiler warning on access to static field using `this` 2016-12-30 18:57:47 +01:00
javanna df2acb3d9d Remove some more usages of ParseFieldMatcher in favour of using ParseField directly
Relates to #19552
Relates to #22130
2016-12-30 18:57:47 +01:00
javanna 6c54cbade4 Remove some more usages of ParseFieldMatcher in favour of using ParseField directly
Relates to #19552
Relates to #22130
2016-12-30 18:57:47 +01:00
javanna 45d010e874 Remove some usages of ParseFieldMatcher in favour of using ParseField directly
Relates to #19552
Relates to #22130
2016-12-30 18:57:47 +01:00
Adrien Grand 00de5b83bd The percentage of deleted docs needs to be strictly over 10% for deleted docs to be expunged. 2016-12-30 11:18:02 +01:00
Adrien Grand f1d7721932 Fix TermsAggregatorTests to not use LuceneTestCase.newSearcher since it needs a DirectoryReader. 2016-12-30 10:12:24 +01:00
Adrien Grand f89bb18a5d Dynamic `date` fields should use the `format` that was used to detect it is a date. (#22174)
Unless the dynamic templates define an explicit format in the mapping
definition: in that case the explicit mapping should have precedence.

Closes #9410
2016-12-30 09:48:24 +01:00
Adrien Grand 3f805d68cb Add the ability to set an analyzer on keyword fields. (#21919)
This adds a new `normalizer` property to `keyword` fields that pre-processes the
field value prior to indexing, but without altering the `_source`. Note that
only the normalization components that work on a per-character basis are
applied, so for instance stemming filters will be ignored while lowercasing or
ascii folding will be applied.

Closes #18064
2016-12-30 09:36:10 +01:00
Yannick Welsch 816e1c6cc4 Free shard resources when recovery reset is cancelled
Resetting a recovery consists of resetting the old recovery target and replacing it by a new recovery target object. This is done on the Cancellable threads of
the new recovery target. If the new recovery target is already cancelled before or while this happens, for example due to shard closing or recovery source
changing, we have to make sure that the old recovery target object frees all shard resources.

Relates to #22325
2016-12-29 17:05:47 +01:00
Yannick Welsch 6e6d9eb255 Use a fresh recovery id when retrying recoveries (#22325)
Recoveries are tracked on the target node using RecoveryTarget objects that are kept in a RecoveriesCollection. Each recovery has a unique id that is communicated from the recovery target to the source so that it can call back to the target and execute actions using the right recovery context. In case of a network disconnect, recoveries are retried. At the moment, the same recovery id is reused for the restarted recovery. This can lead to confusion though if the disconnect is unilateral and the recovery source continues with the recovery process. If the target reuses the same recovery id while doing a second attempt, there might be two concurrent recoveries running on the source for the same target.

This commit changes the recovery retry process to use a fresh recovery id. It also waits for the first recovery attempt to be fully finished (all resources locally freed) to further prevent concurrent access to the shard. Finally, in case of primary relocation, it also fails a second recovery attempt if the first attempt moved past the finalization step, as the relocation source can then be moved to RELOCATED state and start indexing as primary into the target shard (see TransportReplicationAction). Resetting the target shard in this state could mean that indexing is halted until the recovery retry attempt is completed and could also destroy existing documents indexed and acknowledged before the reset.

Relates to #22043
2016-12-29 10:58:15 +01:00
Igor Motov ca90d9ea82 Remove PROTO-based custom cluster state components
Switches custom cluster state components from PROTO-based deserialization to named-object-based deserialization
2016-12-28 13:32:35 -05:00
Jim Ferenczi e7444f7d77 Fix scaled_float numeric type in aggregations (#22351)
`scaled_float` fields should be treated as DOUBLE in aggregations but currently they are treated as LONG.
This change fixes this issue and adds a simple integration test for it.

Fixes #22350
2016-12-27 09:23:22 +01:00
Adrien Grand 3cb164b22e Fix IndexShardTests.testDocStats. 2016-12-26 20:19:11 +01:00
Adrien Grand 2127db27a3 Add trace logging to CircuitBreakerServiceIT.testParentChecking. 2016-12-26 16:05:27 +01:00
Adrien Grand 2d81750a13 Make ESTestCase resilient to initialization errors. 2016-12-26 14:55:22 +01:00
Adrien Grand f80165c374 Fix LineLength issues. 2016-12-26 11:22:09 +01:00
Adrien Grand d89757b848 Fix mutate function to always actually modify the failure object. 2016-12-26 10:34:50 +01:00
Ali Beyad 1cb5dc42ff Updates SnapshotDeletionsInProgress version number introduced to 5.2.0 2016-12-25 19:28:01 -05:00
Ali Beyad 8261bd358a Synchronize snapshot deletions on the cluster state (#22313)
Before, snapshot/restore would synchronize all operations on the cluster
state except for deleting snapshots.  This meant that only one
snapshot/restore operation would be allowed in the cluster at any given
time, except for deletions - there could be two or more snapshot
deletions running at the same time, or a deletion could be running,
unbeknowest to the rest of the cluster, and thus a snapshot or restore
would be allowed at the same time as the snapshot deletion was still in
progress.  This could cause any number of synchronization issues,
including the situation where a snapshot that was deleted could reappear
in the index-N file, even though its data was no longer present in the
repository.

This commit introduces a new custom type to the cluster state to
represent deletions in progress.  Now, another deletion cannot start if
a deletion is currently in progress.  Similarly, a snapshot or restore
cannot be started if a deletion is currently in progress.  In each case,
if attempting to run another snapshot/restore operation while a deletion
is in progress, a ConcurrentSnapshotExecutionException will be thrown.
This is the same exception thrown if trying to snapshot while another
snapshot is in progress, or restore while a snapshot is in progress.
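
A hedged sketch of the guard described above; the method, message, and wiring are illustrative, not the actual SnapshotsService code:

```
// Illustrative check run before starting a snapshot, restore, or another deletion.
void ensureNoDeletionInProgress(ClusterState state, String repository, String snapshotName) {
    SnapshotDeletionsInProgress deletions = state.custom(SnapshotDeletionsInProgress.TYPE);
    if (deletions != null && deletions.hasDeletionsInProgress()) {
        throw new ConcurrentSnapshotExecutionException(repository, snapshotName,
                "cannot start the operation while a snapshot deletion is in progress");
    }
}
```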

Closes #19957
2016-12-25 19:00:20 -05:00
Jason Tedor d5c18bf5c9 Fix doc stats test when deleting all docs
This commit fixes an issue with IndexShardTests#testDocStats when the
number of deleted docs is equal to the number of docs. In this case,
Lucene will remove the underlying segment, tripping an assertion on the
number of deleted docs.
2016-12-23 15:20:42 -05:00
Ryan Ernst d4288cce79 Fix getBytes invocation to use explicit charset 2016-12-23 10:57:02 -08:00
Jason Tedor 6deb5283db Fix delete op serialization format constant
The delete op serialization format constant for 5.x was off by one. This
commit fixes this, and cleans up the handling of these formats.
2016-12-23 11:50:43 -05:00
Jason Tedor 2713549533 Use reader for doc stats
Today we try to pull stats from the index writer, but we do not get a
consistent view of stats. Under heavy indexing, this inconsistency can
be very skewed indeed. In particular, it can lead to the number of
deleted docs being reported as negative and this leads to serialization
issues. Instead, we should provide a consistent view of the stats by
using an index reader.
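
What "using an index reader" means in practice, sketched with plain Lucene; the wiring into Elasticsearch's DocsStats is omitted:

```
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;

class DocStatsFromReader {
    // A point-in-time reader gives a consistent view: numDeletedDocs() can never be
    // negative, unlike stats derived from the writer under heavy concurrent indexing.
    static long[] docStats(Directory directory) throws IOException {
        try (DirectoryReader reader = DirectoryReader.open(directory)) {
            return new long[] { reader.numDocs(), reader.numDeletedDocs() };
        }
    }
}
```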

Relates #22317
2016-12-23 09:44:56 -05:00
Boaz Leskes c2baa5f213 TransportService should capture listener before spawning background notification task
Not doing this made it difficult to establish a happens-before relationship between connecting to a node and adding a listener, causing test code like this to fail sporadically:

```
        // connection to reuse
        handleA.transportService.connectToNode(handleB.node);

        // install a listener to check that no new connections are made
        handleA.transportService.addConnectionListener(new TransportConnectionListener() {
            @Override
            public void onConnectionOpened(DiscoveryNode node) {
                fail("should not open any connections. got [" + node + "]");
            }
        });

```

relates to #22277
2016-12-23 13:55:12 +01:00
Yannick Welsch baea17b53f Separate cluster update tasks that are published from those that are not (#21912)
This commit factors out the cluster state update tasks that are published (ClusterStateUpdateTask) from those that are not (LocalClusterUpdateTask), serving as a basis for future refactorings to separate the publishing mechanism out of ClusterService.
2016-12-23 12:23:52 +01:00
Boaz Leskes eb7450bcdc UnicastZenPing add trace logging on connection opening 2016-12-23 09:15:11 +01:00
Jason Tedor faaa671fb6 Enable assertions in integration tests
When starting a standalone cluster, we do not enable assertions. This is
problematic because it means that we miss opportunities to catch
bugs. This commit enables assertions for standalone integration tests,
and fixes a couple of bugs that were uncovered by enabling them.

Relates #22334
2016-12-22 20:08:02 -05:00
Ryan Ernst fb690ef748 Settings: Add infrastructure for elasticsearch keystore
This change is the first towards providing the ability to store
sensitive settings in elasticsearch. It adds the
`elasticsearch-keystore` tool, which allows managing a java keystore.
The keystore is loaded upon node startup in Elasticsearch, and used by
the Setting infrastructure when a setting is configured as secure.

There are a lot of caveats to this PR. The most important is that it only
provides the tool and setting infrastructure for secure strings. It does
not yet provide for keystore passwords, keypairs, certificates, or even
convert any existing string settings to secure string settings. Those
will all come in follow up PRs. But this PR was already too big, so this
at least gets a basic version of the infrastructure in.

There are two main things to look at. The first is the `SecureSetting` class,
which extends `Setting`, but removes the assumption that the raw value of the
setting is a string. SecureSetting provides, for now, a single
helper, `stringSetting()` to create a SecureSetting which will return a
SecureString (which is like String, but is closeable, so that the
underlying character array can be cleared). The second is the
`KeyStoreWrapper` class, which wraps the java `KeyStore` to provide a
simpler api (we do not need the entire keystore api) and also extend
the serialized format to add metadata needed for loading the keystore
with no assumptions about keystore type (so that we can change this in
the future) as well as whether the keystore has a password (so that we
can know whether prompting is necessary when we add support for keystore
passwords).
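
A hypothetical usage sketch based on the description above; the `stringSetting()` helper name comes from the commit message, and the exact signatures, setting name, and surrounding variables are assumptions:

```
// Hypothetical: a consumer declares a secure string setting via the new helper.
Setting<SecureString> API_TOKEN = SecureSetting.stringSetting("myplugin.api_token");

// SecureString is closeable so the underlying char[] can be cleared after use.
try (SecureString token = API_TOKEN.get(settings)) {   // `settings` is the node settings
    authenticate(token); // hypothetical consumer of the secret
}
```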
2016-12-22 16:28:34 -08:00
Nik Everett 55099df1cb Support negative numbers in writeVLong (#22314)
We don't *want* to use negative numbers with `writeVLong`,
so we throw an exception when we try. On the other
hand, unforeseen bugs might cause us to write negative numbers (some versions of Elasticsearch don't have the exception, only an assertion),
so this fixes `readVLong` so that, instead of reading a wrong
value and corrupting the stream, it reads the negative value.
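
A standalone sketch of the write-side behaviour described above (not the StreamOutput code itself):

```
import java.io.DataOutput;
import java.io.IOException;

class VLongWriter {
    // Standard base-128 varint encoding of a non-negative long; negative input is rejected
    // because a negative value takes ten bytes and older readers may decode it incorrectly.
    static void writeVLong(DataOutput out, long value) throws IOException {
        if (value < 0) {
            throw new IllegalArgumentException("negative values are not supported by writeVLong: " + value);
        }
        while ((value & ~0x7FL) != 0) {
            out.writeByte((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.writeByte((int) value);
    }
}
```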
2016-12-22 13:02:39 -05:00
Boaz Leskes 13c5881f3e UnicastZenPing's PingingRound should prevent opening connections after being closed
This may cause them to leak. Provision for this was made in #22277, but sadly a crucial ensureOpen call was forgotten
2016-12-22 18:45:44 +01:00
Boaz Leskes 7d0dbd2082 add trace logging to UnicastZenPingTests.testResolveReuseExistingNodeConnections 2016-12-22 18:10:15 +01:00
Tal Levy 6d7261c4d3 Adds ingest processor headers to exception for unknown processor. (#22315)
Optimistically check for the `tag` of an unknown processor, to better track which
processor declaration is to blame in an invalid configuration.

Closes #21429.
2016-12-22 08:24:00 -08:00
Nik Everett f5f2149ff2 Remove much ceremony from parsing client yaml test suites (#22311)
* Remove a checked exception, replacing it with `ParsingException`.
* Remove all Parser classes for the yaml sections, replacing them with static methods.
* Remove `ClientYamlTestFragmentParser`. Isn't used any more.
* Remove `ClientYamlTestSuiteParseContext`, replacing it with some static utility methods.

I did not rewrite the parsers using `ObjectParser` because I don't think it is worth it right now.
2016-12-22 11:00:34 -05:00
Stéphane Campinas e1b8528ab8 Support numeric bounds with decimal parts for long/integer/short/byte datatypes (#21972)
Close #21600
2016-12-22 15:20:15 +01:00
Martijn van Groningen b9a90eca20 inner hits: Don't inline inner hits if the query they are inlined into can't resolve mappings and ignore_unmapped has been set to true
Closes #21620
2016-12-22 14:51:33 +01:00