Commit Graph

2923 Commits

Author SHA1 Message Date
David Turner 073b13f5b0 Add docs for cluster.remote.*.proxy setting (#40281)
In #33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since #32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.
2019-03-28 12:11:24 +00:00
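A minimal sketch of how this setting might be applied via the cluster settings API; the remote alias `my_remote` and the addresses are hypothetical:
```
PUT /_cluster/settings
{
  "persistent": {
    "cluster.remote.my_remote.seeds": ["10.0.0.1:9300"],
    "cluster.remote.my_remote.proxy": "proxy.example.com:9300"
  }
}
```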
jimczi 8775e37d03 Fix SearchResponseMerger#testMergeSearchHits
This commit fixes an edge case in tests where search hits are empty
after the merge but some shards returned hits. This can happen if
the total number of merged hits is less than the provided `from`.

Closes #40553
2019-03-28 09:57:21 +01:00
Adrien Grand 65a35c985c
Remove type from VersionConflictEngineException. (#37490) (#40514)
It initially mentioned the type in the exception because the type used to be
required to uniquely identify a document. This is not necessary anymore given
that indices have at most one type.
2019-03-28 09:32:09 +01:00
Adrien Grand 2326a3dccb
Remove String interning from `o.e.index.Index`. (#40350) (#40517)
`Index` interns its name and uuid. My guess is that the main goal is to avoid
having duplicate strings in the representation of the cluster state. However
I doubt it helps much given that we have many other objects in the cluster state
that we don't try to reuse, and interning has some cost. When looking into
 #40263 my profiler pointed to string interning because of the `Index` object
that is created in `QueryShardContext` as one of the bottlenecks of the
`can_match` phase.
2019-03-28 09:31:42 +01:00
Andy Bristol 23395a9b9f
search as you type fieldmapper (#35600)
Adds the search_as_you_type field type that acts like a text field optimized
for as-you-type search completion. It creates a couple of subfields that analyze
the indexed terms as shingles, against which full terms are queried, and a
prefix subfield that analyzes terms as the largest shingle size used plus
edge-ngrams, against which partial terms are queried.

Adds a match_bool_prefix query type that creates a boolean clause of a term
query for each term except the last, for which a boolean clause with a prefix
query is created.

The match_bool_prefix query is the recommended way of querying a search-as-you-type
field; it boils down to term queries for each shingle of the input
text on the appropriate shingle field, and the final (possibly partial) term
as a term query on the prefix field. This field type also supports phrase and
phrase prefix queries, however.
2019-03-27 13:29:13 -07:00
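A minimal sketch of the new field type and the recommended query, assuming a hypothetical index `my_index` and field `title`:
```
PUT /my_index
{
  "mappings": {
    "properties": {
      "title": { "type": "search_as_you_type" }
    }
  }
}

GET /my_index/_search
{
  "query": {
    "match_bool_prefix": {
      "title": "quick brown f"
    }
  }
}
```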
Benjamin Trent 6563dc7ed9
Muting test for #40553 (#40555) 2019-03-27 14:52:12 -05:00
Tim Brooks ab44f5fd5d
Add InboundHandler for inbound message handling (#40430)
This commit adds an InboundHandler to handle inbound message processing.
With this commit, this code is moved out of the TcpTransport.
Additionally, finer-grained unit tests are added to ensure that the
inbound processing works as expected.
2019-03-27 12:33:26 -06:00
Yannick Welsch 64b31f44af No mapper service and index caches for replicated closed indices (#40423)
Replicated closed indices can't be indexed into or searched, and therefore don't need a shard with
full indexing and search capabilities allocated. We can save on a lot of heap memory for those
indices by not allocating a mapper service and caching infrastructure (which preallocates a constant
amount per instance). Before this change, a 1GB ES instance could host 250 replicated closed
metricbeat indices (each index with one shard). After this change, the same instance can host 7300
replicated closed metricbeat indices (not that this would be a recommended configuration). Most
of the remaining memory is in the cluster state and the IndexSettings object.
2019-03-27 19:04:24 +01:00
Yannick Welsch 8f7c5732f1 Use default discovery implementation for single-node discovery (#40036)
Switches "discovery.type: single-node" from using a separate implementation for single-node discovery to using the existing standard discovery implementation, with two small adaptions:

-  auto-bootstrapping, but requiring initial_master_nodes not to be set.
- not actively pinging other nodes using the Peerfinder
- not allowing other nodes to join its single-node cluster (if they have e.g. been set up using regular discovery and connect to the single-disco node).
2019-03-27 19:04:24 +01:00
Tim Brooks 3860ddd1a4
Move outbound message handling to OutboundHandler (#40336)
Currently there are some components of message serialization and sending
that still occur in TcpTransport. This commit makes it possible to
send a message without the TcpTransport by moving all of the remaining
application logic to the OutboundHandler. Additionally, it adds unit
tests to ensure that this logic works as expected.
2019-03-27 11:47:36 -06:00
David Turner 707d40ce06 Stabilise testStaleMasterNotHijackingMajority (#40253)
This test inadvertently asserts that the election that occurs after a master
failure is clean. However, messy elections are a fact of life so we should not
fail on a messy election.

This change moves this test away from an `AbstractDisruptionTestCase` since it
does not need the fault detector to be so enthusiastic, and weakens the
assertions to merely say that we ignore states published by the old master
without saying anything about the cleanliness of the election.

Closes #36556
2019-03-27 16:00:14 +00:00
Tim Brooks 760cfffe4b
Move TransportMessageListener to TransportService (#40474)
Currently the TransportMessageListener is applied and used in the
Transport class. However, local requests and responses never make it to
this class. This PR moves the listener add/remove methods to the
TransportService. After this change the Transport can only have one
listener set with it. This one listener is the TransportService, which
will then propagate the events to the external listeners.

Additionally this commit back ports #40237

Remove Tracer from MockTransportService
2019-03-27 09:24:20 -06:00
Przemyslaw Gomulka 65f01277ed
Parse composite patterns using ClassicFormat.parseObject backport(#40100) (#40501)
Java-time fails parsing composite patterns when the first pattern matches only a prefix of the input. It expects patterns in longest-to-shortest order. Because of this, constructing just one DateTimeFormatter with appendOptional is not sufficient. Parsers have to be iterated, and if parsing fails, the next one in order should be used. In order not to degrade performance, parsing should not throw exceptions on failure. Format.parseObject was used as it only returns null when parsing fails and allows checking whether the full input was read.
 
closes #39916
backport #40100
2019-03-27 13:51:44 +01:00
Yannick Welsch b4b17e16e0 Remove TransportSingleItemBulkWriteAction as replication action (#40424)
The implementation of TransportIndexAction and TransportDeleteAction as
TransportReplicationAction existed for interoperability with older 5.x nodes, as these older nodes
coordinated single index/delete operations as replication requests. This BWC layer is no longer needed in 7.x,
where these single actions are now mapped to bulk requests. Completely removing the deprecated
transport actions is not possible yet if we want to keep BWC with a 6.x transport client. The best
way here is to wait for the transport client to go away and then just remove the actions.
2019-03-27 13:16:58 +01:00
Jim Ferenczi fe05a4d511 Fix random failures in SearchResponseMerger#testMergeSearchHits (#40223)
This commit fixes the expectation in the test when the search hits are empty.

Closes #40214
2019-03-27 11:17:10 +01:00
alex101101 fb8ad0cf30 Add a soft limit to the field name length (#40309)
Adds an optional limit to the length of field names, and throws an IllegalArgumentException if the limit is breached.
Closes #33651
2019-03-26 17:58:32 +01:00
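A sketch of enabling the limit at index creation; the setting name `index.mapping.field_name_length.limit` and the value are assumptions for illustration, and the index name is hypothetical:
```
PUT /my_index
{
  "settings": {
    "index.mapping.field_name_length.limit": 50
  }
}
```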
Daniel Mitterdorfer f2b5960f90 Add version 6.7.1 2019-03-26 17:38:14 +01:00
Yannick Welsch bf7b167bba Remove timeout task after completing cluster state publication (#40411)
Each cluster state publication schedules a cancellation task with the provided publication timeout
(30s by default). This scheduled cancellation keeps a reference to the publication, and therefore the
full cluster state that was published. In case of frequently updating a large cluster state, this results
in a large number of cancellation tasks keeping references to all previously published cluster states.
2019-03-26 17:13:57 +01:00
Henning Andersen bf444b9f02 Store Pending Deletions Fix (#40345)
FilterDirectory.getPendingDeletions does not delegate, fixed
temporarily by overriding in StoreDirectory.

This in turn caused duplicate file name use after a trimUnsafeCommits
had been done, since a new IndexWriter would not consider the pending
deletes in IndexFileDeleter. This should only happen on Windows (AFAIK).

Reenabled doing index updates for all tests using
IndexShardTests.indexOnReplicaWithGaps (which could fail due to above
when using mocked WindowsFS).

Added getPendingDeletions delegation to all elasticsearch
FilterDirectory subclasses that were not trivial test-only overrides to
minimize the risk of hitting this issue in another case.
2019-03-26 15:30:44 +01:00
Alan Woodward 12634850d6 IntervalQueryBuilderTests#testNonIndexedFields test fix (#40418)
This test checks that interval queries constructed against a field with no indexed
positions will throw exceptions. It uses a randomly-built IntervalsSourceProvider
against a fixed set of fields; however, the random source builder can occasionally
provide a source with a fixed field, meaning that even if the top-level query asks
for a set of intervals over a non-indexed field, the source will delegate to another
field, and no exception will be thrown.

This commit changes the test to always use a simple Match provider.

Fixes #40436
2019-03-26 08:33:42 +00:00
Nhat Nguyen 495dc11c9c Mute testPendingRefreshWithIntervalChange
Tracked at #39565
2019-03-25 11:47:08 -04:00
Armin Braun 3968d46a17
Remove Redundant Request Wrappers from RepositoryService (#40192) (#40404) 2019-03-25 16:36:02 +01:00
Armin Braun dc5ff0fffc
Log Warning on Failed Blob Deletes in BlobStoreRepository (#40188) (#40340)
* Log Warning on Failed Blob Deletes in BlobStoreRepository
* We should not just debug log these spots, they all can and will lead to leaked files when snapshot deletion fails
2019-03-25 08:52:09 +01:00
Nhat Nguyen b9f96a8e1f
Expose external refreshes through the stats API (#38643)
Right now, the stats API only provides refresh metrics regarding
internal refreshes. This isn't very useful and is somewhat misleading for
cluster administrators since the internal refreshes are not indicative
of documents being available for search.

In this PR I added a new metric for collecting external refreshes as
they occur and exposing them through the stats API. Now, calling an
endpoint for stats will yield external refresh metrics as well.

Relates #36712
2019-03-24 22:21:00 -04:00
Armin Braun 13d76239a0
Use Netty ByteBuf Bulk Operations for Faster Deserialization (#40158) (#40339)
* Use bulk methods to read numbers faster from byte buffers
2019-03-24 19:08:51 +01:00
Jason Tedor 10bbb082a4
Only run retention lease actions on active primary (#40386)
In some cases, a request to perform a retention lease action can arrive
on a primary shard before it is active. In this case, the primary shard
would not yet be in primary mode, tripping an assertion in the
replication tracker. Instead, we should not attempt to perform such
actions on an initializing shard. This commit addresses this by not
returning the primary shard in the single shard iterator if the primary
shard is not yet active.
2019-03-23 09:39:39 -04:00
Zachary Tong 78f737dad3
Map value field to double in MovavgIT (#40230)
We were accidentally not mapping the index, which meant dynamic mapping
was choosing floats for the values. This led to enough loss of precision
for the aggregated values to differ slightly from the test doubles,
which accumulated into large differences in the Holt output.

This test fix adds an explicit mapping.
2019-03-21 14:03:14 -04:00
Jason Tedor 1e6941b138
Reduce retention lease sync intervals (#40302)
This commit adjusts the frequency with which CCR renews retention leases
and with which primaries sync retention leases to replicas. This helps
Lucene reclaim soft-deleted documents more aggressively, which we have
found in some use-cases can help improve performance, and either way
will help keep disk space under more control.
2019-03-21 07:37:44 -04:00
Alan Woodward 83d2870308 Add `use_field` option to intervals query (#40157)
This is the equivalent of the `field_masking_span` query, allowing users to
merge intervals from multiple fields - for example, to search for stemmed tokens
near unstemmed tokens.
2019-03-20 16:26:04 +00:00
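A sketch of the option in an intervals query, assuming a hypothetical `body` field with a stemmed subfield `body.stemmed`; the second match pulls its intervals from `body.stemmed` rather than from the top-level field:
```
GET /my_index/_search
{
  "query": {
    "intervals": {
      "body": {
        "all_of": {
          "intervals": [
            { "match": { "query": "running" } },
            { "match": { "query": "jump", "use_field": "body.stemmed" } }
          ],
          "max_gaps": 5
        }
      }
    }
  }
}
```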
Like 6f64267626 Make setting index.translog.sync_interval be dynamic (#37382)
Currently, we cannot update the index setting index.translog.sync_interval if the index is open, because it's
not dynamic and can currently only be updated on a closed index.

Closes #32763
2019-03-20 17:12:45 +01:00
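With the setting now dynamic, a sketch of updating it on an open index (index name and value are hypothetical):
```
PUT /my_index/_settings
{
  "index.translog.sync_interval": "10s"
}
```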
Yannick Welsch a5fb7fb17c Fix snapshot restore logging on fresh restore (#40252)
A recent refactoring (#37130) where imports got mixed up (changing Lucene's
IndexNotFoundException to Elasticsearch's IndexNotFoundException) led to many warnings being
logged in case of restoring a fresh snapshot.
2019-03-20 16:51:44 +01:00
Jim Ferenczi 3400483af4
Add date and date_nanos conversion to the numeric_type sort option (#40199) (#40224)
This change adds an option to convert a `date` field to nanosecond resolution
and a `date_nanos` field to millisecond resolution when sorting.
The resolution of the sort can be set using the `numeric_type` option of the
field sort builder. The conversion is done at the shard level and is restricted
to dates from 1970 to 2262 for the nanosecond resolution in order to avoid
numeric overflow.
2019-03-20 16:50:28 +01:00
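A sketch of sorting across two indices where the timestamp is mapped as `date` in one and `date_nanos` in the other, converting everything to nanosecond resolution; index and field names are hypothetical:
```
GET /index_millis,index_nanos/_search
{
  "sort": [
    {
      "timestamp": {
        "order": "asc",
        "numeric_type": "date_nanos"
      }
    }
  ]
}
```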
Nhat Nguyen efaf95628b Use separate translog dir in testDeleteWithFatalError
This test currently opens a new engine but shares the same translog
directory as the previously opened engine.
2019-03-20 10:22:27 -04:00
Mayya Sharipova 49a7c6e0e8
Expose proximity boosting (#39385) (#40251)
Expose DistanceFeatureQuery for geo, date and date_nanos types

Closes #33382
2019-03-20 09:24:41 -04:00
Henning Andersen 4c2a8638ca Cascading primary failure led to MSU too low (#40249)
If a replica were first reset due to one primary failover and then
promoted (before resync completes), its MSU would not include changes
since the global checkpoint, leading to errors during translog replay.

Fixed by re-initializing MSU before restoring local history.
2019-03-20 14:00:43 +01:00
Simon Willnauer 235f57989f Return cached segments stats if `include_unloaded_segments` is true (#39698)
Today we don't return segments stats for closed indices which makes it
hard to tell how much memory such an index would require. With this change
we return the statistics if requested by setting `include_unloaded_segments` to
true on the REST request.

Relates to #39512
2019-03-20 12:08:41 +01:00
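A sketch of requesting segment stats that include segments which are not currently loaded (e.g. for a frozen index); the index name is hypothetical:
```
GET /my_frozen_index/_stats/segments?include_unloaded_segments=true
```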
Jason Tedor 9ce740a2eb
Modify casing in JVM home log message
This makes the log message consistent with the following line that shows
the JVM arguments.
2019-03-20 00:06:16 -04:00
Zachary Tong 69f5869707 Mute SearchResponseMergerTests#testMergeSearchHits
Tracking issue: https://github.com/elastic/elasticsearch/issues/40214
2019-03-19 13:40:38 -04:00
David Turner 33d8738c68 Fix RareClusterStateIT on MacOS (#40203)
Today RareClusterStateIT#testAssignmentWithJustAddedNodes fails on my Mac
because it waits for the default connection timeout of 30 seconds to connect to
a fake node with IP address 0.0.0.0. This connection attempt fails much more
quickly on Linux so the test passes.

This commit fixes this by reducing the connection timeout for this test.
2019-03-19 17:33:21 +00:00
Nhat Nguyen a13b4bc8c5 Always fail engine if delete operation fails (#40117)
Unlike index operations, which can fail at the document level due to
analyzing errors, delete operations should never fail at the document
level whether soft deletes are enabled or not. With this change, we will
always fail the engine if we fail to apply a delete operation to Lucene.

Closes #33256
2019-03-19 13:09:23 -04:00
Luca Cavanna d14e79e849 Serialize top-level pipeline aggs as part of InternalAggregations (#40177)
We currently convert pipeline aggregators to their corresponding
InternalAggregation instance as part of the final reduction phase.
They arrive at the coordinating node as part of QuerySearchResult
objects from the shards and, although we may incrementally reduce
aggs (hence we may have some non-final reduces and the final
one later), all the reduction phases happen on the same node.

With CCS minimizing roundtrips though, each cluster performs its
own non-final reduction, and then serializes the results back to
the CCS coordinating node which will perform the final reduction.
This breaks the assumptions made up until now around reductions
happening all on the same node.

With #40101 we have made sure that top-level pipeline aggs are not
reduced as part of the non-final reduction. The next step is to make
sure that they don't get lost, meaning that each coordinating node
needs to send them back to the CCS coordinating node as part of
the top-level `InternalAggregations` object.

Closes #40059
2019-03-19 14:43:39 +01:00
Luca Cavanna 803ec46331 Skip sibling pipeline aggregators reduction during non-final reduce (#40101)
Today a coordinating node forces a final reduction of sibling pipeline aggregators whenever reducing aggs, unless it is reducing aggs incrementally. This works well for incremental reduction of aggs, but breaks CCS when minimizing roundtrips as each cluster ends up reducing its own pipeline aggregators locally while that should only be done by the CCS coordinating node later. This causes issues as after their reduction,  pipeline aggs cannot be further reduced, which is what happens with CCS causing errors like "java.lang.UnsupportedOperationException: Not supported" being returned.

Each coordinating node should rather honour the reduce context flag that
indicates whether we are executing a final reduce or not. If not, it should leave the sibling pipeline aggregations alone.

Note that this bug affects only pipeline aggs that don't have a parent in
the aggs tree, while all the others work well.

Relates to #40059 but does not fix it yet, as the CCS coordinating node also needs to be adapted to recreate sibling pipeline aggregators from the request.
2019-03-19 14:43:39 +01:00
Luca Cavanna 83f12a3d9c CCS: skip empty search hits when minimizing round-trips (#40098)
When minimizing round-trips, each cluster returns its own independent
search response. In case sort by field and/or field collapsing were
requested, when one cluster has no results to return, the information
about the field that sorting was based on (SortField array) as well as
the field (and the values) that collapsing was performed on are missing
in the search response. That causes problems as we can't build the
proper `TopDocs` instance which would need to be either `TopFieldDocs`
or `CollapseTopFieldDocs`. The merge routine expects that all the top
docs are of the same exact type which can't be guaranteed. Given that
the problematic results are empty, hence have no impact on the final
results, we can simply skip them.

Relates to #32125
Closes #40067
2019-03-19 14:43:39 +01:00
Luca Cavanna 9c38fa6468 [TEST] Update TransportSearchActionTests#testShouldMinimizeRoundtrips
Relates to #40044

Closes #40051
2019-03-19 14:43:38 +01:00
Luca Cavanna 07bfb4c7f7 CCS: Disable minimizing round-trips when dfs is requested (#40044)
When using DFS_QUERY_THEN_FETCH search type, the dfs phase is run and
its results are used in the query phase to make scoring accurate.
When using CCS, depending on whether the DFS phase runs in the CCS
coordinating node (like if all shards were local) or in each remote
cluster (when minimizing round-trips), scoring will differ.

This commit disables minimizing round-trips whenever DFS is requested,
as it is not currently  possible to ensure that scoring is accurate in
that case.

Relates to #32125
2019-03-19 14:43:38 +01:00
Nhat Nguyen 8dc6862b17 Unmute and trace testPendingRefreshWithIntervalChange
Tracked at #39565
2019-03-19 09:07:54 -04:00
Henning Andersen dde41cc2dd Node repurpose tool (#39403)
When a node is repurposed to master/no-data or no-master/no-data, v7.x
will not start (see #37748 and #37347). The `elasticsearch-node repurpose`
tool can fix this by cleaning up the problematic data.
2019-03-19 11:52:02 +01:00
Dimitris Athanasiou 95f660d577
Mute NoMasterNodeIT.testNoMasterActionsWriteMasterBlock test (#39689)
Relates #39688
2019-03-18 15:04:26 -06:00
Henning Andersen 0b214c1bfb Linearizability checker memory reduction (#40149)
The cache used in linearizability checker now uses approximately 6x less
memory by changing the cache from a set of (bits, state) tuples into a
map from bits -> { state }.

Each combination of states is kept once only, building on the
assumption that the number of state permutations is small compared to
the number of bits permutations. For those histories that are difficult
to check we will have many bits combinations that use the same state
permutations.

We now end up using approximately 15 bytes per entry compared to 101
bytes before, i.e. a 6x improvement, allowing us to linearizability-check
significantly longer histories.

Re-enabled the linearizability checker in CoordinatorTests, hoping the above
ensures we no longer run out of memory.

Resolves #39437
2019-03-18 21:16:59 +01:00
Nhat Nguyen 38e9522218 Remove wait for cluster state step in peer recovery (#40004)
We introduced WAIT_CLUSTERSTATE action in #19287 (5.0), but then stopped
using it since #25692 (6.0). This change removes that action and related
code in 7.x and 8.0.

Relates #19287
Relates #25692
2019-03-18 15:17:21 -04:00
Nhat Nguyen d720a64b9e Ensure sendBatch not called recursively (#39988)
This PR introduces AsyncRecoveryTarget which executes remote calls of
peer recovery asynchronously. In this change, we also add a new
assertion to ensure that method sendBatch, which sends a batch of
history operations in phase2, is never called recursively on the same
thread. This new assertion will also be used in method sendFileChunks.
2019-03-18 15:17:21 -04:00
Jim Ferenczi eb540125ea
Fix IndexSearcherWrapper visibility (#39071) (#40145)
This change adds a wrapper for IndexSearcher that makes IndexSearcher#search(List, Weight, Collector) visible to
sub-classes. The wrapper is used by the ContextIndexSearcher to call this protected method on a searcher created by a plugin.
This ensures that an override of the protected method in an IndexSearcherWrapper plugin is called when a search is executed.

Closes #30758
2019-03-18 11:33:54 +01:00
Jim Ferenczi 5b73a1bc7d
Add an option to force the numeric type of a field sort (#38095) (#40084)
This change adds an option to the `FieldSortBuilder` that allows transforming the type
of a numeric field into another. Possible values for this option are `long`, which transforms
the source field into an integer, and `double`, which transforms the source field into a floating point.
This new option is useful for cross-index search when the sort field is mapped differently on some
indices. For instance if a field is mapped as a floating point in one index and as an integer in another
it is possible to align the type for both indices using the `numeric_type` option:

```
{
  "sort": {
    "field": "my_field",
    "numeric_type": "double" <1>
  }
}
```

<1> Ensure that values for this field are transformed to a floating point if needed.
2019-03-18 09:32:45 +01:00
Albert Zaharovits 1b75ee0bd7 AuditTrail correctly handle ReplicatedWriteRequest (#39925)
This fix deduplicates index names in `BulkShardRequests` and only audits
the specific resolved index for every comprising `BulkItemRequest`.
2019-03-17 13:05:26 +02:00
Jason Tedor 86d1d03c37
Remove cluster state size (#40109)
This commit removes the cluster state size field from the cluster state
response, and drops the backwards compatibility layer added in 6.7.0 to
continue to support this field. As calculation of this field was
expensive and had dubious value, we have elected to remove this field.
2019-03-15 17:16:25 -04:00
Tim Brooks 0b50a670a4
Remove transport name from tcp channel (#40074)
Currently, we maintain a transport name ("mock-nio", "nio", "netty")
that is passed to a `TcpTransportChannel` when a request is received.
The value of this name is that it is associated with the task when we register a
task with the task manager. However, it is only possible to run ES with
one transport, so having an implementation-specific name is unnecessary.
This commit removes the name and replaces it with the generic
"transport".
2019-03-15 12:04:13 -06:00
Zachary Tong c72feedd74 Do not allow Sampler to allocate more than maxDoc size, better CB accounting (#39381)
The `sampler` agg creates a BestDocsDeferringCollector, which internally
initializes a priority queue of size `shardSize`.  This queue is
populated with empty `Object` sentinels, which is roughly 16b per
object.

Similarly, the Diversified samplers create DiversifiedTopDocsCollectors,
which internally track PQ slots with ScoreDocKeys, weighing in around
28kb.

If the user sets a very abusive `shard_size`, this could easily OOM
a node or cluster since these PQ are allocated up-front without
any checks.

This commit makes sure that when we create the collector, it
cannot be larger than the maxDoc so that we don't accidentally blow
up the node. We ensure the size is not greater than the overall
index maxDoc. A similar treatment is done for the `maxDocsPerValue`
parameter of the diversified samplers.

For good measure, this also adds in some CB accounting to try and track
memory usage.

Finally, a redundant array creation is removed to reduce a bit of
temporary memory.
2019-03-15 13:19:55 -04:00
Yannick Welsch c74111ff8e Reduce logging noise when stepping down as master before state recovery (#39950)
Reduces the logging noise from the state recovery component when there are duelling elections.

Relates to #32006
2019-03-15 17:24:03 +01:00
David Turner 0d152a54f8 Await all pending activity in testConnectAndDisconnect (#40037)
We call `ensureConnections()` to undo the effects of a disruption. However, it
is possible that one or more targets are currently CONNECTING and have been
since the disruption was active, and that the connection attempt was thwarted
by a concurrent disruption to the connection.  If so, we cannot simply add our
listener to the queue because it will be notified when this CONNECTING activity
completes even though it was disrupted. We must therefore wait for all the
current activity to finish and then go through and reconnect to any missing
nodes.

Closes #40030.
2019-03-15 08:08:57 +00:00
David Turner a323132503 Create retention leases file during recovery (#39359)
Today we load the shard history retention leases from disk whenever opening the
engine, and treat a missing file as an empty set of leases. However in some
cases this is inappropriate: we might be restoring from a snapshot (if the
target index already exists then there may be leases on disk) or
force-allocating a stale primary, and in neither case does it make sense to
restore the retention leases from disk.

With this change we write an empty retention leases file during recovery,
except for the following cases:

- During peer recovery the on-disk leases may be accurate and could be needed
  if the recovery target is made into a primary.

- During recovery from an existing store, as long as we are not
  force-allocating a stale primary.

Relates #37165
2019-03-15 07:49:49 +00:00
David Turner 8d2184b315
Fix up committed configuration on fake Zen1 nodes (#40065)
Today we test Zen1/Zen2 compatibility by running 7.x nodes with a "fake" Zen1
implementation. However this is not a truly faithful test because these nodes
do know how to properly deserialize a 7.x cluster state, voting configurations
and all, whereas a real Zen1 node is on 6.7 and ignores the coordination
metadata.

We only ever apply a cluster state that's been committed, which in Zen2
involves setting the last-committed configuration to equal the last-accepted
configuration. Zen1 knows nothing about this adjustment, so it is possible for
these to differ. This breaks the assertion that the cluster states are equal on
all nodes after integration tests.

This commit fixes this by implementing this adjustment in Zen1 before applying
a cluster state.

Fixes #40055.
2019-03-15 07:44:31 +00:00
Ioannis Kakavas 35aaf04c8c Handle empty input in AddStringKeyStoreCommand (#39490)
This change ensures that we do not make assumptions about the length
of the input that we can read from stdin. It still consumes only
one line, as the previous implementation did.
2019-03-15 09:38:22 +02:00
Tamara Braun e2b60c7141 Fix not Recognizing Disabled Object Mapper (#39862)
* Fixes not finding disabled object mapper when using dotted field name
notation
* Closes #39456
2019-03-14 10:57:00 -07:00
Ioannis Kakavas 8dc8fc507d Handle UTF-8 values in the keystore (#39496)
* Handle UTF8 values in the keystore

Our current implementation uses CharBuffer#array to get the chars
that were decoded from the UTF-8 bytes. The backing array of
CharBuffer is created in CharsetDecoder#decode and gets an initial
length that is the same as the length of the ByteBuffer it decodes,
hence the number of UTF-8 bytes.
This works fine for the first 128 characters where each one needs
one byte, but for the next UTF-8 characters (other Latin alphabets,
Greek, Cyrillic, etc.) where we need 2 to 4 bytes per character, this
backing char array has a larger size than the number of actual
chars this CharBuffer contains. Calling `array()` on it will return
a char array that can potentially have extra null chars, so the
SecureString we get from the KeystoreWrapper is not the same as the
one we entered.

This commit changes the behavior to use Arrays#copyOfRange to get
the necessary chars from the CharBuffer and adds a test with
random (maybe not printable) UTF-8 strings.
2019-03-14 18:03:50 +02:00
Jason Tedor 9181668edf
Stop returning cluster state size by default (#40016)
Computing the compressed size of the cluster state on every invocation
of cluster:monitor/state action is expensive, and the value of this
field is dubious anyway. Therefore we want to remove computing this
field. As a first step, we stop computing and return this field by
default. To avoid breaking users, we will give them a system property to
use to tide them over until the next major release when we will actually
remove this field. This comes with a deprecation warning too, and the
backport to the appropriate minor will also include a note in the
migration guide. There will be a follow-up to remove this field in the
next major version.
2019-03-14 08:57:55 -04:00
Yogesh Gaikwad 20e5994179
Mute failing tests in NodeConnectionsServiceTests (#40034) (#40035) 2019-03-14 19:40:15 +11:00
Przemyslaw Gomulka 8a314a36db
Change zone formatting for all printers backport(#39568) #39952
After the joda-to-java-time migration we were formatting zone ids with the zoneOrOffsetId method. This meant that when a date was provided with a ZoneRegion, for instance America/Edmonton, this zone identifier was appended instead of the zone formatted as +HH:MM.
This fix changes the format of the zone suffix for all printers and also always wraps a Temporal into a ZonedDateTime when formatting.

closes #38471
backport #39568
2019-03-13 18:27:37 +01:00
Jim Ferenczi 7a7658707a
Upgrade to Lucene release 8.0.0 (#39998)
This commit upgrades to the GA release of Lucene 8

Closes #39640
2019-03-13 18:11:50 +01:00
Tim Brooks 352f9f1f39
Remove sizing from `Recycler#obtain` (#39975)
Currently there is a method `Recycler#obtain(size)` that allows a size
parameter to be passed. However all implementations ignore this
parameter and just allocate a page size based on other settings. This
commit removes this method.
2019-03-13 09:32:31 -06:00
Andrey Ershov 9300826d8a Do not log unsuccessful join attempt each time (#39756)
When performing the test with 57 master-eligible nodes and one node
crash, we saw messy elections, with multiple nodes attempting to
become master.
JoinHelper logged 105 long log messages with lengthy stack
traces during one such election.
To address this, we decided to log these messages every time only at
debug level.
We will log the last unsuccessful join attempt (along with a timestamp),
if any, at WARN level if the cluster is failing to form.

(cherry picked from commit 17a148cc27b5ac6c2e04ef5ae344da05a8a90902)
2019-03-13 13:30:31 +01:00
Christoph Büscher b10dd3769c Add analysis modes to restrict token filter use contexts (#36103)
Currently token filter settings are treated as fixed once they are declared and
used in an analyzer. This is done to prevent changes in analyzers that are already
used actively to index documents, since changes to the analysis chain could
corrupt the index. However, it would be safe to allow updates to token
filters at search time ("search_analyzer"). This change introduces a new
property of token filters that allows marking them as only being usable at search
or at index time. Any analyzer that uses these token filters inherits that property
and can be rejected if it is used in other contexts. This is a first step towards
making specific token filters (e.g. the synonym filter) updateable.

Relates to #29051
2019-03-12 23:48:55 +01:00
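The search-time-only context mentioned here corresponds to a field's `search_analyzer`; a sketch of a mapping where a synonym filter is applied only at search time (all names are hypothetical):
```
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": { "type": "synonym", "synonyms": ["laptop, notebook"] }
      },
      "analyzer": {
        "my_search_time_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "my_synonyms"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "my_search_time_analyzer"
      }
    }
  }
}
```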
Andy Bristol e2b88bc706 add version 6.6.3 2019-03-12 13:21:36 -07:00
David Turner 049970af3e Only connect to new nodes on new cluster state (#39629)
Today, when applying new cluster state we attempt to connect to all of its
nodes as a blocking part of the application process. This is the right thing to
do with new nodes, and is a no-op on any already-connected nodes, but is
questionable on known nodes from which we are currently disconnected: there is
a risk that we are partitioned from these nodes so that any attempt to connect
to them will hang until it times out. This can dramatically slow down the
application of new cluster states which hinders the recovery of the cluster
during certain kinds of partition.

If nodes are disconnected from the master then it is likely that they are to be
removed as part of a subsequent cluster state update, so there's no need to try
and reconnect to them like this. Moreover there is no need to attempt to
reconnect to disconnected nodes as part of the cluster state application
process, because we periodically try and reconnect to any disconnected nodes,
and handle their disconnectedness reasonably gracefully in the meantime.

This commit alters this behaviour to avoid reconnecting to known nodes during
cluster state application.

Resolves #29025.
2019-03-12 19:26:13 +00:00
Przemyslaw Gomulka a29bba4ede
Migrate Streamable to writeable for index package backport(#37381) #39949
Migrate streamable classes from index package to Writeable and clean up access modifiers

Related to #34389
backport#37381
2019-03-12 12:10:36 +01:00
lzh3636 ad55e5b80d Log missing file exception when failing to read metadata snapshot (#32920)
Adds the exception to the logged output, which contains info about the file that's missing.
2019-03-12 10:41:44 +01:00
Nhat Nguyen ce5f09ab04 Enforce retention leases require soft deletes (#39922)
If a primary on 6.7 and a replica on 5.6 are running for more than 5 minutes
(the retention leases background sync interval), the retention leases
background sync will be triggered, and it will trip the 6.7 node due to the
illegal checkpoint value. We could fix the problem by making the returned
checkpoint depend on the node version. This PR, however, chooses to
enforce that retention leases require soft deletes, and makes the retention lease
sync a no-op if soft deletes are disabled instead.

Closes #39914
2019-03-11 22:37:47 -04:00
Nhat Nguyen bf814357ad Enable soft deletes in RetentionLeaseIT
Relates #39922
2019-03-11 22:37:42 -04:00
Armin Braun 9eb4614fa6
More Verbose Assertion in testSnapshotWithStuckNode (#39893) (#39928)
* The test failure in #39852 is caused by a file in the initial repository when there should not be any
  * It seems that on a normal consistent file system no left-over file should ever exist here after the validation finishes, and I can't reproduce or see any other path to a dangling file in the fresh repository
=> added a more verbose and strict assertion that will log which file is left over next time
* Relates #39852
2019-03-11 19:27:08 +01:00
Jake Landis b0b0f66669
Remove types from internal monitoring templates and bump to api 7 (#39888) (#39926)
This commit removes the "doc" type from monitoring internal indexes.
The template still carries the "_doc" type since that is needed for
the internal representation.

This change impacts the following templates:
monitoring-alerts.json
monitoring-beats.json
monitoring-es.json
monitoring-kibana.json
monitoring-logstash.json

As part of the required changes, the system_api_version has been
bumped from "6" to "7" and support for version "2" has been dropped.

A new empty pipeline is now introduced for the version "7", and
the formerly empty "6" pipeline will now remove the type and re-direct
the request to the "7" index.

Additionally, due to a difference between the internal representation
(which requires the inclusion of the "_doc" type) and the external representation
(which requires the exclusion of any type), a helper method is introduced
to help convert internal to external representation, and used by the
monitoring HTTP template exporter.

Relates #38637
2019-03-11 13:17:27 -05:00
Yannick Welsch 4f941c6963 Do not swallow exceptions in TimedRunnable (#39856)
Executors of type fixed_auto_queue_size (i.e. search / search_throttled) wrap runnables into
TimedRunnable, which is an AbstractRunnable. This is dangerous as it might silently swallow
exceptions, and possibly miss calling a response listener. While this has not triggered any failures in
the tests I have run so far, it might help uncover future problems.

Follow-up to #36137
2019-03-11 19:03:12 +01:00
Yannick Welsch 292eb8b001 Fix CoordinatorTests.testIncompatibleDiffResendsFullState (#39345)
This test started failing since decreasing the leader and follower check timeouts (#38298). The
reason is that the test was relying on the default publication timeout to come into effect before
leader / follower check timeouts, which is now not always true anymore.

Closes #38867
2019-03-11 19:03:10 +01:00
Tim Brooks dd77899278
Log send failure at debug level if channel closed (#39807)
Currently we log exceptions due to channel close at the debug level in
the normal exception handler, but we log all send failures due to
channel close at the warn level. This commit changes that to only log at
warn if the send failure is not due to the channel being closed. Additionally, it
adds the SSL engine closed exception as a channel close exception.
2019-03-11 10:33:02 -06:00
Yannick Welsch b7be724e50 Check term earlier in publication process (#39909)
in order to avoid tripping assertPreviousStateConsistency.

Closes #39314
2019-03-11 15:40:20 +01:00
David Turner 6e4f304f88 Synchronize pendingOutgoingJoins (#39900)
Today we use a ConcurrentHashSet to track the in-flight outgoing joins in the
`JoinHelper`. This is fine for adding and removing elements but not for the
emptiness test in `isJoinPending()` which might return false if one join
finishes just after another one starts, even though joins were pending
throughout.

As used today this is ok: it means the node was trying to join a master but
this join attempt just finished unsuccessfully, and causes it to (rightfully)
reject a `FollowerCheck` from the failed master. However this kind of API
inconsistency is trappy and there is no need to be clever here, so this change
replaces the set with a `synchronizedSet()`.
2019-03-11 12:13:21 +00:00
Ankit Jain 471aa6a16a Fixing 503 Service Unavailable errors during fetch phase (#39086)
When EsRejectedExecutionException gets thrown on the coordinating node while trying to fetch hits, the resulting exception will hold no shard failures, hence `503` is used as the response status code. In that case, `429` should be returned instead. Also, the status code should be taken from the cause if available whenever there are no shard failures instead of blindly returning `503` like we currently do.

Closes #38586
2019-03-11 10:13:55 +01:00
Adrien Grand b841de2e38
Don't emit deprecation warnings on calls to the monitoring bulk API. (#39805) (#39838)
The monitoring bulk API accepts the same format as the bulk API, yet its concept
of types is different from "mapping types" and the deprecation warning is only
emitted as a side-effect of this API reusing the parsing logic of bulk requests.

This commit extracts the parsing logic from `_bulk` into its own class with a
new flag that allows configuring whether usage of `_type` should emit a warning
or not. Support for payloads has been removed for simplicity since they were
unused.

@jakelandis has a separate change that removes this notion of type from the
monitoring bulk API that we are considering bringing to 8.0.
2019-03-11 07:58:28 +01:00
Adrien Grand 2bbef67770
Propagate exceptions in o.e.common.io.Streams. (#39042) (#39848)
This commit propagates some exceptions that were previously swallowed and also
makes sure that exceptions closing streams are either propagated if the try
block succeeded or added as suppressed exceptions otherwise.
2019-03-11 07:58:01 +01:00
Benjamin Trent 4da04616c9
[ML] refactoring lazy query and agg parsing (#39776) (#39881)
* [ML] refactoring lazy query and agg parsing

* Clean up and addressing PR comments

* removing unnecessary try/catch block

* removing bad call to logger

* removing unused import

* fixing bwc test failure due to serialization and config migrator test

* fixing style issues

* Adjusting DatafeedUpdate class serialization

* Adding todo for refactor in v8

* Making query non-optional so it does not write a boolean byte
2019-03-10 14:54:02 -05:00
Julie Tibshirani 8454cfc1b2 Move validation from FieldTypeLookup to MapperMergeValidator. (#39814)
This commit consolidates more mapping validation logic into the same class.
`FieldTypeLookup` is now a bit simpler, and has the sole responsibility of quickly
resolving field names to their types.

I have a broader refactor planned around mapping merge validation, but this
change should at least be a step in the right direction.
2019-03-08 18:05:21 -08:00
Nhat Nguyen 993182e426 Combine overriddenOps and skippedOps in translog (#39771)
These two stats are not important enough to be distinguishable.
This change combines them into a single stat.

Closes #33317
2019-03-08 16:28:50 -05:00
Julie Tibshirani be9c37fc76 Small simplifications to mapping validation. (#39777)
These simplifications to `MapperMergeValidator` are possible now that there is
always a single mapping definition.

* Remove the type argument in `validateMapperStructure`.
* Remove unnecessary checks against existing mappers.
2019-03-08 12:34:09 -08:00
Nhat Nguyen a0a91f74ff Treat TransportService stopped error as node is closing (#39800)
If TransportService is stopped before a shard-failure request is sent
but after the request is registered, TransportService will notify
ReplicationOperation a TransportException with an error message:
"transport stop, action: internal:cluster/shard/failure".

Relates #39584
2019-03-08 15:15:56 -05:00
Ryan Ernst 465343f12a
Bundle java in distributions (#38013)
* Bundle java in distributions

Setting up a jdk is currently a required external step when installing
elasticsearch. This is particularly problematic for the rpm/deb packages
as installing a jdk in the same package installation command does not
guarantee any order, so must be done in separate steps. Additionally,
JAVA_HOME must be set and often causes problems in selecting a correct
jdk when, for example, the system java is an older unsupported version.

This commit bundles platform specific openjdks into each distribution.
In addition to eliminating the issues above, it also presents future
possible improvements like using jlink to build jdk images only
containing modules that elasticsearch uses.

closes #31845
2019-03-08 11:04:18 -08:00
Gordon Brown e6b9262a31 Mute testOpenCloseApiWildcards (#39578) (#39579) 2019-03-08 15:18:16 +00:00
David Roberts aec2db78ea Mute RareClusterStateIT.testDelayedMappingPropagationOnReplica
Due to https://github.com/elastic/elasticsearch/issues/36813
2019-03-08 13:28:27 +00:00
David Roberts 366eef99a1 Mute SharedClusterSnapshotRestoreIT.testCloseOrDeleteIndexDuringSnapshot
Due to https://github.com/elastic/elasticsearch/issues/39828
2019-03-08 11:42:13 +00:00
David Turner 5d68143b18 Reformat elasticsearch-node messages (#39811)
Flows the warning messages emitted by the `elasticsearch-node` tool to a width
of 72 characters and tweaks the wording slightly.
2019-03-08 10:01:29 +00:00
Jake Landis 797d6b8a66
Execute ingest node pipeline before creating the index (#39607) (#39796)
Prior to this commit (and after 6.5.0), if an ingest node changes
the _index in a pipeline, the original target index would be created.
For daily indexes this could create an extra, empty index per day.

This commit changes the TransportBulkAction to execute the ingest node
pipeline before attempting to create the index. This ensures that the 
only index created is the original or one set by the ingest node pipeline. 
This was the execution order prior to 6.5.0 (#32786). 

The execution order was changed in 6.5 to better support default pipelines. 
Specifically the execution order was changed to be able to read the settings
from the index meta data. This commit also includes a change in logic such 
that if the target index does not exist when the ingest node pipeline runs, it
will now pull the default pipeline (if one exists) from the settings of the
best-matching index template.

Relates #32786
Relates #32758 
Closes #36545
2019-03-07 13:31:41 -06:00
Jason Tedor 0250d554b6
Introduce forget follower API (#39718)
This commit introduces the forget follower API. This API is needed in cases where
unfollowing a following index fails to remove the shard history retention leases
on the leader index. This can happen explicitly through user action, or
implicitly through an index managed by ILM. When this occurs, history will be
retained longer than necessary. While the retention lease will eventually
expire, it can be expensive to allow history to persist for that long, and also
prevent ILM from performing actions like shrink on the leader index. As such, we
introduce an API to allow for manual removal of the shard history retention
leases in this case.
2019-03-07 11:08:45 -05:00
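A sketch of invoking the API against a leader index; all names and the UUID are hypothetical placeholders:
```
POST /leader_index/_ccr/forget_follower
{
  "follower_cluster": "follower_cluster_name",
  "follower_index": "follower_index_name",
  "follower_index_uuid": "vYpnaWxWQCWjMEk-DTzVpg",
  "leader_remote_cluster": "leader_remote_cluster_name"
}
```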
Armin Braun 213cc6673c
Remove Dead Code in o.e.util package (#39717) (#39779)
* None of this code is used so we should delete it, we can always bring it back if needed
2019-03-07 08:31:46 +01:00
Nhat Nguyen b69affda6a Use unwrapped cause to determine if node is closing (#39723)
We need to unwrap and use the actual cause when determining if the node
with primary shard is shutting down because TransportService will throw
a TransportException wrapped in a SendRequestTransportException.

Relates #39584
2019-03-06 15:30:55 -05:00
Nhat Nguyen 1fe7cb594f Don’t ack if unable to remove failing replica (#39584)
Today when a replicated write operation fails to execute on a replica,
the primary will reach out to the master to fail that replica (and mark
it stale). We then won't ack that request until the master removes the
failing replica; otherwise, we will lose the acked operation if the
failed replica is still in the in-sync set. However, if a node with the
primary is shutting down, we might ack such a request even though we are
unable to send a shard-failure request to the master. This happens
because we ignore NodeClosedException which is triggered when the
ClusterService is being closed.

Closes #39467
2019-03-06 15:30:55 -05:00
markharwood 1873de5240
Bug fix for AnnotatedTextHighlighter - port of 39525 (#39749)
Bug fix for AnnotatedTextHighlighter - port of 39525

Relates to #39395
2019-03-06 19:02:04 +00:00
Yannick Welsch d094107592 Fix SharedClusterSnapshotRestoreIT
Relates to #39644
2019-03-06 17:51:23 +01:00
Yannick Welsch fef11f7efc Allow snapshotting replicated closed indices (#39644)
This adds the capability to snapshot replicated closed indices.

It also changes snapshot requests in v8.0.0 to automatically expand wildcards to closed indices and hence start snapshotting closed indices by default. For v7.1.0 and above, wildcards are by default only expanded to open indices, which can be changed by explicitly setting the expand_wildcards option either to all or closed.

Note that indices are always restored as open indices, even if they have been snapshotted as closed replicated indices.

Relates to #33888
2019-03-06 16:08:20 +01:00
Simon Willnauer e620fb2e4a Add option to force load term dict into memory (#39741)
Lucene added an optimization to leave the term dictionary on disk
for non-ID-like fields. This change happened very late in the release
process, so it's better to have an escape hatch if certain
use-cases are hurt by this optimization. This setting might be
removed in the future if it turns out to be unnecessary.
2019-03-06 15:29:04 +01:00
Christoph Büscher 6c503824c8 Fix occasional SearchServiceTests failure (#39697)
Currently SearchServiceTests.testCloseSearchContextOnRewriteException can fail
if a refresh happens while we test for the SearchPhaseExecutionException that is
thrown later in the test. The test takes the current Store#refCount and expects
it to be the same after the exception is thrown. If a refresh happens in that
interval however, the refCount will be different, causing the test to fail. This
can be provoked e.g. by running this section in a tight loop.
Switching off refresh for this test solves the issue.
2019-03-06 14:18:03 +01:00
Andrey Ershov 52fd102e23 Avoid serialising state if it was already serialised (#39179)
When preparing the state to send to other nodes, we're serializing it
for each node, despite using putIfAbsent.
This commit checks if the state was already serialized for this node
version before performing the potentially expensive computation.
The map is not used by multiple threads, so computeIfAbsent is not
needed (and could not be used here easily, because IOException could
be thrown).

(cherry picked from commit c99be63b43f5250f3cd220130df73c5e9e097459)
2019-03-06 11:54:13 +01:00
David Turner 295e39a8c8 Drop node if asymmetrically partitioned from master (#39598)
When a node is joining the cluster we ensure that it can send requests to the
master _at that time_. If it joins the cluster and _then_ loses the ability to
send requests to the master then it should be removed from the cluster. Today
this is not the case: the master can still receive responses to its follower
checks, and receives acknowledgements to cluster state publications, so has no
reason to remove the node.

This commit changes the handling of follower checks so that they fail if they
come from a master that the other node was following but which it now believes
to have failed.
2019-03-06 09:41:57 +00:00
David Turner 77dd711847 Tidy up GroupedActionListener (#39633)
Today the `GroupedActionListener` accepts a `defaults` parameter but all
callers pass an empty list. Also it is permitted to pass an empty group, but
this is trappy because the delegated listener is never called in that case.
This commit removes the `defaults` parameter and forbids an empty group.
2019-03-06 09:25:10 +00:00
Armin Braun aaecaf59a4
Optimize Bulk Message Parsing and Message Length Parsing (#39634) (#39730)
* Optimize Bulk Message Parsing and Message Length Parsing

* findNextMarker took almost 1ms per invocation during the PMC rally track
  * Fixed to be about an order of magnitude faster by using Netty's bulk `ByteBuf` search
* It is unnecessary to instantiate an object (the input stream wrapper) and throw it away, just to read the `int` length from the message bytes
  * Fixed by adding bulk `int` read to BytesReference
2019-03-06 08:13:15 +01:00
Jason Tedor 75a0d4f470
Rename retention lease setting (#39719)
This commit renames the retention lease setting
index.soft_deletes.retention.lease so that it is under the namespace
index.soft_deletes.retention_lease. As such, we rename the setting to
index.soft_deletes.retention_lease.period.
2019-03-05 22:04:45 -05:00
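A sketch of applying the renamed setting to an index; the index name and period value are hypothetical:
```
PUT /my_index/_settings
{
  "index.soft_deletes.retention_lease.period": "12h"
}
```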
Jason Tedor 504c792861
Add Docker build type (#39378)
This commit adds a new build type (together with deb/rpm/tar/zip) to
represent the official Docker images. This build type will be displayed
in APIs such as the main and nodes info APIs.
2019-03-05 22:03:15 -05:00
Luca Cavanna 9d0211485c Tie-break completion suggestions with same score and surface form (#39564)
In case multiple completion suggestion entries have the same score and
surface form, the order in which such options will be returned is
currently not deterministic.

With this commit we introduce tie-breaking for such situations, based
on shard id, index name, index uuid and doc id, like we already do for
ordinary search hits. With this change we also make shardIndex
mandatory when sorting and comparing completion suggestion options,
which was previously only needed later when fetching hits.

Also, we need to make sure shardIndex is properly set when merging
completion suggestions coming from multiple clusters in
`SearchResponseMerger`
2019-03-05 18:03:54 +01:00
Jim Ferenczi 160dc29f0e Handle total hits equal to track_total_hits (#37907)
This change ensures that a total hits equal to the value set for
track_total_hits is not considered as a lower bound.
2019-03-05 16:28:48 +01:00
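A sketch of a request affected by this change: with `track_total_hits` set to 100, a result whose true total is exactly 100 is now reported as an exact count rather than a lower bound; the index name is hypothetical:
```
GET /my_index/_search
{
  "track_total_hits": 100,
  "query": { "match_all": {} }
}
```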
Armin Braun 750ec8ba53
Minor Cleanups in QueryPhase (#39680) (#39694)
* Soften redundant cast to allow use of `DeterministicTaskQueue` in this class for #39504
* Remove two redundant variables and lower visibility in two possible spots
* Make field `final`
2019-03-05 15:04:16 +01:00
Christoph Büscher 5cdea6ef17 Fix Fuzziness#asDistance(String) (#39643)
Currently Fuzziness#asDistance(String) doesn't work for custom AUTO values. If
the fuzziness is AUTO, the method returns the correct edit distance to use,
depending on the input string, but for custom AUTO values it currently always
returns an edit distance of 1. Correcting this and adding unit and integration
tests to catch these cases.

Closes #39614
2019-03-05 14:31:07 +01:00
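A sketch of a custom AUTO fuzziness value of the kind this fix targets; index and field names are hypothetical:
```
GET /my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "quikc",
        "fuzziness": "AUTO:4,8"
      }
    }
  }
}
```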
Simon Willnauer 19f6a35358 Move BWC Version to 7.1.0 after backport
Relates to #39512
2019-03-05 14:11:59 +01:00
Simon Willnauer d112c89041 Allow inclusion of unloaded segments in stats (#39512)
Today we have no way to fetch actual segment stats for segments that
are currently unloaded. This is relevant in the case of frozen indices.
This change allows monitoring how much memory a frozen index would use if it were
unfrozen.
2019-03-05 14:02:20 +01:00
Armin Braun e8d9744340
Use Threadpool Time in ClusterApplierService (#39679) (#39685)
* Use threadpool's time in `ClusterApplierService` to allow for deterministic tests
* This is a part of/requirement for #39504
2019-03-05 12:37:49 +01:00
Gordon Brown 380dc27d91 Mute testCloseWhileRelocatingShards (#39589) 2019-03-05 13:34:43 +02:00
Alan Woodward 0b14782b23 Add stopword support to IntervalBuilder (#39637)
The match interval builder analyses input text and converts it to an IntervalSource, and as such
may generate token streams with stopwords. This commit deals with these by using the extend
factory to cover the gaps produced by these stopwords so that phrase and ordered queries work
correctly.
2019-03-05 10:50:45 +00:00
Christoph Büscher 2fe1fa8972
Shortcut counts on exists queries (#39570) (#39660)
`TopDocsCollectorContext` can already shortcut hit counts on `match_all` and `term` queries when there are no deletions. 
This change adds this ability for `exists` queries if the index doesn't have deletions and fields are indexed.

Closes #37475
2019-03-04 19:53:43 +01:00
Prabhakar S 98925e9a09 Fixing the custom object serialization bug in diffable utils. (#39544)
While serializing custom objects, the length of the list is computed after
filtering out the unsupported objects, but while writing the objects the filter
is not applied, thus resulting in writing unsupported objects which will fail
to be deserialized by the receiver. This adds the condition to filter out unsupported
custom objects.
2019-03-04 18:41:14 +01:00
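A toy sketch of the mismatch and the fix, writing to a StringBuilder instead of a real stream output; all names are illustrative:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch: the count is computed on the filtered list, but the unfiltered list is
// written, so the receiver reads a mismatched stream.
final class CustomListWriter<T> {
    void writeBuggy(List<T> customs, Predicate<T> supported, StringBuilder out) {
        long count = customs.stream().filter(supported).count();
        out.append(count).append('\n');
        for (T custom : customs) {          // bug: unsupported entries are still written
            out.append(custom).append('\n');
        }
    }

    void writeFixed(List<T> customs, Predicate<T> supported, StringBuilder out) {
        long count = customs.stream().filter(supported).count();
        out.append(count).append('\n');
        for (T custom : customs) {
            if (supported.test(custom)) {   // fix: apply the same filter when writing
                out.append(custom).append('\n');
            }
        }
    }
}
```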
Nhat Nguyen 801f13f201 Assert recovery done in testDoNotWaitForPendingSeqNo (#39595)
Since #39006 we should be able to complete a peer-recovery without
waiting for pending indexing operations. Thus, the assertion in
testDoNotWaitForPendingSeqNo should be updated from false to true.

Closes #39510
2019-03-04 10:21:23 -05:00
Yannick Welsch 936dbb00e3
Isolate Zen1 (#39470)
Cherry-picks a few commits from #39466 to align 7.x with master branch.
2019-03-04 15:51:17 +01:00
Luca Cavanna 9ddaabba88 Remove private SearchHits.Total class (#39556)
This is now possible as Lucene's `TotalHits` implements `equals`/`hashCode`;
all the other methods can be inlined in `SearchHits` instead, so there is no need
for a specific wrapper class.
2019-03-04 13:46:45 +01:00
Armin Braun 547af21a12
Introduce Mapping ActionListener (#39538) (#39636)
* Introduce Safer Chaining of Listeners

* The motivation here is to make reasoning about chains of `ActionListener` a little easier, by providing a safe method for nesting `ActionListener` that guarantees that a response is never dropped. Also, it dries up the code a little by removing the need to repeat `listener::onFailure` and `listener.onResponse` over and over.
* Refactored a number of obvious/easy spots to use the new listener constructor
2019-03-04 12:56:46 +01:00
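A minimal sketch of the listener-chaining pattern described above, using a stand-in listener interface rather than the real `ActionListener`:

```java
import java.util.function.Function;

// Sketch: wrap a listener so the response is transformed exactly once and failures
// are always forwarded, instead of hand-writing onResponse/onFailure delegation.
// This Listener interface is a stand-in, not org.elasticsearch.action.ActionListener.
interface Listener<T> {
    void onResponse(T response);
    void onFailure(Exception e);

    static <T, R> Listener<T> map(Listener<R> delegate, Function<T, R> fn) {
        return new Listener<T>() {
            @Override
            public void onResponse(T response) {
                R mapped;
                try {
                    mapped = fn.apply(response);
                } catch (Exception e) {
                    delegate.onFailure(e);   // a failing mapping never drops the notification
                    return;
                }
                delegate.onResponse(mapped);
            }

            @Override
            public void onFailure(Exception e) {
                delegate.onFailure(e);
            }
        };
    }
}
```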
Daniel Mitterdorfer fca6a2f006
Avoid deprecated API usage in TaskOperationFailure (#39303) (#39628)
With this commit we remove usage of the deprecated method
`ExceptionsHelper#detailedMessage` in the class `TaskOperationFailure`.

Relates #19069
2019-03-04 11:37:59 +01:00
David Turner dd68244841
Wait for state recovery in testFreshestMasterElectedAfterFullClusterRestart (#39602)
Zen1IT#testFreshestMasterElectedAfterFullClusterRestart fails sometimes because
we request the cluster state before state recovery has completed, and therefore
obtain the default value for the setting we're relying on.

Confusingly, we were starting out by setting this setting to its default value,
so the test looked like it was failing because of a production bug. This commit
avoids this confusion in future by setting it to a non-default value at the
start of the test.

Fixes #39586.
2019-03-04 10:26:07 +00:00
Adrien Grand 782f873165
Don't swallow exceptions in Store#close(). (#39035) (#39622)
Store#close() swallows any `IOException`.

Relates #39030
2019-03-04 10:58:43 +01:00
Adrien Grand 934946a232
Don't swallow exception in ThreadPool.terminate. (#39038) (#39623)
The use of `closeWhileHandlingException` means that any exception while trying
to close the threadpool is going to be swallowed.

Relates #39030
2019-03-04 10:58:29 +01:00
Adrien Grand 21540a5ada
Enhancements to IndicesQueryCache. (#39099) (#39626)
This commit adds the following:
 - more tests to IndicesServiceCloseTests, one of them found a bug in the order
   in which `IndicesQueryCache#onClose` and
   `IndicesService.indicesRefCount#decRef` are called.
 - made `IndicesQueryCache.stats2` a synchronized map. All writes to it are
   already protected by the lock of the Lucene cache, but the final read from
   an assertion in `IndicesQueryCache#close()` was not so this change should
   avoid any potential visibility issues.
 - human-readable `toString`s to make debugging easier.

Relates #37117
2019-03-04 10:58:12 +01:00
Armin Braun 68bc178017
Disable Bwc Tests (#39551)
* Disable Bwc Tests
* For #39550
2019-03-04 10:41:52 +01:00
Yannick Welsch 0f65390c29 Do not mutate engine during planning step (#39571)
This cleans up the Engine implementation by separating the sequence number generation from the
planning step in the engine, so that the planning step has no side effects. This makes it
easier to see that every sequence number is properly accounted for.
2019-03-04 10:11:39 +01:00
David Turner 9ec24bae80 Mute testDoNotWaitForPendingSeqNo
Relates #39510, #39595.
2019-03-03 22:03:53 -05:00
Mayya Sharipova d0e65a45a2 Add debug log for flush for IndicesRequestCacheIT (#39475)
Add debug log when index is flushed to investigate a failure
in IndicesRequestCacheIT

"DEBUG" level is used as "TRACE" produces too  much output irrelevant for this
issue

Relates to #32827
2019-03-01 13:12:45 -05:00
Luca Cavanna 29e3c18713 Mute failing IndexShardIT#testPendingRefreshWithIntervalChange
Relates to #39565
2019-03-01 14:55:19 +01:00
Tanguy Leroux e005eeb0b3
Backport support for replicating closed indices to 7.x (#39506)(#39499)
Backport support for replicating closed indices (#39499)
    
    Before this change, closed indexes were simply not replicated. It was therefore
    possible to close an index and then decommission a data node without knowing
    that this data node contained shards of the closed index, potentially leading to
    data loss. Shards of closed indices were not completely taken into account when
    balancing the shards within the cluster, or automatically replicated through shard
    copies, and they were not easily movable from node A to node B using APIs like
    Cluster Reroute without being fully reopened and closed again.
    
    This commit changes the logic executed when closing an index, so that its shards
    are not just removed and forgotten but are instead reinitialized and reallocated on
    data nodes using an engine implementation which does not allow searching or
     indexing, which has a low memory overhead (compared with searchable/indexable
    opened shards) and which allows shards to be recovered from peer or promoted
    as primaries when needed.
    
    This new closing logic is built on top of the new Close Index API introduced in
    6.7.0 (#37359). Some pre-closing sanity checks are executed on the shards before
    closing them, and closing an index on an 8.0 cluster will reinitialize the index shards
    and therefore impact the cluster health.
    
    Some APIs have been adapted to make them work with closed indices:
    - Cluster Health API
    - Cluster Reroute API
    - Cluster Allocation Explain API
    - Recovery API
    - Cat Indices
    - Cat Shards
    - Cat Health
    - Cat Recovery
    
    This commit contains all the following changes (most recent first):
    * c6c42a1 Adapt NoOpEngineTests after #39006
    * 3f9993d Wait for shards to be active after closing indices (#38854)
    * 5e7a428 Adapt the Cluster Health API to closed indices (#39364)
    * 3e61939 Adapt CloseFollowerIndexIT for replicated closed indices (#38767)
    * 71f5c34 Recover closed indices after a full cluster restart (#39249)
    * 4db7fd9 Adapt the Recovery API for closed indices (#38421)
    * 4fd1bb2 Adapt more tests suites to closed indices (#39186)
    * 0519016 Add replica to primary promotion test for closed indices (#39110)
    * b756f6c Test the Cluster Shard Allocation Explain API with closed indices (#38631)
    * c484c66 Remove index routing table of closed indices in mixed versions clusters (#38955)
    * 00f1828 Mute CloseFollowerIndexIT.testCloseAndReopenFollowerIndex()
    * e845b0a Do not schedule Refresh/Translog/GlobalCheckpoint tasks for closed indices (#38329)
    * cf9a015 Adapt testIndexCanChangeCustomDataPath for replicated closed indices (#38327)
    * b9becdd Adapt testPendingTasks() for replicated closed indices (#38326)
    * 02cc730 Allow shards of closed indices to be replicated as regular shards (#38024)
    * e53a9be Fix compilation error in IndexShardIT after merge with master
    * cae4155 Relax NoOpEngine constraints (#37413)
    * 54d110b [RCI] Adapt NoOpEngine to latest FrozenEngine changes
    * c63fd69 [RCI] Add NoOpEngine for closed indices (#33903)
    
    Relates to #33888
2019-03-01 14:48:26 +01:00
Yannick Welsch 1a50af7dd4 Do not close bad indices on startup (#39500)
With #17187, we verified IndexService creation during initial state recovery on the master and if the
recovery failed the index was imported as closed, not allocating any shards. This was mainly done to
prevent endless allocation loops and full log files on data nodes when the index metadata contained
broken settings / analyzers. Zen2 loads the cluster state eagerly, and this check currently runs on all
nodes (not only the elected master), which can significantly slow down startup on data nodes.
Furthermore, with replicated closed indices (#33888) on the horizon, importing the index as closed
will no longer prevent any shards from being allocated. Fortunately, the original issue of endless allocation loops is
no longer a problem due to #18467, where we limit the retries of failed allocations. The solution here
is therefore to just undo #17187, as it's no longer necessary, and covered by #18467, which will solve
the issue for Zen2 and replicated closed indices as well.
2019-03-01 09:23:46 +01:00
Tal Levy b9b46fdec6
fix UpdateSettingsRequestStreamableTests.mutateInstance (#39386) (#39477)
Mutations of the timeout values were using string-representations.

This resulted in very rare cases where the original timeout value was
represented as something like "0ms" and the new random time-value generated
was "0s". Although their string representations differ, their underlying
TimeValue does not. This caused `-Dtests.seed=7F4C034C43C22B1B` to
fail.
2019-02-28 21:02:32 -08:00
Mark Tozzi 609118c229 Override and mute InternalAutoDateHistogramTests#testReduceRandom() (#39536)
pending resolution of #39497
2019-02-28 16:00:32 -05:00
Lee Hinman dae48ba262 Add details about what acquired the shard lock last (#38807)
This adds a `details` parameter to shard locking in `NodeEnvironment`. This is
intended to be used for diagnosing issues such as

```
  1> [2019-02-11T14:34:19,262][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node_s0] [.tasks/oSYOG0-9SHOx_pfAoiSExQ] deleting index
  1> [2019-02-11T14:34:19,279][WARN ][o.e.i.IndicesService     ] [node_s0] [.tasks/oSYOG0-9SHOx_pfAoiSExQ] failed to delete index
  1> org.elasticsearch.env.ShardLockObtainFailedException: [.tasks][0]: obtaining shard lock timed out after 0ms
  1> 	at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:736) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:655) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:601) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.deleteIndexDirectorySafe(NodeEnvironment.java:554) ~[main/:?]
```

In the hope that we will be able to determine why the shard is still locked.

Relates to #30290 as well as some other CI failures
2019-02-28 10:50:47 -07:00
Armin Braun e564c4d8ad
Add Package Level JavaDoc on Snapshots (#38108) (#39514)
* Add Package Level JavaDoc on Snapshots
2019-02-28 18:23:01 +01:00
Simon Willnauer 5c96b90ed5 Never block on scheduled refresh if a refresh is running (#39462)
Today we block on the ReferenceManager in the case of a scheduled refresh.
Yet if there is a refresh happening concurrently we might block and create
very small segments. Instead we should just move on to the next shard
and free up the refresh thread.
2019-02-28 11:57:45 +01:00
Armin Braun d3d7d9bb9d
Remove Dead Code + Duplication in o.e.c.routing (#36678) (#39493)
* Removed obviously unused fields+methods
* Inlined public methods that only had one caller
* Simplified `Optional` chain
* Simplified some obviously redundant conditions
2019-02-28 10:33:05 +01:00
Armin Braun 90ab4a6f6e
Stabilize RareClusterState (#38671) (#39468)
* Use the actual master node, not just a master-eligible node, when trying to cancel publication. Cancelling only works on the master, and for unlucky seeds we never try the master within the 10s that the busy assert runs.
* Closes #36813
2019-02-28 08:01:52 +01:00
Tanguy Leroux 4dd274b51d Unmute CoordinatorTests.testDiscoveryUsesNodesFromLastClusterState() (#39452)
This commit unmutes the test and comments out the
offending call to linearizabilityChecker.isLinearizable() as suggested
in #39437
2019-02-27 20:38:54 +01:00
Tanguy Leroux 983b5d1c0e Mute SpecificMasterNodesIT.testElectOnlyBetweenMasterNodes()
Tracked in #38331
2019-02-27 18:00:02 +01:00
Daniel Mitterdorfer 2ccba18809
Correct name of basic_date_time_no_millis (#39367) (#39454)
With this commit we correct the name of the Java time based formatter
for `basic_date_time_no_millis`.
2019-02-27 17:03:50 +01:00
Alan Woodward 71b8494181
Upgrade to lucene 8.0.0-snapshot-ff9509a8df (#39444)
Backport of #39350

Contains the following:

* LUCENE-8635: Move terms dictionary off-heap for non-primary-key fields in `MMapDirectory`
* LUCENE-8292: `TermsEnum` is fully abstract
* LUCENE-8679: Return WITHIN in `EdgeTree#relateTriangle` only when polygon and triangle share one edge
* LUCENE-8676: Nori tokenizer deals correctly with large buffers
* LUCENE-8697: `GraphTokenStreamFiniteStrings` better handles side paths with gaps
* LUCENE-8664: Add `equals` and `hashCode` to `TotalHits`
* LUCENE-8660: `TopDocsCollector` returns accurate hit counts if the total equals the threshold
* LUCENE-8654: `Polygon2D#relateTriangle` fix for when the polygon is inside the triangle
* LUCENE-8645: `Intervals#fixField` can merge intervals from different fields
* LUCENE-8585: Create jump-tables for DocValues at index time
2019-02-27 14:36:08 +00:00
Armin Braun f675b33d50
Increase Timeout in UnicastZenPingTests (#38893) (#39449)
* Just like #37268 removing another 1s timeout, those are dangerous since they're easily exceeded by an untimely gc pause
* Closes #26701
2019-02-27 15:22:17 +01:00
Jason Tedor 55e98f08d8
Provide a clearer error message on keystore add (#39327)
When trying to add a setting to the keystore with an upper-case name, we
reject it with an unclear error message. This commit makes that error
message much clearer.
2019-02-27 08:10:23 -05:00
Armin Braun 27485871b8
Don't Ping on Handshake Connection (#39076) (#39446)
* Don't Ping on Handshake Connection

* It does not make sense to run pings on the handshake connection
   * Set the ping interval to `-1` to deactivate pings on it
2019-02-27 13:39:25 +01:00
Tanguy Leroux 6912e27ee0 Mute MinimumMasterNodesIT.testThreeNodesNoMasterBlock()
Tracked in #39172
2019-02-27 13:13:22 +01:00
David Turner 41668f7723 Move PeerFinder's logger to the expected package (#39412)
Today the abstract `org.elasticsearch.discovery.PeerFinder` uses the logger of
its implementation, which in production is in `o.e.cluster.coordination`. This
turns out to be confusing and unhelpful, so with this change we move to using
the logger that belongs to `PeerFinder`.
2019-02-27 08:44:05 +00:00
Armin Braun 28b771f5db
Remove Dead Code Test Infrastructure (#39192) (#39436)
* Just removing some obviously unused things
2019-02-27 09:38:47 +01:00
Tim Brooks f24dae302d
Make security tests transport agnostic (#39411)
Currently there are two security tests that specifically target the
netty security transport. This PR moves the client authentication tests
into `AbstractSimpleSecurityTransportTestCase` so that the nio transport
will also be tested.

Additionally the work to build transport configurations is moved out of
the netty transport and tested independently.
2019-02-26 18:55:19 -07:00
Nhat Nguyen a9e86bc941 Adjust testWaitForPendingSeqNo (#39404)
Since #39006, we should either remove `testWaitForPendingSeqNo` 
or adjust it not to wait for the pending operations. This change picks 
the latter.

Relates #39006
2019-02-26 16:21:56 -05:00
Mayya Sharipova 4ca514f18c Fix testCacheWithFilteredAlias failure (#39401)
Move refresh after Forcemerge

Relates to #32827
2019-02-26 14:11:35 -05:00
Luca Cavanna 2619f48e4d Rename SearchRequest#withLocalReduction (#39108)
`withLocalReduction` is confusing as `local` effectively means "local
to the remote clusters" rather than "local to the coordinating node" where
the method is executed. I propose we rename the method to
`crossClusterSearch` which better resembles what the static method is
used for.
2019-02-26 16:30:54 +01:00
Luca Cavanna c09773a76e Completion suggestions to be reduced once instead of twice (#39255)
We have been calling `reduce` against completion suggestions twice, once
in `SearchPhaseController#reducedQueryPhase` where all suggestions get
reduced, and once more in `SearchPhaseController#sortDocs` where we
add the top completion suggestions to the `TopDocs` so their docs can
be fetched. There is no need to do reduction twice. All suggestions can
be reduced in one call, then we can filter the result and pass only the
already reduced completion suggestions over to `sortDocs`. The small
important detail is that `shardIndex`, which is currently used only
to fetch suggestions hits, needs to be set before the first reduction,
hence outside of `sortDocs` where we have been doing it until now.
2019-02-26 11:42:02 +01:00
Yannick Welsch d42f422258 Add linearizability checker for coordination layer (#36943)
Checks that the core coordination algorithm implemented as part of Zen2 (#32006) supports
linearizable semantics. This commit adds a linearizability checker based on the Wing and Gong
graph search algorithm with support for compositional checking and activates these checks for all
CoordinatorTests.
2019-02-26 08:26:55 +01:00
Nhat Nguyen 575eed8582 Bubble up exception when processing NoOp (#39338)
Today we do not bubble up exceptions when processing NoOps but always
treat them as document-level failures. This incorrect treatment causes
the assert_no_failure assertion to trip in peer-recovery if the IndexWriter was
closed exceptionally before.

Closes #38898
2019-02-25 17:54:45 -05:00
Nhat Nguyen e9dda75834 Enable soft-deletes by default for 7.0+ indices (#38929)
Today when users upgrade to 7.0, existing indices will automatically
switch to soft-deletes without an opt-out option. With this change, 
we only enable soft-deletes by default for new indices.

Relates #36141
2019-02-25 17:54:29 -05:00
Igor Motov d5046b1c25 [CI] Fixes testQueryRandomGeoCollection failure again (#39275)
Moves the check for tiny polygons earlier in the test. It turned out
that polygons can be so tiny that we cannot even figure out their
orientation.

Relates to #37356
2019-02-25 16:35:17 -05:00
Evgenia Badyanova 1ed3407930
Reduce garbage from allocations in deprecation logger (#38780) (#39370)
1. Setting length for formatWarning String to avoid AbstractStringBuilder.ensureCapacityInternal calls
2. Adding extra check for parameter array length == 0 to avoid unnecessarily creating StringBuilder in LoggerMessageFormat.format

Helps to narrow the performance gap in throughput for the geonames benchmark (#37411) by 3%. For more details: https://github.com/elastic/elasticsearch/issues/37530#issuecomment-462758384

Relates to #37530
Relates to #37411
Relates to #35754
2019-02-25 16:23:22 -05:00
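A rough sketch of both allocation savings; the class and method names are illustrative, not the actual DeprecationLogger/LoggerMessageFormat code:

```java
// Sketch of the two optimizations: pre-sized builders and skipping work when there is nothing to format.
final class CheapFormat {
    // 1. Pre-size the StringBuilder so it never has to grow for typical warning messages.
    static String formatWarning(String message) {
        StringBuilder sb = new StringBuilder(message.length() + 16);
        return sb.append("WARNING: ").append(message).toString();
    }

    // 2. Skip all StringBuilder work when there is nothing to substitute.
    static String format(String pattern, Object... args) {
        if (args == null || args.length == 0) {
            return pattern;
        }
        StringBuilder sb = new StringBuilder(pattern.length() + 16 * args.length);
        int cursor = 0;
        for (Object arg : args) {
            int placeholder = pattern.indexOf("{}", cursor);
            if (placeholder < 0) {
                break;
            }
            sb.append(pattern, cursor, placeholder).append(arg);
            cursor = placeholder + 2;
        }
        sb.append(pattern, cursor, pattern.length());
        return sb.toString();
    }
}
```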
Lee Hinman 5c7dd6f0ee Set mappings when creating indices in SuggestSearchIT (#39323)
* Set mappings when creating indices in SuggestSearchIT

These tests don't test dynamic mapping, so they can use preset mappings. This
removes the possibility they may fail due to the mapping not being available
since mapping updates are asynchronous.

Resolves #39315

* Wrap creates in assertAcked
2019-02-25 13:27:03 -07:00
Mayya Sharipova bf058d6e4d
Fix analyze NullPointerException when AnalyzeTokenList tokens is null (#39332) (#39361) 2019-02-25 12:49:18 -05:00
Nhat Nguyen 48219112e3 Do not wait for advancement of checkpoint in recovery (#39006)
With this change, we won't wait for the local checkpoint to advance to
the max_seq_no before starting phase2 of peer-recovery. We also remove
the sequence number range check in peer-recovery. We can safely do these
thanks to Yannick's finding.

The replication group to be used is currently sampled after indexing
into the primary (see `ReplicationOperation` class). This means that
when initiating tracking of a new replica, we have to consider the
following two cases:

- There are operations for which the replication group has not been
sampled yet. As we initiated the new replica as tracking, we know that
those operations will be replicated to the new replica and follow the
typical replication group semantics (e.g. marked as stale when
unavailable).

- There are operations for which the replication group has already been
sampled. These operations will not be sent to the new replica.  However,
we know that those operations are already indexed into Lucene and the
translog on the primary, as the sampling is happening after that. This
means that by taking a snapshot of Lucene or the translog, we will be
getting those ops as well. What we cannot guarantee anymore is that all
ops up to `endingSeqNo` are available in the snapshot (i.e.  also see
comment in `RecoverySourceHandler` saying `We need to wait for all
operations up to the current max to complete, otherwise we can not
guarantee that all operations in the required range will be available
for replaying from the translog of the source.`). This is not needed,
though, as we can no longer guarantee that max seq no == local
checkpoint.

Relates #39000
Closes #38949

Co-authored-by: Yannick Welsch <yannick@welsch.lu>
2019-02-25 12:10:14 -05:00
David Turner 236db51d34 Fix testSnapshotFileFailureDuringSnapshot (#39362)
Today this test catches an exception and asserts that its proximate cause has
message `Random IOException` but occasionally this exception is wrapped two
layers deep, causing the test to fail. This commit adjusts the test to look at
the root cause of the exception instead.

      1> [2019-02-25T12:31:50,837][INFO ][o.e.s.SharedClusterSnapshotRestoreIT] [testSnapshotFileFailureDuringSnapshot] --> caught a top level exception, asserting what's expected
      1> org.elasticsearch.snapshots.SnapshotException: [test-repo:test-snap/e-hn_pLGRmOo97ENEXdQMQ] Snapshot could not be read
      1> 	at org.elasticsearch.snapshots.SnapshotsService.snapshots(SnapshotsService.java:212) ~[main/:?]
      1> 	at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:135) ~[main/:?]
      1> 	at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:54) ~[main/:?]
      1> 	at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:127) ~[main/:?]
      1> 	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:208) ~[main/:?]
      1> 	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[main/:?]
      1> 	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[main/:?]
      1> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202]
      1> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202]
      1> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
      1> Caused by: org.elasticsearch.snapshots.SnapshotException: [test-repo:test-snap/e-hn_pLGRmOo97ENEXdQMQ] failed to get snapshots
      1> 	at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getSnapshotInfo(BlobStoreRepository.java:564) ~[main/:?]
      1> 	at org.elasticsearch.snapshots.SnapshotsService.snapshots(SnapshotsService.java:206) ~[main/:?]
      1> 	... 9 more
      1> Caused by: java.io.IOException: Random IOException
      1> 	at org.elasticsearch.snapshots.mockstore.MockRepository$MockBlobStore$MockBlobContainer.maybeIOExceptionOrBlock(MockRepository.java:275) ~[test/:?]
      1> 	at org.elasticsearch.snapshots.mockstore.MockRepository$MockBlobStore$MockBlobContainer.readBlob(MockRepository.java:317) ~[test/:?]
      1> 	at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.readBlob(ChecksumBlobStoreFormat.java:101) ~[main/:?]
      1> 	at org.elasticsearch.repositories.blobstore.BlobStoreFormat.read(BlobStoreFormat.java:90) ~[main/:?]
      1> 	at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getSnapshotInfo(BlobStoreRepository.java:560) ~[main/:?]
      1> 	at org.elasticsearch.snapshots.SnapshotsService.snapshots(SnapshotsService.java:206) ~[main/:?]
      1> 	... 9 more

    FAILURE 0.59s J0 | SharedClusterSnapshotRestoreIT.testSnapshotFileFailureDuringSnapshot <<< FAILURES!
       > Throwable #1: java.lang.AssertionError:
       > Expected: a string containing "Random IOException"
       >      but: was "[test-repo:test-snap/e-hn_pLGRmOo97ENEXdQMQ] failed to get snapshots"
       > 	at __randomizedtesting.SeedInfo.seed([B73CA847D4B4F52D:884E042D2D899330]:0)
       > 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
       > 	at org.elasticsearch.snapshots.SharedClusterSnapshotRestoreIT.testSnapshotFileFailureDuringSnapshot(SharedClusterSnapshotRestoreIT.java:821)
       > 	at java.lang.Thread.run(Thread.java:748)
2019-02-25 16:43:55 +00:00
Marios Trivyzas 11fe8cd16f
[Tests] Fix flakiness by ensuring stable cluster (#39300) (#39356)
In integration tests where `setBootstrapMasterNodeIndex()` is used in
combination with `autoMinMasterNodes = false`, the cluster can start
bootstrapping once the number of nodes set with
`setBootstrapMasterNodeIndex` has been started, but it is not ensured
that all nodes have successfully joined to form the cluster.

This behaviour was introduced with 5db7ed22a0
and in order to ensure that the cluster is properly formed before proceeding
with the integration test, use `ensureStableCluster()` with the
appropriate number of expected nodes.

Fixes: #39220
2019-02-25 17:26:15 +01:00
David Turner dc23be5a9d Avoid creating a green index in RetentionLeaseIT (#39347)
In #39224 we made shard history retention lease syncing ignore the
`index.write.wait_for_active_shards` setting on the index, and added a test
that showed that it was ignored. However the test as merged actually creates a
green index, so the `wait_for_active_shards` setting has no effect. This change
adjusts the test to create a yellow index to verify that
`wait_for_active_shards` really is ignored.
2019-02-25 15:33:09 +00:00
Yannick Welsch a2bc41621c Clean GatewayAllocator when stepping down as master (#38885)
This fixes an issue where a messy master election might prevent shard allocation from properly
proceeding. I've encountered this in failing CI tests when we were bootstrapping multiple nodes. Tests
would sometimes time out on an `ensureGreen` after an unclean master election. The reason for
this is how the async shard information fetching works and how the clean-up logic in
GatewayAllocator is integrated with the rest of the system. When a node becomes master, it will, as
part of the first cluster state update where it becomes master, already try allocating shards (see
`JoinTaskExecutor`, in particular the call to `reroute`). This process, which runs on the
MasterService thread, will trigger async shard fetching. If the node is still processing an earlier
election failure in ClusterApplierService (e.g. due to a messy election), that will possibly trigger the
clean-up logic in GatewayAllocator after the shard fetching has been initiated by MasterService,
thereby cancelling the fetching, which means that no subsequent reroute (allocation) is triggered
after the shard fetching results return. This means that no shard allocation will happen unless the
user triggers an explicit reroute command. The bug imo is that GatewayAllocator is called from both
MasterService and ClusterApplierService threads, with no clear happens-before relation. The fix
here makes it so that the clean-up logic is also run on the MasterService thread instead of the
ClusterApplierService thread, reestablishing a clear happens-before relation. Note that testing this
is tricky. With the newly added test, I can quite often reproduce this by adding `Thread.sleep(10);`
in ClusterApplierService (to make sure it does not go too quickly) and adding `Thread.sleep(50);` in
`TransportNodesListGatewayStartedShards` to make sure that shard state fetching does not go too
quickly either.

Note that older versions of Zen discovery are affected by this as well, but did not exhibit this issue
as often because master elections are much slower there.
2019-02-25 10:37:31 +01:00
David Turner 96c09b032d Ignore waitForActiveShards when syncing leases (#39224)
Adjust the retention lease sync actions so that they do not respect the
`index.write.wait_for_active_shards` setting on an index, allowing them to sync
retention leases even if insufficiently many shards are currently active to
accept writes.

Relates #39089
2019-02-25 08:53:43 +00:00
Nhat Nguyen f17d408fbb Add cause to assert_no_failure when replay translog (#39333)
We have tripped this assertion three times over the last two weeks. However,
it only says "this IndexWriter is closed" without the actual cause.

```
[2019-02-14T11:46:31,144][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [elasticsearch[node-1][generic][T#2]], exiting

java.lang.AssertionError: unexpected failure while replicating translog entry: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
```

This change replaces the assert with an AssertionError so that the next
build failures will include the actual cause.

Relates #38898
2019-02-23 13:04:43 -05:00
Zachary Tong c7516b03b6
Better HoltWinters parameter validation (#38747)
We validate HW parameters (namely, window > 2 * period) when parsing
the XContent... but that means transport clients can configure bad
params.

This change allows the model to validate the window and throw an
exception if it wishes.

It also makes some test changes:

- removes testBadModelParams(), which was a junk test (didn't do
anything); bad param checking is done elsewhere in unit tests
- Fixes one of the windows in testHoltWintersNotEnoughData()
- Ensures the period in testHoltWintersNotEnoughData() is >> window
- Removes `setTypes()` since that's deprecated
2019-02-22 15:25:26 -05:00
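A minimal sketch of the window/period check the model can now perform itself; the class name and exception text are illustrative, not the actual HoltWinters model API:

```java
// Hypothetical holder for the HoltWinters settings relevant to the check.
final class HoltWintersSettings {
    final int period;

    HoltWintersSettings(int period) {
        this.period = period;
    }

    // Reject windows that are not at least twice the period.
    void validateWindow(int window) {
        if (window < 2 * period) {
            throw new IllegalArgumentException(
                "window [" + window + "] must be at least twice as large as period [" + period + "]");
        }
    }
}
```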
Daniel Mitterdorfer 9fea21aca5
Remove ExceptionsHelper#detailedMessage in tests (#37921) (#39297)
With this commit we remove all usages of the deprecated method
`ExceptionsHelper#detailedMessage` in tests. We do not address
production code here but rather in dedicated follow-up PRs to keep the
individual changes manageable.

Relates #19069
2019-02-22 14:03:29 +01:00
Tim Brooks 44df76251f
Rebuild remote connections on profile changes (#39146)
Currently remote compression and ping schedule settings are dynamic.
However, we do not listen for changes. This commit adds listeners for
changes to those two settings. Additionally, when those settings change
we now close existing connections and open new ones with the settings
applied.

Fixes #37201.
2019-02-21 14:00:39 -07:00
Tanguy Leroux fc896e452c
ReadOnlyEngine should update translog recovery state information (#39238) (#39251)
`ReadOnlyEngine` never recovers operations from translog and never 
updates translog information in the index shard's recovery state, even 
though the recovery state goes through the `TRANSLOG` stage during 
the recovery. It means that recovery information for frozen shards indicates 
an unknown number of recovered translog ops in the Recovery APIs
(translog_ops: `-1` and translog_ops_percent: `-1.0%`) and this is confusing.

This commit changes the `recoverFromTranslog()` method in `ReadOnlyEngine` 
so that it always recover from an empty translog snapshot, allowing the recovery 
state translog information to be correctly updated.

Related to #33888
2019-02-21 18:08:06 +01:00
Daniel Mitterdorfer ef921fd157
Migrate Streamable to Writeable for cluster block package (#37391) (#39236) 2019-02-21 15:21:44 +01:00
Marios Trivyzas ecfd48b6d3
[Tests] Make testEngineGCDeletesSetting deterministic (#38942) (#39231)
`InternalEngine.resolveDocVersion()` uses `relativeTimeInMillis()` from
`ThreadPool`, so it needs the cached time to be advanced. Add a check
to ensure that, and decrease the `thread_pool.estimated_time_interval`
to 1msec to prevent long running times for the test.

Fixes: #38874

Co-authored-by: Boaz Leskes <b.leskes@gmail.com>
2019-02-21 14:30:59 +02:00
Marios Trivyzas 1316825f52
Replace superfluous usage of Counter with Supplier (#39048) (#39225)
`Counter` was used as a functional argument to pass the relative
cached time before the `Supplier` interface was introduced.
2019-02-21 12:42:54 +02:00
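A small sketch of the simplified shape, assuming an illustrative consumer of the cached relative time (not the actual caller being refactored):

```java
import java.util.function.Supplier;

// Sketch: the cached relative time is passed as a plain Supplier instead of a Counter-like holder.
final class CachedTimeUser {
    private final Supplier<Long> relativeTimeInMillis;

    CachedTimeUser(Supplier<Long> relativeTimeInMillis) {
        this.relativeTimeInMillis = relativeTimeInMillis;
    }

    long elapsedSince(long startMillis) {
        return relativeTimeInMillis.get() - startMillis;
    }
}
```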
Ignacio Vera be8a5315d7 Extend nextDoc to delegate to the wrapped doc-value iterator for date_nanos (#39176)
The date_nanos type does not expose its doc-value iterator directly; it needs to extend `next_doc` in order to delegate the call to the wrapped iterator.
2019-02-21 11:10:51 +01:00
Tal Levy 8150ca40f2 mute test3MasterNodes2Failed 2019-02-20 17:35:37 -08:00
Nhat Nguyen 820ba8169e Add retention leases replication tests (#38857)
This commit introduces the retention leases to ESIndexLevelReplicationTestCase,
then adds some tests verifying that the retention leases replication works
correctly in spite of the presence of the primary failover or out of order
delivery of retention leases sync requests.

Relates #37165
2019-02-20 19:21:00 -05:00
Mirko Jotic a6ae146ccc Converting Derivative Pipeline Agg integration test into AggregatorTestsCase. (#38679)
Replicates the majority of existing Derivative pipeline integration tests into
an AggregatorTestCase, with the goal of removing the integration
tests in the near future.
2019-02-20 16:35:32 -05:00
Igor Motov 3d93011e32 Fix median calculation in MedianAbsoluteDeviationAggregatorTests (#38979)
Fixes an error in median calculation in
MedianAbsoluteDeviationAggregatorTests for odd number of sample points,
which causes some rare test failures.

Fixes #38937
2019-02-20 13:24:30 -05:00
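For reference, a small self-contained sketch of a median (and median absolute deviation) that handles both odd and even sample counts, the distinction behind these rare failures; it is an illustration, not the test code:

```java
import java.util.Arrays;

// Sketch: median over a copy of the samples, with the odd/even split handled explicitly.
final class MedianSketch {
    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        if (sorted.length % 2 == 1) {
            return sorted[mid];                       // odd count: the middle element
        }
        return (sorted[mid - 1] + sorted[mid]) / 2;   // even count: mean of the two middle elements
    }

    static double medianAbsoluteDeviation(double[] values) {
        double med = median(values);
        double[] deviations = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            deviations[i] = Math.abs(values[i] - med);
        }
        return median(deviations);
    }
}
```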
Ioannis Kakavas c783069804
Fix NPE on Stale Index in IndicesService (#39173)
This is a backport of  #38891 which closes #38845
2019-02-20 15:35:35 +02:00
David Turner efffb3d5b7 Simplify calculation in AwarenessAllocationDecider (#38091)
Today's calculation of the maximum number of shards per attribute is rather
convoluted. This commit clarifies that it returns
ceil(shardCount/numberOfAttributes).
2019-02-20 08:54:57 +00:00
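The clarified formula as a one-line sketch using integer ceiling division (the class name is illustrative):

```java
// ceil(shardCount / numberOfAttributes) without floating-point arithmetic.
final class AwarenessMath {
    static int maximumShardsPerAttribute(int shardCount, int numberOfAttributes) {
        return (shardCount + numberOfAttributes - 1) / numberOfAttributes;
    }
}
```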
Henning Andersen 00a26b9dd2 Blob store compression fix (#39073)
Blob store compression was not enabled for some of the files in
snapshots because the constructor accessed sub-class fields. Fixed to
instead accept the compress flag as a constructor parameter. Also fixed chunk
size validation so that it works.

Deprecated repositories.fs.compress setting as well to be able to unify
in a future commit.
2019-02-20 09:24:41 +01:00
Hendrik Muhs 50b3858f7c add version 6.6.2 2019-02-19 20:28:06 +01:00
David Turner 0a9574c9d4 Add some missing toString() implementations (#39124)
Sometimes we turn objects into strings for logging or debugging using
`toString()`, but the default implementation is often unhelpful. This change
improves on this in two places I ran into recently.
2019-02-19 17:52:41 +00:00
Jason Tedor fef9bdb23f
Allow retention lease operations under blocks (#39089)
This commit allows manipulating retention leases under blocks.
2019-02-19 10:26:49 -05:00
Jason Tedor 12f6963456
Fix retention leases sync on recovery test
This test had a bug. We attempt to allow only the primary to be
allocated, to force all replicas to recovery from the primary after we
had set the state of the retention leases on the primary. However, in
building the index settings, we were overwriting the settings that
exclude the replicas from being allocated. This means that some of the
replicas would end up assigned and rather than receive retention leases
during recovery, they would be part of the replication group receiving
retention leases as they are manipulated. Since retention lease renewals
are only synced periodically, this means that the replica could be
lagging a little behind in some cases leading to an assertion tripping
in the test. This commit addresses this by ensuring that the replicas
are indeed not allocated until after the retention leases are done being
manipulated on the replica. We did this by not overwriting the exclude
settings.

Closes #39105
2019-02-19 09:07:33 -05:00
Alexander Reelsen 7f8a640363 Fix DateFormatters.parseMillis when no timezone is given (#39100)
The parseMillis method was able to work on formats without timezones by
falling back to UTC. The Date Formatter interface did not support this, as
the calling code was using the `Instant.from` java time API.

This switches over to an internal method which adds UTC as a timezone.

Closes #39067
2019-02-19 14:12:22 +01:00
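A hedged sketch of the fallback with plain java.time, attaching UTC as the override zone for zone-less patterns; the names are illustrative and this is not the actual DateFormatters code:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: a pattern without a timezone is parsed with UTC as the override zone,
// so an epoch-millis value can still be produced.
final class ParseMillisSketch {
    static long parseMillis(DateTimeFormatter formatter, String input) {
        return ZonedDateTime.parse(input, formatter.withZone(ZoneOffset.UTC))
            .toInstant()
            .toEpochMilli();
    }

    public static void main(String[] args) {
        DateTimeFormatter noZone = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss");
        System.out.println(parseMillis(noZone, "2019-02-19T14:12:22")); // resolved as UTC
    }
}
```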
Jim Ferenczi 199155f5fb
Enforce Completion Context Limit (#38675) (#39075)
This change adds a limit to the number of completion contexts that a completion field can define.

Closes #32741
2019-02-19 08:52:24 +01:00
Albert Zaharovits 6bc88b00ec Mute GatewayMetaStateTests.testAtomicityWithFailures (#39079)
Mute test GatewayMetaStateTests.testAtomicityWithFailures
2019-02-19 00:25:45 +02:00
Jason Tedor 2d8f6b6501
Introduce retention lease state file (#39004)
This commit moves retention leases from being persisted in the Lucene
commit point to being persisted in a dedicated state file.
2019-02-18 16:53:46 -05:00
Jason Tedor d43ac8fe11
Include in log retention leases that failed to sync
When retention leases fail to sync after an expiration check, we emit a
log message about this. This commit adds the retention leases that
failed to sync to that log message.
2019-02-18 15:08:08 -05:00