Commit Graph

1956 Commits

Author SHA1 Message Date
Yannick Welsch fea91c6113 Mute testTracerLog
Relates to #40586
2019-03-28 14:05:54 +01:00
David Turner 5a2ba34174
Get node ID from nodes info in REST tests (#40052) (#40532)
We discussed recently that the cluster state API should be considered
"internal" and therefore our usual cast-iron stability guarantees do not hold
for this API.

However, there are a good number of REST tests that try to identify the master
node. Today they call `GET /_cluster/state` API and extract the master node ID
from the response. In fact, many of these tests just want an arbitrary node ID
(or perhaps a data node ID) so an alternative is to call `GET _nodes` or `GET
_nodes/data:true` and obtain a node ID from the keys of the `nodes` map in the
response.

This change adds the ability for YAML-based REST tests to extract an arbitrary
key from a map so that they can obtain a node ID from the nodes info API
instead of using the master node ID from the cluster state API.

Relates #40047.
2019-03-27 23:08:10 +00:00
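
A rough illustration of the idea in plain Java (hypothetical helper, not the YAML test runner itself): an arbitrary node ID is simply any key of the `nodes` map in a parsed nodes info response.

```
import java.util.Map;

public class ArbitraryNodeId {

    /** Returns an arbitrary node ID, i.e. any key of the top-level "nodes" map of GET _nodes. */
    @SuppressWarnings("unchecked")
    public static String anyNodeId(Map<String, Object> nodesInfoResponse) {
        Map<String, Object> nodes = (Map<String, Object>) nodesInfoResponse.get("nodes");
        if (nodes == null || nodes.isEmpty()) {
            throw new IllegalStateException("response contains no nodes");
        }
        return nodes.keySet().iterator().next(); // any key is a valid node ID
    }
}
```
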
Tim Brooks ab44f5fd5d
Add InboundHandler for inbound message handling (#40430)
This commit adds an InboundHandler to handle inbound message processing.
With this commit, this code is moved out of the TcpTransport.
Additionally, finer grained unit tests are added to ensure that the
inbound processing works as expected
2019-03-27 12:33:26 -06:00
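
A minimal sketch of the separation described above (invented class and method names, not the actual Elasticsearch InboundHandler): the transport only decodes bytes off the wire and hands each message to a dedicated handler keyed by action name.

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

/** Hypothetical, simplified stand-in for an inbound message handler. */
public class SimpleInboundHandler {

    private final Map<String, Consumer<byte[]>> handlersByAction = new ConcurrentHashMap<>();

    public void registerHandler(String action, Consumer<byte[]> handler) {
        handlersByAction.put(action, handler);
    }

    /** Called by the transport once a full message has been decoded off the wire. */
    public void handleInbound(String action, byte[] payload) {
        Consumer<byte[]> handler = handlersByAction.get(action);
        if (handler == null) {
            throw new IllegalStateException("no handler registered for action [" + action + "]");
        }
        handler.accept(payload);
    }
}
```

Keeping this logic out of the transport class is what makes the finer-grained unit tests mentioned above possible: the handler can be exercised without opening a socket.
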
Yannick Welsch 64b31f44af No mapper service and index caches for replicated closed indices (#40423)
Replicated closed indices can't be indexed into or searched, and therefore don't need a shard with
full indexing and search capabilities allocated. We can save on a lot of heap memory for those
indices by not allocating a mapper service and caching infrastructure (which preallocates a constant
amount per instance). Before this change, a 1GB ES instance could host 250 replicated closed
metricbeat indices (each index with one shard). After this change, the same instance can host 7300
replicated closed metricbeat indices (not that this would be a recommended configuration). Most
of the remaining memory is in the cluster state and the IndexSettings object.
2019-03-27 19:04:24 +01:00
Yannick Welsch 8f7c5732f1 Use default discovery implementation for single-node discovery (#40036)
Switches "discovery.type: single-node" from using a separate implementation for single-node discovery to using the existing standard discovery implementation, with two small adaptions:

-  auto-bootstrapping, but requiring initial_master_nodes not to be set.
- not actively pinging other nodes using the Peerfinder
- not allowing other nodes to join its single-node cluster (if they have e.g. been set up using regular discovery and connect to the single-disco node).
2019-03-27 19:04:24 +01:00
Tim Brooks 3860ddd1a4
Move outbound message handling to OutboundHandler (#40336)
Currently there are some components of message serialization and sending
that still occur in TcpTransport. This commit makes it possible to
send a message without the TcpTransport by moving all of the remaining
application logic to the OutboundHandler. Additionally, it adds unit
tests to ensure that this logic works as expected.
2019-03-27 11:47:36 -06:00
Tim Brooks 760cfffe4b
Move TransportMessageListener to TransportService (#40474)
Currently the TransportMessageListener is applied and used in the
Transport class. However, local requests and responses never make it to
this class. This PR moves the listener add/remove methods to the
TransportService. After this change the Transport can only have one
listener set with it. This one listener is the TransportService, which
will then propagate the events to the external listeners.

Additionally, this commit backports #40237:

Remove Tracer from MockTransportService
2019-03-27 09:24:20 -06:00
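
The "single listener that fans out" arrangement described above is essentially a composite listener. A hypothetical sketch with invented names (not the real TransportMessageListener interface):

```
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Hypothetical fan-out listener: the transport sees one listener, external listeners register here. */
public class CompositeMessageListener {

    public interface MessageListener {
        void onRequestSent(String action, long requestId);
        void onResponseReceived(long requestId);
    }

    private final List<MessageListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(MessageListener listener) {
        listeners.add(listener);
    }

    public void removeListener(MessageListener listener) {
        listeners.remove(listener);
    }

    public void onRequestSent(String action, long requestId) {
        for (MessageListener listener : listeners) {
            listener.onRequestSent(action, requestId);
        }
    }

    public void onResponseReceived(long requestId) {
        for (MessageListener listener : listeners) {
            listener.onResponseReceived(requestId);
        }
    }
}
```
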
Henning Andersen bf444b9f02 Store Pending Deletions Fix (#40345)
FilterDirectory.getPendingDeletions does not delegate; this is fixed
temporarily by overriding it in StoreDirectory.

This in turn caused duplicate file name use after a trimUnsafeCommits
had been done, since a new IndexWriter would not consider the pending
deletes in IndexFileDeleter. This should only happen on Windows (AFAIK).

Reenabled doing index updates for all tests using
IndexShardTests.indexOnReplicaWithGaps (which could fail due to above
when using mocked WindowsFS).

Added getPendingDeletions delegation to all elasticsearch
FilterDirectory subclasses that were not trivial test-only overrides to
minimize the risk of hitting this issue in another case.
2019-03-26 15:30:44 +01:00
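
A minimal sketch of the delegation fix described above, assuming a Lucene FilterDirectory whose getPendingDeletions() does not forward to the wrapped directory (class name invented):

```
import java.io.IOException;
import java.util.Set;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;

/** Hypothetical FilterDirectory subclass that explicitly delegates pending deletions. */
public class PendingDeletionsAwareDirectory extends FilterDirectory {

    public PendingDeletionsAwareDirectory(Directory in) {
        super(in);
    }

    @Override
    public Set<String> getPendingDeletions() throws IOException {
        // forward to the wrapped directory so a new IndexWriter can see files
        // that are still pending deletion (relevant on Windows, as noted above)
        return in.getPendingDeletions();
    }
}
```
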
Armin Braun cafb83297c
Use Correct Enum in Wipe Snapshots Test Method (#40422) (#40438)
* Mistake was made in #39662
* The response deserialized here is `org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse` which uses `org.elasticsearch.snapshots.SnapshotInfo` which uses `org.elasticsearch.snapshots.SnapshotState` and not the shard state
2019-03-26 09:04:15 +01:00
Nhat Nguyen efaf95628b Use separate translog dir in testDeleteWithFatalError
This test currently opens a new engine but shares the translog
directory of the previously opened engine.
2019-03-20 10:22:27 -04:00
Mayya Sharipova 49a7c6e0e8
Expose proximity boosting (#39385) (#40251)
Expose DistanceFeatureQuery for geo, date and date_nanos types

Closes #33382
2019-03-20 09:24:41 -04:00
Henning Andersen 4c2a8638ca Cascading primary failure lead to MSU too low (#40249)
If a replica were first reset due to one primary failover and then
promoted (before resync completes), its MSU would not include changes
since the global checkpoint, leading to errors during translog replay.

Fixed by re-initializing MSU before restoring local history.
2019-03-20 14:00:43 +01:00
Nhat Nguyen a13b4bc8c5 Always fail engine if delete operation fails (#40117)
Unlike index operations, which can fail at the document level due to
analyzing errors, delete operations should never fail at the document
level whether soft-deletes is enabled or not. With this change, we will
always fail the engine if we fail to apply a delete operation to Lucene.

Closes #33256
2019-03-19 13:09:23 -04:00
Nhat Nguyen 38e9522218 Remove wait for cluster state step in peer recovery (#40004)
We introduced the WAIT_CLUSTERSTATE action in #19287 (5.0), but have not
used it since #25692 (6.0). This change removes that action and related
code in 7.x and 8.0.

Relates #19287
Relates #25692
2019-03-18 15:17:21 -04:00
Nhat Nguyen 9ba0bdf528 Dump cluster state if ensureGreen timed out in QA tests (#40133)
When the ensureGreen method in QA tests times out, it does not
provide enough info for us to investigate why the test index is
not green yet. With this change, we will dump the cluster state if
ensureGreen times out.

Relates #32027
2019-03-18 15:17:21 -04:00
Nhat Nguyen d720a64b9e Ensure sendBatch not called recursively (#39988)
This PR introduces AsyncRecoveryTarget which executes remote calls of
peer recovery asynchronously. In this change, we also add a new
assertion to ensure that method sendBatch, which sends a batch of
history operations in phase2, is never called recursively on the same
thread. This new assertion will also be used in method sendFileChunks.
2019-03-18 15:17:21 -04:00
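
The "never called recursively on the same thread" assertion can be pictured as a thread-local re-entrancy guard; a hypothetical sketch, not the actual RecoverySourceHandler code:

```
/** Hypothetical guard asserting that a piece of work is never re-entered on the same thread. */
public class NonRecursiveGuard {

    private final ThreadLocal<Boolean> inFlight = ThreadLocal.withInitial(() -> Boolean.FALSE);

    public void run(Runnable sendBatch) {
        assert inFlight.get() == Boolean.FALSE : "sendBatch must not be called recursively on the same thread";
        inFlight.set(Boolean.TRUE);
        try {
            sendBatch.run();
        } finally {
            inFlight.set(Boolean.FALSE);
        }
    }
}
```

Executing the remote recovery calls asynchronously (the AsyncRecoveryTarget mentioned above) keeps response handling off the calling thread, which is what allows an assertion like this to hold.
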
Andrey Ershov 42602478b8 Unmute, fix, refactor and zen2ify NetworkDisruptionIT (#38351)
This commit unmutes NetworkDisruptionIT.

It makes the changes necessary for Zen2: it avoids usage of
autoMinMasterNodes and selects a cluster size such that there is no
need to call AddVotingExclusion.
This change also refactors a single method,
prepareDistruptedCluster, to be used by both test methods.
Unfortunately, NetworkDisruption is broken and
testNetworkPartitionRemovalRestoresConnections "is fixed" by
introducing assertBusy - #38348.

Relates #36205
Relates #38348

(cherry picked from commit 97707c7f892636e5b75c3df546b067414acb27cd)
2019-03-18 16:39:43 +01:00
Jim Ferenczi eb540125ea
Fix IndexSearcherWrapper visibility (#39071) (#40145)
This change adds a wrapper for IndexSearcher that makes IndexSearcher#search(List, Weight, Collector) visible by
sub-classes. The wrapper is used by the ContextIndexSearcher to call this protected method on a searcher created by a plugin.
This ensures that an override of the protected method in an IndexSearcherWrapper plugin is called when a search is executed.

Closes #30758
2019-03-18 11:33:54 +01:00
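
The trick relies on the fact that a Java override may widen access from protected to public. A hedged sketch against Lucene's IndexSearcher (wrapper class name invented):

```
import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Weight;

/** Hypothetical wrapper that exposes IndexSearcher's protected leaf-level search method. */
public class VisibleLeafSearcher extends IndexSearcher {

    public VisibleLeafSearcher(IndexReader reader) {
        super(reader);
    }

    @Override
    public void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {
        // widening the override from protected to public lets a delegating searcher call it
        super.search(leaves, weight, collector);
    }
}
```
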
Tim Brooks 0b50a670a4
Remove transport name from tcp channel (#40074)
Currently, we maintain a transport name ("mock-nio", "nio", "netty")
that is passed to a `TcpTransportChannel` when a request is received.
The value of this name is to associate with the task when we register a
task with the task manager. However, it is only possible to run ES with
one transport, so having an implementation-specific name is unnecessary.
This commit removes the name and replaces it with the generic
"transport".
2019-03-15 12:04:13 -06:00
David Turner a323132503 Create retention leases file during recovery (#39359)
Today we load the shard history retention leases from disk whenever opening the
engine, and treat a missing file as an empty set of leases. However in some
cases this is inappropriate: we might be restoring from a snapshot (if the
target index already exists then there may be leases on disk) or
force-allocating a stale primary, and in neither case does it make sense to
restore the retention leases from disk.

With this change we write an empty retention leases file during recovery,
except for the following cases:

- During peer recovery the on-disk leases may be accurate and could be needed
  if the recovery target is made into a primary.

- During recovery from an existing store, as long as we are not
  force-allocating a stale primary.

Relates #37165
2019-03-15 07:49:49 +00:00
Jason Tedor 9181668edf
Stop returning cluster state size by default (#40016)
Computing the compressed size of the cluster state on every invocation
of cluster:monitor/state action is expensive, and the value of this
field is dubious anyway. Therefore we want to remove computing this
field. As a first step, we stop computing and returning this field by
default. To avoid breaking users, we will give them a system property to
use to tide them over until the next major release when we will actually
remove this field. This comes with a deprecation warning too, and the
backport to the appropriate minor will also include a note in the
migration guide. There will be a follow-up to remove this field in the
next major version.
2019-03-14 08:57:55 -04:00
David Turner 049970af3e Only connect to new nodes on new cluster state (#39629)
Today, when applying new cluster state we attempt to connect to all of its
nodes as a blocking part of the application process. This is the right thing to
do with new nodes, and is a no-op on any already-connected nodes, but is
questionable on known nodes from which we are currently disconnected: there is
a risk that we are partitioned from these nodes so that any attempt to connect
to them will hang until it times out. This can dramatically slow down the
application of new cluster states which hinders the recovery of the cluster
during certain kinds of partition.

If nodes are disconnected from the master then it is likely that they are to be
removed as part of a subsequent cluster state update, so there's no need to try
and reconnect to them like this. Moreover there is no need to attempt to
reconnect to disconnected nodes as part of the cluster state application
process, because we periodically try and reconnect to any disconnected nodes,
and handle their disconnectedness reasonably gracefully in the meantime.

This commit alters this behaviour to avoid reconnecting to known nodes during
cluster state application.

Resolves #29025.
2019-03-12 19:26:13 +00:00
Tim Brooks 5612ed97ca
Add log warnings for long running event handling (#39729)
Recently we have had a number of test issues related to blocking
activity occurring on the io thread. This commit adds a log warning for
when handling an event takes more than 150 milliseconds. This is implemented
for the MockNioTransport which is the transport used in
ESIntegTestCase.
2019-03-08 13:07:24 -07:00
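
In rough terms, the warning boils down to timing each event handler and logging when it crosses the threshold; a simplified, hypothetical sketch (the real implementation lives in MockNioTransport and uses its own logging):

```
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.logging.Logger;

/** Hypothetical sketch: warn when handling a single event on the io thread takes too long. */
public class SlowEventWarner {

    private static final Logger logger = Logger.getLogger(SlowEventWarner.class.getName());
    private static final long WARN_THRESHOLD_NANOS = TimeUnit.MILLISECONDS.toNanos(150);

    public static <T> void handleTimed(String eventName, T event, Consumer<T> handler) {
        long start = System.nanoTime();
        try {
            handler.accept(event);
        } finally {
            long tookNanos = System.nanoTime() - start;
            if (tookNanos > WARN_THRESHOLD_NANOS) {
                logger.warning("handling of [" + eventName + "] took ["
                    + TimeUnit.NANOSECONDS.toMillis(tookNanos) + "ms] on the io thread");
            }
        }
    }
}
```
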
Alpar Torok 0f89427eb6
Back port build changes from same version bwc tests (#39744)
* Back port build changes from #39102

This back-ports how versions are determined and how bwc tests are set up from
#39102, without enabling the bwc-from-current-version tests, so it's
easier/possible to backmerge future build changes.
It's expected that the tests are lacking many of the fixes required in
this version to enable them.
2019-03-07 17:25:09 +02:00
Alpar Torok 34ea84948c
Fix bwc tests failure to extract (#39619)
* Set correct packaging for older versions

Continue using zip packages for pre-7.0 versions.
Also includes some other bwc fixes that hid behind this one.
Closes #39441  #39751
2019-03-07 09:07:11 +02:00
Armin Braun bb2f8485f1
Wipe Snapshots Before Indices in RestTests (#39662) (#39763)
* Wipe Snapshots Before Indices in RestTests

* If we have a snapshot ongoing from the previous test and enter this method, then deleting the indices fails, which in turn fails the whole wipe
   * Fixed by first deleting/aborting snapshots
2019-03-06 21:48:17 +01:00
Nhat Nguyen 1fe7cb594f Don’t ack if unable to remove failing replica (#39584)
Today when a replicated write operation fails to execute on a replica,
the primary will reach out to the master to fail that replica (and mark
it stale). We then won't ack that request until the master removes the
failing replica; otherwise, we will lose the acked operation if the
failed replica is still in the in-sync set. However, if a node with the
primary is shutting down, we might ack such a request even though we are
unable to send a shard-failure request to the master. This happens
because we ignore NodeClosedException, which is triggered when the
ClusterService is being closed.

Closes #39467
2019-03-06 15:30:55 -05:00
Simon Willnauer e620fb2e4a Add option to force load term dict into memory (#39741)
Lucene added an optimization to leave the term dictionary on disk
for non-ID-like fields. This change happened very late in the release
process, so it's better to have an escape hatch if certain
use-cases are hurt by this optimization. This setting might be
removed in the future if it turns out to be unnecessary.
2019-03-06 15:29:04 +01:00
David Turner 295e39a8c8 Drop node if asymmetrically partitioned from master (#39598)
When a node is joining the cluster we ensure that it can send requests to the
master _at that time_. If it joins the cluster and _then_ loses the ability to
send requests to the master then it should be removed from the cluster. Today
this is not the case: the master can still receive responses to its follower
checks, and receives acknowledgements to cluster state publications, so has no
reason to remove the node.

This commit changes the handling of follower checks so that they fail if they
come from a master that the other node was following but which it now believes
to have failed.
2019-03-06 09:41:57 +00:00
Armin Braun aaecaf59a4
Optimize Bulk Message Parsing and Message Length Parsing (#39634) (#39730)
* Optimize Bulk Message Parsing and Message Length Parsing

* findNextMarker took almost 1ms per invocation during the PMC rally track
  * Fixed to be about an order of magnitude faster by using Netty's bulk `ByteBuf` search
* It is unnecessary to instantiate an object (the input stream wrapper) and throw it away, just to read the `int` length from the message bytes
  * Fixed by adding bulk `int` read to BytesReference
2019-03-06 08:13:15 +01:00
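
A hedged illustration of the two optimizations using Netty's public ByteBuf API (standalone example, not the actual Elasticsearch code):

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.ByteProcessor;

import java.nio.charset.StandardCharsets;

public class BulkBufferOps {

    /** Find the next newline marker with Netty's bulk byte search instead of a byte-by-byte loop. */
    public static int findNextLineBreak(ByteBuf buffer, int fromIndex) {
        return buffer.forEachByte(fromIndex, buffer.writerIndex() - fromIndex, ByteProcessor.FIND_LF);
    }

    /** Read a 4-byte length directly from the buffer, with no throw-away stream wrapper. */
    public static int readMessageLength(ByteBuf buffer, int offset) {
        return buffer.getInt(offset);
    }

    public static void main(String[] args) {
        ByteBuf buffer = Unpooled.copiedBuffer("{\"index\":{}}\n{\"field\":1}\n", StandardCharsets.UTF_8);
        System.out.println("next marker at index " + findNextLineBreak(buffer, 0));
        buffer.release();
    }
}
```
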
Yannick Welsch 936dbb00e3
Isolate Zen1 (#39470)
Cherry-picks a few commits from #39466 to align 7.x with master branch.
2019-03-04 15:51:17 +01:00
Adrien Grand 934946a232
Don't swallow exception in ThreadPool.terminate. (#39038) (#39623)
The use of `closeWhileHandlingException` means that any exception while trying
to close the threadpool is going to be swallowed.

Relates #39030
2019-03-04 10:58:29 +01:00
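
For context, the difference is between Lucene's IOUtils.closeWhileHandlingException, which suppresses any failure, and a close path that lets the exception reach the caller; a hedged illustration (helper names invented):

```
import java.io.Closeable;
import java.io.IOException;
import java.io.UncheckedIOException;

import org.apache.lucene.util.IOUtils;

public class CloseExamples {

    /** Swallows any exception thrown while closing (the behaviour being moved away from). */
    public static void closeQuietly(Closeable resource) {
        IOUtils.closeWhileHandlingException(resource);
    }

    /** Surfaces the close failure to the caller instead of hiding it. */
    public static void closeOrFail(Closeable resource) {
        try {
            IOUtils.close(resource);
        } catch (IOException e) {
            throw new UncheckedIOException("failed to close resource", e);
        }
    }
}
```
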
Tanguy Leroux e005eeb0b3
Backport support for replicating closed indices to 7.x (#39506)(#39499)
Backport support for replicating closed indices (#39499)
    
    Before this change, closed indexes were simply not replicated. It was therefore
    possible to close an index and then decommission a data node without knowing
    that this data node contained shards of the closed index, potentially leading to
    data loss. Shards of closed indices were not completely taken into account when
    balancing the shards within the cluster, or automatically replicated through shard
    copies, and they were not easily movable from node A to node B using APIs like
    Cluster Reroute without being fully reopened and closed again.
    
    This commit changes the logic executed when closing an index, so that its shards
    are not just removed and forgotten but are instead reinitialized and reallocated on
    data nodes using an engine implementation which does not allow searching or
     indexing, which has a low memory overhead (compared with searchable/indexable
    opened shards) and which allows shards to be recovered from peer or promoted
    as primaries when needed.
    
    This new closing logic is built on top of the new Close Index API introduced in
    6.7.0 (#37359). Some pre-closing sanity checks are executed on the shards before
    closing them, and closing an index on an 8.0 cluster will reinitialize the index shards
    and therefore impact the cluster health.
    
    Some APIs have been adapted to make them work with closed indices:
    - Cluster Health API
    - Cluster Reroute API
    - Cluster Allocation Explain API
    - Recovery API
    - Cat Indices
    - Cat Shards
    - Cat Health
    - Cat Recovery
    
    This commit contains all the following changes (most recent first):
    * c6c42a1 Adapt NoOpEngineTests after #39006
    * 3f9993d Wait for shards to be active after closing indices (#38854)
    * 5e7a428 Adapt the Cluster Health API to closed indices (#39364)
    * 3e61939 Adapt CloseFollowerIndexIT for replicated closed indices (#38767)
    * 71f5c34 Recover closed indices after a full cluster restart (#39249)
    * 4db7fd9 Adapt the Recovery API for closed indices (#38421)
    * 4fd1bb2 Adapt more tests suites to closed indices (#39186)
    * 0519016 Add replica to primary promotion test for closed indices (#39110)
    * b756f6c Test the Cluster Shard Allocation Explain API with closed indices (#38631)
    * c484c66 Remove index routing table of closed indices in mixed versions clusters (#38955)
    * 00f1828 Mute CloseFollowerIndexIT.testCloseAndReopenFollowerIndex()
    * e845b0a Do not schedule Refresh/Translog/GlobalCheckpoint tasks for closed indices (#38329)
    * cf9a015 Adapt testIndexCanChangeCustomDataPath for replicated closed indices (#38327)
    * b9becdd Adapt testPendingTasks() for replicated closed indices (#38326)
    * 02cc730 Allow shards of closed indices to be replicated as regular shards (#38024)
    * e53a9be Fix compilation error in IndexShardIT after merge with master
    * cae4155 Relax NoOpEngine constraints (#37413)
    * 54d110b [RCI] Adapt NoOpEngine to latest FrozenEngine changes
    * c63fd69 [RCI] Add NoOpEngine for closed indices (#33903)
    
    Relates to #33888
2019-03-01 14:48:26 +01:00
Lee Hinman dae48ba262 Add details about what acquired the shard lock last (#38807)
This adds a `details` parameter to shard locking in `NodeEnvironment`. This is
intended to be used for diagnosing issues such as

```
  1> [2019-02-11T14:34:19,262][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node_s0] [.tasks/oSYOG0-9SHOx_pfAoiSExQ] deleting index
  1> [2019-02-11T14:34:19,279][WARN ][o.e.i.IndicesService     ] [node_s0] [.tasks/oSYOG0-9SHOx_pfAoiSExQ] failed to delete index
  1> org.elasticsearch.env.ShardLockObtainFailedException: [.tasks][0]: obtaining shard lock timed out after 0ms
  1> 	at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:736) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:655) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:601) ~[main/:?]
  1> 	at org.elasticsearch.env.NodeEnvironment.deleteIndexDirectorySafe(NodeEnvironment.java:554) ~[main/:?]
```

In the hope that we will be able to determine why the shard is still locked.

Relates to #30290 as well as some other CI failures
2019-02-28 10:50:47 -07:00
Tal Levy f538b30af9
ensure no initializing shards during cluster cleanup (#39283) (#39480)
There are testing situations where newly created indices
are wiped before they are fully initialized. This results
in an edge case in the shard-locking strategy where an index
cannot be deleted.

This should fix that.
2019-02-27 15:56:33 -08:00
Armin Braun 28b771f5db
Remove Dead Code Test Infrastructure (#39192) (#39436)
* Just removing some obviously unused things
2019-02-27 09:38:47 +01:00
Tim Brooks f24dae302d
Make security tests transport agnostic (#39411)
Currently there are two security tests that specifically target the
netty security transport. This PR moves the client authentication tests
into `AbstractSimpleSecurityTransportTestCase` so that the nio transport
will also be tested.

Additionally the work to build transport configurations is moved out of
the netty transport and tested independently.
2019-02-26 18:55:19 -07:00
Jason Tedor a6c0166d68
Renew retention leases while following (#39335)
This commit is the final piece of the integration of CCR with retention
leases. Namely, we periodically renew retention leases and advance the
retaining sequence number while following.
2019-02-25 17:14:19 -05:00
Nhat Nguyen 48219112e3 Do not wait for advancement of checkpoint in recovery (#39006)
With this change, we won't wait for the local checkpoint to advance to
the max_seq_no before starting phase2 of peer-recovery. We also remove
the sequence number range check in peer-recovery. We can safely do these
thanks to Yannick's finding.

The replication group to be used is currently sampled after indexing
into the primary (see `ReplicationOperation` class). This means that
when initiating tracking of a new replica, we have to consider the
following two cases:

- There are operations for which the replication group has not been
sampled yet. As we initiated the new replica as tracking, we know that
those operations will be replicated to the new replica and follow the
typical replication group semantics (e.g. marked as stale when
unavailable).

- There are operations for which the replication group has already been
sampled. These operations will not be sent to the new replica.  However,
we know that those operations are already indexed into Lucene and the
translog on the primary, as the sampling is happening after that. This
means that by taking a snapshot of Lucene or the translog, we will be
getting those ops as well. What we cannot guarantee anymore is that all
ops up to `endingSeqNo` are available in the snapshot (i.e.  also see
comment in `RecoverySourceHandler` saying `We need to wait for all
operations up to the current max to complete, otherwise we can not
guarantee that all operations in the required range will be available
for replaying from the translog of the source.`). This is not needed,
though, as we can no longer guarantee that max seq no == local
checkpoint.

Relates #39000
Closes #38949

Co-authored-by: Yannick Welsch <yannick@welsch.lu>
2019-02-25 12:10:14 -05:00
Armin Braun 50d2736746
Fix Deadlock from Thread.suspend in Test (#39261) (#39341)
* The lambda invoked by the `lockedExecutor` eventually gets JITed (which runs a static initializer that we will, with a very tiny chance, suspend a thread in).
   * Fixed by creating the `Runnable` in the main test thread and using the same instance in all threads
* Closes #35686
2019-02-25 09:15:19 +01:00
Tim Brooks 44df76251f
Rebuild remote connections on profile changes (#39146)
Currently remote compression and ping schedule settings are dynamic.
However, we do not listen for changes. This commit adds listeners for
changes to those two settings. Additionally, when those settings change
we now close existing connections and open new ones with the settings
applied.

Fixes #37201.
2019-02-21 14:00:39 -07:00
Armin Braun 1a21cc0357
Simplify and Fix Synchronization in InternalTestCluster (#39168) (#39241)
* Simplify and Fix Synchronization in InternalTestCluster (#39168)

* Remove unnecessary `synchronized` statements
* Make `Predicate`s constants where possible
* Cleanup some stream usage
* Make unsafe public methods `synchronized`
* Closes #37965
* Closes #37275
* Closes #37345
2019-02-21 16:27:18 +01:00
Lee Hinman d9de899316 Wrap accounting breaker check in assertBusy (#39211)
There may be situations where indices have not yet been closed from a Lucene
perspective, causing the breaker to not immediately be at 0

Relates to #30290
2019-02-21 08:00:31 -07:00
Marios Trivyzas 1316825f52
Replace superfluous usage of Counter with Supplier (#39048) (#39225)
`Counter` was used as a functional argument to pass
the relative cached time before the `Supplier` interface was introduced.
2019-02-21 12:42:54 +02:00
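
A hypothetical sketch of the pattern (names invented): the component that needs the relative cached time takes a plain supplier rather than a Counter.

```
import java.util.function.LongSupplier;

/** Hypothetical consumer of "relative cached time": takes a supplier, not a Counter. */
public class SearchTimeProvider {

    private final LongSupplier relativeTimeInMillis;

    public SearchTimeProvider(LongSupplier relativeTimeInMillis) {
        this.relativeTimeInMillis = relativeTimeInMillis;
    }

    public long elapsedSince(long startMillis) {
        return relativeTimeInMillis.getAsLong() - startMillis;
    }
}
```

The caller wires in whatever produces the cached relative time (for example, a method reference to the thread pool's time source) instead of passing a Counter around.
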
Nhat Nguyen 820ba8169e Add retention leases replication tests (#38857)
This commit introduces the retention leases to ESIndexLevelReplicationTestCase,
then adds some tests verifying that the retention leases replication works
correctly in spite of the presence of the primary failover or out of order
delivery of retention leases sync requests.

Relates #37165
2019-02-20 19:21:00 -05:00
Tal Levy cb7e3708bc
Rollup jobs should be cleaned up before indices are deleted (#38930) (#39144)
Rollup jobs should be stopped + deleted before the indices are removed.
It's possible for an active rollup job to issue a bulk request just as the test
ends and the cleanup code deletes all indices. The in-flight bulk
request will then stall + error because the index no longer exists...
but this process might take longer than the StopRollup timeout.

Which means the test fails, and often fails several other tests since
the job is still active (e.g. other tests cannot create the same-named
job, or fail to stop the job in their cleanup because it's still stalled).

This tends to knock over several tests before the bulk finally times
out and the job shuts down.

Instead, we need to simply stop jobs first.  Inflight bulks will resolve
quickly, and we can carry on with deleting indices after the jobs are
confirmed inactive.

stop-job.asciidoc tended to trigger this issue because it executed
an async stop API and then exited, which set up the above situation. It
can and did happen with other tests, though. As an extra precaution,
the doc test was modified to substitute in wait_for_completion
to help head off these issues too.
2019-02-20 11:12:01 -08:00
Adrien Grand d8852b83d0
Don't swallow IOExceptions in InternalTestCluster. (#39068)
Relates #39030
2019-02-19 15:03:47 +01:00
Ioannis Kakavas 59e9a0f4f4 Disable specific locales for tests in fips mode (#38938)
* Disable specific locales for tests in fips mode

The Bouncy Castle FIPS provider that we use for running our tests
in fips mode has an issue with locale sensitive handling of Dates as
described in https://github.com/bcgit/bc-java/issues/405

This causes certificate validation to fail if any given test that
includes some form of certificate validation happens to run in one
of these locales. This manifested earlier in #33081, which was
handled insufficiently in #33299.

This change ensures that the problematic 3 locales

* th-TH
* ja-JP-u-ca-japanese-x-lvariant-JP
* th-TH-u-nu-thai-x-lvariant-TH

will not be used when running our tests in a FIPS 140 JVM. It also
reverts #33299
2019-02-19 08:46:08 +02:00
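
A hypothetical sketch of how a test harness might skip the three problematic locales listed above when running under a FIPS 140 JVM (class and method names invented):

```
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

/** Hypothetical helper: filter out locales known to break the FIPS provider's date handling. */
public class FipsLocaleFilter {

    private static final List<String> PROBLEMATIC_TAGS = Arrays.asList(
        "th-TH",
        "ja-JP-u-ca-japanese-x-lvariant-JP",
        "th-TH-u-nu-thai-x-lvariant-TH");

    public static boolean isUsableInFips(Locale locale) {
        return PROBLEMATIC_TAGS.contains(locale.toLanguageTag()) == false;
    }
}
```
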
David Roberts ae9243ad0a Reduce single node test cleanup logging (#39060)
As per https://github.com/elastic/elasticsearch/pull/39049#discussion_r257719530
2019-02-18 17:38:49 +00:00
Nhat Nguyen 2947ccf5c3 Add remote recovery to ShardFollowTaskReplicationTests (#39007)
We simulate remote recovery in ShardFollowTaskReplicationTests 
by bootstrapping the follower with the safe commit of the leader.

Relates #35975
2019-02-18 09:57:56 -05:00