This fixes two bugs:
- A recently introduced bug where an NPE is thrown if a catch block is
empty.
- A long-standing bug where an NPE is thrown if multiple consecutive catch
blocks for the same try block are empty.
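For reference, a minimal Java illustration of the two shapes described above (purely illustrative; the affected component is not named in this message):

```java
public class CatchShapes {
    static void doSomething() throws Exception {}

    public static void main(String[] args) {
        try {
            doSomething();
        } catch (Exception e) {
            // a single empty catch block (the recently introduced bug)
        }

        try {
            doSomething();
        } catch (IllegalArgumentException e) {
            // first empty catch block
        } catch (Exception e) {
            // second empty catch block in a row (the long-standing bug)
        }
    }
}
```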
Closing a `RemoteClusterConnection` concurrently with trying to connect
could result in the listener being invoked twice.
This fixes
RemoteClusterConnectionTest#testCloseWhileConcurrentlyConnecting
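The general guard pattern looks roughly like the following sketch (a minimal illustration with hypothetical names, not the actual `RemoteClusterConnection` fix): whichever of the two racing paths finishes first is the only one to notify the listener.

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class OnceListener {
    private final AtomicBoolean notified = new AtomicBoolean();
    private final Runnable delegate;

    OnceListener(Runnable delegate) {
        this.delegate = delegate;
    }

    void onCompleted() {
        // Only the first caller (connect completion or close) wins the race.
        if (notified.compareAndSet(false, true)) {
            delegate.run();
        }
    }
}
```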
Closes #45845
This commit changes the tests added in #45383 so that the fixture that
emulates the S3 service now sometimes consumes all the request body
before sending an error, sometimes consumes only a part of the request
body, and sometimes consumes nothing. The idea is to beef up the tests
that write blobs, because the client's retry logic relies on marking and
resetting the blob's input stream.
This pull request also changes testWriteBlobWithRetries() so that it
(rarely) tests with a large blob (up to 1 MB), which is more than the
client's default read limit on input streams (131 KB).
Finally, it optimizes the ZeroInputStream so that it is a bit more
efficient (it now works using an internal buffer and System.arraycopy()).
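A rough sketch of what such an optimized stream can look like (illustrative, not the exact test fixture; the real one also supports mark/reset, which is omitted here):

```java
import java.io.InputStream;

final class ZeroInputStream extends InputStream {
    private static final byte[] ZEROES = new byte[8192]; // shared all-zero buffer
    private long remaining;

    ZeroInputStream(long length) {
        this.remaining = length;
    }

    @Override
    public int read() {
        if (remaining <= 0) {
            return -1;
        }
        remaining--;
        return 0;
    }

    @Override
    public int read(byte[] b, int off, int len) {
        if (remaining <= 0) {
            return -1;
        }
        // Bulk-copy zeros from the shared buffer instead of filling byte-by-byte.
        int n = (int) Math.min(Math.min(len, ZEROES.length), remaining);
        System.arraycopy(ZEROES, 0, b, off, n);
        remaining -= n;
        return n;
    }
}
```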
In the case of an in-progress snapshot, this endpoint was broken because
it tried to execute repository operations in the callback on a
transport thread, which is not allowed (only the generic or snapshot
pools are allowed here).
The security indices were being created without specifying the
refresh interval, which means they would inherit a value from any
templates that exist.
However, certain security functionality depends on being able to
wait for refresh (the `wait_for` refresh policy), and errors occur
(e.g. in Kibana) if the refresh interval exceeds 30s.
This commit changes the security indices configuration so they are
always created with a 1s refresh interval. This prevents any templates
from inadvertently interfering with the proper functioning of security.
It is possible for an administrator to explicitly change the refresh
interval after the indices have been created.
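For illustration, the kind of explicit setting this amounts to (a sketch using the `Settings` builder; not the actual index-creation code path):

```java
import org.elasticsearch.common.settings.Settings;

class SecurityIndexSettings {
    // Pin the refresh interval at creation time so a template-provided value
    // cannot override it (illustrative, not the actual index-creation code).
    static Settings refreshInterval() {
        return Settings.builder()
                .put("index.refresh_interval", "1s")
                .build();
    }
}
```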
Backport of: #45434
In the Sys V init scripts, we check for Java. This is not needed, since
the same check happens in elasticsearch-env when starting up. Having
this duplicate check has bitten us in the past, where we made a change
to the logic in elasticsearch-env, but missed updating it here. Since
there is no need for this duplicate check, we remove it from the Sys V
init scripts.
This commit namespaces the existing processors setting under the "node"
namespace. In doing so, we deprecate the existing `processors` setting in
favor of `node.processors`.
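For illustration, a sketch of the renamed setting (configuration would normally live in elasticsearch.yml; this snippet just shows the new key):

```java
import org.elasticsearch.common.settings.Settings;

class ProcessorsExample {
    // The renamed setting; the old `processors` key is still honored but now
    // emits a deprecation warning (illustrative snippet).
    static Settings nodeProcessors(int processors) {
        return Settings.builder()
                .put("node.processors", processors)
                .build();
    }
}
```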
Customers occasionally discover a known behavior in Elasticsearch's pagination that does not appear to be documented. This warning is intended to educate customers about this behavior while still highlighting alternative solutions.
This change adds a new SSL context,
`xpack.notification.email.ssl.*`,
that supports the standard SSL configuration settings (truststore,
verification_mode, etc). This SSL context is used when configuring
outbound SMTP properties for watcher email notifications.
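For illustration, a sketch of how such settings might be supplied (the keys follow the new namespace; the values shown are examples, not defaults):

```java
import org.elasticsearch.common.settings.Settings;

class EmailSslExample {
    // Illustrative values under the new namespace; the truststore path and
    // verification mode shown here are examples, not defaults.
    static Settings emailSsl() {
        return Settings.builder()
                .put("xpack.notification.email.ssl.verification_mode", "full")
                .put("xpack.notification.email.ssl.truststore.path", "certs/smtp-truststore.jks")
                .build();
    }
}
```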
Backport of: #45272
Since #45136, we use soft-deletes instead of the translog in peer recovery.
There's no need to retain extra translog to increase the chance of
operation-based recoveries. This commit ignores the translog retention
policy if soft-deletes are enabled so we can discard the translog more
quickly.
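A minimal sketch of the gating logic, with hypothetical names (the actual change lives in the engine/translog code):

```java
final class TranslogRetention {
    static long effectiveRetentionSizeInBytes(boolean softDeletesEnabled, long configuredBytes) {
        if (softDeletesEnabled) {
            return 0; // rely on soft-deletes for operation history; discard translog eagerly
        }
        return configuredBytes; // otherwise honor the configured retention policy
    }
}
```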
Backport of #45473
Relates #45136
Today, when rolling a new translog generation, we block all write
threads until a new generation is created. This choice is perfectly
fine except in a highly concurrent environment with the translog
async setting. We can reduce the blocking time by pre-syncing the
current generation without the write lock before rolling. The new step
fsyncs most of the data of the current generation without blocking
write threads.
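A minimal sketch of the idea (hypothetical names, not the actual Translog class): fsync the current generation before taking the write lock, so the lock is held only for the cheap remainder of the roll.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class GenerationRoller {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    void rollGeneration() throws Exception {
        syncCurrentGeneration();          // pre-sync outside the write lock
        lock.writeLock().lock();
        try {
            syncCurrentGeneration();      // sync only the small tail written since
            createNewGeneration();        // then roll while writers are blocked
        } finally {
            lock.writeLock().unlock();
        }
    }

    void syncCurrentGeneration() throws Exception { /* fsync the current file */ }
    void createNewGeneration() throws Exception { /* open a new translog file */ }
}
```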
Closes #45371
* [ML] Adding data frame analytics stats to _usage API (#45820)
* [ML] Adding data frame analytics stats to _usage API
* making the size of analytics stats 10k
* adjusting backport
Adds index versioning for the internal data frame transform index. Allows new indices to be created and referenced; `GET` requests now query over the index pattern and take the latest doc (based on index name).
When Watcher is stopped while there are still outstanding watches running,
Watcher will report itself as stopped. In normal cases, this is not problematic.
However, for integration tests Watcher is started and stopped between
each test to help ensure a clean slate for each test. The tests block
only on the stopped state and make an implicit assumption that all watches are
finished if Watcher is stopped. This is an incorrect assumption, since
stopped really means "I will not accept any more watches". This can lead to
unpredictable behavior in the tests, such as the message "Watch is already queued
in thread pool" and the state "not_executed_already_queued".
This can also pollute the .watcher-history index if watches linger between tests.
This commit changes the semantics of manually stopping Watcher to now mean:
"I will not accept any more watches AND all running watches are complete".
There is now an intermediate "stopping" state and a callback that allows the
transition to a "stopped" state when all watches have completed.
Additionally, since this impacts how long the tests will block waiting for the
"stopped" state, the timeout has been increased.
Related: #42409
In internal test cluster tests we check that wiping all indices was acknowledged,
but in REST tests we didn't.
This aligns the behavior in both kinds of tests.
Relates #45605, which might be caused by unacknowledged deletes that were just slow.
* [DOCS] Add template docs to scripts. Reorder template examples.
* Adds a 'Search template' section to the 'How to use scripts' chapter.
This links to the 'Search template' chapter for detailed info and
examples.
* Reorders and retitles several examples in the 'Search template'
chapter. This is primarily to make examples for storing, deleting, and
using search templates more prominent.
* Change `<templatename>` to `<templateid>`
Follow up on #32806.
The system property `es.http.cname_in_publish_address` is deprecated
as of 7.0.0, and a deprecation warning should be emitted if the
property is specified.
This PR will go to 7.x and master.
A follow-up PR to remove the `es.http.cname_in_publish_address` property
completely will go to master.
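The check amounts to something like the following sketch (illustrative; the real code emits the warning through Elasticsearch's deprecation logging rather than stderr):

```java
final class CnameDeprecationCheck {
    // Warn once at startup if the deprecated system property is set.
    static void warnIfDeprecatedPropertySet() {
        if (System.getProperty("es.http.cname_in_publish_address") != null) {
            System.err.println("WARN: the es.http.cname_in_publish_address system property "
                    + "is deprecated and will be removed in a future version");
        }
    }
}
```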
(cherry picked from commit a5ceca7715818f47ec87dd5f17f8812c584b592b)
This commit adds tests to verify the behavior of the S3BlobContainer and
its underlying AWS SDK client when the remote S3 service responds with
errors or does not respond at all. The expected behavior is that requests are
retried multiple times before the client gives up and the S3BlobContainer
bubbles up an exception.
The tests verify the behavior of BlobContainer.writeBlob() and
BlobContainer.readBlob(). In the case of S3, writing a blob can be executed
as a single upload or using multipart requests; the tests check both scenarios
by writing a small and then a large blob.
* There is no point in listing out every shard over and over when the `index-N` blob in the shard contains a list of all the files
* Rebuilding the `index-N` from the `snap-${uuid}.dat` blobs does not provide any material benefit. It would only help in the corner case of a corrupted `index-N` but otherwise uncorrupted blobs, since we neither check the correctness of the content of all segment blobs nor do we do a similar recovery at the root of the repository.
* Also, at least in version `6.x` we only mark a shard snapshot as successful after writing out the updated `index-N` blob so all snapshots that would work with `7.x` and newer must have correct `index-N` blobs
=> Removed the rebuilding of the `index-N` content from `snap-${uuid}.dat` files and moved to only listing `index-N` when taking a snapshot instead of listing all files
=> Removed check of file existence against physical blob listing
=> Kept full listing on the delete side to retain full cleanup of blobs that aren't referenced by the `index-N`
This PR introduces a mechanism to cancel a search task when its corresponding connection gets closed. That would relieve users of having to manually deal with tasks and cancel them if needed. In particular, the process of finding the task_id requires calling the get tasks API, which needs to call every node in the cluster.
The implementation is based on associating each http channel with its currently running search task, and cancelling the task when the previously registered close listener gets called.
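A rough sketch of the association (hypothetical types; the real implementation hooks into the HTTP layer's close listeners):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

final class ChannelTaskTracker<C, T> {
    private final Map<C, T> runningTasks = new ConcurrentHashMap<>();

    void register(C channel, T task) {
        runningTasks.put(channel, task);   // the task currently running on this channel
    }

    void completed(C channel) {
        runningTasks.remove(channel);      // the task finished normally
    }

    // Invoked by the previously registered close listener for this channel.
    void channelClosed(C channel, Consumer<T> cancel) {
        T task = runningTasks.remove(channel);
        if (task != null) {
            cancel.accept(task);           // cancel the in-flight search task
        }
    }
}
```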