* Fix DedicatedClusterSnapshotRestoreIT testSnapshotWithStuckNode
* See the comment in the test: the problem is that when the snapshot delete completes only partially on master failover and the retry fails with `SnapshotMissingException`, no repository cleanup is run => we still failed even with the repo cleanup logic now in the delete path
* Fixed the test by rerunning a create-snapshot-and-delete loop to clean up the repo before verifying file counts
* Closes #39852
The REST client does not communicate over the transport protocol.
However, in the move to support all APIs in the HLRC, some response
classes were copied while still extending ActionResponse, which is
meant strictly for the transport protocol. This commit removes uses of
that base class from the HLRC.
In today's test suite indices mostly use the default value of `12h` for the
`index.soft_deletes.retention_lease.period` setting, which in the context of
the test suite essentially means "never expires". In fact, the tests should all
behave correctly even if the lease period is much shorter; tests that rely on
leases not expiring should configure their indices appropriately.
This commit randomises the lease expiry time for indices created during
tests that do not set a specific value for this setting.
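Below is a minimal sketch, in plain Java, of how such randomisation could look; the class, helper name, and bounds are assumptions for illustration, not the actual test-framework code:

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch: pick a random value for
 * "index.soft_deletes.retention_lease.period" for test indices that do not set
 * the setting explicitly, instead of always relying on the 12h default.
 */
public class RetentionLeasePeriodRandomizer {

    private static final Random RANDOM = new Random();

    /** Returns a random lease period between 100ms and the 12h default, as a settings string. */
    static String randomRetentionLeasePeriod() {
        long upperBoundMillis = TimeUnit.HOURS.toMillis(12);
        long millis = 100 + (long) (RANDOM.nextDouble() * (upperBoundMillis - 100));
        return millis + "ms";
    }

    public static void main(String[] args) {
        // e.g. put this value under "index.soft_deletes.retention_lease.period"
        // when building the settings of a randomly created test index
        System.out.println(randomRetentionLeasePeriod());
    }
}
```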
* Missed this one in #42189; it randomly runs into a situation where the mock repository is broken in such a way that we can't get to a consistent end state via a delete
* Closes #43498
Today we prevent any index that is actively snapshotted or restored from being closed.
This verification is done during the execution of the first phase of index closing
(ie before blocking the indices).
We should also do this verification again in the last phase of index closing
(ie after the shard sanity checks and right before actually changing the index
state and the routing table) because a snapshot/restore could sneak in while
the shards are verified-before-close.
Provides a hook for aggregations to introspect the `ValuesSourceType` for a user-supplied missing value on an unmapped field, when the type would otherwise be `ANY`. Mapped field behavior is unchanged and still applies the `ValuesSourceType` of the field. This PR just provides the hook for doing this; no existing aggregations have their behavior changed.
* In the current codebase it is hardly obvious which code operates on a shard and is run by a data node, and which code operates on the global metadata and is run on master
* Fixed by adjusting the method names accordingly
* The nested context classes don't add much if any value; they simply spread out the parameters that go into a shard snapshot create or delete all over the place, since their constructors can be inlined in all spots
* Fixed by flattening the nested classes into BlobStoreRepository
* Also:
  * Inlined the other single-use inner classes
  * Removed some obvious dead code
  * Moved assert methods that were only used in a single test class to the child they belong to
  * Inlined some redundant methods
Data frame task responses had logic to return an HTTP 500 status code if there were
any node or task failures, even if other tasks in the same request completed correctly.
This is different to how other task responses are handled, where a 200 is always
returned and the client is left to check for failures. Returning a 500 also breaks
the high level rest client, so always return a 200.
Closes #44011
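The sketch below illustrates the general pattern, not the actual data frame response classes (all names here are placeholders): the response always maps to HTTP 200 and carries the node/task failures in the body for the client to inspect.

```java
import java.util.Collections;
import java.util.List;

/**
 * Placeholder sketch of a task-style response that never escalates partial
 * failures to a 500: the status is always 200 and failures are reported in the
 * response body instead.
 */
final class TasksResponseSketch {

    private final List<String> nodeFailures;
    private final List<String> taskFailures;

    TasksResponseSketch(List<String> nodeFailures, List<String> taskFailures) {
        this.nodeFailures = nodeFailures == null ? Collections.emptyList() : nodeFailures;
        this.taskFailures = taskFailures == null ? Collections.emptyList() : taskFailures;
    }

    /** Always 200; clients check hasFailures() / the failure lists themselves. */
    int restStatus() {
        return 200;
    }

    boolean hasFailures() {
        return nodeFailures.isEmpty() == false || taskFailures.isEmpty() == false;
    }
}
```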
If an index is the result of a shrink then it will have a value set for
`index.routing.allocation.initial_recovery._id`. If this index is subsequently
split then this value will be copied over, forcing the initial allocation of
the split shards to occur on the node on which the shrink took place. Moreover
if this node no longer exists then the split will fail. This commit suppresses
the copying of this setting when splitting an index.
Fixes #43955
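A conceptual sketch of the fix, using placeholder types rather than the real resize code: when copying source index settings for a split, the `index.routing.allocation.initial_recovery._id` key is simply not carried over.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch: build the target index settings for a resize operation
 * and drop the shrink-time initial_recovery pin when the operation is a split,
 * so the split shards are not forced onto the node that performed the shrink.
 */
final class SplitSettingsSketch {

    private static final String INITIAL_RECOVERY_ID = "index.routing.allocation.initial_recovery._id";

    enum ResizeType { SHRINK, SPLIT }

    static Map<String, String> targetSettings(Map<String, String> sourceSettings, ResizeType type) {
        Map<String, String> target = new HashMap<>(sourceSettings);
        if (type == ResizeType.SPLIT) {
            target.remove(INITIAL_RECOVERY_ID); // do not copy the pin to the split target
        }
        return target;
    }
}
```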
* Use the ability to list child "folders" in the blob store to implement a recursive delete of all stale index folders during cleanup, instead of using the diff between two `RepositoryData` instances to cover aborted deletes (see the sketch below)
* Runs after every delete operation
* Relates #13159 (fixing most of the issues caused by unreferenced indices, leaving only some meta files to be cleaned up)
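The sketch below shows the set-difference idea with placeholder interfaces (not the real BlobStoreRepository API): list the child folders of the indices container and recursively delete every folder that the current `RepositoryData` no longer references.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch of the cleanup: anything under indices/ that is not
 * referenced by the current RepositoryData is considered stale and deleted.
 */
final class StaleIndexCleanupSketch {

    interface BlobFolder {
        Set<String> listChildFolders();      // e.g. per-index UUID directories
        void deleteChildFolder(String name); // recursive delete of one child
    }

    static void cleanupStaleIndices(BlobFolder indicesFolder, Set<String> referencedIndexUuids) {
        Set<String> stale = new HashSet<>(indicesFolder.listChildFolders());
        stale.removeAll(referencedIndexUuids);
        for (String uuid : stale) {
            indicesFolder.deleteChildFolder(uuid);
        }
    }
}
```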
Joda allowed the decimal separator in date_optional_time and strict_date_optional_time to be either a dot `.` or a comma `,`.
For our java.time implementation we should also extend this to strict_date_optional_time-nanos.
The approach to fix this is the same as in the iso8601 parser.
closes #43730
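For illustration, here is a standalone java.time formatter (not the Elasticsearch formatter itself) that accepts either separator for the fractional seconds via two optional sections, which is the general approach described above:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

/**
 * Parses timestamps whose fractional seconds are introduced by either '.' or ','.
 */
public class FractionSeparatorExample {

    private static final DateTimeFormatter FORMATTER = new DateTimeFormatterBuilder()
        .append(DateTimeFormatter.ISO_LOCAL_DATE)
        .appendLiteral('T')
        .appendValue(ChronoField.HOUR_OF_DAY, 2)
        .appendLiteral(':')
        .appendValue(ChronoField.MINUTE_OF_HOUR, 2)
        .appendLiteral(':')
        .appendValue(ChronoField.SECOND_OF_MINUTE, 2)
        // either ".123456789" ...
        .optionalStart().appendLiteral('.').appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, false).optionalEnd()
        // ... or ",123456789"
        .optionalStart().appendLiteral(',').appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, false).optionalEnd()
        .appendOffsetId()
        .toFormatter();

    public static void main(String[] args) {
        // both lines parse to the same instant
        System.out.println(OffsetDateTime.parse("2019-07-09T12:00:00.123456789Z", FORMATTER));
        System.out.println(OffsetDateTime.parse("2019-07-09T12:00:00,123456789Z", FORMATTER));
    }
}
```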
* Provide an Option to Use Path-Style-Access with S3 Repo
* As discussed, added the option to use path-style access back again and deprecated it (illustrated below)
* Defaulted to `false`
* Added a warning to the docs
* Closes #41816
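For context, the sketch below shows what path-style access toggles at the SDK level (AWS SDK for Java v1); the endpoint and region are placeholders and this is not the repository plugin's own client construction:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

/**
 * With path-style access enabled, requests address the bucket in the URL path
 * (https://s3.example.com/my-bucket/key) instead of as a virtual host
 * (https://my-bucket.s3.example.com/key).
 */
public class PathStyleAccessExample {

    public static void main(String[] args) {
        AmazonS3 client = AmazonS3ClientBuilder.standard()
            .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("https://s3.example.com", "us-east-1"))
            .withPathStyleAccessEnabled(true) // mirrors the (deprecated) repository option, which defaults to false
            .build();
        // the client now addresses buckets as https://s3.example.com/<bucket>/<key>
        client.shutdown();
    }
}
```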
This PR enables the indexing optimization using sequence numbers on
replicas. With this optimization, indexing on replicas should be faster
and use less memory as it can forgo the version lookup when possible.
This change also deactivates the append-only optimization on replicas.
Relates #34099
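A conceptual sketch of the replica-side decision (placeholder names, not the real Engine code): an operation whose sequence number is above the highest sequence number of any update or delete seen so far cannot conflict with an existing document, so the replica may skip the version lookup.

```java
/**
 * Hypothetical sketch of the optimization described above.
 */
final class ReplicaIndexingSketch {

    // highest seq# of any update or delete processed so far (illustrative field)
    private long maxSeqNoOfUpdatesOrDeletes = -1;

    /** True if the version/uid lookup can be skipped for this indexing operation. */
    boolean canSkipVersionLookup(long operationSeqNo) {
        return operationSeqNo > maxSeqNoOfUpdatesOrDeletes;
    }

    void onUpdateOrDelete(long seqNo) {
        maxSeqNoOfUpdatesOrDeletes = Math.max(maxSeqNoOfUpdatesOrDeletes, seqNo);
    }
}
```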
Currently, when a document is missing a vector value, vector functions
return 0 as the score for this document. We think this is incorrect
behaviour.
With this change, an error will be thrown if vector functions are
used with docs that are missing vector doc values.
Also, VectorScriptDocValues is modified to expose a size() function,
which can be used to check whether a document has a value for the
vector field.
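A minimal sketch of the new behaviour, using placeholder types rather than the real VectorScriptDocValues API: vector functions fail on documents without a vector value, and scripts can guard with a size() check first.

```java
/**
 * Hypothetical sketch: throw on missing vector values and let callers use
 * size() to check for a value before scoring.
 */
final class VectorScoringSketch {

    interface VectorDocValues {
        int size();       // 0 when the document has no vector value
        float[] vector(); // the stored vector, only valid when size() > 0
    }

    static double dotProduct(float[] query, VectorDocValues docValues) {
        if (docValues.size() == 0) {
            throw new IllegalArgumentException(
                "document has no value for the vector field; check size() in the script first");
        }
        float[] stored = docValues.vector();
        double result = 0;
        for (int i = 0; i < query.length; i++) {
            result += query[i] * stored[i];
        }
        return result;
    }
}
```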
This commit converts the ConnectionManager's openConnection and connectToNode methods to
async-style. This will allow us to not block threads anymore when opening connections. This PR also
adapts the cluster coordination subsystem to make use of the new async APIs, allowing us to remove
some hacks in the test infrastructure that had to account for the previous synchronous nature of the
connection APIs.
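The shape of the change, sketched with placeholder types (this is not the real ConnectionManager API): instead of returning a connection and blocking the calling thread, the method notifies a listener once the connection is open, which callers or tests can adapt to a future if needed.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

/**
 * Hypothetical callback-style connection API and a helper that adapts it to a
 * CompletableFuture, e.g. for tests that still need to wait for the result.
 */
final class AsyncConnectionSketch {

    interface Connection {}

    interface ConnectionOpener {
        // completes the listener with either a connection or an exception
        void openConnection(String nodeId, BiConsumer<Connection, Exception> listener);
    }

    static CompletableFuture<Connection> openAsFuture(ConnectionOpener opener, String nodeId) {
        CompletableFuture<Connection> future = new CompletableFuture<>();
        opener.openConnection(nodeId, (connection, e) -> {
            if (e != null) {
                future.completeExceptionally(e);
            } else {
                future.complete(connection);
            }
        });
        return future;
    }
}
```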
These assertions do not hold true when a master fails during publication and quickly becomes
master again, publishing a new cluster state in a higher term which races against the previous
cluster state publication to self (which does not matter anyway).
Relates #43994
Closes #44012
This PR adds the reference documentation pages of the data frame analytics APIs (PUT, START, STOP, GET, GET stats, DELETE, Evaluate) to the ML APIs pool.
This commit ensures that cluster state publications to self also go through the transport layer. This
allows voting-only nodes to intercept the publication to self.
Fixes an issue discovered by a test failure where a voting-only node, which was the only
bootstrapped node, would not step down as master after state transfer because publishing to self
would succeed.
Closes #43631
* Use unique ports per test worker
* Add test for system property
* check presence of tests.gradle
* Revert "check presence of tests.gradle"
This reverts commit 2fee7512a28f95c94c5bf7a3312e808f918a9510.
Today `RetentionLeaseSyncAction.Request` and
`RetentionLeaseBackgroundSyncAction.Request` both describe themselves as
`Request{...}` in the value returned from their respective `toString()`
methods. This commit adds the name of the owning class to both so we have
something a bit easier to search for and so we can distinguish foreground from
background syncs in logs and test failures and so on.
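A tiny illustration of the pattern (class and field names here are placeholders, not the actual request classes): include the owning class name in toString() so the two kinds of sync request can be told apart in logs.

```java
/**
 * Hypothetical sketch: toString() now names the owning class instead of the
 * generic "Request{...}".
 */
final class RetentionLeaseSyncRequestSketch {

    private final String shardId;

    RetentionLeaseSyncRequestSketch(String shardId) {
        this.shardId = shardId;
    }

    @Override
    public String toString() {
        // e.g. "RetentionLeaseSyncRequestSketch{shardId=[index][0]}" rather than "Request{...}"
        return getClass().getSimpleName() + "{shardId=" + shardId + "}";
    }
}
```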
The count should match the number of all df-analytics that
matched the id in the request. However, we set the count
to the number of df-analytics returned, which was bound to the
`size` parameter. This commit fixes this by setting the count
to the count of the `get` response.
This commit changes the way we manage refreshes in the index engines.
Instead of relying on a SearcherManager, this change uses a ReaderManager that
creates ElasticsearchDirectoryReader when needed. Searchers are now created on-demand
(when acquireSearcher is called) from the current ElasticsearchDirectoryReader.
It also slightly changes the Engine.Searcher to extend IndexSearcher in order
to simplify the usage in the consumer.
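The plain-Lucene example below illustrates the refresh model described above (it is not the Engine code itself): a ReaderManager maintains the current DirectoryReader, and an IndexSearcher is only created on demand from the acquired reader.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.ReaderManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.ByteBuffersDirectory;

public class ReaderManagerExample {

    public static void main(String[] args) throws Exception {
        try (ByteBuffersDirectory dir = new ByteBuffersDirectory();
             IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {

            writer.addDocument(new Document());

            ReaderManager readerManager = new ReaderManager(writer);
            readerManager.maybeRefresh();                     // the refresh step
            DirectoryReader reader = readerManager.acquire(); // current reader
            try {
                IndexSearcher searcher = new IndexSearcher(reader); // created on demand
                System.out.println("docs: " + searcher.getIndexReader().numDocs());
            } finally {
                readerManager.release(reader);
            }
            readerManager.close();
        }
    }
}
```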
Currently we log a deprecation warning about types removal in
RestGetIndicesAction even if the REST method is HEAD, which is used by the
indices.exists API. Since the body is empty in this case, we should not need to
show the deprecation warning.
Closes #43905
The test IndexShardIT.testIndexCanChangeCustomDataPath() fails
on 7.x and 7.3 because the translog cannot be recovered.
While I can't reproduce the issue, I think it has been introduced in #43752
which changed ReadOnlyEngine so that it opens the translog in its
constructor in order to load the translog stats. This opening writes a
new checkpoint file, but because 7.x/7.3 does not wait for shards to be
started after being closed, the test immediately starts to copy shard files
to a new directory and possibly does not copy all the required translog files.
By waiting for the shards to be started after being closed, we ensure
that the shards (and engines) have been correctly initialized and that
the translog checkpoint file is not currently being written.
closes #43964