This commit changes how RestHandlers are registered with the
RestController so that a RestHandler no longer needs to register itself
with the RestController. Instead, the RestHandler interface has new
methods that, when called, provide information about the routes
(method and path combinations) handled by the handler,
including any deprecated and/or replaced combinations.
This change also makes the publication of RestHandlers safe since they
no longer publish a reference to themselves within their constructors.
Closes #51622
Co-authored-by: Jason Tedor <jason@tedor.me>
Backport of #51950
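As an illustration, a handler under the new scheme might look like this (a minimal sketch against the 7.x-era API; the route path and handler name are made up):
```java
import java.util.List;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestRequest;

import static org.elasticsearch.rest.RestRequest.Method.GET;

// Sketch: instead of calling controller.registerHandler(GET, "/_example", this)
// from its constructor, the handler declares its routes and the RestController
// queries them once construction has completed.
public class RestExampleAction extends BaseRestHandler {

    @Override
    public String getName() {
        return "example_action";
    }

    @Override
    public List<Route> routes() {
        // the method and path combinations served by this handler
        return List.of(new Route(GET, "/_example"));
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        return channel -> {}; // request handling elided
    }
}
```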
Add a cool-down period after snapshot finalization and delete to prevent the eventually consistent AWS S3 from corrupting shard-level metadata for as long as the repository is using the old format metadata on the shard level.
Unfortunately, bulk delete exceptions don't show the individual delete
errors when a failed bulk delete is logged outright, so I added this work-around
to extract the individual details and get useful logging.
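A minimal sketch of the work-around, assuming the AWS SDK v1 `MultiObjectDeleteException`; the helper method and its surrounding shape are hypothetical:
```java
import java.io.IOException;
import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

// Hypothetical helper: surfaces the per-key failures that the exception's
// own message does not include.
static void deleteBlobs(AmazonS3 client, String bucket, List<String> keys) throws IOException {
    try {
        client.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(keys.toArray(new String[0])));
    } catch (MultiObjectDeleteException e) {
        // collect the individual delete errors into one readable message
        String details = e.getErrors().stream()
            .map(err -> "[" + err.getKey() + "][" + err.getCode() + "][" + err.getMessage() + "]")
            .collect(Collectors.joining(", "));
        throw new IOException("Failed to delete blobs: " + details, e);
    }
}
```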
* Remove BlobContainer Tests against Mocks
Removing all these weird mocks as asked for by #30424.
All these tests are now part of real repository ITs and otherwise left unchanged where they
previously had independent tests that did not call the `createBlobStore` method.
The HDFS tests also get added coverage as a side-effect because they did not have an implementation
of the abstract repository ITs.
Closes #30424
* Remove Unused Single Delete in BlobStoreRepository
There are no more production uses of the non-bulk delete or the delete that throws
on missing so this commit removes both these methods.
Only the bulk delete logic remains. Where the bulk delete was derived from single deletes,
the single delete code was inlined into the bulk delete method.
Where single delete was used in tests it was replaced by bulk deleting.
* Make BlobStoreRepository Aware of ClusterState (#49639)
This is a preliminary to #49060.
It does not introduce any substantial behavior change in how the blob store repository
operates. What it does is add the infrastructure for passing the cluster service to the blob store, the associated test changes, and a best-effort approach to tracking the latest repository generation on all nodes from cluster state updates. This brings a slight improvement to the consistency
with which non-master nodes (or the master directly after a failover) can determine the latest repository generation. It does not, however, do any tricky checks for the situation after a repository operation
(create, delete or cleanup) that could theoretically be used to get even greater accuracy, so as to keep this change simple.
This change does not in any way alter the behavior of the blob store repository other than adding a better "guess" for the value of the latest repository generation, and is mainly intended to isolate the actual logical change to how the
repository operates in #49060
All the implementations of `EsBlobStoreTestCase` use the exact same
bootstrap code that is also used by their implementation of
`EsBlobStoreContainerTestCase`.
This means all tests might as well live under `EsBlobStoreContainerTestCase`,
saving a lot of code duplication. Also, there was no HDFS implementation for
`EsBlobStoreTestCase`; moving the tests over resolves this automatically,
since there is an HDFS implementation for the container tests.
This commit fixes the server-side logic of "List Objects" operations
in the Azure and S3 fixtures. Until today, the fixtures returned a
"flat" view of stored objects and did not correctly handle the
delimiter parameter. This caused some object listings to be wrongly
interpreted by the snapshot deletion logic in Elasticsearch, which
relies on the ability to list child containers of a BlobContainer (#42653)
to correctly delete stale indices.
As a consequence, the blobs were not correctly deleted from the
emulated storage service and stayed in heap until they got garbage
collected, causing CI failures like #48978.
This commit fixes the server-side logic of the Azure and S3 fixtures when
listing objects so that they now return the correct common blob prefixes
expected by the snapshot deletion process. It also adds an after-test
check to ensure that tests leave the repository empty (besides the
root index files).
Closes #48978
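For illustration, a sketch of how a fixture can derive common prefixes from a flat key listing, given a prefix and delimiter (the method name is made up):
```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch: derive the "common prefixes" (child containers) that a List Objects
// call with a prefix and delimiter is expected to return from a flat key set.
static Set<String> commonPrefixes(List<String> keys, String prefix, String delimiter) {
    Set<String> prefixes = new TreeSet<>();
    for (String key : keys) {
        if (key.startsWith(prefix) == false) {
            continue;
        }
        // the first delimiter after the requested prefix marks a child container
        int index = key.indexOf(delimiter, prefix.length());
        if (index >= 0) {
            prefixes.add(key.substring(0, index + delimiter.length()));
        }
    }
    return prefixes;
}
```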
Similarly to what has been done for Azure (#48636) and GCS (#48762),
this commit removes the existing Ant fixture that emulates an S3 storage
service in favor of multiple docker-compose based fixtures.
The goals here are multiple: be able to reuse an s3-fixture outside of the
repository-s3 plugin; allow parallel execution of integration tests; and
replace the existing AmazonS3Fixture, which has evolved into a weird beast,
with dedicated, more maintainable fixtures.
The server side logic that emulates S3 mostly comes from the latest
HttpHandler made for S3 blob store repository tests, with additional
features extracted from the (now removed) AmazonS3Fixture:
authentication checks, session token checks and improved response
errors. Chunked upload request support for S3 objects has been added
too.
The server side logic of all tests now resides in a single S3HttpHandler class.
Whereas AmazonS3Fixture contained logic for basic tests, session token
tests, EC2 tests or ECS tests, the S3 fixtures are now dedicated to each
kind of test. Fixtures inherit from each other, making things easier
to maintain.
In repository integration tests, we drain the HTTP request body before
returning a response. Before this change this operation was done using
Streams.readFully(), which uses an 8kb buffer to read the input stream;
it now uses a 1kb buffer for the same operation. This should reduce the
allocations made during the tests and speed them up a bit on CI.
Co-authored-by: Armin Braun <me@obrown.io>
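A sketch of the drain with the smaller buffer (the helper name is made up):
```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: drain whatever is left of the request body with a 1kb buffer
// instead of Streams.readFully()'s 8kb buffer.
static void drainInputStream(InputStream in) throws IOException {
    byte[] buffer = new byte[1024];
    while (in.read(buffer) >= 0) {
        // discard the bytes, we only need the body consumed
    }
}
```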
Today if the connection to S3 times out or drops after starting to download an
object then the SDK does not attempt to recover or resume the download, causing
the restore of the whole shard to fail and retry. This commit allows
Elasticsearch to detect such a mid-stream failure and to resume the download
from where it failed.
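A rough sketch of the idea (not the actual implementation): track how many bytes have been read and, on a mid-stream failure, reopen the object with a range request starting at that offset. Class and variable names are illustrative, and a real implementation would also bound the number of resume attempts:
```java
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;

// Illustrative sketch: count the bytes read so far and, when the stream
// fails mid-download, reopen the object with a range request at that offset.
class ResumingInputStream extends InputStream {

    private final AmazonS3 client;
    private final String bucket;
    private final String key;
    private InputStream current;
    private long offset;

    ResumingInputStream(AmazonS3 client, String bucket, String key) {
        this.client = client;
        this.bucket = bucket;
        this.key = key;
        this.current = open(0L);
    }

    private InputStream open(long start) {
        return client.getObject(new GetObjectRequest(bucket, key).withRange(start)).getObjectContent();
    }

    @Override
    public int read() throws IOException {
        try {
            int b = current.read();
            if (b >= 0) {
                offset++;
            }
            return b;
        } catch (IOException e) {
            // the connection dropped mid-stream: resume from the current offset
            // (a real implementation would bound the number of resume attempts)
            try {
                current.close();
            } catch (IOException ignored) {}
            current = open(offset);
            return read();
        }
    }

    @Override
    public void close() throws IOException {
        current.close();
    }
}
```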
The `repository-s3` plugin has supported a storage class of `onezone_ia` since
the SDK upgrade in #30723, but we do not test or document this fact. This
commit adds this storage class to the docs and adds a test to ensure that the
documented storage classes are all accepted by S3 too.
Fixes #30474
When some high values are randomly picked - for example the number
of indices to snapshot or the number of snapshots to create - the tests
in S3BlobStoreRepositoryTests can generate a high number of requests to
the internal S3 server.
In order to test the retry logic of the S3 client, the internal server is
designed to randomly return server errors. When many requests are
made, it is possible for the S3 client to exhaust its maximum number of
successive retries. The client then stops retrying requests until enough
retry attempts succeed, which means that any request can fail before
reaching the max retry count and make the test fail too.
Closes #46217, closes #46218, closes #46219
This commit modifies the HTTP server used in S3BlobStoreRepositoryTests
so that it randomly returns server errors for any type of request executed by
the SDK client. It is now possible to verify that the repository tests complete
successfully even if one or more errors were returned by the S3
service in response to a blob upload, a blob deletion, an object listing request,
etc.
Because injecting errors forces the SDK client to retry requests, the test limits
the number of errors returned for each request to 3, within the client's retry limit.
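A sketch of how such error injection might be wired up with the JDK's built-in HTTP server (the class name and the 3-error cap per request follow the description above; the 50% error probability is an arbitrary choice for the sketch):
```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

// Sketch: wrap the real S3 emulation handler and inject at most
// MAX_ERRORS_PER_REQUEST server errors per distinct request, staying
// below the SDK client's retry limit so that tests still pass.
class ErrorInjectingHandler implements HttpHandler {

    private static final int MAX_ERRORS_PER_REQUEST = 3;

    private final HttpHandler delegate;
    private final Map<String, AtomicInteger> errorsPerRequest = new ConcurrentHashMap<>();

    ErrorInjectingHandler(HttpHandler delegate) {
        this.delegate = delegate;
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        // retries of the same request share the same method + URI key
        String requestId = exchange.getRequestMethod() + " " + exchange.getRequestURI();
        AtomicInteger errors = errorsPerRequest.computeIfAbsent(requestId, k -> new AtomicInteger());
        if (ThreadLocalRandom.current().nextBoolean() && errors.incrementAndGet() <= MAX_ERRORS_PER_REQUEST) {
            exchange.sendResponseHeaders(500, -1); // retryable server error, no body
            exchange.close();
        } else {
            delegate.handle(exchange);
        }
    }
}
```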
This commit removes the usage of MockAmazonS3 in S3BlobStoreRepositoryTests
and replaces it with an HttpServer that emulates the S3 service. This allows the
repository tests to use the real Amazon S3 client under the hood and will
allow testing the behavior of the snapshot/restore feature for S3 repositories by
simulating random server-side internal errors.
The HTTP server used to emulate the S3 service is intentionally simple and minimal
to keep things understandable and maintainable. Testing full client options on the
server side (like authentication, chunked encoding etc) remains the responsibility
of the AmazonS3Fixture.
This commit starts from the simple premise that the use of node settings
in blob store repositories is a mistake. Here we see that the node
settings are used to get default settings for store and restore throttle
rates. Yet, since there are no node settings registered to this
effect, there can never be a default setting to fall back to, and
so we always end up falling back to the default rate. Since this was the
only use of node settings in blob store repository, we move them. From
this, several places fall out where we were chaining settings through
only to get them to the blob store repository, so we clean these up as
well. That leaves us with the changeset in this commit.
This commit refactors the S3 credentials tests in
RepositoryCredentialsTests so that they now use a single
node (ESSingleNodeTestCase) to test how secure/insecure
credentials override each other. Using a single node
makes it much easier to understand what each test is actually
testing and, IMO, better reflects how things are initialized.
It also allows folding into this class the test
testInsecureRepositoryCredentials, which was wrongly located
in S3BlobStoreRepositoryTests. By moving this test away, the
S3BlobStoreRepositoryTests class does not need the
allow_insecure_settings option anymore and thus can be
executed as part of the usual gradle test task.
This commit changes the tests added in #45383 so that the fixture that
emulates the S3 service now sometimes consumes all the request body
before sending an error, sometimes consumes only a part of the request
body, and sometimes consumes nothing. The idea here is to beef up
the tests that write blobs a bit, because the client's retry logic relies on
marking and resetting the blob's input stream.
This pull request also changes testWriteBlobWithRetries() so that it
(rarely) tests with a large blob (up to 1mb), which is more than the client's
default read limit on input streams (131kb).
Finally, it optimizes the ZeroInputStream so that it is a bit more efficient
(it now works using an internal buffer and System.arraycopy() primitives).
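For illustration, a buffered ZeroInputStream along the lines described above (a sketch, not the actual test class):
```java
import java.io.InputStream;

// Sketch: a stream of `length` zero bytes that fills the caller's array
// from a pre-allocated buffer with System.arraycopy() instead of
// producing bytes one at a time.
class ZeroInputStream extends InputStream {

    private static final byte[] ZEROS = new byte[1024];

    private final long length;
    private long position;

    ZeroInputStream(long length) {
        this.length = length;
    }

    @Override
    public int read() {
        if (position >= length) {
            return -1;
        }
        position++;
        return 0;
    }

    @Override
    public int read(byte[] b, int off, int len) {
        if (position >= length) {
            return -1;
        }
        int toCopy = (int) Math.min(len, Math.min(ZEROS.length, length - position));
        System.arraycopy(ZEROS, 0, b, off, toCopy);
        position += toCopy;
        return toCopy;
    }
}
```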
This commit adds tests to verify the behavior of the S3BlobContainer and
its underlying AWS SDK client when the remote S3 service responds with
errors or does not respond at all. The expected behavior is that requests are
retried multiple times before the client gives up and the S3BlobContainer
bubbles up an exception.
The tests verify the behavior of BlobContainer.writeBlob() and
BlobContainer.readBlob(). In the case of S3, writing a blob can be executed
as a single upload or using multipart requests; the test checks both scenarios
by writing a small and then a large blob.
* Repository Cleanup Endpoint (#43900)
* Snapshot cleanup functionality via transport/REST endpoint.
* Added all the infrastructure for this with the HLRC and node client
* Made use of it in tests and resolved relevant TODO
* Added a new `Custom` CS element that tracks the cleanup logic.
Kept it similar to the delete and in-progress classes and gave it
some (for now) redundant handling of multiple cleanups, though only one is allowed
* Use the exact same mechanism used by deletes to have the combination
of CS entry and increment in repository state ID provide some
concurrency safety (the initial approach of just an entry in the CS
was not enough, we must increment the repository state ID to be safe
against concurrent modifications, otherwise we run the risk of "cleaning up"
blobs that just got created without noticing)
* Isolated the logic to the transport action class as much as I could.
It's not ideal, but we don't need to keep any state, and we do the same
for other repository operations
(like getting the detailed snapshot shard status)
* We only use this method in one place in production code and can replace that usage with a read, so remove it to simplify the interface
* Keep it as an implementation detail in the Azure repository
* Use ability to list child "folders" in the blob store to implement recursive delete on all stale index folders when cleaning up instead of using the diff between two `RepositoryData` instances to cover aborted deletes
* Runs after every delete operation
* Relates #13159 (fixes most of the issues caused by unreferenced indices, leaving only some meta files to be cleaned up)
* Provide an Option to Use Path-Style-Access with S3 Repo
* As discussed, added the option to use path style access back again and
deprecated it.
* Defaulted to `false`
* Added warning to docs
* Closes #41816
This is a prerequisite of #42189:
* Add directory delete method to blob container specific to each implementation:
* Some notes on the implementations:
* AWS + GCS: We can simply exploit the fact that both AWS and GCS return blobs lexicographically ordered, which allows us to delete in the same order that we receive the blobs from the listing request. For AWS this simply required listing without the delimiter setting (so we get a deep listing), and for GCS the same behavior is achieved by not using directory mode on the listing invocation. The nice thing about this is that even for very large numbers of blobs the memory requirements are now capped nicely, since we go page by page when deleting (see the sketch after this list).
* For Azure I extended the parallelization to the listing calls as well and made it work recursively. I verified that this works with thread count `1` since we only block once in the initial thread and then fan out to a "graph" of child listeners that never block.
* HDFS and FS are trivial since we have directory delete methods available for them
* Enhances third party tests to ensure the new functionality works (I manually ran them for all cloud providers)
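The sketch referenced above, for the S3 case (assuming the AWS SDK v1 client; the method name is made up): list deeply without a delimiter and delete page by page, keeping memory bounded.
```java
import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

// Sketch: list without a delimiter to get a deep, lexicographically ordered
// listing and delete page by page, so memory stays bounded no matter how many
// blobs the "directory" holds (each listing page is at most 1000 keys, which
// also fits a single DeleteObjects request).
static void deleteDirectory(AmazonS3 client, String bucket, String path) {
    ObjectListing listing = client.listObjects(new ListObjectsRequest().withBucketName(bucket).withPrefix(path));
    while (true) {
        List<String> keys = listing.getObjectSummaries().stream()
            .map(S3ObjectSummary::getKey)
            .collect(Collectors.toList());
        if (keys.isEmpty() == false) {
            client.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(keys.toArray(new String[0])));
        }
        if (listing.isTruncated() == false) {
            break;
        }
        listing = client.listNextBatchOfObjects(listing);
    }
}
```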
* Add Ability to List Child Containers to BlobContainer (#42653)
* Add Ability to List Child Containers to BlobContainer
* This is a prerequisite of #42189
Fixes an issue where repositories were unintentionally shared among tests (given that the repo contents are captured in a static variable on the test class, to allow "sharing" among nodes) and two tests randomly chose the same snapshot name, leading to a conflict.
Closes #42519
* Cleanup Bulk Delete Exception Logging
* Follow up to #41368
* Collect all failed blob deletes and add them to the exception message
* Remove logging of blob name list from caller exception logging
* Add Infrastructure to Run 3rd Party Repository Tests
* Add infrastructure to run third party repository tests using our standard JUnit infrastructure
* This is a prerequisite of #42189
* Remove Delete Method from BlobStore (#41619)
* The delete method on the blob store was used almost nowhere and just duplicated the delete method on the blob containers
* The fact that it provided some recursive delete logic (which did not behave the same way on all implementations) was not used and not properly tested either
Motivated by slow snapshot deletes reported in e.g. #39656 and the fact that these likely are a contributing factor to repositories accumulating stale files over time when deletes fail to finish in time and are interrupted before they can complete.
* Makes snapshot deletion async and parallelizes some steps of the delete process that can be safely run concurrently via the snapshot thread pool
* I did not take the biggest potential speedup step here and parallelize the shard file deletion because that's probably better handled by moving to bulk deletes where possible (and can still be parallelized via the snapshot pool where it isn't). Also, I wanted to keep the size of the PR manageable.
* See https://github.com/elastic/elasticsearch/pull/39656#issuecomment-470492106
* Also, as a side effect this gives the `SnapshotResiliencyTests` a little more coverage for master failover scenarios (parallel access to a blob store repository during deletes is now possible, since a delete isn't a single task anymore).
* By adding a `ThreadPool` reference to the repository this also lays the groundwork to parallelizing shard snapshot uploads to improve the situation reported in #39657
* The check doesn't add much if anything practically, since the S3 repository is eventually consistent and we only log the non-existence of a blob anyway
* We don't do the check on writes for this very reason and documented it as such
* Removing the check saves one API call per single delete, speeding up the deletion process and lowering costs
* Adds Bulk delete API to blob container
* Implement bulk delete API for S3
* Adjust S3Fixture to accept both path styles for bulk deletes since the S3 SDK uses both during our ITs
* Closes #40250
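The shape of the addition, sketched (the method name matches the description of a bulk delete that ignores missing blobs; the interface is shown stripped down to the new method only):
```java
import java.io.IOException;
import java.util.List;

// Sketch of the interface addition: one bulk call instead of N single
// deletes. The S3 implementation maps this onto DeleteObjects requests
// (up to 1000 keys each); other implementations can fall back to a loop.
public interface BlobContainer {

    void deleteBlobsIgnoringIfNotExists(List<String> blobNames) throws IOException;
}
```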
Blob store compression was not enabled for some of the files in
snapshots because the constructor accessed sub-class fields. Fixed to
instead accept the compress flag as a constructor parameter. Also fixed
chunk size validation to work.
Deprecated the repositories.fs.compress setting as well, to be able to
unify it in a future commit.
This PR attempts to remove all typed calls from our YAML REST tests. The PR adds include_type_name: false to create index requests that use a mapping and also to put mapping requests. It also removes _type from index requests where it hasn't already been removed. The PR ignores tests named *_with_types.yml since these specifically test typed API behaviour.
The change also includes changing the test harness to add the type _doc to index, update, get and bulk requests that do not specify the document type when the test is running against a mixed 7.x/6.x cluster.
* Make repository settings override static settings
* Cache clients according to settings
* Introduce custom implementations for the AWS credentials here to be able to use them as part of a hash key
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now unused `settings` instances and method arguments.
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
Drops the `Settings` member from `AbstractComponent`, moving it from the
base class on to the classes that use it. For the most part this is a
mechanical change that doesn't drop `Settings` accesses. The one
exception to this is naming threads where it switches from an invocation
that passes `Settings` and extracts the node name to one that explicitly
passes the node name.
This change doesn't drop the `Settings` argument from
`AbstractComponent`'s ctor because this change is big enough as is.
We'll do that in a follow up change.
Drops the `deprecationLogger` from `AbstractComponent`, moving it to
places where we need it. This saves us from building a bunch of
`DeprecationLogger`s that we don't need.
Relates to #34488
The contains syntax was added in #30874 but the skips were not properly
put in place.
The java runner has the feature so the tests will run as part of the
build, but language clients will be able to support it at their own
pace.
We are using a deprecated method for checking if an S3 bucket
exists. This deprecated method has a limitation that it can not
distinguish between invalid credentials and a lack of permissions. This
commit switches to using a method that correctly surfaces if invalid
credentials are supplied when checking for the existence of a bucket.
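Assuming the AWS SDK v1, the switch is essentially from the deprecated doesBucketExist() to doesBucketExistV2(), sketched below:
```java
import com.amazonaws.services.s3.AmazonS3;

// Sketch: the deprecated doesBucketExist(String) cannot tell invalid
// credentials apart from a lack of permissions, while doesBucketExistV2(String)
// surfaces invalid credentials as an error instead.
static boolean bucketExists(AmazonS3 client, String bucket) {
    return client.doesBucketExistV2(bucket);
}
```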
In cases where secure S3 client credentials were mixed with insecure S3
client credentials (that is, those defined on the repository), we were
overriding the credentials of all the repositories with the insecure
repository settings. This commit fixes this by not mixing
up repositories that use insecure settings with those that use secure
settings.
The `else` branch where the error response should currently be thrown is not
reachable, because `handler` is always non-null inside the previous outer check.
This moves error creation into an else branch on the other condition check and
removes the superfluous non-null check on `handler` inside the first branch.
Today, `AmazonS3Fixture` returns 403 on attempts to access any inappropriate
bucket, whether known or otherwise. In fact, S3 reports 404 on nonexistent
buckets and 403 on inaccessible ones. This change enhances `AmazonS3Fixture` to
distinguish these cases.
Adds a Minio fixture to run the S3 repository tests against Minio. Also collapses the single qa
subproject into the s3-repository project, which simplifies the code structure (having it all in one
place) and helps to avoid having too many Gradle subprojects.
AWS supports the creation and use of credentials that are only valid for a
fixed period of time. These credentials comprise three parts: the usual access
key and secret key, together with a session token. This commit adds support for
these three-part credentials to the EC2 discovery plugin and the S3 repository
plugin.
Note that session tokens are only valid for a limited period of time and yet
there is no mechanism for refreshing or rotating them when they expire without
restarting Elasticsearch. Nonetheless, this feature is already useful for
nodes that need only run for a few days, such as for training, testing or
evaluation. #29135 tracks the work towards allowing these credentials to be
refreshed at runtime.
Resolves #16428
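For reference, a sketch of what three-part session credentials look like with the AWS SDK v1 (in the plugins the three values come from the keystore's secure settings rather than plain strings):
```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Sketch: build an S3 client from access key, secret key and session token.
static AmazonS3 buildClient(String accessKey, String secretKey, String sessionToken) {
    return AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicSessionCredentials(accessKey, secretKey, sessionToken)))
        .build();
}
```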
Adds a new parameter to the BlobContainer#write*Blob methods to specify whether the existing file
should be overridden or not. For some metadata files in the repository, we actually want to replace
the current file. This is currently implemented through an explicit blob delete and then a fresh write.
In case of using a cloud provider (S3, GCS, Azure), this results in 2 API requests instead of just 1.
This change will therefore allow us to achieve the same functionality using less API requests.
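A sketch of the new method shape, stripped down to the relevant signature (the parameter name follows the described semantics):
```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: when failIfAlreadyExists is false the implementation may overwrite
// the blob in place, turning the previous delete-then-write (2 API requests
// on S3, GCS and Azure) into a single write.
public interface BlobContainer {

    void writeBlob(String blobName, InputStream inputStream, long blobSize,
                   boolean failIfAlreadyExists) throws IOException;
}
```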
Before deleting a repository index generation file, BlobStoreRepository
checks for the existence of the file and then deletes it. We can save
a request here by using BlobContainer.deleteBlobIgnoringIfNotExists(),
which ignores the error when deleting a file that does not exist.
Since there is no way with S3 to know whether a non-versioned file existed
before being deleted, this pull request also changes S3BlobContainer so
that it now implements deleteBlobIgnoringIfNotExists(). It will now save
one more request (blobExists?) when appropriate. The tests and fixture
have been modified to conform to the S3 API, which always returns a 204/NO
CONTENT HTTP response on deletions.
This commit removes some tests in the repository-s3 plugin that
have not been executed for 2+ years but have been maintained
for nothing. Most of the tests in AbstractAwsTestCase were
obsolete or superseded by fixture based integration tests.
The interface and its implementation can be merged into a single class,
which is renamed to S3Service like the other S3BlobStore, S3Repository
classes.
* remove left-over comment
* make sure of the property for plugins
* skip installing modules if these exist in the distribution
* Log the distribution being run
* Don't allow running with integ-tests-zip passed externally
* top level x-pack/qa can't run with oss distro
* Add support for matching objects in lists
Makes it possible to have a key that points to a list and assert that a
certain object is present in the list. All keys have to be present and
values have to match. The objects in the source list may have additional
fields.
example:
```
match: { 'nodes.$master.plugins': { name: ingest-attachment } }
```
* Update plugin and module tests to work with other distributions
Some of the tests expected that the integration tests will always be run
with the `integ-test-zip` distribution so that there will be no other
plugins loaded.
With this change, we check for the presence of the plugin without
assuming exclusivity.
* Allow modules to run on other distros as well
To match the behavior of tests.distributions
* Add and use a new `contains` assertion
Replaces the previous changes that caused `match` to do a partial match.
* Implement PR review comments
Adds the ability to reread and decrypt the local node keystore.
Commonly, the contents of the keystore, backing the `SecureSettings`,
are not retrievable except during node initialization. This changes that
by adding a new API which broadcasts a password to every node. The
password is used to decrypt the local keystore and use it to populate
a `Settings` object that is passed to all the plugins implementing the
`ReloadablePlugin` interface. Each plugin is then responsible for doing
whatever "reload" means in its case. When the `reload` handler returns,
the keystore is closed and its contents are no longer retrievable.
The password is never stored persistently on any node.
Plugins that have been modified in this commit are: `repository-azure`,
`repository-s3`, `repository-gcs` and `discovery-ec2`.
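A minimal sketch of a plugin opting in (the plugin class is made up):
```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.ReloadablePlugin;

// Sketch: a plugin opting into keystore reloads. The Settings passed in are
// backed by the freshly decrypted keystore and are only valid for the
// duration of the call.
public class ExamplePlugin extends Plugin implements ReloadablePlugin {

    @Override
    public void reload(Settings settings) {
        // re-read secure settings here, e.g. rebuild clients with new credentials
    }
}
```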
In #19749 an extra check was added before writing each blob to ensure that we would not be
overriding an existing blob. Due to S3's weak consistency model, this check was best effort. To
make matters worse, however, this resulted in a HEAD request to be done before every PUT, in
particular also when PUTTING a new object. The approach taken in #19749 worsened our
consistency guarantees for follow-up snapshot actions, as it made it less likely for new files that
had been written to be available for reads.
This commit therefore removes this extra check. Due to the weak consistency model, this check
was a best effort thing anyway, and there's currently no way to prevent accidental overrides on S3.
This commit moves the repository-s3 fixture test added in #29296 in a
new `repository-s3/qa/amazon-s3` project. This new project allows the
REST integration tests to be executed using the real S3 service when
all the required environment variables are provided. When no env var
is provided, then the tests are executed using the fixture added
in #29296.
The REST tests located in the `repository-s3` plugin project now only
verify that the plugin is correctly loaded.
The REST tests have been adapted to allow a bucket name and a base
path to be specified as env vars. This way it is possible to run the tests
with different base paths (could be anything, like a CI job name or a
branch name) without multiplying buckets.
Related to #29349
This commit adds the S3BlobStoreRepositoryTests class that extends the
base testing class for S3. It also removes some usage of socket servers
that emulate socket connections in unit tests. It was added to trigger
security exceptions, but this won't be needed anymore since #29296
is merged.
Today when you input a byte size setting that is out of bounds for the
setting, you get an error message that indicates the maximum value of
the setting. The problem is that because we use ByteSize#toString, we
end up with a representation of the value that does not really tell you
what the bound is. For example, if the bound is 2^31 - 1 bytes, the
output would be 1.9gb, which does not really tell you what the limit is, as
there are many byte size values that we format to the same 1.9gb with
ByteSize#toString. We have a method ByteSize#getStringRep that uses the
input units of the value as the output units for the string
representation, so we end up with no loss if we use this to report the
bound. This commit does this.
This commit adds a new fixture that emulates a S3 service in order to
improve the existing integration tests. This is very similar to what has
been done for Google Cloud Storage in #28788, and such tests would
have helped a lot to catch bugs like #22534.
The AmazonS3Fixture is brittle and only implements the very necessary
stuff for the S3 repository to work, but at least it works and can be
adapted for specific tests needs.
* Fixes ByteSizeValue to serialise correctly
This fix makes a few fixes to ByteSizeValue to make it possible to perform round-trip serialisation:
* Changes wire serialisation to use ZLong methods instead of VLong methods. This is needed because the value `-1` is accepted, but previously `-1` could not be serialised using the wire protocol.
* Limits the supplied size to be no more than Long.MAX_VALUE when converted to bytes. Previously values greater than Long.MAX_VALUE bytes were accepted but would be silently interpreted as Long.MAX_VALUE bytes rather than erroring so the user had no idea the value was not being used the way they had intended. I consider this a bug and so fine to include this bug fix in a minor version but I am open to other points of view.
* Adds a `getStringRep()` method that can be used when serialising the value to JSON. This will print the bytes value if the size is positive, `"0"` if the size is `0` and `"-1"` if the size is `-1`.
* Adds logic to detect fractional values when parsing from a String and emits a deprecation warning in this case.
* Modifies hashCode and equals methods to work with long values rather than doubles so they don't run into precision problems when dealing with large values. Prior to this change the equals method would not detect small differences in the values (e.g. 1-1000 bytes ranges) if the actual values were very large (e.g. PBs). This was due to the values being in the order of 10^18 but doubles only maintaining a precision of ~10^15.
Closes #27568
* Fix bytes settings default value to not use fractional values
* Fixes test
* Addresses review comments
* Modifies parsing to preserve unit
This should be bwc since in the case that the input is fractional it reverts back to the old method of parsing it to the bytes value.
* Addresses more review comments
* Fixes tests
* Temporarily changes version check to 7.0.0
This will be changed to 6.2 when the fix has been backported
This pull request changes the S3BlobContainer.blobExists() method implementation
to make it use the AmazonS3.doesObjectExist() method instead of
AmazonS3.getObjectMetadata(). The AmazonS3 implementation takes care of
catching any thrown AmazonS3Exception and compares its response code with 404,
returning false (object does not exist) or lets the exception be propagated.
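Sketched, the simplified implementation (assuming the AWS SDK v1 method described above):
```java
import com.amazonaws.services.s3.AmazonS3;

// Sketch: doesObjectExist() already maps a 404 response to `false` and lets
// any other exception propagate, so blobExists() no longer needs its own
// catch-and-compare on the response code.
static boolean blobExists(AmazonS3 client, String bucket, String key) {
    return client.doesObjectExist(bucket, key);
}
```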
Now that the blob size information is available before writing anything,
the repository implementation can know upfront which API is the most
suitable to upload the blob to S3.
This commit removes the DefaultS3OutputStream and S3OutputStream
classes and moves the implementation of the upload logic directly in the
S3BlobContainer.
Related to #26993, closes #26969
Moved SocketAccess.doPrivileged up the stack to DefaultS3OutputStream in the repository-s3 plugin to avoid a SecurityException from Streams.copy(). A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192
This commit renames all rest test files to use the .yml extension
instead of .yaml. This way the extension used within all of
elasticsearch for yaml is consistent.
Specifying S3 access and secret keys inside repository settings is not
secure. However, until there is a way to dynamically update secure
settings, this is the only way to dynamically add repositories with
credentials that are not known at node startup time. This commit adds
back the `access_key` and `secret_key` s3 repository settings, but protects
them with a required system property `allow_insecure_settings`.
Most of these settings should always be pulled from the repository
settings. A couple were leftover that should be moved to client
settings. The path style access setting should be removed altogether.
This commit adds deprecations for all of these existing settings, as
well as adding new client specific settings for max retries and
throttling.
relates #24143
This change simplifies how the rest test runner finds test files and
removes all leniency. Previously multiple prefixes and suffixes would
be tried, and tests could exist inside or outside of the classpath,
although outside of the classpath never quite worked. Now only classpath
tests are supported, and only one resource prefix is supported,
`/rest-api-spec/tests`.
closes #20240