Cache latest `RepositoryData` on heap when it's absolutely safe to do so (i.e. when the repository is in strictly consistent mode).
`RepositoryData` can safely be assumed not to grow to a size that would cause trouble because we already often have at least two copies of it loaded at the same time when doing repository operations, and concurrent snapshot status API requests currently each load it independently as well. That makes it safe to cache on heap and treat as "small" IMO.
The benefits of this move are:
* Much faster repository status API calls
* listing all snapshot names becomes instant
* Other operations are sped up massively too because they mostly operate in two steps: load the repository data, then load multiple other blobs to get the additional data
* Additional cloud cost savings
* Better resiliency, saving another spot where an IO issue could break the snapshot
* In follow-ups we can simplify a number of spots that currently pass the repository data around in tricky ways just to avoid loading it multiple times
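For illustration only (the class, field and method names below are made up, not the actual `BlobStoreRepository` members), the caching boils down to keeping the latest `RepositoryData` together with its generation and only re-reading the blob when the generation moves:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.LongFunction;

// Hypothetical sketch of caching the latest repository data keyed by generation.
final class CachedRepositoryData<T> {

    private static final class Entry<T> {
        final long generation;
        final T data;

        Entry(long generation, T data) {
            this.generation = generation;
            this.data = data;
        }
    }

    private final AtomicReference<Entry<T>> cached = new AtomicReference<>();

    /** Returns the data for {@code generation}, re-loading it only if the cached copy is stale. */
    T get(long generation, LongFunction<T> loader) {
        final Entry<T> entry = cached.get();
        if (entry != null && entry.generation == generation) {
            return entry.data; // cache hit: no blob read needed
        }
        final T loaded = loader.apply(generation);
        cached.set(new Entry<>(generation, loaded));
        return loaded;
    }
}
```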
* Refactor Inflexible Snapshot Repository BwC (#52365)
Transport the version to use for a snapshot instead of whether to use shard generations in the snapshots in progress entry. This allows making upcoming repository metadata changes in a flexible manner in an analogous way to how we handle serialization BwC elsewhere.
Also, exposing the version at the repository API level will make it easier to do BwC relevant changes in derived repositories like source only or encrypted.
This commit changes how RestHandlers are registered with the
RestController so that a RestHandler no longer needs to register itself
with the RestController. Instead the RestHandler interface has new
methods which when called provide information about the routes
(method and path combinations) that are handled by the handler
including any deprecated and/or replaced combinations.
This change also makes the publication of RestHandlers safe since they
no longer publish a reference to themselves within their constructors.
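A minimal sketch of what a handler looks like under the new contract (the handler name and path are made up for illustration):

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestRequest;

import static org.elasticsearch.rest.RestRequest.Method.GET;

public class RestExampleAction extends BaseRestHandler {

    @Override
    public String getName() {
        return "example_action";
    }

    // The RestController asks the handler for its routes instead of the handler
    // registering itself with the controller in its constructor.
    @Override
    public List<Route> routes() {
        return Collections.singletonList(new Route(GET, "/_example"));
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        // A real handler would dispatch a transport action here.
        return channel -> {};
    }
}
```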
Closes #51622
Co-authored-by: Jason Tedor <jason@tedor.me>
Backport of #51950
Add a cool-down period after snapshot finalization and deletion to prevent the eventually consistent AWS S3 from corrupting shard-level metadata, as long as the repository is still using the old-format metadata at the shard level.
This solves half of the problem in #46813 by moving the S3
tests to using the shared minio fixture so we at least have
some non-3rd-party, constantly running coverage on these tests.
Unfortunately, bulk delete exceptions don't show the individual delete
errors when a bulk delete fails and you log them outright, so I added this work-around
to extract the individual details and get useful logging.
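The work-around boils down to unpacking the per-key failures from the SDK's bulk delete exception before logging; roughly (a sketch against the AWS SDK v1 `MultiObjectDeleteException`, not the exact production code):

```java
import com.amazonaws.services.s3.model.MultiObjectDeleteException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class BulkDeleteLogging {

    private static final Logger logger = LogManager.getLogger(BulkDeleteLogging.class);

    // Log each individual per-key failure instead of only the opaque top-level exception.
    static void logBulkDeleteFailure(MultiObjectDeleteException e) {
        for (MultiObjectDeleteException.DeleteError error : e.getErrors()) {
            logger.warn("failed to delete blob [{}]: {} ({})", error.getKey(), error.getMessage(), error.getCode());
        }
    }
}
```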
* Remove BlobContainer Tests against Mocks
Removing all these weird mocks as asked for by #30424.
All these tests are now part of real repository ITs and were otherwise left unchanged if they had
independent tests that didn't previously call the `createBlobStore` method.
The HDFS tests also get added coverage as a side-effect because they did not have an implementation
of the abstract repository ITs.
Closes #30424
* Remove Unused Single Delete in BlobStoreRepository
There are no more production uses of the non-bulk delete or the delete that throws
on missing so this commit removes both these methods.
Only the bulk delete logic remains. Where the bulk delete was derived from single deletes,
the single delete code was inlined into the bulk delete method.
Where single deletes were used in tests, they were replaced by bulk deletes.
* Make BlobStoreRepository Aware of ClusterState (#49639)
This is a preliminary to #49060.
It does not introduce any substantial behavior change to how the blob store repository
operates. It adds the infrastructure for passing the cluster service to the blob store, the
associated test changes, and a best-effort approach to tracking the latest repository
generation on all nodes from cluster state updates. This slightly improves the consistency
with which non-master nodes (or the master directly after a failover) can determine the
latest repository generation. To keep this change simple it does not, however, do any tricky
checks for the situation after a repository operation (create, delete or cleanup) that could
theoretically be used to get even greater accuracy.
This change does not in any way alter the behavior of the blob store repository other than
adding a better "guess" for the value of the latest repository generation; it is mainly
intended to isolate the actual logical change to how the repository operates in #49060.
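Conceptually, the best-effort tracking amounts to something like the following (an illustrative sketch with made-up names, not the actual `BlobStoreRepository` code): every applied cluster state that carries a repository generation only ever moves the locally known generation forward.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: track the latest known repository generation from cluster state updates.
class RepositoryGenerationTracker {

    static final long UNKNOWN_GENERATION = -1L;

    private final AtomicLong latestKnownGeneration = new AtomicLong(UNKNOWN_GENERATION);

    // Called when a new cluster state carrying a repository generation is applied.
    void onClusterStateGeneration(long generationFromClusterState) {
        // Only ever move forward; out-of-order or stale updates are ignored.
        latestKnownGeneration.accumulateAndGet(generationFromClusterState, Math::max);
    }

    // Best-effort "guess"; callers still fall back to listing index-N blobs when UNKNOWN.
    long latestKnownGeneration() {
        return latestKnownGeneration.get();
    }
}
```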
All the implementations of `EsBlobStoreTestCase` use the exact same
bootstrap code that is also used by their implementation of
`EsBlobStoreContainerTestCase`.
This means all the tests might as well live under `EsBlobStoreContainerTestCase`,
saving a lot of code duplication. Also, there was no HDFS implementation of
`EsBlobStoreTestCase`; moving the tests over resolves that automatically
since there is an HDFS implementation of the container tests.
This commit fixes the server side logic of "List Objects" operations
of Azure and S3 fixtures. Until today, the fixtures were returning a "
flat" view of stored objects and were not correctly handling the
delimiter parameter. This caused some object listings to be wrongly
interpreted by the snapshot deletion logic in Elasticsearch, which
relies on the ability to list the child containers of a BlobContainer (#42653)
to correctly delete stale indices.
As a consequence, the blobs were not correctly deleted from the
emulated storage service and stayed in heap until they got garbage
collected, causing CI failures like #48978.
This commit fixes the server side logic of the Azure and S3 fixtures when
listing objects so that they now return the correct common blob prefixes as
expected by the snapshot deletion process. It also adds an after-test
check to ensure that tests leave the repository empty (besides the
root index files).
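The core of the fixture-side fix is to fold keys under the requested prefix into common prefixes at the first delimiter, rather than returning every key flat; a standalone sketch of that computation (not the exact fixture code):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

class ListObjectsPrefixes {

    /**
     * Given the stored object keys, the requested prefix and a delimiter, returns the
     * "common prefixes" (child folders) that a List Objects call should report
     * instead of the individual keys underneath them.
     */
    static Set<String> commonPrefixes(List<String> keys, String prefix, String delimiter) {
        final Set<String> prefixes = new TreeSet<>();
        for (String key : keys) {
            if (key.startsWith(prefix) == false) {
                continue;
            }
            final String rest = key.substring(prefix.length());
            final int idx = rest.indexOf(delimiter);
            if (idx >= 0) {
                // Everything up to and including the first delimiter is a child "folder".
                prefixes.add(prefix + rest.substring(0, idx + delimiter.length()));
            }
        }
        return prefixes;
    }
}
```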
Closes #48978
Similarly to what has been done for Azure (#48636) and GCS (#48762),
this commit removes the existing Ant fixture that emulates an S3 storage
service in favor of multiple docker-compose based fixtures.
The goals here are multiple: be able to reuse an s3-fixture outside of the
repository-s3 plugin; allow parallel execution of integration tests; replace
the existing AmazonS3Fixture, which has evolved into a weird beast, with
dedicated, more maintainable fixtures.
The server side logic that emulates S3 mostly comes from the latest
HttpHandler made for S3 blob store repository tests, with additional
features extracted from the (now removed) AmazonS3Fixture:
authentication checks, session token checks and improved response
errors. Chunked upload request support for S3 objects has been added
too.
The server side logic of all tests now resides in a single S3HttpHandler class.
Whereas AmazonS3Fixture contained the logic for basic tests, session token
tests, EC2 tests and ECS tests, the S3 fixtures are now dedicated to each
kind of test. Fixtures inherit from each other, making things easier
to maintain.
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
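The general shape of such an accessor is roughly the following (a condensed, hypothetical sketch, not the real build-logic class):

```java
// Condensed sketch of a type-safe, init-once build parameter holder (illustrative only).
final class BuildParamsSketch {

    private static String gitRevision;

    static void init(String revision) {
        gitRevision = revision;
    }

    static String gitRevision() {
        if (gitRevision == null) {
            // Fail loudly on uninitialized or eager access instead of silently returning null.
            throw new IllegalStateException("build parameter [gitRevision] has not been initialized");
        }
        return gitRevision;
    }
}
```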
Closes #42042
In repository integration tests, we drain the HTTP request body before
returning a response. Before this change this operation was done using
Streams.readFully(), which uses an 8kb buffer to read the input stream; it
now uses a 1kb buffer for the same operation. This should reduce the allocations
made during the tests and speed them up a bit on CI.
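The drain itself is just a small read-and-discard loop over the request body, along these lines (illustrative, not the exact test utility):

```java
import java.io.IOException;
import java.io.InputStream;

class RequestBodyDrain {

    // Read and discard the remaining request body using a small 1kb buffer.
    static void drain(InputStream requestBody) throws IOException {
        final byte[] buffer = new byte[1024];
        while (requestBody.read(buffer) >= 0) {
            // discard
        }
    }
}
```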
Co-authored-by: Armin Braun <me@obrown.io>
This commit changes the repository base paths used in the Azure/S3/GCS
integration tests so that they don't conflict with each other when tests
run in parallel on real storage services.
Closes #47202
This PR adds some restrictions around test fixtures to make sure the same service (as defined in docker-compose.yml) is not shared between multiple projects.
Sharing would break running with --parallel.
Projects can still share fixtures as long as each has its own service within.
This is still useful to share some of the setup and configuration code of the fixture.
Projects now also have to specify a service name when calling useCluster to refer to a specific service.
If this is not the case, all services will be claimed and the fixture can't be shared.
For this reason fixtures have to explicitly specify if they use themselves (fixture and tests in the same project).
Today if the connection to S3 times out or drops after starting to download an
object then the SDK does not attempt to recover or resume the download, causing
the restore of the whole shard to fail and retry. This commit allows
Elasticsearch to detect such a mid-stream failure and to resume the download
from where it failed.
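Roughly speaking (a simplified sketch using the AWS SDK v1 ranged GET, not the actual retrying stream implementation), the download keeps track of how many bytes it has already read and reopens the object from that offset when the stream dies mid-download:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import java.io.IOException;
import java.io.InputStream;

class ResumingS3Read {

    // Sketch: read an entire object, resuming with a ranged GET after a mid-stream failure.
    static long readFully(AmazonS3 client, String bucket, String key) throws IOException {
        final byte[] buffer = new byte[8192];
        long bytesRead = 0;
        int attempts = 0;
        while (true) {
            final GetObjectRequest request = new GetObjectRequest(bucket, key);
            if (bytesRead > 0) {
                request.setRange(bytesRead); // resume from where the previous attempt stopped
            }
            try (InputStream in = client.getObject(request).getObjectContent()) {
                int read;
                while ((read = in.read(buffer)) >= 0) {
                    bytesRead += read; // consume (or forward) the downloaded bytes
                }
                return bytesRead; // reached the end of the object
            } catch (IOException e) {
                if (++attempts > 5) {
                    throw e; // give up after a few mid-stream failures
                }
                // otherwise loop and retry from the current offset
            }
        }
    }
}
```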
The `repository-s3` plugin has supported a storage class of `onezone_ia` since
the SDK upgrade in #30723, but we do not test or document this fact. This
commit adds this storage class to the docs and adds a test to ensure that the
documented storage classes are all accepted by S3 too.
Fixes #30474
When some high values are randomly picked up - for example the number
of indices to snapshot or the number of snapshots to create - the tests
in S3BlobStoreRepositoryTests can generate a high number of requests to
the internal S3 server.
In order to test the retry logic of the S3 client, the internal server is
designed to randomly return server errors. When many requests are made,
it is possible that the S3 client exhausts its maximum number of successive
retries. The S3 client then stops retrying requests until enough retry
attempts succeed, but it means that any request could fail before reaching
the max retries count and make the test fail too.
Closes #46217
Closes #46218
Closes #46219
This commit modifies the HTTP server used in S3BlobStoreRepositoryTests
so that it randomly returns server errors for any type of request executed by
the SDK client. It is now possible to verify that the repository tests complete
successfully even if one or more errors were returned by the S3 service in
response to a blob upload, a blob deletion or an object listing request, etc.
Because injecting errors forces the SDK client to retry requests, the test limits
the maximum number of errors sent in response to each request to 3 retries.
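Schematically the error injection looks like this (a simplified standalone handler, not the actual test code): each request is either handled normally or answered with a retryable error, but never more times than the client is willing to retry.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a handler that randomly injects retryable errors, but never more
// times per request than the SDK client is willing to retry.
class ErrorInjectingHandler implements HttpHandler {

    private static final int MAX_ERRORS_PER_REQUEST = 3;

    private final HttpHandler delegate;
    private final Random random = new Random();
    private final ConcurrentHashMap<String, AtomicInteger> errorsSent = new ConcurrentHashMap<>();

    ErrorInjectingHandler(HttpHandler delegate) {
        this.delegate = delegate;
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        final String requestKey = exchange.getRequestMethod() + " " + exchange.getRequestURI();
        final AtomicInteger count = errorsSent.computeIfAbsent(requestKey, k -> new AtomicInteger());
        if (random.nextBoolean() && count.incrementAndGet() <= MAX_ERRORS_PER_REQUEST) {
            // Drain the body, then answer with a retryable 500 so the SDK client retries.
            exchange.getRequestBody().readAllBytes();
            exchange.sendResponseHeaders(500, -1);
            exchange.close();
        } else {
            delegate.handle(exchange); // serve the request normally
        }
    }
}
```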
This commit removes the usage of MockAmazonS3 in S3BlobStoreRepositoryTests
and replaces it with an HttpServer that emulates the S3 service. This allows the
repository tests to use the real Amazon S3 client under the hood and will
allow testing the behavior of the snapshot/restore feature for S3 repositories by
simulating random server-side internal errors.
The HTTP server used to emulate the S3 service is intentionally simple and minimal
to keep things understandable and maintainable. Testing full client options on the
server side (like authentication, chunked encoding etc) remains the responsibility
of the AmazonS3Fixture.
This commit starts from the simple premise that the use of node settings
in blob store repositories is a mistake. Here we see that the node
settings are used to get default settings for store and restore throttle
rates. Yet, since there are not any node settings registered to this
effect, there can never be a default setting to fall back to there, and
so we always end up falling back to the default rate. Since this was the
only use of node settings in blob store repository, we move them. From
this, several places fall out where we were chaining settings through
only to get them to the blob store repository, so we clean these up as
well. That leaves us with the changeset in this commit.
This commit refactors the S3 credentials tests in
RepositoryCredentialsTests so that it now uses a single
node (ESSingleNodeTestCase) to test how secure/insecure
credentials override each other. Using a single node
makes it much easier to understand what each test is actually
testing and IMO better reflects how things are initialized.
It also allows folding the test testInsecureRepositoryCredentials,
which was wrongly located in S3BlobStoreRepositoryTests, into
this class. By moving this test away, the
S3BlobStoreRepositoryTests class does not need the
allow_insecure_settings option anymore and thus can be
executed as part of the usual gradle test task.
This commit changes the tests added in #45383 so that the fixture that
emulates the S3 service now sometimes consumes the whole request body
before sending an error, sometimes consumes only a part of the request
body and sometimes consumes nothing. The idea here is to beef up
the tests that write blobs, because the client's retry logic relies on
marking and resetting the blob's input stream.
This pull request also changes testWriteBlobWithRetries() so that it
(rarely) tests with a large blob (up to 1mb), which is more than the client's
default read limit on input streams (131Kb).
Finally, it optimizes the ZeroInputStream so that it is a bit more efficient
(it now works using an internal buffer and System.arraycopy() primitives).
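For reference, the buffered variant of such a stream boils down to something like this (an illustrative sketch, not the exact test class):

```java
import java.io.InputStream;

// Sketch of an InputStream that serves a fixed number of zero bytes,
// copying from a reusable internal buffer instead of answering byte by byte.
class ZeroInputStream extends InputStream {

    private static final byte[] ZEROES = new byte[4096];

    private final long length;
    private long position = 0;

    ZeroInputStream(long length) {
        this.length = length;
    }

    @Override
    public int read() {
        if (position >= length) {
            return -1;
        }
        position++;
        return 0;
    }

    @Override
    public int read(byte[] b, int off, int len) {
        if (len == 0) {
            return 0;
        }
        if (position >= length) {
            return -1;
        }
        final int toCopy = (int) Math.min(Math.min(len, ZEROES.length), length - position);
        System.arraycopy(ZEROES, 0, b, off, toCopy);
        position += toCopy;
        return toCopy;
    }
}
```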
This commit adds tests to verify the behavior of the S3BlobContainer and
its underlying AWS SDK client when the remote S3 service responds with
errors or does not respond at all. The expected behavior is that requests are
retried multiple times before the client gives up and the S3BlobContainer
bubbles up an exception.
The test verifies the behavior of BlobContainer.writeBlob() and
BlobContainer.readBlob(). In the case of S3, writing a blob can be executed
as a single upload or using multipart requests; the test checks both scenarios
by writing a small and then a large blob.
* Repository Cleanup Endpoint (#43900)
* Snapshot cleanup functionality via transport/REST endpoint.
* Added all the infrastructure for this with the HLRC and node client (see the usage sketch after this list)
* Made use of it in tests and resolved relevant TODO
* Added a new `Custom` CS element that tracks the cleanup logic.
Kept it similar to the delete and in-progress classes and gave it
a (for now) redundant way of handling multiple cleanups, though only one is allowed at a time
* Used the exact same mechanism as deletes: the combination
of a CS entry and an increment of the repository state ID provides some
concurrency safety (the initial approach of just an entry in the CS
was not enough; we must increment the repository state ID to be safe
against concurrent modifications, otherwise we run the risk of "cleaning up"
blobs that were just created without noticing)
* Isolated the logic to the transport action class as much as I could.
It's not ideal, but we don't need to keep any state and do the same
for other repository operations
(like getting the detailed snapshot shard status)
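For orientation, the new endpoint can be triggered with a plain REST call, e.g. via the low-level client (the HLRC additions mentioned above provide typed request/response classes for the same call):

```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class RepositoryCleanupExample {

    // Triggers a cleanup of unreferenced blobs in the given repository:
    // POST /_snapshot/{repository}/_cleanup
    static Response cleanupRepository(RestClient client, String repository) throws IOException {
        return client.performRequest(new Request("POST", "/_snapshot/" + repository + "/_cleanup"));
    }
}
```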
* Create S3 Third Party Test Task that Covers the S3 CLI Tool
* Adjust snapshot cli test tool tests to work with real S3
* Build adjustment
* Clean up repo path before testing
* Dedup the logic for asserting path contents by using the correct utility method here, which had somehow become unused
* We only use this method in one place in production code and can replace that with a read -> remove it to simplify the interface
* Keep it as an implementation detail in the Azure repository
* Test fixtures improvements
Don't disable some of the precommit tasks on fixtures.
This no longer makes sense now that a project can both produce and use a
fixture.
In order for this to be possible, we had to add an additional configuration
to make the JarHell class accessible to the task even if it's not a
dependency of the project, and to fix some of the third party audit fallout
from #43671 which wasn't detected at the time due to the issue being
fixed here.
Closes #43918
* Use the ability to list child "folders" in the blob store to implement a recursive delete of all stale index folders when cleaning up, instead of using the diff between two `RepositoryData` instances to cover aborted deletes (see the sketch below)
* Runs after every delete operation
* Relates #13159 (fixes most of the issues caused by unreferenced indices, leaving only some meta files to be cleaned up)
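A rough sketch of the approach (with an illustrative container abstraction rather than the real `BlobContainer` API): list the child "folders" under the indices path, keep the ones referenced by the current `RepositoryData`, and recursively delete the rest.

```java
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the stale-index cleanup: anything under "indices/"
// that the current RepositoryData does not reference gets deleted recursively.
class StaleIndicesCleanup {

    interface Container {
        Map<String, Container> children() throws IOException;  // child "folders"
        void delete() throws IOException;                      // recursive delete
    }

    static void cleanupStaleIndices(Container indicesContainer, Set<String> referencedIndexIds) throws IOException {
        for (Map.Entry<String, Container> child : indicesContainer.children().entrySet()) {
            if (referencedIndexIds.contains(child.getKey()) == false) {
                // Not referenced by the latest RepositoryData: stale, delete the whole folder.
                child.getValue().delete();
            }
        }
    }
}
```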