Similarly to what has been done for S3 in #45383, this commit
adds unit tests that verify the behavior of the SDK client and
blob container implementation for Google Storage when the
remote service returns errors.
The main purpose was to add an extra test for the specific retry
logic for 410-Gone errors added in #45963.
Relates #45963
GoogleCloudStorageBlobStore.deleteBlobsIgnoringIfNotExists() does
not correctly catch the StorageException thrown by batch.submit().
When a snapshot is deleted through BlobStoreRepository.deleteSnapshot(),
a storage exception is not caught (only IOExceptions are), so the deletion is
interrupted and indices cannot be cleaned up. The storage exception bubbles
up to SnapshotService.deleteSnapshotFromRepository(), but the listener that
removes the deletion from the cluster state is not executed, leaving the
deletion in the cluster state.
This bug has been reported in #46772, where batch.submit() threw an
exception in the test testIndicesDeletedFromRepository and the following
tests failed because a snapshot deletion was still running.
Relates #46772
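A minimal sketch of the kind of fix, assuming a hypothetical wrapper rather than the actual GoogleCloudStorageBlobStore code: the runtime StorageException thrown by batch.submit() is caught and rethrown as the IOException that callers such as BlobStoreRepository.deleteSnapshot() actually handle.
```
import java.io.IOException;
import com.google.cloud.storage.StorageBatch;
import com.google.cloud.storage.StorageException;

// Hypothetical helper, for illustration only
final class DeleteHelper {
    static void submitBatch(StorageBatch batch) throws IOException {
        try {
            batch.submit();
        } catch (StorageException e) {
            // StorageException is a RuntimeException in the google-cloud-storage client;
            // rethrow it as IOException so callers that only handle IOException see the failure
            throw new IOException("Exception when deleting blobs", e);
        }
    }
}
```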
This commit adds support for resumable uploads to the internal HTTP
server used in GoogleCloudStorageBlobStoreRepositoryTests. This
way we can also test the behavior of the Google client when the
service returns server errors in response to resumable upload requests.
The BlobStore implementation for GCS can choose between two
methods to upload a blob: resumable and multipart. In the current
implementation, the client executes a resumable upload if the blob
size is larger than LARGE_BLOB_THRESHOLD_BYTE_SIZE,
otherwise it executes a multipart upload. This commit makes this
logic overridable in tests, allowing tests to randomize the decision
of using one method or the other.
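As a rough sketch of the selection logic described above (the threshold value and the helper names here are assumptions, not the actual Elasticsearch code):
```
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;

// Illustrative sketch only
final class UploadSketch {
    private static final long LARGE_BLOB_THRESHOLD_BYTE_SIZE = 5L * 1024 * 1024; // assumed value

    static void writeBlob(Storage storage, BlobInfo blobInfo, InputStream in, long blobSize) throws IOException {
        if (blobSize > LARGE_BLOB_THRESHOLD_BYTE_SIZE) {
            // Resumable upload: open an upload session and stream the content in chunks
            try (WriteChannel writer = storage.writer(blobInfo)) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    writer.write(ByteBuffer.wrap(buffer, 0, read));
                }
            }
        } else {
            // Multipart ("simple") upload: the whole content goes out in a single request
            storage.create(blobInfo, in.readAllBytes());
        }
    }
}
```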
The commit adds support for single-request resumable uploads
and chunked resumable uploads (the blob is uploaded in multiple
2 MB chunks, each chunk being a resumable upload). For this last
case, this PR also adds a test testSnapshotWithLargeSegmentFiles
which makes it more probable that a chunked resumable upload is
executed.
A resumable upload session can fail with a 410 error and should
be retried in that case. I added a retry (up to two times) that resets
the given `InputStream` and resubmits, since the same approach is
already used by the AWS S3 SDK and relied upon by the S3
repository implementation.
Related GCS documentation:
https://cloud.google.com/storage/docs/json_api/v1/status-codes#410_Gone
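A hedged sketch of the retry idea, assuming a hypothetical `Upload` callback standing in for the actual resumable upload attempt:
```
import java.io.IOException;
import java.io.InputStream;
import com.google.cloud.storage.StorageException;

// Illustrative sketch only
final class ResumableRetrySketch {
    private static final int MAX_RETRIES = 2;

    static void uploadWithRetries(InputStream in, Upload uploadAttempt) throws IOException {
        assert in.markSupported() : "the stream must support mark/reset to be retried";
        in.mark(Integer.MAX_VALUE);
        for (int attempt = 0; ; attempt++) {
            try {
                uploadAttempt.run(in);
                return;
            } catch (StorageException e) {
                // 410 Gone: the resumable upload session was lost, start a new one
                if (e.getCode() != 410 || attempt >= MAX_RETRIES) {
                    throw e;
                }
                in.reset(); // rewind the stream and retry with a fresh session
            }
        }
    }

    // Hypothetical callback representing one resumable upload attempt
    interface Upload {
        void run(InputStream in) throws IOException;
    }
}
```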
This commit modifies the HTTP server used in
GoogleCloudStorageBlobStoreRepositoryTests so that it randomly
returns server errors. The test does not inject server errors for the
following types of requests: batch requests and resumable upload requests.
This commit removes the usage of MockGoogleCloudStoragePlugin in
GoogleCloudStorageBlobStoreRepositoryTests and replaces it with an
HttpServer that emulates the Storage service. This allows the repository
tests to use the real Google client under the hood and will allow
us to test the behavior of the snapshot/restore feature for GCS repositories
by simulating random server-side internal errors.
The HTTP server used to emulate the Storage service is intentionally simple
and minimal to keep things understandable and maintainable. Testing full
client options on the server side (like authentication, chunked encoding
etc) remains the responsibility of the GoogleCloudStorageFixture.
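For illustration, a minimal JDK `HttpServer` stub in the spirit of this change; the endpoint path, payload and error rate below are made up and not the actual test fixture:
```
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

// Illustrative sketch only
public class FakeStorageServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress("localhost", 0), 0);
        server.createContext("/storage/v1/b/bucket/o/", exchange -> {
            byte[] body = "{\"kind\": \"storage#object\"}".getBytes(StandardCharsets.UTF_8);
            if (Math.random() < 0.1) {
                exchange.sendResponseHeaders(500, -1); // simulated server-side internal error
            } else {
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            }
            exchange.close();
        });
        server.start();
        System.out.println("listening on port " + server.getAddress().getPort());
    }
}
```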
This commit starts from the simple premise that the use of node settings
in blob store repositories is a mistake. Here we see that the node
settings are used to get default settings for store and restore throttle
rates. Yet, since there are no node settings registered to this
effect, there can never be a default setting to fall back to, and
so we always end up falling back to the default rate. Since this was the
only use of node settings in blob store repository, we move them. From
this, several places fall out where we were chaining settings through
only to get them to the blob store repository, so we clean these up as
well. That leaves us with the changeset in this commit.
* Repository Cleanup Endpoint (#43900)
* Snapshot cleanup functionality via transport/REST endpoint.
* Added all the infrastructure for this with the HLRC and node client
* Made use of it in tests and resolved relevant TODO
* Added a new `Custom` CS element that tracks the cleanup logic.
Kept it similar to the delete and in-progress classes and gave it
a (for now) redundant way of handling multiple cleanups, but only one is allowed
* Use the exact same mechanism used by deletes to have the combination
of CS entry and increment in repository state ID provide some
concurrency safety (the initial approach of just an entry in the CS
was not enough, we must increment the repository state ID to be safe
against concurrent modifications, otherwise we run the risk of "cleaning up"
blobs that just got created without noticing)
* Isolated the logic to the transport action class as much as I could.
It's not ideal, but we don't need to keep any state, and we do the same
for other repository operations
(like getting the detailed snapshot shard status)
* We only use this method in one place in production code and can replace that with a read -> remove it to simplify the interface
* Keep it as an implementation detail in the Azure repository
This is a prerequisite of #42189:
* Add directory delete method to blob container specific to each implementation:
* Some notes on the implementations:
* AWS + GCS: We can simply exploit the fact that both AWS and GCS return blobs lexicographically ordered, which allows us to simply delete in the same order that we receive the blobs from the listing request. For AWS this simply required listing without the delimiter setting (so we get a deep listing), and for GCS the same behavior is achieved by not using the directory mode on the listing invocation (see the sketch after this list). The nice thing about this is that even for very large numbers of blobs the memory requirements are now capped nicely since we go page by page when deleting.
* For Azure I extended the parallelization to the listing calls as well and made it work recursively. I verified that this works with thread count `1` since we only block once in the initial thread and then fan out to a "graph" of child listeners that never block.
* HDFS and FS are trivial since we have directory delete methods available for them
* Enhances third party tests to ensure the new functionality works (I manually ran them for all cloud providers)
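For the GCS side, a sketch of the page-by-page deep-listing delete described in the first bullet above; the names are illustrative and the real implementation also batches the deletes:
```
import com.google.api.gax.paging.Page;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;

// Illustrative sketch only
final class DeleteDirectorySketch {
    static void deletePath(Storage storage, String bucket, String pathPrefix) {
        // No BlobListOption.currentDirectory() here, so the listing is recursive ("deep")
        Page<Blob> page = storage.list(bucket, Storage.BlobListOption.prefix(pathPrefix));
        while (page != null) {
            for (Blob blob : page.getValues()) {
                blob.delete(); // could also be submitted through a batch request
            }
            page = page.hasNextPage() ? page.getNextPage() : null;
        }
    }
}
```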
* Add Ability to List Child Containers to BlobContainer (#42653)
* Add Ability to List Child Containers to BlobContainer
* This is a prerequisite of #42189
* Upgrade to the latest GCS SDK and transitive dependencies (on conflicts, I chose the later version)
* Remove now unnecessary hack for custom endpoints (the linked bugs were both resolved in the SDK)
Fixes an issue where repositories are unintentionally shared among tests (given that the repo contents are captured in a static variable on the test class, to allow "sharing" among nodes) and two tests randomly chose the same snapshot name, leading to a conflict.
Closes #42519
* Cleanup Bulk Delete Exception Logging
* Follow up to #41368
* Collect all failed blob deletes and add them to the exception message
* Remove logging of blob name list from caller exception logging
* Add Infrastructure to Run 3rd Party Repository Tests
* Add infrastructure to run third party repository tests using our standard JUnit infrastructure
* This is a prerequisite of #42189
* Remove Delete Method from BlobStore (#41619)
* The delete method on the blob store was used almost nowhere and just duplicates the delete method on the blob containers
* The fact that it provided for some recursive delete logic (that did not behave the same way on all implementations) was not used and not properly tested either
* Implement Bulk Deletes for GCS Repository (#41368)
* Just like #40322 for AWS
* We already had a bulk delete API but weren't using it from the blob container implementation, now we are using it
* Made the bulk delete API also compliant with our interface that only suppresses errors about non-existent blobs by stat-ing failed deletes (I didn't use any bulk stat action here since having to stat here should be the exception anyway and it would make error handling a lot more complex)
* Fixed the bulk delete API to limit its batch size to 100, in line with GCS recommendations (see the sketch below)
Backport of #41368
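A minimal sketch of the batched delete with the 100-operation cap, assuming a hypothetical helper; the actual implementation additionally distinguishes non-existent blobs and collects per-blob failures, which this sketch omits:
```
import java.util.List;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageBatch;

// Illustrative sketch only
final class BulkDeleteSketch {
    private static final int MAX_BATCH_SIZE = 100; // GCS recommends at most 100 operations per batch

    static void deleteBlobs(Storage storage, String bucket, List<String> blobNames) {
        for (int from = 0; from < blobNames.size(); from += MAX_BATCH_SIZE) {
            int to = Math.min(from + MAX_BATCH_SIZE, blobNames.size());
            StorageBatch batch = storage.batch();
            for (String name : blobNames.subList(from, to)) {
                batch.delete(BlobId.of(bucket, name));
            }
            batch.submit();
        }
    }
}
```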
Motivated by slow snapshot deletes reported in e.g. #39656 and the fact that these likely are a contributing factor to repositories accumulating stale files over time when deletes fail to finish in time and are interrupted before they can complete.
* Makes snapshot deletion async and parallelizes some steps of the delete process that can be safely run concurrently via the snapshot thread pool
* I did not take the biggest potential speedup step here and parallelize the shard file deletion because that's probably better handled by moving to bulk deletes where possible (and can still be parallelized via the snapshot pool where it isn't). Also, I wanted to keep the size of the PR manageable.
* See https://github.com/elastic/elasticsearch/pull/39656#issuecomment-470492106
* Also, as a side effect this gives the `SnapshotResiliencyTests` a little more coverage for master failover scenarios (parallel access to a blob store repository during deletes is now possible because a delete isn't a single task anymore).
* By adding a `ThreadPool` reference to the repository this also lays the groundwork to parallelizing shard snapshot uploads to improve the situation reported in #39657
Blob store compression was not enabled for some of the files in
snapshots due to constructor accessing sub-class fields. Fixed to
instead accept compress field as constructor param. Also fixed chunk
size validation to work.
Deprecated repositories.fs.compress setting as well to be able to unify
in a future commit.
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now-unused `settings` instances and method arguments.
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
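To illustrate the mechanical nature of the change, a schematic sketch with a simplified stand-in for `AbstractComponent` (not the real Elasticsearch class):
```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Simplified stand-in, shown after the change: the ctor no longer takes Settings
abstract class AbstractComponent {
    protected final Logger logger = LogManager.getLogger(getClass());
}

// Before the change a subclass looked like:
//     FooService(Settings settings) { super(settings); }
// Afterwards the `super(settings);` line goes away and, if `settings` is otherwise
// unused, so does the constructor argument:
class FooService extends AbstractComponent {
    FooService() {
        logger.debug("created");
    }
}
```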
Drops the `Settings` member from `AbstractComponent`, moving it from the
base class on to the classes that use it. For the most part this is a
mechanical change that doesn't drop `Settings` accesses. The one
exception to this is naming threads where it switches from an invocation
that passes `Settings` and extracts the node name to one that explicitly
passes the node name.
This change doesn't drop the `Settings` argument from
`AbstractComponent`'s ctor because this change is big enough as is.
We'll do that in a follow up change.
The contains syntax was added in #30874 but the skips were not properly
put in place.
The java runner has the feature so the tests will run as part of the
build, but language clients will be able to support it at their own
pace.
Adds a new parameter to the BlobContainer#write*Blob methods to specify whether an existing blob
should be overwritten or not. For some metadata files in the repository, we actually want to replace
the current file. This is currently implemented through an explicit blob delete and then a fresh write.
In the case of a cloud provider (S3, GCS, Azure), this results in 2 API requests instead of just 1.
This change will therefore allow us to achieve the same functionality using fewer API requests.
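A sketch of the extended method shape; the parameter name `failIfAlreadyExists` is an assumption here, the point being that the overwrite decision is pushed down so cloud-backed implementations can express it in a single API call:
```
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only
interface BlobContainerSketch {
    /**
     * Writes the given stream as a blob.
     *
     * @param failIfAlreadyExists if true, fail when a blob with the same name already exists;
     *                            if false, replace any existing blob in the same request
     */
    void writeBlob(String blobName, InputStream inputStream, long blobSize, boolean failIfAlreadyExists)
        throws IOException;
}
```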
* remove left-over comment
* make sure of the property for plugins
* skip installing modules if these exist in the distribution
* Log the distribution being run
* Don't allow running with integ-tests-zip passed externally
* top level x-pack/qa can't run with oss distro
* Add support for matching objects in lists
Makes it possible to have a key that points to a list and assert that a
certain object is present in the list. All keys have to be present and
values have to match. The objects in the source list may have additional
fields.
example:
```
match: { 'nodes.$master.plugins': { name: ingest-attachment } }
```
* Update plugin and module tests to work with other distributions
Some of the tests expected that the integration tests would always be run
with the `integ-test-zip` distribution so that there would be no other
plugins loaded.
With this change, we check for the presence of the plugin without
assuming exclusivity.
* Allow modules to run on other distros as well
To match the behavior of tests.distributions
* Add and use a new `contains` assertion
Replaces the previous changes that caused `match` to do a partial match.
* Implement PR review comments
Adds the ability to reread and decrypt the local node keystore.
Commonly, the contents of the keystore, backing the `SecureSettings`,
are not retrievable except during node initialization. This changes that
by adding a new API which broadcasts a password to every node. The
password is used to decrypt the local keystore, which is then used to populate
a `Settings` object that is passed to all the plugins implementing the
`ReloadablePlugin` interface. The plugin is then responsible for doing
whatever "reload" means in its case. When the `reload` handler returns,
the keystore is closed and its contents are no longer retrievable.
The password is never stored persistently on any node.
Plugins that have been modified in this commit are: `repository-azure`,
`repository-s3`, `repository-gcs` and `discovery-ec2`.
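A hedged sketch of the plugin side, assuming the `ReloadablePlugin` interface with a `reload(Settings)` hook; the body is illustrative only:
```
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.ReloadablePlugin;

// Illustrative sketch only
public class MyRepositoryPlugin extends Plugin implements ReloadablePlugin {

    @Override
    public void reload(Settings settings) {
        // `settings` is backed by the freshly re-read keystore; rebuild anything that
        // depends on secure settings (credentials, proxies, ...) before the keystore is
        // closed again when this method returns, e.g.:
        // clientService.refreshAndClearCache(MyClientSettings.load(settings));  // hypothetical
    }
}
```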
There's no need for an extra blobExists() call when writing a blob to the GCS service. GCS provides
an option (with stronger consistency guarantees) on the insert method that guarantees that the
blob that's uploaded does not already exist.
Relates to #19749
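As a sketch of the idea with the google-cloud-storage client: the `BlobTargetOption.doesNotExist()` precondition makes the insert itself fail (412) when the object already exists, so no separate existence check is needed; the surrounding class is hypothetical:
```
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageException;

// Illustrative sketch only
final class CreateIfAbsentSketch {
    static void createIfAbsent(Storage storage, String bucket, String name, byte[] bytes) {
        BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(bucket, name)).build();
        try {
            storage.create(blobInfo, bytes, Storage.BlobTargetOption.doesNotExist());
        } catch (StorageException e) {
            if (e.getCode() == 412) { // precondition failed: the blob already exists
                throw new IllegalStateException("blob [" + name + "] already exists", e);
            }
            throw e;
        }
    }
}
```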
This does away with the deprecated `com.google.api-client:google-api-client:1.23`
and replaces it with `com.google.cloud:google-cloud-storage:1.28.0`.
It also changes security permissions for the repository-gcs plugin.
Similarly to what has been done for the repository-s3 plugin,
this commit moves the fixture test into a dedicated
repository-gcs/qa/google-cloud-storage project.
It also exposes some environment variables which allow executing
the integration tests against the real Google Cloud
Storage service. When the environment variables are not
defined, the integration tests are executed using the fixture
added in #28788. Related to #29349.
* Move Streams.copy into elasticsearch-core and make a multi-release jar
This moves the method `Streams.copy(InputStream in, OutputStream out)` into the
`elasticsearch-core` project (inside the `o.e.core.internal.io` package). It
also makes this class into a multi-release class where the Java 9 equivalent
uses `InputStream#transferTo`.
This is a followup from
https://github.com/elastic/elasticsearch/pull/29300#discussion_r178147495
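A rough sketch of the two variants; in a real multi-release jar both would share the same method name in different class versions, the distinct names here are only for illustration:
```
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only
final class StreamsSketch {
    // Base (Java 8) variant: manual copy loop.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long copied = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            copied += read;
        }
        return copied;
    }

    // Java 9+ variant (placed under META-INF/versions/9 in the multi-release jar).
    static long copyJava9(InputStream in, OutputStream out) throws IOException {
        return in.transferTo(out);
    }
}
```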
The repository-gcs unit tests rely on the GoogleCloudStorageTestServer,
but it would be better if they relied on a mocked Storage client instead.
That would also help to extract the GoogleCloudStorageFixture and the
GoogleCloudStorageTestServer classes in a QA third party project.
Closes #28960
Today when you input a byte size setting that is out of bounds for the
setting, you get an error message that indicates the maximum value of
the setting. The problem is that because we use ByteSize#toString, we
end up with a representation of the value that does not really tell you
what the bound is. For example, if the bound is 2^31 - 1 bytes, the
output would be 1.9gb, which does not really tell you what the limit is, as
there are many byte size values that we format to the same 1.9gb with
ByteSize#toString. We have a method ByteSize#getStringRep that uses the
input units to the value as the output units for the string
representation, so we end up with no loss if we use this to report the
bound. This commit does this.
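For illustration, assuming the `ByteSizeValue`/`ByteSizeUnit` classes that the text above abbreviates as `ByteSize`:
```
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

// Illustrative sketch only
final class ByteSizeReportSketch {
    public static void main(String[] args) {
        ByteSizeValue bound = new ByteSizeValue(Integer.MAX_VALUE, ByteSizeUnit.BYTES);
        System.out.println(bound);                 // lossy rendering ("1.9gb" per the description above)
        System.out.println(bound.getStringRep());  // exact rendering in the input units ("2147483647b")
    }
}
```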
This commit adds a new fixture that emulates a S3 service in order to
improve the existing integration tests. This is very similar to what has
been done for Google Cloud Storage in #28788, and such tests would
have helped a lot to catch bugs like #22534.
The AmazonS3Fixture is brittle and only implements the very necessary
stuff for the S3 repository to work, but at least it works and can be
adapted for specific tests needs.
* Decouple XContentBuilder from BytesReference
This commit removes all mentions of `BytesReference` from `XContentBuilder`.
This is needed so that we can completely decouple the XContent code and move it
into its own dependency.
While this change appears large, it is due to two main changes, moving
`.bytes()` and `.string()` out of XContentBuilder itself into static methods
`BytesReference.bytes` and `Strings.toString` respectively. The rest of the
change is code reacting to these changes (the majority of it in tests).
Relates to #28504
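A hedged before/after sketch of the calling convention change:
```
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Illustrative sketch only
final class XContentSketch {
    static BytesReference example() throws Exception {
        XContentBuilder builder = XContentFactory.jsonBuilder()
            .startObject()
            .field("field", "value")
            .endObject();
        // Before this change: builder.bytes() and builder.string()
        // After: static helpers BytesReference.bytes(builder) and Strings.toString(builder)
        return BytesReference.bytes(builder);
    }
}
```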
This commit adds a GoogleCloudStorageFixture that uses the
logic of a GoogleCloudStorageTestServer (added in #28576)
to emulate a remote Google Cloud Storage service.
By adding this fixture and a more complete integration test, we
should be able to catch more bugs when upgrading the client library.
The fixture is started by the googleCloudStorageFixture task
and a custom Service Account file is created and added to the
Elasticsearch keystore for each test.
Similarly to what has been done for s3 and azure, this commit removes
the repository settings `application_name` and `connect/read_timeout`
in favor of client settings. It introduces a GoogleCloudStorageClientSettings
class (similar to S3ClientSettings) along with a bunch of unit tests for it,
aligns the documentation to be more coherent with the S3 one,
documents the connect/read timeouts that were not documented at all, and
also adds a new client setting that allows defining a custom endpoint.
This pull request extracts in a dedicated class the request/response
logic that "emulates" a Google Cloud Storage service in our
repository-gcs tests.
The idea behind this is to make the logic more reusable. The class
MockHttpTransport has been renamed to MockStorage which now
only takes care of instantiating a Storage client and does the low-level
request/response plumbing needed by this client.
The "Google Cloud Storage" logic has been extracted from
MockHttpTransport and put in a new GoogleCloudStorageTestServer
that is now independent from the google client testing framework.
This commit changes the DefaultHttpRequestInitializer in order to make
it create new HttpIOExceptionHandler and HttpUnsuccessfulResponseHandler
for every new HTTP request instead of reusing the same two handlers for
all requests.
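A hedged sketch of the per-request handler idea with the google-http-client API; the back-off policy shown here is illustrative:
```
import java.io.IOException;
import com.google.api.client.http.HttpBackOffIOExceptionHandler;
import com.google.api.client.http.HttpBackOffUnsuccessfulResponseHandler;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.util.ExponentialBackOff;

// Illustrative sketch only
final class PerRequestInitializerSketch implements HttpRequestInitializer {
    @Override
    public void initialize(HttpRequest request) throws IOException {
        // New handler instances for every request, instead of two shared singletons,
        // so retry/back-off state is never shared between concurrent requests
        request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(new ExponentialBackOff()));
        request.setUnsuccessfulResponseHandler(new HttpBackOffUnsuccessfulResponseHandler(new ExponentialBackOff()));
    }
}
```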
Closes #27092