This commit updates the link to the JDK 14 compiler bug that we have
found. At the time that we committed the workaround, we had a submission
ID, but not yet the public bug URL. This commit adds the public bug URL.
This commit works around a bug in the JDK 14 compiler. It chokes on a
method reference, so we substitute a lambda expression instead. The JDK
bug ID is 9064309.
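For illustration, a minimal sketch of the kind of substitution this involves (the types here are made up, not the code the compiler actually choked on):
```java
import java.util.function.Function;

class MethodRefWorkaroundSketch {
    static Function<String, Integer> parser() {
        // return Integer::parseInt;       // method reference form the JDK 14 compiler rejected in our case
        return s -> Integer.parseInt(s);   // equivalent lambda used as the workaround
    }
}
```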
This is a simple naming change PR to fix the fact that "metadata" is a
single English word, which we have long failed to follow in our general
naming conventions. We are also not consistent about it; for example, we
use METADATA instead of META_DATA, which would be the consistent
counterpart to MetaData (although METADATA is correct when "metadata" is
treated as a single word). This was a simple find and replace across the
code base, taking only a few minutes to fix this naming issue forever.
* Comprehensively test supported/unsupported field type:agg combinations (#52493)
This adds a test to AggregatorTestCase that allows us to programmatically
verify that an aggregator supports or does not support a particular
field type. It fetches the list of registered field type parsers,
creates a MappedFieldType from the parser and then attempts to run
a basic agg against the field.
A supplied list of supported VSTypes is then compared against the
output (success or exception) and the test succeeds or fails accordingly.
Co-Authored-By: Mark Tozzi <mark.tozzi@gmail.com>
* Skip fields that are not aggregatable
* Use newIndexSearcher() to avoid incompatible readers (#52723)
Lucene's `newSearcher()` can generate readers like ParallelCompositeReader
which we can't use. We need to instead use our helper `newIndexSearcher`
Backport of #53982
In order to prepare the `AliasOrIndex` abstraction for the introduction of data streams,
the abstraction needs to be made more flexible, because currently it really can be only
an alias or an index.
* Renamed `AliasOrIndex` to `IndexAbstraction`.
* Introduced an `IndexAbstraction.Type` enum to indicate what an `IndexAbstraction` instance is.
* Replaced the `isAlias()` method that returns a boolean with the `getType()` method that returns the new Type enum.
* Moved `getWriteIndex()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface.
* Moved `getAliasName()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface and renamed it to `getName()`.
* Removed unnecessary casting to `IndexAbstraction.Alias` by just checking the `getType()` method.
Relates to #53100
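For reference, a hedged sketch of the resulting interface shape as described by the bullet points above; the `IndexMetadata` import and the `getIndices()` accessor are assumptions for illustration, not part of the points listed:
```java
import java.util.List;

import org.elasticsearch.cluster.metadata.IndexMetadata;

// Illustrative shape only, not the exact production interface.
public interface IndexAbstraction {

    enum Type { ALIAS, INDEX }          // a data stream type can be added later

    Type getType();                     // replaces the old isAlias() boolean

    String getName();                   // formerly Alias#getAliasName(), now shared by all types

    List<IndexMetadata> getIndices();   // assumed accessor for the backing indices

    IndexMetadata getWriteIndex();      // moved up from IndexAbstraction.Alias
}
```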
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.
This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of bytes through the pipeline in tests to ensure that the
logic works.
This work will enable future work:
* Circuit breaking or backoff logic based on message type and bytes in the content aggregator.
* Sharing bytes with the application layer using the ref counted releasable network bytes.
* Improved network monitoring based specifically on channels.
Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
Elasticsearch has a number of different BytesReference implementations.
These implementations can all implement the interface in different ways
with subtly different behavior and performance characteristics. On the
other hand, the JVM only represents bytes as an array or a direct byte
buffer. This commit deletes the specialized Netty implementations and
moves to using a generic ByteBuffer reference type. This will allow us
to focus on standardizing performance and behavior around a smaller number
of implementations that can be used by all components in Elasticsearch.
We are not using the Apache HTTP client backed http transport
with the GCS repo. Same as with the app engine type transport
we can save ourselves the dependency on the http client here
and ignore the missing classes.
* Upgrade GCS Dependency to 1.106.0 (#54092)
Upgrading GCS Dep + related dependencies as it seems some more retry bugs were fixed between .104 and .106
This reverts commit 23cccf088810b8416ed278571352393cc2de9523.
Unfortunately SAS token auth still doesn't work with bulk deletes so we can't use them yet.
Closes #54080
This commit removes the configuration time vs execution time distinction
with regards to certain BuildParams properties. Because of the cost of
determining Java versions for configured JDK locations, we deferred
this until execution time. This had two main downsides. First, we had
to implement all this build logic in tasks, which required a bunch of
additional plumbing and complexity. Second, because some information
wasn't known during configuration time, we had to nest any build logic
that depended on this in awkward callbacks.
We now defer to the JavaInstallationRegistry recently added in Gradle.
This utility uses a much more efficient method for probing Java
installations than our jrunscript implementation. Combined with some
optimizations to avoid probing the current JVM, as well as deferring
some evaluation via Providers when probing installations for BWC builds,
this lets us maintain effectively the same configuration time performance
while removing a bunch of complexity and runtime cost (snapshotting
inputs for the GenerateGlobalBuildInfoTask was very expensive). The end
result should be a much more responsive build execution in almost all
scenarios.
(cherry picked from commit ecdbd37f2e0f0447ed574b306adb64c19adc3ce1)
This change adds the `nori_number` token filter.
It also adds a `discard_punctuation` option in nori_tokenizer that should be used in conjunction with the new filter.
Upgrading AWS SDK to v1.11.749.
Required building clients inside privileged contexts because some class loading that requires privileges now happens there and working around a new SDK bug in the S3 client builder.
Closes #53191
Upgrading to 8.6.2 in #53865 broke running against HTTPS endpoints (and hence real Azure)
because the https url connection needs the newly added permission to work.
The lower end of the timeout range of 100ms is prone to time out
on CI before the mock REST server gets to sending a response that
is not supposed to be a timeout.
Using 1-3s here should make this safe at the cost of randomly making
this test take a few seconds.
Closes #53506
Re-applies the change from #53523 along with test fixes.
Closes #53626, closes #53624, closes #53622, closes #53625
Co-authored-by: Nik Everett <nik9000@gmail.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Jake Landis <jake.landis@elastic.co>
This commit upgrades the jackson-databind dependency to
2.8.11.6. Additionally, we revert a previous change that put
ingest-geoip on the version of jackson-databind from the version
properties file. This is because upgrading ingest-geoip to a later
version of jackson-databind also requires an upgrade to the geoip2
dependency which is currently blocked. Therefore, if we can get to a
point where we otherwise upgrade our Jackson dependencies, we do not
want ingest-geoip to automatically come along with it.
Lucene 8.5.0 release candidates are imminent. This commit upgrades master to use
the latest snapshot to check that there are no last-minute bugs or regressions.
Upgrading the GCS SDK to the most recent version.
Adjusting (i.e. improving) the REST mock accordingly.
This should significantly boost performance by pulling in
https://github.com/googleapis/java-core/issues/86 in some cases.
Tests in GoogleCloudStorageBlobStoreRepositoryTests are known
to be flaky on JDK 8 (#51446, #52430) and we suspect a JDK
bug (https://bugs.openjdk.java.net/browse/JDK-8180754) that triggers
some assertion on the server side logic that emulates the Google
Cloud Storage service.
Sadly we were not able to reproduce the failures, even when using
the same OS (Debian 9, Ubuntu 16.04) and JDK (Oracle Corporation
1.8.0_241 [Java HotSpot(TM) 64-Bit Server VM 25.241-b07]) of
almost all the test failures on CI. While we spent some time fixing
code (#51933, #52431) to circumvent the JDK bug, these tests are still flaky
on JDK 8. This commit mutes these tests for JDK 8 only.
Closes #52906
The countdown didn't work well here because it only returns `true` once, when the countdown reaches `0`,
but can on subsequent executions return `false` again if a countdown already at `0` is counted down again,
leading to more than the expected number of simulated failures.
Closes #52607
Cache latest `RepositoryData` on heap when it's absolutely safe to do so (i.e. when the repository is in strictly consistent mode).
`RepositoryData` can safely be assumed to not grow to a size that would cause trouble because we often have at least two copies of it loaded at the same time when doing repository operations. Also, concurrent snapshot API status requests currently load it independently of each other and so on, making it safe to cache on heap and assume as "small" IMO.
The benefits of this move are:
* Much faster repository status API calls
* listing all snapshot names becomes instant
* Other operations are sped up massively too because they mostly operate in two steps: load repository data then load multiple other blobs to get the additional data
* Additional cloud cost savings
* Better resiliency, saving another spot where an IO issue could break the snapshot
* We can simplify a number of spots in the current code that currently pass around the repository data in tricky ways to avoid loading it multiple times in follow ups.
* Refactor Inflexible Snapshot Repository BwC (#52365)
Transport the version to use for a snapshot instead of whether to use shard generations in the snapshots in progress entry. This allows making upcoming repository metadata changes in a flexible manner in an analogous way to how we handle serialization BwC elsewhere.
Also, exposing the version at the repository API level will make it easier to do BwC relevant changes in derived repositories like source only or encrypted.
* Add Blob Download Retries to GCS Repository
Exactly as #46589 (and kept as close to it as possible code wise so we can dry things up in a follow-up potentially) but for GCS.
Closes #52319
Add a new cluster setting `search.allow_expensive_queries` which by
default is `true`. If set to `false`, certain queries that have
usually slow performance cannot be executed and an error message
is returned.
- Queries that need to do linear scans to identify matches:
- Script queries
- Queries that have a high up-front cost:
- Fuzzy queries
- Regexp queries
- Prefix queries (without index_prefixes enabled)
- Wildcard queries
- Range queries on text and keyword fields
- Joining queries
- HasParent queries
- HasChild queries
- ParentId queries
- Nested queries
- Queries on deprecated 6.x geo shapes (using PrefixTree implementation)
- Queries that may have a high per-document cost:
- Script score queries
- Percolate queries
Closes: #29050
(cherry picked from commit a8b39ed842c7770bd9275958c9f747502fd9a3ea)
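As a hedged example of how the new setting could be toggled from Java (the transient-settings request shown here is just one way to apply a dynamic cluster setting and is not part of this change):
```java
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.common.settings.Settings;

class AllowExpensiveQueriesExample {
    static ClusterUpdateSettingsRequest disableExpensiveQueries() {
        // Dynamic cluster setting introduced above; it defaults to true.
        Settings settings = Settings.builder()
            .put("search.allow_expensive_queries", false)
            .build();
        return new ClusterUpdateSettingsRequest().transientSettings(settings);
    }
}
```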
Move EC2 discovery tests to using the mock REST API introduced in
https://github.com/elastic/elasticsearch/pull/50550 instead of mocking
the AWS SDK classes manually.
Move the trivial remaining AWS SDK mocks to the single test suite that
was using them.
- Enable SunJGSS provider for Kerberos tests
- Handle the fact that the decrypt method in KeyStoreWrapper might
not throw immediately when the GCM cipher is from BouncyCastle FIPS
and we end up with a DataInputStream that has reached its end.
- Disable tests, jarHell, testingConventions for ingest attachment
plugin. We don't support this plugin (and document this) in FIPS
mode.
- Don't attempt to install ingest-attachment in smoke-test-plugins
This commit changes how RestHandlers are registered with the
RestController so that a RestHandler no longer needs to register itself
with the RestController. Instead the RestHandler interface has new
methods which when called provide information about the routes
(method and path combinations) that are handled by the handler
including any deprecated and/or replaced combinations.
This change also makes the publication of RestHandlers safe since they
no longer publish a reference to themselves within their constructors.
Closes #51622
Co-authored-by: Jason Tedor <jason@tedor.me>
Backport of #51950
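A hedged sketch of what a handler looks like under the new scheme; the handler name, endpoint and response are made up, and the `routes()`/`Route` shape follows my understanding of the 7.x RestHandler API:
```java
import java.util.List;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;

import static org.elasticsearch.rest.RestRequest.Method.GET;

// Hypothetical handler: it only declares its routes, the RestController takes care of registering them.
public class RestExampleAction extends BaseRestHandler {

    @Override
    public String getName() {
        return "example_action";
    }

    @Override
    public List<Route> routes() {
        return List.of(new Route(GET, "/_example"));
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        return channel -> channel.sendResponse(new BytesRestResponse(RestStatus.OK, "{}"));
    }
}
```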
Now that the FIPS 140 security provider is simply a test dependency
we don't need the thirdPartyAudit exceptions, but plugin-cli and
transport-netty4 do need jarHell disabled as they use the non-FIPS
BouncyCastle security provider as a test dependency too.
For small uploads (which can still be up to 5MB!) we were needlessly
reading the `InputStream` into a BAOS, which entailed allocating
the `byte[]` for the stream contents twice (because `toByteArray` on the BAOS copies).
Also, for resumable uploads we were needlessly wrapping the output channel and running each individual write in its own privileged context when we could just wrap the whole upload in a single privileged context.
Relates #51593
This test was still very GC heavy in Java 8 runs in particular
which seems to slow down request processing to the point of timeouts
in some runs.
This PR completely removes the large number of O(MB) `byte[]` allocations
that were happening in the mock http handler which cuts the allocation rate
by about a factor of 5 in my local testing for the GC heavy `testSnapshotWithLargeSegmentFiles`
run.
Closes #51446, closes #50754
This change updates the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 or higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require any
additional configuration either in CI workers or locally and is
controlled by specifying -Dtests.fips.enabled=true
It is the job of the http server transport to release the request in the handler
but the mock fails to do so since we never override `incomingRequest`.
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
This moves the testing of custom significance heuristic plugins from an
`ESIntegTestCase` to an example plugin. This is *much* more "real" and
can be used as an example for anyone that needs to actually build such a
plugin. The old test had testing concerns and the example all jumbled
together.
It's impossible to tell why #50754 fails without this change.
We're failing to close the `exchange` somewhere and there is no
write timeout in the GCS SDK (something to look into separately),
only a read timeout on the socket, so if we're failing on an assertion without
reading the full request body (at least into the read-buffer) we're locking up
waiting forever on `write0`.
This change ensures the `exchange` is closed in the tests where we could lock up
on a write and logs the failure so we can find out what broke #50754.
This solves half of the problem in #46813 by moving the S3
tests to using the shared minio fixture so we at least have
some non-3rd-party, constantly running coverage on these tests.
Follow up to #50550. Cache empty nodes lists (`fetchDynamicNodes` will return an empty list in case of failure)
now that the plugin properly retries requests to AWS EC2 APIs.
Use the default retry condition instead of never retrying in the discovery plugin, which caused hot retries upstream, and add a test that verifies retrying works.
Closes #50462
Avoid backwards incompatible changes for 8.x and 7.6 by removing type
restriction on compile and Factory. Factories may optionally implement
ScriptFactory. If so, then they can indicate determinism and thus
cacheability.
**Backport**
Relates: #49466
When a third party test failed, it potentially left some snapshots
in the repository. In case of tests running against an external
service like Azure, the remaining snapshots can fail the future
test executions as they are not supposed to exist.
Similarly to what has been done for S3 and GCS, this commit
cleans up remaining snapshots before the test execution.
Closes #50304
Unfortunately, when a bulk delete fails, the bulk delete exception doesn't show
the individual delete errors if you log it outright, so I added this work-around
to get the individual details and produce useful logging.
* Remove BlobContainer Tests against Mocks
Removing all these weird mocks as asked for by #30424.
All these tests are now part of real repository ITs and otherwise left unchanged if they had
independent tests that didn't call the `createBlobStore` method previously.
The HDFS tests also get added coverage as a side-effect because they did not have an implementation
of the abstract repository ITs.
Closes #30424
* Remove Unused Single Delete in BlobStoreRepository
There are no more production uses of the non-bulk delete or the delete that throws
on missing so this commit removes both these methods.
Only the bulk delete logic remains. Where the bulk delete was derived from single deletes,
the single delete code was inlined into the bulk delete method.
Where single delete was used in tests it was replaced by bulk deleting.
Batch deletes get a response for every delete request, not just those that actually hit an existing blob.
The fact that we only responded for existing blobs leads to a degenerate response that throws a parse exception if a batch delete only contains non-existent blobs.
Today settings can declare dependencies on another setting. This
declaration is implemented so that if the declared setting is not set
when the declaring setting is, settings validation fails. Yet, in some
cases we want not only that the setting is set, but that it also has a
specific value. For example, with the monitoring exporter settings, if
xpack.monitoring.exporters.my_exporter.host is set, we not only want
that xpack.monitoring.exporters.my_exporter.type is set, but that it is
also set to local. This commit extends the settings infrastructure so
that this declaration is possible. The use of this in the monitoring
exporter settings will be implemented in a follow-up.
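A hedged, purely conceptual sketch of the kind of check this enables, written against plain `Settings` rather than the actual declaration API this commit extends (the setting names come from the example above):
```java
import org.elasticsearch.common.settings.Settings;

class ExporterSettingsValidationSketch {
    // Conceptual only: if the exporter host is set, its type must not only be set but also be equal to "local".
    static void validate(Settings settings) {
        if (settings.hasValue("xpack.monitoring.exporters.my_exporter.host")) {
            String type = settings.get("xpack.monitoring.exporters.my_exporter.type");
            if ("local".equals(type) == false) {
                throw new IllegalArgumentException(
                    "xpack.monitoring.exporters.my_exporter.type must be set to [local] when the host is set");
            }
        }
    }
}
```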
In order to cache script results in the query shard cache, we need to
check if scripts are deterministic. This change adds a default method
to the script factories, `isResultDeterministic() -> false` which is
used by the `QueryShardContext`.
Script results were never cached and that does not change here. Future
changes will implement this method based on whether the results of the
scripts are deterministic or not and therefore cacheable.
Refs: #49466
**Backport**
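A hedged sketch of the default-method shape this describes; the factory interface shown is simplified, not the real generated factory types:
```java
// Simplified sketch: the real change adds this default to the script factory interfaces.
interface ScriptFactorySketch {
    default boolean isResultDeterministic() {
        return false; // results are assumed non-deterministic by default, so they are never cached
    }
}

// A factory for a deterministic script can later opt in to caching like this:
class DeterministicFactorySketch implements ScriptFactorySketch {
    @Override
    public boolean isResultDeterministic() {
        return true;
    }
}
```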
Our docs specifically mention that CBOR is supported when ingesting attachments. However, this is not tested anywhere.
This adds a test that specifically uses the CBOR format in its IndexRequest, and another one that behaves like CBOR, to the ingest attachment unit tests.
Adds `GET /_script_language` to support Kibana dynamic scripting
language selection.
Response contains whether `inline` and/or `stored` scripts are
enabled as determined by the `script.allowed_types` settings.
For each scripting language registered, such as `painless`,
`expression`, `mustache` or custom, available contexts for the language
are included as determined by the `script.allowed_contexts` setting.
Response format:
```
{
  "types_allowed": [
    "inline",
    "stored"
  ],
  "language_contexts": [
    {
      "language": "expression",
      "contexts": [
        "aggregation_selector",
        "aggs"
        ...
      ]
    },
    {
      "language": "painless",
      "contexts": [
        "aggregation_selector",
        "aggs",
        "aggs_combine",
        ...
      ]
    }
    ...
  ]
}
```
Fixes: #49463
**Backport**
* Copying the request is not necessary here. We can simply release it once the response has been generated, and save a lot of `Unpooled` allocations that way
* Relates #32228
* I think the issue that prevented that PR from being merged was solved by #39634, which moved the bulk index marker search to ByteBuf bulk access, so the composite buffer shouldn't require many additional bounds checks (I'd argue the bounds checks we add, we save by not copying the composite buffer)
* I couldn't necessarily reproduce much of a speedup from this change, but I could reproduce a very measurable reduction in GC time with e.g. Rally's PMC (a 4g heap node and bulk requests of size 5k saw a reduction in young GC time by ~10% for me)
* Make BlobStoreRepository Aware of ClusterState (#49639)
This is a preliminary to #49060.
It does not introduce any substantial behavior change to how the blob store repository
operates. What it does is to add all the infrastructure changes around passing the cluster service to the blob store, associated test changes and a best effort approach to tracking the latest repository generation on all nodes from cluster state updates. This brings a slight improvement to the consistency
by which non-master nodes (or the master directly after a failover) will be able to determine the latest repository generation. To keep this change simple, it does not, however, do any tricky checks for the situation after a repository operation
(create, delete or cleanup) that could theoretically be used to get even greater accuracy.
This change does not in any way alter the behavior of the blobstore repository other than adding a better "guess" for the value of the latest repo generation and is mainly intended to isolate the actual logical change to how the
repository operates in #49060
This change adds a dynamic cluster setting named `indices.id_field_data.enabled`.
When set to `false` any attempt to load the fielddata for the `_id` field will fail
with an exception. The default value in this change is set to `false` in order to prevent
fielddata usage on this field in future versions, but it will be set to `true` when backporting
to 7.x. When the setting is set to true (manually or by default in 7.x) the loading will also issue
a deprecation warning since we want to disallow fielddata entirely when https://github.com/elastic/elasticsearch/issues/26472
is implemented.
Closes #43599
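A hedged sketch of how such a dynamic cluster setting is typically declared; the constant and class names are illustrative, and the default shown is the one from this change (the 7.x backport defaults to `true` as noted above):
```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

class IdFieldDataSettingSketch {
    // Illustrative declaration, not the actual location of the real constant.
    static final Setting<Boolean> ID_FIELD_DATA_ENABLED = Setting.boolSetting(
        "indices.id_field_data.enabled",
        false,                 // default in this change; the 7.x backport defaults to true
        Property.Dynamic,
        Property.NodeScope
    );
}
```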
All the implementations of `EsBlobStoreTestCase` use the exact same
bootstrap code that is also used by their implementation of
`EsBlobStoreContainerTestCase`.
This means all tests might as well live under `EsBlobStoreContainerTestCase`
saving a lot of code duplication. Also, there was no HDFS implementation for
`EsBlobStoreTestCase` which is now automatically resolved by moving the tests over
since there is an HDFS implementation for the container tests.
The annotated text mapper has a field type that currently extends StringFieldType,
which means that all the positional-related query factory methods need to be copied
over from TextFieldType. In addition, MappedFieldType.intervals() hasn't been
overridden, so you can't use intervals queries with annotated text - a major drawback,
since one of the purposes of annotated text is to be able to run positional queries against
annotations.
This commit changes the annotated text field type to extend TextFieldType instead,
adding tests to ensure that position queries work correctly.
Closes #49289
Same as #49518 pretty much but for GCS.
Fixing a few more spots where input stream can get closed
without being fully drained and adding assertions to make sure
it's always drained.
Moved the no-close stream wrapper to production code utilities since
there's a number of spots in production code where it's also useful
(will reuse it there in a follow-up).
Fixing a few small issues found in this code:
1. We were reading the response headers instead of the request headers when checking for blob existence in the mocked single upload path
2. The error code can never be `null`, so the dead code that resulted from assuming it could was removed
3. In the logging wrapper we weren't checking for `Throwable`, so any failing assertions in the http mock would not show up since they
run on a thread managed by the mock http server
This commit fixes the server side logic of "List Objects" operations
of Azure and S3 fixtures. Until today, the fixtures were returning a
"flat" view of stored objects and were not correctly handling the
delimiter parameter. This caused some object listings to be wrongly
interpreted by the snapshot deletion logic in Elasticsearch which
relies on the ability to list child containers of BlobContainer (#42653)
to correctly delete stale indices.
As a consequence, the blobs were not correctly deleted from the
emulated storage service and stayed in heap until they got garbage
collected, causing CI failures like #48978.
This commit fixes the server side logic of the Azure and S3 fixtures when
listing objects so that they now return correct common blob prefixes as
expected by the snapshot deletion process. It also adds an after-test
check to ensure that tests leave the repository empty (besides the
root index files).
Closes #48978
Similarly to what has been done for Azure (#48636) and GCS (#48762),
this commit removes the existing Ant fixture that emulates an S3 storage
service in favor of multiple docker-compose based fixtures.
The goals here are multiple: be able to reuse an s3-fixture outside of the
repository-s3 plugin; allow parallel execution of integration tests; and replace
the existing AmazonS3Fixture, which has evolved into a weird beast, with
dedicated, more maintainable fixtures.
The server side logic that emulates S3 mostly comes from the latest
HttpHandler made for S3 blob store repository tests, with additional
features extracted from the (now removed) AmazonS3Fixture:
authentication checks, session token checks and improved response
errors. Chunked upload request support for S3 objects has been added
too.
The server side logic of all tests now resides in a single S3HttpHandler class.
Whereas AmazonS3Fixture contained logic for basic tests, session token
tests, EC2 tests or ECS tests, the S3 fixtures are now dedicated to each
kind of test. Fixtures are inheriting from each other, making things easier
to maintain.
Backport of #48849. Update `.editorconfig` to make the Java settings the
default for all files, and then apply a 2-space indent to all `*.gradle`
files. Then reformat all the files.
Similarly to what has been done for Azure in #48636, this commit
adds a new :test:fixtures:gcs-fixture project which provides two
docker-compose based fixtures that emulate a Google Cloud
Storage service.
Some code has been extracted from existing tests and placed
into this new project so that it can be easily reused in other
projects.
This commit introduces a consistent, and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
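A hedged, greatly simplified sketch of the pattern being described (static accessors plus an explicit initialization API that fails loudly on eager access); everything beyond that general idea is illustrative:
```java
// Simplified illustration of the pattern, not the actual build-tools class.
public class BuildParamsSketch {

    private static Boolean isCi;

    public static void init(boolean ciValue) {
        isCi = ciValue;
    }

    public static boolean isCi() {
        if (isCi == null) {
            throw new IllegalStateException("Build parameter [isCi] was accessed before it was initialized");
        }
        return isCi;
    }
}
```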
This commit adds a new :test:fixtures:azure-fixture project which
provides a docker-compose based container that runs an AzureHttpFixture
Java class that emulates an Azure Storage service.
The logic to emulate the service is extracted from existing tests and
placed in AzureHttpHandler into the new project so that it can be
easily reused. The :plugins:repository-azure project is an example
of such utilization.
The AzureHttpFixture fixture is just a wrapper around AzureHttpHandler
and is now executed within the docker container.
The :plugins:repository-azure:qa:microsoft-azure project uses the new
test fixture and the existing AzureStorageFixture has been removed.
In repository integration tests, we drain the HTTP request body before
returning a response. Before this change this operation was done using
Streams.readFully(), which uses an 8kb buffer to read the input stream; it
now uses a 1kb buffer for the same operation. This should reduce the allocations
made during the tests and speed them up a bit on CI.
Co-authored-by: Armin Braun <me@obrown.io>
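A hedged sketch of the drain helper described above (the class and method names are illustrative):
```java
import java.io.IOException;
import java.io.InputStream;

class DrainRequestBodySketch {
    // Read and discard the remaining request body using a 1kb buffer instead of Streams.readFully()'s 8kb one.
    static void drain(InputStream requestBody) throws IOException {
        final byte[] buffer = new byte[1024];
        while (requestBody.read(buffer) >= 0) {
            // discard
        }
    }
}
```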
In #47176 we changed the internal HTTP server that emulates
the Azure Storage service so that it includes a response body
for injected errors. This fixed most of the issues reported in
#47120 but sadly I missed mapping one error to its Azure
equivalent, and it triggered some CI failures today.
Closes #47120
BytesReference is currently an abstract class which is extended by
various implementations. This makes it very difficult to use the
delegation pattern. The implication of this is that our releasable
BytesReference is a PagedBytesReference type and cannot be used as a
generic releasable bytes reference that delegates to any reference type.
This commit makes BytesReference an interface and introduces an
AbstractBytesReference for common functionality.
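A hedged illustration of the delegation pattern the interface change enables; the interface here is deliberately reduced to two methods and the wrapper name is illustrative:
```java
// Deliberately simplified: the real BytesReference interface has many more methods.
interface BytesReferenceSketch {
    byte get(int index);

    int length();
}

// With an interface, a releasable wrapper can delegate to any implementation instead of being
// tied to one concrete type such as PagedBytesReference.
final class ReleasableBytesReferenceSketch implements BytesReferenceSketch, AutoCloseable {

    private final BytesReferenceSketch delegate;
    private final Runnable onClose;

    ReleasableBytesReferenceSketch(BytesReferenceSketch delegate, Runnable onClose) {
        this.delegate = delegate;
        this.onClose = onClose;
    }

    @Override
    public byte get(int index) {
        return delegate.get(index);
    }

    @Override
    public int length() {
        return delegate.length();
    }

    @Override
    public void close() {
        onClose.run(); // release the underlying pages or buffers
    }
}
```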
This commit changes the test so that each node uses a specific
service account and private key. It also changes how unique
request ids are generated for refresh token requests using the
token itself, so that the error count will be specific per node (each
node should execute a single refresh token request as tokens
are valid for 1 hour).
We were incorrectly handling `IOExceptions` thrown by
the `InputStream` side of the upload operation, resulting
in a `ClassCastException` as we expected to never get
`IOException` from the Azure SDK code but we do in practice.
This PR also sets an assertion on `markSupported` for the
streams used by the SDK as adding the test for this scenario
revealed that the SDK client would retry uploads for
non-mark-supporting streams on `IOException` in the `InputStream`.
Today the built-in highlighters and plugins have access to the SearchContext through the
highlighter context. However, most of the information exposed in the SearchContext is not needed, and a QueryShardContext
would be enough to perform highlighting. This change replaces the SearchContext with the information that is absolutely
required by a highlighter: a QueryShardContext and the SearchContextHighlight. This change reduces the exposure of the
complex SearchContext and removes the need to clone it in the percolator sub phase.
Relates #47198
Relates #46523
Especially in the snapshot code there's a lot
of logic chaining `ActionRunnables` in tricky
ways now and the code is getting hard to follow.
This change introduces two convenience methods that
make it clear that a wrapped listener is invoked with
certainty in some trickier spots and shortens the code a bit.
While function scores using scripts do allow explanations, they are only
creatable with an expert plugin. This commit improves the situation for
the newer script score query by adding the ability to set the
explanation from the script itself.
To set the explanation, a user would check for `explanation != null` to
indicate an explanation is needed, and then call
`explanation.set("some description")`.
* Remove eclipse conditionals
We used to have some meta projects with a `-test` prefix because
historically eclipse could not distinguish between test and main
source-sets and could only use a single classpath.
This is no longer the case for the past few Eclipse versions.
This PR adds the necessary configuration to correctly categorize source
folders and libraries.
With this change eclipse can import projects, and the visibility rules
are correct, e.g. auto-complete doesn't offer classes from test code or
`testCompile` dependencies when editing classes in `main`.
Unfortunately the cyclic dependency detection in Eclipse doesn't seem to
take the difference between test and non test source sets into account,
but since we are checking this in Gradle anyhow, it's safe to set to
`warning` in the settings. Unfortunately there is no setting to ignore
it.
This might cause problems when building since Eclipse will probably not
know the right order to build things in, so more work might be necessary.
This commit changes the repository base paths used in Azure/S3/GCS
integration tests so that they don't conflict with each other when tests
run in parallel on real storage services.
Closes #47202
The Azure SDK client expects server errors to have a body,
something that looks like:
<?xml version="1.0" encoding="utf-8"?>
<Error>
<Code>string-value</Code>
<Message>string-value</Message>
</Error>
I forgot to add such errors in the Azure tests, and that triggers
some NPEs in the client like the one reported in #47120.
Closes #47120
Similarly to what has been done for S3 and GCS, this commit
adds unit tests that verify the retry logic of the Azure SDK
client implementation when the remote service returns errors.
It only tests the retry logic in case of errors and not in
case of timeouts because Azure client timeout options are
not exposed as settings.
Similarly to what has been done for S3 in #45383, this commit
adds unit tests that verify the behavior of the SDK client and
blob container implementation for Google Storage when the
remote service returns errors.
The main purpose was to add an extra test to the specific retry
logic for 410-Gone errors added in #45963.
Relates #45963
This PR adds some restrictions around testfixtures to make sure the same service (as defined in docker-compose.yml) is not shared between multiple projects.
Sharing would break running with --parallel.
Projects can still share fixtures as long as each has its own service within.
This is still useful to share some of the setup and configuration code of the fixture.
Project now also have to specify a service name when calling useCluster to refer to a specific service.
If this is not the case all services will be claimed and the fixture can't be shared.
For this reason fixtures have to explicitly specify if they are using themselves (fixture and tests in the same project).
GoogleCloudStorageBlobStore.deleteBlobsIgnoringIfNotExists() does
not correctly catch StorageException thrown by batch.submit().
In the case a snapshot is deleted through BlobStoreRepository.deleteSnapshot()
a storage exception is not caught (only IOExceptions are) so the deletion is
interrupted and indices cannot be cleaned up. The storage exception bubbles
up to SnapshotService.deleteSnapshotFromRepository() but the listener that
removes the deletion from the cluster state is not executed, leaving the
deletion in the cluster state.
This bug has been reported in #46772 where batch.submit() threw an
exception in the test testIndicesDeletedFromRepository and following
tests failed because a snapshot deletion was running.
Relates #46772
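A hedged sketch of the kind of handling the bug calls for, surfacing the `StorageException` from `batch.submit()` as an `IOException` so the snapshot deletion path (which only handles `IOException`) sees the failure; the class and method names are illustrative:
```java
import java.io.IOException;

import com.google.cloud.storage.StorageBatch;
import com.google.cloud.storage.StorageException;

class GcsBatchDeleteSketch {
    static void submitBatch(StorageBatch batch) throws IOException {
        try {
            batch.submit();
        } catch (StorageException e) {
            // StorageException is unchecked, so without this it would bypass the IOException handling upstream.
            throw new IOException("Exception when deleting blobs", e);
        }
    }
}
```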
This commit adds support for Put Block API to the internal HTTP server
used in Azure repository integration tests. This allows us to test the
behavior of the Azure SDK client when the Azure Storage service
returns errors when uploading a blob in multiple blocks or when
downloading a blob using ranged downloads.
This commit adds support for resumable uploads to the internal HTTP
server used in GoogleCloudStorageBlobStoreRepositoryTests. This
way we can also test the behavior of the Google's client when the
service returns server errors in response to resumable upload requests.
The BlobStore implementation for GCS has the choice between 2
methods to upload a blob: resumable and multipart. In the current
implementation, the client executes a resumable upload if the blob
size is larger than LARGE_BLOB_THRESHOLD_BYTE_SIZE,
otherwise it executes a multipart upload. This commit makes this
logic overridable in tests, allowing us to randomize the decision of
using one method or the other.
The commit adds support for single request resumable uploads
and chunked resumable uploads (the blob is uploaded into multiple
2Mb chunks; each chunk being a resumable upload). For this last
case, this PR also adds a test testSnapshotWithLargeSegmentFiles
which makes it more probable that a chunked resumable upload is
executed.
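A hedged sketch of the size-based decision and the override point the tests can use; the names are illustrative and the threshold value is a placeholder:
```java
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch: tests can override largeBlobThresholdInBytes() to randomize which upload path is taken.
abstract class GcsUploadDecisionSketch {

    static final long LARGE_BLOB_THRESHOLD_BYTE_SIZE = 5 * 1024 * 1024; // placeholder value

    long largeBlobThresholdInBytes() {
        return LARGE_BLOB_THRESHOLD_BYTE_SIZE;
    }

    void writeBlob(String blobName, InputStream data, long blobSize) throws IOException {
        if (blobSize > largeBlobThresholdInBytes()) {
            writeBlobResumable(blobName, data, blobSize);   // resumable upload session
        } else {
            writeBlobMultipart(blobName, data, blobSize);   // one-shot multipart upload
        }
    }

    abstract void writeBlobResumable(String blobName, InputStream data, long blobSize) throws IOException;

    abstract void writeBlobMultipart(String blobName, InputStream data, long blobSize) throws IOException;
}
```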
A resumable upload session can fail with a 410 error and should
be retried in that case. I added retrying twice, by resetting
the given `InputStream`, as the retry mechanism, since the same
approach is already used by the AWS S3 SDK and relied upon
by the S3 repository implementation.
Related GCS documentation:
https://cloud.google.com/storage/docs/json_api/v1/status-codes#410_Gone
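A hedged sketch of the retry-on-410 approach described above; the helper interface, method names and the hard-coded retry count are illustrative:
```java
import java.io.IOException;
import java.io.InputStream;

import com.google.cloud.storage.StorageException;

class ResumableUploadRetrySketch {

    interface Upload {
        void run(InputStream data) throws IOException; // the actual upload call into the GCS client
    }

    static void uploadWithRetries(InputStream data, long size, Upload upload) throws IOException {
        assert data.markSupported() : "the stream must support mark/reset to be retried";
        data.mark(Math.toIntExact(size));
        StorageException last = null;
        for (int attempt = 0; attempt <= 2; attempt++) {    // retry twice, as described above
            try {
                upload.run(data);
                return;
            } catch (StorageException e) {
                if (e.getCode() != 410) {                   // only 410 Gone is treated as retryable here
                    throw new IOException(e);
                }
                last = e;
                data.reset();                               // rewind so the next attempt re-sends from the start
            }
        }
        throw new IOException("resumable upload failed after retries", last);
    }
}
```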
There were some issues with the Azure implementation requiring
permissions to list all containers due to a container-exists
check. This was caught in CI this time, but going forward we
should ensure that CI is executed using a token that does not
allow listing containers.
Relates #43288
Today if the connection to S3 times out or drops after starting to download an
object then the SDK does not attempt to recover or resume the download, causing
the restore of the whole shard to fail and retry. This commit allows
Elasticsearch to detect such a mid-stream failure and to resume the download
from where it failed.
This commit modifies the HTTP server used in
AzureBlobStoreRepositoryTests so that it randomly returns
server errors for any type of request executed by the Azure client.
This commit modifies the HTTP server used in
GoogleCloudStorageBlobStoreRepositoryTests so that it randomly
returns server errors. The test does not inject server errors for the
following types of request: batch request, resumable upload request.
The `repository-s3` plugin has supported a storage class of `onezone_ia` since
the SDK upgrade in #30723, but we do not test or document this fact. This
commit adds this storage class to the docs and adds a test to ensure that the
documented storage classes are all accepted by S3 too.
Fixes #30474
This commit removes the usage of MockGoogleCloudStoragePlugin in
GoogleCloudStorageBlobStoreRepositoryTests and replaces it with an
HttpServer that emulates the Storage service. This allows the repository
tests to use the real Google client under the hood and will allow
us to test the behavior of the snapshot/restore feature for GCS repositories
by simulating random server-side internal errors.
The HTTP server used to emulate the Storage service is intentionally simple
and minimal to keep things understandable and maintainable. Testing full
client options on the server side (like authentication, chunked encoding
etc) remains the responsibility of the GoogleCloudStorageFixture.
Similarly to what had been done for S3 (#46081) and GCS (#46255)
this commit adds repository integration tests for Azure, based on an
internal HTTP server instead of mocks.
When some high values are randomly picked up - for example the number
of indices to snapshot or the number of snapshots to create - the tests
in S3BlobStoreRepositoryTests can generate a high number of requests to
the internal S3 server.
In order to test the retry logic of the S3 client, the internal server is
designed to randomly generate server errors. When many
requests are made, it is possible that the S3 client reaches its maximum
number of successive retries capacity. Then the S3 client will stop
retrying requests until enough retry attempts succeed, but it means
that any request could fail before reaching the max retries count and
make the test fail too.
Closes #46217, closes #46218, closes #46219
This commit modifies the HTTP server used in S3BlobStoreRepositoryTests
so that it randomly returns server errors for any type of request executed by
the SDK client. It is now possible to verify that the repository tests are
successfully completed even if one or more errors were returned by the S3
service in response to a blob upload, a blob deletion or an object listing request,
etc.
Because injecting errors forces the SDK client to retry requests, the test limits
the maximum number of errors to send in response to each request to 3 retries.
This commit removes the usage of MockAmazonS3 in S3BlobStoreRepositoryTests
and replaces it with an HttpServer that emulates the S3 service. This allows the
repository tests to use the real Amazon S3 client under the hood and will
allow us to test the behavior of the snapshot/restore feature for S3 repositories by
simulating random server-side internal errors.
The HTTP server used to emulate the S3 service is intentionally simple and minimal
to keep things understandable and maintainable. Testing full client options on the
server side (like authentication, chunked encoding etc) remains the responsibility
of the AmazonS3Fixture.
This commit starts from the simple premise that the use of node settings
in blob store repositories is a mistake. Here we see that the node
settings are used to get default settings for store and restore throttle
rates. Yet, since there are not any node settings registered to this
effect, there can never be a default setting to fall back to there, and
so we always end up falling back to the default rate. Since this was the
only use of node settings in blob store repository, we move them. From
this, several places fall out where we were chaining settings through
only to get them to the blob store repository, so we clean these up as
well. That leaves us with the changeset in this commit.
This commit refactors the S3 credentials tests in
RepositoryCredentialsTests so that it now uses a single
node (ESSingleNodeTestCase) to test how secure/insecure
credentials are overriding each other. Using a single node
makes it much easier to understand what each test is actually
testing and IMO better reflects how things are initialized.
It also allows us to fold into this class the test
testInsecureRepositoryCredentials, which was wrongly located
in S3BlobStoreRepositoryTests. By moving this test away, the
S3BlobStoreRepositoryTests class does not need the
allow_insecure_settings option anymore and thus can be
executed as part of the usual gradle test task.