This commit fixes repository-azure hanging on basic operations. This will be reverted
once it's fixed upstream in the Azure library.
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
This commit removes deprecated analyzer instantiation that is no longer
permitted in OpenSearch 2.0.0.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This PR removes the LegacyESVersion.V_6_0_* constants, including all pre-release
versions and bugfix releases.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Lucene 9 removes support for SimpleFS File System format. This PR completely
removes SimpleFS support which was deprecated in a previous PR.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
I observed a test failure with the message
'Attempted to append to non-started appender mock' from an assertion in
`OpenSearchTestCase::after`. I believe this indicates that a
MockLogAppender (which is named "mock") was added as an appender to the
static logging context and some other test in the same JVM happened to
cause a logging statement to hit that appender and cause an error, which
then caused an unrelated test to fail (because they share static state
with the logger). Almost all usages of MockLogAppender start it
immediately after creation. I found a few that did not and fixed those.
I also made a static helper in MockLogAppender to start it upon
creation.
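A minimal sketch of what such a start-on-creation helper could look like (hypothetical names; the real MockLogAppender API may differ slightly):

```
// Hypothetical static helper; shown only to illustrate the start-on-creation idea.
public static MockLogAppender createStarted() {
    MockLogAppender appender = new MockLogAppender();
    appender.start(); // started right away so no logging statement can hit a non-started appender
    return appender;
}
```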
Signed-off-by: Andrew Ross <andrross@amazon.com>
* Modernize and consolidate JDK usage across all stages of the build. Use JDK 17 as the bundled JDK distribution to run tests
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Using -Djava.security.egd=file:/dev/urandom explicitly for cli tests
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Drop mocksocket in favour of custom security manager checks (tests only)
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Slightly relaxed host checks to allow all local addresses
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
Lucene 9 removes support for SimpleFS File System format. This commit deprecates
the SimpleFS format in favor of NIOFS.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Adding broken links checker
Signed-off-by: Vacha Shah <vachshah@amazon.com>
* Adding exclusions for links
Signed-off-by: Vacha Shah <vachshah@amazon.com>
* Correcting broken link
Signed-off-by: Vacha Shah <vachshah@amazon.com>
* Removing the benchmarks link
Signed-off-by: Vacha Shah <vachshah@amazon.com>
* Update commons-io-2.4.jar to 2.7 for plugins/discovery-azure-classic module
* Remove unused jackson dependency and respective LICENSE and NOTICE
* Update guava dependency to mitigate CVE for repository-azure plugin
Signed-off-by: Abbas Hussain <abbas_10690@yahoo.com>
Hadoop 2.8.5 has been reported to have CVEs (https://bugzilla.redhat.com/show_bug.cgi?id=1883549). We need to upgrade this to 2.10.1. This also updates the hadoop-minicluster version to 2.10.1. This upgrade also brings in two additional dependencies, woodstox-core and stax2-api, which are added along with their sha1s, licenses and notices.
Also upgrade guava to the latest as per the CVE https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
For discovery-gce and repository-gcs plugins update the google-oauth-client library to version 1.31.0. See CVE details at https://nvd.nist.gov/vuln/detail/CVE-2020-7692
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit rebases the versioning to OpenSearch 1.0.0
Co-authored-by: Rabi Panda <adnapibar@gmail.com>
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit adds the SPDX license header and modifications copyright to security
policy files.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit fixes a renaming issue (opensearch.co -> opensearch.org) which was causing a few integration test failures.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit fixes some partial rename issues and as a result fixes the failing secure repository-hdfs tests.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit adds the SPDX Apache-2.0 license header along with an additional
copyright header for all modifications.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Change "Test elasticsearch" back
* Update content, language and size of test attachment
* Regenerate test attachment content with updated date and author
Signed-off-by: Harold Wang <harowang@amazon.com>
This commit fixes some more renaming issues and as a result fixes the failing tests,
* :qa:logging-config:test
* :example-plugins:painless-whitelist:yamlRestTest
* :modules:reindex:test
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit replaces instances of 'elasticsearch' with opensearch everywhere
except references to issues, and other places needed to test compatibility with
old elasticsearch clusters.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit renames several files that contain the name elasticsearch, replacing it with opensearch.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Fix miscellaneous issues identified during `gradle precommit`. These issues are the side effects of the renaming to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit fixes the currently broken gradle build resulting from the renaming work. It reverts a few dependencies and comments out the `opensearch_distibutions` task which is currently failing for some builds. We will address these separately in the future once we have a working build.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors the remaining o.e.index and o.e.test packages in the
test/fixtures module. References throughout the codebase are also refactored.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* [Rename] plugins (#193)
This PR refactors files under the "plugins" folder as part of the Elasticsearch to OpenSearch renaming effort.
Signed-off-by: Harold Wang <harowang@amazon.com>
This commit refactors the classes in o.e.action.support to
o.opensearch.action.support. The remaining directories will be refactored in a
separate commit.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
Refactor `server/snapshots` to rename the package names from `org.elasticsearch.snapshots` to `org.opensearch.snapshots` as part of the rename to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit refactors all classes in o.e.action.admin.cluster to
org.opensearch.action.admin.cluster. References are updated
throughout the codebase.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors top level classes in o.e.action to o.opensearch.action.
References throughout the rest of the codebase have been updated.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors o.e.action.admin.indices package to
o.opensearch.action.admin.indices. References throughout the codebase have been
updated to reflect the new package location.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors ElasticsearchParseException class in the server module to
OpenSearchParseException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the ElasticsearchException class located in the server module
to OpenSearchException. References and usages throughout the rest of the
codebase are fully refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
* Cleanup build-scan, remove publish scan to elastic server
* Cleanup build script to exclude security-authorization-engine whose tests have a dependency on xpack
Co-authored-by: Huan Jiang <huanji@amazon.com>
Signed-off-by: Peter Nied <petern@amazon.com>
In the refactoring of TextFieldMapper, we lost the ability to define
a default search or search_quote analyzer in index settings. This
commit restores that ability, and adds some more comprehensive
testing.
Fixes #65434
This PR adds factory methods for the most common implementations:
* `SourceValueFetcher.identity` to pass through the source value untouched.
* `SourceValueFetcher.toString` to simply convert the source value to a string.
As a result of this, we can remove a chunk of code from TypeParsers as well. Tests
for search/index mode analyzers have moved into their own file. This commit also
rationalises the serialization checks for parameters into a single SerializerCheck
interface that takes the values includeDefaults, isConfigured and the value
itself.
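As a rough sketch (assuming a SourceValueFetcher base class with a parseSourceValue hook; the real parameter lists may differ), the two factories amount to:

```
// Hedged sketch of the factory methods named above; signatures are illustrative.
public static SourceValueFetcher identity(String fieldName, QueryShardContext context, String format) {
    return new SourceValueFetcher(fieldName, context) {
        @Override
        protected Object parseSourceValue(Object value) {
            return value; // pass the source value through untouched
        }
    };
}

public static SourceValueFetcher toString(String fieldName, QueryShardContext context, String format) {
    return new SourceValueFetcher(fieldName, context) {
        @Override
        protected Object parseSourceValue(Object value) {
            return value.toString(); // simply convert the source value to a string
        }
    };
}
```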
Relates to #62988
When constructing a value fetcher, the 'parsesArrayValue' flag must match
`FieldMapper#parsesArrayValue`. However there is nothing in code or tests to
help enforce this.
This PR reworks the value fetcher constructors so that `parsesArrayValue` is
'false' by default. Just as for `FieldMapper#parsesArrayValue`, field types must
explicitly set it to true and ensure the behavior is covered by tests.
Follow-up to #62974.
Referencing a project instance during task execution is discouraged by
Gradle and should be avoided, e.g. it is incompatible with Gradle's
incubating configuration cache. Instead, there are services available to handle
archive and filesystem operations in task actions.
Brings us one step closer to #57918
For runtime fields, we will want to do all search-time interaction with
a field definition via a MappedFieldType, rather than a FieldMapper, to
avoid interfering with the logic of document parsing. Currently, fetching
values for runtime scripts and for building top hits responses need to
call a method on FieldMapper. This commit moves this method to
MappedFieldType, incidentally simplifying the current call sites and freeing
us up to implement runtime fields as pure MappedFieldType objects.
Currently Netty will batch compression an entire HTTP response
regardless of its content size. It allocates a byte array at least of
the same size as the uncompressed content. This causes issues with our
attempts to remove humongous G1GC allocations. This commit resolves the
issue by splitting responses into 128KB chunks.
This has the side-effect of making large outbound HTTP responses that
are compressed be sent with chunked transfer-encoding.
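For illustration only, the splitting idea amounts to something like the following sketch (plain arrays, not the actual Netty pipeline code):

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class ChunkingSketch {
    // Hypothetical helper: cut an uncompressed response body into fixed-size chunks
    // so each chunk can be compressed and flushed on its own (e.g. chunkSize = 128 * 1024).
    static List<byte[]> splitIntoChunks(byte[] content, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < content.length; offset += chunkSize) {
            chunks.add(Arrays.copyOfRange(content, offset, Math.min(offset + chunkSize, content.length)));
        }
        return chunks;
    }
}
```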
Currently we duplicate our specialized cors logic in all transport
plugins. This is unnecessary as it could be implemented in a single
place. This commit moves the logic to server. Additionally it fixes a
bug where we are incorrectly closing http channels on early CORS
responses.
This commit adds a mechanism to MapperTestCase that allows implementing
test classes to check that their parameters can be updated, or throw conflict
errors as advertised. Child classes override the registerParameters method
and tell the passed-in UpdateChecker class about their parameters. Simple
conflicts can be checked, using the existing minimal mappings as a base to
compare against, or alternatively a particular initial mapping can be provided
to check edge cases (eg, norms can be updated from true to false, but not
vice versa). Updates are registered with a predicate that checks that the update
has in fact been applied to the resulting FieldMapper.
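A rough sketch of the pattern, using the names from this description (the checker's exact API may differ):

```
// Hypothetical example of a child test class registering its parameters.
@Override
protected void registerParameters(UpdateChecker checker) throws IOException {
    // a simple conflict check, compared against the existing minimal mappings
    checker.registerConflictCheck("index", b -> b.field("index", false));
    // an update check with an explicit initial mapping: norms can go from true to false, not back
    checker.registerUpdateCheck(
        b -> b.field("type", "text").field("norms", true),
        b -> b.field("type", "text").field("norms", false),
        mapper -> assertFalse(mapper.fieldType().getTextSearchInfo().hasNorms()) // verify the update took effect
    );
}
```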
Fixes #61631
Most of our field types have the same implementation for their `existsQuery` method which relies on doc_values if present, otherwise it queries norms if available or uses a term query against the _field_names meta field. This standard implementation is repeated in many different mappers.
There are field types that only query doc_values, because they always have them, and field types that always query _field_names, because they never have norms nor doc_values. We could apply the same standard logic to all of these field types as `MappedFieldType` has the knowledge about what data structures are available.
This commit introduces a standard implementation that does the right thing depending on the data structure that is available. With that only field types that require a different behaviour need to override the existsQuery method.
At the same time, this no longer forces subclasses to override `existsQuery`, which could be forgotten when needed. To address this we introduced a new test method in `MapperTestCase` that verifies the `existsQuery` being generated and its consistency with the available data structures.
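A minimal sketch of that standard logic, assuming the Lucene 8 query classes of this era (not the verbatim method):

```
// Hedged sketch: prefer doc_values, then norms, then fall back to the _field_names term query.
public Query existsQuery(QueryShardContext context) {
    if (hasDocValues()) {
        return new DocValuesFieldExistsQuery(name());
    } else if (getTextSearchInfo().hasNorms()) {
        return new NormsFieldExistsQuery(name());
    } else {
        return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name()));
    }
}
```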
The dense vector field is not aggregatable although it produces fielddata through its BinaryDocValuesField. It should pass up hasDocValues set to true to its parent class in its constructor, and return isAggregatable false. Same for the sparse vector field (only in 7.x).
This may not have consequences today, but it will be important once we try to share the same exists query implementation throughout all of the mappers with #57607.
Backports #61590 to 7.x
So far we don't allow metadata fields in the document _source. However, in the case of the _doc_count field mapper (#58339) we want to be able to set it through the document _source.
This PR adds a method to the metadata field parsers that exposes if the field can be included in the document source or not.
This way each metadata field can configure whether it can be included in the document _source.
A recent AWS SDK upgrade has introduced a new source of spurious `WARN`
logs when the security manager prevents access to the user's home
directory and therefore to `$HOME/.aws/config`. This is the behaviour we
want, and it's harmless and handled by the SDK as if the config doesn't
exist, so this log message is unnecessary noise. This commit suppresses
this noisy logging by default.
Relates #20313, #56346, #53962
Closes #62493
The AssertingInputStream in S3BlobContainerRetriesTests verifies
that InputStreams are either fully consumed or aborted, but the
eof flag is only set when the underlying stream returns it.
When buffered reads are executed and the exact number
of remaining bytes is read, the eof flag is not set to true. Instead
the test should rely on the total number of bytes read to know if
the stream has been fully consumed.
Closes #62390
This new snapshot contains the following JIRAs that we're interested in:
- [LUCENE-9525](https://issues.apache.org/jira/browse/LUCENE-9525)
Better handling of small documents. This should improve retrieval times
when documents are less than ~1kB.
- [LUCENE-9510](https://issues.apache.org/jira/browse/LUCENE-9510)
Faster flushes when index sorting is enabled by not compressing the
temporary files that store stored fields and term vectors.
This implements the `fields` API in `_search` for runtime fields using
doc values. Most of that implementation is stolen from the
`docvalue_fields` fetch sub-phase, just moved into the same API that the
`fields` API uses. At this point the `docvalue_fields` fetch phase looks
like a special case of the `fields` API.
While I was at it I moved the "which doc values sub-implementation
should I use for fetching?" question from a bunch of `instanceof`s to a
method on `LeafFieldData` so we can be much more flexible with what is
returned and we're not forced to extend certain classes just to make the
fetch phase happy.
Relates to #59332
Today when an S3RetryingInputStream is closed the remaining bytes
that were not consumed are drained right before closing the underlying
stream. In some contexts it might be more efficient to not consume the
remaining bytes and just drop the connection.
This is for example the case with snapshot backed indices prewarming,
where there is no point in reading potentially large blobs if we know
the cache file we want to write the content of the blob to has already been
evicted. Draining all bytes here takes a slot in the prewarming thread
pool for nothing.
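A hedged sketch of the close-time decision this enables; the flag and drain helper below are hypothetical, only S3ObjectInputStream#abort() is the real SDK call:

```
// Hypothetical close() logic; names are illustrative, not the actual S3RetryingInputStream code.
@Override
public void close() throws IOException {
    if (drainOnClose) {                      // hypothetical flag: caller wants the HTTP connection reused
        drainRemainingBytes(currentStream);  // hypothetical helper consuming the leftover bytes
        currentStream.close();
    } else {
        currentStream.abort();               // drop the connection without draining the remaining bytes
    }
}
```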
This commit removes `integTest` task from all es-plugins.
Most relevant projects have been converted to use yamlRestTest, javaRestTest,
or internalClusterTest in prior PRs.
A few projects needed to be adjusted to allow complete removal of this task
* x-pack/plugin - converted to use yamlRestTest and javaRestTest
* plugins/repository-hdfs - kept the integTest task, but use `rest-test` plugin to define the task
* qa/die-with-dignity - convert to javaRestTest
* x-pack/qa/security-example-spi-extension - convert to javaRestTest
* multiple projects - remove the integTest.enabled = false (yay!)
related: #61802
related: #60630
related: #59444
related: #59089
related: #56841
related: #59939
related: #55896
Kibana often highlights *everything* like this:
```
POST /_search
{
"query": ...,
"size": 500,
"highlight": {
"fields": {
"*": { ... }
}
}
}
```
This can get slow when there are hundreds of mapped fields. I tested
this locally and unscientifically and it took a request from 20ms to
150ms when there are 100 fields. I've seen clusters with 2000 fields
where simple searches go from 500ms to 1500ms just by turning on this sort
of highlighting, even when the query is just a `range` and the
fields are all numbers, so it won't highlight anything.
This speeds up the `unified` highlighter in this case in a few ways:
1. Build the highlighting infrastructure once per field rather than once per
document per field. This cuts out a *ton* of work analyzing the query
over and over and over again.
2. Bail out of the highlighter before loading values if we can't produce
any results.
Combined, these take that local 150ms case down to 65ms. This is unlikely
to be really useful when there are only a few fetched docs and only a
few fields, but we often end up having many fields with many fetched
docs.
This pull request adds a new set of APIs that allows tracking the number of requests performed
by the different registered repositories.
In order to avoid losing data, the repository statistics are archived after the repository is closed for
a configurable retention period `repositories.stats.archive.retention_period`. The API exposes the
statistics for the active repositories as well as the modified/closed repositories.
Backport of #60371
There are currently half a dozen ways to add plugins and modules for
test clusters to use. All of them require the calling project to peek
into the plugin or module they want to use to grab its bundlePlugin
task, and then both depend on that task, as well as extract the archive
path the task will produce. This creates cross project dependencies that
are difficult to detect, and if the dependent plugin/module has not yet
been configured, the build will fail because the task does not yet
exist.
This commit makes the plugin and module methods for testclusters
symmetric, simply accepting a file provider directly, or a project
path that will produce the plugin/module zip. Internally this new
variant uses normal configuration/dependencies across projects to get
the zip artifact. It also has the added benefit of no longer needing the
caller to add to the test task a dependsOn for bundlePlugin task.
FetchSubPhase has two 'execute' methods, one which takes all hits to be examined,
and one which takes a single HitContext. It's not obvious which one should be implemented
by a given sub-phase, or if implementing both is a possibility; nor is it obvious that we first
run the hitExecute methods of all subphases, and then subsequently call all the
hitsExecute methods.
This commit reworks FetchSubPhase to replace these two variants with a processor class,
`FetchSubPhaseProcessor`, that is returned from a single `getProcessor` method. This
processor class has two methods, `setNextReader()` and `process`. FetchPhase collects
processors from all its subphases (if a subphase does not need to execute on the current
search context, it can return `null` from `getProcessor`). It then sorts its hits by docid, and
groups them by lucene leaf reader. For each reader group, it calls `setNextReader()` on
all non-null processors, and then passes each doc id to `process()`.
Implementations of fetch sub phases can divide their concerns into per-request, per-reader
and per-document sections, and no longer need to worry about sorting docs or dealing with
reader slices.
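In sketch form, the processor contract described above looks roughly like this (exact signatures assumed):

```
// Hedged sketch of the per-subphase processor contract.
interface FetchSubPhaseProcessor {
    // called once per leaf reader, in docid order, to set up per-segment state
    void setNextReader(LeafReaderContext readerContext) throws IOException;

    // called for each hit that falls in the current leaf
    void process(FetchSubPhase.HitContext hitContext) throws IOException;
}
```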
FetchSubPhase now provides a FetchSubPhaseExecutor that exposes two methods,
setNextReader(LeafReaderContext) and execute(HitContext). The parent FetchPhase collects all
these executors together (if a phase should not be executed, then it returns null here); then
it sorts hits, and groups them by reader; for each reader it calls setNextReader, and then
execute for each hit in turn. Individual sub phases no longer need to concern themselves with
sorting docs or keeping track of readers; global structures can be built in
getExecutor(SearchContext), per-reader structures in setNextReader and per-doc in execute.
The recursive data.path FilePermission check is an extremely hot
codepath in Elasticsearch. Unfortunately the FilePermission check in
Java is extremely allocation heavy. As it iterates through different
file permissions, it allocates byte arrays for each Path component that
must be compared. This PR improves the situation by adding the recursive
data.path FilePermission to its own PermissionsCollection object which
is checked first.
It looks like it is possible for a request to throw an exception early
before any API interaction has happened. This can lead to the request count
map containing a `null` for the request count key.
The assertion is not correct and we should not NPE here
(as that might also hide the original exception since we are running this code in
a `finally` block from within the S3 SDK).
Closes #61670
Runtime fields need to have a SearchLookup available, when building their fielddata implementations, so that they can look up other fields, runtime or not.
To achieve that, we add a Supplier<SearchLookup> argument to the existing MappedFieldType#fielddataBuilder method.
As we introduce the ability to look up other fields while building fielddata for mapped fields, we implicitly add the ability for a field to require other fields. This requires some protection mechanism that detects dependency cycles to prevent stack overflow errors.
With this commit we also introduce detection for cycles, as well as a limit on the depth of the references for a runtime field. Note that we also plan on introducing cycles detection at compile time, so the runtime cycles detection is a last resort to prevent stack overflow errors but we hope that we can reject runtime fields from being registered in the mappings when they create a cycle in their definition.
Note that this commit does not introduce any production implementation of runtime fields, but is rather a pre-requisite to merge the runtime fields feature branch.
This is a breaking change for MapperPlugins that plug in a mapper, as the signature of MappedFieldType#fielddataBuilder changes from taking a single argument (the index name), to also accept a Supplier<SearchLookup>.
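Sketched from this description (parameter names illustrative), the signature shift MapperPlugin authors have to adapt to is:

```
// Hedged sketch of the MappedFieldType#fielddataBuilder change.
// Before:
IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName);

// After: the Supplier<SearchLookup> lets a field look up other fields lazily while
// building its fielddata, without triggering the lookup unless it is actually needed.
IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName, Supplier<SearchLookup> searchLookup);
```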
Relates to #59332
Co-authored-by: Nik Everett <nik9000@gmail.com>
DeprecationLogger's constructor should not create two loggers. It was
taking a parent logger instance, changing its name with a .deprecation
prefix, and creating a new logger.
Most of the time the parent logger was not needed. It was causing Log4j to
unnecessarily cache the unused parent logger instance.
depends on #61515
backports #58435
Backport to add case insensitive support for regex queries.
Forks a copy of Lucene’s RegexpQuery and RegExp from Lucene master.
This can be removed when 8.7 Lucene is released.
Closes #59235
Splitting DeprecationLogger into two: HeaderWarningLogger, responsible for adding response warning headers, and ThrottlingLogger, responsible for limiting duplicated log entries for the same key (previously deprecateAndMaybeLog).
Introducing a ThrottlingAndHeaderWarningLogger which is a base for other common logging usages where both a response warning header and logging throttling are needed.
relates #55699
relates #52369
backports #55941
Before when a value was copied to a field through a parent field or `copy_to`,
we parsed it using the `FieldMapper` from the source field. Instead we should
parse it using the target `FieldMapper`. This ensures that we apply the
appropriate mapping type and options to the copied value.
To implement the fix cleanly, this PR refactors the value parsing strategy. Now
instead of looking up values directly, field mappers produce a helper object
`ValueFetcher`. The value fetchers are responsible for almost all aspects of
fetching, including looking up the right paths in the _source.
The PR is fairly big but each commit can be reviewed individually.
Fixes #61033.
Same as https://github.com/elastic/elasticsearch/pull/43288 for GCS.
We don't need to do the bucket exists check before using the repo, that just needlessly
increases the necessary permissions for using the GCS repository.
* Merge test runner task into RestIntegTest (#60261)
* Merge test runner task into RestIntegTest
* Reorganizing Standalone runner and RestIntegTest task
* Rework general test task configuration and extension
* Fix merge issues
* use former 7.x common test configuration
We have various ways of copying between two streams and handling thread-local
buffers throughout the codebase. This commit unifies a number of them and
removes buffer allocations in many spots.
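A minimal sketch of the kind of shared copy utility this describes (names and buffer size are illustrative, not the actual helper):

```
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class StreamsSketch {
    // reuse a thread-local buffer instead of allocating one per copy call
    private static final ThreadLocal<byte[]> COPY_BUFFER = ThreadLocal.withInitial(() -> new byte[8 * 1024]);

    static long copy(InputStream in, OutputStream out) throws IOException {
        final byte[] buffer = COPY_BUFFER.get();
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```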
- Replace immediate task creations by using task avoidance api
- One step closer to #56610
- Still many tasks are created during configuration phase. Tackled in separate steps
The `SourceLookup` class provides access to the _source for a particular
document, specified through `SourceLookup#setSegmentAndDocument`. Previously
the search context contained a single `SourceLookup` that was shared between
different fetch subphases. It was hard to reason about its state: is
`SourceLookup` set to the expected document? Is the _source already loaded and
available?
Instead of using a global source lookup, the fetch hit context now provides
access to a lookup that is set to load from the hit document.
This refactor closes #31000, since the same `SourceLookup` is no longer shared
between the 'fetch _source phase' and script execution.
For all OSS plugins (except repository-* and discovery-*) integTest
task is now a no-op and all of the tests are now executed via a test,
yamlRestTest, javaRestTest, or internalClusterTest.
This commit does NOT convert the discovery-* and repository-* since they
are a bit more complex than the rest of the tests and this PR is large enough.
Those plugins will be addressed in future PR(s).
This commit also fixes a minor issue where the REST API was not copied
for projects that only had YAML REST tests.
related: #56841
For OSS plugins that begin with discovery-*, the integTest
task is now a no-op and all of the tests are now executed via a test,
yamlRestTest, javaRestTest, or internalClusterTest.
related: #56841
related: #59444
For OSS plugins that begin with repository-*, the integTest
task is now a no-op and all of the tests are now executed via a test,
yamlRestTest, javaRestTest, or internalClusterTest.
related: #56841
related: #59444
In #60297 we added some tests related to logging from the transport
layer, but these tests failed occasionally since the cluster
was kept alive between test invocations but the logging framework
expected it only to be used for a single test. With this commit we
reduce the scope of the internal test cluster to `TEST` to solve this
problem.
Closes #60321.
This feature adds a new `fields` parameter to the search request, which
consults both the document `_source` and the mappings to fetch fields in a
consistent way. The PR merges the `field-retrieval` feature branch.
Addresses #49028 and #55363.
Transport connections between nodes remain in place until one or other
node shuts down or the connection is disrupted by a flaky network.
Today it is very difficult to demonstrate that transient failures and
cluster instability are caused by the network even though this is often
the case. In particular, transport connections open and close without
logging anything, even at `DEBUG` level, making it very hard to quantify
the scale of the problem or to correlate the networking problems with
external events.
This commit adds the missing `DEBUG`-level logging when transport
connections open and close, and also tracks the total number of
transport connections a node has opened as a measure of the stability of
the underlying network.
Keepalives tell any intermediate devices that the connection remains alive, which helps with overzealous firewalls that are
killing idle connections. Keepalives are enabled by default in Elasticsearch, but use system defaults for their
configuration, which oftentimes are not reasonable (e.g. 7200s for TCP_KEEP_IDLE) in the context of
distributed systems such as Elasticsearch.
This PR sets the socket-level keep_alive options for network.tcp.{keep_idle,keep_interval} to 5 minutes on configurations
that support it (>= Java 11 & (MacOS || Linux)) and where the system defaults are set to something higher than 5
minutes. This provides better out-of-the-box defaults that help keep connections alive, while not interfering with system
defaults or user-specified settings unless those are deemed to be set too high.
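A hedged illustration of the socket options involved (JDK 11+ via jdk.net.ExtendedSocketOptions; the actual wiring in the transport layer differs):

```
import java.io.IOException;
import java.net.Socket;
import java.net.StandardSocketOptions;
import jdk.net.ExtendedSocketOptions;

final class KeepAliveDefaultsSketch {
    // Apply 5-minute keep_idle/keep_interval only when the system default is higher.
    static void apply(Socket socket) throws IOException {
        socket.setOption(StandardSocketOptions.SO_KEEPALIVE, true);
        if (socket.getOption(ExtendedSocketOptions.TCP_KEEPIDLE) > 300) {
            socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 300);      // seconds before the first probe
        }
        if (socket.getOption(ExtendedSocketOptions.TCP_KEEPINTERVAL) > 300) {
            socket.setOption(ExtendedSocketOptions.TCP_KEEPINTERVAL, 300);  // seconds between probes
        }
    }
}
```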
Due to complicated access checks (reads and writes execute in their own access context) on some repositories (GCS, Azure, HDFS), using a hard coded buffer size of 4k for restores was needlessly inefficient.
By the same token, the use of stream copying with the default 8k buffer size for blob writes was inefficient as well.
We also had dedicated, undocumented buffer size settings for HDFS and FS repositories. For these two we would use a 100k buffer by default. We did not have such a setting for e.g. GCS though, which would only use an 8k read buffer which is needlessly small for reading from a raw `URLConnection`.
This commit adds an undocumented setting that sets the default buffer size to `128k` for all repositories. It removes wasteful allocation of such a large buffer for small writes and reads in case of HDFS and FS repositories (i.e. still using the smaller buffer to write metadata) but uses a large buffer for doing restores and uploading segment blobs.
This should speed up Azure and GCS restores and snapshots in a non-trivial way as well as save some memory when reading small blobs on FS and HDFS repositories.
We never used the `IndexSettings` parameter and we only used the
`MappedFieldType` parameter to get the name of the field which we
already know everywhere where we build the `IFD.Builder`. This allows us
to drop a fair bit of ceremony from a couple of tests.
Enables fully concurrent snapshot operations:
* Snapshot create- and delete operations can be started in any order
* Delete operations wait for snapshot finalization to finish, are batched as much as possible to improve efficiency and once enqueued in the cluster state prevent new snapshots from starting on data nodes until executed
* We could be even more concurrent here in a follow-up by interleaving deletes and snapshots on a per-shard level. I decided not to do this for now since it seemed not worth the added complexity yet. Due to batching+deduplicating of deletes the pain of having a delete stuck behind a long-running snapshot seemed manageable (dropped client connections + resulting retries don't cause issues due to deduplication of delete jobs, batching of deletes allows enqueuing more and more deletes even if a snapshot blocks for a long time that will all be executed in essentially constant time (due to bulk snapshot deletion, deleting multiple snapshots is mostly about as fast as deleting a single one))
* Snapshot creation is completely concurrent across shards, but per shard snapshots are linearized for each repository as are snapshot finalizations
See updated JavaDoc and added test cases for more details and illustration on the functionality.
Some notes:
The queuing of snapshot finalizations and deletes and the related locking/synchronization is a little awkward in this version but can be much simplified with some refactoring. The problem is that snapshot finalizations resolve their listeners on the `SNAPSHOT` pool while deletes resolve the listener on the master update thread. With some refactoring both of these could be moved to the master update thread, effectively removing the need for any synchronization around the `SnapshotService` state. I didn't do this refactoring here because it's a fairly large change and not necessary for the functionality but plan to do so in a follow-up.
This change allows for completely removing any trickery around synchronizing deletes and snapshots from SLM and 100% does away with SLM errors from collisions between deletes and snapshots.
Snapshotting a single index in parallel to a long running full backup will execute without having to wait for the long running backup as required by the ILM/SLM use case of moving indices to "snapshot tier". Finalizations are linearized but ordered according to which snapshot saw all of its shards complete first
Many of the parameters we pass into this method were only used to
build the `SnapshotInfo` instance to write.
This change simplifies the signature. Also, it seems less error prone to build
`SnapshotInfo` in `SnapshotsService` instead of relying on the fact that each repository
implementation will build the correct `SnapshotInfo`.
Removing these limits as they cause unnecessarily many objects in the blob stores.
We do not have to worry about BwC of this change since we do not support any 3rd party
implementations of Azure or GCS.
Also, since there is no valid reason to set a maximum chunk size different from the default at this
point, the documentation for the setting (which was incorrect in the case of Azure to begin with)
is removed from the docs.
Closes #56018
This PR introduces two new fields into `RepositoryData` (index-N) to track the blob name of `IndexMetaData` blobs and their content via setting generations and uuids. This is used to deduplicate the `IndexMetaData` blobs (`meta-{uuid}.dat` in the indices folders under `/indices`) so that new metadata for an index is only written to the repository during a snapshot if that same metadata can't be found in another snapshot.
This saves one write per index in the common case of unchanged metadata thus saving cost and making snapshot finalization drastically faster if many indices are being snapshotted at the same time.
The implementation is mostly analogous to that for shard generations in #46250 and piggybacks on the BwC mechanism introduced in that PR (which means this PR needs adjustments if it doesn't go into `7.6`).
Relates to #45736 as it improves the efficiency of snapshotting unchanged indices
Relates to #49800 as it has the potential of loading the index metadata for multiple snapshots of the same index concurrently much more efficiently, speeding up future concurrent snapshot deletes
We don't need to switch to the generic or snapshot pool for loading
cached repository data (i.e. most of the time in normal operation).
This makes `executeConsistentStateUpdate` less heavy if it has to retry
and lowers the chance of having to retry in the first place.
Also, this change allowed simplifying a few other spots in the codebase
where we would fork off to another pool just to load repository data.