Today a bulk shard request appears as follows in the detailed task list:
requests[42], index[my_index]
This change adds the shard index and refresh policy too:
requests[42], index[my_index][2], refresh[IMMEDIATE]
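As a rough sketch of how such a description string might be assembled (the helper below is hypothetical; only the output format follows the example above):

```java
// A minimal sketch: build the task description shown above from the
// request's item count, index name, shard id, and refresh policy.
static String describe(int itemCount, String index, int shardId, String refreshPolicy) {
    return "requests[" + itemCount + "], index[" + index + "][" + shardId
        + "], refresh[" + refreshPolicy + "]";
}
```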
If a shard was closed while its engine was being reset,
IndexEventListener.afterIndexShardClosed would be called while there was
still an active IndexWriter on the shard. In integration tests this led
to an exception during the check-index call from
MockFSIndexStore.Listener. Fixed.
Relates to #38561
Today we allow adding entries from a file or from a string, yet we
internally maintain this distinction, so adding a value from a file for
a setting that expects a string, or a value from a string for a setting
that expects a file, fails. This is painful for operators, who need to
know the distinction for every setting. Yet we do not need to maintain
this distinction internally, as the values are bytes after all. This
commit removes that distinction and includes logic to upgrade legacy
keystores.
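As a rough illustration of the unified storage (a minimal sketch with a plain Map, not the actual KeyStoreWrapper internals), both entry points can reduce to the same byte-backed entry:

```java
// A minimal sketch: string and file entries both end up as byte[].
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

class ByteBackedKeystore {
    private final Map<String, byte[]> entries = new HashMap<>();

    void setString(String setting, char[] value) {
        // strings are stored as their UTF-8 bytes
        entries.put(setting, new String(value).getBytes(StandardCharsets.UTF_8));
    }

    void setFile(String setting, byte[] content) {
        // file contents are stored verbatim
        entries.put(setting, content);
    }
}
```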
Today we choose to initialize max_seq_no_of_updates (MSU) on primaries
only so we can deal with a situation where the primary is on an old node
(before 6.5), which does not have MSU, while replicas are on new nodes
(6.5+). However, this strategy is quite complex and can lead to bugs
(for example #40249) since we have to assign a correct value (not too
low) to MSU in all possible situations (before recovering from translog,
restoring history on promotion, and handing off relocation).
Fortunately, we don't have to deal with this BWC in 7.0+ since all nodes
in the cluster have MSU. This change simplifies the initialization of
MSU by always assigning it a correct value in the constructor of Engine,
regardless of whether the shard is a replica or a primary.
Relates #33842
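A minimal sketch of the simplified Engine initialization described above (the field name mirrors the real `maxSeqNoOfUpdatesOrDeletes`; everything else is elided):

```java
// A minimal sketch, not the actual Engine constructor: MSU is seeded
// eagerly for primaries and replicas alike, from the engine's starting
// max sequence number, so it can never be "too low" later.
import java.util.concurrent.atomic.AtomicLong;

abstract class SketchEngine {
    private final AtomicLong maxSeqNoOfUpdatesOrDeletes;

    SketchEngine(long startingMaxSeqNo) {
        // assigned unconditionally, regardless of primary or replica
        this.maxSeqNoOfUpdatesOrDeletes = new AtomicLong(startingMaxSeqNo);
    }

    long getMaxSeqNoOfUpdatesOrDeletes() {
        return maxSeqNoOfUpdatesOrDeletes.get();
    }
}
```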
IntervalsSources can throw IllegalArgumentExceptions if they would produce
too many disjunctions. To mitigate this when building random
sources, we limit the depth of the randomly generated source to four
nested sources.
Fixes #41402
* Add Repository Consistency Assertion to SnapshotResiliencyTests (#40857)
* Add Repository Consistency Assertion to SnapshotResiliencyTests
* Add some quick validation on not leaving behind any dangling metadata or dangling indices to the snapshot resiliency tests
* Added todo about expanding this assertion further
* Fix SnapshotResiliencyTest Repo Consistency Check (#41332)
* Fix SnapshotResiliencyTest Repo Consistency Check
* The Lucene mockFS randomly creates an empty `extra0` file, which broke these tests: our assertions rely on the existence of an index folder, and index deletion does not go through when extra files remain in an index folder
* Fixed by removing the `extra0` file and resulting empty directory trees before asserting repo consistency
* Closes #41326
* Reenable SnapshotResiliency Test (#41437)
This was fixed in https://github.com/elastic/elasticsearch/pull/41332
but I forgot to reenable the test.
* Fix compilation on Java 8
A stuck peer recovery in #40913 reveals that we indefinitely retry on
new cluster states if indexing translog operations hits a mapper
exception. We should not wait and retry if the mapping on the target is
as recent as the mapping that the primary used to index the replaying
operations.
Relates #40913
This commit refactors the GeoHashUtils class into a new Geohash utility class located in the ES geo library. The intent is not only to better control which geo methods are whitelisted for Painless scripting, but also to clean up the geo utility API in general.
Motivated by slow snapshot deletes reported in e.g. #39656 and the fact that these likely are a contributing factor to repositories accumulating stale files over time when deletes fail to finish in time and are interrupted before they can complete.
* Makes snapshot deletion async and parallelizes some steps of the delete process that can safely run concurrently via the snapshot thread pool (a sketch of the off-loading follows this list)
* I did not take the biggest potential speedup step here and parallelize the shard file deletion because that's probably better handled by moving to bulk deletes where possible (and can still be parallelized via the snapshot pool where it isn't). Also, I wanted to keep the size of the PR manageable.
* See https://github.com/elastic/elasticsearch/pull/39656#issuecomment-470492106
* Also, as a side effect this gives the `SnapshotResiliencyTests` a little more coverage for master failover scenarios (since parallel access to a blob store repository during deletes is now possible since a delete isn't a single task anymore).
* By adding a `ThreadPool` reference to the repository this also lays the groundwork to parallelizing shard snapshot uploads to improve the situation reported in #39657
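A minimal sketch of the off-loading described above, assuming the repository holds a `ThreadPool` reference (the helper names are hypothetical; `ThreadPool.Names.SNAPSHOT` is the existing dedicated pool):

```java
// A minimal sketch: fork the delete off the calling thread onto the
// snapshot pool so long-running deletes cannot block it.
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.threadpool.ThreadPool;

class SnapshotDeleter {
    private final ThreadPool threadPool;

    SnapshotDeleter(ThreadPool threadPool) {
        this.threadPool = threadPool;
    }

    void deleteAsync(String snapshotName, ActionListener<Void> listener) {
        threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(() -> {
            try {
                deleteBlobs(snapshotName); // hypothetical helper doing the actual work
                listener.onResponse(null);
            } catch (Exception e) {
                listener.onFailure(e);
            }
        });
    }

    private void deleteBlobs(String snapshotName) { /* elided */ }
}
```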
* Thanks to #39793 dynamic mapping updates don't contain blocking operations anymore so we don't have to manually put the mapping in this test and can keep it a little simpler
* Add Restore Operation to SnapshotResiliencyTests
* Expand the successful snapshot test case to also include restoring the snapshot
* Add indexing of documents as well to be able to meaningfully verify the restore
* This is part of the larger effort to test eventually consistent blob stores in #39504
* The check doesn't add much, if anything, in practice, since the S3 repository is eventually consistent and we only log the non-existence of a blob anyway
* We don't do the check on writes for this very reason and documented it as such
* Removing the check saves one API call per single delete, speeding up the deletion process and lowering costs
Today the `_field_caps` API returns the list of indices where a field
is present only if this field has different types within the requested indices.
However if the request is an index pattern (or an alias, or both...) there
is no way to infer the indices if the response contains only fields that have
the same type in all indices. This commit changes the response to always return
the list of indices. It also adds a way to retrieve unmapped fields
in a per-field section called `unmapped`. This section is created for each field
that is present in some indices but not all when the parameter `include_unmapped` is set to
true in the request (it defaults to false).
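A quick illustration of the new parameter, sketched with the low-level REST client (the index pattern and field name are made up):

```java
// A minimal sketch, assuming a configured low-level RestClient: ask for
// field capabilities and request per-field unmapped sections.
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class FieldCapsExample {
    static Response fieldCaps(RestClient client) throws Exception {
        Request request = new Request("GET", "/logs-*/_field_caps");
        request.addParameter("fields", "user.id");        // hypothetical field name
        request.addParameter("include_unmapped", "true"); // new in this change
        return client.performRequest(request);
    }
}
```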
* Introduce Delegating ActionListener Wrappers
* Dry up use cases of ActionListener that simply pass through the response or exception to another listener (the pattern is sketched below)
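A minimal sketch of the delegating pattern being dried up (the helper name is hypothetical; `ActionListener` is the real interface):

```java
// A minimal sketch: map the response before handing it to the wrapped
// listener, and forward failures untouched.
import java.util.function.Function;
import org.elasticsearch.action.ActionListener;

final class Delegating {
    static <T, R> ActionListener<T> map(ActionListener<R> delegate, Function<T, R> fn) {
        return new ActionListener<T>() {
            @Override
            public void onResponse(T response) {
                delegate.onResponse(fn.apply(response));
            }

            @Override
            public void onFailure(Exception e) {
                delegate.onFailure(e); // pass the exception straight through
            }
        };
    }
}
```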
* Name Snapshot Data Blobs by UUID
* There is no functional reason why we need incremental naming for these files, but:
* As explained in #38941 it is a possible source of repository corruption
* It wastes API calls for the list operation
* It is just needless complication
* Since we store the exact names of the data blobs in all the metadata anyway, we can make this change without any BwC considerations (the new naming is sketched below)
* Even in the worst-case scenario of a downgrade, the functionality would continue working, since the incremental names wouldn't conflict with the UUIDs, and the number parsing for finding the next incremental name suppresses the exception when encountering a non-numeric value after the double-underscore prefix
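A minimal sketch of the UUID-based naming (the `__` data blob prefix is the existing convention; `UUIDs.randomBase64UUID()` is from the Elasticsearch common library):

```java
// A minimal sketch: pick a collision-free blob name up front instead of
// listing the container to find the next free incremental number.
import org.elasticsearch.common.UUIDs;

class BlobNames {
    static String newDataBlobName() {
        return "__" + UUIDs.randomBase64UUID();
    }
}
```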
In order to support empty action metadata in the first msearch item,
we need to remove support for prepending the msearch request body with an
empty line, since that support prevents us from parsing an empty line as
the action metadata of the first search item.
Relates to #41011
The test exercises the java.time API and fails when it hits daylight saving time changes.
Java time has the right implementation and we don't need to test this.
More details on how the test was affected by the DST change are in this comment.
Closes #39617
Backport of #41493
* Due to #40866 one of the two parallel bulk requests can randomly be
rejected outright when the write queue is already full; we can catch
this situation and ignore it since we still get the rejection for
the dynamic mapping update for the other request, and it is somewhat rare
to run into this anyway
* Closes #41363
Adds some validation to prevent duplicate source names from being
used in the composite agg.
Also refactored to use a ConstructingObjectParser and removed the
private ctor and setter for sources, making sources a mandatory
constructor argument (sketched below).
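A minimal sketch of the parser shape (the source types and builder arguments are illustrative, not the exact composite agg code):

```java
// A minimal sketch: sources arrive as constructor argument [0], so the
// private ctor/setter pair is no longer needed and sources are mandatory.
import java.util.HashSet;
import java.util.List;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;

class CompositeSketch {
    // field declarations (PARSER.declareObjectArray(...) etc.) elided
    @SuppressWarnings("unchecked")
    static final ConstructingObjectParser<CompositeSketch, String> PARSER =
        new ConstructingObjectParser<>("composite", false,
            (args, name) -> new CompositeSketch(name, (List<String>) args[0]));

    CompositeSketch(String name, List<String> sources) {
        // the new validation: duplicate source names are rejected up front
        if (new HashSet<>(sources).size() != sources.size()) {
            throw new IllegalArgumentException("duplicate source names in [" + name + "]");
        }
    }
}
```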
* The problem here is that if we run into a corrupted index-N file, instead of generating a new index-(N+1) file, we set the newest index generation to -1 and thus try to create `index-0`
* If `index-0` is corrupt, this prevents us from ever creating a new snapshot using the broken shard, because we are unable to create `index-0` since it already exists
* Fixed by still using the index generation for naming the next index file, even if the file at that generation was broken
* Added a test that makes sure restoring as well as snapshotting on top of the broken shard index file work as expected
* Closes #41304
* The test fails for the retry-backoff-enabled case because the retry handler in the bulk processor hasn't been adjusted to account for #40866, which now might lead to an outright rejection of the request instead of its items individually
* Fixed by adding retry functionality to the top-level request as well
* Also fixed the duplicate test for the HLRC that wasn't yet handling the non-backoff case the same way the non-client IT did
* Closes #41324
Today we assert that there are no operations in flight in this test. However we
will sometimes be in a situation where the operations are blocked, and since #41271
we distinguish these cases, causing the assertion to fail. This commit
addresses this by allowing operations to be blocked sometimes after a primary
promotion.
Fixes #41333.
The `composite` aggregation maps unknown fields as numerics, which means that
any `after` value that is set on a query with an unmapped field on some indices
will fail if the provided value is not numeric. This commit changes the default
value source to use keyword instead, in order to be able to parse any type of
`after` value.
The `_id` field uses a binary encoding to index terms that is not compatible with
the utf8 automaton that the unified highlighter creates to reanalyze the input.
For this reason this commit ignores terms that target the `_id` field when
`require_field_match` is set to false.
Closes #37525
With the removal of the `_all` field the `mlt` query cannot infer a field name
to use to analyze the provided (un)like text if the `fields` parameter is not
explicitly set in the query and the `index.query.default_field` is not changed
in the index settings (by default it is set to `*`). For this reason the like text
is ignored and queries are only built from the provided document ids.
This change fixes this bug by throwing an error if the `fields` option is not set
and `index.query.default_field` is equal to `*` (a sketch of the guard follows).
The error is thrown only if like or unlike texts are provided in the query.
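A minimal sketch of the guard, with hypothetical accessor names (the exact message and exception type in the query builder may differ):

```java
// A minimal sketch: refuse to silently drop the like/unlike text when no
// field can be inferred for analysis.
import java.util.List;

class MoreLikeThisGuard {
    static void validate(List<String> fields, String defaultField,
                         String[] likeTexts, String[] unlikeTexts) {
        boolean hasText = likeTexts.length > 0 || unlikeTexts.length > 0;
        // only complain when there is free text that would be dropped
        if (hasText && fields.isEmpty() && "*".equals(defaultField)) {
            throw new IllegalArgumentException(
                "[more_like_this] cannot infer a field to analyze the text: "
                    + "set [fields] or change [index.query.default_field]");
        }
    }
}
```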
This fixes an issue where every N seconds a slow search request is triggered
because the searcher access time is not set unless the shard is idle. This change
moves to a more proactive approach, marking the searcher as accessed every time it is used.
* Bulk requests can be thousands of items large and can take more than O(10ms) to handle, so we should not handle them on the transport thread pool and block the select loops (a sketch of the fork follows this list)
* Relates #39128
* Relates #39658
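A minimal sketch of forking off the transport thread, assuming a handler with a `ThreadPool` reference (`ThreadPool.Names.WRITE` exists, but the actual change may dispatch differently):

```java
// A minimal sketch: never handle a large bulk request on the network
// (selector) thread; hand it to a worker pool instead.
import org.elasticsearch.threadpool.ThreadPool;

class BulkHandlerSketch {
    private final ThreadPool threadPool;

    BulkHandlerSketch(ThreadPool threadPool) {
        this.threadPool = threadPool;
    }

    void handleIncoming(Runnable handleBulk) {
        // forking keeps the select loop free to service other connections
        threadPool.executor(ThreadPool.Names.WRITE).execute(handleBulk);
    }
}
```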
Today we do not distinguish "no operations in flight" from "operations are
blocked", since both return `0` from `IndexShard#getActiveOperationsCount()`.
We therefore cannot assert that every `TransportReplicationAction` performs its
actions under permit(s). This commit fixes this by returning
`IndexShard#OPERATIONS_BLOCKED` if operations are blocked, allowing these two
cases to be distinguished.
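A minimal sketch of the distinguishing sentinel (the constant's value here is an assumption):

```java
// A minimal sketch: a negative sentinel distinguishes "blocked" from
// "no operations in flight", which both used to look like 0.
class ShardPermitsSketch {
    static final int OPERATIONS_BLOCKED = -1; // sentinel, assumed value

    private boolean blocked;
    private int activeOperations;

    int getActiveOperationsCount() {
        return blocked ? OPERATIONS_BLOCKED : activeOperations;
    }
}
```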