The test testWithRandomException was not updated to reflect the latest
translog policy: the method setTranslogGenerationOfLastCommit should be
called first whenever setMinTranslogGenerationForRecovery is called.
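A minimal sketch of the required ordering, assuming a deletion-policy object exposing the two setters named in this commit (the generation values are illustrative):

```java
// Invariant: the translog generation of the last commit is always at least
// the minimum generation required for recovery, so set it first.
deletionPolicy.setTranslogGenerationOfLastCommit(lastCommitGen);
deletionPolicy.setMinTranslogGenerationForRecovery(minRecoveryGen); // minRecoveryGen <= lastCommitGen
```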
Relates #27606
We need to keep index commits and translog operations up to the current
global checkpoint so that we can throw away unsafe operations and
increase the chance of an operation-based recovery. This is achieved by
a new index deletion policy.
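A minimal sketch of such a policy against Lucene's IndexDeletionPolicy API; the "max_seq_no" user-data key and the checkpoint supplier are assumptions for illustration, not the actual Elasticsearch implementation:

```java
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

import java.io.IOException;
import java.util.List;
import java.util.function.LongSupplier;

final class KeepUpToGlobalCheckpointPolicy extends IndexDeletionPolicy {
    private final LongSupplier globalCheckpoint; // assumed to track the current global checkpoint

    KeepUpToGlobalCheckpointPolicy(LongSupplier globalCheckpoint) {
        this.globalCheckpoint = globalCheckpoint;
    }

    @Override
    public void onInit(List<? extends IndexCommit> commits) throws IOException {
        onCommit(commits);
    }

    @Override
    public void onCommit(List<? extends IndexCommit> commits) throws IOException {
        // Find the newest commit whose operations are all covered by the global
        // checkpoint (the "safe" commit) and delete everything older than it.
        final long checkpoint = globalCheckpoint.getAsLong();
        int safeIndex = 0;
        for (int i = commits.size() - 1; i >= 0; i--) {
            final long maxSeqNo = Long.parseLong(commits.get(i).getUserData().get("max_seq_no"));
            if (maxSeqNo <= checkpoint) {
                safeIndex = i;
                break;
            }
        }
        for (int i = 0; i < safeIndex; i++) {
            commits.get(i).delete(); // commits older than the safe commit are no longer needed
        }
    }
}
```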
Relates #10708
If you assert that a pattern of files exists but it matches more than
one file, the "assert this file exists" code fails with a misleading
error message. This change tests whether the pattern resolved to
multiple files and prints a better error message if it did.
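A minimal sketch of the improved check in plain Java; the real assertion lives in the build's test utilities, so the names here are illustrative:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

final class FileAssertions {
    static Path assertSingleMatch(Path dir, String glob) throws IOException {
        List<Path> matches = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, glob)) {
            stream.forEach(matches::add);
        }
        if (matches.size() != 1) {
            // Say exactly what went wrong instead of a generic "file does not exist".
            throw new AssertionError("expected [" + glob + "] to match exactly one file but it matched " + matches);
        }
        return matches.get(0);
    }
}
```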
When running the release tests, we set build.snapshot to false and this
causes all version numbers to drop the "-SNAPSHOT" suffix. This is true
even for the tips of the branches (e.g., currently 5.6.6 on the 5.6
branch). Yet, if the version logic does not also account for
build.snapshot being false, we would still be trying to find artifacts
with "-SNAPSHOT" appended, which would never have been built. To fix
this, we have to push build.snapshot into the version logic.
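A minimal sketch of what "pushing build.snapshot into the version logic" amounts to; this method is illustrative, not the actual build code:

```java
final class VersionLogic {
    // The artifact version must reflect build.snapshot, e.g. "5.6.6" vs "5.6.6-SNAPSHOT".
    static String artifactVersion(String version, boolean buildSnapshot) {
        return buildSnapshot ? version + "-SNAPSHOT" : version;
    }
}
```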
Relates #27778
This commit reorganizes some of the content in the configuring
Elasticsearch section of the docs. The changes are:
- move JVM options out of system configuration into configuring
Elasticsearch
- move JVM options to its own page of the docs
- move configuring the heap to important Elasticsearch settings
- move configuring the heap to its own page of the docs
- move all important settings to individual pages in the docs
- remove bootstrap.memory_lock from important settings; this is covered
  in the swap section of system configuration
Relates #27755
This commit changes the RestoreService so that it now fails the snapshot
restore if one of the shards to restore has failed to be allocated. It also adds
a new RestoreInProgressAllocationDecider that forbids such shards from being
allocated again. This way, when a restore is impossible or has failed too many
times, the user is forced to take a manual action (like deleting the index
whose shards failed) in order to try to restore it again.
This behaviour has been implemented because when the allocation of a
shard has been retried too many times, the MaxRetryDecider is engaged
to prevent any future allocation of the failed shard. If this happens while
restoring a snapshot, the restore hung and was never completed because
it stayed around waiting for the shards to be assigned (which would never
happen). It also blocked future attempts to restore the snapshot. With this
commit, the restore no longer hangs and is marked as failed, leaving failed
shards around for investigation.
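A hedged sketch of the decider's core rule; the class and decision API follow Elasticsearch's AllocationDecider, while the restoreFailed lookup (the real code consults RestoreInProgress in the cluster state) is stubbed out:

```java
import org.elasticsearch.cluster.routing.RecoverySource;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
import org.elasticsearch.common.settings.Settings;

public class RestoreInProgressAllocationDecider extends AllocationDecider {
    private static final String NAME = "restore_in_progress";

    public RestoreInProgressAllocationDecider(Settings settings) {
        super(settings); // 6.x-era AllocationDecider constructor
    }

    @Override
    public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocation) {
        final RecoverySource source = shardRouting.recoverySource();
        if (source == null || source.getType() != RecoverySource.Type.SNAPSHOT) {
            return allocation.decision(Decision.YES, NAME, "shard is not being restored from a snapshot");
        }
        if (restoreFailed(shardRouting, allocation)) {
            return allocation.decision(Decision.NO, NAME,
                    "shard restore has failed; delete the index to retry the restore");
        }
        return allocation.decision(Decision.YES, NAME, "restore is in progress");
    }

    // Stub: the real implementation inspects the RestoreInProgress custom in the cluster state.
    private boolean restoreFailed(ShardRouting shardRouting, RoutingAllocation allocation) {
        return false;
    }
}
```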
This is the second part of the #26865 issue.
Closes #26865
This pull request changes the S3BlobContainer.blobExists() method implementation
to use the AmazonS3.doesObjectExist() method instead of
AmazonS3.getObjectMetadata(). The AmazonS3 implementation takes care of
catching any thrown AmazonS3Exception, comparing its response code with 404
and returning false (the object does not exist), or letting the exception
propagate otherwise.
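For context, a sketch of the equivalent logic that AmazonS3.doesObjectExist() encapsulates (AWS SDK for Java v1); the blobExists() wrappers shown here are simplified assumptions:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AmazonS3Exception;

final class S3BlobExistsSketch {
    // What the old implementation did by hand:
    static boolean blobExistsOld(AmazonS3 client, String bucket, String key) {
        try {
            client.getObjectMetadata(bucket, key);
            return true;
        } catch (AmazonS3Exception e) {
            if (e.getStatusCode() == 404) {
                return false; // object does not exist
            }
            throw e; // anything else propagates
        }
    }

    // What the new implementation delegates to:
    static boolean blobExistsNew(AmazonS3 client, String bucket, String key) {
        return client.doesObjectExist(bucket, key);
    }
}
```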
We have tests that manually unpackage the RPM and Debian package
distributions and start a cluster manually (not from the service) and
run a basic suite of integration tests against them. This is problematic
because it is not how the packages are intended to be used (instead,
they are intended to be installed using the package installation tools,
and started as services) and so violates assumptions that we make about
directory paths. This commit removes these integration tests, instead
relying on the packaging tests to ensure the packages are not
broken. Additionally, we add a sanity check that the package
distributions can be unpackaged. Finally, with this change we can remove
some leniency from elasticsearch-env when checking for the existence of
the environment file; that leniency existed solely for these integration
tests.
Relates #27725
Previously we would unidle a primary shard during recovery in case the
recovery target would miss a background global checkpoint sync. However,
the background global checkpoint syncs are no longer tied to the primary
shard falling idle and so this unidling is no longer needed.
Relates #27757
This removes the special casing for documents without a sequence ID.
This code is complex enough with sequence IDs that we should clean
things up when we can, and we don't support 5.x indexing in 7.x anymore.
The performance of this method is abysmal; it leads to the
balanced/unbalanced cluster tests taking twenty seconds! The reason for
the performance issue is a quadruple-nested for loop. The inner
double-nested loop is partitioning shards by shard ID in disguise, so we
simply extract this into computing a partition of shards by shard ID
once. Now the balanced/unbalanced cluster tests do not take twenty
seconds to run.
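A minimal sketch of the extracted partition, with Elasticsearch's ShardRouting and ShardId types; the enclosing class is illustrative:

```java
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.index.shard.ShardId;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class ShardPartition {
    // Computed once, instead of re-scanning all shards inside a nested loop:
    static Map<ShardId, List<ShardRouting>> partitionByShardId(List<ShardRouting> shards) {
        return shards.stream().collect(Collectors.groupingBy(ShardRouting::shardId));
    }
    // Each former inner double-nested scan becomes a map lookup:
    //   List<ShardRouting> copies = partition.get(shard.shardId());
}
```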
Relates #27747
This was already changed in 6.x as part of the backport of the recently added open and create index APIs. wait_for_active_shards can be a number but also "all"; with this commit we verify that providing "all" works too.
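A hedged illustration of the accepted forms; the actual parsing lives in Elasticsearch's ActiveShardCount, so this simplified method is an assumption:

```java
final class WaitForActiveShards {
    // "all" waits for every copy (primary plus replicas); a number waits for that many copies.
    static int resolve(String value, int numberOfReplicas) {
        if ("all".equals(value)) {
            return numberOfReplicas + 1;
        }
        return Integer.parseInt(value);
    }
}
```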
#27409 deprecated the incorrectly-spelled `levenstein` in favour of `levenshtein`.
#27526 deprecated the inconsistent `jarowinkler` in favour of `jaro_winkler`.
These changes were merged into 6.2, and this change removes them entirely in 7.0.
* CustomFieldQuery: removed a redundant type check that was
already done higher up in the same if/else chain.
* PrioritizedEsThreadPoolExecutor: removed a check that was
simply a duplicate of an earlier one and would never have been true.
The current code contains an instanceof check and a comment that this should
eventually be changed to something else. typeName() returns a unique name for
the field type in question (geo_shape), so it can be used instead.
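A hedged before/after illustration; GeoShapeFieldMapper.CONTENT_TYPE is the unique name ("geo_shape") that typeName() returns for this field type:

```java
// before: brittle instanceof check
if (fieldType instanceof GeoShapeFieldMapper.GeoShapeFieldType) {
    // geo_shape specific handling
}

// after: compare the unique type name instead
if (GeoShapeFieldMapper.CONTENT_TYPE.equals(fieldType.typeName())) {
    // geo_shape specific handling
}
```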
This commit addresses slowness in the test that a rejected execution
contains the node name. The slowness came from setting the count on a
countdown latch too high (two in the case of the search thread pool)
where there would never be a second countdown on the latch. This means
that when the test node is shutting down, closing the node would have
to wait a full ten seconds before forcefully terminating the thread
pool. This commit fixes the issue so that the node can close
immediately, shaving ten seconds off the run time of the test.
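A minimal sketch of the fix's shape in plain Java; sizing the latch to the countdowns that can actually occur means await() returns promptly:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

final class LatchSizing {
    static void awaitSingleCountdown(ExecutorService executor) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1); // was sized too high (e.g. 2)
        executor.execute(() -> {
            try {
                // work that triggers the rejected execution under test
            } finally {
                latch.countDown(); // the only countdown that will ever happen
            }
        });
        latch.await(); // nothing is left waiting on a countdown that never comes
    }
}
```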
Relates #27663
This commit fixes the test of an index with an unknown setting. The
problem here is that we were manipulating the index state on disk, but a
cluster state update could arrive between us manipulating the index
state on disk and us restarting the node, leading to the index state
that we just intentionally broke being fixed. As such, after restart,
the index state would not be in the state that we expected it to be in
and the test would fail. To address this, we hook into the restart and
break the index state immediately before the node is started again.
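A hedged sketch using the test-cluster restart hook of the era; the breakIndexStateOnDisk() helper is hypothetical:

```java
internalCluster().restartNode(nodeName, new InternalTestCluster.RestartCallback() {
    @Override
    public Settings onNodeStopped(String nodeName) throws Exception {
        // The node is down, so no cluster state update can undo this:
        breakIndexStateOnDisk(); // hypothetical helper that corrupts the on-disk index state
        return Settings.EMPTY;
    }
});
```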
Relates #26995
This commit attempts to continue unifying the logic between different
transport implementations. As transports call a `TcpTransport` callback
when a new channel is accepted, there is no need to track accepted
channels internally. Instead there is a set of accepted channels in
`TcpTransport`. This set is used for metrics and shutting down channels.
Today we are lenient and we open an index if it has broken
settings. This can happen if a user installs a plugin that registers an
index setting, creates an index with that setting, stops their node,
removes the plugin, and then restarts the node. In this case, the index
will have a setting that we do not recognize yet we open the index
anyway. This leniency is dangerous so this commit removes it. Note that
we still are lenient on upgrades and we should really reconsider this in
a follow-up.
Relates #26995
Setting a timeout here speeds the test up significantly since we do not
need to wait up to the default of 30 seconds for shards to start; we only
need an ACK that the index was opened.
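A hedged sketch with the Java client of the era; setTimeout bounds how long we wait, while the acknowledgement still confirms the open:

```java
client.admin().indices().prepareOpen("test")
        .setTimeout(TimeValue.timeValueMillis(100)) // do not wait the default 30s for shards to start
        .get(); // the ACK that the index was opened is all we need
```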
This is related to #27563. This commit modifies the
InboundChannelBuffer to support releasable byte pages. These byte
pages are provided by the PageCacheRecycler. The PageCacheRecycler
must be passed to the Transport with this change.
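A minimal sketch of the idea in plain Java; the Page type stands in for the recycler-backed pages, and none of these names are the actual Elasticsearch classes:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// A page of pooled memory; close() returns it to its recycler.
interface Page extends AutoCloseable {
    ByteBuffer byteBuffer();
    @Override
    void close();
}

// A buffer that grows by pulling pages from a supplier (e.g. one backed by a
// page recycler) and releases every page when it is closed.
final class ReleasablePagedBuffer implements AutoCloseable {
    private final Supplier<Page> pageSupplier;
    private final Deque<Page> pages = new ArrayDeque<>();

    ReleasablePagedBuffer(Supplier<Page> pageSupplier) {
        this.pageSupplier = pageSupplier;
    }

    void expand() {
        pages.addLast(pageSupplier.get()); // acquire one more recycled page
    }

    @Override
    public void close() {
        pages.forEach(Page::close); // return all pages to the recycler
        pages.clear();
    }
}
```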
Some of the Vagrant tests were failing due to ES temp directories
left over from previous uses of the same VM confusing subsequent
tests into thinking there were multiple ES installs present.
This change wipes all ES temp directories when the test VMs are
brought up.
We have some methods Strings#splitStringByCommaToArray and
Strings#splitStringByCommaToSet. It is not obvious that the former
leaves whitespace and the latter trims it. We also have
Strings#tokenizeToStringArray which tokenizes a string to an array, and
trims whitespace. It seems the right thing to do here is to rename
Strings#splitStringByCommaToSet to Strings#tokenizeByCommaToSet so that
its name is aligned with another method that tokenizes by a delimiter
and trims whitespace. We also clean up the code here, removing an
unneeded split-by-delimiter-to-set method.
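An illustration of the difference (the inputs are hypothetical; the outputs reflect the described trimming behavior):

```java
String[] raw = Strings.splitStringByCommaToArray("a, b ,c");   // ["a", " b ", "c"] -- whitespace kept
Set<String> tokens = Strings.tokenizeByCommaToSet("a, b ,c");  // ["a", "b", "c"]   -- whitespace trimmed
```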
Relates #27715
We currently do not have any server-side read timeouts implemented in
Elasticsearch. This commit adds a read timeout setting that defaults to
30 seconds. If after 30 seconds a read has not occurred, the channel
will be closed. A timeout value of 0 disables the timeout.
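A minimal sketch of one way to implement such a timeout in plain Java; the TimedChannel interface is hypothetical, not the actual transport code:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface TimedChannel {
    long lastReadNanos(); // hypothetical: System.nanoTime() of the most recent read
    void close();
}

final class ReadTimeoutScheduler {
    // Re-arms itself until the channel has been idle for the full window.
    static void schedule(ScheduledExecutorService scheduler, TimedChannel channel, long timeoutMillis) {
        if (timeoutMillis == 0) {
            return; // a value of 0 disables the read timeout
        }
        scheduler.schedule(() -> {
            long idleMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - channel.lastReadNanos());
            if (idleMillis >= timeoutMillis) {
                channel.close(); // no read within the window: close the channel
            } else {
                schedule(scheduler, channel, timeoutMillis - idleMillis); // check again later
            }
        }, timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```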
The problem here is that splitting was using a method that intentionally
trims whitespace (the method is really meant to be used for splitting
parameters where whitespace should be trimmed like list
settings). However, for routing values whitespace should not be trimmed
because we allow routing with leading and trailing spaces. This commit
switches the parsing of these routing values to a method that does not
trim whitespace.
Relates #27712
* Improved the paragraph describing how to run unit tests from IntelliJ.
* Added information about how to run a local copy of Elasticsearch from source.
This is a follow up to #27695. This commit adds a test checking that
across multiple writes using multiple buffers, a write operation
properly keeps track of which buffers still need to be written.
It's possible that a merge may be ongoing when we check the breaker and segment
stats' memory usage, which causes the test to fail. Instead, we should wait for
merging to complete.
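A hedged sketch of waiting for in-flight merges using the ES test framework shapes of the era (assertBusy retries until the assertion holds):

```java
assertBusy(() -> {
    MergeStats merges = client().admin().indices().prepareStats("test")
            .setMerge(true).get().getPrimaries().getMerge();
    assertEquals(0, merges.getCurrent()); // no merge is currently running
});
```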
Resolves #27651