We aim to always have a safe index commit once recovery is done.
This invariant does not hold if the translog is manually truncated by
users, because the truncate translog CLI resets the global checkpoint
to unassigned. This commit assigns the global checkpoint to the
max_seqno of the last commit when truncating the translog. This is
only safe because the truncate translog command generates a new
history UUID for that shard. With a new history UUID, sequence-based
recovery between that shard and other old shards is disabled.
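A rough sketch of the idea (the helper names below are illustrative, not the actual TruncateTranslogCommand code):

```java
// Sketch: read max_seq_no from the last commit's user data and use it as the
// global checkpoint of the new, empty translog, instead of leaving it
// unassigned. "max_seq_no" is the key Elasticsearch stores in commit user data.
Map<String, String> userData = lastCommit.getUserData(); // lastCommit: the shard's last Lucene commit
long globalCheckpoint = Long.parseLong(userData.get("max_seq_no"));
createEmptyTranslog(translogPath, globalCheckpoint);     // hypothetical helper
```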
Relates #28181
Releasable locks hold accounting on who holds the lock when assertions
are enabled. However, the underlying lock can be re-entrant, yet we mark
the lock as not held by the current thread as soon as the releasable is
closed. For a re-entrant lock this is not right because the thread could
have entered the lock multiple times. Instead, we have to count how many
times the thread has entered the lock and only mark the lock as not held
by the current thread when the counter reaches zero.
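A self-contained sketch of the counting approach (class and method names are illustrative, not the actual ReleasableLock):

```java
import java.util.concurrent.locks.ReentrantLock;

public class CountingHeldLock {
    private final ReentrantLock lock = new ReentrantLock();
    // per-thread entry count; 0 means the thread does not hold the lock
    private final ThreadLocal<Integer> holdCount = ThreadLocal.withInitial(() -> 0);

    public void acquire() {
        lock.lock();
        holdCount.set(holdCount.get() + 1);
    }

    public void release() {
        holdCount.set(holdCount.get() - 1);
        lock.unlock();
    }

    // backs assertions such as "assert lock.isHeldByCurrentThread()"
    public boolean isHeldByCurrentThread() {
        return holdCount.get() > 0;
    }
}
```

With this accounting, releasing an inner (re-entrant) acquisition leaves the flag correct because the counter stays positive until the outermost release.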
Relates #28202
Currently we keep a 5.x index commit as a safe commit until we have a
6.x safe commit. During that time, if peer recovery happens, a primary
will send a 5.x commit in a file-based sync and the recovery will fail
because the snapshotted commit does not have sequence number tags.
This commit updates the combined deletion policy to delete legacy
commits if there are 6.x commits.
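A hedged sketch of the decision (Commit and hasSequenceNumbers() are stand-ins for Lucene's IndexCommit and a check of its user data):

```java
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

class CombinedDeletionPolicySketch {
    interface Commit {
        boolean hasSequenceNumbers(); // stand-in: does the commit carry seq# tags?
    }

    static List<Commit> legacyCommitsToDelete(List<Commit> commits) {
        boolean hasSeqNoCommit = commits.stream().anyMatch(Commit::hasSequenceNumbers);
        if (hasSeqNoCommit == false) {
            // no 6.x commit yet: keep the 5.x commit as the fallback safe commit
            return Collections.emptyList();
        }
        // once a 6.x commit exists, legacy commits without seq# tags can go
        return commits.stream()
                .filter(c -> c.hasSequenceNumbers() == false)
                .collect(Collectors.toList());
    }
}
```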
Relates #27606
Relates #28038
KeyedLockTests calls RandomizedTest.scaledRandomIntBetween from multiple threads, which uses RandomizedTest.isNightly(),
a method that is not safe for concurrent use (see the implementation of GroupEvaluator.isGroupEnabled).
Today a primary shard transfers the most recent commit point to a
replica shard in a file-based recovery. However, the most recent commit
may not be a "safe" commit; this leaves a replica shard without a
safe commit point until it can retain a safe commit by itself.
This commit collapses the snapshot deletion policy into the combined
deletion policy and modifies the peer recovery source to send a safe
commit.
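The notion of a "safe" commit can be sketched as follows (Commit and maxSeqNo() are stand-ins for the real IndexCommit user data):

```java
import java.util.List;

class SafeCommitSketch {
    interface Commit {
        long maxSeqNo(); // stand-in for the max_seq_no tag in the commit's user data
    }

    // The newest commit whose operations are all at or below the global
    // checkpoint is durable on every in-sync copy, hence "safe" to send.
    static Commit findSafeCommit(List<Commit> commits, long globalCheckpoint) {
        Commit safe = commits.get(0); // commits ordered from oldest to newest
        for (Commit commit : commits) {
            if (commit.maxSeqNo() <= globalCheckpoint) {
                safe = commit;
            }
        }
        return safe;
    }
}
```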
Relates #10708
The unified highlighter selects a single sentence per fragment, starting from the offset of the first highlighted term.
This change modifies this selection to allow more than one sentence in a single fragment.
The expansion is done forward (to the right of the matching offset): sentences are added to the current fragment iff the overall size of the fragment stays smaller than the maximum length (fragment_size).
We should also add a way to expand the left context with the surrounding sentences, but this is currently avoided because the unified highlighter in Lucene uses only the first offset that matches the query to derive the start and end offsets of the next fragment.
If we expanded to the left we could split terms that would otherwise be grouped together. Lifting this limitation implies some changes in the core of the unified highlighter.
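A self-contained sketch of the forward expansion (simplified; the real change lives in the unified highlighter's passage formatting):

```java
import java.text.BreakIterator;
import java.util.Locale;

class FragmentExpansionSketch {
    // Extend a fragment to the right, one sentence at a time, while it still
    // fits within fragmentSize characters.
    static int expandForward(String text, int fragmentStart, int matchEnd, int fragmentSize) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.ROOT);
        sentences.setText(text);
        int end = sentences.following(Math.max(matchEnd - 1, 0)); // end of the matching sentence
        int next = sentences.next();
        while (next != BreakIterator.DONE && next - fragmentStart <= fragmentSize) {
            end = next;             // add one more trailing sentence
            next = sentences.next();
        }
        return end;
    }
}
```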
Closes #28089
Currently when adding a document with a `null` value for a range field,
the range field mapper raises an error. Instead we should ignore null
values, as we do e.g. with numbers or geo points.
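Hedged sketch of the change (the surrounding method shape is illustrative; the VALUE_NULL token check is the real XContentParser API):

```java
// Inside the range field mapper's parse method (sketch):
XContentParser parser = context.parser();
if (parser.currentToken() == XContentParser.Token.VALUE_NULL) {
    return; // ignore null, as numeric and geo_point mappers do
}
// ... existing range parsing continues here ...
```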
Closes #27845
Since Elasticsearch 6.1.0, environment variable substitutions in lists have not worked.
Environment variables in a list setting were not resolved because settings with a list type were skipped during variable resolution.
This commit fixes this by processing list settings as well.
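A self-contained sketch of the fix (names are illustrative, not the actual Settings code):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class ListSettingResolutionSketch {
    // Apply placeholder resolution to each element of a list setting instead
    // of skipping list-typed settings entirely.
    static List<String> resolveList(List<String> values, Map<String, String> env) {
        return values.stream()
                .map(value -> resolve(value, env))
                .collect(Collectors.toList());
    }

    static String resolve(String value, Map<String, String> env) {
        for (Map.Entry<String, String> entry : env.entrySet()) {
            value = value.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return value;
    }
}
```

For example, resolving `["${HOSTNAME}", "other"]` with `HOSTNAME=node-1` now yields `["node-1", "other"]` instead of the unresolved placeholder.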
Closes #27926
* This change makes sure that we don't detect a file path containing a ':' as
  a maven coordinate (e.g. `file:C:\path\to\zip`)
* restore test muted on master
This change modifies the installation of a meta plugin:
the contents of the config and bin directories inside each bundled plugin are now moved into the meta plugin directory.
So instead of `$configDir/meta-plugin-name/bundled_plugin/name/` the content of the config
for a bundled plugin is now in `$configDir/meta-plugin-name`. The same applies to the bin directory.
The configuration of the upgraded cluster task was missing a dependency
on the stopping of the second old node in the cluster. In some cases
(e.g., --parallel) Gradle would then try to run the configuration of a
node in the upgraded cluster before it had even configured the old nodes
in the cluster.
Relates #28036
Currently the code which disables transitive dependencies assumes all
deps are a "module dependency" in Gradle, but a jar file dependency is
not. This commit relaxes the closure signature to allow any dependency
and only enforces transitive disabling for module dependencies.
We set the watermarks to low values in other test cases to prevent test
failures on nodes with low disk space (if the disk space is too low, the
test will fail anyway, but we should not fail prematurely). This commit
sets the watermarks in the single-node test cases to avoid test failures
in such situations.
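For illustration, the kind of relaxed watermark settings involved (the setting keys are the real cluster settings; the values here are only an example):

```java
Settings settings = Settings.builder()
    .put("cluster.routing.allocation.disk.watermark.low", "1b")
    .put("cluster.routing.allocation.disk.watermark.high", "1b")
    .put("cluster.routing.allocation.disk.watermark.flood_stage", "1b")
    .build();
```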
Relates #28134
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:
|____elasticsearch/
| |____ <plugin1> <-- The plugin files for plugin1 (the content of the elasticsearch directory)
| |____ <plugin2> <-- The plugin files for plugin2
| |____ meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:
description: simple summary of the meta plugin.
name: the meta plugin name
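For example, a minimal meta-plugin-descriptor.properties could look like this (illustrative values):

```
description=A meta plugin bundling plugin1 and plugin2
name=my_meta_plugin
```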
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:
|_____ plugins
| |____ <name_of_the_meta_plugin>
| | |____ meta-plugin-descriptor.properties
| | |____ <plugin1>
| | |____ <plugin2>
If the sub-plugins contain a config or a bin directory, they are copied into a sub-folder inside the meta plugin config/bin directory.
|_____ config
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
|_____ bin
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions: they have a separate class loader, and a sub-plugin
cannot have the same name as another plugin (or as a sub-plugin inside another meta plugin).
It is also not possible to remove a sub-plugin inside a meta plugin, only full removal of the meta plugin is allowed.
Closes #27316
Otherwise newer versions of Gradle will see the outputs as stale and
remove the directory between creating it and copying files into it
(leading to the directory being created again, this time missing some
sub-directories).
This commit changes IndexShardSnapshotStatus so that the Stage is updated
coherently with any required information. It also provides an asCopy()
method that returns the status of an IndexShardSnapshotStatus at a given
point in time, ensuring that all information is coherent.
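A simplified sketch of the coherent-copy idea (fields and names are illustrative, not the real class):

```java
public class ShardSnapshotStatusSketch {
    public enum Stage { INIT, STARTED, DONE, FAILURE }

    private Stage stage = Stage.INIT;
    private long startTime;
    private int numberOfFiles;

    // stage and its companion data change together, under the lock
    public synchronized void moveToStarted(long startTime, int numberOfFiles) {
        this.stage = Stage.STARTED;
        this.startTime = startTime;
        this.numberOfFiles = numberOfFiles;
    }

    // everything is read under the same lock, so a caller never observes a
    // stage from one update mixed with counters from another
    public synchronized Copy asCopy() {
        return new Copy(stage, startTime, numberOfFiles);
    }

    public static final class Copy {
        public final Stage stage;
        public final long startTime;
        public final int numberOfFiles;

        Copy(Stage stage, long startTime, int numberOfFiles) {
            this.stage = stage;
            this.startTime = startTime;
            this.numberOfFiles = numberOfFiles;
        }
    }
}
```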
Closes #26480
This commit modifies the BWC build to invoke the Gradle wrapper. The
motivation for this is two-fold:
- BWC versions might be dependent on a different version of Gradle than
the current version of Gradle
- in a follow-up we are going to need to be able to set JAVA_HOME to a
different value than the current value of JAVA_HOME
Relates #28138
* Fix license SPDX identifiers (CDDL)
* Fix license type for Custom URL:
* If the license is identified but not listed as an SPDX identifier, the character `;` is used after the identifier to set the license URL.
This commit is related to #27260. It moves the TcpChannelFactory into
NioTransport so that consumers do not have to be passed around.
Additionally it deletes an unused read handler.
We have a packaging test that tries to install all plugins, and then
asserts that all expected plugins are installed. The expected plugins
are derived from the list of plugins in the plugins sub-project. The
plugin transport-nio was recently added here, but explicit commands to
install and remove this plugin were never added. This commit addresses
this.
If a newly created index from a rollover request matches an index
template whose aliases contain the rollover request alias, the alias
will point to multiple indices. This will cause indexing requests to be
rejected. To avoid such a situation, we make sure that there is no
duplicated alias before creating a new index; otherwise we abort and
report an error to the caller.
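A sketch of the validation (IndexTemplate is a stand-in for the real template metadata; the message text is illustrative):

```java
import java.util.List;
import java.util.Set;

class RolloverAliasValidationSketch {
    interface IndexTemplate {
        String name();
        Set<String> aliases();
    }

    // Reject the rollover if any matching template already declares the
    // rollover alias; otherwise the alias would point to multiple indices.
    static void checkNoDuplicatedAlias(List<IndexTemplate> matchingTemplates, String rolloverAlias) {
        for (IndexTemplate template : matchingTemplates) {
            if (template.aliases().contains(rolloverAlias)) {
                throw new IllegalArgumentException("rollover alias [" + rolloverAlias
                        + "] can point to multiple indices; it is already declared in template ["
                        + template.name() + "]");
            }
        }
    }
}
```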
Closes #26976
This commit removes the finalization of a snapshot by the snapshot
deletion request. This way, the deletion marks the snapshot as ABORTED
in the cluster state and waits for the snapshot completion. It is the
responsibility of the snapshot execution to detect the abortion and
terminate itself correctly. This avoids concurrent snapshot
finalizations and also orders the operations: the deletion aborts the
snapshot and waits for the snapshot completion, the creation detects
the abortion, stops by itself and finalizes the snapshot, then the
deletion resumes and continues the deletion process.
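The resulting ordering, as an illustrative pseudo-flow (all helper names are hypothetical):

```java
void deleteSnapshot(String snapshot) {
    markSnapshotAbortedInClusterState(snapshot); // 1. deletion flags the snapshot ABORTED
    awaitSnapshotCompletion(snapshot);           // 2. the running snapshot detects the flag,
                                                 //    stops, and finalizes itself
    deleteSnapshotFiles(snapshot);               // 3. deletion resumes and removes the files
}
```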
Several responses include the shards_acknowledged flag (indicating whether the
requisite number of shard copies started before the completion of the operation)
and there are two different getters in use: isShardsAcknowledged() and isShardsAcked().
This PR deprecates isShardsAcked() in favour of isShardsAcknowledged() in
CreateIndexResponse, RolloverResponse and CreateIndexClusterStateUpdateResponse.
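The deprecation follows the usual delegate pattern, sketched here (class and field names are illustrative):

```java
public class CreateIndexResponseSketch {
    private final boolean shardsAcknowledged;

    public CreateIndexResponseSketch(boolean shardsAcknowledged) {
        this.shardsAcknowledged = shardsAcknowledged;
    }

    public boolean isShardsAcknowledged() {
        return shardsAcknowledged;
    }

    /** @deprecated use {@link #isShardsAcknowledged()} instead */
    @Deprecated
    public boolean isShardsAcked() {
        return isShardsAcknowledged();
    }
}
```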
Closes #27784
The full-cluster-restart tests are run with two nodes. This can lead to situations where the shrink tests fail because
the replicas are not allocated yet and the shrink operation needs the source shards to be available on the same node.
Previously this would default to the version of the remote node.
However, if the remote cluster was mixed-version (e.g. it was part way through a
rolling upgrade), then the TransportService may have negotiated a connection
version that is not identical to the connected node's version.
This mismatch would cause the Stream and the (Remote)Connection to report
different version numbers, which could cause data to be sent over the wire
using an incorrect serialization version.
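A sketch of the fix in ES-like terms (the exact call sites differ; the point is to use the connection's negotiated version for the stream):

```java
// Serialize using the version negotiated for the connection, not the remote
// node's own version.
Version wireVersion = connection.getVersion(); // negotiated connection version
out.setVersion(wireVersion);                   // stream serialization version
response.writeTo(out);
```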