This commit removes a leftover checkstyle suppression for a source file
that was temporarily forked into the codebase to hack around a bug in
Log4j. When that source file was removed, the suppression was left
behind.
This PR completes the refactoring of the cluster allocation explain API and improves it in the following two high-level ways:
1. The explain API now uses the same allocators that the AllocationService uses to make shard allocation decisions. Prior to this PR, the explain API ran the deciders against each node for the shard in question, but this did not go through the same code path as the allocators, so many shard allocation scenarios were not captured.
2. The APIs have changed, on both the Java and JSON levels, to accurately capture the decisions made by the system. The APIs now also report on shard moving and rebalancing decisions, whereas the previous API did not report decisions about moving shards that cannot remain on their current node, or about rebalancing shards to form a more balanced cluster.
Note: this change affects plugin developers who may have a custom implementation of the ShardsAllocator interface. The weighShards method no longer had any utility and has been removed. To support the new explain API, however, a custom implementation of ShardsAllocator must now implement ShardAllocationDecision decideShardAllocation(ShardRouting shard, RoutingAllocation allocation), which provides a decision and explanation for allocating a single shard. Implementations that do not support explaining a single shard allocation via the cluster allocation explain API can simply throw an UnsupportedOperationException from this method, as in the sketch below.
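A minimal sketch of such an opt-out implementation, assuming the interface otherwise only requires allocate (import paths follow the current package layout):
```
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.ShardAllocationDecision;
import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;

// A custom allocator that performs its own allocation but opts out of the
// single-shard explain API by throwing UnsupportedOperationException.
public class CustomShardsAllocator implements ShardsAllocator {

    @Override
    public void allocate(RoutingAllocation allocation) {
        // custom logic that assigns, moves, and rebalances shards
    }

    @Override
    public ShardAllocationDecision decideShardAllocation(ShardRouting shard, RoutingAllocation allocation) {
        // explaining a single shard allocation is not supported by this allocator
        throw new UnsupportedOperationException("explain API is not supported by this allocator");
    }
}
```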
In #22313 we added a check that prevents the SnapshotDeletionsInProgress custom cluster state objects from being sent to older Elasticsearch nodes. This commit makes the check generic and available to other cluster state custom objects if needed.
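A hypothetical sketch of the idea (the names here are illustrative, not the exact production API): each custom declares the minimum node version that understands it, and serialization is skipped for older nodes.
```
import org.elasticsearch.Version;

// Illustrative only: a custom cluster state object advertises the oldest
// node version that can deserialize it.
interface VersionedCustom {
    Version getMinimalSupportedVersion();

    // Generic check: skip sending this custom to nodes that are too old.
    static boolean shouldSerialize(VersionedCustom custom, Version nodeVersion) {
        return nodeVersion.onOrAfter(custom.getMinimalSupportedVersion());
    }
}
```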
The `Script source settings` section currently states that `false` means scripting is ENABLED.
The other sections seem to indicate that `false` means scripting is DISABLED.
If the current documentation is correct, that would imply that `inline` and `stored` scripting are ENABLED by default, which seems to conflict with all the other sections in the document.
Unless the dynamic templates define an explicit format in the mapping
definition; in that case, the explicit mapping takes precedence.
Closes #9410
This adds a new `normalizer` property to `keyword` fields that pre-processes the
field value prior to indexing, but without altering the `_source`. Note that
only the normalization components that work on a per-character basis are
applied, so for instance stemming filters will be ignored while lowercasing or
ASCII folding will be applied.
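As a rough illustration of the per-character restriction, here is a sketch using Lucene's CustomAnalyzer directly (not the Elasticsearch mapping API): only multi-term-aware components such as lowercasing and ASCII folding take part in normalize.
```
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordTokenizerFactory;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilterFactory;
import org.apache.lucene.util.BytesRef;

// Build an analyzer whose normalization chain lowercases and ASCII-folds.
Analyzer normalizer = CustomAnalyzer.builder()
    .withTokenizer(KeywordTokenizerFactory.class)
    .addTokenFilter(LowerCaseFilterFactory.class)
    .addTokenFilter(ASCIIFoldingFilterFactory.class)
    .build();

// normalize() applies only the per-character components to the raw value.
BytesRef normalized = normalizer.normalize("field", "Über-Cool"); // -> "uber-cool"
```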
Closes #18064
Resetting a recovery consists of resetting the old recovery target and replacing it with a new recovery target object. This is done on the CancellableThreads of
the new recovery target. If the new recovery target is already cancelled before or while this happens, for example due to the shard closing or the recovery source
changing, we have to make sure that the old recovery target object frees all shard resources.
Relates to #22325
Recoveries are tracked on the target node using RecoveryTarget objects that are kept in a RecoveriesCollection. Each recovery has a unique id that is communicated from the recovery target to the source so that it can call back to the target and execute actions using the right recovery context. In case of a network disconnect, recoveries are retried. At the moment, the same recovery id is reused for the restarted recovery. This can lead to confusion though if the disconnect is unilateral and the recovery source continues with the recovery process. If the target reuses the same recovery id while doing a second attempt, there might be two concurrent recoveries running on the source for the same target.
This commit changes the recovery retry process to use a fresh recovery id. It also waits for the first recovery attempt to be fully finished (all resources locally freed) to further prevent concurrent access to the shard. Finally, in case of primary relocation, it also fails a second recovery attempt if the first attempt moved past the finalization step, as the relocation source can then be moved to RELOCATED state and start indexing as primary into the target shard (see TransportReplicationAction). Resetting the target shard in this state could mean that indexing is halted until the recovery retry attempt is completed and could also destroy existing documents indexed and acknowledged before the reset.
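A standalone sketch of the id handling (illustrative names, not the actual RecoveriesCollection API): the key point is that a retry removes the old attempt and registers under a brand-new id, so a source still operating against the old id can never be confused with the new attempt.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative recovery id bookkeeping: ids are never reused across retries.
class RecoveryIds {
    private final AtomicLong generator = new AtomicLong();
    private final Map<Long, String> ongoing = new ConcurrentHashMap<>();

    long start(String description) {
        long id = generator.incrementAndGet();
        ongoing.put(id, description);
        return id;
    }

    long retry(long oldId) {
        // free the old attempt's entry first, then register a fresh id;
        // requests addressed to oldId now fail instead of hitting the new attempt
        String description = ongoing.remove(oldId);
        return start(description);
    }
}
```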
Relates to #22043
`scaled_float` fields should be treated as DOUBLE in aggregations, but they are currently treated as LONG.
This change fixes the issue and adds a simple integration test for it.
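For context, `scaled_float` stores each value as a long (the double multiplied by the scaling factor and rounded), so aggregations have to divide by the scaling factor to get back to the double; a minimal sketch:
```
// scaled_float encoding/decoding: values are stored as longs, but
// aggregations must operate on the decoded doubles
double scalingFactor = 100.0;
double original = 12.34;
long stored = Math.round(original * scalingFactor); // 1234, the indexed long
double decoded = stored / scalingFactor;            // 12.34, what aggregations should see
```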
Fixes #22350
Before, snapshot/restore would synchronize all operations on the cluster
state except for deleting snapshots. This meant that only one
snapshot/restore operation would be allowed in the cluster at any given
time, except for deletions - there could be two or more snapshot
deletions running at the same time, or a deletion could be running,
unbeknownst to the rest of the cluster, and thus a snapshot or restore
would be allowed at the same time as the snapshot deletion was still in
progress. This could cause any number of synchronization issues,
including the situation where a snapshot that was deleted could reappear
in the index-N file, even though its data was no longer present in the
repository.
This commit introduces a new custom type to the cluster state to
represent deletions in progress. Now, another deletion cannot start if
a deletion is currently in progress. Similarly, a snapshot or restore
cannot be started if a deletion is currently in progress. In each case,
if attempting to run another snapshot/restore operation while a deletion
is in progress, a ConcurrentSnapshotExecutionException will be thrown.
This is the same exception thrown if trying to snapshot while another
snapshot is in progress, or restore while a snapshot is in progress.
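A standalone sketch of the guard (illustrative names; the real check inspects the new custom in the cluster state):
```
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative guard: no snapshot, restore, or second deletion may start
// while a deletion is tracked as in progress.
final class SnapshotDeletionGuard {
    private final Set<String> deletionsInProgress = ConcurrentHashMap.newKeySet();

    void ensureNoDeletionInProgress(String repository, String operation) {
        if (deletionsInProgress.isEmpty() == false) {
            // stands in for ConcurrentSnapshotExecutionException
            throw new IllegalStateException(
                "cannot " + operation + " on [" + repository + "]: a snapshot deletion is in progress");
        }
    }
}
```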
Closes #19957
This commit fixes an issue with IndexShardTests#testDocStats when the
number of deleted docs is equal to the number of docs. In this case,
Lucene will remove the underlying segment, tripping an assertion on the
number of deleted docs.
Today we try to pull stats from the index writer, but we do not get a
consistent view of them. Under heavy indexing, the inconsistency can be
severe; in particular, it can lead to the number of deleted docs being
reported as negative, which causes serialization issues. Instead, we
should provide a consistent view of the stats by using an index reader.
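A minimal sketch of the reader-based approach with Lucene (assuming a Directory at hand): a point-in-time reader yields mutually consistent counts, so deleted docs can never go negative.
```
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;

// Doc stats from a point-in-time reader: numDocs and numDeletedDocs come
// from the same snapshot of the index, so they are mutually consistent.
static long[] docStats(Directory directory) throws IOException {
    try (DirectoryReader reader = DirectoryReader.open(directory)) {
        int numDocs = reader.numDocs();
        int deletedDocs = reader.numDeletedDocs(); // == maxDoc() - numDocs(), never negative
        return new long[] { numDocs, deletedDocs };
    }
}
```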
Relates #22317
The backwards compatibility tests rely on Gradle's built-in mechanisms for resolving dependencies
to get the zip of the older version we test against. By default, this will cache snapshots for
24 hours, which can lead to unexpected failures in CI. This change makes the special configurations
for backwards compatibility always update their snapshots by setting the amount of time to cache
to 0 seconds.
Not doing this made it difficult to establish a happens-before relationship between connecting to a node and adding a listener, causing test code like the following to fail sporadically:
```
// connection to reuse
handleA.transportService.connectToNode(handleB.node);

// install a listener to check that no new connections are made
handleA.transportService.addConnectionListener(new TransportConnectionListener() {
    @Override
    public void onConnectionOpened(DiscoveryNode node) {
        fail("should not open any connections. got [" + node + "]");
    }
});
```
Relates to #22277
Today we silently ignore invalid test logging annotations. This commit
rejects these annotations, failing the processing of the annotation and
aborting the test.
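A hypothetical sketch of the validation (names below are illustrative): each entry must be of the form logger:LEVEL, and anything else is rejected instead of being silently ignored.
```
import java.util.Map;
import java.util.TreeMap;
import org.apache.logging.log4j.Level;

// Parse "org.elasticsearch:DEBUG,org.elasticsearch.transport:TRACE",
// failing fast on malformed entries or unknown level names.
static Map<String, Level> parseTestLogging(String annotation) {
    Map<String, Level> levels = new TreeMap<>();
    for (String entry : annotation.split(",")) {
        String[] parts = entry.split(":");
        if (parts.length != 2) {
            throw new IllegalArgumentException("invalid test logging entry [" + entry + "]");
        }
        levels.put(parts[0], Level.valueOf(parts[1])); // throws on unknown level names
    }
    return levels;
}
```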
This commit adds a test for applying logging levels in hierarchical
order, and addresses an issue with restoring the logging levels at the
end of a test or suite.
This commit factors out the cluster state update tasks that are published (ClusterStateUpdateTask) from those that are not (LocalClusterUpdateTask), serving as a basis for future refactorings to separate the publishing mechanism out of ClusterService.
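A hypothetical sketch of the distinction, with simplified names and signatures (the real classes carry more machinery): published tasks compute a new cluster state that is broadcast, while local tasks only act on the current state.
```
import org.elasticsearch.cluster.ClusterState;

// Simplified: a task whose result is published to the whole cluster.
abstract class PublishedUpdateTask {
    abstract ClusterState execute(ClusterState currentState); // new state is broadcast
}

// Simplified: a task that runs on the local node only and publishes nothing.
abstract class LocalUpdateTask {
    abstract void execute(ClusterState currentState); // observe and act, no new state
}
```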
We have to sort the logger names so they don't override each other: processing org.elasticsearch:DEBUG after org.elasticsearch.transport:TRACE resets the setting of the latter.
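A minimal sketch of the sorted application using Log4j 2's Configurator (assuming, as the commit describes, that applying a parent logger's level can reset a child's): the TreeMap iterates org.elasticsearch before org.elasticsearch.transport, so the broader DEBUG can no longer clobber the more specific TRACE.
```
import java.util.Map;
import java.util.TreeMap;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

// Lexicographic order puts "org.elasticsearch" before
// "org.elasticsearch.transport", so the broader setting is applied first.
Map<String, Level> levels = new TreeMap<>();
levels.put("org.elasticsearch", Level.DEBUG);
levels.put("org.elasticsearch.transport", Level.TRACE);
levels.forEach(Configurator::setLevel);
```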
When starting a standalone cluster, we do not enable assertions. This is
problematic because it means that we miss opportunities to catch
bugs. This commit enables assertions for standalone integration tests,
and fixes a couple of bugs that were uncovered by enabling them.
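The standard Java idiom for verifying that assertions made it onto the JVM command line, as a small sketch:
```
// The assignment inside the assert only runs when assertions are enabled,
// so assertionsEnabled stays false if the JVM was started without -ea.
boolean assertionsEnabled = false;
assert assertionsEnabled = true;
if (assertionsEnabled == false) {
    throw new IllegalStateException("tests must be run with assertions enabled (-ea)");
}
```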
Relates #22334