With #36221 we introduced shards counting to address a rare failure.
This caused a worse problem in this test when replicas were allocated
and shard failures were randomly returned. The latch has to take into
account the additional attempts caused by the shard failures, which means
that in order for run to be called, performPhaseOnShard will be called
(numShards + numFailures) times.
To address this, we need to decide upfront which shards are going to fail,
making sure that at least one shard is successful; otherwise the whole
request fails.
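A minimal sketch of the idea (plain JDK scaffolding; the real test uses the ESTestCase random helpers):

    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;
    import java.util.concurrent.CountDownLatch;

    Random random = new Random();
    int numShards = 2 + random.nextInt(9);
    // strictly fewer failures than shards, so at least one shard succeeds
    int numFailures = random.nextInt(numShards);
    Set<Integer> failingShards = new HashSet<>();
    while (failingShards.size() < numFailures) {
        failingShards.add(random.nextInt(numShards));
    }
    // performPhaseOnShard runs once per shard, plus once per retried failure
    CountDownLatch latch = new CountDownLatch(numShards + failingShards.size());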
Closes #37074
With this commit we introduce a new store type `hybridfs` that is a
hybrid between `mmapfs` and `niofs`. This store type chooses different
strategies to read Lucene files based on the read access pattern (random
or linear) in order to optimize performance.
This store type has been available in earlier versions of Elasticsearch
as `default_fs`. We have chosen a different name now in order to convey
the intent of the store type instead of tying it to whether it is the
default choice.
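For illustration only (a sketch, not part of this commit), the store type can be selected per index via the index.store.type setting, e.g. when building index settings in Java:

    import org.elasticsearch.common.settings.Settings;

    Settings indexSettings = Settings.builder()
            .put("index.store.type", "hybridfs") // picks mmap or nio per Lucene file
            .build();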
Relates #36668
SearchAsyncActionTests may fail with RejectedExecutionException as InitialSearchPhase may try to execute a runnable after the test has successfully completed and the corresponding executor has already been shut down. The latch was located in getNextPhase, which is almost correct, but does not cover the last finishAndRunNext round that gets executed after onShardResult is invoked.
This commit moves the latch to count the number of shards, allowing the test to count down later, after finishAndRunNext has potentially been forked. This way nothing else will be executed once the executor is shut down at the end of the test.
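A sketch of the resulting ordering (method and variable names assumed from the test, not verbatim):

    // latch sized to the number of shards, not to the phases
    final CountDownLatch latch = new CountDownLatch(numShards);

    // per shard, in the test's async action:
    onShardResult(result, shardIterator); // may fork finishAndRunNext to the executor
    latch.countDown();                    // counted down only after the potential fork

    // at the end of the test:
    latch.await();
    executor.shutdown(); // nothing can be submitted anymore, no RejectedExecutionException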
Closes #36221
Closes #33699
Changes the feature usage retrieval to use the job manager rather than
directly talking to the cluster state, because jobs can now be either in
the cluster state or stored in an index.
This is a follow-up of #36702 / #36698
* Fixes the issue reproduced in the added tests:
* When there are open index requests on a shard that are waiting for a refresh, relocating that shard
becomes blocked until that refresh happens (which may never happen, as in the test scenario).
With #36997 we added the ability to provide a cluster alias with a
SearchRequest.
The next step is to disable the final reduction whenever a cluster alias
is provided with the SearchRequest. A cluster alias will be provided
when executing a cross-cluster search request with alternate execution
mode, where each cluster does its own reduction locally. In order for
the CCS node to be able to later perform an additional reduction of the
results, we need to make sure that all the needed info stays available.
This means that terms aggregations can be reduced but not pruned, and
pipeline aggs should not be executed. The final reduction will happen
later in the CCS coordinating node.
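Conceptually (a simplified sketch around the existing reduce-context flag, not the literal change):

    // pruning and pipeline aggregations only happen on the final reduction
    if (reduceContext.isFinalReduce()) {
        // prune terms buckets and execute pipeline aggregations
    } else {
        // partial reduction: keep all buckets and skip pipeline aggs so the
        // CCS coordinating node can still perform the final reduction later
    }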
Relates to #36997 & #32125
With the upcoming cross-cluster search alternate execution mode, the CCS
node will be able to split a CCS request into multiple search requests,
one per remote cluster involved. In order to do that, the CCS node has
to be able to signal to each remote cluster that such sub-requests are
part of a CCS request. Each cluster does not know about the other
clusters involved, nor what alias it has been given on the CCS node;
hence the CCS coordinating node needs to be able to provide the alias
as part of the search request, so that it can be used as the index
prefix in the returned search hits.
The cluster alias is a notion that's already supported in the search shards
iterator and search shard target, but it is currently used in CCS as both
index prefix and connection lookup key when fanning out to all the shards.
With the CCS alternate execution mode the provided cluster alias needs to
be used only as the index prefix: as shards are local to each cluster,
no cluster alias should be used for connection lookups.
The local cluster alias can be set on the SearchRequest at the transport layer
only, and its constructor/getter methods are package private.
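A hypothetical sketch of the shape of this API (names assumed, not the verbatim code):

    // package private: only the transport layer can set a local cluster alias
    SearchRequest(String localClusterAlias) {
        this.localClusterAlias = localClusterAlias;
    }

    // used purely as an index prefix for hits (e.g. "cluster_one:index"),
    // never as a connection lookup key
    String getLocalClusterAlias() {
        return localClusterAlias;
    }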
Relates to #32125
- Removes bwc invalidation logic from the TokenService
- Removes bwc serialization for InvalidateTokenResponse objects as
old nodes in supported mixed clusters during upgrade will be 6.7 and
thus will know of the new format
- Removes the created field from the TokensInvalidationResult and the
InvalidateTokenResponse as it is no longer useful in > 7.0
If we don't explicitly set the client SSLSocketFactory when
creating an InMemoryDirectoryServer and setting its SSL config, it
will result in using a TrustAllTrustManager (which extends
X509TrustManager), which is not allowed in a FIPS 140 JVM.
Instead, we get the SSLSocketFactory from the existing SSLContext
and pass that to be used.
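A minimal sketch of the approach with the UnboundID SDK (variable names illustrative; buildServerSSLContext is an assumed helper):

    import javax.net.ssl.SSLContext;
    import com.unboundid.ldap.listener.InMemoryListenerConfig;

    SSLContext sslContext = buildServerSSLContext(); // assumed helper producing the test SSLContext
    InMemoryListenerConfig ldapsConfig = InMemoryListenerConfig.createLDAPSConfig(
            "ldaps", null, 0,
            sslContext.getServerSocketFactory(),
            sslContext.getSocketFactory()); // explicit client factory, no TrustAllTrustManager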
Resolves #37013
Improves on #36449, which did not cover the situation where a node had bumped its term during
the election rather than when receiving the first follower check. This was uncovered while
refactoring NodeJoinTests so that they no longer need access to an internal field of Coordinator
(which can now be made private).
The phrase "missing authentication token" is historic and is based
around the use of "AuthenticationToken" objects inside the Realm code.
However, now that we have a TokenService and token API, this message
would sometimes lead people in the wrong direction, and they would try
to generate a "token" for authentication purposes when they typically
just need a username:password Basic Auth header.
This change replaces the word "token" with "credentials".
In #30509 we changed the way SSL configuration is reloaded when the
content of a file changes. As a consequence of that implementation
change, the LDAP realm ceased to pick up changes to CA files (or other
certificate material).
This commit repairs the reloading behaviour for LDAP realms, and adds
a test for this functionality.
Resolves: #36923
Today InternalTestClusterTests is still using zen1.
This commit fixes it.
Two types of changes were required:
1. Explicitly pass the file-based discovery host provider setting. This is done in
ESIntegTestCase as part of the Zen2 feature and should be done here
as well.
2. For the test that uses autoManageMinMasterNodes = false, perform
cluster bootstrap.
* This speeds up the test from an average 25s down to 7s runtime
* There is no need to artificially slow down the snapshot to reproduce the issue of an out-of-sync routing table in practice.
Over hundreds of test runs the snapshot shard service still ran into the index-not-found exception every time, reproducing this issue.
* Relates #36294
Keys are compared in BucketSortPipelineAggregation, so the key type (ArrayMap) now implements Comparable. Maps are compared using the entry set's iterator, so the order of ordered maps is maintained. For each entry, the key is compared first, then the value. All keys are assumed to be strings. When comparing entries' values, an exception is thrown if the types are not identical and/or do not implement Comparable. equals() and hashCode() are not overridden, as the parent's implementations are sufficient. Tests included.
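The comparison logic, roughly (a standalone sketch; ArrayMap's internals differ):

    import java.util.Iterator;
    import java.util.Map;

    @SuppressWarnings({"unchecked", "rawtypes"})
    static int compareMaps(Map<String, Object> a, Map<String, Object> b) {
        Iterator<Map.Entry<String, Object>> itA = a.entrySet().iterator();
        Iterator<Map.Entry<String, Object>> itB = b.entrySet().iterator();
        while (itA.hasNext() && itB.hasNext()) {
            Map.Entry<String, Object> entryA = itA.next();
            Map.Entry<String, Object> entryB = itB.next();
            int compare = entryA.getKey().compareTo(entryB.getKey()); // keys assumed to be strings
            if (compare != 0) {
                return compare;
            }
            Object valueA = entryA.getValue();
            Object valueB = entryB.getValue();
            if (valueA.getClass() != valueB.getClass() || (valueA instanceof Comparable) == false) {
                throw new IllegalArgumentException(
                        "cannot compare [" + valueA.getClass() + "] with [" + valueB.getClass() + "]");
            }
            compare = ((Comparable) valueA).compareTo(valueB);
            if (compare != 0) {
                return compare;
            }
        }
        return Integer.compare(a.size(), b.size());
    }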
Today the routing of a SourceToParse is assigned in a separate step
after the object is created. We can easily forget to set the routing.
With this commit, the routing must be provided in the constructor of
SourceToParse.
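Roughly (signatures approximated, not the literal diff):

    // before: routing assigned in a separate step, easy to forget
    SourceToParse source = SourceToParse.source(index, type, id, bytes, xContentType);
    source.routing(routing);

    // after: routing is a constructor argument and cannot be skipped
    SourceToParse source = new SourceToParse(index, type, id, bytes, xContentType, routing);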
Relates #36921
The AutoFollowCoordinator should be resilient to the follower index
already having been created; in that case it should only update the
auto follow metadata to record that the follower index was created.
Relates to #33007
Currently auto follow stats users are unable to see whether an auto follow
error was recent or old. The new timestamp field will help users distinguish
between old and new errors.
Both index following and auto following should be resilient against missing remote connections.
This happens in the case that they get accidentally removed by a user. When this happens,
auto following and index following will retry instead of failing with unrecoverable exceptions.
Both the put follow and put auto follow APIs validate whether the
remote cluster connection exists. The logic added in this change only
covers the case where, during the lifetime of a follower index or auto
follow pattern, the remote connection gets removed. This retry behavior
is similar to how CCR deals with authorization errors.
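Conceptually (a sketch; the real retry predicate covers more failure types):

    import org.elasticsearch.transport.ConnectTransportException;
    import org.elasticsearch.transport.NoSuchRemoteClusterException;

    // a missing remote cluster connection is treated as retryable, not fatal
    static boolean shouldRetry(Exception e) {
        return e instanceof ConnectTransportException
                || e instanceof NoSuchRemoteClusterException; // connection removed by the user
    }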
Closes #36667
Closes #36255
We added some special handling for installing and removing the
ingest-geoip and ingest-user-agent plugins when we converted them to
modules. This special handling was done to minimize breaking users in a
minor release. However, we do not want to maintain this behavior forever, so
this commit removes that special handling in the master branch so that
starting with 7.0.0 this special handling will be gone.
We introduced a typeless API in #35790 where we translate the default
docType "_doc" to the user-defined docType. However, we do not rewrite
the SourceToParse with the resolved docType. This leads to a situation
where we have two translog operations for the same document with
different types:
- prvOp [Index{id='9LCpwGcBkJN7eZxaB54L', type='_doc', seqNo=1,
primaryTerm=1, version=1, autoGeneratedIdTimestamp=1545125562123}]
- newOp [Index{id='9LCpwGcBkJN7eZxaB54L', type='not_doc', seqNo=1,
primaryTerm=1, version=1, autoGeneratedIdTimestamp=-1}]
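A rough sketch of the fix (names approximated):

    // rebuild the SourceToParse with the resolved type so that primary and
    // replica record the same type in their translog operations
    if (source.type().equals(resolvedType) == false) {
        source = new SourceToParse(source.index(), resolvedType, source.id(),
                source.source(), source.getXContentType(), source.routing());
    }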
Closes #36769
This commit fixes some cross-doc links from the old ingest plugins page
to the new ingest processor pages that arose after converting
ingest-geoip and ingest-user-agent to modules.