When we rewrite alias requests, after filtering down to only those that
the user is authorized to see, it can happen that no aliases remain in
the request. However, core Elasticsearch interprets an empty list of
aliases as _all, so the user would see more than they are authorized
for. To address
this, we previously rewrote all such requests to have the aliases `"*"`,
`"-*"`, which are interpreted as no aliases at all when aliases are
resolved. Yet this rewrite is only needed for get aliases requests, and
we were applying it to all alias requests, including remove index
requests. If
such a request was sent to a coordinating node that is not the master
node, the request would be rewritten to include `"*"` and `"-*"`, and
then the master would authorize the user for these. If the user had
limited permissions, the request would fail, even if they were
authorized on the index that the remove index action targeted. This
commit addresses this by applying the rewrite for the get aliases and
remove aliases request types, but not for remove index requests.
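A minimal sketch of the `"*"`, `"-*"` substitution described above
(variable names are hypothetical, not the actual implementation):

    // Sketch only: if authorization filtering left no aliases, substitute
    // patterns that resolve to no aliases rather than letting the empty
    // list be treated as _all. Applied to get/remove aliases requests only.
    if (authorizedAliases.isEmpty()) {
        aliasesRequest.replaceAliases("*", "-*");
    }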
Co-authored-by: Albert Zaharovits <albert.zaharovits@elastic.co>
Co-authored-by: Tim Vernum <tim@adjective.org>
Today the `LeaderChecker` rejects checks from nodes that are not in the current
cluster with the exception message `leader check from unknown node` which
offers no information about why the node is unknown. In fact the node must have
been in the cluster in the recent past, so it might help guide the user to a
more useful log message if we describe it as a `removed node` instead of an
`unknown node`. This commit changes the exception message accordingly, and
also tidies up a few other loose ends in the `LeaderChecker`.
Today `GatewayMetaState` implements `PersistedState` but it's an error to use
it as a `PersistedState` before it's been started, or if the node is
master-ineligible. It also holds some fields that are meaningless on nodes that
do not persist their states. Finally, it takes responsibility for both loading
the original cluster state and some of the high-level logic for writing the
cluster state back to disk.
This commit addresses these concerns by introducing a more specific
`PersistedState` implementation for use on master-eligible nodes which is only
instantiated if and when it's appropriate. It also moves the fields and
high-level persistence logic into a new `IncrementalClusterStateWriter` with a
more appropriate lifecycle.
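Roughly, the new selection might look like the following sketch
(`OnDiskPersistedState` is an illustrative name, not the actual class):

    // Sketch: only master-eligible nodes get a disk-backed PersistedState,
    // which delegates the writing logic to IncrementalClusterStateWriter;
    // other nodes keep a purely in-memory implementation.
    final CoordinationState.PersistedState persistedState;
    if (DiscoveryNode.isMasterNode(settings)) {
        persistedState = new OnDiskPersistedState(incrementalClusterStateWriter);
    } else {
        persistedState = new InMemoryPersistedState(currentTerm, clusterState);
    }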
Follow-up to #46326 and #46532
Relates #47001
This change works around JDK-8213202, which is a bug related to TLSv1.3
session resumption before JDK 11.0.3 that occurs when there are
multiple concurrent sessions being established. Nodes connecting to
each other will trigger this bug when client authentication is
disabled, which is the case for SSLClientAuthTests.
Backport of #46680
The HLRC has methods for reindex that allow triggering an async reindex via RestHighLevelClient.submitReindexTask in addition to RestHighLevelClient.reindex. Delete by query, however, only has the RestHighLevelClient.deleteByQuery method (and its async counterpart), but no RestHighLevelClient.submitDeleteByQueryTask. This commit adds RestHighLevelClient.submitDeleteByQueryTask.
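A usage sketch for the new method, assuming it mirrors the existing
submitReindexTask API (index name and query are illustrative):

    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.tasks.TaskSubmissionResponse;
    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.index.reindex.DeleteByQueryRequest;

    DeleteByQueryRequest request = new DeleteByQueryRequest("my-index");
    request.setQuery(QueryBuilders.termQuery("user", "kimchy"));
    // Submit without waiting for completion; the returned task id can be
    // polled via the tasks API.
    TaskSubmissionResponse submission =
            client.submitDeleteByQueryTask(request, RequestOptions.DEFAULT);
    String taskId = submission.getTask();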
Closes #46395
Similarly to what has been done for S3 in #45383, this commit
adds unit tests that verify the behavior of the SDK client and
blob container implementation for Google Storage when the
remote service returns errors.
The main purpose was to add an extra test for the specific retry
logic for 410 Gone errors added in #45963.
Relates #45963
Previously, queries on the _index field were not able to specify index aliases.
This was a regression in functionality compared to the 'indices' query that was
deprecated and removed in 6.0.
Now queries on _index can specify an alias, which is resolved to the concrete
index names when we check whether an index matches. To match a remote shard
target, the pattern needs to be of the form 'cluster:index' to match the
fully-qualified index name. Index aliases can be specified in the following query
types: term, terms, prefix, and wildcard.
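For example, such queries might be built as follows (alias and cluster
names are illustrative):

    import org.elasticsearch.index.query.QueryBuilder;
    import org.elasticsearch.index.query.QueryBuilders;

    // The alias is resolved to its concrete index names when checking
    // whether each index matches.
    QueryBuilder byAlias = QueryBuilders.termQuery("_index", "logs-alias");
    // Remote shard targets require the fully-qualified 'cluster:index' form.
    QueryBuilder byRemote = QueryBuilders.wildcardQuery("_index", "my-cluster:logs-*");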
This commit replaces the SearchContext used in AbstractQueryTestCase with
a QueryShardContext in order to reduce the visibility of search contexts.
Relates #46523
Add initial PIVOT support for transforming a regular table into a
statistics table around an arbitrary pivoting column:
SELECT * FROM
(SELECT languages, country, salary FROM mp)
PIVOT (AVG(salary) FOR country IN ('NL', 'DE', 'ES', 'RO', 'US'))
In the current implementation PIVOT allows only one aggregation; however,
this restriction is likely to be lifted in the future.
Also, not all aggregations work yet; in particular MatrixStats is not supported.
(cherry picked from commit d91263746a222915c570d4a662ec48c1d6b4f583)
The API specs now use an object for the documentation field, but _common was not yet updated. This commit updates _common.json and its corresponding parser.
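For illustration, a documentation entry in the spec now looks roughly
like this (values made up):

    "documentation": {
      "url": "https://www.elastic.co/guide/current/some-api.html",
      "description": "Short description of the API"
    }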
Closes #46744
Co-Authored-By: Tomas Della Vedova <delvedor@users.noreply.github.com>
G1 GC was set up to use an `InitiatingHeapOccupancyPercent` of 75. This
could leave used memory at a very high level for an extended duration,
triggering the real memory circuit breaker even at low activity levels.
The value is a threshold for old generation usage relative to total heap
size, and thus it should leave room for the new generation. The default
in G1 is to allow up to 60 percent for the new generation, which could
mean that the threshold was effectively at 135% heap usage. GC would
still kick in, of course, and eventually enough mixed collections would
take place for adaptive adjustment of the IHOP to kick in.
The JVM has adaptive setting of the IHOP, but this does not kick in
until it has sampled a few collections. A newly started, relatively
quiet server with primarily new generation activity could thus
frequently experience heap usage above 95% for some time.
The changes here are two-fold:
1. Use 30% default for IHOP (the JVM default of 45 could still mean a
105% heap usage threshold and did not reliably avoid hitting the
circuit breaker at low activity)
2. Set G1ReservePercent=25. This is used by the adaptive IHOP mechanism,
meaning old/mixed GC should kick in no later than at 75% heap. This
ensures IHOP stays compatible with the real memory circuit breaker even
after being adjusted by adaptive IHOP.
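For reference, the flags involved, as they could appear in jvm.options
(any version-conditional prefixes omitted):

    -XX:+UseG1GC
    -XX:G1ReservePercent=25
    -XX:InitiatingHeapOccupancyPercent=30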
This PR adds some restrictions around test fixtures to make sure the
same service (as defined in docker-compose.yml) is not shared between
multiple projects.
Sharing would break running with --parallel.
Projects can still share fixtures as long as each has its own service
within. This is still useful for sharing some of the setup and
configuration code of the fixture.
Projects now also have to specify a service name when calling useCluster
to refer to a specific service.
If they do not, all services will be claimed and the fixture can't be shared.
For this reason fixtures have to explicitly specify if they use
themselves (fixture and tests in the same project).
`testLogsWarningPeriodicallyIfClusterNotFormed` simulates a leader failure and
waits for long enough that a failing leader check is scheduled. However it does
not wait for the failing check to actually fail, which requires another two
actions and therefore might take up to 200ms more. Unlucky timing would result
in this test failing, for instance:
./gradlew ':server:test' \
--tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLogsWarningPeriodicallyIfClusterNotFormed" \
-Dtests.jvm.argline="-Dhppc.bitmixer=DETERMINISTIC" \
-Dtests.seed=F18CDD0EBEB5653:E9BC1A8B062E697A
This commit adds the extra delay needed for the leader failure to complete as
expected.
Fixes #46920
Currently the CountRequest accepts a search source builder, while the
RestCountAction only accepts a top level query object. This can lead
to confusion if another element (e.g. aggregations) is specified,
because that will be ignored on the server side in RestCountAction.
By deprecating the current setter & constructor that accept a
SearchSourceBuilder, and adding replacements that accept a QueryBuilder,
it is clear what the count API can handle on the HLRC side.
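A sketch of the clarified usage (index and query are illustrative;
method shape per the description above):

    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.core.CountRequest;
    import org.elasticsearch.client.core.CountResponse;
    import org.elasticsearch.index.query.QueryBuilders;

    // Only a top-level query can be set, matching what RestCountAction accepts.
    CountRequest countRequest = new CountRequest("my-index")
            .query(QueryBuilders.termQuery("user", "kimchy"));
    CountResponse countResponse = client.count(countRequest, RequestOptions.DEFAULT);
    long count = countResponse.getCount();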
Follow up from #46829
If peer recovery happens after indexing, and indexing flushes some shard
at the end, then the explicit flush in the test will be a noop. Then
replicas will have some uncommitted translog, which is transferred in
peer recovery, although all of those operations are in the commit
already. If that replica becomes primary (after we restarted the
cluster), it will have translog to replay and the test will fail.
Another issue in this test is that synced_flush is not a replication
action, so the global checkpoint on replicas might not be up to date.
We need to either wait for the global checkpoint to be synced or call a
replication action to sync it.
Closes #46712
This test fails at a very low rate, and with the force merge in place
before checking the cache size it's not clear why the cache is not of
size `0` -> something else must be happening here that is unexpected
-> add debug logging to this test to find out
Relates #46701
* addSnapshotLifecyclePolicy drop version assertion
This drops the assertion on the policy version (which was pinned to 1L)
as we want to execute both put policy APIs (sync and async) for
documentation purposes. This will sometimes (depending on the async
call) yield a version of 2L. Waiting for the async call to always
complete could be an option, but the test is already rather slow and it's
a bit of overkill as we're already verifying the policy was created.
(cherry picked from commit af4864c39129bcdbf98d00223f445346a62075e4)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
We were repeatedly trying to send shard state updates for aborted
snapshots on every cluster state update.
This is simply dead code since those updates are already safely
sent in the callbacks passed to `SnapshotShardsService#snapshot`.
On master failover, we ensure that the status update is resent
via `SnapshotShardsService#syncShardStatsOnNewMaster`.
=> there is no need for trying to send updates here over and over
and this logic can safely be removed
This assertion was added during the development of required
pipelines. In the initial version of that work, the notion of whether or
not a request was forwarded from the coordinating node to an ingest node
was introduced. It was realized later that instead we needed to track
whether or not the pipeline for the request was resolved. When that
change was made, this assertion, while not incorrect, was left behind
and only applied if the coordinating node was forwarding the
request. Instead, the assertion applies whether or not the request is
being forwarded. This commit addresses that by moving the assertion and
renaming some variables.
GoogleCloudStorageBlobStore.deleteBlobsIgnoringIfNotExists() does
not correctly catch StorageException thrown by batch.submit().
In the case where a snapshot is deleted through BlobStoreRepository.deleteSnapshot(),
a storage exception is not caught (only IOExceptions are), so the deletion is
interrupted and indices cannot be cleaned up. The storage exception bubbles
up to SnapshotService.deleteSnapshotFromRepository() but the listener that
removes the deletion from the cluster state is not executed, leaving the
deletion in the cluster state.
This bug was reported in #46772, where batch.submit() threw an
exception in the test testIndicesDeletedFromRepository and the following
tests failed because a snapshot deletion was still running.
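The fix might look roughly like the following sketch (error message is
illustrative):

    import java.io.IOException;
    import com.google.cloud.storage.StorageException;

    try {
        batch.submit();
    } catch (final StorageException e) {
        // Translate the SDK's unchecked exception into the IOException
        // that callers of deleteBlobsIgnoringIfNotExists() already handle.
        throw new IOException("Exception when deleting blobs", e);
    }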
Relates #46772
This commit adds the ability to require an ingest pipeline on an
index. Today we can have a default pipeline, but that could be
overridden by a request pipeline parameter. This commit introduces a new
index setting index.required_pipeline that acts similarly to
index.default_pipeline, except that it can not be overridden by a
request pipeline parameter. Additionally, a default pipeline and a
required pipeline can not both be set. The required pipeline can be set
to _none to ensure that no pipeline ever runs for index requests on that
index.
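For example, the setting could be applied at index creation time, as in
this sketch (index and pipeline names are illustrative):

    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.indices.CreateIndexRequest;
    import org.elasticsearch.common.settings.Settings;

    CreateIndexRequest request = new CreateIndexRequest("logs")
            .settings(Settings.builder()
                    // every index request must run this pipeline; a request
                    // pipeline parameter can no longer override it
                    .put("index.required_pipeline", "my-pipeline"));
    client.indices().create(request, RequestOptions.DEFAULT);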
This test is broken, with a very low failure rate, after recent
changes, particularly #45689, which no longer checks for duplicate
snapshot uuids during snapshot finalization.
The check for duplicate uuids during finalization was removed
consciously since it led to problems during master failover.
This test fails because it increments the repository state id in an
unexpected manner now, starting from the impossible situation of having
the same snapshot UUID for two different repository state ids.
This situation can't normally be reached, but we manually crafted it here.
This test didn't do anything before, though, because the manually
crafted cluster state would simply result in an error during
finalization, so nothing but a normal snapshot delete was being tested.
=> removing this test here, it doesn't test anything
Closes #46843