G1 GC was set up to use an `InitiatingHeapOccupancyPercent` of 75. This
could leave used memory at a very high level for an extended duration,
triggering the real memory circuit breaker even at low activity levels.
The value is a threshold for old generation usage relative to total heap
size, so it should leave room for the new generation. By default, G1
allows up to 60 percent of the heap for the new generation, which could put the
effective threshold at 135% heap usage (75% old generation plus up to 60% new
generation). GC would still kick in of course, and eventually enough mixed
collections would take place for adaptive adjustment of the IHOP to kick in.
The JVM adjusts the IHOP adaptively, but this does not kick in
until it has sampled a few collections. A newly started, relatively
quiet server with primarily new generation activity could thus
frequently experience heap usage above 95% for some time.
The changes here are two-fold:
1. Use 30% as the default for the IHOP (the JVM default of 45 could still mean
an effective threshold of 105% heap usage and did not fully ensure that the
circuit breaker is not hit at low activity)
2. Set G1ReservePercent=25. This is used by the adaptive IHOP mechanism,
meaning old/mixed GC should kick in no later than at 75% heap usage. This
ensures the IHOP stays compatible with the real memory circuit breaker even
after being adjusted by adaptive IHOP (see the flags sketched below).
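A minimal sketch of the resulting jvm.options entries (these are the standard HotSpot G1 flags; where they live depends on the packaging):

```
-XX:+UseG1GC
-XX:G1ReservePercent=25
-XX:InitiatingHeapOccupancyPercent=30
```

With these values, old/mixed collections should start early enough that heap usage stays clear of the 95% real memory circuit breaker threshold even before adaptive IHOP has sampled enough collections.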
This PR adds some restrictions around test fixtures to make sure the same service (as defined in docker-compose.yml) is not shared between multiple projects.
Sharing would break running with --parallel.
Projects can still share fixtures as long as each has its own service within.
This is still useful for sharing some of the setup and configuration code of the fixture.
Projects now also have to specify a service name when calling useCluster to refer to a specific service.
If they do not, all services will be claimed and the fixture can't be shared.
For this reason fixtures have to explicitly specify if they are using themselves (fixture and tests in the same project).
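A hypothetical sketch of what this means for a consuming project's build.gradle (the project path, method shape, and service name here are illustrative, not the exact plugin API):

```groovy
// claim only one specific service from the fixture's docker-compose.yml
// instead of all of them, so other projects can still share the fixture
testFixtures.useFixture(':test:fixtures:example-fixture', 'example-service')
```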
`testLogsWarningPeriodicallyIfClusterNotFormed` simulates a leader failure and
waits for long enough that a failing leader check is scheduled. However, it does
not wait for the failing check to actually fail, which requires another two
actions and therefore might take up to 200ms more. Unlucky timing would result
in this test failing, for instance:
./gradlew ':server:test' \
--tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLogsWarningPeriodicallyIfClusterNotFormed" \
-Dtests.jvm.argline="-Dhppc.bitmixer=DETERMINISTIC" \
-Dtests.seed=F18CDD0EBEB5653:E9BC1A8B062E697A
This commit adds the extra delay needed for the leader failure to complete as
expected.
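A sketch of the shape of the fix (the constant and helper names follow the coordination test harness, but the exact delay computation in the real test may differ):

```java
// two more scheduled actions are needed for the leader check to fail,
// each delayed by up to the default delay variability (100ms), so run
// the deterministic scheduler for up to 200ms longer before asserting
cluster.runFor(2 * DEFAULT_DELAY_VARIABILITY,
    "allowing the failing leader check to actually fail");
```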
Fixes #46920
Currently the CountRequest accepts a search source builder, while the
RestCountAction only accepts a top level query object. This can lead
to confusion if another element (e.g. aggregations) is specified,
because that will be ignored on the server side in RestCountAction.
By deprecating the current setter & constructor that accept a
SearchSourceBuilder and adding replacements that accept a QueryBuilder,
it is clear what the count API can handle on the HLRC side.
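For example, the intent is unambiguous with the replacement setter (a sketch; index and query are illustrative):

```java
import org.elasticsearch.client.core.CountRequest;
import org.elasticsearch.index.query.QueryBuilders;

// only a top-level query can be set, matching what RestCountAction accepts;
// there is no way to sneak in aggregations that would be silently ignored
CountRequest countRequest = new CountRequest("my-index")
    .query(QueryBuilders.termQuery("user", "kimchy"));
```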
Follow up from #46829
If peer recovery happens after indexing, and indexing flushes some shard
at the end, then the explicit flush in the test will be a noop. The
replicas will then have some uncommitted translog, which is transferred in
peer recovery even though all of those operations are already in the
commit. If such a replica becomes primary (after we restart the
cluster), it will have translog to replay and the test will fail.
Another issue in this test is that synced_flush is not a replication
action, so the global checkpoint on replicas might not be up to date.
We need to either wait for the global checkpoint to be synced or call a
replication action to sync it.
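One way to do the former, sketched with the stats API inside an ESIntegTestCase (index name illustrative):

```java
// wait until the global checkpoint has caught up with the max seq_no
// on every shard copy before flushing and restarting the cluster
assertBusy(() -> {
    for (ShardStats shardStats : client().admin().indices()
            .prepareStats("test").get().getShards()) {
        SeqNoStats seqNoStats = shardStats.getSeqNoStats();
        assertThat(seqNoStats.getGlobalCheckpoint(), equalTo(seqNoStats.getMaxSeqNo()));
    }
});
```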
Closes #46712
This fails at a very low rate and, with the force merge in place
before checking the cache size, it's not clear why the cache
is not of size `0` -> it seems something else that is unexpected
must be happening here.
-> add debug logging to this test to find out
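The sort of change this amounts to (a sketch; the exact logger package, and whether the annotation also takes a reason on this branch, may differ):

```java
@TestLogging("org.elasticsearch.index.cache:DEBUG")
public void testCacheSize() throws Exception {
    // ... existing test body, now with cache-level debug output on failure
}
```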
Relates #46701
* addSnapshotLifecyclePolicy drop version assertion
This drops the assertion on the policy version (which was pinned to 1L)
as we want to execute both put policy apis (sync and async) for
documentation purposes. This will sometimes (depending on the async
call) yield a version of 2L. Waiting for the async call to always
complete could be an option but the test is already rather slow and it's
a bit of overkill as we're already verifying that the policy was created.
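The relaxed check then only needs to confirm the policy exists rather than pin its exact version (a sketch; the accessor name is illustrative):

```java
// sync + async put may bump the version to 2L, so avoid pinning it to 1L
assertThat(policyMetadata.getVersion(), greaterThanOrEqualTo(1L));
```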
(cherry picked from commit af4864c39129bcdbf98d00223f445346a62075e4)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
We were repeatedly trying to send shard state updates for aborted
snapshots on every cluster state update.
This is simply dead code since those updates are already safely
sent in the callbacks passed to `SnapshotShardsService#snapshot`.
On master failover, we ensure that the status update is resent
via `SnapshotShardsService#syncShardStatsOnNewMaster`.
=> there is no need to try to send updates here over and over,
and this logic can safely be removed
This assertion was added during the development of required
pipelines. In the initial version of that work, the notion of whether or
not a request was forwarded from the coordinating node to an ingest node
was introduced. It was realized later that instead we needed to track
whether or not the pipeline for the request was resolved. When that
change was made, this assertion, while not incorrect, was left behind
and only applied if the coordinating node was forwarding the
request. Instead, the assertion should apply whether or not the request is
being forwarded. This commit addresses that by moving the assertion and
renaming some variables.
GoogleCloudStorageBlobStore.deleteBlobsIgnoringIfNotExists() does
not correctly catch StorageException thrown by batch.submit().
In the case where a snapshot is deleted through BlobStoreRepository.deleteSnapshot(),
a storage exception is not caught (only IOExceptions are), so the deletion is
interrupted and indices cannot be cleaned up. The storage exception bubbles
up to SnapshotService.deleteSnapshotFromRepository() but the listener that
removes the deletion from the cluster state is not executed, leaving the
deletion in the cluster state.
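The shape of the fix is to translate the StorageException where the batch is submitted (a simplified sketch of deleteBlobsIgnoringIfNotExists(); the surrounding class provides storage and bucketName):

```java
void deleteBlobsIgnoringIfNotExists(Collection<String> blobNames) throws IOException {
    final StorageBatch batch = storage.batch();
    blobNames.forEach(name -> batch.delete(BlobId.of(bucketName, name)));
    try {
        batch.submit();
    } catch (final StorageException e) {
        // surface batch failures as IOException so callers such as
        // BlobStoreRepository, which only handle IOException, still run
        // their cleanup listeners
        throw new IOException("Exception when deleting blobs [" + blobNames + "]", e);
    }
}
```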
This bug has been reported in #46772 where batch.submit() threw an
exception in the test testIndicesDeletedFromRepository and following
tests failed because a snapshot deletion was running.
Relates #46772
This commit adds the ability to require an ingest pipeline on an
index. Today we can have a default pipeline, but that could be
overridden by a request pipeline parameter. This commit introduces a new
index setting index.required_pipeline that acts similarly to
index.default_pipeline, except that it can not be overridden by a
request pipeline parameter. Additionally, a default pipeline and a
required pipeline can not both be set. The required pipeline can be set
to _none to ensure that no pipeline ever runs for index requests on that
index.
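A sketch of how this could be used from the high-level REST client (index and pipeline names are illustrative, and client is an existing RestHighLevelClient):

```java
// every index request to this index must run through "audit-pipeline";
// a ?pipeline= request parameter can no longer override it
CreateIndexRequest request = new CreateIndexRequest("logs")
    .settings(Settings.builder()
        .put("index.required_pipeline", "audit-pipeline")
        .build());
client.indices().create(request, RequestOptions.DEFAULT);
```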
This test is broken with a very low failure rate after recent changes,
particularly after #45689, which does not check for duplicate snapshot
uuids during snapshot finalization any more.
The check for duplicate uuids during finalization was removed
consciously since it led to problems during master failover.
This test fails because it increments the repository state id in an
unexpected manner now, starting from the impossible situation of having
the same snapshot UUID for two different repository state ids.
This situation can't normally be reached, but we manually crafted it here.
This test didn't do anything before though, because the manually crafted
cluster state would simply result in an error during finalization, and
nothing but a normal snapshot delete would be tested.
=> removing this test here, it doesn't test anything.
Closes #46843
When using auto-generated IDs + the ingest drop processor (which looks to be used by filebeat
as well) + coordinating nodes that do not have the ingest processor functionality, this can lead
to a NullPointerException.
The issue is that markCurrentItemAsDropped() is creating an UpdateResponse with no id when
the request contains auto-generated IDs. The response serialization is lenient for our
REST/XContent format (i.e. we will send "id" : null) but the internal transport format (used for
communication between nodes) assumes this field to be non-null, which means that it can't
be serialized between nodes. Bulk requests with ingest functionality are processed on the
coordinating node if the node has the ingest capability, and only sent to a different
node otherwise. This means that, in order to reproduce this, one needs two nodes, with the
coordinating node not having the ingest functionality.
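The fix amounts to ensuring the synthetic response id is non-null (a sketch; DROPPED_ITEM_ID is a hypothetical placeholder constant):

```java
// in markCurrentItemAsDropped(): with auto-generated IDs the IndexRequest
// has no id yet, so the UpdateResponse must not be built with a null id
// or it cannot be serialized over the internal transport format
String responseId = indexRequest.id() == null ? DROPPED_ITEM_ID : indexRequest.id();
```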
Closes #46678
We should only snapshot the index we're going to
restore in the next step. Otherwise, we will
potentially not get the correct response, or
fail the restore outright, due to internal
indices getting created concurrently when
running against the x-pack distribution.
Closes #46844
Follow up from #42346. Since the `methods` array is in order of
preference, we prefer to use the `PUT` HTTP method when calling the
index API with an `{id}`.
This test randomly uses a relatively large number of nodes, and hence
Netty worker threads, which created a problem with running out of
direct memory on CI.
Tests run with 512M heap (and hence 512M direct memory) by default.
On a CI worker with 16 cores, this means Netty will by default set
up 32 transport workers. If we get unlucky and a lot of them
actually do work (and thus instantiate a `CopyBytesSocketChannel`
which costs 1M per thread for the thread-local IO buffer) we
would run out of memory.
This specific failure was only seen with `NativeRealmIntegTests` so I
only added the constraint on the Netty worker count here.
We can add it to other tests (or `SecurityIntegTestCase`) if need be
but for now it doesn't seem necessary so I opted for least impact.
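The constraint amounts to something like this in the test class (a sketch; the exact worker count chosen may differ):

```java
@Override
protected Settings nodeSettings(int nodeOrdinal) {
    return Settings.builder()
        .put(super.nodeSettings(nodeOrdinal))
        // cap Netty transport workers: each active worker may allocate a
        // ~1M thread-local IO buffer, which adds up on a 16-core CI worker
        .put("transport.netty.worker_count", 2)
        .build();
}
```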
Closes #46803
In #46250 we will have to move to two different delete
paths for BwC. Both paths will share the initial
reads from the repository though so I extracted/separated the
delete logic away from the initial reads to significantly simplify
the diff against #46250 here.
Also, I added some JavaDoc from #46250 here as well which makes the
code a little easier to follow even ignoring #46250 I think.
* [DOCS] Adds regression analytics resources and examples to the data frame analytics APIs.
Co-Authored-By: Benjamin Trent <ben.w.trent@gmail.com>
Co-Authored-By: Tom Veasey <tveasey@users.noreply.github.com>
This commit disables caching of BWC snapshot distributions in the "trunk" (aka master) branch.
Since the previous major release branches move quickly we rarely get cache hits for these
tasks, and the artifacts themselves are very large. This means the overhead here is high and
savings basically zero. We conditionally disable task output caching in this scenario in CI to
avoid excessive build cache overhead as well as causing too much churn in the cache itself,
which would lead to lots of cache entry evictions.
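Conditionally disabling task output caching can be done along these lines (a sketch; the task selection and CI detection here are illustrative, not the exact build logic):

```groovy
tasks.matching { it.name.contains('BwcDistribution') }.configureEach {
    outputs.cacheIf('BWC snapshot artifacts are huge and rarely produce cache hits') {
        // only cache outside CI, where the overhead outweighs the savings
        System.getenv('CI') == null
    }
}
```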