Given that we check the max buckets limit on each shard when collecting the buckets, and that non-final reduction cannot add buckets (see #35921), there is no point in counting and checking the number of buckets as part of non-final reduction phases.
Such a check is still needed in the final reduction phase to make sure that the number of returned buckets does not exceed the allowed threshold.
Relates to #32125, as we will make use of non-final reduction phases in the CCS alternate execution mode, which increases the chance that this check trips unnecessarily when reducing aggs in each remote cluster.
When building a query, Lucene distinguishes two cases: queries that need to produce a score and queries that only need to match. We cloned this mechanism in the QueryBuilders in order to produce different queries based on whether they need to produce a score or not. However, the only case in Elasticsearch that requires this distinction is the BoolQueryBuilder, which sets a different minimum_should_match when a `bool` query is built in a filter context.
This behavior doesn't seem right because it makes the matching of `should` clauses differ when no score is required.
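For illustration only, a minimal Lucene-level sketch (the class name, field names, and values are made up) of how `minimumNumberShouldMatch` controls whether `should` clauses take part in matching, independently of scoring:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class MinimumShouldMatchSketch {
    public static void main(String[] args) {
        BooleanQuery.Builder builder = new BooleanQuery.Builder()
            .add(new TermQuery(new Term("status", "published")), Occur.FILTER)
            .add(new TermQuery(new Term("tag", "a")), Occur.SHOULD)
            .add(new TermQuery(new Term("tag", "b")), Occur.SHOULD);
        // With minimumNumberShouldMatch = 0 a document matching only the filter
        // clause matches; with 1 it must also match at least one should clause.
        builder.setMinimumNumberShouldMatch(1);
        System.out.println(builder.build());
    }
}
```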
Closes #35293
* `canOpenIndex` is only used in tests -> moved to the test classpath
* Simplify `org.elasticsearch.index.store.Store#renameTempFilesSafe`
* Delete some dead methods
Closes #36073
The problem showed up on Debian 8, which uses the aufs Docker storage
driver by default, as opposed to overlay2 used on other distros.
aufs does not support ACLs, hence the failure.
The --use-ntvfs option instructs Samba not to rely on ACLs.
From what I can tell this is an implementation detail that should not
affect the tests (which continue to pass).
Currently, if `java` is not in $PATH, the preinst script fails
prematurely and prevents an appropriate message from being displayed
to the user.
Make package installation more user-friendly when `java` is not in
$PATH, and add a test for it.
Also use a shebang in the preinst script, as, at least in Debian,
maintainer scripts must start with the #! convention [1].
Relates #31845
[1] https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html
* Replace Streamable with Writeable in BaseTasksRequest and subclasses
This commit replaces usages of Streamable with Writeable for the
BaseTasksRequest / TransportTasksAction classes and subclasses of
these classes.
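As a hedged illustration of the pattern (`ExampleTasksRequest` is a hypothetical class, not one of the migrated ones): a `Writeable` reads its state in a constructor taking a `StreamInput`, instead of mutating an empty instance via `Streamable#readFrom`:

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

public class ExampleTasksRequest implements Writeable {
    private final String description;

    public ExampleTasksRequest(String description) {
        this.description = description;
    }

    // Deserialization happens at construction time, so fields can be final.
    public ExampleTasksRequest(StreamInput in) throws IOException {
        this.description = in.readString();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(description);
    }
}
```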
Relates to #34389
Introduces a debug log message when a bind fails and a trace message
when a bind succeeds.
It may seem strange to log a bind failure only at debug level, but failures of this
nature are relatively common in some realm configurations (e.g. LDAP
realm with multiple user templates, or additional realms configured
after an LDAP realm).
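A minimal sketch of the level choice described above (the class, method, and messages are illustrative, not the actual realm code):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class BindLoggingSketch {
    private static final Logger logger = LogManager.getLogger(BindLoggingSketch.class);

    // Bind failures are expected in some realm configurations, so they are
    // logged at debug rather than warn/error; successes are logged at trace.
    void onBindResult(String bindDn, Exception failure) {
        if (failure == null) {
            logger.trace("successfully bound as [{}]", bindDn);
        } else {
            logger.debug("failed to bind as [" + bindDn + "]", failure);
        }
    }
}
```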
This fixes a failure of InternalTestClusterTests#testBeforeTest, which checks
that the cluster is set up the same way when starting from the same seed. Trappily,
using ESTestCase#randomIntBetween() is no good; we have to use
InternalTestCluster#random via RandomNumbers#randomIntBetween() instead.
In #36033 we removed a catch block because we thought we were preventing
exceptions by avoiding concurrent elections, missing the obvious fact that some
joins are supposed to fail.
As a quick fix, the catch was reinstated in 3a5dab6d8e,
but this change adds finesse by only catching exceptions from the joins that we
expect to fail. It also inlines an always-false parameter to `initialState()`.
Today, we allow all nodes in an integration test to bootstrap. However, this
seems to lead to test failures due to post-election instability. The change
avoids this instability by only bootstrapping a single node in the cluster.
The logic in the dockerComposeSupported method currently returns false
even when Docker and Docker Compose are available on the build machine.
This change updates the check to see whether Docker Compose is available in
one of the two paths, and allows the `tests.fixture.enabled` property to
disable the tests even if Docker Compose is available.
Adds about a minute's worth of backoffs and retries to saving task
results so it is *much* more likely that a busy cluster won't lose task
results. This isn't an ideal solution to losing task results, but it is
an incremental improvement. If all of the retries fail we still log
the task result, but that is far from ideal.
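A minimal, self-contained sketch of the retry-with-backoff idea (delays, attempt count, and names are illustrative, not the actual implementation):

```java
import java.util.Iterator;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class BackoffSketch {
    // Exponentially growing delays, capped so the total is roughly a minute.
    static Iterator<Long> backoffDelaysMillis() {
        return Stream.iterate(500L, d -> Math.min(d * 2, 15_000L)).limit(8).iterator();
    }

    static <T> T storeWithRetry(Supplier<T> store) throws InterruptedException {
        Iterator<Long> delays = backoffDelaysMillis();
        while (true) {
            try {
                return store.get();
            } catch (RuntimeException e) {
                if (delays.hasNext() == false) {
                    // out of retries; the caller falls back to logging the task result
                    throw e;
                }
                TimeUnit.MILLISECONDS.sleep(delays.next());
            }
        }
    }
}
```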
Closes #33764
The new limit on the number of open shards in a cluster may be
interpreted by users as a sizing recommendation, but it is not. This
clarifies in the documentation that this is a safety limit, not a
recommendation.
Empty buckets don't need to be added when performing an incremental reduction step; they can be added later in the final reduction step. This will allow us to later remove the max buckets limit when performing non-final reduction.
This commit documents how Index Lifecycle Management
interacts with snapshot/restore, and documents a workaround
for situations in which ILM should not immediately resume
managing an index after it is restored.
This is a follow-up to #35144. That commit made the underlying
connection opening process in TcpTransport asynchronous. However, the
method still blocked on the process being complete before returning.
This commit moves the blocking to the ConnectionManager level. This is
another step towards the top-level TransportService API being async.
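A sketch of the pattern under the assumption of a hypothetical `openConnectionAsync` API: the lower layer is async, and a caller that still needs a synchronous result blocks on a future one level up:

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.PlainActionFuture;

public class BlockingAdapterSketch {
    interface AsyncOpener {
        void openConnectionAsync(String node, ActionListener<Void> listener);
    }

    // PlainActionFuture is itself an ActionListener, so it can be handed to the
    // async API and then blocked on until the connection attempt completes.
    static void openConnectionBlocking(AsyncOpener opener, String node) {
        PlainActionFuture<Void> future = PlainActionFuture.newFuture();
        opener.openConnectionAsync(node, future);
        future.actionGet();
    }
}
```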
This change adds the support for rest_total_hits_as_int
in the watcher search inputs. Setting this parameter in the request
will transform the search response to contain the total hits as
a number (instead of an object).
Note that this parameter is currently a no-op since #35849 is not
merged.
Closes #36008
This commit removes the `MockTcpTransport` as a transport option for
`ESIntegTestCase`. It is the first step in replacing the usages of
`MockTcpTransport` with `MockNioTransport`.
Prior to #35441, `ConnectionManager` had a `Lifecycle` object to support
the ping runnable. After that commit, the connection manager only needs
the existing `AtomicBoolean` to indicate whether it is running.
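A sketch of the resulting shape (not the actual `ConnectionManager` code): a single `AtomicBoolean` is enough to reject work after close and to make close idempotent:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RunningFlagSketch {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    public void connectToNode(String node) {
        if (closed.get()) {
            throw new IllegalStateException("connection manager is closed");
        }
        // ... open and register the connection ...
    }

    public void close() {
        // compareAndSet guarantees the shutdown work runs exactly once
        if (closed.compareAndSet(false, true)) {
            // ... close all registered connections ...
        }
    }
}
```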
This commit upgrades Netty. This closes #35360. Netty started
throwing an IllegalArgumentException if a CompositeByteBuf is
created with < 2 components. Netty4Utils was updated to reflect this
change.
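An illustrative sketch (not the actual Netty4Utils code) of the kind of guard such a change calls for: only build a composite buffer when there are at least two components:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public class CompositeBufferSketch {
    static ByteBuf toByteBuf(ByteBuf... buffers) {
        if (buffers.length == 0) {
            return Unpooled.EMPTY_BUFFER;
        } else if (buffers.length == 1) {
            // avoid building a CompositeByteBuf around a single component
            return buffers[0];
        } else {
            CompositeByteBuf composite = Unpooled.compositeBuffer(buffers.length);
            composite.addComponents(true, buffers); // true: advance the writer index
            return composite;
        }
    }
}
```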
Now that we test with min_doc_count set to 0 as well, we may end up generating a lot more buckets. This commit adjusts the min bound and max bound, as well as the offset for each randomly generated agg instance, so that we don't end up hitting the 10,000 max buckets limit.
Relates to #36064
In this test we were randomizing different values but minDocCount was hardcoded to 1. It's important to test other values, especially `0` as it's the default. The test needed some adaptation to the way buckets are randomly generated: all aggs need to share the same interval, minDocCount and emptyBucketInfo. Also, assertions need to take into account that more (or fewer) buckets are expected depending on minDocCount.
Some tests kill nodes, and otherwise it would take 60s by default
for replicas to get allocated, which is longer than we wait
to reach a green state in tests.
Relates to #35403
CompositeBytesReference#slice has two bugs:
- One that makes it fail if the reference is empty and an empty slice is
created; this is #35950 and is fixed by special-casing empty slices.
- One performance bug that makes it always create a composite slice when
creating a slice that ends on a boundary; this is fixed by computing `limit`
as the index of the sub-reference that holds the last element of the slice rather than
the element after it (see the sketch below).
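A simplified sketch of the boundary computation (not the actual CompositeBytesReference code; here `offsets[i]` is the start offset of sub-reference `i` and the slice covers `[from, from + length)`):

```java
public class SliceBoundsSketch {
    // Returns {startIndex, limitIndex}: the sub-references spanned by the slice.
    static int[] sliceBounds(int[] offsets, int from, int length) {
        if (length == 0) {
            // special case: an empty slice must not touch the offsets at all
            return new int[] { 0, -1 };
        }
        int start = indexOfSubReferenceContaining(offsets, from);
        // Compute the limit from the LAST byte of the slice, not the byte after
        // it, so a slice ending exactly on a boundary stays within one component.
        int limit = indexOfSubReferenceContaining(offsets, from + length - 1);
        return new int[] { start, limit };
    }

    // Greatest i such that offsets[i] <= pos (linear scan for clarity).
    static int indexOfSubReferenceContaining(int[] offsets, int pos) {
        int i = 0;
        while (i + 1 < offsets.length && offsets[i + 1] <= pos) {
            i++;
        }
        return i;
    }
}
```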
Closes #35950
The `xContentType` was incorrectly serialized:
`xContentType.mediaTypeWithoutParameters()` was used to serialize, but
`xContentType.mediaType()` was used to deserialize.
Also, the serialization test class did not test `ContextSetup` well;
this was due to a limitation in the serialization test base class. Changed the
test class to manually test xcontent serialization, so that the `ContextSetup`
is properly tested for both binary and xcontent serialization.
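A minimal sketch of keeping the two sides symmetric (not the actual ContextSetup code; writing the enum value itself, assuming `StreamOutput#writeEnum` / `StreamInput#readEnum`, is one simple way to rule out a representation mismatch):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentType;

public class XContentTypeWireSketch {
    static void writeXContentType(StreamOutput out, XContentType type) throws IOException {
        out.writeEnum(type);
    }

    static XContentType readXContentType(StreamInput in) throws IOException {
        // reads back the exact representation that was written
        return in.readEnum(XContentType.class);
    }
}
```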
Closes #36050
This commit removes padding from the java version in the output of
compiler/runtime/gradle java versions. This value is only ever 1 or 2
digits, and the later information (hotspot/jvm vendor info) is separated
by the java home path from the other versions, so there isn't a visual
reason for needing exact vertical alignment.