This commit upgrades Netty. This closes #35360. Netty started
throwing an IllegalArgumentException if a CompositeByteBuf is
created with < 2 components. Netty4Utils was updated to reflect this
change.
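A minimal sketch of the kind of guard needed, using a hypothetical helper rather than the actual Netty4Utils code:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

import java.util.List;

// Hypothetical helper, not the actual Netty4Utils code.
final class ByteBufs {

    /** Combine buffers, avoiding composites with fewer than two components. */
    static ByteBuf compose(List<ByteBuf> buffers) {
        if (buffers.isEmpty()) {
            return Unpooled.EMPTY_BUFFER;
        }
        if (buffers.size() == 1) {
            // newer Netty throws IllegalArgumentException for a composite
            // with < 2 components, so return the single buffer directly
            return buffers.get(0);
        }
        CompositeByteBuf composite = Unpooled.compositeBuffer(buffers.size());
        composite.addComponents(true, buffers.toArray(new ByteBuf[0]));
        return composite;
    }
}
```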
Now that we test with min_doc_count set to 0 as well, we may end up generating a lot more buckets. This commit adjusts the min bound, max bound, and offset for each randomly generated agg instance so that we don't end up hitting the 10,000 max buckets limit.
Relates to #36064
In this test we were randomizing different values but minDocCount was hardcoded to 1. It's important to test other values, especially `0` as it's the default. The test needed some adapting in the way buckets are randomly generated: all aggs need to share the same interval, minDocCount and emptyBucketInfo. Also assertions need to take into account that more (or less) buckets are expected depending on minDocCount.
Some tests kill nodes, and without this change it would take 60s by
default for replicas to be allocated, which is longer than we wait
to reach a green state in tests.
Relates to #35403
CompositeBytesReference#slice has two bugs:
- One correctness bug that makes it fail if the reference is empty and an
empty slice is created; this is #35950 and is fixed by special-casing empty slices.
- One performance bug that makes it always create a composite slice when
creating a slice that ends on a boundary; this is fixed by computing `limit`
as the index of the sub reference that holds the slice's last element rather
than the element after it.
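A standalone model of the fixed logic, assuming a list of byte[] segments and a precomputed offsets table rather than the real BytesReference types:

```java
import java.util.Arrays;

// Minimal standalone model of the fix; not the actual CompositeBytesReference code.
final class CompositeSliceDemo {
    private final byte[][] segments;
    private final int[] offsets; // offsets[i] = absolute start of segments[i]

    CompositeSliceDemo(byte[][] segments) {
        this.segments = segments;
        this.offsets = new int[segments.length];
        for (int i = 1; i < segments.length; i++) {
            offsets[i] = offsets[i - 1] + segments[i - 1].length;
        }
    }

    /** Index of the segment containing absolute position {@code pos}. */
    private int segmentIndex(int pos) {
        int i = Arrays.binarySearch(offsets, pos);
        return i < 0 ? -(i + 2) : i;
    }

    byte[][] slice(int from, int length) {
        if (length == 0) {
            return new byte[0][]; // special-case the empty slice (#35950)
        }
        int start = segmentIndex(from);
        // key fix: locate the segment holding the *last* byte of the slice
        // (from + length - 1), not the byte after it, so a slice ending
        // exactly on a segment boundary stays non-composite
        int limit = segmentIndex(from + length - 1);
        if (start == limit) {
            int off = from - offsets[start];
            return new byte[][] { Arrays.copyOfRange(segments[start], off, off + length) };
        }
        return Arrays.copyOfRange(segments, start, limit + 1); // composite path (bounds trimming omitted)
    }
}
```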
Closes #35950
The `xContentType` was incorrectly serialized:
`xContentType.mediaTypeWithoutParameters()` was used to serialize, but
`xContentType.mediaType()` was used to de-serialize.
Also, the serialization test class did not exercise `ContextSetup` well,
due to a limitation in the serialization test base class. The test class
was changed to test xcontent serialization manually, so that `ContextSetup`
is properly covered by both the binary and the xcontent serialization tests.
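One way to keep the two sides symmetric (a fragment sketch; the exact ContextSetup method shapes may differ):

```java
// write the parameter-free media type...
@Override
public void writeTo(StreamOutput out) throws IOException {
    out.writeString(xContentType.mediaTypeWithoutParameters());
}

// ...and parse the same representation on the way back in; mediaType()
// may carry parameters (e.g. a charset) that the written form drops
ContextSetup(StreamInput in) throws IOException {
    xContentType = XContentType.fromMediaTypeOrFormat(in.readString());
}
```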
Closes #36050
This commit removes padding from the java version in the output of
compiler/runtime/gradle java versions. This value is only ever 1 or 2
digits, and the later information (hotspot/jvm vendor info) is separated
by the java home path from the other versions, so there isn't a visual
reason for needing exact vertical alignment.
The NotificationService (base class for SlackService, HipchatService, ...) has both dynamic
cluster settings and SecureSettings, and builds the clients (Account) that are used to
communicate with external services. This commit fixes an important bug in updating/reloading
any of these settings (both secure and dynamic cluster). In short, the secure settings and
the dynamic node-scoped settings can be updated independently, so when the clients were
rebuilt after one update, some of the other settings might not have been visible yet.
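A standalone sketch of the reload pattern, with hypothetical types:

```java
import java.util.Map;
import java.util.function.BiFunction;

// Hypothetical holder: cache the latest view of each settings source and
// rebuild the client (Account) from the merged view under a single lock,
// so a rebuild never runs with a half-updated view.
final class AccountHolder<A> {
    private final BiFunction<Map<String, String>, Map<String, char[]>, A> builder;
    private Map<String, String> clusterSettings = Map.of();
    private Map<String, char[]> secureSettings = Map.of();
    private volatile A account;

    AccountHolder(BiFunction<Map<String, String>, Map<String, char[]>, A> builder) {
        this.builder = builder;
        this.account = builder.apply(clusterSettings, secureSettings);
    }

    synchronized void onClusterSettingsChanged(Map<String, String> updated) {
        clusterSettings = updated;
        account = builder.apply(clusterSettings, secureSettings);
    }

    synchronized void onSecureSettingsReloaded(Map<String, char[]> updated) {
        secureSettings = updated;
        account = builder.apply(clusterSettings, secureSettings);
    }

    A account() {
        return account; // always built from a consistent pair of snapshots
    }
}
```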
In the current implementation, there is a window between the
ShrinkStep and the ShrinkSetAliasStep during which both the source and
target indices are present with the same aliases. This means
that queries during this window will hit both indices and return
duplicates. This change fixes that scenario by moving the alias inheritance
into the same alias update request that ShrinkSetAliasStep already uses.
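For illustration, a fragment of what the combined request looks like, with hypothetical index and alias names (AliasActions is the real API); a single request is applied atomically in one cluster state update, so queries never see both indices at once:

```java
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;

// add the alias to the shrunken target and remove it from the source
// in the same request, so the swap happens in one cluster state update
IndicesAliasesRequest swap = new IndicesAliasesRequest()
        .addAliasAction(AliasActions.add().index("shrunken-logs").alias("logs"))
        .addAliasAction(AliasActions.remove().index("logs-original").alias("logs"));
```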
This commit improves the efficiency of exact index name matching by
separating exact matches from those that include wildcards or regular
expressions. Internally, exact matching is done using a HashSet instead
of adding the exact matches to the automata. For the wildcard and
regular expression matches, the underlying implementation has not
changed.
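A simplified sketch of the split, with a hypothetical matcher type:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical matcher: exact names go to a HashSet lookup; only wildcard
// and regex expressions reach the (unchanged) automaton.
final class IndexNameMatcher {
    private final Set<String> exactMatches = new HashSet<>();
    private final Set<String> patterns = new HashSet<>();

    void add(String expression) {
        if (expression.contains("*") || expression.startsWith("/")) {
            patterns.add(expression); // wildcard or regex: automaton as before
        } else {
            exactMatches.add(expression);
        }
    }

    boolean matches(String indexName) {
        // O(1) hash lookup first; fall back to the automaton only for patterns
        return exactMatches.contains(indexName) || matchesAutomaton(indexName);
    }

    private boolean matchesAutomaton(String indexName) {
        return false; // placeholder for the existing automaton-based matching
    }
}
```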
org.elasticsearch.rest.RestController#hasContentType checks to see if the
RestHandler supports the `application/x-ndjson` Content-Type. DeprecationRestHandler
is a wrapper around the real RestHandler, and prior to this change
would always return `false` due to the interface's default supportsContentStream().
This prevented APIs that use multi-line JSON from being deprecated properly,
resulting in an HTTP 406 error.
This change ensures that the DeprecationRestHandler honors the
supportsContentStream() of the wrapped RestHandler.
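A simplified fragment of the fix (constructor and request handling elided):

```java
public class DeprecationRestHandler implements RestHandler {
    private final RestHandler handler; // the wrapped, real handler

    // ... constructor and handleRequest delegation elided ...

    @Override
    public boolean supportsContentStream() {
        // previously inherited the interface default, which always returns
        // false; now honors the wrapped handler, so ndjson APIs keep working
        return handler.supportsContentStream();
    }
}
```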
Relates to #35958
The support for rest_total_hits_as_int has already been merged to 6.x
in #35848, so this change adds the new option to master. The plan was
to add it as part of #35848, but we've decided to wait a few
days before merging that breaking change, so this commit just handles
the new option as a noop, exactly like 6.x, for now. This will allow
users to migrate to this parameter before #35848 is merged.
Relates #33028
This is related to #34405 and a follow-up to #34753. It makes a number
of changes to our current keepalive pings.
The ping interval configuration is moved to the ConnectionProfile.
The server channel now responds to pings. This makes the keepalive
pings bidirectional.
On the client side, the pings can now be optimized away: if the channel
has received or sent a message since the last pinging round, no ping is
sent for this round.
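A standalone sketch of the client-side optimization, with hypothetical names:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical tracker: record the last read/write times and skip the ping
// when the channel was already active during the current round.
final class KeepAlive {
    private final AtomicLong lastReadNanos = new AtomicLong();
    private final AtomicLong lastWriteNanos = new AtomicLong();

    void onRead() { lastReadNanos.set(System.nanoTime()); }
    void onWrite() { lastWriteNanos.set(System.nanoTime()); }

    void pingRound(long roundStartNanos, Runnable sendPing) {
        long lastActivity = Math.max(lastReadNanos.get(), lastWriteNanos.get());
        if (lastActivity >= roundStartNanos) {
            return; // channel was active this round: ping optimized away
        }
        sendPing.run();
    }
}
```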
Sometimes the test fixtures plugin did not correctly disable tasks
from the build plugin as it should.
The plugin manager and tasks both use domain name collections, so
the previous code should have worked.
I did not have time to track it down, but I suspect there's some race
condition in Gradle causing this; the plugin manager is still incubating.
Since the tasks are on the classpath even if the plugin is not applied, we
don't really need to involve the plugin at all.
Closes #36041
Our `REPRODUCE WITH` line wasn't working for tests with `(` or `)` in
the method name. This isn't super common, but happens sometimes for our
yml based rest tests. This is because randomized runner's globbing
mechanism didn't escape `(` or `)`. I filed this upstream at
https://github.com/randomizedtesting/randomizedtesting/issues/271
and Dawid kindly fixed the issue. This upgrades to pick up the fix.
Closes #35692
The nested agg can defer the collection of children if it is nested
under another aggregation. In that case, accessing the score in the children
aggregation throws an error because the scorer has already advanced to the next
parent. This change fixes the error by caching the score of the parent in the
nested aggregation, so children aggregations that work on nested documents can
access the _score. Also note that the _score in this case is always the
parent's score; there is no way to retrieve the score of a nested doc in aggregations.
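A standalone sketch of the caching idea, with hypothetical names: the parent's score is read while the scorer is still positioned on the parent, and deferred children read the cached value later.

```java
// Hypothetical cache, not the actual NestedAggregator code.
final class CachedScorable {
    private float parentScore;

    /** Called by the nested aggregator before the scorer advances. */
    void cache(float score) {
        this.parentScore = score;
    }

    /** What deferred child aggregators see as _score. */
    float score() {
        return parentScore; // always the parent's score, never a nested doc's
    }
}
```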
Closes #35985
Closes #34555
* [Zen2] Implement Tombstone REST APIs
* Adds REST API for withdrawing votes and clearing vote withdrawals
* Tests added to Netty4 module since we need a real Network impl. for Http endpoints
When wiping rollup jobs, if we are in a mixed cluster with < v7.0 nodes
we need to fall back to the deprecated endpoint because we may talk
to a 6.x node.
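A sketch of the fallback, with hypothetical helper names; Version.V_7_0_0 is the real constant:

```java
// hypothetical helpers around the real Version constant
if (clusterMinimumVersion().before(Version.V_7_0_0)) {
    wipeRollupJobsWithDeprecatedEndpoint(); // a 6.x node may serve the call
} else {
    wipeRollupJobs();
}
```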
Today the default for USE_ZEN2 is false and it is overridden in many places. By
defaulting it to true we can be sure that the only places in which Zen2 does
not work are those in which it is explicitly set to false.
Today we sometimes create a setup in which the node is a quorum on its own,
which allows it to win a pre-voting round and schedule an election essentially
at will, causing it to discard all the joins it just received and fail the
test. This change excludes this case, preventing stray elections from ruining
things.
Today any node can win an election. However, the whole point of
master-eligibility is that master-ineligible nodes should not be elected as the
leader; furthermore master-ineligible nodes do not have any outgoing STATE
channels so cannot publish cluster states, so their leadership is ineffective
and disruptive.
This change ensures that the elected leader is master-eligible by preventing
master-ineligible nodes from scheduling an election.
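A sketch of the guard; the names around the real DiscoveryNode#isMasterNode check are hypothetical:

```java
// hypothetical method names around the real isMasterNode() check
private void startElectionScheduler() {
    if (getLocalNode().isMasterNode() == false) {
        // master-ineligible nodes never schedule an election,
        // so they can never win one
        return;
    }
    electionScheduler.scheduleNextElection();
}
```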
This commit changes the serialization version from V_7_0_0 to
V_6_6_0 for the authenticate API response, now that the work to add
the realm info in the response has been backported to 6.x in
b515ec7c9b9074dfa2f5fd28bac68fd8a482209e
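As a hedged illustration, the usual shape of such a version gate (field names hypothetical):

```java
// realm info is only understood from 6.6.0 onwards after the backport
if (out.getVersion().onOrAfter(Version.V_6_6_0)) {
    authenticationRealm.writeTo(out);
    lookupRealm.writeTo(out);
}
```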
Relates #35648
A number of tokenfilters can produce multiple tokens at the same position. This
is a problem when using token chains to parse synonym files, as the SynonymMap
requires that there are no stacked tokens in its input.
This commit ensures that when used to parse synonyms, these tokenfilters either
produce a single version of their input token or throw an error when the mappings
are generated. In indices created in Elasticsearch 6.x, deprecation warnings are
emitted in place of the error. The three behaviours are sketched after the list below.
* asciifolding and cjk_bigram produce only the folded or bigrammed token
* decompounders, synonyms and keyword_repeat are skipped
* n-grams, word-delimiter-filter, multiplexer, fingerprint and phonetic throw errors
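A simplified sketch of those three behaviours, using a hypothetical factory interface rather than the real one:

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;

// hypothetical factory interface standing in for the real one
interface SynonymAwareFactory {
    /** Variant used when parsing synonym files, where stacked tokens are illegal. */
    TokenStream createForSynonyms(TokenStream input);
}

// asciifolding / cjk_bigram: emit only the single folded or bigrammed token
final class FoldingFactory implements SynonymAwareFactory {
    @Override
    public TokenStream createForSynonyms(TokenStream input) {
        return new ASCIIFoldingFilter(input, false); // preserveOriginal = false
    }
}

// decompounders / synonyms / keyword_repeat: skipped entirely
final class SkippedFactory implements SynonymAwareFactory {
    @Override
    public TokenStream createForSynonyms(TokenStream input) {
        return input; // identity: contributes nothing to the chain
    }
}

// n-grams / word_delimiter / multiplexer / fingerprint / phonetic: hard error
final class MultiTokenFactory implements SynonymAwareFactory {
    @Override
    public TokenStream createForSynonyms(TokenStream input) {
        throw new IllegalArgumentException("this token filter cannot be used to parse synonyms");
    }
}
```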
Fixes #34298
Looks like some odd race condition causes failed builds by attempting to
run the task that should be disabled.
Disable the task explicitly until we figure it out.
This change adds an extra check that verifies that all primary shards
of an index that is about to be auto-followed have been started.
If not all primary shards have started, then the next auto-follow run
will try to follow this index again.
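A sketch of the check; IndexRoutingTable#allPrimaryShardsActive is a real method, the surrounding names are hypothetical:

```java
// hypothetical surrounding names; allPrimaryShardsActive() is real
IndexRoutingTable indexRoutingTable = leaderState.routingTable().index(leaderIndex);
if (indexRoutingTable.allPrimaryShardsActive() == false) {
    // not ready yet: skip, the next auto-follow run will retry this index
    return;
}
startFollowing(leaderIndex);
```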
Closes #35480
The ActiveShardCount is used by cluster state observers to wait for a
given number of shards to be active before returning to the caller. The
current implementation does not work when an index is closed while an
observer is waiting on shards to be active. In this case, an NPE is thrown
and the observer is never notified that the shards won't become active.
This commit fixes ActiveShardCount.enoughShardsActive() so that it
does not fail when an index is closed, similarly to what is done when an
index is deleted.
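A simplified sketch of the guard, assuming the surrounding per-index loop:

```java
// simplified: handle the closed case like the deleted case
IndexRoutingTable indexRoutingTable = clusterState.routingTable().index(indexName);
if (indexRoutingTable == null) {
    // the index was deleted, or closed (a closed index has no routing
    // table): its shards will never become active, so stop waiting on
    // them instead of dereferencing null and throwing an NPE
    continue;
}
```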