Pipeline aggregations like `stats_bucket`, `sum_bucket`, and
`percentiles_bucket` only operate on aggregations that produce multiple buckets.
This adds support for those aggregations to `geo_distance`, `ip_range`,
`auto_date_histogram`, and `rare_terms`.
This all happened because we used a marker interface,
`MultiBucketAggregationBuilder`, to mark compatible aggs, and it was
fairly easy to forget to implement the interface.
This replaces the marker interface with an abstract method in
`AggregationBuilder`, `bucketCardinality`, which makes you return `NONE`,
`ONE`, or `MANY`. The `bucket` aggregations can check for `MANY`. At
this point `ONE` and `NONE` amount to about the same thing, but I
suspect that'll be a useful distinction when validating bucket sorts.
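A minimal sketch of the new shape (illustrative only; the real `AggregationBuilder` carries far more state):

```java
// Illustrative sketch; the real AggregationBuilder has many more members.
public abstract class AggregationBuilder {

    public enum BucketCardinality { NONE, ONE, MANY }

    // Every concrete builder must declare how many buckets it produces,
    // so "forgetting to opt in" becomes a compile error instead of a
    // missing marker interface.
    public abstract BucketCardinality bucketCardinality();
}
```

A `_bucket` pipeline aggregation can then reject any parent whose `bucketCardinality()` is not `MANY`.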
Closes #53215
This commit takes the `index.number_of_replicas` value (which defaults to
0, i.e. no replicas) into account when setting an index template. This
change enables the `index.wait_for_active_shards` value to be interpreted
correctly.
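For illustration, a hedged sketch of why the two settings interact (standard index setting names; the template plumbing itself is omitted):

```java
import org.elasticsearch.common.settings.Settings;

public class TemplateSettingsSketch {
    public static void main(String[] args) {
        // With one replica, "all" active shards means the primary plus one
        // replica. If the template ignored index.number_of_replicas, the
        // wait_for_active_shards value would be checked against the wrong
        // expected shard count.
        Settings templateSettings = Settings.builder()
            .put("index.number_of_replicas", 1)
            .put("index.write.wait_for_active_shards", "all")
            .build();
        System.out.println(templateSettings);
    }
}
```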
(cherry picked from commit 07026ac3d56dc9fae69467adfda7eaed7ea3ca00)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Co-authored-by: tninokehoe <62655306+tninokehoe@users.noreply.github.com>
This commit improves the behavior of aborting snapshots, thereby fixing
some extremely rare test failures.
Improvements:
1. When aborting a snapshot while it is in the `INIT` stage, we never need
to delete anything from the repository because nothing is written to the
repo during `INIT` any more (in the past, running deletes for these snapshots made
sense because we were writing `snap-` and `meta-` blobs during the `INIT` step).
2. Do not try to finalize snapshots that never moved past `INIT`. Same reason as
with the first point: if we never moved past `INIT`, no data was written to the repo,
so there is no need to write a useless entry for the aborted snapshot to `index-N`.
This is especially true since the reason the snapshot was aborted during `INIT` was
a delete call, so the useless empty snapshot just added to `index-N` would be removed
by the subsequent delete that is still waiting anyway.
3. If we wait for an aborted snapshot to finish, we should not try deleting it
if it failed. If the snapshot failed, it did not become part of the most recent
`RepositoryData`, so a delete for it would needlessly fail with a confusing message
about that snapshot being missing or a concurrent repository modification. I moved
to throwing the snapshot-missing exception here because that seems the most user
friendly: it allows the user to simply ignore `404` responses from the delete API
when using it to make sure a snapshot is aborted and deleted.
Marking this as a non-issue since it doesn't have any negative repercussions other than confusing exceptions on some snapshot aborts.
Closes #52843
Fixes the naming of the HLRC values to match the ToXContent field names (i.e. the field names returned from an API call).
Also fixes the names in the _cat API.
Closes #53946
We try to rewrite time zones to fixed offsets in the date histogram aggregation
if the data in the shard falls within a single transition.
However, this optimization was not applied to time zones that don't observe daylight
saving changes but had some random transitions in the past (e.g. Australia/Brisbane or Asia/Katmandu).
This change fixes the rewrite of such time zones to fixed offsets.
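A hedged sketch of the idea using `java.time` (the actual rewrite in the aggregation code is more involved):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.zone.ZoneOffsetTransition;
import java.time.zone.ZoneRules;

class FixedOffsetRewriteSketch {
    // Returns a fixed offset if the zone has no transition inside the
    // shard's [minData, maxData] range, regardless of transitions that
    // happened long before the data; otherwise returns null.
    static ZoneOffset rewriteToFixedOffset(ZoneId zone, Instant minData, Instant maxData) {
        ZoneRules rules = zone.getRules();
        ZoneOffsetTransition next = rules.nextTransition(minData);
        if (next == null || next.getInstant().isAfter(maxData)) {
            return rules.getOffset(minData);
        }
        return null;
    }
}
```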
Backport of #53982
In order to prepare the `AliasOrIndex` abstraction for the introduction of data streams,
the abstraction needs to be made more flexible, because currently it really can be only
an alias or an index.
* Renamed `AliasOrIndex` to `IndexAbstraction`.
* Introduced an `IndexAbstraction.Type` enum to indicate what an `IndexAbstraction` instance is.
* Replaced the `isAlias()` method that returns a boolean with the `getType()` method that returns the new Type enum.
* Moved `getWriteIndex()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface.
* Moved `getAliasName()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface and renamed it to `getName()`.
* Removed unnecessary casting to `IndexAbstraction.Alias` by just checking the `getType()` method.
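Roughly, the refactored abstraction looks like this (simplified sketch; member names approximate):

```java
import org.elasticsearch.cluster.metadata.IndexMetaData;

// Simplified sketch; the real interface has more members and javadoc.
public interface IndexAbstraction {

    enum Type { CONCRETE_INDEX, ALIAS }

    Type getType();

    // Previously Alias#getAliasName(); now on the common interface.
    String getName();

    // Previously only available on Alias.
    IndexMetaData getWriteIndex();

    interface Alias extends IndexAbstraction {}
}
```

Callers switch on `getType()` instead of using `instanceof` checks and casts.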
Relates to #53100
This setting is not documented and has dubious value, since it means
there can be nodes in the cluster (non-data and non-master nodes) that
do not have persistent node IDs. There are no use cases for this, so
this commit removes the setting.
This commit ensures that node roles are sorted by node role name, which
makes the output easier to consume, and also makes it easier to rely on
the behavior of the output in assertions.
Today the security plugin stashes a copy of the environment in its
constructor, and uses the stashed copy to construct its components even
though it is provided with an environment to create these
components. What is more, the environment it creates in its constructor
is not fully initialized, as it does not have the final copy of the
settings, but the environment passed in while creating components
does. This commit removes that stashed copy of the environment.
Today the machine learning plugin stashes a copy of the environment in
its constructor, and uses the stashed copy to construct its components
even though it is provided with an environment to create these
components. What is more, the environment it creates in its constructor
is not fully initialized, as it does not have the final copy of the
settings, but the environment passed in while creating components
does. This commit removes that stashed copy of the environment.
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.
This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of data through the pipeline in tests to ensure that the
logic works.
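A self-contained sketch of the shape of such a pipeline (assuming a simple 4-byte length prefix; the real pipeline also handles headers and decompression):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: a transport-agnostic pipeline that accepts arbitrary
// byte chunks and emits complete length-prefixed messages.
public class InboundPipelineSketch {

    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();
    private final Consumer<byte[]> messageHandler;

    public InboundPipelineSketch(Consumer<byte[]> messageHandler) {
        this.messageHandler = messageHandler;
    }

    // Can be called with any slice of the stream: half a message,
    // exactly one message, or many megabytes spanning many messages.
    public void handleBytes(byte[] chunk) {
        pending.write(chunk, 0, chunk.length);
        for (byte[] message : drainCompleteMessages()) {
            messageHandler.accept(message);
        }
    }

    private List<byte[]> drainCompleteMessages() {
        List<byte[]> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(pending.toByteArray());
        while (buf.remaining() >= Integer.BYTES) {
            buf.mark();
            int length = buf.getInt();   // 4-byte length prefix
            if (buf.remaining() < length) {
                buf.reset();             // incomplete message, wait for more bytes
                break;
            }
            byte[] message = new byte[length];
            buf.get(message);
            messages.add(message);
        }
        pending.reset();                 // keep only the unconsumed tail
        pending.write(buf.array(), buf.position(), buf.remaining());
        return messages;
    }
}
```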
This work will enable future work:
* Circuit breaking or backoff logic based on message type and bytes in the content aggregator.
* Sharing bytes with the application layer using the ref-counted releasable network bytes.
* Improved network monitoring based specifically on channels.
Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
Elasticsearch has a number of different BytesReference implementations.
These implementations can all implement the interface in different ways
with subtly different behavior and performance characteristics. On the
other hand, the JVM only represents bytes as an array or a direct byte
buffer. This commit deletes the specialized Netty implementations and
moves to using a generic ByteBuffer reference type. This will allow us
to focus on standardizing performance and behavior around a smaller number
of implementations that can be used by all components in Elasticsearch.
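A hedged sketch of the direction (the actual `BytesReference` interface in Elasticsearch is considerably richer):

```java
import java.nio.ByteBuffer;

// Illustrative sketch: one reference type backed by a ByteBuffer, matching
// how the JVM itself represents bytes (heap array or direct buffer).
public final class ByteBufferReferenceSketch {

    private final ByteBuffer buffer; // works for both heap and direct buffers

    public ByteBufferReferenceSketch(ByteBuffer buffer) {
        this.buffer = buffer.asReadOnlyBuffer();
    }

    public byte get(int index) {
        return buffer.get(index);
    }

    public int length() {
        return buffer.remaining();
    }

    public ByteBufferReferenceSketch slice(int from, int length) {
        ByteBuffer dup = buffer.duplicate();
        dup.position(buffer.position() + from);
        dup.limit(dup.position() + length);
        return new ByteBufferReferenceSketch(dup);
    }
}
```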
* Add REST APIs for IndexTemplateV2Metadata CRUD (#54039)
This commit adds the get/put/delete APIs for interacting with the new v2 index
templates (a usage sketch follows below).
These APIs are behind the existing `es.itv2_feature_flag_registered` system property feature flag.
Relates to #53101
* Add exceptions for HLRC tests
* Add skips for 7.x versions
* Use index_template instead of template_v2 in action names
* Add test for MetaDataIndexTemplateService.addIndexTemplateV2
* Move removal to static method and add test
* Add unit tests for request classes (implement hashCode & equals)
* Fix compilation
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
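A hedged usage sketch with the low-level REST client; the `_index_template` endpoint path and request body are assumptions based on this description:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class IndexTemplateV2Sketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Assumed endpoint: PUT a v2 index template.
            Request put = new Request("PUT", "/_index_template/my-template");
            put.setJsonEntity("{\"index_patterns\":[\"logs-*\"],"
                + "\"template\":{\"settings\":{\"number_of_shards\":1}}}");
            client.performRequest(put);

            // Assumed endpoints: fetch and delete the same template.
            client.performRequest(new Request("GET", "/_index_template/my-template"));
            client.performRequest(new Request("DELETE", "/_index_template/my-template"));
        }
    }
}
```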
Drop a nasty regex in our checkstyle config that I wrote a long time ago
in favor of a checkstyle extension. This is better because:
* It is faster. It saves a little more than a minute across the entire
build.
* It is easier to read. Who knew 100 lines of Java would be easier to
read than a regex, but it is.
* It has tests.
The powershell call operator (&) seems to wait on any child processes.
In the case of Gradle build invocation, the daemon causes the script
execution to hang until the daemon terminates (or 5 minutes elapses?).
We have a number of places where we want to read a fairly complex object from
XContent, but aren't interested in its contents; for example, mappings are often
serialized and deserialized between several objects before they are actually built
into a MappingMetaData object. This means that potentially large maps of maps
are constructed several times, only to immediately be re-serialized again.
This commit adds a new helper method to XContentHelper that reads the children
of an XContent object directly to a BytesReference, serialized via the same
XContentType as the parent parser, avoiding the construction of intermediary maps or lists.
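Conceptually the helper does something like this (hedged sketch; the real method name and signature may differ):

```java
import java.io.IOException;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;

class XContentChildBytesSketch {
    // Copies the object the parser is positioned on straight into a
    // BytesReference of the same XContent type, skipping the intermediate
    // Map<String, Object> representation entirely.
    static BytesReference childBytes(XContentParser parser) throws IOException {
        XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());
        builder.copyCurrentStructure(parser);
        return BytesReference.bytes(builder);
    }
}
```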
We are not using the Apache HTTP client backed http transport
with the GCS repo. Same as with the app engine type transport,
we can save ourselves the dependency on the http client here
and ignore the missing classes.
* Fix Snapshot Completion Listener Lost on Master Failover
If the master fails over (or we run into any other exception) while removing
the snapshot from the cluster state, we must still resolve all the completion
listeners for the snapshot.
Backport of #54276.
Move and rename formatter config file, so that it is easier for
Eclipse users to import.
Also switch to an opt-out list for formatting: instead of explicitly
listing projects that should be formatted, list projects
that should not be formatted. This means that any new projects will
automatically be formatted and checked.
The failing suggester documentation test was expecting specific scores in the
test response. These are fragile implementation details that can change, e.g. with
different Lucene versions, and generally shouldn't be asserted on in documentation tests.
Instead we usually replace the float values in the expected output with the ones
from the actual response.
Closes #54257
Today we assign CCR persistent tasks to nodes with the data role. It
could be that the data node is not capable of connecting to remote
clusters, in which case the task will fail since it can not connect to
the remote cluster with the leader shard. Instead, we need to assign
such tasks to nodes that are capable of connecting to remote
clusters. This commit addresses this by enabling such persistent tasks
to only be assigned to nodes that have the data role, and also have the
remote cluster client role.
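A hedged sketch of the eligibility check (role constants assumed; the real logic lives in the CCR persistent task assignment code):

```java
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodeRole;

class CcrAssignmentSketch {
    // Only nodes that hold data AND can act as remote cluster clients
    // are eligible to run the CCR persistent task.
    static boolean canAssign(DiscoveryNode node) {
        return node.getRoles().contains(DiscoveryNodeRole.DATA_ROLE)
            && node.getRoles().contains(DiscoveryNodeRole.REMOTE_CLUSTER_CLIENT_ROLE);
    }
}
```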
Optimize transforms that `group_by` on a `date_histogram` by injecting an additional range query. This limits the number of search and index requests and avoids unnecessary updates; only recent buckets get rewritten.
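Schematically, the injected filter might look like this (hedged sketch; field name and checkpoint plumbing are illustrative):

```java
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class TransformRangeSketch {
    // Restrict the search to buckets that could have changed since the
    // last checkpoint, aligned down to the date_histogram interval so
    // that only whole recent buckets are recomputed.
    static QueryBuilder withChangeFilter(QueryBuilder original, String timeField,
                                         long lastCheckpointMillis, long intervalMillis) {
        long alignedLowerBound = (lastCheckpointMillis / intervalMillis) * intervalMillis;
        return new BoolQueryBuilder()
            .filter(original)
            .filter(QueryBuilders.rangeQuery(timeField)
                .gte(alignedLowerBound)
                .format("epoch_millis"));
    }
}
```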
Fixes #54254
This commit causes negative TimeValues to be rejected during parsing,
except for -1, which is sometimes used as a sentinel value.
Also introduces a hack to allow ILM to load policies which were written to the
cluster state with a negative min_age, treating those values as 0, which should
match the behavior of prior versions.
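The rules, roughly (a sketch, not the actual `TimeValue` code):

```java
class NegativeTimeValueSketch {
    // Negative durations are rejected at parse time, with -1 still
    // accepted as a sentinel value.
    static long validateMillis(long millis, String settingName) {
        if (millis < -1) {
            throw new IllegalArgumentException(
                "failed to parse [" + settingName + "]: negative durations are not supported");
        }
        return millis;
    }

    // ILM compatibility hack: a previously-persisted negative min_age
    // read back from the cluster state is treated as 0.
    static long readMinAgeMillis(long storedMillis) {
        return Math.max(storedMillis, 0L);
    }
}
```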
The NodesStatsRequest class uses a set of strings for its internal
serialization. This commit updates the class's interface so that we
no longer use hard-coded getters and setters, but rather
methods that add strings directly. For example, the old way of
adding "os" metrics to a request would be to call `request.os(true)`.
The new way of doing this is to call `request.addMetric("os")`.
For the time being, the canonical list of metrics is an enum in
NodesStatsRequest. This will eventually be replaced with something
pluggable.
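The usage change in a nutshell (sketch):

```java
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;

public class NodesStatsSketch {
    public static void main(String[] args) {
        NodesStatsRequest request = new NodesStatsRequest();

        // Before: one hard-coded setter per metric, e.g. request.os(true).
        // After: metrics are added by name from the canonical list.
        request.addMetric("os");
        request.addMetric("jvm");
    }
}
```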