This commit takes into account the index.number_of_replicas value (which
defaults to 0, i.e. no replicas) when setting an index template. This change
enables the index.wait_for_active_shards value to be interpreted correctly.
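For illustration, a minimal hand-written sketch (not code from the diff; assumes Elasticsearch's `Settings` builder):

```java
import org.elasticsearch.common.settings.Settings;

// wait_for_active_shards counts shard copies (primary + replicas), so its
// interpretation depends on index.number_of_replicas actually being applied.
Settings templateSettings = Settings.builder()
    .put("index.number_of_replicas", 0)          // the default assumed by this change
    .put("index.wait_for_active_shards", "all")  // with 0 replicas, "all" is just the primary
    .build();
```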
(cherry picked from commit 07026ac3d56dc9fae69467adfda7eaed7ea3ca00)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Co-authored-by: tninokehoe <62655306+tninokehoe@users.noreply.github.com>
This commit improves the behavior of aborting snapshots and thereby fixes
some extremely rare test failures.
Improvements:
1. When aborting a snapshot while it is in the `INIT` stage, we never need
to delete anything from the repository, because nothing is written to the
repo during `INIT` any more (in the past, running deletes for these snapshots made
sense because we wrote `snap-` and `meta-` blobs during the `INIT` step).
2. Do not try to finalize snapshots that never moved past `INIT`, for the same reason as
in the first step: if we never moved past `INIT`, no data was written to the repo,
so there is no need to write a useless entry for the aborted snapshot to `index-N`.
This is especially true since the reason the snapshot was aborted during `INIT` was
a delete call, so the useless empty snapshot added to `index-N` would be removed
by the subsequent delete that is still waiting anyway.
3. If, after aborting a snapshot, we wait for it to finish, we should not try deleting it
if it failed. If the snapshot failed, it did not become part of the most recent
`RepositoryData`, so a delete for it would needlessly fail with a confusing message about
that snapshot being missing or a concurrent repository modification. I moved to throwing the
snapshot-missing exception here because that seems the most user friendly: it allows the user
to simply ignore `404` responses from the delete API when using it to make sure a snapshot is
aborted and deleted.
Marking this as a non-issue since it doesn't have any negative repercussions other than confusing exceptions on some snapshot aborts.
Closes #52843
We try to rewrite time zones to fixed offsets in the date histogram aggregation
if the data in the shard is within a single transition.
However, this optimization was not applied to time zones that don't observe
daylight saving changes but had some occasional transitions in the past
(e.g. Australia/Brisbane or Asia/Katmandu).
This change fixes the rewrite of such time zones to fixed offsets.
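As a standalone `java.time` illustration of the condition such a rewrite relies on (hand-written, not the aggregation code itself):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.zone.ZoneOffsetTransition;

// Australia/Brisbane observes no DST today but has historical transitions.
ZoneId zone = ZoneId.of("Australia/Brisbane");
Instant min = Instant.parse("2020-01-01T00:00:00Z"); // shard's smallest date
Instant max = Instant.parse("2020-12-31T00:00:00Z"); // shard's largest date

ZoneOffsetTransition next = zone.getRules().nextTransition(min);
if (next == null || next.getInstant().isAfter(max)) {
    // No transition inside [min, max]: safe to rewrite to a fixed offset.
    ZoneOffset fixed = zone.getRules().getOffset(min);
}
```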
Backport of #53982
In order to prepare the `AliasOrIndex` abstraction for the introduction of data streams,
the abstraction needs to be made more flexible, because currently it can really
only be an alias or an index.
* Renamed `AliasOrIndex` to `IndexAbstraction`.
* Introduced an `IndexAbstraction.Type` enum to indicate what an `IndexAbstraction` instance is.
* Replaced the `isAlias()` method that returns a boolean with the `getType()` method that returns the new Type enum.
* Moved `getWriteIndex()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface.
* Moved `getAliasName()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface and renamed it to `getName()`.
* Removed unnecessary casting to `IndexAbstraction.Alias` by just checking the `getType()` method.
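Putting the bullet points together, a simplified sketch of the resulting shape (member signatures are assumptions, not the exact production code):

```java
public interface IndexAbstraction {

    enum Type { CONCRETE_INDEX, ALIAS } // data streams to be added later

    Type getType();                // replaces the old boolean isAlias()
    String getName();              // was getAliasName() on the Alias subclass
    IndexMetaData getWriteIndex(); // moved up from IndexAbstraction.Alias
}
```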
Relates to #53100
This setting is not documented and has dubious value since it means
there can be nodes in the cluster (non-data and non-master nodes) that
do not have persistent node IDs. This does not have any use cases so
this commit removes the setting.
This commit ensures that node roles are sorted by node role name, which
makes the output easier to consume, and also makes it easier to rely on
the behavior of the output in assertions.
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.
This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of bytes through the pipeline in tests to ensure that the
logic works.
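A minimal, self-contained sketch of the idea (a hypothetical class with a simple length-prefixed framing, not the actual server code): arbitrary chunks of bytes go in, only complete messages come out.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.function.Consumer;

class InboundPipeline {
    private final ByteArrayOutputStream buffered = new ByteArrayOutputStream();

    // Messages are framed as a 4-byte big-endian length followed by the payload.
    void handleBytes(byte[] chunk, Consumer<byte[]> messageHandler) {
        buffered.write(chunk, 0, chunk.length);
        byte[] pending = buffered.toByteArray();
        int offset = 0;
        while (pending.length - offset >= Integer.BYTES) {
            int length = ByteBuffer.wrap(pending, offset, Integer.BYTES).getInt();
            if (pending.length - offset - Integer.BYTES < length) {
                break; // incomplete message, wait for more bytes
            }
            byte[] message = new byte[length];
            System.arraycopy(pending, offset + Integer.BYTES, message, 0, length);
            messageHandler.accept(message); // decompression would hook in here
            offset += Integer.BYTES + length;
        }
        buffered.reset();
        buffered.write(pending, offset, pending.length - offset); // keep the remainder
    }
}
```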
This work will enable future work:
* Circuit breaking or backoff logic based on message type and bytes in the
content aggregator.
* Sharing bytes with the application layer using the ref-counted releasable
network bytes.
* Improved network monitoring based specifically on channels.
Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
Elasticsearch has a number of different BytesReference implementations.
These implementations can all implement the interface in different ways
with subtly different behavior and performance characteristics. On the
other hand, the JVM only represents bytes as an array or a direct byte
buffer. This commit deletes the specialized Netty implementations and
moves to using a generic ByteBuffer reference type. This will allow us
to focus on standardizing performance and behavior around a smaller number
of implementations that can be used by all components in Elasticsearch.
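A hedged sketch of the direction (the factory method name is an assumption, not confirmed from this diff):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Framework buffers are first viewed as the JVM's own ByteBuffer, then wrapped
// in the one generic reference type instead of a Netty-specific subclass.
ByteBuffer nioView = ByteBuffer.wrap("payload".getBytes(StandardCharsets.UTF_8));
BytesReference ref = BytesReference.fromByteBuffer(nioView); // assumed factory name
```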
* Add REST APIs for IndexTemplateV2Metadata CRUD (#54039)
* Add REST APIs for IndexTemplateV2Metadata CRUD
This commit adds the get/put/delete APIs for interacting with the now v2 versions of index
templates.
These APIs are behind the existing `es.itv2_feature_flag_registered` system property feature flag.
Relates to #53101
* Add exceptions for HLRC tests
* Add skips for 7.x versions
* Use index_template instead of template_v2 in action names
* Add test for MetaDataIndexTemplateService.addIndexTemplateV2
* Move removal to static method and add test
* Add unit tests for request classes (implement hashCode & equals)
* Fix compilation
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
We have a number of places where we want to read a fairly complex object from
XContent, but aren't interested in its contents; for example, mappings are often
serialized and deserialized between several objects before they are actually built
into a MappingMetaData object. This means that potentially large maps of maps
are constructed several times, only to immediately be re-serialized again.
This commit adds a new helper method to XContentHelper that reads the children
of an XContent object directly to a BytesReference, serialized via the same XContentType
as the parent parser, avoiding the construction of intermediary maps or lists.
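A hedged usage sketch (the helper name `childBytes` is an assumption; the parser setup uses the standard XContent APIs):

```java
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.DeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

String mappingJson = "{\"properties\":{\"field\":{\"type\":\"keyword\"}}}";
try (XContentParser parser = XContentType.JSON.xContent().createParser(
        NamedXContentRegistry.EMPTY, DeprecationHandler.THROW_UNSUPPORTED_OPERATION, mappingJson)) {
    parser.nextToken(); // position on START_OBJECT
    // Copy the object's children straight to bytes in the parser's own
    // XContentType instead of materializing a Map<String, Object> first.
    BytesReference raw = XContentHelper.childBytes(parser); // assumed helper name
}
```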
* Fix Snapshot Completion Listener Lost on Master Failover
If the master fails over (or we run into any other exception) when removing
the snapshot from the cluster state, we must still resolve all the completion
listeners for the snapshot.
This commit causes negative TimeValues, other than -1, which is sometimes used as
a sentinel value, to be rejected during parsing.
Also introduces a hack to allow ILM to load policies which were written to the
cluster state with a negative min_age, treating those values as 0, which should
match the behavior of prior versions.
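A sketch of the resulting behavior, using the existing `parseTimeValue` helper (the exact exception type is an assumption):

```java
import org.elasticsearch.common.unit.TimeValue;

// -1 remains a valid sentinel value...
TimeValue ok = TimeValue.parseTimeValue("-1", "my_setting");

// ...but any other negative duration is now rejected during parsing,
// presumably with an IllegalArgumentException.
TimeValue bad = TimeValue.parseTimeValue("-5s", "my_setting");
```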
The NodesStatsRequest class uses a set of strings for its internal
serialization. This commit updates the class's interface so that we
no longer use hard-coded getters and setters, but rather
methods that add strings directly. For example, the old way of
adding "os" metrics to a request would be to call request.os(true).
The new way of doing this is to call request.addMetric("os").
For the time being, the canonical list of metrics is an enum in
NodesStatsRequest. This will eventually be replaced with something
pluggable.
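A before/after sketch, reconstructed from the description above (not verbatim from the diff):

```java
import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;

// Old style: one dedicated getter/setter pair per metric.
NodesStatsRequest oldStyle = new NodesStatsRequest();
oldStyle.os(true);

// New style: metrics are added as strings, validated against the enum.
NodesStatsRequest newStyle = new NodesStatsRequest();
newStyle.addMetric("os");
```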
Currently we set the defaults for ccsMinimizeRoundtrips, preFilterShardSize and
requestCache on the HLRC SubmitAsyncSearchRequest in the constructor. This is no
longer needed since we now only send the supported parameters along with the
REST request (omitting e.g. ccsMinimizeRoundtrips) and the correct
defaults are set on the client side. This change removes setting and sending
these defaults where possible, leaving only the overwrite of batchedReduceSize
with a default value of 5, since the default used in the vanilla SearchRequest
is 512. However, we don't need to send this value along as a request parameter
if it's the default, since the correct one will be set on the receiving end if no
value is specified.
Also adding tests for RestSubmitAsyncSearchAction that check the correct
defaults are set when parameters are missing on the server side.
Backport of #54200
These mock calls cause Eclipse to think that `Exception` can be thrown
because `CheckedFunction`'s lower bound is `Exception`. This makes
Eclipse happy.
Make it possible to reuse the cluster state update of rollover for
simulation purposes by extracting it. Also, the full rollover is now run in
both the pre-rollover phase and the actual rollover phase, allowing a
dedicated exception in case of concurrent rollovers as well as a more
thorough pre-check.
We have some very occasional failures in SearchAfterIT, where a search throws
an exception because a shard does not have the mapping for the requested sort
field. The field should have been added in a dynamic mapping update after an
indexing event, but it seems that there can sometimes be a small delay in
propagating this update to the shards.
This commit changes the test to explicitly define the relevant field at index creation
time.
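A hedged sketch of that kind of fix, in integration-test style (index and field names are hypothetical):

```java
// Define the sort field up front so no dynamic mapping update can race the search.
client().admin().indices().prepareCreate("test")
        .addMapping("_doc", "timestamp", "type=date") // hypothetical sort field
        .get();
```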
Fixes #51900
The remove keystore command can handle multiple settings. In a few
places, we were not consistent about mentioning this. This commit
addresses this in the CLI help and the docs.
This commit begins the work of removing the "hard-coded" metric getters
and setters from the NodesInfoRequest classes. We start by providing new
flexible getters and setters. We then update the test classes to remove
the old getters, and then remove those getters.
Changes ThreadPool's schedule method to run the scheduled task in the context of the thread
that scheduled the task.
This is the more sensible default for this method, and eliminates a range of bugs where the
current thread context is mistakenly dropped.
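A hedged sketch of the new contract, using the real ThreadContext/ThreadPool APIs in simplified form (`threadPool` is assumed to be an existing instance):

```java
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.threadpool.ThreadPool;

// A header set on the scheduling thread's context...
ThreadContext ctx = threadPool.getThreadContext();
ctx.putHeader("trace-id", "abc123");

// ...is now visible when the scheduled task runs, instead of being dropped.
threadPool.schedule(
        () -> System.out.println(ctx.getHeader("trace-id")), // prints "abc123"
        TimeValue.timeValueSeconds(1),
        ThreadPool.Names.GENERIC);
```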
Closes #17143
Today the keystore add-file command can only handle adding a single
setting/file pair in a single invocation. This incurs the startup costs
of the JVM many times, which in some environments can be expensive. This
commit teaches the add-file keystore command to accept adding multiple
settings in a single invocation.
Today the keystore add command can only handle adding a single
setting/value pair in a single invocation. This incurs the startup costs
of the JVM many times, which in some environments can be expensive. This
commit teaches the add keystore command to accept adding multiple
settings in a single invocation.
This drops the "top level" pipeline aggregators from the aggregation
result tree, which should save a little memory and a few serialization
bytes. Perhaps more importantly, this provides a mechanism by which we
can remove *all* pipelines from the aggregation result tree. This will
save quite a bit of space when pipelines are deep in the tree.
Sadly, doing this isn't simple because of backwards compatibility. Nodes
before 7.7.0 *need* those pipelines. We provide them by passing
a `Supplier<PipelineTree>` into the root of the aggregation tree that we
only call if we need to serialize to a version before 7.7.0.
This solution works for cross cluster search because we always reduce
the aggregations in each remote cluster and then forward them back to
the coordinating node. It's quite possible that the coordinating node
needs the pipeline (say it is version 7.1.0) and the gateway node in the
remote cluster doesn't (version 7.7.0). In that case the data nodes
won't send the pipeline aggregations back to the gateway node.
Critically, the gateway node *will* send the pipeline aggregations back
to the coordinating node. This is all managed with that
`Supplier<PipelineTree>`, but *how* it is managed is a bit tricky.
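A hedged sketch of that version gate (the write methods are hypothetical; the version check uses the real StreamOutput API):

```java
import java.io.IOException;
import java.util.function.Supplier;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamOutput;

void writeAggsTo(StreamOutput out, Supplier<PipelineTree> pipelines) throws IOException {
    if (out.getVersion().before(Version.V_7_7_0)) {
        // Old nodes need the pipelines embedded in the result tree, so only
        // here do we actually materialize the supplier.
        writeWithPipelines(out, pipelines.get()); // hypothetical method
    } else {
        writeWithoutPipelines(out); // hypothetical method
    }
}
```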
We can temporarily run into a state where there are no more events to wait for
but the cluster still isn't green. I added the wait-for-green flag to the request
so the assertion for green cluster health below doesn't fail.
Closes #53457
We are using this assertion for identical shard snapshots
for situations where `snapshot1` wasn't the first snapshot
for the tested shard. Hence, we can't assume that it will
not share any files with previous snapshots.
This showed up in failing tests when `snapshot1` was equivalent
to a previous snapshot because the test randomly chose not to delete any
documents from the repo, but even if documents are deleted
there is no guarantee that no files will be shared.
=> I removed this assertion since it's immaterial to what is tested
here anyway.
Closes #54034
This commit ensures that we rewrite the shard request with a short-lived can_match searcher.
This is required for frozen indices since the high level rewrite is now performed on a network thread where we don't want to perform I/O.
Closes #53985
Reindex would use timeValueNanos(System.nanoTime()). The intended use
for TimeValue is as a duration, not as an absolute time. In particular,
this could result in negative TimeValues, which are unsupported as of #53913.
Modified to use the bare long nanosecond value.
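An illustration of the change (reconstructed, not the exact diff):

```java
import org.elasticsearch.common.unit.TimeValue;

// Before: an absolute timestamp wrapped in a duration type. System.nanoTime()
// may be negative, producing a TimeValue that #53913 now rejects.
TimeValue start = TimeValue.timeValueNanos(System.nanoTime());

// After: keep the bare long and only build durations from differences.
long startNanos = System.nanoTime();
long tookNanos = System.nanoTime() - startNanos; // always a valid duration
```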
* Add validation for component templates (#54023)
* Add validation for component templates
This change adds validation to make sure that settings and mappings are correct in
component template. It's done the same way as in index templates - code is reused.
Relates to #53101
* Fix checkstyle violation
* Update server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
* Adjusted to 7.7
* Unused import fixed
* NPE fixed
* Change exception type
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
This commit ensures that we don't use the non-deterministic canReturnNullResponseIfMatchNoDocs
boolean in the cache key of the ShardSearchRequest. The value of this boolean has no influence
on the cacheability of the request.
Closes #32827
Today we log `failed to execute pipeline for a bulk request` at `ERROR` level
if an attempt to run an ingest pipeline fails. A failure here is commonly due
to an `EsRejectedExecutionException`. We also feed such failures back to the
client and record the rejection in the threadpool statistics.
In line with #51459 there is no need to log failures within actions so noisily
and with such urgency. It is better to leave it up to the client to react
accordingly. Typically an `EsRejectedExecutionException` should result in the
client backing off and retrying, so a failure here is not normally fatal enough
to justify an `ERROR` log at all.
This commit reduces the log level for this message to `DEBUG`.
Fixes an issue where the elasticsearch-node command-line tools would not work correctly
because PersistentTasksCustomMetaData contains named XContent from plugins. This PR
makes it so that the parsing for all custom metadata is skipped, even if the core system would
know how to handle it.
Closes #53549
This removes some more ceremony when declaring agg parsers. You no
longer need a static `parse` method, instead you can just make the
`PARSER` public in most cases.
There are still a few aggs with the `parse` method, but those `parse`
methods are a little more complex to untangle.
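A hedged before/after sketch (class and agg names hypothetical; `ObjectParser.fromBuilder` is the pattern many aggs use):

```java
import org.elasticsearch.common.xcontent.ObjectParser;

// After this change an agg can simply expose its parser:
public static final ObjectParser<MyAggBuilder, String> PARSER =
        ObjectParser.fromBuilder("my_agg", MyAggBuilder::new); // name -> builder

// The boilerplate static parse method this removes looked roughly like:
// public static MyAggBuilder parse(String name, XContentParser p) throws IOException {
//     return PARSER.parse(p, name);
// }
```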
The remote info API has recently added a number of possible fields
(proxy, num_socket_connections, etc.) that are available in proxy mode.
These fields are not aligned with the names of the corresponding settings. This
commit modifies the API to align it with the settings.