This PR takes a different approach to solving #24806 than the one currently implemented via #25052. The `refreshMetric` that `IndexShard` maintains is updated using Lucene's refresh listeners infrastructure. This means that we truly count all refreshes that Lucene makes, without having to worry about each individual caller (like `IndexShard#refresh` and `Engine#get()`).
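A minimal sketch of the idea, not the actual `IndexShard` wiring: a Lucene `ReferenceManager.RefreshListener` can observe every refresh centrally. The class name and the use of `MeanMetric` here are illustrative.

```java
import org.apache.lucene.search.ReferenceManager;
import org.elasticsearch.common.metrics.MeanMetric;

// Hypothetical listener: counts and times every refresh Lucene performs,
// regardless of which caller triggered it.
class RefreshMetricListener implements ReferenceManager.RefreshListener {
    private final MeanMetric refreshMetric = new MeanMetric();
    private volatile long refreshStartNanos;

    @Override
    public void beforeRefresh() {
        refreshStartNanos = System.nanoTime();
    }

    @Override
    public void afterRefresh(boolean didRefresh) {
        if (didRefresh) { // only count refreshes that actually happened
            refreshMetric.inc(System.nanoTime() - refreshStartNanos);
        }
    }
}
```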
parent/child: Allow updating mapping without specifying `_parent` field on each update.
Prior to this change, when a mapping had a `_parent` field, any update to the mapping (even updates that didn't modify the `_parent` field) required specifying the `_parent` field again. With this change, specifying the `_parent` field on each mapping update is no longer required.
Closes #23381
We have a callback interface that is not needed because it is
effectively the same as java.util.function.Consumer. This commit removes
it.
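For illustration, the before/after shape of such a change (the `Callback` interface here is a stand-in, not the removed class):

```java
import java.util.function.Consumer;

// Before: a bespoke single-method callback interface (stand-in name).
interface Callback<T> {
    void handle(T response);
}

class Notifier {
    // Old style: callers implement the bespoke interface.
    static void notifyOld(Callback<String> callback) {
        callback.handle("done");
    }

    // New style: java.util.function.Consumer expresses the same contract,
    // so the bespoke interface can simply be deleted.
    static void notifyNew(Consumer<String> consumer) {
        consumer.accept("done");
    }
}
```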
Relates #25089
This change moves the parent_id query to the parent-join module and handles the case where only the parent-join field can be declared on an index (an index with single type on).
If single type is off it uses the legacy parent join field mapper, and otherwise switches to the new one (the default in 6).
Relates #20257
This commit fixes the group methods of Settings to properly include
grouped secure settings. Previously the secure settings were included
but without the group prefix being removed.
Closes #25069
The index parameter in the update-aliases, put-alias, and delete-alias APIs no longer accepts alias names. Instead, it accepts only index names (or wildcards which will expand to matching indices).
Closes#23960
This commit fixes a bug in retrieving a sub Settings object for a given
prefix with secure settings. Before this commit the returned Settings
would be filtered by the prefix, but the found setting names would not
have the prefix removed.
This is the first step towards adaptive replica selection (#24915). This PR
tracks the execution time, also known as the "service time", of a task in the
threadpool. The `QueueResizingEsThreadPoolExecutor` then stores a moving average
of these task times which can be retrieved from the executor.
Currently there is no functionality using the EWMA[1] yet (other than tests); this
is only a bite-sized building block so that it's easier to review.
[1]: EWMA = Exponentially Weighted Moving Average
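A minimal sketch of the EWMA bookkeeping described above; the class shape and the choice of `alpha` are illustrative, not the PR's actual code:

```java
// EWMA_t = alpha * sample + (1 - alpha) * EWMA_{t-1}
// Higher alpha weights recent task times more heavily.
class Ewma {
    private final double alpha;
    private double average;

    Ewma(double alpha, double initialAverage) {
        this.alpha = alpha; // e.g. 0.3 (illustrative)
        this.average = initialAverage;
    }

    synchronized void addSample(double taskNanos) {
        average = alpha * taskNanos + (1 - alpha) * average;
    }

    synchronized double getAverage() {
        return average;
    }
}
```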
REST handlers that require a body will throw an ElasticsearchParseException "request body required".
REST handlers that require a body OR source param will throw an ElasticsearchParseException "request body or source param required".
Replaced asserts in BulkRequest parsing code with a more descriptive IllegalArgumentException if the line contains an empty object.
Updated bulk REST test to verify an empty action line is rejected properly.
Updated BulkRequestTests with randomized testing for an empty action line.
Used try-with-resources for XContentParser in AbstractBulkByQueryRestHandler.
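For reference, a minimal sketch of the try-with-resources pattern mentioned above (the helper class and method are illustrative):

```java
import java.io.IOException;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;

class ParserExample {
    // try-with-resources guarantees the parser is closed even if an
    // exception is thrown mid-parse.
    static void consumeBody(String body) throws IOException {
        try (XContentParser parser =
                 JsonXContent.jsonXContent.createParser(NamedXContentRegistry.EMPTY, body)) {
            parser.nextToken(); // ... walk the tokens ...
        }
    }
}
```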
This commit creates TemplateScript and associated classes so that
templates no longer need a special ScriptService.compileTemplate method.
The execute() method is equivalent to the old run() method.
relates #20426
Previously, when allocating bytes for a BigArray, the array was created
(or creation was attempted) and only then was the amount of RAM used
checked to see if the circuit breaker should trip. This is problematic
because, for very large arrays, creating or resizing the array can run
into an OOM error before the circuit breaker ever trips, since the
allocation happens before the check with the circuit breaker.
This commit ensures that the circuit breaker is checked before all big
array allocations (note, this does not affect array allocations smaller
than 16kb, which use the [Type]ArrayWrapper classes found in
BigArrays.java). If an allocation or resizing would cause the circuit
breaker to trip, the breaker now trips before attempting to allocate,
rather than potentially running into an OOM error from the JVM.
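A sketch of the ordering this commit enforces, using the real `CircuitBreaker#addEstimateBytesAndMaybeBreak` API; the surrounding allocator class is illustrative:

```java
import org.elasticsearch.common.breaker.CircuitBreaker;

class CheckedAllocator {
    private final CircuitBreaker breaker;

    CheckedAllocator(CircuitBreaker breaker) {
        this.breaker = breaker;
    }

    long[] newLongArray(int size) {
        long bytes = (long) size * Long.BYTES;
        // Reserve the bytes first: this throws CircuitBreakingException
        // before any memory is allocated, so the JVM can no longer OOM
        // on the allocation itself.
        breaker.addEstimateBytesAndMaybeBreak(bytes, "long_array");
        return new long[size];
    }
}
```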
Closes #24790
In #24605, logic was implemented to ensure that completed snapshots were
properly removed from the cluster state upon a change in master nodes.
This commit removes redundant logic that also attempted to clean up
completed snapshots from the cluster state on master election, but only
covered a limited case that was remedied in #24605.
This commit also adds a test to ensure that completed snapshots are
cleaned up at the right moment when a master election happens before a
snapshot is finalized, as well as a check to handle the case where the
old master and new master could attempt to finalize the snapshot and
write the same blob to the repository simultaneously.
This removes the `accumulateExceptions()` method (and its usage) from `TransportNodesAction` and `TransportTasksAction`, forcing both transport actions to always accumulate exceptions.
Without this change, some transport actions, like `TransportNodesStatsAction`, would respond in very unexpected ways: when a failure occurred, instead of returning an error the response would simply be empty, with no response and no error.
This results in a very trappy response structure where users can check for an error, then attempt to blindly use the response when no error is returned.
This commit reduces the number of buckets that are generated for multi
bucket aggregations in AggregationsTests and SearchResponseTests.
The number of buckets is now limited to a maximum of 3, whereas before some
aggregations could generate up to 10 buckets.
Some response classes in the Java API expose both `getTook()`, which returns a `TimeValue`, and `getTookInMillis()`, which returns a `long`. `getTook()` is enough, as one can call `getTook().millis()` to obtain the same result, so `getTookInMillis()` can be removed.
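The equivalence, in an illustrative snippet (any response type exposing `getTook()`):

```java
import org.elasticsearch.common.unit.TimeValue;

class TookExample {
    static long tookInMillis(TimeValue took) {
        return took.millis(); // replaces the removed getTookInMillis()
    }
}
```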
* Adds nodes usage API to monitor usages of actions
The nodes usage API has two main endpoints: `/_nodes/usage` and
`/_nodes/{nodeIds}/usage`, which return the usage statistics for all
nodes and for the specified node(s) respectively.
At the moment only one type of usage statistics is available: REST
actions usage. This records the number of times each REST action class is
called; when the nodes usage API is queried, it returns a map from REST
action class name to the number of times each of those classes has been
called.
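As an illustrative sketch, the endpoint could be queried with the low-level REST client (host and port are placeholders):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class NodesUsageExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            // GET /_nodes/usage returns per-node REST action usage counts
            Response response = client.performRequest("GET", "/_nodes/usage");
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```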
Still to do:
* [x] Create usage service to store usage statistics
* [x] Record usage in REST layer
* [x] Add Transport Actions
* [x] Add REST Actions
* [x] Tests
* [x] Documentation
* Refactors UsageService so counts are done by the handlers
* Fixing up docs tests
* Adds a name to all rest actions
* Addresses review comments
This commit adds a new bg_count field to the REST response of
SignificantTerms aggregations. Similarly to the bg_count that already
exists in significant terms buckets, this new bg_count field is set at
the aggregation level and is populated with the superset size value.
This commit adds an optional `context` url parameter to the put stored
script request. When a context is specified, the script is compiled
against that context before it is stored, as a validation that the
script will work when used in that context.
Today there is a lot of code duplication and inconsistent error handling
in the two different scroll modes. It's not yet clear whether we will keep
both of them, but this simplification will help further refactor this code,
for instance to add cross-cluster search capabilities.
This refactoring also fixes bugs where shards failed because a node dropped out of the cluster between scroll requests, or due to failures during the fetch phase of the scroll. Both places were simply ignoring the failure and logging it at debug level, which can cause issues like #16555.
This commit provides the TransportRequest that caused the retrieval of a search context to the
SearchOperationListener#validateSearchContext method so that implementers have access to the
request.
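For example, an implementer might now use the request in its validation; this listener is illustrative:

```java
import org.elasticsearch.index.shard.SearchOperationListener;
import org.elasticsearch.search.internal.SearchContext;
import org.elasticsearch.transport.TransportRequest;

// Hypothetical listener: validation can now consider the transport
// request that caused the search context to be retrieved.
class AuditingSearchListener implements SearchOperationListener {
    @Override
    public void validateSearchContext(SearchContext context, TransportRequest transportRequest) {
        if (transportRequest == null) {
            throw new IllegalStateException("expected the originating transport request");
        }
        // ... e.g. audit or reject based on the request ...
    }
}
```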
By default, the remove plugin CLI command preserves configuration
files. This is so that if a user is upgrading the plugin (which is done
by first removing the old version and then installing the new version)
they do not lose their configuration file. Yet, there are circumstances
where preserving the configuration file is not desired. This commit adds
a purge option to the remove plugin CLI command.
Relates #24981
Currently, the decisions regarding which translog generation files to delete are hard-coded in the interaction between the `InternalEngine` and the `Translog` classes. This PR extracts them into a dedicated class called `TranslogDeletionPolicy` (sketched after this list), for two main reasons:
1) Simplicity - the code is easier to read and understand (no more two-phase commit on the translog; the Engine can just commit and the translog will respond)
2) Preparing for future plans to extend the logic we need - i.e., retaining multiple Lucene commits and also introducing size-based retention logic, allowing people to always keep a certain amount of translog files around. The latter is useful to increase the chance of an ops-based recovery.
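A minimal sketch of what such a policy looks like, under the assumption that the policy's core job is to answer one question, the minimum generation to retain; the class and method names are illustrative:

```java
// Illustrative deletion policy: generations below the returned value
// are safe to delete.
class TranslogDeletionPolicySketch {
    private long minGenerationForRecovery = 0;

    // Called when a Lucene commit pins the translog generation it needs.
    synchronized void setMinTranslogGenerationForRecovery(long generation) {
        minGenerationForRecovery = Math.max(minGenerationForRecovery, generation);
    }

    // Future extensions (multiple retained commits, size-based retention)
    // would adjust the value returned here.
    synchronized long minTranslogGenerationRequired() {
        return minGenerationForRecovery;
    }
}
```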
This is related to #24927. There was a small possibility that a test
was attempting to compress a stream with zero bytes. This was causing
a failure.
This test now requires at least one byte.
This is a follow-up to #23941. Currently there are a number of
complexities related to compression. The raw DeflaterOutputStream must
be closed prior to sending bytes to ensure that EOS bytes are written.
But the underlying ReleasableBytesStreamOutput cannot be closed until
the bytes are sent to ensure that the bytes are not reused.
Right now we have three different stream references hanging around in
TCPTransport to handle this complexity. This commit introduces
CompressibleBytesOutputStream to be one stream implementation that will
behave properly with or without compression enabled.
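The underlying constraint, in a minimal sketch (the stream names are illustrative): the deflater must be finished, not closed, so its end-of-stream bytes are written while the underlying buffer stays alive:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

class CompressionSketch {
    static byte[] compress(byte[] payload) throws IOException {
        ByteArrayOutputStream underlying = new ByteArrayOutputStream();
        DeflaterOutputStream deflater = new DeflaterOutputStream(underlying);
        deflater.write(payload);
        // finish() writes the EOS bytes but leaves `underlying` open;
        // close() would close the underlying stream before the bytes
        // could be sent.
        deflater.finish();
        return underlying.toByteArray();
    }
}
```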
The took time computed for search requests does not take into account the expand search phase.
This change delays the computation to after the expand phase finishes.
Relates #24900
ScriptContexts currently understand a FactoryType that can produce
instances of the script InstanceType. However, for search scripts, this
does not work as we have the concept of LeafSearchScript that is created
per lucene segment. This commit effectively renames the existing
SearchScript class to SearchScript.LeafFactory, a new, optional class
that can be defined within a ScriptContext.
LeafSearchScript is effectively renamed back into SearchScript. This
change allows the model of stateless factory -> stateful factory ->
script instance to continue, but in a generic way that any script
context may take advantage of.
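The resulting shape, sketched with simplified signatures (the nested names mirror the commit; the method signatures are illustrative):

```java
import java.io.IOException;
import java.util.Map;
import org.apache.lucene.index.LeafReaderContext;

// stateless factory -> stateful per-segment factory -> script instance
interface SearchScriptSketch {
    double execute();

    // Stateful "leaf" factory, bound to one Lucene segment.
    interface LeafFactory {
        SearchScriptSketch newInstance(LeafReaderContext ctx) throws IOException;
    }

    // Stateless factory, produced once per compiled script.
    interface Factory {
        LeafFactory newFactory(Map<String, Object> params);
    }
}
```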
relates #20426
In previous work, we refactored the delay mechanism in index shard
operation permits to allow for async delaying of acquisition. This
refactoring made explicit when permit acquisition is disabled whereas
previously we were relying on an implicit condition, namely that all
permits were acquired by the thread trying to delay acquisition. When
using the implicit mechanism, we tried to acquire a permit and if this
failed, we returned a null releasable as an indication that our
operation should be queued. Yet, now we know when we are delayed and we
should not even try to acquire a permit. If we try to acquire a permit
and one is not available, we know that we are not delayed, and so
acquisition should be successful. If it is not successful, something is
deeply wrong. This commit takes advantage of this refactoring to
simplify the internal implementation.
Relates #24971
When a primary is promoted, it could have gaps in its history due to
concurrency and in-flight operations when it was serving as a
replica. This commit fills the gaps in the history of the promoted shard
after all operations from the previous term have drained, and future
operations are blocked. This commit does not handle replicating the
no-ops that fill the gaps to any remaining replicas, that is the
responsibility of the primary/replica sync that we are laying the
groundwork for.
Relates #24945
`terms` aggregations at the root level use the `global_ordinals` execution hint by default.
When all sub-aggregators can be run in `breadth_first` mode the collected buckets for these sub-aggs are dense (remapped after the initial pruning).
But if a sub-aggregator is not deferrable and needs to collect all buckets before pruning we don't remap global ords and the aggregator needs to deal with sparse buckets.
Most (if not all) aggregators expect dense buckets and use this information to allocate memory.
This change forces the remap of the global ordinals but only when there is at least one sub-aggregator that cannot be deferred.
Relates #24788
This commit introduces a clean transition from the old primary term to
the new primary term when a replica is promoted primary. To accomplish
this, we delay all operations before incrementing the primary term. The
delay is guaranteed to be in place before we increment the term, and
then all operations that are delayed are executed after the delay is
removed which asynchronously happens on another thread. This thread does
not progress until in-flight operations that were executing are
completed, and after these operations drain, the delayed operations
re-acquire permits and are executed.
Relates #24925
Drops `TokenizerFactory#name`, replacing it with
`CustomAnalyzer#getTokenizerName` which is much better targeted at
its single use case inside the analysis API.
Drops a test that I would have had to refactor which is duplicated by
`AnalysisModuleTests`.
To keep this change from blowing up in size I've left two mostly
mechanical changes to be done in followups:
1. `TokenizerFactory` can now be entirely dropped and replaced with
`Supplier<Tokenizer>` (see the sketch after this list).
2. `AbstractTokenizerFactory`'s ctor still takes a `String` parameter
where the name once was.
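A sketch of what follow-up 1 could look like, assuming a simple name-to-supplier registry (the registry class is illustrative):

```java
import java.util.Collections;
import java.util.Map;
import java.util.function.Supplier;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

class TokenizerRegistrySketch {
    // With TokenizerFactory gone, a plain Supplier<Tokenizer> is enough.
    static final Map<String, Supplier<Tokenizer>> TOKENIZERS =
        Collections.singletonMap("whitespace", WhitespaceTokenizer::new);
}
```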
If the bucket already exists, due to non-overlapping series or missing data, the
MovAvg creates a merged bucket with the existing aggs + the new prediction. This
fixes a small bug where the doc_count was not being set correctly.
Relates to #24327
Today if the primary throws an exception while handling the replica
response (e.g., because it is already closed while updating the local
checkpoint for the replica), or because of a bug that causes an
exception to be thrown in the replica operation listener, this exception
is caught by the underlying transport handler plumbing and is translated
into a response handler failure transport exception that is passed to
the onFailure method of the replica operation listener. This causes the
primary to turn around and fail the replica which is a disastrous and
incorrect outcome as there's nothing wrong with the replica, it is the
primary that is broken and deserves a paddlin'. This commit handles this
situation by failing the primary.
Relates #24926
This commit adds a second refresh to the concurrent relocation
test. This is necessary as the first refresh might have brought back a
local checkpoint for a shard copy that a newly relocated primary was
aware of but had not yet received a local checkpoint for. When that
local checkpoint arrives on the new primary, the global checkpoint can
advance again, and so we need a second replication action to push that
global checkpoint back out to the replica. This is indeed a hack, and it
will eventually be removed.
Closes #24599
The `IndexDeletionPolicy` is currently instantiated by `IndexShard` and is then passed through to the engine as a parameter. That's a shame as it is really just an implementation detail and the engine already has a method to acquire a commit.
This is preparing for a follow-up PR that will connect the index deletion policy with a new translog deletion policy.
Relates to #10708