Currently global ordinals are documented under `fielddata`. This commit moves them to
their own file since they also work with doc values and fielddata is on the way
out.
Closes #23101
This commit provides the TransportRequest that caused the retrieval of a search context to the
SearchOperationListener#validateSearchContext method so that implementers have access to the
request.
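A minimal sketch of the shape of this change, with stand-in types since the real Elasticsearch interfaces are much larger:

```java
// Stand-in types; the real Elasticsearch interfaces carry much more.
interface TransportRequest {}
interface SearchContext {}

interface SearchOperationListener {
    // Before this change the hook only received the SearchContext.
    // Passing the originating TransportRequest along lets implementers
    // inspect the request (e.g. its headers) when deciding whether the
    // retrieved context may be used.
    default void validateSearchContext(SearchContext context, TransportRequest request) {
        // default: accept every context
    }
}
```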
By default, the remove plugin CLI command preserves configuration
files. This is so that if a user is upgrading the plugin (which is done
by first removing the old version and then installing the new version)
they do not lose their configuration file. Yet, there are circumstances
where preserving the configuration file is not desired. This commit adds
a purge option to the remove plugin CLI command.
Relates #24981
openSUSE-13 has reached [EOL](https://en.opensuse.org/Lifetime).
Replace openSUSE-13 with openSUSE-42 (Leap) for packaging tests and
update docs.
Relates #25001
To check actual and parsed object equality for response parsing, we currently
rely on comparing the original xContent with the output of the parsed object.
If this comparison fails, we only get cryptic error messages that are hard to
read, partly because we recursively compare the lists and maps of the xContent
structures involved.
This commit leverages the existing NotEqualMessageBuilder to provide error
messages that are more detailed and useful for debugging when a comparison
fails.
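As a toy illustration of the kind of message this enables (not the actual NotEqualMessageBuilder API), a recursive comparison can report the path to the first mismatch instead of dumping both structures:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Toy version: walk expected's structure and report where the two
// structures first diverge, e.g. "root.hits[2].score: expected 1.0 but was 2.0".
final class StructureDiff {
    static String firstMismatch(String path, Object expected, Object actual) {
        if (expected instanceof Map<?, ?> e && actual instanceof Map<?, ?> a) {
            for (Object key : e.keySet()) {
                String mismatch = firstMismatch(path + "." + key, e.get(key), a.get(key));
                if (mismatch != null) return mismatch;
            }
            return null; // note: toy code ignores extra keys in `actual`
        }
        if (expected instanceof List<?> e && actual instanceof List<?> a && e.size() == a.size()) {
            for (int i = 0; i < e.size(); i++) {
                String mismatch = firstMismatch(path + "[" + i + "]", e.get(i), a.get(i));
                if (mismatch != null) return mismatch;
            }
            return null;
        }
        return Objects.equals(expected, actual) ? null
            : path + ": expected " + expected + " but was " + actual;
    }
}
```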
Currently, the decisions regarding which translog generation files to delete are hard coded in the interaction between the `InternalEngine` and the `Translog` classes. This PR extracts them to a dedicated class called `TranslogDeletionPolicy`, for two main reasons:
1) Simplicity - the code is easier to read and understand (no more two-phase commit on the translog; the Engine can just commit and the translog will respond)
2) Preparing for future plans to extend the logic we need - i.e., retain multiple Lucene commits and also introduce size-based retention logic, allowing people to always keep a certain number of translog files around. The latter is useful to increase the chance of an ops-based recovery.
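A hedged sketch of the shape this takes (names are illustrative, not the actual class): the engine reports what its commits still need, and the translog asks the policy what it may delete:

```java
// Illustrative names only, not the actual Elasticsearch class.
final class TranslogDeletionPolicySketch {
    private long minGenerationToKeep = 0;

    // Called on each Lucene commit with the oldest translog generation
    // that commit still needs for recovery.
    synchronized void onCommit(long minRequiredGeneration) {
        minGenerationToKeep = Math.max(minGenerationToKeep, minRequiredGeneration);
    }

    // The translog can trim any generation below this; no engine logic
    // is involved in the decision. Future retention rules (multiple
    // commits, size-based retention) would hook in here.
    synchronized long minRetainedGeneration() {
        return minGenerationToKeep;
    }
}
```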
If the delimiter or replacement parameter is an empty string, the error is not clear enough to indicate how to fix it.
With this change, the user knows these parameters must be non-empty strings.
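A minimal sketch of the stricter check, using a hypothetical helper; the actual parsing code in Elasticsearch differs:

```java
// Hypothetical helper illustrating the new, explicit error message.
final class NonEmptyParam {
    static String requireNonEmpty(String value, String name) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(name + " must be a non-empty string");
        }
        return value;
    }
}
```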
This is related to #24927. There was a small possibility that a test
was attempting to compress a stream with zero bytes. This was causing
a failure.
This test now requires at least one byte.
* Introduce ParentJoinFieldMapper, a field mapper that creates a parent/child relation within documents of the same index
This change adds a new field mapper named ParentJoinFieldMapper. This mapper is a replacement for the ParentFieldMapper, but instead of using the types in the mapping
it uses an internal field to materialize the parent/child relation within a single index.
This change also adds a fetch sub-phase that automatically retrieves the join name (parent or child name) and the parent id for child documents in the response hit fields.
The compatibility with `has_parent`, `has_child` queries and `children` agg will be added in a follow up.
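For illustration, the relation might be declared and used roughly as follows; the field and relation names are made up, and the exact syntax in this initial commit may differ from what is eventually released:

```java
// Field and relation names here are invented for the example.
final class JoinFieldExample {
    // one join field in the mapping declares the relation
    static final String MAPPING = """
        { "mappings": { "properties": {
            "my_join_field": { "type": "join",
                               "relations": { "question": "answer" } } } } }
        """;

    // a parent document just names its side of the relation
    static final String PARENT_DOC = """
        { "text": "a question", "my_join_field": "question" }
        """;

    // a child document also carries its parent's id (and must be routed
    // to the same shard as the parent)
    static final String CHILD_DOC = """
        { "text": "an answer",
          "my_join_field": { "name": "answer", "parent": "1" } }
        """;
}
```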
Relates #20257
This is a follow-up to #23941. Currently there are a number of
complexities related to compression. The raw DeflaterOutputStream must
be closed prior to sending bytes to ensure that EOS bytes are written.
But the underlying ReleasableBytesStreamOutput cannot be closed until
the bytes are sent to ensure that the bytes are not reused.
Right now we have three different stream references hanging around in
TCPTransport to handle this complexity. This commit introduces
CompressibleBytesOutputStream to be a single stream implementation that
behaves properly with or without compression enabled.
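A simplified model of the idea, using a plain ByteArrayOutputStream as a stand-in for ReleasableBytesStreamOutput: close() finishes compression (writing the EOS bytes) without invalidating the underlying buffer, whose release is a separate concern:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;

// Illustrative names, not the actual Elasticsearch classes.
final class CompressibleBuffer extends OutputStream {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final OutputStream out; // either the buffer, or a deflater on top

    CompressibleBuffer(boolean compress) {
        this.out = compress ? new DeflaterOutputStream(buffer, true) : buffer;
    }

    @Override public void write(int b) throws IOException {
        out.write(b);
    }

    // Finish compression (writes the EOS marker) without invalidating
    // the underlying buffer, so callers can still read the bytes before
    // releasing the buffer after the send completes.
    @Override public void close() throws IOException {
        if (out instanceof DeflaterOutputStream dos) {
            dos.finish();
        }
    }

    byte[] bytes() {
        return buffer.toByteArray();
    }
}
```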
This metric is not used in the ES codebase at all. It's also unlikely to be
used in the future since it relies on a periodic "tick", which we don't currently use.
The took time computed for search requests does not take into account the expand search phase.
This change delays the computation to after the expand phase finishes.
Relates #24900
This makes profiling classes acquire a timer up front that can then be reused
across all calls, in order to save bounds checks for methods that are called in
tight loops.
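A toy example of the pattern, with a hypothetical Timer class standing in for the profiling classes:

```java
// Hypothetical Timer; the point is that it is acquired once, up front,
// so the hot loop only starts and stops the already-held timer.
final class Timer {
    private long start;
    private long total;
    void start() { start = System.nanoTime(); }
    void stop()  { total += System.nanoTime() - start; }
    long totalNanos() { return total; }
}

final class ProfilingExample {
    public static void main(String[] args) {
        Timer timer = new Timer();            // acquired once, not per call
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) { // tight loop: no lookup per call
            timer.start();
            sum += i;                         // the work being profiled
            timer.stop();
        }
        System.out.println(sum + " took " + timer.totalNanos() + "ns");
    }
}
```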
ScriptContexts currently understand a FactoryType that can produce
instances of the script InstanceType. However, for search scripts, this
does not work as we have the concept of LeafSearchScript that is created
per Lucene segment. This commit effectively renames the existing
SearchScript class into SearchScript.LeafFactory, which is a new,
optional, class that can be defined within a ScriptContext.
LeafSearchScript is effectively renamed back into SearchScript. This
change allows the model of stateless factory -> stateful factory ->
script instance to continue, but in a generic way that any script
context may take advantage of.
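A hedged sketch of the resulting three-level model, with simplified types (the real factories receive script parameters, the search lookup, and the Lucene LeafReaderContext):

```java
// Simplified stand-in for the stateless factory -> stateful (leaf)
// factory -> script instance chain described above.
interface SearchScriptSketch {
    double execute();                      // the per-document script body

    interface LeafFactory {                // stateful: one per Lucene segment
        SearchScriptSketch newInstance(/* LeafReaderContext ctx */);
    }

    interface Factory {                    // stateless: compiled once per script
        LeafFactory newFactory(/* params, search lookup */);
    }
}
```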
Relates #20426
In previous work, we refactored the delay mechanism in index shard
operation permits to allow for async delaying of acquisition. This
refactoring made explicit when permit acquisition is disabled whereas
previously we were relying on an implicit condition, namely that all
permits were acquired by the thread trying to delay acquisition. When
using the implicit mechanism, we tried to acquire a permit and if this
failed, we returned a null releasable as an indication that our
operation should be queued. Yet, now we know when we are delayed and we
should not even try to acquire a permit. If we try to acquire a permit
and one is not available, we know that we are not delayed, and so
acquisition should be successful. If it is not successful, something is
deeply wrong. This commit takes advantage of this refactoring to
simplify the internal implementation.
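As a simplified model of that invariant (not the actual IndexShardOperationPermits implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Toy model: once delaying is explicit, an operation is either queued
// because acquisition is disabled, or its permit acquisition must succeed.
final class PermitsSketch {
    private final Semaphore semaphore = new Semaphore(Integer.MAX_VALUE);
    private final Queue<Runnable> delayedOps = new ArrayDeque<>();
    private boolean delayed; // explicit flag, guarded by `this`

    synchronized void acquire(Runnable op) {
        if (delayed) {
            delayedOps.add(op); // we know we are delayed: just queue
            return;
        }
        // not delayed: acquisition must succeed, otherwise something is
        // deeply wrong
        if (semaphore.tryAcquire() == false) {
            throw new IllegalStateException("failed to acquire permit while not delayed");
        }
        try {
            op.run();
        } finally {
            semaphore.release();
        }
    }

    synchronized void endDelay() {
        delayed = false;
        for (Runnable op; (op = delayedOps.poll()) != null; ) {
            acquire(op); // queued operations now re-acquire permits
        }
    }
}
```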
Relates #24971
When a primary is promoted, it could have gaps in its history due to
concurrency and in-flight operations when it was serving as a
replica. This commit fills the gaps in the history of the promoted shard
after all operations from the previous term have drained, and future
operations are blocked. This commit does not handle replicating the
no-ops that fill the gaps to any remaining replicas; that is the
responsibility of the primary/replica sync that we are laying the groundwork
for.
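A toy illustration of the gap filling itself; the real implementation indexes no-ops through the engine rather than walking an in-memory set:

```java
import java.util.Set;
import java.util.TreeSet;

// Any sequence number below the max that was never seen gets a no-op,
// so the promoted primary's history is complete.
final class FillGaps {
    public static void main(String[] args) {
        Set<Long> seen = new TreeSet<>(Set.of(0L, 1L, 2L, 5L, 7L));
        long max = 7L;
        for (long seqNo = 0; seqNo <= max; seqNo++) {
            if (seen.add(seqNo)) { // true only for the missing seq#s: 3, 4, 6
                System.out.println("indexing no-op for seq# " + seqNo);
            }
        }
    }
}
```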
Relates #24945
`terms` aggregations at the root level use the `global_ordinals` execution hint by default.
When all sub-aggregators can be run in `breadth_first` mode the collected buckets for these sub-aggs are dense (remapped after the initial pruning).
But if a sub-aggregator is not deferrable and needs to collect all buckets before pruning we don't remap global ords and the aggregator needs to deal with sparse buckets.
Most (if not all) aggregators expect dense buckets and use this information to allocate memory.
This change forces the remap of the global ordinals but only when there is at least one sub-aggregator that cannot be deferred.
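The decision itself is simple; roughly (names are illustrative):

```java
import java.util.List;

// Illustrative names; the real check lives in the terms aggregator setup.
final class OrdinalsExecutionChoice {
    interface Aggregator {
        boolean canDeferCollection();
    }

    // Dense (remapped) global ordinals are required as soon as a single
    // sub-aggregator collects all buckets eagerly instead of deferring.
    static boolean remapGlobalOrdinals(List<Aggregator> subAggregators) {
        return subAggregators.stream().anyMatch(agg -> agg.canDeferCollection() == false);
    }
}
```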
Relates #24788
DateProcessor's DateFormat UNIX format parser resulted in
a floating point rounding error when parsing certain string-formatted
epoch times. Now Double.parseDouble is used, preserving the
intended input.
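The following demonstrates the rounding error on a current epoch value:

```java
public class UnixEpochPrecision {
    public static void main(String[] args) {
        String unixTime = "1495718015"; // seconds since the epoch
        // float cannot represent this value: it rounds to 1495718016
        long viaFloat = (long) (Float.parseFloat(unixTime) * 1000);
        // double represents it exactly, so the result is 1495718015000
        long viaDouble = (long) (Double.parseDouble(unixTime) * 1000);
        System.out.println(viaFloat + " != " + viaDouble);
    }
}
```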
This commit introduces a clean transition from the old primary term to
the new primary term when a replica is promoted primary. To accomplish
this, we delay all operations before incrementing the primary term. The
delay is guaranteed to be in place before we increment the term, and
then all operations that are delayed are executed after the delay is
removed, which happens asynchronously on another thread. This thread does
not progress until in-flight operations that were executing are
completed, and after these operations drain, the delayed operations
re-acquire permits and are executed.
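A toy model of the ordering guarantee (the real code coordinates this through index shard operation permits):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executor;

// Toy model only: the "delay" gate closes before the term is bumped,
// and queued operations run on another thread only after the old-term
// in-flight operations have drained.
final class PrimaryPromotion {
    private final Object mutex = new Object();
    private final Queue<Runnable> queued = new ConcurrentLinkedQueue<>();
    private boolean delayed = false;
    private long term = 1;
    private int inFlight = 0;

    long currentTerm() {
        synchronized (mutex) { return term; }
    }

    void startOperation(Runnable op) {
        synchronized (mutex) {
            if (delayed) {       // the gate is closed: queue instead
                queued.add(op);
                return;
            }
            inFlight++;
        }
        try {
            op.run();
        } finally {
            synchronized (mutex) {
                if (--inFlight == 0) mutex.notifyAll();
            }
        }
    }

    void promoteTo(long newTerm, Executor executor) {
        synchronized (mutex) {
            delayed = true;      // delay is in place...
            term = newTerm;      // ...before the term is incremented
        }
        executor.execute(() -> { // another thread waits for old-term ops
            synchronized (mutex) {
                while (inFlight > 0) {
                    try {
                        mutex.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                delayed = false;
            }
            for (Runnable op; (op = queued.poll()) != null; ) {
                op.run();        // delayed operations now execute
            }
        });
    }
}
```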
Relates #24925
Within two lines of each other appear "fallthrough" and "fall through",
both typed by the same person who should have been paying better
attention; only one of these is correct, and the inconsistency is
bothersome. This commit fixes the errant one.
Drops `TokenizerFactory#name`, replacing it with
`CustomAnalyzer#getTokenizerName` which is much better targeted at
its single use case inside the analysis API.
Drops a test that I would have had to refactor and which is duplicated by
`AnalysisModuleTests`.
To keep this change from blowing up in size I've left two mostly
mechanical changes to be done in followups:
1. `TokenizerFactory` can now be entirely dropped and replaced with
`Supplier<Tokenizer>`, as sketched below.
2. `AbstractTokenizerFactory`'s ctor still takes a `String` parameter
where the name once was.
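A sketch of what followup (1) amounts to, using a Lucene tokenizer for concreteness:

```java
import java.util.function.Supplier;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

// With the name accessor gone, a tokenizer factory is nothing more
// than a Supplier<Tokenizer>.
final class TokenizerFactories {
    static Supplier<Tokenizer> whitespace() {
        return WhitespaceTokenizer::new;
    }
}
```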
If the bucket already exists, due to non-overlapping series or missing data, the
MovAvg creates a merged bucket with the existing aggs + the new prediction. This
fixes a small bug where the doc_count was not being set correctly.
Relates to #24327
Today if the primary throws an exception while handling the replica
response (e.g., because it is already closed while updating the local
checkpoint for the replica), or because of a bug that causes an
exception to be thrown in the replica operation listener, this exception
is caught by the underlying transport handler plumbing and is translated
into a response handler failure transport exception that is passed to
the onFailure method of the replica operation listener. This causes the
primary to turn around and fail the replica, which is a disastrous and
incorrect outcome: there's nothing wrong with the replica; it is the
primary that is broken and deserves a paddlin'. This commit handles this
situation by failing the primary.
Relates #24926
This commit adds a `doc_count` field to the response body of the Matrix
Stats aggregation. It exposes the number of documents involved in
the computation of statistics, a value that can already be retrieved using
the method MatrixStats.getDocCount() in the Java API.
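For example, on the client side it might be read like this (the aggregation name is made up, request setup is omitted, and package locations may differ):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.matrix.stats.MatrixStats;

final class MatrixStatsDocCount {
    // Extracts the document count backing the statistics; "my_stats" is
    // an illustrative aggregation name.
    static long docCount(SearchResponse response) {
        MatrixStats stats = response.getAggregations().get("my_stats");
        return stats.getDocCount(); // same value as the new doc_count field
    }
}
```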
This commit adds a second refresh to the concurrent relocation
test. This is necessary as the first refresh might have brought back a
local checkpoint for a shard that a newly relocated primary became aware
of but for which it had not yet received a local checkpoint. When that
local checkpoint arrives on the new primary, the global checkpoint could
advance again and so we need a second replication action to push that
global checkpoint back out to the replica. This is indeed a hack, and it
will eventually be removed.
Closes #24599