Today a transport response uses the same wire format version as the
corresponding request. This mostly works since we mostly know we are
communicating with a node with a compatible version. TCP handshakes don't have
this guarantee since they use `Version.CURRENT.minimumCompatibilityVersion()`
to let us handshake with older nodes. This results in the strange situation of
a node of major version `N` responding to a node of major version `N-1` using a
wire format of version `N-2`.
We put extra effort into the longer BWC requirements for successful responses,
but we do not offer the same guarantees for error responses since they may be
rather complicated to serialize. This can result in the request sender
misinterpreting the response which may have unpredictable consequences.
Rather than strengthening the guarantees in this area, this commit simply logs
the exception and closes the connection on a handshake error with a node that
uses an incompatible wire format.
Closes #54337
All of these files were written by us, and not sourced from
anywhere. Therefore, the license header should be granting licenses to
Elasticsearch, rather than to the ASF. This commit addresses them by
changing the license to our standard Apache 2.0 license header.
Merging logic is currently split between FieldMapper, with its merge() method, and
MappedFieldType, which checks for merging compatibility. The compatibility checks
are called from a third class, MappingMergeValidator. This makes it difficult to reason
about what is or is not compatible in updates, and even what is in fact updateable - we
have a number of tests that check compatibility on changes in mapping configuration
that are not in fact possible.
This commit refactors the compatibility logic so that it all sits on FieldMapper, and
ensures that it is called at merge time. It adds a new FieldMapperTestCase base class that
FieldMapper tests can extend, and moves the compatibility testing machinery from
FieldTypeTestCase to this new class.
Relates to #56814
When `date_histogram` is a sub-aggregator it used to allocate a bunch of
objects for every one of its parent's buckets. This change uses the data
structures that we built in #55873 to rework the `date_histogram`
aggregator so that it no longer needs all of that allocation.
Part of #56487
Elasticsearch requires that an HttpRequest abstraction be implemented
by http modules before server processing. This abstraction controls when
underlying resources are released. This commit moves this abstraction to
be created immediately after content aggregation. This change will
enable follow-up work including moving Cors logic into the server
package and tracking bytes as they are aggregated from the network
level.
PR #56893 was supposed to randomise the iteration count in
`testDataOnlyNodePersistence` but this change was mistakenly omitted. This
commit addresses this.
This test failed if all 1000 top-level `rarely()` calls in the loop returned
`false`, because then we would never set the term of the persisted state. This
commit fixes this by adding an earlier call to `persistedState#setCurrentTerm`.
It also changes the test to clean up the threadpools it starts whether it
passes or fails.
When reading/writing the individual doc responses in the context
of a bulk shard response there is no need to serialize the `ShardId`
over and over. This can waste a lot of memory when handling large bulk
requests.
This assertion is too strict. A snapshot will be removed from the cluster state
on the CS thread before it is removed from the listeners map on the snapshot thread pool.
Throughout the removal from the cluster state and listener map, the snapshot is tracked
in `endingSnapshots` though, so we can relax the assertion accordingly and are still able
to catch leaked listeners.
Closes #56607
In the unlikely event that the data nodes started snapshotting the
shards already (and hence got blocked on the data blobs) before the
master has applied the cluster state to its own `SnapshotsService` on
the CS applier thread, we can get a `SnapshotMissingException` here which
breaks the busy assert loop so we have to deal with it explicitly.
Closes #56858
Currently it is possible that a sniff connection round is occurring as
we enter another test loop in testEnsureWeReconnect. The problem is that
once we enter another loop, closing the connection manually can cause
this pre-existing connection round to fail. This round failing can fail
the test. This commit fixes the issue by ensuring that there are no
in-progress connections before entering another loop.
It was relying on the compensated sum working but the test framework was
dodging it. This forces the accuracy tests to come from a single shard
where we get the proper compensated sum.
Closes #56757
We get the number of shards and replicas with our bare hands in index
metadata, rather than letting the settings infrastructure do the work
for us. This commit switches to using the settings infrastructure.
Today a 7.x node logs `cluster UUID set to [...]` on every cluster state update
received from a 6.8 master, because 6.8 nodes are not able to commit the
cluster UUID properly. We could try and deduplicate these logs somehow, but
that would introduce a good deal of complexity. Instead, this commit suppresses
these logs entirely when receiving cluster state updates from a 6.8 master.
Mapper.Builder currently has some complex generics on it to allow fluent builder
construction. However, the second parameter, a return type from the build() method,
is unnecessary, as we can use covariant return types. This commit removes this second
generic parameter.
This is another part of the breakup of the massive BuildPlugin. This PR
moves the code for configuring publications to a separate plugin. Most
of the time these publications are jar files, but this also supports the
zip publication we have for integ tests.
This aggregation will perform normalizations of metrics
for a given series of data in the form of bucket values.
The aggregation supports the following normalizations:
- rescale 0-1
- rescale 0-100
- percentage of sum
- mean normalization
- z-score normalization
- softmax normalization
The normalization to apply is specified in the normalize agg's `normalizer` field.
For example:
```
{
  "normalize": {
    "buckets_path": <>,
    "normalizer": "percent"
  }
}
```
If a channel gets disconnected, then we should cancel the tasks
associated with that channel as their results won't be retrieved.
Closes #56327
Relates #56619
Backport of #56620
We previously rejected removing the number of replicas setting, which
prevents users from reverting this setting to its default the natural
way. To fix this, we put back the setting with the default value in the
cases that the user is trying to remove it. Yet, we also need to do the
work of updating the routing table and so on appropriately. This case
was missed because when the setting is being removed, we were defaulting
to -1 in this code path, which is treated as not being updated. Instead,
we must treat the case when we are removing this setting as if the
setting is being updated, too. This commit does that.
In normal operation native controllers are not expected to write
anything to stdout or stderr. However, if due to an error or
something unexpected with the environment a native controller
does write something to stdout or stderr then it will block if
nothing is reading that output.
This change makes the stdout and stderr of native controllers
reuse the same stdout and stderr as the Elasticsearch JVM (which
are by default redirected to es.stdout.log and es.stderr.log) so
that if something unexpected is written to native controller
output then:
1. The native controller process does not block, waiting for
something to read the output
2. We can see what the output was, making it easier to debug
obscure environmental problems
Backport of #56491
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This merges the code for the `significant_terms` agg into the package
for the code for the `terms` agg. They are *super* entangled already,
this mostly just admits that to ourselves.
Precondition for the terms work in #56487
When using source filtering exclusions, empty arrays are not preserved in documents, and no empty arrays are returned if arrays are empty after applying exclusions. We have special treatment to make sure that we preserve empty objects, but the behaviour for arrays is different.
It looks like this regression was introduced by #22593, shortly after we refactored source filtering to use automata (#20736).
Note that this change affects what the search API returns when using source exclusions, as well as what gets indexed when using source exclusions for the _source field.
Closes #23796
This adds a few things to the `breakdown` of the profiler:
* `histogram` aggregations now contain `total_buckets` which is the
count of buckets that they collected. This could be useful when
debugging a histogram inside of another bucketing agg that is fairly
selective.
* All bucketing aggs that can delay their sub-aggregations will now add
a list of delayed sub-aggregations. This is useful because we
sometimes have fairly involved logic around which sub-aggregations get
delayed and this will save you from having to guess.
* Aggregations wrapped in the `MultiBucketAggregatorWrapper` can't
accurately add anything to the breakdown. Instead, the wrapper
adds a marker entry `"multi_bucket_aggregator_wrapper": true` so we
can quickly pick out such aggregations when debugging.
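To make the first two additions concrete, here is a rough sketch of what a `histogram` breakdown entry might now contain (the timing keys are the usual breakdown keys; the exact key name for the delayed sub-aggregation list is an assumption for illustration):
```
"breakdown": {
  "collect": 1304543,
  "collect_count": 42,
  "total_buckets": 7,
  "delayed_aggregators": ["sub_terms"]
}
```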
It also fixes a bug where `_count` breakdown entries were contributing
to the overall `time_in_nanos`. They didn't add a large amount of time
so it is unlikely that this caused a big problem, but I fixed it while I was there.
To support the arbitrary breakdown data this reworks the profiler so
that the `breakdown` can contain any data that is supported by
`StreamOutput#writeGenericValue(Object)` and
`XContentBuilder#value(Object)`.
This PR proposes to use `IndexSortSortedNumericDocValuesRangeQuery` when
possible to speed up certain range queries. Points-based queries are already
very efficient, the only time this query makes a difference is when the range
matches a large number of documents.
Relates to #48665.
This is similar to a previous change that allowed removing the number of
replicas settings (so setting it to its default) on open indices. This
commit allows the same for closed indices.
It is unfortunate that we have separate branches for handling open and
closed indices here, but I do not see a clean way to merge these two
together without making a rather unnatural method (note that they invoke
different methods for doing the settings updates). For now, we leave
this as-is even though it led to the miss here.
Today a user can create an index without setting the
index.number_of_replicas setting even though the index metadata requires
that the setting has a value. We do this when creating an index by
explicitly settings index.number_of_replicas to a default value if one
is not provided. However, if a user updates the number of replicas, and
then later wants to return to the default value, they are naturally
inclined to try setting this setting to null, as the agreed upon way to
return a setting to its default. Since the index metadata requires that
this setting has a non-null value, we blow up when a user attempts to
make this change. This is because we are not taking the same action when
updating a setting on an index that we take when creating an
index. Namely, we are not explicitly setting index.number_of_replicas if
the request does not carry a value for this setting. This would happen
when nulling the setting, which we want to support. This commit
addresses this by setting index.number_of_replicas to the default if the
value for this setting is null when updating the settings for an index.
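For illustration, the kind of request this enables (the index name is hypothetical) is resetting the setting by sending null:
```
PUT /my-index/_settings
{
  "index.number_of_replicas": null
}
```
After this change the setting is put back to its default value instead of the update being rejected.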
Currently the `time_zone` parameter in `query_string` queries gets applied
correctly only when using the range syntax, e.g. `date:[2020-01-02 TO
2020-01-05]`. When a date field gets searched without explicit range syntax, e.g.
`date:"2020-01-01"`, we internally create a range query that uses the specified
date as start date and rounds up to the next underspecified unit for the end
date (e.g. here 2020-01-01T23:59:59) without considering the `time_zone`
settings. This change adds a check in QueryStringQueryParser to detect this
scenario early where we have access to the time zone information and directly
create a range query using it.
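As an illustration, a minimal sketch of the affected case (index and field names are hypothetical); the single date value now has the `time_zone` applied when the implicit range is built:
```
GET /my-index/_search
{
  "query": {
    "query_string": {
      "query": "date:\"2020-01-01\"",
      "time_zone": "+01:00"
    }
  }
}
```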
Closes #55813
In a race condition, a search context could remain enlisted in
SearchService when an index is deleted, potentially causing the index
folder to not be cleaned up (for either lengthy searches or scrolls with
timeouts > 30 minutes or if the scroll is kept active).
If a conditional is added to a processor, and that processor fails, and
that processor has an on_failure handler, the full trace of all of the
executed processors may not be displayed in simulate verbose. The
information is correct, but misses displaying some of the steps used
to get there.
This happens because a processor that is a conditional processor is a
wrapper around the real processor, and a processor with an on_failure
handler is also a wrapper around the processor(s). When decorating for
simulation we treat compound processor specially, but if a compound processor
is wrapped by a conditional processor that compound processor's processors
can be missed for decoration resulting in the missing displayed steps.
The fix to this is to treat the conditional processor specially and
explicitly separate it from the processor it is wrapping. This requires
us to keep track of two processors: a possible conditional processor and
the actual processor it may be wrapping.
related: #56004
Two spots that allow for some optimization:
* We are often creating a composite reference of just a single item in
the transport layer => special cased via static constructor to make sure we never do that
* Also removed the pointless case of an empty composite bytes ref
* `ByteBufferReference` is practically always created from a heap buffer these days so there
is no point in dealing with all the bounds checks and extra references to sliced buffers from that
and we can just use the underlying array directly
Today the heap size check warns the user about two reasons why they might
care about the heap size: resize pauses, and if memory locking is
enabled. Yet, we unconditionally make mention of the memory locking
reason, even if memory locking is not enabled. This can confuse some
users, so we adjust the warning about memory locking to only display if
memory locking is enabled.
Backport: #55377
This commit adds the ability to auto create data streams using index templates v2.
Index templates (v2) now have a data_stream field that includes a timestamp field;
if that is provided and an index name matches such a template, then a data stream
(plus its first backing index) is auto created.
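A rough sketch of what such a template could look like (the template name and pattern are hypothetical, and the exact shape of the `data_stream` object at this stage is an assumption):
```
PUT _index_template/logs-template
{
  "index_patterns": ["logs-foo*"],
  "data_stream": {
    "timestamp_field": "@timestamp"
  }
}
```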
Relates to #53100
Similar to what the moving function aggregation does, except merging windows of percentiles
sketches together instead of cumulatively merging final metrics
Move data stream resolvability test from IndicesOptionsIntegrationIT to DataStreamIT class.
Whether a transport action supports data streams is no longer controlled via indices options.
This adds support for parsing numbers as range keys. They get converted
into a string, but we allow numbers.
While I was there I replaced the parser for `Range` with a
`ConstructingObjectParser` which will automatically add support for "did
you mean" style corrections on errors.
Closes #56402
This commit refactors the following:
* GeoPointFieldMapper and PointFieldMapper to
AbstractPointGeometryFieldMapper derived from AbstractGeometryFieldMapper.
* .setupFieldType moved up to AbstractGeometryFieldMapper
* lucene indexing moved up to AbstractGeometryFieldMapper.parse
* new addStoredFields, addDocValuesFields abstract methods for implementing
stored field and doc values field indexing in the concrete field mappers
This refactor is the next phase for setting up a framework for extending
spatial field mapper functionality in x-pack.
This commit removes the `prefer_v2_templates` flag and setting. This was a brief setting that
allowed specifying whether V1 or V2 template should be used when an index is created. It has been
removed in favor of V2 templates always having priority.
Relates to #53101
Resolves #56528
This is not a breaking change because this flag was never in a released version.
Change TransportBroadcastByNodeAction and TransportBroadcastReplicationAction
to be able to resolve data streams by default. Implementations can change this ability.
This change allows the following APIs to resolve data streams: flush,
refresh (already supported data streams), force merge, clear indices cache,
indices stats (already supported data streams), segments, upgrade stats,
upgrade, validate query, searchable snapshots stats, clear searchable snapshots cache and
reload analyzers APIs.
Relates to #53100
This wires `auto_date_histogram` into the rounding optimization that I
built in #55559. This should significantly speed up any
`auto_date_histogram`s with `time_zone`s on them.
Right now all implementations of the `terms` agg allocate a new
`Aggregator` per bucket. This uses a bunch of memory. Exactly how much
isn't clear but each `Aggregator` ends up making its own objects to read
doc values which have non-trivial buffers. And it forces all of its
sub-aggregations to do the same. We allocate a new `Aggregator` per
bucket for two reasons:
1. We didn't have an appropriate data structure to track the
sub-ordinals of each parent bucket.
2. You can only make a single call to `runDeferredCollections(long...)`
per `Aggregator` which was the only way to delay collection of
sub-aggregations.
This change switches the method that builds aggregation results from
building them one at a time to building all of the results for the
entire aggregator at the same time.
It also adds a fairly simplistic data structure to track the sub-ordinals
for `long`-keyed buckets.
It uses both of those to power numeric `terms` aggregations and removes
the per-bucket allocation of their `Aggregator`. This fairly
substantially reduces memory consumption of numeric `terms` aggregations
that are not the "top level", especially when those aggregations contain
many sub-aggregations. It also is a pretty big speed up, especially when
the aggregation is under a non-selective aggregation like
the `date_histogram`.
I picked numeric `terms` aggregations because those have the simplest
implementation. At least, I could kind of fit it in my head. And I
haven't fully understood the "bytes"-based terms aggregations, but I
imagine I'll be able to make similar optimizations to them in follow up
changes.
When an index spans a daylight savings time transition we can't use our
optimization that rewrites the requested time zone to a fixed time zone
and instead we used to fall back to a java.util.time based rounding
implementation. In #55559 we optimized "time unit" rounding. This
optimizes "time interval" rounding.
The java.util.time based implementation is about 1650% slower than the
rounding implementation for a fixed time zone. This replaces it with a
similar optimization that is only about 30% slower than the fixed time
zone. The java.util.time implementation allocates a ton of short lived
objects but the optimized implementation doesn't. So it *might* end up
being faster than the microbenchmarks imply.
Use proper facility for creating temporary index service for the simulation
that does not add itself to the `IndicesService` unnecessarily (breaking an assertion about the
internal consistency of the cluster state and the `IndicesService`).
Closes #56298
Backport of: #56413
Allow cluster health api to resolve data streams and
automatically remove data streams after each test in
test cases extending from `ESIntegTestCase`
Relates to #53100
While investigating possible optimizations to speed up searchable
snapshots shard restores, we noticed that Elasticsearch builds the
list of shard files on local disk in order to compare it with the list of
files contained in the snapshot to restore. This list of files is
materialized with a MetadataSnapshot object whose construction
involves reading the footer checksum of every file of the shard
using the Store.checksumFromLuceneFile() method.
Further investigation shows that a MetadataSnapshot object is
also created for other types of operations like building the list of
files to recover in a peer recovery (and primary shard relocation)
or in order to assign a shard to a node. These operations use the
Store.getMetadata(IndexCommit) method to build the list of files
and checksums.
In the case of searchable snapshots building the MetadataSnapshot
object can potentially trigger cache misses, which in turn can
cause the download and the writing in cache of the last range of
the file in order to check the 16 bytes footer. This in turn can
cause more evictions.
Since searchable snapshots already contain the footer information
of every file in BlobStoreIndexShardSnapshot, they can directly read the
checksum from it and avoid using the cache at all to create a
MetadataSnapshot for the operations mentioned above.
This commit adds a shortcut to the
SearchableSnapshotDirectory.openInput() method - similarly to what
already exists for segment infos - so that it creates a specific
IndexInput for checksum reading operation.
A bug in InternalGeoCentroid#reduce existed that summed up
the aggregation's long-valued counts into a local integer variable.
Since it is definitely possible to reduce more than Integer.MAX_VALUE points,
this change simply updates that variable to be a long-valued number.
Closes #55992.
Currently, the logging around the SniffConnectionStrategy is limited.
The log messages are inconsistent and sometimes wrong. This commit
cleans up these log messages to describe when connections are happening
and what failed if a step fails.
Additionally, this commit enables TRACE logging for a problematic test
(testEnsureWeReconnect).
Currently when a connection closes a new sniff round begins. The
testCollectNodes test closes four transports before triggering the
method to collect the remote nodes. This leads to a race where there are
a number of reasons the collect nodes call might fail. This commit fixes
that issue by changing the test assertion to include a potential failure
condition.
Fixes #55292.
`auto_date_histogram` was returning the incorrect `interval` because
of a combination of two things:
1. When pipeline aggregations rewrote `auto_date_histogram` we reset the
interval to 1. Oops. Fixed that.
2. *Every* bucket aggregation was rewriting its buckets as though there
was a pipeline aggregation even if there aren't any. This is a bit
silly so we skip that too.
Closes #56116
We fail to unregister the child node in registerAndExecute if the parent
task is being canceled. This leads to a bug where a cancel request never
completes.
Closes #55875
Relates #54312
Rounding dates on a shard that contains a daylight savings time transition
is currently something like 1400% slower than when a shard contains dates
only on one side of the DST transition. And it makes a ton of short lived
garbage. This replaces that implementation with one that benchmarks to
having around 30% overhead instead of the 1400%. And it doesn't generate
any garbage per search hit.
Some background:
There are two ways to round in ES:
* Round to the nearest time unit (Day/Hour/Week/Month/etc)
* Round to the nearest time *interval* (3 days/2 weeks/etc)
I'm only optimizing the first one in this change and plan to do the second
in a follow up. It turns out that rounding to the nearest unit really *is*
two problems: when the unit rounds to midnight (day/week/month/year) and
when it doesn't (hour/minute/second). Rounding to midnight is consistently
about 25% faster than rounding to individual hours or minutes.
This optimization relies on being able to *usually* figure out what the
minimum and maximum dates are on the shard. This is similar to an existing
optimization where we rewrite time zones that aren't fixed
(think America/New_York and its daylight savings time transitions) into
fixed time zones so long as there isn't a daylight savings time transition
on the shard (UTC-5 or UTC-4 for America/New_York). Once I implement
time interval rounding the time zone rewriting optimization *should* no
longer be needed.
This optimization doesn't come into play for `composite` or
`auto_date_histogram` aggs because neither have been migrated to the new
`DATE` `ValuesSourceType` which is where that range lookup happens. When
they are they will be able to pick up the optimization without much work.
I expect this to be substantial for `auto_date_histogram` but less so for
`composite` because it deals with fewer values.
Note: My 30% overhead figure comes from small numbers of daylight savings
time transitions. That overhead gets higher when there are more
transitions in logarithmic fashion. When there are two thousand years
worth of transitions my algorithm ends up being 250% slower than rounding
without a time zone, but java time is 47000% slower at that point,
allocating memory as fast as it possibly can.
We were logging the cleanup of the snap- and meta- blobs for every snapshot delete
which is needlessly noisy and confusing to users. We should only log actual stale/unexpected
blobs here.
This commit creates a new gradle plugin to provide a separate task name
and source set for running ESIntegTestCase tests. The only project
converted to use the new plugin in this PR is server, as an example. The
remaining cases in x-pack will be handled in followups.
backport of #55896
`FieldMapper#parseCreateField` accepts the parse context, plus a list of fields
as an output parameter. These fields are immediately added to the document
through `ParseContext#doc()`.
This commit simplifies the signature by removing the list of fields, and having
the mappers add the fields directly to `ParseContext#doc()`. I think this is
nicer for implementors, because previously fields could be added either through
the list, or the context (through `add`, `addWithKey`, etc.)
A FilterBlobContainer class was introduced in #55952 and it delegates
its behavior to a given BlobContainer while allowing only the
necessary methods to be overridden.
This commit replaces the existing BlobContainerWrapper class from
the test framework with the new FilterBlobContainer from core.
This commit adds aggregation support for the geo_shape field
type on geo*_grid aggregations.
It introduces a Tiler for both tiles and hashes that enables a new type of
ValuesSource to replace the GeoPoint's CellIdSource. This makes it possible
for the existing Aggregator to be re-used, so no new implementations of
the grid aggregators are added.
This commit changes searchable snapshots so that it now respects the
repository's max_restore_bytes_per_sec setting when it downloads blobs.
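For reference, the rate limit in question is configured on the repository; a hedged sketch with a hypothetical repository name and location:
```
PUT _snapshot/my-backup-repo
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups",
    "max_restore_bytes_per_sec": "20mb"
  }
}
```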
Backport of #55952 for 7.x
This arose after a backport where we do not have the niceties of the
Java 11 diamond operator. This commit fixes it by adding the proper type
parameter.
When the index against which we are validating a query does not exist, we try to send
back a response letting the client know that the index does not
exist. Yet, we accidentally fallthrough into the case that the
validation failed for some other reason. This means that we end up
notifying the channel twice. Sometimes the notification occurs after the
failure has been written out and the channel closed (so the second
invocation leads to a silent failed to write to a closed channel issue),
and sometimes the response does end up in the channel, creating garbled
responses to the client. This commit fixes that issue by avoiding the
fallthrough.
Backport of #56034.
Move includeDataStream flag from an IndicesOptions to IndexNameExpressionResolver.Context
as a dedicated field that callers to IndexNameExpressionResolver can set.
Also alter indices stats api to support data streams.
The rollover api uses this api, and otherwise rolling over a data stream no longer works.
Relates to #53100
Backport of #55858 to 7.x branch.
Currently the TransportBulkAction detects whether an index is missing and
then decides whether it should be auto created. The coordination of the
index creation also happens in the TransportBulkAction on the coordinating node.
This change adds a new transport action that the TransportBulkAction delegates to
if missing indices need to be created. The reasons for this change:
* Auto creation of data streams can't occur on the coordinating node.
Based on the index template (v2) either a regular index or a data stream should be created.
However if the coordinating node is slow in processing cluster state updates then it may be
unaware of the existence of certain index templates, which can then lead to the
TransportBulkAction creating an index instead of a data stream. Therefore the coordination of
creating an index or data stream should occur on the master node. See #55377
* From a security perspective it is useful to know whether index creation originates from the
create index api or from auto creating a new index via the bulk or index api. For example
a user would be allowed to auto create an index, but not to use the create index api. The
auto create action will allow security to distinguish these two different patterns of
index creation.
This change adds the following new transport actions:
AutoCreateAction, the TransportBulkAction redirects to this action and this action will actually create the index (instead of the TransportCreateIndexAction). Later via #55377, can improve the AutoCreateAction to also determine whether an index or data stream should be created.
The create_index index privilege is also modified, so that if this permission is granted then a user is also allowed to auto create indices. This change does not yet add an auto_create index privilege. A future change can introduce this new index privilege or modify an existing index / write index privilege.
Relates to #53100
It's possible for a constant_keyword to have a 'null' value before any documents
are seen that contain a value for the field. In this case, no documents have a
value for the field, and 'exists' queries should return no documents.
Making use of #55773 to simplify snapshot state machine.
1. Deletes with no in-progress snapshot now add the delete entry to the cluster state right away
instead of doing a second CS update after the first update was a NOOP.
2. If a bulk delete matches in-progress as well as completed snapshots, abort the in-progress snapshot
and then move on to delete from the repository.
Backports #55933 to 7.x
Implements value_count and avg aggregations over Histogram fields as discussed in #53285
- value_count returns the sum of all the counts arrays of the histograms
- avg computes a weighted average of the values array of the histogram by multiplying each value with its associated element in the counts array
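As a worked example of the arithmetic above (the histogram arrays are illustrative): for a histogram field with values [0.1, 0.5] and counts [3, 7], value_count returns 3 + 7 = 10 and avg returns (0.1 * 3 + 0.5 * 7) / (3 + 7) = 0.38.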
Using optimistic locking, add the ability to run a repository state
update task with a consistent view of the current repository data.
Allows for a follow-up to remove the snapshot INIT state.
* Allow Deleting Multiple Snapshots at Once (#55474)
Adds deleting multiple snapshots in one go without significantly changing the mechanics of snapshot deletes otherwise.
This change does not yet allow mixing snapshot delete and abort. Abort is still only allowed for a single snapshot delete by exact name.
I see occasional confusion about the explanations emitted by the same-shard
allocation decider, particularly amongst new users setting up a single-node
cluster and trying to determine why their cluster has `yellow` health. For
example:
the shard cannot be allocated to the same node on which a copy of the shard
already exists
This is technically correct but it's quite a complicated sentence. Also, by
starting with "the shard cannot be allocated" it makes it sound like this is
the problem, whereas in fact this message is a good thing and users should
typically focus their attention elsewhere.
This commit simplifies the wording of these messages and makes them sound more
positive, for example:
a copy of this shard is already allocated to this node
In order to iterate through remote connections, the remote connection
manager maintains a local cache of connected nodes. Unfortunately this
is difficult to work with in testing as it is inherently racy in
comparison to the parent connection manager's map of connections.
This commit improves the relationship by only returning a cached
connection if it is still registered with the parent. If the connection
is not open, we will go to the slow path of allocating an iterator
directly from the parent.
* Emit deprecation warning if multiple v1 templates match with a new index (#55558)
* Emit deprecation warning if multiple v1 templates match with a new index
* DEPRECATION_LOGGER rename
This commit includes a number of minor improvements around `DelayableWriteable`: javadocs were expanded and reworded, `get` was renamed to `expand` and `DelayableWriteable` no longer implements `Supplier`. Also a couple of methods are now private instead of package private.
We generate a circular reference exception in translog in 6.8 in the
following scenario:
- The first rollGeneration hits "too many open files" exception when it's
copying a checkpoint file. We will set the tragic exception and close
the translog
- The second rollGeneration hits AlreadyClosedException as the current
writer is closed. We will suppress the ACE to the current tragic
exception. Unfortunately, this leads to a circular reference as ACE
already suppresses the tragic exception.
Other factors that help to manifest this bug:
- We do not fail the engine on AlreadyClosedException in flush
- We do not check for ensureOpen before rolling a new generation
Closes #55893
Occasionally this test can fail when the randomized index.version.created is
before 6.1. In this case we don't check that a MatchNoDocsQuery query is returned
when mappedFields.size() == 0, which we do for other versions.
This fails only occasionally but with the seed provided on the original issue.
It also shouldn't be an issue on master since we shouldn't test with these pre-7
index versions there.
Closes #55950
Backports #55826 to 7.x
Modified AggregatorTestCase.searchAndReduce() method so that it returns an empty aggregation result when no documents have been inserted.
Also refactored several aggregation tests so they do not re-implement method AggregatorTestCase.testCase()
Fixes #55824
Currently if we short-circuit a message the breaker release is null since
there is nothing to be broken. However, the TcpTransportChannel
infrastructure still expects it. This commit resolves this issue by
returning a no-op breaker release.
Currently `currentFieldType` is an instance variable that is first set and then
used by all methods referring to it. We can make it local to each method
instead, avoiding possible state problems and improving the readability of the code.
This adds a new api to simulate matching the given index name against the
index templates in the system.
The syntax for the new API takes the following form:
```
POST _index_template/_simulate_index/{index_name}
{
  "index_patterns": ["logs-*"],
  "priority": 15,
  "template": {
    "settings": {
      "number_of_shards": 3
    }
    ...
  }
}
```
Where the body is optional, but we support the entire body used by the
PUT _index_template/{name} api. When the body is specified we'll simulate
matching the given index against a system that'd have the given index
template together with the index templates that exist in the system.
The response, in both cases, will return the matching template's resolved
settings, mappings and aliases, together with a special field that'll print any
overlapping templates and their corresponding index patterns.
(cherry picked from commit 1a5845edce1f445c58e094e9a3b6792e21e543b0)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Implements Sum aggregation over Histogram fields by summing the value of each bucket multiplied by their count as requested in #53285
Backports #55681 to 7.x
Today if state recovery is delayed by the `gateway.recover_after_*` settings
then we may end up performing state recovery twice: once when enough nodes have
joined the cluster, and again when the timeout elapses. The second state
recovery reinitializes the routing table, effectively discarding all
recovered/recovering shards and starting again from scratch. This commit adds a
check to prevent this second state recovery.
Closes #55564
Currently the testTransientErrorsDuringRecoveryAreRetried validates that
the expected peer recovery starts only once. This check is coarse and is
executed on all nodes and indexes. This commit modifies this check to
only be performed on the expected index. Additionally this commit
removes the disruption behavior from the "blue" node where it is not
relevant. Finally, this commit improves the logging for this test.
This replaces a reference to the result of partially reducing
aggregations that async search keeps with a reference to the serialized
form of the result of the partial reduction which we need to keep
anyway.
When calling scripts in metric aggregation, the returned metric state is
passed along to the coordinating node to do the final reduce. However,
it is possible the object could contain nested state which is unknown to
StreamOutput/StreamInput. This would then result in the node crashing as
exceptions are not expected in the middle of serialization.
This commit adds a method to StreamOutput that can determine if an
object is writeable by the stream. It uses the same logic
writeGenericValue, special casing each of the supported collection types
to recursively determine if each contained value is itself writeable.
relates #54708
Currently cancelling the RetryableAction does not stop one last run from
being executed. This commit makes a best effort attempt to cancel a
scheduled retry and guards future executions from the action already
being completed.
Currently a failed peer recovery action will fail a recovery. This
includes when the recovery fails due to potentially short lived
transient issues such as rejected exceptions or circuit breaking
errors.
This commit adds the concept of a retryable action. A retryable action
will be retried in the face of certain errors. The action will be retried
after an exponentially increasing backoff period. After a defined time,
the action will time out.
This commit only implements retries for responses that indicate the
target node has NOT executed the action.
If we advance the global checkpoint during commit and sync that
checkpoint after commit, then the assertions in the test won't hold
because the deletion policy did not see the latest global checkpoint
but only the value before committing.
Closes #55680
The disk decider had special handling for the single data node case,
allowing any allocation (skipping watermark checks) for such clusters.
This special handling can now be avoided via a setting.
If a node client (or rather its underlying node) is closed then
any executions on it will just quietly fail as happens in #55660
via closing the nodes on the test thread and asynchronously using
a node client.
Closes #55660
For 7.x, we already implemented the `?prefer_v2_templates` flag and made V2 templates opt-in, so we
can relax the error when updating V1 templates to just a warning. This will still be a hard error
for 8.0+
Relates to #53101
This has no practical impact on users since frozen indices are the only
throttled indices today. However this has an impact on upcoming features
that would use search throttling.
Filtering out throttled indices made sense a couple years ago, but as
we're now improving support for slow requests with `_async_search` and
exploring ways to reduce storage costs, this feature has most likely
become a trap, that we'd like to not have with upcoming features that
would use search throttling.
Relates #54058
Currently there is a clear mechanism to stub sending a request through
the transport. However, this is limited to testing exceptions on the
sender side. This commit reworks our transport related testing
infrastructure to allow stubbing request handling on the receiving side.
Fixes confusing error message when unknown value type is specified in a terms
aggregation. Adds support for parsing "numeric" and "number" value types.
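A hedged sketch of the kind of request this affects (field and aggregation names are hypothetical):
```
{
  "aggs": {
    "ids": {
      "terms": {
        "field": "id",
        "value_type": "number"
      }
    }
  }
}
```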
Fixes #55727
We make sure to filter shard generations for indices that are missing
from the metadata when finalizing a partial snapshot (from concurrent index deletion)
but we failed to account for the case where we manually build a fake metadata instance
for snapshots without the global state.
Fixed this by skipping missing indices, the same way we do when filtering the
shard generations.
Relates #50234
With this change, we will always return true for can_match requests on
idle search shards; otherwise, some shards will never get refreshed if
all search requests perform the can_match phase (i.e., total shards >
pre_filter_shard_size).
Relates #27500
Relates #50043
This commit refactors all spatial Field Mappers to a common
AbstractGeometryFieldMapper that implements shared parameter functionality
(e.g., ignore_malformed, ignore_z_value) and provides a common framework for
overriding type parsing, and building in xpack. Common shape functionality is
implemented in a new AbstractShapeGeometryFieldMapper that is reused and
overridden in GeoShapeFieldMapper, GeoShapeFieldMapperWithDocValues,
LegacyGeoShapeFieldMapper, and ShapeFieldMapper. This abstraction provides a
reusable foundation for adding new xpack features; such as coordinate reference
system support.
This adds a validation to VSParserHelper to ensure that a field or
script or both are specified by the user. This is technically
required today already, but throws an exception much deeper
in the agg framework and has a very unintuitive error for the user
(as well as eating more resources instead of failing early)
The (de)serialization code of the async search response
cannot handle exceptions that extend ElasticsearchException (e.g. ScriptException).
This commit fixes this bug by serializing the error with the more generic
StreamInput#writeException.
We are using `FORCE_STALE_PRIMARY_INSTANCE` in instance equality checks `==`
but were creating new instances of `ExistingStoreRecoverySource` when reading
from the wire. This could break these checks in corner cases, causing
`org.elasticsearch.cluster.routing.allocation.IndexMetadataUpdater#shardStarted`
to not remove the force allocation fake id when starting a shard.
Closes #55513
Currently QueryStringQueryParser already checks if the field limit is breached
at construction time, which e.g. leads to errors if the default field is set to
"*" or the default isn't used and there are more fields than the limit, even if the
query itself does not use all these fields.
This change moves this check to happen after query parsing. QueryStringQueryParser now
keeps track of the fields that are actually resolved while parsing. The size of
that set is later used to check against the limit set by the
`indices.query.bool.max_clause_count` setting.
Backport of #55158
Backport of #55115.
Replace calls to deprecate(String,Object...) with deprecateAndMaybeLog(...),
with an appropriate key, so that all messages can potentially be deduplicated.
This commit changes the way that V2 index, component, and request mappings are merged. Specifically:
- Fields are merged in a "replacement" manner, meaning that the entire definition is replaced rather
than merging the interior configuration
- Mapping metadata (all fields outside of `properties`) are merged recursively.
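A hedged illustration of the field-level replacement (field name and options are hypothetical): if a component template maps `"host": {"type": "keyword", "ignore_above": 1024}` and a later template maps `"host": {"type": "keyword"}`, the merged mapping contains only `{"type": "keyword"}` for that field; `ignore_above` is dropped because the whole field definition is replaced rather than merged.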
The merging for V1 templates does not change.
Relates to #53101
This commit adds a new GeoShapeBoundsAggregator to the spatial plugin and registers it with the GeoShapeValuesSourceType. This enables geo_bounds aggregations on geo_shape fields
This change fixes a problem with updating Index Templates V2.
Validation added in #54933 didn't filter the list of conflicting templates correctly, so
a new template always clashed with itself unless its patterns were changed completely.
Introduces InstantiatingObjectParser which is similar to the
ConstructingObjectParser, but instantiates the object using its constructor
instead of a builder function.
Closes #52499
After #53562, the `geo_shape` field mapper is registered within
a module. This opens the door for introducing a new `geo_shape`
field mapper into the Spatial Plugin that has doc-values support.
This is very much an extension of server's GeoShapeFieldMapper,
but with the addition of the doc values implementation.
There is no guarantee the observer and subsequent CS update will execute
before we move on to the next test here, so we have to wait for the
observer + CS update cycle to complete before moving on.
closes #55481
If some of the nodes are pre-7.8 nodes, they may not support the `index.prefer_v2_templates`
setting (since it exists only on 7.8+). This commit makes sure that we only add this setting to the
index's metadata if all of the nodes are 7.8+. Otherwise we get errors like:
```
[ WARN ][o.e.i.c.IndicesClusterStateService] [v7.7.0-3] [test-snapshot-index][2] marking and sending shard failed due to [failed to create index]
java.lang.IllegalArgumentException: unknown setting [index.prefer_v2_templates] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:544) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:460) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndexService(IndicesService.java:595) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:549) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:176) ~[elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createIndices(IndicesClusterStateService.java:484) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:246) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$5(ClusterApplierService.java:517) [elasticsearch-7.7.0-SNAPSHOT.jar:7.7.0-SNAPSHOT]
```
Relates to #55411
Relates to #53101
Resolves #55539
This change folds the removal of the in-progress snapshot entry
into setting the safe repository generation. Outside of removing
an unnecessary cluster state update, this also has the advantage
of removing a somewhat inconsistent cluster state where the safe
repository generation points at `RepositoryData` that contains a
finished snapshot while it is still in-progress in the cluster
state, making it easier to reason about the state machine of
upcoming concurrent snapshot operations.
In the documentation of `pre_filter_shard_size` we state that the pre-filter
phase will be executed if the request targets more than 128 shards, yet in the
current test randomization we check that the
TransportSearchAction.shouldPreFilterSearchShards is true already for 127
targeted shards. This should be raised to 129 instead.
Closes #55514
Today a read-only engine requires a complete history of operations, in the
sense that its local checkpoint must equal its maximum sequence number. This is
a valid check for read-only engines that were obtained by closing an index
since closing an index waits for all in-flight operations to complete. However
a snapshot may not have this property if it was taken while indexing was
ongoing, but that's ok.
This commit weakens the check for a complete history to exclude the case of a
searchable snapshot.
Relates #50999
This change ensures that we return the latest expiration time
when retrieving the response from the index.
This commit also fixes a bug that stops the garbage collection of saved responses if the async search index is deleted.
Validate that global V2 templates being added don't configure the index.hidden setting.
This also prevents updating the component template to add the index.hidden
setting if that component template is referenced by a global index template.
(cherry picked from commit 2e768981809887649f49d265d039f056985f7e6a)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Peer recovery fails if the primary does not see the recovering replica
in the replication group (when the cluster state update on the primary
is delayed). To verify the local recovery stats, we have to remember
this value in the first try because the local recovery happens once, and
its stats are reset when the recovery fails.
Closes #54829
If more than 100 shard-follow tasks are trying to connect to the remote
cluster, then some of them will abort with "connect listener queue is
full". This is because we retry on ESRejectedExecutionException, but not
on RejectedExecutionException.
This commit adds a new querystring parameter on the following APIs:
- Index
- Update
- Bulk
- Create Index
- Rollover
These APIs now support a `?prefer_v2_templates=true|false` flag. This flag changes the preference during index
creation to use either V2 index templates or V1 templates. This flag defaults to `false` and will be
changed to `true` for 8.0+ in subsequent work.
Additionally, setting this flag internally sets the `index.prefer_v2_templates` index-level setting.
This setting is used so that actions that automatically create a new index (things like rollover
initiated by ILM) will inherit the preference from the original index. This setting is dynamic so
that a transition from v1 to v2 templates can occur for long-running indices grouped by an alias
performing periodic rollover.
This also adds support for sending this parameter to the High Level Rest Client.
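For example, an index creation request that opts into V2 templates would look like this (the index name and settings are hypothetical):
```
PUT /my-new-index?prefer_v2_templates=true
{
  "settings": {
    "number_of_shards": 1
  }
}
```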
Relates to #53101
We don't really need `LinkedHashSet` here. We can assume that all the
entries are unique and just use a list and use the list utilities to
create the cheapest possible version of the list.
Also, this fixes a bug in `addSnapshot` which would mutate the existing
linked hash set on the current instance (fortunately this never caused a real world bug)
and brings the collection in line with the java docs on its getter that claim immutability.
* Add Snapshot Resiliency Test for Master Failover during Delete
We only have very indirect coverage of master failovers during snapshot delete
at the moment. This commit adds a direct test of this scenario and also
an assertion that makes sure we are not leaking any snapshot completion listeners
in the snapshots service in this scenario.
This gives us better coverage of scenarios like #54256 and makes the diff
to the upcoming more consistent snapshot delete implementation in #54705
smaller.
If we run into an INIT state snapshot and the current master didn't create it, it will be removed anyway.
=> no need to have that block another snapshot from starting.
This has practical relevance because on master fail-over after snapshot INIT but before start, the create snapshot request will be retried by the client (as it's a transport master node action) and needlessly fail with an unexpected exception (snapshot clearly didn't exist so it's confusing to the user).
This allowed making two disruption type tests stricter
isHidden was a `Boolean` in order to treat a special case identified
with V1 templates where if the create index request didn't specify if
the index should be hidden or not (ie. isHidden was `null`) but the
index matched a template that specified the `index.hidden` setting we
needed to remove the global templates from the templates we'll apply to
the new index (note: this is important with V1 templates as inheritance
is supported).
With V2 templates we match only one template with an index so the
equivalent check did not need to exist (we added a sanity check in
https://github.com/elastic/elasticsearch/pull/55015 where we make sure
we don't apply an invalid global template - one that specifies the
`index.hidden` setting, but this is a check we make irrespective of the
user specifying or not if the index should be hidden)
This commit makes `isHidden` when matching V2 templates a boolean
primitive, eliminating the need for the `null` state to exist. Note that
some methods which use the matching V2 templates still work with a
`Boolean` object `isHidden` attribute as they are also matching the V1
templates. These methods will pass in `false` instead of `null` when
finding the V2 templates.
(cherry picked from commit c5b923afec911c6ae8fc5179e65ae6bf55dcc5f1)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Some aggregations, such as the Terms* family, will use an alternate
class to represent unmapped shard results (while the rest of the aggs
use the same object but with some form of "empty" or "nullish" values
to represent unmapped).
This was problematic with AbstractWireSerializingTestCase because it
expects the instanceReader to always match the original class. Instead,
we need to use the NamedWriteable version so that the registry
can be consulted for the proper deserialization reader.
If we start unbanning when the last child task completed and that child
task executed with a specific user, then unban requests are denied
because internal requests can't run with a user. We need to remove bans
with the current thread context.
This validates that if the winner v2 template is a global one, it doesn't specify the
index.hidden setting.
(cherry picked from commit 19a97f76aac73e0455053097e5391165a9357427)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Enabled the data streams and itv2 feature system properties in the server module's integ test task.
PR #54726 added java integration tests for data streams, which is why these system properties
need to be enabled when running the release build.
Hotfix to not run into stuck snapshots because of master circuit breaking these requests.
Given that these requests are very small and much of the memory associated with them is already allocated
when the circuit breaker kicks in, the risk of this change introducing a higher chance of master running out
of memory should be very small.
Closes #54714
We rewrite more query builders to MatchNoneQueryBuilders now, which are always
cacheable. We should make sure the tests expect this when the rewritten query
is a MatchNoneQueryBuilder.
Closes #55331
Backport from: #54726
The INCLUDE_DATA_STREAMS indices option controls whether data streams can be resolved in an api for both concrete names and wildcard expressions. If data streams cannot be resolved then a 400 error is returned indicating that data streams cannot be used.
In this pr, the INCLUDE_DATA_STREAMS indices option is enabled in the following APIs: search, msearch, refresh, index (op_type create only) and bulk (index requests with op type create only). In a subsequent later change, we will determine which other APIs need to be able to resolve data streams and enable the INCLUDE_DATA_STREAMS indices option for these APIs.
Whether an api resolves all backing indices of a data stream or only the latest index of a data stream (write index) depends on the IndexNameExpressionResolver.Context.isResolveToWriteIndex().
If isResolveToWriteIndex() returns true then data streams resolve to the latest index (for example: index api) and otherwise a data stream resolves to all backing indices of a data stream (for example: search api).
Relates to #53100
* Add ValuesSource Registry and associated logic (#54281)
* Remove ValuesSourceType argument to ValuesSourceAggregationBuilder (#48638)
* ValuesSourceRegistry Prototype (#48758)
* Remove generics from ValuesSource related classes (#49606)
* fix percentile aggregation tests (#50712)
* Basic thread safety for ValuesSourceRegistry (#50340)
* Remove target value type from ValuesSourceAggregationBuilder (#49943)
* Cleanup default values source type (#50992)
* CoreValuesSourceType no longer implements Writable (#51276)
* Remove generics & hard coded ValuesSource references from Matrix Stats (#51131)
* Put values source types on fields (#51503)
* Remove VST Any (#51539)
* Rewire terms agg to use new VS registry (#51182)
Also adds some basic AggTestCases for untested code
paths (and boilerplate for future tests once the IT are
converted over)
* Wire Cardinality aggregation to work with the ValuesSourceRegistry (#51337)
* Wire Percentiles aggregator into new VS framework (#51639)
This required a bit of a refactor to percentiles itself. Before,
the Builder would switch on the chosen algo to generate an
algo-specific factory. This doesn't work (or at least, would be
difficult) in the new VS framework.
This refactor consolidates both factories together and introduces
a PercentilesConfig object to act as a standardized way to pass
algo-specific parameters through the factory. This object
is then used when deciding which kind of aggregator to create
Note: CoreValuesSourceType.HISTOGRAM still lives in core, and will
be moved in a subsequent PR.
* Remove generics and target value type from MultiVSAB (#51647)
* fix checkstyle after merge (#52008)
* Plumb ValuesSourceRegistry through to QuerySearchContext (#51710)
* Convert RareTerms to new VS registry (#52166)
* Wire up Value Count (#52225)
* Wire up Max & Min aggregations (#52219)
* ValuesSource refactoring: Wire up Sum aggregation (#52571)
* ValuesSource refactoring: Wire up SigTerms aggregation (#52590)
* Soft immutability for VSConfig (#52729)
* Unmute testSupportedFieldTypes, fix Percentiles/Ranks/Terms tests (#52734)
Also fixes Percentiles which was incorrectly specified to only accept
numeric, but in fact also accepts Boolean and Date (because those are
numeric on master - thanks `testSupportedFieldTypes` for catching it!)
* VS refactoring: Wire up stats aggregation (#52891)
* ValuesSource refactoring: Wire up string_stats aggregation (#52875)
* VS refactoring: Wire up median (MAD) aggregation (#52945)
* fix valuesourcetype issue with constant_keyword field (#53041)
this commit implements `getValuesSourceType` for
the ConstantKeyword field type.
master was merged into feature/extensible-values-source
introducing a new field type that was not implementing
`getValuesSourceType`.
* ValuesSource refactoring: Wire up Avg aggregation (#52752)
* Wire PercentileRanks aggregator into new VS framework (#51693)
* Add a VSConfig resolver for aggregations not using the registry (#53038)
* Vs refactor wire up ranges and date ranges (#52918)
* Wire up geo_bounds aggregation to ValuesSourceRegistry (#53034)
This commit updates the geo_bounds aggregation to depend
on registering itself in the ValuesSourceRegistry
relates #42949.
* VS refactoring: convert Boxplot to new registry (#53132)
* Wire-up geotile_grid and geohash_grid to ValuesSourceRegistry (#53037)
This commit updates the geo*_grid aggregations to depend
on registering itself in the ValuesSourceRegistry
relates to the values-source refactoring meta issue #42949.
* Wire-up geo_centroid agg to ValuesSourceRegistry (#53040)
This commit updates the geo_centroid aggregation to depend
on registering itself in the ValuesSourceRegistry.
relates to the values-source refactoring meta issue #42949.
* Fix type tests for Missing aggregation (#53501)
* ValuesSource Refactor: move histo VSType into XPack module (#53298)
- Introduces a new API (`getBareAggregatorRegistrar()`) which allows plugins to register aggregations against existing agg definitions defined in Core.
- This moves the histogram VSType over to XPack where it belongs. `getHistogramValues()` still remains as a Core concept
- Moves the histo-specific bits over to xpack (e.g. the actual aggregator logic). This requires extra boilerplate since we need to create a new "Analytics" Percentile/Rank aggregators to deal with the histo field. Doubly-so since percentiles/ranks are extra boiler-plate'y... should be much lighter for other aggs
* Wire up DateHistogram to the ValuesSourceRegistry (#53484)
* Vs refactor parser cleanup (#53198)
Co-authored-by: Zachary Tong <polyfractal@elastic.co>
Co-authored-by: Zachary Tong <zach@elastic.co>
Co-authored-by: Christos Soulios <1561376+csoulios@users.noreply.github.com>
Co-authored-by: Tal Levy <JubBoy333@gmail.com>
* First batch of easy fixes
* Remove List.of from ValuesSourceRegistry
Note that we intend to have a follow up PR dealing with the mutability
of the registry, so I didn't even try to address that here.
* More compiler fixes
* More compiler fixes
* More compiler fixes
* Precommit is happy and so am I
* Add new Core VSTs to tests
* Disabled supported type test on SigTerms until we can backport its fix
* fix checkstyle
* Fix test failure from semantic merge issue
* Fix some metaData->metadata replacements that got lost
* Fix list of supported types for MinAggregator
* Fix list of supported types for Avg
* remove unused import
Co-authored-by: Zachary Tong <polyfractal@elastic.co>
Co-authored-by: Zachary Tong <zach@elastic.co>
Co-authored-by: Christos Soulios <1561376+csoulios@users.noreply.github.com>
Co-authored-by: Tal Levy <JubBoy333@gmail.com>
The main changes are:
1. Throw an error when updating `include_in_parent` or `include_in_root` attribute of nested field dynamically by the PUT mapping API.
2. Add a test for the change.
Closes#53792
Co-authored-by: bellengao <gbl_long@163.com>
Modify the value of nowInMillis in queryShardContext to the current timestamp, because the
value will be used later when validating a filtered alias that uses `now` in a date_nanos
range query.
When retrieving the snapshots for a set of repos or deleting a single snapshot, it's possible for
the body of the `ActionListener`'s `onResponse` method to throw an Exception. In this case, the
`errHandler` passed in may not be executed, resulting in the `running` boolean not being reset back
to false.
This commit uses `ActionListener.wrap(...)` instead of creating a new ActionListener, which ensures
that if the `onResponse` fails in any way, the `onFailure` handler is still called.
Resolves#55217
Queries like script_score wrap a query and modify its score. If the inner query
rewrites to match_none, then the entire query can rewrite to match_none. This
lets us detect that certain shards can be skipped during the 'can match' phase.
This was a simple change that seemed like it would help in some cases. But it
will likely not have a huge impact, since in many use cases where the 'can
match' phase is helpful, the search is not sorted by score.
Today we pass the `RepositoriesService` to the searchable snapshots plugin
during the initialization of the `RepositoryModule`, forcing the plugin to be a
`RepositoryPlugin` even though it does not implement any repositories.
After discussion we decided it best for now to pass this in via
`Plugin#createComponents` instead, pending some future work in which plugins
can depend on services more dynamically.
Today the voting config exclusions API accepts node filters and resolves them
to a collection of node IDs against the current cluster membership.
This is problematic since we may want to exclude nodes that are not currently
members of the cluster. For instance:
- if attempting to remove a flaky node from the cluster you cannot reliably
exclude it from the voting configuration since it may not reliably be a
member of the cluster
- if `cluster.auto_shrink_voting_configuration: false` then naively shrinking
the cluster will remove some nodes but will leave their node IDs in the
voting configuration. The only way to clean up the voting configuration is to
grow the cluster back to its original size (potentially replacing some of the
voting configuration) and then use the exclusions API.
This commit adds an alternative API that accepts node names and node IDs but
not node filters in general, and deprecates the current node-filters-based API.
Relates #47990.
Backport of #50836 to 7.x.
Co-authored-by: zacharymorn <zacharymorn@gmail.com>
The ResourceWatcherService enables watching of files for modifications
and deletions. During startup various consumers register the files that
should be watched by this service. There is behavior that might be
unexpected in that the service may not start polling until later in the
startup process due to the use of lifecycle states to control when the
service actually starts the jobs to monitor resources. This change
removes this unexpected behavior so that upon construction the service
has already registered its tasks to poll resources for changes. In
making this modification, the service no longer extends
AbstractLifecycleComponent and instead implements the Closeable
interface so that the polling jobs can be terminated when the service
is no longer required.
Relates #54867
Backport of #54993
`auto_date_histogram`'s reduction behavior is fairly complex and we have
some fairly complex testing logic for it but it is super difficult to
look at that testing logic and say "ah, that is what it does in this
case". This adds some tests explicit (non-randomized) tests of the
reduction logic that *should* be easier to read.
I've noticed that a lot of our tests are using deprecated static methods
from the Hamcrest matchers. While this is not a big deal in any
objective sense, it seems like a small good thing to reduce compilation
warnings and be ready for a new release of the matcher library if we
need to upgrade. I've also switched a few other methods in tests that
have drop-in replacements.
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0, we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
need to ourselves use the same task avoidance mechanisms to not trigger
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
Upgrade to the Lucene 8.5.1 release, which contains a fix for a bug that might introduce index corruption when deleting data from an index that was previously shrunk.
We can be a little more efficient when aborting a snapshot. Since we know the new repository
data after finalizing the aborted snapshot we can pass it down to the snapshot completion listeners.
This way, we don't have to fork off to the snapshot threadpool to get the repository data when the listener completes and can directly submit the delete task with high priority straight from the cluster state thread.
Snapshot deletes should first check the cluster state for an in-progress snapshot
and try to abort it before checking the repository contents. This allows for atomically
checking and aborting a snapshot in the same cluster state update, removing all possible
races where a snapshot that is in-progress could not be found if it finishes between
checking the repository contents and the cluster state.
Also removes confusing races, where checking the cluster state off of the cluster state thread
finds an in-progress snapshot that is then not found in the cluster state update to abort it.
Finally, the logic to use the repository generation of the in-progress snapshot + 1 was error
prone because it would always fail the delete when the repository had a pending generation different from its safe generation when a snapshot started (leading to the snapshot finalizing at a
higher generation).
These issues (particularly that last point) can easily be reproduced by running `SLMSnapshotBlockingIntegTests` in a loop with current `master` (see #54766).
The snapshot resiliency test for concurrent snapshot creation and deletion was made to more
aggressively start the delete operation so that the above races would become visible.
Previously, the fact that deletes would never coincide with initializing snapshots resulted
in a number of the above races not reproducing.
This PR is the most consistent I could get snapshot deletes without changes to the state machine. The fact that aborted deletes will not put the delete operation in the cluster state before waiting for the snapshot to abort still allows for some possible (though practically very unlikely) races. These will be fixed by a state-machine change in upcoming work in #54705 (which will have a much simpler and clearer diff after this change).
Closes#54766
* Remove Redundant Cluster State during Snapshot INIT + Master Failover (#54420)
Similar to #54395 we know that a snapshot in INIT state has not
written anything to the repository yet. If we see one from a master
failover, there is no point in moving it to ABORTED before removing it
from the cluster state in a subsequent CS update.
Instead, we can simply remove its job from the CS the first time
we see it on master failover and be done with it.
* Move Snapshot Status Related Method to Appropriate Places
Lots of things living in `SnapshotsService` for no reason other than
that `SnapshotsService` provides the `RepositoriesService`.
Cleaning this up to directly use `RepositoriesService` in the relevant
transport actions and by that shortening the already very complex `SnapshotsService`.
Just like in `AbstractCoordinatorTestCase` we can't just assume the cluster
is stable once all the cluster states align since stray follower/leader check
tasks could still hit us after a disconnect, causing future test operations to fail.
=> fixed by running all tasks in the time span during which these checks could still
arrive, before validating that cluster states align on all nodes, just like we do in the
coordinator tests.
Closes#55103
Provides basic repository-level stats that will allow us to get some insight into how many
requests are actually being made by the underlying SDK. Currently only tracks GET and LIST
calls for S3 repositories. Most of the code is unfortunately boilerplate to add a new endpoint
that will help us better understand some of the low-level dynamics of searchable snapshots.
This is a first cut at giving NodeInfo the ability to carry a flexible
list of heterogeneous info responses. The trick is to be able to
serialize and deserialize an arbitrary list of blocks of information. It
is convenient to be able to deserialize into usable Java objects so that
we can aggregate nodes stats for the cluster stats endpoint.
In order to provide a little bit of clarity about which objects can and
can't be used as info blocks, I've introduced a new interface called
"ReportingService."
I have removed the hard-coded getters (e.g., getOs()) in favor of a
flexible method that can return heterogeneous kinds of info blocks
(e.g., getInfo(OsInfo.class)). Taking a class as an argument removes the
need to cast in the client code.
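As an illustration of that class-keyed lookup, here is a minimal self-contained sketch (not the actual NodeInfo code; `ReportingServiceInfo` and `OsInfo` are hypothetical stand-ins) of how taking the class as an argument removes the cast from client code:
```java
import java.util.HashMap;
import java.util.Map;

public class InfoBlocks {
    // Stand-in for the marker interface that info blocks implement.
    public interface ReportingServiceInfo {}

    // Hypothetical info block used only for this example.
    public static class OsInfo implements ReportingServiceInfo {
        final String name;
        OsInfo(String name) { this.name = name; }
    }

    private final Map<Class<? extends ReportingServiceInfo>, ReportingServiceInfo> infos = new HashMap<>();

    public void addInfo(ReportingServiceInfo info) {
        infos.put(info.getClass(), info);
    }

    // The class argument lets the compiler pick the return type, so callers
    // write getInfo(OsInfo.class) instead of casting a generic getter result.
    public <T extends ReportingServiceInfo> T getInfo(Class<T> clazz) {
        return clazz.cast(infos.get(clazz));
    }

    public static void main(String[] args) {
        InfoBlocks blocks = new InfoBlocks();
        blocks.addInfo(new OsInfo("Linux"));
        OsInfo os = blocks.getInfo(OsInfo.class); // no cast needed
        System.out.println(os.name);
    }
}
```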
With this change, when a task is canceled, the task manager will cancel
not only its direct child tasks but all also its descendant tasks.
Closes#50990
Adds support for filters to T-Test aggregation. The filters can be used to
select populations based on some criteria and use values from the same or
different fields.
Closes#53692
The secure_settings_password was never taken into consideration in
the ReloadSecureSettings API. This commit fixes that and adds
necessary REST layer testing. Doing so, it also:
- Allows TestClusters to have a password protected keystore
so that it can be set for tests.
- Adds a parameter to the run task so that Elasticsearch can
be run with a password protected keystore from source.
The usage of local parameter for GetFieldMappingRequest has been removed from the underlying transport action since v2.0.
This PR deprecates the parameter from rest layer. It will be removed in next major version.
We added a fancy method to provide random realistic test data to the
reduction tests in #54910. This uses that to remove some of the more
esoteric machinations in the agg tests. This will marginally increase
the coverage of the serialization tests and, more importantly, remove
some mysterious value generation code that only really made sense for
random reduction tests but was used all over the place. It doesn't, on
the other hand, make the tests shorter. Just *hopefully* more clear.
I only cleaned up a few tests this way. If we like this it'd probably be
worth grabbing others.
We found some problems during the test.
Data: 200Million docs, 1 shard, 0 replica
hits | avg | sum | value_count |
----------- | ------- | ------- | ----------- |
20,000 | .038s | .033s | .063s |
200,000 | .127s | .125s | .334s |
2,000,000 | .789s | .729s | 3.176s |
20,000,000 | 4.200s | 3.239s | 22.787s |
200,000,000 | 21.000s | 22.000s | 154.917s |
The performance of `avg`, `sum` and others is very close when performing
statistics, but the performance of `value_count` has always been poor,
not even on the same order of magnitude. Based on some common-sense
knowledge, we think that `value_count` and `sum` are similar operations
and should take about the same time. Therefore, we looked into the
`value_count` agg.
The principle of counting in es is to traverse the field of each
document. If the field is an ordinary value, the count value is
increased by 1. If it is an array type, the count value is increased
by n. However, the problem lies in traversing each document and reading
out the field, which is converted from its on-disk representation into an
object in the Java language. We summarize the current problems with Elasticsearch as:
- Number cast to string overhead, and GC problems caused by a large
number of strings
- After the number type is converted to string, sorting and other
unnecessary operations are performed
Here is the proof of type conversion overhead.
```
// Java long to string source code, getChars is very time-consuming.
public static String toString(long i) {
    int size = stringSize(i);
    if (COMPACT_STRINGS) {
        byte[] buf = new byte[size];
        getChars(i, size, buf);
        return new String(buf, LATIN1);
    } else {
        byte[] buf = new byte[size * 2];
        StringUTF16.getChars(i, size, buf);
        return new String(buf, UTF16);
    }
}
```
test type | average | min | max | sum
------------ | ------- | ---- | ----------- | -------
double->long | 32.2ns | 28ns | 0.024ms | 3.22s
long->double | 31.9ns | 28ns | 0.036ms | 3.19s
long->String | 163.8ns | 93ns | 1921 ms | 16.3s
As the table shows, the overhead of converting `long` to `String` is
particularly serious.
Our optimization is actually very simple: manage the different types
separately instead of uniformly converting everything to strings for
unified processing. We added type identification in ValueCountAggregator,
and special handling for the number and geo_point types so that their
type conversion is skipped. Because far fewer strings and string constants
are created, the improvement is very obvious.
hits | avg | sum | value_count | value_count | value_count | value_count | value_count | value_count |
| | | double | double | keyword | keyword | geo_point | geo_point |
| | | before | after | before | after | before | after |
----------- | ------- | ------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
20,000      | .038s   | .033s   | .063s       | .026s       | .030s       | .030s       | .038s       | .015s       |
200,000     | .127s   | .125s   | .334s       | .078s       | .116s       | .099s       | .278s       | .031s       |
2,000,000   | .789s   | .729s   | 3.176s      | .439s       | .348s       | .386s       | 3.365s      | .178s       |
20,000,000 | 4.200s | 3.239s | 22.787s | 2.700s | 2.500s | 2.600s | 25.192s | 1.278s |
200,000,000 | 21.000s | 22.000s | 154.917s | 18.990s | 19.000s | 20.000s | 168.971s | 9.093s |
- The results are more in line with common sense. `value_count` is about
the same as `avg`, `sum`, etc., or even faster than these. Previously,
`value_count` was much slower than `avg` and `sum`, not even on the same
order of magnitude when the amount of data was large.
- When calculating numeric types such as `double` and `long`, the
performance is improved by about 8 to 9 times; when calculating the
`geo_point` type, the performance is improved by 18 to 20 times.
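To make the overhead concrete, here is a tiny self-contained Java sketch (not Elasticsearch code) that contrasts counting values by first converting each one to a `String` with counting them directly; the string path is the cost the old `value_count` implementation effectively paid:
```java
public class ValueCountSketch {
    // Old-style path: every numeric value is turned into a String before being counted.
    static long countViaString(long[] values) {
        long count = 0;
        for (long v : values) {
            String s = Long.toString(v); // allocation + getChars work for every value
            if (s.isEmpty() == false) {
                count++;
            }
        }
        return count;
    }

    // Optimized path: numeric values are counted directly, no conversion at all.
    static long countDirect(long[] values) {
        return values.length;
    }

    public static void main(String[] args) {
        long[] values = new long[2_000_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
        long t0 = System.nanoTime();
        long viaString = countViaString(values);
        long t1 = System.nanoTime();
        long direct = countDirect(values);
        long t2 = System.nanoTime();
        System.out.printf("via String: %d in %d ms, direct: %d in %d ms%n",
            viaString, (t1 - t0) / 1_000_000, direct, (t2 - t1) / 1_000_000);
    }
}
```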
Currently the remote cluster sniff connection process can succeed even
if no connections are opened. This commit fixes this by failing the
connection process if no connections are successfully opened.
This commit adds an explicit test of time zone rewrite on date nanos
field. Today this is working but we need tests to ensure that we don't
break it unintentionally.
Makes query result serialization more robust by propagating possible
IOExceptions that can occur during shard level result serialization to the
caller instead of throwing AssertionError that is not intercepted.
Fixes#54665
The use of available processors, the terminology, and the settings
around it have evolved over time. This commit cleans up some places in
the code and in the docs to adjust to the current terminology.
* Prevent putting V2 index template when overlapping with existing template
This change prevents putting V2 index template when it would overlap with existing V2 template
of the same priority
Relates to #53101
This changes the behavior of aggregations when search is performed
against enough shards to enable "batch reduce" mode. In this case we
always store aggregations in serialized form rather than as a
traditional Java reference. This should shrink the memory usage of large
aggregations at the cost of slightly slowing down aggregations where the
coordinating node is also a data node. Because we're only doing this
when there are many shards this is likely to be fairly rare.
As a side effect this lets us add logs for the memory usage of the aggs
buffer:
```
[2020-04-03T17:03:57,052][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [1320->448] max [1320]
[2020-04-03T17:03:57,089][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [1328->448] max [1328]
[2020-04-03T17:03:57,102][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [1328->448] max [1328]
[2020-04-03T17:03:57,103][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [1328->448] max [1328]
[2020-04-03T17:03:57,105][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs final reduction [888] max [1328]
```
These are useful, but you need to keep some things in mind before
trusting them:
1. The buffers are oversized ala Lucene's ArrayUtils. This means that we
are using more space than we need, but probably not much more.
2. Before they are merged the aggregations are inflated into their
traditional Java objects which *probably* take up a lot more space
than the serialized form. That is, after all, the reason why we store
them in serialized form in the first place.
And, just because I can, here is another example of the log:
```
[2020-04-03T17:06:18,731][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [147528->49176] max [147528]
[2020-04-03T17:06:18,750][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [147528->49176] max [147528]
[2020-04-03T17:06:18,809][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [147528->49176] max [147528]
[2020-04-03T17:06:18,827][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs partial reduction [147528->49176] max [147528]
[2020-04-03T17:06:18,829][TRACE][o.e.a.s.SearchPhaseController] [runTask-0] aggs final reduction [98352] max [147528]
```
I got that last one by building a ten shard index with a million docs in
it and running a `sum` in three layers of `terms` aggregations, all on
`long` fields, and with a `batched_reduce_size` of `3`.
`PipelineAggregator`s are only sent across the wire for backwards
compatibility with 7.7.0. `PipelineAggregator` needs to continue to
implement `NamedWriteable` for backwards compatibility but pipeline
aggregations created after 7.7.0 need not implement any of the methods
in that interface because we'll never attempt to call them. So this
creates implementations in `PipelineAggregator` (the base class) that
just throw exceptions.
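A minimal sketch of that pattern (simplified; not the real `PipelineAggregator` class): the base class keeps the `NamedWriteable`-shaped methods so pre-7.8.0 wire compatibility still works, while the default implementations simply throw because they are never called for newer pipelines:
```java
public abstract class PipelineAggregatorSketch {
    // Kept only for wire compatibility with pre-7.8.0 nodes; pipeline
    // aggregations created after 7.7.0 are never serialized, so these
    // defaults just refuse to run.
    public String getWriteableName() {
        throw new UnsupportedOperationException(
            "pipeline aggregations created after 7.7.0 are never serialized");
    }

    public void writeTo(Object out) {
        throw new UnsupportedOperationException(
            "pipeline aggregations created after 7.7.0 are never serialized");
    }
}
```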
* HLRC support for Index Templates V2 (#54838)
* HLRC support for Index Templates V2
This change adds High Level Rest Client support for Index Templates V2.
Relates to #53101
* fixed compilation error
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
When a new index is rolled over, we check to see whether there are any duplicate alias
configurations in the index template configuration. Additionally, when a new index is created from a
bulk action, we check the templates to see if there are any ingest pipelines that need to be applied
to the index that will be newly created.
Both of these actions previously checked the v1 templates for their settings, they now also check
the v2 index templates, with the v2 index templates taking precedence similar to the way they do
when creating an index.
Relates to #53101
This change reintroduces the system index APIs for Kibana without the
changes made for marking what system indices could be accessed using
these APIs. In essence, this is a partial revert of #53912. The changes
for marking what system indices should be allowed access will be
handled in a separate change.
The APIs introduced here are wrapped versions of the existing REST
endpoints. A new setting is also introduced since the Kibana system
indices' names are allowed to be changed by a user in case multiple
instances of Kibana use the same instance of Elasticsearch.
Relates #52385
Backport of #54858
Today we construct the node environment relatively early in the node
construction process, before we have even constructed the final
environment, which means before the final settings are
available. Rather, we should defer constructing the node environment
until the final environment is available. This commit does that. This
helps delay node environment construction until after the node roles are
properly determined, which is important since the node environment does
some checks on the basis of whether or not the node is neither a data
nor a master node (such nodes should not have index metadata nor shard
data on disk). Note that a consequence of this is that the initial log
line that displays the node name, node ID, and cluster name does not
appear until later in startup (after we have loaded plugins). This seems
okay.
Guava was removed from Elasticsearch many years ago, but remnants of it
remain due to transitive dependencies. When a dependency pulls guava
into the compile classpath, devs can inadvertently begin using methods
from guava without realizing it. This commit moves guava to a runtime
dependency in the modules where it is needed.
Note that one special case is the html sanitizer in watcher. The third
party dep uses guava in the PolicyFactory class signature. However, only
calling a method on the PolicyFactory actually causes the class to be
loaded, a reference alone does not trigger compilation to look at the
class implementation. There we utilize a MethodHandle for invoking the
relevant method at runtime, where guava will continue to exist.
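As a rough sketch of that MethodHandle trick (the lookup target here is the OWASP sanitizer's `PolicyFactory#sanitize(String)`, but treat the exact class and method as an assumption), the caller resolves the method at runtime so the guava-referencing type never needs to be on its compile classpath:
```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class SanitizerInvoker {
    /**
     * Invokes policyFactory.sanitize(html) without compiling against the
     * PolicyFactory class, whose signature drags in guava types.
     */
    public static String sanitize(Object policyFactory, String html) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.publicLookup();
        MethodType type = MethodType.methodType(String.class, String.class);
        // The class (and guava) is only loaded here, at runtime.
        MethodHandle handle = lookup.findVirtual(policyFactory.getClass(), "sanitize", type);
        return (String) handle.invoke(policyFactory, html);
    }
}
```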
IndexShardIT#testMaybeFlush relies on the assumption that the safe commit
and translog deletion policy have advanced after IndexShard#sync returns.
This assumption does not hold if there's a race with the global checkpoint sync.
Closes#52223
This commit introduces a new `geo` module that is intended
to be contain all the geo-spatial-specific features in server.
As a first step, the responsibility of registering the geo_shape
field mapper is moved to this module.
Co-authored-by: Nicholas Knize <nknize@gmail.com>
This commit moves the action name validation and circuit breaking into
the InboundAggregator. This work is valuable because it lays the
groundwork for incrementally circuit breaking as data is received.
This PR includes the following behavioral change:
Handshakes contribute to circuit breaking, but cannot be broken. They
currently do not contribute nor are they broken.
In 7.x, an index template will fail to apply if it contains a `_default_`
mapping. Several users have expressed confusion over the fact that loading the
template doesn't show any default mappings. This docs change clarifies that in
order to see all mappings in the template, you must pass `include_type_name`.
Currently the TransportHandshaker has a specialized codepath for sending
a response. In other work, we are going to start having handshakes
contribute to circuit breaking (while not being breakable). This commit
moves in that direction by allowing the handshaker to respond using a
standard TcpTransportChannel similar to other requests.
This removes pipeline aggregators from the aggregation result tree
except for a single field used for backwards compatibility with pre-7.8
versions of Elasticsearch. That field isn't populated unless we are
serializing to pre-7.8 Elasticsearch. So, good news! We no longer build
pipeline aggregators on the data node. Most of the time.
If you didn't explicitly set `global_ordinals` execution mode we were
never collecting the information that we needed to select `depth_first`
based on the request so we were always defaulting to `breadth_first`.
This fixes it so we collect the information.
* Remove Unused Snapshot Status Values
This is a left-over from before #41940 when we used the same status enum for the shards
and the snapshots overall. The two removed values were never used on the shard level
so we can simply remove them here.
* Only allow retrieving a single index or component template
This changes the Index Template v2 APIs to only allow retrieving a single "named" entity, where the
named entity can be nothing (return everything), a wildcard (return the ones that match), or the
name of a template.
Relates to #53101
* Throw exception when resource is not found
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
`scripted_metric` did not work with cross cluster search because it
assumed that you'd never perform a partial reduction, serialize the
results, and then perform a final reduction. That
serialized-after-partial-reduction step was broken.
This is also required to support #54758.
This replaces the last bit of validation that pipeline aggregations
performed on the data nodes with explicit checks in a few
`PipelineAggregationBuilders`. We were *already* catching these
validation errors for pipeline aggregations that require that their
parent be sequentially ordered. This just adds validation for pipelines
that require *any* parent like `bucket_selector` and `bucket_sort`.
This is a backport of #54803 for 7.x.
This pull request cherry picks the squashed commit from #54803 with the additional commits:
6f50c92 which adjusts master code to 7.x
a114549 to mute a failing ILM test (#54818)
48cbca1 and 50186b2 that cleans up and fixes the previous test
aae12bb that adds a missing feature flag (#54861)
6f330e3 that adds missing serialization bits (#54864)
bf72c02 that adjust the version in YAML tests
a51955f that adds some plumbing for the transport client used in integration tests
Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Andrei Dan <andrei.dan@elastic.co>
This commit updates the link to the JDK 14 compiler bug that we have
found. At the time that we committed the workaround, we had a submission
ID, but not yet the public bug URL. This commit adds the public bug URL.
Today when canceling a task we broadcast ban/unban requests to all nodes
in the cluster. This strategy does not scale well for hierarchical
cancellation. With this change, we will track outstanding child requests
and broadcast the cancellation to only nodes that have outstanding child
tasks. This change also prevents a parent task from sending child
requests once it got canceled.
Relates #50990
Supersedes #51157
Co-authored-by: Igor Motov <igor@motovs.org>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
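A simplified, self-contained sketch of that bookkeeping (the names are illustrative, not the actual TaskManager API): track which nodes have outstanding child requests, refuse new children once cancelled, and only notify those nodes on cancellation:
```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ChildTaskTracker {
    private final Map<String, Integer> outstandingPerNode = new ConcurrentHashMap<>();
    private volatile boolean cancelled = false;

    // Called before sending a child request to nodeId; rejects it if the parent is cancelled.
    public void registerChildRequest(String nodeId) {
        if (cancelled) {
            throw new IllegalStateException("parent task is cancelled, cannot send child requests");
        }
        outstandingPerNode.merge(nodeId, 1, Integer::sum);
    }

    // Called when a child request to nodeId completes.
    public void unregisterChildRequest(String nodeId) {
        outstandingPerNode.computeIfPresent(nodeId, (node, count) -> count == 1 ? null : count - 1);
    }

    // Instead of broadcasting to the whole cluster, only nodes with
    // outstanding child tasks need to receive the ban request.
    public Set<String> cancelAndListNodesToNotify() {
        cancelled = true;
        return outstandingPerNode.keySet();
    }
}
```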
This commit addresses a long-standing `// TODO` in the coordinator tests to
ensure that the correct no-master block is applied when a node restarts while
disconnected from the cluster.
It also strengthens this test to check that the no-master block is applied
correctly on all nodes, not just the previous master.
Test `testAckListenerReceivesNacksFromFollowerInHigherTerm` was suppressed as
when it was written it didn't work due to the lack of proper term bumping. We
added term bumping but never got around to implementing this test. This commit
addresses this.
Removing a few spots where we clearly don't have to fork to the generic or management
pool since either we only interpret the current cluster state or fork-off directly to
some other pool in the transport action logic anyway.
We should never write a circular reference exception as we will fail a
node with StackOverflowError. However, we have one in #53589.
I tried but failed to find its location. With this commit, we will avoid
StackOverflowError in production and detect circular exceptions in
tests.
Closes#53589
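A small self-contained sketch of how such a circular cause chain can be detected in tests before it turns into a StackOverflowError (an illustration of the idea, not the actual assertion used):
```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

public class CircularCauseCheck {
    public static boolean hasCircularReference(Throwable t) {
        // Identity-based set: we care about revisiting the same instance.
        Set<Throwable> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        for (Throwable current = t; current != null; current = current.getCause()) {
            if (seen.add(current) == false) {
                return true; // we have visited this exact exception before -> cycle
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Exception a = new Exception("a");
        Exception b = new Exception("b", a);
        a.initCause(b); // create a cycle: a -> b -> a
        System.out.println(hasCircularReference(a)); // true
    }
}
```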
We recently cleaned up the use of the word "metadata" across the
codebase. A few additional uses have trickled in, likely from
in-progress work. This commit cleans up these last few instances.
Relates #54519
* Use V2 index templates during index creation
This commit changes our index creation code to use (and favor!) V2 index templates during index
creation. The creation precedence goes like so, in order of precedence:
- Existing source `IndexMetadata` - for example, when recovering from a peer or a shrink/split/clone
where index templates should not be applied
- A matching V2 index template, if one is found
- When a V2 template is found, all component templates (in the `composed_of` field) are applied
in the order that they appear, with the index template having the 2nd highest precedence (the
create index request always has the top priority when it comes to index settings)
- All matching V1 templates (the old style)
This also adds index template validation when `PUT`-ing a new v2 index template (because this was
required) and ensures that all index and component templates specify *no* top-level mapping type (it
is automatically added when the template is added to the cluster state).
This does not yet implement fine-grained component template merging of mappings, where we favor
merging only a single field's configuration, that will be done in subsequent work.
This also keeps the existing hidden index behavior present for v1 templates, where a hidden index
will match v2 index templates unless they are global (`*`) templates.
Relates to #53101
Some field name constants were not updated when we moved from "string" to "text"
and "keyword" fields. Renaming them makes it easier and faster to know which
field type is used in tests subclassing this base test case.
The test had errors around time units that have different lengths - think
leap years or months that aren't 30 days. This fixes those errors. In
the process I've changed a bunch of things to debug the problem:
* Replace `currentTimeMillis` with a random time. Now the test fails
randomly! Wonderful. Much better than on random days of the month.
* Generate buckets "closer together" to test random reduction. Without
this we were super frequently getting stuck in the "year of century"
rounding because *some* of the buckets we built were far apart.
This generates a much greater variety of tests.
* Implement `toString` on `RoundingInfo` so I can debug without going
crazy.
* Switch keys in the bucket assertions from epoch millis to `Instant`s
so we can read the failures.
Closes#54540
Closes#39497
Renames the feature flag system properties from `*_flag_registered` to `*_feature_enabled`.
The previous name indicated that a flag was registered,
whilst the feature flag actually controls whether a
feature is enabled.
This commit workarounds a bug in the JDK 14 compiler. It is choking on a
method reference, so we substitute a lambda expression instead. The JDK
bug ID is 9064309.
In #33933 we disallowed changing the `enabled` parameter in object mappings.
However, the fix didn't cover the root object mapper. This PR adjusts the change
to also include the root mapper and clarifies the error message.
Removes pipeline aggregations from the aggregation result tree as they
are no longer used. This stops us from building the pipeline aggregators
at all on data nodes except for backwards compatibility serialization.
This will save a tiny bit of space in the aggregation tree which is
lovely, but the biggest benefit is that it is a step towards simplifying
pipeline aggregators.
This only does about half of the work to remove the pipeline aggs from
the tree. Removing all of it would, well, double the size of the change
and make it harder to review.
I derped out on a last minute bug fix when backporting #54282 and it
only causes the tests to fail about half the time. So I didn't catch
it until after merging. Great! This fixes it.
- Consolidates HDR/TDigest factories into a single factory
- Consolidates most HDR/TDigest builder into an abstract builder
- Deprecates method(), compression(), numSigFig() in favor of a new
unified PercentileConfig object
- Disallows setting algo options that don't apply to current algo
The unified config method carries both the method and algo-specific
setting. This provides a mechanism to reject settings that apply
to the wrong algorithm. For BWC the old methods are retained
but marked as deprecated, and can be removed in future versions.
Co-authored-by: Mark Tozzi <mark.tozzi@gmail.com>
This fixes pipeline aggregations used in cross cluster search from an older
version of Elasticsearch to a newer version of Elasticsearch. I broke
this in #53730 when I was too aggressive in shutting off serialization
of pipeline aggs. In particular, this comes up when the coordinating
node is pre-7.8.0 and the gateway node is on or after 7.8.0.
The fix is another step down the line to remove pipeline aggregators
from the aggregation tree. Sort of. It create a new
`List<PipelineAggregator>` member in `InternalAggregation` *but* it is
only used for bwc serialization and it is fed by the mechanism
established in #53730 to read the pipelines from the
Right now you can't tell from the task description whether or not the
search is a scroll. This adds that information to the description which
is super useful if you are trying to debug a cluster that is running out
of scroll contexts.
Today the usage service can let in some issues, such as handlers that do
not have a name, where the errors do not manifest until later (calling
the usage API), or conflicting handlers with the same name. This commit
addresses this by adding some validation to the usage service.
Adds tests for supported ValuesSourceTypes, unmapped fields, scripting,
and the missing param. The tests for unmapped fields and scripting are
migrated from the StatsIT integration test
This adds tests for supported ValuesSourceTypes, unmapped fields,
scripting, and the missing param. The tests for unmapped fields and
scripting are migrated from the SumIT integration test
Tests for unmapped fields, the missing parameter, scripting, and correct
ValuesSource types in MissingAggregatorTests. Basic yaml tests for the
missing agg
For #42949
* Refactor nodes stats request builders to match requests (#54363)
* Remove hard-coded setters from NodesInfoRequestBuilder
* Remove hard-coded setters from NodesStatsRequest
* Use static imports to reduce clutter
* Remove uses of old info APIs
Because -1 is technically a valid TimeValue (as a sentinel value), it is now
explicitly checked for when validating gc_thresholds. The tests are also
adjusted to test this case separately from other negative values.
Refactor SearchHit to have separate document and meta fields.
This is a part of bigger refactoring of issue #24422 to remove
dependency on MapperService to check if a field is metafield.
Relates to PR: #38373
Relates to issue #24422
Co-authored-by: sandmannn <bohdanpukalskyi@gmail.com>
This is a follow up to a previous commit that renamed MetaData to
Metadata in all of the places. In that commit in master, we renamed
META_DATA to METADATA, but lost this on the backport. This commit
addresses that.
This is a simple naming change PR, to fix the fact that "metadata" is a
single English word, and for too long we have not followed general
naming conventions for it. We are also not consistent about it, for
example, METADATA instead of META_DATA if we were trying to be
consistent with MetaData (although METADATA is correct when considered
in the context of "metadata"). This was a simple find and replace across
the code base, only taking a few minutes to fix this naming issue
forever.
* Comprehensively test supported/unsupported field type:agg combinations (#52493)
This adds a test to AggregatorTestCase that allows us to programmatically
verify that an aggregator supports or does not support a particular
field type. It fetches the list of registered field type parsers,
creates a MappedFieldType from the parser and then attempts to run
a basic agg against the field.
A supplied list of supported VSTypes are then compared against the
output (success or exception) and succeeds or fails the test accordingly.
Co-Authored-By: Mark Tozzi <mark.tozzi@gmail.com>
* Skip fields that are not aggregatable
* Use newIndexSearcher() to avoid incompatible readers (#52723)
Lucene's `newSearcher()` can generate readers like ParallelCompositeReader
which we can't use. We need to instead use our helper `newIndexSearcher`
The docs here add nothing compared to those in the package. If anything
they are somewhat confusing since they don't give all necessary details to understand the snapshot process.
=> remove them and link to the complete docs at the package level
Bulk requests currently keep a reference to all bulk item requests until every one of them has
completed. There is no need to do so, however, and, in the case of large bulks, this can mean
unnecessarily holding onto memory that might be better used elsewhere. More so as different
shard-level bulks can complete at different speeds, and one slow shard-level request should
not require holding onto every other shard-level request.
Currently there is an issue with the InboundPipeline releasing bytes
earlier than appropriate. This can lead to the bytes being reused before
the message is handled. This commit fixes that issue and adds a test to
detect when it is occurring.
* Add warnings/errors when V2 templates would match same indices… (#54367)
* Add warnings/errors when V2 templates would match same indices as V1
With the introduction of V2 index templates, we want to warn users that templates they put in place
might not take precedence (because v2 templates are going to "win"). This adds this validation at
`PUT` time for both V1 and V2 templates with the following rules:
** When creating or updating a V2 template
- If the v2 template would match indices for an existing v1 template or templates, provide a
warning (through the deprecation logging so it shows up to the client) as well as logging the
warning
The v2 warning looks like:
```
index template [my-v2-template] has index patterns [foo-*] matching patterns from existing older
templates [old-v1-template,match-all-template] with patterns (old-v1-template =>
[foo*],match-all-template => [*]); this template [my-v2-template] will take
precedence during new index creation
```
** When creating a V1 template
- If the v1 template is for index patterns of `"*"` and a v2 template exists, warn that the v2
template may take precedence
- If the v1 template is for index patterns other than all indices, and a v2 template exists that
would match, throw an error preventing creation of the v1 template
** When updating a V1 template (without changing its existing `index_patterns`!)
- If the v1 template is for index patterns that would match an existing v2 template, warn that the
v2 template may take precedence.
The v1 warning looks like:
```
template [my-v1-template] has index patterns [*] matching patterns from existing index templates
[existing-v2-template] with patterns (existing-v2-template => [foo*]); this template [my-v1-template] may be ignored in favor of an index template at index creation time
```
And the v1 error looks like:
```
template [my-v1-template] has index patterns [foo*] matching patterns from existing index templates
[existing-v2-template] with patterns (existing-v2-template => [f*]), use index templates (/_index_template) instead
```
Relates to #53101
* Remove v2 index and component templates when cleaning up tests
* Finish half-finished comment sentence
* Guard template removal and ignore for earlier versions of ES
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Also ignore 500 errors when clearing index template v2 templates
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This fixes a serialization bug in `auto_date_histogram` that comes up in
a cluster mixed between pre-7.3.0 and post-7.3.0.
Includes #54429 to keep 7.x looking like master for simpler backports.
Closes#54382
Pipeline aggregations like `stats_bucket`, `sum_bucket`, and
`percentiles_bucket` only operate on aggregations that produce multiple buckets.
This adds support for those aggregations to `geo_distance`, `ip_range`,
`auto_date_histogram`, and `rare_terms`.
This all happened because we used a marker interface to mark compatible
aggs, `MultiBucketAggregationBuilder` and it was fairly easy to forget
to implement the interface.
This replaces the marker interface with an abstract method in
`AggregationBuilder`, `bucketCardinality` which makes you return `NONE`,
`ONE`, or `MANY`. The `bucket` aggregations can check for `MANY`. At
this point `ONE` and `NONE` amount to about the same thing, but I
suspect that'll be a useful distinction when validating bucket sorts.
Closes#53215
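A compact sketch of the abstract-method approach (simplified from the description above): every builder must declare its bucket cardinality, so forgetting it becomes a compile error rather than a silently missing marker interface, and bucket pipelines can require MANY:
```java
public abstract class AggregationBuilderSketch {
    public enum BucketCardinality { NONE, ONE, MANY }

    // Unlike the old marker interface, this cannot be forgotten: every concrete
    // builder has to say how many buckets it produces.
    public abstract BucketCardinality bucketCardinality();

    // Validation used by pipelines such as bucket_selector and bucket_sort.
    static void requireMultiBucketParent(AggregationBuilderSketch parent) {
        if (parent.bucketCardinality() != BucketCardinality.MANY) {
            throw new IllegalArgumentException(
                "this pipeline aggregation requires a multi-bucket parent aggregation");
        }
    }
}
```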
This commit takes into account the index.number_of_replicas (defaults to
0, i.e. no replicas) value when setting an index template. This change
enables the index.wait_for_active_shards value to be interpreted
correctly
(cherry picked from commit 07026ac3d56dc9fae69467adfda7eaed7ea3ca00)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Co-authored-by: tninokehoe <62655306+tninokehoe@users.noreply.github.com>
This commit improves the behavior of aborting snapshots and by that fixes
some extremely rare test failures.
Improvements:
1. When aborting a snapshot while it is in the `INIT` stage we do not need
to ever delete anything from the repository because nothing is written to the
repo during INIT any more (in the past running deletes for these snapshots made
sense because we were writing `snap-` and `meta-` blobs during the `INIT` step).
2. Do not try to finalize snapshots that never moved past `INIT`. Same reason as
with the first step. If we never moved past `INIT` no data was written to the repo
so no need to now write a useless entry for the aborted snapshot to `index-N`.
This is especially true, since the reason the snapshot was aborted during `INIT` was
a delete call so the useless empty snapshot just added to `index-N` would be removed
by the subsequent delete that is still waiting anyway.
3. if after aborting a snapshot we wait for it to finish we should not try deleting it
if it failed. If the snapshot failed it means it did not become part of the most recent
`RepositoryData` so a delete for it will needlessly fail with a confusing message about
that snapshot being missing or concurrent repository modification. I moved to throw the snapshot missing exception here because that seems the most user friendly. This allows the user to simply ignore `404` returns from the delete API when using it to make sure a snapshot is aborted+deleted.
Marking this as a non-issue since it doesn't have any negative repercussions other than confusing exceptions on some snapshot aborts.
Closes#52843
We try to rewrite time zones to fixed offsets in the date histogram aggregation
if the data in the shard is within a single transition.
However this optimization is not applied on time zones that don't apply daylight saving changes
but had some random transitions in the past (e.g. Australia/Brisbane or Asia/Katmandu).
This change fixes the rewrite of such time zones to fixed offsets.
Backport of #53982
In order to prepare the `AliasOrIndex` abstraction for the introduction of data streams,
the abstraction needs to be made more flexible, because currently it really can be only
an alias or an index.
* Renamed `AliasOrIndex` to `IndexAbstraction`.
* Introduced an `IndexAbstraction.Type` enum to indicate what an `IndexAbstraction` instance is.
* Replaced the `isAlias()` method that returns a boolean with the `getType()` method that returns the new Type enum.
* Moved `getWriteIndex()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface.
* Moved `getAliasName()` up from the `IndexAbstraction.Alias` to the `IndexAbstraction` interface and renamed it to `getName()`.
* Removed unnecessary casting to `IndexAbstraction.Alias` by just checking the `getType()` method.
Relates to #53100
This setting is not documented and has dubious value since it means
there can be nodes in the cluster (non-data and non-master nodes) that
do not have persistent node IDs. This does not have any use cases so
this commit removes the setting.
This commit ensures that node roles are sorted by node role name, which
makes the output easier to consume, and also makes it easier to rely on
the behavior of the output in assertions.
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.
This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of bytes through the pipeline in tests to ensure that the
logic works.
This work will enable future work:
Circuit breaking or backoff logic based on message type and byte
in the content aggregator.
Sharing bytes with the application layer using the ref counted
releasable network bytes.
Improved network monitoring based specifically on channels.
Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
Elasticsearch has a number of different BytesReference implementations.
These implementations can all implement the interface in different ways
with subtly different behavior and performance characteristics. On the
other-hand, the JVM only represents bytes as an array or a direct byte
buffer. This commit deletes the specialized Netty implementations and
moves to using a generic ByteBuffer reference type. This will allow us
to focus on standardizing performance and behavior around a smaller number
of implementations that can be used by all components in Elasticsearch.
* Add REST APIs for IndexTemplateV2Metadata CRUD (#54039)
* Add REST APIs for IndexTemplateV2Metadata CRUD
This commit adds the get/put/delete APIs for interacting with the now v2 versions of index
templates.
These APIs are behind the existing `es.itv2_feature_flag_registered` system property feature flag.
Relates to #53101
* Add exceptions for HLRC tests
* Add skips for 7.x versions
* Use index_template instead of template_v2 in action names
* Add test for MetaDataIndexTemplateService.addIndexTemplateV2
* Move removal to static method and add test
* Add unit tests for request classes (implement hashCode & equals)
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Fix compilation
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
We have a number of places where we want to read a fairly complex object from
XContent, but aren't interested in its contents; for example, mappings are often
serialized and deserialized between several objects before they are actually built
into a MappingMetaData object. This means that potentially large maps of maps
are constructed several times, only to immediately be re-serialized again.
This commit adds a new helper method to XContentHelper that reads the children
of an xcontent object directly to a BytesReference, serialized via the same xcontenttype
as the parent parser, avoiding the construction of intermediary maps or lists.
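The same idea, sketched with Jackson rather than the actual XContentHelper API (so the helper name and shape here are illustrative only): copy the children of the current object token-by-token into a byte buffer instead of materializing maps of maps:
```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonParser;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ChildBytesSketch {
    // Returns the bytes of the object at the start of `source`, copied directly,
    // without ever building an intermediate Map<String, Object>.
    public static byte[] childBytes(byte[] source) throws IOException {
        JsonFactory factory = new JsonFactory();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (JsonParser parser = factory.createParser(source);
             JsonGenerator generator = factory.createGenerator(out)) {
            parser.nextToken(); // position on START_OBJECT
            generator.copyCurrentStructure(parser); // stream the subtree as-is
        }
        return out.toByteArray();
    }
}
```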
* Fix Snapshot Completion Listener Lost on Master Failover
If master fails over before (or we run into any other exception) when removing
the snapshot from the CS we must still resolve all the completion listeners for
the snapshot.
This commit causes negative TimeValues, other than -1 which is sometimes used as
a sentinel value, to be rejected during parsing.
Also introduces a hack to allow ILM to load policies which were written to the
cluster state with a negative min_age, treating those values as 0, which should
match the behavior of prior versions.
The NodesStatsRequest class uses a set of strings for its internal
serialization. This commit updates the class's interface so that we
no longer use hard-coded getters and setters, but rather
methods that add strings directly. For example, the old way of
adding "os" metrics to a request would be to call request.os(true).
The new way of doing this is to call request.addMetric("os").
For the time being, the canonical list of metrics is an enum in
NodesStatsRequest. This will eventually be replaced with something
pluggable.
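A minimal sketch of that interface change, assuming a metric enum along the lines described above (the names are illustrative, not the exact Elasticsearch classes):
```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class NodesStatsRequestSketch {
    enum Metric {
        OS, PROCESS, JVM, FS;

        String metricName() {
            return name().toLowerCase(Locale.ROOT);
        }

        static boolean isKnown(String name) {
            for (Metric metric : values()) {
                if (metric.metricName().equals(name)) {
                    return true;
                }
            }
            return false;
        }
    }

    private final Set<String> requestedMetrics = new HashSet<>();

    // Replaces hard-coded setters such as request.os(true).
    public NodesStatsRequestSketch addMetric(String metric) {
        if (Metric.isKnown(metric) == false) {
            throw new IllegalArgumentException("unknown metric [" + metric + "]");
        }
        requestedMetrics.add(metric);
        return this;
    }

    public static void main(String[] args) {
        new NodesStatsRequestSketch().addMetric("os").addMetric("jvm");
    }
}
```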
Currently we set the defaults for ccsMinimizeRoundtrips, preFilterShardSize and
requestCache on the HLRC SubmitAsyncSearchRequest in the constructor. This is no
longer needed since we now only send the parameters along with the rest request
that are supported (omitting e.g. ccsMinimizeRoundtrips) and the correct
defaults are set on the client side. This change removes setting and sending
these defaults where possible, leaving only the overwrite of batchedReduceSize
with a default value of 5, since the default used in the vanilla SearchRequest
is 512. However, we don't need to send this value along as a request parameter
if its the default since the correct one will be set on the receiving end if no
value is specified.
Also adding tests for RestSubmitAsyncSearchAction that check the correct
defaults are set when parameters are missing on the server side.
Backport of #54200
These mock calls cause Eclipse to think that `Exception` can be thrown
because `CheckedFunction`'s lower bound is `Exception`. This makes
Eclipse happy.
Make it possible to reuse the cluster state update of rollover for
simulation purposes by extracting it. Also now run the full rollover in
the pre-rollover phase and the actual rollover phase, allowing a
dedicated exception in case of concurrent rollovers as well as a more
thorough pre-check.
We have some very occasional failures in SearchAfterIT, where a search throws
an exception because a shard does not have the mapping for the requested sort
field. The field should have been added in a dynamic mapping update after an
index event, but it seems that there can sometimes be a small delay in propagating
this update to the shards.
This commit changes the test to explicitly define the relevant field at index creation
time.
Fixes#51900
The remove keystore command can handle multiple settings. In a few
places, we were not consistent about mentioning this. This commit
addresses this, in the CLI help and the docs.
This commit begins the work of removing the "hard-coded" metric getters
and setters from the NodesInfoRequest classes. We start by providing new
flexible getters and setters. We then update the test classes to remove
the old getters, and then remove those getters.
Changes ThreadPool's schedule method to run the schedule task in the context of the thread
that scheduled the task.
This is the more sensible default for this method, and eliminates a range of bugs where the
current thread context is mistakenly dropped.
Closes#17143
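A simplified sketch of the new default behaviour (a stand-in ThreadLocal plays the role of the real ThreadContext): the context of the scheduling thread is captured when schedule() is called and restored around the task when it eventually runs:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ContextPreservingScheduler {
    // Stand-in for Elasticsearch's ThreadContext.
    static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "default");

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void schedule(Runnable task, long delayMillis) {
        final String captured = CONTEXT.get(); // capture at scheduling time
        scheduler.schedule(() -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured); // run the task in the scheduling thread's context
            try {
                task.run();
            } finally {
                CONTEXT.set(previous); // never leak the context onto the pool thread
            }
        }, delayMillis, TimeUnit.MILLISECONDS);
    }
}
```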
Today the keystore add-file command can only handle adding a single
setting/file pair in a single invocation. This incurs the startup costs
of the JVM many times, which in some environments can be expensive. This
commit teaches the add-file keystore command to accept adding multiple
settings in a single invocation.
Today the keystore add command can only handle adding a single
setting/value pair in a single invocation. This incurs the startup costs
of the JVM many times, which in some environments can be expensive. This
commit teaches the add keystore command to accept adding multiple
settings in a single invocation.
This drops the "top level" pipeline aggregators from the aggregation
result tree which should save a little memory and a few serialization
bytes. Perhaps more importantly, this provides a mechanism by which we
can remove *all* pipelines from the aggregation result tree. This will
save quite a bit of space when pipelines are deep in the tree.
Sadly, doing this isn't simple because of backwards compatibility. Nodes
before 7.7.0 *need* those pipelines. We provide them by passing a
a `Supplier<PipelineTree>` into the root of the aggregation tree that we
only call if we need to serialize to a version before 7.7.0.
This solution works for cross cluster search because we always reduce
the aggregations in each remote cluster and then forward them back to
the coordinating node. It's quite possible that the coordinating node
needs the pipeline (say it is version 7.1.0) and the gateway node in the
remote cluster doesn't (version 7.7.0). In that case the data nodes
won't send the pipeline aggregations back to the gateway node.
Critically, the gateway node *will* send the pipeline aggregations back
to the coordinating node. This is all managed with that
`Supplier<PipelineTree>`, but *how* it is managed is a bit tricky.
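A hedged sketch of how that gate might look on the wire; the version constant is real, but the supplier field and write helper are assumptions:
```
// Only older nodes need the pipeline aggregators embedded in the result tree.
if (out.getVersion().before(Version.V_7_7_0)) {
    PipelineTree pipelineTree = pipelineTreeSupplier.get();   // materialized lazily
    writePipelineTree(out, pipelineTree);                      // hypothetical helper
}
// 7.7.0+ nodes rebuild the pipelines on the coordinating node instead.
```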
We can run into a state where there are temporarily no more events to wait for
but the cluster still isn't green. I added the wait for green flag to the request
so the assertion for green cluster health below doesn't fail.
Closes#53457
We are using this assertion for identical shard snapshots
for situations where `snapshot1` wasn't the first snapshot
for the tested shard. Hence, we can't assume that it will
not share any files with previous snapshots.
This showed up in failing tests when `snapshot1` was equivalent
to a previous snapshot because the failing test happened not to delete
any documents at random, but even if documents are deleted
there is no guarantee that no files will be shared.
=> I removed this assertion since it's immaterial for what is tested
here anyway.
Closes#54034
This commit ensures that we rewrite the shard request with a short-lived can_match searcher.
This is required for frozen indices since the high level rewrite is now performed on a network thread where we don't want to perform I/O.
Closes#53985
Reindex would use timeValueNanos(System.nanoTime()). The intended use
for TimeValue is as a duration, not as absolute time. In particular,
this could result in negative TimeValues, which are unsupported as of #53913.
Modified to use the bare long nanosecond value.
* Add validation for component templates (#54023)
* Add validation for component templates
This change adds validation to make sure that settings and mappings are correct in
component template. It's done the same way as in index templates - code is reused.
Relates to #53101
* Fix checkstyle violation
* Update server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateServiceTests.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
* Update server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
* Adjusted to 7.7
* unused import fixed
* NPE fixed
* change exception type
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
This commit ensures that we don't use the non-deterministic canReturnNullResponseIfMatchNoDocs
boolean in the cache key of the ShardSearchRequest. The value of this boolean has no influence
on the cacheability of the request.
Closes#32827
Today we log `failed to execute pipeline for a bulk request` at `ERROR` level
if an attempt to run an ingest pipeline fails. A failure here is commonly due
to an `EsRejectedExecutionException`. We also feed such failures back to the
client and record the rejection in the threadpool statistics.
In line with #51459 there is no need to log failures within actions so noisily
and with such urgency. It is better to leave it up to the client to react
accordingly. Typically an `EsRejectedExecutionException` should result in the
client backing off and retrying, so a failure here is not normally fatal enough
to justify an `ERROR` log at all.
This commit reduces the log level for this message to `DEBUG`.
Fixes an issue where the elasticsearch-node command-line tools would not work correctly
because PersistentTasksCustomMetaData contains named XContent from plugins. This PR
makes it so that the parsing for all custom metadata is skipped, even if the core system would
know how to handle it.
Closes#53549
This removes some more ceremony when declaring agg parsers. You no
longer need a static `parse` method, instead you can just make the
`PARSER` public in most cases.
There are still a few aggs with the `parse` method, but those `parse`
methods are a little more complex to untangle.
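As a generic illustration (the builder class here is a stand-in, not a real agg), the pattern is to expose the parser itself rather than wrap it:
```
// Hypothetical names; the point is that the ObjectParser is public and registered
// directly, so the old static parse(String, XContentParser) wrapper can be deleted.
public static final ObjectParser<ExampleAggregationBuilder, String> PARSER =
        new ObjectParser<>("example", ExampleAggregationBuilder::new);
```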
Currently the remote info api has added a number of possible fields
(proxy, num_socket_connections, etc) that are available in proxy mode.
These fields are not aligned with what the settings are named. This
commit modifies this API to align with the settings.
Wildcard queries on keyword fields get normalized, however this normalization
step should exclude the two special characters * and ? in order to keep the
wildcard query itself intact.
Closes#46300
The snapshot stats response list of snapshot statuses is not ordered according to the
given list of snapshot names so randomly we could mix up snapshot1 and snapshot2
when asserting on the stats.
Fixed by getting each snapshot's stats individually.
Closes#54034
This commit changes the pre_filter_shard_size default from 128 to unspecified.
This allows applying heuristics based on the request and the target indices when deciding
whether the can match phase should run or not. When unspecified, this PR runs the can match phase
automatically if one of these conditions is met:
* The request targets more than 128 shards.
* The request contains read-only indices.
* The primary sort of the query targets an indexed field.
Users can opt-out from this behavior by setting the `pre_filter_shard_size` to a static value.
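For example, a client that wants the previous behaviour back can pin the old threshold explicitly (shown with the Java SearchRequest; the REST parameter behaves the same way):
```
// Setting an explicit value disables the new heuristics entirely.
SearchRequest searchRequest = new SearchRequest("my-index");
searchRequest.setPreFilterShardSize(128);   // the former fixed default
```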
Closes#39835
I created this bug today in #53793. When a `DelayableWriteable` that
references an existing object serializes itself it wasn't taking the
version of the node on the other side of the wire into account. This
fixes that.
Today when cluster.remote.connect is set to false and some aspect of
the codebase tries to get a remote client, we return a no such
remote cluster exception. This can be quite perplexing to users,
especially if the remote cluster is actually defined in their cluster
state and it is only that the local node is not a remote cluster
client. This commit addresses this by providing a dedicated error
message when a remote cluster is not available because the local node is
not a remote cluster client.
This commit removes the configuration time vs execution time distinction
with regard to certain BuildParams properties. Because of the cost of
determining Java versions for configuration JDK locations we deferred
this until execution time. This had two main downsides. First, we had
to implement all this build logic in tasks, which required a bunch of
additional plumbing and complexity. Second, because some information
wasn't known during configuration time, we had to nest any build logic
that depended on this in awkward callbacks.
We now defer to the JavaInstallationRegistry recently added in Gradle.
This utility uses a much more efficient method for probing Java
installations vs our jrunscript implementation. This, combined with some
optimizations to avoid probing the current JVM, as well as deferring
some evaluation via Providers when probing installations for BWC builds,
means we can maintain effectively the same configuration time performance
while removing a bunch of complexity and runtime cost (snapshotting
inputs for the GenerateGlobalBuildInfoTask was very expensive). The end
result should be a much more responsive build execution in almost all
scenarios.
(cherry picked from commit ecdbd37f2e0f0447ed574b306adb64c19adc3ce1)
This moves the pipeline aggregation validation from the data node to the
coordinating node so that we, eventually, can stop sending pipeline
aggregations to the data nodes entirely. In fact, it moves it into the
"request validation" stage so multiple errors can be accumulated and
sent back to the requester for the entire request. We can't always take
advantage of that, but it'll be nice for folks not to have to play
whack-a-mole with validation.
This is implemented by replacing `PipelineAggregationBuilder#validate`
with:
```
protected abstract void validate(ValidationContext context);
```
The `ValidationContext` handles the accumulation of validation failures,
provides access to the aggregation's siblings, and implements a few
validation utility methods.
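A hedged sketch of an implementation; the `addValidationError` method name is assumed from the description of accumulating failures:
```
@Override
protected void validate(ValidationContext context) {
    // Failures are accumulated rather than thrown, so the requester sees all of
    // them for the whole request at once instead of playing whack-a-mole.
    if (bucketsPaths.length != 1) {
        context.addValidationError(
            "[buckets_path] must contain a single entry for aggregation [" + getName() + "]");
    }
}
```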
If a setting is touched during bootstrap before logging is configured,
and that setting uses a byte size value, the deprecation logger for
ByteSizeValue will be initialized. However, this means a logger will be
configured before log4j is initialized, which we reject at startup. This
commit puts this deprecation logger in a holder pattern so that it is
not initialized until first use, which will happen after logging is
configured.
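A minimal sketch of the holder idiom being described, assuming the logger belongs to ByteSizeValue:
```
// The nested holder class is not loaded until the first deprecation is logged,
// which can only happen after logging has been configured.
private static final class DeprecationLoggerHolder {
    static final DeprecationLogger deprecationLogger =
            new DeprecationLogger(LogManager.getLogger(ByteSizeValue.class));
}
```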
Benchmarking showed that the effect of the ExitableDirectoryReader
is reduced considerably when checking every 8191 docs. Moreover,
set the cancellable task before calling QueryPhase#preProcess()
and make sure we don't wrap with an ExitableDirectoryReader at all
when lowLevelCancellation is set to false, to avoid any
performance impact at all.
Follows: #52822
Follows: #53166
Follows: #53496
(cherry picked from commit cdc377e8e74d3ca6c231c36dc5e80621aab47c69)
* Get Async Search: omit _clusters section when empty (#53907)
The _clusters section is omitted by the search API whenever no remote clusters are searched. Async search should do the same, but Get Async Search returns a deserialized response, hence a weird `_clusters` section with all values set to `0` gets returned instead. In fact the recreated Clusters object is not the same object as the EMPTY constant, yet it has the same content.
This commit addresses this by changing the comparison in the `toXContent` method to not print out the section if the number of total clusters is `0`.
* Async search: remove version from response (#53960)
The goal of the version field was to quickly show when you can expect to find something new in the search response, compared to when nothing has changed. This can also be done by looking at the `_shards` section and `num_reduce_phases` returned with the search response. In fact when there has been one or more additional reduction of the results, you can expect new results in the search response. Otherwise, the `_shards` section could notify of additional failures of shards that have completed the query, but that is not a guarantee that their results will be exposed (only when the following partial reduction is performed their results will be available).
That said, this commit clarifies this in the docs and removes the version field from the async search response.
* Async Search: replicas to auto expand from 0 to 1 (#53964)
This way single node clusters that are green don't go yellow once async search is used, while
all the others still have one replica.
* [DOCS] address timing issue in async search docs tests (#53910)
The docs snippets for submit async search have proven difficult to test as it is not possible to guarantee that you get a response that is not final, even when providing `wait_for_completion=0`. In the docs we want to show though a proper long-running query, and its first response should be partial rather than final.
With this commit we adapt the docs snippets to show a partial response, and replace under the hood all that's needed to make the snippets tests succeed when we get a final response. Also, increased the timeout so we always get a final response.
Closes#53887 Closes#53891
Use sequence numbers and force merge UUID to determine whether a shard has changed or not instead before falling back to comparing files to get incremental snapshots on primary fail-over.
The test in CloseWhileRelocatingShardsIT failed recently
multiple times (3) when waiting for the initial indices to
become green. Looking at the execution logs from #53544,
the failure appears at the very beginning of the test and when
the WindowsFS file system is picked up (which is known
to slow down tests).
This commit simply increases the timeout for the first
ensureGreen() to 60 seconds. If the test continues to fail,
we might want to test a larger timeout or disable
WindowsFS for this test.
Closes#53544
This commit adds a data stream feature flag, an initial definition of a data stream and
the stubs for the data stream create, delete and get APIs. Simple serialization
tests and a REST test for the data stream API stubs are also added.
This is a large amount of code and mainly mechanical, but this commit should be
straightforward to review, because there isn't any real logic.
The data stream transport and rest actions are behind the data stream feature flag and
are only initialized if the feature flag is enabled. The feature flag is enabled if
Elasticsearch is built as a snapshot, or in a release build if the
'es.datastreams_feature_flag_registered' property is enabled.
The integ-test-zip sets the feature flag when building a release build, otherwise
the REST tests would fail.
Relates to #53100
Today we only read `cluster.max_voting_config_exclusions` from the dynamic
settings in the cluster metadata, ignoring any value set in
`elasticsearch.yml`. This commit addresses this.
Closes#53455
When indexing a rectangle that crosses the dateline, we currently do not
handle it properly and we index a polygon that does not cross the dateline.
This change generates two polygons wrapping the dateline.
* Adds ability for contexts to specify their own defaults.
* Context defaults are applied if no context-specific or
general setting exists.
* See 070ea7e for settings keys.
* Increases the per-context default for the `ingest` context.
* Cache size is doubled, 200 compared to default of 100
* Cache expiration is unchanged at no expiration
* Cache max compilation is quintupled, 375/5m instead of 75/5m
Backport of: 1b37d4b
Refs: #50152
This commit changes the Transforms notifications index to be a hidden
index, with a hidden alias.
This commit also removes the temporary hack in
MetaDataCreateIndexService that prevents deprecation warnings for known
dot-prefixed index names which are not hidden/system indices, as this
was the last index pattern to need that hack.
We mark cluster states persisted on master-ineligible nodes as
potentially-stale using the voting configuration `{STALE_STATE_CONFIG}` which
prevents these nodes from being elected as master if they are restarted as
master-eligible. Today we do not handle this special voting configuration
differently in the `ClusterFormationFailureHandler`, leading to a mysterious
message `an election requires a node with id [STALE_STATE_CONFIG]` if the
election does not succeed.
This commit adds a special case description for this situation to explain
better why this node cannot win an election.
Closes#53734
The test was randomly and very rarely failing due to generating the same sort
key for multiple records, which made the order of these records in the results
nondeterministic. While investigating the test I also found that the data wasn't
generated in a way that matches the actual data. Normally, the order of
documents in hits and scoreDocs in InternalTopHits should be the same. However,
in the test only scoreDocs were sorted, which caused very confusing failure
messages. This commit fixes this issue as well.
Fixes#53676
Today in the `CoordinatorTests` each node uses multiple threadpools. This is
mostly fine as they are almost completely stateless, except for the
`ThreadContext`: by using multiple threadpools we cannot make assertions that
the thread context is/isn't preserved as we expect. This commit consolidates
the threadpool instances in use so that each node uses just one.
TermsLookup in master no longer accepts a type parameter. We should emit
a deprecation warning in 7.x when a terms lookup request includes a type, to prepare
users for its removal.
Relates to #41059
This commit adds a new AsyncSearchClient to the High Level Rest Client which
initially supports submitAsyncSearch in its blocking and non-blocking
flavours. Also adds client side request and response objects and parsing code
to parse the xContent output of the client side AsyncSearchResponse together
with parsing roundtrip tests and a simple roundtrip integration test.
Relates to #49091
Backport of #53592
It's simple to deprecate a field used in an ObjectParser just by adding deprecation
markers to the relevant ParseField objects. The warnings themselves don't currently
have any context - they simply say that a deprecated field has been used, but not
where in the input xcontent it appears. This commit adds the parent object parser
name and XContentLocation to these deprecation messages.
Note that the context is automatically stripped from warning messages when they
are asserted on by integration tests and REST tests, because randomization of
xcontent type during these tests means that the XContentLocation is not constant.
The retention lease syncs need to occur under the system context,
because they are internal actions executed on behalf of the user. Today
we are relying on this happening for background syncs by virtue of the
fact that the context the syncs are created under is the system
context. This is due to these occurring on the cluster state applier
thread. However, there are situations where this does not hold such as
when a timed out cluster state publication occurs, and the node where
the shard is allocated is the elected master node. In that case, the
context will be empty due to the fact that we do not reschedule
publication under the system context. Currently, doing so runs us into
some troubles with losing the existing context, possibly dropping
deprecation headers. We could copy that context over when marking the
current context as the system context, but the implications of that
require some more investigation. For now, we explicitly mark the
retention lease syncs as executing under the system context, as this is
a situation that we can reason about.
The JodaCompatibleZonedDateTime is a compatibility object that unions
Joda's DateTime and Java's ZonedDateTime, meant for use in scripts. When
it was added, we serialized the JCZDT as a Joda DateTime so that when
sending to older nodes they could still read the object. However, on
newer nodes, we continued also reading this as a Joda DateTime. This
commit changes the read side to form a JCZDT.
closes#53586
* Add IndexTemplateV2 to MetaData (#53753)
* Add IndexTemplateV2 to MetaData
This adds the `IndexTemplateV2` and `IndexTemplateV2Metadata` class to be used for the new
implementation of index templates. The new metadata is stored as a `MetaData.Custom` implementation.
Relates to #53101
* Add ITV2Metadata unit tests
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Update min supported version constant
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
testIndexHasDuplicateData tests were failing occasionally,
due to the approximate calculation of BKDReader.estimatePointCount,
where if the node is a leaf, the number of points in it
was estimated as (maxPointsInLeafNode + 1) / 2.
As DEFAULT_MAX_POINTS_IN_LEAF_NODE = 1024, for the small indexes
used in tests, the estimation could be really off.
This rewrites the tests to use a small max points in leaf node
value so the tests can control the estimate.
Closes#49703
This commit makes a number of improvements when importing the
Elasticsearch project into IntelliJ IDEA. Specifically:
- Contributing documentation has been updated to reflect that the
'idea' task should no longer be used and Gradle project import is
instead the officially supported way of setting up the project.
- Attempts to run the 'idea' task will result in a failure with a
message directing folks to our CONTRIBUTING.md document.
- The project JDK is explicitly set rather than using whatever JAVA_HOME
is.
- Gradle build operation delegation is disabled, and test execution is
configured to 'choose per test'.
- Gradle is configured to inherit the project JDK.
- Some code style conventions are automatically configured.
- File encoding is explicitly set to UTF-8.
- Parallel module compilation is enabled and deprecated feature
warnings are disabled.
- A remote debug run configuration using listen mode is created.
- JUnit runner is configured with required system properties.
- License headers are configured such that Apache 2 is the default
notice added to all source files with exception of source in /x-pack
which will use the Elastic license.
Today cluster states are sometimes (rarely) applied in the default context
rather than system context, which means that any appliers which capture their
contexts cannot do things like remote transport actions when security is
enabled.
There are at least two ways that we end up applying the cluster state in the
default context:
1. locally applying a cluster state that indicates that the master has failed
2. the elected master times out while waiting for a response from another node
This commit ensures that cluster states are always applied in the system
context.
Mitigates #53751
With the upgrade to Lucene 8.5, the LatLonShape field has support for distance queries. This change implements this new feature and removes the limitation.
Backport to 7.x
Enable geo_shape query to work on geo_point fields for shapes: circle, polygon, multipolygon, rectangle. See: #48928
Co-Authored-By: @iverase
This commit disables the sort optimization added in #51852 for scroll requests.
Scroll queries keep a state per shard so we cannot modify the request on
the first round (submit).
This bug was introduced in unreleased versions, which is why this PR
is marked as a non-issue.
On clusters with a large number of shards, the shards limits allocation
decider can exhibit poor performance leading to timeouts applying
cluster state updates. This occurs because for every shard, we do a loop
to count the number of shards on the node, and the number of shards for
the index of the shard. This is roughly quadratic in the number of
shards. This loop is not necessary, since we already have a O(1) method
to count the number of non-relocating shards on a node, and with this
commit we add some infrastructure to RoutingNode to make counting the
number of shards per index O(1).
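A hedged sketch of what the decider can now rely on; the per-index method name follows the description above and is an assumption:
```
// Both lookups are O(1), so the decider no longer loops over every shard on the node.
int shardsOnNode = routingNode.numberOfOwningShards();
int shardsForIndex = routingNode.numberOfOwningShardsForIndex(shardRouting.index());
```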
* Adds per context settings:
`script.context.${CONTEXT}.cache_max_size` ~
`script.cache.max_size`
`script.context.${CONTEXT}.cache_expire` ~
`script.cache.expire`
`script.context.${CONTEXT}.max_compilations_rate` ~
`script.max_compilations_rate`
* Context cache is used if:
`script.max_compilations_rate=use-context`. This
value is dynamically updatable, so users can
switch back to the general cache if desired.
* Settings for context caches take the first value
that applies:
1) Context specific settings if set, eg
`script.context.ingest.cache_max_size`
2) Correlated general setting is set to the non-default
value, eg `script.cache.max_size`
3) Context default
The reason for 2's inclusion is to allow an easy
transition for users who've customized their general
cache settings.
Using the general cache settings for the context caches
results in higher effective settings, since they are
multiplied across the number of contexts. So a general
cache max size of 200 will become 200 * # of contexts.
However, this behavior will avoid users snapping to a
value that is too low for them.
Backport of: #52855
Refs: #50152
This commit, built on top of #51708, allows modifying shard search requests based on information collected on other shards. It is intended to speed up sorted queries on time-based indices for queries that are only interested in the top documents.
This change will rewrite the shard queries to match none if the bottom sort value computed in prior shards is better than all values in the shard.
For queries that mix top documents and aggregations this change will reset the size of the top documents to 0 instead of rewriting to match none.
This means that we don't need to keep a search context open for this shard since we know in advance that it doesn't contain any competitive hit.
Sometimes we want to deprecate and remove a ParseField entirely, without replacement;
for example, the various places where we specify a _type field in 7.x. Currently we can
tell users only that a particular field name should not be used, and that another name should
be used in its place. This commit adds the ability to say that a field should not be used at
all.
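A hedged illustration of the new capability (the exact method name is an assumption):
```
// Deprecates the field outright, with no replacement name to point users at.
public static final ParseField TYPE_FIELD = new ParseField("_type").withAllDeprecated();
```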
The introduction of the ExitableDirectoryReader showed an increase in
latencies for range queries using point values.
Check for cancellation every 1024 docs instead of every 15 to lower
the impact of the check on query performance.
Follows: #52822 Fixes: #53496
(cherry picked from commit 6b5fc35e4458e60a7ca5822584ec6a60562f2c01)
It is useful to be able to delay state recovery until enough data nodes have
joined the cluster, since this gives the shard allocator a decent opportunity
to re-use as much existing data as possible. However we also have the option to
delay state recovery until a certain number of master-eligible nodes have
joined, and this is unnecessary: we require a majority of master-eligible nodes
for state recovery, and there is no advantage in waiting for more.
This commit deprecates the unnecessary settings in preparation for their
removal.
Relates #51806
* Add REST API for ComponentTemplate CRUD
This adds the Put/Get/DeleteComponentTemplate APIs that allow inserting, retrieving, and removing
ComponentTemplateMetadata into the cluster state metadata.
These APIs are currently only available behind a feature flag system property -
`es.itv2_feature_flag_registered`.
Relates to #53101
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Re-applies the change from #53523 along with test fixes.
closes#53626 closes#53624 closes#53622 closes#53625
Co-authored-by: Nik Everett <nik9000@gmail.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Jake Landis <jake.landis@elastic.co>
Today it can happen that a transport message fails to send (for example,
because a transport interceptor rejects the request). In this case, the
response handler is never invoked, which can lead to necessary cleanups
not being performed. There are two ways to handle this. One is to expect
every callsite that sends a message to try/catch these exceptions and
handle them appropriately. The other is merely to invoke the response
handler to handle the exception, which is already equipped to handle
transport exceptions.
* Submit async search to work only with POST (#53368)
Currently the submit async search API can be called using both GET and POST at REST, but given that it submits a call and creates internal state, POST should be the only allowed method.
* Refine SearchProgressListener internal API (#53373)
The following cumulative improvements have been made:
- rename `onReduce` and `notifyReduce` to `onFinalReduce` and `notifyFinalReduce`
- add unit test for `SearchShard`
- on* methods in `SearchProgressListener` shouldn't need to be public as they should never be called directly, they only need to be overridden hence they can be made protected. They are actually called directly from a test which required some adapting, like making `AsyncSearchTask.Listener` class package private instead of private
- Instead of overriding `getProgressListener` in `AsyncSearchTask`, as it feels weird to override a getter method, added a specific method that allows to retrieve the Listener directly without needing to cast it. Made the getter and setter for the listener final in the base class.
- rename `SearchProgressListener#searchShards` methods to `buildSearchShards` and make it static given that it accesses no instance members
- make `SearchShard` and `SearchShardTask` classes final
* Move async search yaml tests to x-pack yaml test folder (#53537)
The yaml tests for async search currently sit in its qa folder. There is no reason though for them to live in a separate folder as they don't require particular setup. This commit moves them to the main folder together with the other x-pack yaml tests so that they will be run by the client test runners too.
* [DOCS] Add temporary redirect for async-search (#53454)
The following API spec files contain a link to a not-yet-created
async search docs page:
* [async_search.delete.json][0]
* [async_search.get.json][1]
* [async_search.submit.json][2]
The Elasticsearch-js client uses these spec files to create its docs.
This created a broken link in the Elasticsearch-js docs, which has broken
the docs build.
This PR adds a temporary redirect for the docs page. This redirect
should be removed when the actual API docs are added.
[0]: https://github.com/elastic/elasticsearch/blob/master/x-pack/plugin/src/test/resources/rest-api-spec/api/async_search.delete.json
[1]: https://github.com/elastic/elasticsearch/blob/master/x-pack/plugin/src/test/resources/rest-api-spec/api/async_search.get.json
[2]: https://github.com/elastic/elasticsearch/blob/master/x-pack/plugin/src/test/resources/rest-api-spec/api/async_search.submit.json
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
I broke sorting aggregations by `doc_count` in #51271 by mixing up true
and false. This flips that comparison and adds a few tests to double
check that we don't do this again.
This begins to clean up how `PipelineAggregator`s are executed.
Previously, we would create the `PipelineAggregator`s on the data nodes
and embed them in the aggregation tree. When it came time to execute the
pipeline aggregation we'd use the `PipelineAggregator`s that were on the
first shard's results. This is inefficient because:
1. The data node needs to make the `PipelineAggregator` only to
serialize it and then throw it away.
2. The coordinating node needs to deserialize all of the
`PipelineAggregator`s even though it only needs one of them.
3. You end up with many `PipelineAggregator` instances when you only
really *need* one per pipeline.
4. `PipelineAggregator` needs to implement serialization.
This begins to undo these by building the `PipelineAggregator`s directly
on the coordinating node and using those instead of the
`PipelineAggregator`s in the aggregation tree. In a follow up change
we'll stop serializing the `PipelineAggregator`s to node versions that
support this behavior. And, one day, we'll be able to remove
`PipelineAggregator` from the aggregation result tree entirely.
Importantly, this doesn't change how pipeline aggregations are declared
or parsed or requested. They are still part of the `AggregationBuilder`
tree because *that* makes sense.
This commit adjusts the _cat/indices and _cat/aliases APIs to allow
specifying indices options, so that these APIs can handle hidden
indices/aliases in the same way as other APIs.
Also adds the hidden option to the expand_wildcards parameter
in the YAML spec for every API that accepts it.
* New wildcard field optimised for wildcard queries (#49993)
Indexes values using size 3 ngrams and also stores the full original as a binary doc value.
Wildcard queries operate by using a cheap approximation query on the ngram field followed up by a more expensive verification query using an automaton on the binary doc values. Also supports aggregations and sorting.
Keyword field values with a length greater than ignore_above are not
indexed, but highlighters were still retrieving these values
from _source and trying to highlight them. This sometimes led to
errors if a field length exceeded max_analyzed_offset, and it is also
overall wrong behaviour to attempt to highlight something that was
ignored during indexing.
This PR checks if a keyword value was ignored because of its length,
and if yes, skips highlighting it.
Backport: #53408 Closes#43800
This change introduces a new API in x-pack basic that allows to track the progress of a search.
Users can submit an asynchronous search through a new endpoint called `_async_search` that
works exactly the same as the `_search` endpoint but instead of blocking and returning the final response when available, it returns a response after a provided `wait_for_completion` time.
````
GET my_index_pattern*/_async_search?wait_for_completion=100ms
{
  "aggs": {
    "date_histogram": {
      "field": "@timestamp",
      "fixed_interval": "1h"
    }
  }
}
````
If after 100ms the final response is not available, a `partial_response` is included in the body:
````
{
  "id": "9N3J1m4BgyzUDzqgC15b",
  "version": 1,
  "is_running": true,
  "is_partial": true,
  "response": {
    "_shards": {
      "total": 100,
      "successful": 5,
      "failed": 0
    },
    "total_hits": {
      "value": 1653433,
      "relation": "eq"
    },
    "aggs": {
      ...
    }
  }
}
````
The partial response contains the total number of requested shards, the number of shards that successfully returned and the number of shards that failed.
It also contains the total hits as well as partial aggregations computed from the successful shards.
To continue to monitor the progress of the search users can call the get `_async_search` API like the following:
````
GET _async_search/9N3J1m4BgyzUDzqgC15b/?wait_for_completion=100ms
````
That returns a new response that can contain the same partial response as the previous call if the search didn't progress; in that case the returned `version`
should be the same. If new partial results are available, the version is incremented and the `partial_response` contains the updated progress.
Finally if the response is fully available while or after waiting for completion, the `partial_response` is replaced by a `response` section that contains the usual _search response:
````
{
  "id": "9N3J1m4BgyzUDzqgC15b",
  "version": 10,
  "is_running": false,
  "response": {
    "is_partial": false,
    ...
  }
}
````
Asynchronous searches are stored in a restricted index called `.async-search` if they survive (are still running) after the initial submit. Each request has a keep alive that defaults to 5 days, but this value can be changed/updated at any time:
`````
GET my_index_pattern*/_async_search?wait_for_completion=100ms&keep_alive=10d
`````
The default can be changed when submitting the search; the example above raises the keep alive for the search to `10d`.
`````
GET _async_search/9N3J1m4BgyzUDzqgC15b/?wait_for_completion=100ms&keep_alive=10d
`````
The time to live for a specific search can be extended when getting the progress/result. In the example above we extend the keep alive to 10 more days.
A background service that runs only on the node that holds the first primary shard of the `async-search` index is responsible for deleting the expired results. It runs every hour but the expiration is also checked by running queries (if they take longer than the keep_alive) and when getting a result.
Like a normal `_search`, if the http channel that is used to submit a request is closed before getting a response, the search is automatically cancelled. Note that this behavior is only for the submit API, subsequent GET requests will not cancel if they are closed.
Asynchronous searches are not persistent; if the coordinating node crashes or is restarted during the search, the asynchronous search will stop. To know if the search is still running or not, the response contains a field called `is_running` that indicates if the task is up or not. It is the responsibility of the user to resume an asynchronous search that didn't reach a final response by re-submitting the query. However, final responses and failures are persisted in a system index that allows
retrieving a response even after the task finishes.
````
DELETE _async_search/9N3J1m4BgyzUDzqgC15b
````
The response is also not stored if the initial submit action returns a final response. This avoids adding any overhead to queries that complete within the initial `wait_for_completion`.
The `.async-search` index is a restricted index (should be migrated to a system index in +8.0) that is accessible only through the async search APIs. These APIs also ensure that only the user that submitted the initial query can retrieve or delete the running search. Note that admins/superusers would still be able to cancel the search task through the task manager like any other tasks.
Relates #49091
Co-authored-by: Luca Cavanna <javanna@users.noreply.github.com>
Today the NodeConnectionsService emits a DEBUG-level log message each time it
calls TransportService#connectToNode, which happens for every node in the
cluster every ten seconds, and also at every cluster state update. That's a lot
of log messages. Most of these calls are no-ops and can be ignored, but if the
call was not a no-op then it may be worth investigating further. Since the logs
do not distinguish the interesting and uninteresting cases, they are not
useful.
This commit distinguishes the two cases and pushes the noisy logging for the
common no-op case down to TRACE level, leaving only useful and actionable
information in the DEBUG-level logs.
Previously, Term Vectors API was returning empty results for
artificial documents with keyword fields. Checking only for `string()`
on `IndexableField` is not enough, since for `KeywordFieldType`
`binaryValue()` must be used instead.
Fixes#53494
(cherry picked from commit 1fc3fe3d32f41eab2101c0536751b7c47e63cc48)
* Use snake case for nodes stats/info metric names (#53446)
The REST API uses "thread_pool" as the name of the thread pool metric.
If we use this name internally when we serialize nodes stats and info
requests, we won't need to do any fancy logic to check for and switch
out "threadPool", which was the previous internal name.
This commit fixes a bug on sorted queries with a primary sort field
that uses different types in the requested indices. In this scenario
the returned min/max values to sort the shards are not comparable so
we should avoid the sorting rather than throwing an obscure exception.
* Add ComponentTemplate to MetaData (#53290)
* Add ComponentTemplate to MetaData
This adds a `ComponentTemplate` datastructure that will be used as part of #53101 (Index Templates
v2) to the `MetaData` class. Currently there are no APIs for interacting with this class, so it will
always be an empty map (other than in tests). This infrastructure will be built upon to add APIs in
a subsequent commit.
A `ComponentTemplate` is made up of a `Template`, a version, and a MetaData.Custom class. The
`Template` contains similar information to an `IndexTemplateMetaData` object: settings, mappings,
and alias configuration.
* Update minimal supported version constant
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This changes the `top_metrics` aggregation to return metrics in their
original type. Since it only supports numerics, that means that dates,
longs, and doubles will come back as stored, with their appropriate
formatter applied.
The spirit of StreamInput/StreamOutput is that common I/O patterns should
be handled by these classes so that the persistence methods in
application classes can be kept short, which facilitates easy visual
comparison between read and write methods, and reduces risks of having
serialization issues due to mismatched implementations.
To this end, this change adds readOptionalVLong and writeOptionalVLong
methods to these classes as we have started to build up cases where
that conditional/null logic has been implemented directly in the read &
write methods.
Co-authored-by: Tim Vernum <tim.vernum@elastic.co>
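A plausible sketch of the pattern these helpers centralize: a null-marker boolean followed by the value, with the read side mirroring the write side.
```
// On StreamOutput:
public void writeOptionalVLong(@Nullable Long value) throws IOException {
    if (value == null) {
        writeBoolean(false);
    } else {
        writeBoolean(true);
        writeVLong(value);
    }
}

// On StreamInput: returns null when the writer recorded no value.
public Long readOptionalVLong() throws IOException {
    return readBoolean() ? readVLong() : null;
}
```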
If an index was created in version 6 and contains a date field with a Joda-style pattern, it should still be possible to search it and insert documents into it.
Date patterns in indices created in 6 that start with 8 should be considered Java style.
It looks like `date_nanos` fields weren't likely to work properly in
composite aggs because composites iterate field values using points and
we weren't converting the points into milliseconds. Because the doc
values were coming back in milliseconds we ended up getting very confused
and just never collecting sub-aggregations.
This fixes that by adding a `parsePointAsMillis` method to
`DateFieldMapper.Resolution`, which is similar in name and function to
`NumberFieldMapper.NumberType`'s `parsePoint`, except that it normalizes
to milliseconds, which is what aggs need at the moment.
Closes#53168
Using a Long alone is not strong enough for the id of search contexts
because we reset the id generator whenever a data node is restarted.
This can lead to two issues:
1. Fetch phase can fetch documents from another index
2. A scroll search can return documents from another index
This commit avoids these issues by adding a UUID to SearchContextId.
In rare circumstances it is possible for an isolated node to have a greater
term than the currently-elected leader. Today such a node will attempt to join
the cluster but will not offer a vote to the leader and will reject its cluster
state publications due to their stale term. This situation persists since there
is no mechanism for the joining node to inform the leader that its term is
stale and a new election is required.
This commit adds the current term of the joining node to the join request. Once
the join has been validated, the leader will perform another election to
increase its term far enough to allow the isolated node to join properly.
Fixes#53271
* Record Force Merges in live commit data
Prerequisite of #52182. Record force merges in the live commit data
so two shard states with the same sequence number that differ only in whether
or not they have been force merged can be distinguished when creating snapshots.
We can't always have the same segment stats and doc stats between
InternalEngine and ReadOnlyEngine if there are some fully deleted
segments. ReadOnlyEngine always filters them out. InternalEngine,
however, will keep them if peer recovery retention leases exist or the
number of the retaining operations is non-zero.
This change reverts the fix in #51331 and uses the wrapped reader to
calculate the segment stats and doc stats. For the test, we need to
disable the extra retaining soft-deletes operations.
Closes#51303
Prior to this commit, rollover did not propagate the `is_hidden` alias
property when rolling over an index. This commit ensures that an alias
that's rolled over will remain hidden.
This removes the `instanceof`s from `SiblingPipelineAggregator` by
adding a `rewriteBuckets` method to `InternalAggregation` that can be
called to, well, rewrite the buckets. The default implementation of
`rewriteBuckets` throws the same exception that was thrown when you
attempted to run a `SiblingPipelineAggregator` on an aggregation without
buckets. It is overridden by `InternalSingleBucketAggregation` and
`InternalMultiBucketAggregation` to correctly rewrite their buckets.
When a composite aggregation is run against an index with a sort that
*starts* with the "source" fields from the composite but has additional
fields, it'd blow up while trying to decide if it could use the sort.
This changes it to decide that it *can* use the sort.
Closes#52480
This change optimizes the merge of terms aggregations by removing
the priority queue that was used to collect all the buckets during
a non-final reduction. We don't need to keep the result sorted since
the merge of buckets in a subsequent reduce can modify the order.
I wrote a small micro-benchmark to test the change and the speed ups
are significative for small merge buffer sizes:
````
########## Master:
Benchmark (bufferSize) (cardinality) (numShards) (topNSize) Mode Cnt Score Error Units
TermsReduceBenchmark.reduceTopHits 5 10000 1000 1000 avgt 10 2459,690 ± 198,682 ms/op
TermsReduceBenchmark.reduceTopHits 16 10000 1000 1000 avgt 10 1030,620 ± 91,544 ms/op
TermsReduceBenchmark.reduceTopHits 32 10000 1000 1000 avgt 10 558,608 ± 44,915 ms/op
TermsReduceBenchmark.reduceTopHits 128 10000 1000 1000 avgt 10 287,333 ± 8,342 ms/op
TermsReduceBenchmark.reduceTopHits 512 10000 1000 1000 avgt 10 257,325 ± 54,515 ms/op
########## Patch:
Benchmark (bufferSize) (cardinality) (numShards) (topNSize) Mode Cnt Score Error Units
TermsReduceBenchmark.reduceTopHits 5 10000 1000 1000 avgt 10 805,611 ± 14,630 ms/op
TermsReduceBenchmark.reduceTopHits 16 10000 1000 1000 avgt 10 378,851 ± 17,929 ms/op
TermsReduceBenchmark.reduceTopHits 32 10000 1000 1000 avgt 10 261,094 ± 10,176 ms/op
TermsReduceBenchmark.reduceTopHits 128 10000 1000 1000 avgt 10 241,051 ± 19,558 ms/op
TermsReduceBenchmark.reduceTopHits 512 10000 1000 1000 avgt 10 231,643 ± 6,170 ms/op
````
The code for the benchmark can be found [here](). It seems to be up to 3x faster for terms aggregations
that return 10,000 unique terms (1000 terms per shard). For a cardinality of 100,000 terms, this patch is up to 5x faster:
````
########## Patch:
Benchmark (bufferSize) (cardinality) (numShards) (topNSize) Mode Cnt Score Error Units
TermsReduceBenchmark.reduceTopHits 5 100000 1000 1000 avgt 10 12791,083 ± 397,128 ms/op
TermsReduceBenchmark.reduceTopHits 16 100000 1000 1000 avgt 10 3974,939 ± 324,617 ms/op
TermsReduceBenchmark.reduceTopHits 32 100000 1000 1000 avgt 10 2186,285 ± 267,124 ms/op
TermsReduceBenchmark.reduceTopHits 128 100000 1000 1000 avgt 10 914,657 ± 160,784 ms/op
TermsReduceBenchmark.reduceTopHits 512 100000 1000 1000 avgt 10 604,198 ± 145,457 ms/op
########## Master:
Benchmark (bufferSize) (cardinality) (numShards) (topNSize) Mode Cnt Score Error Units
TermsReduceBenchmark.reduceTopHits 5 100000 1000 1000 avgt 10 60696,107 ± 929,944 ms/op
TermsReduceBenchmark.reduceTopHits 16 100000 1000 1000 avgt 10 16292,894 ± 783,398 ms/op
TermsReduceBenchmark.reduceTopHits 32 100000 1000 1000 avgt 10 7705,444 ± 77,588 ms/op
TermsReduceBenchmark.reduceTopHits 128 100000 1000 1000 avgt 10 2156,685 ± 88,795 ms/op
TermsReduceBenchmark.reduceTopHits 512 100000 1000 1000 avgt 10 760,273 ± 53,738 ms/op
````
The merge of buckets can also be optimized. Currently we use a hash map to merge buckets coming from different shards, so this can be costly if the number of unique terms is high. Instead, we could always sort the shard terms result by key and perform a merge sort to reduce the results. This would save memory and make the merge more linear in terms
of complexity in the coordinating node at the expense of an additional sort in the shards. I plan to test this possible optimization in a follow up.
Relates #51857
It doesn't make a whole lot of sense for `BitArray#clear` to grow the
underlying storage array just to clear the bit. We *already* treat
indices outside of the storage array as unset. This turns such
operations into a noop.
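A hedged sketch of the change, assuming the bits live in a growable array of 64-bit words:
```
public void clear(long index) {
    long wordIndex = index >> 6;
    if (wordIndex >= bits.size()) {
        // Anything beyond the backing storage is already treated as unset,
        // so there is nothing to clear and no reason to grow the array.
        return;
    }
    bits.set(wordIndex, bits.get(wordIndex) & ~(1L << index));
}
```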
Lucene 8.5.0 release candidates are imminent. This commit upgrades master to use
the latest snapshot to check that there are no last-minute bugs or regressions.
This commit changes the Get Aliases API to include hidden indices by
default - this is slightly different from other APIs, but is necessary
to make this API work intuitively.
* Add unit tests before refactoring
* Convert boolean fields to set of strings
In order to make nodes stats plugins pluggable, we need to make the
NodesStatsRequest class capable of carrying a flexible list of metrics
rather than a fixed list of boolean flags. This commit changes the
internal storage of the class without changing its serialization.
* Change serialization of NodesStatsRequest
* Set up BWC before merging
* Singularize enum name
Under certain circumstances SpanMultiTermQueryWrapper uses
SpanBooleanQueryRewriteWithMaxClause as its rewrite method, which in turn tries
to get a TermsEnum from the wrapped MultiTermQuery currently using a `null`
AttributeSource. While queries like TermsQuery or subclasses of AutomatonQuery ignore
this argument, FuzzyQuery uses it to create a FuzzyTermsEnum which triggers an
NPE when the AttributeSource is not provided. This PR fixes this by supplying an
empty AttributeSource instead of a `null` value.
Closes#52894
Today when notifying a global checkpoint listener, we use the listener
thread pool. This commit turns this inside out so that the global
checkpoint listener must provide an executor on which to notify the
listener.
This commit drops the dispatching listenable action future that forks to
the listener thread pool. This was previously used in the transport
client but is no longer used.
It can be that a failure is repeated to a grouped action listener, for
example if the same exception, such as a connect transport exception, is
the cause of repeated failures. Previously we were unconditionally
suppressing the exception into the first exception, but
self-suppression is not allowed. Thus, we would throw an exception and
the grouped action listener would never complete. This commit addresses
this by guarding against self-suppression.
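A hedged sketch of the guard; the `failure` field is illustrative:
```
// Throwable#addSuppressed throws IllegalArgumentException when asked to suppress
// the exception itself, so skip suppression when the same instance is repeated.
if (failure.compareAndSet(null, e) == false) {
    Exception first = failure.get();
    if (first != e) {
        first.addSuppressed(e);
    }
}
```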
Today we notify refresh listeners by forking to the listener thread pool
and then serially notifying listeners on a thread there. Refreshes are
expensive though, so the expectation is that we are executing refreshes
on threads that can afford an expensive operation (e.g., not a network
thread) and as such, executing listeners that we expect to be cheap on
the calling thread is okay. This commit removes the forking of notifying
refresh listeners to run directly on the calling thread that executed a
refresh.
This commit introduces hidden aliases. These are similar to hidden
indices, in that they are not visible by default, unless explicitly
specified by name or by indicating that hidden indices/aliases are
desired.
The new alias property, `is_hidden` is implemented similarly to
`is_write_index`, except that it must be consistent across all indices
with a given alias - that is, all indices with a given alias must
specify the alias as either hidden, or all specify it as non-hidden,
either explicitly or by omitting the `is_hidden` property.
Our lovely `BitArray` compactly stores "flags", lazily growing its
underlying storage. It is super useful when you need to store one bit of
data for a zillion buckets or documents or something. Usefully, it
defaults to `false`. But there is a wrinkle! If you ask it whether or
not a bit is set but it hasn't grown its underlying storage array
"around" that index then it'll throw an `ArrayIndexOutOfBoundsException`.
The per-document use cases tend to show up in order and don't tend to
mind this too much. But the use case in aggregations, the per-bucket use
case, does. Because buckets are collected out of order all the time.
This changes `BitArray` so it'll return `false` if the index is too big
for the underlying storage. After all, that index *can't* have been set
or else we would have grown the underlying array. Logically, I believe
this makes sense. And it makes my life easy. At the cost of three lines.
*but* this adds an extra test to every call to `get`. I think this is
likely ok because it is "very close" to an array index lookup that
already runs the same test. So I *think* it'll end up merged with the
array bounds check.
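A hedged sketch mirroring the description, with the same assumed 64-bit word layout as in the earlier BitArray sketch:
```
public boolean get(long index) {
    long wordIndex = index >> 6;
    if (wordIndex >= bits.size()) {
        // The storage never grew past this index, so the bit cannot have been set.
        return false;
    }
    return (bits.get(wordIndex) & (1L << index)) != 0;
}
```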
This commit updates the template used for watch history indices with
the hidden index setting so that new indices will be created as hidden.
Relates #50251
Backport of #52962
Currently the AbstractBulkByScrollRequest accepts slice values of 0 via its
`setSlices` method, denoting the "auto" slicing behaviour that is usable by
setting the "slices=auto" parameter on REST requests. When using the High Level
Rest Client, however, we send the 0 value as an integer, which is then rejected
as invalid by `AbstractBulkByScrollRequest#parseSlices`. Instead of making
parsing of the rest request more lenient, this PR opts for changing the
RequestConverter logic in the client to translate 0 values to "auto" on the rest
requests.
Closes#53044
For Node Info to be pluggable, NodesInfoRequest must be able to carry
arbitrary strings. This commit reworks the internals of that class to
use a set rather than hard-coded boolean fields.
NodesInfoRequest defaults to specifying all values. We test for
this behavior as we refactor and use random testing for the
various combinations of metrics.
Add backwards compatibility for transport requests.
With ExitableDirectoryReader in place, check for query cancellation
during QueryPhase#preProcess where the query rewriting takes place.
Follows: #52822
(cherry picked from commit 0d38626d8e6e9e2620a7a446b617a2ac42852461)
The bool query builder in elasticsearch accepts both must_not and mustNot
fields. Given that leniency is abhorrent and must be eschewed, we should deprecate
the latter as it doesn't fit with the style of parameters elsewhere in the DSL.
Use assertBusy when doing reroute after bridged disruption,
since it can return non-acked if a node is marked faulty
by follower check after disruption ended.
Closes#53064
7.5 and 7.6 had a regression that allowed
script_score queries to have negative scores.
We have corrected this regression in #52478.
This is an addition to #52478 that adds
a test and release notes.
Implement an Exitable DirectoryReader that wraps the original
DirectoryReader so that when a search task is cancelled the
DirectoryReaders also stop their work fast. This is useful for
expensive operations like wildcard/prefix queries where the
DirectoryReaders can spend lots of time and consume resources,
as previously their work wouldn't stop even though the original
search task was cancelled (e.g. because of timeout or dropped client
connection).
(cherry picked from commit 67acaf61f33bc5f54e26541514d07e375c202e03)
With #50871 aggregations should now be parsed directly by an
`ObjectParser` or `ConstructingObjectParser` without the need for the
ceremonial `parse` method. This removes 9 of those `parse` methods and
parses the aggregation directly from their `ObjectParser`.
Currently the remote connection manager will delegate the size() call to
the underlying cluster connection manager. This introduces the
possibility that call will return 1 before the nodeConnection method has
been triggered to add the connection to the remote connection list. This
can cause issues, as the ensureConnected method checks the connection
manager's size and executes synchronously if the size is > 0. This leads
to a potential cluster not connected exception while we are still
waiting for the connection opened callback to be triggered.
This commit fixes this issue by using the remote connection manager's
size to report the connection manager's size.
Fixes#52029.
This commit removes the hand-rolled x-content parsing logic from BoolQueryBuilder
and instead uses an ObjectParser to handle parsing. It also removes the long-deprecated
(since version 6) disable_coord parameter.
Converts the deprecations to `deprecatedAndMaybeLog` to reduce the
number of times we log deprecations, since some of these could be called
at a high frequency (due to unconverted queries, aggs, etc).
This commit introduces a module for Kibana that exposes REST APIs that
will be used by Kibana for access to its system indices. These APIs are wrapped
versions of the existing REST endpoints. A new setting is also introduced since
the Kibana system indices' names are allowed to be changed by a user in case
multiple instances of Kibana use the same instance of Elasticsearch.
Additionally, the ThreadContext has been extended to indicate that the use of
system indices may be allowed in a request. This will be built upon in the future
for the protection of system indices.
Backport of #52385
With #50871 aggregations should now be parsed directly by an
`ObjectParser` or `ConstructingObjectParser` without the need for the
ceremonial `parse` method. This removes 10 of those `parse` methods and
parses the aggregation directly from their `ObjectParser`.
`MinAndMax` encapsulates min and max values for a shard. It uses generics to make sure that the values are of the same type and are also comparable. However, there are currently warnings wherever this class is used; this commit addresses them.
Relates to #49092
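A minimal sketch of the generics constraint being described (simplified, not the actual class): a single comparable type parameter covers both values, which removes the warnings at call sites:

```java
// Simplified sketch: both values share one type parameter and that type must be comparable.
final class MinAndMax<T extends Comparable<? super T>> {
    private final T min;
    private final T max;

    MinAndMax(T min, T max) {
        this.min = min;
        this.max = max;
    }

    T getMin() {
        return min;
    }

    T getMax() {
        return max;
    }
}
```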
This field is a specialization of the `keyword` field for the case when all
documents have the same value. It typically performs more efficiently than
keywords at query time by figuring out whether all or none of the documents
match at rewrite time, like `term` queries on `_index`.
The name is up for discussion. I liked including `keyword` in it, so that we
still have room for a `singleton_numeric` in the future. However I'm unsure
whether to call it `singleton`, `constant` or something else, any opinions?
For this field there is a choice between
1. accepting values in `_source` when they are equal to the value configured
in mappings, but rejecting mapping updates
2. rejecting values in `_source` but then allowing updates to the value that
is configured in the mapping
This commit implements option 1, so that it is possible to reindex from/to an
index that has the field mapped as a keyword with no changes to the source.
Backport of #49713
When notifying global checkpoint listeners, we have an opportunity to
early return if there are not any registered listeners. This is
important since it saves some allocations, and also saves forking some
empty work to another thread. This commit adds an early return from
notifying listeners if there are not any registered.
Allow AffixSetting as validator dependencies. If a validator
specifies AffixSettings as a dependency, then `validate(T, Map)`
will have the concrete setting in a map.
Backport of: #52973, 1e0ba70
Fixes: #52933
This commit removes a TODO in the IndexNameExpressionResolver that
indicated the API should use a Set instead of a List. However, this
TODO was not completely correct since the ordering of arguments matters
due to negations when evaluating wildcards and since we also allow
a list of patterns like `*,-foo,*`, which would have a different
meaning even when using a Set with insertion ordering.
Relates #52788
Backport of #52963
Since version 8.4, `MMapDirectory` has an optimization to read long[]
arrays directly in little endian order, which postings leverage. So it'd
be more efficient to open postings with `MMapDirectory`.
I refactored the existing logic a bit to better explain why every listed
file extension is opened with `mmap`.
Backport of #51233 to the seven dot x branch.
Tries to load a `Mapper` instance for the mapping snippet of a dynamic template.
This should catch things like using an analyzer that is undefined or mapping attributes that are unused.
This is best effort:
* If `{{name}}` placeholder is used in the mapping snippet then validation is skipped.
* If `match_mapping_type` is not specified then validation is performed for all mapping types.
If parsing succeeds with a single mapping type then the dynamic mapping is considered valid.
If it is detected that a dynamic template mapping snippet is invalid at mapping update time then the mapping update fails for indices created on 8.0.0-alpha1 and later. For indices created on prior versions a deprecation warning is emitted instead. In 7.x clusters the mapping update will never fail in case of an invalid dynamic template mapping snippet and a deprecation warning will always be emitted.
Closes#17411 Closes#24419
Co-authored-by: Adrien Grand <jpountz@gmail.com>
The `terms` aggregation can be sorted by the results of its
sub-aggregations. Because it uses that sorting for filtering to the
top-n it tries not to construct all of the buckets for the child
aggregations. This has its own interesting problem around reduction, but
they aren't super relevant to this change. This change moves that
optimization from the `TermsAggregator` and into the aggregators being
sorted on. This should make it more clear what is going on and it
unifies this optimization with validating the sort.
Finally, this should enable some minor optimizations to save a few
comparisons when sorting multi-valued buckets. I'll get those in a
follow up because they are now *fairly* obvious. They probably won't be
a huge performance improvement, but it'll be nice anyway.
Add index name(s) into the source for the cluster state update done when putting mapping.
This ensures that the pending tasks API includes information on source indices.
Computing the stats for completion fields may involve a significant amount of
work since it walks every field of every segment looking for completion fields.
Innocuous-looking APIs like `GET _stats` or `GET _cluster/stats` do this for
every shard in the cluster. This repeated work is unnecessary since these stats
do not change between refreshes; in many indices they remain constant for a
long time.
This commit introduces a cache for these stats which is invalidated on a
refresh, allowing most stats calls to bypass the work needed to compute them on
most shards.
Closes#51915
Backport of #51991
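A rough sketch of the caching pattern (names and structure assumed, not the actual implementation): compute the stats lazily and drop the cached value whenever a refresh happens:

```java
import java.util.function.Supplier;

// Hypothetical cache: the expensive computation runs at most once per refresh.
final class RefreshableStatsCache<T> {
    private final Supplier<T> compute;   // the expensive walk over every segment
    private volatile T cached;           // null means "needs recomputation"

    RefreshableStatsCache(Supplier<T> compute) {
        this.compute = compute;
    }

    T get() {
        T value = cached;
        if (value == null) {
            value = compute.get();       // a benign race: recomputing twice is harmless
            cached = value;
        }
        return value;
    }

    // Called from a refresh listener so the next stats call recomputes.
    void invalidate() {
        cached = null;
    }
}
```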
We aren't able to reproduce or figure out the reason that failed this test.
This commit adds more assertions so we can narrow the scope.
Relates #52223
Since #51905, we use the local checkpoint of the safe commit to
calculate the number of uncommitted operations of a translog stats. If a
periodic flush triggered by afterWriteOperation completes before we sync
translog, then the last commit is not safe. We also need to sync
translog from Engine instead of the translog so that we can advance the
safe commit.
Relates #51905 Closes#52223
Since #51905, we skip translog recovery if the local checkpoint of the
safe commit equals the global checkpoint. This change adjusts the
test not to create a new snapshot in that case.
Closes#52221
Relates #51905
Separates the translog from the index deletion conditions (allowing the translog to be cleaned
up more eagerly), and avoids taking the write lock on the translog if no clean-up is actually
necessary.
Today we use the translog_generation of the safe commit as the minimum
required translog generation for recovery. This approach has a
limitation, where we won't be able to clean up translog unless we flush.
Reopening an already recovered engine will create a new empty translog,
and we leave it there until we force flush.
This commit removes the translog_generation commit tag and uses the
local checkpoint of the safe commit to calculate the minimum required
translog generation for recovery instead.
Closes#49970
This commit changes the `index.hidden` setting from being final to a
dynamic setting. While the setting being final allows for easier
reasoning about an index, making this setting update-able has more
benefits in that we can upgrade existing indices to be hidden and it
will enable future features that would dynamically make indices hidden.
Backport of #52772
We've pretty well settled on `ContextParser` for a generic interface to
`ObjectParser`-like-things. This switches the interface used for
building parsing pipeline aggregations to `ContextParser` which saves a
couple of little wrappers around `ObjectParser`.
Currently 3 remote cluster settings (ping interval, skip unavailable,
and compression) have a dependency on the seeds setting being
configured. With proxy mode, it is now possible that the
seeds setting has not been configured. This commit removes this
dependency and adds new validation for these settings.
Generalize how queries on `_index` are handled at rewrite time (#52486)
Since this change refactors rewrites, I also took it as an opportunity to address #49254: instead of returning the same queries you would get on a keyword field when a field is unmapped, queries get rewritten to a MatchNoDocsQueryBuilder.
This change exposed a couple bugs, like the fact that the percolator doesn't rewrite queries at query time, or that the significant_terms aggregation doesn't rewrite its inner filter, which I fixed.
Closes#49254
Currently we have two ways to create a GroupShardsIterator: one that will resort the iterators based on their natural ordering, and another one that will leave them in their original order. This is currently done through two constructors, one that accepts a single argument which does the sorting, and another which accepts a second boolean argument to control whether sorting should happen or not. This second constructor is only called externally to disable the sorting.
By introducing a specific method to create a sorted shard iterator we clarify and make it easier to track when we do sort and when we do not as the iterators are externally sorted.
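As a sketch of the design choice (hypothetical names, not the actual GroupShardsIterator API), a named factory makes the sorting decision explicit at the call site instead of hiding it behind a boolean constructor argument:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical holder standing in for the shard iterator group.
final class ShardIterators<T extends Comparable<? super T>> {
    private final List<T> iterators;

    private ShardIterators(List<T> iterators) {
        this.iterators = iterators;
    }

    // Sorting is an explicit, named operation rather than a constructor flag.
    static <T extends Comparable<? super T>> ShardIterators<T> sortAndCreate(List<T> iterators) {
        List<T> sorted = new ArrayList<>(iterators);
        Collections.sort(sorted);
        return new ShardIterators<>(sorted);
    }

    // Callers that already have the order they want keep it as-is.
    static <T extends Comparable<? super T>> ShardIterators<T> create(List<T> iterators) {
        return new ShardIterators<>(iterators);
    }

    List<T> iterators() {
        return iterators;
    }
}
```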
This change fixes the incomplete backport of #46731 in 7.x (as of 7.5).
We now check if `max_children` is set on the top level nested sort and fail with an
exception if it's not the case.
Relates #46731 Closes#52202
* Remove TODO in MaxAgeCondition serialization
This removes the TODO with a message for any future readers regarding the code in question.
Resolves#52505
Currently the shard bulk request can be rejected by the write threadpool
after a mapping update. This introduces a scenario where the mapping
listener thread will attempt to finish the request and fsync. This
thread can potentially be a transport thread. This commit fixes this
issue by forcing the finish action to happen on the write threadpool.
Fixes#51904.
Lucene's RAMDirectory has been deprecated. This commit replaces all uses of
RAMDirectory in elasticsearch with the newer ByteBuffersDirectory. Most uses
are in tests, but the percolator and painless executor may get some small speedups.
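A minimal sketch of the swap, assuming Lucene 8.x where `ByteBuffersDirectory` lives in `org.apache.lucene.store`:

```java
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

final class InMemoryDirectories {
    // Previously: new RAMDirectory(); ByteBuffersDirectory is the non-deprecated replacement.
    static Directory newInMemoryDirectory() {
        return new ByteBuffersDirectory();
    }
}
```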
Currently, date ranges queries using NOW-based date math are rewritten to
MatchAllDocs queries when being preprocessed for the percolator. However,
since we added the verification step, this can result in incorrect matches when
percolator queries are run without scores. This commit changes things to instead
wrap date queries that use NOW with a new DateRangeIncludingNowQuery.
This is a simple wrapper query that returns its delegate at rewrite time, but it can
be detected by the percolator QueryAnalyzer and be dealt with accordingly.
This also allows us to remove a method on QueryRewriteContext, and push all
logic relating to NOW-based ranges into the DateFieldMapper.
Fixes#52617
When the Node class is being constructed, an initial environment is
passed in with the initial settings for the node. Once the plugin
service is initialized, the final Environment+Settings are created, at
which point the initial environment should no longer be used. This
commit renames the constructor arg to avoid naming clashes with the
final environment variable.
Before, the boost in a script_score query was wrongly applied only to the subquery.
This commit makes sure that the boost is applied to the whole score
that comes out of the script.
Closes#48465
In #42838 we moved the terms index of all fields off-heap except the
`_id` field because we were worried it might make indexing slower. In
general, the indexing rate is only affected if explicit IDs are used, as
otherwise Elasticsearch almost never performs lookups in the terms
dictionary for the purpose of indexing. So it's quite wasteful to
require the terms index of `_id` to be loaded on-heap for users who have
append-only workloads. Furthermore I've been conducting benchmarks when
indexing with explicit ids on the http_logs dataset that suggest that
the slowdown is low enough that it's probably not worth forcing the terms
index to be kept on-heap. Here are some numbers for the median indexing
rate in docs/s:
| Run | Master | Patch |
| --- | ------- | ------- |
| 1 | 45851.2 | 46401.4 |
| 2 | 45192.6 | 44561.0 |
| 3 | 45635.2 | 44137.0 |
| 4 | 46435.0 | 44692.8 |
| 5 | 45829.0 | 44949.0 |
And now heap usage in MB for segments:
| Run | Master | Patch |
| --- | ------- | -------- |
| 1 | 41.1720 | 0.352083 |
| 2 | 45.1545 | 0.382534 |
| 3 | 41.7746 | 0.381285 |
| 4 | 45.3673 | 0.412737 |
| 5 | 45.4616 | 0.375063 |
Indexing rate decreased by 1.8% on average, while memory usage decreased
by more than 100x.
The `http_logs` dataset contains small documents and has a simple
indexing chain. More complex indexing chains, e.g. with more fields,
ingest pipelines, etc. would see an even lower decrease of indexing rate.
This drops more of the `instanceof`s from `AggregationPath`. There are
still a couple in `AggregationPath`. And I ended up moving two into
`BucketsAggregator`, but I think this is still an improvement!
We consider index level read_only_allow_delete blocks temporary since
the DiskThresholdMonitor can automatically release those when an index
is no longer allocated on nodes above high threshold.
The REST status has therefore been changed to 429 when encountering this
index block to signal retryability to clients.
Related to #49393
This commit renames ElasticsearchAssertions#assertThrows to
assertRequestBuilderThrows and assertFutureThrows to avoid a
naming clash with JUnit 4.13+ and static imports of these methods.
Additionally, these methods have been updated to make use of
expectThrows internally to avoid duplicating the logic there.
Relates #51787
Backport of #52582
Phase 1 of adding compilation limits per context.
* Refactor rate limiting and caching into separate class,
`ScriptCache`, which will be used per context.
* Disable compilation limit for certain tests.
Backport of 0866031
Refs: #50152
This commit modifies the codebase so that our production code uses a
single instance of the IndexNameExpressionResolver class. This change
is being made in preparation for allowing name expression resolution
to be augmented by a plugin.
In order to remove some instances of IndexNameExpressionResolver, the
single instance is added as a parameter of Plugin#createComponents and
PersistentTaskPlugin#getPersistentTasksExecutor.
Backport of #52596
Cache latest `RepositoryData` on heap when it's absolutely safe to do so (i.e. when the repository is in strictly consistent mode).
`RepositoryData` can safely be assumed to not grow to a size that would cause trouble because we often have at least two copies of it loaded at the same time when doing repository operations. Also, concurrent snapshot API status requests currently load it independently of each other and so on, making it safe to cache on heap and assume as "small" IMO.
The benefits of this move are:
* Much faster repository status API calls
* listing all snapshot names becomes instant
* Other operations are sped up massively too because they mostly operate in two steps: load repository data then load multiple other blobs to get the additional data
* Additional cloud cost savings
* Better resiliency, saving another spot where an IO issue could break the snapshot
* We can simplify a number of spots in the current code that currently pass around the repository data in tricky ways to avoid loading it multiple times in follow ups.
* Refactor Inflexible Snapshot Repository BwC (#52365)
Transport the version to use for a snapshot instead of whether to use shard generations in the snapshots in progress entry. This allows making upcoming repository metadata changes in a flexible manner in an analogous way to how we handle serialization BwC elsewhere.
Also, exposing the version at the repository API level will make it easier to do BwC relevant changes in derived repositories like source only or encrypted.
AllocationDeciders would collect Yes decisions when not asking for debug
info. Changed to only include Yes decisions when debug is requested
(explain).
Added ability to specify a comma separated list of source indices without
an array. Also fixed it so that an empty string results in a validation error
rather than an index-does-not-exist error.
Closes#51949
Issue #52000 looks like a case of cluster state updates being slower than
expected, but it seems that these slowdowns are relatively rare: most
invocations of `testDelayWithALargeAmountOfShards` take well under a minute in
CI, but there are occasional failures that take 6+ minutes instead. When it
fails like this, cluster state persistence seems generally slow: most are
slower than expected, with some small updates even taking over 2 seconds to
complete.
The failures all have in common that they use `WindowsFS` to emulate Windows'
behaviour of refusing to delete files that are still open, by tracking all
files (really, inodes) and validating that deleted files are really closed
first. There is a suggestion that this is a little slow in the Lucene test
framework [1]. To see if we can attribute the slowdown to that common factor,
this commit suppresses the use of `WindowsFS` for this test suite.
[1] 4a513fa99f/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleTemporaryFilesCleanup.java (L166)
Currently we lock when generating time based uuids. The lock is
implemented to prevent concurrent writes to the last timestamp. The uuid
generation is an area of contention when indexing. This commit modifies
the code to use atomic compare and set operations to update the last
timestamp.
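A rough sketch of the lock-free pattern (illustrative only, not the actual UUID generator):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical example: advance the last-seen timestamp without holding a lock.
final class MonotonicClock {
    private final AtomicLong lastTimestamp = new AtomicLong();

    long nextTimestamp() {
        while (true) {
            long last = lastTimestamp.get();
            long candidate = Math.max(System.currentTimeMillis(), last); // never go backwards
            if (lastTimestamp.compareAndSet(last, candidate)) {
                return candidate;
            }
            // Another thread advanced the timestamp concurrently; retry against the new value.
        }
    }
}
```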
Every time a setting#exist call is made we lock on the keyset to ensure
that it has been initialized. This is a heavyweight operation that should
only be done once. This commit moves to a volatile read instead to
prevent unnecessary locking.
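A minimal sketch of the pattern (assumed structure, not the actual Settings code): a volatile read on the hot path, with locking reserved for the one-time initialization:

```java
import java.util.Set;

// Hypothetical example: exists() avoids synchronization once the keyset is built.
final class LazyKeySet {
    private volatile Set<String> keys; // non-null once initialized
    private final Object mutex = new Object();

    boolean exists(String key) {
        Set<String> current = keys;        // volatile read, no lock on the fast path
        if (current == null) {
            synchronized (mutex) {
                if (keys == null) {
                    keys = buildKeys();    // heavyweight, done exactly once
                }
                current = keys;
            }
        }
        return current.contains(key);
    }

    private Set<String> buildKeys() {
        return Set.of("cluster.name", "node.name"); // placeholder contents
    }
}
```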
Currently we have three different implementations representing a
`ConnectionManager`. There is the basic `ConnectionManager` which
holds all connections for a cluster. And a remote connection manager
which supports proxy behavior. And a stubbable connection manager for
tests. The remote and stubbable instances use the delegate pattern,
so this commit extracts an interface for them all to implement.
It looks like #52000 is caused by a slowdown in cluster state application
(maybe due to #50907) but I would like to understand the details to ensure that
there's nothing else going on here too before simply increasing the timeout.
This commit enables some relevant `DEBUG` loggers and also captures stack
traces from all threads rather than just the three hottest ones.
When `FilterStreamInput` wraps a Netty `ByteBuf` based stream it does
not forward the bulk primitive reads to the delegate.
These are optimized on the delegate but if they're not forwarded
then the delegate will be called e.g. 4 times to read an `int`.
This happens for essentially all network reads prior to this
change because they all run from a `NamedWritableAwareStreamInput`.
This also required optimising `BufferedChecksumStreamInput` individually to use bulk reads from the buffer because it implicitly assumed that the filter stream input wouldn't override any of the bulk operations.
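To illustrate with plain `InputStream`s (a hypothetical class, not the actual FilterStreamInput), forwarding the bulk read keeps the delegate's optimized path instead of degrading to repeated single-byte reads:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical filter stream: without the bulk override, reading an int could turn into
// four single-byte calls against the delegate.
class ForwardingStream extends InputStream {
    private final InputStream delegate;

    ForwardingStream(InputStream delegate) {
        this.delegate = delegate;
    }

    @Override
    public int read() throws IOException {
        return delegate.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return delegate.read(b, off, len); // forward the bulk read to the optimized delegate
    }
}
```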
The `top_metrics` agg is kind of like `top_hits` but it only works on
doc values so it *should* be faster.
At this point it is fairly limited in that it only supports a single,
numeric sort and a single, numeric metric. And it only fetches the "very
topest" document worth of metric. We plan to support returning a
configurable number of top metrics, requesting more than one metric and
more than one sort. And, eventually, non-numeric sorts and metrics. The
trick is doing those things fairly efficiently.
Co-Authored by: Zachary Tong <zach@elastic.co>
Fixes the no-query optimization for `min` and `max` aggregations
for `date_nanos` fields by delegating decoding dates "through" their
`resolution` member.
Closes#52220
This commit makes the names of fetch subphases more consistent:
* Now the names end in just 'Phase', whereas before some ended in
'FetchSubPhase'. This matches the query subphases like AggregationPhase.
* Some names include 'fetch' like FetchScorePhase to avoid ambiguity about what
they do.
This adds a builder and parsed results for the `string_stats`
aggregation directly to the high level rest client. Without this the
HLRC can't access the `string_stats` API without the elastic licensed
`analytics` module.
While I'm in there this adds a few of our usual unit tests and
modernizes the parsing.
When `date_histogram` attempts to optimize itself for a particular
time zone it checks to see if the entire shard is within the same
"transition". Most time zones transition once every six months or
thereabouts, so the optimization usually kicks in.
*But* it crashes when you attempt to feed it a time zone whose last DST
transition was before epoch. The reason for this is a little twisted:
before this patch it'd find the next and previous transitions in
milliseconds since epoch. Then it'd cast them to `Long`s and pass them
into the `DateFieldType` to check if the shard's contents were within
the range. The trouble is they are then converted to `String`s which are
*then* parsed back to `Instant`s which are then converted to `long`s. And
the parser doesn't like most negative numbers. And everything before
epoch is negative.
This change removes the
`long` -> `Long` -> `String` -> `Instant` -> `long` chain in favor of
passing the `long` -> `Instant` -> `long` which avoids the fairly complex
parsing code and handles a bunch of interesting edge cases around
epoch. And other edge cases around `date_nanos`.
Closes#50265
We need to reduce the translog sync interval for indices with translog
async setting so that we can have the safe commit in the assertBusy
interval. This is needed since #51905, where we use the local checkpoint
of the safe commit to calculate the number of uncommitted operations of
a translog stats.
Closes#52251
Relates #51905
This commit removes the need for DeprecatedRoute and ReplacedRoute to
have an instance of a DeprecationLogger. Instead the RestController now
has a DeprecationLogger that will be used for all deprecated and
replaced route messages.
Relates #51950
Backport of #52278
Add a new cluster setting `search.allow_expensive_queries` which by
default is `true`. If set to `false`, certain queries that have
usually slow performance cannot be executed and an error message
is returned.
- Queries that need to do linear scans to identify matches:
- Script queries
- Queries that have a high up-front cost:
- Fuzzy queries
- Regexp queries
- Prefix queries (without index_prefixes enabled)
- Wildcard queries
- Range queries on text and keyword fields
- Joining queries
- HasParent queries
- HasChild queries
- ParentId queries
- Nested queries
- Queries on deprecated 6.x geo shapes (using PrefixTree implementation)
- Queries that may have a high per-document cost:
- Script score queries
- Percolate queries
Closes: #29050
(cherry picked from commit a8b39ed842c7770bd9275958c9f747502fd9a3ea)
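A hedged sketch of how such a guard might look; only the setting name comes from the description above, the surrounding code is illustrative:

```java
// Hypothetical helper: reject a known-expensive query when the setting is disabled.
final class ExpensiveQueryGuard {
    static final String SETTING_NAME = "search.allow_expensive_queries";

    static void ensureAllowed(boolean allowExpensiveQueries, String queryName) {
        if (allowExpensiveQueries == false) {
            throw new IllegalArgumentException(
                "[" + queryName + "] queries cannot be executed when '"
                    + SETTING_NAME + "' is set to false.");
        }
    }
}
```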
The buffer in LoggingOutputStream skips flushing when only a newline
appears. However, if a windows newline appeared, the buffer length was
not reset. This commit resets the length so the \r does not appear in
the next logging message.
closes#51838
MockRandomMergePolicy randomly determines if a segment should use a
compound format. This can cause a force merge performing two merges: (1)
merging to a single segment, (2) rewriting the new segment using the
compound format. If the second merge completes after we have flushed,
then it can flip the flag shouldPeriodicallyFlushAfterBigMerge to true.
Closes#52205
Modifies SLM's and ILM's history indices to be hidden indices for added
protection against accidental querying and deletion, and improves
IndexTemplateRegistry to handle upgrading index templates.
Also modifies the REST test cleanup to delete hidden indices.
This removes a bunch of `instanceof`s in favor of two new methods on
`InternalAggregation`. The default implementations of these methods just
throw exceptions explaining that you can't sort on this aggregation.
They are overridden by all of the classes that used to have `instanceof`
checks against them.
I doubt this is really any faster in practice. The real benefit here is
that it is a little more obvious *that* you can sort by the results of
an aggregation and it should be *much* more obvious where to look at
*how* aggregations sort themselves.
There are still a bunch more `instanceof`s left in `AggregationPath`
but those will wait for a followup change.
Disallow specifying percentiles out of the range [0,100]. This also fixes a problem in transform by failing
validation if an invalid percentile configuration is used.
Changes the misleading error message when attempting to open
a job while the "cluster.persistent_tasks.allocation.enable"
setting is set to "none" to a clearer message that names the
setting.
Closes#51956
Today we use `cluster.join.timeout` to prevent nodes from waiting indefinitely
if joining a faulty master that is too slow to respond, and
`cluster.publish.timeout` to allow a faulty master to detect that it is unable
to publish its cluster state updates in a timely fashion. If these timeouts
occur then the node restarts the discovery process in an attempt to find a
healthier master.
In the special case of `discovery.type: single-node` there is no point in
looking for another healthier master since the single node in the cluster is
all we've got. This commit suppresses these timeouts and instead lets the node
wait for joins and publications to succeed no matter how long this might take.
Previously, the dot-index rules (namely, that indices with dot-prefixed
names should be either hidden indices or system indices) were done
*before* template application, and so only checked for the `index.hidden`
setting in the request, ignoring if that setting was set via a template.
This commit moves that check to a different method, which is applied
after templates have been resolved and applied to the index settings.
This commit fixes another edge case in handling windows newlines in our
capture of stdout/stderr to log4j. The case is that the \r appears at
the beginning of the buffer when flushing, which would unintentionally
be emitted as an empty string. This commit skips the flush if only a \r
was found.
closes#51838
Currently, the logic for looking up `flattened` field types lives in the
top-level `FieldTypeLookup`. This PR moves it into a dedicated class
`DynamicKeyFieldTypeLookup`.
Segment(s) info blobs are already stored with their full content
in the "hash" field in the shard snapshot metadata as long as they are
smaller than 1MB. We can make use of this fact and never upload them
physically to the repo.
This saves a non-trivial number of uploads and downloads when restoring
and might also lower the latency of searchable snapshots since they can save
physically loading this information as well.
- Enable SunJGSS provider for Kerberos tests
- Handle the fact that the decrypt method in KeyStoreWrapper might
not throw immediately when the GCM cipher is from BouncyCastle FIPS
and we end up with a DataInputStream that has reached its end.
- Disable tests, jarHell, testingConventions for ingest attachment
plugin. We don't support this plugin (and document this) in FIPS
mode.
- Don't attempt to install ingest-attachment in smoke-test-plugins
This commit changes how RestHandlers are registered with the
RestController so that a RestHandler no longer needs to register itself
with the RestController. Instead the RestHandler interface has new
methods which when called provide information about the routes
(method and path combinations) that are handled by the handler
including any deprecated and/or replaced combinations.
This change also makes the publication of RestHandlers safe since they
no longer publish a reference to themselves within their constructors.
Closes#51622
Co-authored-by: Jason Tedor <jason@tedor.me>
Backport of #51950
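A simplified sketch of the new style (generic names, not the actual RestHandler API): the handler exposes its routes and the controller collects them, rather than the handler registering itself:

```java
import java.util.List;

// Hypothetical interface standing in for the described RestHandler methods.
interface RouteProvider {
    final class Route {
        final String method;
        final String path;

        Route(String method, String path) {
            this.method = method;
            this.path = path;
        }
    }

    // Routes are declared by the handler; the controller iterates over them to register.
    List<Route> routes();
}

final class ExampleHandler implements RouteProvider {
    @Override
    public List<Route> routes() {
        return List.of(
            new Route("GET", "/_example"),
            new Route("GET", "/_example/{name}"));
    }
}
```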
We might leak a searcher if the target shard is removed (i.e., its index
is deleted) or relocated while we are creating a SearchContext from a
SearchRewriteContext.
Relates #51708 Closes#52021
I labelled this as a non-issue because it fixes an unreleased bug introduced in #51708.
We can put the `IndexId` instead of just the index name into the recovery source and
save one load of `RepositoryData` on each shard restore that way.
We need to either exclude null responses from the scroll search response
or always create a search context for every target shards, although that
scroll query can be written to match_no_docs. Otherwise, we won't find
search_context for subsequent scroll requests.
This commit implements the latter option as it's less error-prone.
Relates #51708
This change ensures that the rewrite of the shard request is executed in the network thread or in the refresh listener when waiting for an active shard. This allows queries that rewrite to match_no_docs to bypass the search thread pool entirely even if the can_match phase was skipped (pre_filter_shard_size > number of shards). Coordinating nodes don't have the ability to create empty responses so this change also ensures that at least one shard creates a full empty response while the other can return null ones. This is needed since creating true empty responses on shards require to create concrete aggregators which would be too costly to build on a network thread. We should move this functionality to aggregation builders in a follow up but that would be a much bigger change.
This change is also important for #49601 since we want to add the ability to use the result of other shards to rewrite the request of subsequent ones. For instance if the first M shards have their top N computed, the top worst document in the global queue can be pass to subsequent shards that can then rewrite to match_no_docs if they can guarantee that they don't have any document better than the provided one.
QueryBuilders that throw exceptions on shards when building the Lucene query
return the full serialization of the query builder in the exception message.
For large queries that fail to execute due to the max boolean clause limit, this means
that we keep a reference to these big messages for every shard that participates
in the request. In order to limit the memory needed to hold these query shard
exceptions in the coordinating node, this change removes the query builder
serialization from the shard exception. The query is known by the user so
there should be no need to repeat it on every shard exception. We could also
omit the entire stack trace for known bad request exception but it would deserve
a separate issue/pr.
Closes#51843 Closes#48910
When the `rare_terms` aggregation contained another aggregation it'd
break them. Most of the time. This happened because the process that it
uses to remove buckets that turn out not to be rare was incorrectly
merging results from multiple leaves. This'd cause array index out of
bounds issues. We didn't catch it in the test because the issue doesn't
happen on the very first bucket. And the tests generated data in such a
way that the first bucket always contained the rare terms. Randomizing
the order of the generated data fixed the test so it caught the issue.
Closes#51020
Currently, the same class `FieldCapabilities` is used both to represent the
capabilities for one index, and also the merged capabilities across indices. To
help clarify the logic, this PR proposes to create a separate class
`IndexFieldCapabilities` for the capabilities in one index. The refactor will
also help when adding `source_path` information in #49264, since the merged
source path field will have a different structure from the field for a single index.
Individual changes:
* Add a new class IndexFieldCapabilities.
* Remove extra constructor from FieldCapabilities.
* Combine the add and merge methods in FieldCapabilities.Builder.