We never see this exception in the logs even though it's pretty severe.
All we might see is an exception about a transport message not having been read fully,
thrown from the logic that follows this code.
Technically we should probably bubble up the exception, but that's a bigger change
and needs some careful reasoning. For the time being, this change at least simplifies
tracking down deserialization issues in responses.
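A minimal sketch of the idea, where `handler`, `in`, and `logger` are hypothetical stand-ins rather than the actual transport internals:

```java
// Hypothetical sketch, not the actual transport code: log the deserialization
// failure where it happens, so it is visible even if the later "message not
// fully read" exception masks it.
TransportResponse response;
try {
    response = handler.read(in); // deserialize the response off the wire
} catch (Exception e) {
    logger.warn("failed to deserialize response for handler [" + handler + "]", e);
    throw e; // still propagate; the logging is what this change adds
}
```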
FastVectorHighlighter uses the top-level reader to rewrite queries against, which
it gets via an IndexSearcher field on HitContext. However, we can already access
this top-level reader via HitContext's existing LeafReaderContext field.
This commit removes the unnecessary field and constructor parameter, and
changes the implementation of topLevelReader to go via ReaderUtils and
the leaf reader context.
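For reference, a minimal sketch of deriving the top-level reader from a leaf context with Lucene's `ReaderUtil` (only the Lucene API here is real; the method wrapper is illustrative):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.ReaderUtil;

// Walk up from the leaf to the top-level composite context and return its
// reader, removing the need for a separate IndexSearcher field on HitContext.
static IndexReader topLevelReader(LeafReaderContext leafContext) {
    return ReaderUtil.getTopLevelContext(leafContext).reader();
}
```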
AuthorizationService#authorize uses the thread context to carry the result of the
authorization as transient headers. The listener argument to the `authorize` method
must necessarily observe those header values. This PR makes it so that
the authorization transient headers (`_indices_permissions` and `_authz_info`, but
NOT `_originating_action_name`) of the child action override the ones of the parent action.
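A rough sketch of the transient-header mechanics, purely illustrative (this is not the actual `AuthorizationService` code; `childIndicesPermissions` and `childAuthzInfo` are made-up stand-ins):

```java
// Run the child action under its own context so its authorization results,
// carried as transient headers, are the ones its listener observes.
try (ThreadContext.StoredContext restore = threadContext.stashContext()) {
    threadContext.putTransient("_indices_permissions", childIndicesPermissions);
    threadContext.putTransient("_authz_info", childAuthzInfo);
    // in the real change, "_originating_action_name" keeps the parent's value
    // ... authorize and run the child action here ...
}
```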
Co-authored-by: Tim Vernum <tim@adjective.org>
This was missing and caused nodes to drop out of the cluster on serialization failures
whenever one tried to get an enrich policy task by name.
The test in here is a little dirty, but I figured it would be nice to have an actual reproducer
for the issue, and I couldn't find any infrastructure to nicely time the tasks, so I put this on
top of existing test infra.
Use the newly introduced PIT API to have a consistent view of the data
while doing sequence matching, which involves multiple calls (i.e. repeatable
reads), thus avoiding race conditions with any in-flight updates
on the data.
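As a rough illustration of the repeatable-reads flow (a sketch using the low-level REST client; `extractPitId` is a hypothetical JSON-parsing helper, and the EQL implementation wires this up differently):

```java
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Open a PIT, run the multiple search calls against the same point in time,
// then close it. Every call sees the same snapshot of the data.
static void searchWithPit(RestClient client, String index) throws Exception {
    Request open = new Request("POST", "/" + index + "/_pit");
    open.addParameter("keep_alive", "2m");
    Response opened = client.performRequest(open);
    String pitId = extractPitId(opened); // hypothetical: parse "id" from the JSON body

    Request search = new Request("POST", "/_search");
    search.setJsonEntity(
        "{\"query\":{\"match_all\":{}},"
            + "\"pit\":{\"id\":\"" + pitId + "\",\"keep_alive\":\"2m\"}}");
    client.performRequest(search); // repeat as needed: same PIT id, same view

    Request close = new Request("DELETE", "/_pit");
    close.setJsonEntity("{\"id\":\"" + pitId + "\"}");
    client.performRequest(close);
}
```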
(cherry picked from commit daa72fc3c71fd36afb55278021ff6bbc591ef148)
Today when an S3RetryingInputStream is closed the remaining bytes
that were not consumed are drained right before closing the underlying
stream. In some contexts it might be more efficient to not consume the
remaining bytes and just drop the connection.
This is for example the case with snapshot backed indices prewarming,
where there is no point in reading potentially large blobs if we know
that the cache file we want to write the blob's content to has already been
evicted. Draining all the bytes here takes a slot in the prewarming thread
pool for nothing.
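A sketch of the two close behaviours (illustrative; `shouldDrain` and the buffer handling are stand-ins, not the actual `S3RetryingInputStream` members):

```java
import java.io.IOException;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

// Either drain the remaining bytes so the pooled HTTP connection can be
// reused, or abort to drop the connection without reading the remainder.
static void closeStream(S3ObjectInputStream stream, boolean shouldDrain) throws IOException {
    if (shouldDrain) {
        byte[] buffer = new byte[8192];
        while (stream.read(buffer) >= 0) {
            // consume and discard the remaining bytes
        }
        stream.close();
    } else {
        stream.abort(); // drop the connection; cheaper when the data is useless
    }
}
```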
This change adds an aggregation that can be used to delay query phase
execution on shards by a configurable amount of time:
```json
{
  "aggs": {
    "delay": {
      "shard_delay": {
        "value": "30s"
      },
      "aggs": {
        "host": {
          "terms": {
            "field": "hostname"
          }
        }
      }
    }
  }
}
```
This test module is built on top of #61954, so the aggregation will be available only in
snapshot builds, since this module is not meant to be used in production.
Closes #54159
Backport of #62361 to 7.x branch.
This test was fine and shouldn't have been muted.
The test case class should have preserved data streams as part of #62205.
Closes #62210
There is a race in this test where the index request will return
once the dynamic mapping update has been observed by the cluster
state observer internally used by indexing, but before the update has hit all
state appliers, and thus it doesn't yet show up in the applied state returned
by `clusterService.state()`.
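One way to sidestep such a race in a test (a sketch of the general pattern, not necessarily the exact fix applied here; the index name is made up):

```java
// Retry until the dynamic mapping update shows up in the applied cluster
// state, instead of asserting on clusterService.state() immediately.
assertBusy(() -> assertNotNull(clusterService.state().metadata().index("test").mapping()));
```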
We have a special FetchPhaseExecutionException which contains some useful
information about which shard and doc a fetch phase has failed in. However, this
is not used in many places - currently only the ExplainPhase and the highlighters
throw one, and the FetchPhase itself catches IOExceptions and just passes them
to the ExceptionsHelper with no extra context.
This commit changes FetchPhase to throw FetchPhaseExecutionException if it
encounters problems in any of its subphases, and removes the special handling
from the explain and highlight phases. It also removes the need to pass shard ids
around when building HitContext objects.
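Conceptually the change wraps sub-phase failures along these lines (a hedged sketch; the method names on the context objects are illustrative, and the real code also covers the IOException paths):

```java
// Wrap any sub-phase failure so the shard and doc context travel with it,
// rather than surfacing a bare exception with no context.
try {
    subPhase.hitExecute(searchContext, hitContext);
} catch (Exception e) {
    throw new FetchPhaseExecutionException(searchContext.shardTarget(),
        "Error running fetch phase for doc [" + hitContext.docId() + "]", e);
}
```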
The log4j config in :qa:os was broken because it referenced an appender plugin that is not
on that project's classpath. Resolve this by adding a dedicated logging config and removing
the copy step.
The complexity of removing a timeout listener was `O(n)` which
means that in case of many queued up CS update tasks (such as in the
case of an avalanche of dynamic mapping updates) we're dealing with
quadratic complexity for timing out N tasks which was observed to be
an issue in practice.
This PR makes the complexity of timing out a task `O(1)` and generally
simplifies the iteration logic over listeners and appliers to be a little
more efficient and inline better.
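A generic sketch of the `O(1)` idea (illustrative only, not the actual cluster service code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Keep queued tasks in an insertion-ordered map keyed by task identity:
// batch execution can still iterate in order, while a timeout removes its
// task with a single map operation instead of an O(n) list scan.
final class TimedTaskQueue<T> {
    private final Map<Object, T> queued = new LinkedHashMap<>();

    synchronized void enqueue(Object taskKey, T task) {
        queued.put(taskKey, task);
    }

    synchronized T removeOnTimeout(Object taskKey) {
        return queued.remove(taskKey); // O(1) removal
    }
}
```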
* Add "synthetics-*-*" templates for synthetics fleet data
For the Elastic Agent we currently have `logs` and `metrics`; however, synthetic data doesn't belong
with those, so we should have a place for it to live. This would be data reported from
Heartbeat, under the 'monitoring' category.
This commit adds a composable index template for `synthetics-*-*` indices similar to the work in
#56709 and #57629.
Resolves #61665
The test case InnerHitBuilderTests#testEqualsAndHashcode creates a copy of the object
by serializing + deserializing it, then applies a modification. If the 'fields'
list is empty, then deserializing it results in a Collections.emptyList(). Because
this is immutable, modifying it can throw an UnsupportedOperationException.
This PR takes the same approach as for docvalue_fields, where we create a new
list instead of trying to add to an empty one.
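The pitfall in miniature (illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EmptyListPitfall {
    public static void main(String[] args) {
        List<String> fields = Collections.emptyList(); // what deserialization may return
        // fields.add("field");                        // UnsupportedOperationException
        List<String> copy = new ArrayList<>(fields);   // same approach as docvalue_fields
        copy.add("field");                             // safe to modify the copy
    }
}
```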
This PR adds support for the 'fields' option in the following places:
* Anytime `inner_hits` is used, for both fetching nested/child docs and field collapsing
* The `top_hits` aggregation (see the sketch below)
Addresses #61949.
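For illustration, the request shape with `fields` inside `top_hits` (client plumbing assumed; the index and field names are made up):

```java
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// A top_hits aggregation asking for specific fields via the new 'fields' option.
static void topHitsWithFields(RestClient client) throws Exception {
    Request search = new Request("POST", "/my-index/_search");
    search.setJsonEntity(
        "{\"size\":0,\"aggs\":{\"top\":{\"top_hits\":{"
            + "\"size\":1,\"fields\":[\"user.name\",\"@timestamp\"]}}}}");
    client.performRequest(search);
}
```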
The annotations index is not covered by the comparison between
mappings and templates, as it does not use an index template.
This commit adds an assertion on annotations index mappings
that will fail if the mappings are not upgraded as expected.
Backport of #62325
Followup to #61681:
- reuse the current iterator in `reset()` if possible
- simplify some integer-overflow-avoidance in `skip()`
- clarify some comments
- address some IntelliJ warnings
The job comms thread pool is intended for the long-running job
processes that do anomaly detection or data frame analytics and
count towards job count and memory limits.
This commit moves the short-lived memory estimation processes
to the ML utility thread pool.
Although this doesn't matter in most cases, at the limits of
scale it could mean that memory estimations would get in the way
of starting jobs, or would queue up for an excessive period of
time while waiting for jobs to finish.
Similar to the work in #60994, where we introduced the `data_hot`, `data_warm`, etc. node roles, this
introduces a new `data_content` node role to be used for the Content tier.
Currently this tier is not used anywhere, but subsequent work will use it.
Relates to #60848
When calling `_execute` there is a chance that there will be bulk indexing failures
or search failures.
These will result in the call failing overall, but no information is provided for troubleshooting the failure.
This commit adds logging to indicate the number of failures, and new debug level logging so that
failure details can be determined if necessary.
Closes https://github.com/elastic/elasticsearch/issues/60491
Disabling the `query_string` query's `allow_leading_wildcard` parameter didn't
work after a change probably introduced in #60959, because the various field types'
`wildcardQuery` methods don't check the leading characters the way
QueryParserBase#getWildcardQuery does. This PR adds the missing check
before calling the field types' wildcard-generating method.
Closes #62267
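The missing guard, in essence (a sketch mirroring what QueryParserBase#getWildcardQuery enforces, not the exact production code):

```java
// Reject a leading '*' or '?' before delegating to the field type's
// wildcardQuery, matching the behaviour of QueryParserBase#getWildcardQuery.
static void checkLeadingWildcard(String term, boolean allowLeadingWildcard) {
    if (allowLeadingWildcard == false
            && (term.startsWith("*") || term.startsWith("?"))) {
        throw new IllegalArgumentException(
            "'*' or '?' not allowed as first character in WildcardQuery");
    }
}
```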
Just a number of obvious spots where we were allocating
duplicate empty structures or were otherwise inefficient, which I
found while investigating snapshot cluster state update performance.
This commit deprecates the Repository Stats API added in 7.8.0 as
an experimental API behind a feature flag. The goal is to deprecate
this API in 7.10.0 and remove it in a follow-up PR in 8.0.0.
This API is now superseded by the Repositories Metering API.
It has been observed that if the normalizer process fails
to connect to the JVM then this causes a null pointer
exception as the JVM tries to close the native process
object. The accessors and close methods of the native
process class that access the C++ log handler should not
assume that it connected correctly.
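In essence, the defensive change looks like this (a sketch; the field and method names are illustrative):

```java
// Don't assume the C++ log handler was ever set: if the process failed to
// connect, it is null, and closing must not dereference it.
@Override
public void close() throws IOException {
    if (cppLogHandler != null) {
        cppLogHandler.close();
    }
}
```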
Backport of #62059 to 7.x branch.
Return a 404 HTTP status code when attempting to delete a non-existing data stream.
However, only return a 404 when targeting a data stream without any wildcards.
Closes #62022
This commit addresses a super minor misalignment with master, applying exactly the same change that was made as part of #62057, which was backported before point in time APIs were backported.