This commit handles test cases for withLocale and withZone where the zone
or locale in question is the same as the special base case. This can
happen occasionally since the locales and zone ids are randomized.
Empty values on keyword fields are filtered by the `map` execution mode
of the `terms` aggregation. This commit restores them as valid buckets.
Closes #34434
This commit removes randomization of locale for DateFormatter equals
tests, instead using explicit locales. The test framework already
randomizes locales, so the random choice of the second locale can
sometimes be equal to the already chosen locale. Randomization also does
not provide any extra protection, as the DateFormatter equals
implementation does not itself compare locales.
closes #34337
ListenableFuture may run a listener on the same thread that called the
addListener method or it may execute on another thread after the future
has completed. Whenever the ListenableFuture stores the listener for
execution later, it should preserve the thread context which is what
this change does.
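For illustration, a minimal sketch of that idea (`ThreadContext.newRestorableContext` exists in Elasticsearch; the surrounding field and helper names here are assumptions, not the actual implementation):

```java
// Sketch only: store the listener together with the caller's thread context.
public void addListener(ActionListener<V> listener, ExecutorService executor) {
    if (isDone()) {
        executor.execute(() -> notifyListener(listener)); // caller's context is intact
    } else {
        Supplier<ThreadContext.StoredContext> stored = threadContext.newRestorableContext(false);
        storedListeners.add(() -> {
            // restore the captured context before the listener runs
            try (ThreadContext.StoredContext ignore = stored.get()) {
                notifyListener(listener);
            }
        });
    }
}
```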
Today we rewrite the operations from the leader with the term of the
following primary because the follower should own its history. The
problem is that a newly promoted primary may re-assign its term to
operations which were replicated to replicas before by the previous
primary. If this happens, some operations with the same seq_no may be
assigned different terms. This is not good for the future optimistic
locking using a combination of seqno and term.
This change ensures that the primary of a follower only processes an
operation if that operation was not processed before. The skipped
operations are guaranteed to be delivered to replicas via either
primary-replica resync or peer-recovery. However, the primary must not
acknowledge until the global checkpoint is at least the highest seqno of
all skipped ops (i.e., they all have been processed on every replica).
Relates #31751
Relates #31113
Questions on how to work with `ActionPlugin#getRestHandlerWrapper()`
come up in the discuss forums all the time. This change adds an example
to the javadoc of how this method can be used.
ES is scanning for dangling indices on every cluster state update. For this, it lists the subfolders of
the indices directory to determine which extra index directories exist on the node where there's no
corresponding index in the cluster state. These are potential targets for dangling index import. On
certain machine types, and with a large number of indices, this subfolder listing can be horribly slow.
This means that every cluster state update will be slowed down by potentially hundreds of
milliseconds. One of the reasons for this poor performance is that Files.isDirectory() is a relatively
expensive call on some OS and JDK versions. There is no need though to do all these isDirectory
calls for folders which we know we are going to discard anyhow in the next step of the dangling
indices logic. This commit allows adding an exclusion predicate to the availableIndexFolders
methods which can dramatically speed up this method when scanning for dangling indices.
Since all calls to `ESLoggerFactory` outside of the logging package were
deprecated, it seemed like it'd simplify things to migrate all of the
deprecated calls and declare `ESLoggerFactory` to be package private.
This does that.
With this commit we restore the previous behavior in
`BigArraysTests#testMaxSizeExceededOnResize` but lower the sizes that
are tested to the range between 256 bytes to 16 kB so the test does not
produce a whole lot of garbage.
The previous attempt to reduce the amount of garbage produced by that
test was to properly size the array initially but it failed to account
for object alignment which led to test failures in some cases. While it
would be possible to account for object alignment, we would need to open
up BigArrays or directly use the underlying Lucene API which would
require us to allocate an array upfront only to find its size (incl.
object alignment).
Instead we have fixed this issue by conservatively sizing the array
initially (so the initial allocation will never trip the circuit
breaker) and reduce garbage by reducing the circuit breaker's upper
bound as described previously.
Closes #33750
Relates #34325
This changes the delete job API by adding
the choice to delete a job asynchronously.
The commit adds a `wait_for_completion` parameter
to the delete job request. When set to `false`,
the action returns immediately and the response
contains the task id.
This also changes the handling of subsequent
delete requests for a job that is already being
deleted. It now uses the task framework to check
if the job is being deleted instead of the cluster
state. This is beneficial as it will keep working once
the job configs are moved out of the cluster state and
into an index. Also, force delete
requests that are waiting for the job to be deleted
will not proceed with the deletion if the first task
fails. This will prevent overloading the cluster. Instead,
the failure is communicated better via notifications
so that the user may retry.
Finally, this makes the `deleting` property of the job
visible (it was also renamed from `deleted`). This allows
a client to render a deleting job differently.
Closes #32836
The `status` part of the tasks API reflects the internal status of a
running task. In general, we do not make backwards breaking changes to
the `status` but because it is internal we reserve the right to do so. I
suspect we will very rarely exercise that right but it is important
that we have it so we're not boxed into any particular implementation
for a request.
In some sense this is policy making by documentation change. In another
it is clarification of the way we've always thought of this field.
I also reflect the documentation change into the Javadoc in a few
places. There I acknowledge Kibana's "special relationship" with
Elasticsearch. Kibana parses `_reindex`'s `status` field and, because
we're friends with those folks, we should talk to them before we make
backwards breaking changes to it. We *want* to be friends with everyone
but there is only so much time in the day and we don't *want* to make
backwards breaking changes to `status` at all anyway. So we hope that
the breaking changes documentation will be enough for other folks.
Relates to #34245.
* SCRIPTING: Add Expr. Compile for TermSetQuery Ctx.
* Follow up to #33602 adding the ability to compile TermsSetQuery
scripts with the expressions engine in the same way we support
SearchScript in Expressions
* Duplicated the code here for now to make the change less complex,
the only difference to SearchScript is that `_score` and `_value` are not handled for TermsSetQuery
* remove redundant check
Drops the last logging constructor that takes `Settings` because it is
no longer needed.
Watcher goes through a lot of effort to pass `Settings` to `Logger`
constructors and dropping `Settings` from all of those calls allowed us
to remove quite a bit of log-based ceremony from watcher.
Today we use the version of a DirectoryReader as a component of the key
of IndicesRequestCache. This usage is perfectly fine since the version
is advanced every time a new change is made into IndexWriter. In other
words, two DirectoryReaders with the same version should have the same
content. However, this invariant is only guaranteed in the context of a
single IndexWriter because the version is reset to the committed version
value when IndexWriter is re-opened.
Since #33473, each IndexShard may have more than one IndexWriter, and
using the version of a DirectoryReader as a part of the cache key can
cause IndicesRequestCache to return stale cached values. For example, in
#27650, we rollback the engine (i.e., re-open IndexWriter), index new
documents, refresh, then make a count request, but the search layer
mistakenly returns the count of the DirectoryReader of the previous
IndexWriter because the current DirectoryReader has the same version as
the old DirectoryReader even though their documents are different. This is
possible because these two readers come from different IndexWriters.
This commit replaces the version with the reader cache key of
IndexReader as a component of the cache key of IndicesRequestCache.
Closes #27650
Relates #33473
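The core of the fix, sketched with Lucene's reader cache helper (surrounding plumbing omitted):

```java
// Use the reader's cache key, unique per reader instance, instead of the
// version, which can repeat across re-opened IndexWriters.
IndexReader.CacheHelper cacheHelper = directoryReader.getReaderCacheHelper();
IndexReader.CacheKey readerCacheKey = cacheHelper.getKey(); // null-handling omitted
// ... readerCacheKey then becomes a component of the IndicesRequestCache key
```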
This commit adds support for early terminating the collection of a leaf
in the min/max aggregator. If the query matches all documents, the min and max values
for a numeric field can be retrieved efficiently from the points reader.
This change applies this optimization when possible.
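For a long-valued field, the idea looks roughly like this (a sketch, not the aggregator code):

```java
import java.io.IOException;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.PointValues;

// If the query matches all documents in the segment, the min/max of an
// indexed numeric field is available from the points metadata in O(1).
static long[] minMaxFromPoints(LeafReader leafReader, String fieldName) throws IOException {
    PointValues points = leafReader.getPointValues(fieldName);
    if (points == null) {
        return null; // field has no points in this segment
    }
    long min = LongPoint.decodeDimension(points.getMinPackedValue(), 0);
    long max = LongPoint.decodeDimension(points.getMaxPackedValue(), 0);
    return new long[] { min, max };
}
```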
* Make text message not required in constructor for slack
* Remove unnecessary comments in test file
* Throw exception when reduce or combine is not provided; update tests
* Update integration tests for scripted metrics to always include reduce and combine
* Remove some old changes from previous branches
* Rearrange script presence checks to be earlier in build
* Change null check order in script builder for aggregated metrics; correct test scripts in IT
* Add breaking change details to PR
Today we reverse the initial order of the nested documents when we
index them in order to ensure that parent documents appear after
their children. This means that a query will always match nested documents
in the reverse order of their offsets in the source document.
Reversing all documents is not needed, so this change ensures that parent
documents appear after their children without modifying the initial order
in each nested level. This allows matching children in the order of their
appearance in the source document, which is a requirement to efficiently
implement #33587. Old indices created before this change will continue
to reverse the order of nested documents to ensure backward compatibility.
* Adds trace logging to IndicesRequestCache
This change adds trace level logging to `IndicesRequestCache` with the
primary aim of helping to identify the cause of the failures in
https://github.com/elastic/elasticsearch/issues/32827. The cache will
log at trace level when a cache hit or miss occurs, including the reader
version and the cache key. Note that this change adds a
`cacheKeyRenderer` which supplies a human readable String of the cache
key, since the actual cache key itself is a `BytesReference` containing
the wire protocol serialised form of the request.
Logging is also added for the case where a search timeout occurs and for
that reason the cache entry is invalidated.
* Adds comment to remind us to remove cacheKeyRenderer
In #28941 we changed the computation of cluster state task descriptions but
this introduced a bug in which we only log the empty descriptions (rather than
the non-empty ones). This change fixes that.
Optionals containing boxed primitive types are prohibitively costly because they
have two levels of boxing. For Optional<Integer> the analogous OptionalInt can be
used to avoid the boxing of the contained int value.
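For illustration:

```java
Optional<Integer> boxed = Optional.of(42);  // two allocations: the Optional and the Integer
OptionalInt unboxed = OptionalInt.of(42);   // one object; the int is stored inline
int value = unboxed.orElse(0);              // no unboxing on access
```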
This change fixes a bug in the cross fields mode of the `query_string`
query. The multi fields query builder must be reset before parsing
in order to clear the list of expanded fields coming from the previous text block.
Closes #34215
Mappings with completion type and multi-fields were not able to index array or
object formats on completion fields; only the string format was supported.
This is fixed by providing the multi-field parser with an externalValueContext holding the already parsed object.
closes #15115
This adds some methods to the `DateFormatter` interface, namely
* `withLocale()` to change the locale of a date formatter
* `getLocale()`
* `getZone()`
* `hashCode()`
* `equals()`
These methods will be needed for aggregations and mapping changes, where
zones and locales can be specified in the mapping or in search/aggs
parts of a search request.
When nested objects are present in the mappings, we add a filter in
queries to exclude them if there is no evidence that the query cannot
match in this space. In 6x we visit the query in order to find a mandatory
clause that can match root documents only. If we find one we can omit the
nested documents filter. Currently only `term` and `range` queries are checked;
this change adds support for the `terms` query to effectively remove the nested filter
if a mandatory `terms` clause targets a non-nested field.
Closes #34067
Mainly this fixes a warning by replacing the unchecked `new ActionListener`
with the checked `new ActionListener<Response>`, and it also fixes the line
length violations in this class.
This commit adds a check for "enabled" attribute change for types when
a RestPutMappingAction is received. A MappingException is thrown when
such a change is detected. Changes are prevented in both directions: "false -> true"
and "true -> false".
Closes #33566
#32281 added elasticsearch-shard to provide a bwc version of elasticsearch-translog for 6.x; elasticsearch-translog has to be removed for 7.0
Relates to #31389
The unfollow API changes a follower index into a regular index, so that it will accept write requests from clients.
For the unfollow API to work, index following needs to be stopped and the index needs to be closed.
Closes #33931
This change introduces the indexing optimization using sequence numbers
in the FollowingEngine. This optimization uses the max_seq_no_of_updates
which is tracked on the primary of the leader and replicated to replicas
and followers.
Relates #33656
This commit removes the use of ExecutableScript from watcher in favor of
custom script contexts for both watcher condition scripts and transform
scripts.
Fixes the equals and hashCode functions to ignore the order of aggregations to ensure equality after serialization
and deserialization. This ensures storing configs with aggregations works properly.
This also addresses a potential issue in caching when the same query contains aggregations but in
a different order: first it will not hit the cache, and second, cache objects which should be equal might end up twice in
the cache.
In order to be compatible with joda time, this adds an epoch seconds
formatter that is able to parse floating point values.
Joda time still parses such data but discards the fractional part,
whereas this formatter parses the whole value including the
milliseconds.
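A self-contained sketch of that parsing behavior (illustrative, not the actual formatter code):

```java
import java.math.BigDecimal;
import java.time.Instant;

// Parse epoch seconds with an optional fractional part, keeping the
// sub-second precision that joda time would discard.
static Instant parseEpochSeconds(String value) { // e.g. "1522332219.709"
    BigDecimal seconds = new BigDecimal(value);
    long whole = seconds.longValue();
    long nanos = seconds.subtract(BigDecimal.valueOf(whole))
            .movePointRight(9).longValueExact();
    return Instant.ofEpochSecond(whole, nanos);
}
```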
The synonym filters no longer need access to the AnalysisRegistry in their
constructors, so we can remove the special-case code and move them to the
common analysis module.
This commit means that synonyms are no longer available for `server` integration tests,
so several of these are either rewritten or migrated to the common analysis module
as rest-spec-api tests
* Make sure 'ignored' and 'routing' field types inherit from StringFieldType.
* Add tests for prefix and regexp queries.
* Support prefix and regexp queries on _index fields.
This commit adds the ability to plug in compilation of custom contexts
in mock script engine. This is needed for testing plugins which add
custom contexts like watcher.
Prior to this change when a pipeline processor called another
pipeline, only the stats for the first processor were recorded.
The stats for the subsequent pipelines were ignored. This change
properly accounts for pipelines regardless of whether they are the first
or subsequently called pipelines.
This change moves the state of the stats from the IngestService
to the pipeline itself. Cluster updates are safe since the pipelines
map is atomically swapped, and if a cluster update happens
while iterating over stats (now read directly from the pipeline)
a slightly stale view of stats may be shown.
Recently we introduced the settings cluster.remote to take the place of
search.remote for configuring remote cluster connections. We made this
change due to the fact that we have generalized the remote cluster
infrastructure to also be used within cross-cluster replication and not
only cross-cluster search. For backwards compatibility, when we made this
change, we allowed that cluster.remote would fallback to
search.remote. Alas, the initial change for this contained a bug for
handling the proxy and seeds settings. The bug for the seeds settings
arose because we were manually iterating over the concrete settings only
for cluster.remote seeds but not for search.remote seeds. This commit
addresses this by iterating over both cluster.remote seeds and
search.remote seeds. Additionally, when checking for existence of proxy
settings, we have to not only check cluster.remote proxy settings, but
also fallback to search.remote proxy settings. This commit addresses
both issues, and adds tests for these situations.
* Handle MatchNoDocsQuery in span query wrappers
This change adds a new SpanMatchNoDocsQuery query that replaces
MatchNoDocsQuery in the span query wrappers.
The `wildcard` query now returns MatchNoDocsQuery if the target field is not
in the mapping (#34093) so we need the equivalent span query in order to
be able to pass it to other span wrappers.
Closes #34105
EngineSearcher can be easily folded into Engine.Searcher which removes
a level of inheritance that is unnecessary for most of its subclasses.
This change folds it into Engine.Searcher and removes the dependency on
ReferenceManager.
* This should surface what errors are thrown on CI
and in org.elasticsearch.transport.RemoteClusterConnection.ConnectHandler#collectRemoteNodes
(the sequence of caught errors in the last catch block and moving on to the next seed node
seems to be the only path by which the errors logged in #33756 could come about)
* Relates #33756
This commit adds "engine is closed" as an expected failure message.
This change is due to #33967 in which we might access a closed engine on
promotion.
Relates #33967
This change is related to #33903 that ports the DocStats
simplification to the master branch. This change builds the docStats
in the ReadOnlyEngine from the last committed segment infos rather than
the reader.
Co-authored-by: Tanguy Leroux <tlrx.dev@gmail.com>
Although we allow indexing BigInteger and BigDecimal into a keyword
field, source filtering on these fields would fail,
as XContentBuilder was not able to serialize BigInteger and BigDecimal
to json.
This modifies XContentBuilder to handle BigInteger and
BigDecimal.
Closes #32395
This commit creates a DateMathParser interface, which is already
implemented for both joda and java time. While currently the java time
DateMathParser is not used, this change will allow a followup which will
create a DateMathParser from a DateFormatter, so the caller does not
need to know the internals of the DateFormatter they have.
Previously, unmapped aggs try to delegate reduction to a sibling agg that is
mapped. That delegated agg will run the reductions, and also
reduce any pipeline aggs. But because delegation comes before running
pipelines, the unmapped agg _also_ tries to run pipeline aggs.
This causes the pipeline to run twice, and potentially double its output
in buckets, which can create invalid JSON (e.g. the same key appearing multiple times)
and break when converting to maps.
This fixes the issue by sorting the list of aggregations ahead of time so that mapped
aggs appear first, meaning they preferentially lead the reduction. If all aggs
are unmapped, the first unmapped agg simply creates a new unmapped object
and returns that for the reduction.
This means that unmapped aggs no longer defer and there is no chance for
a secondary execution of pipelines (or other side effects caused by deferring
execution).
Closes #33514
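The heart of the fix looks roughly like this (a sketch; names approximate the internal aggregation API):

```java
// Mapped aggregations first, so one of them leads the reduce; if every
// agg is unmapped, the first unmapped one builds the empty result itself.
aggregations.sort(Comparator.comparingInt(agg -> agg.isMapped() ? 0 : 1));
InternalAggregation first = aggregations.get(0);
return first.reduce(aggregations, reduceContext);
```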
`SingleFieldsVisitor` is meant to load a single stored field but it
manages to be quite complex to reason about because it inherits from our
"basic" `FieldsVisitor` which is designed to load many fields. This
breaks that inheritance and adds logic to `SingleFieldsVisitor` so it can
be properly stand alone. While this amounts to more lines of code they
ought to be significantly easier to reason about.
This change cleans up "unused variable" warnings. There are several cases where we
most likely want to suppress the warnings (especially in the client documentation tests
where the snippets contain many unused variables). In a lot of cases the unused
variables can just be deleted though.
This commit removes the sysprop controlling whether ctx is in params for
update scripts and replaces it with use of the new ParameterMap, which
outputs a deprecation warning whenever params.ctx is used.
Today query parsers throw TooManyClauses exception when a query creates
too many clauses. However graph phrase queries do not respect this limit.
This change adds a protection against crazy expansions that can happen when
building a graph phrase query. This is a temporary copy of the fix available
in https://issues.apache.org/jira/browse/LUCENE-8479 but not merged yet.
This logic will be removed when we integrate the Lucene patch in a future
release.
* INGEST: Tests for Drop Processor
* UT for behavior of dropped callback
and drop processor
* Moved drop processor to `server`
project to enable this test
* Simple IT
* Relates #32278
Today `SearchAsyncActionTests#testFanOutAndCollect` uses a simple `HashMap` for
the `nodeToContextMap` variable, which is then accessed from multiple threads
without, apparently, explicit synchronisation. This provides an explanation for
the test failure identified in #29242 in which `.toString()` returns `"[]"`
just before `.isEmpty` returns `false`, without any concurrent modifications.
This change converts `nodeToContextMap` to a `newConcurrentMap()` so that this
cannot occur. It also fixes a race condition in the detection of double-calling
the subsequent search phase.
Closes #29242.
We start tracking max seq_no_of_updates on the primary in #33842. This
commit replicates that value from a primary to its replicas in replication
requests or the translog phase of peer-recovery.
With this change, we guarantee that the value of max_seq_no_of_updates
on a replica when any index/delete operation is performed is at least the
max_seq_no_of_updates on the primary when that operation was executed.
Relates #33656
We currently fallback to local indices whenever a remote cluster is not found, as there may still be indices / aliases with the same name. Such behaviour is lenient but needs to be kept for backwards compatibility. Clarified that in the code so we don't forget.
Relates to #26247
It mistakenly uses the Elasticsearch major version instead of the Lucene major
version. I noticed it when backporting, it is not noticeable on master because
the only two Lucene versions that are supported, 7 and 8, encode norms the same
way, unlike Lucene 6.
Today, TransportService uses System.currentTimeMillis() to get the current time
to report on things like timeouts, and enqueues lambdas for future execution.
However, in tests it is useful to be able to fake out the current time and to
see what all these enqueued lambdas are really for. This change alters the
situation so that we can obtain the time from the more easily-faked
ThreadPool#relativeTimeInMillis(), and implements some friendlier toString()
methods on the various Runnables so we can see what they are later.
As far as I can tell this guard against fragile analyzers is no longer relevant, since
we stopped setting special analyzers on numeric fields (3bf6f4). Instead of removing
the guard completely, I opted to keep a check for untokenized + unnormalized fields
to avoid going through the analysis process unnecessarily.
My motivation for simplifying this check is that I'd like to add support for
`split_queries_on_whitespace` to the new 'queryable object' fields. As it stands, I would
have to add a dedicated instanceof check for the new mapper, which is not optimal.
This commit introduces an AbstractSimpleSecurityTransportTestCase for
security transports. This class provides transport tests that are
specific for security transports. Additionally, it fixes the tests referenced in
#33285.
* TESTS: Make score Float#NaN when there is no max score
Fixes a test failure due to maxScore being set to Float#MIN_VALUE instead
of Float#NaN. In addition, the initial value for maxScore is set to
Float#NEGATIVE_INFINITY so that it is an illegal value.
Closes #33993
When executing a cross-cluster search, we need to search against all local indices (and no remote indices) in case no indices are specified. Also, if only remote indices are specified, no local indices will be queried. We previously added empty local indices whenever they were not present in the map of the grouped indices, then we would act differently later based on the extracted remote indices. Instead, we now add the empty array for local indices only in case we need to search all local indices; the entry for local indices is not added when local indices should not be searched. This way the grouped indices reflect reality and provide a better indication of what indices will be searched.
Settings validation in AutoQueueAdjustingExecutorBuilder always checked against
a default value, which means that we could never set a max queue size lower
than the default. This change adds tests and fixes this validation.
This PR is the first step to use seq_no to optimize indexing operations.
The idea is to track the max seq_no of either update or delete ops on a
primary, and transfer this information to replicas, and replicas use it
to optimize indexing plan for index operations (with assigned seq_no).
The max_seq_no_of_updates on primary is initialized once when a primary
finishes its local recovery or peer recovery in relocation or being
promoted. After that, the max_seq_no_of_updates is only advanced internally
inside an engine when processing update or delete operations.
Relates #33656
This change adds the OneStatementPerLineCheck to our checkstyle precommit
checks. This rule restricts the number of statements per line to one. The
reasoning behind this is that it is very difficult to read multiple statements on
one line. People seem to mostly use it in short lambdas and switch statements in
our code base, but just going through the changes already uncovered some actual
problems in randomization in test code, so I think it's worth it.
Today we don't store the auto-generated timestamp of append-only
operations in Lucene; we assign -1 to every index operation
constructed from LuceneChangesSnapshot. This looks innocent but it
generates duplicate documents on a replica if a retry append-only
arrives first via peer-recovery, then the original append-only arrives
via replication. Since the retry append-only (delivered via recovery)
does not have a timestamp, the replica will happily optimize the original
request while it should not.
This change transmits the max auto-generated timestamp from the primary
to replicas before translog phase in peer recovery. This timestamp will
prevent replicas from optimizing append-only requests if retry
counterparts have been processed.
Relates #33656
Relates #33222
Currently, assertSeqNos assumes that the cluster is stable at the end of
the test (i.e., no more shard movement). However, this assumption does
not always hold. In these cases, we can stop the assertion instead of
failing a test.
Closes #33704
If a shard was serving as a replica when another shard was promoted to
primary, then its Lucene index was reset to the global checkpoint.
However, if the new primary fails before the primary/replica resync
completes and we are now being promoted, we have to restore the reverted
operations by replaying the translog to avoid losing acknowledged writes.
Relates #33473
Relates #32867
It's possible for the set "seqNos" to contain only the "unFinishedSeq"
in the testConcurrentReplica test. If this is the case, the call
`randomValueOtherThan` won't make any progress because the predicate
will never be false.
This commit removes this expectation because it's incorrect and it's no
longer needed as we have a dedicated test to verify the contains method.
Relates #33871
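The failing pattern, for reference:

```java
// With seqNos == {unFinishedSeq} the supplier can never produce a
// different value, so randomValueOtherThan spins forever.
long seqNo = randomValueOtherThan(unFinishedSeq, () -> randomFrom(seqNos));
```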
Drops `Settings` from some of the methods to lookup loggers and
deprecates another logger lookup that takes `Settings` because
`Settings` is no longer required to build a logger.
* ingest: support simulate with verbose for pipeline processor
This change better supports the use of simulate?verbose with the
pipeline processor. Prior to this change any pipeline processors
executed with simulate?verbose would not show all intermediate
processors for the inner pipelines.
This change also moves the PipelineProcessor and TrackingResultProcessor
classes to enable instance checks and to avoid overly public classes.
It also updates the error message for when cycles are detected
in pipelines calling other pipelines.
Today all searches happen on the search threadpool which is the correct
behavior in almost any case. Yet, there are exceptions where, for instance,
searches should be passed through a single-thread
thread-pool to reduce the impact on a node. This change adds an index-private setting that allows marking an index as throttled for searches, and forks off all non-stats searcher access to this thread-pool for indices that are marked as `index.search.throttled`.
Transient settings override persistent settings, but in fact all of the tests
that run as part of `:server:test` and `:server:integTest` will pass if the
precedence is changed to be the other way round. This change adds a test that
verifies the precedence is as documented.
With this commit we clear the fielddata cache per field as it is
supposed to be. Previously we retrieved the proper field from the cache
but then cleared the entire cache anyway.
Closes #33798
Relates #33807
This change adds "contains" method to LocalCheckpointTracker.
One of the use cases is to check if a given operation has been processed
in an engine or not by looking up its seq_no in LocalCheckpointTracker.
Relates #33656
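A sketch of the intended use (surrounding engine code assumed):

```java
// Skip an operation that this engine has already processed; it is
// guaranteed to reach the replicas via resync or peer-recovery anyway.
if (localCheckpointTracker.contains(operation.seqNo())) {
    return; // already processed
}
process(operation); // assumed helper
```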
Using index settings for ILM state is fragile and exposes too much
information that doesn't need to be exposed. Using custom index metadata
is more resilient and allows more controlled access to internal
information.
As part of these changes, moves away from using defaults for ILM-related
values, in favor of using null values to clearly indicate that the value is not
present.
Changes the default of the `node.name` setting to the hostname of the
machine on which Elasticsearch is running. Previously it was the first 8
characters of the node id. This had the advantage of producing a unique
name even when the node name isn't configured but the disadvantage of
being unrecognizable and not being available until fairly late in the
startup process. Of particular interest is that it isn't available until
after logging is configured. This forces us to use a volatile read
whenever we add the node name to the log.
Using the hostname is available immediately on startup and is generally
recognizable but has the disadvantage of not being unique when run on
machines that don't set their hostname or when multiple elasticsearch
processes are run on the same host. I believe that, taken together, it
is better to default to the hostname.
1. Running multiple copies of Elasticsearch on the same node is a fairly
advanced feature. We do it all the time as part of the elasticsearch build
for testing but we make sure to set the node name then.
2. That the node.name defaults to some flavor of "localhost" on an
unconfigured box feels like it isn't going to come up too much in
production. I expect most production deployments to at least set the
hostname.
As a bonus, production deployments need no longer set the node name in
most cases. At least in my experience most folks set it to the hostname
anyway.
By moving CompletionStats into the engine we can easily cache the stats for
read-only engines if necessary. It also moves the responsibility out of IndexShard
which has quite some complexity already.
Relates to #33835
Today if we fetch common stats from a shard we might get a partial response
if the shard is closed while we fetch the stats. This causes hard to track and
reproduce NPEs. This change streamlines null checking to ensure we only render
stats we actually received.
Wraps all lines in our test framework at 140 characters because that is
our standard line length and removes all of the checkstyle suppressions
for the test framework.
Drops most of `ModuleTestCase` because it isn't used and we're moving
away from using guice in the way that it wants to test anyway. Also
switches a few classes that extend it but don't use it to extend
`ESTestCase` instead.
We currently special-case SynonymFilterFactory and SynonymGraphFilterFactory, which need to
know their predecessors in the analysis chain in order to correctly analyze their synonym lists. This
special-casing doesn't work with Referring filter factories, such as the Multiplexer or Conditional
filters. We also have a number of filters (eg the Multiplexer) that will break synonyms when they
appear before them in a chain, because they produce multiple tokens at the same position.
This commit adds two methods to the TokenFilterFactory interface.
* `getChainAwareTokenFilterFactory()` allows a filter factory to rewrite itself against its preceding
filter chain, or to resolve references to other filters. It replaces `ReferringFilterFactory` and
`CustomAnalyzerProvider.checkAndApplySynonymFilter`, and by default returns `this`.
* `getSynonymFilter()` defines whether or not a filter should be applied when building a synonym
list `Analyzer`. By default it returns `true`.
Fixes #33609
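A sketch of the resulting interface (signatures simplified from the description above; not copied from the code):

```java
public interface TokenFilterFactory {
    TokenStream create(TokenStream tokenStream);

    // Rewrite this factory against the preceding filter chain, or resolve
    // references to other named filters; by default a factory is unchanged.
    default TokenFilterFactory getChainAwareTokenFilterFactory(
            TokenizerFactory tokenizer,
            List<CharFilterFactory> charFilters,
            List<TokenFilterFactory> previousTokenFilters,
            Function<String, TokenFilterFactory> allFilters) {
        return this;
    }

    // Whether this filter should be applied when building the synonym list
    // Analyzer; true by default.
    default boolean getSynonymFilter() {
        return true;
    }
}
```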
This test occasionally fails in `testCollectSearchShards` waiting on what seems
to be a search request to a remote cluster for one second. Given that the test
fails here very rarely I suspect maybe one second is very rarely not enough so
we could fix it by increasing the max wait time slightly.
Closes #33852
By moving DocStats into the engine we can easily cache the stats for
read-only engines if necessary. It also moves the responsibility out of IndexShard
which has quite some complexity already.
The fix in #33757 introduces some workaround since FilterCodecReader didn't
support unwrapping. This cuts over to a more elegant fix to access the reader's
segment infos.
This commit changes the random_score function to use the global docID of the document
rather than the segment docID to generate random scores. As a result documents that have
the same segment docID within the shard will generate different scores.
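Illustratively (the hash mixer here is a stand-in, not the actual function):

```java
// Hash the shard-wide doc id (docBase + segment-relative id) so that equal
// segment ids in different segments produce different scores.
int globalDocId = readerContext.docBase + segmentDocId;
int hash = mix(globalDocId ^ saltedSeed);               // stand-in mixer
float score = (hash & 0x00FFFFFF) / (float) (1 << 24);  // uniform in [0, 1)
```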
Add minimal sanity checks to custom/scripted similarities.
Lucene 8 introduced more constraints on similarities, in particular:
- scores must not be negative,
- scores must not decrease when term freq increases,
- scores must not increase when norm (interpreted as an unsigned long)
increases.
We can't check every single case, but could at least run some sanity checks.
Relates #33309
* Profiler: Don’t profile NEXTDOC for ConstantScoreQuery.
A ConstantScore query will return the iterator of its inner query.
However, when profiling, the constant score query is wrapped separately
from its inner query, which distorts the times emitted by the profiler.
Return the iterator directly in such a case.
Closes #23430
The change in #27500 introduces this regression that causes `_get` and `_term_vector`
actions to run on the network thread if the realtime flag is set.
This fixes the issue by delegating to the super method forking on the corresponding threadpool.
We use similar / same concepts in SearchTransportService and HandledTransportAction, but both
duplicate the efforts with slightly different implementation details. This streamlines
sending responses / exceptions back to a channel in an ActionListener with appropriate logging.
In #33241 we moved the file-based discovery functionality to core
Elasticsearch, but preserved the `discovery-file` plugin, and support for the
existing location of the `unicast_hosts.txt` file, for BWC reasons. This commit
completes the removal of this plugin.
New plugin for annotated_text field type.
Largely a copy of `text` field type but adds ability to include markdown-like syntax in the text.
The “AnnotatedText” class parses text+markup and converts into plain text and AnnotationTokens.
The annotation token values are injected unchanged alongside the regular text tokens to provide a
form of additional indexed overlay useful in positional searches and highlighting.
Annotated_text fields do not support fielddata as we want to phase this out.
Also includes a new "annotated" highlighter type that retains annotations and merges in search
hits as additional annotation markup.
Closes #29467
* MINOR: Drop Redundant Ctx. Check in ScriptService
* This check is completely redundant, the expression script
engine will throw anyway (and with a similar message) for
those contexts that it cannot compile. Moreover, the update context
is not the only context that is not suported by the expression engine
at this point so handling the update context separately here makes
no sense.
* Same fix idea as in #10666a4 to prevent background
threads trying to reconnect after the tests are done from
throwing `ExecutionCancelledException` and breaking the test
* Closes #30714
Allows to skip shard balancing when the cluster_concurrent_rebalance threshold is already reached, which cuts down the time spent in the rebalance method of BalancedShardsAllocator.
Currently `IndexMetadata#getCustomData(...)` wraps the custom metadata
in an unmodifiable map, but in case there is no entry for the specified
key then a NPE is thrown by Collections.unmodifiableMap(...). This is not
ideal in case callers would like to throw an exception with a specific message
(like in the case of ccr, to indicate that the follow index was not created
by the create_and_follow api and is therefore incompatible as a follow index).
I think making `DiffableStringMap` itself immutable is better than just wrapping
custom metadata with `Collections.unmodifiableMap(...)` in all methods that access it.
Also removed the `equals()`, `hashCode()` and `toString()` methods of
`DiffableStringMap`, because `AbstractMap` already implements these methods.
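The failure mode, for reference:

```java
Map<String, String> custom = customData.get(key);  // null when the key is absent
return Collections.unmodifiableMap(custom);        // throws a bare NPE here
```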
This commit switches the joda time backcompat in scripting to use
augmentation over ZonedDateTime. The augmentation methods provide
compatibility with the missing methods between joda's DateTime and
java's ZonedDateTime. Due to getDayOfWeek returning an enum in the java
API, ZonedDateTime is wrapped so that the method can return int like the
joda time does. The java time api version is renamed to
getDayOfWeekEnum, which will be kept through 7.x for compatibility while
users switch back to getDayOfWeek once joda compatibility is removed.
This commit adds a guard around the rare case that no documents in the
10 iterations actually have any values, thus making the warning check
incorrect.
closes #32779
This commit is a cleanup of the assertions in global checkpoint
listeners, simplifying them and adding some messages to them in case the
assertions trip.
When we add a global checkpoint listener, it is also carries along with
it a value that it thinks is the current global checkpoint. This value
can be above the actual global checkpoint on a shard if the listener
knows the global checkpoint from another shard copy (e.g., the primary),
and the current shard copy is lagging behind. Today we notify the
listener whenever the global checkpoint advances, regardless if it goes
above the current global checkpoint known to the listener. This commit
reworks this implementation. Rather than thinking of the value
associated with the listener as the current global checkpoint known to
the listener, we think of it as the value that the listener is waiting
for the global checkpoint to advance to (inclusive). Now instead of
notifying all waiting listeners when the global checkpoint advances, we
only notify those that are waiting for a value not larger than the
actual global checkpoint that we advanced to.
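A sketch of the reworked notification (collection shape assumed):

```java
// Each listener is stored with the checkpoint it waits for; on an advance,
// only listeners whose awaited value has been reached are notified.
synchronized void globalCheckpointUpdated(long globalCheckpoint) {
    listeners.entrySet().removeIf(entry -> {
        long waitingFor = entry.getValue();
        if (waitingFor <= globalCheckpoint) {
            entry.getKey().accept(globalCheckpoint, null); // notified, no failure
            return true;
        }
        return false;
    });
}
```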
This is something that we were already doing when sorting by field, which is
now also done when sorting by score. As-is this change will speed up top-k
`term` queries. This could work for `match_all` queries as well when we
implement the `setMinCompetitiveScore` API on their Scorer.
The existing approach used date formatters when a format based string
like `date_time||epoch_millis` was used, instead of the custom code.
In order to properly solve this, a new interface called
`DateFormatter` has been added, which now can be implemented for custom
formatters. Currently there are two implementations, one using java time
and one doing the epoch_millis formatter, which simply parses a number
and then converts it to a date in UTC timezone.
The DateFormatter interface now also has a method to retrieve the name
of the formatter pattern, which is needed for mapping changes anyway.
The existing `CompoundDateTimeFormatter` class has been removed, the
name was not really nice anyway.
One more minor change is the fact that the new java-time-based
FormatDateFormatter does not try to parse the date with its printer
implementation first (which might be a strict one and fail); instead a
printer can now be specified in addition. This saves one potential
failure/exception when parsing less strict dates.
If only a printer is specified, the printer will also be used as a
parser.
This race can occur if the latch from the listener notifies the test
thread and the test thread races ahead before the scheduler thread has a
chance to emit the log message. This commit fixes this test by not
counting down the latch until after the log message we are going to
assert on has been emitted.
We used TimeoutException here but that's not serializable. This commit
switches to a serializable exception so that we can test for the
exception type on the remote side.
This change fixes a bug introduced in 6.3 that prevented fields with an explicit
similarity from being updated. It also adds a test that checks this case for similarities,
but also for analyzers, since they could suffer from the same problem.
Closes #33611
Today we use a special unicast hosts provider, the `MockUncasedHostsProvider`,
in many integration tests, to deal with the dynamic nature of the allocation of
ports to nodes. However #33241 allows us to use file-based discovery to achieve
the same goal, so the special test-only `MockUncasedHostsProvider` is no longer
required.
This change removes `MockUncasedHostsProvider` and replaces it with file-based
discovery in tests based on `EsIntegTestCase`.
We fail to notify the resync listener if the resync replication hits a
shard unavailable exception. Moreover, we no longer need to swallow
these unavailable exceptions.
Relates #28571
Closes #33613
This field does not need to be volatile because all accesses are done
under a lock. This commit removes the unnecessary volatile modifier from
this field.
The remote cluster settings search.remote.* have been renamed to
cluster.remote.* and are automatically upgraded in the cluster state on
gateway recovery, and on put. This commit adds a note to the migration
docs for these changes.
This change adds a `_source` only snapshot repository that allows to wrap
any existing repository as a _backend_ to snapshot only the `_source` part
including live docs markers. Snapshots taken with the `source` repository
won't include any indices, doc-values or points. The snapshot will be reduced in size and
functionality such that it requires full re-indexing after it's successfully restored.
The restore process copies the `_source` data locally and starts a special shard and engine
to allow `match_all` scrolls and searches. Any other query or get call will fail with an unsupported operation exception. The restored index is also marked as read-only.
This feature aims mainly at disaster recovery use-cases where snapshot size is
a concern or where time to restore is less of an issue.
**NOTE**: The snapshot produced by this repository is still a valid lucene index. This change doesn't allow for longer retention policies, which are out of scope for this change.
In cross-cluster replication, we will use global checkpoint listeners to
long poll for updates to a shard. However, we do not want these polls to
wait indefinitely as it could be difficult to discern if the listener is
still waiting for updates versus something has gone horribly wrong and
cross-cluster replication is stuck. Instead, we want these listeners to
timeout after some period (for example, one minute) so that they are
notified and we can update status on the following side that
cross-cluster replication is still active. After this, we will
immediately enter back into a poll mode.
To do this, we need the ability to associate a timeout with a global
checkpoint listener. This commit adds this capability.
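A minimal sketch of the timeout wiring using plain JDK scheduling (the real implementation differs):

```java
// Fail the listener if the checkpoint does not arrive in time; the task
// must be cancelled if the listener is notified first.
ScheduledFuture<?> timeoutTask = scheduler.schedule(
        () -> notifyListener(listener,
                new TimeoutException("timed out waiting for global checkpoint")),
        timeout.millis(), TimeUnit.MILLISECONDS);
```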
This change clarifies the documentation of the context completion suggester
regarding filtering and boosting with contexts.
Unlike the suggester v1, filtering on multiple contexts
works as a disjunction: a suggestion matches if it contains at least one of the provided
context values, and boosting selects the maximum score among the matching contexts.
This commit also adapts an old test that was written for the v1 suggester and commented out
for version 2 because the behavior changed.
This commit adds settings upgraders for the search.remote.* settings
that can be in the cluster state to automatically upgrade these settings
to cluster.remote.*. Because of the infrastructure that we have here,
these settings can be upgraded when recovering the cluster state, but
also when a user tries to make a dynamic update for these settings.
This commit adds test coverage for two cases not previously covered by
the existing testing. Namely, we add coverage ensuring that the executor
is used to notify listeners being added that are immediately notified
because the shard is closed or because the global checkpoint is already
beyond what the listener knows.
When a replica starts following a newly promoted primary, it may have
some operations which don't exist on the new primary. Thus we need to
throw away those operations to align the replica with the new primary. This can
be done by first resetting an engine from the safe commit, then replaying
the local translog up to the global checkpoint.
Relates #32867
* Improves doc values format deprecation message
This changes the deprecation message when doc values fields do not
supply a format, from logging a deprecation warning for each offending
field individually to logging a single message which lists all
offending fields.
Closes #33572
* Updates YAML test with new deprecation message
Also adds a test to ensure multiple deprecation warnings are collated
into one message
* Condenses collection of fields without format check
Moves the collection of fields that don't have a format to a separate
loop and moves the logging of the deprecation warning to be next to it,
at the expense of looping through the field list twice
* fixes typo
* Fixes test
Currently we keep track of how many bytes are currently being written to disk
in an AtomicLong within InternalEngine, updating it on refresh. The IndexWriter
has its own accounting for this, and exposes it via a getFlushingBytes method
in the latest lucene 8 snapshot. This commit removes the InternalEngine tracking
in favour of just using the IndexWriter method.
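After the change the engine can simply ask the writer:

```java
// Lucene 8 API: bytes currently being written to disk by IndexWriter's
// flushing threads; replaces the engine's own AtomicLong bookkeeping.
long writingBytes = indexWriter.getFlushingBytes();
```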
Upgrading list settings is broken because of the conversion that we do
to strings, and then when we try to put back the upgraded value we do
not know that it is a representation of a list. This commit addresses
this by adding special handling for list settings.
This change adds an engine implementation that opens a reader on an
existing index but doesn't permit any refreshes or modifications
to the index.
Relates to #32867
Relates to #32844
When we see a settings value, it could be a list. Yet this should only
happen if the underlying setting type is a list setting type. This
commit adds validation that when we get a setting value that is a list,
that the setting that we are getting is a list setting. And similarly,
if we get a value for a list setting, the underlying value should be a
list.
This change copies and validates the soft-deletes setting during resize.
If the source enables soft-deletes, the target must also enable it.
Closes #33321
* LeafCollector.setScorer() now takes a Scorable
* Scorers may not have null Weights
* IndexWriter.getFlushingBytes() reports how much memory is being used by IW threads writing to disk
Today the FilterRoutingTests take the belt-and-braces approach of excluding
some node attribute values and including some others. This means that we don't
really test that both inclusion and exclusion work correctly: as long as one of
them works as expected then the test will pass. This change improves these
tests by only using one approach at once, demonstrating that both do indeed
work, and adds tests for various other scenarios too.
In some cases we want to deprecate a setting, and then automatically
upgrade uses of that setting to a replacement setting. This commit adds
infrastructure for this so that we can upgrade settings when recovering
the cluster state, as well as when such settings are dynamically applied
on cluster update settings requests. This commit only focuses on cluster
settings, index settings can build on this infrastructure in a
follow-up.
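A sketch of what such an upgrader can look like (interface shape assumed from this description; the setting constant is hypothetical):

```java
// Interface shape assumed, not copied from the code.
public interface SettingUpgrader<T> {
    Setting<T> getSetting();                                  // the deprecated setting
    default String getKey(String key) { return key; }        // map old key to new key
    default String getValue(String value) { return value; }  // values carry over
}

// Hypothetical example: upgrade a search.remote.* key to cluster.remote.*.
SettingUpgrader<String> proxyUpgrader = new SettingUpgrader<String>() {
    @Override
    public Setting<String> getSetting() {
        return SEARCH_REMOTE_CLUSTERS_PROXY; // assumed setting constant
    }

    @Override
    public String getKey(String key) {
        return key.replaceFirst("^search\\.remote\\.", "cluster.remote.");
    }
};
```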
This commit ensures that we bootstrap a new history_uuid when force
allocating a stale primary. A stale primary should never be the source
of an operation-based recovery to another shard which exists before the
forced-allocation.
Closes #26712
* CompoundProcessor is in the ingest package now
-> resolved
* Java generics don't offer type checking so nothing
can be done here -> removed TODO and test
* #16019 was closed and not acted on
-> todo can go away