This commit introduces a settings version to index metadata. This value is
monotonically increasing and is updated on settings updates. This will
be useful in cross-cluster replication so that we can request settings
updates from the leader only when there is a settings update.
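As a rough sketch of the idea (the class and field names here are
illustrative stand-ins, not the actual IndexMetaData code):

    import java.util.Collections;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical stand-in for per-index metadata.
    final class IndexMetaDataSketch {
        private final AtomicLong settingsVersion = new AtomicLong(1);
        private volatile Map<String, String> settings = Collections.emptyMap();

        // Every applied settings update bumps the version exactly once, so a
        // follower only needs to compare versions to know whether to refetch.
        void updateSettings(Map<String, String> newSettings) {
            this.settings = newSettings;
            settingsVersion.incrementAndGet();
        }

        long getSettingsVersion() {
            return settingsVersion.get();
        }
    }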
Today, when submitting an update settings request to update the number of
replicas with a wildcard that does not match any indices while
allow_no_indices is set to true, the request ends up being interpreted as
updating the number of replicas for all indices. That is, consider the
following sequence:
PUT /test-index
{
  "settings": {
    "index.number_of_replicas": 0
  }
}

PUT /non-existent-*/_settings?expand_wildcards=open&allow_no_indices=true
{
  "settings": {
    "index.number_of_replicas": 1
  }
}

GET /test-index/_settings
The latter will show that the number of replicas on test-index is now
one. This is surprising, and should be considered a bug.
The underlying problem here is that the underlying methods used to update
the routing table and the metadata treat an empty list of indices as
meaning all indices. This commit removes that assumption. Tests that
relied on this behavior have been changed to no longer rely on it.
A test for this situation is added in UpdateNumberOfReplicasIT.
This removes another two methods from `AbstractComponent`. One isn't
used at all and another is only used in a single class in watcher. I've
moved the method that watcher uses into the single class that uses it.
This commit handles cases when testing withLocale and withZone where the zone
or locale in question is the same as the special base case. This can
happen sometimes since the locale and zone ids are randomized.
Empty values on keyword fields are filtered by the `map` execution mode
of the `terms` aggregation. This commit restores them as valid buckets.
Closes #34434
This commit removes randomization of locale for DateFormatter equals
tests, instead using explicit locales. The test framework already
randomizes locales, so the random choice of the second locale can
sometimes be equal to the already chosen locale. Randomization also does
not provide any extra protection, as the equality of DateFormatter does
not implement equality of the locales itself.
Closes #34337
ListenableFuture may run a listener on the same thread that called the
addListener method or it may execute on another thread after the future
has completed. Whenever the ListenableFuture stores the listener for
execution later, it should preserve the thread context which is what
this change does.
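A minimal sketch of the idea, using a plain ThreadLocal as a stand-in for
the Elasticsearch ThreadContext (all names here are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    final class ListenableFutureSketch<T> {
        // Stand-in for the headers carried by the real ThreadContext.
        static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "");

        private final List<Runnable> listeners = new ArrayList<>();
        private boolean done;
        private T value;

        synchronized void addListener(Runnable listener) {
            if (done) {
                listener.run(); // runs on the calling thread, context is already in place
            } else {
                // Capture the caller's context now and restore it when the listener
                // eventually runs on whichever thread completes the future.
                final String captured = CONTEXT.get();
                listeners.add(() -> {
                    final String previous = CONTEXT.get();
                    CONTEXT.set(captured);
                    try {
                        listener.run();
                    } finally {
                        CONTEXT.set(previous);
                    }
                });
            }
        }

        synchronized void onResponse(T response) {
            value = response;
            done = true;
            listeners.forEach(Runnable::run);
            listeners.clear();
        }
    }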
Today we rewrite the operations from the leader with the term of the
following primary because the follower should own its history. The
problem is that a newly promoted primary may re-assign its term to
operations which were replicated to replicas before by the previous
primary. If this happens, some operations with the same seq_no may be
assigned different terms. This is not good for future optimistic
locking using a combination of seq_no and term.
This change ensures that the primary of a follower only processes an
operation if that operation was not processed before. The skipped
operations are guaranteed to be delivered to replicas via either
primary-replica resync or peer-recovery. However, the primary must not
acknowledge until the global checkpoint is at least the highest seqno of
all skipped ops (i.e., they all have been processed on every replica).
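Very roughly, and with entirely made-up names, the acknowledgement rule
looks like this:

    // Illustrative sketch only; the real FollowingEngine logic is more involved.
    final class FollowerPrimarySketch {
        private long maxSeqNoOfAppliedOps = -1;
        private long maxSeqNoOfSkippedOps = -1;

        void applyOperation(long seqNo, Runnable indexOp) {
            if (alreadyProcessed(seqNo)) {
                // Processed before (e.g. by the previous primary); keep the existing
                // term instead of re-assigning a new one, and just record the skip.
                maxSeqNoOfSkippedOps = Math.max(maxSeqNoOfSkippedOps, seqNo);
            } else {
                indexOp.run();
                maxSeqNoOfAppliedOps = Math.max(maxSeqNoOfAppliedOps, seqNo);
            }
        }

        // Ack only once every skipped op is known to be on all replicas, i.e. the
        // global checkpoint has advanced to at least the highest skipped seq_no.
        boolean canAcknowledge(long globalCheckpoint) {
            return globalCheckpoint >= maxSeqNoOfSkippedOps;
        }

        private boolean alreadyProcessed(long seqNo) {
            return seqNo <= maxSeqNoOfAppliedOps; // placeholder for a real lookup
        }
    }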
Relates #31751
Relates #31113
Questions on how to work with `ActionPlugin#getRestHandlerWrapper()`
come up in the discuss forums all the time. This change adds an example
to the javadoc showing how this method can be used.
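For reference, a wrapper of this kind might look roughly like the following
(assuming the `UnaryOperator<RestHandler>` signature of this method; the
header check is purely hypothetical):

    import java.util.function.UnaryOperator;

    import org.elasticsearch.client.node.NodeClient;
    import org.elasticsearch.common.util.concurrent.ThreadContext;
    import org.elasticsearch.rest.RestChannel;
    import org.elasticsearch.rest.RestHandler;
    import org.elasticsearch.rest.RestRequest;

    // Inside a class implementing ActionPlugin:
    @Override
    public UnaryOperator<RestHandler> getRestHandlerWrapper(ThreadContext threadContext) {
        return originalHandler -> (RestRequest request, RestChannel channel, NodeClient client) -> {
            // e.g. inspect a (hypothetical) header before delegating to the real handler
            if (request.header("X-Trace-Id") != null) {
                // ... record it, enrich the thread context, reject the request, etc.
            }
            originalHandler.handleRequest(request, channel, client);
        };
    }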
ES is scanning for dangling indices on every cluster state update. For this, it lists the subfolders of
the indices directory to determine which extra index directories exist on the node where there's no
corresponding index in the cluster state. These are potential targets for dangling index import. On
certain machine types, and with a large number of indices, this subfolder listing can be horribly slow.
This means that every cluster state update will be slowed down by potentially hundreds of
milliseconds. One of the reasons for this poor performance is that Files.isDirectory() is a relatively
expensive call on some OS and JDK versions. There is no need though to do all these isDirectory
calls for folders which we know we are going to discard anyhow in the next step of the dangling
indices logic. This commit allows adding an exclusion predicate to the availableIndexFolders
methods which can dramatically speed up this method when scanning for dangling indices.
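The shape of the change, in a plain java.nio sketch (the real NodeEnvironment
method and its exact signature may differ):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    final class IndexFolderLister {
        // Skip the Files.isDirectory() call entirely for folder names the caller
        // already knows it will discard (e.g. indices already in the cluster state).
        static List<Path> availableIndexFolders(Path indicesDir, Predicate<String> excludeFolderName) throws IOException {
            List<Path> result = new ArrayList<>();
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesDir)) {
                for (Path candidate : stream) {
                    if (excludeFolderName.test(candidate.getFileName().toString())) {
                        continue; // cheap string check, no filesystem metadata call
                    }
                    if (Files.isDirectory(candidate)) {
                        result.add(candidate);
                    }
                }
            }
            return result;
        }
    }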
Since all calls to `ESLoggerFactory` outside of the logging package were
deprecated, it seemed like it'd simplify things to migrate all of the
deprecated calls and declare `ESLoggerFactory` to be package private.
This does that.
With this commit we restore the previous behavior in
`BigArraysTests#testMaxSizeExceededOnResize` but lower the sizes that
are tested to the range between 256 bytes to 16 kB so the test does not
produce a whole lot of garbage.
The previous attempt to reduce the amount of garbage produced by that
test was to properly size the array initially but it failed to account
for object alignment which led to test failures in some cases. While it
would be possible to account for object alignment, we would need to open
up BigArrays or directly use the underlying Lucene API which would
require us to allocate an array upfront only to find its size (incl.
object alignment).
Instead we have fixed this issue by conservatively sizing the array
initially (so the initial allocation will never trip the circuit
breaker) and reduce garbage by reducing the circuit breaker's upper
bound as described previously.
Closes #33750
Relates #34325
This changes the delete job API by adding
the choice to delete a job asynchronously.
The commit adds a `wait_for_completion` parameter
to the delete job request. When set to `false`,
the action returns immediately and the response
contains the task id.
This also changes the handling of subsequent
delete requests for a job that is already being
deleted. It now uses the task framework to check
if the job is being deleted instead of the cluster
state. This is beneficial because it will keep
working once the job configs are moved out of the
cluster state and into an index. Also, force delete
requests that are waiting for the job to be deleted
will not proceed with the deletion if the first task
fails. This will prevent overloading the cluster. Instead,
the failure is communicated better via notifications
so that the user may retry.
Finally, this makes the `deleting` property of the job
visible (it was also renamed from `deleted`). This allows
a client to render a deleting job differently.
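For example, using the low-level REST client (the endpoint path shown is the
6.x-era `_xpack` one and `my-job` is a placeholder):

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class DeleteJobAsyncExample {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
                Request request = new Request("DELETE", "/_xpack/ml/anomaly_detectors/my-job");
                // Return immediately with a task id instead of waiting for the deletion to finish.
                request.addParameter("wait_for_completion", "false");
                Response response = client.performRequest(request);
                System.out.println(EntityUtils.toString(response.getEntity())); // e.g. {"task":"..."}
            }
        }
    }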
Closes #32836
The `status` part of the tasks API reflects the internal status of a
running task. In general, we do not make backwards breaking changes to
the `status` but because it is internal we reserve the right to do so. I
suspect we will very rarely exercise that right but it is important
that we have it so we're not boxed into any particular implementation
for a request.
In some sense this is policy making by documentation change. In another
it is clarification of the way we've always thought of this field.
I also reflect the documentation change into the Javadoc in a few
places. There I acknowledge Kibana's "special relationship" with
Elasticsearch. Kibana parses `_reindex`'s `status` field and, because
we're friends with those folks, we should talk to them before we make
backwards breaking changes to it. We *want* to be friends with everyone
but there is only so much time in the day and we don't *want* to make
backwards breaking changes to `status` at all anyway. So we hope that
the breaking changes documentation will be enough for other folks.
Relates to #34245.
* SCRIPTING: Add Expr. Compile for TermSetQuery Ctx.
* Follow up to #33602 adding the ability to compile TermsSetQuery
scripts with the expressions engine in the same way we support
SearchScript in Expressions
* Duplicated the code here for now to make the change less complex;
the only difference to SearchScript is that `_score` and `_value` are not handled for TermsSetQuery
* remove redundant check
Drops the last logging constructor that takes `Settings` because it is
no longer needed.
Watcher goes through a lot of effort to pass `Settings` to `Logger`
constructors and dropping `Settings` from all of those calls allowed us
to remove quite a bit of log-based ceremony from watcher.
Today we use the version of a DirectoryReader as a component of the key
of IndicesRequestCache. This usage is perfectly fine since the version
is advanced every time a new change is made to the IndexWriter. In other
words, two DirectoryReaders with the same version should have the same
content. However, this invariant is only guaranteed in the context of a
single IndexWriter because the version is reset to the committed version
value when IndexWriter is re-opened.
Since #33473, each IndexShard may have more than one IndexWriter, and
using the version of a DirectoryReader as a part of the cache key can
cause IndicesRequestCache to return stale cached values. For example, in
#27650, we rollback the engine (i.e., re-open IndexWriter), index new
documents, refresh, then make a count request, but the search layer
mistakenly returns the count of the DirectoryReader of the previous
IndexWriter because the current DirectoryReader has the same version as
the old DirectoryReader even though their documents are different. This is
possible because these two readers come from different IndexWriters.
This commit replaces the version with the reader cache key of
IndexReader as a component of the cache key of IndicesRequestCache.
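In Lucene terms, the stable per-reader key comes from the reader's cache
helper rather than its version; a small sketch of the difference (assuming
this is essentially what now goes into the cache key):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.RAMDirectory;

    public class ReaderCacheKeyExample {
        public static void main(String[] args) throws Exception {
            RAMDirectory dir = new RAMDirectory();
            try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                writer.commit();
            }
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                // The version can repeat across readers obtained from different IndexWriters...
                long version = reader.getVersion();
                // ...whereas the cache key is unique to this reader instance.
                IndexReader.CacheKey cacheKey = reader.getReaderCacheHelper().getKey();
                System.out.println(version + " / " + cacheKey);
            }
        }
    }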
Closes #27650
Relates #33473
This commit adds support for early terminating the collection of a leaf
in the min/max aggregator. If the query matches all documents, the min and max values
for a numeric field can be retrieved efficiently from the points reader.
This change applies this optimization when possible.
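The Lucene points API already exposes the per-field minimum and maximum
packed values, which is what makes this cheap; a hedged sketch for a
long-valued field:

    import java.io.IOException;

    import org.apache.lucene.document.LongPoint;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.PointValues;

    final class PointsMinMax {
        // Illustrative only: read min/max of a long point field without visiting
        // documents. Returns null if the field has no indexed point values.
        static long[] minMax(IndexReader reader, String field) throws IOException {
            byte[] min = PointValues.getMinPackedValue(reader, field);
            byte[] max = PointValues.getMaxPackedValue(reader, field);
            if (min == null || max == null) {
                return null;
            }
            return new long[] { LongPoint.decodeDimension(min, 0), LongPoint.decodeDimension(max, 0) };
        }
    }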
* Make text message not required in constructor for slack
* Remove unnecessary comments in test file
* Throw exception when reduce or combine is not provided; update tests
* Update integration tests for scripted metrics to always include reduce and combine
* Remove some old changes from previous branches
* Rearrange script presence checks to be earlier in build
* Change null check order in script builder for aggregated metrics; correct test scripts in IT
* Add breaking change details to PR
Today we reverse the initial order of the nested documents when we
index them in order to ensure that parent documents appear after
their children. This means that a query will always match nested documents
in the reverse order of their offsets in the source document.
Reversing all documents is not needed, so this change ensures that parent
documents appear after their children without modifying the initial order
in each nested level. This allows matching children in the order of their
appearance in the source document, which is a requirement to efficiently
implement #33587. Old indices created before this change will continue
to reverse the order of nested documents to ensure backward compatibility.
* Adds trace logging to IndicesRequestCache
This change adds trace level logging to `IndicesRequestCache` with the
primary aim of helping to identify the cause of the failures in
https://github.com/elastic/elasticsearch/issues/32827. The cache will
log at trace level when a cache hit or miss occurs, including the reader
version and the cache key. Note that this change adds a
`cacheKeyRenderer` which supplies a human readable String of the cache
key since the actual cache key itself is a `BytesReference` containing
the wire protocol serialised form of the request.
Logging is also added for the case where a search timeout occurs and for
that reason the cache entry is invalidated.
* Adds a comment to remind us to remove cacheKeyRenderer
In #28941 we changed the computation of cluster state task descriptions but
this introduced a bug in which we only log the empty descriptions (rather than
the non-empty ones). This change fixes that.
Optionals containing boxed primitive types are prohibitively costly because they
have two levels of boxing. For Optional<Integer> the analogous OptionalInt can be
used to avoid the boxing of the contained int value.
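For example:

    import java.util.Optional;
    import java.util.OptionalInt;

    Optional<Integer> boxed = Optional.of(5);   // boxes the int and wraps the Integer
    OptionalInt primitive = OptionalInt.of(5);  // stores the int directly
    int value = primitive.orElse(1);            // no unboxing on access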
This change fixes a bug in the cross fields mode of the `query_string`
query. The multi fields query builder must be reset before parsing
in order to clear the list of expanded fields coming from the previous text block.
Closes #34215
Mappings with completion type and multi-fields were not able to index array or
object format on completion fields. Only string format was supported.
This is fixed by providing the multi-field parser with an externalValueContext holding the already parsed object.
Closes #15115
This adds some methods to the `DateFormatter` interface, namely
* `withLocale()` to change the locale of a date formatter
* `getLocale()`
* `getZone()`
* `hashCode()`
* `equals()`
These methods will be needed for aggregations and mapping changes, where
zones and locales can be specified in the mapping or in search/aggs
parts of a search request.
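Sketched out, the additions look roughly like this (not the exact
declarations):

    import java.time.ZoneId;
    import java.util.Locale;

    public interface DateFormatter {
        DateFormatter withLocale(Locale locale);
        Locale getLocale();
        ZoneId getZone();
        // equals() and hashCode() are overridden by the concrete formatters so that
        // two formatters with the same pattern, locale and zone compare as equal.
    }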
When nested objects are present in the mappings, we add a filter in
queries to exclude them if there is no evidence that the query cannot
match in this space. In 6.x we visit the query in order to find a mandatory
clause that can match root documents only. If we find one we can omit the
nested documents filter. Currently only `term` and `range` queries are checked;
this change adds support for the `terms` query to effectively remove the nested
filter if a mandatory `terms` clause targets a non-nested field.
Closes #34067
Mainly this fixes a warning by replacing the unchecked `new ActionListener`
with the checked `new ActionListener<Response>`, and it also fixes the line
length violations in this class.
This commit adds a check for changes to the "enabled" attribute of types when
a RestPutMappingAction is received. A MappingException is thrown when
such a change is detected. Changes are prevented in both directions: "false -> true"
and "true -> false".
Closes #33566
#32281 adds elasticsearch-shard to provide a bwc version of elasticsearch-translog for 6.x; elasticsearch-translog has to be removed for 7.0
Relates to #31389
The unfollow API changes a follower index into a regular index, so that it will accept write requests from clients.
For the unfollow API to work, index following needs to be stopped and the index needs to be closed.
Closes #33931
This change introduces the indexing optimization using sequence numbers
in the FollowingEngine. This optimization uses the max_seq_no_of_updates
which is tracked on the primary of the leader and replicated to replicas
and followers.
Relates #33656
This commit removes the use of ExecutableScript from watcher in favor of
custom script contexts for both watcher condition scripts and transform
scripts.
Fixes the equals and hashCode functions to ignore the order of aggregations to ensure equality after serialization
and deserialization. This ensures storing configs with aggregations works properly.
This also addresses a potential issue in caching when the same query contains aggregations in a
different order: first, it will not hit the cache; second, cache objects which should be equal might end up twice in
the cache.
In order to be compatible with joda time, this adds an epoch seconds
formatter that is able to parse floating point values.
Joda time parses such data but discards the fractional part, whereas
this formatter is able to parse the whole value including
milliseconds.
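For instance, a value such as 1523443200.123 should resolve to that many
epoch seconds plus 123 milliseconds; a quick sketch of the arithmetic
(assuming exactly three fractional digits):

    import java.time.Instant;

    String input = "1523443200.123";
    int dot = input.indexOf('.');
    long seconds = Long.parseLong(input.substring(0, dot));
    long millis = Long.parseLong(input.substring(dot + 1)); // "123" -> 123 ms
    Instant instant = Instant.ofEpochSecond(seconds).plusMillis(millis);
    System.out.println(instant); // 2018-04-11T10:40:00.123Z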
The synonym filters no longer need access to the AnalysisRegistry in their
constructors, so we can remove the special-case code and move them to the
common analysis module.
This commit means that synonyms are no longer available for `server` integration tests,
so several of these are either rewritten or migrated to the common analysis module
as rest-spec-api tests
* Make sure 'ignored' and 'routing' field types inherit from StringFieldType.
* Add tests for prefix and regexp queries.
* Support prefix and regexp queries on _index fields.
This commit adds the ability to plug in compilation of custom contexts
in mock script engine. This is needed for testing plugins which add
custom contexts like watcher.
Prior to this change when a pipeline processor called another
pipeline, only the stats for the first processor were recorded.
The stats for the subsequent pipelines were ignored. This change
properly accounts for pipelines regardless of whether they are the first
or subsequently called pipelines.
This change moves the state of the stats from the IngestService
to the pipeline itself. Cluster updates are safe since the pipelines
map is atomically swapped, and if a cluster update happens
while iterating over stats (now read directly from the pipeline)
a slightly stale view of stats may be shown.
Recently we introduced the settings cluster.remote to take the place of
search.remote for configuring remote cluster connections. We made this
change due to the fact that we have generalized the remote cluster
infrastructure to also be used within cross-cluster replication and not
only cross-cluster search. For backwards compatibility, when we made this
change, we allowed that cluster.remote would fall back to
search.remote. Alas, the initial change for this contained a bug for
handling the proxy and seeds settings. The bug for the seeds settings
arose because we were manually iterating over the concrete settings only
for cluster.remote seeds but not for search.remote seeds. This commit
addresses this by iterating over both cluster.remote seeds and
search.remote seeds. Additionally, when checking for existence of proxy
settings, we have to not only check cluster.remote proxy settings, but
also fall back to search.remote proxy settings. This commit addresses
both issues, and adds tests for these situations.
* Handle MatchNoDocsQuery in span query wrappers
This change adds a new SpanMatchNoDocsQuery query that replaces
MatchNoDocsQuery in the span query wrappers.
The `wildcard` query now returns MatchNoDocsQuery if the target field is not
in the mapping (#34093) so we need the equivalent span query in order to
be able to pass it to other span wrappers.
Closes #34105
EngineSearcher can be easily folded into Engine.Searcher which removes
a level of inheritance that is unnecessary for most of its subclasses.
This change folds it into Engine.Searcher and removes the dependency on
ReferenceManager.
* This should surface what errors are thrown on CI
and in org.elasticsearch.transport.RemoteClusterConnection.ConnectHandler#collectRemoteNodes
(the sequence of catching an error in the last catch block and moving on to the next seed node
seems to be the only path by which the errors logged in #33756 could come about)
* Relates #33756
This commit adds "engine is closed" as an expected failure message.
This change is due to #33967 in which we might access a closed engine on
promotion.
Relates #33967
This change is related to #33903 that ports the DocStats
simplification to the master branch. This change builds the docStats
in the ReadOnlyEngine from the last committed segment infos rather than
the reader.
Co-authored-by: Tanguy Leroux <tlrx.dev@gmail.com>
Although we allow indexing BigInteger and BigDecimal into a keyword
field, source filtering on these fields would fail
as XContentBuilder was not able to serialize BigInteger and BigDecimal
to json.
This modifies XContentBuilder to handle BigInteger and
BigDecimal.
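A quick illustration of the new behaviour (assuming a field overload that
accepts BigInteger after this change; the value is arbitrary):

    import java.math.BigInteger;

    import org.elasticsearch.common.Strings;
    import org.elasticsearch.common.xcontent.XContentBuilder;
    import org.elasticsearch.common.xcontent.XContentFactory;

    XContentBuilder builder = XContentFactory.jsonBuilder()
        .startObject()
        .field("big", new BigInteger("92233720368547758070")) // larger than Long.MAX_VALUE
        .endObject();
    System.out.println(Strings.toString(builder)); // {"big":92233720368547758070}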
Closes #32395
This commit creates a DateMathParser interface, which is already
implemented for both joda and java time. While currently the java time
DateMathParser is not used, this change will allow a followup which will
create a DateMathParser from a DateFormatter, so the caller does not
need to know the internals of the DateFormatter they have.
Previously, unmapped aggs try to delegate reduction to a sibling agg that is
mapped. That delegated agg will run the reductions, and also
reduce any pipeline aggs. But because delegation comes before running
pipelines, the unmapped agg _also_ tries to run pipeline aggs.
This causes the pipeline to run twice, and potentially double its output
in buckets which can create invalid JSON (e.g. same key multiple times)
and break when converting to maps.
This is fixed by sorting the list of aggregations ahead of time so that mapped
aggs appear first, meaning they preferentially lead the reduction. If all aggs
are unmapped, the first unmapped agg simply creates a new unmapped object
and returns that for the reduction.
This means that unmapped aggs no longer defer and there is no chance for
a secondary execution of pipelines (or other side effects caused by deferring
execution).
Closes #33514
`SingleFieldsVisitor` is meant to load a single stored field but it
manages to be quite complex to reason about because it inherits from our
"basic" `FieldsVisitor` which is designed to load many fields. This
breaks that inheritance and adds logic to `SingleFieldsVisitor` so it can
be properly stand alone. While this amounts to more lines of code they
ought to be significantly easier to reason about.
This change cleans up "unused variable" warnings. There are several cases where we
most likely want to suppress the warnings (especially in the client documentation test
where the snippets contain many unused variables). In a lot of cases the unused
variables can just be deleted though.
This commit removes the sysprop controlling whether ctx is in params for
update scripts and replaces it with use of the new ParameterMap, which
outputs a deprecation warning whenever params.ctx is used.
Today query parsers throw a TooManyClauses exception when a query creates
too many clauses. However graph phrase queries do not respect this limit.
This change adds a protection against crazy expansions that can happen when
building a graph phrase query. This is a temporary copy of the fix available
in https://issues.apache.org/jira/browse/LUCENE-8479 but not merged yet.
This logic will be removed when we integrate the Lucene patch in a future
release.
* INGEST: Tests for Drop Processor
* UT for behavior of dropped callback
and drop processor
* Moved drop processor to `server`
project to enable this test
* Simple IT
* Relates #32278
Today `SearchAsyncActionTests#testFanOutAndCollect` uses a simple `HashMap` for
the `nodeToContextMap` variable, which is then accessed from multiple threads
without, apparently, explicit synchronisation. This provides an explanation for
the test failure identified in #29242 in which `.toString()` returns `"[]"`
just before `.isEmpty` returns `false`, without any concurrent modifications.
This change converts `nodeToContextMap` to a `newConcurrentMap()` so that this
cannot occur. It also fixes a race condition in the detection of double-calling
the subsequent search phase.
Closes #29242.
We started tracking max_seq_no_of_updates on the primary in #33842. This
commit replicates that value from a primary to its replicas in replication
requests or the translog phase of peer-recovery.
With this change, we guarantee that the value of max_seq_no_of_updates
on a replica when any index/delete operation is performed is at least the
max_seq_no_of_updates on the primary when that operation was executed.
Relates #33656
We currently fallback to local indices whenever a remote cluster is not found, as there may still be indices / aliases with the same name. Such behaviour is lenient but needs to be kept for backwards compatibility. Clarified that in the code so we don't forget.
Relates to #26247
It mistakenly uses the Elasticsearch major version instead of the Lucene major
version. I noticed it when backporting, it is not noticeable on master because
the only two Lucene versions that are supported, 7 and 8, encode norms the same
way, unlike Lucene 6.
Today, TransportService uses System.currentTimeMillis() to get the current time
to report on things like timeouts, and enqueues lambdas for future execution.
However, in tests it is useful to be able to fake out the current time and to
see what all these enqueued lambdas are really for. This change alters the
situation so that we can obtain the time from the more easily-faked
ThreadPool#relativeTimeInMillis(), and implements some friendlier toString()
methods on the various Runnables so we can see what they are later.
As far as I can tell this guard against fragile analyzers is no longer relevant, since
we stopped setting special analyzers on numeric fields (3bf6f4). Instead of removing
the guard completely, I opted to keep a check for untokenized + unnormalized fields
to avoid going through the analysis process unnecessarily.
My motivation for simplifying this check is that I'd like to add support for
`split_queries_on_whitespace` to the new 'queryable object' fields. As it stands, I would
have to add a dedicated instanceof check for the new mapper, which is not optimal.
This commit introduces an AbstractSimpleSecurityTransportTestCase for
security transports. This class provides transport tests that are
specific to security transports. Additionally, it fixes the tests referenced in
#33285.
* TESTS: Make score Float#NaN when there is no max score
Fixes a test failure due to maxScore being set to Float#MIN_VALUE instead
of Float#NaN. In addition the initial value for maxScore is set to
Float#NEGATIVE_INFINITY so that it is an illegal value.
Closes #33993
When executing a cross-cluster search, we need to search against all local indices (and no remote indices) in case no indices are specified. Also, if only remote indices are specified, no local indices will be queried. We previously added empty local indices whenever they were not present in the map of the grouped indices, then we would act differently later based on the extracted remote indices. Instead, we now add the empty array for local indices only in case we need to search all local indices; the entry for local indices is not added when local indices should not be searched. This way the grouped indices reflect reality and provide a better indication of what indices will be searched.
Settings validation in AutoQueueAdjustingExecutorBuilder always checked against
a default value which means that we can never change the max queue size to a value
lower than the default. This change adds tests and fixes this validation.
This PR is the first step to use seq_no to optimize indexing operations.
The idea is to track the max seq_no of either update or delete ops on a
primary, and transfer this information to replicas, which use it
to optimize the indexing plan for index operations (with assigned seq_no).
The max_seq_no_of_updates on primary is initialized once when a primary
finishes its local recovery or peer recovery in relocation or being
promoted. After that, the max_seq_no_of_updates is only advanced internally
inside an engine when processing update or delete operations.
Relates #33656