If the query coordinating node is also a data node that holds all the
shards for a search request, we can end up recursing through the can
match phase (because we send a local request and, on response, the
listener moves to the next shard and does this again, without ever having
returned from the previous shards). This recursion can lead to a stack
overflow for even a reasonable number of indices (daily indices over
sixty days with five shards per day are enough to trigger the stack
overflow). Moreover, all this execution would be happening on a network
thread (the thread that initially received the query). With this commit,
we allow search phases to override max concurrent requests. This allows
the can match phase to avoid recursing through the shards towards a
stack overflow.
Relates #26484
Requesting too many docvalue_fields in a search request can potentially be costly
because it might incur a per-field per-document seek. This change introduces a
soft limit on the number of fields that can be retrieved. The setting can be
changed per index using the `index.max_docvalue_fields_search` setting.
Relates to #26390
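As an illustration, a minimal sketch (assuming the 6.x transport client Java API; index and field names are hypothetical) of a search that asks for several doc-value fields, each of which may incur the per-field per-document seek this soft limit guards against:
```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class DocValueFieldsExample {
    // Each requested doc-value field may cost a per-field, per-document seek;
    // index.max_docvalue_fields_search now caps how many may be asked for.
    static SearchResponse search(Client client) {
        return client.prepareSearch("events") // hypothetical index
                .setQuery(QueryBuilders.matchAllQuery())
                .addDocValueField("timestamp")
                .addDocValueField("status")
                .get();
    }
}
```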
Today we don't have a pluggable way to validate if the cluster state
is compatible with the node that joins. We already apply some checks for index
compatibility that prevent nodes from joining a cluster with indices they don't support,
but for plugins this isn't possible. This change adds a cluster state validator that
allows plugins to prevent a join if the cluster-state is incompatible.
Remove "index.mapper.dynamic" setting for 6.0 (and after) indices, but
still keep working for 5.x (and before) indices. Remove two index
dynamic disable test cases as the disability of index.mapper.dynamic is
already removed for current version. Add a new test class for version
test.
This test case was leftover from the static bwc tests. There was still
one use for checking we do not load old indices, but this PR moves the
legacy code needed for that directly into the test. I also opened a
follow up issue to completely remove the unsupported test: #26583.
When determining if a build is a snapshot build, we look for a field in
the JAR manifest. However, when running tests, we are not running with a
compiled core Elasticsearch JAR, we are running with the compiled core
classes on the classpath. We have a fallback for this, we always assume
such a situation is a snapshot build. However, when running builds with
-Dbuild.snapshot=false, this is not the case. As such, we need to fall
back to the value of build.snapshot. However, there are cases where
we are not running with a compiled core Elasticsearch JAR (e.g., when
the transport client is embedded in a web container) so we should only
do this fallback if we are in tests. To verify we are in tests, we check
if randomized runner is on the classpath.
Relates #26554
RangeQueryBuilder needs to perform too many `instanceof` checks in order to
check for `date` or `range` fields and to know what it should do with the
shape relation, time zone and date format.
This commit adds those 3 parameters to the `rangeQuery` factory method so that
those instanceof checks are not necessary anymore.
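For context, the three parameters are the ones a user already sets on the query builder; a minimal sketch (standard Java query DSL, hypothetical field name) of a date range query where they matter:
```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

class DateRangeQueryExample {
    // The time zone and date format set here are among the parameters now passed
    // straight into the rangeQuery factory method instead of being resolved
    // through instanceof checks on the field type.
    static RangeQueryBuilder lastDay() {
        return QueryBuilders.rangeQuery("timestamp") // hypothetical date field
                .gte("now-1d/d")
                .lt("now/d")
                .timeZone("+01:00")
                .format("strict_date_optional_time");
    }
}
```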
* Limit the number of expanded fields in query_string and simple_query_string
This limits the number of automatically expanded fields for the "all fields"
mode (`"default_field": "*"`) for the `query_string` and `simple_query_string`
queries to 1024 fields.
Resolves #25105
* Add blurb about limit to the docs
* Throw a better error message for empty field names
When a document is parsed with a `""` for a field name, we currently throw a
confusing error about `.` being present in the field. This changes the error
message to be clearer about what's causing the problem.
Resolves #23348
* Fix exception message in test
To protect against poisonous situations, ES will only try to allocate a shard 5 times (by default). After 5 consecutive failures, ES will stop assigning the shard and wait for an operator to fix the problem. Once the problem is fixed, the operator is expected to call `_reroute` with a `retry_failed` flag to force retrying of those shards. Currently that retry flag is only used for a single allocation run. However, if not all shards can be allocated at once (due to throttling) the operator has to keep on calling the API until all shards are assigned, which is cumbersome. This PR changes the behavior of the flag to reset the failed allocations counter, thus allowing shards to be assigned again.
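A minimal sketch of the operator-side call (assuming the 6.x Java admin client); with this change a single invocation is enough even when throttling delays some assignments:
```java
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;
import org.elasticsearch.client.Client;

class RetryFailedAllocationExample {
    // retry_failed now also resets the failed-allocation counters, so shards
    // that were throttled on this run can still be assigned later.
    static ClusterRerouteResponse retryFailedShards(Client client) {
        return client.admin().cluster()
                .prepareReroute()
                .setRetryFailed(true)
                .get();
    }
}
```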
This test should not rely on strict ordering for same score suggestions.
The Lucene completion suggester uses the doc id in case of a tie and documents are indexed randomly.
This commit removes a norelease from the codebase now that there is a CI
job that fails on the norelease pattern being present. Instead, a new
issue has been opened to track this one.
Relates #26544
The completion suggester has a `shard_size` option that sets the size of the suggestions to retrieve per shard but it is ignored
by the builder. This commit restores the handling of this option and fixes a test that can randomly fail without it.
This change exposes the duplicate removal option added in Lucene for the completion suggester
with a new option called `skip_duplicates` (defaults to false).
This commit also adapts the custom suggest collector to handle deduplication when multiple contexts match the input.
Closes #23364
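A minimal sketch of both suggester options together (assuming the 6.1 Java API where `skip_duplicates` lands; the suggestion name and field are hypothetical):
```java
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;

class CompletionSuggestExample {
    // "title_suggest" is a hypothetical completion field.
    static SuggestBuilder titleSuggestions(String prefix) {
        return new SuggestBuilder().addSuggestion("titles",
                SuggestBuilders.completionSuggestion("title_suggest")
                        .prefix(prefix)
                        .skipDuplicates(true) // new option, defaults to false
                        .size(5)
                        .shardSize(10));      // per-shard size, honored again by the builder
    }
}
```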
This change fixes a regression introduced in 6 that removes the skipping of the rescore phase
when a sort other than _score is used.
We now fail the request when a sort is provided in conjunction with rescore instead of just skipping the rescore phase.
This commit also adds an assert that checks if the topdocs are sorted by _score after the rescoring.
It is the responsibility of the rescorer to make sure that topdocs are sorted after rescore, so we
just check that this condition is met in the rescore phase.
The three SortBuilders that can have inner NestedSortBuilders currently don't
rewrite any of the filters contained in them. This change adds a rewrite method
to NestedSortBuilder and changes rewriting in FieldSortBuilder,
ScriptSortBuilder and GeoDistanceSortBuilder to make sure inner nested sorts get
rewritten if they need to.
Improve testing around the ScriptSortBuilder#build method, adding checks for
correct transfers of the sort mode and nested sorts.
Also changing the behaviour around the nested_path, nested_filter vs. nested
parameter in a similar way as in #26490 and deprecating the setters/getters for
the old syntax.
Closes #17286
Security manager policy files contain grants for specific codebases,
where a codebase is a jar file. We use a system property containing the
name of the jar file to resolve the jar file location when parsing the
policy file. However, this means the version of the jars must be
modified when versions of dependencies change. This is particularly
messy for elasticsearch, where we now have a dependency on the rest
client, and need to support both a snapshot version for testing and non
snapshot for release.
This commit adds an alias for the elasticsearch rest client without a
version to be used in policy files. That allows the policy files to not care whether
the rest client is a snapshot or release.
Resolves #26332, where too many tasks occurred while adjustment was happening, the
measurements were reset to 0, and then an assert failed due to tasks executing
in 0 nanoseconds.
When a cache entry expires, it remains in the cache (both the segment
that it belongs to, and the LRU list) until an eviction occurs. The
problem here is that the compute if absent implementation relies on
there not being an association to a key that we are trying to put
because it internally uses put if absent on the underlying segment. If
we try to put an association for a key that has expired but not been
evicted, then compute if absent will return as if there is nothing in
the cache for the given key, yet no call to compute if absent will
succeed in putting a new association for the key. To remedy this, we
modify the internal get method for the cache to let the caller take
action if the entry they are retrieving is expired. This allows the
compute if absent method to take the action of evicting the entry from
the cache, thus allowing the put if absent method used by compute if
absent to succeed for one of the callers trying to compute if absent a
new association.
Relates #26516
This change adds a dynamic cluster setting named `search.max_keep_alive`.
It is used as an upper limit for scroll expiry time in scroll queries and defaults to 1 hour.
This change also ensures that the existing setting `search.default_keep_alive` is always smaller than `search.max_keep_alive`.
Relates #11511
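A minimal sketch of a scroll request whose keep-alive has to stay below the new limit (assuming the 6.x Java client; the index name is hypothetical):
```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;

class ScrollKeepAliveExample {
    // The requested keep-alive (30m here) must not exceed search.max_keep_alive (1h by default).
    static SearchResponse openScroll(Client client) {
        return client.prepareSearch("logs") // hypothetical index
                .setQuery(QueryBuilders.matchAllQuery())
                .setScroll(TimeValue.timeValueMinutes(30))
                .setSize(1000)
                .get();
    }
}
```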
* check style
* add skip for bwc
* iter
* Add a maximum throttle wait time of 1h for reindex
* review
* remove empty line
The `_type` and `types` versions of the current `type` parameter have been
deprecated since 5.0. We can remove support for them in 7.0 and also in 6.x and
6.0.
We currently have a weird relationship between Transport,
TransportService, and TransportServiceAdaptor. At some point I think
that we would like to collapse these all into one concept as we only
support TCP transports.
This commit moves in that direction by eliminating the adaptor and just
passing the transport service to the transport.
Improve testing around the GeoDistanceSortBuilder#build method, adding checks for correct
transfers of the sort order, mode, nested sorts and points validation and coercion.
Also changing the behaviour around the nested_path, nested_filter vs. nested parameter in
a similar way as in #26490 and deprecating the setters/getters for the old syntax.
Relates to #17286
Currently we allow both "old" and "new" way of setting nested sorts on the
FieldSortBuilder at the same time. This should throw an error, instead the user
should choose one of the two possible options.
Also adding testing for the now deprecated nestedPath/nestedFilter parameters,
including checks that they emit warnings on parsing and that the new
NestedSortBuilder overwrites the deprecated parameters when building the
SortField.
Relates to #17286
Adding a check to QueryStringQueryBuilderTests that checks the override
behaviour of `quote_analyzer`, also adding documentation explaining the use of
this parameter in `query_string` query.
Closes #25417
We are getting the default Object#toString implementation here, we need
more than this. This commit instead formats the primary response to JSON
so we can see into its soul.
The new NestedSortBuilder currently is only tested via its use in the other
SortBuilder implementations it can be used in. This adds its own simple unit
test class that at first checks our usual fromXContent parsing, serialization
and hashCode/equals checks. It also adds tests for cases where NestedSortBuilder
is nested in itself and reuses the code for creating randomized instances in the
other SortBuilder tests.
In addition to the tests, this changes the `path` parameter in NestedSortBuilder
to be mandatory and removes the `read` method since it is not really needed.
The current script service has a script compilation limit for a one
minute window. This is set to a small default value of 15. Instead of
increasing that default value, this commit introduces a new setting
that allows configuring a rate per time unit, so that the script service can deal with bursts better.
The new setting is named `script.max_compilations_rate`,
requires a nonnegative number and a positive time value.
The default is `75/5m`, which is equivalent to the existing 15 per minute.
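A minimal sketch of adjusting the new setting dynamically (assuming the 6.x Java admin client; the chosen rate is only an example value):
```java
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

class ScriptCompilationRateExample {
    // Example value only: allow 200 compilations per 10 minutes instead of the default 75/5m.
    static ClusterUpdateSettingsResponse raiseLimit(Client client) {
        return client.admin().cluster()
                .prepareUpdateSettings()
                .setTransientSettings(Settings.builder()
                        .put("script.max_compilations_rate", "200/10m"))
                .get();
    }
}
```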
In some cases a request can already be aborted and retried. This means
the condition that aborting a request should only happen when an item
has not been processed yet is too strict. This commit allows for a
double abort. If we attempt to abort an operation that was previously
processed but not aborted, we treat that as a hard failure.
Relates #26434
Adds support for bulk items to be aborted before they are processed by the TransportShardBulkAction.
This can be used by an ActionFilter to reject a subset of the items in a bulk action without rejecting the whole action (or all the items for a shard).
Currently we don't have much unit testing of the SortField that is created when
calling the SortBuilders `build` method. Most of this is covered by integration tests
somewhere but it would be good to have some basic checks in FieldSortBuilderTest
as well.
This adds testing for the sort order, mode, missing values and checks that `nested`
gets set in the XFieldComparatorSource when `nestedPath` and `nestedFilter` are
set on the builder.
Relates to #17286
* Implement adaptive replica selection
This implements the selection algorithm described in the C3 paper for
determining which copy of the data a query should be routed to.
By using the service time EWMA, response time EWMA, and queue size EWMA we
calculate the score of a node by piggybacking these metrics with each search
request.
Since Elasticsearch lacks the "broadcast to every copy" behavior that Cassandra
has (as mentioned in the C3 paper) to update metrics after a node has been
highly weighted, this implementation adjusts a node's response stats using the
average of its own and the "best" node's metrics. This is so that a long GC
or other activity that may cause a node's rank to increase dramatically does not
permanently keep a node from having requests routed to it, instead it will
eventually lower its score back to the realm where it is a potential candidate
for new queries.
This feature is off by default and can be turned on with the dynamic setting
`cluster.routing.use_adaptive_replica_selection`.
Relates to #24915, however instead of `b=3` I used `b=4` (after benchmarking)
* Randomly use adaptive replica selection for internal test cluster
* Use an action name *prefix* for retrieving pending requests
* Add unit test for replica selection
* don't use adaptive replica selection in SearchPreferenceIT
* Track client connections in a SearchTransportService instead of TransportService
* Bind `entry` pieces in local variables
* Add javadoc link to C3 paper and javadocs for stat adjustments
* Bind entry's key and value to local variables
* Remove unneeded actionNamePrefix parameter
* Use conns.longValue() instead of cached Long
* Add comments about removing entries from the map
* Pull out bindings for `entry` in IndexShardRoutingTable
* Use .compareTo instead of manually comparing
* add assert for connections not being null and gte to 1
* Copy map for pending search connections instead of "live" map
* Increase the number of pending search requests used for calculating rank when chosen
When a node gets chosen, this increases the number of search counts for the
winning node so that it will not be as likely to be chosen again for
non-concurrent search requests.
* Remove unused HashMap import
* Rename rank -> rankShardsAndUpdateStats
* Rename rankedActiveInitializingShardsIt -> activeInitializingShardsRankedIt
* Instead of precalculating winning node, use "winning" shard from ranked list
* Sort null ranked nodes before nodes that have a rank
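A minimal sketch of turning on the `cluster.routing.use_adaptive_replica_selection` setting described above (assuming the 6.x Java admin client):
```java
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

class AdaptiveReplicaSelectionExample {
    // The feature is off by default; this flips the dynamic setting on cluster-wide.
    static ClusterUpdateSettingsResponse enable(Client client) {
        return client.admin().cluster()
                .prepareUpdateSettings()
                .setPersistentSettings(Settings.builder()
                        .put("cluster.routing.use_adaptive_replica_selection", true))
                .get();
    }
}
```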
Multi-level Nested Sort with Filters
Allow multiple levels of nested sorting where each level can have its own filter.
Backward compatible with previous single-level nested sort.
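A minimal sketch of such a two-level nested sort (assuming the Java API where NestedSortBuilder lands; paths, filter and field names are hypothetical):
```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.NestedSortBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;

class MultiLevelNestedSortExample {
    // Sort by a doubly nested price, filtering the outer nested level by color.
    static FieldSortBuilder cheapestBlueOffer() {
        return SortBuilders.fieldSort("offers.prices.amount")
                .order(SortOrder.ASC)
                .setNestedSort(new NestedSortBuilder("offers")
                        .setFilter(QueryBuilders.termQuery("offers.color", "blue"))
                        .setNestedSort(new NestedSortBuilder("offers.prices")));
    }
}
```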
* Moves deferring code into its own subclass
This change moves the code that deals with deferring collection to a subclass of BucketAggregator called DeferringBucketAggregator. This means that the code in AggregatorBase is simplified and also means that the code for deferring collection is in one place and easier to maintain.
* Makes SingleBucketAggregator an interface
This is so aggregators that extend BucketsAggregator directly and those that extend DeferringBucketAggregator can be a single bucket aggregator
* review comments
* More review comments
When creating the keystore explicitly (from executing
elasticsearch-keystore create) or implicitly (for plugins that require
the keystore to be created on install) on an Elasticsearch package
installation, we are running as the root user. This leaves
/etc/elasticsearch/elasticsearch.keystore having the wrong ownership
(root:root) so that the elasticsearch user can not read the keystore on
startup. This commit adds setgid to /etc/elasticsearch on package
installation so that when executing this directory (as we would when
creating the keystore), we will end up with the correct ownership
(root:elasticsearch). Additionally, we set the permissions on the
keystore to be 660 so that the elasticsearch user via its group can read
this file on startup.
Relates #26412
* Remove the _all metadata field
This change removes the `_all` metadata field. This field is deprecated in 6
and cannot be activated for indices created in 6 so it can be safely removed in
the next major version (e.g. 7).
The `locale` field of `date` fields accepts almost any string and unknown
locales are simply ignored, which is trappy. We should fail on unknown languages
or countries.
This commit also makes `-` an accepted separator in addition to `_` since `-`
is the recommended separator (https://tools.ietf.org/html/rfc5646#section-2.1).
`_` is probably still worth supporting since it is the separator used by
`Locale#toString()`.
In order to know when the script compilation limit has kicked in,
this commit adds a counter in the script stats to expose that
information.
So far the only way to find out about this was to check the logs
or check out responses of individual requests.
At current, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDE's
during the brief time it was used. We did not know exactly how many
users would need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes #26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
This commit removes the streams test for access after closing the bytes
stream. Output streams being closed mean they can no longer be written
to, but other methods to retrieve side state of the stream can still
make sense, such as bytes() in this case.
relates #12620
This allows plugins to plug rescore implementations into
Elasticsearch. While this is a fairly expert thing to do I've
done my best to point folks to the QueryRescorer as one that at
least documents the tradeoffs that it makes. I've attempted to
limit the API surface area by removing `SearchContext` from the
exposed interface, instead exposing just the IndexSearcher and
`QueryShardContext`. I also tried to make some of the class names
more consistent and do some general cleanup while I was there.
I entertained the notion of moving the `QueryRescorer` to a module.
After all, it'd be a wonderful test to prove that you can plug
rescore implementation into Elasticsearch if the only built in
rescore implementation is in the module. But I decided against it
because the new module would require a client jar and it'd require
moving some more things around. I think if we really want to do
it, we should do it as a followup.
I did, on the other hand, create an "example" rescore plugin which
should both be a nice example for anyone wanting to plug in their
own rescore implementation and serve as a good integration test
to make sure that you can indeed plug one in.
Closes #26208
This change rewrites phrase queries built on a field indexed without positions
to a match_no_docs query when the `lenient` option is set to true.
This change affects all full text queries.
There is a group of five settings relating to raw tcp configurations
(no_delay, buffer sizes, etc) that we have for the http transport. These
currently live in the netty module. As they are unrelated to netty
specifically, this commit moves these settings to the
`HttpTransportSettings` class in core.
This commit removes the keystore creation on elasticsearch startup, and
instead adds a plugin property which indicates the plugin needs the
keystore to exist. It does still make sure the keystore.seed exists on
ES startup, but through an "upgrade" method that the keystore loading in
Bootstrap calls.
closes #26309
This commit renames the TransportResyncReplicationAction name to be an internal action as this is
not an action that should be invoked by a user, but is instead internal to the operation of the
system.
* Check bucket metric ages point to a multi bucket agg
This adds a validation step to the BucketMetricsPipelineAggregationBuilder which ensures that the first aggregation in the `buckets_path` is a multi-bucket aggregation. It does this using a new `MultiBucketAggregationBuilder` marker interface.
The change also moves the validation of pipeline aggregations to the `AggregatorFactories.build()` method so the validation can inspect sibling `AggregatorBuilder` objects rather than `AggregatorFactory` objects. Further it removes validate from `AggregatorFactory` since this was never implemented and, since aggregators only depend on their own internal state and not on other aggregators, they should ideally be validated at setter time, but in the rare case where this is not possible the validation should be done in the `AggregationBuilder.build()` step.
Closes #25775
Move validate stage to happen during AggregatorFactories.Builder.build
Also removes validate method from normal aggs since it was never used.
* review comment fix
* Accept an array of field names and boosts in the index.query.default_field setting
This commit allows defining an array of field names and boosts for the index setting `index.query.default_field`.
The format is equivalent to the `fields` options of the full text search queries (e.g. field_name^boost).
This commit also makes this setting dynamically updatable.
Fixes #25946
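A minimal sketch of setting the new array form dynamically (assuming the 6.x Java admin client; index and field names are hypothetical, and `putList` may be spelled `putArray` on older clients):
```java
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

class DefaultFieldExample {
    // "articles", "title" and "body" are hypothetical names.
    static UpdateSettingsResponse setDefaultFields(Client client) {
        return client.admin().indices()
                .prepareUpdateSettings("articles")
                .setSettings(Settings.builder()
                        .putList("index.query.default_field", "title^3", "body"))
                .get();
    }
}
```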
* More XContent migrations
* Removes ToXContentToBytes
* Adds toString to classes that used to extend ToXContentToBytes
* use XContentHelper
* more review comments
* prettify tostring output
The test verifies that search on the primary works by executing a search with preference _primary. If the primary is relocating, however, it
does not take the primary relocation target into account. The test only makes sense if balancing is not happening yet, i.e., the
cluster is not green.
The javadoc tool on JDK 9 has issues with the combination of anonymous classes and varargs parameters.
This commit simply refactors a few anonymous classes to private inner classes.
This PR begins the long journey to deprecating Streamable.
The idea here is to add additional method signatures that
support Writeable.Reader, so that work can begin to migrate objects extending TransportMessage to
implement Writeable and not Streamable.
One example conversion is done in this PR: SimulatePipelineRequest.
This commit extracts the inner query in the ESToParentBlockJoinQuery for highlighting.
This query has been added in 5.4 and breaks plain highlighting on nested queries.
Highlighters that use postings or term vectors are not affected because they can't highlight nested documents correctly.
Fixes #26230
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
Gives allocation commands from the cluster reroute API
the ability to provide messages to be logged once the
cluster state change has been committed.
The purpose of this change is to create a record in the
logs when allocation commands which could potentially
be destructive are applied. The allocate_empty_primary
and allocate_stale_primary commands are the only ones
that currently provide log messages.
Closes #22821
* Deprecate global_ordinals_hash and global_ordinals_low_cardinality
This change deprecates the `global_ordinals_hash` and `global_ordinals_low_cardinality` hints and
makes the `global_ordinals` execution hint choose internally if global ords should be remapped or use the segment ord directly.
These hints are too sensitive and expert to be exposed and we should be able to take the right decision internally based on the agg tree.
Currently the `precision` parameter must be a precision level
in the range of [1,12]. In #5042 it was suggested also supporting
distance units like "1km" to automatically approximate the needed
precision level. This change adds this support to the Rest API by
making use of GeoUtils#geoHashLevelsForPrecision.
Plain integer values without a unit are still treated as precision
levels like before. Distance values that are too small to be represented
by a precision level of 12 (values approx. less than 0.056m) are
rejected.
Closes #5042
The `from` search parameter cannot really be used in scrolled searches. This
commit adds a check for this case to the SearchRequest#validate() method so we
can report it as an error rather than silently ignoring it.
Closes #9373
This change is a continuation of #25726 that aligns field expansions for the simple_query_string with the query_string and multi_match query.
The main changes are:
* For exact field names, the new behavior is to rewrite to a match_no_docs query when the field name is not found in the mapping.
* For partial field names (with * suffix), the expansion is done only on keyword, text, date, ip and number field types. Other field types are simply ignored.
* For all fields (*), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.
The use_all_fields option is deprecated in this change and can be replaced by setting `*` in the fields parameter.
This commit also changes how text fields are analyzed. Previously the default search analyzer (or the provided analyzer) was used to analyze every text part,
ignoring the analyzer set on the field in the mapping. With this change, the field analyzer is used instead unless an analyzer has been forced in the parameter of the query.
Finally now that all full text queries can handle the special "*" expansion (`all_fields` mode), the `index.query.default_field` is now set to `*` for indices created in 6.
This commit converts script query to use a new FilterScript context. The
new context returns a boolean, so the error that would have previously
happened at runtime if a non boolean was returned would now happen at
script compilation. Also, the leniency of supporting returning a number
and 0 mapping to false, non-zero to true is gone, but it was never
documented. With the new context compilation will now also fail if
special variables are used at compilation time, instead of runtime, eg
ctx.
Right now we use a custom future for the CloseFuture associated with a
channel. This is because we need special unwrapping logic to ensure that
exceptions from a future failure are a certain type (as opposed to an
UncategorizedException). However, the current version is limiting
because we can only attach one listener.
This commit changes the CloseFuture to extend the
PlainListenableActionFuture. This change allows us to attach multiple
listeners.
When slices is set as auto, there's an additional network call
needed for the reindex tasks to know how to rethrottle. Sometimes
the rethrottle action happens before the reindex task is fully
initialized, so in the test we wait for the task to be ready.
This commit also adds some safeguards to ensure that
cancel and rethrottle operations are handled correctly
Closes #26192
If we do not have permissions to write the keystore, an unclear access
denied exception is thrown. This commit catches this exception so that
we can decorate it with a friendlier error message.
Relates #26284
We use `:` for cross-cluster search (eg `cluster:index`), therefore, we should
not allow the ambiguity when allowing cluster or index names.
Relates to #23892
We already added the functionality to create a new keystore on startup
in #26126 but apparently failed to persist the keystore. This change adds
persistence and adds a test for the bootstrap loading.
Today a `ClusterState.Custom` can be fetched by a transport client and
leaks to the user even if the classes are private etc since the serialized
bytes can be reconstructed. This change adds an option to customs to mark
them as private such that our clusterstate action will never leak it.
The AwaitsFix issue has been closed as deleting an index and recreating it with the same name will give the
shard a fresh folder to be written to (based on the index uuid).
Due to the weird way of structuring the serialization code in AcknowledgedRequest, many request types forgot to properly serialize the request timeout, for example "index deletion", "index rollover", "index shrink", "putting pipeline", and other requests. This means that if those requests were not directly sent to the master node, the acknowledgement timeout information would be lost (and the default used instead).
Some requests also don't properly expose the timeout mechanism in the REST layer, such as put / delete stored script. This commit fixes all that.
This commit adds a keystore.seed setting that is automatically
generated when the ES keystore is created. This setting may be used by
plugins as a secure, random value. This commit also auto creates the
keystore upon startup to ensure the new setting is always available.
For the document field equals and hash code tests, we try to mutate the
document field to intentionally produce a document field not equal to
our provided one. We do this by randomly choosing a document field that
has either
- a randomly chosen field name and the same field value as the provided
document field
- a randomly chosen field value and the same field name as the
provided document field
If we are unlucky, it can be that the document field chosen by this
method can be equal to the provided document field. In this case, our
test will fail because the mutation really should be not equal. In this
case, we should simply try the other mutation. Note that the random document
field produced by the second method can be equal to the provided
document because it has the same field name and we can get unlucky with
our randomly chosen field values. It is not the case that the random
document field produced by the first method can be equal to the provided
document field; this is because the current implementation guarantees
that the field name length will be different guaranteeing that we have a
different field name. Nevertheless, we fix the issue here by checking
that our random choice gives us a non-equal document field, and assert
that if we got unlucky the other one will work for us.
In a few places we need to lazy initialize static deprecation
loggers. This is needed to avoid touching logging before logging is
configured, but deprecation loggers that are used in foundational
classes like settings and parsers would be initialized before logging is
configured. Previously we used a lazy set once pattern which is fine,
but there's a simpler approach: the holder pattern.
Relates #26218
This commit changes the keystore cli add commands to prompt for
creating the keystore if it does not exist. This will make it easier on
users starting out, not having to run a separate command for creation.
An array of values is required because there is no default (or
reasonable way to set a default). But validation for values
only happens if it is actually set. If the values param is omitted
entirely then the agg builder will NPE.
This change adds several random test infrastructure improvements that caused
issues in on-going developments but are generally useful. For instance, it is impossible
to restart a node with a secure setting source since we close it after the node is started.
This change makes it cloneable such that we can reuse it for a restart.
Currently the `percentiles` aggregation allows specifying both possible methods
in the query DSL, but only the latter one is used. This changes it to rejecting
such requests with an error. Setting the method multiple times via the java API
still works (and the last one wins).
Closes #26095
This simply removes the default identity hashcode and equals methods in InternalAggregation which were only temporarily put there while we implemented the methods in the subclasses.
The node setting `cluster.indices.tombstones.size` was not registered with the settings infrastructure, making it impossible for it to be set by a user.
Closes #26191
For CLI tools, we configure logging without reading the
log4j2.properties file. This is because any log statements in a CLI tool
should dump to the console while reading from the log4j2.properties file
would cause them to dump wherever the log configuration there indicates
(e.g., possibly a remote machine). To do this, we added some code to the
base implementation of all CLI tools to configure logging without a
config file. This code is also executed when Elasticsearch starts up. In
the past this was fine yet we previously added detection to
Elasticsearch to find cases where we use logging before it is
configured. Because of configuring logging without a config, this means
we only catch uses of logging before the logging without config is
performed. To correct this, we enable a CLI tool to skip enabling
logging without a config and then in the Elasticsearch CLI we indeed
utilize this to skip configuring logging without a config.
Relates #26209
The flood warning checks the wrong threshold, namely the high
watermark. This would impact any node for which the disk usage is above
the high watermark and below the flood stage watermark. This commit
fixes this so that it compares to the flood threshold.
Relates #26204
* Rewrite range queries with open bounds to exists query
This change rewrites range query with open bounds to an exists query that should be faster to execute.
Fixes #22640
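A minimal sketch of the equivalence (standard Java query DSL; the field name is hypothetical):
```java
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class OpenRangeRewriteExample {
    // A range query with both ends left open matches any document with a value...
    static QueryBuilder unboundedRange() {
        return QueryBuilders.rangeQuery("price"); // no gte/gt/lte/lt set
    }

    // ...so it can be rewritten to the equivalent, cheaper exists query.
    static QueryBuilder rewrittenForm() {
        return QueryBuilders.existsQuery("price");
    }
}
```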
`epoch_millis` and `epoch_second` date formats truncate float values, as numbers or as strings.
The `coerce` parameter is not defined for `date` field type and this is not changing.
See PR #26119
Closes #14641
The following token filters were moved: arabic_stem, brazilian_stem, czech_stem, dutch_stem, french_stem, german_stem and russian_stem.
Relates to #23658
In reindex APIs, when using the `slices` parameter to choose the number of slices, adds the option to specify `slices` as "auto" which will choose a reasonable number of slices. It uses the number of shards in the source index, up to a ceiling. If there is more than one source index, it uses the smallest number of shards among them.
This gives users an easy way to use slicing in these APIs without having to make decisions about how to configure it, as it provides a good-enough configuration for them out of the box. This may become the default behavior for these APIs in the future.
By default we only serialize analyzers if the index analyzer is not the
`default` analyzer or if the `search_analyzer` is different from the index
`analyzer`. This raises issues with the `_all` field when the
`index.analysis.analyzer.default_search` is set, since it automatically makes
the `search_analyzer` different from the index `analyzer`. Then there are
exceptions since we expect the `_all` configuration to be empty on 6.0 indices.
Closes #26136
Today we have a `null` invariant on all `ClusterState.Custom`. This makes
several code paths complicated and requires complex state handling in some cases.
This change allows registering a custom supplier that is used to initialize the
initial clusterstate with these transient customs.
This is a safer default since sorting by sub aggregations prevents these
aggregations from being deferred. `global_ordinals_hash` will at least
make sure that we do not use memory for buckets that are not collected.
Closes #24359
Two tests were still using the static indices:
* IndexFolderUpgraderTests#testUpgradeRealIndex()
* InternalEngineTests#testUpgradeOldIndex()
I removed these tests too, because these tests functionally overlap
with the full-cluster-restart qa tests.
Relates to #24939
This occasionally fails now because if `top` is `-Infinity` (which we sometimes
test for in randomization), the value might not get changed for the
equals/hashCode tests.
Closes #26107
* Adds ToXContentFragment
This interface is meant for objects that implement `ToXContent` but are not complete objects. It is basically the opposite of `ToXContentObject`. It means that it will be easier to track the migration of classes over to the fragment/not fragment ToXContent model as it will be clear which classes are not migrated. When no classes directly implement `ToXContent` we can make `ToXContent` package private to be sure that all new classes must implement `ToXContentObject` or `ToXContentFragment`.
* review comments
* more review comments
* javadocs
* iter
* Adds tests
* iter
* adds toString test for aggs
* improves tests following review comments
* iter
* iter
* validate half float values
* test upper bound for numeric mapper
* test for upper bound for float, double and half_float
* more tests on NaN and Infinity for NumberFieldMapper
* fix checkstyle errors
* minor renaming
* comments for disabled test
* tests for byte/short/integer/long removed and will be added in separate PR
* remove unused import
* Fix scaledfloat out of range validation message
* 1) delayed autoboxing in numbertype.parse(...)
2) no redundant checks in half_float validation
3) tests with negative values for half_float/float/double
* Add support for auto_generate_synonyms_phrase_query in match_query, multi_match_query, query_string and simple_query_string
This change adds a new parameter called auto_generate_synonyms_phrase_query (defaults to true).
This option can be used in conjunction with synonym_graph token filter to generate phrase queries
when multi terms synonyms are encountered.
For example, a synonym like "ny, new york" would produce the following boolean query when "ny city" is parsed:
((ny OR "new york") AND city)
Note how the multi terms synonym "new york" produces a phrase query.
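A minimal sketch of the option on a match query (assuming the Java API where this parameter lands; field and text are hypothetical):
```java
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class SynonymPhraseExample {
    // With a synonym_graph rule "ny, new york", parsing "ny city" yields
    // ((ny OR "new york") AND city) while this option is enabled.
    static MatchQueryBuilder nyCity() {
        return QueryBuilders.matchQuery("body", "ny city")
                .autoGenerateSynonymsPhraseQuery(true); // defaults to true
    }
}
```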
We introduced a hack in #25885 to respect the cluster alias if available on the `_index` field. This is important if aggregations or other field data related operations are executed. Yet, we added a small hack that duplicated an implementation detail from the `_index` field data builder to make this work. This change adds a necessary but simple API change that allows us to remove the hack and only have a single implementation.
The goal of this similarity is to help users who would like to keep the
functionality of the `tf-idf` similarity that we want to remove, or to allow
for specific use-cases (disabling idf, disabling tf, disabling length norm,
etc.) to not have to build a custom plugin and familiarize with the low-level
Lucene API.
When `refresh=wait_for` is set on an indexing request, we register a listener on the shards that is called during the next refresh. During the recover translog phase, when the engine is open, we have a window of time when indexing operations succeed and they can add their listeners. Those listeners will only be called when the recovery finishes as we do not refresh during recoveries (unless the indexing buffer is full). Next to being a bad user experience, it can also cause deadlocks with an ongoing peer recovery that may wait for those operations to mark the replica in sync (details below).
To fix this, this PR changes refresh listeners to be a noop when the shard is not yet serving reads (implicitly covering the recovery period). It doesn't matter anyway.
Deadlock with recovery:
When finalizing a peer recovery we mark the peer as "in sync". To do so we wait until the peer's local checkpoint is at least as high as the global checkpoint. If an operation with `refresh=wait_for` is added as a listener on that peer during recovery, it is not completed from the perspective of the primary. The primary then may wait for it to complete before advancing the local checkpoint for that peer. Since that peer is not considered in sync, the global checkpoint on the primary can be higher, causing a deadlock. The operation waits for recovery to finish and a refresh to happen. The recovery waits on the operation.
* Allow ingest simulate to parse _id, _index, _type, _routing and _parent as either string or int (#23823)
* Generate data that includes Integer and String type fields for testing document parsing.
https://github.com/elastic/elasticsearch/pull/17379 fixed many metric aggs so that if the parent aggregation does not collect any documents an empty bucket value is returned instead of an ArrayOutOfBoundsException being thrown. Unfortunately the value count aggregation was missed in this fix.
This change applies this fix from #17379 for the value count aggregation.
`ClusterSearchShardsResponseTests.testSerialization` randomly uses `IdsQueryBuilderTests` to generate an alias filter. `IdsQueryBuilderTests` checks if the array of current types is length zero but it can also be null which causes a `NullPointerException`. This change adds a null check to avoid the exception.
Closes #26021
* Adds mutate function to various tests
Relates to #25929
* fix test
* implements mutate function for all single bucket aggs
* review comments
* convert getMutateFunction to mutateInstance
This commit adds the nio transport as an option in place of the mock tcp
transport for tests. Each test will only use one transport type. The
transport type is decided by a random boolean generated inside of the
`ESTestCase` class.
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.
Closes #25893
Closes #25870
We are currently quite lenient about the targets of `copy_to`. However in a
number of cases we can detect illegal use of `copy_to` at mapping update time.
For instance, it does not make sense to use object fields as targets of
`copy_to`, or fields that would end up in a different nested document.
When ES starts up we verify we can write to all data folders and that they support atomic moves. We do so by creating and deleting temp files. If for some reason the file was successfully created but not successfully deleted, we still shut down correctly but subsequent start attempts will fail with a file already exists exception.
This commit makes sure to first clean any existing temporary files.
Supersedes #21007
ToXContentToBytes is used as a base class that adds toString and buildAsBytes method implementation to classes that implement ToXContent. With the ongoing cleanups, this class is limited and doesn't add a lot of value, given that buildAsBytes can be replaced with XContentHelper.toXContent and toString can be replaced with Strings.toString(this).
The plan would be to remove ToXContentToBytes entirely, and AbstractQueryBuilder is the first place where we can remove its usage.
During peer recoveries, we need to copy over lucene files and replay the operations they miss from the source translog. Guaranteeing that translog files are not cleaned up has seen many iterations over time. Back in the old 1.0 days, recoveries went through the Engine and actively prevented both translog cleaning and lucene commits. We then moved to a notion called Translog Views, which allowed the recovery code to "acquire" a view into the translog which is then guaranteed to be kept around until the view is closed. The Engine code was free to commit lucene and do whatever it wanted without coordinating with recoveries. Translog file deletion logic was based on reference counting on the file level. Those counters were incremented when a view was acquired but also when the view was used to create a `Snapshot` that allowed you to read operations from the files. At some point we removed the file based counting complexity in favor of constructs on the Translog level that just keep track of "open" views and the minimum translog generation they refer to. To do so, Views had to be kept around until the last snapshot that was made from them was consumed. This was fine in recovery code but led to [a subtle bug](https://github.com/elastic/elasticsearch/pull/25862) in the [Primary Replica Resyncer](https://github.com/elastic/elasticsearch/pull/25862).
Concurrently, we have developed the notion of a `TranslogDeletionPolicy` which is responsible for the liveness aspect of translog files. This class makes it very simple to take translog snapshots into account for keeping translog files around, allowing people that just need a snapshot to just take a snapshot and not worry about views and such. Recovery code which actually does need a view can now prevent trimming by acquiring a simple retention lock (a `Closeable`). This removes the need for the notion of a View.
The following token filters were moved: delimited_payload_filter, keep, keep_types, classic, apostrophe, decimal_digit, fingerprint, min_hash and scandinavian_folding.
Relates to #23658
This commit adds a bootstrap check for the maximum file size, and
ensures the limit is set correctly when Elasticsearch is installed as a
service on systemd-based systems.
Relates #25974
We have a command-line flag -V or --version that can be used to display
the version of Elasticsearch. However, the version that we display does
not contain whether or not the version is a snapshot build. This commit
changes the behavior here so that if the build is a snapshot, that is
included in the version string.
Relates #25970
Previously we manually checked if mutually exclusive options are passed
on the command line. Yet, after an upgrade to our option parser
dependency, we were able to use built-in functionality to establish
these mutually exclusive options and the parser would take care of
checking if such options are passed on the command line. However, the
previous manually checking code is now dead and was left behind. This
commit removes that dead code.
Relates #19278
The failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commit ensures that the snapshot shard failure reason is preserved properly and adds a workaround for reading old snapshot files where this information might not have been preserved.
Closes #25878
The Writeable representation is less heavy to parse and that will benefit percolate performance and throughput.
The query builder's binary format now has the same bwc guarantees as the xcontent format.
Added a qa test that verifies that percolator queries written in older versions are still readable by the current version.
This change merges the functionality of the FiltersFunctionScoreQuery into the FunctionScoreQuery.
It also ensures that an exception is thrown when the computed score is equal to Float.NaN or Float.NEGATIVE_INFINITY.
These scores are invalid for TopDocsCollectors that rely on score comparison.
Fixes #15709
Fixes #23628
This commit fixes tests for environment-aware commands. A previous
change added a check that es.path.conf is not null. The problem is that
this system property is not being set in tests so this check trips every
single time. To fix this, we move the check into a method that can be
overridden, and then override this method in relevant places in tests to
avoid having to set the property in tests. We also add a test that this
check works as expected.
A previous change enabled it so that users could configure the
configuration path via a command-line option --path.conf. However, a
subsequent change has made it so that we expect users to set the
configuration path via the environment variable CONF_DIR. To enable
this, we now pass the value of CONF_DIR as the value for the
command-line option --path.conf. This has two problems:
- the presence of --path.conf always being on the command line breaks
other flags like --help for multi-commands
- the scripts for which --help is not broken say that you can pass
--path.conf but this is a lie since passing it will make it appear
twice in the command-line arguments breaking the script
Since --path.conf is no longer the way that we want users to set the
configuration path, we should remove the --path.conf option. However, we
still need a way to get the configuration path from the scripts to the
running Java process. To do this, we now pass the configuration path as
a system property. This keeps it off the script command line fixing the
above problems.
The only remaining question (that I can see) is whether or not to
respect -Des.path.conf=<some path> if the user sets this in their
jvm.options or via ES_JAVA_OPTS. I think that we should not do this (as
has been our tradition), es.path.home and es.path.conf are special,
should be set by our scripts only so users should not be setting them at
all so we should not take any effort to respect these flags if the user
tries to otherwise use them.
Relates #25943
With Gradle 4.1 and newer JDK versions, we can finally invoke Gradle directly using a JDK9 JAVA_HOME without requiring a JDK8 to "bootstrap" the build. As the thirdPartyAudit task runs within the JVM that Gradle runs in, it needs to be adapted now to be JDK9 aware.
This commit also changes the `JavaCompile` tasks to only fork if necessary (i.e. when Gradle's JVM and JAVA_HOME's JVM differ).
At the shard level we use an operation permit to coordinate between regular shard operations and special operations that need exclusive access. In ES versions < 6, the operation requiring exclusive access was invoked during primary relocation, but ES versions >= 6 this exclusive access is also used when a replica learns about a new primary or when a replica is promoted to primary.
These special operations requiring exclusive access delay regular operations from running, by adding them to a queue, and after finishing the exclusive access, release these operations which then need to be put back on the original thread-pool they were running on. In the presence of thread pool rejections, the current implementation had two issues:
- it would not properly release the operation permit when hitting a rejection (i.e. when calling ThreadedActionListener.onResponse from IndexShardOperationPermits.acquire).
- it would not invoke the onFailure method of the action listener when the shard was closed, and just log a warning instead (see ThreadedActionListener.onFailure), which would ultimately lead to the replication task never being cleaned up (see #25863).
This commit fixes both issues by introducing a custom threaded action listener that is permit-aware and properly deals with rejections.
Closes #25863
It fixes random score generation to ensure that you will not always get the
same scores on a read-only index by integrating the seed into the score
computation when using doc ids. It also removes `ctx.docBase` from the formula
since it might change over time if deletes are compacted while scores are
supposed to be cacheable per segment.
Extracts ranges from range queries on byte, short, integer, long, half_float, scaled_float, float, double, date and ip fields.
byte, short, integer and date ranges are normalized to Lucene's LongRange.
half_float and float are normalized to Lucene's DoubleRange.
When extracting range queries, the QueryAnalyzer computes the width of the range. This width is used to determine
what range should be preferred in a conjunction query. The QueryAnalyzer prefers the smaller ranges, because these
ranges tend to match fewer documents.
Closes #21040
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or lookup field data in different ways. Yet, we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing etc.
This change also removes the `recycler` option from `ClearIndicesCacheRequest`; this option is a no-op and should have been removed long ago.
Currently, NioTransport does not start normal socket selectors and the
client when the network server setting is set to false. This commit
makes it so that the client will be started even when the network server
is not enabled.
Additionally, it randomly introduces the NioTransport as an option for
the MockTransportClient throughout tests.
This predicate is used to deal with the intricacies of detecting when a master is re-elected or nodes rejoin an existing master. The current implementation is based on nodeIds, which is fine if the master really changes. If the nodeId is equal the code falls back to detecting an increment in the cluster state version which happens when a node is re-elected or when the node rejoins. Sadly this doesn't cover the case where the same node is elected after a full restart of all master nodes. In that case we recover the cluster state from disk but the version is reset back to 0. To fix this, the check should be done based on ephemeral IDs which are reset on restart.
Fixes #25471
These two methods do the same thing. The subtle difference between the two is that the former prints out pretty printed content by default while the latter doesn't. There are way more usages of the latter throughout the codebase hence I kept that variant although I do think that it would be much better to print out prettified content by default from a `toString`. That breaks quite some tests so I didn't make that change yet.
Also XContentHelper#toString was outdated as it didn't check the ToXContent#isFragment method to decide whether a new anonymous object has to be created or not. It would simply fail with any ToXContentObject.
The test only waited for one op to be stuck. On rare occasions the other ops were still in flight when recovery captured a translog snapshot, throwing the doc count off.
The configuration removed from the runtime configuration did not
properly remove the deps jar from gradle versions > 3.3. The rest client
now removes both the 3.3 and 3.3+ configurations so this works on both
versions of gradle.
Closes #25884
Relates #25208
Today when we aggregate on the `_index` field the cross cluster search
alias is not taken into account. Neither is it respected when we search
on the field. This change adds support for cluster alias when the cluster
alias is present on the `_index` field.
Closes #25606
This change makes it so you can index a value like "1.0" or "1.1" into whole
number field types like byte and integer. Without this change then the above
values would have resulted in an error, even with coerce set to true.
Closes #25819
We cannot guarantee that the result of computations will be in the float range,
since it depends on the data and how scores are computed. We already use doubles
as intermediate representations and cast to a float as a final step, which is
the right thing to do. Small doubles will just be rounded to zero, there is not
much we can or should do about it.
Closes #25330
Stored fields were still being accessed for nested inner hits even if the _source was not requested.
This was done to figure out the id of the root document. However this is already known higher up the stack.
So instead this change adds the id to the nested search context, so that it is no longer required to be fetched via the stored fields.
In case the _source is large and no source is requested then hot threads like these ones would still appear:
```
100.3% (501.3ms out of 500ms) cpu usage by thread 'elasticsearch[AfXKKfq][search][T#6]'
2/10 snapshots sharing following 22 elements
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:352)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
and:
```
8/10 snapshots sharing following 27 elements
org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:135)
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:138)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.fillBuffer(CompressingStoredFieldsReader.java:531)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.readBytes(CompressingStoredFieldsReader.java:550)
org.apache.lucene.store.DataInput.readBytes(DataInput.java:87)
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:350)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
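A minimal sketch of the idea, with hypothetical names: the root document id is handed to the nested context when the top-level hit is built, so the fetch phase no longer has to visit stored fields just to recover it.
```java
// Hypothetical sketch: carry the already-known root id instead of re-reading stored fields.
final class NestedHitContext {
    private final String rootId;   // known higher up the stack when the top-level hit is created

    NestedHitContext(String rootId) {
        this.rootId = rootId;
    }

    String rootId() {
        return rootId;             // replaces loadStoredFields(...) on the root document
    }
}
```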
Currently Engine.close can return immediately if the engine is already in the process of shutting down (due to a concurrent close call or an engine failure). This is a shame because some of our testing infra wants to do things like checking the index. This commit changes the logic to make sure that all calls to close wait until resources are freed. Failing the engine is still non-blocking.
Fixes #25817
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.
Closes#25208
The context suggester extracts the context field values from the document but it does not filter doc values fields coming from a Keyword field.
This change filters doc values fields when building the context values.
Fixes #25404
This change handles the case where a SpanNearQueryBuilder tries to create a query with a single clause.
This is not allowed by SpanNearQuery, so instead of throwing an exception when the weight is built, this change builds and returns
the single inner clause from toQuery.
Fixes #25630
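Roughly, the fix amounts to the following (a hedged sketch using plain Lucene types; the helper name and surrounding builder code are simplified):
```java
import java.util.List;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;

final class SpanNearExample {
    // With a single clause, return it directly instead of building a SpanNearQuery,
    // which would otherwise fail when its weight is built.
    static Query buildSpanNear(List<SpanQuery> innerQueries, int slop, boolean inOrder) {
        if (innerQueries.size() == 1) {
            return innerQueries.get(0);
        }
        SpanNearQuery.Builder builder =
                new SpanNearQuery.Builder(innerQueries.get(0).getField(), inOrder);
        for (SpanQuery inner : innerQueries) {
            builder.addClause(inner);
        }
        return builder.setSlop(slop).build();
    }
}
```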
The default _parent field tries to load global ordinals because it is created with eager_global_ordinals=true.
This leads to an IllegalStateException because this field does not have doc_values.
This change explicitly sets eager_global_ordinals to false in order to avoid the ISE on startup.
Fixes #25849
When a replica processes out of order operations, it can drop some due to version comparisons. In the past that would have resulted in a VersionConflictException being thrown and the operation being totally ignored. With the seq# push, we started storing these operations in the translog (but not indexing them into lucene) in order to have complete op histories to facilitate ops based recoveries. This in turn had the undesired effect that deleted docs may be resurrected during recovery in some extreme edge situations (see a complete explanation below). This PR contains a simple fix, which is also an optimization for the recovery process: incoming operations that have a seq# lower than the current local checkpoint (i.e., have already been processed) should not be indexed into lucene. Note that sometimes we can also skip storing them in the translog, but this is not required for the fix and is more complicated.
This is the equivalent of #25592
## More details on resurrected ops
Consider two operations:
- Index d1, seq no 1
- Delete d1, seq no 3
On a replica they come out of order:
- Translog gen 1 contains:
- delete (seqNo 3)
- Translog gen 2 contains:
- index (seqNo 1) (wasn't indexed into lucene, but put into the translog)
- another operation (seqNo 10)
- Translog gen 3
- another op (seqNo 9)
- Engine commits with:
- local checkpoint 9
- refers to gen 2
If this replica becomes a primary:
- Local recovery will replay translog gen 2 and up, causing index #1 to be re-indexed.
- Even if recovery starts at gen 3, the translog retention policy will cause file based recovery to replay the entire translog. If it happens to start at gen 2 (but not 1), we will run into the same problem.
#### Some context - out of order delivery involving deletes:
On normal operations, this relies on the gc_deletes setting. We assume that the setting represents an upper bound on the time between the index and the delete operation. The index operation will be detected as stale based on the tombstone map in the LiveVersionMap.
Recovery presents a challenge as it can replay an old index operation that was in the translog and override a delete operation that was done when the engine was opened (and is not part of the replayed snapshot). To deal with this situation, we disable GC deletes (i.e. retain all deletes) for the duration of recoveries. This means that the delete operation will be remembered and the index operation ignored.
Both of the above scenarios (local recovery + peer recovery) create a situation where the delete operation is never replayed. It is thus "lost": lucene doesn't remember it happened, and neither does our LiveVersionMap.
#### Solution:
Note that both local and peer recovery represent a scenario where we replay translog ops on top of an existing lucene index, potentially with ongoing indexing. Therefore we can treat them the same.
The local checkpoint in Lucene represents a marker indicating that all operations below it were performed on the index. This is the only form of "memory" that we have that relates to deletes. If we can achieve the following:
1) All ops below the local checkpoint are not indexed to lucene.
2) All ops above the local checkpoint are
It will mean that all variants are covered: (i# == index op seq#, d# == delete op seq#, lc == local checkpoint in commit)
1) i# < d# <= lc - document is already deleted in lucene and stays that way.
2) i# <= lc < d# - delete is replayed on index - document is deleted
3) lc < i# < d# - index is replayed and then delete - document is deleted.
More formally - we want to make sure that for all ops o1 and o2 performed on the primary, if o2 is processed on a shard before o1, o1 will be dropped. We have the following scenarios:
1) If both o1 and o2 are not included in the replayed snapshot and are above it (i.e., have a higher seq#), they fall under the gc deletes assumption.
2) If o1 is part of the replayed snapshot but o2 is above it:
- if o2 arrives first, o1 must arrive due to the recovery and potentially via replication as well. Since gc deletes is disabled we are guaranteed to know of o2's existence.
3) If both o2 and o1 are part of the replayed snapshot:
- we fall under the same scenarios as #2 - disabling GC deletes ensures we know of o2 if it arrives first.
4) If o1 falls before the snapshot and o2 is either part of the snapshot or higher:
- Since the snapshot is guaranteed to contain all ops that are not part of lucene and are above the lc in the commit used, this means that o1 is part of lucene and o1 < local checkpoint. This means it won't be processed and we're not in the scenario we're discussing.
5) If o2 falls before the snapshot but o1 is part of it:
- by the same reasoning above, o2 is < local checkpoint. Since o1 < o2, we also get o1 < local checkpoint and this will be dropped.
#### Implementation:
For local recovery, we can filter the ops we read off the translog and avoid replaying them. For peer recovery this is tricky as we do want to send the operations in order to have some history on the target shard. Filtering operations on the engine level (i.e., not indexing to lucene if op seq# <= lc) would work for both.
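A hedged sketch of the resulting decision, with illustrative names (not the engine's actual types): ops at or below the local checkpoint are only recorded in the translog, everything above it is indexed normally.
```java
// Hypothetical sketch of the plan described above.
final class ReplicaOpPlanner {
    enum Plan { INDEX_INTO_LUCENE_AND_TRANSLOG, TRANSLOG_ONLY }

    static Plan plan(long opSeqNo, long localCheckpoint) {
        if (opSeqNo <= localCheckpoint) {
            return Plan.TRANSLOG_ONLY;   // already applied to lucene; don't resurrect deleted docs
        }
        return Plan.INDEX_INTO_LUCENE_AND_TRANSLOG;
    }
}
```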
This commit changes the way we handle field expansion in `match`, `multi_match` and `query_string` query.
The main changes are:
- For exact field names, the new behavior is to rewrite to a MatchNoDocsQuery when the field name is not found in the mapping.
- For partial field names (with `*` suffix), the expansion is done only on `keyword`, `text`, `date`, `ip` and `number` field types (see the sketch below). Other field types are simply ignored.
- For all fields (`*`), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.
- The `*` notation can also be used to set the `default_field` option on the `query_string` query. This should replace the need for the extra `use_all_fields` option, which is deprecated in this change.
This commit also rewrites a simple `*` query to a MatchAllDocsQuery when all fields are requested (Fixes #25556).
The same change should be done on `simple_query_string` for completeness.
The `use_all_fields` option in `query_string` is also deprecated in this change; `default_field` should be set to `*` instead.
Relates #25551
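For illustration, a rough sketch of the field-type filter applied during expansion (assumed names; the accepted-type list here is illustrative and not exhaustive):
```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Hypothetical sketch: keep only accepted field types and, for the all-fields (`*`) case,
// drop metadata fields such as `_id` or `_type`.
final class FieldExpansionExample {
    private static final Set<String> ALLOWED_TYPES = new HashSet<>(Arrays.asList(
            "keyword", "text", "date", "ip",
            "byte", "short", "integer", "long", "half_float", "float", "double"));

    static Map<String, Float> expand(Map<String, String> fieldTypesByName, boolean allFields) {
        Map<String, Float> expanded = new TreeMap<>();
        for (Map.Entry<String, String> field : fieldTypesByName.entrySet()) {
            if (allFields && field.getKey().startsWith("_")) {
                continue;                              // skip metadata fields
            }
            if (ALLOWED_TYPES.contains(field.getValue())) {
                expanded.put(field.getKey(), 1.0f);    // default boost
            }
        }
        return expanded;
    }
}
```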
Removes the primary term from the replication request and pushes it into the transport envelope. This makes it possible to remove the term from the ReplicationOperation universe. The primary term that is to be used for a replication operation is now determined in the reroute phase when the node decides to execute a primary action (and validated once the primary action gets to execute). This makes it possible to validate that the primary action was sent to the correct primary shard instance that it was meant to be sent to (currently we only validate primary actions using the allocation id, which can be reused for failed and reallocated primaries).
If a primary shard is relocated, and then subsequently closed, there is a short window where ReplicationOperation could access the
closed shard (engine is not shut down yet) and, because it does not know that the shard was relocated, try to update the local
checkpoint, tripping an assertion in GlobalCheckPointTracker that a local checkpoint cannot be updated if it's not in primary mode.
This change rewrites search requests on the coordinating node before
we send requests to the individual shards. This will reduce the rewrite load
and object creation for each rewrite on the executing nodes and will fetch
resources only once instead of N times (once per shard) for queries like the `terms`
query with index lookups (as well as percolator and geo-shape).
Relates to #25791
When we skip a shard we should first increment the skipped and successful shard
counters before we notify the super class about a skipped shard, which could otherwise
send back the result before we increment the stats.
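In sketch form (hypothetical names, not the actual phase class), the ordering fix looks like this:
```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: bump the counters before notifying, so a response that is sent
// from within the notification already reflects the skipped shard.
abstract class SkipAwarePhase {
    final AtomicInteger skippedOps = new AtomicInteger();
    final AtomicInteger successfulOps = new AtomicInteger();

    void onShardSkipped(int shardIndex) {
        skippedOps.incrementAndGet();
        successfulOps.incrementAndGet();
        notifyShardDone(shardIndex);   // may immediately send back the final result
    }

    abstract void notifyShardDone(int shardIndex);
}
```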
This commit adds the min wire/index compat versions to the main action
output. Not only will this make the expected compatibility more
transparent, but it also allows testing which versions others think the
compat versions are, similar to how we test the lucene version.
When a node tries to join a cluster, it goes through a validation step to make sure the node is compatible with the cluster. Currently we validate that the node can read the cluster state and that it is compatible with the indexes of the cluster. This PR adds validation that the joining node's version is compatible with the versions of existing nodes. Concretely we check that (a sketch of these checks follows the list):
1) The node's min compatible version is higher or equal to any node in the cluster (this prevents a too-new node from joining)
2) The node's version is higher or equal to the min compat version of all cluster nodes (this prevents a too old join where, for example, the master is on 5.6, there's another 6.0 node in the cluster and a 5.4 node tries to join).
3) The node's major version is at least as high as that of the lowest node in the cluster. This is important as we use the minimum version in the cluster to stop executing bwc code for operations that require multiple nodes. If the nodes are already operating in "new cluster mode", we should prevent nodes from the previous major from joining (even if they are wire level compatible). This does mean that if you have a very unlucky partition during the upgrade which partitions all old nodes, which are also a minority / data nodes only, they may not be able to re-join the cluster. We feel this edge case risk is well worth the simplification it brings to BWC layers only going one way. This restriction only holds if the cluster state has been recovered (i.e., the cluster has properly formed).
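A hedged sketch of these three checks using the `Version` helpers (method and parameter names are simplified; the actual validation lives in the join validators):
```java
import org.elasticsearch.Version;

final class JoinVersionChecks {
    // Hypothetical sketch: minClusterVersion/maxClusterVersion are the lowest and highest
    // node versions currently in the cluster; stateRecovered reflects whether the cluster
    // state has been recovered.
    static void ensureVersionCompatibility(Version joining, Version minClusterVersion,
                                           Version maxClusterVersion, boolean stateRecovered) {
        // 1) a too-new node must not require a wire format the oldest cluster node can't speak
        if (joining.minimumCompatibilityVersion().after(minClusterVersion)) {
            throw new IllegalStateException("node version [" + joining + "] is too new");
        }
        // 2) a too-old node must still be wire compatible with the newest node in the cluster
        if (joining.before(maxClusterVersion.minimumCompatibilityVersion())) {
            throw new IllegalStateException("node version [" + joining + "] is too old");
        }
        // 3) once the cluster runs purely on the new major, older-major nodes may not join
        if (stateRecovered && joining.major < minClusterVersion.major) {
            throw new IllegalStateException("node major version [" + joining.major
                    + "] is below the cluster's minimum major");
        }
    }
}
```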
Also, the node join validation can now selectively fail specific nodes (previously the entire batch was failed). This is an important preparation for a follow up PR where we plan to have a rejected joining node die with dignity.
Also has updates to ScriptMetaData for allowing the old namespace format to be loaded all the way back through 5.0; however, it will throw an exception if two scripts share the same id but different languages.
This commit removes legacy checks for an unsupported environment
variable and unsupported system properties. This environment variable
and these system properties have not been supported since 1.x so it is
safe to stop checking for the existence of these settings.
Relates #25809
The `QueryRewriteContext` used to provide a client object that can
be used to fetch geo-shapes, terms or documents for percolation. Unfortunately,
all client calls used to be blocking calls, which can have a significant impact on the
rewrite phase since they occupy an entire search thread until the resource is
received. In the case that the index the resource is fetched from isn't on the local
node this can have significant impact on query throughput.
Note: this doesn't fix MLT since it fetches stuff in doQuery which is a different beast. Yet, it is a huge step in the right direction.
This commit calls the `useSystemProperties` method on the HttpAsyncClientBuilder so that the jvm
system properties are used. The primary reason for doing this is to ensure the builder uses the
system default SSLContext rather than the default instance created by the http client library.
Closes#23231
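For reference, enabling the JVM system properties on the Apache async client builder looks roughly like this (standalone example; the rest client wires this into its own builder setup):
```java
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;

final class SystemPropertiesClientExample {
    static CloseableHttpAsyncClient build() {
        // Honour javax.net.ssl.*, http.proxyHost, etc., so the JVM's default SSLContext is
        // used instead of the default instance created by the http client library.
        return HttpAsyncClientBuilder.create()
                .useSystemProperties()
                .build();
    }
}
```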
Today we have duplicated code that is quite complicated in order to iterate
over rewriteables (`QueryBuilders` mainly). This change introduces a
`Rewriteable` interface that allows sharing the rewriting code as
well as encapsulation and composition of queries.
Setting a timeout or enforcing low-level search cancellation used to make us
wrap the collector and check either the current time or whether the search
task was cancelled for every collected document. This can be significant
overhead on cheap queries that match many documents.
This commit changes the approach to wrap the bulk scorer rather than the
collector and exponentially increase the interval between two consecutive
checks in order to reduce the overhead of those checks.
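In sketch form, the wrapper looks roughly like this (close in spirit to, though not necessarily identical with, the actual class): the wrapped scorer is driven in exponentially growing chunks and the cancellation/timeout check runs between chunks.
```java
import java.io.IOException;
import org.apache.lucene.search.BulkScorer;
import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.util.Bits;

// Hedged sketch: check for cancellation between chunks of documents whose size doubles
// up to a maximum, rather than once per collected document.
final class CancellableBulkScorerSketch extends BulkScorer {
    private static final int INITIAL_INTERVAL = 1 << 12;
    private static final int MAX_INTERVAL = 1 << 22;

    private final BulkScorer scorer;
    private final Runnable checkCancelled;   // throws if the task was cancelled or timed out

    CancellableBulkScorerSketch(BulkScorer scorer, Runnable checkCancelled) {
        this.scorer = scorer;
        this.checkCancelled = checkCancelled;
    }

    @Override
    public int score(LeafCollector collector, Bits acceptDocs, int min, int max) throws IOException {
        int interval = INITIAL_INTERVAL;
        while (min < max) {
            checkCancelled.run();
            int newMax = (int) Math.min((long) min + interval, max);
            min = scorer.score(collector, acceptDocs, min, newMax);
            interval = Math.min(interval << 1, MAX_INTERVAL);
        }
        checkCancelled.run();
        return min;
    }

    @Override
    public long cost() {
        return scorer.cost();
    }
}
```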
We currently use fielddata on the `_id` field which is trappy, especially as we
do it implicitly. This changes the `random_score` function to use doc ids when
no seed is provided and to suggest a field when a seed is provided.
For now the change only emits a deprecation warning when no field is supplied
but this should be replaced by a strict check on 7.0.
Closes#25240
Today we provide a lot of functionality on the `QueryRewriteContext` that
we potentially don't have, i.e. if we rewrite on a coordinating node or when
we are percolating. This change moves most of the unnecessary shard level or
index level services and dependencies to `QueryShardContext` instead.
If a request contains an invalid error trace parameter, we send an error
on the channel. This should immediately abort any additional processing
of the request, but instead we march on, dispatch the request and
subsequently send another message on the channel. The problem here is
that this means two writes on the channel, which leads to the request being
released twice, ultimately raising an illegal reference count
exception. This commit addresses this by performing an early return in
the case that the request contained an invalid error trace parameter.
Relates #25785
This commit removes a timed latch await in a transport client listeners
test. The problem with a timed wait here is that on an overloaded
machine, the test can fail because the waiting thread was not unlatched
quickly enough. This makes the test unnecessarily flaky. Instead, we
should wait indefinitely and simply let the test fail by the test
timeout if the latch is not counted down for some reason.
Closes#25760
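For illustration, the change in the test boils down to the following (hypothetical snippet):
```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

final class LatchWaitExample {
    // Before: a timed wait can fail spuriously when an overloaded machine is slow to unlatch.
    static boolean timedWait(CountDownLatch latch) throws InterruptedException {
        return latch.await(10L, TimeUnit.SECONDS);   // may return false under load -> flaky test
    }

    // After: wait indefinitely and rely on the suite-level test timeout to catch a real hang.
    static void indefiniteWait(CountDownLatch latch) throws InterruptedException {
        latch.await();
    }
}
```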
Currently we ignore unknown field names when parsing RangeAggregator.Range and
GeoDistanceAggregationBuilder.Range from `range`, `date_range` or `geo_distance`
aggregations. This can hide subtle errors in the query. This change makes parsing `ranges`
stricter.
This is an appealing assertion, but there are scenarios where it can happen under normal operations. For example, when an index is created it may run into an exception when the lucene files have already been created. The master will try to assign the shard to another node (it's empty, so no need to look for data) but if there is no other node, it will reassign it to the same node. At that point the deletion will get a list of existing commits (which it will typically delete).
With #23997 and #25268 we have changed put alias, delete alias, update aliases and delete index to not accept aliases. Instead concrete indices should be provided as their index parameter.
This commit improves the error message in case aliases are provided, from an IndexNotFoundException (404 status code) with "no such index" message, to an IllegalArgumentException (400 status code) with "The provided expression [alias] matches an alias, specify the corresponding concrete indices instead." message.
Note that there is no specific error message for the case where wildcard expressions match one or more aliases. In fact, aliases are simply ignored when expanding wildcards for such APIs. An error is thrown only when the expression ends up matching no indices at all, and allow_no_indices is set to false. In that case the error is still the generic "404 - no such index".
403 can be confused with security. If an API doesn't support working against closed indices and closed indices are referred to in a request, that is a bad request, hence 400 is more appropriate.
The test checks if a file based or ops based recovery happened, but if the replica shard never finished recovering, the expectations are not met.
Fixes #25761
Currently the `to` and `from` parameters in the `date_range` aggregation are not
parsed with the correct date field format from the mappings or the aggregation
if the argument is numeric, but are always treated as a long value specifying
`epoch_millis`. This leads to problems e.g. when the format is `epoch_second`,
but the `to` and `from` are currently treated as millis.
With this change, we interpret these parameters according to the `format` of the target field.
If the `format` in the mappings is not compatible with numeric input values,
a compatible `format` (e.g. `epoch_millis`, `epoch_second`) must be specified in
the `date_range` aggregation itself, otherwise an error is thrown.
Closes #17920
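To illustrate why the numeric interpretation matters (plain java.time, not the aggregation code): the same numeric value maps to very different instants depending on whether it is read as seconds or milliseconds.
```java
import java.time.Instant;

final class EpochExample {
    public static void main(String[] args) {
        long value = 1502000000L;                         // a numeric `from`/`to` value
        Instant asSeconds = Instant.ofEpochSecond(value); // 2017-08-06 when read as `epoch_second`
        Instant asMillis = Instant.ofEpochMilli(value);   // 1970-01-18 if forced to `epoch_millis`
        System.out.println(asSeconds + " vs " + asMillis);
    }
}
```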