This commit removes path.conf as a valid setting and replaces it with a
command-line flag for specifying a non-default path for configuration.
Relates #25392
This commit removes an abstraction that was introduced when introducing
the primary context. As this abstraction is used in exactly one place,
we simply make that abstraction local to its usage so that we do not
accumulate yet another general abstraction with exactly one usage.
Relates #25402
This commit updates some assertions in the primary context sealing test
after the restrictions on updating allocation IDs from the master and on
updating the global checkpoint on a replica while sealed were removed.
* Introduce primary context
The target of a primary relocation is not aware of the state of the
replication group. In particular, it is not tracking in-sync and
initializing shards and their checkpoints. This means that after the
target shard is started, its knowledge of the replication group could
differ from that of the relocation source. In particular, this differing
view can lead to it computing a global checkpoint that moves backwards
after it becomes aware of the state of the entire replication
group. This commit addresses this issue by transferring a primary
context during relocation handoff.
* Fix test
* Add assertion messages
* Javadocs
* Barrier between marking a shard in sync and relocating
* Fix misplaced call
* Paranoia
* Better latch countdown
* Catch any exception
* Fix comment
* Fix wait for cluster state relocation test
* Update knowledge via update local checkpoint API
* toString
* Visibility
* Refactor permit
* Push down
* Imports
* Docs
* Fix compilation
* Remove assertion
* Fix compilation
* Remove context wrapper
* Move PrimaryContext to new package
* Piping for cluster state version
This commit adds piping for the cluster state version to the global
checkpoint tracker. We do not use it yet.
* Remove unused import
* Implement versioning in tracker
* Fix test
* Unneeded public
* Imports
* Promote on our own
* Add tests
* Import
* Newline
* Update comment
* Serialization
* Assertion message
* Update stale comment
* Remove newline
* Less verbose
* Remove redundant assertion
* Tracking -> in-sync
* Assertions
* Just say no
Friends do not let friends block the cluster state update thread on
network operations.
* Extra newline
* Add allocation ID to assertion
* Rename method
* Another rename
* Introduce sealing
* Sealing tests
* One more assertion
* Fix imports
* Safer sealing
* Remove check
* Remove another sealed check
The following token filters were moved: stemmer, stemmer_override, kstem, dictionary_decompounder, hyphenation_decompounder, reverse, elision and truncate.
Relates to #23658
While real secure settings (i.e. an ES keystore) cannot be merged
together, mocked secure settings can, and sometimes need to be, merged.
This commit adds a merge method to allow tests to merge together
multiple instances of secure settings.
This change removes the remaining explicitly specified `index.mapper.single_type`
settings from tests in order to allow the removal of the setting.
This is the already approved part of #25375, broken out to simplify reviews.
When Log4j 2 was introduced, we removed support for the system property
es.logger.prefix. Yet, some code was left behind. This commit removes
that dead code.
Relates #25377
Added unit test coverage for GlobalOrdinalsSignificantTermsAggregator, GlobalOrdinalsSignificantTermsAggregator.WithHash, SignificantLongTermsAggregator and SignificantStringTermsAggregator.
Removed integration test.
Relates #22278
This change cleans up remaining tests to not use `index.mapping.single_type=false`
but instead, where applicable, use a single type or mark the index as created
with a pre-6.x version.
Yet, there is still one leftover in the client tests that needs special attention.
See `org.elasticsearch.client.SearchIT`
Relates to #24961
OldIndexBackwardsCompatibilityIT#testOldClusterStates tested whether global and index metadata could be read from the data directory;
this can also be tested in the full cluster restart qa test that checks the cluster state via the api.
Relates to #24939
`InternalEngineTests.testConcurrentWritesAndCommits` can be very heavy on disks
if threads are slow and the main thread keeps on pulling commit points, holding on
to many segments. This commit adds some quadratic backoff so that we don't pile up
too many commits and indexing threads can make progress. It also no longer does
busy waiting but instead waits on a latch with a timeout.
Closes #25110
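For illustration, a minimal sketch of that backoff scheme (the helper name, starting wait and cap are made up, not the actual test code):
```
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class BackoffSketch {
    // Wait on a latch with a quadratically growing timeout instead of busy
    // waiting, so indexing threads get a chance to make progress between
    // commit pulls.
    static void awaitWithQuadraticBackoff(CountDownLatch latch) throws InterruptedException {
        long waitMillis = 10;
        while (latch.await(waitMillis, TimeUnit.MILLISECONDS) == false) {
            waitMillis = Math.min(waitMillis * waitMillis, 10_000);
        }
    }
}
```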
In #24379 we added the ability to upgrade templates on full cluster startup. This PR invokes the same update procedure also when a new node first joins the cluster, allowing templates to be updated on a rolling cluster restart as well.
Closes #24680
When shrinking an index we initialize its max unsafe auto ID timestamp
to the maximum of the max unsafe auto ID timestamps on the source
shards.
Relates #25356
#25147 added the translog deletion policy but didn't enable it by default. This PR enables a default retention of 512MB (the same maximum size as the current translog) and an age of 12 hours (i.e., after 12 hours all translog files will be deleted). This increases the chance of an ops based recovery, even if the primary flushed or the replica was offline for a few hours.
In order to see which parts of the translog are committed into lucene, the translog stats are extended to include information about uncommitted operations.
Views now include all translog ops and guarantee, as before, that those will not go away. Snapshotting a view allows filtering out generations that are not relevant based on a specific sequence number.
Relates to #10708
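A sketch of the retention rule this enables; class and method names are illustrative, not the actual TranslogDeletionPolicy code:
```
import java.util.concurrent.TimeUnit;

class TranslogRetentionSketch {
    static final long RETENTION_SIZE_BYTES = 512L * 1024 * 1024;          // 512MB default
    static final long RETENTION_AGE_MILLIS = TimeUnit.HOURS.toMillis(12); // 12h default

    // A translog generation is kept for a possible ops based recovery as long
    // as retaining it stays within both the size and the age budget.
    static boolean mustKeep(long bytesRetainedIncludingThis, long generationAgeMillis) {
        return bytesRetainedIncludingThis <= RETENTION_SIZE_BYTES
                && generationAgeMillis <= RETENTION_AGE_MILLIS;
    }
}
```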
This change cleans up core tests to not use `index.mapping.single_type=false`
but instead, where applicable, use a single type or mark the index as created
with a pre-6.x version.
Relates to #24961
Due to limitations with CreateProcessW on Windows (ultimately used by
ProcessBuilder) with respect to maximum path lengths, we need to get the
short path name for any native controllers before trying to start them
in case the absolute path exceeds the maximum path length. This commit
uses JNA to invoke the necessary Windows API for this to start the
native controller using the short path.
To be precise about the limitation here, the MSDN docs for
CreateProcessW say for the command line parameter:
>The command line to be executed. The maximum length of this string is
>32,768 characters, including the Unicode terminating null character. If
>lpApplicationName is NULL, the module name portion of lpCommandLine is
>limited to MAX_PATH characters.
This is exactly how the Windows implementation of Process in the JDK
invokes CreateProcessW: with the executable name (lpApplicationName) set
to NULL.
Relates #25344
Most notable changes:
- better update concurrency: LUCENE-7868
- TopDocs.totalHits is now a long: LUCENE-7872
- QueryBuilder does not remove the boolean query around multi-term synonyms:
LUCENE-7878
- removal of Fields: LUCENE-7500
For the `TopDocs.totalHits` change, this PR relies on the fact that the encodings
of vInts and vLongs are compatible: you can write and read with either as
long as the value can be represented by a positive int.
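To see why the two encodings interoperate, here is a self-contained sketch of the 7-bits-per-byte scheme both use (simplified, not the actual StreamOutput code):
```
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

class VarEncodingSketch {
    // Both vInt and vLong emit the low 7 bits per byte with a continuation
    // bit, so a non-negative value that fits in an int encodes to identical
    // bytes either way.
    static byte[] writeVInt(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
        return out.toByteArray();
    }

    static byte[] writeVLong(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        int totalHits = 123_456;
        // a reader calling readVLong can therefore consume what an older
        // writer produced with writeVInt
        System.out.println(Arrays.equals(writeVInt(totalHits), writeVLong(totalHits))); // true
    }
}
```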
Bringing together shards in a shrunken index means that we need to
address the start of history for the shrunken index. The problem here is
that sequence numbers before the maximum of the maximum sequence numbers
on the source shards can collide in the target shards in the shrunken
index. To address this, we set the maximum sequence number and the local
checkpoint on the target shards to this maximum of the maximum sequence
numbers. This enables correct document-level semantics for documents
indexed before the shrink, and history on the shrunken index will
effectively start from here.
Relates #25321
Ports all of RepositoryUpgradabilityIT to qa:full-cluster-restart and ports as much of RestoreBackwardsCompatIT as possible into qa:full-cluster-restart.
This setting is supposed to ease index upgrades: it allows checking a new
setting called `index.internal.version` before upgrading indices.
If secure settings are closed after the node has been constructed
no key-store access is permitted. We should also try to be as close as possible
to the real behavior if we mock secure settings. This change also adds
the same behavior as bootstrap has to InternalTestCluster to ensure we fail
if we try to read from secure settings after the node has been constructed.
Today when an index is shrunk, the primary terms for its shards start
from one. Yet, this is a problem as the index will already contain
assigned sequence numbers across primary terms. To ensure document-level
sequence number semantics, the primary terms of the target shards must
start from the maximum of all the shards in the source index. This
commit causes this to be the case.
Relates #25307
* [Analysis] Parse synonyms with the same analysis chain
Synonym Token Filter / Synonym Graph Filter tokenize synonyms with whatever tokenizer and token filters appear before it in the chain.
Close #7199
I'm still trying to hunt down rare failures in the cancelation tests
for reindex and friends. Here is the latest:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+5.x+multijob-unix-compatibility/os=ubuntu/876/console
It doesn't show much, other than that one of the tasks didn't kill
itself when asked to cancel.
So I'm going a bit crazy with debug logging so that the next time this
comes up I can trace exactly what happened.
Additionally, this tweaks the logic around how rethrottles were
performed around cancel. Previously we set the `requestsPerSecond`
to `0` when we cancelled the task. That was the "old way" to set them
to infinity, which was the intent. This switches that from `0` to
`Float.MAX_VALUE` which is the "new way" to set the `requestsPerSecond`
to infinity. I don't know that this is much better, but it feels better.
This commit fixes a typo in the KeyStoreCli class. The add-file command was incorrectly set to use
the AddStringKeyStoreCommand instead of the AddFileKeyStoreCommand.
Indexing or deleting documents through the IndexShard interface is quite complex and error-prone. It requires multiple calls, e.g. first prepareIndexOnPrimary, then checking if mapping updates have occurred, then doing the actual indexing using index(...), etc. Currently each consumer of the interface (local recovery, peer recovery, replication) has additional custom checks built around it to deal with mapping updates, some of which are even inconsistent. This commit aims at reducing the complexity by exposing a simpler interface on IndexShard. There are no more prepare*** methods and the mapping complexity is also hidden, but callers still have the possibility to implement custom logic to deal with mapping updates.
This commit changes the parsing logic of DocWriteResponse, ReplicationResponse
and GetResult so that it skips any unknown additional fields (for forward compatibility
reasons). This affects the IndexResponse, UpdateResponse, DeleteResponse and
GetResponse objects.
Today we maintain a map of open connections in order to close them when
a low level channel gets closed or handles a failure. We also spawn a thread due to some
tricky concurrency issues, especially with respect to netty, since the listener might
be called on a transport / boss thread. Executions on those threads must not be blocking
since otherwise we will likely deadlock the event processing which adds to the
complexity of the concurrency model in this class.
This change associates the connection with the close callback that every channel invokes
once it's closed which allows us to remove the connections map. A relaxed non-blocking
concurrency model in the connection close listener allows cleaning up connected nodes without
blocking on any lock.
This test is failing because delete /{index} requests no longer support
an index matching an alias. This commit removes testing such requests against
aliases.
Closes #25284
This change adds tests for the aggregation parsing that try to simulate that we
can parse existing aggregations in a forward compatible way in the future,
ignoring potential newly added fields or substructures to the xContent response.
Today TcpTransport is the de-facto base-class for transport implementations.
The callbacks we have in TransportServiceAdaptor are not necessary
anymore since we can simply have the logic inside the base class itself. This change
moves the stats metrics directly into TcpTransport removing the need for low level
bytes send / received callbacks.
Moves the keyword tokenizer to the analysis-common module. The keyword tokenizer is special because it is used by CustomNormalizerProvider so I pulled it out into its own PR. To get the move to work I've reworked the lookup from static to one using the AnalysisRegistry. This seems safe enough.
Part of #23658.
With #23997 we have introduced a new internal index option that allows to resolve index expressions only against concrete indices while ignoring aliases. Such index option was applied to IndicesAliasesRequest, so that the index part of alias actions would only be resolved against concrete indices.
Same is done in this commit with the delete index request. Deleting aliases has always been confusing, as some users expect it to only remove the alias from the index (which has its own specific API). Even worse, in the case of filtered aliases, deleting an alias may leave users with the expectation that only the documents that match the filter are deleted, which was never the case. To address all this confusion, the delete index api now works only against concrete indices. Wildcard expressions will only be resolved against concrete indices, as if aliases didn't exist. If one tries to delete against an alias, an IndexNotFoundException will be thrown regardless of whether the alias exists or not, as a concrete index with such a name doesn't exist.
Closes #2318
This PR extends the TranslogDeletionPolicy to allow keeping the translog files longer than what is needed for recovery from lucene. Specifically, we allow specifying the total size of the files and their maximum age (i.e., keep up to 512MB but no longer than 12 hours). This will allow making ops based recoveries more common.
Note that the default size and age are still set to 0, maintaining current behavior. This is needed as the other components in the system are not yet ready for a longer translog retention. I will adapt those in follow up PRs.
Relates to #10708
This commit does two things:
1. Adds logging at the DEBUG level for when the index-N blob is
updated.
2. When attempting to delete a snapshot, if the snapshot was not found
in the repository data, an exception is now thrown instead of silently
ignoring the lack of presence of the snapshot in the repository data.
We use assertBusy in many places where the underlying code throws exceptions. Currently we need to wrap those exceptions in a RuntimeException, which is ugly.
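A sketch of the shape this takes (timings and names are illustrative; the actual test-framework signature may differ):
```
class AssertBusySketch {
    @FunctionalInterface
    interface CheckedRunnable<E extends Exception> {
        void run() throws E;
    }

    // Retry the assertion block until it passes or the timeout elapses,
    // letting checked exceptions propagate instead of forcing callers to
    // wrap them in RuntimeException.
    static void assertBusy(CheckedRunnable<Exception> codeBlock) throws Exception {
        long deadline = System.currentTimeMillis() + 10_000;
        long sleepMillis = 1;
        AssertionError lastError;
        do {
            try {
                codeBlock.run();
                return;
            } catch (AssertionError e) {
                lastError = e;
            }
            Thread.sleep(sleepMillis);
            sleepMillis = Math.min(sleepMillis * 2, 500);
        } while (System.currentTimeMillis() < deadline);
        throw lastError;
    }
}
```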
This commit adds a NamedXContentProvider interface that can
be implemented by plugins or modules using Java's SPI feature
in order to provide additional NamedXContent parsers to external
applications like the Java High Level Rest Client.
At index time Elasticsearch needs to look up the version associated with the
`_id` of the document that is being indexed, which is often the bottleneck for
indexing.
While reviewing the output of the `jfr` telemetry from a Rally benchmark, I saw
that significant time was spent in `ConcurrentHashMap#get` and `ThreadLocal#get`.
The reason is that we cache lookup objects per thread and segment, and for every
indexed document, we first need to look up the cache associated with this
segment (`ConcurrentHashMap#get`) and then get a state that is local to the
current thread (`ThreadLocal#get`). So if you are indexing N documents per
second and have S segments, both these methods will be called N*S times per
second.
This commit changes version lookup to use a cache per index reader rather than
per segment. While this makes cache entries live for less long, we now only need
to do one call to `ConcurrentHashMap#get` and `ThreadLocal#get` per indexed
document.
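A rough sketch of the new caching shape (the key and state types are illustrative, not the actual Elasticsearch classes):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PerReaderLookupSketch {
    static final class LookupState { /* per-thread, per-reader lookup scratch state */ }

    // One entry per top-level reader instead of one per segment, keyed by an
    // object identifying the reader (e.g. its cache key).
    private static final Map<Object, ThreadLocal<LookupState>> CACHE = new ConcurrentHashMap<>();

    static LookupState lookupState(Object readerKey) {
        // exactly one map lookup and one ThreadLocal#get per indexed document,
        // regardless of the number of segments
        return CACHE.computeIfAbsent(readerKey, k -> ThreadLocal.withInitial(LookupState::new)).get();
    }
}
```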
This snapshot has faster range queries on range fields (LUCENE-7828), more
accurate norms (LUCENE-7730) and the ability to use fake term frequencies
(LUCENE-7854).
This commit renames the needsScores method so as to make it
automatically generatable, based on the name of the `_score` variable
which is available in search scripts. It also adds documentation to
ScriptContext to explain the naming and signature of such methods.
This commit removes the global caching of the field query and replaces it with
a caching per field. Each field can use a different `highlight_query` and the rewriting of
some queries (prefix, automaton, ...) depends on the targeted field so the query used for highlighting
must be unique per field.
There might be a small performance penalty when highlighting multiple fields since the query needs to be rewritten
once per highlighted field with this change.
Fixes #25171
* Remove QUERY_AND_FETCH BWC for pre-5.3.0 nodes
This was a BWC layer where we explicitly set the `search_type` to
"query_and_fetch" when a single node is queried on pre-5.3 nodes. Since 6.0 no
longer needs to be compatible with 5.3 nodes, this can be removed.
* Fix indentation
* Remove unused QUERY_FETCH_ACTION_NAME constant
* Add more missing AggregationBuilder getters
- getMetadata for all aggs
- various getters on TermsAggBuilder (without "get" prefix to maintain convention)
- Also makes InternalSum's ctor public, to follow suit of other metrics (min/max/avg/etc)
We introduced a new API for ranges in order to be able to decide whether points
or doc values would be more appropriate to execute a query, but since
`ProfileWeight` does not implement this API, the optimization is disabled when
profiling is enabled.
In order to add scroll support for cross cluster search we need
to resolve the nodes encoded in the scroll ID to send requests to the
corresponding nodes. This change adds the low level connection infrastructure
that also ensures that connections are re-established if the cluster is
disconnected due to a network failure or restarts.
Relates to #25094
Today if a channel gets closed due to a disconnect we notify the response
handler that the connection is closed and the node is disconnected. Unfortunately
this is not a complete solution since it only works for published connections.
Connections that are unpublished, i.e. for discovery, can hang indefinitely since we
never invoke their handlers when we get a failure while a user is waiting for
the response. This change adds connection tracking to TcpTransport that ensures
we are notifying the corresponding connection if there is a failure on a channel.
This modifies a method Mark added to the AggregatorBase that allows aggregations
to add additional memory tracking for datastructures used during execution. If
an aggregation would like to reclaim circuit breaker reserved bytes by adding a
negative number, `addWithoutBreaking` should be used instead of
`addEstimateBytesAndMaybeBreak`.
Resolves #24511
Duplicate data paths already fail to work because we would attempt to
take out a node lock on the directory a second time which will fail
after the first lock attempt succeeds. However, how this failure
manifests is not apparent at all and is quite difficult to
debug. Instead, we should explicitly reject duplicate data paths to make
the failure cause more obvious.
Relates #25178
When attempting to obtain the node lock, if an exception is thrown it is
not logged. This makes debugging difficult. This commit causes such an
exception to be logged.
Relates #25176
* Aggregations bug: Significant_text fails on arrays of text.
The set of previously-seen tokens in a doc was allocated per JSON-field string value rather than once per JSON document, meaning the number of docs containing a term could be over-counted, leading to exceptions from the checks in significance heuristics. Added a unit test for this scenario.
Closes #25029
Sorted scroll search can use early termination when the index sort matches the scroll search sort.
The optimization can be done after the first query (which still needs to collect all documents)
by applying a query that only matches documents that are greater than the last doc retrieved in the previous request.
Since the index is sorted, retrieving the list of documents that are greater than the last doc
only requires a binary search on each segment.
This change introduces this new query, called `SortedSearchAfterDocQuery`, and applies it when possible.
Scrolls with this optimization will search all documents on the first request and then will early terminate each segment
after $size docs for any subsequent requests.
Relates #6720
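The binary search at the heart of this is simple; a sketch over one segment's sort values (illustrative, not the actual query implementation; the real query also breaks ties on the sort value by doc id):
```
class SearchAfterSketch {
    // On an index-sorted segment, the first document strictly greater than
    // the last hit of the previous page is found by binary search; everything
    // before it is skipped without being scored.
    static int firstDocAfter(long[] sortedSegmentValues, long lastValueOfPreviousPage) {
        int lo = 0, hi = sortedSegmentValues.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (sortedSegmentValues[mid] <= lastValueOfPreviousPage) {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }
        return lo; // collection starts here; docs [0, lo) are early terminated
    }
}
```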
#25005 changed the translog dynamic to fsync the checkpoint before trimming a file. This changed the dynamics of potential failure modes which requires a change to testWithRandomException - it's now possible that we had an exception but the translog was trimmed.
Closes #25133
Get mappings HEAD requests incorrectly return a content-length header of
0. This commit addresses this by removing the special handling for get
mappings HEAD requests, and just relying on the general mechanism that
exists for handling HEAD requests in the REST layer.
Relates #23192
Today when an exception is thrown handling a HEAD request, the body is
swallowed before the channel has a chance to see it. Yet, the channel is
where we compute the content length that would be returned as a header
in the response. This is a violation of the HTTP specification. This
commit addresses the issue. To address this issue, we remove the special
handling in bytes rest response for HEAD requests when an exception is
thrown. Instead, we let the upstream channel handle the special case, as
we already do today for the non-exceptional case.
Relates #25172
We have a custom logger implementation known as a prefix logger that is
used to write every message by the logger with a given prefix. This is
useful for node-level, index-level, and shard-level messages where we
want to log the node name, index name, and shard ID, respectively, if
possible. The mechanism that we employ is that of a marker. Log4j has a
built-in facility for managing these markers, but it's effectively a
memory leak because these markers are held in a map and can never be
released. This is problematic for us since indices and shards do not
necessarily have infinite life spans, and so on a node where many
indices are being created and destroyed, this infinite lifespan can be a
problem indeed. To solve this, we use our own cache of markers. This is
necessary to prevent too many instances of the marker for the same
prefix from being created (just think of all the shard-level components
that exist in the system), and to work around the effective leak in
Log4j. These markers are stored as weak references in a weak hash
map. It is these weak references that are unneeded. When a key is
removed from a weak hash map, the corresponding entry is placed on a
reference queue that is eventually cleared. This commit simplifies
prefix logger by removing this unnecessary weak reference wrapper.
Relates #22460
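A simplified sketch of the resulting cache (hedged: the real PrefixLogger also has to be careful about which string instance keys the weak map, and constructs the marker directly to bypass Log4j's permanent MarkerManager cache):
```
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

class PrefixMarkerCacheSketch {
    // Weak keys let entries for dead indices/shards be collected; values are
    // stored directly, with no WeakReference wrapper around each marker.
    private static final Map<String, Marker> MARKERS =
            Collections.synchronizedMap(new WeakHashMap<>());

    static Marker markerFor(String prefix) {
        // constructing the marker directly avoids MarkerManager.getMarker,
        // which would cache it forever in Log4j's own map
        return MARKERS.computeIfAbsent(prefix, MarkerManager.Log4jMarker::new);
    }
}
```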
When the cluster state is updated with Shard Started entries, it simply adds "shard-started" as the source of the change.
This adds the index name and shard ID so that we can see who/what is spamming the changes when the index creation step has already left the cluster state.
There are a few places where arrays are output in messages yet the
output would merely use the default toString implementation rather than
actually putting the content of the array in the message. This commit
fixes the issue.
Relates #24340
This change extends the tests and parsing of SearchResponse to make sure we can
skip additional fields the parser doesn't know for forward compatibility
reasons.
This commit adds back "id" as the key within a script to specify a
stored script (which with file scripts now gone is no longer ambiguous).
It also adds "source" as a replacement for "code". This is in an attempt
to normalize how scripts are specified across both put stored scripts and script usages, including search template requests. This also deprecates the old inline/stored keys.
When `index.mapping.single_type` is `true` the `_uid` field is not used and instead `_id` field is used.
Prior to this change nested documents would in this case still use the `_uid` field to mark what root
document they belong to. In case of deleting documents this could lead to only the root Lucene document
being deleted and not the nested Lucene documents. This broke the doc id block ordering the block join
relies on in order to work correctly, and thus caused the `nested` query, `nested` aggregation, nested sorting
and nested inner hits to either fail or yield incorrect results.
This bug only manifests in the 6.0.0-ALPHA2 release and snapshots (5.5.0-SNAPSHOT, 5.6.0-SNAPSHOT, 6.0.0-SNAPSHOT).
This was introduced in #24460: the constructor of `Translog.Delete` that takes
a `StreamInput` does not set the type and id. To make it a bit more robust, I
made fields final so that forgetting to set them would make the compiler
complain.
This change removes the `postings` highlighter. This highlighter has been removed from Lucene master (7.x) because it behaves
exactly like the `unified` highlighter when index_options is set to `offsets`:
https://issues.apache.org/jira/browse/LUCENE-7815
It also makes the `unified` highlighter the default choice for highlighting a field (if `type` is not provided).
The strategy used internally by this highlighter remains the same as before: it checks `term_vectors` first, then `postings` and ultimately it re-analyzes the text.
This change also rewrites the docs so that the options that the `unified` highlighter cannot handle are clearly marked as such.
There are a few features that the `unified` highlighter is not able to handle, which is why the other highlighters (`plain` and `fvh`) are still available.
I'll open separate issues for these features and we'll deprecate the `fvh` and `plain` highlighters when full support for these features has been added to the `unified`.
This change extends the tests and parsing of SearchShardFailure to make sure we
can skip fields the parser doesn't know for forward compatibility reasons.
When parsing responses we should be ignoring any new unknown fields or inner
objects in most cases to be forward compatible with changes in core on the
client side. This change adds test for this for Suggestions and its various
subclasses to check if we are able to ignore new fields and objects in the
xContent.
When we disabled `_all` by default for indices created in 6.0, we missed adding
a layer that would handle the situation where `_all` was not enabled in 5.x and
then the cluster was updated to 6.0, this means that when the cluster was
updated the `_all` field would be disabled for 5.x indices and field values
would not be added to the `_all` field.
This adds a compatibility layer for 5.x indices where we treat the default
enabled value for the `_all` field to be `true` if unset on 5.x indices.
Resolves #25068
The Log4j dependency is separated into two artifacts, the API and the
core implementation. This is to enable replacing Log4j on the backend
through the SLF4J bridge with another logging implementation. For this
reason, the dependencies are marked as optional. This causes confusion
amongst users because, to use the bridge, the API should be non-optional since
it is needed for the bridge to function correctly. While they could pull
it into their application directly, it would be clearer if we simply
marked this dependency as non-optional. Note that this does not mean
that users have to use Log4j for logging in their application, so we are
not marking core as required, it only clarifies what they need to be
able to plug in a different logging implementation.
Relates #25136
Previously this would output:
```
GET /test-1/_mappings
{ }
```
And after this change:
```
GET /test-1/_mappings
{
  "test-1": {
    "mappings": {}
  }
}
```
To bring parity back to the REST output after #24723.
Relates to #25090
Previously in #24723 we changed the `_alias` API to not go through the
`RestGetIndicesAction` endpoint, instead creating a `RestGetAliasesAction` that
did the same thing.
This changes the formatting so that it matches the old formatting of the
endpoint, before:
```
GET /test-1/_alias
{ }
```
And after this change:
```
GET /test-1/_alias
{
  "test-1": {
    "aliases": {}
  }
}
```
This is related to #25090
GeoUtils#isValidLongitude is inconsistent with GeoUtils#isValidLatitude.
Neither technically needs the isInfinite() check because they then compare
against min and max values.
When parsing responses we should be ignoring any new unknown fields or inner
objects in most cases to be forward compatible with changes in core on the
client side. This change adds test for this for QueryProfileShardResult and
nested substructures and changes the parsing code where necessary to be able to
ignore new fields and objects in the xContent.
Test: randomVersionBetween works with unreleased
Modifies randomVersionBetween so that it works with unreleased
versions. This should make switching a version from unreleased
to released much simpler.
Added common base class for ScriptDocValues.Strings and ScriptDocValues.BytesRefs now that these classes are very similar.
Also cleaned up the BinaryDVFieldDataTests:
* Use junit assertions instead of hamcrest
* Use BytesRef directly instead of byte[]
Closes #24785
The FVH fails with an NPE when a match phrase prefix is rewritten into an empty phrase query.
This change makes sure that the multi match query rewrites to a MatchNoDocsQuery (instead of an empty phrase query) when there is
a single term and that term does not expand to any term in the index.
Fixes #25088
This commit refactors the query phase in order to be able
to automatically detect queries that can be early terminated.
If the index sort matches the query sort, the top docs collection is early terminated
on each segment, and the computation of the total number of hits that match the query is delegated to a simple TotalHitCountCollector.
This change also adds a new parameter to the search request called `track_total_hits`.
It indicates if the total number of hits that match the query should be tracked.
If false, queries sorted by the index sort will not try to compute this information
and will limit the collection to the first N documents per segment.
Aggregations are not impacted and will continue to see every document
even when the index sort matches the query sort and `track_total_hits` is false.
Relates #6720
This commit modifies query_string, simple_query_string and multi_match queries to always use a DisjunctionMaxQuery when a disjunction over multiple fields is built. The tiebreaker is set to 1 in order to behave like the boolean query in terms of scoring.
The removal of the coord factor in Lucene 7 made this change mandatory to correctly handle minimum_should_match.
Closes #23966
This change extracts the main logic from `TransportClearScrollAction`
into a new class `ClearScrollController` and adds a corresponding unit test.
Relates to #25094
The `scorerSupplier` API makes it possible to give queries a hint that they will
be consumed in a random-access fashion. We should use this
for aggregations, function_score and matched queries.
When we open a translog, we rely on the `translog.ckp` file to tell us what the maximum generation file should be and on the information stored in the last lucene commit to know the first file we need to recover. This requires coordination and is currently subject to a race condition: if a node dies after a lucene commit is made but before we remove the translog generations that were unneeded by it, the next time we open the translog we will ignore those files and never delete them (I have added tests for this).
This PR changes the approach to have the translog store both of those numbers in the `translog.ckp`. This means it's more self contained and easier to control.
This change also decouples the translog recovery logic from the specific commit we're opening. This prepares the ground to fully utilize the deletion policy introduced in #24950 and store more translog data that's needed for Lucene, keep multiple lucene commits around and be free to recover from any of them.
For the response parsing we want to be lenient when it comes to parsing
new xContent fields. In order to ensure this in our testing, this change
adds a utility method to XContentTestUtils that takes an xContent bytes
representation as input and recursively inserts a random field on each object
level.
Sometimes we also want to exclude a whole subtree from this treatment
(e.g. skipping "_source"); other times an element (e.g. "fields", "highlight"
in SearchHit) can have arbitrarily named objects. Those cases can be
specified as exceptions.
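A sketch of the recursion (names are illustrative of the XContentTestUtils helper, not copies of it):
```
import java.util.Map;
import java.util.Random;
import java.util.Set;

class InsertRandomFieldsSketch {
    // Walk the parsed xContent map and add a random field at every object
    // level, skipping excluded subtrees such as "_source".
    static void insertRandomFields(Map<String, Object> object, String path,
                                   Set<String> excludedPaths, Random random) {
        if (excludedPaths.contains(path)) {
            return;
        }
        object.put("random_field_" + random.nextInt(1000), "random_value");
        for (Map.Entry<String, Object> entry : object.entrySet()) {
            if (entry.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) entry.getValue();
                insertRandomFields(child, path + "/" + entry.getKey(), excludedPaths, random);
            }
        }
    }
}
```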
This commit removes wrapper methods on QueryShardContext used to compile
scripts. Instead, the script service is made accessible in the context,
and calls to compile can be made directly. This will ease the transition to
each of those locations becoming their own context, since they would no
longer be able to expect the same script class type.
Splits TranslogRecoveryPerformer into three parts:
- the translog operation to engine operation converter
- the operation performer (that indexes the operation into the engine)
- the translog statistics (for which there is already RecoveryState.Translog)
This makes it possible for peer recovery to use the same IndexShard interface as bulk shard requests (i.e. Engine operations instead of Translog operations). It also pushes the "fail on bad mapping" logic outside of IndexShard. Future pull requests could unify the BulkShard and peer recovery path even more.
The unified highlighter rewrites MultiPhrasePrefixQuery to SpanNearQuery even when there is a single term in the phrase.
However, SpanNearQuery throws an exception when the number of clauses is less than 2.
This change returns a simple PrefixQuery when there is a single term and builds the SpanNearQuery otherwise.
Relates #25088
The PR takes a different approach to solving #24806 than the one currently implemented via #25052. The `refreshMetric` that IndexShard maintains is updated using the refresh listeners infrastructure in lucene. This means that we truly count all refreshes that lucene makes and do not have to worry about each individual caller (like `IndexShard#refresh` and `Engine#get()`).
parent/child: Allow updating mapping without specifying `_parent` field on each update.
Prior to this change, when a mapping had a `_parent` field, any update to the mapping (including updates that didn't modify the `_parent` field) involved specifying the `_parent` field again. With this change, specifying the `_parent` field on each mapping update is no longer required.
Closes #23381
We have a callback interface that is not needed because it is
effectively the same as java.util.function.Consumer. This commit removes
it.
Relates #25089
We use a callback in recovery land during primary relocation to ensure
the relocation target is on at least the same version as the relocation
source. This callback is typed as a Callback<Long> which is an
unnecessary custom type (we can use Consumer<T> or the appropriate
primitive callbacks). Here, we can use LongConsumer.
Relates #25081
Previously the HEAD and GET aliases endpoints were misaligned in
behavior. The HEAD verb would 404 if any aliases are missing while the
GET verb would not if any aliases existed. When HEAD was aligned with
GET, this broke the previous usage of HEAD to serve as an existence
check for aliases. It is the behavior of GET that is problematic here
though, if any alias is missing the request should 404. This commit
addresses this by modifying the behavior of GET to behave in this
way. This fixes the behavior for HEAD to also 404 when aliases are
missing.
Relates #25043
This change moves the parent_id query to the parent-join module and handles the case where only the parent-join field can be declared on an index (index with single type on).
If single type is off it uses the legacy parent join field mapper, and switches to the new one otherwise (the default in 6).
Relates #20257
This commit fixes the group methods of Settings to properly include
grouped secure settings. Previously the secure settings were included
but without the group prefix being removed.
Closes #25069
The index parameter in the update-aliases, put-alias, and delete-alias APIs no longer accepts alias names. Instead, it accepts only index names (or wildcards which will expand to matching indices).
Closes #23960
This commit fixes a bug in retrieving a sub Settings object for a given
prefix with secure settings. Before this commit the returned Settings
would be filtered by the prefix, but the found setting names would not
have the prefix removed.
This is the first step towards adaptive replica selection (#24915). This PR
tracks the execution time, also known as the "service time" of a task in the
threadpool. The `QueueResizingEsThreadPoolExecutor` then stores a moving average
of these task times which can be retrieved from the executor.
Currently there is no functionality using the EWMA[1] yet (other than tests); this
is only a bite-sized building block so that it's easier to review.
[1]: EWMA = Exponentially Weighted Moving Average
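The EWMA itself is tiny; a minimal synchronized sketch of the bookkeeping (not the actual class, which may avoid the lock):
```
class EwmaSketch {
    private final double alpha; // smoothing factor in (0, 1]; higher weighs recent tasks more
    private double average;
    private boolean initialized;

    EwmaSketch(double alpha) {
        this.alpha = alpha;
    }

    // Fold one observed task time into the moving average.
    synchronized void addValue(double taskNanos) {
        average = initialized ? alpha * taskNanos + (1 - alpha) * average : taskNanos;
        initialized = true;
    }

    synchronized double getAverage() {
        return average;
    }
}
```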
This fixes an assertion that did not hold: resizing can result in a smaller size than the current size, while
the assert attempted to verify that the new size is always greater than the
current one.
REST handlers that require a body will throw an ElasticsearchParseException "request body required".
REST handlers that require a body OR source param will throw an ElasticsearchParseException "request body or source param required".
Replaced asserts in BulkRequest parsing code with a more descriptive IllegalArgumentException if the line contains an empty object.
Updated the bulk REST test to verify an empty action line is rejected properly.
Updated BulkRequestTests with randomized testing for an empty action line.
Used try-with-resources for XContentParser in AbstractBulkByQueryRestHandler.
This commit exposes the secure settings in Settings.Builder, so that
the current secure settings can be retrieved and added to when creating
settings for tests. This is necessary since secure settings can only be
added once to a builder, so chains of methods using settings builders
must reuse the already set mock secure settings.
When the jarhell check fails due to a duplicate jar on the classpath,
the exception message includes the full classpath but not the duplicated
jar. For a long classpath, this can make it difficult to find the jar
that is duplicated. This commit changes the exception message to include
the duplicated jar.
Relates #24953
This removes the parsing of things like `GET /idx/_aliases,_mappings`; instead,
a user must choose between retrieving all index metadata with `GET /idx`, or only
a specific form such as `GET /idx/_settings`.
Relates to (and is a prerequisite of) #24437
This commit creates TemplateScript and associated classes so that
templates no longer need a special ScriptService.compileTemplate method.
The execute() method is equivalent to the old run() method.
relates #20426
Previously, when allocating bytes for a BigArray, the array was created
(or attempted to be created) and only then would the array be checked
for the amount of RAM used to see if the circuit breaker should trip.
This is problematic because for very large arrays, if creating or
resizing the array, it is possible to attempt to create/resize and get
an OOM error before the circuit breaker trips, because the allocation
happens before checking with the circuit breaker.
This commit ensures that the circuit breaker is checked before all big
array allocations (note, this does not affect the array allocations that
are less than 16kb which use the [Type]ArrayWrapper classes found in
BigArrays.java). If such an allocation or resizing would cause the
circuit breaker to trip, then the breaker trips before attempting to
allocate and potentially running into an OOM error from the JVM.
Closes #24790
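The ordering fix in sketch form (the breaker interface here is illustrative, though `addEstimateBytesAndMaybeBreak` and `addWithoutBreaking` are the method names used elsewhere in this log):
```
class BreakerFirstSketch {
    interface Breaker {
        void addEstimateBytesAndMaybeBreak(long bytes); // throws if the breaker trips
        void addWithoutBreaking(long bytes);
    }

    // Reserve the bytes with the breaker *before* allocating, so the breaker
    // trips instead of the JVM throwing an OutOfMemoryError.
    static long[] newLongArray(Breaker breaker, int size) {
        long bytes = (long) size * Long.BYTES;
        breaker.addEstimateBytesAndMaybeBreak(bytes);
        try {
            return new long[size];
        } catch (OutOfMemoryError e) {
            breaker.addWithoutBreaking(-bytes); // hand the reservation back on failure
            throw e;
        }
    }
}
```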
In #24605, logic was implemented to ensure that completed snapshots were
properly removed from the cluster state upon a change in master nodes.
This commit removes redundant logic that also attempted to clean up
completed snapshots from the cluster state on master election, but only
covered a limited case that was remedied in #24605.
This commit also adds a test to ensure that completed snapshots are cleaned
up at the right moment in time when a master election happens
before finalizing a snapshot, as well as adds a check to handle the case
where the old master and new master could attempt to finalize the
snapshot and write the same blob to the repository simultaneously.
This removes the `accumulateExceptions()` method (and its usage) from `TransportNodesAction` and `TransportTasksAction`, forcing both transport actions to always accumulate exceptions.
Without this change, some transport actions, like `TransportNodesStatsAction`, would respond in very unexpected ways by returning no response due to some failure; instead of returning an
error, the response would simply be empty: no response and no error.
This results in a very trappy response structure where users can check for an error, then attempt to blindly use the response when no error is returned.
This commit reduces the number of buckets that are generated for multi
bucket aggregations in AggregationsTests and SearchResponseTests.
The number of buckets is now limited to a maximum of 3, but before some
aggregations could generate up to 10 buckets.
We can hit an already closed exception when filling the gaps after
blocking operations when updating the primary term on a promoted replica
shard. We should catch this and suppress it as it is an expected outcome
instead of letting it bubble up which leads to trying to fail the shard
which throws yet another already closed exception.
Relates #25021
Some response classes in the java api expose both `getTook()` which returns a `TimeValue` and `getTookInMillis` which returns a `long` value. `getTook()` is enough as one can do `getTook().millis()` to obtain the same result as `getTookInMillis()`, which can be removed.
* Adds nodes usage API to monitor usages of actions
The nodes usage API has 2 main endpoints
/_nodes/usage and /_nodes/{nodeIds}/usage return the usage statistics
for all nodes and the specified node(s) respectively.
At the moment only one type of usage statistics is available, the REST
actions usage. This records the number of times each REST action class is
called, and when the nodes usage api is called it will return a map of rest
action class name to a long representing the number of times each of the action
classes has been called.
Still to do:
* [x] Create usage service to store usage statistics
* [x] Record usage in REST layer
* [x] Add Transport Actions
* [x] Add REST Actions
* [x] Tests
* [x] Documentation
* Refactors UsageService so counts are done by the handlers
* Fixing up docs tests
* Adds a name to all rest actions
* Addresses review comments
This commit adds a new bg_count field to the REST response of
SignificantTerms aggregations. Similarly to the bg_count that already
exists in significant terms buckets, this new bg_count field is set at
the aggregation level and is populated with the superset size value.
This commit adds an optional `context` url parameter to the put stored
script request. When a context is specified, the script is compiled
against that context before storing, as validation that the script will
work when used in that context.
Today there is a lot of code duplication and different handling of errors
in the two different scroll modes. Yet, it's not clear if we will keep both of
them, but this simplification will help to further refactor this code to also
add cross cluster search capabilities.
This refactoring also fixes bugs where shards failed because a node dropped out of the cluster in between scroll requests, as well as failures during the fetch phase of the scroll. Both places were simply ignoring the failure and logging it at debug level. This can cause issues like #16555
This commit provides the TransportRequest that caused the retrieval of a search context to the
SearchOperationListener#validateSearchContext method so that implementers have access to the
request.
By default, the remove plugin CLI command preserves configuration
files. This is so that if a user is upgrading the plugin (which is done
by first removing the old version and then installing the new version)
they do not lose their configuration file. Yet, there are circumstances
where preserving the configuration file is not desired. This commit adds
a purge option to the remove plugin CLI command.
Relates #24981
Currently, the decisions regarding which translog generation files to delete are hard coded in the interaction between the `InternalEngine` and the `Translog` classes. This PR extracts it to a dedicated class called `TranslogDeletionPolicy`, for two main reasons:
1) Simplicity - the code is easier to read and understand (no more two phase commit on the translog, the Engine can just commit and the translog will respond)
2) Preparing for future plans to extend the logic we need - i.e., retain multiple lucene commits and also introduce a size based retention logic, allowing people to always keep a certain amount of translog files around. The latter is useful to increase the chance of an ops based recovery.
If the delimiter or replacement parameter is an empty string, the error is not clear enough to indicate how to fix it.
With this change, the user knows these parameters must be non-empty strings.
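Roughly, the validation becomes (a sketch; the helper and its wiring into the factory are illustrative):
```
class AnalysisParamSketch {
    // Fail fast with a message that tells the user exactly what to fix.
    static String requireNonEmpty(String value, String name) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(name + " must be a non-empty string");
        }
        return value;
    }
}
```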
This is related to #24927. There was a small possibility that a test
was attempting to compress a stream with zero bytes. This was causing
a failure.
This test now requires at least one byte.
This is a follow-up to #23941. Currently there are a number of
complexities related to compression. The raw DeflaterOutputStream must
be closed prior to sending bytes to ensure that EOS bytes are written.
But the underlying ReleasableBytesStreamOutput cannot be closed until
the bytes are sent to ensure that the bytes are not reused.
Right now we have three different stream references hanging around in
TCPTransport to handle this complexity. This commit introduces
CompressibleBytesOutputStream to be one stream implementation that will
behave properly with or without compression enabled.
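The ordering constraint is easiest to see in miniature, with JDK streams standing in for the transport's releasable streams:
```
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

class CompressionOrderingSketch {
    // finish() writes the deflate EOS bytes without closing the underlying
    // stream, which must stay alive until the bytes have actually been sent.
    static byte[] compressForSending(byte[] payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DeflaterOutputStream deflate = new DeflaterOutputStream(bytes, new Deflater());
        deflate.write(payload);
        deflate.finish();
        byte[] toSend = bytes.toByteArray(); // safe: the underlying stream is still open
        bytes.close();                       // released only after the bytes were taken
        return toSend;
    }
}
```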
This metric is not used in the ES codebase at all. It's also not as likely to be
used since it relies on a periodic "tick", which we don't currently use.
The took time computed for search requests does not take into account the expand search phase.
This change delays the computation until after the expand phase finishes.
Relates #24900
This makes profiling classes acquire a timer up-front that can then be reused
across all calls, in order to save bounds checks for methods that are called in
tight loops.
ScriptContexts currently understand a FactoryType that can produce
instances of the script InstanceType. However, for search scripts, this
does not work as we have the concept of LeafSearchScript that is created
per lucene segment. This commit effectively renames the existing
SearchScript class into SearchScript.LeafFactory, which is a new,
optional class that can be defined within a ScriptContext.
LeafSearchScript is effectively renamed back into SearchScript. This
change allows the model of stateless factory -> stateful factory ->
script instance to continue, but in a generic way that any script
context may take advantage of.
relates #20426
In previous work, we refactored the delay mechanism in index shard
operation permits to allow for async delaying of acquisition. This
refactoring made explicit when permit acquisition is disabled whereas
previously we were relying on an implicit condition, namely that all
permits were acquired by the thread trying to delay acquisition. When
using the implicit mechanism, we tried to acquire a permit and if this
failed, we returned a null releasable as an indication that our
operation should be queued. Yet, now we know when we are delayed and we
should not even try to acquire a permit. If we try to acquire a permit
and one is not available, we know that we are not delayed, and so
acquisition should be successful. If it is not successful, something is
deeply wrong. This commit takes advantage of this refactoring to
simplify the internal implementation.
Relates #24971
When a primary is promoted, it could have gaps in its history due to
concurrency and in-flight operations when it was serving as a
replica. This commit fills the gaps in the history of the promoted shard
after all operations from the previous term have drained, and future
operations are blocked. This commit does not handle replicating the
no-ops that fill the gaps to any remaining replicas, that is the
responsibility of the primary/replica sync that we are laying the ground
work for.
Relates #24945
`terms` aggregations at the root level use the `global_ordinals` execution hint by default.
When all sub-aggregators can be run in `breadth_first` mode the collected buckets for these sub-aggs are dense (remapped after the initial pruning).
But if a sub-aggregator is not deferrable and needs to collect all buckets before pruning we don't remap global ords and the aggregator needs to deal with sparse buckets.
Most (if not all) aggregators expect dense buckets and use this information to allocate memory.
This change forces the remap of the global ordinals but only when there is at least one sub-aggregator that cannot be deferred.
Relates #24788
This commit introduces a clean transition from the old primary term to
the new primary term when a replica is promoted primary. To accomplish
this, we delay all operations before incrementing the primary term. The
delay is guaranteed to be in place before we increment the term, and
then all operations that are delayed are executed after the delay is
removed which asynchronously happens on another thread. This thread does
not progress until in-flight operations that were executing are
completed, and after these operations drain, the delayed operations
re-acquire permits and are executed.
Relates #24925
Within two lines of each other appears "fallthrough" and "fall through",
both typed by the same person who should have been paying better
attention and only one of these is correct and the inconsistency is
bothersome. This commit fixes the errant one.
Drops `TokenizerFactory#name`, replacing it with
`CustomAnalyzer#getTokenizerName` which is much better targeted at
its single use case inside the analysis API.
Drops a test that I would have had to refactor which is duplicated by
`AnalysisModuleTests`.
To keep this change from blowing up in size I've left two mostly
mechanical changes to be done in followups:
1. `TokenizerFactory` can now be entirely dropped and replaced with
`Supplier<Tokenizer>`.
2. `AbstractTokenizerFactory`'s ctor still takes a `String` parameter
where the name once was.
If the bucket already exists, due to non-overlapping series or missing data, the
MovAvg creates a merged bucket with the existing aggs + the new prediction. This
fixes a small bug where the doc_count was not being set correctly.
Relates to #24327
Today if the primary throws an exception while handling the replica
response (e.g., because it is already closed while updating the local
checkpoint for the replica), or because of a bug that causes an
exception to be thrown in the replica operation listener, this exception
is caught by the underlying transport handler plumbing and is translated
into a response handler failure transport exception that is passed to
the onFailure method of the replica operation listener. This causes the
primary to turn around and fail the replica which is a disastrous and
incorrect outcome as there's nothing wrong with the replica, it is the
primary that is broken and deserves a paddlin'. This commit handles this
situation by failing the primary.
Relates #24926
This commit adds a second refresh to the concurrent relocation
test. This is necessary as the first refresh might have brought back a
local checkpoint for a shard that a newly relocated primary became aware
of but did not yet receive a local checkpoint for that shard. When that
local checkpoint arrives on the new primary, the global checkpoint could
advance again and so we need a second replication action to push that
global checkpoint back out to the replica. This is indeed a hack, and it
will eventually be removed.
Closes #24599
The `IndexDeletionPolicy` is currently instantiated by `IndexShard` and is then passed through to the engine as a parameter. That's a shame as it is really just an implementation detail and the engine already has a method to acquire a commit.
This is preparing for a follow-up PR that will connect the index deletion policy with a new translog deletion policy.
Relates to #10708
The order in which double values are added in Java can give different results,
so in testing the sum and sumOfSquares we need to allow some delta for testing
equality. The difference can be larger for large sum values, so we should
account for this by making the delta in the assertion depend on the values'
magnitude.
Closes #24931
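In sketch form (the tolerance factors are illustrative, not the ones used in the test):
```
class RelativeDeltaSketch {
    // Tolerance proportional to the expected magnitude, with a small floor
    // for values near zero.
    static void assertCloseTo(double expected, double actual) {
        double delta = Math.max(1e-10, Math.abs(expected) * 1e-10);
        if (Math.abs(expected - actual) > delta) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}
```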
ClearScrollResponse can print out its content into an XContentBuilder as it implements ToXContentObject. This PR adds a fromXContent method to it so that we are able to recreate the response object when parsing the response back. This will be used in the high level REST client.
ClearScrollRequest can be created from a request body, but it doesn't support the opposite, meaning printing out its content to an XContentBuilder. This is useful to the high level REST client and allows for better testing of what we parse.
Moved parsing method from RestClearScrollAction to ClearScrollRequest so that fromXContent and toXContent sit close to each other. Added unit tests to verify that body parameters override query_string parameters when both present (there is already a yaml test for this but unit test is even better)
SearchScrollRequest can be created from a request body, but it doesn't support the opposite, meaning printing out its content to an XContentBuilder. This is useful to the high level REST client and allows for better testing of what we parse.
Moved parsing method from RestSearchScrollAction to SearchScrollRequest so that fromXContent and toXContent sit close to each other. Added unit tests to verify that body parameters override query_string parameters when both present (there is already a yaml test for this but unit test is even better)
When proportioning the shared RAM bytes across the shards of the query
cache, there's a computation that shares these bytes according to the
relative size of the shard cache to the total size of all the shard
caches. This computation had a bug where integer division was performed
instead which leads to this computation often being zero. This commit
fixes this bug by casting the numerator to a double before doing the
division so that double division is performed.
Relates #24856
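The bug is easy to reproduce with illustrative numbers:
```
class QueryCacheShareSketch {
    public static void main(String[] args) {
        long shardCacheSize = 100, totalCacheSize = 1000, sharedRamBytes = 1_000_000;
        // the bug: integer division truncates 100/1000 to 0
        long broken = sharedRamBytes * (shardCacheSize / totalCacheSize);
        // the fix: cast the numerator so the division happens on doubles
        long fixed = (long) (sharedRamBytes * ((double) shardCacheSize / totalCacheSize));
        System.out.println(broken + " vs " + fixed); // 0 vs 100000
    }
}
```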
The Lucene version constants for 5.4.1 and 5.5.0 are wrong: they are
listed as 6.5.0 instead of 6.5.1. This commit fixes these issues, and
adds a test to ensure that this does not happen again.
Relates #24923
This commit fixes a double decrement bug on the current query
counter. The double decrement arises in a situation when the fetch phase
is inlined for a query that is only touching one shard. After the query
phase succeeds we decrement the current query counter. If the fetch
phase ultimately fails, an exception is thrown and we decrement the
current query counter again in the catch block. We also add assertions
that all current stats counters remain non-negative at all
times.
Relates #24922
Removes the need for the `_UNRELEASED` suffix on versions by detecting if a version should be unreleased or not based on the versions around it. This should make it simpler to automate the task of adding a new version label.
In #23093 we made a change so that total bytes for a filesystem would not be a
negative value when the total bytes were > Long.MAX_VALUE.
This fixes#24453 which had a related issue where `available` and `free` bytes
could also be so large that they were negative. These will now return
`Long.MAX_VALUE` for the bytes if the JDK returns a negative value.
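The guard amounts to a one-line clamp; a sketch:
```
class FsBytesSketch {
    // The JDK reports sizes larger than Long.MAX_VALUE as negative values;
    // clamp them instead of surfacing a negative byte count.
    static long sanitize(long bytesReportedByJdk) {
        return bytesReportedByJdk < 0 ? Long.MAX_VALUE : bytesReportedByJdk;
    }
}
```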
These tests spin up two nodes of an older version of Elasticsearch,
create some stuff, shut down the nodes, start the current version,
and verify that the created stuff works.
You can run `gradle qa:full-cluster-restart:check` to run these
tests against the head of the previous branch of Elasticsearch
(5.x for master, 5.4 for 5.x, etc) or you can run
`gradle qa:full-cluster-restart:bwcTest` to run these tests against
all "index compatible" versions, one after the other. For master
this is every released 5.x.y version *and* the tip of the 5.x
branch.
I'd love to add more to these tests in the future but these
currently just cover the functionality of the `create_bwc_index.py`
script and start to cover the assertions in the
`OldIndexBackwardsCompatibilityIT` test.
This commit renames the concept of the "compiled type" to a "factory
type", along with all implementations of this class to be named Factory.
This brings it in line with the class's purpose.
This commit adds collection of all contexts to the parameters of
getScriptEngine. This will allow script engines like painless to
precache extra information about the contexts.
This test is failing sporadically, so for now we mute it; we have a
failure with additional logging that should hopefully enable us to
assess the situation.
This is a simple refactoring to move the context definitions into the
type that they use. While we have multiple context names for the same
class at the moment, this will eventually become one ScriptContext per
instance type, so the pattern of a static member on the interface called
CONTEXT can be used. This commit also moves the consolidated list of
contexts provided by core ES into ScriptModule.
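A hedged sketch of the resulting pattern (the interface and constructor shown are illustrative):

    public interface SearchScript {
        // the context definition lives on the type that uses it
        ScriptContext<SearchScript> CONTEXT =
                new ScriptContext<>("search", SearchScript.class);

        // ... script methods ...
    }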
This change cleans up some missed TODOs for content type detection on the source of put mapping and
put index template requests. In 5.3.0 and newer versions, the source is always JSON so the content
type detection is not needed. The TODOs were missed after the change was backported to 5.3.
Relates #24798
This commit adds the ability to store and retrieve data that should be associated with a
ScrollContext. Additionally the ScrollContext was made final as we should only have a single
implementation of this concept.
This commit changes the compile method of ScriptEngine to be generic in
the same way it is on ScriptService. This moves the shim of handling the
two existing context classes into each script engine, so that each
engine can be worked on independently to convert to real handling of
contexts.
When developing the new ScriptContext, the compiled type was originally
generic, so that the instance type was also necessary. However, since
CompiledType is all that is used by the compile method signature, we
actually don't need the instance type to be generic. This commit removes
the InstanceType, and finds the Class for it through reflection on the
CompiledType method.
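Roughly the idea, assuming the compiled type exposes a single factory method (field and method names are hypothetical):

    public final class ScriptContext<CompiledType> {
        public final String name;
        public final Class<CompiledType> compiledClazz;
        public final Class<?> instanceClazz;

        public ScriptContext(String name, Class<CompiledType> compiledClazz) {
            this.name = name;
            this.compiledClazz = compiledClazz;
            // recover the instance type by reflection instead of carrying a
            // second type parameter: it is the return type of the factory method
            Class<?> instance = null;
            for (java.lang.reflect.Method method : compiledClazz.getMethods()) {
                if (method.getName().equals("newInstance")) {
                    instance = method.getReturnType();
                }
            }
            this.instanceClazz = instance;
        }
    }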
This commit modifies the compile method of ScriptService to be context
aware. The ScriptContext is now a generic class which contains both the
instance type and compiled type for a script. Instance type may be
stateful (for example, preloading field information for the index a
script will execute on, like in expressions), while the compiled type is
stateless and used to construct instance type instances. This change is
only a first step to cutover ScriptService to the new paradigm. It only
converts callers to the script service, and has a small shim to wrap
compilation from the script engines to support the current two fixed
instance types, SearchScript and ExecutableScript.
Since groovy was removed, we no longer have any ScriptEngines with
resources to release. We may want to keep the option open for a script
engine to close resources, but this would not be common. This commit
adds a default implementation to ScriptEngine for `close()` to reduce
the boilerplate that must be added for a ScriptEngine implementation.
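A minimal sketch of what the default amounts to, assuming ScriptEngine extends Closeable:

    public interface ScriptEngine extends java.io.Closeable {
        // most engines hold no resources, so releasing them is a no-op by default
        @Override
        default void close() throws java.io.IOException {
        }
    }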
This commit increases the logging level on the index and relocate
concurrently test to obtain some insight into the global checkpoint
moving backwards.
The current logic tries to make sure we waited some time (but not too long). This is unpredictable and fails all the time. This commit removes all of it and just makes sure that we throw the right exceptions after timing out.
Fixes #24369
* SignificantText aggregation - like significant_terms but doesn’t require fielddata=true; recommended for use with the `sampler` agg to limit the expense of tokenizing docs; takes an optional `filter_duplicate_text`:true setting to avoid stats skew from repeated sections of text in search results.
Closes #23674
With #24779 in place, we can now guarantee that a single translog generation file will never have a sequence number conflict that needs to be resolved by looking at primary terms. These conflicts can occur when a replica contains an operation which isn't part of the history of a newly promoted primary. That primary can then assign a different operation to the same slot and replicate it to the replica.
PS. Knowing that each generation file is conflict free will simplify repairing these conflicts when we read from the translog.
PPS. This PR also fixes some bugs in the piping of primary terms in the bulk shard action. These bugs are a result of the legacy of IndexRequest/DeleteRequest being a ReplicationRequest. We need to change that as a follow up.
Relates to #10708
This commit cleans up tests which currently use custom script engine
implementations, converting them to use a MockScriptEngine with script
functions provided by the tests. It also creates a common set of metric
scripts that had previously been duplicated across a couple of metric agg tests.
A user reported uneven balancing of load on nodes handling search requests from Kibana which supplies a session ID in a routing preference. Each shardId was selecting the same node for a given session ID because one data node had all primaries and the other data node held all replicas after cluster startup.
This change counteracts the tendency to opt for the same node given the same user-supplied preference by incorporating shard ID in the hash of the preference key. This will help randomise node choices across shards.
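A minimal sketch of the idea (the method is hypothetical):

    // mixing the shard ID into the preference hash means different shards of
    // the same index rotate their copies differently for one session key
    static int preferenceSeed(String preference, int shardId) {
        return 31 * preference.hashCode() + shardId;
    }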
Closes #24642
This commit adds comments to org.elasticsearch.Assertions that stop
IntelliJ from complaining about using assert with side effects and about
using constant conditions there, as the side effect with a constant
condition is intentionally employed.
Today in the code base we have lots of ugly code blocks like:
    boolean assertionsEnabled = false;
    assert assertionsEnabled = true;
    if (assertionsEnabled) {
        // something
    }
These are a nuisance. Instead, we can do this in exactly one place and
replace these blocks with
    if (Assertions.ENABLED) {
        // something
    }
The cool thing here is that since this is a static final field, the JIT
can optimize away the check at runtime if assertions are disabled.
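The class boils down to something like this sketch, with the kind of suppression comments the commit describes:

    public final class Assertions {

        private Assertions() {
        }

        public static final boolean ENABLED;

        static {
            boolean enabled = false;
            //noinspection ConstantConditions,AssertWithSideEffects
            assert enabled = true; // intentional side effect, runs only with -ea
            //noinspection ConstantConditions
            ENABLED = enabled;
        }
    }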
Relates #24834
This commit moves the handling of nested and parent/child inner hits to specialized classes that can be defined outside of ES core.
InnerHitBuilderContext is now used by the parent query (nested or hasChild, ...) to build the sub context from the InnerHitBuilder definition.
BWC is also ensured so that nodes in previous versions can still send/receive inner hits to/from this version.
Relates #20257
After releasing 5.3.2, the 5.3.3 version constant was created. However,
this causes issues for the rolling upgrade tests, which expect to have
all older versions artifacts published and no point releases created off
of the older versions (older meaning more than one version behind the
current version). This commit removes the 5.3.3 version constant,
assuming we will not need it anywhere.
As we work towards contexts implying the return type of compilation, we
first need ScriptContext to not be an enum. This commit removes the
Standard enum and Plugin subclass of ScriptContext.
This commit fixes the RangeFieldMapper and RangeQueryBuilder to pass the correct relation to the RangeQuery when performing a range query over range fields.
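Illustrative use, assuming the relation setter on the Java query builder:

    // ask for range-field values fully contained in [10, 20]; the fix makes
    // this relation actually reach the underlying range query
    QueryBuilders.rangeQuery("age_range")
            .gte(10)
            .lte(20)
            .relation("within");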
Currently a `delete document` request against a non-existing index actually **creates** this index.
With this change the `delete document` no longer creates the previously non-existing index and throws an `index_not_found` exception instead.
However as discussed in https://github.com/elastic/elasticsearch/pull/15451#issuecomment-165772026, if an external version is explicitly used, the current behavior is preserved and the index is still created and the document is marked for deletion.
Fixes #15425
This commit is a simple cleanup to remove an unnecessary extra method on
ScriptService which was only used in 3 places. There is now only one
search method.
ScriptEngine implementations have an overridable method to indicate they
are safe to use as inline scripts. Since groovy was removed for 6.0,
there are no longer any implementations which used the default false
value. Furthermore, the value was not actually read anywhere. This
commit removes the method. The ScriptEngineRegistry was also no longer
necessary as it only was used to build a map from language to engine.
This commit removes a convenience method from index shard that is used
at exactly one call site. This method is used to callback a listener
when an operation is on too old of a primary term. Since it is only used
at one call site, we simply inline the method.
Today a replica learns of a new primary term via a cluster state update
and there is not a clean transition between the older primary term and
the newer primary term. This commit modifies this situation so that:
- a replica shard learns of a new primary term via replication
operations executed under the mandate of the new primary
- when a replica shard learns of a new primary term, it blocks
operations on older terms from reaching the engine, with a clear
transition point between the operations on the older term and the
operations on the newer term
This work paves the way for a primary/replica sync on primary
promotion. Future work will also ensure a clean transition point on a
promoted primary, and prepare a replica shard for a sync with the
promoted primary.
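Conceptually, the transition looks something like this (a sketch, not the actual implementation):

    class ReplicaTermGate {
        private long primaryTerm;

        synchronized void onOperation(long opPrimaryTerm) {
            if (opPrimaryTerm < primaryTerm) {
                // operations from an older term must not reach the engine
                throw new IllegalStateException("operation term [" + opPrimaryTerm
                        + "] is older than current term [" + primaryTerm + "]");
            }
            if (opPrimaryTerm > primaryTerm) {
                // the replica learns of the new primary from the replication
                // operation itself; bumping the term is the transition point
                primaryTerm = opPrimaryTerm;
            }
        }
    }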
Relates #24779