This commit moves the semantic validation (such as which version a plugin
was built for or which java version it is compatible with) out of the
code that reads a plugin descriptor, leaving the checks on the format of
the descriptor intact.
Relates #28540
* NXYSignificanceHeuristic.java: the implementation of equality would have
failed with a ClassCastException when comparing to another type.
Replaced it with the Eclipse-generated form.
Today we use the persisted global checkpoint to calculate the starting
seqno in peer-recovery. However we do not check whether the translog
actually belongs to the existing Lucene index when reading the global
checkpoint. In some rare cases, if the translog does not match the Lucene
index, the recovering replica won't be able to complete its recovery.
This can happen as follows.
1. Replica executes a file-based recovery
2. Index files are copied to the replica, but the node crashes before finishing the recovery
3. Replica starts recovery again as a sequence-based recovery, since the copied commit is safe
4. Replica fails to open the engine because the translog and the Lucene index do not match
5. Replica won't be able to recover from the primary
This commit enforces the translogUUID requirement when reading the
global checkpoint directly from the checkpoint file.
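A minimal sketch of the shape of this guard (the types and helper methods
below are hypothetical simplifications, not the actual Translog code):
```
import java.io.IOException;

// Hypothetical sketch: the caller supplies the translogUUID recorded in
// the Lucene commit, and reading the global checkpoint fails loudly when
// the on-disk translog belongs to a different index.
final class GlobalCheckpointReader {

    /** Hypothetical view over the on-disk translog files. */
    interface TranslogFiles {
        String uuid() throws IOException;            // UUID from the translog header
        long globalCheckpoint() throws IOException;  // value from the checkpoint file
    }

    static long read(TranslogFiles translog, String expectedTranslogUUID) throws IOException {
        if (expectedTranslogUUID.equals(translog.uuid()) == false) {
            throw new IllegalStateException("translog does not belong to this Lucene index:"
                + " expected UUID [" + expectedTranslogUUID + "] but found [" + translog.uuid() + "]");
        }
        return translog.globalCheckpoint();
    }
}
```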
Relates #28435
This commit changes the state format that was previously passed in to
`MetaDataStateFormat` to always use Smile. This doesn't actually change the
format, since we have used Smile for writing the format since at least 5.0. This
removes the automatic detection of the state format when reading state, since
any state that could be processed in 6.x and 7.x would already have been written
in Smile format.
This is work towards removing the deprecated methods in the XContent code where
we do automatic content-type detection.
Relates to #28504
This commit forces the depth_first mode for any `terms` aggregation that contains a sub-aggregation that needs to access the score of the document
in a nested context (i.e. the `terms` aggregation is a child of a `nested` aggregation). The score of children documents is not accessible in
breadth_first mode because the `terms` aggregation cannot access the nested context.
Closes #28394
* Search option terminate_after does not handle post_filters and aggregations correctly
This change fixes the handling of the `terminate_after` option when post_filters (or min_score) are used.
`post_filter` should be applied before `terminate_after` in order to terminate the query when enough documents are accepted
by the post_filters.
This commit also changes the type of exception thrown by `terminate_after` in order to ensure that multi collectors (aggregations)
do not try to continue the collection when enough documents have been collected.
Closes #28411
This is a follow-up to #28567, changing the method used to capture stack traces, as requested
during the review. Instead of creating a throwable, we explicitly capture the stack trace of the
current thread. This should Make Jason Happy Again ™️.
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
This removes all the server references to the deprecated `ParseField.match`
method in favor of the method that passes in the deprecation logger.
Relates to #28504
The bug was caused by the ScriptService having no reference to a ClusterState instance,
because it received the ClusterState after the PipelineStore. This is only the case
after a restart.
A bad side effect is that during a restart, any pipeline to be loaded after the pipeline that uses a stored script
was never loaded, which caused many pipelines to be missing in bulk / index request API calls.
After copying over the Lucene segments during peer recovery, we call cleanupAndVerify, which removes all other files in the directory and then calls getMetadata to check if the resulting files are a proper index. There are two issues with this:
- the directory is not fsynced after the deletions, so that the call to getMetadata, which lists files in the directory, can get a stale view, possibly seeing a deleted corruption marker (which leads to the exception seen in #28435)
- failing to delete a corruption marker should result in a hard failure, as the shard is otherwise unusable.
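A hedged sketch of the intended behavior (the helper names and the
corruption-marker prefix are assumptions, not the actual Store code):
```
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Set;

// Illustrative sketch: delete everything not in the kept snapshot, treat
// an undeletable corruption marker as fatal, and fsync the directory so a
// later listing cannot observe the deletions stale.
final class CleanupSketch {
    static void cleanup(Path shardDir, Set<String> filesToKeep) throws IOException {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(shardDir)) {
            for (Path file : stream) {
                String name = file.getFileName().toString();
                if (filesToKeep.contains(name)) {
                    continue;
                }
                try {
                    Files.delete(file);
                } catch (IOException e) {
                    // assumption: corruption markers use a well-known prefix;
                    // failing to delete one must fail the recovery hard
                    if (name.startsWith("corrupted_")) {
                        throw new IllegalStateException("cannot delete corruption marker [" + name + "]", e);
                    }
                }
            }
        }
        // fsync the directory itself (POSIX filesystems) so the deletions
        // are visible before the directory is listed again
        try (FileChannel dir = FileChannel.open(shardDir, StandardOpenOption.READ)) {
            dir.force(true);
        }
    }
}
```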
The shard not-available exceptions are currently ignored in the
replication as a best effort to avoid failing not-yet-ready shards.
However these exceptions can also come from fully active shards. If
this is the case, we may have skipped important failures from replicas.
Since #28049, only fully initialized shards receive write requests.
This restriction allows us to handle all exceptions in the replication.
There is a side-effect to this change. If a replica retries its peer
recovery a second time after being tracked in the replication group, it
can receive replication requests even though it's not yet ready. That
shard may be failed and allocated to another node even though it has a
good Lucene index on that node.
This PR does not change the way we report replication errors to users,
hence the shard not-available exceptions are still not reported, as before.
Relates #28049
Relates #28534
Today we acquire a permit from the shard to coordinate between indexing operations, recoveries and other state transitions. When we leak a permit it's practically impossible to find who the culprit is. This PR adds stack trace capturing for each permit so we can identify which part of the code is responsible for acquiring the unreleased permit. This code is only active when assertions are enabled.
The output is something like:
```
java.lang.AssertionError: shard [test][1] on node [node_s0] has pending operations:
--> java.lang.RuntimeException: something helpful 2
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:322)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
--> java.lang.RuntimeException: something helpful
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:311)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
```
Currently the master node logs a warning message whenever it receives a
failed shard request. However, this can be noisy because
- Multiple failed shard requests can be issued for a single shard
- Failed shard requests can still be issued for an already failed shard
This commit moves the log-warn to AllocationService in which the failing
shard action actually happens. This is another prerequisite step in
order to not ignore the shard not-available exceptions in the
replication.
Relates #28534
The queue size test has a race condition. Namely, the offering thread can
run so quickly that it completes all of its offering iterations before the
queue size thread ever has a chance to run a single size-poll
iteration. This means that the size will never actually be polled and
the test can spuriously fail. Since this test is checking for a race
condition between polling the size of the queue and offers to the queue,
what we really want is to execute each iteration in lockstep, giving the
threads multiple chances for the race between polling the size and
offering to occur. This commit addresses this by running the two threads
in lockstep for multiple iterations so that they have multiple chances
to race.
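The lockstep pattern looks roughly like this (a simplified sketch, not
the actual test code):
```
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CyclicBarrier;

// A barrier forces the offering thread and the size-polling thread to
// begin each iteration together, so every iteration is a fresh chance
// for the race to occur.
public class LockstepRace {
    public static void main(String[] args) throws Exception {
        final int iterations = 1000;
        final int capacity = 8;
        final ArrayBlockingQueue<Object> queue = new ArrayBlockingQueue<>(capacity);
        final CyclicBarrier barrier = new CyclicBarrier(2);

        Thread offerer = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                queue.offer(new Object());
                queue.poll(); // keep the queue from filling up
            }
        });
        Thread sizer = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                int size = queue.size();
                assert size >= 0 && size <= capacity : "observed impossible size " + size;
            }
        });
        offerer.start();
        sizer.start();
        offerer.join();
        sizer.join();
    }

    private static void await(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```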
Relates #28584
* Switch to non-deprecated ParseField.match method for o.e.search
This replaces more of the `ParseField.match` calls with the same call using a
deprecation handler. It encapsulates all of the instances in the
`org.elasticsearch.search` package.
Relates to #28504
* Address Nik's comments
* Replace more deprecated ParseField.match calls with non-deprecated call
This replaces more of the `ParseField.match` calls with the same call using a
deprecation handler.
Relates to #28504
* Address Nik's comments
Plugin descriptors currently contain an elasticsearch version,
which the plugin was built against, and a java version, which the plugin
was built with. These versions are read and validated, but not stored.
This commit keeps them in PluginInfo so they can be used later.
While the elasticsearch version is less interesting (since it is
enforced to match that of the running elasticsearch node), the java
version is interesting since we only validate the format, not the actual
version. This also gives PluginInfo full parity with the plugin
properties file.
Today when offering an item to a size blocking queue that is at
capacity, we first increment the size of the queue and then check if the
capacity is exceeded or not. If the capacity is indeed exceeded, we do
not add the item to the queue and immediately decrement the size of the
queue. However, this incremented size is exposed externally even though
the offered item was never added to the queue (this is effectively a
race on the size of the queue). This can lead to misleading statistics
such as the size of a queue backing a thread pool. This commit fixes
this issue so that such a size is never exposed. To do this, we replace
the hidden CAS loop that increments the size of the queue with a CAS
loop that only increments the size of the queue if we are going to be
successful in adding the item to the queue.
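A minimal sketch of the idea, assuming a simple bounded queue rather
than the actual SizeBlockingQueue code: the size is incremented in a CAS
loop only when a slot is actually available, so a rejected offer never
perturbs the externally visible size.
```
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

final class BoundedSizeQueue<E> {
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger size = new AtomicInteger();
    private final int capacity;

    BoundedSizeQueue(int capacity) {
        this.capacity = capacity;
    }

    boolean offer(E item) {
        // CAS loop: claim a slot only if one is available, so the size is
        // never observed above capacity, even transiently
        while (true) {
            final int current = size.get();
            if (current >= capacity) {
                return false; // at capacity: reject without touching the size
            }
            if (size.compareAndSet(current, current + 1)) {
                queue.offer(item); // slot claimed, safe to add
                return true;
            }
            // lost the race against another thread, retry
        }
    }

    E poll() {
        final E item = queue.poll();
        if (item != null) {
            size.decrementAndGet();
        }
        return item;
    }

    int size() {
        return size.get();
    }
}
```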
Relates #28557
Today when a replica shard detects a new primary shard (via a primary
term transition), we roll the translog generation. However, the
mechanism that we are using here is by reaching through the engine to
the translog directly. By poking all the way through rather than asking
the engine to manage the roll for us we miss:
- taking a read lock in the engine while the roll is occurring
- trimming unreferenced readers
This commit addresses this by asking the engine to roll the translog
generation for us.
Relates #28537
* es/master:
Added more parameter to PersistentTaskPlugin#getPersistentTasksExecutor(...)
[Tests] Relax assertion in SuggestStatsIT (#28544)
Make internal Rounding fields final (#28532)
Fix the ability to remove old plugin
[TEST] Expand failure message for wildfly integration tests
Add 6.2.1 version constant
Remove feature parsing for GetIndicesAction (#28535)
No refresh on shard activation needed (#28013)
Improve failure message when restoring an index that already exists in the cluster (#28498)
Use right skip versions.
[Docs] Fix incomplete URLs (#28528)
Use non deprecated xcontenthelper (#28503)
Painless: Fixes a null pointer exception in certain cases of for loop usage (#28506)
The test expects suggest times in milliseconds that are strictly
positive. Internally they are measured in nanos, and it is possible that
on really fast execution this is rounded down to 0L, so this should also
be an accepted value.
Closes #28543
We now read the plugin descriptor when removing an old plugin. This is
to check if we are removing a plugin that is extended by another
plugin. However, when reading the descriptor we enforce that it is of
the same version as the running node. This is not the case when a user
has upgraded Elasticsearch and is now trying to remove an old plugin.
This commit fixes this by skipping the version enforcement when reading
the plugin descriptor, but only when removing a plugin.
Relates #28540
A shard is fully baked when it moves to POST_RECOVERY. There is no need to do an extra refresh on shard activation again as the shard has already been refreshed when it moved to POST_RECOVERY.
* Move to non-deprecated XContentHelper.createParser(...)
This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.
Relates to #28449
Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.
* Remove the deprecated (and now non-needed) createParser method
If you call `getDates()` on a long or date type field, add a deprecation
warning to the response and log something to the deprecation logger.
This *mostly* worked just fine, but if the deprecation logger happens to
roll then the roll will be performed with the script's permissions
rather than the permissions of the server. And scripts don't have
permissions to, say, open files. So the rolling failed. This fixes that
by wrapping the call to the deprecation logger in `doPrivileged`.
This is a strange `doPrivileged` call because it doesn't check
Elasticsearch's `SpecialPermission`. `SpecialPermission` is a permission
that non-script code has and that scripts never have. Usually all
`doPrivileged` calls check `SpecialPermission` to make sure that they
are not accidentally acting on behalf of a script. But in this case we
are *intentionally* acting on behalf of a script.
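An illustrative sketch of the fix (the names are stand-ins for the real
call site, not the actual implementation):
```
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.function.Consumer;

// The deprecation warning is emitted inside doPrivileged so that a Log4j
// file roll triggered by this call runs with the server's permissions,
// not the script's. Note the deliberate absence of a SpecialPermission
// check: here we intend to act on behalf of the script.
final class ScriptDeprecations {
    static void warn(Consumer<String> deprecationLogger, String message) {
        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
            deprecationLogger.accept(message);
            return null;
        });
    }
}
```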
Closes #28408
Currently when failing a shard we also mark it as stale (i.e. remove its
allocationId from the in-sync set). However in some cases, we need
to be able to fail shards but keep them in the in-sync set. This commit
adds such a capability. This is a preparatory change to make the
primary-replica resync less lenient.
Relates #24841
java.time has the functionality needed to deal with timezones with varying
offsets correctly, but it also has a bunch of methods that silently let you
forget about the hard cases, which raises the risk that we'll quietly do the
wrong thing at some point in the future.
This change adds the trappy methods to the list of forbidden methods to try and
help stop this from happening.
It also fixes the only use of these methods in the codebase so far:
IngestDocument#deepCopy() used ZonedDateTime.of() which may alter the offset of
the given time in cases where the offset is ambiguous.
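A small demonstration of the trap (an assumed example, not code from
this change):
```
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// 2017-10-29T02:30 happens twice in Europe/Berlin because clocks fall
// back from 03:00 (+02:00) to 02:00 (+01:00). ZonedDateTime.of silently
// picks the earlier offset, so a time that was actually in the later
// occurrence gets a different instant.
public class AmbiguousOffset {
    public static void main(String[] args) {
        ZoneId berlin = ZoneId.of("Europe/Berlin");
        LocalDateTime ambiguous = LocalDateTime.of(2017, 10, 29, 2, 30);

        ZonedDateTime picked = ZonedDateTime.of(ambiguous, berlin);
        System.out.println(picked); // 2017-10-29T02:30+02:00, earlier offset chosen

        // the explicit alternatives make the choice visible instead of silent
        System.out.println(picked.withEarlierOffsetAtOverlap()); // +02:00
        System.out.println(picked.withLaterOffsetAtOverlap());   // +01:00
    }
}
```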
This commit switches all the modules and server test code to use the
non-deprecated `ParseField.match` method, passing in the parser's deprecation
handler or the logging deprecation handler when a parser is not available (like
in tests).
Relates to #28449
Today the correctness of synced-flush is guaranteed by ensuring that
there are no ongoing indexing operations on the primary. Unfortunately, a
replica might fall out of sync with the primary even when that condition
is met. Moreover, if synced-flush mistakenly issues a sync_id for an out
of sync replica, then that replica would not be able to recover from the
primary. ES prevents that peer-recovery because it detects that the
indexes from primary and replica were sealed with the same sync_id but
have different content. This commit modifies synced-flush to not
issue a sync_id for out of sync replicas. This change will report the
divergence issue earlier to users and also prevent replicas from getting
into the "unrecoverable" state.
Relates #10032
The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This commit changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
This assertion does not hold if the engine is flushed between the
invocation of translog.uncommittedSizeInBytes and
translog.uncommittedOperations, as these two values can be calculated
from different commits.
If the translog flush threshold is too small (e.g. smaller than the
translog header), we may repeatedly flush even when there is no
uncommitted operation because the shouldFlush condition can still be
true after flushing. This is currently avoided by adding an extra guard
against the uncommitted operations. However, this extra guard makes
shouldFlush complicated. This commit replaces that extra guard with a
lower bound for the translog flush threshold. We keep the lower bound
small for convenience in testing.
Relates #28350
Relates #23606
Persistent tasks are built on top of node tasks and provide functionality to restart a task to run on a different coordination node in case the coordinating node is no longer available.
It is up to a persistent task implementation to keep track of status, so that in case the task is restarted, the task can continue where it left off before it was restarted.
This change removes the `CircuitBreakerIT.testParentChecking` test method, which fails intermittently in unexpected ways, and replaces it with a `MemoryCircuitBreakerTests.testBorrowingSiblingBreakerMemory` unit test method which can test the borrowing functionality more directly.
Closes #28223
This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metadata.
This method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.
Relates #27782
This change fixes a possible AIOOB during the parsing of the document that contains the indexed shape.
This change ensures that the parsing does not continue when the field that contains the shape has been found.
Closes #28456
Adds an allow_partial_search_results flag to search requests, with default setting = true.
When false, the search will error if it either times out, has partial errors or has missing shards, rather
than returning partial search results. A cluster-level setting provides a default for search requests with no flag.
Closes #27435
Sometimes, in some places, the clocks are set back across midnight, leading to
overlapping days. This was not handled as expected, and this change fixes this.
Additionally, in this situation it is not true that rounding a time down to the
nearest day is a monotonic operation, as asserted in these tests. This change
suppresses those assertions in those rare cases.
Fixes#27966.
This change removes the InternalClient and the InternalSecurityClient. These are replaced with
usage of the ThreadContext and a transient value, `action.origin`, to indicate which component the
request came from. The security code has been updated to look for this value and ensure the
request is executed as the proper user. This work comes from #2808 where @s1monw suggested
that we do this.
While working on this, I came across index template registries and rather than updating them to use
the new method, I replaced the ML one with the template upgrade framework so that we could
remove this template registry. The watcher template registry is still needed as the template must be
updated for rolling upgrades to work (see #2950).
* Moves more classes over to ToXContentObject/Fragment
* Removes ToXContentToBytes
* Removes ToXContent from Enums
* review comment fix
* slight change to use XContentHelper
These members are default-initialized on construction and then set by the
init() method. It's possible that another thread accessing the object
after init() is called could still see the null/0 values, depending on how
the compiler optimizes the code.
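A toy illustration of the hazard and the fix (the class shape is
assumed, not the actual code):
```
// With plain fields, a thread that obtains the object after init() may
// still observe the default null/0 values because there is no
// happens-before edge. Declaring the fields volatile (or making them
// final and assigning them in the constructor) guarantees visibility.
final class LazyInit {
    private volatile String endpoint; // was a plain field assigned in init()
    private volatile int port;        // was a plain int assigned in init()

    void init(String endpoint, int port) {
        this.endpoint = endpoint;
        this.port = port;
    }

    String describe() {
        // safe: volatile reads observe the values written by init()
        return endpoint + ":" + port;
    }
}
```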
This is the x-pack side of the removal of `accumulateExceptions()` for both `TransportNodesAction` and `TransportTasksAction`.
There are occasional, random failures that occur during API calls that are silently ignored from the caller's perspective, which also leads to weird API responses that contain neither a response nor any errors, which is obviously wrong.
With the leniency in Version.java we failed to really set up BWC
testing for static indices. This change brings back the testing and adds
missing bwc indices.
Relates to elastic/elasticsearch#24732
Persistent tasks should verify that completion notification is done for correct version of the task, otherwise a delayed notification from an old node can accidentally close a newly reassigned task.
PersistentTasksCustomMetadata was using a generic param named `Params`. This conflicted with the imported interface `ToXContent.Params`. The java compiler was preferring the generic param over the interface so everything was fine, but Eclipse apparently prefers the interface in this case, which was screwing up the hierarchy and causing compile errors in Eclipse. This change fixes it by renaming the generic param to `P`.
Removes the last pieces of ActionRequest from PersistentTaskRequest and renames it into PersistTaskParams, which is now just an interface that extends NamedWriteable and ToXContent.
This commit is a response to the renaming of the random ASCII helper
methods in ESTestCase. The name of these methods was changed because they
only produce random strings generated from [a-zA-Z], not from
all ASCII characters.
Add a check for the current state in waitForPersistentTaskStatus before waiting for the next one. This fixes a sporadic failure in the testPersistentActionStatusUpdate test.
Fixes #928
Refactors NodePersistentTask and RunningPersistentTask into a single AllocatedPersistentTask. Makes it possible to update Persistent Task Status via AllocatedPersistentTask.
If a persistent task throws an exception, the persistent tasks framework will no longer try to restart the task. This is a temporary measure to prevent thrashing the cluster with endless restart attempts. We will revisit this in a future version to make the restart process more robust. Please note, however, that if the node executing the task goes down, the task will still be restarted on another node.
Removes the transport layer dependency from PersistentActions, makes PersistentActionRegistry immutable and renames actions into tasks in class and variable names.
Refactors xcontent serialization of Request and Status to use their writable names instead of action name. That simplifies the parsing logic, allows reuse of the same status object for multiple actions and is consistent with how named objects in xcontent are used.
This will allow persistent task implementors to detect whether the executor node has changed or has been unset since the last status update occurred.
This commit moves persistent tasks from ClusterState.Custom to MetaData.Custom and adds ability for the task to remain in the metadata after completion.
Also replaced the DELETING status from JobState with a boolean flag on Job. The state of a job is now stored inside a persistent task in cluster state. Jobs that aren't running don't have a persistent task, so I moved that notion of being deleted to the job config itself.
Original commit: elastic/x-pack@21cd19ca1c
Similarly to task status on normal tasks it's now possible to update task status on the persistent tasks. This should allow updating the state of the running tasks (such as loading, started, etc) as well as store intermediate state or progress.
Original commit: elastic/x-pack@048006b467
A persistent action is a transport-like action that is using the cluster state instead of transport to start tasks. This allows persistent tasks to survive restart of executing nodes. A persistent action can be implemented by extending TransportPersistentAction. TransportPersistentAction will start the task by using PersistentActionService, which controls persistent tasks lifecycle. See TestPersistentActionPlugin for an example implementing a persistent action.
Original commit: elastic/x-pack@5e83f1bfa3
This change switches the merge policy to none (for this specific test) in order to make sure that refreshes are always triggered
by a change in the writer.
Closes #27514
Factors the way in which XContent parsing handles deprecated fields
into a callback that is set at parser construction time. The goals here
are:
1. Remove Log4J as a dependency of XContent so that XContent can be used
by clients without forcing log4j and our particular deprecation handling
scheme.
2. Simplify handling of deprecated fields in tests. Now tests can listen
directly for the deprecation callback rather than digging through a
ThreadLocal.
More accurately, this change begins this work. It deprecates a number of
methods, pointing folks to the new versions of those methods that take
`DeprecationHandler`. The plan is to slowly drop these deprecated
methods. Once they are entirely removed we can remove Log4j as a
dependency of XContent.
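A simplified sketch of the callback shape (the real DeprecationHandler
interface is richer, and the field names below are hypothetical):
```
import java.util.ArrayList;
import java.util.List;

// The handler is supplied at parser construction instead of the parser
// logging through Log4j itself.
interface DeprecationHandler {
    void usedDeprecatedName(String usedName, String currentName);
}

class DeprecationHandlerExample {
    public static void main(String[] args) {
        // in a test, listen for deprecations directly instead of digging
        // through a ThreadLocal:
        List<String> warnings = new ArrayList<>();
        DeprecationHandler handler =
            (used, current) -> warnings.add("[" + used + "] is deprecated, use [" + current + "]");
        handler.usedDeprecatedName("fuzzy_min_sim", "fuzziness"); // hypothetical deprecated field
        System.out.println(warnings);
    }
}
```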
This change adds support for the new ranking evaluation API to the High Level Rest Client.
This mostly means adding support for parsing the various response objects back from the
REST representation. It includes one change to the response syntax where previously we didn't
print the type of the metric details section but we now need it to pick the right parser to
parse this section back.
Closes #28198
Script fields can get a bit more complicated than just stored fields. A script can return null, an object and also an array. Extended parsing to support such valid values. Also renamed the util method from `parseStoredFieldsValue` to `parseFieldsValue`, given that it can parse stored fields but also script fields, anything that's returned as `fields`.
Closes #28380
The test currently makes the assumption that if the underlying directory stops throwing exceptions, we can always open the engine. This is not the case as some errors can cause a corruption marker to be placed in the store.
This commit refactors the test to only check that everything is OK if the engine was successfully opened. On top of that, there is no point in checking replay with no errors as we have another test for that.
Closes #28426
This adds the ability to index term prefixes into a hidden subfield, enabling prefix queries to be run without multitermquery rewrites. The subfield reuses the analysis chain of its parent text field, appending an EdgeNGramTokenFilter. It can be configured with minimum and maximum ngram lengths. Query terms with lengths outside this min-max range fall back to using prefix queries against the parent text field.
The mapping looks like this:
```
"my_text_field" : {
  "type" : "text",
  "analyzer" : "english",
  "index_prefix" : { "min_chars" : 1, "max_chars" : 10 }
}
```
Relates to #27049
This commit switches the internal format of the elasticsearch keystore
to no longer use java's KeyStore class, but instead encrypt the binary
data of the secrets using AES-GCM. The cipher key is generated using
PBKDF2WithHmacSHA512. Tests are also added for backcompat reading the v1
and v2 formats.
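A minimal sketch of the scheme described (the iteration count, key size
and layout are illustrative, not the keystore's exact choices):
```
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class KeystoreCryptoSketch {
    static byte[] encrypt(char[] password, byte[] plaintext) throws Exception {
        SecureRandom random = new SecureRandom();
        byte[] salt = new byte[16];
        byte[] iv = new byte[12]; // 96-bit nonce, standard for GCM
        random.nextBytes(salt);
        random.nextBytes(iv);

        // derive an AES key from the passphrase
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10_000, 128); // illustrative parameters
        SecretKey derived = factory.generateSecret(spec);
        SecretKeySpec aesKey = new SecretKeySpec(derived.getEncoded(), "AES");

        // encrypt the secrets blob with AES-GCM (confidentiality + integrity)
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // a real format must also persist the salt and iv next to the ciphertext
        return ciphertext;
    }
}
```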
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
In some cases testShrinkIndexPrimaryTerm creates then 'mutates' 210
shards. If each shard opens more than 10 files (translog, lucene index),
we exceed the maximum number of allowed file handles. In our test, the
number of file handles is limited to 2048 by HandleLimitFS. This commit
reduces the number of shards in testShrinkIndexPrimaryTerm to avoid such
errors.
Closes #28153
In rare cases the total nanoseconds for an entire window of operations can be 0
nanoseconds, causing the assertion in
QueueResizingEsThreadPoolExecutor.calculateLambda to trip. This ensures that we
calculate the lambda value with at least 1 nanosecond.
Resolves #27607
Currently this method parses the string as a double. This means that it
might lose accuracy if the value is a long that is greater than
2^52. This commit changes this method to try to detect whether the
string represents a long first.
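A sketch of the approach (illustrative, not the exact method that was
changed):
```
// Try long first so integral values above 2^52 keep full precision, and
// only fall back to double when long parsing fails.
final class NumberParsingSketch {
    static Number coerce(String value) {
        try {
            return Long.parseLong(value);
        } catch (NumberFormatException e) {
            // decimal point, exponent, or out of long range:
            // fall back to double, accepting possible precision loss
            return Double.parseDouble(value);
        }
    }

    public static void main(String[] args) {
        System.out.println(coerce("9007199254740993")); // 2^53 + 1, exact as a long
        System.out.println(coerce("3.14"));             // parsed as a double
    }
}
```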
Today after writing an operation to an engine, we will call
`IndexShard#afterWriteOperation` to flush a new commit if needed. The
`shouldFlush` condition is purely based on the uncommitted translog size
and the translog flush threshold size setting. However this can cause a
replica to execute an infinite loop of flushing in the following situation.
1. Primary has a fully baked index commit with its local checkpoint
equal to max_seqno
2. Primary sends that fully baked commit, then replays all retained
translog operations to the replica
3. No operations are added to Lucene on the replica, as the seqno of these
operations is at most the local checkpoint
4. Once translog operations are replayed, the target calls
`IndexShard#afterWriteOperation` to flush. If the total size of the
replaying operations exceeds the flush threshold size, this call will
invoke `Engine#flush`. However the engine won't flush as its index writer
does not have any uncommitted operations. The method
`IndexShard#afterWriteOperation` will keep flushing as the condition
`shouldFlush` is still true.
This issue can be avoided if we always flush when the `shouldFlush`
condition is true.
This PR removes the previously deprecated `isShardsAcked()` method in
favour of `isShardsAcknowledged()` on `CreateIndexResponse`, `CreateIndexClusterStateUpdateResponse` and `RolloverResponse`.
Related to #27784
Follow-up of #27819
* es/master:
[Docs] Fix explanation for `from` and `size` example (#28320)
Adapt bwc version after backport #28358
Always return the after_key in composite aggregation response (#28358)
Adds test name to MockPageCacheRecycler exception (#28359)
Adds a note in the `terms` aggregation docs regarding pagination (#28360)
[Test] Fix DiscoveryNodesTests.testDeltas() (#28361)
Update packaging tests to work with meta plugins (#28336)
Remove Painless Type from MethodWriter in favor of Java Class. (#28346)
[Doc] Fixs typo in reverse-nested-aggregation.asciidoc (#28348)
Reindex: Shore up rethrottle test
Only assert single commit iff index created on 6.2
isHeldByCurrentThread should return primitive bool
[Docs] Clarify `html` encoder in highlighting.asciidoc (#27766)
Fix GeoDistance query example (#28355)
Settings: Introduce settings updater for a list of settings (#28338)
Adapt bwc version after backport #28310
This change adds the `after_key` of a composite aggregation directly in the response.
It is redundant when all buckets are not filtered/removed by a pipeline aggregation since in this case the `after_key` is always the last bucket
in the response. Though when using a pipeline aggregation to filter composite buckets, the `after_key` can be lost if the last bucket is filtered.
This commit fixes this situation by always returning the `after_key` in a dedicated section.
The DiscoveryNodes.Delta was changed in #28197. Previous/Master nodes
are now always set in the `Delta` (before the change they were set only
if the master changed) and the `masterChanged()` method is now based on
object equality and nodes ephemeral ids (before the change it was based
on nodes id).
This commit adapts the DiscoveryNodesTests.testDeltas() to reflect the
changes.
We introduced a single commit assertion when opening an index but
creating a new translog. However, this assertion does not hold in this
situation:
1. A replica with two commits c1 and c2 starts peer-recovery with c1
2. The recovery is sequence-based recovery but the primary is before 6.2 so
it sent true for “createNewTranslog”
3. Replica opens engine and creates translog. We expect "open index and
create translog" to have 1 commit but we have c1 and c2.
This commit makes sure to assert this iff the index was created on 6.2+.
This introduces a settings updater that allows one to specify a list of
settings. Whenever one of those settings changes, the whole block of
settings is passed to the consumer.
This also fixes an issue with affix settings, when used in combination
with group settings, which could result in settings not being found when
trying to get a setting for a namespace.
Lastly, logging has been slightly changed, so that filtered settings now
only log the setting key.
Another bug has been fixed for the mock log appender, which did not
work when checking for the exact message.
Closes #28047
* es/master:
Remove redundant argument for buildConfiguration of s3 plugin (#28281)
Completely remove Painless Type from AnalyzerCaster in favor of Java Class. (#28329)
Fix spelling error
Reindex: Wait for deletion in test
Reindex: log more on rare test failure
Ensure we protect Collections obtained from scripts from self-referencing (#28335)
[Docs] Fix asciidoc style in composite agg docs
Adds the ability to specify a format on composite date_histogram source (#28310)
Provide a better error message for the case when all shards failed (#28333)
[Test] Re-Add integer_range and date_range field types for query builder tests (#28171)
Added Put Mapping API to high-level Rest client (#27869)
Revert change that does not return all indices if a specific alias is requested via get alias api. (#28294)
Painless: Replace Painless Type with Java Class during Casts (#27847)
Notify affixMap settings when any under the registered prefix matches (#28317)
Self-referencing maps can cause a StackOverflowError if they are iterated, e.g. in their toString methods. This change adds some protection to the usage of those collections.
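A toy demonstration of the failure mode (not the code from this change);
note that a direct self-reference is already handled by
AbstractMap/AbstractCollection, but an indirect cycle is not:
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SelfReference {
    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        List<Object> list = new ArrayList<>();
        map.put("list", list);
        list.add(map);
        try {
            String s = map.toString(); // map -> list -> map -> ... recurses
            System.out.println(s);
        } catch (StackOverflowError e) {
            System.out.println("found the cycle the hard way: StackOverflowError");
        }
    }
}
```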
This commit adds the ability to specify a date format on the `date_histogram` composite source.
If the format is defined, the key for the source is returned as a formatted date.
Closes #27923
The tests for those field types were removed in #26549 because the range mapper
was moved to a module, but later this mapper was moved back to core in #27854.
This change adds back those two field types like before to the general setup in
AbstractQueryTestCase and adds some specifics to the RangeQueryBuilder and
TermsQueryBuilder tests. Also adding back an integration test in SearchQueryIT that
has been removed before but that can be kept with the mapper back in core now.
Relates to #28147
* Notify affixMap settings when any under the registered prefix matches
Previously if an affixMap setting was registered, and then a completely
different setting was applied, the affixMap update consumer would be notified
with an empty map. This caused settings that were previously set to be unset in
local state in a consumer that assumed it would only be called when the affixMap
setting was changed.
This commit changes the behavior so if a prefix `foo.` is registered, any
setting under the prefix will have the update consumer notified if there are
changes starting with `foo.`.
Resolves #28316
* Add unit test
* Address feedback
* master:
Trim down usages of `ShardOperationFailedException` interface (#28312)
Do not return all indices if a specific alias is requested via get aliases api.
[Test] Lower bwc version for rank-eval rest tests
CountedBitSet doesn't need to extend BitSet. (#28239)
Calculate sum in Kahan summation algorithm in aggregations (#27807) (#27848)
Remove the `update_all_types` option. (#28288)
Add information when master node left to DiscoveryNodes' shortSummary() (#28197)
Provide explanation of dangling indices, fixes#26008 (#26999)
In many cases we use the `ShardOperationFailedException` interface to abstract an exception that can only be of one type, namely `DefaultShardOperationException`. There is no need to use the interface in such cases, the concrete type should be used instead. That has the additional advantage of simplifying parsing such exceptions back from rest responses for the high-level REST client
If a get alias api call requests a specific alias pattern then
indices not having any matching aliases should not be included in the response.
Closes #27763
* es/master: (38 commits)
Build: Add pom generation to meta plugins (#28321)
Add 6.3 version constant to master
Minor improvements to translog docs (#28237)
[Docs] Remove typo in painless-getting-started.asciidoc
Build: Fix meta plugin usage in integ test clusters (#28307)
Painless: Add spi jar that will be published for extending whitelists (#28302)
mistyping in one of the highlighting examples comment -> content (#28139)
Documents applicability of term query to range type (#28166)
Build: Omit dependency licenses check for elasticsearch deps (#28304)
Clean up commits when global checkpoint advanced (#28140)
Implement socket and server ChannelContexts (#28275)
Plugins: Fix meta plugins to install bundled plugins with their real name (#28285)
Build: Fix meta plugin integ test installation (#28286)
Modify Abstract transport tests to use impls (#28270)
Fork Groovy compiler onto compile Java home
[Docs] Update tophits-aggregation.asciidoc (#28273)
Docs: match between snippet to its description (#28296)
[TEST] fix RequestTests#testSearch in case search source is not set
REST high-level client: remove index suffix from indices client method names (#28263)
Fix simple_query_string on invalid input (#28219)
...
Today we keep multiple index commits based on the current global
checkpoint, but only clean up unneeded index commits when we have a new
index commit. However, we can release the old index commits earlier once
the global checkpoint has advanced enough. This commit makes an engine
revisit the index deletion policy whenever a new global checkpoint value
is persisted and advanced enough.
Relates #10708
This change converts any exception that occurs during the parsing of
a simple_query_string to a match_no_docs query (instead of a null query)
when leniency is activated.
Closes #28204
This commit adds an extension point for client actions to action
plugins. This is useful for plugins to expose the client-side actions
without exposing the server-side implementations to the client. The
default implementation, of course, delegates to extracting the
client-side action from the server-side implementation.
Relates #28280
MixedClusterClientYamlTestSuiteIT sometimes fails when executing the
indices.stats/13_fields/* REST tests. It does not reproduce locally,
but the execution logs show that it failed when a shard was relocating
during the set up execution. This commit changes the set up so that it
now waits for all shards to be active before executing the tests.
Closes #26732, #27146
* master:
Fix third-party audit tasks on JDK 8
Remove duplicated javadoc `fieldType` param
Handle 5.6.6 and 6.1.2 release
Introduce multi-release JAR
Move the multi-get response tests to server
Require JDK 9 for compilation (#28071)
Revert "[Docs] Fix Java Api index administration usage (#28133)"
Revert "[Docs] Fix base directory to include for put_mapping.asciidoc"
Added multi get api to the high level rest client.
[Docs] Clarify numeric datatype ranges (#28240)
[Docs] Fix base directory to include for put_mapping.asciidoc
Open engine should keep only starting commit (#28228)
This commit introduces the ability for the core Elasticsearch JAR to be
a multi-release JAR containing code that is compiled for JDK 8 and code
that is compiled for JDK 9. At runtime, a JDK 8 JVM will ignore the JDK
9 compiled classfiles, and a JDK 9 JVM will use the JDK 9 compiled
classfiles instead of the JDK 8 compiled classfiles. With this work, we
utilize the new JDK 9 API for obtaining the PID of the running JVM,
instead of relying on a hack.
For now, we want to keep IDEs on JDK 8 so when the build is in an IDE we
ignore the JDK 9 source set (as otherwise the IDE would give compilation
errors). However, with this change, running Gradle from the command-line
now requires JAVA_HOME and JAVA_9_HOME to be set. This will require
follow-up work in our CI infrastructure and our release builds to
accommodate this change.
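For example, obtaining the PID (an illustrative contrast; the JDK 8
variant parses the conventional "pid@host" MXBean name, which is the
kind of hack being replaced):
```
import java.lang.management.ManagementFactory;

public class Pid {
    public static void main(String[] args) {
        // the JDK 9 classfile in the multi-release JAR can ask the platform directly
        long pid = ProcessHandle.current().pid();

        // the JDK 8 classfile has to rely on the "pid@host" format of the
        // runtime MXBean name, which is not guaranteed by the spec
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        long legacyPid = Long.parseLong(jvmName.split("@")[0]);

        System.out.println(pid + " == " + legacyPid);
    }
}
```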
Relates #28051
Keeping unsafe commits when opening an engine can be problematic because
these commits are not safe at the recovering time but they can suddenly
become safe in the future. The following issues can happen if unsafe
commits are kept onInit.
1. Replica can use an unsafe commit in peer-recovery. This happens when a
replica with a safe commit c1 (max_seqno=1) and an unsafe commit c2
(max_seqno=2) recovers from a primary with c1 (max_seqno=1). If a new
document (seqno=2) is added without flushing, the global checkpoint is
advanced to 2; and when the replica recovers again, it will use the unsafe
commit c2 (max_seqno=2 <= gcp=2) as the starting commit for sequence-based
recovery even though the commit c2 contains a stale operation, and the
document (with seqno=2) will not be replicated to the replica.
2. Min translog gen for recovery can go backwards in peer-recovery. This
happens with a replica that has a safe commit c1 (local_checkpoint=1,
recovery_translog_gen=1) and an unsafe commit c2 (local_checkpoint=2,
recovery_translog_gen=2). The replica recovers from a primary, keeps
c2 as the last commit, then sets last_translog_gen to 2. Flushing a new
commit on the replica will cause an exception, as the new last commit c3
will have recovery_translog_gen=1. The recovery translog generation of a
commit is calculated based on the current local checkpoint: the local
checkpoint of c3 is 1 while the local checkpoint of c2 is 2.
3. A commit without translog can be used for recovery. An old index, which
was created before multiple-commits was introduced (v6.2), may not have a
safe commit. If that index has a snapshotted commit without translog and
an unsafe commit, the policy can consider the snapshotted commit a
safe commit for recovery even though the commit does not have a translog.
These issues can be avoided if the combined deletion policy keeps only
the starting commit onInit.
Relates #27804
Relates #28181
* es/master: (30 commits)
[Docs] Fix Java Api index administration usage (#28133)
Fix eclipse build. (#28236)
Never return null from Strings.tokenizeToStringArray (#28224)
Fallback to TransportMasterNodeAction for cluster health retries (#28195)
[Docs] Changes to ingest.asciidoc (#28212)
TEST: Update logging for testAckedIndexing
[GEO] Add WKT Support to GeoBoundingBoxQueryBuilder
Painless: Add whitelist extensions (#28161)
Fix daitch_mokotoff phonetic filter to use the dedicated Lucene filter (#28225)
Avoid doing redundant work when checking for self references. (#26927)
Fix casts in HotThreads. (#27578)
Ignore the `-snapshot` suffix when comparing the Lucene version in the build and the docs. (#27927)
Allow update of `eager_global_ordinals` on `_parent`. (#28014)
Fix NPE on composite aggregation with sub-aggregations that need scores (#28129)
`MockTcpTransport` to connect asynchronously (#28203)
Fix synonym phrase query expansion for cross_fields parsing (#28045)
Introduce elasticsearch-core jar (#28191)
#28218: Update the Lucene version for 6.2.0 after backport
upgrade to lucene 7.2.1 (#28218)
[Docs] Fix an error in painless-types.asciidoc (#28221)
...
The Java API documentation for index administration is currently wrong because
the PutMappingRequestBuilder#setSource(Object... source) and
CreateIndexRequestBuilder#addMapping(String type, Object... source) methods
delegate to methods that check that the input arguments are valid key/value
pairs:
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-admin-indices.html
This changes the docs so the java api code examples are included from
documentation integration tests, so we detect compile and runtime issues earlier.
Closes #28131
This method has a different contract than all the other methods in this class, returning null instead of an empty array when receiving a null input. While switching some methods over from delimitedListToStringArray to tokenizeToStringArray, this resulted in unexpected nulls in some places of our code.
Relates #28213
Currently we test all maps, arrays or iterables. However, in the case that maps
contain sub maps for instance, we will test the sub maps again even though the
work has already been done for the top-level map.
Relates #26907
A bug introduced in #24407 currently prevents `eager_global_ordinals` from
being updated. This new approach should fix the issue while still allowing
mapping updates to not specify the `_parent` field if it doesn't need
updating, which was the goal of #24407.
The composite aggregation defers the collection of sub-aggregations to a second pass that visits documents only if they
appear in the top buckets. However, the scorer for sub-aggregations is not set on this second pass, generating an NPE if any sub-aggregation
tries to access the score. This change creates a scorer for the second pass and makes sure that sub-aggs can use it safely to check the score of
the collected documents.
The method `initiateChannel` on `TcpTransport` is explicit in that
channels can be connected asynchronously. All production implementations
do connect asynchronously. Only the blocking `MockTcpTransport`
connects in a synchronous manner. This avoids testing some of the
blocking code in `TcpTransport` that waits on connections to complete.
Additionally, it requires a more extensive method signature than
required for other transports.
This commit modifies the `MockTcpTransport` to make these connections
asynchronously on a different thread. Additionally, it simplifies the
`initiateChannel` method signature.
* Fix synonym phrase query expansion for cross_fields parsing
The `cross_fields` mode for the query parser ignores phrase queries generated by multi-word synonyms.
In such a case only the first field of each analyzer group is kept. This change fixes this issue
by expanding the phrase query for each analyzer group to **all** fields using a disjunction max query.
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and some of Loggers are
moved as JarHell depends on them.
* Adds metadata to rewritten aggregations
Previous to this change, if any filters in the filters aggregation were rewritten, the rewritten version of the FiltersAggregationBuilder would not contain the metadata from the original. This is because `AbstractAggregationBuilder.getMetadata()` returns an empty map when no metadata is set.
Closes #28170
* Always set metadata when rewritten
A replica always keeps a safe commit and starts peer-recovery with
that commit; file-based recovery only happens if new operations are
added to the primary and the required translog is not fully retained. In
the test, we tried to produce this condition by flushing a new commit in
order to trim all translog. However, if the new global checkpoint is not
persisted yet, we will keep two commits and not trim the translog. This
commit tightens the file-based condition in the test by waiting for the
global checkpoint to be persisted properly on the new primary before flushing.
Closes #28209
Relates #28181
We introduced a new option `createNewTranslog` in #28181. However, we
named that parameter deleteLocalTranslog in other places. This commit
makes sure to have consistent naming in these places.
Relates #28181
The global checkpoint should be assigned to unassigned rather than 0. If
a single document is indexed and the global checkpoint is initialized
with 0, the first commit is safe, which the test does not expect.
Relates #28038
Today a replica starts a peer-recovery with the last commit. If the last
commit is not a safe commit, a replica will immediately fallback to the
file based sync which is more expensive than the sequence based
recovery. This commit modifies the peer-recovery in replica to start
with a safe commit. Moreover we can keep the existing translog on the
target if the recovery is sequence based recovery.
Relates #10708
We aim to always have a safe index once the recovery is done.
This invariant does not hold if the translog is manually truncated by
users because the truncate translog cli resets the global checkpoint to
unassigned. This commit assigns the global checkpoint to the max_seqno
of the last commit when truncating translog. We can only safely do it
because the truncate translog command will generate a new history uuid
for that shard. With a new history UUID, sequence-based recovery between
that shard and other old shards will be disabled.
Relates #28181
* master:
Fix lock accounting in releasable lock
Add ability to associate an ID with tasks (#27764)
[DOCS] Removed differencies between text and code (#27993)
text fixes (#28136)
Update getting-started.asciidoc (#28145)
[Docs] Spelling fix in painless-getting-started.asciidoc (#28187)
Fixed the cat.health REST test to accept 4ms, not just 4.0ms (#28186)
Do not keep 5.x commits once having 6.x commits (#28188)
Releasable locks hold accounting of who holds the lock when assertions
are enabled. However, the underlying lock can be re-entrant, yet we mark
the lock as not held by the current thread as soon as the releasable is
closed. For a re-entrant lock this is not right because the thread could
have entered the lock multiple times. Instead, we have to count how many
times the thread has entered the lock and only mark the lock as not held
by the current thread when the counter reaches zero.
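A minimal sketch of the counting scheme (the class shape is assumed,
simplified from the real ReleasableLock):
```
import java.util.concurrent.locks.ReentrantLock;

final class ReleasableLockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final ThreadLocal<Integer> holdCount = ThreadLocal.withInitial(() -> 0);

    AutoCloseable acquire() {
        lock.lock();
        holdCount.set(holdCount.get() + 1);
        return () -> {
            int remaining = holdCount.get() - 1;
            assert remaining >= 0;
            holdCount.set(remaining);
            // only when remaining == 0 has the thread fully released the lock
            lock.unlock();
        };
    }

    boolean isHeldByCurrentThread() {
        return holdCount.get() > 0;
    }
}
```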
Relates #28202
Currently we keep a 5.x index commit as a safe commit until we have a
6.x safe commit. During that time, if peer-recovery happens, a primary
will send a 5.x commit in file-based sync and the recovery will even
fail as the snapshotted commit does not have sequence number tags.
This commit updates the combined deletion policy to delete legacy
commits if there are 6.x commits.
Relates #27606
Relates #28038
* master: (43 commits)
Rename core module to server (#28180)
upgraded jna from 4.4.0-1 to 4.5.1 (#28183)
[TEST] Do not call RandomizedTest.scaledRandomIntBetween from multiple threads
Primary send safe commit in file-based recovery (#28038)
[Docs] Correct response json in rank-eval.asciidoc
Add scroll parameter to _reindex API (#28041)
Include all sentences smaller than fragment_size in the unified highlighter (#28132)
Modifies the JavaAPI docs related to AggregationBuilder
[Docs] Improvements in script-fields.asciidoc (#28174)
[Docs] Remove Kerberos/SPNEGO Shield plugin (#28019)
Ignore null value for range field (#27845) (#28116)
Fix environment variable substitutions in list setting (#28106)
docs: Replaces indexed script java api docs with stored script api docs
test: ensure we endup with a single segment
Make sure that we don't detect files as maven coordinate when installing a plugin (#28163)
[Tests] temporary disable meta plugin rest tests #28163
meta-plugin should install bin and config at the top level (#28162)
Painless: Add public member read/write access test. (#28156)
Docs: Clarify password protection support with keystore (#28157)
[Docs] fix plugin properties inclusion for plugins authors
...