When converting the source for an indexing request to JSON, the
conversion can throw an I/O exception which we swallow and proceed with
logging to the slow log. The cause of the I/O exception is lost. This
commit changes this behavior and chooses to drop the entry from the slow
logs and instead lets an exception percolate up to the indexing
operation listener loop. Here, the exception will be caught and logged
at the warn level.
Today we can end up in a situation where the cluster state contains
unknown or invalid settings. This can happen easily during a rolling
upgrade. For example, consider two nodes that are on a version that
considers the setting foo.bar to be known and valid. Assume one of these
nodes is restarted on a higher version that considers foo.bar to now be
either unknown or invalid, and then the second node is restarted
too. Now, both nodes will be on a version that considers foo.bar to be
unknown or invalid yet this setting will still be contained in the
cluster state. This means that if a cluster settings update is applied
and we validate the settings update with the existing settings then
validation will fail. In such a state, the offending setting can not
even be removed. This commit helps out with this situation by archiving
any settings that are unknown or invalid at the time that a settings
update is applied. This allows the setting update to go through, and the
archived settings can be removed at a later time.
These can be seen at the debug level via cluster state update logging
but really they should be more visible like index creation and
deletion. This commit adds info-level logging for template puts and
deletes.
This interning is completely unnecessary because we look up the marker
by the prefix (value, not identity) anyway. This means that regardless
of the identity of the prefix, we end up with the same marker. That is
all that we really care about here.
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation that some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not server
Elasticsearch. This is a good thing, this was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
The requiresKeystore flag was removed from PluginInfo in 6.3.0. This
commit fixes a pair of code comments that incorrectly refer to this
version as 7.0.0.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
While trying to reroute a shard to or from a non-data node (a node with ``node.data=false``), I encountered a null pointer exception. Though an exception is to be expected, the NPE was occurring because ``allocation.routingNodes()`` would not contain any non-data nodes, so attempting ``allocation.routingNodes.node(non-data-node)`` would not find the node and thus threw the NPE. This occurred regardless of whether I was rerouting to or from a non-data node.
This PR adds a check (as well as a test for these use cases) to return a legible, useful exception if the discovery node you are rerouting to or from is not a data node.
When an index writer encounters a tragic exception, it could be a
Throwable and not an Exception. Yet we blindly cast the tragic exception
to an Exception, which can result in a ClassCastException. This commit
addresses this by checking if the tragic exception is an Exception and
otherwise wrapping the Throwable in a RuntimeException if it is not. We
choose to wrap the Throwable instead of passing it around because
passing it around leads to changing a lot of places where we handle
Exception to handle Throwable instead. In general, we have tried to
avoid handling Throwable and instead let those bubble up to the uncaught
exception handler.
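For illustration only (not the actual engine code), the wrapping described above boils down to a check like this:
```java
// Illustrative sketch: surface a tragic Throwable as an Exception without
// widening downstream signatures from Exception to Throwable.
public final class TragedyUtils {

    private TragedyUtils() {}

    static Exception asException(Throwable tragedy) {
        // Errors and other non-Exception throwables are wrapped instead of cast.
        return tragedy instanceof Exception
                ? (Exception) tragedy
                : new RuntimeException("tragic event", tragedy);
    }
}
```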
Log4j2 provides a wide range of logging methods. Our code typically only uses a subset of them. In particular, uses of the methods trace|debug|info|warn|error|fatal(Object) or trace|debug|info|warn|error|fatal(Object, Throwable) have all been wrong, leading to not properly logging the provided message. To prevent these issues in the future, the corresponding Logger methods have been blacklisted.
This commit restores the handling of tiebreaker for multi_match
cross fields query. This functionality was lost during a refactoring
of the multi_match query (#25115).
Fixes #28933
This commit makes the controller spawner also look under modules. It
also fixes a bug in module security policy loading where the module is a
meta plugin.
Today we check for a few cases where we should maybe die before failing
the engine (e.g., when a merge fails). However, there are still other
cases where a fatal error can be hidden from us (for example, a failed
index writer commit). This commit modifies the mechanism for failing the
engine to always check for a fatal error before failing the engine.
Today when requesting _all we return all nodes regardless of what other
node qualifiers are in the request. This is contrary to how the
remainder of the API behaves which acts as additive and subtractive
based on the qualifiers and their ordering. It is also contrary to how
the wildcard * behaves. This commit removes the special handling for
_all so that it behaves identically to the wildcard *.
Relates #28971
* Remove Booleans use from XContent and ToXContent
This removes the use of the `common.Boolean` class from two of the XContent
classes, so they can be decoupled from the ES code as much as possible.
Related to #28754, #28504
Today, failures from the primary-replica resync are ignored as a best
effort to not mark shards as stale during a cluster restart. However,
this can be problematic if replicas fail to execute resync operations
but handle subsequent write operations just fine. When this happens, the
replica will miss some operations from the new primary. There are some
implications if the local checkpoint on the replica can't advance because
of the missing operations.
1. The global checkpoint won't advance - this causes both primary and
replicas to keep many index commits
2. The engine on the replica won't flush periodically because the
uncommitted stats are calculated based on the local checkpoint
3. The replica can use a large number of bitsets to keep track of
operations' seqnos
However, we can prevent this issue while still preserving the best-effort
behavior by failing replicas that fail to execute resync operations
without marking them as stale. We have prepared the required
infrastructure for this change in #28049 and #28054.
Relates #24841
This change replaces the use of string concatenation with a call to
String.join(). String concatenation might be quadratic, unless the compiler can
optimise it away, whereas String.join() is more reliably linear. There can
sometimes be a large number of pending ClusterState update tasks and #28920
includes a report that this operation sometimes takes a long time.
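A small illustration of the difference (the task names here are made up):
```java
import java.util.List;

public class JoinExample {
    public static void main(String[] args) {
        List<String> pendingTasks = List.of("task-1", "task-2", "task-3");

        // Repeated concatenation: each += copies the whole string built so far,
        // which can degrade to O(n^2) for n tasks.
        String summary = "";
        for (String task : pendingTasks) {
            summary += task + ", ";
        }

        // String.join: builds the result in a single pass, reliably linear.
        String joined = String.join(", ", pendingTasks);

        System.out.println(summary);
        System.out.println(joined);
    }
}
```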
At one point, modules and plugins were very different. But effectively
now they are the same, just from different directories. This commit
unifies the loading methods so they are simply two different
directories. Note that the main codepath to load plugin bundles had
duplication (was not calling getPluginBundles) since previous
refactorings to add meta plugins. Note this change also rewords the
primary exception message when a plugin descriptor is missing, as the
wording asking if the plugin was built before 2.0 isn't really
applicable anymore (it is highly unlikely someone tries to install a 1.x
plugin on any modern version).
This allows us to remove another dependency in the decoupling of the XContent
code. Rather than move this class over or decouple it, it can simply be removed.
Relates tangentially to #28504
These classes are used only in two places, and can be replaced by the
`CharArrayReader` and `CharArrayWriter`. The JDK can also perform lock biasing
and elision as well as escape analysis to optimize away non-contended locks,
rendering their lock-free implementations unnecessary.
Ingest has been failing to apply, into the in-memory representation,
existing pipelines from cluster state that are no longer valid. One example of
this is a pipeline with a script processor. If a cluster starts up with scripting
disabled, these pipelines will not be loaded. Even though GETing a pipeline worked,
indexing operations claimed that this pipeline did not exist. This is because one
gets its information from cluster state and the other from an in-memory data structure.
Now, two things happen:
1. suppress the exceptions until after other successful pipelines are loaded
2. replace failed pipelines with a placeholder pipeline
If the pipeline execution service encounters the stubbed pipeline, it is known that
something went wrong at the time of pipeline creation and an exception was thrown to
the user at some point at start-up.
Closes #28269
This switches the underlying byte output representation used by default in
`XContentBuilder` from `BytesStreamOutput` to a `ByteArrayOutputStream` (an
`OutputStream` can still be specified manually)
This is groundwork to allow us to decouple `XContent*` from the rest of the ES
core code so that it may be factored into a separate jar.
Since `BytesStreamOutput` was not using the recycling instance of `BigArrays`,
this should not affect the circuit breaking capabilities elsewhere in the
system.
Relates to #28504
* Factor UnknownNamedObjectException into its own class
This moves the inner class `UnknownNamedObjectException` from
`NamedXContentRegistry` into a top-level class. This is so that
`NamedXContentRegistry` doesn't have to depend on StreamInput and StreamOutput.
Relates to #28504
This reverts commit f057fc294a.
The rescorer does not resort the collapsed values inside the top docs
during rescoring. For this reason the Lucene rescorer is not compatible
with collapsing.
Relates #27243
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily they are both
encoded exactly the same way, so there is no compatibility layer needed for
backwards compatibility.
Relates to #28504
* Remove BytesRef usage from XContentParser and its subclasses
This removes all the BytesRef usage from XContentParser in favor of directly
returning a CharBuffer (this was originally what was returned, it was just
immediately wrapped in a BytesRef).
Relates to #28504
* Rename method after Ryan's feedback
* Wrap stream passed to createParser in try-with-resources
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` instances in the enclosing (or a new) try-with-resources block.
This ensures the `BytesReference.streamInput()` is closed.
Relates to #28504
* Use try-with-resources instead of closing in a finally block
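Sketched with a plain `InputStream` rather than the actual `BytesReference.streamInput()` call, the shape of the change is:
```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourcesExample {

    static int readFirstByte(byte[] bytes) throws IOException {
        // Before: close in a finally block.
        InputStream legacy = new ByteArrayInputStream(bytes);
        try {
            legacy.read();
        } finally {
            legacy.close();
        }

        // After: try-with-resources guarantees the stream is closed even if
        // reading throws, and keeps suppressed exceptions intact.
        try (InputStream in = new ByteArrayInputStream(bytes)) {
            return in.read();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstByte(new byte[] {42}));
    }
}
```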
This change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the parts
that contain the stop word. For instance, a query like "the AND fox" should ignore "the" if it is considered a stop word instead of
adding a match_no_docs query.
This change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such case if `analyze_wildcard` is true and `the`
is considered as a stop word this part of the query is rewritten into a match_no_docs query. Since it's a prefix query this change forces the prefix query
on `the` even if it is removed from the analysis.
Fixes #28855
Fixes #28856
Pruning tombstones is quite expensive since we have to walk through all
deletes in the live version map and acquire a lock on every value even though
it's impossible to prune it. This change does a pre-check to see if a delete is old enough
and, if not, skips acquiring the lock.
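A simplified sketch of the pre-check idea, using a hypothetical tombstone map rather than the real LiveVersionMap:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: test the cheap age condition first, and only acquire
// the per-key lock for entries that might actually be pruned.
class TombstonePruner {

    static final class DeleteEntry {
        final long timestampMillis;
        final ReentrantLock lock = new ReentrantLock();

        DeleteEntry(long timestampMillis) {
            this.timestampMillis = timestampMillis;
        }
    }

    private final Map<String, DeleteEntry> tombstones = new ConcurrentHashMap<>();

    void prune(long nowMillis, long retentionMillis) {
        for (Map.Entry<String, DeleteEntry> e : tombstones.entrySet()) {
            DeleteEntry entry = e.getValue();
            // Pre-check without locking: skip entries that are too young to prune.
            if (nowMillis - entry.timestampMillis <= retentionMillis) {
                continue;
            }
            entry.lock.lock();
            try {
                // Re-check under the lock before removing.
                if (nowMillis - entry.timestampMillis > retentionMillis) {
                    tombstones.remove(e.getKey(), entry);
                }
            } finally {
                entry.lock.unlock();
            }
        }
    }
}
```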
Increase the default limit of `index.highlight.max_analyzed_offset` from the previous 10K to 1M.
Enhance the error message emitted when the offset limit is exceeded to include the field name, index name and doc_id.
Relates to https://github.com/elastic/kibana/issues/16764
When virtual lock is not possible because JNA is unavailable, we log a
warning message. Yet, this log message refers to mlockall rather than
virtual lock, presumably because of a copy/paste error. This commit
fixes this issue.
This commit replaces `org.apache.logging.log4j.util.Supplier` by
`java.util.function.Supplier` in non-logging code. These usages are
neither incorrect nor wrong, but rather accidental. I think our
intention was to use the JDK's Supplier in these places.
* Reject regex search if regex string is too long (#28344)
* Add docs
* Introduce index level setting `index.max_regex_length`
to control the maximum length of the regular expression
Closes #28344
Today we have two test base classes that have a lot in common when it comes to testing wire and xcontent serialization: `AbstractSerializingTestCase` and `AbstractXContentStreamableTestCase`. There are subtle differences though between the two, in the way they work, what can be overridden and features that they support (e.g. insertion of random fields).
This commit introduces a new base class called `AbstractWireTestCase` which holds all of the serialization test code in common between `Streamable` and `Writeable`. It has two minimal subclasses called `AbstractWireSerializingTestCase` and `AbstractStreamableTestCase` which are specialized for `Writeable` and `Streamable`.
This commit also introduces a new test class called `AbstractXContentTestCase` for all of the xContent testing, which holds a testFromXContent method for parsing and rendering to xContent. This one can be delegated to from the existing `AbstractStreamableXContentTestCase` and `AbstractSerializingTestCase` so that we avoid code duplication as much as possible and all these base classes offer the same functionalities in the same way. Having this last base class decoupled from the serialization testing may also help with the REST high-level client testing, as there are some classes where it's hard to implement equals/hashcode and this makes it possible to override `assertEqualInstances` for custom equality comparisons (also, this base class doesn't require implementing equals/hashcode as it doesn't test such methods).
This commit fixes a bug that was introduced in PR #27415 for 6.1
and 7.0 where a change to support MULTIPOINT shapes mucked up
indexing of standalone points.
Previously the message reported when `node_concurrent_outgoing_recoveries`
resulted in a `THROTTLE` decision included the reporting node's ID rather than
that of the primary. This commit fixes that.
Fixes #28777.
This commit makes AcknowledgedResponse implement ToXContentObject, so that the response knows how to print its own content out to XContent, which allows us to remove AcknowledgedRestListener.
These tests need to be skipped. They cause plugins to be loaded which
causes a child classloader to be opened. We do not want to add the
permissions to be able to close a classloader solely for these tests,
and the JARs created in the test can not be deleted on Windows until the
classloader is closed. Since this will not happen before test teardown,
the test will fail on Windows. So, we skip these tests.
This allows us to save a bit of code, but also adds more coverage as it tests serialization which was missing in some of the existing tests. Also it requires implementing equals/hashcode and we get the corresponding tests for them for free from the base test class.
* Pass InputStream when creating XContent parser
Rather than passing the raw `BytesReference` in when creating the xcontent
parser, this passes the StreamInput (which is an InputStream), this allows us to
decouple XContent from BytesReference.
This also removes the use of `commons.Booleans` so it doesn't require more
external commons classes.
Related to #28504
* Undo boolean removal
* Enhance deprecation javadoc
Add support for version and version_type in ingest pipelines
Add support for setting document version and version type in set
processor of an ingest pipeline.
* Remove log4j dependency from elasticsearch-core
This removes the log4j dependency from our elasticsearch-core project. It was
originally necessary only for our jar classpath checking. It is now replaced by
a `Consumer<String>` so that the es-core dependency doesn't have external
dependencies.
The parts of #28191 which were moved in conjunction (like `ESLoggerFactory` and
`Loggers`) have been moved back where appropriate, since they are not required
in the core jar.
This is tangentially related to #28504
* Add javadocs for `output` parameter
* Change @code to @link
Pruning tombstones is best effort and should not block if a key is currently
locked. This can cause a deadlock in rare situations if we switch off the
append-only optimization while heavily updating the same key in the engine
while the LiveVersionMap is locked. This is very rare since this code
path is only executed every 15 seconds by default, since that is the interval
at which we try to prune the deletes in the version map.
Closes #28714
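A sketch of the non-blocking behavior with a plain ReentrantLock; the names here are hypothetical, not the actual engine code:
```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative: best-effort pruning that skips a key whose lock is currently
// held instead of blocking on it (and potentially deadlocking with a writer).
class BestEffortPruning {

    static boolean tryPrune(ReentrantLock keyLock, Runnable pruneAction) {
        if (keyLock.tryLock() == false) {
            // Somebody else holds the lock; skip this key and try again on the
            // next pruning round rather than waiting here.
            return false;
        }
        try {
            pruneAction.run();
            return true;
        } finally {
            keyLock.unlock();
        }
    }
}
```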
This commit fixes an issue with setting plugin.mandatory to include a
meta-plugin. The issue here is that the names that we collect are the
underlying plugins, not the meta-plugin. Instead of the underlying
plugins, we should use the names of non-meta plugins and the names of
meta-plugins. This commit addresses this. The strategy here is
that when we look at the installed plugins on the filesystem, we keep
track of which ones are meta-plugins and carry this information up to
where we check which plugins are installed against the mandatory plugins.
Relates #28710
Today we have several levels of indirection to acquire an Engine.Searcher.
We first acquire the reference manager for the scope, then acquire an
IndexSearcher, and then create a searcher for the engine based on that.
This change simplifies the creation into a single method call instead of
3 different ones.
The node stats API enables filtering to return only the desired
top-level stats. Yet, this was never enabled for adaptive
replica selection stats. This commit enables this. We also add setting
these stats on the request builder, and fix an inconsistent name in a
setter.
Relates #28721
We currently revisit the index deletion policy whenever the global
checkpoint has advanced enough. We should also revisit the deletion
policy after releasing the last snapshot of a snapshotting commit. With
this change, the old index commits will be cleaned up as soon as
possible.
Follow-up of #28140 (https://github.com/elastic/elasticsearch/pull/28140#discussion_r162458207)
Today we maintain a copy of every delete in the live version maps. This is unnecessary
and might add quite some overhead if maps grow large. This change moves out the deletes
tracking into the tombstone map only and relies on the cleaning of tombstones when deletes
are collected.
The AdaptiveSelectionStats object serializes the clientOutgoingConnections map that's concurrently updated in SearchTransportService. Serializing the map consists of first writing the size of the map and then serializing the entries. If the number of entries changes while the map is being serialized, the size and number of entries go out of sync. The deserialization routine expects those to be in sync though.
Closes #28713
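A generic sketch of the hazard and of the usual fix (serialize a point-in-time copy); java.io.DataOutputStream stands in for the Elasticsearch StreamOutput:
```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class MapSerialization {

    // Buggy shape: the size and the entries are both read from a live,
    // concurrently updated map, so they can disagree if the map changes
    // between the two reads.
    static void writeLive(DataOutputStream out, ConcurrentHashMap<String, Long> live) throws IOException {
        out.writeInt(live.size());
        for (Map.Entry<String, Long> e : live.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }

    // Safer shape: take a point-in-time copy, so size and entries always match.
    static void writeCopy(DataOutputStream out, ConcurrentHashMap<String, Long> live) throws IOException {
        Map<String, Long> copy = new HashMap<>(live);
        out.writeInt(copy.size());
        for (Map.Entry<String, Long> e : copy.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }
}
```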
The acquireIndexCommit method was separated into acquireSafeIndexCommit and
acquireLastIndexCommit, however the test was not updated accordingly.
Relates #28271
Previously we introduced a new parameter to `acquireIndexCommit` to
allow acquiring either a safe commit or the last commit. However, with the
new parameters, callers can provide a nonsense combination - flush first
but acquire the safe commit. This commit separates acquireIndexCommit
method into two different methods to avoid that problem. Moreover, this
change should also improve the readability.
Relates #28038
* Remove deprecated createParser methods
This removes the final instances of the callers of `XContent.createParser` and
`XContentHelper.createParser` that did not pass in the `DeprecationHandler`. It
also removes the now-unused deprecated methods and fully removes any mention of
Log4j or LoggingDeprecationHandler from the XContent code.
Relates to #28504
* Add comments in JsonXContentGenerator
This test has a race condition. The action listener used to listen for
connections has a guard against being executed twice. However, this
listener can be executed twice. After on success is invoked the test
starts to tear down. At this point, the threads the test forked will
terminate and the remote cluster connection will be closed. However, a
thread forked to the management thread pool by the remote cluster
connection can still be executing and try to continue connecting. This
thread will be cancelled when the remote cluster connection is closed
and this leads to the action listener being invoked again. To address
this, we explicitly check that the reason that on failure was invoked
was cancellation, and we assert that the listener was already previously
invoked. Interestingly, this issue has always been present yet a recent
change (#28667) exposed errors that occur on tasks submitted to the
thread pool and were silently being lost.
Relates #28695
In order to allow us to gradually move to passing the deprecation handler in, we
need a shim that contains both the non-passed and passed version.
Relates to #28504
When we submit a task to a thread pool for asynchronous execution, we
are returned a future. Since we submitted to go asynchronous, these
futures are not inspected for failure (we would have to block a thread
to do that). While we have on failure handlers for exceptions that are
thrown during execution, we do not handle throwables that are not
exceptions and these end up silently lost. This commit adds a check
after the runnable returns that inspects the status of the future. If an
unhandled throwable occurred during execution, this throwable is
propagated out where it will land in the uncaught exception handler.
Relates #28667
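A hedged, generic sketch of the idea (not the actual Elasticsearch thread pool code): run the task through a FutureTask on the worker thread and inspect it once it returns, so that a non-Exception throwable is rethrown instead of being lost.
```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

class RethrowingSubmit {

    static void submit(ExecutorService executor, Runnable task) {
        FutureTask<Void> future = new FutureTask<>(task, null);
        executor.execute(() -> {
            future.run();
            // The task has returned on this thread; check how it finished.
            try {
                future.get();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (ExecutionException e) {
                Throwable cause = e.getCause();
                if (cause instanceof Error) {
                    // Propagate so it reaches the thread's uncaught exception handler
                    // instead of vanishing inside the never-inspected future.
                    throw (Error) cause;
                }
                // Exceptions are assumed to be handled by the task's own onFailure
                // logic in the real code; this sketch simply drops them here.
            }
        });
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        submit(executor, () -> System.out.println("ran"));
        executor.shutdown();
    }
}
```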
We have code used in the networking layer to search for errors buried in
other exceptions. This code will be useful in other locations so with
this commit we move it to our exceptions helpers.
Relates #28691
Currently the Translog constructor is capable both of opening an existing translog and creating a
new one (deleting existing files). This PR separates these two into separate code paths. The
constructor opens existing files and a dedicated static method creates an empty translog.
A recent change moved computing declared versions from using reflection
which occurred repeatedly to a lazily-initialized holder so that
declared versions are computed exactly once. This commit adds a comment
explaining the motivation for this change.
* Move more XContent.createParser calls to non-deprecated version
Part 2
This moves more of the callers to pass in the DeprecationHandler.
Relates to #28504
* Use parser's deprecation handler where appropriate
* Use logging handler in test that uses deprecated field on purpose
* Move more XContent.createParser calls to non-deprecated version
This moves more of the callers to pass in the DeprecationHandler.
Relates to #28504
* Use parser's deprecation handler where available
This method is called often enough (when computing minimum compatibility
versions) that the reflection and sort can be seen while profiling. This
commit addresses this issue by computing the declared versions exactly
once.
Relates #28661
If a tragic event happens while we are refreshing a searcher/reader, the engine can open new files on a store that is already closed.
For instance the following CI job failed because a merge was concurrently called on a failing shard:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+oracle-java10-periodic/84
This change increments the ref count of the store during a refresh in order to postpone the closing after a tragic event.
When installing a meta plugin we check the dependency of each sub plugin during the installation.
Though if the extended plugin is part of the meta plugin the installation fails because we only check for plugins that are
already installed. This change is a workaround that extracts all plugins (even those that are not fully installed yet) when the dependency check
is made during the installation. Note that this is how the plugin installation worked before https://github.com/elastic/elasticsearch/pull/28581.
This commit moves the semantic validation (like which version a plugin
was built for or which java version it is compatible with) out of reading
a plugin descriptor, leaving the checks on the format of the descriptor
intact.
Relates #28540
* NXYSignificanceHeuristic.java: implementation of equality would have
failed with a ClassCastException when comparing to another type.
Replaced with the Eclipse generated form.
Today we use the persisted global checkpoint to calculate the starting
seqno in peer-recovery. However we do not check whether the translog
actually belongs to the existing Lucene index when reading the global
checkpoint. In some rare cases if the translog does not match the Lucene
index, the recovering replica won't be able to complete its recovery.
This can happen as follows.
1. Replica executes a file-based recovery
2. Index files are copied to the replica, but the replica crashed before finishing the recovery
3. Replica starts recovery again with seq-based as the copied commit is safe
4. Replica fails to open engine because translog and Lucene index are not matched
5. Replica won't be able to recover from primary
This commit enforces the translogUUID requirement when reading the
global checkpoint directly from the checkpoint file.
Relates #28435
This commit changes the state format that was previously passed in to
`MetaDataStateFormat` to always use Smile. This doesn't actually change the
format, since we have used Smile for writing the format since at least 5.0. This
removes the automatic detection of the state format when reading state, since
any state that could be processed in 6.x and 7.x would already have been written
in Smile format.
This is work towards removing the deprecated methods in the XContent code where
we do automatic content-type detection.
Relates to #28504
This commit forces the depth_first mode for `terms` aggregations that contain a sub-aggregation that needs to access the score of the document
in a nested context (the `terms` aggregation is a child of a `nested` aggregation). The score of children documents is not accessible in
breadth_first mode because the `terms` aggregation cannot access the nested context.
Closes #28394
* Search option terminate_after does not handle post_filters and aggregations correctly
This change fixes the handling of the `terminate_after` option when post_filters (or min_score) are used.
`post_filter` should be applied before `terminate_after` in order to terminate the query when enough documents are accepted
by the post_filters.
This commit also changes the type of exception thrown by `terminate_after` in order to ensure that multi collectors (aggregations)
do not try to continue the collection when enough documents have been collected.
Closes #28411
This is a follow-up to #28567, changing the method used to capture stack traces, as requested
during the review. Instead of creating a throwable, we explicitly capture the stack trace of the
current thread. This should Make Jason Happy Again ™️ .
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's possible to now run those individual tasks to work out bwc logic whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if the project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
This removes all the server references to the deprecated `ParseField.match`
method in favor of the method that passes in the deprecation logger.
Relates to #28504
The bug was caused by the ScriptService having no reference to a ClusterState instance,
because it received the ClusterState after the PipelineStore. This is only the case
after a restart.
A bad side effect is that during a restart, any pipeline to be loaded after the pipeline that uses a stored script
was never loaded, which caused many pipelines to be missing in bulk / index request API calls.
After copying over the Lucene segments during peer recovery, we call cleanupAndVerify which removes all other files in the directory and which then calls getMetadata to check if the resulting files are a proper index. There are two issues with this:
- the directory is not fsynced after the deletions, so that the call to getMetadata, which lists files in the directory, can get a stale view, possibly seeing a deleted corruption marker (which leads to the exception seen in #28435)
- failing to delete a corruption marker should result in a hard failure, as the shard is otherwise unusable.
The shard not-available exceptions are currently ignored in the
replication as a best effort to avoid failing not-yet-ready shards.
However these exceptions can also happen from fully active shards. If
this is the case, we may have skipped important failures from replicas.
Since #28049, only fully initialized shards receive write requests.
This restriction allows us to handle all exceptions in the replication.
There is a side-effect with this change. If a replica retries its peer
recovery second time after being tracked in the replication group, it
can receive replication requests even though it's not-yet-ready. That
shard may be failed and allocated to another node even though it has a
good Lucene index on that node.
This PR does not change the way we report replication errors to users,
hence the shard not-available exceptions won't be reported as before.
Relates #28049
Relates #28534
Today we acquire a permit from the shard to coordinate between indexing operations, recoveries and other state transitions. When we leak a permit it's practically impossible to find who the culprit is. This PR adds stack trace capturing for each permit so we can identify which part of the code is responsible for acquiring the unreleased permit. This code is only active when assertions are active.
The output is something like:
```
java.lang.AssertionError: shard [test][1] on node [node_s0] has pending operations:
--> java.lang.RuntimeException: something helpful 2
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:322)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
--> java.lang.RuntimeException: something helpful
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:311)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
```
Currently the master node logs a warning message whenever it receives a
failed shard request. However, this can be noisy because
- Multiple failed shard requests can be issued for a single shard
- Failed shard requests can be still issued for an already failed shard
This commit moves the log-warn to AllocationService in which the failing
shard action actually happens. This is another prerequisite step in
order to not ignore the shard not-available exceptions in the
replication.
Relates #28534
The queue size test has a race condition. Namely the offering thread can
run so quickly completing all of its offering iterations before the
queue size thread ever has a chance to run a single size poll
iteration. This means that the size will never actually be polled and
the test can spuriously fail. Since this test is checking for a race
condition between polling the size of the queue and offers to the queue,
we really want to execute each iteration in lockstep, giving the threads
multiple chances for the race between polling the size and offers to
occur. This commit addresses this by
running the two threads in lockstep for multiple iterations so that they
have multiple chances to race.
Relates #28584
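One common way to run two test threads in lockstep is a barrier per iteration; a generic sketch (not the actual test code):
```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative: both threads hit the barrier at the start of every iteration,
// so each offer races against each size() poll instead of the offering thread
// finishing before the polling thread ever runs.
public class LockstepRaceExample {

    public static void main(String[] args) throws Exception {
        final int iterations = 100;
        final LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(1);
        final CyclicBarrier barrier = new CyclicBarrier(2);

        Thread offerer = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                queue.offer(i);
                queue.poll(); // keep the queue from filling up
            }
        });

        Thread sizePoller = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                int size = queue.size();
                assert size >= 0 && size <= 1 : size;
            }
        });

        offerer.start();
        sizePoller.start();
        offerer.join();
        sizePoller.join();
    }

    private static void await(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```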
* Switch to non-deprecated ParseField.match method for o.e.search
This replaces more of the `ParseField.match` calls with the same call using a
deprecation handler. It encapsulates all of the instances in the
`org.elasticsearch.search` package.
Relates to #28504
* Address Nik's comments
* Replace more deprecated ParseField.match calls with non-deprecated call
This replaces more of the `ParseField.match` calls with the same call using a
deprecation handler.
Relates to #28504
* Address Nik's comments
Plugin descriptors currently contain an elasticsearch version,
which the plugin was built against, and a java version, which the plugin
was built with. These versions are read and validated, but not stored.
This commit keeps them in PluginInfo so they can be used later.
While seeing the elasticsearch version is less interesting (since it is
enforced to match that of the running elasticsearch node), the java
version is interesting since we only validate the format, not the actual
version. This also makes PluginInfo have full parity with the plugin
properties file.
Today when offering an item to a size blocking queue that is at
capacity, we first increment the size of the queue and then check if the
capacity is exceeded or not. If the capacity is indeed exceeded, we do
not add the item to the queue and immediately decrement the size of the
queue. However, this incremented size is exposed externally even though
the offered item was never added to the queue (this is effectively a
race on the size of the queue). This can lead to misleading statistics
such as the size of a queue backing a thread pool. This commit fixes
this issue so that such a size is never exposed. To do this, we replace
the hidden CAS loop that increments the size of the queue with a CAS
loop that only increments the size of the queue if we are going to be
successful in adding the item to the queue.
Relates #28557
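A sketch of the two shapes of the size accounting (illustrative, not the actual SizeBlockingQueue code):
```java
import java.util.concurrent.atomic.AtomicInteger;

// In the buggy form, the externally visible size is momentarily too large even
// when the offer is rejected; the fixed form only increments when the item
// will actually be added.
class BoundedSizeCounter {

    private final AtomicInteger size = new AtomicInteger();
    private final int capacity;

    BoundedSizeCounter(int capacity) {
        this.capacity = capacity;
    }

    // Buggy: increment first, then back out if over capacity.
    boolean tryIncrementThenCheck() {
        int newSize = size.incrementAndGet();
        if (newSize > capacity) {
            size.decrementAndGet(); // size was briefly over-reported
            return false;
        }
        return true;
    }

    // Fixed: CAS loop that never publishes a size the queue will not reach.
    boolean tryIncrementIfUnderCapacity() {
        while (true) {
            int current = size.get();
            if (current >= capacity) {
                return false;
            }
            if (size.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    int size() {
        return size.get();
    }
}
```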
Today when a replica shard detects a new primary shard (via a primary
term transition), we roll the translog generation. However, the
mechanism that we are using here is by reaching through the engine to
the translog directly. By poking all the way through rather than asking
the engine to manage the roll for us we miss:
- taking a read lock in the engine while the roll is occurring
- trimming unreferenced readers
This commit addresses this by asking the engine to roll the translog
generation for us.
Relates #28537
The test expects suggest times in milliseconds that are strictly
positive. Internally they are measured in nanos; it is possible that on
a really fast execution this is rounded to 0L, so this should also be an
accepted value.
Closes #28543
We now read the plugin descriptor when removing an old plugin. This is
to check if we are removing a plugin that is extended by another
plugin. However, when reading the descriptor we enforce that it is of
the same version that we are. This is not the case when a user has
upgraded Elasticsearch and is now trying to remove an old plugin. This
commit fixes this by skipping the version enforcement when reading the
plugin descriptor only when removing a plugin.
Relates #28540
A shard is fully baked when it moves to POST_RECOVERY. There is no need to do an extra refresh on shard activation again as the shard has already been refreshed when it moved to POST_RECOVERY.
* Move to non-deprecated XContentHelper.createParser(...)
This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.
Relates to #28449
Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.
* Remove the deprecated (and now non-needed) createParser method
If you call `getDates()` on a long or date type field, add a deprecation
warning to the response and log something to the deprecation logger.
This *mostly* worked just fine but if the deprecation logger happens to
roll then the roll will be performed with the script's permissions
rather than the permissions of the server. And scripts don't have
permissions to, say, open files. So the rolling failed. This fixes that
by wrapping the call to the deprecation logger in `doPrivileged`.
This is a strange `doPrivileged` call because it doesn't check
Elasticsearch's `SpecialPermission`. `SpecialPermission` is a permission
that no-script code has and that scripts never have. Usually all
`doPrivileged` calls check `SpecialPermission` to make sure that they
are not accidentally acting on behalf of a script. But in this case we
are *intentionally* acting on behalf of a script.
Closes #28408
Currently when failing a shard we also mark it as stale (e.g. remove its
allocationId from the InSync set). However, in some cases we need
to be able to fail shards but keep them in the InSync set. This commit adds
such a capability. This is a preparatory change to make the primary-replica
resync less lenient.
Relates #24841
java.time has the functionality needed to deal with timezones with varying
offsets correctly, but it also has a bunch of methods that silently let you
forget about the hard cases, which raises the risk that we'll quietly do the
wrong thing at some point in the future.
This change adds the trappy methods to the list of forbidden methods to try and
help stop this from happening.
It also fixes the only use of these methods in the codebase so far:
IngestDocument#deepCopy() used ZonedDateTime.of() which may alter the offset of
the given time in cases where the offset is ambiguous.
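To illustrate the kind of trap being forbidden (this example is not from the change itself):
```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class AmbiguousOffsetExample {
    public static void main(String[] args) {
        // 2017-11-05 01:30 happened twice in America/New_York (once in EDT, once in EST).
        LocalDateTime ambiguous = LocalDateTime.of(2017, 11, 5, 1, 30);
        ZoneId newYork = ZoneId.of("America/New_York");

        // ZonedDateTime.of silently resolves the overlap to the earlier offset (-04:00).
        ZonedDateTime defaulted = ZonedDateTime.of(ambiguous, newYork);

        // Being explicit makes the choice (and the potential surprise) visible.
        ZonedDateTime earlier = defaulted.withEarlierOffsetAtOverlap(); // -04:00
        ZonedDateTime later = defaulted.withLaterOffsetAtOverlap();     // -05:00

        System.out.println(defaulted);
        System.out.println(earlier);
        System.out.println(later);
    }
}
```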
This commit switches all the modules and server test code to use the
non-deprecated `ParseField.match` method, passing in the parser's deprecation
handler or the logging deprecation handler when a parser is not available (like
in tests).
Relates to #28449
Today the correctness of synced-flush is guaranteed by ensuring that
there is no ongoing indexing operations on the primary. Unfortunately, a
replica might fall out of sync with the primary even when the condition is
met. Moreover, if synced-flush mistakenly issues a sync_id for an out of
sync replica, then that replica would not be able to recover from the
primary. ES prevents that peer-recovery because it detects that both
indexes from primary and replica were sealed with the same sync_id but
have different contents. This commit modifies the synced-flush to not
issue sync_id for out of sync replicas. This change will report the
divergence issue earlier to users and also prevent replicas from getting
into the "unrecoverable" state.
Relates #10032
The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This commit changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
This assertion does not hold if engine is flushed between the invocation
of translog.uncommittedSizeInBytes and translog.uncommittedOperations.
These two values can be calculated from different commits.
If the translog flush threshold is too small (eg. smaller than the
translog header), we may repeatedly flush even when there is no uncommitted
operation because the shouldFlush condition can still be true after
flushing. This is currently avoided by adding an extra guard against the
uncommitted operations. However, this extra guard makes the shouldFlush
complicated. This commit replaces that extra guard by a lower bound for
translog flush threshold. We keep the lower bound small for convenience
in testing.
Relates #28350
Relates #23606
Persistent tasks are built on top of node tasks and provide functionality to restart a task to run on a different coordination node in case the coordinating node is no longer available.
It is up to a persistent task implementation to keep track of status, so that in case the task is restarted, the task can continue where it left off before it was restarted.
This change replaces the `CircuitBreakerIT.testParentChecking` test method, which fails intermittently in unexpected ways, with a `MemoryCircuitBreakerTests.testBorrowingSiblingBreakerMemory` unit test method which can test the borrowing functionality more directly
Closes #28223
This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metaData
This method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.
Relates #27782
This change fixes a possible AIOOB during the parsing of the document that contains the indexed shape.
This change ensures that the parsing does not continue when the field that contains the shape has been found.
Closes #28456
Adds allow_partial_search_results flag to search requests with default setting = true.
When false, the search will error if it either times out, has partial errors or has missing shards, rather
than returning partial search results. A cluster-level setting provides a default for search requests with no flag.
Closes #27435
Sometimes, in some places, the clocks are set back across midnight, leading to
overlapping days. This was not handled as expected, and this change fixes this.
Additionally, in this situation it is not true that rounding a time down to the
nearest day is a monotonic operation, as asserted in these tests. This change
suppresses those assertions in those rare cases.
Fixes #27966.
This change removes the InternalClient and the InternalSecurityClient. These are replaced with
usage of the ThreadContext and a transient value, `action.origin`, to indicate which component the
request came from. The security code has been updated to look for this value and ensure the
request is executed as the proper user. This work comes from #2808 where @s1monw suggested
that we do this.
While working on this, I came across index template registries and rather than updating them to use
the new method, I replaced the ML one with the template upgrade framework so that we could
remove this template registry. The watcher template registry is still needed as the template must be
updated for rolling upgrades to work (see #2950).
* Moves more classes over to ToXContentObject/Fragment
* Removes ToXContentToBytes
* Removes ToXContent from Enums
* review comment fix
* slight change to use XContentHelper
These members are default initialized on construction and then set by the
init() method. It's possible that another thread accessing the object
after init() is called could still see the null/0 values, depending on how
the compiler optimizes the code.
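A minimal sketch of the hazard and one common fix (volatile fields); the field names are made up:
```java
// Illustrative: without a happens-before edge, a thread that reads the object
// after init() has run may still observe the default null/0 values.
class LazilyInitialized {

    private volatile String name; // volatile publication gives readers a
    private volatile int port;    // consistent view of values written by init()

    void init(String name, int port) {
        this.name = name;
        this.port = port;
    }

    String describe() {
        // With plain (non-volatile) fields, another thread could still see
        // "null:0" here even after init() completed on the writer thread.
        return name + ":" + port;
    }
}
```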
This is the x-pack side of the removal of `accumulateExceptions()` for both `TransportNodesAction` and `TransportTasksAction`.
There are occasional, random failures that occur during API calls that are silently ignored from the caller's perspective, which also leads to weird API responses that have no response and also no errors, which is obviously untrue.
With the leniency in Version.java we failed to really set up BWC
testing for static indices. This change brings back the testing and adds
missing bwc indices.
Relates to elastic/elasticsearch#24732
Persistent tasks should verify that completion notification is done for correct version of the task, otherwise a delayed notification from an old node can accidentally close a newly reassigned task.
PersistentTasksCustomMetadata was using a generic param named `Params`. This conflicted with the imported interface `ToXContent.Params`. The java compiler was preferring the generic param over the interface so everything was fine, but Eclipse apparently prefers the interface in this case, which was screwing up the hierarchy and causing compile errors in Eclipse. This change fixes it by renaming the generic param to `P`.
Removes the last pieces of ActionRequest from PersistentTaskRequest and renames it into PersistTaskParams, which is now just an interface that extends NamedWriteable and ToXContent.
This commit is in response to the renaming of the random ASCII helper
methods in ESTestCase. The name of this method was changed because these
methods only produce random strings generated from [a-zA-Z], not from
all ASCII characters.
Add a check for the current state waitForPersistentTaskStatus before waiting for the next one. This fixes sporadic failure in testPersistentActionStatusUpdate test.
Fixes #928
Refactors NodePersistentTask and RunningPersistentTask into a single AllocatedPersistentTask. Makes it possible to update Persistent Task Status via AllocatedPersistentTask.
If a persistent task throws an exception, the persistent tasks framework will no longer try to restart the task. This is a temporary measure to prevent thrashing the cluster with endless restart attempts. We will revisit this in a future version to make the restart process more robust. Please note, however, that if the node executing the task goes down, the task will still be restarted on another node.
Removes the transport layer dependency from PersistentActions, makes PersistentActionRegistry immutable and renames actions into tasks in class and variable names.
Refactors xcontent serialization of Request and Status to use their writable names instead of action name. That simplifies the parsing logic, allows reuse of the same status object for multiple actions and is consistent with how named objects in xcontent are used.
This will allow persistent task implementors to detect whether the executor node has changed or has been unset since the last status update occurred.
This commit moves persistent tasks from ClusterState.Custom to MetaData.Custom and adds ability for the task to remain in the metadata after completion.
Also replaced the DELETING status from JobState with a boolean flag on Job. The state of a job is now stored inside a persistent task in cluster state. Jobs that aren't running don't have a persistent task, so I moved that notion of being deleted to the job config itself.
Original commit: elastic/x-pack@21cd19ca1c
Similarly to task status on normal tasks it's now possible to update task status on the persistent tasks. This should allow updating the state of the running tasks (such as loading, started, etc) as well as store intermediate state or progress.
Original commit: elastic/x-pack@048006b467
A persistent action is a transport-like action that is using the cluster state instead of transport to start tasks. This allows persistent tasks to survive restart of executing nodes. A persistent action can be implemented by extending TransportPersistentAction. TransportPersistentAction will start the task by using PersistentActionService, which controls persistent tasks lifecycle. See TestPersistentActionPlugin for an example implementing a persistent action.
Original commit: elastic/x-pack@5e83f1bfa3
This change switches the merge policy to none (for this specific test) in order to make sure that refreshes are always triggered
by a change in the writer.
Closes #27514
Factors the way in which XContent parsing handles deprecated fields
into a callback that is set at parser construction time. The goals here
are:
1. Remove Log4J as a dependency of XContent so that XContent can be used
by clients without forcing log4j and our particular deprecation handling
scheme.
2. Simplify handling of deprecated fields in tests. Now tests can listen
directly for the deprecation callback rather than digging through a
ThreadLocal.
More accurately, this change begins this work. It deprecates a number of
methods, pointing folks to the new versions of those methods that take
`DeprecationHandler`. The plan is to slowly drop these deprecated
methods. Once they are entirely removed we can remove Log4j as
dependency of XContent.
This change adds support for the new ranking evaluation API to the High Level Rest Client.
This mostly means adding support for parsing the various response objects back from the
REST representation. It includes one change to the response syntax where previously we didn't
print the type of the metric details section but we now need it to pick the right parser to
parse this section back.
Closes #28198
Script fields can get a bit more complicated than just stored fields. A script can return null, an object and also an array. Extended parsing to support such valid values. Also renamed util method from `parseStoredFieldsValue` to `parseFieldsValue` given that it can parse stored fields but also script fields, anything that's returned as `fields`.
Closes #28380
The test currently makes the assumption that if underlying directory stops throwing exceptions, we can always open the engine. This is not the case as some errors can cause a corruption marker to be placed in the store.
This commit refactors the test to only check that everything is OK if the engine was successfully opened. On top of that, there is no point in checking replay with no errors as we have another test for that.
Closes #28426
This adds the ability to index term prefixes into a hidden subfield, enabling prefix queries to be run without multitermquery rewrites. The subfield reuses the analysis chain of its parent text field, appending an EdgeNGramTokenFilter. It can be configured with minimum and maximum ngram lengths. Query terms with lengths outside this min-max range fall back to using prefix queries against the parent text field.
The mapping looks like this:
"my_text_field" : {
"type" : "text",
"analyzer" : "english",
"index_prefix" : { "min_chars" : 1, "max_chars" : 10 }
}
Relates to #27049
This commit switches the internal format of the elasticsearch keystore
to no longer use java's KeyStore class, but instead encrypt the binary
data of the secrets using AES-GCM. The cipher key is generated using
PBKDF2WithHmacSHA512. Tests are also added for backcompat reading the v1
and v2 formats.
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
In some cases testShrinkIndexPrimaryTerm creates then 'mutates' 210
shards. If each shard opens more than 10 files (translog, lucene index),
we exceed the maximum allowed file handles. In our test, the number of
file handles is limited to 2048 by HandleLimitFS. This commit reduces
the number of shards in testShrinkIndexPrimaryTerm to avoid such errors.
Closes #28153
In rare cases the total nanoseconds for an entire window of operations can be 0
nanoseconds, causing the assertion in
QueueResizingEsThreadPoolExecutor.calculateLambda to trip. This ensures that we
calculate the lambda value with at least 1 nanosecond.
Resolves #27607
Currently this method parses the string as a double. This means that it
might lose accuracy if the value is a long that is greater than
2^52. This commit changes this method to try to detect whether the
string represents a long first.
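A minimal sketch of that approach (not the exact utility method): only fall back to parsing a double when the string is not a valid long, so large integer values keep full precision.

```java
final class NumberParseSketch {
    // Try the exact long representation first; fall back to double for
    // anything that is not a valid long (decimals, exponents, etc.).
    static Number parse(String value) {
        try {
            return Long.parseLong(value);
        } catch (NumberFormatException e) {
            return Double.parseDouble(value);
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("9007199254740993")); // exact as a long
        System.out.println(parse("1.5"));              // falls back to double
    }
}
```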
Today after writing an operation to an engine, we will call
`IndexShard#afterWriteOperation` to flush a new commit if needed. The
`shouldFlush` condition is purely based on the uncommitted translog size
and the translog flush threshold size setting. However this can cause a
replica to execute an infinite loop of flushing in the following situation.
1. Primary has a fully baked index commit with its local checkpoint
equal to max_seqno
2. Primary sends that fully baked commit, then replays all retained
translog operations to the replica
3. No operations are added to Lucene on the replica as the seqno of these
operations is at most the local checkpoint
4. Once translog operations are replayed, the target calls
`IndexShard#afterWriteOperation` to flush. If the total size of the
replayed operations exceeds the flush threshold size, this call will
invoke `Engine#flush`. However the engine won't flush as its index writer
does not have any uncommitted operations. The method
`IndexShard#afterWriteOperation` will keep flushing as the condition
`shouldFlush` is still true.
This issue can be avoided if we always flush if the `shouldFlush`
condition is true.
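A rough sketch of the intended behavior, with hypothetical names rather than the actual Engine/IndexShard code: before, the flush was a no-op when the index writer had nothing to commit, so the uncommitted translog size never shrank; after, any time the condition holds a new commit is written, which rolls the translog generation and clears the condition.

```java
final class FlushLoopSketch {
    interface Engine {
        boolean shouldFlush();        // uncommitted translog size > threshold
        void commitAndRollTranslog(); // always writes a commit, resetting that size
    }

    // Hypothetical fix: always flush while the condition is true, relying on
    // the commit itself (not the index writer's dirtiness) to clear it.
    static void afterWriteOperation(Engine engine) {
        if (engine.shouldFlush()) {
            engine.commitAndRollTranslog();
        }
    }
}
```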
This PR removes the previously deprecated `isShardsAcked()` method in
favour of `isShardsAcknowledged()` on `CreateIndexResponse`, `CreateIndexClusterStateUpdateResponse`, and `RolloverResponse`.
Related to #27784
Follow-up of #27819
This change adds the `after_key` of a composite aggregation directly in the response.
It is redundant when no buckets are filtered/removed by a pipeline aggregation, since in that case the `after_key` is always the last bucket
in the response. However, when a pipeline aggregation filters composite buckets, the `after_key` can be lost if the last bucket is filtered.
This commit fixes this situation by always returning the `after_key` in a dedicated section.
The DiscoveryNodes.Delta was changed in #28197. Previous/Master nodes
are now always set in the `Delta` (before the change they were set only
if the master changed) and the `masterChanged()` method is now based on
object equality and node ephemeral ids (before the change it was based
on node ids).
This commit adapts the DiscoveryNodesTests.testDeltas() to reflect the
changes.
We introduced a single-commit assertion when opening an index but creating
a new translog. However, this assertion does not hold in this situation:
1. A replica with two commits c1 and c2 starts peer-recovery with c1
2. The recovery is sequence-based recovery but the primary is before 6.2 so
it sent true for `createNewTranslog`
3. The replica opens the engine and creates a translog. We expect "open index and
create translog" to have one commit, but we have c1 and c2.
This commit makes sure to assert this only if the index was created on 6.2+.
This introduces a settings updater that allows specifying a list of
settings. Whenever one of those settings changes, the whole block of
settings is passed to the consumer.
This also fixes an issue with affix settings, when used in combination
with group settings, which could result in no settings being found when
trying to get a setting for a namespace.
Lastly, logging has been slightly changed so that filtered settings now
only log the setting key.
Another bug has been fixed for the mock log appender, which did not
work when checking for the exact message.
Closes #28047
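As a generic illustration of the pattern (not the actual ClusterSettings API), a block updater can be modeled as a consumer registered for a list of keys that receives the current values of all of them whenever any one changes:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Consumer;

final class BlockUpdaterSketch {
    private final List<String> keys;
    private final Consumer<Map<String, String>> consumer;

    BlockUpdaterSketch(List<String> keys, Consumer<Map<String, String>> consumer) {
        this.keys = keys;
        this.consumer = consumer;
    }

    // If any registered key changed, hand the consumer the whole block of
    // current values, not just the single key that changed.
    void apply(Map<String, String> previous, Map<String, String> current) {
        boolean changed = keys.stream()
            .anyMatch(k -> !Objects.equals(previous.get(k), current.get(k)));
        if (changed) {
            Map<String, String> block = new LinkedHashMap<>();
            for (String key : keys) {
                block.put(key, current.get(key));
            }
            consumer.accept(block);
        }
    }
}
```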
Self-referencing maps can cause a StackOverflowError if they are iterated, e.g. in their toString methods. This change adds some protection to the usage of those collections.
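A self reference can also be indirect (a map containing a list that contains the map again), which overflows the stack on recursive iteration. A minimal sketch of the kind of protection added, using an identity set to detect cycles (not the actual XContent helper):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

final class SelfReferenceSketch {
    // Recursively walk maps/iterables and fail fast on a cycle instead of
    // overflowing the stack when the structure is later serialized.
    static void ensureNoSelfReferences(Object value, Set<Object> seen) {
        if (value instanceof Map<?, ?> || value instanceof Iterable<?>) {
            if (!seen.add(value)) {
                throw new IllegalArgumentException("Iterable object is self-referencing itself");
            }
            Iterable<?> children = value instanceof Map<?, ?>
                ? ((Map<?, ?>) value).values()
                : (Iterable<?>) value;
            for (Object child : children) {
                ensureNoSelfReferences(child, seen);
            }
            seen.remove(value);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        List<Object> list = new ArrayList<>();
        list.add(map);           // indirect cycle: map -> list -> map
        map.put("values", list);
        try {
            ensureNoSelfReferences(map, Collections.newSetFromMap(new IdentityHashMap<>()));
        } catch (IllegalArgumentException e) {
            System.out.println("detected: " + e.getMessage());
        }
    }
}
```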
This commit adds the ability to specify a date format on the `date_histogram` composite source.
If the format is defined, the key for the source is returned as a formatted date.
Closes #27923
The tests for those field types were removed in #26549 because the range mapper
was moved to a module, but later this mapper was moved back to core in #27854.
This change adds back those two field types as before to the general setup in
AbstractQueryTestCase and adds some specifics to the RangeQueryBuilder and
TermsQueryBuilder tests. It also adds back an integration test in SearchQueryIT that
had been removed before but that can be kept now that the mapper is back in core.
Relates to #28147
* Notify affixMap settings when any under the registered prefix matches
Previously if an affixMap setting was registered, and then a completely
different setting was applied, the affixMap update consumer would be notified
with an empty map. This caused settings that were previously set to be unset in
local state in a consumer that assumed it would only be called when the affixMap
setting was changed.
This commit changes the behavior so if a prefix `foo.` is registered, any
setting under the prefix will have the update consumer notified if there are
changes starting with `foo.`.
Resolves #28316
* Add unit test
* Address feedback
In many cases we use the `ShardOperationFailedException` interface to abstract an exception that can only be of one type, namely `DefaultShardOperationException`. There is no need to use the interface in such cases; the concrete type should be used instead. That has the additional advantage of simplifying the parsing of such exceptions back from REST responses for the high-level REST client.
If a get alias API call requests a specific alias pattern, then
indices not having any matching aliases should not be included in the response.
Closes #27763
Today we keep multiple index commits based on the current global
checkpoint, but only clean up unneeded index commits when we have a new
index commit. However, we can release the old index commits earlier once
the global checkpoint has advanced enough. This commit makes an engine
revisit the index deletion policy whenever a new global checkpoint value
is persisted and advanced enough.
Relates #10708
This change converts any exception that occurs during the parsing of
a simple_query_string to a match_no_docs query (instead of a null query)
when leniency is activated.
Closes #28204
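The lenient path amounts to catching the parse failure and substituting a query that matches nothing; a sketch of the idea using Lucene query types (not the exact simple_query_string parser code):

```java
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

final class LenientParseSketch {
    interface Parser {
        Query parse(String queryText) throws Exception;
    }

    // With lenient=true, a parse failure becomes a query that matches
    // nothing instead of a null query that callers could trip over.
    static Query parseLeniently(Parser parser, String queryText, boolean lenient) throws Exception {
        try {
            return parser.parse(queryText);
        } catch (Exception e) {
            if (lenient) {
                return new MatchNoDocsQuery("failed to parse [" + queryText + "]");
            }
            throw e;
        }
    }
}
```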
This commit adds an extension point for client actions to action
plugins. This is useful for plugins to expose the client-side actions
without exposing the server-side implementations to the client. The
default implementation, of course, delegates to extracting the
client-side action from the server-side implementation.
Relates #28280
MixedClusterClientYamlTestSuiteIT sometimes fails when executing the
indices.stats/13_fields/* REST tests. It does not reproduce locally,
but the execution logs show that it fails when a shard is relocating
during the set up execution. This commit changes the set up so that it
now waits for all shards to be active before executing the tests.
Closes #26732, #27146
This commit introduces the ability for the core Elasticsearch JAR to be
a multi-release JAR containing code that is compiled for JDK 8 and code
that is compiled for JDK 9. At runtime, a JDK 8 JVM will ignore the JDK
9 compiled classfiles, and a JDK 9 JVM will use the JDK 9 compiled
classfiles instead of the JDK 8 compiled classfiles. With this work, we
utilize the new JDK 9 API for obtaining the PID of the running JVM,
instead of relying on a hack.
For now, we want to keep IDEs on JDK 8 so when the build is in an IDE we
ignore the JDK 9 source set (as otherwise the IDE would give compilation
errors). However, with this change, running Gradle from the command-line
now requires JAVA_HOME and JAVA_9_HOME to be set. This will require
follow-up work in our CI infrastructure and our release builds to
accommodate this change.
Relates #28051
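The JDK 9 API in question is `ProcessHandle`; roughly, the JDK 8 and JDK 9 variants of the same method look like this (a sketch of the idea, not the actual multi-release source layout):

```java
import java.lang.management.ManagementFactory;

final class PidSketch {
    // JDK 8 variant: parse the "pid@hostname" runtime name and hope the
    // format holds (this is the hack being replaced).
    static long pidJdk8() {
        String name = ManagementFactory.getRuntimeMXBean().getName();
        return Long.parseLong(name.split("@")[0]);
    }

    // JDK 9+ variant: ask the platform directly. In a multi-release JAR the
    // JDK 9 class file lives under META-INF/versions/9 and overrides the
    // JDK 8 one at runtime.
    static long pidJdk9() {
        return ProcessHandle.current().pid();
    }
}
```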
Keeping unsafe commits when opening an engine can be problematic because
these commits are not safe at the recovering time but they can suddenly
become safe in the future. The following issues can happen if unsafe
commits are kept onInit.
1. Replica can use an unsafe commit in peer-recovery. This happens when a
replica with a safe commit c1 (max_seqno=1) and an unsafe commit c2
(max_seqno=2) recovers from a primary with c1 (max_seqno=1). If a new
document (seqno=2) is added without flushing, the global checkpoint is
advanced to 2; if the replica recovers again, it will use the unsafe
commit c2 (max_seqno=2 <= gcp=2) as the starting commit for
sequence-based recovery even though the commit c2 contains a stale
operation, and the document (with seqno=2) will not be replicated to the
replica.
2. Min translog gen for recovery can go backwards in peer-recovery. This
happens when a replica has a safe commit c1 (local_checkpoint=1,
recovery_translog_gen=1) and an unsafe commit c2 (local_checkpoint=2,
recovery_translog_gen=2). The replica recovers from a primary, and keeps
c2 as the last commit, then sets last_translog_gen to 2. Flushing a new
commit on the replica will cause an exception as the new last commit c3
will have recovery_translog_gen=1. The recovery translog generation of a
commit is calculated based on the current local checkpoint. The local
checkpoint of c3 is 1 while the local checkpoint of c2 is 2.
3. A commit without a translog can be used for recovery. An old index, which
was created before multiple commits were introduced (v6.2), may not have a
safe commit. If that index has a snapshotted commit without a translog and
an unsafe commit, the policy can consider the snapshotted commit as a
safe commit for recovery even though the commit does not have a translog.
These issues can be avoided if the combined deletion policy keeps only
the starting commit onInit.
Relates #27804
Relates #28181
The Java API documentation for index administration is currently wrong because
the PutMappingRequestBuilder#setSource(Object... source) and
CreateIndexRequestBuilder#addMapping(String type, Object... source) methods
delegate to methods that check that the input arguments are valid key/value
pairs:
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-admin-indices.html
This changes the docs so that the Java API code examples are included from
documentation integration tests, so we detect compile and runtime issues earlier.
Closes #28131
This method has a different contract than all the other methods in this class, returning null instead of an empty array when receiving a null input. While switching some methods over from delimitedListToStringArray to tokenizeToStringArray, this resulted in unexpected nulls in some places in our code.
Relates #28213
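The contract difference is easy to demonstrate with simplified stand-ins for the two methods (illustrative only, not the real implementations):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.util.regex.Pattern;

final class SplitContractSketch {
    // delimitedListToStringArray-style contract: a null input yields an empty array.
    static String[] delimitedList(String input, String delimiter) {
        return input == null ? new String[0] : input.split(Pattern.quote(delimiter), -1);
    }

    // tokenizeToStringArray-style contract: a null input yields null, which
    // surprises callers that only ever expected an empty array.
    static String[] tokenize(String input, String delimiters) {
        if (input == null) {
            return null;
        }
        StringTokenizer tokenizer = new StringTokenizer(input, delimiters);
        List<String> tokens = new ArrayList<>();
        while (tokenizer.hasMoreTokens()) {
            tokens.add(tokenizer.nextToken());
        }
        return tokens.toArray(new String[0]);
    }
}
```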
Currently we test all maps, arrays, or iterables. However, if maps contain
sub-maps, for instance, we will test the sub-maps again even though the
work has already been done for the top-level map.
Relates #26907
A bug introduced in #24407 currently prevents `eager_global_ordinals` from
being updated. This new approach should fix the issue while still allowing
mapping updates to not specify the `_parent` field if it doesn't need
updating, which was the goal of #24407.
The composite aggregation defers the collection of sub-aggregations to a second pass that visits documents only if they
appear in the top buckets. However, the scorer for sub-aggregations is not set on this second pass, which generates an NPE if any sub-aggregation
tries to access the score. This change creates a scorer for the second pass and makes sure that sub-aggs can use it safely to check the score of
the collected documents.
The method `initiateChannel` on `TcpTransport` is explicit in that
channels can be connected asynchronously. All production implementations
do connect asynchronously. Only the blocking `MockTcpTransport`
connects in a synchronous manner. This avoids testing some of the
blocking code in `TcpTransport` that waits on connections to complete.
Additionally, it requires a more extensive method signature than
required for other transports.
This commit modifies the `MockTcpTransport` to make these connections
asynchronously on a different thread. Additionally, it simplifies the
`initiateChannel` method signature.
* Fix synonym phrase query expansion for cross_fields parsing
The `cross_fields` mode of the query parser ignores phrase queries generated by multi-word synonyms.
In such cases only the first field of each analyzer group is kept. This change fixes this issue
by expanding the phrase query for each analyzer group to **all** fields using a disjunction max query.
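In Lucene terms the fix builds the synonym phrase against every field of the group and wraps the per-field phrase queries in a disjunction max query; roughly (a sketch of the idea, not the actual MultiMatchQuery code):

```java
import java.util.Arrays;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DisjunctionMaxQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;

final class CrossFieldsPhraseSketch {
    // Build the same multi-word synonym phrase against each field of the
    // analyzer group and take the best-scoring field via a dis-max, instead
    // of keeping only the first field.
    static Query synonymPhraseAcrossFields(String[] fields, String... words) {
        Query[] perField = new Query[fields.length];
        for (int i = 0; i < fields.length; i++) {
            PhraseQuery.Builder builder = new PhraseQuery.Builder();
            for (String word : words) {
                builder.add(new Term(fields[i], word));
            }
            perField[i] = builder.build();
        }
        return new DisjunctionMaxQuery(Arrays.asList(perField), 0.0f);
    }
}
```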
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and parts of Loggers are
moved, as JarHell depends on them.
* Adds metadata to rewritten aggregations
Previous to this change, if any filters in the filters aggregation were rewritten, the rewritten version of the FiltersAggregationBuilder would not contain the metadata from the original. This is because `AbstractAggregationBuilder.getMetadata()` returns an empty map when no metadata is set.
Closes #28170
* Always set metadata when rewritten
A replica always keeps a safe commit and starts peer-recovery with
that commit; file-based recovery only happens if new operations are
added to the primary and the required translog is not fully retained. In
the test, we tried to produce this condition by flushing a new commit in
order to trim all translog. However, if the new global checkpoint is not
persisted yet, we will keep two commits and not trim the translog. This
commit tightens the file-based condition in the test by waiting for the
global checkpoint to be properly persisted on the new primary before flushing.
Closes #28209
Relates #28181
We introduced a new option `createNewTranslog` in #28181. However, we
named that parameter `deleteLocalTranslog` in other places. This commit
makes sure to have consistent naming in these places.
Relates #28181
The global checkpoint should be assigned to unassigned rather than 0. If
a single document is indexed and the global checkpoint is initialized
with 0, the first commit is safe, which the test does not expect.
Relates #28038
Today a replica starts a peer-recovery with the last commit. If the last
commit is not a safe commit, a replica will immediately fall back to the
file-based sync, which is more expensive than the sequence-based
recovery. This commit modifies the peer-recovery on the replica to start
with a safe commit. Moreover we can keep the existing translog on the
target if the recovery is a sequence-based recovery.
Relates #10708
We aim to always have a safe index once the recovery is done.
This invariant does not hold if the translog is manually truncated by
users, because the truncate-translog CLI resets the global checkpoint to
unassigned. This commit assigns the global checkpoint to the max_seqno
of the last commit when truncating the translog. We can only safely do it
because the truncate translog command will generate a new history uuid
for that shard. With a new history UUID, sequence-based recovery between
that shard and other old shards will be disabled.
Relates #28181
Releasable locks hold accounting on who holds the lock when assertions
are enabled. However, the underlying lock can be re-entrant yet we mark
the lock as not held by the current thread as soon as the releasable is
closed. For a re-entrant lock this is not right because the thread could
have entered the lock multiple times. Instead, we have to count how many
times the thread has entered the lock and only mark the lock as not held
by the current thread when the counter reaches zero.
Relates #28202
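A sketch of the corrected accounting with a per-thread hold counter (a simplified, hypothetical version of the releasable lock, not the actual ReleasableLock class):

```java
import java.util.concurrent.locks.ReentrantLock;

final class ReleasableLockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    // Number of times the current thread has entered this lock; only when
    // it drops back to zero is the lock no longer held by this thread.
    private final ThreadLocal<Integer> holdCount = ThreadLocal.withInitial(() -> 0);

    AutoCloseable acquire() {
        lock.lock();
        holdCount.set(holdCount.get() + 1);
        return () -> {
            int remaining = holdCount.get() - 1;
            holdCount.set(remaining);
            // Re-entrant acquisitions decrement the counter without clearing
            // "held by current thread" until the outermost release.
            assert remaining >= 0;
            lock.unlock();
        };
    }

    boolean isHeldByCurrentThread() {
        return holdCount.get() > 0;
    }
}
```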
Currently we keep a 5.x index commit as a safe commit until we have a
6.x safe commit. During that time, if peer-recovery happens, a primary
will send a 5.x commit in file-based sync and the recovery will then
fail as the snapshotted commit does not have sequence number tags.
This commit updates the combined deletion policy to delete legacy
commits if there are 6.x commits.
Relates #27606
Relates #28038