Today we can end up in a situation where the cluster state contains
unknown or invalid settings. This can happen easily during a rolling
upgrade. For example, consider two nodes that are on a version that
considers the setting foo.bar to be known and valid. Assume one of these
nodes is restarted on a higher version that considers foo.bar to now be
either unknown or invalid, and then the second node is restarted
too. Now, both nodes will be on a version that considers foo.bar to be
unknown or invalid, yet this setting will still be contained in the
cluster state. This means that if a cluster settings update is applied
and we validate the settings update with the existing settings then
validation will fail. In such a state, the offending setting cannot
even be removed. This commit helps out with this situation by archiving
any settings that are unknown or invalid at the time that a settings
update is applied. This allows the setting update to go through, and the
archived settings can be removed at a later time.
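As a rough illustration of the idea (a minimal sketch assuming a flat map of settings; the `archived.` prefix and the validity predicate are simplifications of the real implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch: prefix unknown or invalid settings so the rest of the update can proceed.
static Map<String, String> archiveUnknownOrInvalidSettings(
        Map<String, String> settings, Predicate<String> isKnownAndValid) {
    Map<String, String> result = new HashMap<>();
    for (Map.Entry<String, String> entry : settings.entrySet()) {
        if (isKnownAndValid.test(entry.getKey())) {
            result.put(entry.getKey(), entry.getValue());
        } else {
            // archived settings are skipped by validation and can be deleted later
            result.put("archived." + entry.getKey(), entry.getValue());
        }
    }
    return result;
}
```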
Today, template puts and deletes can be seen only at the debug level via
cluster state update logging, but they should really be as visible as
index creation and deletion. This commit adds info-level logging for
template puts and deletes.
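For example (a hedged sketch; the exact log messages are illustrative, not the literal ones added):

```java
logger.info("adding template [{}] for index patterns {}", template.name(), template.patterns());
logger.info("removing template [{}]", templateName);
```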
This interning is completely unnecessary because we look up the marker
by the prefix (value, not identity) anyway. This means that regardless
of the identity of the prefix, we end up with the same marker. That is
all that we really care about here.
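To illustrate why value-based lookup makes interning pointless (a self-contained sketch, not the actual logging code):

```java
import java.util.HashMap;
import java.util.Map;

Map<String, Object> markers = new HashMap<>();
markers.put("prefix", new Object());

// A distinct String instance with the same value finds the same marker,
// because HashMap compares keys via hashCode()/equals(), not identity.
String prefix = new String("prefix");
assert markers.get(prefix) == markers.get("prefix");
```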
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation where some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not on
server Elasticsearch. This is a good thing; it was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
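A minimal sketch of the central utility, assuming the copied class keeps Lucene's `close(Closeable...)` semantics: close everything, remember the first failure, suppress the rest, and rethrow at the end.

```java
import java.io.Closeable;
import java.io.IOException;

public static void close(Closeable... objects) throws IOException {
    Exception firstException = null;
    for (Closeable object : objects) {
        try {
            if (object != null) {
                object.close();
            }
        } catch (IOException | RuntimeException e) {
            if (firstException == null) {
                firstException = e;
            } else {
                firstException.addSuppressed(e);
            }
        }
    }
    if (firstException instanceof IOException) {
        throw (IOException) firstException;
    } else if (firstException != null) {
        throw (RuntimeException) firstException;
    }
}
```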
The requiresKeystore flag was removed from PluginInfo in 6.3.0. This
commit fixes a pair of code comments that incorrectly refer to this
version as 7.0.0.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
While trying to reroute a shard to or from a non-data node (a node with ``node.data=false``), I encountered a null pointer exception. Though an exception is to be expected, the NPE occurred because ``allocation.routingNodes()`` does not contain non-data nodes, so attempting ``allocation.routingNodes().node(...)`` with a non-data node's ID found nothing and threw the NPE. This occurred regardless of whether I was rerouting to or from a non-data node.
This PR adds a check (as well as a test for these use cases) to return a legible, useful exception if the discovery node you are rerouting to or from is not a data node.
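A minimal sketch of the added guard (names and the exact message are illustrative, not the ones in the change):

```java
RoutingNode node = allocation.routingNodes().node(nodeId);
if (node == null) {
    throw new IllegalArgumentException("node [" + nodeId + "] is not a data node; "
        + "shards can only be moved to or from nodes that hold data");
}
```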
When an index writer encounters a tragic exception, it could be a
Throwable and not an Exception. Yet we blindly cast the tragic exception
to an Exception, which can lead to a ClassCastException. This commit
addresses this by checking whether the tragic exception is an Exception,
and wrapping the Throwable in a RuntimeException if it is not. We
choose to wrap the Throwable instead of passing it around because
passing it around leads to changing a lot of places where we handle
Exception to handle Throwable instead. In general, we have tried to
avoid handling Throwable and instead let those bubble up to the uncaught
exception handler.
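The wrapping boils down to something like this (a sketch of the described check; Lucene's `IndexWriter#getTragicException()` is what hands us a `Throwable`):

```java
static Exception asException(Throwable tragedy) {
    return tragedy instanceof Exception
        ? (Exception) tragedy            // safe to use as-is
        : new RuntimeException(tragedy); // wrap Errors and other Throwables
}
```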
Log4j2 provides a wide range of logging methods. Our code typically only uses a subset of them. In particular, every use of the methods trace|debug|info|warn|error|fatal(Object) or trace|debug|info|warn|error|fatal(Object, Throwable) has been a mistake, resulting in the provided message not being logged properly. To prevent these issues in the future, the corresponding Logger methods have been blacklisted.
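A hypothetical illustration of the pitfall (`logger`, `e`, and `shardId` are made up for the example):

```java
// The (Object, Throwable) overload performs no parameter substitution,
// so the placeholder below is logged literally.
Object message = "failed to flush shard [{}]";
logger.warn(message, e);                            // logs: failed to flush shard [{}]
logger.warn("failed to flush shard [{}]", shardId); // (String, Object...) substitutes correctly
```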
This commit restores the handling of the tiebreaker for multi_match
cross fields query. This functionality was lost during a refactoring
of the multi_match query (#25115).
Fixes #28933
This commit makes the controller spawner also look under modules. It
also fixes a bug in module security policy loading where the module is a
meta plugin.
Today we check for a few cases where we should maybe die before failing
the engine (e.g., when a merge fails). However, there are still other
cases where a fatal error can be hidden from us (for example, a failed
index writer commit). This commit modifies the mechanism for failing the
engine to always check for a fatal error before failing the engine.
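A minimal sketch of the pattern (method names are illustrative, not the actual engine code):

```java
static void failEngine(String reason, Exception failure) {
    maybeDie(reason, failure); // always check for a fatal error first
    // ... otherwise mark the engine as failed, close it, and notify listeners ...
}

static void maybeDie(String reason, Throwable failure) {
    if (failure instanceof Error) {
        // never swallow a fatal error: rethrow on a fresh thread so the
        // uncaught exception handler can terminate the JVM
        new Thread(() -> { throw (Error) failure; }).start();
    }
}
```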
Today when requesting _all we return all nodes regardless of what other
node qualifiers are in the request. This is contrary to how the
remainder of the API behaves, which is additive and subtractive based
on the qualifiers and their ordering. It is also contrary to how the
wildcard * behaves. This commit removes the special handling for
_all so that it behaves identically to the wildcard *.
Relates #28971
* Remove Booleans use from XContent and ToXContent
This removes the use of the `common.Booleans` class from two of the XContent
classes, so they can be decoupled from the ES code as much as possible.
Related to #28754, #28504
Today, failures from the primary-replica resync are ignored as a best
effort to avoid marking shards as stale during a cluster restart. However,
this can be problematic if replicas fail to execute resync operations
but execute subsequent write operations just fine. When this happens, a
replica will miss some operations from the new primary. There are
several implications if the local checkpoint on the replica cannot
advance because of the missing operations.
1. The global checkpoint won't advance, which causes both the primary
and replicas to retain many index commits.
2. The engine on the replica won't flush periodically because the
uncommitted stats are calculated from the local checkpoint.
3. The replica may use a large number of bitsets to keep track of
operation sequence numbers.
However, we can prevent this issue while keeping the best-effort
behavior by failing replicas that fail to execute resync operations,
without marking them as stale. We prepared the required infrastructure
for this change in #28049 and #28054.
Relates #24841
This change replaces the use of string concatenation with a call to
String.join(). String concatenation might be quadratic, unless the compiler can
optimise it away, whereas String.join() is more reliably linear. There can
sometimes be a large number of pending ClusterState update tasks and #28920
includes a report that this operation sometimes takes a long time.
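For illustration, a self-contained sketch of the before/after:

```java
import java.util.List;

static String describeTasks(List<String> sources) {
    // Before: potentially quadratic, since each += may copy the accumulated string again.
    String concatenated = "";
    for (String source : sources) {
        concatenated += source + ", ";
    }
    // After: reliably linear; the result length is computed once up front.
    return String.join(", ", sources);
}
```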
At one point, modules and plugins were very different. But effectively
now they are the same, just from different directories. This commit
unifies the loading methods so that modules and plugins are simply
loaded from two different directories. Note that the main codepath to
load plugin bundles contained duplicated logic (it was not calling
getPluginBundles), left over from previous refactorings that added meta
plugins. Note this change also rewords the
primary exception message when a plugin descriptor is missing, as the
wording asking if the plugin was built before 2.0 isn't really
applicable anymore (it is highly unlikely someone tries to install a 1.x
plugin on any modern version).
This allows us to remove another dependency in the decoupling of the XContent
code. Rather than move this class over or decouple it, it can simply be removed.
Relates tangentially to #28504
These classes are used in only two places and can be replaced by the
JDK's `CharArrayReader` and `CharArrayWriter`. The JVM can also perform
lock biasing and elision, as well as escape analysis, to optimize away
non-contended locks, rendering the custom lock-free implementations
unnecessary.
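For reference, the JDK replacements in use (a trivial round-trip example):

```java
import java.io.CharArrayReader;
import java.io.CharArrayWriter;
import java.io.IOException;

static char[] roundTrip(String text) throws IOException {
    CharArrayWriter writer = new CharArrayWriter();
    writer.write(text);
    try (CharArrayReader reader = new CharArrayReader(writer.toCharArray())) {
        char[] buffer = new char[text.length()];
        reader.read(buffer); // reads up to buffer.length chars
        return buffer;
    }
}
```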
Ingest has been failing to apply pipelines from the cluster state
to the in-memory representation when those pipelines are no longer valid. One
example of this is a pipeline with a script processor: if a cluster starts up
with scripting disabled, such pipelines will not be loaded. Even though GETting
a pipeline worked, indexing operations claimed that the pipeline did not exist,
because one gets its information from the cluster state and the other from an
in-memory data structure.
Now, two things happen:
1. Exceptions are suppressed until after the other, successful pipelines are loaded.
2. Failed pipelines are replaced with a placeholder pipeline.
If the pipeline execution service encounters the stubbed pipeline, it is known
that something went wrong at the time of pipeline creation and an exception was
thrown to the user at some point at start-up.
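Roughly (a hedged sketch; `Pipeline`, `PipelineConfiguration`, `factory`, and `placeholder` stand in for the actual ingest classes):

```java
Map<String, Pipeline> pipelines = new HashMap<>();
List<Exception> failures = new ArrayList<>();
for (PipelineConfiguration config : configs) {
    try {
        pipelines.put(config.getId(), factory.create(config));
    } catch (Exception e) {
        failures.add(e); // (1) suppressed until the remaining pipelines are loaded
        // (2) placeholder that fails any indexing request targeting this pipeline
        pipelines.put(config.getId(), placeholder(config.getId(), e));
    }
}
```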
Closes #28269.
This switches the underlying byte output representation used by default in
`XContentBuilder` from `BytesStreamOutput` to a `ByteArrayOutputStream` (an
`OutputStream` can still be specified manually).
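For example (the second form is a hedged usage sketch assuming a `jsonBuilder(OutputStream)` overload; `out` is a caller-supplied stream):

```java
XContentBuilder builder = XContentFactory.jsonBuilder();      // buffers into a ByteArrayOutputStream
XContentBuilder streaming = XContentFactory.jsonBuilder(out); // or write to a stream you supply
```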
This is groundwork to allow us to decouple `XContent*` from the rest of the ES
core code so that it may be factored into a separate jar.
Since `BytesStreamOutput` was not using the recycling instance of `BigArrays`,
this should not affect the circuit breaking capabilities elsewhere in the
system.
Relates to #28504
* Factor UnknownNamedObjectException into its own class
This moves the inner class `UnknownNamedObjectException` from
`NamedXContentRegistry` into a top-level class. This is so that
`NamedXContentRegistry` doesn't have to depend on StreamInput and StreamOutput.
Relates to #28504
This reverts commit f057fc294a.
The rescorer does not re-sort the collapsed values inside the top docs
during rescoring. For this reason the Lucene rescorer is not compatible
with collapsing.
Relates #27243
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily they are both
encoded exactly the same way, so there is no compatibility layer needed for
backwards compatibility.
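The replacement looks roughly like this (a sketch; both encodings write the enum's ordinal as a vInt, which is why no compatibility layer is needed):

```java
// before: xContentType.writeTo(out) / XContentType.readFrom(in)
out.writeEnum(xContentType);
XContentType xContentType = in.readEnum(XContentType.class);
```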
Relates to #28504
* Remove BytesRef usage from XContentParser and its subclasses
This removes all the BytesRef usage from XContentParser in favor of directly
returning a CharBuffer (this was originally what was returned; it was just
immediately wrapped in a BytesRef).
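Callers now read the characters directly (a hedged one-line sketch):

```java
CharBuffer chars = parser.charBuffer(); // previously wrapped as: new BytesRef(parser.charBuffer())
```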
Relates to #28504
* Rename method after Ryan's feedback
* Wrap stream passed to createParser in try-with-resources
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` instances in the enclosing (or a new) try-with-resources block.
This ensures the stream returned by `BytesReference.streamInput()` is closed.
Relates to #28504
* Use try-with-resources instead of closing in a finally block
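The pattern applied at each call site looks like this (a sketch; the `createParser` arguments vary by call site):

```java
try (InputStream stream = bytes.streamInput();
     XContentParser parser = xContent.createParser(xContentRegistry, stream)) {
    // ... parse; both resources are closed even if parsing throws ...
}
```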
This change ensures that we ignore terms removed by the analysis rather than returning a match_no_docs query for the parts
that contain a stop word. For instance, a query like "the AND fox" should ignore "the" if it is considered a stop word, instead of
adding a match_no_docs query.
This change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such a case, if `analyze_wildcard` is true and `the`
is considered a stop word, this part of the query was rewritten into a match_no_docs query. Since it's a prefix query, this change forces the prefix query
on `the` even if it is removed by the analysis.
Fixes #28855
Fixes #28856
Pruning tombstones is quite expensive since we have to walk through all
deletes in the live version map and acquire a lock on every value, even when
it is impossible to prune it. This change adds a pre-check for whether a delete
is old enough and, if not, skips acquiring the lock.
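A hedged sketch of the pre-check (field and lock names are illustrative):

```java
for (Map.Entry<BytesRef, DeleteVersionValue> entry : tombstones.entrySet()) {
    if (entry.getValue().time >= maxTimestampToPrune) {
        continue; // too recent to prune: skip without taking the per-uid lock
    }
    try (Releasable lock = keyedLock.tryAcquire(entry.getKey())) {
        if (lock != null) {
            // re-check under the lock, then remove the tombstone
        }
    }
}
```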
Increase the default limit of `index.highlight.max_analyzed_offset` to 1M from the previous 10K.
Enhance the error message emitted when the offset is exceeded to include the field name, index name, and doc_id.
Relates to https://github.com/elastic/kibana/issues/16764
When virtual lock is not possible because JNA is unavailable, we log a
warning message. Yet, this log message refers to mlockall rather than
virtual lock, presumably because of a copy/paste error. This commit
fixes this issue.
This commit replaces `org.apache.logging.log4j.util.Supplier` by
`java.util.function.Supplier` in non-logging code. These usages are
not incorrect, but rather accidental. I think our
intention was to use the JDK's Supplier in these places.
* Reject regex search if regex string is too long (#28344)
* Add docs
* Introduce index level setting `index.max_regex_length`
to control the maximum length of the regular expression
Closes #28344
Today we have two test base classes that have a lot in common when it comes to testing wire and xcontent serialization: `AbstractSerializingTestCase` and `AbstractXContentStreamableTestCase`. There are subtle differences though between the two, in the way they work, what can be overridden and features that they support (e.g. insertion of random fields).
This commit introduces a new base class called `AbstractWireTestCase` which holds all of the serialization test code in common between `Streamable` and `Writeable`. It has two minimal subclasses called `AbstractWireSerializingTestCase` and `AbstractStreamableTestCase` which are specialized for `Writeable` and `Streamable`.
This commit also introduces a new test class called `AbstractXContentTestCase` for all of the xContent testing, which holds a testFromXContent method for parsing and rendering to xContent. This one can be delegated to from the existing `AbstractStreamableXContentTestCase` and `AbstractSerializingTestCase` so that we avoid code duplication as much as possible and all these base classes offer the same functionality in the same way. Having this last base class decoupled from the serialization testing may also help with the REST high-level client testing, as there are some classes where it's hard to implement equals/hashcode, and this makes it possible to override `assertEqualInstances` for custom equality comparisons (also, this base class doesn't require implementing equals/hashcode as it doesn't test such methods).
This commit fixes a bug that was introduced in PR #27415 for 6.1
and 7.0 where a change to support MULTIPOINT shapes mucked up
indexing of standalone points.
Previously the message reported when `node_concurrent_outgoing_recoveries`
resulted in a `THROTTLE` decision included the reporting node's ID rather than
that of the primary. This commit fixes that.
Fixes #28777.
This commit makes AcknowledgedResponse implement ToXContentObject, so that the response knows how to print its own content out to XContent, which allows us to remove AcknowledgedRestListener.
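The implementation amounts to something like this (a hedged sketch of what implementing ToXContentObject entails here):

```java
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
    builder.startObject();
    builder.field("acknowledged", isAcknowledged());
    return builder.endObject();
}
```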
These tests need to be skipped. They cause plugins to be loaded which
causes a child classloader to be opened. We do not want to add the
permissions to be able to close a classloader solely for these tests,
and the JARs created in the test cannot be deleted on Windows until the
classloader is closed. Since this will not happen before test teardown,
the test will fail on Windows. So, we skip these tests.
This allows us to save a bit of code, but also adds more coverage, as it tests serialization, which was missing from some of the existing tests. It also requires implementing equals/hashcode, and we get the corresponding tests for them for free from the base test class.