You can use `Debug.explain(someObject)` in painless to throw an
`Error` that can't be caught by painless code and contains an
object's class. This is useful because painless's sandbox doesn't
allow you to call `someObject.getClass()`.
Closes #20263
This commit adds support for running plugins in tests that make use of
backwards compatibility nodes. This can be used to test rolling upgrades with plugins,
to ensure they do not cause issues during a rolling upgrade of Elasticsearch.
The `type` parameter has always been accepted by the search_shards api, probably to make the api and its urls the same as search. In truth, the type never had any effect: it has been ignored from day one, while accepting it may lead users to think that we actually do something with it.
This commit removes support for the type parameter from the REST layer and the Java API. Backwards compatibility is maintained on the transport layer though.
The newly added serialization test also uncovered a bug in the Java API where the `ClusterSearchShardsRequest` could be created with no arguments, yet the indices were required to be non-null, otherwise the request couldn't be serialized because `writeTo` would throw an NPE. Fixed by setting a default value (an empty array) for indices.
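The fix boils down to a safe default, roughly like this sketch (illustrative only, not the exact `ClusterSearchShardsRequest` code):

```java
import java.io.IOException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamOutput;

final class SafeDefaultSketch {
    // Illustrative: a null array would make serialization blow up, so default to empty.
    private String[] indices = Strings.EMPTY_ARRAY;

    void writeTo(StreamOutput out) throws IOException {
        out.writeStringArray(indices); // would throw an NPE if indices were still null
    }
}
```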
When a fatal error tragically closes an index writer, such an error
never makes its way to the uncaught exception handler. This prevents the
node from being torn down if an out of memory error or other fatal error
is thrown in the Lucene layer. This commit ensures that such events
bubble their way up to the uncaught exception handler.
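A minimal sketch of the general pattern, with a made-up helper name (this is not the actual Elasticsearch code): rethrow the fatal error on a fresh thread so that no intermediate catch block can swallow it before it reaches the uncaught exception handler.

```java
final class FatalErrors {
    // Illustrative sketch of the "die on another thread" idea.
    static void rethrowFatalOnAnotherThread(final Throwable t) {
        if (t instanceof Error) {
            // Rethrow on a new thread: callers up the original stack (e.g. the Lucene layer)
            // cannot catch it there, so it reaches the thread's uncaught exception handler.
            new Thread(() -> {
                throw (Error) t;
            }).start();
        }
    }
}
```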
Relates #21721
PR #21694 was initially planned to go into v6.0.0 and v5.1.0. However, since another PR that needs to be backported to v5.0.2 relies on it, #21694 must go to v5.0.2
as well. As such, the backward compatibility rules initially established by that PR must be changed to cover v5.0.2 and above.
As part of #20925 and #21341 we added an "all-fields" mode to the
`query_string` and `simple_query_string`. This would expand the query to
all fields and automatically set `lenient` to true.
However, we should still allow a user to override the `lenient` flag to
whichever value they desire, should they include it in the request. This
commit does that.
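For example, with the Java API query builder (an illustrative snippet; an explicit `lenient` in the request body is the same idea):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.QueryStringQueryBuilder;

class LenientOverrideExample {
    // Illustrative only: an explicitly requested lenient value should now be honored
    // even when the all-fields mode expands the query to all fields.
    static QueryStringQueryBuilder buildQuery() {
        return QueryBuilders.queryStringQuery("foo bar")
                .lenient(false); // user override; previously the all-fields mode forced this to true
    }
}
```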
When a fatal error is thrown on the network layer, such an error never
makes its way to the uncaught exception handler. This prevents the node
from being torn down if an out of memory error or other fatal error is
thrown while handling HTTP or transport traffic. This commit adds logic
to ensure that such errors bubble their way up to the uncaught exception
handler, even though Netty tries really hard to swallow everything.
Relates #21720
When Elasticsearch starts, we go through some initialization before we
install a security manager. Yet, the JVM makes internal policy decisions
on the basis of whether or not a security manager is present. This
commit installs a security manager immediately on startup so that the
JVM always thinks a security manager is present when making such policy
decisions.
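A rough sketch of the idea (the class name is made up and the actual bootstrap code differs): install a permissive security manager as the very first thing, so the JVM's policy decisions see a security manager from the start; the real, restrictive manager is installed later during bootstrap.

```java
public final class EarlySecurityManagerSketch {
    // Hedged sketch only; Elasticsearch's actual startup code is more involved.
    public static void installTemporarySecurityManager() {
        // Install an allow-everything SecurityManager immediately so the JVM behaves as if
        // a security manager is present from the very start.
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkPermission(java.security.Permission permission) {
                // grant everything for now; the restrictive policy comes later in bootstrap
            }
        });
    }
}
```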
Relates #21716
This should make debugging painless's analysis and code generation a
little easier.
The `toString` implementations mirror the AST somewhat, and look like
`(SSource (SReturn (ENumeric 1)))`.
Today we read a vint from the stream to allocate the size of an array up-front
before we start reading the values. This can be dangerous if, for instance, we read
from a corrupted stream, or if manipulated bytes are sent by an attacker or a fuzzer.
In most of these cases we can apply a best-effort check and
validate the array size to be _sane_ by ensuring we can read at least N bytes,
where N is the expected size of the array.
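A hedged sketch of that best-effort check (class and method names here are illustrative, not the exact StreamInput code):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;

final class ArraySizeValidation {
    // Illustrative sketch: validate an array size read from the wire before allocating.
    static int readArraySize(StreamInput in) throws IOException {
        final int size = in.readVInt();
        if (size < 0) {
            throw new IOException("array size must be positive but was: " + size);
        }
        // Every element takes at least one byte, so a sane size can never exceed
        // the number of bytes still available on the stream.
        if (size > in.available()) {
            throw new IOException("array size " + size + " exceeds remaining bytes: " + in.available());
        }
        return size;
    }
}
```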
* Add support for merging custom meta data in tribe node
Currently, when any underlying cluster has custom metadata
(via plugin), tribe node does not store custom meta data in its
cluster state. This is because the tribe node has no idea how to
select the appropriate custom metadata from one or many custom
metadata (corresponding to the number of underlying clusters).
This change adds an interface that custom metadata implementations
can extend to add support for merging multiple custom metadata of
the same type for storing in the tribe state (a sketch of such an
interface follows below).
Relates to #20544
Supersedes #20791
* Simplify updating tribe state
* Add tests for merging multiple custom metadata types in tribe node
* cleanup merging custom md logic in tribe service
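As promised above, a minimal sketch of what such a merge hook can look like; the interface and method names are made up for illustration and are not necessarily what this change adds:

```java
// Illustrative sketch of a merge hook for custom cluster state metadata in a tribe node.
interface MergeableCustomMetaData<T> {
    // Combine this custom metadata with the same-typed custom metadata coming from
    // another underlying cluster, producing the value the tribe node should store.
    T merge(T other);
}
```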
* [DOCS] Show EC2's auto attribute
This documents the `aws_availability_zone` node attribute as part of the `discovery-ec2` plugin. Also fixes outdated usage of "cloud aws".
This is a followup to #21689, where we removed a misplaced try-catch for IndexMissingException and IndexClosedException which was related to #9047 (at least for the index closed case). The code block within the change was moved as part of #20890, which made the catch redundant. The catch was somehow exercised before (e.g. in 5.0), but it doesn't seem to have had any effect; added tests to verify that. In fact, a catch added only to the search API would defeat the purpose of having common indices options that work consistently throughout all our APIs.
Relates to #21689
Today it's not possible to add exceptions to the serialization layer
without breaking BWC. This commit adds the ability to specify the Version
in which an exception was added, which allows falling back to NotSerializableExceptionWrapper
if the exception is not present in the stream's version.
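The gist, as a hedged sketch (the actual change wires this into exception registration rather than a standalone helper like this one):

```java
import java.io.IOException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.transport.NotSerializableExceptionWrapper;

final class VersionedExceptionWrite {
    // Illustrative sketch only.
    static void write(StreamOutput out, ElasticsearchException e, Version addedIn) throws IOException {
        if (out.getVersion().before(addedIn)) {
            // The remote node is too old to know this exception class, so fall back
            // to the generic wrapper it can always deserialize.
            out.writeException(new NotSerializableExceptionWrapper(e));
        } else {
            out.writeException(e);
        }
    }
}
```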
Relates to #21656
TransportSearchAction optimizes the search_type in certain cases, for instance when we are searching against a single shard, or when there is only a suggest section in the request. That optimization is wrapped in a try-catch, and when an exception happens we log it and ignore it. This may be a leftover from the past though, as no exception is expected to be thrown in that code block; hence if there is any exception we are probably better off bubbling it up rather than ignoring it.
The `lookupPrototype` method is not used anywhere; we always use its `lookupPrototypeSafe` variant (which also throws an exception if the prototype is not found) instead. This commit makes the safer variant the default by renaming it to `lookupPrototype`, and removes the previous "unsafe" variant.
Today we call `writeByte` up to 3x per character for each string written via
`StreamOutput#writeString`, which can add quite some overhead when strings
are long or many strings are written. This change adds a local buffer into which
characters are converted to bytes; the converted bytes are then
written via `writeBytes` instead, reducing the overhead of this operation.
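Roughly, the idea looks like the following sketch (simplified; the real `StreamOutput#writeString` differs in buffer management and encoding details):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamOutput;

final class BufferedStringWrite {
    // Simplified sketch of the buffering idea described above.
    static void writeString(StreamOutput out, String value) throws IOException {
        final int charCount = value.length();
        out.writeVInt(charCount);
        // Convert into a local scratch buffer instead of calling writeByte() per byte.
        final byte[] buffer = new byte[charCount * 3]; // worst case: 3 bytes per char
        int offset = 0;
        for (int i = 0; i < charCount; i++) {
            final int c = value.charAt(i);
            if (c <= 0x007F) {
                buffer[offset++] = (byte) c;
            } else if (c <= 0x07FF) {
                buffer[offset++] = (byte) (0xC0 | (c >> 6));
                buffer[offset++] = (byte) (0x80 | (c & 0x3F));
            } else {
                buffer[offset++] = (byte) (0xE0 | (c >> 12));
                buffer[offset++] = (byte) (0x80 | ((c >> 6) & 0x3F));
                buffer[offset++] = (byte) (0x80 | (c & 0x3F));
            }
        }
        // One bulk write instead of up to three writeByte() calls per character.
        out.writeBytes(buffer, 0, offset);
    }
}
```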
Closes #21660
The overflows were happening in two places: the parsing of the template, which
implicitly truncates the `order` when its value does not fall into the `integer`
range, and the comparator that sorts templates in ascending order, since it
returns `order2 - order1`, which might overflow.
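To make the comparator overflow concrete (illustrative snippet; `Integer.compare` is one safe replacement):

```java
final class TemplateOrderCompare {
    // Why `order2 - order1` is not a safe comparator.
    static void demo() {
        int order1 = Integer.MIN_VALUE;
        int order2 = 1;                                // order2 > order1, result must be positive
        int broken = order2 - order1;                  // overflows to -2147483647: wrong sign
        int correct = Integer.compare(order2, order1); // 1: correct sign, no overflow
        System.out.println(broken + " vs " + correct);
    }
}
```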
Closes #21622
* Fix highlighting on a stored keyword field
The highlighter converts stored keyword fields using toString().
Since keyword fields are stored as utf8 bytes, the conversion is broken.
This change uses BytesRef.utf8ToString() to convert the field value into a valid string (see the small example below).
Fixes #21636
* Replace BytesRef#utf8ToString with MappedFieldType#valueForDisplay
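To illustrate the original breakage: Lucene's `BytesRef#toString` prints the raw byte values, while `utf8ToString` decodes them into text.

```java
import org.apache.lucene.util.BytesRef;

final class KeywordBytesDemo {
    // Why calling toString() on the stored bytes is broken for display purposes.
    static void demo() {
        BytesRef stored = new BytesRef("quick brown fox");
        System.out.println(stored.toString());     // hex byte dump, e.g. "[71 75 69 ...]"
        System.out.println(stored.utf8ToString()); // "quick brown fox"
    }
}
```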
When Groovy detects a bug in its runtime because an internal assertion
was violated, it throws a GroovyBugError. This descends from
AssertionError, and if it goes uncaught it will land in the uncaught
exception handler without delivering any useful information to the
user. This commit wraps GroovyBugErrors in ScriptExceptions so that
useful information is returned to the user.
If the node name is explicitly set, it is not derived from the node ID,
which means the node ID does not immediately appear in the logs. While it can be
tracked down in other places, it would be easier for informational purposes if it
just showed up explicitly. This commit adds the node ID to the logs,
whether or not the node name is set.
Relates #21673
Implements a null coalescing operator in painless that looks like `?:`. This form was chosen to emulate Groovy's `?:` operator. It differs in that it only coalesces null values, whereas Groovy's `?:` operator coalesces all falsy values. I believe that makes it the same as Kotlin's `?:` operator. In other languages this operator looks like `??` (C#), `COALESCE` (SQL), and `:-` (bash).
This operator is lazy, meaning the right hand side is only evaluated if the left hand side is null.
Plugins are closed if they implement java.io.Closeable, but this is not
clear from the plugin interface. This commit clarifies this by declaring
that Plugin implements java.io.Closeable and by adding an empty
implementation to the base Plugin class.
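For example, a plugin that owns a resource can now simply override `close()` (illustrative sketch; the executor field is made up for the example):

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.elasticsearch.plugins.Plugin;

// Illustrative sketch: a plugin holding a resource that should be released when the node shuts down.
public class MyBackgroundWorkPlugin extends Plugin {
    // Hypothetical resource owned by the plugin.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    @Override
    public void close() throws IOException {
        worker.shutdownNow(); // plugins are closed (via java.io.Closeable) on node shutdown
    }
}
```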
Relates #21669
By default, it is recommended to start bulk with a size of 10-15MB and to increase it gradually to find the right size for the environment. The example originally showed 1GB, which could lead some users to just copy-paste the code snippet and start with excessively large sizes.
Backport of #21664 to the master branch.