Parsing of a ranking evaluation request and its subcomponents should throw parsing
errors on unknown fields. This change adds tests for this and changes the parser
behaviour where needed.
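A strictness test of this kind might look like the following sketch; `RankEvalSpec.parse` and the test-framework helpers are assumptions, not necessarily the production names:

```java
// Hypothetical test: an unknown field anywhere in the request should fail parsing.
public void testUnknownFieldsRejected() throws IOException {
    String json = "{ \"unknown_field\": 42 }";
    try (XContentParser parser = createParser(JsonXContent.jsonXContent, json)) {
        // expectThrows comes from the Lucene/ES test framework.
        expectThrows(Exception.class, () -> RankEvalSpec.parse(parser));
    }
}
```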
* Move to non-deprecated XContentHelper.createParser(...)
This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.
Relates to #28449
Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.
* Remove the deprecated (and now unneeded) createParser method
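The shape of the migration, sketched under the assumption of the 6.x-era signatures:

```java
// Before (deprecated): no say in how deprecated fields are handled.
// XContentParser parser = XContentHelper.createParser(xContentRegistry, bytes, xContentType);

// After: the deprecation handler is specified at parser creation time.
XContentParser parser = XContentHelper.createParser(
        xContentRegistry, LoggingDeprecationHandler.INSTANCE, bytes, xContentType);
```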
The initializer and afterthought were not having their types appropriately cast,
which is necessary with expressions; this in turn caused null values to be
popped off the stack.
If you call `getDates()` on a long or date type field, add a deprecation
warning to the response and log a message to the deprecation logger.
This *mostly* worked just fine, but if the deprecation logger happens to
roll then the roll will be performed with the script's permissions
rather than the permissions of the server. And scripts don't have
permission to, say, open files. So the rolling failed. This fixes that
by wrapping the call to the deprecation logger in `doPrivileged`.
This is a strange `doPrivileged` call because it doesn't check
Elasticsearch's `SpecialPermission`. `SpecialPermission` is a permission
that no-script code has and that scripts never have. Usually all
`doPrivileged` calls check `SpecialPermission` to make sure that they
are not accidentally acting on behalf of a script. But in this case we
are *intentionally* acting on behalf of a script.
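A minimal sketch of the wrapping, assuming a `deprecationLogger` field with a `deprecated(String)` method; note the deliberate absence of a `SpecialPermission` check:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

// Intentionally no SpecialPermission check: we *want* to act on behalf of the
// script, so the log call (and any file roll it triggers) runs with the
// server's permissions rather than the script's.
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
    deprecationLogger.deprecated("getDates() on a long or date field is deprecated");
    return null;
});
```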
Closes #28408
This commit switches all the modules and server test code to use the
non-deprecated `ParseField.match` method, passing in the parser's deprecation
handler or the logging deprecation handler when a parser is not available (like
in tests).
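Sketched under the assumption of the 6.x `ParseField` API:

```java
// Inside a parser: use the parser's own deprecation handler.
if (SOME_FIELD.match(parser.currentName(), parser.getDeprecationHandler())) {
    // ... handle the field
}

// In tests, where no parser is available, fall back to the logging handler.
boolean matched = SOME_FIELD.match(fieldName, LoggingDeprecationHandler.INSTANCE);
```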
Relates to #28449
This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metaData.
This method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.
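Roughly, the new method has this shape (modifiers and parameter names are a sketch, not the definitive signature):

```java
// Returns a copy of this builder with the given factoriesBuilder and metaData
// swapped in, so rewriting a sub-aggregation never mutates the original parent.
protected abstract AggregationBuilder shallowCopy(AggregatorFactories.Builder factoriesBuilder,
                                                  Map<String, Object> metaData);
```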
Relates #27782
Adds an allow_partial_search_results flag to search requests, with a default of true.
When false, the search will fail if it times out, has partial errors, or has missing shards,
rather than returning partial search results. A cluster-level setting provides the default for search requests with no flag.
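A sketch of opting out on a single request (the setter name follows the PR's flag and is an assumption):

```java
SearchRequest request = new SearchRequest("my-index");
// When false, a timeout, partial failure, or missing shard fails the whole
// search instead of returning partial results.
request.allowPartialSearchResults(false);
```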
Closes #27435
Factors the way in which XContent parsing handles deprecated fields
into a callback that is set at parser construction time. The goals here
are:
1. Remove Log4J as a dependency of XContent so that XContent can be used
by clients without forcing log4j and our particular deprecation handling
scheme.
2. Simplify handling of deprecated fields in tests. Now tests can listen
directly for the deprecation callback rather than digging through a
ThreadLocal.
More accurately, this change begins this work. It deprecates a number of
methods, pointing folks to the new versions of those methods that take
`DeprecationHandler`. The plan is to slowly drop these deprecated
methods. Once they are entirely removed we can remove Log4j as a
dependency of XContent.
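For illustration, a test can now plug in a recording handler directly; the two callback methods below are an assumption about the interface's shape:

```java
// Sketch: record deprecations instead of logging them, then assert on the list.
List<String> seen = new ArrayList<>();
DeprecationHandler recorder = new DeprecationHandler() {
    @Override
    public void usedDeprecatedName(String usedName, String modernName) {
        seen.add("deprecated name [" + usedName + "], expected [" + modernName + "]");
    }
    @Override
    public void usedDeprecatedField(String usedName, String replacedWith) {
        seen.add("deprecated field [" + usedName + "], replaced by [" + replacedWith + "]");
    }
};
// Pass `recorder` at parser construction time; no ThreadLocal digging required.
```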
This gives the test longer to block its updates. Now that we're checking
whether the updates actually blocked, we saw that they may not do so in the
normal 10 seconds on a highly loaded system. And our jenkins machines
often function like highly loaded systems. Maybe this fixes #26758!
This change adds support for the new ranking evaluation API to the High Level Rest Client.
This mostly means adding support for parsing the various response objects back from the
REST representation. It includes one change to the response syntax: previously we didn't
print the type of the metric details section, but we now need it to pick the right parser
when reading this section back.
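Usage then looks roughly like this; the HLRC method and the response accessor are assumptions, not confirmed names:

```java
RankEvalRequest request = new RankEvalRequest(spec, new String[] { "my_index" });
RankEvalResponse response = client.rankEval(request);  // assumed HLRC entry point
double quality = response.getEvaluationResult();       // assumed accessor
```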
Closes #28198
If a percolator query contains duplicate query clauses somewhere in the query tree then,
when these clauses are extracted, they should not affect the msm (minimum_should_match).
Otherwise a percolator query that should be a valid match can fail to become a candidate match,
because at query time the msm used by the CoveringQuery would never match
the msm used at index time.
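To illustrate, a percolator query with duplicate clauses like the sketch below should extract an msm of 1, not 2, because only one distinct term can ever match:

```java
// Both clauses extract the same term; counting them separately would store
// msm = 2 at index time, which the CoveringQuery could never reach.
QueryBuilder percolatorQuery = QueryBuilders.boolQuery()
        .must(QueryBuilders.termQuery("field", "value"))
        .must(QueryBuilders.termQuery("field", "value"));
```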
Closes #28315
This commit updates netty to 4.1.16.Final. This is the latest version that we can have work without
extra permissions. This updated version of netty fixes issues seen with Java 9 and some data
not being sent, which results in timeouts.
The rethrottle test fails from time to time because one of the child
tasks that we want to rethrottle hasn't properly started yet. We retry
in this case, but it looks like the retry either isn't long enough or
something else strange is happening.
This change adds yet more logging so future failure of this kind will be
easier to track down and it adds an extra wait condition: this waits for
all child tasks to be running or completed before rethrottling. This
*might* avoid the failure because once a child task is properly started
it should be quite ok to rethrottle.
Relates to #26192
Second part in a series of PRs to remove Painless Type in favor of Java Class. This completely removes the Painless Type dependency from AnalyzerCaster. Both casting and promotion are now based on Java Class exclusively. This also allows AnalyzerCaster to be decoupled from Definition and makes cast checks static calls again.
The test failure tracked by #28053 occurs because we fail to get the
failure response from the reindex on the first try and on our second try
the delete index API call that was supposed to trigger the failure
actually deletes the index during document creation. This causes the
test to fail catastrophically.
This PR attempts to wait for the failure to finish before the test moves
on to the second attempt. The failure doesn't reproduce locally for me
so I can't be sure that this helps at all with the failure, but it
certainly feels like it should help some. Here is hoping this prevents
similar failures in the future.
The test failure tracked by #26758 occurs when we cancel a running reindex
request that has been sliced into many children. The main reindex
response *looks* canceled but none of the children look canceled. This
is super strange because for the main request to look canceled for any
length of time one of the children has to be canceled.
This change adds additional logging to the test so we have more to go on
to debug this the next time it fails.
Self-referencing maps can cause a StackOverflowError if they are iterated, e.g. in their toString methods. This change adds some protection to the usage of those collections.
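A minimal reproduction with plain JDK collections (the direct self-reference case is already guarded by `AbstractMap`, so the cycle here is indirect):

```java
Map<String, Object> map = new HashMap<>();
List<Object> list = new ArrayList<>();
map.put("list", list);
list.add(map);      // indirect cycle: map -> list -> map
map.toString();     // recurses map -> list -> map -> ... until StackOverflowError
```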
This is the first step in a series to replace Painless Type with Java Class for any casting done during compilation. There should be no behavioural change.
In order to build a plugin that extends the painless whitelist, the spi
classes must be available to the plugin at compile time. This commit
moves the spi classes into a separate jar which will be published. Any
plugin authors wishing to extend painless through spi would then add a
compileOnly dependency on this jar.
Sometimes modules/plugins depend on locally built elasticsearch jars.
This means not only that the jar is constantly changing (so no need for
a sha check), but also that the license falls under the Elasticsearch
license, and there is no need to keep another copy. This commit updates
the dependencies checked by dependencyLicenses to exclude those that are
built by elasticsearch.
Currently the rest response of the ranking evaluation API wraps everything inside an
enclosing `rank_eval` object. This is redundant, since it is clear from the API
call, and it doesn't provide any other useful information. This change removes
this wrapper.
This commit adds a PainlessExtension which may be plugged in via SPI to
add additional classes, methods and members to the painless whitelist on
a per context basis. An example plugin adding and using a whitelist is
also added.
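A sketch of such an extension, assuming the SPI shape described here; the whitelist file and script context are placeholders:

```java
public class ExampleWhitelistExtension implements PainlessExtension {

    private static final Whitelist WHITELIST =
            WhitelistLoader.loadFromResourceFiles(ExampleWhitelistExtension.class, "example_whitelist.txt");

    @Override
    public Map<ScriptContext<?>, List<Whitelist>> getContextWhitelists() {
        // Extend the whitelist for a single context only.
        return Collections.singletonMap(SearchScript.CONTEXT, Collections.singletonList(WHITELIST));
    }
}
// Registered via META-INF/services/org.elasticsearch.painless.spi.PainlessExtension
```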
The method `initiateChannel` on `TcpTransport` is explicit in that
channels can be connected asynchronously. All production implementations
do connect asynchronously. Only the blocking `MockTcpTransport`
connects in a synchronous manner, which means some of the blocking code
in `TcpTransport` that waits on connections to complete goes untested.
Additionally, it requires a more extensive method signature than the
other transports need.
This commit modifies the `MockTcpTransport` to make these connections
asynchronously on a different thread. Additionally, it simplifies the
`initiateChannel` method signature.
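Conceptually the connect moves off the calling thread, roughly like this sketch (all names hypothetical):

```java
// Return the channel immediately; complete the connect on another thread and
// signal the outcome through a listener instead of blocking the caller.
executor.submit(() -> {
    try {
        socket.connect(remoteAddress);       // blocking connect, off-thread
        connectListener.onResponse(null);
    } catch (IOException e) {
        connectListener.onFailure(e);
    }
});
```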
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and parts of Loggers are
moved, as JarHell depends on them.
This is related to #27260. This commit moves the NioTransport from
:test:framework to a new nio-transport plugin. Additionally, supporting
tcp decoding classes are moved to this plugin. Generic byte reading and
writing contexts are moved to the nio library.
Additionally, this commit adds a basic MockNioTransport to
:test:framework that is a TcpTransport implementation for testing that
is driven by nio.
This commit adds the infrastructure to plugin building and loading to
allow one plugin to extend another. That is, one plugin may extend
another, with the "parent" plugin allowing itself to be extended through
java SPI. When all plugins extending a plugin are finished loading, the
"parent" plugin gets a callback (through the ExtensiblePlugin interface)
allowing it to reload SPI.
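The callback interface, sketched (the exact method name in the PR may differ):

```java
// A "parent" plugin implements this to learn when its extenders have finished
// loading, so it can re-run java.util.ServiceLoader against their class loaders.
public interface ExtensiblePlugin {
    default void reloadSPI(ClassLoader loader) {
        // Default: nothing to reload; extensible plugins override this.
    }
}
```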
This commit also adds an example plugin which uses this new
extensibility (adding to the painless whitelist).