Currently AbstractBuilderTestCase generates certain random values in its
`beforeTest()` method (annotated with `@Before`) only the first time that a test
method in the suite is run, while initializing the serviceHolder that we use for
the rest of the test. This shifts all subsequent random values and has the
effect that, when running single methods from a test suite with
"-Dtests.method=*", the random values they see are different from when the same
test method is run as part of the whole test suite. This makes it hard to use
the reproduction lines logged on failure.
This change runs the initialization of the serviceHolder and the randomization
connected to it using the test runner's master seed, so reproduction by running
just one method is possible again.
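For illustration, a reproduction line for a single method looks roughly like this (the task, class, and seed here are made up):
```
./gradlew :server:test \
  -Dtests.seed=4C478C4C1F3AF40B \
  -Dtests.class=org.elasticsearch.index.query.BoolQueryBuilderTests \
  -Dtests.method="testToQuery"
```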
Closes #32400
This commit adds two pieces. The first is a small set of documentation providing
instructions on how to get set up to run the context examples. This will require a download,
similar to how Kibana works, for some of the examples. The second is an ingest processor
example using the downloaded data. More examples will follow, ideally one per PR.
This also adds a set of tests to individually test each script as a unit test.
As part of #32608 we made sure that the fully qualified index name is taken from the query shard context whenever creating a new `QueryShardException`. That change introduced a regression: instead of setting the entire `Index` object on the exception, which holds both the index name and the index uuid, we ended up setting only the index name (including the cluster alias). With this commit we make sure that the index uuid does not get lost, and we try to lower the chances that a similar bug sneaks in again. That's done by making `QueryShardContext` return the fully qualified `Index` (which also holds the uuid) rather than only the fully qualified index name.
This change consolidates all the logic for generating a FunctionReference (renamed from
FunctionRef) from several arbitrary constructors to a single static function that is used at
both compile-time and run-time. This increases long-term maintainability as it is much
easier to follow when and how a function reference is being generated. It moves most of
the duplicated logic out of the ECapturingFuncRef, EFuncRef and ELambda nodes and
Def as well.
This modifies Def to use a Map<String, LocalMethod> to look up user-defined methods at runtime
instead of writing constant method handles to do the reverse lookup. This creates consistency
between how LocalMethods are looked up at compile-time and run-time. This consistency will allow
this code to be more maintainable moving forward. This will also allow FunctionReference to be
cleaned up in a follow up PR.
Renames existing methods in PainlessLookup. Adds lookupPainlessClass,
lookupPainlessMethod, and lookupPainlessField to PainlessLookup. This consolidates
the logic necessary to look these things up into a single place and begins the clean up of
some of the nodes that were looking each of these things up individually. This also has
the added benefit of improved consistency in error messaging.
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
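For example, to opt in to the java time return type (the property name comes from this commit; the jvm.options placement is illustrative):
```
-Des.scripting.use_java_time=true
```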
* Upgrade to `4.1.28` since the problem reported in #32487 is a bug in Netty itself (see https://github.com/netty/netty/issues/7337)
* Fixed other leaks in test code that now showed up due to improvements in leak reporting in the newer version
* Needed to extend permissions for netty common package because it now sets a classloader at runtime after changes in 63bae0956a
* Adjusted forbidden APIs check accordingly
* Closes #32487
Renames and removes variables from PainlessMethod to follow the new naming
convention. Generates method types at compile-time instead of using a method at run-
time. Moves write method to MethodWriter.
This commit fixes the painless compiler classloader to know about the
classes from the script context. This fixes an issue when a custom
context is used from a plugin which caused a ClassNotFoundException for
the script class and its factory classes.
PainlessMethod was being used as both a method and a constructor, and while there are
similarities, there are also some major differences. This allows the reflection objects to be
stored reducing the number of other pieces of data stored in a PainlessMethod as they are
now redundant. This temporarily increases some of the code in FunctionRef and
PainlessDocGenerator as they now differentiate between constructors and methods, but
it also makes the code more maintainable because there aren't checks in several places
anymore to differentiate.
The error tests for hex values previously used a random string of
digits, but this could be a valid hex value. This commit changes these
tests to use a fixed invalid hex value.
closes #32370
MethodType can be computed at compile-time rather than run-time. This removes the
method that collects MethodType at run-time from a PainlessMethod since it is no longer
necessary.
The main highlight is the removal of reclaim_deletes_weight in the TieredMergePolicy.
The es setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and its value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow up.
This commit changes the randomization to always create an index with a type.
It also adds a way to create a query shard context that maps to an index with
no type registered in order to explicitly test cases where there is no type.
There are two scenarios where a http request could terminate in the cors
handler. If that occurs, the requests need to be released. This commit
releases those requests.
Removes the variables name, clazz, and type as they are unnecessary. Renames
staticMembers -> staticFields, members -> fields, getters -> getterMethodHandles, and
setters -> setterMethodHandles.
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.
Implements a static function in PainlessLookupBuilder that contains all the logic related
to Whitelist. PainlessLookupBuilder is available for use in loading from methods beyond
Whitelist now.
* Test `handler` must release buffer the same way the replaced `org.elasticsearch.http.netty4.Netty4HttpRequestHandler#channelRead0` releases it
* Closes #32289
This finishes updating the methods in the PainlessLookupBuilder to the new naming scheme. Mechanical change. Methods include the ones used for copying members in the inheritance hierarchy, calculating shortcuts, and setting the functional interface.
This adds the ERR metric to the provided xContent parsers in the module and the
high level rest client registry. Also adding integration tests to make sure the
metric is correctly registered and usable from the client.
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is `∑(value * weight) / ∑(weight)`.
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
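For reference, a request using the new aggregation might look like this (index and field names are illustrative):
```
POST /exams/_search
{
  "size": 0,
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight" }
      }
    }
  }
}
```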
Closes #15731
The notion of "quality" is an overloaded term in the search ranking evaluation
context. Its usually used to decribe certain levels of "good" vs. "bad" of a
seach result with respect to the users information need. We currently report the
result of the ranking evaluation as `quality_level` which is a bit missleading.
This changes the response parameter name to `metric_score` which fits better.
This is a largely mechanical change that cleans up the addConstructor, addMethod, and
addFields methods in PainlessLookup. Changes include renamed variables, better error
messages, and some minor code movement to make it more maintainable long term.
* INGEST: Extend KV Processor (#31789)
Added more capabilities supported by LS to the KV processor:
* Stripping of brackets and quotes from values (`include_brackets` in corresponding LS filter)
* Adding key prefixes
* Trimming specified chars from keys and values
Refactored the way the filter is configured to avoid conditionals during execution.
Refactored Tests a little to not have to add more redundant getters for new parameters.
Relates #31786
* Add documentation
* INGEST: Make a few Processors callable by Painless
* Extracted a few stateless String processors as well as the json processor to static methods and whitelisted them in Painless
* provide whitelist from processors plugin
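As an illustration, an extended `kv` processor configuration could look like the following sketch (field values are made up; the option names are assumed from the list above):
```
{
  "kv": {
    "field": "message",
    "field_split": " ",
    "value_split": "=",
    "prefix": "event_",
    "trim_key": " ",
    "trim_value": "\"",
    "strip_brackets": true
  }
}
```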
Currently the ranking evaluation response contains an `unknown_docs` section
for each search use case in the evaluation set. It contains document ids for
results in the search hits that currently don't have a quality rating.
This change renames it to `unrated_docs`, which better reflects its purpose.
This removes some extraneous naming syntax and makes clear the meaning of certain
naming conventions without ambiguities (stricter) within the lookup package. Purely
mechanical change. Note this does not cover a large portion of the
PainlessLookupBuilder and PainlessLookup yet as there are several more follow up PRs for these incoming.
* Add basic support for field aliases in index mappings. (#31287)
* Allow for aliases when fetching stored fields. (#31411)
* Add tests around accessing field aliases in scripts. (#31417)
* Add documentation around field aliases. (#31538)
* Add validation for field alias mappings. (#31518)
* Return both concrete fields and aliases in DocumentFieldMappers#getMapper. (#31671)
* Make sure that field-level security is enforced when using field aliases. (#31807)
* Add more comprehensive tests for field aliases in queries + aggregations. (#31565)
* Remove the deprecated method DocumentFieldMappers#getFieldMapper. (#32148)
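For context, a field alias mapping takes roughly this shape (index and field names are illustrative):
```
PUT /trips
{
  "mappings": {
    "_doc": {
      "properties": {
        "distance": { "type": "long" },
        "route_length_miles": {
          "type": "alias",
          "path": "distance"
        }
      }
    }
  }
}
```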
This change cleans up the addPainlessClass methods by doing the following things:
* Rename many variable names to match the new conventions described in the JavaDocs
for PainlessLookup
* Decouples Whitelist.Class from adding a PainlessClass directly
* Adds a second version of addPainlessClass that is intended for use to add future
defaults in a follow-up PR
This change also fixes the method and field caches by storing Classes instead of Strings
since it would technically be possible now that the whitelists are extendable to have
different Classes with the same name. It was convenient to add this change together
since some of the new constants are shared.
Note the changes are largely mechanical again where all the code behavior should
remain the same.
When building custom token filters without an index in the _analyze endpoint,
we need to ensure that referring filters are correctly built by calling
their #setReferences() method
Fixes #32154
This change adds two contexts to execute scripts against:
* SEARCH_SCRIPT: Allows to run scripts in a search script context.
This context is used in `function_score` query's script function,
script fields, script sorting and `terms_set` query.
* FILTER_SCRIPT: Allows to run scripts in a filter script context.
This context is used in the `script` query.
In both contexts an index name and a sample document need to be specified.
The document is needed to create an in-memory index that the script can
access via the `doc[...]` and other notations. The index name is needed
because a mapping is needed to index the document.
Examples:
```
POST /_scripts/painless/_execute
{
  "script": {
    "source": "doc['field'].value.length()"
  },
  "context": {
    "search_script": {
      "document": {
        "field": "four"
      },
      "index": "my-index"
    }
  }
}
```
Returns:
```
{
  "result": 4
}
```
```
POST /_scripts/painless/_execute
{
  "script": {
    "source": "doc['field'].value.length() <= params.max_length",
    "params": {
      "max_length": 4
    }
  },
  "context": {
    "filter_script": {
      "document": {
        "field": "four"
      },
      "index": "my-index"
    }
  }
}
```
Returns:
```
{
  "result": true
}
```
Also changed PainlessExecuteAction.TransportAction to use TransportSingleShardAction
instead of HandledAction, because now, in case the search or filter contexts are used,
the request needs to be redirected to a node that has an active IndexService
for the index being referenced (a node with a shard copy for that index).
Several pieces of data in PainlessClass cannot be passed in at the time the
PainlessClass is created so it must be "frozen" after all the data is collected. This means
PainlessClass is currently serving two functions as both a builder and a set of data. This
separates the two pieces into clearly distinct values.
This change also removes the PainlessMethodKey in favor of a simple String. The goal is
to have the painless method key be completely internal to the PainlessLookup eventually,
and this simplifies the path there. Note that this was added here since PainlessClass and
PainlessClassBuilder were already being changed, instead of in a follow-up PR.
When building the PainlessMethods and PainlessFields they stored a reference to a
PainlessClass. This reference was prior to "freezing" the PainlessClass so the data was
both incomplete and mutable. This has been replaced with a target java class instead
since the PainlessClass is accessible through a java class now and it requires no special
modifications to get around a chicken and egg issue.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM, as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded keys and certificates, both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc., we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, but
since we control the testing infrastructure this should be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS ciphers, and the
SAMLAuthenticator class, as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
re-enabled with precomputed and signed SAML messages at a later stage.
That will be covered in a subsequent PR.
Currently the `keep_types` token filter includes all token types specified using
its `types` parameter. Lucene's TypeTokenFilter also provides a second mode where,
instead of keeping the specified tokens (include), they are filtered out
(exclude). This change exposes this option as a new `mode` parameter that can
either take the value `include` (the default, if not specified) or `exclude`.
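For illustration, an analyzer that filters out numeric tokens could be configured like this (index, analyzer, and filter names are made up):
```
PUT /keep_types_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "no_numbers": {
          "tokenizer": "standard",
          "filter": ["remove_numbers"]
        }
      },
      "filter": {
        "remove_numbers": {
          "type": "keep_types",
          "mode": "exclude",
          "types": ["<NUM>"]
        }
      }
    }
  }
}
```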
Closes #29277
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes most of the calls not in X-Pack to their new versions.
The "source" field in SSource seems unused. If removed, it can also be removed
from the ctor, which in turn makes is possible to delete the sourceText in the
Walker class.
* Cleanup Duplication in `PainlessScriptEngine`
* Extract duplicate building of compiler settings to method
* Remove dead method params + dead constant in `ScriptProcessor`
This is related to #27260. It adds the SecurityNioHttpServerTransport
to the security plugin. It randomly uses the nio http transport in
security integration tests.
* Replace Ingest ScriptContext with Custom Interface
* Make org.elasticsearch.ingest.common.ScriptProcessorTests#testScripting more precise
* Don't mock script factory in ScriptProcessorTests
* Adjust mock script plugin in IT for new API
Because this is a static method on a public API, and one that we encourage
plugin authors to use, the method with the typo is deprecated in 6.x
rather than just renamed.
This change adds Expected Reciprocal Rank (ERR) as a ranking evaluation metric
as described in:
Chapelle, O., Metlzer, D., Zhang, Y., & Grinspan, P. (2009).
Expected reciprocal rank for graded relevance.
Proceeding of the 18th ACM Conference on Information and Knowledge Management.
https://doi.org/10.1145/1645953.1646033
ERR is an extension of the classical reciprocal rank to the graded relevance
case and assumes a cascade browsing model. It quantifies the usefulness of a
document at rank `i` conditioned on the degree of relevance of the items at ranks
less than `i`. ERR seems to be gaining traction as an alternative to (n)DCG, so it
seems like a good metric to support. Also, ERR seems to be the default optimization
metric used for training in RankLib, a widely used learning-to-rank library.
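For reference, the metric as defined in the cited paper, where `g_i` is the relevance grade of the document at rank `i`:
```
\mathrm{ERR} = \sum_{r=1}^{n} \frac{1}{r} \, R_r \prod_{i=1}^{r-1} (1 - R_i),
\qquad R_i = \frac{2^{g_i} - 1}{2^{g_{\max}}}
```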
Relates to #29653
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `module/repository-url` project to use the new
versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `modules/reindex` project to use the new
versions.
This change adds support for template snippet (e.g. {{foo}}) resolution
in the date_index_name processor. The following configuration options
will now resolve a templated value if so configured:
* index_name_prefix (e.g "index_name_prefix": "myindex-{{foo}}-")
* date_rounding (e.g. "date_rounding" : "{{bar}}")
* index_name_format (e.g. "index_name_format": "{{baz}}")
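Put together, a templated processor configuration could look like this sketch (the `foo`/`bar`/`baz` parameters mirror the placeholders above):
```
{
  "date_index_name": {
    "field": "date",
    "index_name_prefix": "myindex-{{foo}}-",
    "date_rounding": "{{bar}}",
    "index_name_format": "{{baz}}"
  }
}
```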
* Fix assertIngestDocument wrongfully passing
* Previously docA being a subset of docB passed because iteration was over docA's keys only
* Scalars in nested fields were not compared in all cases
* Assertion errors were hard to interpret (message wasn't correct since it only mentioned the class type)
* In cases where two paths contained different types a ClassCastException was thrown instead of an AssertionError
* Fixes #28492
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
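With the exception enabled, scripts that need to tolerate missing values can guard explicitly, along these lines:
```
// return a fallback instead of throwing when the document has no value
doc['field'].size() == 0 ? 'n/a' : doc['field'].value
```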
Closes #29286
Create lookup package
rename Definition to PainlessLookup and move to lookup package
rename Definition.Method to PainlessMethod
rename Definition.MethodKey to PainlessMethodKey
rename Definition.Field to PainlessField
rename Definition.Struct to PainlessClass
rename Definition.Cast to PainlessCast
rename Whitelist.Struct to WhitelistClass
rename Whitelist.Constructor to WhitelistConstructor
rename Whitelist.Method to WhitelistMethod
rename Whitelist.Field to WhitelistField
Removes support for storing scripts without the usual json around the
script, so you can no longer do:
```
POST _scripts/<templatename>
{
  "query": {
    "match": {
      "title": "{{query_string}}"
    }
  }
}
```
and must instead do:
```
POST _scripts/<templatename>
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": {
          "title": "{{query_string}}"
        }
      }
    }
  }
}
```
This improves error reporting when you attempt to store a script but don't
quite get the syntax right. Before, there was a good chance that we'd
think of it as a "raw" template and just store it. Now we won't do that.
Nice.
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format `
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix JAVA version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* fix painless array tests
closes #31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on the compile java version, not the runtime version
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
Today TransportService is tightly coupled with Transport since it
requires an instance of TransportService in order to receive responses
and send requests. This is mainly due to the Request and Response handlers
being maintained in TransportService but also because of the lack of a proper
callback interface.
This change moves request handler registry and response handler registration into
Transport and adds all necessary methods to `TransportConnectionListener` in order
to remove the `TransportService` dependency from `Transport`
Transport now accepts one or more `TransportConnectionListener` instances that are
executed sequentially in a blocking fashion.
This completes the removal of Painless Type. The new data structures in the definition are a map of names (String) to Java Classes and a map of Java Classes to Painless Structs. The names to Java Classes map can contain a 2 to 1 ratio of names to classes depending on whether or not a short (imported) name is used. The Java Classes to Painless Structs map is always 1 to 1, where the Java Class name must match the Painless Struct name. This should lead to a significantly simpler type system in Painless moving forward, since the Painless Type only held redundant information given that Painless does not support generics.
ingest: Introduction of a bytes processor
This processor allows for human readable byte values (e.g. 1kb) to be converted to a value in bytes (e.g. 1024). Internally this processor re-uses "ByteSizeValue.parseBytesSizeValue" which supports conversions up to Long.MAX_VALUE and the following units: "b", "kb", "mb", "gb", "tb", "pb".
This change also introduces a generic return type for the AbstractStringProcessor to allow for code reuse while supporting a String -> T conversion. (String -> Long in this case).
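A pipeline using the new processor might look like this (the field name is illustrative):
```
{
  "description": "convert human readable byte values",
  "processors": [
    {
      "bytes": {
        "field": "file.size"
      }
    }
  ]
}
```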
Adds a new parameter to the BlobContainer#write*Blob methods to specify whether the existing file
should be overridden or not. For some metadata files in the repository, we actually want to replace
the current file. This is currently implemented through an explicit blob delete and then a fresh write.
In case of using a cloud provider (S3, GCS, Azure), this results in 2 API requests instead of just 1.
This change will therefore allow us to achieve the same functionality using less API requests.
This commit removes some tests in the repository-s3 plugin that
have not been executed for 2+ years but have been maintained
for nothing. Most of the tests in AbstractAwsTestCase were
obsolete or superseded by fixture based integration tests.
* remove left-over comment
* make sure of the property for plugins
* skip installing modules if these exist in the distribution
* Log the distribution being run
* Don't allow running with integ-tests-zip passed externally
* top level x-pack/qa can't run with oss distro
* Add support for matching objects in lists
Makes it possible to have a key that points to a list and assert that a
certain object is present in the list. All keys have to be present and
values have to match. The objects in the source list may have additional
fields.
example:
```
match: { 'nodes.$master.plugins': { name: ingest-attachment } }
```
* Update plugin and module tests to work with other distributions
Some of the tests expected that the integration tests will always be run
with the `integ-test-zip` distribution so that there will be no other
plugins loaded.
With this change, we check for the presence of the plugin without
assuming exclusivity.
* Allow modules to run on other distros as well
To match the behavior of tests.distributions
* Add and use a new `contains` assertion
Replaces the previous changes that caused `match` to do a partial match.
* Implement PR review comments
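A sketch of the new assertion, mirroring the `match` example above:
```
contains: { 'nodes.$master.plugins': { name: ingest-attachment } }
```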
* Migrate scripted metric aggregation scripts to ScriptContext design #29328
* Rename new script context container class and add clarifying comments to remaining references to params._agg(s)
* Misc cleanup: make mock metric agg script inner classes static
* Move _score to an accessor rather than an arg for scripted metric agg scripts
This causes the score to be evaluated only when it's used.
* Documentation changes for params._agg -> agg
* Migration doc addition for scripted metric aggs _agg object change
* Rename "agg" Scripted Metric Aggregation script context variable to "state"
* Rename a private base class from ...Agg to ...State that I missed in my last commit
* Clean up imports after merge
TransportAction currently contains 2 doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
TransportRequestHandler currently contains 2 messageReceived methods,
one which takes a Task, and one that does not. The first just delegates
to the second. This commit changes all existing implementors of
TransportRequestHandler to implement the version which takes Task, thus
allowing the class to be a functional interface, and eliminating the
need to throw exceptions when a task needs to be ensured.
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool then have a
member added to keep it from their own constructor.
Historically in TcpTransport server channels were represented by the
same channel interface as socket channels. This was necessary as
TcpTransport was parameterized by the channel type. This commit
introduces TcpServerChannel and HttpServerChannel classes. Additionally,
it adds the implementations for the various transports. This allows
server channels to have unique functionality and not implement the
methods they do not support (such as send and getRemoteAddress).
Additionally, with the introduction of HttpServerChannel this commit
extracts some of the storing and closing channel work to the abstract
http server transport.
The `multiplexer` filter emits multiple tokens at the same position, each
version of the token having been passed through a different filter chain.
Identical tokens at the same position are removed.
This allows users to, for example, index lowercase and original-case tokens,
or stemmed and unstemmed versions, in the same field, so that they can search
for a stemmed term within x positions of an unstemmed term.
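For example, to index original-case and stemmed lowercase variants side by side (index, analyzer, and filter names are made up):
```
PUT /multiplexer_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["my_multiplexer"]
        }
      },
      "filter": {
        "my_multiplexer": {
          "type": "multiplexer",
          "filters": ["lowercase", "lowercase, porter_stem"]
        }
      }
    }
  }
}
```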
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver then have a
member added to keep the resolver from their own constructor.
This is a general cleanup of channels and exception handling in http.
This commit introduces a CloseableChannel that is a superclass of
TcpChannel and HttpChannel. This allows us to unify the closing logic
between tcp and http transports. Additionally, the normal http channels
are extracted to the abstract server transport.
Finally, this commit (mostly) unifies the exception handling between nio
and netty4 http server transports.
Since #30966, Action no longer has anything but a call to the
GenericAction super constructor. This commit renames GenericAction
into Action, thus eliminating the Action class. Additionally, this
commit removes the Request generic parameter of the class, since
it was unused.
The following analyzers were moved from server module to analysis-common module:
`greek`, `hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`,
`lithuanian`, `norwegian`, `persian`, `portuguese`, `romanian`, `russian`,
`sorani`, `spanish`, `swedish`, `turkish` and `thai`.
Relates to #23658
This is related to #28898. This PR implements pooling of bytes arrays
when reading from the wire in the http server transport. In order to do
this, we must integrate with netty reference counting. The manner in
which this PR implements this is by making Pages in InboundChannelBuffer
reference counted. When we access the underlying page to pass to
netty, we retain the page. When netty releases its bytebuf, it releases
the underlying pages we have passed to it.
This change moves tests in `smoke-test-rank-eval-with-mustache` into the main
ranking evaluation module by declaring that the integration testing cluster
requires the `lang-mustache` plugin. This avoids having to maintain the qa
project for only one basic test suite.
While the other two ranking evaluation metrics (precision and reciprocal rank)
already provide a more detailed output for how their score is calculated, the
discounted cumulative gain metric (dcg) and its normalized variant have been lacking
this until now. It's not really clear which level of detail might be useful for
debugging and understanding the final metric calculation, but this change adds a
`metric_details` section to REST output that contains some information about the
evaluation details.
Currently we maintain a compatibility map of http status codes in both
the netty4 and nio modules. These maps convert a RestStatus to a netty
HttpResponseStatus. However, as these fundamentally represent integers,
we can just use the netty valueOf method to convert a RestStatus to a
HttpResponseStatus.
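A minimal sketch of the conversion, assuming only that `RestStatus#getStatus` exposes the numeric code:
```java
import io.netty.handler.codec.http.HttpResponseStatus;
import org.elasticsearch.rest.RestStatus;

// Netty can look up (or create) a status from its integer value, so a
// hand-maintained RestStatus -> HttpResponseStatus map is unnecessary.
static HttpResponseStatus toNettyStatus(RestStatus status) {
    return HttpResponseStatus.valueOf(status.getStatus());
}
```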
This is related to #28898. With the addition of the http nio transport,
we now have two different modules that provide http transports.
Currently most of the http logic lives at the module level. However,
some of this logic can live in server. In particular, some of the
setting of headers, cors, and pipelining. This commit begins this moving
in that direction by introducing lower level abstraction (HttpChannel,
HttpRequest, and HttpResponse) that is implemented by the modules. The
higher level rest request and rest channel work can live entirely in
server.
Many fixtures have similar code for writing the pid & ports files or
for handling HTTP requests. This commit adds an AbstractHttpFixture
class in the test framework that can be extended for specific testing purposes.
Currently the http pipelining handlers seem to support chunked http
content. However, this does not make sense. There is a content
aggregator in the pipeline before the pipelining handler. This means the
pipelining handler should only see full http messages. Additionally, the
request handler immediately after the pipelining handler only supports
full messages.
This commit modifies both nio and netty4 pipelining handlers to assert
that an inbound message is a full http message. Additionally it removes
the tests for chunked content.
This adds a thread interrupter that allows us to encapsulate calls to org.joni.Matcher#search().
This method can hang forever if the regex expression is too complex.
The thread interrupter in the background checks every 3 seconds whether there are threads
executing the org.joni.Matcher#search() method for longer than 5 seconds and,
if so, interrupts these threads.
Joni has checks such that every 30k iterations it checks whether the current thread is interrupted and,
if so, returns org.joni.Matcher#INTERRUPTED
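A minimal sketch of the watchdog idea (class and method names here are hypothetical, not the actual implementation):
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadWatchdog {
    private final Map<Thread, Long> registry = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public ThreadWatchdog() {
        // every 3 seconds, interrupt threads that have been registered for more than 5 seconds
        scheduler.scheduleAtFixedRate(this::interruptLongRunning, 3, 3, TimeUnit.SECONDS);
    }

    // call before entering org.joni.Matcher#search()
    public void register() {
        registry.put(Thread.currentThread(), System.currentTimeMillis());
    }

    // call after org.joni.Matcher#search() returns
    public void unregister() {
        registry.remove(Thread.currentThread());
    }

    private void interruptLongRunning() {
        long now = System.currentTimeMillis();
        registry.forEach((thread, start) -> {
            if (now - start > TimeUnit.SECONDS.toMillis(5)) {
                thread.interrupt(); // joni sees the interrupt and returns Matcher#INTERRUPTED
            }
        });
    }
}
```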
Closes #28731
This commit upgrades us to Netty 4.1.25. This upgrade is more
challenging than past upgrades, all because of a new object cleaner
thread that they have added. This thread requires an additional security
permission (set context class loader, needed to avoid leaks in certain
scenarios). Additionally, there is not a clean way to shutdown this
thread which means that the thread can fail thread leak control during
tests. As such, we have to filter this thread from thread leak control.
Previously this was called for the combine script only. This change checks for self references for
init, map, and reduce scripts as well, and adds unit test coverage for the init, map, and combine cases.
The following analyzers were moved from server module to analysis-common module:
`snowball`, `arabic`, `armenian`, `basque`, `bengali`, `brazilian`, `bulgarian`,
`catalan`, `chinese`, `cjk`, `czech`, `danish`, `dutch`, `english`, `finnish`,
`french`, `galician` and `german`.
Relates to #23658
This is related to #27260 and #28898. This commit adds the transport-nio
plugin as a random option when running the http smoke tests. As part of
this PR, I identified an issue where cors support was not properly
enabled causing these tests to fail when using transport-nio. This
commit also fixes that issue.
This field is similar to the `feature` field but is better suited to index
sparse feature vectors. A use-case for this field could be to record topics
associated with every document alongside a metric that quantifies how well
the topic is connected to this document, and then boost queries based on the
topics that the logged-in user is interested in.
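Assuming the new field type is exposed as `feature_vector` (an assumption, by analogy with the existing `feature` field), usage might look like:
```
PUT /my-index
{
  "mappings": {
    "_doc": {
      "properties": {
        "topics": { "type": "feature_vector" }
      }
    }
  }
}
PUT /my-index/_doc/1
{
  "topics": { "economics": 50, "politics": 20 }
}
```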
Relates #27552
move `fingerprint`, `pattern` and `standard_html_strip` analyzers
to analysis-common module. (both AnalysisProvider and PreBuiltAnalyzerProvider)
Changed PreBuiltAnalyzerProviderFactory to extend from PreConfiguredAnalysisComponent and
changed to make sure that predefined analyzers are always instantiated with the current
ES version and if an instance is requested for a different version then delegate to PreBuiltCache.
This is similar to the behaviour that exists today in AnalysisRegistry.PreBuiltAnalysis and
PreBuiltAnalyzerProviderFactory. (#31095)
Relates to #23658
This is related to #28898. This commit adds cors support to the nio http
transport. Most of the work is copied directly from the netty module
implementation. Additionally, this commit adds tests for the nio http
channel.
Currently this class takes care of both selecting the relevant value and
replacing missing values, if any. This is fine for sorting, which always needs
to do both at the same time, but we also have a number of aggregations and
script utils that need to retain information about missing values, so this change
proposes to decouple selection of the relevant value and replacement of missing
values.
ObjectParser should throw XContentParseExceptions, not IAE. A dedicated parsing
exception can include the place where the error occurred.
Closes #30605
This snapshot includes:
- LUCENE-8341: Record soft deletes in SegmentCommitInfo, which will resolve #30851
- LUCENE-8335: Enforce soft-deletes field up-front
This is related to #31017. That issue identified that these three http
methods were treated like GET requests. This commit adds them to
RestRequest. This means that these methods will be handled properly and
generate 405s.
This commit introduces the ability for a client to communicate to the
server features that it can support and for these features to be used in
influencing the decisions that the server makes when communicating with
the client. To this end we carry the features from the client to the
underlying stream as we carry the version of the client today. This
enables us to enhance the logic where we make protocol decisions on the
basis of the version on the stream to also make protocol decisions on
the basis of the features on the stream. With such functionality, the
client can communicate to the server if it is a transport client, or if
it has, for example, X-Pack installed. This enables us to support
rolling upgrades from the OSS distribution to the default distribution
without breaking client connectivity as we can now elect to serialize
customs in the cluster state depending on whether or not the client
reports to us using the feature capabilities that it can under these
customs. This means that we would avoid sending a client pieces of the
cluster state that it can not understand. However, we want to take care
and always send the full cluster state during node-to-node communication
as otherwise we would end up with different understanding of what is in
the cluster state across nodes depending on which features they reported
to have. This is why when deciding whether or not to write out a custom
we always send the custom if the client is not a transport client and
otherwise do not send the custom if the client is transport client that
does not report to have the feature required by the custom.
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
Currently failures to compile a script usually lead to a ScriptException, which
inherits the 500 INTERNAL_SERVER_ERROR from ElasticsearchException if it does
not contain another root cause. Instead, this should be a 400 Bad Request error.
This PR changes this more generally for script compilation errors by changing
ScriptException to return 400 (bad request) as status code.
Closes #12315
Currently AbstractHttpServerTransport is in a netty4 module. This is the
incorrect location. This commit moves it out of netty4 module.
Additionally, it moves unit tests that test AbstractHttpServerTransport
logic to server.
The stored scripts API today accepts malformed requests instead of throwing an exception.
This PR deprecates accepting malformed put stored script requests (requests not using the official script format).
Relates to #27612
Currently nio and netty modules use the CompletableFuture class for
managing listeners. This is unfortunate as that class accepts
Throwable. This commit adds a class CompletableContext that wraps
the CompletableFuture but does not accept Throwable. This allows the
modification of netty and nio logic to no longer handle Throwable.
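A minimal sketch of the wrapper idea (the real class likely has more surface; only CompletableFuture semantics are assumed):
```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

// Wraps a CompletableFuture but only ever hands listeners an Exception,
// so netty/nio code no longer needs to deal with Throwable.
public class CompletableContext<T> {
    private final CompletableFuture<T> delegate = new CompletableFuture<>();

    public void addListener(BiConsumer<T, ? super Exception> listener) {
        delegate.whenComplete((value, throwable) -> {
            if (throwable != null && !(throwable instanceof Exception)) {
                throw new IllegalStateException("Completed with non-Exception Throwable", throwable);
            }
            listener.accept(value, (Exception) throwable);
        });
    }

    public boolean complete(T value) {
        return delegate.complete(value);
    }

    public boolean completeExceptionally(Exception e) {
        return delegate.completeExceptionally(e);
    }
}
```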
This commit reintroduces 31251c9 and 63a5799. These commits introduced a
memory leak and were reverted. This commit brings those commits back
and fixes the memory leak by removing unnecessary retain method calls.
This reverts commit 31251c9 introduced in #30695.
We suspect this commit is causing the OOME's reported in #30811 and we will use this PR to test this assertion.
Lucene has a new `FeatureField` which gives the ability to record numeric
features as term frequencies. Its main benefit is that it allows to boost
queries with the values of these features and efficiently skip non-competitive
documents at the same time using block-max WAND and indexed impacts.
The new snapshot includes LUCENE-8324, which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve failed tests in the CorruptedFileIT suite.
Closes #30741, closes #30577
This is related to #29500 and #28898. This commit removes the ability
to disable http pipelining. After this commit, any elasticsearch node
will support pipelined requests from a client. Additionally, it extracts
some of the http pipelining work to the server module. This extracted
work is used to implement pipelining for the nio plugin.
=== Char Group Tokenizer
The `char_group` tokenizer breaks text into terms whenever it encounters a
character which is in a defined set. It is mostly useful for cases where a
simple custom tokenization is desired, and the overhead of use of the
<<analysis-pattern-tokenizer, `pattern` tokenizer>> is not acceptable.
=== Configuration
The `char_group` tokenizer accepts one parameter:
`tokenize_on_chars`::
A string containing a list of characters to tokenize the string on. Whenever a
character from this list is encountered, a new token is started. Also supports
escaped values like `\\n` and `\\f`, and in addition `\\s` to represent
whitespace, `\\d` to represent digits and `\\w` to represent letters. Defaults
to an empty list.
=== Example output
```The 2 QUICK Brown-Foxes jumped over the lazy dog's bone for $2```
When the configuration `\\s-:<>` is used for `tokenize_on_chars`, the above
sentence would produce the following terms:
```[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone, for, $2 ]```
The getDate() and getDates() existed prior to 5.x on long fields in
scripting. In 5.x, a new Date type for ScriptDocValues was added. The
getDate() and getDates() methods were left on long fields and added to date
fields to ease the transition. This commit removes those methods for
7.0.
The camel case name `nGram` should be removed in favour of `ngram` and
similar for `edgeNGram` and `edge_ngram`. Before removal, we need to
deprecate the camel case names first. This change adds deprecation
warnings for indices with versions 6.4.0 and higher and logs deprecation
warnings.
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points. It is the scripted version of MovingAvg pipeline agg.
Through custom script contexts, we expose a number of convenience methods:
- MovingFunctions.max()
- MovingFunctions.min()
- MovingFunctions.sum()
- MovingFunctions.unweightedAvg()
- MovingFunctions.linearWeightedAvg()
- MovingFunctions.ewma()
- MovingFunctions.holt()
- MovingFunctions.holtWinters()
- MovingFunctions.stdDev()
The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
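For instance, applying one of the provided functions over a 10-bucket window (assuming the aggregation is registered as `moving_fn`; index and field names are made up):
```
POST /_search
{
  "size": 0,
  "aggs": {
    "by_day": {
      "date_histogram": { "field": "timestamp", "interval": "1d" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_movfn": {
          "moving_fn": {
            "buckets_path": "the_sum",
            "window": 10,
            "script": "MovingFunctions.unweightedAvg(values)"
          }
        }
      }
    }
  }
}
```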
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. Cors,
pipelining, and read timeouts will need to be added in future PRs.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way, because
sometimes they might be behind one shard and sometimes behind two,
we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
Currently the ranking evaluation API accepts the full query syntax for
the queries specified in the evaluation set and executes them via multi
search. This potentially runs costly aggregations and suggestions too.
This change adds checks that forbid using aggregations, suggesters,
highlighters and the explain and profile options in the queries that are
run as part of the ranking evaluation since they are irrelevant in the
context of this API.
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
Left the keyword tokenizer factory in the server module, because
normalizers directly depend on it. This should be addressed in a
follow-up change.
Relates to #23658
With this commit we determine the maximum number of buffers that Netty keeps
while accumulating one HTTP request based on the maximum content length (default
1500 bytes, overridable with the system property `es.net.mtu`). Previously, we
kept the default value of 1024 which is too small for bulk requests which leads
to unnecessary copies of byte buffers internally.
Relates #29448
This folds the `:qa:smoke-test-reindex-with-all-modules` project into
`:modules:reindex` by declaring the reindex's integration testing
cluster requires the `parent-join` and `lang-painless` plugins and then
moving all of the integration tests that depended on parent-join and
painless into reindex.
It saves us one cluster start up during the build at the cost of a
little of the reindex module's "purity". Since the reindex module *does*
have unit tests that test scripting without painless I'm fairly ok with
that.
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
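A minimal sketch of the new criterion, assuming retry should trigger on HTTP 429 (TOO_MANY_REQUESTS):
```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.rest.RestStatus;

// Decide on retry from the returned status code rather than the exception type;
// the status survives the parse/rebuild round trip in the high-level REST client.
static boolean shouldRetry(BulkItemResponse item) {
    return item.isFailed() && item.getFailure().getStatus() == RestStatus.TOO_MANY_REQUESTS;
}
```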
Closes #28885
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
Adds two new methods to `RestClient` that take a `Request` object. These
methods will allow us to add more per-request customizable options
without creating more and more and more overloads of the `performRequest`
and `performRequestAsync` methods. These new methods look like:
```
Response performRequest(Request request)
```
and
```
void performRequestAsync(Request request, ResponseListener responseListener)
```
This change doesn't add any actual features but enables adding things like
per request timeouts and per request node selectors. This change *does*
rework the `HighLevelRestClient` and its tests to use these new `Request`
objects and it does update the docs.
We disable the reindex-from-old tests if we're running on windows or in
a directory that contains a space. This adds a warning to the logs when
we do that so that you can tell that it happened. This will be nice to
have when looking at CI and will be a hint to anyone developing locally.
This *mostly* silences `javadoc`'s warning about defaulting to
generating html4 files by enabling generating html5 file for the
projects for which that works. It didn't work in a half dozen projects,
about half of which I've fixed in this PR, entirely by replacing
`<tt>thing</tt>` with `{@code thing}`.
There are a few remaining projects that contain javadoc with invalid
html5. I'll fix those projects in a followup.
We *think* that #28600 is caused by warnings not being collected during
one of the fan out phases of search but we're not 100% sure how this is
happening. This commit drops the number of shards used for the test to 1
so there *isn't* a fan out phase. If this makes the issue go away we'll
have more information.
This folds the `:qa:reindex-from-old` project into the `:modules:reindex`
project. This should speed up the build marginally by removing a single
cluster start up at the cost of having to wait for old versions of
Elasticsearch to start up when checking reindex's integration tests.
Those don't take that long so this feels worth it.
The ranking evaluation requests so far were not tested against aliases,
but they should run regardless of whether the targeted index is a real index or
an alias. This change adds cases for this to the integration and rest
tests.
The camel case name `htmlStrip` should be removed in favour of `html_strip`, but
we need to deprecate it first. This change adds deprecation warnings for indices
with version starting with 6.3.0 and logs deprecation warnings in these cases.
Allow high level java rest client to access details of the metric
calculation by making them accessible across packages. Also renaming the
inner `Breakdown` classes of the evaluation metrics to `Detail` to
better communicate their use.
This commit renames the bulk thread pool to the write thread pool. This
is to better reflect the fact that the underlying thread pool is used to
execute any document write request (single-document index/delete/update
requests, and bulk requests).
With this change, we add support for fallback settings
thread_pool.bulk.* which will be supported until 7.0.0.
We also add a system property so that the display name of the thread
pool remains as "bulk" if needed to avoid breaking users.
Added an api that allows an arbitrary script to be executed and a result to be returned.
```
POST /_scripts/painless/_execute
{
  "script": {
    "source": "params.var1 / params.var2",
    "params": {
      "var1": 1,
      "var2": 1
    }
  }
}
```
Relates to #27875
This allows the grammar to determine when and what delimiters statements will use by
splitting up the statements into regular statements and delimited statements, those that do
not require a delimiter versus those that do. This allows consumers of the statements to
determine what delimiters the statements will use so that in certain cases semicolons are
not necessary like when there's a closing right bracket.
This change removes the need for semicolon insertion in the lexer, simplifying the existing
lexer quite a bit. It also ensures that there isn't a need to track semicolons being inserted
into places that aren't necessary such as array initializers.