The retry test has failed a couple of times in CI because it wasn't able
to cause any retries. Putting it in a bash `while` loop shows that it
eventually does fail that way. The seed "4F6477A9C999CA20" seems especially
good at failing to get retries. It doesn't fail every time, but it does more
often than most seeds.
This adds a retry to each test case, retrying a maximum of 10 times or
until it causes the retries. I've seen it fail to get retries 7 times
in a row but not go beyond that. Retrying doesn't seem to really hurt
the test runtime all that much. Most of the time is in the startup
cost.
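In sketch form, the per-case retry looks something like this (illustrative names; `runTestCase()` is a hypothetical stand-in for the actual test body, not the real code):
```java
import static org.junit.Assert.fail;

// Sketch: rerun the test body until it provokes at least one retry,
// giving up after 10 attempts. runTestCase() is a hypothetical helper
// that returns the number of retries the run caused.
void assertEventuallyCausesRetries() throws Exception {
    int maxAttempts = 10;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
        if (runTestCase() > 0) {
            return; // we provoked retries; the rest of the test can proceed
        }
    }
    fail("failed to provoke any retries in " + maxAttempts + " attempts");
}
```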
Failing CI build that triggered this:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/852/console
This change makes ES compile with java9 again (build 118).
* There are a handful of changes due to failures to determine types during compilation.
* The attachment plugins which use tika needed to have tika upgraded in order to pick up fixes there for java 9.
* azure discovery and s3 repository indirectly depend on jaxb, which is no longer in the default modules. They now add a jaxb dependency externally, and make JarHell allow for this package.
This removes the ScriptMode class entirely, which was an enum with two
options (ON and OFF) which essentially boiled down to true and false.
Now the boolean values are used instead.
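Illustratively (not the exact ES code), the shape of the change is:
```java
// Before: a two-valued enum that carried no more information than a boolean.
enum ScriptMode {
    ON,
    OFF
}

// After: callers carry the boolean directly.
class ScriptSettings {
    private final boolean inlineEnabled; // was: ScriptMode inlineMode

    ScriptSettings(boolean inlineEnabled) {
        this.inlineEnabled = inlineEnabled;
    }

    boolean inlineEnabled() {
        return inlineEnabled;
    }
}
```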
Before 5.0 it was required that the percolator queries were cached in jvm heap as Lucene queries for two reasons:
1) Performance. The percolator evaluated all percolator queries all the time. There was no pre-selection of queries that are likely to match, like we have today.
2) Updates made to percolator queries were visible in realtime. Today these changes are visible in near realtime, so updating no longer requires the percolator to have the queries in jvm heap.
So having the percolator queries in jvm heap via the percolator cache is now less attractive. Especially when there are many percolator queries, these queries can consume many GBs of jvm heap.
Removing the percolator cache does make the percolate query slower than it was in 5.0.0-alpha1 and alpha2, but it is still faster than in 2.x and before.
Please note: The maps inside the private singleton instance of Definition are no longer unmodifiable, but nothing from the outside can modify them! All private :-)
This uses the same backoff policy we use for bulk and just retries until
the request isn't rejected.
Instead of `{"retries": 12}` in the response to count retries this now
looks like `{"retries": {"bulk": 12", "search": 1}`.
Closes #18059
Closes #18385
Squashed commit of the following:
commit b2819df4d392d69b86e5c96d358eb03424e67e02
Author: Robert Muir <rmuir@apache.org>
Date: Tue May 17 09:15:47 2016 -0400
add note about tuple
commit 85fcac6a0d0674da24535121eab23e2c407d683f
Author: Robert Muir <rmuir@apache.org>
Date: Mon May 16 23:39:25 2016 -0400
painless: add method overloading based on arity
Previously multiple extensions could be provided; however, this can lead
to confusion with on-disk scripts (i.e., "foo.js" and "foo.javascript")
having different content. Only a single extension is now supported.
The only language that supported multiple extensions was the
JavaScript engine ("js" and "javascript"). It now only supports the
`.js` extension.
Relates to #10598
This removes all the mentions of the sandbox from the script engine
services and permissions model. This means that the following settings
are no longer supported:
```yaml
script.inline: sandbox
script.stored: sandbox
```
Instead, only a `true` or `false` value can be specified.
Since this would otherwise break the default-allow parameter for
languages like expressions, painless, and mustache, all script engines
have been updated to have individual settings, for instance:
```yaml
script.engine.groovy.inline: true
```
This would enable all inline scripts for Groovy (they can still be
overridden on a per-operation basis).
Expressions, Painless, and Mustache all default to `true` for inline,
file, and stored scripts to preserve the old scripting behavior.
Resolves #17114
We add support for explicitly excluding specific transport actions
from the request size limit check.
We currently exclude the following request types:
* MasterPingRequest
* PingRequest
Remove performance hack for accessing a document's fields, it's not needed.
Add support for accessing is-getter methods like List.isEmpty() as .empty.
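Conceptually, resolving such a shorthand works like the following reflection sketch (illustrative only; the real resolution lives in the Painless compiler's method tables):
```java
import java.lang.reflect.Method;

// Resolve a shorthand field read like ".empty" to a zero-argument getter,
// trying the get-prefix first and falling back to the is-prefix.
static Method resolveShorthand(Class<?> receiver, String name) throws NoSuchMethodException {
    String suffix = Character.toUpperCase(name.charAt(0)) + name.substring(1);
    try {
        return receiver.getMethod("get" + suffix); // e.g. getValue() for ".value"
    } catch (NoSuchMethodException e) {
        return receiver.getMethod("is" + suffix);  // e.g. List.isEmpty() for ".empty"
    }
}
```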
Closes #18201
Most of the current implementations of BaseNodesResponse (plural Nodes) ignore FailedNodeExceptions.
- This adds a helper function to TransportNodesAction to do the grouping
- Requires a non-null array of FailedNodeExceptions within the BaseNodesResponse constructor
- Reads/writes the array to output
- Also adds StreamInput and StreamOutput methods for generically reading and writing arrays
QueryBuilder has generics, but those are never used: all call sites use
`QueryBuilder<?>`. Only `AbstractQueryBuilder` needs generics so that the base
class can contain a default implementation for setters that returns `this`.
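The pattern that remains, sketched (simplified from the actual class hierarchy):
```java
// Only the abstract base needs a type parameter: the self type QB lets a
// fluent setter defined once return the concrete builder type.
interface QueryBuilder { // call sites no longer need QueryBuilder<?>
}

abstract class AbstractQueryBuilder<QB extends AbstractQueryBuilder<QB>> implements QueryBuilder {
    protected float boost = 1.0f;

    @SuppressWarnings("unchecked")
    public final QB boost(float boost) {
        this.boost = boost;
        return (QB) this; // typed as the concrete subclass, so chaining keeps its type
    }
}

class TermQueryBuilder extends AbstractQueryBuilder<TermQueryBuilder> {
    // inherits boost(float) returning TermQueryBuilder
}
```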
This gives better coverage and consistency with the scripting APIs, by
whitelisting the primary search scripting API classes and using them instead
of only Map and List methods.
For example, accessing fields can now be done with `.value` instead of `.0`
because `getValue()` is whitelisted. For now, access to a document's fields in
this way (loads) is fast-pathed in the code to avoid dynamic overhead.
Access to geo fields and geo distance functions is now supported.
TODO: date support (e.g. whitelist ReadableDateTime methods as a start)
TODO: improve docs (like expressions and groovy have for document's fields)
TODO: remove fast-path hack
Closes #18169
Squashed commit of the following:
commit ec9f24b2424891a7429bb4c0a03f9868cba0a213
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 17:59:37 2016 -0400
cutover to <Def> instead of <Object> here
commit 9edb1550438acd209733bc36f0d2e0aecf190ecb
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 17:03:02 2016 -0400
add fast-path for docvalues field loads
commit f8e38c0932fccc0cfa217516130ad61522e59fe5
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 16:47:31 2016 -0400
Painless: add fielddata accessors (.value/.values/.distance()/etc)
This commit introduces a handshake when initiating a light
connection. During this handshake, node information, cluster name, and
version are received from the target node of the connection. This
information can be used to immediately validate that the target node is
a member of the same cluster, and used to set the version on the
stream. This will allow us to extend APIs that are used during initial
cluster recovery without a major version change.
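In outline, the validation the handshake enables looks something like this (a sketch under assumed names, not the actual transport code):
```java
import java.io.IOException;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamOutput;

// Sketch: after the handshake we know the remote node's cluster name and
// version, so we can validate membership and pin the stream version.
void onHandshake(ClusterName remoteClusterName, Version remoteVersion,
                 ClusterName localClusterName, StreamOutput stream) throws IOException {
    if (localClusterName.equals(remoteClusterName) == false) {
        throw new IllegalStateException("remote node is not in cluster " + localClusterName);
    }
    // talk at the lower of the two versions so new fields can be added safely
    Version streamVersion = remoteVersion.before(Version.CURRENT) ? remoteVersion : Version.CURRENT;
    stream.setVersion(streamVersion);
}
```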
Relates #15971
All other values are errors.
Add java test for throttling. We had a REST test but it only ran against
one node so it didn't catch serialization errors.
Add Simple round trip test for rethrottle request
The query shard reset() method resets some internal state in the
query shard context, like clearing query names, the filter flag
or named queries. The problem with this method being public is
that it is currently (mis?)used for modifying an existing context
for recursive invocation, but contexts that have been reset
that way cannot be properly set back to their previous state.
This PR is a step towards removing reset() entirely by first making
it only be used internally in QueryShardContext. In places where
reset() was used we can either create new QueryShardContexts or
modify the existing context because it is discarded afterwards anyway.
Do this by creating a Client subclass that automatically assigns the
parentTask to all requests that come through it. Code that doesn't want
to set the parentTask can call `unwrap` on the Client to get the inner
client instance that doesn't set the parentTask. Reindex uses this for
its ClearScrollRequest so that the request will run properly after the
reindex request has been canceled.
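Sketched (generic bounds approximate, loosely mirroring a FilterClient-style wrapper):
```java
import org.elasticsearch.action.Action;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestBuilder;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.FilterClient;
import org.elasticsearch.tasks.TaskId;

// Every request that passes through this client gets the parent task id
// stamped on it before execution.
class ParentTaskAssigningClient extends FilterClient {
    private final TaskId parentTaskId;

    ParentTaskAssigningClient(Client in, TaskId parentTaskId) {
        super(in);
        this.parentTaskId = parentTaskId;
    }

    /** Callers that must not set a parent (like reindex's ClearScrollRequest) unwrap. */
    public Client unwrap() {
        return in();
    }

    @Override
    protected <Request extends ActionRequest, Response extends ActionResponse,
            Builder extends ActionRequestBuilder<Request, Response, Builder>> void doExecute(
            Action<Request, Response, Builder> action, Request request, ActionListener<Response> listener) {
        request.setParentTask(parentTaskId);
        super.doExecute(action, request, listener);
    }
}
```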
Now that the current uses of magical camelCase support have been
deprecated, we can remove these in master (sans remaining issues like
BulkRequest). This change removes camel case support from ParseField,
query types, analysis, and settings lookup.
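For example (field name illustrative):
```java
import org.elasticsearch.common.ParseField;

// Before this change ParseField also silently matched the camelCase variant
// of a declared name; now only the declared names are accepted.
ParseField BOOST_FACTOR = new ParseField("boost_factor");
// 2.x and earlier: both "boost_factor" and "boostFactor" matched.
// Now: only "boost_factor" matches.
```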
see #8988
Callers should explicitly handle parents - either using EMPTY_TASK_ID when
a parent isn't possible or piping parents from the TransportRequest when
possible.
This change cleans up a few things in QueryParseContext and QueryShardContext
that make it hard to reason about the state of these context objects and are
thus error prone and should be simplified.
Currently the parser that is used in QueryParseContext can be set and reset any
time from the outside, which makes reasoning about it hard. This change makes
the parser a mandatory constructor argument and removes the ability to later set
a different ParseFieldMatcher. If none is provided at construction time, the
one set inside the parser is used. If a ParseFieldMatcher is specified at
construction time, it is implicitly set on the parser that is being used.
The ParseFieldMatcher is only kept inside the parser.
Also, the QueryShardContext currently holds an inner QueryParseContext
(in the super class QueryRewriteContext), that is mainly used to hold the parser
and parseFieldMatcher. For that reason, the parser can also be reset, which leads
to the same problems as above. This change removes the QueryParseContext from
QueryRewriteContext and removes the ability to reset or retrieve it from the
QueryShardContext. Instead, `QueryRewriteContext#newParseContext(parser)` can be
used to create new parse contexts with the given parser from a shard context
when needed.
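In code, call sites move from resetting a shared context to something like this (a sketch; `parseInnerQueryBuilder()` stands in for whatever parsing happens next):
```java
// Sketch: instead of resetting shared state, create a throwaway parse context
// for the parser at hand.
QueryParseContext parseContext = queryShardContext.newParseContext(parser);
QueryBuilder query = parseContext.parseInnerQueryBuilder();
```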
Currently we are able to set a ParseFieldMatcher on XContentParsers,
mainly to conveniently carry it around to be available where the
actual parsing happens. This was just recently introduced together
with ObjectParser so that ObjectParser can make use of deprecation
logging and throwing errors while parsing.
This however is trappy because we create parsers in so many places in
the code and it is easy to forget setting the right ParseFieldMatcher.
Instead we should hold the ParseFieldMatcher only in the parse contexts
(e.g. QueryParseContext).
This PR removes the ParseFieldMatcher from XContentParser. ObjectParser
can still make use of it because we can make the otherwise unbounded
`context` type to extend an interface that makes sure contexts used in
ObjectParser can supply a ParseFieldMatcher. Contexts in ObjectParser
are now no longer optional, but it is sufficient to pass in a small
lambda expression in places where no other context is available.
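Sketched (the interface name is an assumption; a one-method interface is what makes the lambda trick work):
```java
import org.elasticsearch.common.ParseFieldMatcher;

// The ObjectParser context type is bounded by an interface that can supply
// the ParseFieldMatcher, instead of the matcher living on the parser itself.
interface ParseFieldMatcherSupplier {
    ParseFieldMatcher getParseFieldMatcher();
}

// Where no richer context exists, a lambda is all a call site needs:
// MyValue value = objectParser.parse(xContentParser, () -> ParseFieldMatcher.STRICT);
```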
Relates to #17417
This makes all numeric fields including `date`, `ip` and `token_count` use
points instead of the inverted index as a lookup structure. This is expected
to perform worse for exact queries, but faster for range queries. It also
requires less storage.
Notes about how the change works:
- Numeric mappers have been split into a legacy version that is essentially
the current mapper, and a new version that uses points, eg.
LegacyDateFieldMapper and DateFieldMapper.
- Since new and old fields have the same names, the decision about which one
to use is made based on the index creation version.
- If you try to force using a legacy field on a new index or a field that uses
points on an old index, you will get an exception.
- IP addresses now support IPv6 via Lucene's InetAddressPoint and store them
in SORTED_SET doc values using the same encoding (fixed length of 16 bytes
and sortable).
- The internal MappedFieldType that is stored by the new mappers does not have
any of the points-related properties set. Instead, it keeps setting the index
options when parsing the `index` property of mappings and does
`if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`
when parsing documents.
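To make the last point concrete, here is roughly what indexing one numeric value looks like with points (a sketch against the Lucene 6 APIs; field name illustrative):
```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.IndexOptions;
import org.elasticsearch.index.mapper.MappedFieldType;

// The point structure serves range queries; doc values serve sorting and
// aggregations; no postings (inverted index) are written at all.
void addFields(Document doc, MappedFieldType fieldType, long value) {
    if (fieldType.indexOptions() != IndexOptions.NONE) {
        doc.add(new LongPoint("my_long_field", value));
    }
    doc.add(new SortedNumericDocValuesField("my_long_field", value));
}
```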
Known issues that won't fix:
- You can't use numeric fields in significant terms aggregations anymore since
this requires document frequencies, which points do not record.
- Term queries on numeric fields will now return constant scores instead of
giving better scores to the rare values.
Known issues that we could work around (in follow-up PRs, this one is too large
already):
- Range queries on `ip` addresses only work if both the lower and upper bounds
are inclusive (exclusive bounds are not exposed in Lucene). We could either
decide to implement it, or drop range support entirely and tell users to
query subnets using the CIDR notation instead.
- Since IP addresses now use a different representation for doc values,
aggregations will fail when running a terms aggregation on an ip field on a
list of indices that contains both pre-5.0 and 5.0 indices.
- The ip range aggregation does not work on the new ip field. We need to either
implement range aggs for SORTED_SET doc values or drop support for ip ranges
and tell users to use filters instead. #17700
Closes #16751
Closes #17007
Closes #11513
When we pass down both parser and QueryParseContext to a method, we cannot
make sure that the parser contained in the context and the parser that is
parsed as an argument have the same state. This removes the parser argument
from methods where we currently have both the parser and the parse context
as arguments and instead retrieves the parse from the context inside the
method.
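The shape of the change (signatures illustrative; `MyBuilder` and `parseWith` are stand-ins):
```java
// Before: two arguments whose states can silently diverge.
static MyBuilder fromXContent(XContentParser parser, QueryParseContext context) throws IOException {
    return parseWith(parser);
}

// After: one argument; the parser is always the one the context holds.
static MyBuilder fromXContent(QueryParseContext context) throws IOException {
    XContentParser parser = context.parser();
    return parseWith(parser);
}
```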
* Inner hits can now only be provided and prepared via setter in the nested, has_child and has_parent query.
* Also made `score_mode` a required constructor parameter.
* Moved has_child's min_children/max_children validation from doToQuery(...) to a setter.
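For example, with the new setters (builder and method names assumed from the 5.x API):
```java
import static org.elasticsearch.index.query.QueryBuilders.termQuery;

import org.apache.lucene.search.join.ScoreMode;
import org.elasticsearch.index.query.HasChildQueryBuilder;
import org.elasticsearch.index.query.InnerHitBuilder;
import org.elasticsearch.index.query.QueryBuilder;

// score_mode is now a required constructor argument; inner hits and the
// min/max children bounds are configured (and validated) via setters.
QueryBuilder query = new HasChildQueryBuilder("comment", termQuery("body", "elasticsearch"), ScoreMode.Max)
        .minMaxChildren(1, 10)
        .innerHit(new InnerHitBuilder());
```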
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueForSearch`. `valueForSearch` was used to post-process
values that come from stored fields (eg. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source, but only in the case of GET calls: it would
not be called when performing source filtering on search requests.
`valueForSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
In both cases, what elasticsearch is really interested in is whether the field
is an analyzed string field. So it can just check `tokenized()` instead.