Painless can cast anything into the magic type `def` but it
really shouldn't try to cast **nothing** into `def`. That causes
the byte code generation library to freak out a little.
Closes #22908
This test was using the initial count of slices instead of the count
of unfinished slices to pick the expected throttle. Unfortunately,
due to race conditions the actual rethrottle count is between the
two. So we weaken the assertion from "the new throttle is exactly X"
to "the new throttle is between X and Y (inclusive)".
This is related to #22116. Core no longer needs `SocketPermission`
`connect`.
This permission is relegated to these modules/plugins:
- transport-netty4 module
- reindex module
- repository-url module
- discovery-azure-classic plugin
- discovery-ec2 plugin
- discovery-gce plugin
- repository-azure plugin
- repository-gcs plugin
- repository-hdfs plugin
- repository-s3 plugin
And for tests:
- mocksocket jar
- rest client
- httpcore-nio jar
- httpasyncclient jar
Versions of Elasticsearch prior to 2.0 would return a scroll id
even with the last scroll response. They'd then automatically
clear the scroll because it is empty. When terminating, reindex
will attempt to clear the last scroll it received, regardless of
the remote version. This quiets the warning when the scroll cannot
be cleared for versions before 2.0.
Closes #22937
This commit upgrades the checkstyle configuration from version 5.9 to
version 7.5, the latest version as of today. The main enhancement
obtained via this upgrade is better detection of redundant modifiers.
Relates #22960
`UpdateByQueryWhileModifyingTests#testUpdateWhileReindexing`
runs update-by-query and concurrently updates, asserting that
the update-by-query never reverts any changes made by the update.
It is a smoke test for concurrent updates.
Now, it expects to hit a certain number of version conflicts
during the updates. This is normal as it is racing the
update-by-query. We have a maximum number of failures we
expect (10) and I'd never seen us come close until
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+5.x+multijob-unix-compatibility/os=sles/495/console
This bumps the max failures from 10 to 50 and improves
logging a bit. If we continue to see this failure then we have
some other issue.
Closes #22938
This change adds a strict mode for xcontent parsing on the rest layer. The strict mode will be off by default for 5.x and in a separate commit will be enabled by default for 6.0. The strict mode, which can be enabled by setting `http.content_type.required: true` in 5.x, will require that all incoming rest requests have a valid and supported content type header before the request is dispatched. In the non-strict mode, the Content-Type header will be inspected and if it is not present or not valid, we will continue with auto detection of content like we have done previously.
The content type header is parsed to the matching XContentType value with the only exception being for plain text requests. This value is then passed on with the content bytes so that we can reduce the number of places where we need to auto-detect the content type.
As part of this, many transport requests and builders were updated to provide methods that
accepted the XContentType along with the bytes and the methods that would rely on auto-detection have been deprecated.
In the non-strict mode, deprecation warnings are issued whenever a request with body doesn't provide the Content-Type header.
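For illustration, the non-strict decision roughly follows this shape (a hypothetical Java sketch with illustrative names, not the actual `RestController` code):
```
public class ContentTypeCheck {
    static void checkContentType(String contentTypeHeader, boolean required) {
        // `required` corresponds to `http.content_type.required: true`
        boolean supported = contentTypeHeader != null
                && (contentTypeHeader.startsWith("application/json")
                        || contentTypeHeader.startsWith("application/smile")
                        || contentTypeHeader.startsWith("application/yaml")
                        || contentTypeHeader.startsWith("application/cbor"));
        if (!supported) {
            if (required) {
                throw new IllegalArgumentException(
                        "Content-Type header [" + contentTypeHeader + "] is not supported");
            }
            // non-strict mode: fall back to content auto-detection and issue
            // a deprecation warning as described above
        }
    }
}
```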
See #19388
In 5.2 we stopped sending the source parameter if the user didn't
specify it. This was a mistake as versions before 2.0 look like
they don't always include the `_source`. This is because reindex
requests some metadata fields. Anyway, now we say `"_source": true`
if there isn't a `_source` configured in the reindex request.
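So the search request sent to the remote now looks something like this (an illustrative sketch; the index name and query are placeholders):
```
POST /source_index/_search?scroll=5m
{
  "query": { "match_all": {} },
  "_source": true
}
```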
Closes #22893
We were incorrectly resolving qualified method references at run
time when invoked on `def`. This led to errors like
`The struct with name [org] has not been defined.` when attempting
```
doc.date.dates.stream().map(
org.joda.time.ReadableDateTime::centuryOfEra
).collect(Collectors.toList())
```
Implemented by wrapping an array of reused `MutableDateTime`s that
we grow when needed. The `MutableDateTime`s are reused when we
move to the next document.
Also improves the error message returned when attempting to modify
the `ScriptDocValues`, removes a couple of allocations, and documents
that the date functions are available in Painless.
Relates to #22162
Currently, stored scripts use a namespace of (lang, id) to be put, get, deleted, and executed. This is not necessary since the lang is stored with the stored script. A user should only have to specify an id to use a stored script. This change makes that possible while keeping backwards compatibility with the previous namespace of (lang, id). Anywhere the previous namespace is used will log deprecation warnings.
The new behavior is the following:
When a user specifies a stored script, that script will be stored under both the new namespace and old namespace.
Take for example script 'A' with lang 'L0' and data 'D0'. If we add script 'A' to the empty set, the scripts map will be ["A" -> D0, "A#L0" -> D0]. If a script 'A' with lang 'L1' and data 'D1' is then added, the scripts map will be ["A" -> D1, "A#L1" -> D1, "A#L0" -> D0].
When a user deletes a stored script, that script will be deleted from both the new namespace (if it exists) and the old namespace.
Take for example a scripts map with {"A" -> D1, "A#L1" -> D1, "A#L0" -> D0}. If a script specified by id 'A' and a null lang is removed, the scripts map will be {"A#L0" -> D0}. To remove the final script, the deprecated namespace must be used, so an id 'A' and lang 'L0' would need to be specified.
When a user gets/executes a stored script, if the new namespace is used then the script will be retrieved/executed using only 'id', and if the old namespace is used then the script will be retrieved/executed using 'id' and 'lang'.
This moves the building blocks for delete by query into core. This
should enable two things:
1. Plugins other than reindex to implement "bulk by scroll" style
operations.
2. Plugins to directly call delete by query. Those plugins should
be careful to make sure that task cancellation still works, but
this should be possible.
Notes:
1. I've mostly just moved classes and moved test methods around.
2. I haven't been super careful about cohesion between these core
classes and reindex. They are quite interconnected because I wanted
to make the change as mechanical as possible.
Closes #22616
Adds "Appending B. Painless API Reference", a reference of all classes
and methods available from Painless. Removes links to java packages
because they contain methods that we don't expose and don't contain
methods that we do expose (the ones in Augmentation). Instead this
generates a list of every class and every exposed method using the same
type information available to the
interpreter/compiler/whatever-we-call-it. From there you can jump to
the relevant docs.
Right now you build all the asciidoc files by running
```
gradle generatePainlessApi
```
These files are expected to be committed because we build the docs
without running `gradle`.
Also changes the output of `Debug.explain` so that it is easy to
search for the class in the generated reference documentation.
You can also run it in an IDE safely if you pass the path to the
directory in which to generate the docs as the first parameter. It'll
blow away the entire directory and recreate it from scratch, so be careful.
And then you can build the docs by running something like:
```
../docs/build_docs.pl --out ../built_docs/ --doc docs/reference/index.asciidoc --open
```
That is, if you have checked out https://github.com/elastic/docs in
`../docs`. Wait a minute or two and your browser will pop open with
all of Elasticsearch's reference documentation. If you go to
`http://localhost:8000/painless-api-reference.html` you can see this
list. Or you can get there by following the links to `Modules` and
`Scripting` and `Painless` and then clicking the link in the paragraphs
below titled `Appendix B. Painless API Reference`.
I like having these in asciidoc because we can deep link to them from the
rest of the guide with constructs like
`<<painless-api-reference-Object-hashCode-0>>` and
`<<painless-api-reference->>` and we get link checking. Then the only
brittle link maintenance bit is the link generation for javadoc. Which
sucks. But I think it is important that we link to the methods directly
so they are easy to find.
Relates to #22720
This is related to #22116. URLRepository requires SocketPermission
connect. This commit introduces a new module called "repository-url"
where URLRepository will reside. With the new module, permissions can
be removed from core.
Previously, the DateProcessor constructed its joda pattern formatter during processor
construction. This led to newly ingested documents being defaulted to
the year that the pipeline was constructed, not that of processing.
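The bug class is easy to reproduce with joda-time directly; a minimal sketch (the processor's actual code differs, and the pattern and years here are illustrative):
```
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class DefaultYearBug {
    public static void main(String[] args) {
        // Built once at pipeline-construction time: the default year is frozen.
        DateTimeFormatter frozen = DateTimeFormat.forPattern("dd/MM").withDefaultYear(2016);
        System.out.println(frozen.parseDateTime("25/12").getYear()); // always 2016

        // The fix: resolve the default year when each document is processed.
        DateTimeFormatter perDocument = DateTimeFormat.forPattern("dd/MM")
                .withDefaultYear(DateTime.now().getYear());
        System.out.println(perDocument.parseDateTime("25/12").getYear()); // current year
    }
}
```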
Fixes #22547.
This adds the necessary `AuthCache` needed to support preemptive authorization. By adding every host to the cache, the automatically added `RequestAuthCache` interceptor will add credentials on the first pass rather than waiting to do it after _each_ anonymous request is rejected (thus always sending everything twice when basic auth is required).
move "es." internal headers to separate metadata set in ElasticsearchException and stop returning them as response headers
Closes #17593
* [TEST] remove ESExceptionTests, move its methods to ElasticsearchExceptionTests or ExceptionSerializationTests
* Add top hits collapsing to search request
The field collapsing is done with a custom top docs collector that "collapses" search hits with the same field value.
The distributed aspect is resolved using the two passes that the regular search uses. The first pass "collapses" the top hits on each shard, then the coordinating node merges/collapses the top hits from each shard.
```
GET _search
{
  "collapse": {
    "field": "category"
  }
}
```
This change also adds an ExpandCollapseSearchResponseListener that intercepts the search response and expands collapsed hits using the `CollapseBuilder#innerHit` options.
The retrieval of each inner_hits is done by sending a query to all shards filtered by the collapse key.
```
GET _search
{
  "collapse": {
    "field": "category",
    "inner_hits": {
      "size": 2
    }
  }
}
```
This commit adds a SpecialPermission constant and uses that constant
as opposed to introducing new instances everywhere.
Additionally, this commit introduces a single static method to check that
the current code has permission. This avoids all the duplicated access
blocks that exist currently.
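The resulting call-site pattern looks roughly like this self-contained Java sketch (a stand-in mirroring the description above, not the actual ES source):
```
import java.security.BasicPermission;

public class SpecialPermission extends BasicPermission {
    public static final SpecialPermission INSTANCE = new SpecialPermission();

    public SpecialPermission() {
        super("*");
    }

    /** The single static helper that replaces the duplicated access blocks. */
    public static void check() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            sm.checkPermission(INSTANCE);
        }
    }
}
```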
* Upgrade to Lucene 6.4.0
`ValueSource`s are now converted to `DoubleValuesSource`s using the Lucene adapter made for the migration to the new API in 6.4.0.
There are presently 7 ctor args used in any rest handlers:
* `Settings`: Every handler uses it to initialize a logger and
some other strange things.
* `RestController`: Every handler registers itself with it.
* `ClusterSettings`: Used by `RestClusterGetSettingsAction` to
render the default values for cluster settings.
* `IndexScopedSettings`: Used by `RestGetSettingsAction` to get
the default values for index settings.
* `SettingsFilter`: Used by a few handlers to filter returned
settings so we don't expose stuff like passwords.
* `IndexNameExpressionResolver`: Used by `_cat/indices` to
filter the list of indices.
* `Supplier<DiscoveryNodes>`: Used to enrich the response
by handlers that list tasks.
We probably want to reduce these arguments over time but
switching construction away from guice gives us tighter
control over the list of available arguments.
These parameters are passed to plugins using
`ActionPlugin#initRestHandlers` which is expected to build and
return the handlers immediately. This felt simpler than
returning a reference to the ctors given all the different
possible args.
Breaks java plugins by moving rest handlers off of guice.
This is related to #22116. Core no longer needs SocketPermission
accept. This permission is relegated to the transport-netty4 module
and (for tests) to the mocksocket jar.
This commit adds a MessyRestTestPlugin to the gradle build. It extends
StandaloneRestTestPlugin. The main piece of functionality that it adds
is to copy plugin-metadata from dependencies into the
generated-resources for the current test source. This is necessary to
ensure that permissions for dependencies are applied when running the
tests.
A current limitation is that the permissions are applied differently
than in the distribution sources. When permissions are granted to all
dependencies for a module or plugin, the permissions are granted to all
dependencies on the classpath for tests besides a few hardcoded
exclusions:
- es core
- es test framework
- lucene test framework
- randomized runner
- junit library
We don't want to expose `String#getBytes` which is required for
`Base64.getEncoder().encode` to work because we're worried about
character sets. This adds `encodeBase64` and `decodeBase64`
methods to `String` in Painless that are duals of one another
such that:
`someString == someString.encodeBase64().decodeBase64()`.
Both methods work with the UTF-8 encoding of the string.
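In plain Java terms the semantics are equivalent to this UTF-8 round trip (a sketch of the behavior, not the Painless whitelist code):
```
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        String original = "some string";
        // encodeBase64: base64 of the UTF-8 bytes of the string
        String encoded = Base64.getEncoder()
                .encodeToString(original.getBytes(StandardCharsets.UTF_8));
        // decodeBase64: decode the base64, then read the bytes back as UTF-8
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(original.equals(decoded)); // true
    }
}
```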
Closes #22648
Everything that extended `AbstractAsyncBulkByScrollAction` also
extended `AbstractAsyncBulkIndexByScrollAction` so this removes
`AbstractAsyncBulkIndexByScrollAction`, merging it into
`AbstractAsyncBulkByScrollAction`.
Changes the error message when `action.auto_create_index` or
`index.mapper.dynamic` forbids automatic creation of an index
from `no such index` to one of:
* `no such index and [action.auto_create_index] is [false]`
* `no such index and [index.mapper.dynamic] is [false]`
* `no such index and [action.auto_create_index] contains [-<pattern>] which forbids automatic creation of the index`
* `no such index and [action.auto_create_index] ([all patterns]) doesn't match`
This should make it more clear *why* there is `no such index`.
Closes #22435
Today we do not preserve response headers if they are present on a transport protocol
response. While preserving these headers is not always desired, in most cases we
should pass on these headers to have consistent results for deprecation headers etc.
Yet, this hasn't been much of a problem since most deprecations are detected early,
i.e. on the coordinating node, such that this bug wasn't uncovered until #22647.
This commit allows optionally preserving headers when a context is restored and also streamlines
the context restore, since it frequently leaked into the caller's thread context when the caller's
context wasn't restored again.
Instead of forcing each task to register all nodes where its children are running, this commit runs cancellation on all nodes. The task cancellation operation doesn't run too frequently, so this optimization doesn't seem to be worth additional complexity of the interface.
Previously, certain settings that could take multiple comma delimited
values would pick up incorrect values for all entries but the first if
each comma separated value was followed by a whitespace character. For
example, the multi-value "A,B,C" would be correctly parsed as
["A", "B", "C"] but the multi-value "A, B, C" would be incorrectly parsed
as ["A", " B", " C"].
This commit allows a comma separated list to have whitespace characters
after each entry. The specific settings that were affected by this are:
cluster.routing.allocation.awareness.attributes
index.routing.allocation.require.*
index.routing.allocation.include.*
index.routing.allocation.exclude.*
cluster.routing.allocation.require.*
cluster.routing.allocation.include.*
cluster.routing.allocation.exclude.*
http.cors.allow-methods
http.cors.allow-headers
For the allocation filtering related settings, this commit also provides
validation of each specified entry if the filtering is done by _ip,
_host_ip, or _publish_ip, to ensure that each entry is a valid IP
address.
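The parsing fix itself boils down to trimming each entry; a minimal standalone illustration in Java:
```
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CommaListParsing {
    public static void main(String[] args) {
        String value = "A, B, C";
        // Trimming each entry mirrors the behavior change described above.
        List<String> parsed = Arrays.stream(value.split(","))
                .map(String::trim)
                .collect(Collectors.toList());
        System.out.println(parsed); // [A, B, C] rather than [A,  B,  C]
    }
}
```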
Closes #22297
This commit tries to simplify the way ElasticsearchExceptions are rendered to xcontent. It adds some documentation and renames and merges some methods. Current behavior is preserved; the goal is to be more readable and centralize everything in the ElasticsearchException class.
Today we have quite some abstractions that are essentially providing a simple
dispatch method to the plugins defining a `HttpServerTransport`. This commit
removes `HttpServer` and `HttpServerAdaptor` and introduces a simple `Dispatcher` functional
interface that delegates to `RestController` by default.
Relates to #18482
The IndexingOperationListener interface did not provide any
information about the shard id when a document was indexed.
This commit adds the shard id as the first parameter to all methods
in the IndexingOperationListener.
This is related to #22116. netty channels require socket `connect` and
`accept` privileges. Netty does not currently wrap these operations
with `doPrivileged` blocks. These changes extend the netty channels
and wrap calls to the relevant super methods in doPrivileged blocks.
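The wrapping pattern looks like this minimal Java sketch (illustrative, not the actual extended channel classes):
```
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

public class PrivilegedConnect {
    static void connect(Socket socket, InetSocketAddress address) throws IOException {
        try {
            AccessController.doPrivileged((PrivilegedExceptionAction<Void>) () -> {
                // requires SocketPermission "connect" granted to this code's domain
                socket.connect(address);
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```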
Adds a message about how the remote is unlikely to be Elasticsearch.
This isn't as good as including the whole message from the remote but
we can't do that because we are stream parsing it and we don't want
to mark the whole request.
Closes #22330
Moves fetching the local node id into `NodeClient` which is a
fairly useful place to put it so you can generate task ids from
`NodeClient#executeLocally`.
It is no longer needed. It used to contain a lot of strings
used by serialization but those have since been removed. Now
it is just another thing to pass around that we don't really
need.
Reindex-from-remote had a race when it tried to clear the scroll. It
first starts the request to clear the scroll and then submits a task
to the generic threadpool to shutdown the client. These two things
race and, in my experience, closing the scroll generally loses. That
means that most of the time reindex-from-remote isn't clearing the
scrolls that it uses. This isn't the end of the world because we
flush old scroll contexts after a while but this isn't great.
Noticed while experimenting with #22514.
Reindex-from-remote was accepting source filtering in the request
but ignoring it and setting `_source=true` on the search URI. This
fixes the filtering so it is piped through to the remote node and
adds tests for that.
Closes #22507
Removes `AggregatorParsers`, replacing all of its functionality with
`XContentParser#namedObject`.
This is the third bit of payoff from #22003, one less thing to pass
around the entire application.
If the remote doesn't return a content type then reindex
tried to guess the content-type. This didn't work most
of the time and produced a rather useless error message.
Given that Elasticsearch always returns the content-type
we are dropping content-type detection in favor of just
failing the request if the remote didn't return a content-type.
Closes #22329
1. Escape sequences weren't working. For example `\\` is now correctly
interpreted as `\` instead of `\\`. Same with `\'` being `'` and
`\"` being `"`.
2. `'` delimited strings weren't allowed to contain `"`s but it looked
like they were intended to support it. Now they do.
3. Improves the error message when the script contains an invalid
escape sequence inside a string to include a list of the valid
escape sequences.
Closes #22372
This integrates the mocksocket jar with elasticsearch tests. Mocksocket wraps actions requiring SocketPermissions in doPrivileged blocks. This will eventually allow SocketPermissions to be assigned to the mocksocket jar as opposed to the entire elasticsearch codebase.
We previously named the thread using a frame from the stack trace, but
this was removed to simplify the code here. However, the comment
explaining this was left behind and this commit cleans that up.
* Remove a checked exception, replacing it with `ParsingException`.
* Remove all Parser classes for the yaml sections, replacing them with static methods.
* Remove `ClientYamlTestFragmentParser`. Isn't used any more.
* Remove `ClientYamlTestSuiteParseContext`, replacing it with some static utility methods.
I did not rewrite the parsers using `ObjectParser` because I don't think it is worth it right now.
As the translog evolves towards a full operations log as part of the
sequence numbers push, there is a need for the translog to be able to
represent operations for which a sequence number was assigned, but the
operation did not mutate the index. Examples of how this can arise are
operations that fail after the sequence number is assigned, and gaps in
this history that arise when an operation is assigned a sequence number
but the operation never completed (e.g., a node crash). It is important
that these operations appear in the history so that they can be
replicated and replayed during recovery as otherwise the history will be
incomplete and local checkpoints will not be able to advance. This
commit introduces a no-op to the translog to set the stage for these
efforts.
Relates #22291
Introduces `XContentParser#namedObject` which works a little like
`StreamInput#readNamedWriteable`: on startup components register
parsers under names and a superclass. At runtime we look up the
parser and call it to parse the object.
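The lookup idea, reduced to a generic standalone Java sketch (illustrative only, not the actual `NamedXContentRegistry`/`XContentParser` code):
```
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class NamedObjectRegistry {
    // (category superclass, name) -> parser, registered on startup
    private final Map<Class<?>, Map<String, Function<Object, ?>>> parsers = new HashMap<>();

    public <T> void register(Class<T> category, String name, Function<Object, T> parser) {
        parsers.computeIfAbsent(category, k -> new HashMap<>()).put(name, parser);
    }

    @SuppressWarnings("unchecked")
    public <T> T parseNamedObject(Class<T> category, String name, Object parserContext) {
        Function<Object, ?> parser =
                parsers.getOrDefault(category, Collections.emptyMap()).get(name);
        if (parser == null) {
            throw new IllegalArgumentException(
                    "unknown [" + category.getSimpleName() + "] named [" + name + "]");
        }
        return (T) parser.apply(parserContext);
    }
}
```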
Right now the parsers take a context object they use to help with
the parsing but I hope to be able to eliminate the need for this
context as most what it is used for at this point is to move
around parser registries which should be replaced by this method
eventually. I make no effort to do so in this PR because it is
big enough already. This is meant to be a start down a road that
allows us to remove classes like `QueryParseContext`,
`AggregatorParsers`, `IndicesQueriesRegistry`, and
`ParseFieldRegistry`.
The goal here is to reduce the amount of plumbing required to
allow parsing pluggable things. With this you don't have to pass
registries all over the place. Instead you must pass a super
registry to fewer places and use it to wrap the reader. This is
the same tradeoff that we use for NamedWriteable and it allows
much, much simpler binary serialization. We think we want that
same thing for xcontent serialization.
The only parsing actually converted to this method is parsing
`ScoreFunctions` inside of `FunctionScoreQuery`. I chose this
because it is relatively self contained.
It looks like the exception reason can differ in different default
locales, so the build would fail in any non-English locale. This
switches the catch to the name of the exception which shouldn't
vary.
We are currently checking that no deprecation warnings are emitted in our query tests. That can be moved to ESTestCase (disabled in ESIntegTestCase) as it allows us to easily catch where our tests use deprecated features and assert on the expected warnings.
We return deprecation warnings as response headers, besides logging them. Strict parsing mode stayed around, but was only used in query tests, though we also introduced checks for deprecation warnings there that don't need strict parsing anymore (see #20993).
We can then safely remove support for strict parsing mode. The final goal is to remove the ParseFieldMatcher class, but there are many many users of it. This commit prepares the field for the removal, by deprecating ParseFieldMatcher and making it effectively not needed. Strict parsing is removed from ParseFieldMatcher, and strict parsing is replaced in tests where needed with deprecation warnings checks.
Note that the setting to enable strict parsing was never ported to the new settings infra, hence it cannot be set in production. It is really only used in our own tests.
Relates to #19552
This commit makes mapping updates atomic when multiple types in an index are updated. Mappings for an index are now applied in a single atomic operation, which also allows to optimize some of the cross-type updates and checks.
In #22094 we introduced a test-only setting to simulate transport
impls that don't support handshakes. This commit implements the same logic
without a setting.
The JSON processor has an optional field called "target_field".
If you don't specify target_field then target_field becomes what you specified as "field".
There isn't any way to add the fields to the root of a document. By
setting `add_to_root`, now serialized fields will be inserted into the
top-level fields of the ingest document.
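A pipeline using the new option might look like this (pipeline and field names are illustrative):
```
PUT _ingest/pipeline/parse-json
{
  "processors": [
    {
      "json": {
        "field": "raw_json",
        "add_to_root": true
      }
    }
  ]
}
```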
Closes #21898.
Today we initialize Netty in a static initializer. We trigger this
method via static initializers from Netty-related classes, but we can
trigger this method earlier than we do to ensure that Netty is
initialized how we want it to be.
Inline scripts defined in Ingest Pipelines are now compiled at creation time to preemptively catch errors on initialization of the pipeline.
Fixes #21842.
Low level handshake code doesn't handle situations gracefully if the connection
is concurrently closed or reset by peer. This commit adds the relevant code to
fail the handshake if the connection is closed.
Moves the last of the "easy" parser construction into
`RestRequest`, this time with a new method
`RestRequest#contentParser`. The rest of the production
code that builds `XContentParser` isn't "easy" because it is
exposed in the Transport Client API (a Builder) object.
The creation of the `ValuesSource` used to pass `DateTimeZone.UTC` as a time
zone all the time in case of empty fields in spite of the fact that all doc
value formats but the date one reject this parameter.
This commit centralizes the creation of the `ValuesSource` and adds unit tests
to it.
Closes #22009
With this commit we enable the Jackson feature 'STRICT_DUPLICATE_DETECTION'
by default. This ensures that JSON keys are always unique. While this has
a performance impact, benchmarking has indicated that the typical drop in
indexing throughput is around 1 - 2%.
As a last resort, we allow users to still disable strict duplicate checks
by setting `-Des.json.strict_duplicate_detection=false` which is
intentionally undocumented.
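The Jackson feature itself can be demonstrated standalone; duplicate keys now fail like this:
```
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

public class DuplicateKeyDemo {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        factory.enable(JsonParser.Feature.STRICT_DUPLICATE_DETECTION);
        try (JsonParser parser = factory.createParser("{\"field\":1,\"field\":2}")) {
            while (parser.nextToken() != null) {
                // throws JsonParseException ("Duplicate field 'field'") on the second key
            }
        }
    }
}
```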
Closes #19614
Grok was originally ignoring potential matches to named-capture groups
larger than one. For example, if you had two patterns containing the
same named field, but only the second pattern matched, it would fail to
pick this up.
This PR fixes this by exploring all potential places where a
named-capture was used and chooses the first one that matched.
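The idea generalizes to this standalone Java regex sketch (illustrative only; Grok's actual implementation sits on top of the joni library):
```
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FirstMatchingCapture {
    // Try every candidate pattern that defines the named group and keep
    // the first one that actually matched.
    static String extract(String input, String groupName, List<Pattern> candidates) {
        for (Pattern p : candidates) {
            Matcher m = p.matcher(input);
            if (m.matches()) {
                return m.group(groupName);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Pattern> patterns = Arrays.asList(
                Pattern.compile("(?<num>\\d+) apples"),
                Pattern.compile("(?<num>\\d+) oranges"));
        // Only the second pattern matches; the capture is still picked up.
        System.out.println(extract("7 oranges", "num", patterns)); // 7
    }
}
```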
Fixes #22117.
Today we rely on the version that the API user passes in together with the DiscoveryNode. This commit introduces a low level handshake where nodes exchange their version to be used with the transport protocol; it is executed every time a connection to a node is established. This, on the one hand, allows us to change the wire protocol based on the version we are talking to even without a full cluster restart. Today we would need to carry a BWC layer across major versions, but with a handshake we can rely on the fact that the latest version of the previous minor executes a handshake and uses the latest protocol version across all communication with the N+1 version nodes.
This change is still fully backwards compatible; a followup PR will remove the BWC in 6.0 once this has been back-ported to the 5.x branch.
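The negotiation itself reduces to a "speak the older of the two versions" rule; a generic sketch (illustrative, not the actual `TransportService` handshake):
```
public class WireVersion {
    // Both sides exchange their versions when a connection is established and
    // then serialize every message on that connection with the smaller one.
    static int negotiate(int localVersion, int remoteVersion) {
        return Math.min(localVersion, remoteVersion);
    }
}
```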
This class is just a wrapper around `SearchContext`, so let's use
`SearchContext` directly. The change is mechanical, except the
`ValuesSourceConfig` class, where I moved the logic to get a `ValuesSource`
given a config.
Our query DSL supports empty queries (`{}`), which have a different meaning depending on the query that holds them: either ignored, match_all or match_none. We deprecated the support for empty queries in 5.0, where we log a deprecation warning wherever they are used.
The way we supported it once we moved query parsing to the coordinating node was having an Optional<QueryBuilder> return type in all of our parse methods (called fromXContent). See #17624. The central place for this was QueryParseContext#parseInnerQueryBuilder. We can now remove all the optional return types and simply throw an exception whenever an empty query is found.
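For example, a request like this one, which previously only logged a deprecation warning, now fails with a parsing exception:
```
GET _search
{
  "query": {}
}
```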
Today we connect and publish the nodes connection before we execute a
handshake with the node we connect to. In the case of connecting to a node
that won't pass the handshake this connection is already `published` and other
code paths can use it. This commit decouples establishing the connection from publishing the
connection such that `TransportService` can do a handshake before actually connecting
and publishing the connection.
To get #22003 in cleanly we need to centralize as much `XContentParser` creation as possible into `RestRequest`. That'll mean we have to plumb the `NamedXContentRegistry` into fewer places.
This removes `RestAction.hasBody`, `RestAction.guessBodyContentType`, and `RestActions.getRestContent`, moving callers over to `RestRequest.hasContentOrSourceParam`, `RestRequest.contentOrSourceParam`, and `RestRequest.contentOrSourceParamParser` and `RestRequest.withContentOrSourceParamParserOrNull`. The idea is to use `withContentOrSourceParamParserOrNull` if you need to handle requests without any sort of body content and to use `contentOrSourceParamParser` otherwise.
I believe the vast majority of this PR to be purely mechanical but I know I've made the following behavioral change (I'll add more if I think of more):
* If you make a request to an endpoint that requires a request body and has cut over to the new APIs, then instead of getting `Failed to derive xcontent` you'll get `Body required`.
* Template parsing is now non-strict by default. This is important because we need to be able to deprecate things without requests failing.
If you try to close the rest client inside one of its callbacks then
it blocks itself. The thread pool switches the status to one that
requests a shutdown and then waits for the pool to shutdown. When
another thread attempts to honor the shutdown request it waits
for all the threads in the pool to finish what they are working on.
Thus thread a is waiting on thread b while thread b is waiting
on thread a. It isn't quite that simple, but it is close.
Relates to #22027
This is an attempt to start moving aggs parsing to `ObjectParser`. There is
still A LOT to do, but ObjectParser is way better than the way aggregations
parsing works today. For instance in most cases, we reject numbers that are
provided as strings, which we are supposed to accept since some client languages
(looking at you Perl) cannot make sure to use the appropriate types.
Relates to #22009
* Remove 2.0 prerelease version constants
This is a start to addressing #21887. This removes:
* pre 2.0 snapshot format support
* automatic units addition to cluster settings
* bwc check for delete by query in pre 2.0 indexes
This adds the `_primary_term` field internally to the mappings. This field is
populated with the current shard's primary term.
It is intended to be used for collision resolution when two document copies have
the same sequence id, therefore, doc_values for the field are stored but the
field itself is not indexed.
This also fixes the `_seq_no` field so that doc_values are retrievable (they
were previously stored but irretrievable) and changes the `stats` implementation
to more efficiently use the points API to retrieve the min/max instead of
iterating on each doc_value value. Additionally, even though we intend to be
able to search on the field, it was previously not searchable. This commit makes
it searchable.
There is no user-visible `_primary_term` field. Instead, the fields are
updated by calling:
```java
index.parsedDoc().updateSeqID(seqNum, primaryTerm);
```
This includes example methods in `Versions` and `Engine` for retrieving the
sequence id values from the index (see `Engine.getSequenceID`) that are only
used in unit tests. These will be extended/replaced by actual implementations
once we make use of sequence numbers as a conflict resolution measure.
Relates to #10708
Supersedes #21480
P.S. As a side effect of this commit, `SlowCompositeReaderWrapper` cannot be
used for documents that contain `_seq_no` because it is a Point value and SCRW
cannot wrap documents with points, so the tests have been updated to loop
through the `LeafReaderContext`s now instead.
This adds a fromXContent method and unit test to the HighlightField class so we
can parse it as part of a search response. This is part of the preparation for
parsing search responses on the client side.
`_update_by_query` supports specifying the `pipeline` to process the
documents as a url parameter but `_reindex` doesn't. It doesn't because
everything about the `_reindex` request that has to do with writing
the documents is grouped under the `dest` object in the request body.
This changes the error message from
`request [_reindex] contains unrecognized parameter: [pipeline]` to
`_reindex doesn't support [pipeline] as a query parameter. Specify it in the [dest] object instead.`
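For reference, the supported form places the pipeline under `dest` (index and pipeline names are illustrative):
```
POST _reindex
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest",
    "pipeline": "some_ingest_pipeline"
  }
}
```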
Action filters currently have the ability to filter both the request and
response. But the response side was not actually used. This change
removes support for filtering responses with action filters.
Changes the default socket and connection timeouts for the rest
client from 10 seconds to the more generous 30 seconds.
Defaults reindex-from-remote to those timeouts and make the
timeouts configurable like so:
```
POST _reindex
{
"source": {
"remote": {
"host": "http://otherhost:9200",
"socket_timeout": "1m",
"connect_timeout": "10s"
},
"index": "source",
"query": {
"match": {
"test": "data"
}
}
},
"dest": {
"index": "dest"
}
}
```
Closes #21707
I added an assertion to Netty4/Netty3Transport in 5.x that is not in
master yet. This commit ports the assertion to ensure we consumed all connections
in `connectToChannels`.
We don't use the test infra nor do we run the tests. They might all be
entirely out of date. We also have a different BWC test infra in-place.
This change removes all of the legacy infra.
Timeouts are global today across all connections. This commit allows specifying
a connection timeout per node such that, depending on the context, connections can
be established with different timeouts.
Relates to #19719
We currently treat every node equally when we establish connections to a node.
Yet, if we are not master eligible or can't hold any data there is no point in creating
a dedicated connection for sending the cluster state or running remote recoveries respectively.
The usage of STATE and RECOVERY connections on non-master and/or non-data nodes will result in an IllegalStateException.
For the record, I also had to remove the geo-hash cell and geo-distance range
queries to make the code compile. These queries already throw an exception in
all cases with 5.x indices, so that does not hurt any more.
I also had to rename all 2.x bwc indices from `index-${version}` to
`unsupported-${version}` to make `OldIndexBackwardCompatibilityIT`
happy.
SearchTemplateRequest to implement CompositeIndicesRequest
Given that SearchTemplateRequest effectively delegates to search when a search is being executed, it should implement the CompositeIndicesRequest interface. The subrequests method should return a single search request. When a search is not going to be executed, because we are in simulate mode, there are no inner requests, and there are no corresponding indices to that request either.
Closes #21747
Set lucene version to 6.4.0-snapshot-ec38570 and update all the sha1s/license
Fix invalid combo after upgrade in query_string query. split_on_whitespace=false is disallowed if auto_generate_phrase_queries=true
Adapt the expectations of some tests to the new format of the Lucene explain output
Lucene 6.2 added index and query support for numeric ranges. This commit adds a new RangeFieldMapper for indexing numeric (int, long, float, double) and date ranges and creating appropriate range and term queries. The design is similar to NumericFieldMapper in that it uses a RangeType enumerator for implementing the logic specific to each type. The following range types are supported by this field mapper: int_range, float_range, long_range, double_range, date_range.
Lucene does not provide a DocValue field specific to RangeField types so the RangeFieldMapper implements a CustomRangeDocValuesField for handling doc value support.
When executing a Range query over a Range field, the RangeQueryBuilder has been enhanced to accept a new relation parameter for defining the type of query as one of: WITHIN, CONTAINS, INTERSECTS. This provides support for finding all ranges that are related to a specific range in a desired way. As with other spatial queries, DISJOINT can be achieved as a MUST_NOT of an INTERSECTS query.
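As an illustrative sketch based on the types listed above (field and index names are placeholders), a mapping and a relation query might look like:
```
PUT range_index
{
  "mappings": {
    "doc": {
      "properties": {
        "attendee_count": {
          "type": "int_range"
        }
      }
    }
  }
}

GET range_index/_search
{
  "query": {
    "range": {
      "attendee_count": {
        "gte": 10,
        "lte": 20,
        "relation": "within"
      }
    }
  }
}
```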
The `Transport#connectToNodeLight` concept is confusing and not very flexible,
nor really testable at a unit-test level. This commit cleans up the code used
to connect to nodes and simplifies transport implementations to share more code.
This also allows connecting to nodes with custom profiles if needed; for instance,
future improvements can be added to connect to/from nodes that are non-data nodes without
dedicated bulk and recovery connections.
When Netty listens on a socket, it specifies the established connection
backlog for the socket. On Linux, Netty tries to read the system-wide
configuration for this from /proc/sys/net/core/somaxconn and falls back
to a default value when it can not read this value. This commit grants
Netty permission to read this file so that it can honor the system-wide
configuration for the connection backlog for sockets that it is
listening on. This also removes an obnoxious stack trace that appears
when Netty logging is set to debug logging.
Relates #21840
In the past we ran yaml tests against an internal cluster, which would get restarted after each test failure, hence the client objects needed to eventually be refreshed before each test. That is why we had the initClient method to re-initialize the YamlTestClient in the execution context. We ended up though re-initializing the client unconditionally, which is not needed.
Also, ESRestTestCase recreates the RestClient against the external cluster before each test, which is not needed given that nothing changes in the external cluster.
This commit removes the initClient method from the yaml tests execution context. The YamlTestClient can be eagerly created before the first yaml test runs and then re-used in subsequent tests. Also api calls to check for nodes versions etc. are moved out of YamlTestClient to ESClientYamlSuiteTestCase. Also the RestClient is now initialized in ESRestTestCase before the first test runs, and kept around afterwards as a static member.
Basically each subclass of ESRestTestCase will have its own RestClient instance, but the client will be shared across the different tests within the same class. The yaml test suite is just a special suite, composed of 600+ tests that are loaded from files, which will share the same client instance.
This change should speed tests up as well, as we don't recreate the RestClient before each single test, and we don't call _cat/nodes either before each single test.
This commit simplifies the handling of fatal errors on the network
layer. The simplification here is to remove the use of a
StringWriter/PrintWriter pair to format the stack trace, removing the
need for the method to declare that it throws a checked IOException.
If a bug occurs in painless compilation (not from a user, but from the
painless infrastructure), a VerifyError may be thrown when compiling the
broken generated class. This commit wraps VerifyErrors in
ScriptException so that useful information is returned to the user,
which can be passed on to the ES team for analysis.
This bug would cause a VerifyError when scripts using the === operator
were comparing a def type against a primitive type since the primitive
type wasn't being appropriately boxed.
Today when handling unreleased versions for backwards compatibility
support, we scatter version constants across the code base and add some
asserts to support removing these constants when the version in question
is actually released. This commit improves this situation, enabling us
to just add a single unreleased version constant that can be renamed
when the version is actually released. This should make maintenance of
these versions simpler.
Relates #21760
NOTE: The result of `?.` and `?:` can't be assigned to primitives. So
`int[] someArray = null; int l = someArray?.length` and
`int s = params.size ?: 100` don't work. Do
`def someArray = null; def l = someArray?.length` and
`def s = params.size ?: 100` instead.
Relates to #21748
* Scripting: Remove groovy scripting language
Groovy was deprecated in 5.0. This change removes it, along with the
legacy default language infrastructure in scripting.
You can use `Debug.explain(someObject)` in painless to throw an
`Error` that can't be caught by painless code and contains an
object's class. This is useful because painless's sandbox doesn't
allow you to call `someObject.getClass()`.
Closes #20263
When a fatal error is thrown on the network layer, such an error never
makes its way to the uncaught exception handler. This prevents the node
from being torn down if an out of memory error or other fatal error is
thrown while handling HTTP or transport traffic. This commit adds logic
to ensure that such errors bubble their way up to the uncaught exception
handler, even though Netty tries really hard to swallow everything.
Relates #21720
This should make debugging painless' analysis and code generation a
little easier.
The `toString` implementations mirror the AST somewhat, and look like
`(SSource (SReturn (ENumeric 1)))`.
Today we read a vint from the stream to allocate the size of an array up-front
before we start reading the values. This can be dangerous if for instance we read
from a corrupted stream or if some manipulated bytes are sent, for instance, from
an attacker or a fuzzer. In most cases we can apply some best effort and
validate the array size to be _sane_ by ensuring we can read at least N bytes
where N is the expected size of the array.
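A minimal sketch of such a best-effort check (a plain-int stand-in for the vint prefix, not the real `StreamInput` code):
```
import java.io.DataInputStream;
import java.io.IOException;

public class SaneArraySize {
    static int[] readIntArray(DataInputStream in, int bytesRemaining) throws IOException {
        int size = in.readInt(); // stand-in for the vint length prefix
        // Each element needs at least one byte, so a larger size can only
        // come from a corrupted or malicious stream.
        if (size < 0 || size > bytesRemaining) {
            throw new IOException("expected [" + size + "] elements but only ["
                    + bytesRemaining + "] bytes remain");
        }
        int[] values = new int[size];
        for (int i = 0; i < size; i++) {
            values[i] = in.readInt();
        }
        return values;
    }
}
```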
When Groovy detects a bug in its runtime because an internal assertion
was violated, it throws a GroovyBugError. This descends from
AssertionError and if it goes uncaught will land in the uncaught
exception handler and will not deliver any useful information to the
user. This commit wraps GroovyBugErrors in ScriptExceptions so that
useful information is returned to the user.
Implements a null coalescing operator in painless that looks like `?:`. This form was chosen to emulate Groovy's `?:` operator. It is different in that it only coalesces null values, instead of Groovy's `?:` operator which coalesces all falsy values. I believe that makes it the same as Kotlin's `?:` operator. In other languages this operator looks like `??` (C#) and `COALESCE` (SQL) and `:-` (bash).
This operator is lazy, meaning the right hand side is only evaluated at all if the left hand side is null.
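For example, in Painless (`params.max_length` is an illustrative parameter):
```
def maxLength = params.max_length ?: 100;
```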