These filters leak into highlighting and probably other places and cause
things like the type name to be highlighted when using
requireFieldMatch=false. We could add special hacks to keep them out of
highlighting, but it feels better to keep them out of any variable named
"originalQuery".
Closes #15689
This commit adds convenience methods to o.e.t.t.CapturingTransport
that enable capturing requests and clearing the captured requests
with a single method. This simplifies a common pattern in tests:
capturing requests and then clearing the captured requests.
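A minimal sketch of the simplified test pattern; the method name
getCapturedRequestsAndClear follows the description above, and the
surrounding test scaffolding is illustrative only:

    CapturingTransport transport = new CapturingTransport();
    // ... the code under test sends one request through the transport ...
    CapturingTransport.CapturedRequest[] captured = transport.getCapturedRequestsAndClear();
    assertThat(captured.length, equalTo(1));
    // the next assertions start from a clean slate; no separate clear() call needed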
Also renaming internal methods to reflect that they are dealing with
JTS coordinates. Also renamed the list() method to build() for creating
the coordinate lists, and added constructors to PolygonBuilder that
take CoordinatesBuilders and implicitly call build() on them.
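A hedged sketch of the resulting API; the coordinate(...) method name is
an assumption based on the description above:

    // constructing a polygon from a CoordinatesBuilder; the PolygonBuilder
    // constructor calls build() implicitly as described above
    PolygonBuilder polygon = new PolygonBuilder(
        new CoordinatesBuilder()
            .coordinate(0, 0)
            .coordinate(10, 0)
            .coordinate(10, 10)
            .coordinate(0, 10)
            .coordinate(0, 0));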
Rather than failing the request, when a node with node.ingest set to false receives an index or bulk request with a pipeline id, it should try to redirect the request to another node with node.ingest set to true. If there are no nodes with ingest set to true based on the current cluster state, an exception is returned and the request fails. Note that if there are no ingest nodes and a bulk request specifies a pipeline id for only a subset of its index requests, the whole bulk will fail.
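A minimal sketch of the forwarding decision, assuming the cluster state
exposes an iterable view of the discovery nodes and that DiscoveryNode
reports its ingest setting; names are illustrative, not the actual
implementation:

    // collect the nodes that have node.ingest set to true
    List<DiscoveryNode> ingestNodes = new ArrayList<>();
    for (DiscoveryNode node : clusterState.getNodes()) {
        if (node.isIngestNode()) {  // assumption: accessor for the node.ingest setting
            ingestNodes.add(node);
        }
    }
    if (ingestNodes.isEmpty()) {
        // no ingest nodes in the cluster: fail the request with an exception
        throw new IllegalStateException("no ingest nodes in the cluster, cannot forward request with pipeline id");
    }
    // otherwise redirect the request to a randomly chosen ingest node
    DiscoveryNode target = ingestNodes.get(random.nextInt(ingestNodes.size()));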
The indexing buffer on a node (default: 10% of the JVM heap) is now a "shared pool" across all shards on that node. This way, shards doing intense indexing can use much more than other shards doing only light indexing, and only once the sum of all indexing buffers across all shards exceeds the node's indexing buffer will we ask shards to move recently indexed documents to segments on disk.
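An illustrative sketch of the shared-pool check; the helper and accessor
names here are hypothetical, not the actual controller's API:

    // sum the indexing-buffer bytes used by every active shard on the node
    long totalBytesUsed = 0;
    for (IndexShard shard : availableShards()) {               // hypothetical helper
        totalBytesUsed += shard.getIndexBufferRAMBytesUsed();  // hypothetical accessor
    }
    // only when the shared pool is exhausted do we ask the heaviest writers
    // to move recently indexed documents to segments on disk
    if (totalBytesUsed > indexingBufferBytes) {
        writeIndexingBufferAsync(shardWithLargestBuffer());    // hypothetical helpers
    }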
When the content type of the request body could not be
determined, the UpdateRequest would still try to parse the content instead
of throwing the standard ElasticsearchParseException. This manifests when
passing illegal JSON in the request body that does not begin with a '{'.
By trying to parse the content from an unknown request body content type,
the UpdateRequest was throwing a null pointer exception. This has been
fixed to throw an ElasticsearchParseException, to be consistent with the
behavior of all other requests in the face of undecipherable request
content types.
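A sketch of the fix, assuming the content type is detected with
XContentFactory.xContentType, which returns null for undecipherable
content:

    XContentType xContentType = XContentFactory.xContentType(source);
    if (xContentType == null) {
        // previously this fell through to a parser and hit a NullPointerException
        throw new ElasticsearchParseException("failed to determine the content type of the request body");
    }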
Closes #15822
1. Uses forbidden patterns to prevent source files from referencing
java.io.Serializable or from mentioning serialVersionUID.
2. Uses -Xlint:-serial so javac doesn't warn that classes we write which
happen to extend Serializable don't declare serialVersionUID.
3. Removes Serializable and serialVersionUID declarations (see the sketch
below).
I didn't use forbidden apis because it doesn't look like it has a way to ban
explicitly implementing Serializable. If you try to ban Serializable with
forbidden apis you end up banning all Exceptions and all Strings.
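A hedged before/after illustration of item 3; the class is hypothetical:

    // before: explicit Serializable plumbing on a hypothetical class
    public class ExampleException extends RuntimeException implements Serializable {
        private static final long serialVersionUID = 1L;
    }

    // after: the declarations are simply dropped; RuntimeException already
    // implements Serializable (via Throwable), which is why -Xlint:-serial
    // is needed to silence the missing-serialVersionUID warning
    public class ExampleException extends RuntimeException {
    }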
Closes #15847
So far the validation of geo shapes was only taking place in the
parse methods in ShapeBuilder. With the recent refactoring we can no
longer rely on shapes being parsed from JSON, so the same kind
of validation should take place when using just the Java API.
A lot of validation concerns the number of points a shape needs to
have in order to be valid. Since this cannot be checked with the current
builders, where points can be added one by one, the builder constructors
are changed to require the mandatory parameters and to validate them
already at construction time. To help with constructing longer lists
of points, a new utility PointsListBuilder is introduced which can
produce lists of coordinates accepted by most of the other shape builder
constructors.
Also adding tests for invalid shape exceptions to the already existing
shape builder tests.
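A hedged sketch of the utility in use; the point(...) and list() method
names and the ShapeBuilders factory are assumptions based on the
description above:

    // build the coordinate list once, then feed it to a shape builder
    // whose constructor validates the mandatory parameters
    PolygonBuilder polygon = ShapeBuilders.newPolygon(
        new PointsListBuilder()
            .point(0, 0)
            .point(10, 0)
            .point(10, 10)
            .point(0, 10)
            .point(0, 0)   // closed ring, as polygon validation requires
            .list());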
Now that the ingest infra is part of es core, we can remove some code that was required by the plugin and integrate better with es core. We allow specifying the pipeline id in bulk and index as a request parameter; previously a REST filter parsed it and added it to the relevant action request. That filter is not required anymore, as we can add this logic to RestIndexAction and RestBulkAction directly. Also, we can now allow specifying a pipeline id for each index request in a bulk request. The small downside of this is that the ingest filter has to go over each item of a bulk request, every time, to figure out whether it has a pipeline id.
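A sketch of per-item pipelines in a bulk request; the setPipeline setter
name is an assumption:

    BulkRequest bulk = new BulkRequest();
    bulk.add(new IndexRequest("logs", "event", "1")
        .source("{\"message\":\"with pipeline\"}")
        .setPipeline("parse-logs"));                      // this item runs through the pipeline
    bulk.add(new IndexRequest("logs", "event", "2")
        .source("{\"message\":\"without pipeline\"}"));   // this one does not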
Running requests via the percolate or mpercolate api is irrelevant.
What is relevant is that when nodes come back, they report the expected number of matches.