For some seeds, the number of concurrent requests previously defined
in NettyHttpRequestSizeLimitIT was too low to trigger the intended
breaker limit.
With this commit we increase the number of concurrent requests sent
to the test cluster, thus triggering the limit.
`ip` fields currently fail range queries when either bound is exclusive. This
commit makes ranges also work in the exclusive case, to be consistent with other
data types.
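For illustration, here is how an exclusive-bound range query on an `ip` field can be expressed through the Java API; the field name and addresses are made up for the example:

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

public class IpRangeExample {
    public static void main(String[] args) {
        // Exclusive bounds: the case this change makes work for `ip` fields.
        RangeQueryBuilder exclusive = QueryBuilders.rangeQuery("client_ip")
                .gt("192.168.0.1")
                .lt("192.168.0.255");

        // Inclusive bounds already worked before the change.
        RangeQueryBuilder inclusive = QueryBuilders.rangeQuery("client_ip")
                .gte("192.168.0.1")
                .lte("192.168.0.255");

        System.out.println(exclusive);
        System.out.println(inclusive);
    }
}
```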
This commit addresses an issue in the calculation of the time to execute
a term vectors request. The underlying issue was due to measuring the
took time by passing the starting wall clock time along with the request
and calculating the total time using the ending wall clock time on the
responding node. The fix is to use a relative time source on a single
node.
Relates #17817
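A minimal sketch of the relative-time approach described above, using System.nanoTime() on a single node instead of comparing wall-clock timestamps taken on different nodes (the sleep stands in for executing the request):

```java
import java.util.concurrent.TimeUnit;

public class TookTimer {
    public static void main(String[] args) throws InterruptedException {
        // Relative time source on a single node; immune to clock skew between nodes.
        long startNanos = System.nanoTime();
        Thread.sleep(25); // stand-in for executing the term vectors request
        long tookMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        System.out.println("took " + tookMillis + "ms");
    }
}
```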
This changes our packaging to be explicit about the permissions of files
and directories in the tar.gz, rpm, and deb packages. This is to protect
against a user having an incorrectly set umask when installing.
Additionally, installed plugins now have their permissions set by the
plugin installer, so that plugins that may have been packaged with
incorrect permissions are secured.
Resolves #17634
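As a hedged illustration of the plugin side of the change above, a sketch that normalizes permissions over an extracted plugin directory with java.nio; the permission strings and the walk over the whole directory are assumptions, not the actual packaging rules:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;
import java.util.stream.Stream;

public class FixPluginPermissions {
    public static void main(String[] args) throws IOException {
        Path pluginDir = Paths.get(args[0]);
        // Explicit permissions, so the installer's umask cannot leak into the result.
        Set<PosixFilePermission> dirPerms = PosixFilePermissions.fromString("rwxr-xr-x");
        Set<PosixFilePermission> filePerms = PosixFilePermissions.fromString("rw-r--r--");
        try (Stream<Path> paths = Files.walk(pluginDir)) {
            paths.forEach(path -> {
                try {
                    Files.setPosixFilePermissions(path,
                            Files.isDirectory(path) ? dirPerms : filePerms);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```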
Replace with a constructor that takes StreamInput or a static method.
In one case (ValuesSourceType) we no longer need to serialize the data
at all!
Relates to #17085
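A minimal, hypothetical example of the pattern described above: deserialization happens in a constructor that takes a StreamInput, reading fields in the same order writeTo writes them. The class and its fields are invented for the example:

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

public class ExampleValue implements Writeable {
    private final String name;
    private final long count;

    public ExampleValue(String name, long count) {
        this.name = name;
        this.count = count;
    }

    // Deserialization constructor: replaces the old readFrom-on-a-PROTOTYPE pattern.
    public ExampleValue(StreamInput in) throws IOException {
        this.name = in.readString();
        this.count = in.readVLong();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(name);
        out.writeVLong(count);
    }
}
```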
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
Closes #17835
This adds some new tests to DocumentParserTests to make sure the DocumentParser behaves correctly when dynamically mapping fields, especially that the `dynamic` setting is honored when dynamically mapping different field types.
Adds a `fingerprint` token filter which uses Lucene's FingerprintFilter,
and a `fingerprint` analyzer that combines the Fingerprint filter with
lowercasing, stop word removal and asciifolding.
Closes #13325
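A sketch of what such an analyzer combines, built directly from the underlying Lucene filters; exact package locations and defaults vary between Lucene versions, and the stop word set here is only illustrative:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.miscellaneous.FingerprintFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class FingerprintLikeAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        TokenStream stream = new LowerCaseFilter(source);
        stream = new ASCIIFoldingFilter(stream);
        stream = new StopFilter(stream, EnglishAnalyzer.getDefaultStopSet()); // illustrative stop set
        stream = new FingerprintFilter(stream); // sorts, de-duplicates and concatenates the tokens
        return new TokenStreamComponents(source, stream);
    }
}
```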
Don't pass XContentParser to ParseFieldRegistry#lookup
Passing the parser in is not good because we are not parsing anything in the lookup methods; we only need it to retrieve the XContent location so that, if there is an error, we can report where we were in the parsing. It is better to pass in the XContentLocation, even though calling getTokenLocation creates a new object on each call. The workaround of passing the parser in is worse than the original problem.
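A simplified, self-contained stand-in for the idea (not the real ParseFieldRegistry or XContentLocation classes): the lookup takes only a location, captured once via getTokenLocation, and uses it purely for error reporting:

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleRegistry<T> {
    // Stand-in for XContentLocation: just a line/column pair.
    public static final class Location {
        final int line;
        final int column;
        public Location(int line, int column) { this.line = line; this.column = column; }
        @Override public String toString() { return "[" + line + ":" + column + "]"; }
    }

    private final Map<String, T> registry = new HashMap<>();
    private final String registryName;

    public SimpleRegistry(String registryName) {
        this.registryName = registryName;
    }

    public void register(String name, T value) {
        registry.put(name, value);
    }

    // Only the location is needed for the error message, so the parser is not passed in.
    public T lookup(String name, Location location) {
        T value = registry.get(name);
        if (value == null) {
            throw new IllegalArgumentException(location + " unknown " + registryName + " [" + name + "]");
        }
        return value;
    }
}
```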
Cluster health responses have not shown validation errors, which are
retrieved from RoutingTable validations, in any production or testing
instances. The code is unit tested well in this area and any issues are
exposed through the testing infrastructure, so this commit removes
reporting of validation errors in the cluster health response.
Closes #17773
Closes #16979
This is what happens when you pull on the "Remove the PROTOTYPEs from
MovAvgModels" string. This removes MovAvgModelStreams in favor of
readNamedWriteable and MovAvgParserMapper in favor of
ParseFieldRegistry<MovAvgModel.AbstractModelParser>.
Relates to #17085
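A hedged sketch of the resulting read/write path, assuming MovAvgModel implements NamedWriteable and using the package location from that era (it may differ in later versions):

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.search.aggregations.pipeline.movavg.models.MovAvgModel;

public class MovAvgModelIo {
    public static void write(StreamOutput out, MovAvgModel model) throws IOException {
        out.writeNamedWriteable(model); // generic NamedWriteable path
    }

    public static MovAvgModel read(StreamInput in) throws IOException {
        return in.readNamedWriteable(MovAvgModel.class); // replaces the MovAvgModelStreams lookup
    }
}
```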
Boolean fields were not handled in `DocumentParser.createBuilderFromFieldType`.
This also improves the logic a bit by falling back to the default mapping of
the given type instead of hard-coding every case, and throws a better exception
than an NPE if no dynamic mappings could be created.
Closes #17879
The Settings class has an enormous number of methods with variations of
parameters. This change removes all the methods that take multiple
setting names, which were completely unused.
We plan to change every request so that it can support a parentTaskId.
This shrinks EMPTY_TASK_ID, which will be on every request once that change
is made, from 31 bytes to 9 bytes, and finally down to 1 byte.
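A self-contained sketch of why the empty id can fit in a single byte, assuming a variable-length length prefix for the node id; this illustrates the idea rather than the exact wire format:

```java
public class EmptyTaskIdSketch {
    // Minimal vint writer: values below 128 take exactly one byte.
    static int writeVInt(byte[] buf, int offset, int value) {
        while ((value & ~0x7F) != 0) {
            buf[offset++] = (byte) ((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        buf[offset++] = (byte) value;
        return offset;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        String nodeId = "";                           // the empty task id has no node
        int end = writeVInt(buf, 0, nodeId.length()); // one byte for the zero length
        // no task id number is written when the node id is empty
        System.out.println("serialized size: " + end + " byte(s)"); // prints 1
    }
}
```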
This change fixes the lookup during document parsing of whether an
object field is dynamic to handle looking up through parent object
mappers, and also handle when a parent object mapper was created during
document parsing.
Closes #17854
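An illustrative stand-in (not the real ObjectMapper types) for the lookup: walk up the object path, consulting both the existing mappers and those created during the current parse, and fall back to the index default only if nothing on the path sets `dynamic`:

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicLookup {
    enum Dynamic { TRUE, FALSE, STRICT, UNSET }

    static Dynamic resolveDynamic(String objectPath,
                                  Map<String, Dynamic> existingMappers,
                                  Map<String, Dynamic> mappersCreatedDuringParse,
                                  Dynamic indexDefault) {
        String path = objectPath;
        while (path != null) {
            Dynamic dynamic = mappersCreatedDuringParse.getOrDefault(path,
                    existingMappers.getOrDefault(path, Dynamic.UNSET));
            if (dynamic != Dynamic.UNSET) {
                return dynamic;
            }
            int dot = path.lastIndexOf('.');
            path = dot == -1 ? null : path.substring(0, dot); // walk up to the parent object
        }
        return indexDefault; // nothing set anywhere on the path: use the index-level default
    }

    public static void main(String[] args) {
        Map<String, Dynamic> existing = new HashMap<>();
        existing.put("outer", Dynamic.FALSE);
        Map<String, Dynamic> created = new HashMap<>();
        created.put("outer.inner", Dynamic.UNSET); // created during parsing, no explicit setting
        System.out.println(resolveDynamic("outer.inner", existing, created, Dynamic.TRUE)); // FALSE
    }
}
```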
Now there aren't any more specialized methods to read or write
NamedWriteables! Now StreamInput and StreamOutput don't need to know
about large chunks of the application!
In Elasticsearch 5.0.0, by default unquoted field names in JSON will be
rejected. This can cause issues, however, for documents that were
already indexed with unquoted field names. To alleviate this, a system
property has been added that can be enabled so migration can occur.
This system property will be removed in Elasticsearch 6.0.0.
Resolves #17674
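A hedged sketch of the mechanism using Jackson, which backs Elasticsearch's JSON parsing; the system property name below is an assumption for illustration:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

public class JsonFactoryConfig {
    public static JsonFactory newFactory() {
        JsonFactory factory = new JsonFactory();
        // Property name is illustrative; see the referenced issue for the actual switch.
        if (Boolean.parseBoolean(System.getProperty("elasticsearch.json.allow_unquoted_field_names"))) {
            factory.configure(JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
        }
        return factory;
    }
}
```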