We had a couple of unfortunate field name collisions in our CI, where the JSON duplicate check tripped. Increasing the minimum length of randomly generated field names should decrease the chance of this issue happening again.
_field_stats has evolved quite a lot to become a multi-purpose API capable of retrieving the field capabilities and the min/max value for a field.
In the meantime a more focused API called `_field_caps` has been added; this endpoint is a good replacement for `_field_stats` since it can
retrieve the field capabilities by just looking at the field mapping (no lookup in the index structures).
Also, the recent improvements made to range queries make the `_field_stats` API obsolete, since these queries are now rewritten per shard based on the min/max values found for the field.
This means that a range query that does not match any document in a shard can return quickly and can be cached efficiently.
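To make the rewrite concrete, here is a minimal sketch in Lucene terms of how a shard can prove a range cannot match; the exact integration point in Elasticsearch is an assumption:

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.PointValues;

// Sketch: if the requested range does not intersect the shard's [min, max]
// for the field, the query can be rewritten to match no documents at all.
final class RangeRewriteSketch {
    static boolean cannotMatch(IndexReader reader, String field,
                               byte[] lower, byte[] upper) throws IOException {
        byte[] min = PointValues.getMinPackedValue(reader, field); // shard-wide minimum
        byte[] max = PointValues.getMaxPackedValue(reader, field); // shard-wide maximum
        if (min == null || max == null) {
            return true; // no indexed values for this field on this shard
        }
        return unsignedCompare(upper, min) < 0 || unsignedCompare(lower, max) > 0;
    }

    private static int unsignedCompare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0) {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }
}
```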
For these reasons this change deprecates `_field_stats`. The deprecation should happen in 5.4 but we won't remove this API in 6.x yet, which is why
this PR is made directly against 6.0.
The REST tests have also been adapted to not throw an error while this change is backported to 5.4.
This commit adds support for incremental top N reduction if the number of
expected shards in the search request is high enough. The changes here
also clean up more code in SearchPhaseController to sharpen the separation
between values that are the same on each search result and values that
are per response. The reduced search phase result no longer holds an arbitrary
result just to obtain values like `from`, `size`, or sort values; these are now
cleanly encapsulated.
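For illustration, a hedged sketch of how a request opts into the incremental reduce; the accessor follows the `batched_reduce_size` request parameter, and its exact name here is an assumption:

```java
import org.elasticsearch.action.search.SearchRequest;

final class IncrementalReduceSketch {
    // Sketch: buffer at most 64 shard results before performing a partial
    // top-N reduce, instead of holding every shard's result in memory until
    // the final reduce. Only kicks in when enough shards are involved.
    static SearchRequest request() {
        SearchRequest request = new SearchRequest("logs-*");
        request.setBatchedReduceSize(64);
        return request;
    }
}
```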
The refactoring in #23711 hardcoded the version logic for replicas to assume monotonic versions. Sadly that's wrong for `FORCE` and `EXTERNAL_GTE`. Instead we should use the methods in `VersionType` to detect conflicts.
Note: once replicas use sequence numbers for out-of-order delivery, this logic goes away.
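A hedged sketch of why the delegation matters; the `isVersionConflictForWrites` signature is my assumption of the `VersionType` API:

```java
import org.elasticsearch.index.VersionType;

final class ReplicaVersionSketch {
    // Monotonicity does not hold for all version types: FORCE accepts any
    // incoming version and EXTERNAL_GTE accepts one equal to the current
    // version, so "newer implies strictly larger" is unsafe on replicas.
    static boolean conflicts(long currentVersion, long expectedVersion, boolean deleted) {
        return VersionType.EXTERNAL_GTE.isVersionConflictForWrites(
                currentVersion, expectedVersion, deleted);
    }
}
```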
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995
The ExtrasFS filesystem creates extra directories when creating temp
directories during tests to ensure that Lucene does not care about extra
files. These extra files get in our way in the plugins service tests
because some of these tests count on only certain directories
existing. This commit suppresses the ExtrasFS filesystem for the plugins
service tests, and fixes a test that was passing for the wrong reason
(because of the existence of an extra directory from ExtrasFS).
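The suppression itself is a one-line annotation from the Lucene test framework; a minimal sketch:

```java
import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.test.ESTestCase;

// Sketch: opt this suite out of ExtrasFS so no surprise "extra" files or
// directories are injected into the temp directories the tests create.
@LuceneTestCase.SuppressFileSystems("ExtrasFS")
public class PluginsServiceTests extends ESTestCase {
    // tests that count on exactly the directories they created being present
}
```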
This commit removes some leniency from the plugin service, which used to skip
hidden files in the plugins directory. We really want to ensure the
integrity of the plugin folder, so hasta la vista, leniency.
Relates #23982
This commit removes the "legacy" feature of secure settings, which setup
a parallel setting that was a fallback in the insecure
elasticsearch.yml. This was previously used to allow the new secure
setting name to be that of the old setting name, but is now not in use
due to other refactorings. It is much cleaner to just have all secure
settings use new setting names. If in the future we want to reuse the
previous setting name, once support for the insecure settings have been
removed, we can then rename the secure setting. This also adds a test
for the behavior.
This change adds secure settings for access/secret keys and proxy
username/password to ec2 discovery. It adds the new settings with the
prefix `discovery.ec2`, copies other relevant ec2 client settings to the
same prefix, and deprecates all other settings (`cloud.aws.*` and
`cloud.aws.ec2.*`). Note that this is simpler than the client configs
in repository-s3 because discovery is only initialized once for the
entire node, so there is no reason to complicate the configuration with
the ability to have multiple sets of client settings.
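As an illustration, the new-style declarations might look like the following; the exact setting objects are an assumption, only the `discovery.ec2` prefix comes from this change:

```java
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;

// Sketch: one flat set of secure client settings for the whole node; no
// named client profiles as in repository-s3, since discovery is
// initialized exactly once.
final class Ec2SecureSettingsSketch {
    static final Setting<SecureString> ACCESS_KEY =
            SecureSetting.secureString("discovery.ec2.access_key", null);
    static final Setting<SecureString> SECRET_KEY =
            SecureSetting.secureString("discovery.ec2.secret_key", null);
}
```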
relates #22475
reindex_from_remote was using `TimeValue#toString` to generate the
scroll timeout, which is bad because that generates fractional
time values that are useful for people but bad for Elasticsearch,
which doesn't like to parse them. This switches it to using
`TimeValue#getStringRep`, which spits out whole time values.
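The difference in a nutshell (values illustrative):

```java
import org.elasticsearch.common.unit.TimeValue;

final class ScrollTimeoutSketch {
    static void demo() {
        TimeValue timeout = TimeValue.timeValueMillis(1500);
        String pretty = timeout.toString();     // "1.5s"   -- fractional, rejected by the parser
        String exact = timeout.getStringRep();  // "1500ms" -- whole value, round-trips safely
    }
}
```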
Closes #23945
Makes #23828 even more desirable
They needed to be updated now that Painless is the default and
the non-sandboxed scripting languages are going away or gone.
I dropped the entire section about customizing the classloader
whitelists. In master this barely does anything (exposes more
things to expressions).
This test was sporadically failing for the following reason:
- 4 nodes (nodes 0, 1, 2, and 3) running with `minimum_master_nodes` set to 3
- we stop 2 nodes (node 0 and 3)
- wait for cluster block to be in place on all nodes
- start 2 nodes (node 4 and node 5) and do a `prepareHealth().setWaitForNodes("4")`
- then do a search request
The search request runs into the `ClusterBlockException` as the `prepareHealth().setWaitForNodes("4")` check succeeds on a cluster state that has
nodes 1, 2, 3, and 4, i.e., only one of the two new nodes has joined the cluster and only one of the two dead nodes was removed by the master
(removing the dead nodes only happens after there are again `minimum_master_nodes` nodes in the cluster).
This commit fixes the issue by reusing a method from InternalTestCluster that checks that the right nodes have rejoined the cluster.
The test assumes that two nodes leaving the cluster results in two cluster state updates on the master, which is invalidated by cluster state
batching.
Shuffling xContent breaks the order of the highlighter fields in the
internal list if the highlighter doesn't use the array syntax. In other tests we
avoid shuffling this JSON level, but since this is done in the base test for
aggregations we should ensure the highlight builder uses the array syntax here.
This commit fixes the handling of POSIX permissions on Windows in the
spawner tests. Since POSIX permissions do not exist there, we first have
to check if we are on a filesystem that supports POSIX or not before
attempting to set the permissions.
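Roughly, the check looks like this (a sketch using the standard NIO API, not the exact test code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

final class PosixAwarePermissionsSketch {
    static void setIfSupported(Path path, Set<PosixFilePermission> perms) throws IOException {
        // the POSIX view is null on filesystems without POSIX support (e.g. Windows)
        if (Files.getFileAttributeView(path, PosixFileAttributeView.class) != null) {
            Files.setPosixFilePermissions(path, perms);
        }
    }
}
```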
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `Aggregation` interface, which is returned to users running aggregations from the transport client. The method is moved to the `InternalAggregation` class as that's where it belongs.
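For context, a hedged sketch of the kind of path-based lookup this method serves; the path string here is hypothetical:

```java
import org.elasticsearch.search.aggregations.InternalAggregation;

final class GetPropertySketch {
    // Pipeline aggregations resolve values by walking the aggs tree by path;
    // this is internal plumbing, hence it belongs on InternalAggregation and
    // not on the client-facing Aggregation interface.
    static Object resolve(InternalAggregation agg) {
        return agg.getProperty("value"); // e.g. the value of a single-value metric agg
    }
}
```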
ESTestCase has methods to shuffle xContent keys given a builder or a parser. Shuffling wasn't actually doing what was expected but rather reordering the keys in their natural ordering, hence the output was always the same at every run. Corrected that and added tests, also fixed a couple of tests that were affected by this fix.
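The bug boiled down to producing the keys in sorted order where a shuffle was intended; roughly (simplified sketch, not the exact test code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

final class KeyShuffleSketch {
    // Before: keys came out in their natural order, so every run produced the
    // same "shuffled" output and the shuffling effectively tested nothing.
    static List<String> broken(List<String> keys) {
        List<String> copy = new ArrayList<>(keys);
        Collections.sort(copy); // deterministic ordering, not a shuffle
        return copy;
    }

    // After: a genuine shuffle driven by the test's reproducible random source.
    static List<String> fixed(List<String> keys, Random random) {
        List<String> copy = new ArrayList<>(keys);
        Collections.shuffle(copy, random);
        return copy;
    }
}
```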
Before, ranges based on `now` were forbidden, because the percolator query itself could get cached, and then percolator queries with `now`-based ranges that should no longer match would incorrectly continue to match.
By disabling caching when the `percolator` query is being used, the percolator can now correctly support range queries with `now`-based ranges.
I think this is the right tradeoff. The percolator query is likely not to be the same between search requests, and forbidding range queries with `now`-based ranges really blocked people from using the percolator for their use cases.
Also fixed an issue that existed in the percolator field mapper: it was unable to find forbidden queries inside `dismax` queries.
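In Lucene terms, the caching side of this amounts to something like the sketch below; where exactly the percolator hooks this in is an assumption:

```java
import org.apache.lucene.search.IndexSearcher;

final class PercolatorCachingSketch {
    // With the query cache off, a `now`-relative range inside a percolated
    // query is re-evaluated on every execution instead of being answered
    // from a cache entry computed with a stale `now`.
    static IndexSearcher withoutQueryCache(IndexSearcher searcher) {
        searcher.setQueryCache(null);
        return searcher;
    }
}
```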
Closes #23859
If a snapshot is taken on multiple indices, and some of them are "good"
indices that don't contain any corruption or failures, and some of them
are "bad" indices that contain missing shards or corrupted shards, and
if the snapshot request is set to partial=false (meaning don't take a
snapshot if there are any failures), then the good indices will not be
snapshotted either. Previously, when getting the status of such a
snapshot, a 500 error would be thrown, because the snap-*.dat blob for
the shards in the good index could not be found.
This commit fixes the problem by reporting shards of good indices as
failed due to a failed snapshot, instead of throwing the
NoSuchFileException.
Closes #23716
Currently, both the Amazon S3 client and the S3 blob store attempt
retries for failed read/write requests.
Both retry mechanisms are controlled by the
`repositories.s3.max_retries` setting. However, the S3 blob store retry
mechanism is unnecessary because the Amazon S3 client provided by the
Amazon SDK already handles retries (with exponential backoff) based on
the provided max retry configuration setting (defaults to 3) as long as
the request is retryable. Hence, this commit removes the unneeded retry
logic in the S3 blob store and the S3OutputStream.
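For reference, the retry handling that remains is the SDK's own, configured roughly like this (construction details are an assumption):

```java
import com.amazonaws.ClientConfiguration;

final class S3RetrySketch {
    // The Amazon SDK retries retryable failures itself, with exponential
    // backoff, up to the configured maximum; a second retry loop in the
    // blob store only multiplied the number of attempts.
    static ClientConfiguration clientConfiguration(int maxRetries) {
        return new ClientConfiguration().withMaxErrorRetry(maxRetries); // from repositories.s3.max_retries
    }
}
```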
Closes #22845
This change disables graph analysis of token streams containing a shingle or CJK filter that produces shingles or ngrams of different sizes. The graph analysis is disabled for phrase and boolean queries.
Closes #23918
Windows REST tests consistently fail because the filesystem appears to be
an order of magnitude slower than that of *nix, at least in the context
of our rest tests. This commit overrides the suite timeout to 30 mins
for windows. From past failures, it appears this should be enough, as
the tests seem to fail when they are almost complete. The default suite
timeout for ESTestCase is 20 mins, so this leaves ample buffer for
windows shenanigans.
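The override itself is a suite-level annotation from the randomized testing framework; a sketch (the class name is hypothetical):

```java
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
import org.apache.lucene.util.TimeUnits;
import org.elasticsearch.test.ESTestCase;

// Sketch: lift the suite timeout from the 20-minute ESTestCase default to
// 30 minutes to absorb the slower Windows filesystem.
@TimeoutSuite(millis = 30 * TimeUnits.MINUTE)
public class WindowsRestTestsIT extends ESTestCase {
}
```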
This commit modifies the BulkProcessor to be decoupled from the
client implementation. Instead it just takes a
BiConsumer<BulkRequest, ActionListener<BulkResponse>> that executes
the BulkRequest.
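A sketch of what the decoupling enables; the transport client is just one possible consumer (builder details beyond the BiConsumer are assumptions):

```java
import java.util.function.BiConsumer;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

final class BulkProcessorSketch {
    static BulkProcessor build(Client client, BulkProcessor.Listener listener) {
        // any strategy for executing a BulkRequest can be plugged in here
        BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer =
                (request, bulkListener) -> client.bulk(request, bulkListener);
        return BulkProcessor.builder(consumer, listener).build();
    }
}
```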
When executing an index operation on the primary shard,
`TransportShardBulkAction` first parses the document, sees if there are any
mapping updates that need to be applied, and then updates the mapping on the
master node. It then re-parses the document to make sure that the mappings have
been applied and propagated.
This adds a check that skips the second parsing of the document when
no mapping update was applied in the first pass.
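In outline (hypothetical helper names, heavily simplified from the real flow):

```java
// parse once; only push a mapping update to the master and re-parse when
// the first parse actually produced a dynamic mapping update
ParsedDocument doc = parse(request);
if (doc.dynamicMappingsUpdate() != null) {
    updateMappingOnMaster(doc.dynamicMappingsUpdate());
    doc = parse(request); // the second parse happens only in this case now
}
index(doc);
```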
Fixes a performance regression introduced in #23665
Today we have several code paths to merge top docs based on the number of
search results returned from the shards. If there is only a single shard
holding any hits we go down a different code path with quite some complexity, while
if there is more than one the code is basically duplicated to save the
creation of a dense array of top docs, which can be large if there are many results.
This commit removes the need for the dense array and, in turn, the justification for
the optimization, introducing a single code path to merge top docs.
The InternalEngine Index/Delete methods (plus satellites like version loading from Lucene) have accumulated some cruft over the years, making it hard to clearly follow the code flows for the various use cases (primary indexing/recovery/replicas etc.). This PR refactors those methods for better readability. The methods are broken up into smaller sub-methods, albeit at the price of less code reuse.
To support the refactoring I have considerably beefed up the versioning tests.
This PR is a spin-off from #23543, which made it clear this is needed.
The purpose of this validation is to make sure that the master doesn't step down
due to a change in master nodes, which also means that there is no way to revert
an accidental change. Since we validate using the current cluster state (and
not the one from which the settings come) we have to be careful and only
validate if the local node is already a master. Doing so all the time causes
subtle issues. For example, a node that joins a cluster has no nodes in its
current cluster state. When it receives a cluster state from the master with
a dynamic minimum master nodes setting in it, we must make sure we don't reject it.
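The guard is roughly the following (hedged sketch; the exact integration point is an assumption):

```java
import org.elasticsearch.cluster.ClusterState;

final class MinimumMasterNodesGuardSketch {
    // Only validate the dynamic minimum_master_nodes update against the local
    // cluster state when this node is the elected master; a freshly joining
    // node has no nodes in its state and would spuriously reject the setting.
    static boolean shouldValidate(ClusterState state) {
        return state.nodes().isLocalNodeElectedMaster();
    }
}
```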
Closes #23695
SingleNodeDiscoveryIT uses a hardcoded port for the purpose of binding
two nodes within the limited port range that an unconfigured unicast zen
ping hosts list would try to discover another node on. This commit at
least removes this hardcoding for the first node to come up, although it
still tries to bind the second node to the limited port range after the
first node has bound.