The "category" in context suggester could be String, Number or Boolean. However with the changes in version 5 this is failing and only accepting String. This will have problem for existing users of Elasticsearch if they choose to migrate to higher version; as their existing Mapping and query will fail as mentioned in a bug #22358
This PR fixes the above mentioned issue and allows user to migrate seamlessly.
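For illustration, a minimal sketch (index, type, and field names hypothetical) of a document that indexed fine before 5.x but was rejected after, assuming `open_now` is mapped as a category context on a completion field:

PUT place/shops/1
{
  "suggest": {
    "input": ["coffee shop"],
    "contexts": { "open_now": true }
  }
}

With this fix, the boolean context value is accepted again instead of failing with a parsing error.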
Closes #22358
This change makes the build info initialization only try to load a jar
manifest if it is the Elasticsearch jar. Anything else (e.g. ES repackaged
into an uber jar for use with the transport client) will contain "Unknown"
for the build info, as it currently does for tests.
Fixes #21955
It can easily happen that we touch a logger before logging is configured
due to chains of static initializers and other such scenarios. This
commit adds a detection mechanism that fails startup if we touch a
logger before logging is configured. Such a bug will now cause builds
to fail.
Relates #24076
This is required in master now that #24071 is in, or else we fail during BWC testing because the 5.x branch contains 5.5 but the build thinks it should contain 5.4.
Currently we can run into test errors by accidentally using e.g. a "simple"
analyzer on a numeric field, which might lead to number parsing errors. While
these errors are correct, we should avoid such combinations in our regular
tests.
The S3 repository has many levels of settings it looks at to create a
repository, and these settings were read at repository creation time.
This meant secure settings like access and secret keys had to be
available after node construction. This change makes setting loading
eager for everything except repository-level settings, so that secure
settings can be stashed, and the keystore can once again be closed after
bootstrapping the node is complete.
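As an illustration, a sketch of stashing the keys before the node starts, assuming the 6.x-style S3 client secure setting names (the exact setting names here are an assumption):

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key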
Today, Elasticsearch and other CLI tools that rely on environment-aware
commands leniently accept duplicate settings, with the last one
winning. This commit removes this leniency.
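For illustration (setting values hypothetical), a startup like the following previously kept the last value silently and now fails with an error instead:

bin/elasticsearch -E cluster.name=one -E cluster.name=two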
Relates #24053
This is related to #23893. This commit allows users to use wildcards for
cluster names when executing a cross-cluster search.
So instead of specifying every cluster, such as:
GET one:*,two:*,three:*/_search
A user could just search:
GET *:*/_search
As ":" characters are currently allowed in index names, if the text
up to the first ":" does not match a defined cluster name, the entire
string is treated as an index name.
Drops any mention of non-sandboxed scripting languages other than a
brief "we don't support them and we shouldn't because A and B"
statement.
Relates to #23930
All our actions that are invoked from REST actions have corresponding
transport actions. This adds the transport action for RestRemoteClusterInfoAction
for consistency.
Relates to #23969
This commit skips the two Painless tests
EqualsTests#testBranchEqualsDefAndPrimitive and
EqualsTests#testBranchNotEqualsDefAndPrimitive on Windows as the tests
are repeatedly failing there.
When a primary relocation completes while there are ongoing replica recoveries, the recoveries for these replicas need to be restarted (as a new primary is in charge of replicating changes). Before this commit, the need for a recovery restart was detected by the data nodes that had the replicas, by checking on each cluster state update if the recovery process had completed before the recovery source changed. That code had a race, however, which could lead to a not-fully recovered shard exposing itself as started (see #23904).
This commit takes a different approach: When the primary relocation completes and the master updates the cluster state to move the primary shard from relocating to started, it will reinitialize all initializing replica shards, by giving them a fresh allocation id. Data nodes that have the replica shard will simply detect that the allocation id changed and restart the recovery process (instead of trying to determine the need to restart based on ongoing recoveries).
Note: Removal of the code in IndicesClusterStateService that checks whether the recovery source has changed will not be backported to the 5.x branch. This ensures backward compatibility for the situation where the master node is older and does not have the code changes that have been introduced in this PR.
Closes #23904
The bwc checkout for backcompat tests currently always tries to fetch
the latest from the upstream remote. This change makes fetching from
upstream conditional on not running an offline build.
The `AsyncBulkByScrollActionTests` were brittle because they used the
current time. That was a mistake. This removes the current time from
the test, instead adding it to the parameters passed in to the
appropriate methods. This means that we take the current time slightly
earlier in all cases, but that shouldn't make a difference.
Closes #24005
Example failure:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+nfs/161/consoleFull
This commit removes passing the repository metadata object through to
S3 client creation. It is not needed, and in fact it was confusing in tests
because you could create the metadata but have it contain different
settings than were passed in as repository settings.
Some systems like GCE rely on a plaintext file containing credentials.
Rather than extract the information out of that credentials file and
store each piece individually in the keystore, it is cleaner to just
store the entire file.
This commit adds support to the keystore wrapper for secure file
settings. These are settings that contain an entire file that would
normally be stored on the local filesystem. Retrieving the file returns
an input stream to the file contents. This also adds an `add-file`
command to the keystore CLI.
In order to support both strings and files as values for settings, the
metadata format of the keystore has also been updated (with backcompat)
to keep a map of setting name to type.
We are still carrying some legacy code that deals with Lucene indices
that don't have checksums. Yet, we have not supported such indices
for a while now; in fact, since version 5.0 these indices are no longer
supported. This commit removes all the special handling and leniency involved.
This commit adds support for replacing a stashed value within a header of a REST test. This is
useful for requests that may want to use a previously obtained value within a header.
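A minimal sketch of such a test snippet (index, header, and stash names hypothetical), assuming a value was previously stashed as `token` via a `set` directive:

- do:
    headers:
      Authorization: "Bearer $token"
    get:
      index: test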
Now that we have incremental reduce functions for top-N and aggregations,
we can set the default for `action.search.shard_count.limit` to unlimited.
This still allows users to restrict this setting, while by default we execute
across all shards matching the search request's index pattern.
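For illustration (limit value hypothetical), users who want a safeguard back can still set the limit dynamically:

PUT _cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 1000
  }
}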
This reverts the line-limit change in #23623. This PR doesn't touch the suppression file, since we are moving towards automatic code formatting, which makes it mostly obsolete.
Adds the option for a plugin to specify extra directories containing notices
and licenses files to be incorporated into the overall notices file that is
generated for the plugin.
This can be useful, for example, where the plugin has a non-Java dependency
that itself incorporates many 3rd party components.
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `MultiBucketsAggregation.Bucket` interface, which is returned to users running aggregations from the transport client. The method is moved to the `InternalMultiBucketAggregation` class, as that's where it belongs.
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `Aggregations` interface, which is returned to users running aggregations from the transport client. Furthermore, the method is currently unused by pipeline aggs too, as only `InternalAggregation#getProperty` is used. It can therefore be removed.
We deprecated this method in the past because we thought it was a temporary thing that could go away over time. We radically trimmed down the usages of a context while parsing when we got rid of the ParseFieldMatcher, but the usages that are left are legitimate and we will hardly get rid of them. Also, working on aggs parsing, we will need a context to carry around the aggregation name that gets parsed through `XContentParser#namedObject`.
The buffer limit should have been configurable already, but the factory constructor was package-private, so it was truly configurable only from the org.elasticsearch.client package. Also, the HttpAsyncResponseConsumerFactory interface was package-private, so it could only be implemented from the org.elasticsearch.client package.
Closes #23958
After two nodes are stopped and two more join the cluster, we first have to wait for the cluster to consist of the right nodes before
waiting on green status; otherwise we might get a green status for a cluster with dead nodes.
We had a couple of unfortunate field name collisions in our CI, where the JSON duplicate check tripped. Increasing the minimum length of randomly generated field names should decrease the chance of this issue happening again.
_field_stats has evolved quite a lot to become a multi-purpose API capable of retrieving the field capabilities and the min/max value for a field.
In the meantime a more focused API called `_field_caps` has been added; this endpoint is a good replacement for _field_stats since it can
retrieve the field capabilities by just looking at the field mapping (no lookup in the index structures).
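For illustration (index and field names hypothetical), the replacement endpoint can be queried with:

GET twitter/_field_caps?fields=rating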
Also, the recent improvements made to range queries make the _field_stats API obsolete, since these queries are now rewritten per shard based on the min/max found for the field.
This means that a range query that does not match any document in a shard can return quickly and can be cached efficiently.
For these reasons this change deprecates _field_stats. The deprecation should happen in 5.4, but we won't remove this API in 6.x yet, which is why
this PR is made directly against 6.0.
The REST tests have also been adapted to not throw an error while this change is backported to 5.4.
This commit adds support for incremental top-N reduction if the number of
expected shards in the search request is high enough. The changes here
also clean up more code in SearchPhaseController to clarify the separation
between values that are the same on each search result and values that
are per response. The reduced search phase result no longer holds an arbitrary
result to obtain values like `from`, `size`, or sort values; these are now
cleanly encapsulated.
The refactoring in #23711 hardcoded the version logic for replicas to assume monotonic versions. Sadly, that's wrong for `FORCE` and `VERSION_GTE`. Instead, we should use the methods in VersionType to detect conflicts.
Note: once replicas use sequence numbers for out-of-order delivery, this logic goes away.
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995