* WaitForRolloverReadyStepTests#mutateInstance sometimes did not mutate the instance
correctly
* 40_explain_lifecycle#"Test new phase still has phase_time" is not really a necessary
integration test. In addition to this, it is flaky due to the asynchronous nature of
ILM metadata population
The following updates were made:
* Add deprecation warnings to `RestUpdateAction`, plus a test in `RestUpdateActionTests`.
* Deprecate relevant methods on the Java HLRC requests/responses.
* Add HLRC integration tests for the typed APIs.
* Update documentation (for both the REST API and Java HLRC).
* Fix failing integration tests.
Because of an earlier PR, the REST yml tests were already updated (one version without types, and another legacy version that retains types).
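A rough sketch of the kind of deprecation test this adds for `RestUpdateAction`; the handler constructor, the exact warning text, and the helper wiring are assumptions about the test framework of that version, not the actual code:

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.document.RestUpdateAction;
import org.elasticsearch.test.rest.FakeRestRequest;
import org.elasticsearch.test.rest.RestActionTestCase;
import org.junit.Before;

public class RestUpdateActionTypeDeprecationTests extends RestActionTestCase {

    // Hypothetical warning text; the real message is a constant on the handler.
    private static final String TYPES_DEPRECATION_MESSAGE =
        "[types removal] Specifying types in document update requests is deprecated.";

    @Before
    public void setUpAction() {
        // Registers the handler with the test RestController (constructor shape assumed).
        new RestUpdateAction(Settings.EMPTY, controller());
    }

    public void testTypeInPathTriggersDeprecationWarning() {
        RestRequest request = new FakeRestRequest.Builder(xContentRegistry())
            .withMethod(RestRequest.Method.POST)
            .withPath("/some_index/some_type/some_id/_update")
            .build();
        dispatchRequest(request);
        assertWarnings(TYPES_DEPRECATION_MESSAGE);
    }
}
```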
SearchSortValuesTests now extends `AbstractSerializingTestCase`, which removes some code duplication and standardizes the way we test `fromXContent`, serialization and equals/hashCode.
Also, we were never creating `SearchSortValues` through the public constructor that accepts an array of `DocValueFormat` together with the array of raw sort values. That is covered now, which required some conversion from `BytesRef` to `String` in the test.
Also, the previous test did not do any equality check between the original and parsed versions in `testFromXContent`, because values were sometimes parsed back with different types. That is now covered by converting those values using a new method added to `RandomObjects`. The code was already there as part of `randomStoredFieldValues`, but it is now exposed for use in other scenarios.
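For reference, extending `AbstractSerializingTestCase` boils the test down to three overrides, roughly as in the sketch below (the `SearchSortValues` constructor and `fromXContent` signatures are assumptions for illustration):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractSerializingTestCase;

public class SearchSortValuesSerializationSketch extends AbstractSerializingTestCase<SearchSortValues> {

    @Override
    protected SearchSortValues createTestInstance() {
        // kept trivially simple for the sketch: a couple of raw sort values
        return new SearchSortValues(new Object[] { randomDouble(), randomAlphaOfLength(5) });
    }

    @Override
    protected Writeable.Reader<SearchSortValues> instanceReader() {
        return SearchSortValues::new; // StreamInput-based constructor
    }

    @Override
    protected SearchSortValues doParseInstance(XContentParser parser) throws IOException {
        return SearchSortValues.fromXContent(parser);
    }
}
```

The base class then drives the `fromXContent`, wire serialization and equals/hashCode tests from those three methods.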
For cross cluster search alternate execution mode (see #32125), we will need to take a search request that spans multiple clusters (based on index prefixes e.g. cluster1:index, cluster2:index etc.) and split it into multiple search requests to be sent to each cluster. A copy constructor added to `SearchRequest` makes that easy and maintainable in the future.
Something along the same lines already happens in `BulkByScrollParallelizationHelper`, but the corresponding code has gone out of date as new fields were added to `SearchRequest` without being added to the bulk-by-scroll copying code. A copy constructor keeps the task of copying a search request maintainable over time.
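As an illustration of the pattern (an illustrative stand-in class, not the real `SearchRequest`), a copy constructor centralizes the copying so that a newly added field only needs to be handled in one place:

```java
// Illustrative stand-in for the copy-constructor pattern, not the real SearchRequest.
public class CopyConstructorSketch {
    private String[] indices;
    private String routing;
    private String preference;
    private Object source; // stands in for SearchSourceBuilder

    public CopyConstructorSketch() {}

    /** Copy constructor: the single place to keep in sync when fields are added. */
    public CopyConstructorSketch(CopyConstructorSketch original) {
        this.indices = original.indices == null ? null : original.indices.clone();
        this.routing = original.routing;
        this.preference = original.preference;
        this.source = original.source;
    }
}
```

Callers such as `BulkByScrollParallelizationHelper` can then copy a request in one call instead of copying fields by hand and silently missing newly added ones.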
These classes were introduced when Action required a RequestBuilder type.
That is no longer needed, hence we can remove `NoopBulkRequestBuilder`
and `NoopSearchRequestBuilder`.
Introduce Histogram grouping function for bucketing/grouping data based
on a given range. Both date and numeric histograms are supported using
the appropriate range declaration (numbers vs intervals).
SELECT HISTOGRAM(number, 50) AS h FROM index GROUP BY h
SELECT HISTOGRAM(date, INTERVAL 1 YEAR) AS h FROM index GROUP BY h
In addition, add a multiplication operator for intervals.
Add docs for intervals and histogram.
Fix #36509
* Add IntervalQueryBuilder with support for match and combine intervals (a usage sketch follows after this commit list)
* Add relative intervals
* feedback
* YAML test - broken
* yaml test; begin to add block source
* Add block; make disjunction its own source
* WIP
* Extract IntervalBuilder and add tests for it
* Fix eq/hashcode in Disjunction
* New yaml test
* checkstyle
* license headers
* test fix
* YAML format
* YAML formatting again
* yaml tests; javadoc
* Add OR test -> requires fix from LUCENE-8586
* Add docs
* Re-do API
* Clint's API
* Delete bash script
* doc fixes
* imports
* docs
* test fix
* feedback
* comma
* docs fixes
* Tidy up doc references to old rule
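For illustration, a minimal sketch of sending such an intervals query from Java by wrapping raw JSON; the JSON shape (an `all_of` source combining `match` and `any_of` intervals) follows the API described above, but the field name, query text and exact option names should be treated as assumptions:

```java
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class IntervalsQuerySketch {

    /** Builds an intervals query as raw JSON and wraps it in a QueryBuilder. */
    public static QueryBuilder intervalsQuery() {
        String json =
            "{\"intervals\": {\"my_text\": {\"all_of\": {"
            + "\"ordered\": true,"
            + "\"intervals\": ["
            + "  {\"match\": {\"query\": \"my favourite food\", \"max_gaps\": 0, \"ordered\": true}},"
            + "  {\"any_of\": {\"intervals\": ["
            + "    {\"match\": {\"query\": \"hot water\"}},"
            + "    {\"match\": {\"query\": \"cold porridge\"}}"
            + "  ]}}"
            + "]}}}}";
        return QueryBuilders.wrapperQuery(json);
    }
}
```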
Today we assert that the fake node ID is greater than the real node's ID. In
fact we want to assert that it's greater than _all_ proper UUIDs. This adds
assertions to that effect.
Add CURRENT_TIMESTAMP as a keyword as well as a function, alongside NOW().
These return the current date/time for the given query, computed when
the statement reaches the server. For completeness, CURRENT_TIMESTAMP
also accepts precision as an optional parameter.
Fix #36534
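For completeness, a quick sketch of how this could be exercised over the JDBC driver; the connection URL and local setup are assumptions about a typical Elasticsearch SQL endpoint:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CurrentTimestampSketch {
    public static void main(String[] args) throws Exception {
        // Assumed local setup; adjust the URL for a real cluster.
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200");
             Statement st = con.createStatement();
             // Keyword form and function form with the optional precision argument.
             ResultSet rs = st.executeQuery(
                 "SELECT CURRENT_TIMESTAMP AS now_kw, CURRENT_TIMESTAMP(3) AS now_ms")) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp("now_kw") + " | " + rs.getTimestamp("now_ms"));
            }
        }
    }
}
```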
* Adds deprecation logging to ScriptDocValues#getValues.
First commit addressing issue #22919.
`ScriptDocValues#getValues` was added for backwards compatibility but is no
longer needed. Scripts using the syntax `doc['foo'].values` when
`doc['foo']` is a list should be using `doc['foo']` instead.
* Fixes two build errors in #34279
* Removes unused import in ScriptDocValuesDatesTest
* Removes use of `.values` in the example in diversified-sampler-aggregation.asciidoc
* Removes use of .values from painless test.
Part of #34279
* Updates tests to use `doc[foo]` syntax rather than `doc[foo].values`.
* Removes use of `getValues()` and replaces use of `doc[foo].values` with `doc[foo]`.
* Indentation fix.
* Remove unnecessary list construction at previous `getValues()` callsite in ScriptDocValues.GeoPoints.
* Update migration doc and add link to `getValue` in ScriptDocValues javadoc.
* Fix compile
* Fix javadoc issue
* Removes ScriptDocValues#getValues usage from painless whitelist.
Add the missing `formatTemplate()` for conditional functions, which
previously resulted in an incomplete painless script. Moreover, the specific
return type of Object in the painless signatures resulted in
casting exceptions when conditional functions were used in
ORDER BY.
Fixes: #36631
Today, ResyncTask.Status is not registered, but appears as a task status
sometimes, leading to `Failed to deserialize response from handler` exceptions:
java.lang.IllegalArgumentException: Unknown NamedWriteable [org.elasticsearch.tasks.Task$Status][resync]
This commit adds the missing registration.
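The registration follows the usual `NamedWriteableRegistry.Entry` pattern; the sketch below is an approximation (the nesting of `ResyncTask.Status` under `PrimaryReplicaSyncer` and its `StreamInput` reader are assumptions), with the `"resync"` name taken from the exception above:

```java
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.index.shard.PrimaryReplicaSyncer;
import org.elasticsearch.tasks.Task;

public class ResyncStatusRegistrationSketch {

    /** Entry that lets the transport layer deserialize the "resync" task status. */
    public static NamedWriteableRegistry.Entry resyncStatusEntry() {
        return new NamedWriteableRegistry.Entry(
            Task.Status.class, "resync", PrimaryReplicaSyncer.ResyncTask.Status::new);
    }
}
```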
ConcurrentHashMap does not always behave correctly when elements are removed
while its emptiness is checked concurrently. Work around this by protecting all
usages with a mutex (there was only one usage not already protected by the mutex
anyway); with that in place we don't need a ConcurrentHashMap at all.
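A minimal sketch of the resulting pattern, with illustrative names rather than the actual class that was changed: a plain HashMap guarded by a single mutex, so that removing an element and checking for emptiness happen as one atomic step.

```java
import java.util.HashMap;
import java.util.Map;

class MutexGuardedMap<K, V> {
    private final Object mutex = new Object();
    private final Map<K, V> items = new HashMap<>(); // a plain map is enough under the mutex

    void put(K key, V value) {
        synchronized (mutex) {
            items.put(key, value);
        }
    }

    /** Removes the key and reports whether the map became empty, atomically. */
    boolean removeAndCheckEmpty(K key) {
        synchronized (mutex) {
            items.remove(key);
            return items.isEmpty();
        }
    }
}
```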
In order for CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of the `SearchHits`, specifically:
- lucene `SortField[]` which contains info about the fields that sorting was performed on and their type, which depends on mappings (that the CCS node does not know about)
- collapse field (`String`) that field collapsing was executed on, if requested
- collapse values (`Object[]`) that field collapsing was based on, if requested
This info is needed to be able to reconstruct the `TopFieldDocs` or `CollapseFieldTopDocs` in the CCS coordinating node to feed the `mergeTopDocs` method and reduce multiple search responses received (one per cluster) into one.
This commit adds such information to the `SearchHits` class. It's nullable info that is not serialized through the REST layer. `SearchPhaseController` sets such info at the end of the hits reduction phase.
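Purely as an illustration of the extra state being carried (names and grouping are assumptions, not the actual `SearchHits` fields):

```java
import org.apache.lucene.search.SortField;

/** Illustrative holder for the nullable, transport-only reduction info. */
class SearchHitsReductionInfo {
    final SortField[] sortFields;   // null unless the query sorted on fields
    final String collapseField;     // null unless field collapsing was requested
    final Object[] collapseValues;  // null unless field collapsing was requested

    SearchHitsReductionInfo(SortField[] sortFields, String collapseField, Object[] collapseValues) {
        this.sortFields = sortFields;
        this.collapseField = collapseField;
        this.collapseValues = collapseValues;
    }
}
```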
* Enable parallel restore operations
* Add uuid to restore in progress entries to uniquely identify them
* Adjust restore in progress entries to be a map in cluster state
* Added tests for:
* Parallel restore from two different snapshots
* Parallel restore from a single snapshot to different indices to test uuid identifiers are correctly used by `RestoreService` and routing allocator
* Parallel restore with waiting for completion to test transport actions correctly use uuid identifiers
In this test, we keep track of a list of index commits then verify that
we reload exactly every operation from the safe commit. If a background
merge is triggered, then we might have a new index commit which is not
recorded in the tracking list. This change disables merges in the test.
Closes #36470
A previous fix of a similar problem in #35201 wasn't general enough; we also
need to catch cases where the randomly generated query string starts with some
version of "now" and hits a date field.
Closes #36595
This test was failing for two reasons:
- it was creating invalid users with passwords shorter than 6 characters
- it was expecting 7 total users to be returned, when it should expect 9
The file structure finder has timeout functionality,
but prior to this change it would not interrupt a
single long-running Grok match attempt.
This commit hooks into the ThreadWatchdog facility
provided by the Grok library to interrupt individual
Grok matches that may be running at the time the
file structure finder timeout expires.
testFailLeaderReplicaShard periodically fails because we concurrently
index to the leader group and close one of its replicas. If a
replication request hits a closing shard, we will fail that shard;
however, failing a shard is not supported by the test framework, which
makes the test fail.
Previously, Math.floorMod was used for integers and longs, which behaves
differently for negative numbers (it follows the sign of the divisor rather
than the dividend). Also, the priority of the data type checks was wrong:
if one of the args is a double the evaluation should be done with doubles,
then floats, then longs and finally integers.
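For reference, a quick plain-Java illustration (not the SQL code path itself) of why Math.floorMod and the % operator disagree for negative operands:

```java
public class ModuloSketch {
    public static void main(String[] args) {
        System.out.println(-5 % 3);                // -2: % keeps the sign of the dividend
        System.out.println(Math.floorMod(-5, 3));  //  1: floorMod follows the sign of the divisor
    }
}
```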
Fixes: #36364
* Add guidance on using CCR with Logstash
This commit adds a note to the documentation regarding how to configure
Logstash indices in the context of being available as leader indices for
cross-cluster replication.
* Oh okay
* idk
* notconsole
This commit adds deprecation warnings when using format specifiers with
joda date formats that will change with java time. It also adds the "8"
prefix which may be used to force the new java time format parsing.
The getters and setters for useDisMax() have been deprecated since at least 6.0,
and there hasn't been any reference to the corresponding query parameter in the
documentation. This removes it from the builder and tests, replacing it with
`tieBreaker(1.0f)` where necessary.
This commit turns MultiValuesSourceFieldConfig into a proper
ToXContentObject for easy testing and verification of its
to/from XContent methods.
Closes #36474.
* add read_ilm cluster privilege
Although managing ILM policies is best done using the
"manage" cluster privilege, it is useful to have read-only
views.
* adds `read_ilm` cluster privilege for viewing policies and status
* adds Explain API to the `view_index_metadata` index privilege
* add manage_ilm privileges
We are attempting to replace the usage of the `MockTcpTransport` with
the `MockNioTransport`. This commit replaces usages of
`MockTcpTransport` in two zen test cases.
When a security manager is present, the JVM will cache positive hostname
lookups indefinitely. This can be problematic, especially in the modern
world with cloud services where DNS addresses can change, or
environments using Docker containers where IP addresses could be
considered ephemeral. This behavior impacts cluster discovery,
cross-cluster replication and cross-cluster search, reindex from remote,
snapshot repositories, webhooks in Watcher, external authentication
mechanisms, and the Elastic Stack Monitoring Service. The experience of
watching a DNS lookup change yet not be reflected within Elasticsearch
is a poor experience for users. The reason the JVM behaves this way is to guard
against DNS cache poisoning attacks. Yet, there is already a defense in
the modern world against such attacks: TLS. With proper certificate
validation, even if a resolver falls prey to a DNS cache poisoning
attack, using TLS would neuter the attack. Therefore we have a policy
with dubious security value that significantly impacts usability. As
such we make the usability/security tradeoff towards usability, since
the security risks are very low. This commit introduces new system
properties that Elasticsearch observes to override the JVM DNS cache
policy.
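For context, the JVM-level knobs behind this behavior are the standard `networkaddress.cache.ttl` and `networkaddress.cache.negative.ttl` security properties; the snippet below only illustrates that underlying mechanism and does not show the Elasticsearch-specific system properties this commit introduces:

```java
import java.security.Security;

public class DnsCachePolicySketch {
    public static void main(String[] args) {
        // Must be set before the first lookup for the InetAddress cache to pick them up.
        Security.setProperty("networkaddress.cache.ttl", "60");          // cache successful lookups for 60s
        Security.setProperty("networkaddress.cache.negative.ttl", "10"); // cache failed lookups for 10s

        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```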
Previously persistent task assignment was checked in the
following situations:
- Persistent tasks are changed
- A node joins or leaves the cluster
- The routing table is changed
- Custom metadata in the cluster state is changed
- A new master node is elected
However, there could be situations when a persistent
task that could not be assigned to a node could become
assignable due to some other change, such as memory
usage on the nodes.
This change adds a timed recheck of persistent task
assignment to account for such situations. The timer
is suspended while checks triggered by cluster state
changes are in-flight to avoid adding burden to an
already busy cluster.
Closes #35792
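A generic sketch of the "periodic recheck, suspended while event-driven checks are in flight" idea; the class and method names are illustrative and not taken from the actual persistent tasks code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class PeriodicRecheckSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger inFlightClusterStateChecks = new AtomicInteger();
    private final Runnable reassign;

    PeriodicRecheckSketch(Runnable reassign, long intervalMillis) {
        this.reassign = reassign;
        scheduler.scheduleWithFixedDelay(this::timedRecheck, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }

    private void timedRecheck() {
        // Skip the timed run while a cluster-state-triggered check is already running,
        // so the timer does not add load to an already busy cluster.
        if (inFlightClusterStateChecks.get() == 0) {
            reassign.run();
        }
    }

    void clusterStateCheckStarted()  { inFlightClusterStateChecks.incrementAndGet(); }
    void clusterStateCheckFinished() { inFlightClusterStateChecks.decrementAndGet(); }
}
```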
* We should compare the target value with the to-be-applied value before interpreting the update as a change (see the sketch below)
* This speeds up the test failing in #36496 considerably by preventing state updates on noop setting updates
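A minimal sketch of the noop check, with illustrative names (not the actual settings code): treat the update as a change only when the value to apply differs from what is already set.

```java
import java.util.Objects;

final class SettingUpdateSketch {

    /** True when applying the new value would change nothing. */
    static <T> boolean isNoop(T currentValue, T valueToApply) {
        return Objects.equals(currentValue, valueToApply);
    }

    /** Returns the existing value unchanged for noop updates, avoiding a new state publication. */
    static <T> T apply(T currentValue, T valueToApply) {
        return isNoop(currentValue, valueToApply) ? currentValue : valueToApply;
    }
}
```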