We added docs for proxy mode in #40281 but on reflection we should not be
documenting this setting since it does not play well with all proxies and we
can't recommend its use. This commit removes those docs and expands its Javadoc
instead.
We renew the CCR retention lease at a fixed interval, so it's
possible to have more than one in-flight renewal request at the same
time. If requests arrive out of order, then the assertion is violated.
Closes #46416
Closes #46013
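A minimal sketch of what tolerating out-of-order responses can look like (names are hypothetical, not the actual CCR code):

```java
// Hypothetical sketch: keep the maximum retained sequence number seen so far
// instead of asserting that renewal responses arrive strictly in order.
synchronized void onRenewalResponse(long retainingSequenceNumber) {
    // previously (roughly): assert retainingSequenceNumber > lastRetainingSequenceNumber;
    lastRetainingSequenceNumber = Math.max(lastRetainingSequenceNumber, retainingSequenceNumber);
}
```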
This fixes a bug where, if an analytics job is started before any
anomaly detection job is opened, we create an index named after the
state write alias.
Instead, we should create the state index and alias before starting
an analytics job, and this commit makes sure this is the case.
Backport of #46602
Synced-flush consists of three steps: (1) force-flush on every active
copy; (2) check for ongoing indexing operations; (3) seal copies if
there's no change since step 1. If some indexing operations are
completed on the primary but not on the replicas, then the Lucene
commits from step 1 on the replicas won't be the same as the primary's,
yet step 2 would still pass if it's executed after all pending
operations are done. Once step 2 passes, we incorrectly emit the
"out of sync" warning message although nothing is wrong here.
Relates #28464
Relates #30244
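For illustration, the three steps map onto an outline like this (method names are hypothetical, not the actual SyncedFlushService code):

```java
// Illustrative outline of the three synced-flush steps described above.
void attemptSyncedFlush(ShardId shardId) {
    Map<ShardRouting, Engine.CommitId> commits = forceFlushAllCopies(shardId); // step 1
    int inflightOps = countInflightOperations(shardId);                        // step 2
    if (inflightOps == 0) {
        // step 3: seal only copies whose commit is unchanged since step 1
        sealCopies(shardId, commits);
    }
}
```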
There's nothing wrong in the logs from these failures. I think 30
seconds might not be enough to relocate shards with many documents, as
CI is quite slow. This change increases the timeout to 60 seconds for
these relocation tests. It also dumps the hot threads in case the test
times out.
Closes #46526
Closes #46439
The sample code is wrong: a field type is required for the sample field.
I guess the intention was to give the sample field the name `fingerprint`,
mapping it as `text` using the custom analyzer `my_analyzer`.
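A sketch of what the corrected sample could look like, expressed with the high-level REST client (the index name and the analyzer definition are assumptions):

```java
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;

// Define the custom analyzer "my_analyzer" and map "fingerprint" as "text" using it.
CreateIndexRequest request = new CreateIndexRequest("my-index")
    .settings(Settings.builder()
        .put("analysis.analyzer.my_analyzer.type", "custom")
        .put("analysis.analyzer.my_analyzer.tokenizer", "standard"))
    .mapping("{ \"properties\": { \"fingerprint\": "
        + "{ \"type\": \"text\", \"analyzer\": \"my_analyzer\" } } }", XContentType.JSON);
```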
This change delays the creation of the SubSearchContext for nested and parent/child inner_hits
to the fetch sub phase in order to ensure that a SearchContext can be built entirely from a
QueryShardContext. This commit also adds a validation step to the inner hits builder that
ensures we fail the request early if the inner hits path is invalid.
Relates #46523
The changes in #32006 mean that the discovery process can no longer use
master-ineligible nodes as a stepping-stone between master-eligible nodes.
This was normally an indication of a strange and possibly-fragile configuration
and was not recommended, but this commit adds a note to the breaking changes
docs pointing out that this kind of configuration is more obviously broken in
recent versions.
This test verifies automatic cancellation of search requests on connection close.
It was previously not present in 7.x as the http client was subject to a bug which
made testing cancellation of requests impossible. Now that the bug is fixed upstream,
we can also backport this test.
Add a section to both the low-level and high-level client documentation on asynchronous usage and the `Cancellable` support added for #44802.
Co-Authored-By: Lee Hinman <dakrone@users.noreply.github.com>
This commit makes all the async methods in the high level client return the `Cancellable` object that the low level client now exposes.
Relates to #45379
Closes #44802
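A minimal usage sketch (assuming `client` is a `RestHighLevelClient` and `listener` an `ActionListener<SearchResponse>`; the index name is illustrative):

```java
// The async variants now return a Cancellable handle.
Cancellable cancellable = client.searchAsync(new SearchRequest("my-index"),
    RequestOptions.DEFAULT, listener);
// Later, if the response is no longer needed:
cancellable.cancel();
```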
The low-level REST client exposes a `performRequestAsync` method that
allows sending async requests, but today it does not expose the ability
to cancel such requests. That is something that the underlying Apache
async http client supports, and it makes sense for us to expose.
This commit adds a return value to the `performRequestAsync` method,
which is backwards compatible. A `Cancellable` object gets returned,
which exposes a `cancel` public method. When calling `cancel`, the
on-going request associated with the returned `Cancellable` instance
will be cancelled by calling its `abort` method. This works throughout
multiple retries, though some special care was needed for the case where
`cancel` is called between different attempts (when one attempt has
failed and the consecutive one has not been sent yet).
Note that cancelling a request on the client side does not automatically
translate to cancelling its server-side execution. That needs to be
specifically implemented, which is in the works for the search API (see #43332).
Relates to #44802
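A minimal usage sketch (the endpoint and handlers are illustrative; `restClient` is assumed to be an existing `RestClient`):

```java
Request request = new Request("GET", "/_search");
Cancellable cancellable = restClient.performRequestAsync(request, new ResponseListener() {
    @Override
    public void onSuccess(Response response) {
        // handle the response
    }

    @Override
    public void onFailure(Exception exception) {
        // a cancelled request completes here with an exception
    }
});
// Abort the in-flight request (this also covers pending retries):
cancellable.cancel();
```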
When waiting for no initializing shards we also have to wait for events
when we have more than one node in the cluster. When the primary is
started, there is a short period of time where neither the primary nor
any of the replicas are initializing.
Closes #46535
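In other words, the health check needs both wait conditions, roughly like this (a sketch using the standard `ClusterHealthRequest` builders):

```java
// Wait for pending cluster state updates in addition to no initializing shards,
// so the check does not pass during the gap described above.
ClusterHealthRequest request = new ClusterHealthRequest()
    .waitForNoInitializingShards(true)
    .waitForEvents(Priority.LANGUID);
```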
Fixes the way linestrings that cross the antimeridian are indexed. Due
to a normalization bug, these lines were decomposed into a line segment
that stretched across the entire globe.
Fixes #43775
This makes the AllocatedPersistentTask#init() method protected so that
implementing classes can perform their initialization logic there,
instead of in the constructor. Rollup's task is adjusted to use this
init method.
It also slightly refactors the methods to use a static logger in the
AllocatedTask instead of passing it in via an argument. This is
simpler, logged messages come from the task instead of the
service, and it is easier to test.
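A sketch of the pattern this enables (the subclass and its body are hypothetical; signatures follow the persistent tasks framework):

```java
class MyPersistentTask extends AllocatedPersistentTask {
    MyPersistentTask(long id, String type, String action, String description,
                     TaskId parentTask, Map<String, String> headers) {
        super(id, type, action, description, parentTask, headers);
    }

    @Override
    protected void init(PersistentTasksService persistentTasksService, TaskManager taskManager,
                        String persistentTaskId, long allocationId) {
        super.init(persistentTasksService, taskManager, persistentTaskId, allocationId);
        // initialization logic that previously had to live in the constructor
    }
}
```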
In some cases (for example some AdoptOpenJDK builds), java.vendor is
mistakenly populated as "Oracle Corporation" while the real value is
under "java.vendor.version". Because "java.vendor.version" is mandatory
since JDK 10, this commit changes to use "java.vendor.version" as the
favored system property for finding the JVM vendor, falling back to
"java.vendor" if it is not populated (as happens in some Oracle
builds). Ugh.
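The described lookup boils down to something like this (a simplified sketch, not the exact implementation):

```java
// Prefer "java.vendor.version" (mandatory since JDK 10), falling back to
// "java.vendor" when it is not populated.
String vendor = System.getProperty("java.vendor.version", System.getProperty("java.vendor"));
```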
DATE_TRUNC(<truncate field>, <date/datetime>) is a function that allows
the user to truncate a timestamp to the specified field by zeroing out
the rest of the fields. The function is implemented according to the
spec from PostgreSQL: https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC
Closes: #46319
(cherry picked from commit b37e96712db1aace09f17b574eb02ff6b942a297)
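For example, through the SQL JDBC driver (the connection URL and input value are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection("jdbc:es://localhost:9200");
     Statement stmt = conn.createStatement();
     // Truncating to 'month' zeroes out every field below it.
     ResultSet rs = stmt.executeQuery(
         "SELECT DATE_TRUNC('month', CAST('2019-09-04T11:22:33Z' AS DATETIME))")) {
    while (rs.next()) {
        System.out.println(rs.getString(1)); // expected: 2019-09-01T00:00:00Z
    }
}
```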
The sql project uses a common set of security tests, which are run in
subprojects. Currently these are shared through a common directory, but
this is not set up correctly to ensure it is built before the tests run.
This commit changes the test classes to be an artifact of the
sql/qa/security project and makes the test runner use the built artifact
(a directory of classes) for tests.
Closes #45866
Since the `IndicesSegmentsRequest` scatters to all shards for the index,
it's possible that some of the shards may fail. This adds failure
handling and logging (since this is a best-effort step in the first
place) for this case.
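The handling is along these lines (a sketch; the logger and message wording are illustrative):

```java
// Segment collection is best-effort: log shard-level failures and carry on.
for (DefaultShardOperationFailedException failure : response.getShardFailures()) {
    logger.warn("failed to retrieve segments for shard [{}][{}]: {}",
        failure.index(), failure.shardId(), failure.reason());
}
```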
Many scalar functions try to find the common type between their
arguments in order to set it as their return type, e.g.:
for `float + double` the common type, which is set as the return type
of the + operation, is `double`.
Previously, for the data types TEXT and KEYWORD (string data types)
no common data type was found and null was returned, causing NPEs when
the function was trying to resolve the return data type.
Fixes: #46551
(cherry picked from commit 291017d69dfc810707c3c7c692f5a50af431b790)
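Conceptually the fix amounts to handling string types in the common-type resolution, roughly (a hypothetical sketch, not the actual method):

```java
// Return the common data type of two argument types, or null if there is none.
static DataType commonType(DataType left, DataType right) {
    if (left == right) {
        return left;
    }
    if (left.isString() && right.isString()) {
        // KEYWORD and TEXT now resolve to TEXT instead of "no common type" (null),
        // which previously caused an NPE during return-type resolution.
        return DataType.TEXT;
    }
    // ... numeric widening, e.g. FLOAT + DOUBLE -> DOUBLE ...
    return null;
}
```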
* Minor improvement to the nested aggregation docs
* The attribute names `name` and `resellers.name` were rather confusing,
especially since the first one was dynamically mapped and not shown
in the documentation (you had to read the test to see it). This
change introduces a unique name for the nested attribute and adds
the example document to the documentation.
* Change the index name from "index" to something more descriptive.
* Update docs/reference/aggregations/bucket/nested-aggregation.asciidoc
Co-Authored-By: James Rodewig <james.rodewig@elastic.co>
This uses whatever state the server returns, rather than the hardcoded
"STOPPING" and "STOPPED", since the server may go to STOPPED before the
request is issued.
Resolves #46528
This commit adds a wait/check for all running snapshots to be cleared
before taking another snapshot. The previous snapshot was successful but
had not yet been cleared from the cluster state, so the second snapshot
failed due to a `ConcurrentSnapshotException`.
Resolves #46508
After starting the analytics job and checking its state, the state can
be any of "started", "reindexing" or "analyzing", depending on how
quickly the work is done.