* Async search should retry updates on version conflict
The _async_search APIs can throw a version conflict exception when the internal response
is updated concurrently. That can happen if the final response is written while the user
extends the expiration time. That scenario should be rare, but it happened in Kibana for
several users, so this change ensures that updates are retried at least 5 times. That
should resolve the transient errors for Kibana. This change also preserves the version
conflict exception in case the retries don't succeed, instead of returning a confusing 404.
This commit also ensures that we don't delete the response if the search was cancelled
internally and not deleted explicitly by the user.
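A minimal sketch of the retry behaviour being described, with made-up names (`updateWithRetry`, a local `VersionConflictException`) rather than the actual async-search internals:

```java
// Illustrative only: retry a stored-response update a bounded number of times and,
// if the conflict persists, surface it instead of a confusing 404.
public class RetrySketch {
    static class VersionConflictException extends RuntimeException {}

    static final int MAX_RETRIES = 5;

    static void updateWithRetry(Runnable updateStoredResponse) {
        VersionConflictException conflict = null;
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            try {
                updateStoredResponse.run(); // may throw if the response was updated concurrently
                return;                     // update succeeded
            } catch (VersionConflictException e) {
                conflict = e;               // remember the conflict and try again
            }
        }
        throw conflict;                     // preserve the version conflict after exhausting retries
    }
}
```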
Closes #63213
The `remote_monitoring_agent` reserved role is extended to grant more privileges
over the `metricbeat-*` index pattern.
In addition to the `index` and `create_index` index privileges that it granted already,
it now also grants the `view_index_metadata` privilege.
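For illustration, the resulting `metricbeat-*` entry is roughly equivalent to the following sketch using the x-pack `RoleDescriptor.IndicesPrivileges` builder (the literal definition lives in `ReservedRolesStore` and may be wired differently):

```java
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;

class RemoteMonitoringAgentSketch {
    // Sketch only: the metricbeat-* index privileges granted by remote_monitoring_agent
    // after this change.
    static RoleDescriptor.IndicesPrivileges metricbeatPrivileges() {
        return RoleDescriptor.IndicesPrivileges.builder()
            .indices("metricbeat-*")
            .privileges("index", "create_index", "view_index_metadata")
            .build();
    }
}
```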
Closes #63203
openSUSE 42 has not worked in a while. The test image is unmaintained,
and cannot be launched. It was removed from CI packaging test runs, but
still remained in vagrant tests. This commit removes it from vagrant
tests.
This commit ensures that jobs within the SchedulerEngine do not
continue to run after they are cancelled. There was no synchronization
between the cancel method of an ActiveSchedule and the run method, so
an actively running schedule would go ahead and reschedule itself even
if the cancel method had been called.
This commit adds synchronization between cancelling and the scheduling
of the next run to ensure that the job is cancelled. In real-life
scenarios this could manifest as a job running multiple times for
SLM. This could happen if a job had been triggered and was cancelled
prior to completing its run, such as when the node was no longer the
master node or when SLM was stopping/stopped.
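A stripped-down sketch of the guard (hypothetical names; the real `ActiveSchedule` also manages the scheduled future and trigger listeners):

```java
// Hypothetical, simplified sketch of synchronizing cancellation with rescheduling;
// the real SchedulerEngine.ActiveSchedule also manages a ScheduledFuture and listeners.
class ActiveScheduleSketch {
    private boolean cancelled = false;

    synchronized void cancel() {
        cancelled = true;           // once set, no further runs will be scheduled
    }

    void run() {
        runJob();                   // do the actual work outside the lock
        synchronized (this) {
            if (cancelled == false) {
                scheduleNextRun();  // only reschedule while still active
            }
        }
    }

    private void runJob() { /* notify the job's trigger listeners */ }

    private void scheduleNextRun() { /* submit the next execution to the scheduler */ }
}
```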
Closes #63754
Backport of #63762
This commit updates the APIs in the logstash plugin to handle
IndexNotFoundExceptions that are returned by client calls. Until we
have the creation of this index in place, we need to handle this case
and not let the exception propagate out of the API.
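In spirit, the handling looks like the sketch below, using Elasticsearch's `ActionListener` and `IndexNotFoundException`; the helper name and response type are made up for illustration:

```java
// Illustrative only: if the backing .logstash index does not exist yet, answer with
// an empty result instead of letting IndexNotFoundException escape the API.
static void handleSearchFailure(Exception e, ActionListener<List<Pipeline>> listener) {
    if (e instanceof IndexNotFoundException) {
        listener.onResponse(List.of());  // missing index means nothing is stored yet
    } else {
        listener.onFailure(e);           // any other failure still propagates
    }
}
```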
Backport of #63698
* SQL: integer parameter validation in string functions (#58923)
In the INSERT, LOCATE and SUBSTRING functions, when the argument `start` or `length` is greater than Integer.MAX_VALUE or less than Integer.MIN_VALUE + 1 (note that 1 is subtracted from `start`), the computation overflows and leads to unexpected results.
* Add range checks for BinaryStringNumericProcessors
- Add range checks for Left, Right, Repeat.
- Minor refactorings on initial PR changes.
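The added checks amount to rejecting any argument that cannot be represented in the allowed `int` range before it is used, roughly like this sketch (illustrative names, not the actual processor code):

```java
// Illustrative sketch of the range check: fail cleanly for values outside
// [Integer.MIN_VALUE + 1, Integer.MAX_VALUE] instead of silently overflowing.
static int safeToInt(long value, String argumentName) {
    if (value > Integer.MAX_VALUE || value < Integer.MIN_VALUE + 1) {
        throw new IllegalArgumentException("[" + argumentName + "] out of the allowed range ["
            + (Integer.MIN_VALUE + 1) + ", " + Integer.MAX_VALUE + "], received [" + value + "]");
    }
    return (int) value;
}
```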
Co-authored-by: yinanwu <yinanwu@tencent.com>
(cherry picked from commit bf6dc58b93529f977d035a846d083b1c31867694)
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
The original description of per-field boosting is incorrect. Boosting a
field does not imply that it is more important relative to other fields.
It simply means that the score is multiplied by the supplied boost
value. Due to the differences in each field's term and document
statistics, it's not possible to imply relative importance of fields
based on the per-field boost value alone.
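For example, with the Java query builders a per-field boost is just a multiplier on that field's score contribution (values here are arbitrary):

```java
// The boost of 2.0 multiplies title's score contribution by 2; because title and body
// have different term and document statistics, this does not make title "twice as
// important" as body.
MultiMatchQueryBuilder query = QueryBuilders.multiMatchQuery("quick brown fox")
    .field("title", 2.0f)
    .field("body");
```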
Co-authored-by: Josh Devins <josh.devins@elastic.co>
Add a new gradle module under eql/qa which runs and validates a set of
queries over a 4m event dataset (restored from a snapshot residing in a
GCS bucket). The expected results are provided by running the same set of queries
with Python EQL against the same dataset.
Co-authored-by: Marios Trivyzas <matriv@users.noreply.github.com>
(cherry picked from commit 1cf789e5fcfb0f364f665bfaac021e24a4c2f556)
Co-authored-by: Mark Vieira <portugee@gmail.com>
This commit fixes two issues in dealing with bool fields in EQL:
- avoid simplification of `field == true` expressions
- add an explicit comparison to clauses that consist of a bare boolean field (`where bool`)
Fix #63693
(cherry picked from commit d10a5d0e842bbd4e0031834de948ceb24da3872b)
(cherry picked from commit 0227da3a275c7f22ff524d99d53e1a79146f9e28)
This syncs breaking changes from release notes
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
This adds the release notes for 7.10.0
Co-authored-by: David Roberts <dave.roberts@elastic.co>
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Co-authored-by: lcawl <lcawley@elastic.co>
This commit fixes the UpdateThreadPoolSettingsTests to be aware of the
hard limit on the maximum size of the system_write executor. This
executor has a hard limit that matches the write executor, which is
the number of allocated processors.
Closes #63131
Backport of #63700
The current `_update_by_query` documentation mentions a `scroll_size` default of 100 and later another default of 1000.
We use the default of 1000 defined in `AbstractBulkByScrollRequest`, and this PR changes the documentation accordingly.
Closes #63637
* Allow all indices options variants
Irrespective of the `allow_no_indices` value, throw a VerificationException when
no index is validated.
Co-authored-by: Andrei Stefan <astefan@users.noreply.github.com>
Today indexing to a shard with 2147483519 documents will fail that
shard. We should check the number of documents and reject the write
requests instead.
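Conceptually the added guard looks like this sketch (illustrative names; the real check lives in the engine and also accounts for in-flight operations):

```java
// Illustrative only: reject a write up front if it would push the shard past the
// Lucene per-shard limit of 2147483519 documents (IndexWriter.MAX_DOCS).
static void ensureDocCapacity(long docsInShard, long docsInRequest, long maxDocs) {
    if (docsInShard + docsInRequest > maxDocs) {
        throw new IllegalArgumentException(
            "Number of documents in the shard cannot exceed [" + maxDocs + "]");
    }
}
```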
Closes #51136
Mentions the list of wildcard characters in case a wildcard is used as an
`ESCAPE` character.
Relates #63428
(cherry picked from commit 74cbcf871e9593b3640e382ae6845168fd14966b)
For a query like `SELECT name FROM test WHERE name LIKE '%c*'` ES SQL
generates an error. `*` is not a special character in a `LIKE` construct
and is not expected to need escaping, so the previous query
should work as is.
In the LIKE pattern any `*` character was treated as an invalid character
and the usage of `%` or `_` was suggested instead. But `*` is a valid,
acceptable non-wildcard character on the right side of the `LIKE` operator.
Fix: #55108
(cherry picked from commit 190d9fe3deb31aed0d8f312007360625d4fff217)
This fixes a gap in testing and a bug that can occur in various forms:
When we would start a snapshot or clone related to a shard that was done
snapshotting/cloning but its overall operation was not yet finalized
at the time of starting the operation, we would base the operation off of
the wrong generation. This would not cause a corrupted repo, but would
cause the operation to be `PARTIAL`.
This commit fixes the state machine to take into account the correct generation
in this case.
Closes #63498
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Co-authored-by: Jay Modi <jaymode@users.noreply.github.com>
Co-authored-by: Adrien Grand <jpountz@gmail.com>
This PR implements value fetching for the following field types:
* `text` phrase and prefix subfields
* `search_as_you_type`, plus its subfields
* `token_count`, which is implemented by fetching doc values
Supporting these types helps ensure that retrieving all fields through
`"fields": ["*"]` doesn't fail because of unsupported value fetchers.
Now that deprecation logs get indexed to a data stream, if we
do not load the data stream plugin in our tests and any test
generates a deprecation log message then millions of exceptions
get logged, slowing down the tests to the extent that they can
fail.
This change loads the data streams plugin during the ML internal
cluster tests. (It should already be present in external cluster
tests.)
Fixes #63548
This test used `_doc` as the mapping type name, which needs to be set
to `doc` for versions prior to 6.7.0. This commit fixes the test to use
the proper type name for the current BWC version.
Adds validation that the dest pipeline exists when a transform
is updated. Refactors the pipeline check into the `SourceDestValidator`.
Fixes #59587
Backport of #63494
Today in the `repository-s3` docs we say
> Other S3-compatible storage systems may also work with Elasticsearch,
> but these are not tested or supported.
Saying that they are explicitly not supported is a very strong
statement, implying that it is positively irresponsible to use anything
except S3 or Minio, even after extensive testing. S3-compatibility in
third-party systems has matured in recent years and users today report
success with a good number of them. In contrast, we effectively claim
support for any old NFS implementation when used with the
shared-filesystem repository.
This commit weakens this statement, removing the absolute claim of
unsupportedness and instead spelling out that the user is responsible
for ironing out any incompatibilities with the storage supplier.
Currently we flush the Translog buffer when a new operation causes the
buffer to breach 1MB. This introduces a scenario where an exception is
thrown AFTER the writer has accepted the operation. To avoid this, this
commit flushes the Translog in an #add call before adding a new
operation.
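In other words, the write path moves from "append, then maybe flush" to "maybe flush, then append", roughly like this self-contained sketch (a hypothetical simplification, not the actual TranslogWriter code):

```java
// Hypothetical simplification: flush the existing buffer *before* accepting the new
// operation, so a flush failure can no longer occur after the writer accepted it.
class BufferedWriterSketch {
    private static final int FLUSH_THRESHOLD_BYTES = 1024 * 1024; // 1MB

    private final java.io.ByteArrayOutputStream buffer = new java.io.ByteArrayOutputStream();

    synchronized void add(byte[] operation) throws java.io.IOException {
        if (buffer.size() + operation.length > FLUSH_THRESHOLD_BYTES) {
            flushBuffer();        // may throw, but the new operation is not yet accepted
        }
        buffer.write(operation);  // accept the operation only after a successful flush
    }

    private void flushBuffer() throws java.io.IOException {
        // write the buffered bytes to the underlying channel, then reset the buffer
        buffer.reset();
    }
}
```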
This fixes #63299.