Transforms should propagate the search execution exception up if one is returned by the test query.
This allows transforms to return a `4xx` when the aggs are malformed but still parseable.
closes https://github.com/elastic/elasticsearch/issues/55994
* Relax version lock between ES/SQL and clients
Allow clients older than the server to connect, provided they are on or after a
certain minimum release.
(cherry picked from commit 108f907297542ce649aa7304060aaf0a504eb699)
Notes that you cannot use EQL in ES to search the values of `nested`
fields or their sub-fields. However, indices containing `nested` field
mappings are otherwise supported.
This is a backport of #55884 with redirects removed.
Changes:
* Adds an abbreviated title for the search API page.
* Removes the following invalid query parameters:
* `analyzer`
* `analyze_wildcard`
* `default_operator`
* `df`
* `lenient`
* `suggest_mode`
* `suggest_size`
* Replaces the URI search page's query parameter docs with an xref
* Updates the headings of several examples
The following settings are now no-ops:
* xpack.flattened.enabled
* xpack.logstash.enabled
* xpack.rollup.enabled
* xpack.slm.enabled
* xpack.sql.enabled
* xpack.transform.enabled
* xpack.vectors.enabled
Since these settings no longer need to be checked, we can remove settings
parameters from a number of constructors and methods, and do so in this
commit.
We also update documentation to remove references to these settings.
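For illustration only (this is not the exact code touched by this commit), a setting that has become a no-op can stay registered as a deprecated node-scope setting so that existing configurations are still accepted; a minimal sketch using the `Setting` API:
```
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

public class NoOpSettingExample {
    // Still accepted in elasticsearch.yml and emits a deprecation warning,
    // but its value is never read anywhere.
    public static final Setting<Boolean> ROLLUP_ENABLED_SETTING = Setting.boolSetting(
        "xpack.rollup.enabled",
        true,
        Property.NodeScope,
        Property.Deprecated
    );
}
```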
Our upgrade docs don't mention the upgrade assistant, which can be
helpful when migrating across major versions. The docs also don't
mention deprecation logs, which can highlight other functionality that
may change.
This adds a related tip admonition to the upgrade docs.
This commit changes searchable snapshots so that they now respect the
repository's max_restore_bytes_per_sec setting when downloading blobs.
Backport of #55952 for 7.x
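A plain-Java sketch of the throttling idea, not the actual searchable snapshots code; the class and method names are invented, and the real implementation relies on the repository's rate limiter:
```
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class ThrottledBlobDownload {

    /** Copies a blob to the target while staying under maxBytesPerSec on average. */
    static void copyThrottled(InputStream blob, OutputStream target, double maxBytesPerSec)
            throws IOException, InterruptedException {
        final byte[] buffer = new byte[8192];
        final long start = System.nanoTime();
        long bytesCopied = 0;
        int read;
        while ((read = blob.read(buffer)) != -1) {
            target.write(buffer, 0, read);
            bytesCopied += read;
            // If we are ahead of the allowed rate, sleep until we are back under it.
            double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
            double earliestAllowedSec = bytesCopied / maxBytesPerSec;
            if (earliestAllowedSec > elapsedSec) {
                Thread.sleep((long) ((earliestAllowedSec - elapsedSec) * 1000));
            }
        }
    }
}
```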
This PR implements the following changes to make ML model snapshot
retention more flexible in advance of adding a UI for the feature in
an upcoming release.
- The default for `model_snapshot_retention_days` for new jobs is now
10 instead of 1
- There is a new job setting, `daily_model_snapshot_retention_after_days`,
that defaults to 1 for new jobs and `model_snapshot_retention_days`
for pre-7.8 jobs
- For days that are older than `model_snapshot_retention_days`, all
model snapshots are deleted as before
- For days that are in between `daily_model_snapshot_retention_after_days`
and `model_snapshot_retention_days`, all but the first model snapshot
for that day are deleted (see the sketch after this list)
- The `retain` setting of model snapshots is still respected to allow
selected model snapshots to be retained indefinitely
Backport of #56125
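A rough Java sketch of the retention rules above; the snapshot representation and names are invented, and handling of the `retain` flag is omitted for brevity:
```
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class SnapshotRetentionSketch {

    /** Returns the snapshot timestamps that should be deleted, given snapshots sorted oldest first. */
    static List<Instant> snapshotsToDelete(List<Instant> snapshotsOldestFirst,
                                           Instant now,
                                           long modelSnapshotRetentionDays,
                                           long dailyModelSnapshotRetentionAfterDays) {
        Instant deleteEverythingCutoff = now.minus(Duration.ofDays(modelSnapshotRetentionDays));
        Instant thinToDailyCutoff = now.minus(Duration.ofDays(dailyModelSnapshotRetentionAfterDays));
        Set<LocalDate> daysWithAKeptSnapshot = new HashSet<>();
        List<Instant> toDelete = new ArrayList<>();
        for (Instant snapshot : snapshotsOldestFirst) {
            if (snapshot.isBefore(deleteEverythingCutoff)) {
                toDelete.add(snapshot); // older than model_snapshot_retention_days: delete all
            } else if (snapshot.isBefore(thinToDailyCutoff)) {
                LocalDate day = snapshot.atZone(ZoneOffset.UTC).toLocalDate();
                if (daysWithAKeptSnapshot.add(day) == false) {
                    toDelete.add(snapshot); // keep only the first snapshot of each day
                }
            }
            // newer than daily_model_snapshot_retention_after_days: keep everything
        }
        return toDelete;
    }
}
```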
Per #54411, we plan to handle case sensitivity via a parameter for the
EQL search API (with the possible exception of the `between` function).
This removes references and examples related to case sensitivity from
the EQL functions docs.
This commit strengthens the assertion about which threads may access a blob
store to exclude the cluster applier thread, since we no longer need to access the blob store on that thread.
Relates #50999
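A plain-Java sketch of that kind of thread assertion, assuming the applier thread name contains `clusterApplierService`; the real check and the actual thread pool names live in the blob store repository code:
```
final class BlobStoreThreadAssertion {

    /** Intended for use in an assert statement, so it only runs when assertions are enabled. */
    static boolean assertNotClusterApplierThread() {
        final String threadName = Thread.currentThread().getName();
        assert threadName.contains("clusterApplierService") == false
            : "blob store must not be accessed on the cluster applier thread [" + threadName + "]";
        return true;
    }
}
```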
As of elastic/ml-cpp#1179, the analytics process reports phases
depending on the analysis type. This commit adjusts the phases
of current analyses from `analyzing` to the following:
- outlier_detection: [`computing_outlier`]
- regression/classification: [`feature_selection`, `coarse_parameter_search`, `fine_tuning_parameters`, `final_training`]
Backport of #56107
This arose after a backport where we do not have the niceties of the
Java 11 diamond operator. This commit fixes it by adding the proper type
parameter.
When the index we are validating a query does not exist, we try to send
back a response letting the client know that the index does not
exist. Yet, we accidentally fall through into the case that the
validation failed for some other reason. This means that we end up
notifying the channel twice. Sometimes the notification occurs after the
failure has been written out and the channel closed (so the second
invocation leads to a silent failed-to-write-to-a-closed-channel issue),
and sometimes the response does end up in the channel, creating garbled
responses to the client. This commit fixes that issue by avoiding the
fallthrough.
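A minimal sketch of the fall-through bug and its fix, using a generic listener instead of the real transport action classes:
```
interface Listener<T> {
    void onResponse(T response);
    void onFailure(Exception e);
}

final class ValidateQuerySketch {

    static <T> void respond(boolean indexExists, Exception validationFailure, T response, Listener<T> listener) {
        if (indexExists == false) {
            listener.onFailure(new IllegalArgumentException("no such index"));
            return; // without this return we fall through and notify the listener a second time
        }
        if (validationFailure != null) {
            listener.onFailure(validationFailure);
        } else {
            listener.onResponse(response);
        }
    }
}
```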
Previously, when the timezone was missing from the datetime string
and the pattern, UTC was used instead of the session-defined timezone.
Moreover, if a timezone was included in the datetime string and the
pattern, then this timezone was used. To ensure consistent behaviour,
the resulting datetime will always be converted to the session-defined
timezone, e.g.:
```
SELECT DATETIME_PARSE('2020-05-04 10:20:30.123 +02:00', 'uuuu-MM-dd HH:mm:ss.SSS VV') AS datetime;
```
with `time_zone` set to `-03:00` will result in
```
2020-05-04T05:20:30.123-03:00
```
Follows: #54960
(cherry picked from commit 8810ed03a209cc8fe1bad309a81e85b56a39da27)
Today the cache prewarming introduced in #55322 works by
enqueuing all the file parts to warm at once in the
searchable_snapshots thread pool. In order to make this fairer
among concurrent warmings, this commit starts workers that
concurrently poll file parts to warm from a queue, warm the
part, and then immediately schedule another warming
execution. This should leave more room for concurrent
shard warmings to sneak in and be executed.
Relates #55322
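A plain-Java sketch of the "poll one part, warm it, then reschedule" pattern described above; the names are invented and the real workers run on the searchable_snapshots thread pool:
```
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class PrewarmingSketch {

    private final Queue<Runnable> partsToWarm = new ConcurrentLinkedQueue<>();
    private final ExecutorService executor;

    PrewarmingSketch(List<Runnable> parts, int workers) {
        this.partsToWarm.addAll(parts);
        this.executor = Executors.newFixedThreadPool(workers);
        // Start a fixed number of self-rescheduling workers instead of enqueuing every part
        // as its own task, so warming of other shards can interleave between executions.
        for (int i = 0; i < workers; i++) {
            executor.execute(this::warmNextPart);
        }
    }

    private void warmNextPart() {
        final Runnable part = partsToWarm.poll();
        if (part == null) {
            return; // nothing left to warm
        }
        try {
            part.run(); // download and cache one file part
        } finally {
            executor.execute(this::warmNextPart); // immediately schedule another warming execution
        }
    }
}
```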
Previously, the timezone parameter was not passed to the RangeQuery,
and as a result queries that use the ES date math notation (now,
now-1d, now/d, now/h, now+2h, etc.) were using the UTC timezone and
not the one passed through the "timezone"/"time_zone" JDBC/REST params.
As a consequence, dates defined with date math were always interpreted in
UTC and could lead to incorrect results for queries like:
```
SELECT * FROM t WHERE date BETWEEN now-1d/d AND now/d
```
Fixes: #56049
(cherry picked from commit 300f010c0b18ed0f10a41d5e1606466ba0a3088f)
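For illustration, this is roughly how a timezone can be attached to a range query with the standard `RangeQueryBuilder`; the exact plumbing inside the SQL query translator differs:
```
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

public class RangeQueryTimeZoneExample {
    public static void main(String[] args) {
        // With the time zone set, date math bounds like now/d are resolved in -03:00 rather than UTC.
        RangeQueryBuilder query = QueryBuilders.rangeQuery("date")
            .gte("now-1d/d")
            .lt("now/d")
            .timeZone("-03:00");
        System.out.println(query);
    }
}
```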
In #55763 I thought I could remove the flag that marks that
reindexing has finished on a data frame analytics task.
However, that exposed a race condition. It is possible that,
between updating reindexing progress to 100 (because we
have called `DataFrameAnalyticsManager.startAnalytics()`) and
a call to the _stats API (which updates reindexing progress via the
method `DataFrameAnalyticsTask.updateReindexTaskProgress()`), we
end up overwriting the 100 with a lower progress value.
This commit fixes this issue by bringing back the
`isReindexingFinished` flag as it was prior to #55763.
Closes #56128
Backport of #56135
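A simplified sketch of the guard described above; the names are invented and the real task code coordinates updates more carefully:
```
import java.util.concurrent.atomic.AtomicInteger;

final class ReindexProgressSketch {

    private final AtomicInteger reindexingProgressPercent = new AtomicInteger(0);
    private volatile boolean isReindexingFinished = false;

    /** Called when reindexing completes and the analytics phase starts. */
    void markReindexingFinished() {
        isReindexingFinished = true;
        reindexingProgressPercent.set(100);
    }

    /** Called from the _stats code path, possibly with a stale value. */
    void updateReindexTaskProgress(int progressPercent) {
        if (isReindexingFinished == false) {
            reindexingProgressPercent.set(progressPercent);
        }
        // otherwise ignore the update so the final 100 is never overwritten with a lower value
    }

    int progressPercent() {
        return reindexingProgressPercent.get();
    }
}
```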
AuthN realms are ordered as a chain so that the credentials of a given
user are verified in succession. Upon the first successful verification,
the user is authenticated. Realms do however have the option to cut short
this iterative process, when the credentials don't verify and the user
cannot exist in any other realm. This mechanism is currently used by
the Reserved and the Kerberos realms.
This commit improves the early termination operation by allowing
realms to gracefully terminate authentication, as if the chain had been
tried out completely. Previously, early termination resulted in an
authentication error whose response body differs from the one returned
when authentication fails because no realm could verify the
credentials successfully.
Reserved users are hence denied authentication in exactly the same
way as other users are when no realm can validate their credentials.
Backport of #56034.
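A simplified Java sketch of the chain behaviour described above, with invented types; the real authentication service is callback based and considerably more involved:
```
import java.util.List;
import java.util.Optional;

final class RealmChainSketch {

    interface Realm {
        /** Empty if this realm cannot verify the credentials. */
        Optional<String> authenticate(String principal, char[] credentials);

        /** True if no later realm should be consulted for this principal. */
        boolean terminatesChain(String principal);
    }

    static Optional<String> authenticate(List<Realm> chain, String principal, char[] credentials) {
        for (Realm realm : chain) {
            Optional<String> user = realm.authenticate(principal, credentials);
            if (user.isPresent()) {
                return user;
            }
            if (realm.terminatesChain(principal)) {
                // Graceful early termination: behave exactly as if the whole chain had been
                // tried and failed, instead of surfacing a distinct authentication error.
                return Optional.empty();
            }
        }
        return Optional.empty();
    }
}
```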
Move the includeDataStream flag from IndicesOptions to IndexNameExpressionResolver.Context
as a dedicated field that callers of IndexNameExpressionResolver can set.
Also alter the indices stats API to support data streams.
The rollover API uses this API, and rolling over a data stream would otherwise no longer work.
Relates to #53100
* Delay warning about missing x-pack (#54265)
Currently, when monitoring is enabled in a freshly-installed cluster,
the non-master nodes log a warning message indicating that master may
not have x-pack installed. The message is often printed even when the
master does have x-pack installed but takes some time to set up the local
exporter for monitoring. This commit adds the local exporter setting
`wait_master.timeout`, which defaults to 30 seconds. The setting
configures the time that the non-master nodes should wait for the master to
set up monitoring. After the time elapses, they log a message to the user
about a possibly missing x-pack installation on the master.
The logging of this warning was moved from `resolveBulk()` to
`openBulk()` since `resolveBulk()` is called only on cluster updates and
the message might not be logged until a new cluster update occurs.
Closes #40898
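A rough sketch of the delayed warning using a plain scheduler; the class and method names are invented, and the real exporter uses Elasticsearch's own thread pool together with the `wait_master.timeout` setting value:
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class DelayedMissingXpackWarning {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Schedules the warning to be logged only if the master is still not ready after the timeout. */
    void scheduleWarning(long waitMasterTimeoutSeconds, BooleanSupplier masterHasSetUpMonitoring) {
        scheduler.schedule(() -> {
            if (masterHasSetUpMonitoring.getAsBoolean() == false) {
                System.err.println("waiting for elected master node to set up local exporter; "
                    + "does it have x-pack installed?");
            }
        }, waitMasterTimeoutSeconds, TimeUnit.SECONDS);
    }
}
```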
If there are ill-formed pipelines, or other pipelines are not ready to be parsed, `InferenceProcessor.Factory::accept(ClusterState)` logs warnings. This can be confusing and cause log spam.
It might lead folks to think there is an issue with the inference processor. Also, they would see logs for the inference processor even though they might not be using it, leading to more confusion.
Additionally, pipelines might not be parseable in this method as some processors require the new cluster state metadata before construction (e.g. `enrich` requires cluster metadata to be set before creating the processor).
closes https://github.com/elastic/elasticsearch/issues/55985
Backport of #55858 to 7.x branch.
Currently the TransportBulkAction detects whether an index is missing and
then decides whether it should be auto created. The coordination of the
index creation also happens in the TransportBulkAction on the coordinating node.
This change adds a new transport action that the TransportBulkAction delegates to
if missing indices need to be created. The reasons for this change:
* Auto creation of data streams can't occur on the coordinating node.
Based on the index template (v2) either a regular index or a data stream should be created.
However if the coordinating node is slow in processing cluster state updates then it may be
unaware of the existence of certain index templates, which can then lead to the
TransportBulkAction creating an index instead of a data stream. Therefore the coordination of
creating an index or data stream should occur on the master node. See #55377
* From a security perspective it is useful to know whether index creation originates from the
create index api or from auto creating a new index via the bulk or index api. For example
a user would be allowed to auto create an index, but not to use the create index api. The
auto create action will allow security to distinguish these two different patterns of
index creation.
This change adds the following new transport actions:
AutoCreateAction, the TransportBulkAction redirects to this action and this action will actually create the index (instead of the TransportCreateIndexAction). Later via #55377, can improve the AutoCreateAction to also determine whether an index or data stream should be created.
The create_index index privilege is also modified, so that if this permission is granted then a user is also allowed to auto create indices. This change does not yet add an auto_create index privilege. A future change can introduce this new index privilege or modify an existing index / write index privilege.
Relates to #53100
It's possible for a constant_keyword to have a 'null' value before any documents
are seen that contain a value for the field. In this case, no documents have a
value for the field, and 'exists' queries should return no documents.
Makes the following changes to the `porter_stem` token filter docs:
* Rewrites description and adds a Lucene link
* Adds detailed analyze example
* Adds an analyzer example
Making use of #55773 to simplify the snapshot state machine.
1. Deletes with no in-progress snapshot now add the delete entry to the cluster state right away
instead of doing a second CS update after the first update was a NOOP.
2. If a bulk delete matches in-progress as well as completed snapshots, abort the in-progress snapshot
and then move on to delete from the repository.
Adds the step of stopping all data frame analytics before
deleting them to the cleanup of the corresponding HLRC tests.
Closes #56097
Backport of #56101
* Reject queries that act on nested fields or fields with nested field types in their hierarchy (#55721)
(cherry picked from commit 2a024461cd9da821112953d4c6e565ea622c678b)