This commit internalizes whether or not a role represents the ability to
contain data. In the future, this will let us remove the compatibility
role notion.
This commit adjusts the defaults for the tiered data roles so that they
are enabled by default, or if the node has the legacy data role. This
ensures that the default experience is that the tiered data roles are
enabled.
To fully specify the behavior of the tiered data roles (see the sketch after this list):
- starting a new node with the defaults: enabled
- starting a new node with node.roles configured: enabled if and only
if the tiered data roles are explicitly configured, independently
of the node having the data role
- starting a new node with node.data enabled: enabled unless the
tiered data roles are explicitly disabled
- starting a new node with node.data disabled: disabled unless the
tiered data roles are explicitly enabled
XPack usage collection starts out on management threads but, depending on the
implementation of the usage plugin, it could end up running on
transport threads instead. Fixed to always reschedule on a management
thread.
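A hedged sketch of the rescheduling pattern (the usage-collection call and listener are illustrative):
```
// Hop back to the management pool before doing the follow-up work, so a response that
// arrives on a transport thread does not keep usage collection on that thread.
threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(() -> collectUsage(listener));
```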
With this change, we will always return the same point in time in a
search response as was provided in its input, until we implement the retry
mechanism for points in time.
Adds support for the unsigned_long type to data frame analytics.
This type is handled in the same way as the long type. Values
sent to the ML native processes are converted to floats and
hence will lose accuracy when outside the range where a float
can uniquely represent long values.
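A plain Java illustration of that accuracy loss (a float can only represent integers exactly up to 2^24):
```
long value = 16_777_217L;            // 2^24 + 1, the smallest positive long a float cannot represent exactly
float asFloat = (float) value;       // same kind of conversion as sending the value to the native process
System.out.println((long) asFloat);  // prints 16777216 -> the original value is no longer recoverable
```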
Backport of #64066
When calculating feature importance, the leaf values directly correlate with the value of the importance.
Consequently, positive leaf values -> positive feature importance, and
negative leaf values -> negative feature importance.
It follows that for binary classification, this is done such that the importance relates to the leaf values, which relate directly to the "probability of class 1".
So, the feature importance calculated is always for the importance as it relates to class 1.
The inverse is the importance as it relates to class 0.
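For example (values are made up):
```
double importanceForClass1 = 0.42;                  // what the results report
double importanceForClass0 = -importanceForClass1;  // the class 0 view is simply the inverse
```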
* Async search should retry updates on version conflict
The _async_search APIs can throw a version conflict exception when the internal response
is updated concurrently. That can happen if the final response is written while the user
extends the expiration time. That scenario should be rare but it happened in Kibana for
several users so this change ensures that updates are retried at least 5 times. That
should resolve the transient errors for Kibana. This change also preserves the version
conflict exception in case the retry didn't work instead of returning a confusing 404.
This commit also ensures that we don't delete the response if the search was cancelled
internally and not deleted explicitly by the user.
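A hedged, synchronous sketch of that retry policy (the real change is asynchronous and listener-based; names here are illustrative):
```
import java.util.concurrent.Callable;

// Retry an update a bounded number of times; if it keeps conflicting, surface the
// conflict itself rather than a confusing 404.
static <T> T updateWithRetry(Callable<T> update, int maxAttempts) throws Exception {
    Exception lastConflict = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            return update.call();
        } catch (Exception e) {   // in the real code only version conflicts are retried
            lastConflict = e;
        }
    }
    throw lastConflict;
}
```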
Closes #63213
The `remote_monitoring_agent` reserved role is extended to grant more privileges
over the metricbeat-* index pattern.
In addition to the index and create_index index privileges that it granted already,
it now also grants the view_index_metadata privilege.
Closes #63203
This commit ensures that jobs within the SchedulerEngine do not
continue to run after they are cancelled. There was no synchronization
between the cancel method of an ActiveSchedule and the run method, so
an actively running schedule would go ahead and reschedule itself even
if the cancel method had been called.
This commit adds synchronization between cancelling and the scheduling
of the next run to ensure that the job is cancelled. In real life
scenarios this could manifest as a job running multiple times for
SLM. This could happen if a job had been triggered and was cancelled
prior to completing its run such as if the node was no longer the
master node or if SLM was stopping/stopped.
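A self-contained sketch of the synchronization pattern (plain `ScheduledExecutorService`, not the actual `SchedulerEngine` code):
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Cancellation and scheduling of the next run share a lock, so an in-flight run cannot
// reschedule itself once cancel() has been observed.
class ActiveScheduleSketch implements Runnable {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable job;
    private final long intervalMillis;
    private ScheduledFuture<?> next;
    private boolean cancelled;

    ActiveScheduleSketch(Runnable job, long intervalMillis) {
        this.job = job;
        this.intervalMillis = intervalMillis;
        scheduleNextRun();
    }

    @Override
    public void run() {
        job.run();
        scheduleNextRun();                 // no-op if cancel() has already been called
    }

    private synchronized void scheduleNextRun() {
        if (cancelled == false) {
            next = scheduler.schedule(this, intervalMillis, TimeUnit.MILLISECONDS);
        }
    }

    synchronized void cancel() {
        cancelled = true;                  // checked under the same lock in scheduleNextRun()
        if (next != null) {
            next.cancel(false);
        }
    }
}
```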
Closes #63754
Backport of #63762
Adds validation that the dest pipeline exists when a transform
is updated. Refactors the pipeline check into the `SourceDestValidator`.
Fixes #59587
Backport of #63494
As a result of this, we can remove a chunk of code from TypeParsers as well. Tests
for search/index mode analyzers have moved into their own file. This commit also
rationalises the serialization checks for parameters into a single SerializerCheck
interface that takes the values includeDefaults, isConfigured and the value
itself.
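A hedged sketch of the shape of that interface (the actual generics and method name may differ):
```
@FunctionalInterface
interface SerializerCheck<T> {
    // decides whether a parameter should be serialized, given whether defaults are included,
    // whether the parameter was explicitly configured, and the current value
    boolean check(boolean includeDefaults, boolean isConfigured, T value);
}
```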
Relates to #62988
This PR updates the `logstash_admin` role to include the recently-added Logstash Pipeline Management APIs, as well as access to the `.logstash*` index pattern.
Co-authored-by: William Brafford <williamrandolphbrafford@gmail.com>
This PR adds deprecation warnings when accessing System Indices via the REST layer. At this time, these warnings are only enabled for Snapshot builds by default, to allow projects external to Elasticsearch additional time to adjust their access patterns.
Deprecation warnings will be triggered by all REST requests which access registered System Indices, except for a few purpose-specific APIs which access System Indices as an implementation detail and which will continue to allow access to system indices by default:
- `GET _cluster/health`
- `GET {index}/_recovery`
- `GET _cluster/allocation/explain`
- `GET _cluster/state`
- `POST _cluster/reroute`
- `GET {index}/_stats`
- `GET {index}/_segments`
- `GET {index}/_shard_stores`
- `GET _cat/[indices,aliases,health,recovery,shards,segments]`
Deprecation warnings for accessing system indices take the form:
```
this request accesses system indices: [.some_system_index], but in a future major version, direct access to system indices will be prevented by default
```
Determines the size of shards before allocating shards that are
recovering from snapshots. It ensures during shard allocation that the
target node selected as the recovery target will have enough free
disk space for the recovery event. This applies to regular restores,
CCR bootstrap from remote, as well as mounting searchable snapshots.
The InternalSnapshotInfoService is responsible for fetching snapshot
shard sizes from repositories. It provides a getShardSize() method
to other components of the system that can be used to retrieve the
latest known shard size. If the latest snapshot shard size retrieval
failed, getShardSize() returns
ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE. While
we'd like a better way to handle such failures, returning this value
allows us to keep the existing behavior for now.
Note that this PR does not address an issue (which we already have today)
where a replica is being allocated without knowing how much disk
space is being used by the primary.
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Add a new `ids` field to the API for invalidating API keys so that it supports bulk
invalidation with a list of IDs.
Note that the existing `id` field is kept as is, and it is an error if both `id` and `ids` are specified.
Getting the API key document from the security index is the most time-consuming part
of the API key authentication flow (>60% if the index is local and >90% if the index is remote).
This traffic is now avoided by the caching added with this PR.
Additionally, we add a cache invalidator registry so that clearing of different caches will
be managed in a single place (requires follow-up PRs).
We only ever use this with `XContentParser`, so there is no need to make inlining
worse by forcing a lambda, and hence a dynamic callsite, here.
=> Extracted the exception formatting code path, which is likely very cold,
to a separate method, and removed the lambda usage in hot loops by simplifying
the signature here.
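A hedged before/after sketch of this kind of change (method names are illustrative):
```
// before: every call allocates a lambda (and a dynamic callsite) just to build a rarely used message
void ensureToken(XContentParser.Token expected, XContentParser.Token actual, Supplier<String> message) {
    if (expected != actual) {
        throw new IllegalStateException(message.get());
    }
}

// after: the cold error-formatting path lives in its own method and the hot path takes the parser directly
void ensureToken(XContentParser.Token expected, XContentParser.Token actual, XContentParser parser) {
    if (expected != actual) {
        throwUnexpectedToken(expected, actual, parser);   // extracted, rarely executed
    }
}
```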
This adds the new field `feature_importance_baseline` and allows it to optionally be included in the model's metadata.
Related to: https://github.com/elastic/ml-cpp/pull/1522
* [ML] optimize delete expired snapshots (#63134)
When deleting expired snapshots, we do an individual delete action per snapshot per job.
We should instead gather the expired snapshots and delete them in a single call.
This commit achieves this, and a side effect is that there is less audit log spam on nightly cleanup.
closes https://github.com/elastic/elasticsearch/issues/62875
We would only check for a null value and not for an empty string, which
meant that we were not actually enforcing this mandatory
setting. This commit ensures we check for both and fail
accordingly, if necessary, on startup.
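A minimal sketch of the stricter check (illustrative, not the actual code):
```
static void requireNonEmpty(String settingName, String value) {
    // previously only the null case was rejected, so an empty string slipped through
    if (value == null || value.trim().isEmpty()) {
        throw new IllegalArgumentException("setting [" + settingName + "] must be set");
    }
}
```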
This is a very minor optimization but trivial to implement, so might as well.
```
Benchmark                                (nGramStrs)  Mode  Cnt        Score        Error  Units
NGramProcessorBenchmark.ngramInnerLoop         1,2,3  avgt   20  4415092.443 ±  31302.115  ns/op
NGramProcessorBenchmark.ngramOuterLoop         1,2,3  avgt   20  4235550.340 ± 103393.465  ns/op
```
This measurement is in nanoseconds; consequently, the overall performance of inference is dominated by other factors (e.g. map#put). But this optimization adds up over time and is simple.
Classification feature importance supports various types in the class name:
- string
- boolean
- numerical
The xcontent parsing on the server side and the HLRC side should support and test these types.
* Use compilation as validation for painless role template (#62845)
Role template validation now performs only compilation if the script is painless.
It no longer attempts to execute the script with empty input, which is problematic.
The compilation process will catch things like invalid syntax and undefined variables,
which still provides a certain level of protection against ill-defined role templates.
Behaviour for Mustache scripts is unchanged.
* Checkstyle
As we have decided that top level importance for classification is not useful,
it has been removed from the results of the training job. This commit
also removes it from inference.
Backport of #62486
The `migrate` action will now configure the
`index.routing.allocation.include._tier_preference` setting to the corresponding
tiers. For the HOT phase it will configure `data_hot`, for the WARM phase it will
configure `data_warm,data_hot` and for the COLD phase
`data_cold,data_warm,data_hot`.
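For example, for an index in the WARM phase the action would apply something like (a hedged `Settings` sketch):
```
Settings.builder()
    .put("index.routing.allocation.include._tier_preference", "data_warm,data_hot")
    .build();
```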
(cherry picked from commit 9dbf0e6f0c267e40c5bcfb568bb2254da103ae40)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
In case of more than 500 transforms, get and stats return paged results which can be requested using
page parameters. For more than 500 transforms, the count wasn't parsed out of the server response but was
taken from the size of the list of transforms.
The change also adds client/server HLRC tests and fixes a wrong type for count in get.
Fixes #56245
When state persistence was first implemented for data frame analytics
we had the assumption that state would always fit in a single document.
However, this is not the case any more.
This commit adds handling of state that spreads over multiple documents.
Backport of #62564
* [ML] changing to not use global bulk indexing parameters in conjunction with add(object) calls (#62694)
* [ML] changing to not use global bulk indexing parameters in conjunction with add(object) calls
Global parameters, outside of the global index, are ignored for internal callers in certain cases.
If the internal caller is adding requests via the following methods:
```
- BulkRequest#add(IndexRequest)
- BulkRequest#add(UpdateRequest)
- BulkRequest#add(DocWriteRequest)
- BulkRequest#add(DocWriteRequest[])
```
It is better to specifically set the desired parameters on the requests before they are added
to the bulk request object.
This commit addresses this issue for the ML plugin.
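A hedged sketch of the recommended pattern (index name, id, and pipeline are illustrative):
```
BulkRequest bulkRequest = new BulkRequest();
IndexRequest indexRequest = new IndexRequest("my-index")
    .id("1")
    .setPipeline("my-pipeline")   // set per request rather than relying on bulk-level global parameters
    .source("{\"field\":\"value\"}", XContentType.JSON);
bulkRequest.add(indexRequest);
```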
* unmuting test
This allows the `check-migration` step to move past the allocation check
if the tier routing settings are manually unset.
This helps a user unblock ILM in case a tier is removed (i.e. if the warm tier
is decommissioned, this will allow users to resume the ILM policies stuck in
`check-migration` waiting for the warm nodes to become available and the managed
index to allocate; this allows the index to allocate on the other available tiers).
(cherry picked from commit d7a1eaa7f51d0972d10c0df1d3cd77d6b755dd41)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>