The after-snapshot cleanup action seems to be interfering with SLM
deleting snapshots here, causing concurrent-delete exceptions.
Since these tests are now test-scoped, there is no reason to run
snapshot deletes after each test, so we can remove them to avoid this issue.
Closes #47937
We were not closing repositories on node shutdown.
In production this has little effect, but in tests a node that
is shut down while using `MockRepository` and is currently stuck
in a simulated blocked-IO situation will only unblock when the
node's threadpool is interrupted. In some edge cases (many
snapshot threads and some CI slowness) this can result
in the execution taking longer than 5s to release
all the shard stores, so we fail the assertion
about unreleased shard stores in the internal test cluster.
Regardless of tests, I think we should close repositories
and release the resources associated with them when closing
a node, and not just when removing a repository from the
cluster state while nodes are running, as the current behavior
is really unexpected.
Fixes #47689
* Add SLM support to xpack usage and info APIs
This is a backport of #48096
This adds the missing SLM information to the
`/_xpack` and `/_xpack/usage` APIs. The output now looks like:
```
GET /_xpack/usage
{
  ...
  "slm" : {
    "available" : true,
    "enabled" : true,
    "policy_count" : 1,
    "policy_stats" : {
      "retention_runs" : 0,
      ...
    }
  }
}
```
and
```
GET /_xpack
{
  ...
  "features" : {
    ...
    "slm" : {
      "available" : true,
      "enabled" : true
    },
    ...
  }
}
```
Relates to #43663
* Fix missing license
This PR fixes #47593. Stored scripts with the old-style id of `lang#id` are
carried through the upgrade process but are no longer accessible in recent
versions. This fix drops those scripts altogether since there is no way for
a user to access them.
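For illustration only (the script name below is hypothetical), scripts stored under a plain id remain reachable through the stored scripts API, while a leftover composite `lang#id` entry cannot be fetched this way and is therefore removed:
```
# Still accessible: a script stored under a plain id
GET _scripts/calculate-score

# No longer accessible: a script persisted under the old composite id
# "painless#calculate-score" cannot be retrieved in recent versions
```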
* Update testing document for new packaging tests
The TESTING.asciidoc document had gotten out of date due to some new and
wonderful changes in our vagrant testing code. I've removed all of the
instructions that no longer work, and added working examples and descriptions
in their place.
This adds support for parsing an inference model as a possible
result of the analytics process. When we do parse such a model,
we persist a `TrainedModelConfig` into the inference index
that contains additional metadata derived from the running job.
Today the docs say that the low watermark has no effect on any shards that have
never been allocated, but this is confusing. Here "shard" means "replication
group" not "shard copy" but this conflicts with the "never been allocated"
qualifier since one allocates shard copies and not replication groups.
This commit removes the misleading words. A newly-created replication group
remains newly-created until one of its copies is assigned, which might be quite
some time later, but it seems better to leave this implicit.
Previously, when a numeric literal was enclosed in parentheses and then
negated, the negation was lost and the number was considered positive, e.g.:
`-(5)` was considered as `5` instead of `-5`
`- ( (1.28) )` was considered as `1.28` instead of `-1.28`
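A minimal way to exercise this through the SQL endpoint (the query text is just an illustration):
```
POST /_sql?format=txt
{
  "query": "SELECT -(5) AS a, - ( (1.28) ) AS b"
}
```
With the fix, `a` and `b` come back as `-5` and `-1.28` instead of the positive values.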
Fixes: #48009
(cherry picked from commit 4dee4bf3b34081062ba2e28ab8524a066812a180)
Audit messages are stored with millisecond timestamps. If two
messages have the same millisecond timestamp then asserting on
their order is impossible given the information available.
This PR changes the assertion on audit messages in the native
data frame analytics tests to assert that the expected audit
messages exist in any order.
Fixes #48035
This is a backport merge that adds a new ingest processor, named the enrich processor,
which allows documents being ingested to be enriched with data from other indices.
Besides a new enrich processor, this PR adds several APIs to manage an enrich policy.
An enrich policy is in charge of making the data from other indices available to the enrich processor in an efficient manner.
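As a rough sketch of the new APIs (index, field and policy names below are made up), a match policy is created and executed, and the enrich processor then references it from an ingest pipeline:
```
PUT /_enrich/policy/users-policy
{
  "match": {
    "indices": "users",
    "match_field": "email",
    "enrich_fields": ["first_name", "last_name"]
  }
}

POST /_enrich/policy/users-policy/_execute

PUT /_ingest/pipeline/user-lookup
{
  "processors": [
    {
      "enrich": {
        "policy_name": "users-policy",
        "field": "email",
        "target_field": "user"
      }
    }
  ]
}
```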
Related to #32789
`max_empty_searches = -1` in a datafeed update implies
`max_empty_searches` will be unset on the datafeed when
the update is applied. The `isNoop()` method needs to
take this -1 to null equivalence into account.
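For example (the datafeed id is hypothetical), an update like the following should clear the setting rather than be treated as a no-op when the datafeed already has a `max_empty_searches` value:
```
POST _ml/datafeeds/datafeed-my-job/_update
{
  "max_empty_searches": -1
}
```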
Previously, the safety check for the 2nd argument of the `DateAddProcessor` was
restricting it to `Integer`, which was wrong since we allow any integral
(non-fractional) number, so it's changed to a `Number` check as is done in
other cases. Also enhanced some tests regarding the check for an integral
(non-fractional) argument.
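As a sketch of the kind of query this affects (values picked arbitrarily), the second argument of `DATE_ADD` is now only required to be an integral `Number`:
```
POST /_sql?format=txt
{
  "query": "SELECT DATE_ADD('days', 10, CAST('2019-09-04' AS DATE)) AS d"
}
```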
(cherry picked from commit 0516b6eaf5eb98fa5bd087c3fece80139a6b118e)
We were incorrectly handling `IOException`s thrown by
the `InputStream` side of the upload operation, resulting
in a `ClassCastException`, as we expected to never get an
`IOException` from the Azure SDK code but we do in practice.
This PR also adds an assertion on `markSupported` for the
streams used by the SDK, as adding the test for this scenario
revealed that the SDK client would retry uploads for
non-mark-supporting streams on `IOException` in the `InputStream`.
This change adds:
- A new option, `allow_lazy_open`, to anomaly detection jobs
- A new option, `allow_lazy_start`, to data frame analytics jobs
Both work in the same way: they allow a job to be
opened/started even if no ML node exists that can
accommodate the job immediately. In this situation
the job waits in the opening/starting state until ML
node capacity is available. (The starting state for data
frame analytics jobs is new in this change.)
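For instance (job id and configuration abbreviated and hypothetical), an anomaly detection job created with the new option can be opened even when no ML node currently has capacity:
```
PUT _ml/anomaly_detectors/my-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [ { "function": "count" } ]
  },
  "data_description": { "time_field": "timestamp" },
  "allow_lazy_open": true
}

POST _ml/anomaly_detectors/my-job/_open
```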
Additionally, the ML nightly maintenance task now
creates audit warnings for ML jobs that are unassigned.
This means that jobs that cannot be assigned to an ML
node for a very long time will show a yellow warning
triangle in the UI.
A final change is that it is now possible to close a job
that is not assigned to a node without using force.
This is because previously jobs that were open but
not assigned to a node were an aberration, whereas
after this change they'll be relatively common.
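For example, assuming a job id of `my-job` that is open but unassigned, a plain close request is now enough:
```
POST _ml/anomaly_detectors/my-job/_close
```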