ML mappings and index templates have so far been created
programmatically. While this had its merits due to static typing,
there is consensus that it would be clearer to maintain them in JSON files.
In addition, we are going to be adding ILM policies to these indices,
and the component for a plugin to register ILM policies is
`IndexTemplateRegistry`. It expects the templates to be in JSON
resource files.
For the above reasons this commit refactors ML mappings and index
templates into json resource files that are registered via
`MlIndexTemplateRegistry`.
Backport of #51765
This commit makes the names of fetch subphases more consistent:
* Now the names end in just 'Phase', whereas before some ended in
'FetchSubPhase'. This matches the query subphases like AggregationPhase.
* Some names include 'fetch' like FetchScorePhase to avoid ambiguity about what
they do.
Ironically, PreventFailingBuildIT.testSoThatTestsDoNotFail is causing failures
as documented in #52197. The test no longer serves a purpose and can now be removed.
This is to support the ML categorization wizard.
Currently cluster:admin/analyze is only provided with the
"manage" cluster privilege, which is an excessive privilege
level to provide access to this single feature. It means
that the ML categorization wizard only works for extremely
highly privileged users.
Following this change the Kibana system user will be
permitted to run the _analyze endpoint on supplied strings
(not on an index). The ML UI will then call the _analyze
endpoint as the Kibana system user after first checking
that the logged-in user is permitted to create an ML job.
This will mean that users with the more reasonable
"manage_ml" cluster privilege will be permitted to use
the ML categorization wizard.
(This is also consistent with the way the ML UI will access
_all_ Elasticsearch functionality when the "ML in Spaces"
project is completed.)
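For illustration, a minimal sketch of calling the _analyze endpoint on supplied
text (no index involved) using the Java low-level REST client; the host, analyzer
and sample text are assumptions, and this is not the actual UI code:
```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class AnalyzeTextExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Analyze raw text with a named analyzer; no index is read or written,
            // which is exactly the scope of the reduced privilege described above.
            Request request = new Request("POST", "/_analyze");
            request.setJsonEntity("{ \"analyzer\": \"standard\", \"text\": \"ERROR: connection reset by peer\" }");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```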
Closes #51391
Relates elastic/kibana#57375
This adds a builder and parsed results for the `string_stats`
aggregation directly to the high level rest client. Without this the
HLRC can't access the `string_stats` API without the elastic licensed
`analytics` module.
While I'm in there this adds a few of our usual unit tests and
modernizes the parsing.
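For reference, the aggregation itself looks like this; a minimal sketch using the
Java low-level REST client rather than the new HLRC builder, with a made-up index
and field name:
```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class StringStatsExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/logs/_search");
            // size:0 skips hits; string_stats returns count, min/max/avg length and entropy
            request.setJsonEntity(
                "{ \"size\": 0, \"aggs\": { \"message_stats\": {"
                + " \"string_stats\": { \"field\": \"message.keyword\", \"show_distribution\": true } } } }");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```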
This commit removes the need for DeprecatedRoute and ReplacedRoute to
have an instance of a DeprecationLogger. Instead the RestController now
has a DeprecationLogger that will be used for all deprecated and
replaced route messages.
Relates #51950
Backport of #52278
Add a new cluster setting `search.allow_expensive_queries` which by
default is `true`. If set to `false`, certain queries that usually have
slow performance cannot be executed and an error message
is returned.
- Queries that need to do linear scans to identify matches:
  - Script queries
- Queries that have a high up-front cost:
  - Fuzzy queries
  - Regexp queries
  - Prefix queries (without index_prefixes enabled)
  - Wildcard queries
  - Range queries on text and keyword fields
- Joining queries
  - HasParent queries
  - HasChild queries
  - ParentId queries
  - Nested queries
- Queries on deprecated 6.x geo shapes (using PrefixTree implementation)
- Queries that may have a high per-document cost:
  - Script score queries
  - Percolate queries
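A minimal sketch of toggling the setting through the cluster settings API with the
Java low-level REST client (the host is an assumption):
```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class DisallowExpensiveQueries {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_cluster/settings");
            // After this, the query types listed above return an error instead of running
            request.setJsonEntity("{ \"transient\": { \"search.allow_expensive_queries\": false } }");
            client.performRequest(request);
        }
    }
}
```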
Closes: #29050
(cherry picked from commit a8b39ed842c7770bd9275958c9f747502fd9a3ea)
* Add more checks around parameter conversions
This commit adds two necessary verifications on received parameters:
- it checks the validity of the parameter's data type: if the declared
data type is resolved to an ES or Java type;
- it checks if the returned converter is non-null (i.e. a conversion is
possible) and generates an appropriate exception otherwise.
(cherry picked from commit eda30ac9c69383165324328c599ace39ac064342)
* Extract common optimizer tests (#52169)
(cherry picked from commit e5ad72bc22e9ec0686ab582195f0032efcb880bf)
* Hook in the optimizer rules (#52172)
(cherry picked from commit 1f90d8cc56052fbf2af604e72f9f5ca73f5e75d5)
Previously, in the in-memory sorting module
`LocalAggregationSorterListener`, only the aggregate functions were used
(grabbed by the `sortingColumns`). As a consequence, if the ORDER BY
also used columns of the GROUP BY clause (especially in the case
of higher priority - before the aggregate functions), wrong results were
produced. E.g.:
```
SELECT gender, MAX(salary) AS max FROM test_emp
GROUP BY gender
ORDER BY gender, max
```
Add all columns of the ORDER BY to the `sortingColumns` so that the
`LocalAggregationSorterListener` can use the correct comparators in
the underlying PriorityQueue used to implement the in-memory sorting.
Fixes: #50355
(cherry picked from commit be680af11c823292c2d115bff01658f7b75abd76)
Add a list of unsupported aggregations in transforms and create a test that fails if a new aggregation is
added. Limitation: this works only if the new aggregation is added to either core or a known plugin
(Analytics, MatrixAggregation).
During a bug hunt, I caught a handful of things (unrelated to the bug) that could be potential issues:
1. Needlessly wrapping in exception handling (minor cleanup)
2. The potential to notify listeners of a failure multiple times, or even to notify of a success after a failure notification
Modifies SLM's and ILM's history indices to be hidden indices for added
protection against accidental querying and deletion, and improves
IndexTemplateRegistry to handle upgrading index templates.
Also modifies the REST test cleanup to delete hidden indices.
This commit adds a new security origin, and an associated reserved user
and role, named `_async_search`, which can be used by internal clients to
manage the `.async-search-*` restricted index namespace.
7.x backport of #52201
Provides a path to register the EQL feature flag in release builds.
This enables EQL in release builds so that release docs tests pass.
Release docs tests do not have infrastructure in place to only register
snippets from included portions of the docs; they instead include all
docs snippets.
Since EQL cannot be enabled in release builds, this meant that the EQL
snippets failed in the release docs tests.
This adds the ability to enable EQL in the release docs tests. This
system property will be removed when EQL is ready for release.
Disallow specifying percentiles outside the range [0, 100]. This also fixes a problem in transforms by failing
validation if an invalid percentile configuration is used.
In #51146 a rudimentary check for poor categorization was added to
7.6.
This change replaces that warning based on a Java-side check with
a new one based on the categorization_status field that the ML C++
sets. categorization_status was added in 7.7 and above by #51879,
so this new warning based on more advanced conditions will also be
in 7.7 and above.
Closes #50749
Changes the misleading error message when attempting to open
a job while the "cluster.persistent_tasks.allocation.enable"
setting is set to "none" to a clearer message that names the
setting.
Closes #51956
Previously, when the specified (or default) fetchSize led to
subsequent HTTP requests and the usage of cursors, those subsequent
requests were no longer using the client timezone specified in the initial
SQL query. As a consequence, even though the query is executed once
(with the correct timezone), the processing of the query results by
the HitExtractors in the subsequent pages was done using the default
timezone Z. This could lead to incorrect results.
Fix the issue by correctly using the initially specified timezone,
which is recovered when deserialising the cursor string.
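To illustrate the scenario, a sketch with the Java low-level REST client: the
timezone is only supplied on the first request, and subsequent pages rely on it
being carried inside the cursor. The query, timezone and the `extractCursor`
helper are illustrative, not part of the fix:
```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class SqlCursorTimezoneExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // First page: the client timezone is part of the request
            Request first = new Request("POST", "/_sql");
            first.setJsonEntity(
                "{ \"query\": \"SELECT birth_date FROM test_emp ORDER BY birth_date\","
                + " \"fetch_size\": 5, \"time_zone\": \"Europe/Athens\" }");
            Response firstPage = client.performRequest(first);
            String cursor = extractCursor(EntityUtils.toString(firstPage.getEntity()));

            // Next page: only the cursor is sent; the fix ensures the original
            // timezone is restored when the cursor string is deserialised
            Request next = new Request("POST", "/_sql");
            next.setJsonEntity("{ \"cursor\": \"" + cursor + "\" }");
            client.performRequest(next);
        }
    }

    // Hypothetical helper: pulls the "cursor" value out of the JSON response body
    private static String extractCursor(String json) {
        int start = json.indexOf("\"cursor\":\"") + "\"cursor\":\"".length();
        return json.substring(start, json.indexOf('"', start));
    }
}
```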
Fixes: #51258
(cherry picked from commit 8f7afbdeb9295999b48a6c36db5b31cbe0cee432)
The changes add more granularity for identifying the data ingestion user.
The ingest pipeline can now be configured to record the authentication realm and
type. It can also record the API key name and ID when one is in use.
This improves traceability when data are being ingested from multiple agents
and will become more relevant with the upcoming support for required
pipelines (#46847)
Resolves: #49106
This change extracts the code that previously existed in the
"Authentication" class that was responsible for reading and writing
authentication objects to/from the ThreadContext.
This is needed to support multiple authentication objects under
separate keys.
This refactoring highlighted that there were a large number of places
where we extracted the Authentication/User objects from the thread
context, in a variety of ways. These have been consolidated to rely on
the SecurityContext object.
Backport of: #52032
Refactors `DataFrameAnalyticsTask` to hold a `StatsHolder` object.
That just has a `ProgressTracker` for now but this is paving the
way to add additional stats like memory usage, analysis stats, etc.
Backport #52134
Backport: #52139
In the rolling upgrade tests, watcher is manually executed.
In rare scenarios this happens before watcher is started,
causing the manual execution to fail.
Relates to #33185
Employs `ResultsPersisterService` from `DataFrameRowsJoiner` in order
to add retries when a data frame analytics job is persisting the results
to the destination data frame.
Backport of #52048
Make the parsing of dates more lenient:
- as an escaped literal: `{d '2020-02-10[[T| ]10:20[:30][.123456789][tz]]'}`
- cast a string to a date: `CAST('2020-02-10[[T| ]10:20[:30][.123456789][tz]]' AS DATE)`
Closes: #49379
(cherry picked from commit 5863b27500d5e7f6cdd8c6c62b09b84e53ca724a)
Currently, the logic for looking up `flattened` field types lives in the
top-level `FieldTypeLookup`. This PR moves it into a dedicated class
`DynamicKeyFieldTypeLookup`.
This fixes:
- the parsing of milliseconds in intervals: everything past the `.` used to be converted as-is to milliseconds, with no normalisation of the unit; thus, a value of .23 ended up as 23 millis in the interval, instead of 230;
- the printing of a trailing .0 in case the interval lacks the fractional part;
- tests generating a random millisecond value used to simply print it in the string about to be evaluated, without the necessary front-padding with zero(s) when the value was below 100 or 10.
(The combination of the first and last issues above, plus statistical "luck", made the incorrect handling pass the tests.)
(cherry picked from commit 4de8c64f63ee37c1bcfdb9b9d3a07d09be243222)
* Allow forcemerge in the hot phase for ILM policies
This commit changes the `forcemerge` action to also be allowed in the `hot` phase for policies. The
forcemerge will occur after a rollover, and allows users to take advantage of higher disk speeds for
performing the force merge (on a separate node type, for example).
One caveat with this is that a `forcemerge` in the `hot` phase *MUST* be accompanied by a `rollover`
action. ILM validates policies to ensure this is the case.
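For example, a valid policy pairing the two actions in the hot phase might look
like the following sketch (via the Java low-level REST client; the policy name,
host and thresholds are made up):
```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class HotPhaseForceMergePolicy {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_ilm/policy/hot-forcemerge-policy");
            // The rollover action is required; the force merge runs after the rollover
            request.setJsonEntity(
                "{ \"policy\": { \"phases\": { \"hot\": { \"actions\": {"
                + " \"rollover\": { \"max_size\": \"50gb\", \"max_age\": \"7d\" },"
                + " \"forcemerge\": { \"max_num_segments\": 1 }"
                + " } } } } }");
            client.performRequest(request);
        }
    }
}
```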
Resolves #43165
* Use anyMatch instead of findAny in validation
* Make randomTimeseriesLifecyclePolicy single-pass
It's perfectly fine if a bulk request on the follower hits
IndexShardClosedException in some CCR tests because we sometimes
close some follower shards while the follow-task is replicating operations.
Instead of failing the test immediately, this commit bubbles up that
failure to the shard follow task.
Closes #52052
Also allow whitespace ` ` (together with `T`) as a separator between the
date and time parts of the timestamp string. E.g.:
```
{ts '2020-02-08 12.10.45'}
```
or
```
{ts '2020-02-08T12.10.45'}
```
Fixes: #46069
(cherry picked from commit 07c977023fb8ceab5991c359a6cbfe07beaad9bb)
This change adds support for the following new model_size_stats
fields:
- categorized_doc_count
- total_category_count
- frequent_category_count
- rare_category_count
- dead_category_count
- categorization_status
Backport of #51879
This commit changes how RestHandlers are registered with the
RestController so that a RestHandler no longer needs to register itself
with the RestController. Instead the RestHandler interface has new
methods which when called provide information about the routes
(method and path combinations) that are handled by the handler
including any deprecated and/or replaced combinations.
This change also makes the publication of RestHandlers safe since they
no longer publish a reference to themselves within their constructors.
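Conceptually, a handler now declares its routes instead of registering them. A
rough sketch of the shape of the interface, based on its present-day form; the
exact signatures in this version may differ, and the handler name and path are
made up:
```java
import java.util.List;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestRequest;

import static org.elasticsearch.rest.RestRequest.Method.GET;

public class ExampleRestHandler extends BaseRestHandler {

    @Override
    public String getName() {
        return "example_handler";
    }

    @Override
    public List<Route> routes() {
        // The RestController asks the handler for its routes; the handler no
        // longer registers itself with the controller from its constructor.
        return List.of(new Route(GET, "/_example"));
    }

    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        return channel -> {}; // no-op body, just to keep the sketch compilable
    }
}
```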
Closes #51622
Co-authored-by: Jason Tedor <jason@tedor.me>
Backport of #51950
Now that the FIPS 140 security provider is simply a test dependency
we don't need the thirdPartyAudit exceptions, but plugin-cli and
transport-netty4 do need jarHell disabled as they use the non-FIPS
BouncyCastle security provider as a test dependency too.
Some parts of the User class (e.g. equals/hashCode) assumed that
principal could never be null, but the constructor didn't enforce
that.
This adds a null check into the constructor and fixes a few tests that
relied on being able to pass in null usernames.
Backport of: #51988
The REST tests for autoscaling either need to be skipped in a
non-snapshot build, or alternatively, the feature flag registered so
that autoscaling can be enabled. We prefer the latter approach, as it
allows us to also test autoscaling in non-snapshot builds incrementally,
instead of at the end of development as autoscaling prepares for
release. This commit registers the autoscaling feature flag in REST
tests for non-snapshot builds.
We can put the `IndexId` instead of just the index name into the recovery source and
save one load of `RepositoryData` on each shard restore that way.
Add some more tests where more than one literal is selected,
unaliased and aliased.
Follows: #42121
(cherry picked from commit 405271d408a233e697eb2e9ded3005a71f4df5e7)
This commit provides a path to register the autoscaling feature flag
in release builds, thereby enabling autoscaling in release
builds. The primary reason that we add this is so that our release docs
tests can pass. Our release docs tests do not have infrastructure in
place to only register snippets from included portions of the docs; they
instead include all docs snippets. Since autoscaling cannot be enabled
in release builds, this meant that the autoscaling snippets would fail
in the release docs tests. To address this, we need the ability to
enable autoscaling in the release docs tests, which we can now do with
the system property added here. This system property will be removed
when autoscaling is ready for release.
* [ML] Add bwc serialization unit test scaffold (#51889)
Adds new `AbstractBWCSerializationTestCase` which provides easy scaffolding for BWC serialization unit tests.
These are no replacement for true BWC tests (which execute actual old code). These tests do provide some good coverage for the current code when serializing to/from old versions.
* removing unnecessary override for 7.series branch
* adding necessary import
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
In preparation for feature importance and split information gain, this adds a `number_samples` field to the `TreeNode` definition.
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
The related issue regarding aggregation queries where some literals
are also selected together with an aggregate function has been fixed
with #49570. Add integration tests to verify the behavior.
Relates to: #41411
(cherry picked from commit 9f414a8d05c75e1a9f8250084f6dcd634d5d78d8)
The main purpose of this commit is to add a single autoscaling REST
endpoint skeleton, for the purpose of starting to build out the build
and testing infrastructure that will surround it. For example, rather
than committing a fully-functioning autoscaling API, we introduce here
the skeleton so that we can start wiring up the build and testing
infrastructure, establish security roles/permissions, and so on. This
way, in a forthcoming PR that introduces actual functionality, that PR
will be smaller and have less distractions around that sort of
infrastructure.
If the configs are removed (by some horrific means), we should still allow tasks to be cleaned up easily.
Datafeeds and jobs with missing configs are now visible in their respective _stats calls and can be stopped/closed.
Not all clients support this, e.g. if the Java high-level REST client were
to map this it would look like `client.cat().ml().api()`, which hinders
discoverability.
(cherry picked from commit 21cdabf09dc8305ce2f5e3b6cb193f67137d8bdb)
This change ensures that the rewrite of the shard request is executed in the network thread or in the refresh listener when waiting for an active shard. This allows queries that rewrite to match_no_docs to bypass the search thread pool entirely even if the can_match phase was skipped (pre_filter_shard_size > number of shards). Coordinating nodes don't have the ability to create empty responses, so this change also ensures that at least one shard creates a full empty response while the others can return null ones. This is needed since creating true empty responses on shards requires creating concrete aggregators, which would be too costly to build on a network thread. We should move this functionality to aggregation builders in a follow-up, but that would be a much bigger change.
This change is also important for #49601 since we want to add the ability to use the result of other shards to rewrite the request of subsequent ones. For instance, if the first M shards have their top N computed, the worst document in the global queue can be passed to subsequent shards, which can then rewrite to match_no_docs if they can guarantee that they don't have any document better than the provided one.
When cleaning a shard follow task after an index has been deleted, an
exception can occur submitting the complete persistent task
action. However, this exception message is not logged. This commit
addresses this by including the exception that led to the failure in the
log message.
AutoFollowIT tests are regularly failing on CI because they rely
on how cluster state updates are processed within the integration
clusters. We tried to limit this in #49141 by moving to latches
instead of waiting for assertions to pass, but there are still some
places where it still needs to wait for the cluster state updates to
be processed and auto-follow stats to be updated.
This commit gives more time to the assertBusy() that verifies the
AutoFollowStats (up to 60 seconds) and also always logs the
auto-follow stats in case the assertions fail.
Closes #48982
* EQL: Plug query params into the AstBuilder (#51886)
As the eventType is customizable, plug that into the parser based on the
given request.
(cherry picked from commit 5b4a3a3c07eacbc339cbd4c05a3621d056cc8d60)
* EQL: Add field resolution and verification (#51872)
Add basic field resolution inside the Analyzer and a basic Verifier to
check for any unresolved fields.
(cherry picked from commit 7087358ae2fb212811d480ec8641a46167946c82)
* EQL: Introduce basic execution pipeline (#51809)
The main classes that form the 'execution' pipeline are added - most of
them have no functionality; the purpose of this PR is to flesh out
the contract between the various moving parts so that work can start on
them independently.
(cherry picked from commit 9a1bae50a49af7fe8467b74b154c0d82c6bb9a19)
* EQL: Add AstBuilder to convert to QL tree (#51558)
* EQL: Add AstBuilder visitors
* EQL: Add tests for wildcards and sets
* EQL: Fix licensing
* EQL: Fix ExpressionTests.java license
* EQL: Cleanup imports
* EQL: PR feedback and remove LiteralBuilder
* EQL: Split off logical plan from expressions
* EQL: Remove stray import
* EQL: Add predicate handling for set checks
* EQL: Remove commented out dead code
* EQL: Remove wildcard test, wait until analyzer
(cherry picked from commit a462700f9c8e1fb977d62d42eb0077403b8fa98b)
* EQL grammar updates and tests (#49658)
* EQL: Additional tests and grammar updates
* EQL: Add backtick escaped identifiers
* EQL: Adding keywords to language
* EQL: Add checks for unsupported syntax
* EQL: Testing updates and PR feedback
* EQL: Add string escapes
* EQL: Cleanup grammar for identifier
* EQL: Remove tabs from .eql tests
(cherry picked from commit 6f1890bf2d52cabdfd1e7848fb481cf54b895f25)
Add unit and integration tests where literals are SELECTed
in combination with GROUP BY and possibly aggregate functions.
Relates to #41411 and #34583
which have been fixed.
(cherry picked from commit b97f1ca12675d6ea4772c60578922fe1cc2409ee)
* [DOCS] Align with ILM API docs (#48705)
* [DOCS] Reconciled with Snapshot/Restore reorg
* [DOCS] Split off ILM overview to a separate topic. (#51287)
* [DOCS] Split off overview to a separate topic.
* [DOCS] Incorporated feedback from @jrodewig.
* [DOCS] Edit ILM GS tutorial (#51513)
* [DOCS] Edit ILM GS tutorial
* [DOCS] Incorporated review feedback from @andreidan.
* [DOCS] Removed test link & fixed anchor & title.
* Update docs/reference/ilm/getting-started-ilm.asciidoc
Co-Authored-By: James Rodewig <james.rodewig@elastic.co>
* Fixed glossary merge error.
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
* Adding best_compression (#49974)
This commit adds a `codec` parameter to the ILM `forcemerge` action. When setting the codec to `best_compression` ILM will close the index, then update the codec setting, re-open the index, and finally perform a force merge.
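A sketch of what using the new parameter could look like (the policy name, phase
and host are illustrative):
```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class BestCompressionForceMerge {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_ilm/policy/compress-on-merge");
            // With codec=best_compression, ILM closes the index, updates the codec
            // setting, re-opens the index and then performs the force merge
            request.setJsonEntity(
                "{ \"policy\": { \"phases\": { \"warm\": { \"actions\": {"
                + " \"forcemerge\": { \"max_num_segments\": 1, \"codec\": \"best_compression\" }"
                + " } } } } }");
            client.performRequest(request);
        }
    }
}
```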
* Fix ForceMergeAction toSteps construction (#51825)
There was a duplicate force merge step and the test continued to fail. This commit clarifies the
`toStep` method and changes the `assertBestCompression` method for better readability.
Resolves #51822
* Update version constants
Co-authored-by: Sivagurunathan Velayutham <sivadeva.93@gmail.com>
Currently, the same class `FieldCapabilities` is used both to represent the
capabilities for one index, and also the merged capabilities across indices. To
help clarify the logic, this PR proposes to create a separate class
`IndexFieldCapabilities` for the capabilities in one index. The refactor will
also help when adding `source_path` information in #49264, since the merged
source path field will have a different structure from the field for a single index.
Individual changes:
* Add a new class IndexFieldCapabilities.
* Remove extra constructor from FieldCapabilities.
* Combine the add and merge methods in FieldCapabilities.Builder.
The work to switch file upload over to treating delimited files
like semi-structured text and using the ingest pipeline for CSV
parsing makes the multi-line start pattern used for delimited
files much more critical than it used to be.
Previously it was always based on the time field, even if that
was towards the end of the columns, and no multi-line pattern
was created if no timestamp was detected.
This change improves the multi-line start pattern by:
1. Never creating a multi-line pattern if the sample contained
only single line records. This improves the import
efficiency in a common case.
2. Choosing the leftmost field that has a well-defined pattern,
whether that be the time field or a boolean/numeric field.
This reduces the risk of a field with newlines occurring
earlier, and also means the algorithm doesn't automatically
fail for data without a timestamp.
Adds a secure and reloadable SECURE_AUTH_PASSWORD setting to allow keystore entries in the form "xpack.monitoring.exporters.*.auth.secure_password" to securely supply passwords for monitoring HTTP exporters. Also deprecates the insecure `AUTH_PASSWORD` setting.
We suspect the flakiness could’ve come from the fact that the rollover
step used to create the new index and roll the write alias to the new
index in separate cluster state updates. So the assertion that the
rolled index exists could’ve passed in the test but, before the
alias was rolled over to the new index, the subsequent write we execute
in the test (namely
`indexDocs("test_user", "x-pack-test-password", "foo_alias", 1)`)
would’ve sent the new document to the source index (i.e. foo-logs-000001).
This would leave the source index containing 3 documents and the rolled
index (foo-logs-000002) containing 0 documents.
However, we fixed this and the rollover step executes the “create index
and roll alias” in one single cluster update, so this situation should
not occur anymore.
(cherry picked from commit 834261c4fe7dd93f437eeec43c00d01ff2279f86)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
* REST: Test: Fix the `accept_enterprise` parameter for Get License API (#51527)
The Get License API specifies the `accept_enterprise` parameter as a `boolean`:
0ca5cb8cb6/x-pack/plugin/src/test/resources/rest-api-spec/api/license.get.json (L22-L27)
In the test, however, a `string` is passed, which makes the test compilation fail in the Go client.
(cherry picked from commit e2a2169b3d44592057c143253bb56375ed3e4268)
* Fix the SQL API documentation in REST specification (#51534)
This patch fixes the SQL REST API documentation to conform to the current schema.
(cherry picked from commit c8b6a849852699883086a6ada42279f2f68d7e07)
* Fix the "slices" parameter for the Delete By Query API in the REST specification (#51535)
This patch updates the type of the `slices` parameter in the Delete By Query API: according to
[the documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-slice),
it can be set to "auto", but the type in the specification allows only numerical values.
This prevents people from setting the parameter to "auto" eg. in the Go client,
which generates source from the specification, and sets the corresponding Go
type as number.
The patch uses the `|` notation, which we have discussed previously for encoding
a "polymorphic" parameter like this.
Related: https://github.com/elastic/go-elasticsearch/issues/77
* Fix the Enrich API documentation in REST specification (#51528)
This patch fixes the REST API documentation for the Enrich APIs to conform to the current schema.
(cherry picked from commit 59f28f4f2feeba3f6d2f0b632410577eacb28121)
Do the index refresh of the internal transform index with the system user
instead of using the calling user, which does not have sufficient rights
if security is enabled.
Fixes #51728
While we use `== false` as a more visible form of boolean negation
(instead of `!`), the true case is implied and the true value does not
need to be explicitly checked. This commit converts cases that have slipped
into the code checking for `== true`.
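A tiny illustration of the convention (the names are made up):
```java
public class BooleanStyleExample {
    static String describe(boolean indexExists) {
        if (indexExists == false) {   // preferred over '!indexExists' for visibility
            return "missing";
        }
        if (indexExists) {            // '== true' adds nothing; the bare value suffices
            return "present";
        }
        return "unreachable";
    }
}
```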
* Fix SnapshotLifecycleRestIT.testFullPolicySnapshot
This previously was missing some key information in the output of the failure. This captures that
information and adds logging at each step so we can determine the cause *if* it fails again.
Resolves #50358