* [DOCS] Align with ILM API docs (#48705)
* [DOCS] Reconciled with Snapshot/Restore reorg
* [DOCS] Split off ILM overview to a separate topic. (#51287)
* [DOCS] Split off overview to a separate topic.
* [DOCS] Incorporated feedback from @jrodewig.
* [DOCS] Edit ILM GS tutorial (#51513)
* [DOCS] Edit ILM GS tutorial
* [DOCS] Incorporated review feedback from @andreidan.
* [DOCS] Removed test link & fixed anchor & title.
* Update docs/reference/ilm/getting-started-ilm.asciidoc
Co-Authored-By: James Rodewig <james.rodewig@elastic.co>
* Fixed glossary merge error.
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
* Adding best_compression (#49974)
This commit adds a `codec` parameter to the ILM `forcemerge` action. When setting the codec to `best_compression`, ILM will close the index, then update the codec setting, re-open the index, and finally perform a force merge.
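A minimal sketch of such a policy, assuming the parameter is exposed as `codec` on the action as described above (the policy name and phase are illustrative):
```
PUT _ilm/policy/compressing_policy
{
  "policy": {
    "phases": {
      "warm": {
        "actions": {
          "forcemerge": {
            "max_num_segments": 1,
            "codec": "best_compression"
          }
        }
      }
    }
  }
}
```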
* Fix ForceMergeAction toSteps construction (#51825)
There was a duplicate force merge step and the test continued to fail. This commit clarifies the
`toStep` method and changes the `assertBestCompression` method for better readability.
Resolves #51822
* Update version constants
Co-authored-by: Sivagurunathan Velayutham <sivadeva.93@gmail.com>
Currently, the same class `FieldCapabilities` is used both to represent the
capabilities for one index, and also the merged capabilities across indices. To
help clarify the logic, this PR proposes to create a separate class
`IndexFieldCapabilities` for the capabilities in one index. The refactor will
also help when adding `source_path` information in #49264, since the merged
source path field will have a different structure from the field for a single index.
Individual changes:
* Add a new class IndexFieldCapabilities.
* Remove extra constructor from FieldCapabilities.
* Combine the add and merge methods in FieldCapabilities.Builder.
The work to switch file upload over to treating delimited files
like semi-structured text and using the ingest pipeline for CSV
parsing makes the multi-line start pattern used for delimited
files much more critical than it used to be.
Previously it was always based on the time field, even if that
was towards the end of the columns, and no multi-line pattern
was created if no timestamp was detected.
This change improves the multi-line start pattern by:
1. Never creating a multi-line pattern if the sample contained
only single line records. This improves the import
efficiency in a common case.
2. Choosing the leftmost field that has a well-defined pattern,
whether that be the time field or a boolean/numeric field.
This reduces the risk of a field with newlines occurring
earlier, and also means the algorithm doesn't automatically
fail for data without a timestamp.
Adds a secure and reloadable SECURE_AUTH_PASSWORD setting to allow keystore entries in the form "xpack.monitoring.exporters.*.auth.secure_password" to securely supply passwords for monitoring HTTP exporters. Also deprecates the insecure `AUTH_PASSWORD` setting.
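For illustration, the keystore entry for a hypothetical exporter named `my_exporter` would be added like this (the exporter name is made up; the setting pattern is the one described above):
```
bin/elasticsearch-keystore add xpack.monitoring.exporters.my_exporter.auth.secure_password
```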
We suspect the flakiness could’ve come from the fact that the rollover
step used to create the new index and roll the write alias to the new
index in separate cluster state updates. So the assertion that the
rolled index exists could’ve passed in the test but, before the
alias was rolled over to the new index, the subsequent write we execute
in the test (namely
`indexDocs("test_user", "x-pack-test-password", "foo_alias", 1)`)
would’ve sent the new document to the source index (ie. foo-logs-000001)
This would see the source index containing 3 documents and the rolled
index (foo-logs-000002) 0 documents.
However, we fixed this and the rollover step executes the “create index
and roll alias” in one single cluster update, so this situation should
not occur anymore.
(cherry picked from commit 834261c4fe7dd93f437eeec43c00d01ff2279f86)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
* REST: Test: Fix the `accept_enterprise` parameter for Get License API (#51527)
The Get License API specifies the `accept_enterprise` parameter as a `boolean`:
0ca5cb8cb6/x-pack/plugin/src/test/resources/rest-api-spec/api/license.get.json (L22-L27)
However, the test passes a `string`, which makes the test compilation fail in the Go client.
(cherry picked from commit e2a2169b3d44592057c143253bb56375ed3e4268)
* Fix the SQL API documentation in REST specification (#51534)
This patch fixes the SQL REST API documentation to conform to the current schema.
(cherry picked from commit c8b6a849852699883086a6ada42279f2f68d7e07)
* Fix the "slices" parameter for the Delete By Query API in the REST specification (#51535)
This patch updates the type of the `slices` parameter in the Delete By Query API: according to
[the documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-slice),
it can be set to "auto", but the type in the specification allows only numerical values.
This prevents people from setting the parameter to "auto" eg. in the Go client,
which generates source from the specification, and sets the corresponding Go
type as number.
The patch uses the `|` notation, which we have discussed previously for encoding
a "polymorphic" parameter like this.
Related: https://github.com/elastic/go-elasticsearch/issues/77
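A sketch of the request shape the spec now needs to admit (index name illustrative):
```
POST /my-index/_delete_by_query?slices=auto
{
  "query": { "match_all": {} }
}
```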
* Fix the Enrich API documentation in REST specification (#51528)
This patch fixes the REST API documentation for the Enrich APIs to conform to the current schema.
(cherry picked from commit 59f28f4f2feeba3f6d2f0b632410577eacb28121)
Do the index refresh of the internal transform index with the system user
instead of the calling user, which does not have sufficient rights
if security is enabled.
Fixes #51728
While we use `== false` as a more visible form of boolean negation
(instead of `!`), the true case is implied and the true value does not
need to be explicitly checked. This commit converts cases that have slipped
into the code checking for `== true`.
* Fix SnapshotLifecycleRestIT.testFullPolicySnapshot
This previously was missing some key information in the output of the failure. This captures that
information and adds logging at each step so we can determine the cause *if* it fails again.
Resolves #50358
This setting was introduced to reduce the time taken by
tests that shut nodes down, such as `MlDistributedFailureIT` and
`NetworkDisruptionIT`. However, it is unfortunate to have to set the value
to an explicit value in production. More importantly, dynamically
choosing the value for this setting makes it impossible to adopt static index template configs
that we register via `IndexTemplateRegistry`, which we need to use in order to start
registering ILM policies for the ML indices.
This commit removes this setting from our templates. I ran the tests a few times and could
not see execution time differing significantly.
Backport of #51740
This addresses another race condition that could yield this test flaky.
(cherry picked from commit d20d90aceb2b687239654d6f013f61f7f4cc1512)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
* Change index.lifecycle.step.master_timeout to indices.lifecycle.step.master_timeout
This changes the setting name from `index.lifecycle.step.master_timeout` to
`indices.lifecycle.step.master_timeout` to avoid confusion about its scope.
`index.*` settings are recognized as index-level settings; this one is node level.
Relates to #51698
Currently when an ILM policy finishes its execution, the index moves into the `TerminalPolicyStep`,
denoted by a completed/completed/completed phase/action/step lifecycle execution state.
This commit changes the behavior so that the index lifecycle execution state halts at the last
configured phase's `PhaseCompleteStep`, so for instance, if an index were configured with a policy
containing a `hot` and `cold` phase, the index would stop at the `cold/complete/complete`
`PhaseCompleteStep`. This allows an ILM user to update the policy to add any later phases and have
indices configured to use that policy pick up execution at the newly added "later" phase. For
example, if a `delete` phase were added to the policy specified above, the index would then move
from `cold/complete/complete` into the `delete` phase.
Relates to #48431
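As a sketch of that scenario (policy name and thresholds illustrative), an index parked at the `cold` phase's `PhaseCompleteStep` would pick up the newly added `delete` phase after an update like:
```
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_size": "50gb" } } },
      "cold": { "min_age": "7d", "actions": { "freeze": {} } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}
```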
This adds logic to handle paging problems when the ID pattern + tags reference models stored as resources.
Most of the complexity comes from the issue where a model stored as a resource could fall at the start or end of a page, or on the last page.
The changes are to help users prepare for migration to the next major
release (v8.0.0) regarding the breaking change of realm order config.
Warnings are added for when:
* A realm does not have an order config
* Multiple realms have the same order config
The warning messages are added to both the deprecation API and logging.
The main reasons for doing this are: 1) there is currently no automatic relay
between the two; 2) the deprecation API is under Basic and we need logging
for OSS.
This commit switches the strategy for managing dot-prefixed indices that
should be hidden indices from using "fake" system indices to an explicit
exclusions list that must be updated when those indices are converted to
hidden indices.
* Optimize not-equalities in con-/disjunctions
This commit adds optimisations of not-equalities in conjunctions and
disjunctions:
* for conjunctions, the not-equality can be optimized away when applied
together with a range or inequality, in case the not-equality point
falls outside the domain of the latter condition; if it's on the border,
it will modify the bound to simply exclude the equality, if present;
otherwise no optimisation can be applied;
* for disjunctions, the not-equals could filter away the ranges and
inequalities, unless these include an equality on the bound, in which
case the entire condition becomes always true, but this would influence
the score() function, so it's been omitted;
* fix aggregations of inequalities in ranges
This commit fixes the loop that aggregates inequalities into ranges:
- it won't advance the outer loop index in case of a merge, since the
current element is removed;
- it will break the inner loop, since comparison against the element
selected in the outer loop can't continue, as it had been removed.
(cherry picked from commit 789724ac2cc726de603849b4eeb8194da7528bcc)
* Rename ILM history index enablement setting
The previous setting was `index.lifecycle.history_index_enabled`, this commit changes it to
`indices.lifecycle.history_index_enabled` to indicate this is not an index-level setting (it's node
level).
* [ML][Inference] Fix weighted mode definition (#51648)
Weighted mode inaccurately assumed that the "max value" of the input values would be the maximum class value. This does not make sense.
Weighted mode should know how many classes there are, hence the new parameter `num_classes`, which indicates the maximum class value to be expected.
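A sketch of how the new parameter might look inside an ensemble's aggregate output (the surrounding field layout is assumed for illustration):
```
"aggregate_output": {
  "weighted_mode": {
    "num_classes": 3,
    "weights": [ 0.5, 0.3, 0.2 ]
  }
}
```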
We need to force flush to make the last commit safe; otherwise, we might
fail to open FrozenEngine. Note that we force flush before closing a
shard.
Closes #51620
Three fixes for when the `compressed_definition` is utilized on PUT:
* Update the inflate byte limit to be the minimum of 10% of the max heap, or 1GB (what it was previously)
* Stream data directly to the JSON parser, so if it is invalid, we don't have to inflate the whole stream to find out
* Throw when the maximum bytes are reached, indicating why the request was rejected
Previously, if YEAR() was used as an ORDER BY argument without being
wrapped in another scalar (e.g. YEAR(birth_date) + 10), no script
ordering was used; instead the underlying field (e.g. birth_date)
was used as a performance optimisation. This works correctly if
YEAR() is the only ORDER BY arg, but if further args are used as tie
breakers for the ordering, wrong results are produced. This is because
two rows with different birth_date values but in the same year are not
tied, as the underlying ordering is on birth_date and not on
YEAR(birth_date), and the following ORDER BY args are ignored.
Remove this optimisation for YEAR() to avoid incorrect results in
such cases.
As a consequence another bug is revealed: scalar functions on top
of nested fields produce scripted sorting/filtering which is not yet
supported. In such cases no error was thrown but instead all values for
such nested fields were null and were passed to the script implementing
the sorting/filtering, producing incorrect results.
Detect such cases and throw a validation exception.
Fixes: #51224
(cherry picked from commit f41efd6753dc3650a7eabb3e07b02b3b32c5704c)
Set the watcher logger to debug level.
These tests haven't run in such a long time that
we first need to get a better picture of how/if
these tests fail today.
Backport of #51478
See #33185
Add a verification that full-text search functions `MATCH()` and `QUERY()`
are not allowed in the SELECT clause, so that a nice error message is
returned to the user early instead of an "ugly" exception.
Fixes: #47446
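For example, a query along these lines is now rejected early with a validation error (a sketch via the SQL endpoint; index and field names are made up):
```
POST /_sql?format=txt
{
  "query": "SELECT MATCH(author, 'frank') FROM library"
}
```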
This commit creates a new index privilege named `maintenance`.
The privilege grants the following actions: `refresh`, `flush` (including synced `flush`),
and `force-merge`. Previously these actions were only under the `manage` privilege,
which in some situations was too permissive.
Co-authored-by: Amir H Movahed <arhd83@gmail.com>
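A sketch of a role granting the new privilege (role and index names illustrative):
```
PUT /_security/role/index_maintainer
{
  "indices": [
    {
      "names": [ "logs-*" ],
      "privileges": [ "maintenance" ]
    }
  ]
}
```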
Datafeeds being closed while starting could result in an NPE. This was
handled like any other failure, masking out the NPE. However, this
conflicts with the changes in #50886.
Related to #50886 and #51302
This PR tries to address the intermittent vector test failures on 7.x by making
sure we create indices with one shard.
The fix is based on this theory as to what's happening:
* On 7.x, the default number of shards is 1, but in REST tests we randomly use
2 in order to cover the multiple shards case. In the failing test run, we use 2
shards and all documents end up on only one shard.
* During a search, the response from the empty shard doesn't produce
deprecation warnings because we never try to execute the script. If not all
shard responses contain the warning headers, then certain deprecation warnings
can be lost (due to the bug described in #33936).
Addresses #50716.
Relates to #50061.
Prior to the change the watcher index listener didn't implement the
`postIndex(ShardId, Engine.Index, Engine.IndexResult)` method. This
caused document level exceptions like VersionConflictEngineException
to be ignored. This commit fixes this.
The watcher indexing listener did implement the `postIndex(ShardId, Engine.Index, Exception)`
method, but that only handles engine level exceptions.
This change also unmutes the SmokeTestWatcherTestSuiteIT#testMonitorClusterHealth test again.
Relates to #32299
Backport of #51526.
Previously the formatter was breaking simple if/else statements (i.e.
without braces) onto separate lines, which could be fragile because the
formatter cannot also introduce braces. Instead, keep such expressions
on the same line.
The timeout.tcp_read AD/LDAP realm setting, despite the low-level
allusion, controls the time interval the realms wait for a response for
a query (search or bind). If the connection to the server is synchronous
(un-pooled) the response timeout is analogous to the tcp read timeout.
But the tcp read timeout is irrelevant in the common case of a pooled
connection (when a Bind DN is specified).
The timeout.tcp_read qualifier is hereby deprecated in favor of
timeout.response.
In addition, the default value for both timeout.tcp_read and
timeout.response is that of timeout.ldap_search, instead of the 5s (but
the default for timeout.ldap_search is still 5s). The
timeout.ldap_search defines the server-controlled timeout of a search
request. There is no practical use case to have a smaller tcp_read
timeout compared to ldap_search (in this case the request would time-out
on the client but continue to be processed on the server). The proposed
change aims to simplify configuration so that the more common
configuration change, adjusting timeout.ldap_search up, has the expected
result (no timeout during searches) without any additional
modifications.
Closes #46028
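A sketch of the preferred configuration after this change (realm name, URL, and value are illustrative):
```
xpack.security.authc.realms.ldap.ldap1:
  order: 0
  url: "ldaps://ldap.example.com:636"
  timeout.response: 10s
```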
In the SQL with SSL tests, we need to find the interfaces that are up,
are loopback devices, or have a loopback address. If we check if the
device is up first, we can run into situations where the device is a
virtual ethernet device that might have disappeared between us seeing
the device, and checking if it is up. By first checking if the device is
a loopback device or it has a loopback address, then we can avoid
checking if the device is up except for loopback devices and therefore
we can avoid the disappearing virtual ethernet device problem.
* Allow Repository Plugins to Filter Metadata on Create
Add a hook that allows repository plugins to filter the repository metadata
before it gets written to the cluster state.
This commit deprecates the creation of dot-prefixed index names (e.g.
.watches) unless they are either 1) a hidden index, or 2) registered by
a plugin that extends SystemIndexPlugin. This is the first step
towards more thorough protections for system indices.
This commit also modifies several plugins which use dot-prefixed indices
to register indices they own as system indices, and adds a plugin to
register .tasks as a system index.
The docs tests have recently been running much slower than before (see #49753).
The gist here is that with ILM/SLM we do a lot of unnecessary setup / teardown work on each
test. Compounded with the slightly slower cluster state storage mechanism, this causes the
tests to run much slower.
In particular, on RAMDisk, docs:check is taking
ES 7.4: 6:55 minutes
ES master: 16:09 minutes
ES with this commit: 6:52 minutes
on SSD, docs:check is taking
ES 7.4: ??? minutes
ES master: 32:20 minutes
ES with this commit: 11:21 minutes
Changes the find_file_structure response to include a CSV
ingest processor in the ingest pipeline it suggests.
Previously the Kibana file upload functionality parsed CSV
in the browser, but by parsing CSV in the ingest pipeline
it makes the Kibana file upload functionality more easily
interchangeable with Filebeat such that the configurations
it creates can more easily be used to import data with the
same structure repeatedly in production.
* Reload secure settings with password (#43197)
If a password is not set, we assume an empty string to be
compatible with previous behavior.
Only allow the reload to be broadcast to other nodes if TLS is
enabled for the transport layer.
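A sketch of the reload call with the keystore password supplied in the body (password value illustrative):
```
POST /_nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
```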
* Add passphrase support to elasticsearch-keystore (#38498)
This change adds support for keystore passphrases to all subcommands
of the elasticsearch-keystore cli tool and adds a subcommand for
changing the passphrase of an existing keystore.
The work to read the passphrase in Elasticsearch when
loading the keystore will be addressed in a different PR.
Subcommands of elasticsearch-keystore can handle (open and create)
passphrase-protected keystores.
When reading a keystore, a user is prompted for a passphrase
only if the keystore is passphrase protected.
When creating a keystore, a user is allowed (default behavior) to create one with an
empty passphrase.
The passphrase can be set to be empty when changing/setting it for an
existing keystore.
Relates to: #32691
Supersedes: #37472
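For illustration, assuming the passphrase-change subcommand is named `passwd`:
```
bin/elasticsearch-keystore create   # a keystore may be created with an empty passphrase
bin/elasticsearch-keystore passwd   # set or change the passphrase of an existing keystore
```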
* Restore behavior for force parameter (#44847)
Turns out that the behavior of `-f` for the add and add-file sub-commands,
where it would also forcibly create the keystore if it
didn't exist, was by design - although undocumented.
This change restores that behavior, auto-creating a keystore that
is not password protected if the force flag is used. The force
OptionSpec is moved to the BaseKeyStoreCommand as we will presumably
want to maintain the same behavior in any other command that takes
a force option.
* Handle pwd protected keystores in all CLI tools (#45289)
This change ensures that `elasticsearch-setup-passwords` and
`elasticsearch-saml-metadata` can handle a password protected
elasticsearch.keystore.
For setup passwords the user would be prompted to add the
elasticsearch keystore password upon running the tool. There is no
option to pass the password as a parameter as we assume the user is
present in order to enter the desired passwords for the built-in
users.
For saml-metadata, we prompt for the keystore password at all times
even though we'd only need to read something from the keystore when
there is a signing or encryption configuration.
* Modify docs for setup passwords and saml metadata cli (#45797)
Adds a sentence in the documentation of `elasticsearch-setup-passwords`
and `elasticsearch-saml-metadata` to describe that users would be
prompted for the keystore's password when running these CLI tools,
when the keystore is password protected.
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Elasticsearch keystore passphrase for startup scripts (#44775)
This commit allows a user to provide a keystore password on Elasticsearch
startup, but only prompts when the keystore exists and is encrypted.
The entrypoint in Java code is standard input. When the Bootstrap class is
checking for secure keystore settings, it checks whether or not the keystore
is encrypted. If so, we read one line from standard input and use this as the
password. For simplicity's sake, we allow a maximum passphrase length of 128
characters. (This is an arbitrary limit and could be increased or eliminated.
It is also enforced in the keystore tools, so that a user can't create a
password that's too long to enter at startup.)
In order to provide a password on standard input, we have to account for four
different ways of starting Elasticsearch: the bash startup script, the Windows
batch startup script, systemd startup, and docker startup. We use wrapper
scripts to reduce systemd and docker to the bash case: in both cases, a
wrapper script can read a passphrase from the filesystem and pass it to the
bash script.
In order to simplify testing the need for a passphrase, I have added a
has-passwd command to the keystore tool. This command can run silently, and
exit with status 0 when the keystore has a password. It exits with status 1 if
the keystore doesn't exist or exists and is unencrypted.
A good deal of the code-change in this commit has to do with refactoring
packaging tests to cleanly use the same tests for both the "archive" and the
"package" cases. This required not only moving tests around, but also adding
some convenience methods for an abstraction layer over distribution-specific
commands.
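A sketch of how a wrapper script might feed the passphrase on standard input, using the new has-passwd command (the --silent flag and the overall flow are assumed from the description above):
```
if bin/elasticsearch-keystore has-passwd --silent; then
  # keystore exists and is encrypted: supply exactly one line on stdin
  echo "$KEYSTORE_PASSWORD" | bin/elasticsearch -d
else
  bin/elasticsearch -d
fi
```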
* Adjust docs for password protected keystore (#45054)
This commit adds relevant parts in the elasticsearch-keystore
sub-commands reference docs and in the reload secure settings API
doc.
* Fix failing Keystore Passphrase test for feature branch (#50154)
One problem with the passphrase-from-file tests, as written, is that
they would leave a SystemD environment variable set when they failed,
and this setting would cause elasticsearch startup to fail for other
tests as well. By using a try-finally, I hope that these tests will fail
more gracefully.
It appears that our Fedora and Ubuntu environments may be configured to
store journald information under /var rather than under /run, so that it
will persist between boots. Our destructive tests that read from the
journal need to account for this in order to avoid trying to limit the
output we check in tests.
* Run keystore management tests on docker distros (#50610)
* Add Docker handling to PackagingTestCase
Keystore tests need to be able to run in the Docker case. We can do this
by using a DockerShell instead of a plain Shell when Docker is running.
* Improve ES startup check for docker
Previously we were checking truncated output for the packaged JDK as
an indication that Elasticsearch had started. With new preliminary
password checks, we might get a false positive from ES keystore
commands, so we have to check specifically that the Elasticsearch
class from the Bootstrap package is what's running.
* Test password-protected keystore with Docker (#50803)
This commit adds two tests for the case where we mount a
password-protected keystore into a Docker container and provide a
password via a Docker environment variable.
We also fix a logging bug where we were logging the identifier for an
array of strings rather than the contents of that array.
* Add documentation for keystore startup prompting (#50821)
When a keystore is password-protected, Elasticsearch will prompt at
startup. This commit adds documentation for this prompt for the archive,
systemd, and Docker cases.
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Warn when unable to upgrade keystore on debian (#51011)
For Red Hat RPM upgrades, we warn if we can't upgrade the keystore. This
commit brings the same logic to the code for Debian packages. See the
posttrans file, which gets executed for RPMs.
* Restore handling of string input
Adds tests that were mistakenly removed. One of these tests proved
we were not handling the stdin (-x) option correctly when no
input was added. This commit restores the original approach of
reading stdin one char at a time until there is no more (-1, \r, \n)
instead of using readline(), which might return null.
* Apply spotless reformatting
* Use '--since' flag to get recent journal messages
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
It seems to me that we might be able to use journald's "--since" flag to
retrieve only log messages from the last run, and that this might be
less likely to fail due to race conditions in file deletion.
Unfortunately, it looks as if the "--since" flag has a granularity of
one second. I've added a two-second sleep to make sure that there's a
sufficient gap between the test that will read from journald and the
test before it.
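A sketch of the retrieval this enables (unit name and timestamp variable illustrative):
```
journalctl -u elasticsearch --since "$LAST_START_TIME"
```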
* Use new journald wrapper pattern
* Update version added in secure settings request
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Ioannis Kakavas <ikakavas@protonmail.com>
Moves the audit message for index creation to after the index has been successfully created. This has
been confusing for users when index creation failed but the audit log reported index creation.
This commit sets `xpack.security.ssl.diagnose.trust` to false in all
of our tests when running in FIPS 140 mode and when settings objects
are used to create an instance of the SSLService. This is needed
in 7.x because setting xpack.security.ssl.diagnose.trust to true
wraps SunJSSE TrustManager with our own DiagnosticTrustManager and
this is not allowed when SunJSSE is in FIPS mode.
An alternative would be to set xpack.security.fips.enabled to
true which would also implicitly disable
xpack.security.ssl.diagnose.trust but would have additional effects
(would require that we set PBKDF2 for password hashing algorithm in
all test clusters, would prohibit using JKS keystores in nodes even
if relevant tests have been muted in FIPS mode etc.)
Relates: #49900
Resolves: #51268
* Don't overwrite target field with SetSecurityUserProcessor
This change fixes a problem with `SetSecurityUserProcessor`, which was overwriting
the whole target field instead of only the fields actually filled by the processor.
Closes #51428
* Unused imports removed
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
* Introduce reusable QL plugin for SQL and EQL (#50815)
Extract reusable functionality from SQL into its own dedicated project QL.
Implemented as a plugin, it provides common components across SQL and the upcoming EQL.
While this commit is fairly large, for the most part it's just a big file move from sql package to the newly introduced ql.
(cherry picked from commit ec1ac0d463bfa12a02c8174afbcdd6984345e8b4)
* SQL: Fix incomplete registration of geo NamedWritables
(cherry picked from commit e295763686f9592976e551e504fdad1d2a3a566d)
* QL: Extend NodeSubclass to read classes from jars (#50866)
As the test classes are spread across more than one project, the Gradle
classpath contains not just folders but also jars.
This commit allows the test class to explore the archive content and
load matching classes from said source.
(cherry picked from commit 25ad74928afcbf286dc58f7d430491b0af662f04)
* QL: Remove implicit conversion inside Literal (#50962)
The Literal constructor makes an implicit conversion for each value given,
which turns out to have some subtle side-effects.
Improve MathProcessors to preserve numeric type where possible
Fix bug on compatibility between date and intervals
Preserve the source when folding inside the Optimizer
(cherry picked from commit 9b73e225b0aa07a23859550fb117bae571a2b672)
* QL: Refactor DataType for pluggability (#51328)
Change DataType from enum to class
Break DataType enums into QL (default) and SQL types
Make data type conversion pluggable so that new types can be introduced
As part of the process:
- static type conversion in QL package (such as Literal) has been
removed
- several utility classes have been broken into base (QL) and extended
(SQL) parts based on type awareness
- operators (+,-,/,*) are
- due to extensibility, serialization of arithmetic operation has been
slightly changed and pushed down to the operator executor itself
(cherry picked from commit aebda81b30e1563b877a8896309fd50633e0b663)
* Compilation fixes for 7.x
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This starts moving `date_histogram` to that new offset.
This is a redo of #50873 with more integration tests.
This reverts commit d114c9db3e1d1a766f9f48f846eed0466125ce83.
The DATE and DATESTAMP Grok patterns match 2 digit years
as well as 4 digit years. The pattern determination in
find_file_structure worked correctly in this case, but
the regex used to create a multi-line start pattern was
assuming a 4 digit year. Also, the quick rule-out
patterns did not always correctly consider 2 digit years,
meaning that detection was inconsistent.
This change fixes both problems, and also extends the
tests for DATE and DATESTAMP to check both 2 and 4 digit
years.
* Fix TimeSeriesLifecycleActionsIT.testShrinkAction
Shrinking a 6 shard index to 3 shards can be quite time consuming and
assertBusy probes the conditions at exponentially growing intervals.
This separates the one assertion that was used for all the conditions
into multiple assertBusy statements and increases the timeout for waiting
for the shrink to complete.
* Allow more time for shrink to complete
This commit allows more time for the shrink operation to complete in
testRetryFailedShrinkAction (separating the assertBusy calls too) and
testMoveToRolloverStep.
* Shrink to no more than 2 shards in tests
(cherry picked from commit 5fe780148fa3536915d61475b087896a5b9ace82)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
This changes the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require an
additional configuration either in CI workers or locally and is
controlled by specifying -Dtests.fips.enabled=true
* Centralize mocks initialization in ILM steps tests
This change centralizes initialization of `Client`, `AdminClient`
and `IndicesAdminClient` for all classes extending `AbstractStepTestCase`.
This removes a lot of code duplication and makes it easier to write tests.
This also removes the need for `AsyncActionStep#setClient`
* Unused imports removed
* Added missed tests
* Fix OpenFollowerIndexStepTests
* Check all snapshots in SnapshotLifecycleRestIT.testFullPolicy
Rather than check the first returned snapshot for a snapshot starting with `snap-` in
SnapshotLifecycleRestIT.testFullPolicy, this commit changes the test to find any snapshots starting
with `snap-`.
In the event that there are no snapshots (the failure case), this also exposes the full results map
so we can diagnose why a failure occurred.
Relates to #50358
* Use a more imperative style for checking
* Separate aliases used for tests in TimeSeriesLifecycleActionsIT
This is related to #51375 and hopes to help illuminate why some of those tests are failing. This
commit switches the aliases used in the test to use a random alias name every time (since there were
some complaints in the tests about aliases having more than one write index). With this we hope to
determine the actual cause of the failure in the test.
This also adds additional information to the exception returned when calling move-to-step with the
incorrect current step.
* Fix rest test
* [ML][Inference] add tags url param to GET (#51330)
Adds a new URL parameter, `tags` to the GET _ml/inference/<model_id> endpoint.
This parameter allows the list of models to be further reduced to those that contain all the provided tags.
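For illustration (the model ID pattern and tag values are made up):
```
GET _ml/inference/regression-*?tags=prod,one_hot
```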
Previously this test failed waiting for yellow:
https://gradle-enterprise.elastic.co/s/fv55holsa36tg/console-log#L2676
Oddly cluster health returned red status, but there were no unassigned, relocating or initializing shards.
Placed the waiting for green in a try-catch block, so that when this fails again the cluster state gets printed.
Relates to #48381
* Use ESSingleNodeTestCase instead of ESIntegTestCase (#51345)
(cherry picked from commit abcf1c41faf05a0b0196fb06e57c3de8c3d67688)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
The ApiKeyService would aggressively "close" ApiKeyCredentials objects
during processing. However, under rare circumstances, the verification
of the secret key would be performed asynchronously and may need access
to the SecureString after it had been closed by the caller.
The trigger for this would be if the cache already held a Future for
that ApiKey, but the future was not yet complete. In this case the
verification of the secret key would take place asynchronously on the
generic thread pool.
This commit moves the "close" of the credentials to the body of the
listener so that it only occurs after key verification is complete.
Backport of: #51244
As we prepare to introduce a new index for storing additional
information about data frame analytics jobs (e.g. instrumentation),
renaming this class to `DestinationIndex` better captures what it does
and leaves its prior name available for a more suitable use.
Backport of #51353
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
The API Key expiration value has millisecond precision as we use
{@link Instant#toEpochMilli()} when creating the API key
document.
It could often happen that the `Instant.now()` in testCreateApiKey
was close enough to the ApiKeyService's `clock.instant()` that,
when the nanos were removed from the latter (due to the call
to `toEpochMilli()`), the result of comparing these two Instants
was a few nanos short of 7 days.
Resolves: #47958
Check bulk indexing errors for permanent problems and ensure the state goes into failed instead of
retry. Corrects the stats API to show the real error and avoids excessive audit logging.
Fixes #50122
This change exposes the master timeout to ILM steps through a global dynamic setting.
All currently implemented steps make use of this setting as well.
Closes #44136
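Assuming this is the node-level `indices.lifecycle.step.master_timeout` setting mentioned earlier, a sketch of adjusting it dynamically:
```
PUT /_cluster/settings
{
  "transient": {
    "indices.lifecycle.step.master_timeout": "60s"
  }
}
```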
IndexWriter might not filter out fully deleted segments if retention
leases exist or the number of the retaining operations is non-zero.
SoftDeletesDirectoryReaderWrapper, however, always filters out fully
deleted segments.
This change uses the original directory reader when calculating segment
stats instead.
Relates #51192
Closes #51303
Data frame analytics classification currently only supports 2 classes for the
dependent variable. We were checking that the field's cardinality is not higher
than 2 but we should also check it is not less than that as otherwise the process
fails.
Backport of #51232
The ID of the datafeed's associated job was being obtained
frequently by looking up the datafeed task in a map that
was being modified in other threads. This could lead to
NPEs if the datafeed stopped running at an unexpected time.
This change reduces the number of places where a datafeed's
associated job ID is looked up to avoid the possibility of
failures when the datafeed's task is removed from the map
of running tasks during multi-step operations in other
threads.
Fixes #51285
This makes the UpdateSettingsStep retryable. This step updates settings needed
during the execution of ILM actions (mark indexes as read-only, change
allocation configurations, mark indexing complete, etc)
As the index updates are idempotent in nature (PUT requests and are applied only
if the values have changed) and the settings values are seldom user-configurable
(aside from the allocate action) the testing for this change goes along the
lines of artificially simulating a setting update failure on a particular value
update, which is followed by a successful step execution (a retry) in an
environment outside of ILM (the step executions are triggered manually).
(cherry picked from commit 8391b0aba469f39532bfc2796b76148167dc0289)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
After we rollover the index we wait for the configured number of shards for the
rolled index to become active (based on the index.write.wait_for_active_shards
setting which might be present in a template, or otherwise in the default case,
for the primaries to become active).
This wait might be long due to disk watermarks being tripped, replicas not
being able to spring to life due to cluster node reconfiguration, and so on,
and the RolloverStep might not complete successfully due to this inherently
transient situation, even though the rolled index has been created.
(cherry picked from commit 457a92fb4c68c55976cc3c3e2f00a053dd2eac70)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
When not truncated, a long SAML response XML document can fill the max
line length and mask the actual exception message that the trace
statement is meant to inform about.
The same XML document is also printed in full at trace level in
SamlRequestHandler#parseSamlMessage(), so there is no loss of
information.
This replaces the message we return for unknown queries with the standard
one that we use for unknown fields from `ObjectParser`. This is nice
because it includes "did you mean". One day we might convert parsing
queries to using object parser, but that looks complex. This change is
much smaller and seems useful.
There are two edge cases that can be run into when example input is matched in a weird way.
1. Recursion depth could continue many many times, resulting in a HUGE runtime cost. I put a limit of 10 recursions (could be adjusted I suppose).
2. If there are no "fixed regex bits", exploring the grok space would result in a fence-post error during runtime (with assertions turned off)
2> REPRODUCE WITH: ./gradlew ':x-pack:plugin:watcher:test' --tests "org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.testTransformFields" -Dtests.seed=26754396AB9C1A30 -Dtests.security.manager=true -Dtests.locale=lv-LV -Dtests.timezone=America/Dominica -Dcompiler.java=13 -Druntime.java=8
2> java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([26754396AB9C1A30:B2A3CA27E260803B]:0)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.lambda$testTransformFields$1(HistoryTemplateTransformMappingsTests.java:85)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1628)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.lambda$testTransformFields$2(HistoryTemplateTransformMappingsTests.java:88)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:892)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:877)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.testTransformFields(HistoryTemplateTransformMappingsTests.java:74)
Allows ML datafeeds to work with time fields that have
the "date_nanos" type _and make use of the extra precision_.
(Previously datafeeds only worked with time fields that were
exact multiples of milliseconds. So datafeeds would work
with "date_nanos" only if the extra precision over "date" was
not used.)
Relates #49889
* REST PreparedStatement-like query parameters are now supported in the form of an array of non-object, non-array values, where the ES SQL parser will try to infer the data type of each value passed as a parameter.
(cherry picked from commit 45b8bf619aecb1c03d7bc0cf06928dcc36005a66)
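A sketch of such a parameterized call (index and values are illustrative):
```
POST /_sql?format=txt
{
  "query": "SELECT * FROM emp WHERE birth_date > ? AND salary > ?",
  "params": ["1970-01-01", 50000]
}
```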
The hierarchy of fields/sub-fields under a field that is of an
unsupported data type will be marked as unsupported as well. Until this
change, the behavior was to set the unsupported data type field's
hierarchy as empty.
For example, considering the following hierarchy of fields/sub-fields
a -> b -> c -> d: if b were of type "foo", then b, c and d will
be marked as unsupported.
(cherry picked from commit 7adb286c4c485b9e781f88b0a2f98cab9ec5b7e2)
This commit merely adds the skeleton for the autoscaling project, adding
the basics to include the autoscaling module in the default
distribution, opt-in to code formatting, and a placeholder for the docs.
These policies store statistics, but since stats updating is asynchronous, it's
possible for the update from one test to bleed into a separate one. This change
switches the tests to use separate policy ids so that their stats are tracked
independently. It also relaxes the checking constraint in one of the tests.
Hopefully this:
Resolves #48531
Resolves #48017
This change introduces a new feature for indices so that they can be
hidden from wildcard expansion. The feature is referred to as hidden
indices. An index can be marked hidden through the use of an index
setting, `index.hidden`, at creation time. One primary use case for
this feature is to have a construct that fits indices that are created
by the stack that contain data used for display to the user and/or
intended for querying by the user. The desire to keep them hidden is
to avoid confusing users when searching all of the data they have
indexed and getting results returned from indices created by the
system.
Hidden indices have the following properties:
* API calls for all indices (empty indices array, _all, or *) will not
return hidden indices by default.
* Wildcard expansion will not return hidden indices by default unless
the wildcard pattern begins with a `.`. This behavior is similar to
shell expansion of wildcards.
* REST API calls can enable the expansion of wildcards to hidden
indices with the `expand_wildcards` parameter. To expand wildcards
to hidden indices, use the value `hidden` in conjunction with `open`
and/or `closed`.
* Creation of a hidden index will ignore global index templates. A
global index template is one with a match-all pattern.
* Index templates can make an index hidden, with the exception of a
global index template.
* Accessing a hidden index directly requires no additional parameters.
Backport of #50452
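A sketch of the behaviors listed above (index names illustrative): the first request creates a hidden index; the second, a plain wildcard listing, omits it by default; the third includes it by adding `hidden` to `expand_wildcards`.
```
PUT /.my-hidden-index
{
  "settings": { "index.hidden": true }
}

GET /_cat/indices/*

GET /_cat/indices/*?expand_wildcards=open,hidden
```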
This commit upgrades the OWASP HTML sanitizer used by watcher to the
latest version and also upgrades guava, which it depends on. The guava
upgrade also requires the addition of a new dependency that guava
itself requires as of version 27.0. The sanitizer's behavior has changed to
re-write these templated values with a comment that results in this output
`{<!-- -->{ctx.metadata.name}}`. This would be an issue if we attempted to
sanitize the template, but the code that uses the sanitizer runs the rendered
string through the sanitizer, which means that the templated values have
been replaced already.
Relates #50395
This commit changes our behavior so that when we receive a
request with an invalid/expired/wrong access token or API Key
we do not fallback to authenticating as the anonymous user even if
anonymous access is enabled for Elasticsearch.
If 1000 different category definitions are created for a job in
the first 100 buckets it processes then an audit warning will now
be created. (This will cause a yellow warning triangle in the
ML UI's jobs list.)
Such a large number of categories suggests that the field that
categorization is working on is not well suited to the ML
categorization functionality.
Object fields cannot be used as features. At the moment the _explain
API includes them and, even worse, it does not error when
an object field is excluded. This creates the expectation for the
user that all child fields will also be excluded, while that's not
the case.
This commit omits object fields from the _explain API and also
adds an error if an object field is included or excluded.
Backport of #51115
If a transform config got lost (e.g. because the internal index disappeared), tasks could not be
stopped using the transform API. This change makes it possible to stop transforms without a config,
meaning to remove the background task. In order to do so, force must be set to true.
When we receive a request with an Authorization header that contains
a Bearer token that is not generated by us or that is malformed in
some way, attempting to decode it as one of our own might cause a
number of exceptions that are not IOExceptions. This commit ensures
that we catch and log these too and call onResponse with `null`, so
that we can return 401 instead of 500.
Resolves: #50497
With elastic/elasticsearch#35848, users can retrieve total hits as an integer when the `rest_total_hits_as_int` query parameter is `true` (it defaults to `false`, returning total hits as an object).
This updates several snippet examples in the Watcher docs that used a workaround to get a total hits integer.
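The workaround being removed boils down to requests of this shape (a sketch; index name illustrative):
```
GET /my-index/_search?rest_total_hits_as_int=true
{
  "query": { "match_all": {} }
}
```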
* Extend the optimizations for equalities
This commit supplements the optimisations of equalities in conjunctions
and disjunctions:
* for conjunctions, the existing optimizations with ranges are extended
with not-equalities and inequalities; these lead to a fast resolution,
the conjunction either being evaluate to a FALSE, or the non-equality
conditions being dropped as superfluous;
* optimisations for disjunctions are added to be applied against ranges,
inequalities and not-equalities; these lead to disjunction either
becoming TRUE or the equality being dropped, either as superfluous or
merged into a range/inequality.
* Address review notes
* Fix the bug around wrongly optimizing 'a=2 OR a!=?', which only yields
TRUE for same values in equality and inequality.
* Var renamings, code style adjustments, comments corrections.
* Address further review comments. Extend optim.
- fix a few code comments;
- extend the Equals OR NotEquals optimisation (a=2 OR a!=5 -> a!=5);
- extend the Equals OR Range optimisation on limits equality (a=2 OR
2<=a<5 -> 2<=a<5);
- in case an equality is being removed in a conjunction, the rest of
possible optimisations to test is now skipped.
* rename one var for better legibility
- s/rmEqual/removeEquals
(cherry picked from commit 62e7c6a010f10cd7893ee5c99bad8b8d2a693436)
Backport / reimplementation of #50786 on 7.x.
Opt-in `buildSrc` for automatic formatting. This required a config tweak
in order to pick up all the Java sources, and as a result more files are
now found in the Enrich plugin, that were previously missed.
I also moved the 2 Java files in `buildSrc/src/main/groovy` into the Java
directory, which required some follow-up changes.
Knowing about used analysis components and mapping types would be incredibly
useful in order to know which ones may be deprecated or should get more love.
Some field types also act as a proxy to know about feature usage of some APIs
like the `percolator` or `completion` fields types for percolation and the
completion suggester, respectively.
Replace DES with AES to align with modern encryption standards.
The backport also fixes use of the Files.readString API, which is not available in Java 8.
Resolves: #50843
It's possible that the index could return no settings and thus throw a
`NullPointerException`.
I wasn't able to reproduce the original issue, but this should guard
against it in the future.
Resolves #50646
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This test failed a couple of different ways, related to timing, as well
as concurrent snapshots, and also naming.
This commit splits the giant `assertBusy` into separate parts so that we don't
perform ~5 different requests and tests in the same loop. It also gives each
test a unique repository so that no other test can accidentally re-use
snapshots.
Resolves #50358 (hopefully!)
Backport of #50909. The current formatting config allows some long
generic declarations to break the 140 character limit. Tweak the config
to wrap such lines.
There have been occasional failures, presumably due to
too many tests running in parallel, caused by jobs taking
around 15 seconds to open. (You can see the job open
successfully during the cleanup phase shortly after the
failure of the test in these cases.) This change increases
the wait time from 10 seconds to 20 seconds to reduce the
risk of this happening.
This change adds a new `kibana_admin` role, and deprecates
the old `kibana_user` and `kibana_dashboard_only_user` roles.
The deprecation is implemented via a new reserved metadata
attribute, which can be consumed from the API and also triggers
deprecation logging when used (by a user authenticating to
Elasticsearch).
Some docs have been updated to avoid references to these
deprecated roles.
Backport of: #46456
Co-authored-by: Larry Gregory <lgregorydev@gmail.com>
Check it out:
```
$ curl -u elastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_update/foo?pretty -d'{
"dac": {}
}'
{
"error" : {
"root_cause" : [
{
"type" : "x_content_parse_exception",
"reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
}
],
"type" : "x_content_parse_exception",
"reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
},
"status" : 400
}
```
The tricky thing about implementing this is that x-content doesn't
depend on Lucene. So this works by creating an extension point for the
error message using SPI. Elasticsearch's server module provides the
"spell checking" implementation.
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This starts moving `date_histogram` to that new offset.
* [ML][Inference] Adding classification_weights to ensemble models
classification_weights are a way to allow models to
prefer specific classification results over others.
This might be advantageous if classification value
probabilities are a known quantity and can improve
model error rates.
* Track Snapshot Version in RepositoryData (#50930)
Add tracking of snapshot versions to RepositoryData to make BwC logic more efficient.
Follow up to #50853
In classes where the client is used directly rather than through a call to
executeAsyncWithOrigin explicitly require the client to be OriginSettingClient
rather than using the Client interface.
Also remove calls to deprecated ClientHelper.clientWithOrigin() method.
Adds a new parameter to regression and classification that enables computation
of importance for the top most important features. The computation of the importance
is based on SHAP (SHapley Additive exPlanations) method.
Backport of #50914
This adds a new "http" sub-command to the certutil CLI tool.
The http command generates certificates/CSRs for use on the http
interface of an elasticsearch node/cluster.
It is designed to be a guided tool that provides explanations and
suggestions for each of the configuration options. The generated zip
file output includes extensive "readme" documentation and sample
configuration files for core Elastic products.
Backport of: #49827
The Document Level Security BitSet Cache (see #43669) had a default
configuration of "small size, long lifetime". However, this is not
a very useful default as the cache is most valuable for BitSets that
take a long time to construct, which is (generally speaking) the same
ones that operate over a large number of documents and contain many
bytes.
This commit changes the cache to be "large size, short lifetime" so
that it can hold bitsets representing billions of documents, but
releases memory quickly.
The new defaults are 10% of heap, and 2 hours.
This also adds some logging when a single BitSet exceeds the size of
the cache and when the cache is full.
Backport of: #50535
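A sketch of overriding the new defaults in elasticsearch.yml (setting names assumed to be the DLS bitset cache size/ttl settings; values illustrative):
```
xpack.security.dls.bitset.cache.size: 15%
xpack.security.dls.bitset.cache.ttl: 1h
```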
Previously custom realms were limited in what services and components
they had easy access to. It was possible to work around this because a
security extension is packaged within a Plugin, so there were ways to
store these components in static/SetOnce variables and access them from
the realm, but those techniques were fragile, undocumented and
difficult to discover.
This change includes key services as an argument to most of the methods
on SecurityExtension so that custom realm / role provider authors can
have easy access to them.
Backport of: #50534
The Document Level Security BitSet cache stores a secondary "lookup
map" so that it can determine which cache entries to invalidate when
a Lucene index is closed (merged, etc).
There was a memory leak because this secondary map was not cleared
when entries were naturally evicted from the cache (due to size/ttl
limits).
This has been solved by adding a cache removal listener and processing
those removal events asynchronously.
Backport of: #50635
When creating a role, we do not check if the exceptions for
the field permissions are a subset of granted fields. If such
a role is assigned to a user then that user's authentication fails
for this reason.
We added a check to validate the role query in #46275 and, along the same lines,
this commit adds a check that the exceptions for the field
permissions are a subset of granted fields when parsing the
index privileges from the role descriptor.
Backport of: #50212
Co-authored-by: Yogesh Gaikwad <bizybot@users.noreply.github.com>
The enterprise license type must have "max_resource_units" and may not
have "max_nodes".
This change adds support for this new field, validation that the field
is present if-and-only-if the license is enterprise and bumps the
license version number to reflect the new field.
Includes a BWC layer to return "max_nodes: ${max_resource_units}" in
the GET license API.
Backport of: #50735
* ILM action to wait for SLM policy execution (#50454)
This change adds a new ILM action to wait for SLM policy execution, to ensure that an index has a snapshot before deletion.
Closes #45067
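A sketch of the new action inside a policy's delete phase (policy names illustrative):
```
PUT _ilm/policy/delete_after_snapshot
{
  "policy": {
    "phases": {
      "delete": {
        "actions": {
          "wait_for_snapshot": { "policy": "nightly-snapshots" },
          "delete": {}
        }
      }
    }
  }
}
```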
* Fix flaky TimeSeriesLifecycleActionsIT#testWaitForSnapshot test
This change adds some randomness and cleanup step to TimeSeriesLifecycleActionsIT#testWaitForSnapshot and testWaitForSnapshotSlmExecutedBefore tests in attempt to make them stable.
Relates to #50781
* Formatting changes
* Longer timeout
* Fix Map.of in Java8
* Unused import removed
* Refresh cached phase policy definition if possible on new policy
There are some cases when updating a policy does not change the
structure in a significant way. In these cases, we can reread the
policy definition for any indices using the updated policy.
This commit adds this refreshing to the `TransportPutLifecycleAction`
to allow this. It allows us to do things like change the configuration
values for a particular step, even when on that step (for example,
changing the rollover criteria while on the `check-rollover-ready` step).
There are more cases where the phase definition can be reread than just
the ones checked here (for example, removing an action that has already
been passed), and those will be added in subsequent work.
Relates to #48431
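As a hedged illustration of the kind of in-place update this enables (policy name and values are hypothetical), re-issuing the policy with changed rollover criteria now takes effect even for an index currently on the `check-rollover-ready` step:
```
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "25gb", "max_age": "7d" }
        }
      }
    }
  }
}
```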
* SQL: Optimisation fixes for conjunction merges
This commit fixes the following issues around the way comparisons are
merged with ranges in conjunctions:
* the decision to include the equality of the lower limit is corrected;
* the selection of the upper limit is corrected to use the upper bound
of the range;
* the list of terms in the conjunction is sorted to have the ranges at
the bottom; this allows subsequent binary comparisons to find compatible
ranges and potentially be merged away. The end guarantee being that the
optimisation takes place irrespective of the order of the conjunction
terms in the statement.
Some comments are also corrected.
* address review observation on anon. comparator
Replace anonymous comparator of split AND Expressions with a lambda.
(cherry picked from commit 9828cb143a41f1bda1219541f3a8fdc03bf6dd14)
This commit changes the default behavior for
xpack.security.ssl.diagnose.trust when running in a FIPS 140 JVM.
More specifically, when xpack.security.fips_mode.enabled is true:
- If xpack.security.ssl.diagnose.trust is not explicitly set, the
default value of it becomes false and a log message is printed
on info level, notifying of the fact that the TLS/SSL diagnostic
messages are not enabled when in a FIPS 140 JVM.
- If xpack.security.ssl.diagnose.trust is explicitly set, the value of
it is honored, even in FIPS mode.
This is relevant only for 7.x where we support Java 8 in which
SunJSSE can still be used as a FIPS 140 provider for TLS. SunJSSE
in FIPS mode disallows the use of TrustManager implementations other
than the one shipped with SunJSSE.
The system-created models and the models we provide now use the `_xpack` user for uniformity with our other features.
The `PUT` action is now an admin cluster action,
and the XPackClient class now references the action instance.
Hide the `.async-search-*` in Security by making it a restricted index namespace.
The namespace is hard-coded.
To grant privileges on restricted indices, one must explicitly toggle the
`allow_restricted_indices` flag in the indices permission in the role definition.
As is the case with any other index, if a certain user lacks all permissions for an
index, that index is effectively nonexistent for that user.
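For example, a role that explicitly opts in to the restricted namespace might look like this (role name is hypothetical):
```
POST /_security/role/async_search_admin
{
  "indices": [
    {
      "names": [ ".async-search-*" ],
      "privileges": [ "all" ],
      "allow_restricted_indices": true
    }
  ]
}
```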
The OpenIdConnectRealm had a bug which would cause it not to populate
User metadata for collections contained in the user JWT claims.
This commit fixes that bug.
Backport of: #50521
* [ML][Inference] PUT API (#50852)
This adds the `PUT` API for creating trained models that support our format.
This includes
* HLRC change for the API
* API creation
* Validations of model format and call
* fixing backport
* Fix SLM check for restore in progress (#50868)
* Fix SLM check for restore in progress
This commit fixes the check in SLM where the `RestoreInProgress`
metadata was checked for existence. Rather than check existence we
should instead check the `isEmpty` method. Prior to this, a successful
restore for a repository that used SLM retention would prevent SLM
retention from running in subsequent invocations, due to SLM thinking
that a restore was still running.
* Fix 7.x-isms
This commit removes validation logic of source and dest indices
for data frame analytics and replaces it with using the common
`SourceDestValidator` class which is already used by transforms.
This way the validations and their messages become consistent
while we reduce code.
This means that where these validations fail the error messages
will be slightly different for data frame analytics.
Backport of #50841
Replaces the "funny"
`Function<String, ConstructingObjectParser<T, Void>>` with a much
simpler `ConstructingObjectParser<T, String>`. This makes pretty much
all of our object parsers static.
A very large number of recursive calls can cause a stack overflow
exception. This commit forks the recursive calls for non-async
processors. Once forked, each thread will handle at most 10
recursive calls to help keep the stack size and thread count
down to a reasonable size.
This adds the necessary named XContent classes to the HLRC for the lang ident model. This is so the HLRC can call `GET _ml/inference/lang_ident_model_1?include_definition=true` without XContent parsing errors.
The constructors are package private since these classes are used exclusively within the pre-packaged model (and require the specific weights, etc. to be of any use).
If a pipeline referenced by a transform does not exist, we should not allow the transform to be created.
We do allow the pipeline existence check to be skipped with defer_validations, but if the pipeline still does not exist on `_start`, the transform will fail to start.
relates: #50135
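A hedged sketch of the behavior: creation fails if the referenced pipeline is missing, unless validation is deferred. Index, pipeline, and transform names are hypothetical, and the exact query parameter name (shown here as `defer_validation`) should be verified:
```
PUT _transform/my-transform?defer_validation=true
{
  "source": { "index": "my-source" },
  "dest": { "index": "my-dest", "pipeline": "my-missing-pipeline" },
  "pivot": {
    "group_by": { "user": { "terms": { "field": "user.id" } } },
    "aggregations": { "avg_price": { "avg": { "field": "price" } } }
  }
}
```
If `my-missing-pipeline` still does not exist when the transform is started, the start fails.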
This test replaces the watch index after watcher got started.
This triggers watches being reloaded and while this happens the
trigger engine is paused, which disallows watches from being
triggered. At this time there are no watches in the .watches
index and I think this is just unlucky timing.
Reloading of watches happens in the background and
the watch state can be started when that happens.
For normal schedule trigger engines this is not an issue,
because watches that are meant to be triggered are triggered
when the engine triggers the next time. However for the
mock scheduled trigger engine this is different,
because watches are triggered programmatically and
there is no retry in this test.
I think just adding `timeWarp().trigger("mywatch");` inside
an `assertBusy(...)` is the right fix here. If it fails
because the mock schedule trigger engine is paused then
the test will try again. In the meantime the watches
can be reloaded, which then resumes the mock scheduled trigger engine.
Closes#50658
Currently, if an updateable synonym filter is included in a multiplexer filter,
it is not reloaded via the _reload_search_analyzers because the multiplexer
itself doesn't pass on the analysis mode of the filters it contains, so it's not
recognized as "updateable" in itself. Instead we can check and merge the
AnalysisMode settings of all filters in the multiplexer and use the resulting
mode (e.g. search-time only) for the multiplexer itself, thus making any synonym
filters contained in it reloadable. This, of course, will also make the
analyzers using the multiplexer be usable at search-time only.
Closes#50554
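A sketch of the setup this fixes: an updateable synonym filter wrapped in a multiplexer, followed by the reload call (index, filter, and analyzer names are illustrative):
```
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms_path": "analysis/synonym.txt",
          "updateable": true
        },
        "my_multiplexer": {
          "type": "multiplexer",
          "filters": [ "lowercase", "my_synonyms" ]
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_multiplexer" ]
        }
      }
    }
  }
}

POST /my-index/_reload_search_analyzers
```
With this change the multiplexer inherits the search-time-only analysis mode from `my_synonyms`, so the reload call now picks up edits to `analysis/synonym.txt`.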
This commit makes the "init" ILM step retryable. It also adds a test
where an index is created with a non-parsable index name and the step fails.
Related to #48183
This solves half of the problem in #46813 by moving the S3
tests to using the shared minio fixture so we at least have
some non-3rd-party, constantly running coverage on these tests.
This PR adds per-field metadata that can be set in the mappings and is later
returned by the field capabilities API. This metadata is completely opaque to
Elasticsearch but may be used by tools that index data in Elasticsearch to
communicate metadata about fields with tools that then search this data. A
typical example that has been requested in the past is the ability to attach
a unit to a numeric field.
In order to not bloat the cluster state, Elasticsearch requires that this
metadata be small:
- keys can't be longer than 20 chars,
- values can only be numbers or strings of no more than 50 chars - no inner
arrays or objects,
- the metadata can't have more than 5 keys in total.
Given that metadata is opaque to Elasticsearch, field capabilities don't try to
do anything smart when merging metadata about multiple indices; the union of
all field metadata is returned.
Here is how the meta might look in mappings:
```json
{
"properties": {
"latency": {
"type": "long",
"meta": {
"unit": "ms"
}
}
}
}
```
And then in the field capabilities response:
```json
{
"latency": {
"long": {
"searchable": true,
"aggreggatable": true,
"meta": {
"unit": [ "ms" ]
}
}
}
}
```
When there are no conflicts, values are arrays of size 1, but when there are
conflicts, Elasticsearch includes all unique values in this array, without
giving ways to know which index has which metadata value:
```json
{
"latency": {
"long": {
"searchable": true,
"aggreggatable": true,
"meta": {
"unit": [ "ms", "ns" ]
}
}
}
}
```
Closes#33267
This makes the "update-rollover-lifecycle-date" step, which is part of the
rollover action, retryable. It also adds an integration test to check the
step is retried and it eventually succeeds.
(cherry picked from commit 5bf068522deb2b6cd2563bcf80f34fdbf459c9f2)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
In security we currently monitor a set of files for changes:
- config/role_mapping.yml (or alternative configured path)
- config/roles.yml
- config/users
- config/users_roles
This commit prevents unnecessary reloading when the file change actually doesn't change the internal structure.
Backport of: #50207
Co-authored-by: Anton Shuvaev <anton.shuvaev91@gmail.com>
vector REST tests occasionally fail on 7.x because
we don't receive the expected response headers with deprecation warnings.
This happens as searches were executed against all indices including
internal indices, whose shards did not produce the expected warnings.
This PR ensures that searches are executed only against the expected
indices.
Closes#50716
In 7.x an internal API used for validating remote cluster does not throw, see #50420 for the
details. This change implements a workaround for remote cluster validation, only for 7.x branches.
fixes#50420
Switch from a 32 bit Java hash to a 128 bit Murmur hash for
creating document IDs from by/over/partition field values.
The 32 bit Java hash was not sufficiently unique, and could
produce identical numbers for relatively common combinations
of by/partition field values such as L018/128 and L017/228.
Fixes#50613
The end offset of a tokenizer is supposed to point one past the
end of the input, not to the end character of the input. The
ml_classic tokenizer was erroneously doing the latter.
* Add additional logging for ILM history store tests (#50624)
These tests use the same index name, making it hard to read logs when
diagnosing the failures. Additionally more information about the current
state of the index could be retrieved when failing.
This changes these two things in the hope of capturing more data about
why this fails on some CI nodes but not others.
Relates to #50353
This drops all remaining references to `BaseRestHandler.logger` which
has been deprecated for something like a year now. I replaced all of the
references with locally declared loggers which is so much less spooky
action at a distance to me.
Sharing a random generator may cause test failures as non-threadsafe random generators are periodically utilized in tests (see: https://github.com/elastic/elasticsearch/issues/50651)
This change calls `Randomness.get()` within the `bulkIndexWithRetry` method so that the returned `Random` object is only used in a single thread. Before, the member variable could have been shared between threads, which caused test failures.
* [ML][Inference] lang_ident model (#50292)
This PR contains a java port of Google's CLD3 compact NN model https://github.com/google/cld3
The ported model is formatted to fit within our inference model formatting and stored as a resource in the `:xpack:ml:` plugin and is under basic license.
The model is broken up into two major parts:
- Preprocessing through the custom embedding (based on CLD3's embedding layer)
- Pushing the embedded text through the two layers of fully connected shallow NN.
Main differences between this port and CLD3:
- We take advantage of Java's internal Unicode handling where possible (i.e. codepoints, characters, decoders, etc.)
- We do not trim down input text by removing duplicated tokens
- We do not encode doubles/floats as longs/integers.
This adds a new cluster privilege `monitor_snapshot` which is a restricted
version of `create_snapshot`, granting the same privileges to view
snapshot and repository info and status but not granting the actual
privilege to create a snapshot.
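For example, a read-only snapshot role using the new privilege (role name is hypothetical):
```
POST /_security/role/snapshot_viewer
{
  "cluster": [ "monitor_snapshot" ]
}
```
Users with this role can call the snapshot status and repository info APIs but cannot create snapshots.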
Co-authored-by: j-bean <anton.shuvaev91@gmail.com>
Some changes had to be made in order to make the test pass due to the removal of types.
Added some more assertions. The failure description in this comment [0] indicates that the rest handler couldn't be found. The test passes now.
I plan to merge this into master and see how CI reacts; if it handles this change well then I will also unmute this test in the 7.x branch.
Also check watch count after stopping watcher in test teardown and
disabled slm in smoke test watcher qa test.
Relates to #41172
0: https://github.com/elastic/elasticsearch/issues/41172#issuecomment-496993976
* Adds JavaDoc to `AbstractWireTestCase` and
`AbstractWireSerializingTestCase` so it is more obvious you should prefer
the latter if you have a choice
* Moves the `instanceReader` method out of `AbstractWireTestCase` because
it is no longer used.
* Marks a bunch of methods final so it is more obvious which classes are
for what.
* Cleans up the side effects of the above.
*Most* of our parsing can be done without passing any extra context into
the parser that isn't already part of the xcontent stream. While I was
looking around at the places that *do* need a context I found a few
places that were declared to need a context but don't actually need it.
This adds support for retrying AsyncActionSteps by triggering the async
step after ILM was moved back on the failed step (the async step we'll
be attempting to run after the cluster state reflects ILM being moved
back on the failed step).
This also marks the RolloverStep as retryable and adds an integration
test where the RolloverStep is failing to execute as the rolled over
index already exists to test that the async action RolloverStep is
retried until the rolled over index is deleted.
(cherry picked from commit 8bee5f4cb58a1242cc2ef4bc0317dae6c8be49d3)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Adds a `force` parameter to the delete data frame analytics
request. When `force` is `true`, the action force-stops the
jobs and then proceeds to the deletion. This can be used in
order to delete a non-stopped job with a single request.
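Usage sketch (job ID is hypothetical):
```
DELETE _ml/data_frame/analytics/my-analytics-job?force=true
```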
Closes#48124
Backport of #50553
We have about 800 `ObjectParsers` in Elasticsearch, about 700 of which
are final. This is *probably* the right way to declare them because in
practice we never mutate them after they are built. And we certainly
don't change the static reference. Anyway, this adds `final` to a bunch
of these parsers, mostly the ones in xpack and their "paired" parsers in
the high level rest client. I picked these just to have somewhere to
break up the change so it wouldn't be huge.
I found the non-final parsers with this:
```
diff \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*PARSER\s*=' {} \+ | sort) \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*final.*PARSER\s*=' {} \+ | sort) \
2>&1 | grep '^<'
```
Eclipse 4.13 shows a type mismatch error in the affected line because it cannot
correctly infer the boolean return type for the method call. Assigning return
value to a local variable resolves this problem.
XPackPlugin created an SSLService within the plugin constructor.
This has 2 negative consequences:
1. The service may be constructed based on a partial view of settings.
Other plugins are free to add setting values via the
additionalSettings() method, but this (necessarily) happens after
plugins have been constructed.
2. Any exceptions thrown during the plugin construction are handled
differently than exceptions thrown during "createComponents".
Since SSL configuration exceptions are relatively common, it is
far preferable for them to be thrown and handled as part of the
createComponents flow.
This commit moves the creation of the SSLService to
XPackPlugin.createComponents, and alters the sequence of some other
steps to accommodate this change.
Backport of: #49667
PR #44238 changed several links related to the Elasticsearch search request body API. This updates several places still using outdated links or anchors.
This will ultimately let us remove some redirects related to those link changes.
The docs/reference/redirects.asciidoc file stores a list of relocated or
deleted pages for the Elasticsearch Reference documentation.
This prunes several older redirects that are no longer needed and
don't require work to fix broken links in other repositories.
We are matching on the exact number of shards in this test, but may run into
snapshotting more than the single index created in it due to auto-created indices like
`.watcher`.
Fixed by making the test only take a snapshot of the single index used by this test.
Closes#50450
* Add ILM history store index (#50287)
* Add ILM history store index
This commit adds an ILM history store that tracks the lifecycle
execution state as an index progresses through its ILM policy. ILM
history documents store output similar to what the ILM explain API
returns.
An example document with ALL fields (not all documents will have all
fields) would look like:
```json
{
"@timestamp": 1203012389,
"policy": "my-ilm-policy",
"index": "index-2019.1.1-000023",
"index_age":123120,
"success": true,
"state": {
"phase": "warm",
"action": "allocate",
"step": "ERROR",
"failed_step": "update-settings",
"is_auto-retryable_error": true,
"creation_date": 12389012039,
"phase_time": 12908389120,
"action_time": 1283901209,
"step_time": 123904107140,
"phase_definition": "{\"policy\":\"ilm-history-ilm-policy\",\"phase_definition\":{\"min_age\":\"0ms\",\"actions\":{\"rollover\":{\"max_size\":\"50gb\",\"max_age\":\"30d\"}}},\"version\":1,\"modified_date_in_millis\":1576517253463}",
"step_info": "{... etc step info here as json ...}"
},
"error_details": "java.lang.RuntimeException: etc\n\tcaused by:etc etc etc full stacktrace"
}
```
These documents go into the `ilm-history-1-00000N` index to provide an
audit trail of the operations ILM has performed.
This history storage is enabled by default but can be disabled by setting
`index.lifecycle.history_index_enabled` to `false`.
Resolves#49180
* Make ILMHistoryStore.putAsync truly async (#50403)
This changes the `putAsync` method in `ILMHistoryStore` so that it never blocks.
Previously due to the way that the `BulkProcessor` works, it was possible
for `BulkProcessor#add` to block executing a bulk request. This was bad
as we may be adding things to the history store in cluster state update
threads.
This also moves the index creation to be done prior to the bulk request
execution, rather than being checked every time an operation was added
to the queue. This lessens the chance of the index being created, then
deleted (by some external force), and then recreated via a bulk indexing
request.
Resolves#50353
refactors source and dest validation, adds support for CCS, makes resolve work like reindex/search, allow aliased dest index with a single write index.
Fixes #49988, fixes #49851
relates #43201
Previously, during expression optimisation, CAST would be considered
nullable if the casted expression resulted in a NULL literal, and would
always be non-nullable otherwise. As a result, if CAST was wrapped by a
null check function like IS NULL or IS NOT NULL it was simplified to
TRUE/FALSE, eliminating the actual casting operation. So in case of an
expression with an erroneous casting like CAST('foo' AS DATETIME) IS NULL
it would be simplified to FALSE instead of throwing an Exception signifying
the attempt to cast 'foo' to a DATETIME type.
CAST now always returns Nullability.UNKNOWN except for the case where
its result evaluates to a constant NULL, where it returns Nullability.TRUE.
This way IS NULL/IS NOT NULL don't get simplified to FALSE/TRUE
and the CAST actually gets evaluated, resulting in an Exception being thrown.
Fixes: #50191
(cherry picked from commit 671e07a931cd828661e226cba22a5d38804a17a5)
* Update remote cluster stats to support simple mode (#49961)
Remote cluster stats API currently only returns useful information if
the strategy in use is the SNIFF mode. This PR modifies the API to
provide relevant information if the user is in the SIMPLE mode. This
information is the configured addresses, max socket connections, and
open socket connections.
* Send hostname in SNI header in simple remote mode (#50247)
Currently an intermediate proxy must route connections to the
appropriate remote cluster when using simple mode. This commit offers
an additional mechanism for the proxy to route the connections by
including the hostname in the TLS SNI header.
* Rename the remote connection mode simple to proxy (#50291)
This commit renames the simple connection mode to the proxy connection
mode for remote cluster connections. In order to do this, the mode specific
settings which we namespaced by their mode (ex: sniff.seed and
proxy.addresses) have been reverted.
* Modify proxy mode to support a single address (#50391)
Currently, the remote proxy connection mode uses a list setting for the
proxy address. This commit modifies this so that the setting is
proxy_address and only supports a single remote proxy address.
Avoid backwards incompatible changes for 8.x and 7.6 by removing type
restriction on compile and Factory. Factories may optionally implement
ScriptFactory. If so, then they can indicate determinism and thus
cacheability.
**Backport**
Relates: #49466
In order to ensure any persisted model state is searchable by the moment
the job reports itself as `stopped`, we need to refresh the state index
before completing.
This should fix the occasional failures we see in #50168 and #50313 where
the model state appears missing.
Closes #50168, closes #50313
Backport of #50322
This fixes support for nested fields
We now support fully nested, fully collapsed, or a mix of both on inference docs.
ES mappings allow the `_source` to be any combination of nested objects + dot delimited fields.
So, we should do our best to find the best path down the Map for the desired field.
Our REST infrastructure will reject requests that have a body where the
body of the request is never consumed. This ensures that we reject
requests on endpoints that do not support having a body. This requires
cooperation from the REST handlers though, to actually consume the body,
otherwise the REST infrastructure will proceed with rejecting the
request. This commit addresses an issue in the has privileges API where
we would prematurely try to reject a request for not having a username,
before consuming the body. Since the body was not consumed, the REST
infrastructure would instead reject the request as a bad request.
This commit adds removal of unused data frame analytics state
from the _delete_expired_data API (and, by extension, the ML daily
maintenance task). At the moment the potential state docs
include the progress document and state for regression and
classification analyses.
Backport of #50243
Backport #50244 to 7.x branch.
If a processor executes asynchronously and the ingest simulate api simulates with
multiple documents then the order of the documents in the response may not match
the order of the documents in the request.
Alexander Reelsen discovered this issue with the enrich processor with the following reproduction:
```
PUT cities/_doc/munich
{"zip":"80331","city":"Munich"}
PUT cities/_doc/berlin
{"zip":"10965","city":"Berlin"}
PUT /_enrich/policy/zip-policy
{
"match": {
"indices": "cities",
"match_field": "zip",
"enrich_fields": [ "city" ]
}
}
POST /_enrich/policy/zip-policy/_execute
GET _cat/indices/.enrich-*
POST /_ingest/pipeline/_simulate
{
"pipeline": {
"processors" : [
{
"enrich" : {
"policy_name": "zip-policy",
"field" : "zip",
"target_field": "city",
"max_matches": "1"
}
}
]
},
"docs": [
{ "_id": "first", "_source" : { "zip" : "80331" } } ,
{ "_id": "second", "_source" : { "zip" : "50667" } }
]
}
```
* fixed test compile error
Follow up to #49729
This change removes falling back to listing out the repository contents to find the latest `index-N` in write-mounted blob store repositories.
This saves 2-3 list operations on each snapshot create and delete operation. Also it makes all the snapshot status APIs cheaper (and faster) by saving one list operation there as well in many cases.
This removes the resiliency to concurrent modifications of the repository as a result and puts a repository in a `corrupted` state in case loading `RepositoryData` failed from the assumed generation.
This adds a new "xpack.license.upload.types" setting that restricts
which license types may be uploaded to a cluster.
By default all types are allowed (excluding basic, which can only be
generated and never uploaded).
This setting does not restrict APIs that generate licenses such as the
start trial API.
This setting is not documented as it is intended to be set by
orchestrators and not end users.
Backport of: #49418
In the yaml cluster upgrade tests, we start a scroll in a mixed-version cluster,
then attempt to continue the scroll after the upgrade is complete. This test
occasionally fails because the scroll can expire before the cluster is done
upgrading.
The current scroll keep-alive time is 5m. This PR bumps it to 10m, which gives a
good buffer since in failing tests the time was only exceeded by ~30 seconds.
Addresses #46529.
Backport of #49612.
The current Docker entrypoint script picks up environment variables and
translates them into -E command line arguments. However, since any tool
executed via `docker exec` doesn't run the entrypoint, this results in
a poorer user experience.
Therefore, refactor the env var handling so that the -E options are
generated in `elasticsearch-env`. These have to be appended to any
existing command arguments, since some CLI tools have subcommands and
-E arguments must come after the subcommand.
Also extract the support for `_FILE` env vars into a separate script, so
that it can be called from more than once place (the behaviour is
idempotent).
Finally, add noop -E handling to CronEvalTool for parity, and support
`-E` in MultiCommand before subcommands.
The testclusters shutdown code was killing child processes
of the ES JVM before the ES JVM. This causes any running
ML jobs to be recorded as failed, as the ES JVM notices that
they have disconnected from it without being told to stop,
as they would if they crashed. In many test suites this
doesn't matter because the test cluster will never be
restarted, but in the case of upgrade tests it makes it
impossible to test what happens when an ML job is running
at the time of the upgrade.
This change reverses the order of killing the ES process
tree such that the parent processes are killed before their
children. A list of children is stored before killing the
parent so that they can subsequently be killed (if they
don't exit by themselves as a side effect of the parent
dying).
Backport of #50175
Executing the data frame analytics _explain API with a config that contains
a field that is not in the includes list but at the same time is in the excludes
list results in trying to remove the field twice from the iterator. That causes
an `IllegalStateException`. This commit fixes this issue and adds a test that
captures the scenario.
Backport of #50192
* Remove BlobContainer Tests against Mocks
Removing all these weird mocks as asked for by #30424.
All these tests are now part of real repository ITs and otherwise left unchanged if they had
independent tests that didn't call the `createBlobStore` method previously.
The HDFS tests also get added coverage as a side-effect because they did not have an implementation
of the abstract repository ITs.
Closes#30424
Lucene 8.4 added support for "CONTAINS", therefore in this commit those
changes are integrated in Elasticsearch. This commit contains as well a
bug fix when querying with a geometry collection with "DISJOINT" relation.
The release builds use a production license key, and our rest test load
licenses that are signed by the dev license key.
This change adds the new enterprise license Rest tests to the
blacklist on release builds.
Backport of: #50163
Since 7.4, we switch from translog to Lucene as the source of history
for peer recoveries. However, operation-based recoveries are less likely
when performing a full cluster restart from
pre-7.4 because existing copies do not have PRRLs.
To remedy this issue, we fall back to using the translog in peer recoveries
if the recovering replica does not have a peer recovery retention lease,
and the replication group hasn't fully migrated to PRRLs.
Relates #45136
Today we do not use retention leases in peer recovery for closed indices
because we can't sync retention leases on closed indices. This change
allows that ability and adjusts peer recovery to use retention leases
for all indices with soft-deletes enabled.
Relates #45136
Co-authored-by: David Turner <david.turner@elastic.co>
This adds a new field for the inference processor.
`warning_field` is a place for us to write warnings provided from the inference call. When there are warnings we are not going to write an inference result. The goal of this is to indicate that the data provided was too poor or too different for the model to make an accurate prediction.
The user could optionally include the `warning_field`. When it is not provided, it is assumed no warnings were desired to be written.
The first of these warnings is when ALL of the input fields are missing. If none of the trained fields are present, we don't bother inferencing against the model and instead provide a warning stating that the fields were missing.
Also, this adds checks to not allow duplicated fields during processor creation.
This exchanges the direct use of the `Client` for `ResultsPersisterService`. State doc persistence will now retry. Failures to persist state will still not throw, but will be audited and logged.
This commit fixes a bug that caused the data frame analytics
_explain API to time out in a multi-node setup when the source
index was missing. When we try to create the extracted fields detector,
we check the index settings. If the index is missing that responds
with a failure that could be wrapped as a remote exception.
While we unwrapped correctly to check if the cause was an
`IndexNotFoundException`, we then proceeded to cast the original
exception instead of the cause.
Backport of #50176
This test was fixed as part of #49736 so that it used a
TokenService mock instance that was enabled, so that token
verification fails because the token is invalid and not because
the token service is not enabled.
When the randomly generated token we send decodes to a
version > 7.2, we need to have mocked a GetResponse for the call
that TokenService#getUserTokenFromId will make, otherwise this
hangs and times out.
Today the HTTP exporter settings can be set without the exporter type having been
configured to HTTP. When it is time to initialize the exporter, we can
blow up. Since this initialization happens on the cluster state applier
thread, it is quite problematic that we do not reject settings updates
where the type is not configured to HTTP, but there are HTTP exporter
settings configured. This commit addresses this by validating that the
exporter type is not only set, but is set to HTTP.
The "code_user" and "code_admin" reserved roles existed to support
code search which is no longer included in Kibana.
The "kibana_system" role included privileges to read/write from the
code search indices, but no longer needs that access.
Backport of: #50068
* [ML] Add graceful retry for anomaly detector result indexing failures (#49508)
All results indexing now retries the number of times configured in `xpack.ml.persist_results_max_retries`. The retries are done with a semi-random, exponential backoff.
* fixing test
This adds "enterprise" as an acceptable type for a license loaded
through the PUT _license API.
Internally an enterprise license is treated as having a "platinum"
operating mode.
The handling of License types was refactored to have a new explicit
"LicenseType" enum in addition to the existing "OperatingMode" enum.
By default (in 7.x) the GET license API will return "platinum" when an
enterprise license is active in order to be compatible with existing
consumers of that API.
A new "accept_enterprise" flag has been introduced to allow clients to
opt-in to receive the correct "enterprise" type.
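Usage sketch of the opt-in flag:
```
GET /_license?accept_enterprise=true
```
Without the flag, 7.x reports `"type": "platinum"` for an active enterprise license.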
Backport of: #49223
The `sparse_vector` REST tests occasionally fail on 7.x because we don't receive the expected response headers with deprecation warnings.
One theory as to what is happening is that there is an extra empty index present in addition to the test index. Since the search doesn't specify an index name, it hits both the test index and this extra empty index and shard responses from the extra index don't produce deprecation warnings. If not all shard responses contain the warning headers, then certain deprecation warnings can be lost (due to the bug described in #33936).
This PR tries to harden the `sparse_vector` tests by always specifying the index name during a search. This doesn't fix the root causes of the issue, but is good practice and can help avoid intermittent failures.
Addresses #49383.
The `ClassificationIT.testTwoJobsWithSameRandomizeSeedUseSameTrainingSet`
test was previously set up to just have 10 rows. With `training_percent`
of 50%, only 5 rows will be used for training. There is a good chance that
all 5 rows will be of one class, which results in failure.
This commit increases the rows to 100. Now 50 rows should be used for training
and the chance of failure should be very small.
Backport of #50072
Watcher logs when actions fail in ActionWrapper, but failures to
generate an email attachment are not logged and we thus only know the
type of the exception and not where/how it occurred.
Adjusts the subclasses of `TransportMasterNodeAction` to use their own loggers
instead of the one for the base class.
Relates #50056.
Partial backport of #46431 to 7.x.
Return a 401 in all cases when a request is submitted with an
access token that we can't consume. Before this change, we would
throw a 500 when a request came in with an access token that we
had generated but was then invalidated/expired and deleted from
the tokens index.
Resolves: #38866
Backport of #49736
This adds a new `randomize_seed` for regression and classification.
When not explicitly set, the seed is randomly generated. One can
reuse the seed in a similar job in order to ensure the same docs
are picked for training.
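A hedged sketch of pinning the seed so two jobs train on the same documents (names and values are hypothetical):
```
PUT _ml/data_frame/analytics/model-b
{
  "source": { "index": "my-data" },
  "dest": { "index": "model-b-results" },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 50,
      "randomize_seed": 12345
    }
  }
}
```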
Backport of #49990
The elasticsearch-node tools allow manipulating the on-disk cluster state. The tool is currently
unaware of plugins and will therefore drop custom metadata from the cluster state once the
state is written out again (as it skips over the custom metadata that it can't read). This commit
preserves unknown customs when editing on-disk metadata through the elasticsearch-node
command-line tools.
Today settings can declare dependencies on another setting. This
declaration is implemented so that if the declared setting is not set
when the declaring setting is, settings validation fails. Yet, in some
cases we want not only that the setting is set, but that it also has a
specific value. For example, with the monitoring exporter settings, if
xpack.monitoring.exporters.my_exporter.host is set, we not only want
that xpack.monitoring.exporters.my_exporter.type is set, but that it is
also set to local. This commit extends the settings infrastructure so
that this declaration is possible. The use of this in the monitoring
exporter settings will be implemented in a follow-up.
The `testReplaceChildren()` has been fixed for Pivot as part
of #49693.
Reverting: #49045
(cherry picked from commit 4b9b9edbcf2041a8619b65580bbe192bf424cebc)
When checking the cardinality of a field, the query should be taken into account. The user might know about some bad data in their index and want to filter down to the target_field values they care about.
Work in progress on the C++ side is increasing memory estimates
a bit and this test fails. At the time of this commit the memory
estimate when there is no source query is about 2MB. So I
am relaxing the test to assert the memory estimate is less than 1MB
instead of 500KB.
Backport of #49924
Step on the road to #49060.
This commit adds the logic to keep track of a repository's generation
across repository operations. See changes to package level Javadoc for the concrete changes in the distributed state machine.
It updates the write side of new repository generations to be fully consistent via the cluster state. With this change, no `index-N` will be overwritten for the same repository ever. So eventual consistency issues around conflicting updates to the same `index-N` are not a possibility any longer.
With this change the read side will still use listing of repository contents instead of relying solely on the cluster state contents.
The logic for that will be introduced in #49060. This retains the ability to externally delete the contents of a repository and continue using it afterwards for the time being. In #49060 the use of listing to determine the repository generation will be removed in all cases (except for full-cluster restart) as the last step in this effort.
Makes sure that CCR also properly works with _source disabled.
Changes one exception in LuceneChangesSnapshot, as the case of a missing _recovery_source
due to a missing lease was not properly bubbled up to CCR (testIndexFallBehind
was failing).
To recap, Attributes form the properties of a derived table.
Each LogicalPlan has Attributes as output since each one can be part of
a query and as such its result are sent to its consumer.
This change essentially removes the name id comparison so any changes
applied to existing expressions should work as long as the said
expressions are semantically equivalent.
This change enforces the hashCode and equals which has the side-effect
of using hashCode as identifiers for each expression.
By removing any property from an Attribute, the various components need
to look at the original source for comparison which, while annoying, should
prevent a reference from getting out of sync with its source due to
optimizations.
Essentially going forward there are only 3 types of NamedExpressions:
Alias - user-defined (implicit or explicit) name
FieldAttribute - field backed by Elasticsearch
ReferenceAttribute - a reference to another source acting as an
Attribute. Typically the Attribute of an Alias.
* Remove the usage of NamedExpression as basis for all Expressions.
Instead, restrict their use only for named context, such as projections
by using Aliasing instead.
* Remove different types of Attributes and allow only FieldAttribute,
UnresolvedAttribute and ReferenceAttribute. To avoid issues with
rewrites, resolve the references inside the QueryContainer so the
information always stays on the source.
* Side-effect, simplify the rules as the state for InnerAggs doesn't
have to be contained anymore.
* Improve ResolveMissingRef rule to handle references to named
non-singular expression tree against the same expression used up the
tree.
#49693 backport to 7.x
(cherry picked from commit 5d095e2173bcbf120f534a6f2a584185a7879b57)
In order to cache script results in the query shard cache, we need to
check if scripts are deterministic. This change adds a default method
to the script factories, `isResultDeterministic() -> false` which is
used by the `QueryShardContext`.
Script results were never cached and that does not change here. Future
changes will implement this method based on whether the results of the
scripts are deterministic or not and therefore cacheable.
Refs: #49466
**Backport**
This commit refactors the `IndexLifecycleRunner` to split out and
consolidate the number of methods that change state from within ILM. It
adds a new class `IndexLifecycleTransition` that contains a number of
static methods used to modify ILM's state. These methods all return new
cluster states rather than making changes themselves (they can be
thought of as helpers for modifying ILM state).
Rather than having multiple ways to move an index to a particular step
(like `moveClusterStateToStep`, `moveClusterStateToNextStep`,
`moveClusterStateToPreviouslyFailedStep`, etc (there are others)) this
now consolidates those into three with (hopefully) useful names:
- `moveClusterStateToStep`
- `moveClusterStateToErrorStep`
- `moveClusterStateToPreviouslyFailedStep`
In the move, I was also able to consolidate duplicate or redundant
arguments to these functions. Prior to this commit there were many calls
that provided duplicate information (both `IndexMetaData` and
`LifecycleExecutionState` for example) where the duplicate argument
could be derived from a previous argument with no problems.
With this split, `IndexLifecycleRunner` now contains the methods used to
actually run steps as well as the methods that kick off cluster state
updates for state transitions. `IndexLifecycleTransition` contains only
the helpers for constructing new states from given scenarios.
This also adds Javadocs to all methods in both `IndexLifecycleRunner`
and `IndexLifecycleTransition` (this accounts for almost all of the
increase in code lines for this commit). It also makes all methods be as
restrictive in visibility, to limit the scope of where they are used.
This refactoring is part of work towards capturing actions and
transitions that ILM makes, by consolidating and simplifying the places
we make state changes, it will make adding operation auditing easier.
Historically only two things happened in the final reduction:
empty buckets were filled, and pipeline aggs were reduced (since it
was the final reduction, this was safe). Usage of the final reduction
is growing however. Auto-date-histo might need to perform
many reductions on final-reduce to merge down buckets, CCS
may need to side-step the final reduction if sending to a
different cluster, etc
Having pipelines generate their output in the final reduce was
convenient, but is becoming increasingly difficult to manage
as the rest of the agg framework advances.
This commit decouples pipeline aggs from the final reduction by
introducing a new "top level" reduce, which should be called
at the beginning of the reduce cycle (e.g. from the SearchPhaseController).
This will only reduce pipeline aggs on the final reduce after
the non-pipeline agg tree has been fully reduced.
By separating pipeline reduction into their own set of methods,
aggregations are free to use the final reduction for whatever
purpose without worrying about generating pipeline results
which are non-reducible
* Copying the request is not necessary here. We can simply release it once the response has been generated and a lot of `Unpooled` allocations that way
* Relates #32228
* I think the issue that prevented that PR from being merged was solved by #39634 that moved the bulk index marker search to ByteBuf bulk access so the composite buffer shouldn't require many additional bounds checks (I'd argue the bounds checks we add, we save when copying the composite buffer)
* I couldn't necessarily reproduce much of a speedup from this change, but I could reproduce a very measurable reduction in GC time with e.g. Rally's PMC (4g heap node and bulk requests of size 5k saw a reduction in young GC time by ~10% for me)
This commit fixes a number of issues with data replication:
- Local and global checkpoints are not updated after the new operations have been fsynced, but
might capture a state before the fsync. The reason why this probably went undetected for so
long is that AsyncIOProcessor is synchronous if you index one item at a time, and hence working
as intended unless you have a high enough level of concurrent indexing. As we rely in other
places on the assumption that we have an up-to-date local checkpoint in case of synchronous
translog durability, there's a risk for the local and global checkpoints not to be up-to-date after
replication completes, and that this won't be corrected by the periodic global checkpoint sync.
- AsyncIOProcessor also has another "bad" side effect here: if you index one bulk at a time, the
bulk is always first fsynced on the primary before being sent to the replica. Further, if one thread
is tasked by AsyncIOProcessor to drain the processing queue and fsync, other threads can
easily pile more bulk requests on top of that thread. Things are not very fair here, and the thread
might continue doing a lot more fsyncs before returning (as the other threads pile more and
more on top), which blocks it from returning as a replication request (e.g. if this thread is on the
primary, it blocks the replication requests to the replicas from going out, and delaying
checkpoint advancement).
This commit fixes all these issues, and also simplifies the code that coordinates all the after
write actions.
Some extended testing with MS-SQL server and H2 (which agree on
results) revealed bugs in the implementation of WEEK related extraction
and diff functions.
Non-iso WEEK seems to be broken since #48209 because
of the replacement of Calendar and the change in the ISO rules.
ISO_WEEK failed for some edge cases around January 1st.
DATE_DIFF was previously based on non-iso WEEK extraction which seems
not to be the case.
Fixes: #49376
(cherry picked from commit 54fe7f57289c46bb0905b1418f51a00e8c581560)
This stems from a time where index requests were directly forwarded to
TransportReplicationAction. Nowadays they are wrapped in a BulkShardRequest, and this logic is
obsolete.
In contrast to prior PR (#49647), this PR also fixes (see b3697cc) a situation where the previous
index expression logic had an interesting side effect. For bulk requests (which had resolveIndex
= false), the reroute phase was waiting for the index to appear in case where it was not present,
and for all other replication requests (resolveIndex = true) it would right away throw an
IndexNotFoundException while resolving the name and exit. With #49647, every replication
request was now waiting for the index to appear, which was problematic when the given index
had just been deleted (e.g. deleting a follower index while it's still receiving requests from the
leader, where these requests would now wait up to a minute for the index to appear). This PR
now adds b3697cc on top of that prior PR to make sure to reestablish some of the prior behavior
where the reroute phase waits, for bulk requests, for the index to appear. That logic was in
place to ensure that when an index was created and not all nodes had learned about it yet,
the bulk would not fail somewhere in the reroute phase. This is now only restricted to the
situation where the current node has an older cluster state than the one that coordinated the
bulk request (which checks that the index is present). This also means that when an index is
deleted, we will no longer unnecessarily wait up to the timeout for the index to appear, and
instead fail the request.
Closes#20279
This adds a `_source` setting under the `source` setting of a data
frame analytics config. The new `_source` is reusing the structure
of a `FetchSourceContext` like `analyzed_fields` does. Specifying
includes and excludes for source allows selecting which fields
will get reindexed and will be available in the destination index.
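A hedged sketch of the new `_source` block (index names and field patterns are hypothetical):
```
PUT _ml/data_frame/analytics/my-job
{
  "source": {
    "index": "my-source-index",
    "_source": {
      "includes": [ "measurements.*" ],
      "excludes": [ "measurements.raw" ]
    }
  },
  "dest": { "index": "my-dest-index" },
  "analysis": { "outlier_detection": {} }
}
```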
Closes#49531
Backport of #49690
* Make BlobStoreRepository Aware of ClusterState (#49639)
This is a preliminary to #49060.
It does not introduce any substantial behavior change to how the blob store repository
operates. What it does is to add all the infrastructure changes around passing the cluster service to the blob store, associated test changes and a best effort approach to tracking the latest repository generation on all nodes from cluster state updates. This brings a slight improvement to the consistency
by which non-master nodes (or master directly after a failover) will be able to determine the latest repository generation. It does not however do any tricky checks for the situation after a repository operation
(create, delete or cleanup) that could theoretically be used to get even greater accuracy to keep this change simple.
This change does not in any way alter the behavior of the blobstore repository other than adding a better "guess" for the value of the latest repo generation and is mainly intended to isolate the actual logical change to how the
repository operates in #49060
- Improves HTTP client hostname verification failure messages
- Adds "DiagnosticTrustManager" which logs certificate information
when trust cannot be established (hostname failure, CA path failure,
etc)
These diagnostic messages are designed so that many common TLS
problems can be diagnosed based solely (or primarily) on the
elasticsearch logs.
These diagnostics can be disabled by setting
xpack.security.ssl.diagnose.trust: false
Backport of: #48911
Add a mirror of the maven repository of the shibboleth project
and upgrade opensaml and related dependencies to the latest
available version.
Resolves: #44947
This commit adds a function in NodeClient that allows tracking the progress
of a search request locally. Progress is tracked through a SearchProgressListener
that exposes query and fetch responses as well as partial and final reduces.
This new method can be used by modules/plugins inside a node in order to track the
progress of a local search request.
Relates #49091
This stems from a time where index requests were directly forwarded to
TransportReplicationAction. Nowadays they are wrapped in a BulkShardRequest, and this logic is
obsolete.
Closes#20279
This change adds a dynamic cluster setting named `indices.id_field_data.enabled`.
When set to `false` any attempt to load the fielddata for the `_id` field will fail
with an exception. The default value in this change is set to `false` in order to prevent
fielddata usage on this field for future versions, but it will be set to `true` when backporting
to 7.x. When the setting is set to true (manually or by default in 7.x) the loading will also issue
a deprecation warning since we want to disallow fielddata entirely when https://github.com/elastic/elasticsearch/issues/26472
is implemented.
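A usage sketch of the new dynamic setting:
```
PUT /_cluster/settings
{
  "persistent": {
    "indices.id_field_data.enabled": false
  }
}
```
With this in place, sorting or aggregating on `_id` fails instead of loading fielddata.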
Closes#43599
Authentication has grown more complex with the addition of new realm
types and authentication methods. When user authentication does not
behave as expected it can be difficult to determine where and why it
failed.
This commit adds DEBUG and TRACE logging at key points in the
authentication flow so that it is possible to gain addition insight
into the operation of the system.
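For example, one way to surface this new logging at runtime (the exact logger package is an assumption):
```
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.xpack.security.authc": "DEBUG"
  }
}
```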
Backport of: #49575
The AuthenticationService has a feature to "smart order" the realm
chain so that whichever realm was the last one to successfully
authenticate a given user will be tried first when that user tries to
authenticate again.
There was a bug where the building of this realm order would
incorrectly drop the first realm from the default chain unless that
realm was the "last successful" realm.
In most cases this didn't cause problems because the first realm is
the reserved realm and so it is unusual for a user that authenticated
against a different realm to later need to authenticate against the
reserved realm.
This commit fixes that bug and adds relevant asserts and tests.
Backport of: #49473
All the implementations of `EsBlobStoreTestCase` use the exact same
bootstrap code that is also used by their implementation of
`EsBlobStoreContainerTestCase`.
This means all tests might as well live under `EsBlobStoreContainerTestCase`
saving a lot of code duplication. Also, there was no HDFS implementation for
`EsBlobStoreTestCase` which is now automatically resolved by moving the tests over
since there is a HDFS implementation for the container tests.
Grouping By YEAR() is translated to a histogram aggregation, but
previously if there was a scalar function involved (e.g.:
`YEAR(date + INTERVAL 2 YEARS)`), there was no proper script created
and the histogram was applied on a field with name: `date + INTERVAL 2 YEARS`,
which doesn't make sense and resulted in a null result.
Check the underlying field of YEAR() and if it's a function call
`asScript()` to properly get the painless script on which the histogram
is applied.
Fixes: #49386
(cherry picked from commit 93c37abc943d00d3a14ba08435d118a6d48874c7)
Add TRUNC as alias to already implemented TRUNCATE
numeric function which is the flavour supported by
Oracle and PostgreSQL.
Relates to: #41195
(cherry picked from commit f2aa7f0779bc5cce40cc0c1f5e5cf1a5bb7d84f0)
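A hedged usage sketch through the SQL REST endpoint; both calls should produce the same value:
```
POST /_sql?format=txt
{
  "query": "SELECT TRUNC(123.456, 2) AS t1, TRUNCATE(123.456, 2) AS t2"
}
```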
We depend on the number of data frame rows in order to report progress
for the writing of results, the last phase of a job run. However, results
include objects other than just the data frame rows (e.g., progress, inference model, etc.).
The problem this commit fixes is that if we receive the last data frame row results
we'll report that progress is complete even though we still have more results to process
potentially. If the job gets stopped for any reason at this point, we will not be able
to restart the job properly as we'll think that the job was completed.
This commit addresses this by limiting the max progress we can report for the
writing_results phase to 98 until the results processor completes.
At the end, when the process is done, we set the progress to 100.
The commit also improves failure capturing and reporting in the results processor.
Backport of #49551
Previously, CaseProcessor was pre-calculating (called `process()`)
on all the building elements of a CASE/IIF expression, not only the
conditions involved but also the results, as well as the final else result.
In case one of those results had an erroneous calculation
(e.g.: division by zero) this was executed and resulted in
an Exception being thrown, even if that result was not used because of
the condition guarding it, e.g.:
```
SELECT CASE WHEN myField1 = 0 THEN NULL ELSE myField2 / myField1 END
FROM test;
```
Fixes: #49388
(cherry picked from commit dbd169afc98686cae1bc72024fad0ca32b272efd)
This commit back ports three commits related to enabling the simple
connection strategy.
Allow simple connection strategy to be configured (#49066)
Currently the simple connection strategy only exists in the code. It
cannot be configured. This commit moves in the direction of allowing it
to be configured. It introduces settings for the addresses and socket
count. Additionally it introduces new settings for the sniff strategy
so that the more generic number of connections and seed node settings
can be deprecated.
The simple settings are not yet registered as the registration is
dependent on follow-up work to validate the settings.
Ensure at least 1 seed configured in remote test (#49389)
This fixes#49384. Currently when we select a random subset of seed
nodes from a list, it is possible for 0 seeds to be selected. This test
depends on at least 1 seed being selected.
Add the simple strategy to cluster settings (#49414)
This is related to #49067. This commit adds the simple connection
strategy settings and strategy mode setting to the cluster settings
registry. With these changes, the simple connection mode can be used.
Additionally, it adds validation to ensure that settings cannot be
misconfigured.
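A hedged sketch of configuring the proxy mode based on the settings described above (setting names are assumptions inferred from this description):
```
# elasticsearch.yml (assumed setting names, verify against the docs)
cluster.remote.my_remote.mode: proxy
cluster.remote.my_remote.proxy_address: "proxy.example.com:9400"
```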
The categorization job wizard in the ML UI will use this
information when showing the effect of the chosen categorization
analyzer on a sample of input.
Before this change, excluding an unsupported field resulted in
an error message explaining that the excluded field could not be
detected, as if it did not exist. This error message is confusing.
This commit changes this so that there is no error in this
scenario. When excluding a field that does exist but has
automatically been excluded from the analysis there is no harm
(unlike excluding a missing field which could be a typo).
Backport of #49535
This test must check for state `SUCCESS` as well. `SUCCESS` in
`SnapshotsInProgress` means "all data nodes finished snapshotting successfully but master must still finalize the snapshot in the repo".
`SUCCESS` does not mean that the snapshot is actually fully finished in this object.
You can easily reproduce the scenario in #49303 that has an in-progress snapshot in `SUCCESS` state
by waiting 20s before running the busy assert loop on the snapshot status so that all steps but the blocked
finalization can finish.
Closes#49303
This commit replaces the _estimate_memory_usage API with
a new API, the _explain API.
The API consolidates information that is useful before
creating a data frame analytics job.
It includes:
- memory estimation
- field selection explanation
Memory estimation is moved here from what was previously
calculated by the _estimate_memory_usage API.
Field selection is a new feature that explains to the user
whether each available field was selected for inclusion in the
analysis and, when a field was not included, the reason why.
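As a rough model of the information described above (illustrative names, not the exact API schema):
```
import java.util.List;

// Illustrative model of the _explain output described above; these are
// not the exact API field names or classes.
record FieldSelection(String name, boolean included, String reason) {}

record ExplainResult(long expectedMemoryBytes, List<FieldSelection> fieldSelection) {}
```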
Backport of #49455
This commit enhances the required pipeline functionality by changing it
so that default/request pipelines can also be executed, with the required
pipeline always executed last. This gives users the flexibility to
execute their own indexing pipelines while still ensuring that any required
pipeline is also executed. Since such pipelines are executed last, we
change the name of required pipelines to final pipelines.
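Conceptually, the ordering is: request/default pipeline first, final pipeline always last. A minimal sketch of that ordering (hypothetical types, not the actual ingest code):
```
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the ordering: the request/default pipeline runs
// first and the final pipeline always runs last.
final class PipelineOrdering {
    record Doc(Map<String, Object> source) {}

    static Doc index(Doc doc,
                     UnaryOperator<Doc> requestOrDefaultPipeline,
                     UnaryOperator<Doc> finalPipeline) {
        Doc intermediate = requestOrDefaultPipeline != null ? requestOrDefaultPipeline.apply(doc) : doc;
        // the final pipeline cannot be skipped by supplying a request pipeline
        return finalPipeline != null ? finalPipeline.apply(intermediate) : intermediate;
    }
}
```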
Add extra checks to prevent the ConstantFolding rule from trying to fold
the CASE/IIF functions early, before the SimplifyCase rule gets applied.
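In spirit, the extra check just leaves CASE/IIF nodes alone so SimplifyCase can prune unused branches first; a self-contained sketch (hypothetical types, not the actual rule):
```
// Hypothetical, self-contained sketch: constant folding skips CASE/IIF
// nodes so SimplifyCase can prune unused branches first.
interface Expression { boolean foldable(); Object fold(); }
interface CaseLike extends Expression {}   // stands in for CASE/IIF nodes

final class ConstantFoldingSketch {
    static Expression apply(Expression e) {
        if (e instanceof CaseLike) {
            return e;                       // leave folding until after SimplifyCase
        }
        return e.foldable() ? literal(e.fold()) : e;
    }

    static Expression literal(Object value) {
        return new Expression() {
            public boolean foldable() { return true; }
            public Object fold() { return value; }
        };
    }
}
```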
Fixes: #49387
(cherry picked from commit f35c9725350e35985d8dd3001870084e1784a5ca)
This change automatically pre-sorts search shards on search requests that use a primary sort based on the value of a field. When possible, the can_match phase will extract the min/max (depending on the provided sort order) values of each shard and use them to pre-sort the shards prior to running the subsequent phases.
This feature can be useful to ensure that shards containing recent data are searched first, so that intermediate merges have more chance of containing contiguous data (think of date_histogram, for instance), but it could also be used in a follow-up to early-terminate sorted top-hits queries that don't require the total hit count. The latter could significantly speed up the retrieval of the most/least
recent documents from time-based indices.
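A rough sketch of the pre-sorting idea (hypothetical types; the real logic lives in the can_match phase):
```
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: order shards by the min (for ascending sort) or
// max (for descending sort) value of the primary sort field.
record ShardHint(String shardId, long minValue, long maxValue) {}

final class ShardPreSort {
    static List<ShardHint> preSort(List<ShardHint> shards, boolean ascending) {
        Comparator<ShardHint> cmp = ascending
            ? Comparator.comparingLong(ShardHint::minValue)
            : Comparator.comparingLong(ShardHint::maxValue).reversed();
        return shards.stream().sorted(cmp).toList();
    }
}
```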
Relates #49091
This commit adds a deprecation warning when starting
a node where either of the server contexts
(xpack.security.transport.ssl and xpack.security.http.ssl)
meets either of these conditions:
1. The server lacks a certificate/key pair (i.e. neither
ssl.keystore.path nor ssl.certificate is configured)
2. The server has some ssl configuration, but ssl.enabled is not
specified. This new validation does not care whether ssl.enabled is
true or false (though other validation might); it simply makes it
an error to configure server SSL without being explicit about
whether to enable that configuration.
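The two conditions can be sketched as a single predicate (hypothetical helper, not the actual validation code):
```
final class SslDeprecationCheck {
    // Hypothetical sketch of the two conditions described above; a null
    // sslEnabled means the setting was not specified at all.
    static boolean shouldWarn(boolean hasKeystorePath, boolean hasCertificate,
                              boolean hasAnySslSettings, Boolean sslEnabled) {
        boolean missingKeyMaterial = !hasKeystorePath && !hasCertificate;     // condition 1
        boolean enabledNotExplicit = hasAnySslSettings && sslEnabled == null; // condition 2
        return missingKeyMaterial || enabledNotExplicit;
    }
}
```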
Backport of: #45892
Backport of #48277
Otherwise integration tests may fail if the monitoring interval is low:
```
[2019-10-21T09:57:25,527][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [integTest-0] fatal error in thread [elasticsearch[integTest-0][generic][T#4]], exiting
java.lang.AssertionError: initial cluster state not set yet
at org.elasticsearch.cluster.service.ClusterApplierService.state(ClusterApplierService.java:208) ~[elasticsearch-7.6.0-SNAPSHOT.jar:7.6.0-SNAPSHOT]
at org.elasticsearch.cluster.service.ClusterService.state(ClusterService.java:125) ~[elasticsearch-7.6.0-SNAPSHOT.jar:7.6.0-SNAPSHOT]
at org.elasticsearch.xpack.monitoring.MonitoringService$MonitoringExecution$1.doRun(MonitoringService.java:231) ~[?:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.0-SNAPSHOT.jar:7.6.0-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.6.0-SNAPSHOT.jar:7.6.0-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
```
I ran into this when lowering the monitoring interval when investigating
enrich monitoring test: #48258
Fix force-stopping a transform if the indexer state hasn't been written and/or is set to STOPPED. In certain situations the transform could not be stopped, which meant the task could not be removed. Introduces improved abstraction in order to better test state handling in the future.
* SLM set the operation mode to RUNNING on first run
Set the SLM operation mode to RUNNING when setting the first SLM lifecycle
policy. Historically, SLM was not decoupled from ILM, but now they are
independent components. Setting the SLM operation mode to whatever the ILM running
mode was when we set the first SLM lifecycle policy was a remnant of those
times.
* SLM update package info
* SLM suppress unused warning
* SLM use logger for the correct class
* SLM Add integration test for operation mode
* Use ESSingleNodeTestCase instead of ESIntegTestCase
(cherry picked from commit 4ad3d93f89d03bf9a25685a990d1a439f33ce0e6)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
If a datafeed is stopped normally and force stopped at the same
time then it is possible that the force stop removes the
persistent task while the normal stop is performing actions.
Currently this causes the normal stop to error but, since
stopping a stopped datafeed is not an error, this doesn't make
sense. Instead, the force stop should just take precedence.
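One way to read "force stop takes precedence": when a normal stop finds the persistent task already gone, treat that as success. A sketch under that assumption (hypothetical names):
```
import org.elasticsearch.ResourceNotFoundException;
import java.util.function.Consumer;

final class StopDatafeedSketch {
    // Hypothetical sketch: a missing persistent task during a normal stop
    // means a force stop already removed it, so treat it as success.
    static void onStopFailure(Exception e, Runnable onSuccess, Consumer<Exception> onFailure) {
        if (e instanceof ResourceNotFoundException) {
            onSuccess.run();      // force stop won the race; nothing left to do
        } else {
            onFailure.accept(e);
        }
    }
}
```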
This is a followup to #49191 and should really have been
included in the changes in that PR.
This commit moves the async calls required to retrieve the components
that make up `ExtractedFieldsExtractor` out of `DataFrameDataExtractorFactory`
and into a dedicated `ExtractorFieldsExtractorFactory` class.
A few more refactorings are performed:
- The detector no longer needs the results field. Instead, it knows
whether to use it or not based on whether the task is restarting.
- We pass more accurately whether the task is restarting or not.
- The validation of fields that have a cardinality limit
is now performed in the detector after retrieving the
respective cardinalities.
Backport of #49315
This is a pure code-rearrangement refactor. Logic for which specific ValuesSource instance to use for a given type (e.g. script or field) moved out of ValuesSourceConfig and into CoreValuesSourceType (previously just ValuesSourceType; we extract an interface for future extensibility). ValuesSourceConfig still selects which case to use, and then the ValuesSourceType instance knows how to construct the ValuesSource for that case.
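The shape of the refactor is enum-as-strategy: each type constant knows how to build its ValuesSource, while the config only picks the case. A much-simplified sketch (not the actual classes or signatures):
```
// Much-simplified sketch of the enum-as-strategy shape; these are not
// the actual Elasticsearch classes or signatures.
interface ValuesSource {}

enum CoreValuesSourceTypeSketch {
    NUMERIC { ValuesSource fromField(String field) { return new ValuesSource() {}; } },
    BYTES   { ValuesSource fromField(String field) { return new ValuesSource() {}; } };

    // each constant knows how to construct its own ValuesSource
    abstract ValuesSource fromField(String field);
}
```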
This commit changes the ThreadContext to just use a regular ThreadLocal
instead of the Lucene CloseableThreadLocal. The CloseableThreadLocal solves
issues with ThreadLocals that are no longer needed during runtime, but
the ThreadContext is needed for the runtime of the
node and is typically not closed until the node closes, so we miss
out on the benefits that this class provides.
Additionally, by removing the close logic, we simplify code in other
places that deal with exceptions and tracking to see if it happens when
the node is closing.
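Schematically, the change is small (a sketch, not the exact ThreadContext code):
```
// Schematic of the change: a plain ThreadLocal with a default value
// replaces Lucene's CloseableThreadLocal, removing the need for close().
final class ThreadContextSketch {
    private static final Object DEFAULT_CONTEXT = new Object();

    // before: CloseableThreadLocal<Object> local = new CloseableThreadLocal<>() { ... };
    private final ThreadLocal<Object> local = ThreadLocal.withInitial(() -> DEFAULT_CONTEXT);

    Object current() {
        return local.get();
    }
}
```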
Closes #42577