Adds a secure and reloadable SECURE_AUTH_PASSWORD setting to allow keystore entries in the form "xpack.monitoring.exporters.*.auth.secure_password" to securely supply passwords for monitoring HTTP exporters. Also deprecates the insecure `AUTH_PASSWORD` setting.
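As an illustrative sketch (the exporter name `cloud` is hypothetical), the new setting would be supplied and reloaded like this:
```
# Store the password in the keystore under the new secure setting
bin/elasticsearch-keystore add xpack.monitoring.exporters.cloud.auth.secure_password

# Because the setting is reloadable, pick it up without a node restart
curl -X POST "localhost:9200/_nodes/reload_secure_settings"
```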
We suspect the flakiness could’ve come from the fact that the rollover
step used to create the new index and roll the write alias to the new
index in separate cluster state updates. So the assertion that the
rolled index exists could’ve passed in the test but, before the
alias was rolled over to the new index, the subsequent write we execute
in the test (namely
`indexDocs("test_user", "x-pack-test-password", "foo_alias", 1)`)
would’ve sent the new document to the source index (i.e. foo-logs-000001).
This would leave the source index containing 3 documents and the rolled
index (foo-logs-000002) containing 0 documents.
However, we have fixed this and the rollover step now executes the “create
index and roll alias” steps in a single cluster state update, so this
situation should not occur anymore.
(cherry picked from commit 834261c4fe7dd93f437eeec43c00d01ff2279f86)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
* REST: Test: Fix the `accept_enterprise` parameter for Get License API (#51527)
The Get License API specifies the `accept_enterprise` parameter as a `boolean`:
0ca5cb8cb6/x-pack/plugin/src/test/resources/rest-api-spec/api/license.get.json (L22-L27)
In the test, however, a `string` was passed, which makes the test compilation fail in the Go client.
(cherry picked from commit e2a2169b3d44592057c143253bb56375ed3e4268)
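For illustration, a request that exercises the parameter with a proper boolean value (host/port assumed):
```
# accept_enterprise is a boolean, not a string
curl "localhost:9200/_license?accept_enterprise=true"
```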
* Fix the SQL API documentation in REST specification (#51534)
This patch fixes the SQL REST API documentation to conform to the current schema.
(cherry picked from commit c8b6a849852699883086a6ada42279f2f68d7e07)
* Fix the "slices" parameter for the Delete By Query API in the REST specification (#51535)
This patch updates the type of the `slices` parameter in the Delete By Query API: according to
[the documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-slice),
it can be set to "auto", but the type in the REST specification allowed only numerical values.
This prevented people from setting the parameter to "auto", e.g. in the Go client,
which generates source from the specification, and sets the corresponding Go
type as number.
The patch uses the `|` notation, which we have discussed previously for encoding
a "polymorphic" parameter like this.
Related: https://github.com/elastic/go-elasticsearch/issues/77
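A hedged sketch of both accepted forms (index name hypothetical):
```
# "slices" accepts a number...
curl -X POST "localhost:9200/my-index/_delete_by_query?slices=5" \
  -H 'Content-Type: application/json' -d '{"query": {"match_all": {}}}'

# ...or the string "auto"
curl -X POST "localhost:9200/my-index/_delete_by_query?slices=auto" \
  -H 'Content-Type: application/json' -d '{"query": {"match_all": {}}}'
```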
* Fix the Enrich API documentation in REST specification (#51528)
This patch fixes the REST API documentation for the Enrich APIs to conform to the current schema.
(cherry picked from commit 59f28f4f2feeba3f6d2f0b632410577eacb28121)
Do the index refresh of the internal transform index with the system user
instead of the calling user, which does not have sufficient rights
if security is enabled.
Fixes #51728
While we use `== false` as a more visible form of boolean negation
(instead of `!`), the true case is implied and the true value does not
need to be explicitly checked. This commit converts cases that have slipped
into the code checking for `== true`.
* Fix SnapshotLifecycleRestIT.testFullPolicySnapshot
This test was previously missing some key information in its failure output. This captures that
information and adds logging at each step so we can determine the cause *if* it fails again.
Resolves #50358
This setting was introduced to reduce the time taken by tests that shut
nodes down, such as `MlDistributedFailureIT` and `NetworkDisruptionIT`.
However, it is unfortunate to have to set it to an explicit value in
production. In addition, and most importantly, dynamically choosing the value
for this setting makes it impossible to adopt static index template configs
that we register via `IndexTemplateRegistry`, which we need to use in order to start
registering ILM policies for the ML indices.
This commit removes this setting from our templates. I ran the tests a few times and could
not see execution times differing significantly.
Backport of #51740
This addresses another race condition that could yield this test flaky.
(cherry picked from commit d20d90aceb2b687239654d6f013f61f7f4cc1512)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
* Change index.lifecycle.step.master_timeout to indices.lifecycle.step.master_timeout
This changes setting name from `index.lifecycle.step.master_timeout` to
`indices.lifecycle.step.master_timeout` to avoid confusion about its scope.
`index.*` settings are recognized as index-level settings, but this one is node level.
Relates to #51698
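Since this is a node-level dynamic setting (see the later entry introducing it), it would be set via the cluster settings API rather than index settings; a sketch with an assumed value:
```
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"indices.lifecycle.step.master_timeout": "30s"}}'
```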
Currently when an ILM policy finishes its execution, the index moves into the `TerminalPolicyStep`,
denoted by a completed/completed/completed phase/action/step lifecycle execution state.
This commit changes the behavior so that the index lifecycle execution state halts at the last
configured phase's `PhaseCompleteStep`, so for instance, if an index were configured with a policy
containing a `hot` and `cold` phase, the index would stop at the `cold/complete/complete`
`PhaseCompleteStep`. This allows an ILM user to update the policy to add any later phases and have
indices configured to use that policy pick up execution at the newly added "later" phase. For
example, if a `delete` phase were added to the policy specified above, the index would then move
from `cold/complete/complete` into the `delete` phase.
Relates to #48431
This adds logic to handle paging problems when the ID pattern + tags reference models stored as resources.
Most of the complexity comes from handling the case where a model stored as a resource falls at the start or the end of a page, or where we are on the last page.
These changes help users prepare for migration to the next major
release (v8.0.0) with regard to the breaking change of realm order config.
Warnings are added for when:
* A realm does not have an order config
* Multiple realms have the same order config
The warning messages are added to both the deprecation API and the logs.
The main reasons for doing this are: 1) there is currently no automatic relay
between the two; 2) the deprecation API is under the Basic license and we need logging
for OSS.
This commit switches the strategy for managing dot-prefixed indices that
should be hidden indices from using "fake" system indices to an explicit
exclusions list that must be updated when those indices are converted to
hidden indices.
* Optimize not-equalities in con-/disjunctions
This commit adds optimisations of not-equalities in conjunctions and
disjunctions:
* for conjunctions, the not-equality can be optimized away when applied
together with a range or inequality, in case the not-equality point
falls outside the domain of the latter condition; if it's on the border,
it will modify the bound to simply exclude the equality, if present;
otherwise no optimisation can be applied;
* for disjunctions, the not-equals could filter away the ranges and
inequalities, unless these include an equality on the bound, in which
case the entire condition becomes always true, but this would influence
the score() function, so it's been omitted;
* fix aggregations of inequalities in ranges
This commit fixes the loop that aggregates inequalities into ranges:
- it won't advance the outer loop index in case of a merge, since the
current element is removed;
- it will break the inner loop, since comparison against the element
selected in the outer loop can't continue, as it had been removed.
(cherry picked from commit 789724ac2cc726de603849b4eeb8194da7528bcc)
* Rename ILM history index enablement setting
The previous setting was `index.lifecycle.history_index_enabled`, this commit changes it to
`indices.lifecycle.history_index_enabled` to indicate this is not an index-level setting (it's node
level).
* [ML][Inference] Fix weighted mode definition (#51648)
Weighted mode inaccurately assumed that the "max value" of the input values would be the maximum class value. This does not make sense.
Weighted Mode should know how many classes there are. Hence the new parameter `num_classes`, which indicates the maximum class value to be expected.
We need to force flush to make the last commit safe; otherwise, we might
fail to open FrozenEngine. Note that we force flush before closing a
shard.
Closes #51620
Three fixes for when the `compressed_definition` is utilized on PUT
* Update the inflate byte limit to be the minimum of 10% of the max heap, or 1GB (what it was previously)
* Stream data directly to the JSON parser, so if it is invalid, we don't have to inflate the whole stream to find out
* Throw when the maximum number of bytes is reached, indicating why the request was rejected
Previously, if YEAR() was used as an ORDER BY argument without being
wrapped in another scalar (e.g. YEAR(birth_date) + 10), no script
ordering was used; instead the underlying field (e.g. birth_date)
was used as a performance optimisation. This works correctly if
YEAR() is the only ORDER BY arg, but if further args are used as tie
breakers for the ordering, wrong results are produced. This is because
two rows with different birth_date values but in the same year are not tied,
as the underlying ordering is on birth_date and not on
YEAR(birth_date), and the following ORDER BY args are ignored.
Remove this optimisation for YEAR() to avoid incorrect results in
such cases.
As a consequence another bug is revealed: scalar functions on top
of nested fields produce scripted sorting/filtering which is not yet
supported. In such cases no error was thrown but instead all values for
such nested fields were null and were passed to the script implementing
the sorting/filtering, producing incorrect results.
Detect such cases and throw a validation exception.
Fixes: #51224
(cherry picked from commit f41efd6753dc3650a7eabb3e07b02b3b32c5704c)
Add a verification that full-text search functions `MATCH()` and `QUERY()`
are not allowed in the SELECT clause, so that a nice error message is
returned to the user early instead of an "ugly" exception.
Fixes: #47446
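For illustration, a query of the now-rejected shape (table and field names hypothetical):
```
# Returns an early validation error instead of an "ugly" exception
curl -X POST "localhost:9200/_sql" -H 'Content-Type: application/json' \
  -d "{\"query\": \"SELECT MATCH(first_name, 'Alice') FROM emp\"}"
```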
This commit creates a new index privilege named `maintenance`.
The privilege grants the following actions: `refresh`, `flush` (including synced flush),
and `force-merge`. Previously the actions were only under the `manage` privilege
which in some situations was too permissive.
Co-authored-by: Amir H Movahed <arhd83@gmail.com>
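A minimal sketch of a role using the new privilege (role and index names hypothetical):
```
curl -X PUT "localhost:9200/_security/role/index_maintainer" \
  -H 'Content-Type: application/json' \
  -d '{"indices": [{"names": ["logs-*"], "privileges": ["maintenance"]}]}'
```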
Datafeeds being closed while starting could result in an NPE. This was
handled as any other failure, masking out the NPE. However, this
conflicts with the changes in #50886.
Related to #50886 and #51302
This PR tries to address the intermittent vector test failures on 7.x by making
sure we create indices with one shard.
The fix is based on this theory as to what's happening:
* On 7.x, the default number of shards is 1, but in REST tests we randomly use
2 in order to cover the multiple shards case. In the failing test run, we use 2
shards and all documents end up on only one shard.
* During a search, the response from the empty shard doesn't produce
deprecation warnings because we never try to execute the script. If not all
shard responses contain the warning headers, then certain deprecation warnings
can be lost (due to the bug described in #33936).
Addresses #50716.
Relates to #50061.
Prior to the change the watcher index listener didn't implement the
`postIndex(ShardId, Engine.Index, Engine.IndexResult)` method. This
caused document level exceptions like VersionConflictEngineException
to be ignored. This commit fixes this.
The watcher indexing listener did implement the `postIndex(ShardId, Engine.Index, Exception)`
method, but that only handles engine level exceptions.
This change also unmutes the SmokeTestWatcherTestSuiteIT#testMonitorClusterHealth test again.
Relates to #32299
Backport of #51526.
Previous the formatter was breaking simple if/else statements (i.e.
without braces) onto separate lines, which could be fragile because the
formatter cannot also introduce braces. Instead, keep such expressions
on the same line.
The timeout.tcp_read AD/LDAP realm setting, despite the low-level
allusion, controls the time interval the realms wait for a response for
a query (search or bind). If the connection to the server is synchronous
(un-pooled) the response timeout is analogous to the tcp read timeout.
But the tcp read timeout is irrelevant in the common case of a pooled
connection (when a Bind DN is specified).
The timeout.tcp_read qualifier is hereby deprecated in favor of
timeout.response.
In addition, the default value for both timeout.tcp_read and
timeout.response is now that of timeout.ldap_search, instead of 5s (but
the default for timeout.ldap_search is still 5s). The
timeout.ldap_search defines the server-controlled timeout of a search
request. There is no practical use case to have a smaller tcp_read
timeout compared to ldap_search (in this case the request would time-out
on the client but continue to be processed on the server). The proposed
change aims to simplify configuration so that the more common
configuration change, adjusting timeout.ldap_search up, has the expected
result (no timeout during searches) without any additional
modifications.
Closes #46028
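A hedged configuration sketch in elasticsearch.yml (realm name `ldap1` is hypothetical):
```
# Preferred over the deprecated timeout.tcp_read qualifier
xpack.security.authc.realms.ldap.ldap1.timeout.response: 10s
```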
In the SQL with SSL tests, we need to find the interfaces that are up,
are loopback devices, or have a loopback address. If we check if the
device is up first, we can run into situations where the device is a
virtual ethernet device that might have disappeared between us seeing
the device, and checking if it is up. By first checking if the device is
a loopback device or it has a loopback address, then we can avoid
checking if the device is up except for loopback devices and therefore
we can avoid the disappearing virtual ethernet device problem.
* Allow Repository Plugins to Filter Metadata on Create
Add a hook that allows repository plugins to filter the repository metadata
before it gets written to the cluster state.
This commit deprecates the creation of dot-prefixed index names (e.g.
.watches) unless they are either 1) a hidden index, or 2) registered by
a plugin that extends SystemIndexPlugin. This is the first step
towards more thorough protections for system indices.
This commit also modifies several plugins which use dot-prefixed indices
to register indices they own as system indices, and adds a plugin to
register .tasks as a system index.
The docs tests have recently been running much slower than before (see #49753).
The gist here is that with ILM/SLM we do a lot of unnecessary setup / teardown work on each
test. Compounded with the slightly slower cluster state storage mechanism, this causes the
tests to run much slower.
In particular, on RAMDisk, docs:check is taking
ES 7.4: 6:55 minutes
ES master: 16:09 minutes
ES with this commit: 6:52 minutes
on SSD, docs:check is taking
ES 7.4: ??? minutes
ES master: 32:20 minutes
ES with this commit: 11:21 minutes
Changes the find_file_structure response to include a CSV
ingest processor in the ingest pipeline it suggests.
Previously the Kibana file upload functionality parsed CSV
in the browser, but by parsing CSV in the ingest pipeline
it makes the Kibana file upload functionality more easily
interchangeable with Filebeat, such that the configurations
it creates can more easily be used to import data with the
same structure repeatedly in production.
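As a rough sketch of what the suggested pipeline would contain, a `csv` processor can be tried out via the simulate API (field names and sample document hypothetical):
```
curl -X POST "localhost:9200/_ingest/pipeline/_simulate" \
  -H 'Content-Type: application/json' \
  -d '{
    "pipeline": {"processors": [{"csv": {"field": "message", "target_fields": ["time", "level", "msg"]}}]},
    "docs": [{"_source": {"message": "2020-01-28T10:15:00Z,INFO,started"}}]
  }'
```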
* Reload secure settings with password (#43197)
If a password is not set, we assume an empty string to be
compatible with previous behavior.
Only allow the reload to be broadcast to other nodes if TLS is
enabled for the transport layer.
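A sketch of the reload call when the keystore is password protected (password value is a placeholder):
```
curl -X POST "localhost:9200/_nodes/reload_secure_settings" \
  -H 'Content-Type: application/json' \
  -d '{"secure_settings_password": "keystore-password"}'
```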
* Add passphrase support to elasticsearch-keystore (#38498)
This change adds support for keystore passphrases to all subcommands
of the elasticsearch-keystore cli tool and adds a subcommand for
changing the passphrase of an existing keystore.
The work to read the passphrase in Elasticsearch when
loading the keystore will be addressed in a different PR.
- Subcommands of elasticsearch-keystore can handle (open and create)
passphrase-protected keystores
- When reading a keystore, a user is prompted for a passphrase
only if the keystore is passphrase protected
- When creating a keystore, a user is allowed (default behavior) to create one with an
empty passphrase
- The passphrase can be set to be empty when changing/setting it for an
existing keystore
Relates to: #32691
Supersedes: #37472
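A hedged sketch of the CLI flow (assuming `passwd` is the passphrase-changing subcommand this adds):
```
# Create a keystore; an empty passphrase is allowed by default
bin/elasticsearch-keystore create

# Change/set the passphrase of an existing keystore
bin/elasticsearch-keystore passwd
```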
* Restore behavior for force parameter (#44847)
It turns out that the behavior of `-f` for the add and add-file sub
commands, where it would also forcibly create the keystore if it
didn't exist, was by design, although undocumented.
This change restores that behavior auto-creating a keystore that
is not password protected if the force flag is used. The force
OptionSpec is moved to the BaseKeyStoreCommand as we will presumably
want to maintain the same behavior in any other command that takes
a force option.
* Handle pwd protected keystores in all CLI tools (#45289)
This change ensures that `elasticsearch-setup-passwords` and
`elasticsearch-saml-metadata` can handle a password protected
elasticsearch.keystore.
For setup passwords the user would be prompted to add the
elasticsearch keystore password upon running the tool. There is no
option to pass the password as a parameter as we assume the user is
present in order to enter the desired passwords for the built-in
users.
For saml-metadata, we prompt for the keystore password at all times
even though we'd only need to read something from the keystore when
there is a signing or encryption configuration.
* Modify docs for setup passwords and saml metadata cli (#45797)
Adds a sentence in the documentation of `elasticsearch-setup-passwords`
and `elasticsearch-saml-metadata` to describe that users would be
prompted for the keystore's password when running these CLI tools,
when the keystore is password protected.
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Elasticsearch keystore passphrase for startup scripts (#44775)
This commit allows a user to provide a keystore password on Elasticsearch
startup, but only prompts when the keystore exists and is encrypted.
The entrypoint in Java code is standard input. When the Bootstrap class is
checking for secure keystore settings, it checks whether or not the keystore
is encrypted. If so, we read one line from standard input and use this as the
password. For simplicity's sake, we allow a maximum passphrase length of 128
characters. (This is an arbitrary limit and could be increased or eliminated.
It is also enforced in the keystore tools, so that a user can't create a
password that's too long to enter at startup.)
In order to provide a password on standard input, we have to account for four
different ways of starting Elasticsearch: the bash startup script, the Windows
batch startup script, systemd startup, and docker startup. We use wrapper
scripts to reduce systemd and docker to the bash case: in both cases, a
wrapper script can read a passphrase from the filesystem and pass it to the
bash script.
In order to simplify testing the need for a passphrase, I have added a
has-passwd command to the keystore tool. This command can run silently, and
exit with status 0 when the keystore has a password. It exits with status 1 if
the keystore doesn't exist or exists and is unencrypted.
A good deal of the code-change in this commit has to do with refactoring
packaging tests to cleanly use the same tests for both the "archive" and the
"package" cases. This required not only moving tests around, but also adding
some convenience methods for an abstraction layer over distribution-specific
commands.
* Adjust docs for password protected keystore (#45054)
This commit adds relevant parts in the elasticsearch-keystore
sub-commands reference docs and in the reload secure settings API
doc.
* Fix failing Keystore Passphrase test for feature branch (#50154)
One problem with the passphrase-from-file tests, as written, is that
they would leave a SystemD environment variable set when they failed,
and this setting would cause elasticsearch startup to fail for other
tests as well. By using a try-finally, I hope that these tests will fail
more gracefully.
It appears that our Fedora and Ubuntu environments may be configured to
store journald information under /var rather than under /run, so that it
will persist between boots. Our destructive tests that read from the
journal need to account for this in order to avoid trying to limit the
output we check in tests.
* Run keystore management tests on docker distros (#50610)
* Add Docker handling to PackagingTestCase
Keystore tests need to be able to run in the Docker case. We can do this
by using a DockerShell instead of a plain Shell when Docker is running.
* Improve ES startup check for docker
Previously we were checking truncated output for the packaged JDK as
an indication that Elasticsearch had started. With new preliminary
password checks, we might get a false positive from ES keystore
commands, so we have to check specifically that the Elasticsearch
class from the Bootstrap package is what's running.
* Test password-protected keystore with Docker (#50803)
This commit adds two tests for the case where we mount a
password-protected keystore into a Docker container and provide a
password via a Docker environment variable.
We also fix a logging bug where we were logging the identifier for an
array of strings rather than the contents of that array.
* Add documentation for keystore startup prompting (#50821)
When a keystore is password-protected, Elasticsearch will prompt at
startup. This commit adds documentation for this prompt for the archive,
systemd, and Docker cases.
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Warn when unable to upgrade keystore on debian (#51011)
For Red Hat RPM upgrades, we warn if we can't upgrade the keystore. This
commit brings the same logic to the code for Debian packages. See the
posttrans file, which gets executed for RPMs.
* Restore handling of string input
Adds tests that were mistakenly removed. One of these tests proved
we were not handling the stdin (-x) option correctly when no
input was added. This commit restores the original approach of
reading stdin one char at a time until there is no more (-1, \r, \n)
instead of using readline(), which might return null.
* Apply spotless reformatting
* Use '--since' flag to get recent journal messages
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
It seems to me that we might be able to use journald's "--since" flag to
retrieve only log messages from the last run, and that this might be
less likely to fail due to race conditions in file deletion.
Unfortunately, it looks as if the "--since" flag has a granularity of
one second. I've added a two-second sleep to make sure that there's a
sufficient gap between the test that will read from journald and the
test before it.
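A sketch of the kind of invocation this implies (timestamp illustrative):
```
# Fetch only Elasticsearch journal messages since the last run started
journalctl -u elasticsearch --since "2020-01-28 10:15:00"
```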
* Use new journald wrapper pattern
* Update version added in secure settings request
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Ioannis Kakavas <ikakavas@protonmail.com>
Moves the audit message for index creation to after the index has been successfully created. It has
been confusing for users when index creation failed but the audit log reported index creation.
This commit sets `xpack.security.ssl.diagnose.trust` to false in all
of our tests when running in FIPS 140 mode and when settings objects
are used to create an instance of the SSLService. This is needed
in 7.x because setting xpack.security.ssl.diagnose.trust to true
wraps SunJSSE TrustManager with our own DiagnosticTrustManager and
this is not allowed when SunJSSE is in FIPS mode.
An alternative would be to set xpack.security.fips.enabled to
true which would also implicitly disable
xpack.security.ssl.diagnose.trust but would have additional effects
(would require that we set PBKDF2 for password hashing algorithm in
all test clusters, would prohibit using JKS keystores in nodes even
if relevant tests have been muted in FIPS mode etc.)
Relates: #49900
Resolves: #51268
* Don't overwrite target field with SetSecurityUserProcessor
This change fixes a problem with `SetSecurityUserProcessor`, which was overwriting the
whole target field instead of only the fields actually filled in by the processor.
Closes #51428
* Unused imports removed
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
* Introduce reusable QL plugin for SQL and EQL (#50815)
Extract reusable functionality from SQL into its own dedicated project QL.
Implemented as a plugin, it provides common components across SQL and the upcoming EQL.
While this commit is fairly large, for the most part it's just a big file move from sql package to the newly introduced ql.
(cherry picked from commit ec1ac0d463bfa12a02c8174afbcdd6984345e8b4)
* SQL: Fix incomplete registration of geo NamedWritables
(cherry picked from commit e295763686f9592976e551e504fdad1d2a3a566d)
* QL: Extend NodeSubclass to read classes from jars (#50866)
As the test classes are spread across more than one project, the Gradle
classpath contains not just folders but also jars.
This commit allows the test class to explore the archive content and
load matching classes from said source.
(cherry picked from commit 25ad74928afcbf286dc58f7d430491b0af662f04)
* QL: Remove implicit conversion inside Literal (#50962)
Literal constructor makes an implicit conversion for each value given
which turns out has some subtle side-effects.
Improve MathProcessors to preserve numeric type where possible
Fix bug on issue compatibility between date and intervals
Preserve the source when folding inside the Optimizer
(cherry picked from commit 9b73e225b0aa07a23859550fb117bae571a2b672)
* QL: Refactor DataType for pluggability (#51328)
Change DataType from enum to class
Break DataType enums into QL (default) and SQL types
Make data type conversion pluggable so that new types can be introduced
As part of the process:
- static type conversion in QL package (such as Literal) has been
removed
- several utility classes have been broken into base (QL) and extended
(SQL) parts based on type awareness
- operators (+,-,/,*) are
- due to extensibility, serialization of arithmetic operation has been
slightly changed and pushed down to the operator executor itself
(cherry picked from commit aebda81b30e1563b877a8896309fd50633e0b663)
* Compilation fixes for 7.x
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This starts moving `date_histogram` to that new offset.
This is a redo of #50873 with more integration tests.
This reverts commit d114c9db3e1d1a766f9f48f846eed0466125ce83.
The DATE and DATESTAMP Grok patterns match 2 digit years
as well as 4 digit years. The pattern determination in
find_file_structure worked correctly in this case, but
the regex used to create a multi-line start pattern was
assuming a 4 digit year. Also, the quick rule-out
patterns did not always correctly consider 2 digit years,
meaning that detection was inconsistent.
This change fixes both problems, and also extends the
tests for DATE and DATESTAMP to check both 2 and 4 digit
years.
* Fix TimeSeriesLifecycleActionsIT.testShrinkAction
Shrinking a 6 shard index to 3 shards can be quite time consuming and
assertBusy probes the conditions at exponentially growing intervals.
This separates the one assertion that was used for all the conditions
into multiple assertBusy statements and increases the timeout for waiting
for the shrink to complete.
* Allow more time for shrink to complete
This commit allows more time for the shrink operation to complete in
testRetryFailedShrinkAction (separating the assertBusy calls too) and
testMoveToRolloverStep.
* Shrink to no more than 2 shards in tests
(cherry picked from commit 5fe780148fa3536915d61475b087896a5b9ace82)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
This change changes the way to run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and copied (relevant jars)
explicitly to the lib directory for testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require an
additional configuration either in CI workers or locally and is
controlled by specifying -Dtests.fips.enabled=true
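For example:
```
# Run the test suites with the JVM configured in FIPS 140 approved mode
./gradlew test -Dtests.fips.enabled=true
```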
* Centralize mocks initialization in ILM steps tests
This change centralizes initialization of `Client`, `AdminClient`
and `IndicesAdminClient` for all classes extending `AbstractStepTestCase`.
This removes a lot of code duplication and makes it easier to write tests.
This also removes need for `AsyncActionStep#setClient`
* Unused imports removed
* Added missed tests
* Fix OpenFollowerIndexStepTests
* Check all snapshots in SnapshotLifecycleRestIT.testFullPolicy
Rather than check the first returned snapshot for a snapshot starting with `snap-` in
SnapshotLifecycleRestIT.testFullPolicy, this commit changes the test to find any snapshots starting
with `snap-`.
In the event that there are no snapshots (the failure case), this also exposes the full results map
so we can diagnose why a failure occurred.
Relates to #50358
* Use a more imperative style for checking
* Separate aliases used for tests in TimeSeriesLifecycleActionsIT
This is related to #51375 and hopes to help illuminate why some of those tests are failing. This
commit switches the aliases used in the test to use a random alias name every time (since there were
some complaints in the tests about aliases having more than one write index). With this we hope to
determine the actual cause of the failure in the test.
This also adds additional information to the exception returned when calling move-to-step with the
incorrect current step.
* Fix rest test
* [ML][Inference] add tags url param to GET (#51330)
Adds a new URL parameter, `tags` to the GET _ml/inference/<model_id> endpoint.
This parameter allows the list of models to be further reduced to those that contain all the provided tags.
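For illustration (model ids and tags hypothetical):
```
# Only models that carry BOTH tags are returned
curl "localhost:9200/_ml/inference/my-model-*?tags=prod,regression"
```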
* Use ESSingleNodeTestCase instead of ESIntegTestCase (#51345)
(cherry picked from commit abcf1c41faf05a0b0196fb06e57c3de8c3d67688)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
The ApiKeyService would aggressively "close" ApiKeyCredentials objects
during processing. However, under rare circumstances, the verification
of the secret key would be performed asynchronously and may need access
to the SecureString after it had been closed by the caller.
The trigger for this would be if the cache already held a Future for
that ApiKey, but the future was not yet complete. In this case the
verification of the secret key would take place asynchronously on the
generic thread pool.
This commit moves the "close" of the credentials to the body of the
listener so that it only occurs after key verification is complete.
Backport of: #51244
As we prepare to introduce a new index for storing additional
information about data frame analytics jobs (e.g. intrumentation),
renaming this class to `DestinationIndex` better captures what it does
and leaves its prior name available for a more suitable use.
Backport of #51353
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
API Key expiration values have millisecond precision, as we use
{@link Instant#toEpochMilli()} when creating the API key
document.
It could often happen that the `Instant.now()` Instant in testCreateApiKey
was close enough to the ApiKeyService's `clock.instant()` Instant that,
when the nanos were removed from the latter (due to the call
to `toEpochMilli()`), the result of comparing these two Instants
was a few nanos short of 7 days.
Resolves: #47958
Check bulk indexing errors for permanent problems and ensure the state goes into failed instead of
retry. Corrects the stats API to show the real error and avoids excessive audit logging.
Fixes #50122
This change exposes master timeout to ILM steps through global dynamic setting.
All currently implemented steps make use of this setting as well.
Closes #44136
IndexWriter might not filter out fully deleted segments if retention
leases exist or the number of the retaining operations is non-zero.
SoftDeletesDirectoryReaderWrapper, however, always filters out fully
deleted segments.
This change uses the original directory reader when calculating segment
stats instead.
Relates #51192
Closes #51303
Data frame analytics classification currently only supports 2 classes for the
dependent variable. We were checking that the field's cardinality is not higher
than 2 but we should also check it is not less than that as otherwise the process
fails.
Backport of #51232
The ID of the datafeed's associated job was being obtained
frequently by looking up the datafeed task in a map that
was being modified in other threads. This could lead to
NPEs if the datafeed stopped running at an unexpected time.
This change reduces the number of places where a datafeed's
associated job ID is looked up to avoid the possibility of
failures when the datafeed's task is removed from the map
of running tasks during multi-step operations in other
threads.
Fixes #51285
This makes the UpdateSettingsStep retryable. This step updates settings needed
during the execution of ILM actions (mark indexes as read-only, change
allocation configurations, mark indexing complete, etc)
As the index updates are idempotent in nature (PUT requests and are applied only
if the values have changed) and the settings values are seldom user-configurable
(aside from the allocate action) the testing for this change goes along the
lines of artificially simulating a setting update failure on a particular value
update, which is followed by a successful step execution (a retry) in an
environment outside of ILM (the step executions are triggered manually).
(cherry picked from commit 8391b0aba469f39532bfc2796b76148167dc0289)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
After we rollover the index we wait for the configured number of shards for the
rolled index to become active (based on the index.write.wait_for_active_shards
setting which might be present in a template, or otherwise in the default case,
for the primaries to become active).
This wait might be long due to disk watermarks being tripped, replicas not
being able to spring to life due to cluster node reconfigurations, and so on,
and the RolloverStep might not complete successfully due to this inherently
transient situation, even though the rolled index has been created.
(cherry picked from commit 457a92fb4c68c55976cc3c3e2f00a053dd2eac70)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
When not truncated, a long SAML response XML document can fill max
line length and mask the actual exception message that the trace
statement is meant to inform about.
The same XML document is also printed in full at trace level in
SamlRequestHandler#parseSamlMessage(), so there is no loss of
information.
This replaces the message we return for unknown queries with the standard
one that we use for unknown fields from `ObjectParser`. This is nice
because it includes "did you mean". One day we might convert parsing
queries to using object parser, but that looks complex. This change is
much smaller and seems useful.
There are two edge cases that can be run into when example input is matched in a weird way.
1. Recursion depth could continue many many times, resulting in a HUGE runtime cost. I put a limit of 10 recursions (could be adjusted I suppose).
2. If there are no "fixed regex bits", exploring the grok space would result in a fence-post error during runtime (with assertions turned off)
2> REPRODUCE WITH: ./gradlew ':x-pack:plugin:watcher:test' --tests "org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.testTransformFields" -Dtests.seed=26754396AB9C1A30 -Dtests.security.manager=true -Dtests.locale=lv-LV -Dtests.timezone=America/Dominica -Dcompiler.java=13 -Druntime.java=8
2> java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([26754396AB9C1A30:B2A3CA27E260803B]:0)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.lambda$testTransformFields$1(HistoryTemplateTransformMappingsTests.java:85)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1628)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.lambda$testTransformFields$2(HistoryTemplateTransformMappingsTests.java:88)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:892)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:877)
at org.elasticsearch.xpack.watcher.history.HistoryTemplateTransformMappingsTests.testTransformFields(HistoryTemplateTransformMappingsTests.java:74)
Allows ML datafeeds to work with time fields that have
the "date_nanos" type _and make use of the extra precision_.
(Previously datafeeds only worked with time fields that were
exact multiples of milliseconds. So datafeeds would work
with "date_nanos" only if the extra precision over "date" was
not used.)
Relates #49889
* REST PreparedStatement-like query parameters are now supported in the form of an array of non-object, non-array values, where the ES SQL parser will try to infer the data type of the value being passed as a parameter.
(cherry picked from commit 45b8bf619aecb1c03d7bc0cf06928dcc36005a66)
The hierarchy of fields/sub-fields under a field that is of an
unsupported data type will be marked as unsupported as well. Until this
change, the behavior was to set the unsupported data type field's
hierarchy as empty.
For example, considering the following hierarchy of fields/sub-fields
a -> b -> c -> d, if b were of an unsupported type "foo", then b, c and d will
all be marked as unsupported.
(cherry picked from commit 7adb286c4c485b9e781f88b0a2f98cab9ec5b7e2)
This commit merely adds the skeleton for the autoscaling project, adding
the basics to include the autoscaling module in the default
distribution, opt-in to code formatting, and a placeholder for the docs.
These policies store statistics, but since stats updating is asynchronous, it's
possible for the update from one test to bleed into a separate one. This change
switches the tests to use separate policy ids so that their stats are tracked
independently. It also relaxes the checking constraint in one of the tests.
Hopefully this:
Resolves #48531
Resolves #48017
This change introduces a new feature for indices so that they can be
hidden from wildcard expansion. The feature is referred to as hidden
indices. An index can be marked hidden through the use of an index
setting, `index.hidden`, at creation time. One primary use case for
this feature is to have a construct that fits indices that are created
by the stack that contain data used for display to the user and/or
intended for querying by the user. The desire to keep them hidden is
to avoid confusing users when searching all of the data they have
indexed and getting results returned from indices created by the
system.
Hidden indices have the following properties:
* API calls for all indices (empty indices array, _all, or *) will not
return hidden indices by default.
* Wildcard expansion will not return hidden indices by default unless
the wildcard pattern begins with a `.`. This behavior is similar to
shell expansion of wildcards.
* REST API calls can enable the expansion of wildcards to hidden
indices with the `expand_wildcards` parameter. To expand wildcards
to hidden indices, use the value `hidden` in conjunction with `open`
and/or `closed`.
* Creation of a hidden index will ignore global index templates. A
global index template is one with a match-all pattern.
* Index templates can make an index hidden, with the exception of a
global index template.
* Accessing a hidden index directly requires no additional parameters.
Backport of #50452
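A brief sketch of the behavior (index name hypothetical):
```
# Create a hidden index
curl -X PUT "localhost:9200/my-hidden-index" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"index.hidden": true}}'

# Wildcard expansion skips it by default; opt in explicitly
curl "localhost:9200/_cat/indices/*?expand_wildcards=open,hidden"
```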
This commit upgrades the OWASP HTML sanitizer used by watcher to the
latest version and also upgrades guava, which it depends on. The guava
upgrade also requires the addition of a new dependency that guava
itself requires as of version 27.0. The sanitizer's behavior has changed to
re-write these templated values with a comment that results in this output
`{<!-- -->{ctx.metadata.name}}`. This would be an issue if we attempted to
sanitize the template, but the code that uses the sanitizer runs the rendered
string through the sanitizer, which means that the templated values have
been replaced already.
Relates #50395
This commit changes our behavior so that when we receive a
request with an invalid/expired/wrong access token or API Key
we do not fallback to authenticating as the anonymous user even if
anonymous access is enabled for Elasticsearch.
If 1000 different category definitions are created for a job in
the first 100 buckets it processes then an audit warning will now
be created. (This will cause a yellow warning triangle in the
ML UI's jobs list.)
Such a large number of categories suggests that the field that
categorization is working on is not well suited to the ML
categorization functionality.
Object fields cannot be used as features. At the moment the _explain
API includes them and, even worse, it does not error when
an object field is excluded. This creates the expectation for the
user that all child fields will also be excluded, while that's not
the case.
This commit omits object fields from the _explain API and also
adds an error if an object field is included or excluded.
Backport of #51115
If a transform config got lost (e.g. because the internal index disappeared), tasks could not be
stopped using the transform API. This change makes it possible to stop transforms without a config,
meaning to remove the background task. In order to do so, force must be set to true.
When we receive a request with an Authorization header that contains
a Bearer token that is not generated by us or that is malformed in
some way, attempting to decode it as one of our own might cause a
number of exceptions that are not IOExceptions. This commit ensures
that we catch and log these too and call onResponse with `null`, so
that we can return 401 instead of 500.
Resolves: #50497
* Extend the optimizations for equalities
This commit supplements the optimisations of equalities in conjunctions
and disjunctions:
* for conjunctions, the existing optimizations with ranges are extended
with not-equalities and inequalities; these lead to a fast resolution,
the conjunction either being evaluate to a FALSE, or the non-equality
conditions being dropped as superfluous;
* optimisations for disjunctions are added to be applied against ranges,
inequalities and not-equalities; these lead to disjunction either
becoming TRUE or the equality being dropped, either as superfluous or
merged into a range/inequality.
* Address review notes
* Fix the bug around wrongly optimizing 'a=2 OR a!=?', which only yields
TRUE for same values in equality and inequality.
* Var renamings, code style adjustments, comments corrections.
* Address further review comments. Extend optim.
- fix a few code comments;
- extend the Equals OR NotEquals optimisation (a=2 OR a!=5 -> a!=5);
- extend the Equals OR Range optimisation on limits equality (a=2 OR
2<=a<5 -> 2<=a<5);
- in case an equality is being removed in a conjunction, the rest of
possible optimisations to test is now skipped.
* rename one var for better legibility
- s/rmEqual/removeEquals
(cherry picked from commit 62e7c6a010f10cd7893ee5c99bad8b8d2a693436)
Backport / reimplementation of #50786 on 7.x.
Opt-in `buildSrc` for automatic formatting. This required a config tweak
in order to pick up all the Java sources, and as a result more files are
now found in the Enrich plugin, that were previously missed.
I also moved the 2 Java files in `buildSrc/src/main/groovy` into the Java
directory, which required some follow-up changes.
Knowing about used analysis components and mapping types would be incredibly
useful in order to know which ones may be deprecated or should get more love.
Some field types also act as a proxy to know about feature usage of some APIs
like the `percolator` or `completion` fields types for percolation and the
completion suggester, respectively.
Replace DES with AES to align with modern encryption standards
The backport also fixes use of the Files.readString API, which is not available in Java 8
Resolves: #50843
It's possible that the index could return no settings and thus throw a
`NullPointerException`.
I wasn't able to reproduce the original issue, but this should guard
against it in the future.
Resolves #50646
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This test failed a couple of different ways, related to timing, as well
as concurrent snapshots, and also naming.
This commit splits the giant `assertBusy` into separate parts so that we don't
perform ~5 different requests and tests in the same loop. It also gives each
test a unique repository so that no other test can accidentally re-use
snapshots.
Resolves #50358 (hopefully!)
Backport of #50909. The current formatting config allows some long
generic declarations to break the 140 character limit. Tweak the config
to wrap such lines.
There have been occasional failures, presumably due to
too many tests running in parallel, caused by jobs taking
around 15 seconds to open. (You can see the job open
successfully during the cleanup phase shortly after the
failure of the test in these cases.) This change increases
the wait time from 10 seconds to 20 seconds to reduce the
risk of this happening.
This change adds a new `kibana_admin` role, and deprecates
the old `kibana_user` and`kibana_dashboard_only_user`roles.
The deprecation is implemented via a new reserved metadata
attribute, which can be consumed from the API and also triggers
deprecation logging when used (by a user authenticating to
Elasticsearch).
Some docs have been updated to avoid references to these
deprecated roles.
Backport of: #46456
Co-authored-by: Larry Gregory <lgregorydev@gmail.com>
Check it out:
```
$ curl -u elastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_update/foo?pretty -d'{
"dac": {}
}'
{
"error" : {
"root_cause" : [
{
"type" : "x_content_parse_exception",
"reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
}
],
"type" : "x_content_parse_exception",
"reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
},
"status" : 400
}
```
The tricky thing about implementing this is that x-content doesn't
depend on Lucene. So this works by creating an extension point for the
error message using SPI. Elasticsearch's server module provides the
"spell checking" implementation.
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This starts moving `date_histogram` to that new offset.
* [ML][Inference] Adding classification_weights to ensemble models
classification_weights are a way to allow models to
prefer specific classification results over others
this might be advantageous if classification value
probabilities are a known quantity and can improve
model error rates.
* Track Snapshot Version in RepositoryData (#50930)
Add tracking of snapshot versions to RepositoryData to make BwC logic more efficient.
Follow up to #50853
In classes where the client is used directly rather than through a call to
executeAsyncWithOrigin explicitly require the client to be OriginSettingClient
rather than using the Client interface.
Also remove calls to deprecated ClientHelper.clientWithOrigin() method.
Adds a new parameter to regression and classification that enables computation
of importance for the top most important features. The computation of the importance
is based on SHAP (SHapley Additive exPlanations) method.
Backport of #50914
This adds a new "http" sub-command to the certutil CLI tool.
The http command generates certificates/CSRs for use on the http
interface of an elasticsearch node/cluster.
It is designed to be a guided tool that provides explanations and
suggestions for each of the configuration options. The generated zip
file output includes extensive "readme" documentation and sample
configuration files for core Elastic products.
Backport of: #49827
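Usage is simply:
```
# Starts the guided certificate/CSR generation for the HTTP interface
bin/elasticsearch-certutil http
```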
The Document Level Security BitSet Cache (see #43669) had a default
configuration of "small size, long lifetime". However, this is not
a very useful default as the cache is most valuable for BitSets that
take a long time to construct, which is (generally speaking) the same
ones that operate over a large number of documents and contain many
bytes.
This commit changes the cache to be "large size, short lifetime" so
that it can hold bitsets representing billions of documents, but
releases memory quickly.
The new defaults are 10% of heap, and 2 hours.
This also adds some logging when a single BitSet exceeds the size of
the cache and when the cache is full.
Backport of: #50535
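A hedged sketch of the new defaults written out explicitly in elasticsearch.yml (setting names are my assumption from this change):
```
xpack.security.dls.bitset.cache.size: 10%
xpack.security.dls.bitset.cache.ttl: 2h
```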
Previously custom realms were limited in what services and components
they had easy access to. It was possible to work around this because a
security extension is packaged within a Plugin, so there were ways to
store these components in static/SetOnce variables and access them from
the realm, but those techniques were fragile, undocumented and
difficult to discover.
This change includes key services as an argument to most of the methods
on SecurityExtension so that custom realm / role provider authors can
have easy access to them.
Backport of: #50534
The Document Level Security BitSet cache stores a secondary "lookup
map" so that it can determine which cache entries to invalidate when
a Lucene index is closed (merged, etc).
There was a memory leak because this secondary map was not cleared
when entries were naturally evicted from the cache (due to size/ttl
limits).
This has been solved by adding a cache removal listener and processing
those removal events asynchronously.
Backport of: #50635
When creating a role, we do not check if the exceptions for
the field permissions are a subset of granted fields. If such
a role is assigned to a user then that user's authentication fails
for this reason.
We added a check to validate the role query in #46275 and, along the same lines,
this commit adds a check that the exceptions for the field
permissions are a subset of granted fields when parsing the
index privileges from the role descriptor.
Backport of: #50212
Co-authored-by: Yogesh Gaikwad <bizybot@users.noreply.github.com>
The enterprise license type must have "max_resource_units" and may not
have "max_nodes".
This change adds support for this new field, validation that the field
is present if-and-only-if the license is enterprise and bumps the
license version number to reflect the new field.
Includes a BWC layer to return "max_nodes: ${max_resource_units}" in
the GET license API.
Backport of: #50735
* ILM action to wait for SLM policy execution (#50454)
This change adds a new ILM action that waits for SLM policy execution, to ensure that an index has a snapshot before deletion.
Closes #45067
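A minimal policy sketch using the new action (policy names hypothetical):
```
curl -X PUT "localhost:9200/_ilm/policy/my_policy" \
  -H 'Content-Type: application/json' \
  -d '{"policy": {"phases": {"delete": {"actions": {
        "wait_for_snapshot": {"policy": "nightly-snapshots"},
        "delete": {}
      }}}}}'
```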
* Fix flaky TimeSeriesLifecycleActionsIT#testWaitForSnapshot test
This change adds some randomness and a cleanup step to the TimeSeriesLifecycleActionsIT#testWaitForSnapshot and testWaitForSnapshotSlmExecutedBefore tests in an attempt to make them stable.
Relates to #50781
* Formatting changes
* Longer timeout
* Fix Map.of in Java8
* Unused import removed
* Refresh cached phase policy definition if possible on new policy
There are some cases when updating a policy does not change the
structure in a significant way. In these cases, we can reread the
policy definition for any indices using the updated policy.
This commit adds this refreshing to the `TransportPutLifecycleAction`
to allow this. It allows us to do things like change the configuration
values for a particular step, even when on that step (for example,
changing the rollover criteria while on the `check-rollover-ready` step).
There are more cases where the phase definition can be reread than just
the ones checked here (for example, removing an action that has already
been passed), and those will be added in subsequent work.
Relates to #48431
* SQL: Optimisation fixes for conjunction merges
This commit fixes the following issues around the way comparisons are
merged with ranges in conjunctions:
* the decision to include the equality of the lower limit is corrected;
* the selection of the upper limit is corrected to use the upper bound
of the range;
* the list of terms in the conjunction is sorted to have the ranges at
the bottom; this allows subsequent binary comparisons to find compatible
ranges and potentially be merged away. The end guarantee is that the
optimisation takes place irrespective of the order of the conjunction
terms in the statement.
Some comments are also corrected.
* address review observation on anon. comparator
Replace anonymous comparator of split AND Expressions with a lambda.
(cherry picked from commit 9828cb143a41f1bda1219541f3a8fdc03bf6dd14)
This commit changes the default behavior for
xpack.security.ssl.diagnose.trust when running in a FIPS 140 JVM.
More specifically, when xpack.security.fips_mode.enabled is true:
- If xpack.security.ssl.diagnose.trust is not explicitly set, the
default value of it becomes false and a log message is printed
on info level, notifying of the fact that the TLS/SSL diagnostic
messages are not enabled when in a FIPS 140 JVM.
- If xpack.security.ssl.diagnose.trust is explicitly set, the value of
it is honored, even in FIPS mode.
This is relevant only for 7.x where we support Java 8 in which
SunJSSE can still be used as a FIPS 140 provider for TLS. SunJSSE
in FIPS mode, disallows the use of other TrustManager implementations
than the one shipped with SunJSSE.
The system-created models we provide now use the `_xpack` user for uniformity with our other features.
The `PUT` action is now an admin cluster action,
and the XPackClient class now references the action instance.
Hide the `.async-search-*` in Security by making it a restricted index namespace.
The namespace is hard-coded.
To grant privileges on restricted indices, one must explicitly toggle the
`allow_restricted_indices` flag in the indices permission in the role definition.
As is the case with any other index, if a certain user lacks all permissions for an
index, that index is effectively nonexistent for that user.
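A role granting access to the restricted namespace would need to look roughly like this (role name hypothetical):
```
curl -X PUT "localhost:9200/_security/role/async_search_admin" \
  -H 'Content-Type: application/json' \
  -d '{"indices": [{"names": [".async-search-*"], "privileges": ["all"],
        "allow_restricted_indices": true}]}'
```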
The OpenIdConnectRealm had a bug which would cause it not to populate
User metadata for collections contained in the user JWT claims.
This commit fixes that bug.
Backport of: #50521
* [ML][Inference] PUT API (#50852)
This adds the `PUT` API for creating trained models that support our format.
This includes
* HLRC change for the API
* API creation
* Validations of model format and call
* fixing backport
* Fix SLM check for restore in progress (#50868)
* Fix SLM check for restore in progress
This commit fixes the check in SLM where the `RestoreInProgress`
metadata was checked for existence. Rather than check existence we
should instead check the `isEmpty` method. Prior to this, a successful
restore for a repository that used SLM retention would prevent SLM
retention from running in subsequent invocations, due to SLM thinking
that a restore was still running.
* Fix 7.x-isms
This commit removes validation logic of source and dest indices
for data frame analytics and replaces it with using the common
`SourceDestValidator` class which is already used by transforms.
This way the validations and their messages become consistent
while we reduce code.
This means that where these validations fail the error messages
will be slightly different for data frame analytics.
Backport of #50841
Replaces the "funny"
`Function<String, ConstructingObjectParser<T, Void>>` with a much
simpler `ConstructingObjectParser<T, String>`. This makes pretty much
all of our object parsers static.
A very large number of recursive calls can cause a stack overflow
exception. This commit forks the recursive calls for non-async
processors. Once forked, each thread will handle at most 10
recursive calls to help keep the stack size and thread count
down to a reasonable size.
This adds the necessary named XContent classes to the HLRC for the lang ident model. This is so the HLRC can call `GET _ml/inference/lang_ident_model_1?include_definition=true` without XContent parsing errors.
The constructors are package-private since these classes are used exclusively within the pre-packaged model (and require the specific weights, etc. to be of any use).
If a pipeline referenced by a transform does not exist, we should not allow the transform to be created.
We do allow the pipeline existence check to be skipped with `defer_validation`, but if the pipeline still does not exist on `_start`, the transform will fail to start.
relates: #50135
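A minimal sketch of deferring the check at create time (names and the pivot are hypothetical); the referenced pipeline must still exist by the time `_start` is called:
```
PUT _transform/my_transform?defer_validation=true
{
  "source": { "index": "my_source" },
  "dest": {
    "index": "my_dest",
    "pipeline": "my_pipeline"
  },
  "pivot": {
    "group_by": { "user": { "terms": { "field": "user_id" } } },
    "aggregations": { "avg_price": { "avg": { "field": "price" } } }
  }
}
```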
This test replaces the watch index after watcher got started.
This triggers watches being reloaded and while this happens the
trigger engine is paused, which disallows watches from being
triggered. At this time there are no watches in the .watches
index and I think this is just unlucky timing.
Reloading of watches happens in the background and
the watch state can be started when that happens.
For normal schedule trigger engines this is not an issue,
because watches that are meant to be triggered are triggered
when the engine triggers the next time. However for the
mock scheduled trigger engine this is different,
because watches are triggered programmatically and
there is no retry in this test.
I think just adding `timeWarp().trigger("mywatch");` inside
an `assertBusy(...)` is the right fix here. If it fails
because the mock schedule trigger engine is paused then
the test will try again. In the meantime the watches
can be reloaded, which then resumes the mock scheduled trigger engine.
Closes #50658
Currently, if an updateable synonym filter is included in a multiplexer filter,
it is not reloaded via the _reload_search_analyzers because the multiplexer
itself doesn't pass on the analysis mode of the filters it contains, so it's not
recognized as "updateable" in itself. Instead we can check and merge the
AnalysisMode settings of all filters in the multiplexer and use the resulting
mode (e.g. search-time only) for the multiplexer itself, thus making any synonym
filters contained in it reloadable. This, of course, will also make the
analyzers using the multiplexer be usable at search-time only.
Closes #50554
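A minimal sketch (index, filter, and file names are hypothetical) of an updateable synonym filter inside a multiplexer that this change makes reloadable; the analyzer must be wired in as a `search_analyzer` in the mapping, which is omitted here:
```
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms_path": "analysis/synonym.txt",
          "updateable": true
        },
        "my_multiplexer": {
          "type": "multiplexer",
          "filters": [ "lowercase", "my_synonyms" ]
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_multiplexer" ]
        }
      }
    }
  }
}

POST /my-index/_reload_search_analyzers
```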
This commit makes the "init" ILM step retryable. It also adds a test
where an index is created with a non-parsable index name and then fails.
Related to #48183
This PR adds per-field metadata that can be set in the mappings and is later
returned by the field capabilities API. This metadata is completely opaque to
Elasticsearch but may be used by tools that index data in Elasticsearch to
communicate metadata about fields with tools that then search this data. A
typical example that has been requested in the past is the ability to attach
a unit to a numeric field.
In order to not bloat the cluster state, Elasticsearch requires that this
metadata be small:
- keys can't be longer than 20 chars,
- values can only be numbers or strings of no more than 50 chars - no inner
arrays or objects,
- the metadata can't have more than 5 keys in total.
Given that metadata is opaque to Elasticsearch, field capabilities don't try to
do anything smart when merging metadata about multiple indices; the union of
all field metadata is returned.
Here is how the meta might look in mappings:
```json
{
  "properties": {
    "latency": {
      "type": "long",
      "meta": {
        "unit": "ms"
      }
    }
  }
}
```
And then in the field capabilities response:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms" ]
      }
    }
  }
}
```
When there are no conflicts, values are arrays of size 1, but when there are
conflicts, Elasticsearch includes all unique values in this array, without
a way to know which index has which metadata value:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms", "ns" ]
      }
    }
  }
}
```
Closes #33267
This makes the "update-rollover-lifecycle-date" step, which is part of the
rollover action, retryable. It also adds an integration test to check the
step is retried and it eventually succeeds.
(cherry picked from commit 5bf068522deb2b6cd2563bcf80f34fdbf459c9f2)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
In security we currently monitor a set of files for changes:
- config/role_mapping.yml (or alternative configured path)
- config/roles.yml
- config/users
- config/users_roles
This commit prevents unnecessary reloading when the file change actually doesn't change the internal structure.
Backport of: #50207
Co-authored-by: Anton Shuvaev <anton.shuvaev91@gmail.com>
vector REST tests occasionally fail on 7.x because
we don't receive the expected response headers with deprecation warnings.
This happens because searches were executed against all indices, including
internal indices, whose shards did not produce the expected warnings.
This PR ensures that searches are executed only against the expected
indices.
Closes #50716
In 7.x an internal API used for validating remote clusters does not throw; see #50420 for the
details. This change implements a workaround for remote cluster validation, only for 7.x branches.
fixes #50420
Switch from a 32 bit Java hash to a 128 bit Murmur hash for
creating document IDs from by/over/partition field values.
The 32 bit Java hash was not sufficiently unique, and could
produce identical numbers for relatively common combinations
of by/partition field values such as L018/128 and L017/228.
Fixes #50613
The end offset of a tokenizer is supposed to point one past the
end of the input, not at the last character of the input: for the input
`foo`, for example, the end offset should be 3, not 2. The
ml_classic tokenizer was erroneously doing the latter.
* Add additional logging for ILM history store tests (#50624)
These tests use the same index name, making it hard to read logs when
diagnosing the failures. Additionally more information about the current
state of the index could be retrieved when failing.
This changes these two things in the hope of capturing more data about
why this fails on some CI nodes but not others.
Relates to #50353
This drops all remaining references to `BaseRestHandler.logger` which
has been deprecated for something like a year now. I replaced all of the
references with locally declared loggers which is so much less spooky
action at a distance to me.
Sharing a random generator may cause test failures as non-threadsafe random generators are periodically utilized in tests (see: https://github.com/elastic/elasticsearch/issues/50651)
This change calls `Randomness.get()` within the `bulkIndexWithRetry` method so that the returned `Random` object is only used in a single thread. Before, the member variable could have been used by multiple threads, which caused test failures.
* [ML][Inference] lang_ident model (#50292)
This PR contains a java port of Google's CLD3 compact NN model https://github.com/google/cld3
The ported model is formatted to fit within our inference model formatting and stored as a resource in the `:xpack:ml:` plugin and is under basic license.
The model is broken up into two major parts:
- Preprocessing through the custom embedding (based on CLD3's embedding layer)
- Pushing the embedded text through the two layers of fully connected shallow NN.
Main differences between this port and CLD3:
- We take advantage of Java's internal Unicode handling where possible (i.e. codepoints, characters, decoders, etc.)
- We do not trim down input text by removing duplicated tokens
- We do not encode doubles/floats as longs/integers.
This adds a new cluster privilege `monitor_snapshot` which is a restricted
version of `create_snapshot`, granting the same privileges to view
snapshot and repository info and status but not granting the actual
privilege to create a snapshot.
Co-authored-by: j-bean <anton.shuvaev91@gmail.com>
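A minimal sketch of a role using the new privilege (role name hypothetical); snapshot status calls such as `GET _snapshot/my_repo/_all` are then allowed, while snapshot create calls are denied:
```
PUT /_security/role/snapshot_monitor
{
  "cluster": [ "monitor_snapshot" ]
}
```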
* Adds JavaDoc to `AbstractWireTestCase` and
`AbstractWireSerializingTestCase` so it is more obvious you should prefer
the latter if you have a choice
* Moves the `instanceReader` method out of `AbstractWireTestCase` because
it is no longer used.
* Marks a bunch of methods final so it is more obvious which classes are
for what.
* Cleans up the side effects of the above.
*Most* of our parsing can be done without passing any extra context into
the parser that isn't already part of the xcontent stream. While I was
looking around at the places that *do* need a context I found a few
places that were declared to need a context but don't actually need it.
This adds support for retrying AsyncActionSteps by triggering the async
step after ILM was moved back on the failed step (the async step we'll
be attempting to run after the cluster state reflects ILM being moved
back on the failed step).
This also marks the RolloverStep as retryable and adds an integration
test where the RolloverStep is failing to execute as the rolled over
index already exists to test that the async action RolloverStep is
retried until the rolled over index is deleted.
(cherry picked from commit 8bee5f4cb58a1242cc2ef4bc0317dae6c8be49d3)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Adds a `force` parameter to the delete data frame analytics
request. When `force` is `true`, the action force-stops the
jobs and then proceeds to the deletion. This can be used in
order to delete a non-stopped job with a single request.
Closes #48124
Backport of #50553
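A minimal sketch of the single-request deletion (job id hypothetical):
```
DELETE _ml/data_frame/analytics/my_analytics?force=true
```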
We have about 800 `ObjectParsers` in Elasticsearch, about 700 of which
are final. This is *probably* the right way to declare them because in
practice we never mutate them after they are built. And we certainly
don't change the static reference. Anyway, this adds `final` to a bunch
of these parsers, mostly the ones in xpack and their "paired" parsers in
the high level rest client. I picked these just to have somewhere to
break up the change so it wouldn't be huge.
I found the non-final parsers with this:
```
diff \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*PARSER\s*=' {} \+ | sort) \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*final.*PARSER\s*=' {} \+ | sort) \
2>&1 | grep '^<'
```
Eclipse 4.13 shows a type mismatch error in the affected line because it cannot
correctly infer the boolean return type for the method call. Assigning return
value to a local variable resolves this problem.
XPackPlugin created an SSLService within the plugin constructor.
This has 2 negative consequences:
1. The service may be constructed based on a partial view of settings.
Other plugins are free to add setting values via the
additionalSettings() method, but this (necessarily) happens after
plugins have been constructed.
2. Any exceptions thrown during the plugin construction are handled
differently than exceptions thrown during "createComponents".
Since SSL configuration exceptions are relatively common, it is
far preferable for them to be thrown and handled as part of the
createComponents flow.
This commit moves the creation of the SSLService to
XPackPlugin.createComponents, and alters the sequence of some other
steps to accommodate this change.
Backport of: #49667
We are matching on the exact number of shards in this test, but may run into
snapshotting more than the single index created in it due to auto-created indices like
`.watcher`.
Fixed by making the test only take a snapshot of the single index used by this test.
Closes #50450
* Add ILM history store index (#50287)
* Add ILM history store index
This commit adds an ILM history store that tracks the lifecycle
execution state as an index progresses through its ILM policy. ILM
history documents store output similar to what the ILM explain API
returns.
An example document with ALL fields (not all documents will have all
fields) would look like:
```json
{
  "@timestamp": 1203012389,
  "policy": "my-ilm-policy",
  "index": "index-2019.1.1-000023",
  "index_age": 123120,
  "success": true,
  "state": {
    "phase": "warm",
    "action": "allocate",
    "step": "ERROR",
    "failed_step": "update-settings",
    "is_auto_retryable_error": true,
    "creation_date": 12389012039,
    "phase_time": 12908389120,
    "action_time": 1283901209,
    "step_time": 123904107140,
    "phase_definition": "{\"policy\":\"ilm-history-ilm-policy\",\"phase_definition\":{\"min_age\":\"0ms\",\"actions\":{\"rollover\":{\"max_size\":\"50gb\",\"max_age\":\"30d\"}}},\"version\":1,\"modified_date_in_millis\":1576517253463}",
    "step_info": "{... etc step info here as json ...}"
  },
  "error_details": "java.lang.RuntimeException: etc\n\tcaused by:etc etc etc full stacktrace"
}
```
These documents go into the `ilm-history-1-00000N` index to provide an
audit trail of the operations ILM has performed.
This history storage is enabled by default but can be disabled by setting
`index.lifecycle.history_index_enabled` to `false`.
Resolves #49180
* Make ILMHistoryStore.putAsync truly async (#50403)
This changes the `putAsync` method in `ILMHistoryStore` to never block.
Previously due to the way that the `BulkProcessor` works, it was possible
for `BulkProcessor#add` to block executing a bulk request. This was bad
as we may be adding things to the history store in cluster state update
threads.
This also moves the index creation to be done prior to the bulk request
execution, rather than being checked every time an operation was added
to the queue. This lessens the chance of the index being created, then
deleted (by some external force), and then recreated via a bulk indexing
request.
Resolves #50353
refactors source and dest validation, adds support for CCS, makes resolve work like reindex/search, allows an aliased dest index with a single write index.
fixes #49988, fixes #49851
relates #43201
Previously, during expression optimisation, CAST would be considered
nullable if the casted expression resulted in a NULL literal, and would
always be non-nullable otherwise. As a result, if CAST was wrapped by a
null check function like IS NULL or IS NOT NULL, it was simplified to
TRUE/FALSE, eliminating the actual casting operation. So an expression
with an erroneous cast like CAST('foo' AS DATETIME) IS NULL
would be simplified to FALSE instead of throwing an Exception signifying
the attempt to cast 'foo' to a DATETIME type.
CAST now always returns Nullability.UNKNOWN, except when its result
evaluates to a constant NULL, where it returns Nullability.TRUE.
This way IS NULL/IS NOT NULL don't get simplified to FALSE/TRUE
and the CAST actually gets evaluated, resulting in a thrown Exception.
Fixes: #50191
(cherry picked from commit 671e07a931cd828661e226cba22a5d38804a17a5)
* Update remote cluster stats to support simple mode (#49961)
Remote cluster stats API currently only returns useful information if
the strategy in use is the SNIFF mode. This PR modifies the API to
provide relevant information if the user is in the SIMPLE mode. This
information is the configured addresses, max socket connections, and
open socket connections.
* Send hostname in SNI header in simple remote mode (#50247)
Currently an intermediate proxy must route connections to the
appropriate remote cluster when using simple mode. This commit offers
an additional mechanism for the proxy to route the connections by
including the hostname in the TLS SNI header.
* Rename the remote connection mode simple to proxy (#50291)
This commit renames the simple connection mode to the proxy connection
mode for remote cluster connections. In order to do this, the mode specific
settings which we namespaced by their mode (ex: sniff.seed and
proxy.addresses) have been reverted.
* Modify proxy mode to support a single address (#50391)
Currently, the remote proxy connection mode uses a list setting for the
proxy address. This commit modifies this so that the setting is
proxy_address and only supports a single remote proxy address.
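A hedged sketch of configuring a remote in proxy mode via the settings described above (remote alias and address are hypothetical):
```
PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.my_remote.mode": "proxy",
    "cluster.remote.my_remote.proxy_address": "proxy.example.com:9400"
  }
}
```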
Avoid backwards incompatible changes for 8.x and 7.6 by removing type
restriction on compile and Factory. Factories may optionally implement
ScriptFactory. If so, then they can indicate determinism and thus
cacheability.
**Backport**
Relates: #49466
In order to ensure any persisted model state is searchable by the moment
the job reports itself as `stopped`, we need to refresh the state index
before completing.
This should fix the occasional failures we see in #50168 and #50313 where
the model state appears missing.
Closes #50168, closes #50313
Backport of #50322
This fixes support for nested fields
We now support fully nested, fully collapsed, or a mix of both on inference docs.
ES mappings allow the `_source` to be any combination of nested objects + dot delimited fields.
So, we should do our best to find the best path down the Map for the desired field.
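For example (field names hypothetical), both of these `_source` shapes, and any mix of them, now resolve to the same `user.geo.city` value for inference:
```json
{ "user": { "geo": { "city": "Berlin" } } }
```
```json
{ "user.geo.city": "Berlin" }
```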
Our REST infrastructure will reject requests that have a body where the
body of the request is never consumed. This ensures that we reject
requests on endpoints that do not support having a body. This requires
cooperation from the REST handlers though, to actually consume the body,
otherwise the REST infrastructure will proceed with rejecting the
request. This commit addresses an issue in the has privileges API where
we would prematurely try to reject a request for not having a username,
before consuming the body. Since the body was not consumed, the REST
infrastructure would instead reject the request as a bad request.
This commit adds removal of unused data frame analytics state
from the _delete_expired_data API (and, by extension, the ML daily
maintenance task). At the moment the potential state docs
include the progress document and state for regression and
classification analyses.
Backport of #50243
Backport #50244 to 7.x branch.
If a processor executes asynchronously and the ingest simulate api simulates with
multiple documents then the order of the documents in the response may not match
the order of the documents in the request.
Alexander Reelsen discovered this issue with the enrich processor with the following reproduction:
```
PUT cities/_doc/munich
{ "zip": "80331", "city": "Munich" }

PUT cities/_doc/berlin
{ "zip": "10965", "city": "Berlin" }

PUT /_enrich/policy/zip-policy
{
  "match": {
    "indices": "cities",
    "match_field": "zip",
    "enrich_fields": [ "city" ]
  }
}

POST /_enrich/policy/zip-policy/_execute

GET _cat/indices/.enrich-*

POST /_ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "enrich": {
          "policy_name": "zip-policy",
          "field": "zip",
          "target_field": "city",
          "max_matches": "1"
        }
      }
    ]
  },
  "docs": [
    { "_id": "first", "_source": { "zip": "80331" } },
    { "_id": "second", "_source": { "zip": "50667" } }
  ]
}
```
* fixed test compile error
Follow up to #49729
This change removes falling back to listing out the repository contents to find the latest `index-N` in write-mounted blob store repositories.
This saves 2-3 list operations on each snapshot create and delete operation. Also it makes all the snapshot status APIs cheaper (and faster) by saving one list operation there as well in many cases.
As a result, this removes the resiliency to concurrent modifications of the repository and puts a repository in a `corrupted` state in case loading `RepositoryData` from the assumed generation fails.
This adds a new "xpack.license.upload.types" setting that restricts
which license types may be uploaded to a cluster.
By default all types are allowed (excluding basic, which can only be
generated and never uploaded).
This setting does not restrict APIs that generate licenses such as the
start trial API.
This setting is not documented as it is intended to be set by
orchestrators and not end users.
Backport of: #49418
Backport of #49612.
The current Docker entrypoint script picks up environment variables and
translates them into -E command line arguments. However, since any tool
executed via `docker exec` doesn't run the entrypoint, this results in
a poorer user experience.
Therefore, refactor the env var handling so that the -E options are
generated in `elasticsearch-env`. These have to be appended to any
existing command arguments, since some CLI tools have subcommands and
-E arguments must come after the subcommand.
Also extract the support for `_FILE` env vars into a separate script, so
that it can be called from more than one place (the behaviour is
idempotent).
Finally, add noop -E handling to CronEvalTool for parity, and support
`-E` in MultiCommand before subcommands.
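A hedged usage sketch of the `_FILE` handling (secret path and image tag are hypothetical):
```
docker run \
  -e ELASTIC_PASSWORD_FILE=/run/secrets/elastic_pw \
  -v "$PWD/secrets:/run/secrets" \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.0
```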
Executing the data frame analytics _explain API with a config that contains
a field that is not in the includes list but is at the same time in the excludes
list results in trying to remove the field twice from the iterator. That causes
an `IllegalStateException`. This commit fixes this issue and adds a test that
captures the scenario.
Backport of #50192
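A minimal sketch of a config that triggers the scenario (index, analysis, and field names are hypothetical): the field appears in `excludes` without being in a non-empty `includes` list:
```
POST _ml/data_frame/analytics/_explain
{
  "source": { "index": "my_data" },
  "analysis": { "outlier_detection": {} },
  "analyzed_fields": {
    "includes": [],
    "excludes": [ "field_a" ]
  }
}
```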
* Remove BlobContainer Tests against Mocks
Removing all these weird mocks as asked for by #30424.
All these tests are now part of real repository ITs and otherwise left unchanged if they had
independent tests that didn't call the `createBlobStore` method previously.
The HDFS tests also get added coverage as a side-effect because they did not have an implementation
of the abstract repository ITs.
Closes #30424
Lucene 8.4 added support for the "CONTAINS" spatial relation, so this commit
integrates those changes into Elasticsearch. This commit also contains a
bug fix for querying with a geometry collection using the "DISJOINT" relation.
The release builds use a production license key, while our REST tests load
licenses that are signed by the dev license key.
This change adds the new enterprise license REST tests to the
blacklist on release builds.
Backport of: #50163
Since 7.4, we have switched from the translog to Lucene as the source of
history for peer recoveries. However, we reduce the likelihood of
operation-based recoveries when performing a full cluster restart from
pre-7.4 because existing copies do not have PRRLs.
To remedy this issue, we fall back to using the translog in peer recoveries
if the recovering replica does not have a peer recovery retention lease
and the replication group hasn't fully migrated to PRRLs.
Relates #45136
Today we do not use retention leases in peer recovery for closed indices
because we can't sync retention leases on closed indices. This change
allows that ability and adjusts peer recovery to use retention leases
for all indices with soft-deletes enabled.
Relates #45136
Co-authored-by: David Turner <david.turner@elastic.co>
This adds a new field for the inference processor.
`warning_field` is a place for us to write warnings returned by the inference call. When there are warnings, we do not write an inference result. The goal of this is to indicate that the data provided was too poor or too different for the model to make an accurate prediction.
The user could optionally include the `warning_field`. When it is not provided, it is assumed no warnings were desired to be written.
The first of these warnings is when ALL of the input fields are missing. If none of the trained fields are present, we don't bother inferencing against the model and instead provide a warning stating that the fields were missing.
Also, this adds checks to not allow duplicated fields during processor creation.
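A hedged sketch of a pipeline using the new field (pipeline, model id, and target names are hypothetical; other inference processor options are omitted):
```
PUT _ingest/pipeline/my_inference_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my_model",
        "target_field": "ml.inference",
        "inference_config": { "regression": {} },
        "warning_field": "ml.inference.warning"
      }
    }
  ]
}
```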
This exchanges the direct use of the `Client` for `ResultsPersisterService`. State doc persistence will now retry. Failures to persist state will still not throw, but will be audited and logged.
This commit fixes a bug that caused the data frame analytics
_explain API to time out in a multi-node setup when the source
index was missing. When we try to create the extracted fields detector,
we check the index settings. If the index is missing, that check responds
with a failure that could be wrapped as a remote exception.
While we unwrapped correctly to check if the cause was an
`IndexNotFoundException`, we then proceeded to cast the original
exception instead of the cause.
Backport of #50176
This test was fixed as part of #49736 so that it used a
TokenService mock instance that was enabled, so that token
verification fails because the token is invalid and not because
the token service is not enabled.
When the randomly generated token we send decodes to a version > 7.2,
we need to have mocked a GetResponse for the call
that TokenService#getUserTokenFromId will make; otherwise this
hangs and times out.