Adds a secure and reloadable SECURE_AUTH_PASSWORD setting to allow keystore entries in the form "xpack.monitoring.exporters.*.auth.secure_password" to securely supply passwords for monitoring HTTP exporters. Also deprecates the insecure `AUTH_PASSWORD` setting.
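For example, a password for a hypothetical exporter named `cloud_monitoring` could be stored with the keystore CLI (the exporter name is illustrative):
```
bin/elasticsearch-keystore add xpack.monitoring.exporters.cloud_monitoring.auth.secure_password
```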
The changes are to help users prepare for migration to the next major
release (v8.0.0) regarding the breaking change to realm order configuration.
Warnings are added for when:
* A realm does not have an order config
* Multiple realms have the same order config
The warning messages are added to both the deprecation API and the logs.
The main reasons for doing this are: 1) there is currently no automatic relay
between the two; 2) the deprecation API is covered by the Basic license and we
need logging for OSS.
* Rename ILM history index enablement setting
The previous setting was `index.lifecycle.history_index_enabled`; this commit changes it to
`indices.lifecycle.history_index_enabled` to indicate that this is a node-level setting, not an
index-level one.
* [DOCS] Rewrite analysis intro. Move index/search analysis content.
* Rewrites 'Text analysis' page intro as high-level definition.
Adds guidance on when users should configure text analysis
* Rewrites and splits index/search analysis content:
* Conceptual content -> 'Index and search analysis' under 'Concepts'
* Task-based content -> 'Specify an analyzer' under 'Configure...'
* Adds detailed examples for when to use the same index/search analyzer
and when not.
* Adds new example snippets for specifying search analyzers
* clarifications
* Add toc. Decrement headings.
* Reword 'When to configure' section
* Remove sentence from tip
Previously, if YEAR() was used as an ORDER BY argument without being
wrapped in another scalar (e.g. YEAR(birth_date) + 10), no script
ordering was used; instead the underlying field (e.g. birth_date)
was used as a performance optimisation. This works correctly if
YEAR() is the only ORDER BY arg, but if further args are used as tie
breakers for the ordering, wrong results are produced. This is because
two rows with different birth_date values but in the same year are not
tied, as the underlying ordering is on birth_date and not on
YEAR(birth_date), and the following ORDER BY args are ignored.
Remove this optimisation for YEAR() to avoid incorrect results in
such cases.
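A query of the following shape (index and field names are hypothetical) now uses script ordering on `YEAR(birth_date)` so that the `emp_no` tie breaker is applied correctly within a year:
```
POST /_sql?format=txt
{
  "query": "SELECT emp_no, birth_date FROM emp ORDER BY YEAR(birth_date), emp_no"
}
```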
As a consequence another bug is revealed: scalar functions on top
of nested fields produce scripted sorting/filtering which is not yet
supported. In such cases no error was thrown but instead all values for
such nested fields were null and were passed to the script implementing
the sorting/filtering, producing incorrect results.
Detect such cases and throw a validation exception.
Fixes: #51224
(cherry picked from commit f41efd6753dc3650a7eabb3e07b02b3b32c5704c)
The timeout.tcp_read AD/LDAP realm setting, despite its low-level-sounding
name, controls the time interval the realms wait for a response to
a query (search or bind). If the connection to the server is synchronous
(un-pooled), the response timeout is analogous to the TCP read timeout.
But the TCP read timeout is irrelevant in the common case of a pooled
connection (when a Bind DN is specified).
The timeout.tcp_read qualifier is hereby deprecated in favor of
timeout.response.
In addition, the default value for both timeout.tcp_read and
timeout.response is now that of timeout.ldap_search, instead of 5s (the
default for timeout.ldap_search itself is still 5s). The
timeout.ldap_search setting defines the server-controlled timeout of a search
request. There is no practical use case to have a smaller tcp_read
timeout compared to ldap_search (in this case the request would time-out
on the client but continue to be processed on the server). The proposed
change aims to simplify configuration so that the more common
configuration change, adjusting timeout.ldap_search up, has the expected
result (no timeout during searches) without any additional
modifications.
Closes #46028
This commit adds examples in our documentation for
- An HLRC instance authenticating to an elasticsearch cluster using
an elasticsearch token service access token or an API key
- An HLRC instance connecting to an elasticsearch cluster that is
set up for TLS on the HTTP layer when the CA certificate of the
cluster is available either as a PEM file or a keystore
- An HLRC instance connecting to an elasticsearch cluster that
requires client authentication where the client key and certificate
are available in a keystore
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
This commit deprecates the creation of dot-prefixed index names (e.g.
.watches) unless they are either 1) a hidden index, or 2) registered by
a plugin that extends SystemIndexPlugin. This is the first step
towards more thorough protections for system indices.
This commit also modifies several plugins which use dot-prefixed indices
to register indices they own as system indices, and adds a plugin to
register .tasks as a system index.
The docs tests have recently been running much slower than before (see #49753).
The gist here is that with ILM/SLM we do a lot of unnecessary setup / teardown work on each
test. Compounded with the slightly slower cluster state storage mechanism, this causes the
tests to run much slower.
In particular, on RAMDisk, docs:check is taking
ES 7.4: 6:55 minutes
ES master: 16:09 minutes
ES with this commit: 6:52 minutes
on SSD, docs:check is taking
ES 7.4: ??? minutes
ES master: 32:20 minutes
ES with this commit: 11:21 minutes
Changes the find_file_structure response to include a CSV
ingest processor in the ingest pipeline it suggests.
Previously the Kibana file upload functionality parsed CSV
in the browser, but by parsing CSV in the ingest pipeline
it makes the Kibana file upload functionality more easily
interchangeable with Filebeat, such that the configurations
it creates can more easily be used to import data with the
same structure repeatedly in production.
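A sketch of the kind of pipeline now suggested, assuming a two-column CSV in a `message` field (pipeline and field names are illustrative):
```
PUT /_ingest/pipeline/csv_example
{
  "processors": [
    {
      "csv": {
        "field": "message",
        "target_fields": ["first_name", "last_name"]
      }
    }
  ]
}
```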
* Reload secure settings with password (#43197)
If a password is not set, we assume an empty string to be
compatible with previous behavior.
Only allow the reload to be broadcast to other nodes if TLS is
enabled for the transport layer.
* Add passphrase support to elasticsearch-keystore (#38498)
This change adds support for keystore passphrases to all subcommands
of the elasticsearch-keystore cli tool and adds a subcommand for
changing the passphrase of an existing keystore.
The work to read the passphrase in Elasticsearch when
loading the keystore will be addressed in a different PR.
* Subcommands of elasticsearch-keystore can handle (open and create)
passphrase-protected keystores
* When reading a keystore, a user is prompted for a passphrase
only if the keystore is passphrase protected
* When creating a keystore, a user is allowed (default behavior) to create one with an
empty passphrase
* The passphrase can be set to be empty when changing/setting it for an
existing keystore
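A minimal sketch of the resulting CLI flow (assuming `passwd` as the name of the passphrase-changing subcommand):
```
# Create a keystore; an empty passphrase is allowed by default
bin/elasticsearch-keystore create

# Set or change the passphrase of an existing keystore
bin/elasticsearch-keystore passwd
```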
Relates to: #32691
Supersedes: #37472
* Restore behavior for force parameter (#44847)
Turns out that the behavior of `-f` for the add and add-file
subcommands, where it would also forcibly create the keystore if it
didn't exist, was by design - although undocumented.
This change restores that behavior, auto-creating a keystore that
is not password protected if the force flag is used. The force
OptionSpec is moved to the BaseKeyStoreCommand as we will presumably
want to maintain the same behavior in any other command that takes
a force option.
* Handle pwd protected keystores in all CLI tools (#45289)
This change ensures that `elasticsearch-setup-passwords` and
`elasticsearch-saml-metadata` can handle a password protected
elasticsearch.keystore.
For setup passwords the user will be prompted to enter the
elasticsearch keystore password upon running the tool. There is no
option to pass the password as a parameter as we assume the user is
present in order to enter the desired passwords for the built-in
users.
For saml-metadata, we prompt for the keystore password at all times
even though we'd only need to read something from the keystore when
there is a signing or encryption configuration.
* Modify docs for setup passwords and saml metadata cli (#45797)
Adds a sentence in the documentation of `elasticsearch-setup-passwords`
and `elasticsearch-saml-metadata` to describe that users would be
prompted for the keystore's password when running these CLI tools,
when the keystore is password protected.
Co-Authored-By: Lisa Cawley <lcawley@elastic.co>
* Elasticsearch keystore passphrase for startup scripts (#44775)
This commit allows a user to provide a keystore password on Elasticsearch
startup, but only prompts when the keystore exists and is encrypted.
The entrypoint in Java code is standard input. When the Bootstrap class is
checking for secure keystore settings, it checks whether or not the keystore
is encrypted. If so, we read one line from standard input and use this as the
password. For simplicity's sake, we allow a maximum passphrase length of 128
characters. (This is an arbitrary limit and could be increased or eliminated.
It is also enforced in the keystore tools, so that a user can't create a
password that's too long to enter at startup.)
In order to provide a password on standard input, we have to account for four
different ways of starting Elasticsearch: the bash startup script, the Windows
batch startup script, systemd startup, and docker startup. We use wrapper
scripts to reduce systemd and docker to the bash case: in both cases, a
wrapper script can read a passphrase from the filesystem and pass it to the
bash script.
In order to simplify testing the need for a passphrase, I have added a
has-passwd command to the keystore tool. This command can run silently, and
exit with status 0 when the keystore has a password. It exits with status 1 if
the keystore doesn't exist or exists and is unencrypted.
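For example, a packaging test can branch on the exit status (usage sketch):
```
# Exits 0 if the keystore exists and is password-protected,
# 1 if it is missing or unencrypted
bin/elasticsearch-keystore has-passwd && echo "keystore is protected"
```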
A good deal of the code-change in this commit has to do with refactoring
packaging tests to cleanly use the same tests for both the "archive" and the
"package" cases. This required not only moving tests around, but also adding
some convenience methods for an abstraction layer over distribution-specific
commands.
* Adjust docs for password protected keystore (#45054)
This commit adds relevant parts in the elasticsearch-keystore
sub-commands reference docs and in the reload secure settings API
doc.
* Fix failing Keystore Passphrase test for feature branch (#50154)
One problem with the passphrase-from-file tests, as written, is that
they would leave a SystemD environment variable set when they failed,
and this setting would cause elasticsearch startup to fail for other
tests as well. By using a try-finally, I hope that these tests will fail
more gracefully.
It appears that our Fedora and Ubuntu environments may be configured to
store journald information under /var rather than under /run, so that it
will persist between boots. Our destructive tests that read from the
journal need to account for this in order to properly limit the
output we check in tests.
* Run keystore management tests on docker distros (#50610)
* Add Docker handling to PackagingTestCase
Keystore tests need to be able to run in the Docker case. We can do this
by using a DockerShell instead of a plain Shell when Docker is running.
* Improve ES startup check for docker
Previously we were checking truncated output for the packaged JDK as
an indication that Elasticsearch had started. With new preliminary
password checks, we might get a false positive from ES keystore
commands, so we have to check specifically that the Elasticsearch
class from the Bootstrap package is what's running.
* Test password-protected keystore with Docker (#50803)
This commit adds two tests for the case where we mount a
password-protected keystore into a Docker container and provide a
password via a Docker environment variable.
We also fix a logging bug where we were logging the identifier for an
array of strings rather than the contents of that array.
* Add documentation for keystore startup prompting (#50821)
When a keystore is password-protected, Elasticsearch will prompt at
startup. This commit adds documentation for this prompt for the archive,
systemd, and Docker cases.
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Warn when unable to upgrade keystore on debian (#51011)
For Red Hat RPM upgrades, we warn if we can't upgrade the keystore. This
commit brings the same logic to the code for Debian packages. See the
posttrans file that gets executed for RPMs.
* Restore handling of string input
Adds tests that were mistakenly removed. One of these tests proved
we were not handling the stdin (-x) option correctly when no
input was added. This commit restores the original approach of
reading stdin one char at a time until there is no more (-1, \r, \n),
instead of using readline(), which might return null
* Apply spotless reformatting
* Use '--since' flag to get recent journal messages
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
It seems to me that we might be able to use journald's "--since" flag to
retrieve only log messages from the last run, and that this might be
less likely to fail due to race conditions in file deletion.
Unfortunately, it looks as if the "--since" flag has a granularity of
one second. I've added a two-second sleep to make sure that there's a
sufficient gap between the test that will read from journald and the
test before it.
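A sketch of the kind of invocation this enables (unit name and timestamp are illustrative):
```
# Fetch only the messages logged since the recorded start of the last run
journalctl -u elasticsearch.service --since "2020-01-30 12:34:56"
```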
* Use new journald wrapper pattern
* Update version added in secure settings request
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Ioannis Kakavas <ikakavas@protonmail.com>
Today we are repeatedly checking if the current build is a snapshot
build or not by reading the system property build.snapshot. This commit
formalizes this by adding a build parameter to indicate whether or not
the current build is a snapshot build.
* Creates a top-level page for EQL in the ES reference.
This page contains a high-level introduction and will include a nav for other EQL docs pages as they're built.
* Creates a requirements page.
This page outlines the fields needed to use EQL in ES.
This commit enables autoscaling in docs tests based on snapshot
builds. This is done so that when the API docs are added to the docs,
then the cluster will have been started to enable autoscaling so these
APIs are available for testing.
In 2bb31fe (v0.6.0!) we added DEBUG-level logging to the default config of
action loggers "for easier debugging". This change to the default config lives
on to this day. It does not obviously make debugging any easier any more, but
it does result in a good deal of log noise sometimes. This commit removes this
special case from the default config.
Closes #51198
This change alters the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When the runtime Java is 11 or higher, using the BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit tests and internal
clusters, and the relevant jars are copied explicitly to the lib
directory for the testclusters used in REST tests
- When the runtime Java is 8, using the BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require any
additional configuration either in CI workers or locally, and is
controlled by specifying -Dtests.fips.enabled=true
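For example (the Gradle task is illustrative):
```
./gradlew check -Dtests.fips.enabled=true
```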
The way it was originally written, it sounds like
we are boosting at query time.
Of course, the effect is at query time,
but the point here is that boosting is done at index time.
* [ML][Inference] add tags url param to GET (#51330)
Adds a new URL parameter, `tags`, to the GET _ml/inference/<model_id> endpoint.
This parameter allows the list of models to be further reduced to those that contain all the provided tags.
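For example (model id and tags are hypothetical):
```
GET _ml/inference/flight_delay_model?tags=regression,prod
```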
The regex for the response to `GET _cat/health?v` in `getting-started.asciidoc`
requires `max_task_wait_time` to match `(-|\\d+(micros|ms|s))`, which doesn't
match times such as `3.9ms` that contain a decimal point. This commit adjusts
the regex to match times formatted like this too.
Fixes #47537
The method parameter is not used in the percentile aggs, instead
the method is determined by the presence of `hdr` or `tdigest`
objects.
Relates to #8324
Add the character position of a scripting error to error responses.
The contents of the `position` field are experimental and subject to
change. Currently, `offset` refers to the character location where the
error was encountered, while `start` and `end` define a range of characters
that contain the error.
e.g.
```
{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
          "y = x;",
          "     ^---- HERE"
        ],
        "script": "def x = new ArrayList(); Map y = x;",
        "lang": "painless",
        "position": {
          "offset": 33,
          "start": 29,
          "end": 35
        }
      }
    ]
  }
}
```
Refs: #50993
Since autoscaling is currently only under development, this commit
causes the autoscaling docs to be excluded any time that release docs
are being built.
* REST PreparedStatement-like query parameters are now supported, in the form of an array of non-object, non-array values. The ES SQL parser will try to infer the data type of each value passed as a parameter.
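A sketch of such a parameterized query (index and field names are hypothetical):
```
POST /_sql?format=txt
{
  "query": "SELECT first_name FROM emp WHERE emp_no = ? AND hire_date > ?",
  "params": [10001, "1995-01-01"]
}
```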
(cherry picked from commit 45b8bf619aecb1c03d7bc0cf06928dcc36005a66)
The docs test suite is still timing out on CI at 35 minutes, so
pushing it to 40 minutes while we determine the cause of the slowdown.
Relates: #49753
Backport of: #51200
This commit merely adds the skeleton for the autoscaling project, adding
the basics to include the autoscaling module in the default
distribution, opt-in to code formatting, and a placeholder for the docs.
This change introduces a new feature for indices so that they can be
hidden from wildcard expansion. The feature is referred to as hidden
indices. An index can be marked hidden through the use of an index
setting, `index.hidden`, at creation time. One primary use case for
this feature is to have a construct for indices that are created
by the stack and contain data used for display to the user and/or
intended for querying by the user. The desire to keep them hidden is
to avoid confusing users who, when searching all of the data they have
indexed, get results returned from indices created by the
system. An example of creating and expanding to a hidden index follows
the list of properties below.
Hidden indices have the following properties:
* API calls for all indices (empty indices array, _all, or *) will not
return hidden indices by default.
* Wildcard expansion will not return hidden indices by default unless
the wildcard pattern begins with a `.`. This behavior is similar to
shell expansion of wildcards.
* REST API calls can enable the expansion of wildcards to hidden
indices with the `expand_wildcards` parameter. To expand wildcards
to hidden indices, use the value `hidden` in conjunction with `open`
and/or `closed`.
* Creation of a hidden index will ignore global index templates. A
global index template is one with a match-all pattern.
* Index templates can make an index hidden, with the exception of a
global index template.
* Accessing a hidden index directly requires no additional parameters.
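For example (index name is illustrative):
```
PUT /my-hidden-index
{
  "settings": {
    "index.hidden": true
  }
}

GET /*?expand_wildcards=open,hidden
```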
Backport of #50452
Object fields cannot be used as features. At the moment the _explain
API includes them and, even worse, does not error when
an object field is excluded. This creates the expectation for the
user that all child fields will also be excluded, while that is not
the case.
This commit omits object fields from the _explain API and also
adds an error if an object field is included or excluded.
Backport of #51115
elastic/docs#1687 added support for the `[%collapsible]` Asciidoc
attribute, which creates collapsible sections in the HTML output.
This PR makes two related changes to the nodes stats API documentation:
* Makes the response parameter sections collapsible. This allows users
to more easily navigate the page without long walls of text.
* Reorders the response parameter sections to match the default order
returned by the API.
Relates to #47524.
Adds a 'Configure text analysis' page to house tutorial content for the
analysis topic.
Also relocates the following pages as children of this new page:
* 'Test an analyzer'
* 'Configuring built-in analyzers'
* 'Create a custom analyzer'
I plan to add a tutorial for specifying index-time and search-time
analyzers to this section as part of a future PR.
This helps the topic better match the structure of
our machine learning docs, e.g.
https://www.elastic.co/guide/en/machine-learning/7.5/ml-concepts.html
This PR only includes the 'Anatomy of an analyzer' page as a 'Concepts'
child page, but I plan to add other concepts, such as 'Index time vs.
search time', with later PRs.
* Changes titles to sentence case.
* Appends pages with 'reference' to differentiate their content from
conceptual overviews.
* Moves the 'Normalizers' page to end of the Analysis topic pages.
Updates snippet to consistently use 2-space indentation. The snippet
previously used a mix of tab/5-space and 2-space indents.
Co-authored-by: Peter Johnson <wiz@wiz.co.nz>
Co-authored-by: Peter Johnson <peter@geocode.earth>
Updates several example snippets in the Cluster Allocation Explain API
docs to consistently use the `my_index` index.
Previously, the snippets switched from `my_index` to `idx`, which could
confuse users.
Co-authored-by: Emmanuel DEMEY <demey.emmanuel@gmail.com>
Backport: #50467
This commit adds the name of the current pipeline to ingest metadata.
This pipeline name is accessible under the following key: '_ingest.pipeline'.
Example usage in pipeline:
PUT /_ingest/pipeline/2
{
  "processors": [
    {
      "set": {
        "field": "pipeline_name",
        "value": "{{_ingest.pipeline}}"
      }
    }
  ]
}
Closes #42106
Knowing about used analysis components and mapping types would be incredibly
useful in order to know which ones may be deprecated or should get more love.
Some field types also act as a proxy to know about feature usage of some APIs
like the `percolator` or `completion` fields types for percolation and the
completion suggester, respectively.
The version replacement for the code snippet should replace 7.6 with the current version,
but doesn't match because of a missing whitespace.
Closes #51052
This change adds a new `kibana_admin` role, and deprecates
the old `kibana_user` and `kibana_dashboard_only_user` roles.
The deprecation is implemented via a new reserved metadata
attribute, which can be consumed from the API and also triggers
deprecation logging when used (by a user authenticating to
Elasticsearch).
Some docs have been updated to avoid references to these
deprecated roles.
Backport of: #46456
Co-authored-by: Larry Gregory <lgregorydev@gmail.com>
We deprecated and removed the camel-case versions of the nGram and edgeNGram
filters a while ago and we should do the same with the nGram and edgeNGram tokenizers.
This PR deprecates the use of these names in favour of ngram and edge_ngram in
7.x. Usage will then be disallowed on new indices starting with 8.0.
* Adds support for geo-bounds filtering in geogrid aggregations (#50002)
It is fairly common to filter the geo point candidates in
geohash_grid and geotile_grid aggregations according to some
viewable bounding box. This change introduces the option of
specifying this filter directly in the tiling aggregation.
This is even more relevant to `geo_shape` where the bounds will restrict
the shape to be within the bounds
This optional `bounds` parameter is parsed in an equivalent fashion to
the bounds specified in the geo_bounding_box query.
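A sketch of the new parameter on `geotile_grid` (index, field, and coordinates are illustrative):
```
GET /my_locations/_search
{
  "size": 0,
  "aggs": {
    "tiles_in_view": {
      "geotile_grid": {
        "field": "location",
        "precision": 8,
        "bounds": {
          "top_left": "52.4, 4.9",
          "bottom_right": "52.3, 5.0"
        }
      }
    }
  }
}
```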
This commit allows the plugin installer to install multiple plugins in a
single invocation. The installation will be treated as a transaction, so
that all of the plugins are installed successfully, or none of the plugins
are installed.
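For example (plugin names are illustrative):
```
bin/elasticsearch-plugin install analysis-icu analysis-kuromoji
```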
Adds a new parameter to regression and classification that enables computation
of importance for the top most important features. The computation of the importance
is based on SHAP (SHapley Additive exPlanations) method.
Backport of #50914
Updates a snippet to use the `s` query string parameter rather than
piping the output to a separate `sort` command.
This ensures the snippet is tested and available in clients other than
curl (Kibana console, etc.).
Issue was originally raised by @hackaholic in #40926.
* Move metadata storage to Lucene (#50907)
Today we split the on-disk cluster metadata across many files: one file for the metadata of each
index, plus one file for the global metadata and another for the manifest. Most metadata updates
only touch a few of these files, but some must write them all. If a node holds a large number of
indices then it's possible its disks are not fast enough to process a complete metadata update before timing out. In severe cases affecting master-eligible nodes this can prevent an election
from succeeding.
This commit uses Lucene as a metadata storage for the cluster state, and is a squashed version
of the following PRs that were targeting a feature branch:
* Introduce Lucene-based metadata persistence (#48733)
This commit introduces `LucenePersistedState` which master-eligible nodes
can use to persist the cluster metadata in a Lucene index rather than in
many separate files.
Relates #48701
* Remove per-index metadata without assigned shards (#49234)
Today on master-eligible nodes we maintain per-index metadata files for every
index. However, we also keep this metadata in the `LucenePersistedState`, and
only use the per-index metadata files for importing dangling indices. However
there is no point in importing a dangling index without any shard data, so we
do not need to maintain these extra files any more.
This commit removes per-index metadata files from nodes which do not hold any
shards of those indices.
Relates #48701
* Use Lucene exclusively for metadata storage (#50144)
This moves metadata persistence to Lucene for all node types. It also reenables BWC and adds
an interoperability layer for upgrades from prior versions.
This commit disables a number of tests related to dangling indices and command-line tools.
Those will be addressed in follow-ups.
Relates #48701
* Add command-line tool support for Lucene-based metadata storage (#50179)
Adds command-line tool support (unsafe-bootstrap, detach-cluster, repurpose, & shard
commands) for the Lucene-based metadata storage.
Relates #48701
* Use single directory for metadata (#50639)
Earlier PRs for #48701 introduced a separate directory for the cluster state. This is not needed
though, and introduces an additional unnecessary cognitive burden to the users.
Co-Authored-By: David Turner <david.turner@elastic.co>
* Add async dangling indices support (#50642)
Adds support for writing out dangling indices in an asynchronous way. Also provides an option to
avoid writing out dangling indices at all.
Relates #48701
* Fold node metadata into new node storage (#50741)
Moves node metadata to use the new storage mechanism (see #48701) as the authoritative source.
* Write CS asynchronously on data-only nodes (#50782)
Writes cluster states out asynchronously on data-only nodes. The main reason for writing out
the cluster state at all is so that the data-only nodes can snap into a cluster, so that they can do a
bit of bootstrap validation, and so that the shard recovery tools work.
Cluster states that are written asynchronously have their voting configuration adapted to a
non-existing configuration so that these nodes cannot mistakenly become master even if their
node role is changed back and forth.
Relates #48701
* Remove persistent cluster settings tool (#50694)
Adds the elasticsearch-node remove-settings tool to remove persistent settings from the
on-disk cluster state in cases where it contains incompatible settings that prevent the cluster from
forming.
Relates #48701
* Make cluster state writer resilient to disk issues (#50805)
Adds handling to make the cluster state writer resilient to disk issues. Relates to #48701
* Omit writing global metadata if no change (#50901)
Uses the same optimization for the new cluster state storage layer as the old one, writing global
metadata only when changed. Avoids writing out the global metadata if none of the persistent
fields changed. Speeds up server:integTest by ~10%.
Relates #48701
* DanglingIndicesIT should ensure node removed first (#50896)
These tests occasionally failed because the deletion was submitted before the
restarting node was removed from the cluster, causing the deletion not to be
fully acked. This commit fixes this by checking the restarting node has been
removed from the cluster.
Co-authored-by: David Turner <david.turner@elastic.co>
* fix tests
Co-authored-by: David Turner <david.turner@elastic.co>
* ILM action to wait for SLM policy execution (#50454)
This change adds a new ILM action to wait for SLM policy execution, to ensure that the index has a snapshot before deletion.
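A sketch of a policy using the new action (policy names are illustrative):
```
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "wait_for_snapshot": {
            "policy": "my_slm_policy"
          },
          "delete": {}
        }
      }
    }
  }
}
```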
Closes #45067
* Fix flaky TimeSeriesLifecycleActionsIT#testWaitForSnapshot test
This change adds some randomness and a cleanup step to the TimeSeriesLifecycleActionsIT#testWaitForSnapshot and testWaitForSnapshotSlmExecutedBefore tests in an attempt to make them stable.
Relates to #50781
* Formatting changes
* Longer timeout
* Fix Map.of in Java8
* Unused import removed
* Refresh cached phase policy definition if possible on new policy
There are some cases when updating a policy does not change the
structure in a significant way. In these cases, we can reread the
policy definition for any indices using the updated policy.
This commit adds this refreshing to the `TransportPutLifecycleAction`
to allow this. It allows us to do things like change the configuration
values for a particular step, even when on that step (for example,
changing the rollover criteria while on the `check-rollover-ready` step).
There are more cases where the phase definition can be reread than just
the ones checked here (for example, removing an action that has already
been passed), and those will be added in subsequent work.
Relates to #48431
This commit changes the default behavior for
xpack.security.ssl.diagnose.trust when running in a FIPS 140 JVM.
More specifically, when xpack.security.fips_mode.enabled is true:
- If xpack.security.ssl.diagnose.trust is not explicitly set, the
default value of it becomes false and a log message is printed
on info level, notifying of the fact that the TLS/SSL diagnostic
messages are not enabled when in a FIPS 140 JVM.
- If xpack.security.ssl.diagnose.trust is explicitly set, the value of
it is honored, even in FIPS mode.
This is relevant only for 7.x where we support Java 8 in which
SunJSSE can still be used as a FIPS 140 provider for TLS. SunJSSE
in FIPS mode, disallows the use of other TrustManager implementations
than the one shipped with SunJSSE.
* [ML][Inference] PUT API (#50852)
This adds the `PUT` API for creating trained models that support our format.
This includes
* HLRC change for the API
* API creation
* Validations of model format and call
* fixing backport
Adds support for the `offset` parameter to the `date_histogram` source
of composite aggs. The `offset` parameter is supported by the normal
`date_histogram` aggregation and is useful for folks that need to
measure things from, say, 6am one day to 6am the next day.
This is implemented by creating a new `Rounding` that knows how to
handle offsets and delegates to other rounding implementations. That
implementation doesn't fully implement the `Rounding` contract, namely
`nextRoundingValue`. That method isn't used by composite aggs so I can't
be sure that any implementation that I add will be correct. I propose to
leave it throwing `UnsupportedOperationException` until I need it.
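A sketch of the 6am-to-6am case (index and field names are illustrative):
```
GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "six_to_six": {
      "composite": {
        "sources": [
          {
            "day": {
              "date_histogram": {
                "field": "timestamp",
                "calendar_interval": "1d",
                "offset": "+6h"
              }
            }
          }
        ]
      }
    }
  }
}
```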
Closes #48757
If a pipeline referenced by a transform does not exist, we should not allow the transform to be created.
We do allow the pipeline existence check to be skipped with defer_validation, but if the pipeline still does not exist on `_start`, the transform will fail to start.
relates: #50135
Adds a 'text analysis overview' page to the analysis topic docs.
The goals of this page are:
* Concisely summarize the analysis process while avoiding in-depth concepts, tutorials, or API examples
* Explain why analysis is important, largely through highlighting problems with full-text searches missing analysis
* Highlight how analysis can be used to improve search results
The Analysis docs mention including a default analyzer in the index settings. However, no example snippet is included.
This adds an example snippet that users can easily copy and adjust.
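A sketch of such a snippet (the index name and analyzer choice are illustrative):
```
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "whitespace"
        }
      }
    }
  }
}
```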
This PR adds per-field metadata that can be set in the mappings and is later
returned by the field capabilities API. This metadata is completely opaque to
Elasticsearch but may be used by tools that index data in Elasticsearch to
communicate metadata about fields with tools that then search this data. A
typical example that has been requested in the past is the ability to attach
a unit to a numeric field.
In order to not bloat the cluster state, Elasticsearch requires that this
metadata be small:
- keys can't be longer than 20 chars,
- values can only be numbers or strings of no more than 50 chars - no inner
arrays or objects,
- the metadata can't have more than 5 keys in total.
Given that metadata is opaque to Elasticsearch, field capabilities don't try to
do anything smart when merging metadata about multiple indices; the union of
all field metadata is returned.
Here is what the meta might look like in mappings:
```json
{
  "properties": {
    "latency": {
      "type": "long",
      "meta": {
        "unit": "ms"
      }
    }
  }
}
```
And then in the field capabilities response:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms" ]
      }
    }
  }
}
```
When there are no conflicts, values are arrays of size 1, but when there are
conflicts, Elasticsearch includes all unique values in this array, without
giving ways to know which index has which metadata value:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms", "ns" ]
      }
    }
  }
}
```
Closes #33267
#43007 restructured the SQL REST API docs so they display across several pages.
This updates a reference that assumed a single page in the "Paginating through a large response" section. It also reformats a tip for the Kibana console.
Closes #50688
The docs/reference/redirects.asciidoc file stores a list of relocated or
deleted pages for the Elasticsearch Reference documentation.
This prunes several older redirects that are no longer needed.
If `geo_point` fields are multi-valued, using `geo_centroid` as a
sub-agg to `geohash_grid` could result in centroids outside of bucket
boundaries.
This adds a related warning to the geo_centroid agg docs.
We have about 800 `ObjectParsers` in Elasticsearch, about 700 of which
are final. This is *probably* the right way to declare them because in
practice we never mutate them after they are built. And we certainly
don't change the static reference. Anyway, this adds `final` to these
parsers.
I found the non-final parsers with this:
```
diff \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*PARSER\s*=' {} \+ | sort) \
<(find . -type f -name '*.java' -exec grep -iHe 'static.*final.*PARSER\s*=' {} \+ | sort) \
2>&1 | grep '^<'
```
Adds a `force` parameter to the delete data frame analytics
request. When `force` is `true`, the action force-stops the
jobs and then proceeds to the deletion. This can be used in
order to delete a non-stopped job with a single request.
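For example (job id is hypothetical):
```
DELETE _ml/data_frame/analytics/my_analytics_job?force=true
```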
Closes #48124
Backport of #50553
This intervals source will return terms that are similar to an input term, up to
an edit distance defined by fuzziness, similar to FuzzyQuery.
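A sketch of the new rule inside an intervals query (index, field, and term are illustrative):
```
GET /my_index/_search
{
  "query": {
    "intervals": {
      "my_text": {
        "fuzzy": {
          "term": "lakn",
          "fuzziness": 1
        }
      }
    }
  }
}
```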
Closes #49595
* Docs: Refine note about `after_key`
I was curious about composite aggregations, specifically I wanted to
know how to write a composite aggregation that had all of its buckets
filtered out so you *had* to use the `after_key`. Then I saw that we've
declared composite aggregations not to work with pipelines in #44180. So
I'm not sure you *can* do that any more. Which makes the note about
`after_key` inaccurate. This rejiggers that section of the docs a little
so it is more obvious that you send the `after_key` back to us. And so
it is more obvious that you should *only* use the `after_key` that we
give you rather than try to work it out for yourself.
* Apply suggestions from code review
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
The cat nodes API performs a `ClusterStateAction` then a `NodesInfoAction`.
Today it accepts the `?local` parameter and passes this to the
`ClusterStateAction` but this parameter has no effect on the `NodesInfoAction`.
This is surprising, because `GET _cat/nodes?local` looks like it might be a
completely local call but in fact it still depends on every node in the
cluster.
This commit deprecates the `?local` parameter on this API so that it can be
removed in 8.0.
Relates #50088
PR #44238 changed several links related to the Elasticsearch search request body API. This updates several places still using outdated links or anchors.
This will ultimately let us remove some redirects related to those link changes.
The docs/reference/redirects.asciidoc file stores a list of relocated or
deleted pages for the Elasticsearch Reference documentation.
This prunes several older redirects that are no longer needed and
don't require work to fix broken links in other repositories.
The additional change to the original PR (#49657) is that `org.elasticsearch.client.cluster.RemoteConnectionInfo` now parses the initial_connect_timeout field as a string instead of a TimeValue instance.
The reason this is needed is that the initial_connect_timeout field in the remote connection api is serialized for human consumption, but not for parsing purposes.
Therefore the HLRC can't parse it correctly (which caused test failures in CI, but not in the PR CI
:( ). The way this field is serialized needs to be changed in the remote connection api, but that is a breaking change. We should wait to make this change until rest api versioning is introduced.
Co-authored-by: j-bean <anton.shuvaev91@gmail.com>
Percentile aggregations are non-deterministic. A percentile aggregation
can produce different results even when using the same data.
Based on [this discuss post][0], the non-deterministic property stems
from processes in Lucene that can affect the order in which docs are
provided to the aggregation.
This adds a warning stating that the aggregation is non-deterministic
and what that means.
[0]: https://discuss.elastic.co/t/different-results-for-same-query/111757
File scripts were removed in 6.0 with #24627.
This removes an outdated file scripts reference from the conditional clauses section of the search templates docs.
* Add ILM history store index (#50287)
* Add ILM history store index
This commit adds an ILM history store that tracks the lifecycle
execution state as an index progresses through its ILM policy. ILM
history documents store output similar to what the ILM explain API
returns.
An example document with ALL fields (not all documents will have all
fields) would look like:
```json
{
  "@timestamp": 1203012389,
  "policy": "my-ilm-policy",
  "index": "index-2019.1.1-000023",
  "index_age": 123120,
  "success": true,
  "state": {
    "phase": "warm",
    "action": "allocate",
    "step": "ERROR",
    "failed_step": "update-settings",
    "is_auto-retryable_error": true,
    "creation_date": 12389012039,
    "phase_time": 12908389120,
    "action_time": 1283901209,
    "step_time": 123904107140,
    "phase_definition": "{\"policy\":\"ilm-history-ilm-policy\",\"phase_definition\":{\"min_age\":\"0ms\",\"actions\":{\"rollover\":{\"max_size\":\"50gb\",\"max_age\":\"30d\"}}},\"version\":1,\"modified_date_in_millis\":1576517253463}",
    "step_info": "{... etc step info here as json ...}"
  },
  "error_details": "java.lang.RuntimeException: etc\n\tcaused by:etc etc etc full stacktrace"
}
```
These documents go into the `ilm-history-1-00000N` index to provide an
audit trail of the operations ILM has performed.
This history storage is enabled by default but can be disabled by setting
`index.lifecycle.history_index_enabled` to `false`.
Resolves #49180
* Make ILMHistoryStore.putAsync truly async (#50403)
This changes the `putAsync` method in `ILMHistoryStore` so that it never blocks.
Previously due to the way that the `BulkProcessor` works, it was possible
for `BulkProcessor#add` to block executing a bulk request. This was bad
as we may be adding things to the history store in cluster state update
threads.
This also moves the index creation to be done prior to the bulk request
execution, rather than being checked every time an operation was added
to the queue. This lessens the chance of the index being created, then
deleted (by some external force), and then recreated via a bulk indexing
request.
Resolves #50353
Co-authored-by: Daniel Huang <danielhuang@tencent.com>
This is a spinoff of #48130 that generalizes the proposal to allow early termination with the composite aggregation when leading sources match a prefix of, or the entire, index sort specification.
In that case the composite aggregation can use the index sort natural order to early terminate the collection when it reaches a composite key that is greater than the bottom of the queue.
The optimization is also applicable when a query other than match_all is provided. However, the optimization is deactivated for sources that match the index sort in the following cases (a sketch follows this list):
* Multi-valued source, in which case early termination is not possible.
* missing_bucket is set to true.
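A sketch of an index sorted on a date field, where a composite aggregation leading with that field can terminate early (all names are illustrative):
```
PUT /sorted_index
{
  "settings": {
    "index.sort.field": "timestamp",
    "index.sort.order": "asc"
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}

GET /sorted_index/_search
{
  "size": 0,
  "aggs": {
    "paginate": {
      "composite": {
        "sources": [
          { "ts": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d" } } }
        ]
      }
    }
  }
}
```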
* Update remote cluster stats to support simple mode (#49961)
Remote cluster stats API currently only returns useful information if
the strategy in use is the SNIFF mode. This PR modifies the API to
provide relevant information if the user is in the SIMPLE mode. This
information is the configured addresses, max socket connections, and
open socket connections.
* Send hostname in SNI header in simple remote mode (#50247)
Currently an intermediate proxy must route connections to the
appropriate remote cluster when using simple mode. This commit offers
an additional mechanism for the proxy to route the connections by
including the hostname in the TLS SNI header.
* Rename the remote connection mode simple to proxy (#50291)
This commit renames the simple connection mode to the proxy connection
mode for remote cluster connections. In order to do this, the mode-specific
settings which we namespaced by their mode (e.g. sniff.seed and
proxy.addresses) have been reverted.
* Modify proxy mode to support a single address (#50391)
Currently, the remote proxy connection mode uses a list setting for the
proxy address. This commit modifies this so that the setting is
proxy_address and only supports a single remote proxy address.
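A sketch of configuring the renamed mode (cluster alias and address are illustrative):
```
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "my_remote": {
          "mode": "proxy",
          "proxy_address": "cluster-proxy.example.com:9300"
        }
      }
    }
  }
}
```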
The freeze index API docs state that frozen indices are blocked for
write operations.
While this implies frozen indices are read-only, it does not explicitly
use the term "read-only", which is found in other docs, such as the
force merge docs.
This adds the "ready-only" term to the freeze index API docs as well
as other clarification.
The `filter` rule is not allowed at the top level of the query, so this removes it
from the list of allowed rules. Where it can be nested inside other rules, those
rules already mention it.
Docker bypasses the Uncomplicated Firewall (UFW) on Linux by editing the `iptables` config directly, which leads to the exposure of port 9200, even if you blocked it via UFW.
This adds a warning along with work-arounds to the docs.
Signed-off-by: Kovah <mail@kovah.de>
Users often mistakenly map numeric IDs to numeric datatypes. However,
this is often slow for the `term` and other term-level queries.
The "Tune for search speed" docs includes advice for mapping numeric
IDs to `keyword` fields. However, this tip is not included in the
`numeric` or `keyword` field datatype doc pages.
This rewords the tip in the "Tune for search speed" docs, relocates it
to the `numeric` field docs, and reuses it using tagged regions.
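A sketch of the recommended mapping (index and field names are illustrative):
```
PUT /products
{
  "mappings": {
    "properties": {
      "product_id": { "type": "keyword" }
    }
  }
}
```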
For 7.x and earlier branches, `_cat/repositories` API requests require a
repository name.
This removes an erroneous request example (one lacking a repository name)
that was added with a8e0275.
Lucene 8.4 added support for "CONTAINS", therefore in this commit those
changes are integrated into Elasticsearch. This commit also contains a
bug fix for querying with a geometry collection using the "DISJOINT" relation.
bug fix when querying with a geometry collection with "DISJOINT" relation.