[DOCS] Update X-Pack terminology in security docs (#36564)

Lisa Cawley 2018-12-19 14:53:37 -08:00 committed by GitHub
parent 9c1e47d434
commit 4140b9eede
63 changed files with 331 additions and 311 deletions

View File

@ -51,7 +51,8 @@ keys for each instance. If you chose to generate a CA, which is the default
behavior, the certificate and private key are included in the output file. If
you chose to generate CSRs, you should provide them to your commercial or
organization-specific certificate authority to obtain signed certificates. The
signed certificates must be in PEM format to work with {security}.
signed certificates must be in PEM format to work with the {stack}
{security-features}.
[float]
=== Parameters

View File

@ -93,7 +93,8 @@ the command produces a zip file containing the generated certificates and keys.
The `csr` mode generates certificate signing requests (CSRs) that you can send
to a trusted certificate authority to obtain signed certificates. The signed
certificates must be in PEM or PKCS#12 format to work with {security}.
certificates must be in PEM or PKCS#12 format to work with {es}
{security-features}.
By default, the command produces a single CSR for a single instance.
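As a rough illustration, a silent-mode `csr` invocation might look like the following sketch (both file names are placeholders):
[source,shell]
--------------------------------------------------
bin/elasticsearch-certutil csr --in instances.yml --out csr-bundle.zip
--------------------------------------------------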

View File

@ -19,8 +19,8 @@ bin/elasticsearch-setup-passwords auto|interactive
[float]
=== Description
This command is intended for use only during the initial configuration of
{xpack}. It uses the
This command is intended for use only during the initial configuration of the
{es} {security-features}. It uses the
{stack-ov}/built-in-users.html#bootstrap-elastic-passwords[`elastic` bootstrap password]
to run user management API requests. After you set a password for the `elastic`
user, the bootstrap password is no longer active and you cannot use this command.
@ -36,7 +36,7 @@ location, ensure that the *ES_PATH_CONF* environment variable returns the
correct path before you run the `elasticsearch-setup-passwords` command. You can
override settings in your `elasticsearch.yml` file by using the `-E` command
option. For more information about debugging connection failures, see
{xpack-ref}/trb-security-setup.html[`elasticsearch-setup-passwords` command fails due to connection failure].
{stack-ov}/trb-security-setup.html[`elasticsearch-setup-passwords` command fails due to connection failure].
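For example, a hedged sketch of overriding a single setting on the command line (the port value is purely illustrative):
[source,shell]
--------------------------------------------------
bin/elasticsearch-setup-passwords auto -E http.port=9201
--------------------------------------------------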
[float]
=== Parameters

View File

@ -40,12 +40,12 @@ https://www.elastic.co/subscriptions.
[float]
==== Authorization
If {security} is enabled, you need `manage` cluster privileges to install the
license.
If {es} {security-features} are enabled, you need `manage` cluster privileges to
install the license.
If {security} is enabled and you are installing a gold or platinum license, you
must enable TLS on the transport networking layer before you install the license.
See <<configuring-tls>>.
If {es} {security-features} are enabled and you are installing a gold or platinum
license, you must enable TLS on the transport networking layer before you
install the license. See <<configuring-tls>>.
[float]
==== Examples

View File

@ -88,10 +88,10 @@ When putting stored scripts, support for storing them with the deprecated `templ
now removed. Scripts must be stored using the `script` context as mentioned in the documentation.
[float]
==== Get Aliases API limitations when {security} is enabled removed
==== Removed Get Aliases API limitations when {security-features} are enabled
The behavior and response codes of the get aliases API no longer vary
depending on whether {security} is enabled. Previously a
depending on whether {security-features} are enabled. Previously a
404 - NOT FOUND (IndexNotFoundException) could be returned in case the
current user was not authorized for any alias. An empty response with
status 200 - OK is now returned instead at all times.
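As an illustrative sketch, a request such as the following now returns an empty response with status `200 - OK` even when the user is not authorized for any matching alias (the index pattern is a placeholder):
[source,js]
--------------------------------------------------
GET /logs*/_alias
--------------------------------------------------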

View File

@ -19,9 +19,9 @@ Deletes an existing anomaly detection job.
All job configuration, model state and results are deleted.
IMPORTANT: Deleting a job must be done via this API only. Do not delete the
job directly from the `.ml-*` indices using the Elasticsearch
DELETE Document API. When {security} is enabled, make sure no `write`
privileges are granted to anyone over the `.ml-*` indices.
job directly from the `.ml-*` indices using the Elasticsearch delete document
API. When {es} {security-features} are enabled, make sure no `write` privileges
are granted to anyone over the `.ml-*` indices.
Before you can delete a job, you must delete the {dfeeds} that are associated
with it. See <<ml-delete-datafeed,Delete {dfeeds-cap}>>. Unless the `force` parameter
@ -47,8 +47,9 @@ separated list.
==== Authorization
You must have `manage_ml`, or `manage` cluster privileges to use this API.
For more information, see {xpack-ref}/security-privileges.html[Security Privileges].
If {es} {security-features} are enabled, you must have `manage_ml`, or `manage`
cluster privileges to use this API.
For more information, see {stack-ov}/security-privileges.html[Security Privileges].
==== Examples

View File

@ -29,16 +29,17 @@ structure of the data that will be passed to the anomaly detection engine.
==== Authorization
You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
privileges to use this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
If {es} {security-features} are enabled, you must have `monitor_ml`, `monitor`,
`manage_ml`, or `manage` cluster privileges to use this API. For more
information, see
{stack-ov}/security-privileges.html[Security Privileges].
==== Security Integration
When {security} is enabled, the {dfeed} query will be previewed using the
credentials of the user calling the preview {dfeed} API. When the {dfeed}
is started it will run the query using the roles of the last user to
When {es} {security-features} are enabled, the {dfeed} query is previewed using
the credentials of the user calling the preview {dfeed} API. When the {dfeed}
is started it runs the query using the roles of the last user to
create or update it. If the two sets of roles differ then the preview may
not accurately reflect what the {dfeed} will return when started. To avoid
such problems, the same user that creates/updates the {dfeed} should preview

View File

@ -88,15 +88,16 @@ see <<ml-datafeed-resource>>.
==== Authorization
You must have `manage_ml`, or `manage` cluster privileges to use this API.
For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
If {es} {security-features} are enabled, you must have `manage_ml`, or `manage`
cluster privileges to use this API. For more information, see
{stack-ov}/security-privileges.html[Security Privileges].
==== Security Integration
==== Security integration
When {security} is enabled, your {dfeed} will remember which roles the user who
created it had at the time of creation, and run the query using those same roles.
When {es} {security-features} are enabled, your {dfeed} remembers which roles the
user who created it had at the time of creation and runs the query using those
same roles.
==== Examples

View File

@ -77,16 +77,16 @@ of the latest processed record.
==== Authorization
You must have `manage_ml`, or `manage` cluster privileges to use this API.
For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
If {es} {security-features} are enabled, you must have `manage_ml`, or `manage`
cluster privileges to use this API. For more information, see
{stack-ov}/security-privileges.html[Security Privileges].
==== Security Integration
==== Security integration
When {security} is enabled, your {dfeed} will remember which roles the last
user to create or update it had at the time of creation/update, and run the query
using those same roles.
When {es} {security-features} are enabled, your {dfeed} remembers which roles the
last user to create or update it had at the time of creation/update and runs the
query using those same roles.
==== Examples

View File

@ -79,15 +79,16 @@ see <<ml-datafeed-resource>>.
==== Authorization
You must have `manage_ml`, or `manage` cluster privileges to use this API.
For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
If {es} {security-features} are enabled, you must have `manage_ml`, or `manage`
cluster privileges to use this API. For more information, see
{stack-ov}/security-privileges.html[Security Privileges].
==== Security Integration
When {security} is enabled, your {dfeed} will remember which roles the user who
updated it had at the time of update, and run the query using those same roles.
When {es} {security-features} are enabled, your {dfeed} remembers which roles the
user who updated it had at the time of update and runs the query using those
same roles.
==== Examples

View File

@ -47,7 +47,7 @@ xpack.monitoring.exporters:
uniquely defines the exporter but is otherwise unused.
<3> `host` is a required setting for `http` exporters. It must specify the HTTP
port rather than the transport port. The default port value is `9200`.
<4> User authentication for those using {security} or some other
<4> User authentication for those using {stack} {security-features} or some other
form of user authentication protecting the cluster.
<5> See <<http-exporter-settings>> for all TLS/SSL settings. If not supplied,
the default node-level TLS/SSL settings are used.
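Pulling those callouts together, a minimal sketch of an `http` exporter might look like this (the exporter name, host, and credentials are placeholders):
[source,yaml]
--------------------------------------------------
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: ["https://monitoring.example.com:9200"] # HTTP port, not the transport port
    auth.username: remote_monitoring_user
    auth.password: changeme
--------------------------------------------------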

View File

@ -47,10 +47,10 @@ a message indicating that they are waiting for the resources to be set up.
One benefit of the `local` exporter is that it lives within the cluster and
therefore no extra configuration is required when the cluster is secured with
{security}. All operations, including indexing operations, that occur from a
`local` exporter make use of the internal transport mechanisms within {es}. This
behavior enables the exporter to be used without providing any user credentials
when {security} is enabled.
{stack} {security-features}. All operations, including indexing operations, that
occur from a `local` exporter make use of the internal transport mechanisms
within {es}. This behavior enables the exporter to be used without providing any
user credentials when {security-features} are enabled.
For more information about the configuration options for the `local` exporter,
see <<local-exporter-settings>>.
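A minimal sketch of such an exporter declaration (the name `my_local` is arbitrary):
[source,yaml]
--------------------------------------------------
xpack.monitoring.exporters.my_local:
  type: local
--------------------------------------------------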

View File

@ -2,8 +2,8 @@
[[api-definitions]]
== Definitions
These resource definitions are used in {ml} and {security} APIs and in {kib}
advanced {ml} job configuration options.
These resource definitions are used in APIs related to {ml-features} and
{security-features} and in {kib} advanced {ml} job configuration options.
* <<ml-calendar-resource,Calendars>>
* <<ml-datafeed-resource,{dfeeds-cap}>>

View File

@ -1,8 +1,10 @@
[role="xpack"]
[[configuring-tls-docker]]
=== Encrypting Communications in an {es} Docker Container
=== Encrypting communications in an {es} Docker Container
Starting with version 6.0.0, {security} (Gold, Platinum or Enterprise subscriptions) https://www.elastic.co/guide/en/elasticsearch/reference/6.0/breaking-6.0.0-xes.html[requires SSL/TLS]
Starting with version 6.0.0, {stack} {security-features}
(Gold, Platinum or Enterprise subscriptions)
https://www.elastic.co/guide/en/elasticsearch/reference/6.0/breaking-6.0.0-xes.html[require SSL/TLS]
encryption for the transport networking layer.
This section demonstrates an easy path to get started with SSL/TLS for both
@ -10,7 +12,7 @@ HTTPS and transport using the {es} Docker image. The example uses
Docker Compose to manage the containers.
For further details, please refer to
{xpack-ref}/encrypting-communications.html[Encrypting Communications] and
{stack-ov}/encrypting-communications.html[Encrypting communications] and
https://www.elastic.co/subscriptions[available subscriptions].
[float]
@ -156,7 +158,7 @@ volumes: {"esdata_01": {"driver": "local"}, "esdata_02": {"driver": "local"}}
<1> Bootstrap `elastic` with the password defined in `.env`. See
{stack-ov}/built-in-users.html#bootstrap-elastic-passwords[the Elastic Bootstrap Password].
<2> Automatically generate and apply a trial subscription, in order to enable
{security}.
{security-features}.
<3> Disable verification of authenticity for inter-node communication. Allows
creating self-signed certificates without having to pin specific internal IP addresses.
endif::[]

View File

@ -16,8 +16,8 @@ The _JCE Unlimited Strength Jurisdiction Policy Files_ are required for
encryption with key lengths greater than 128 bits, such as 256-bit AES encryption.
After installation, all cipher suites in the JCE are available for use but require
configuration in order to use them. To enable the use of stronger cipher suites with
{security}, configure the `cipher_suites` parameter. See the
configuration in order to use them. To enable the use of stronger cipher suites
with {es} {security-features}, configure the `cipher_suites` parameter. See the
{ref}/security-settings.html#ssl-tls-settings[Configuration Parameters for TLS/SSL]
section of this document for specific parameter information.
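For example, a hedged sketch of such a setting in `elasticsearch.yml` (which suites you enable depends on your JVM and policy files):
[source,yaml]
--------------------------------------------------
xpack.ssl.cipher_suites:
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_256_CBC_SHA
--------------------------------------------------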

View File

@ -12,14 +12,12 @@ Additionally, it is recommended that the certificates contain subject alternativ
names (SAN) that correspond to the node's IP address and DNS name so that
hostname verification can be performed.
In order to simplify the process of generating certificates for the Elastic
Stack, a command line tool, {ref}/certutil.html[`elasticsearch-certutil`] has been
included with {xpack}. This tool takes care of generating a CA and signing
certificates with the CA. `elasticsearch-certutil` can be used interactively or
in a silent mode through the use of an input file. The `elasticsearch-certutil`
tool also supports generation of certificate signing requests (CSR), so that a
commercial- or organization-specific CA can be used to sign the certificates.
For example:
The {ref}/certutil.html[`elasticsearch-certutil`] command simplifies the process
of generating certificates for the {stack}. It takes care of generating a CA and
signing certificates with the CA. It can be used interactively or in a silent
mode through the use of an input file. It also supports generation of
certificate signing requests (CSR), so that a commercial- or
organization-specific CA can be used to sign the certificates. For example:
. Optional: Create a certificate authority for your {es} cluster.
+
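A minimal sketch of this step, assuming the default PKCS#12 output file (`elastic-stack-ca.p12`); the tool prompts for the output name and an optional password:
[source,shell]
--------------------------------------------------
bin/elasticsearch-certutil ca
--------------------------------------------------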

View File

@ -2,11 +2,13 @@
[[configuring-tls]]
=== Encrypting communications in {es}
{security} enables you to encrypt traffic to, from, and within your {es} cluster.
Connections are secured using Transport Layer Security (TLS/SSL).
{stack} {security-features} enable you to encrypt traffic to, from, and within
your {es} cluster. Connections are secured using Transport Layer Security
(TLS/SSL).
WARNING: Clusters that do not have encryption enabled send all data in plain text
including passwords and will not be able to install a license that enables {security}.
including passwords and will not be able to install a license that enables
{security-features}.
To enable encryption, you need to perform the following steps on each node in
the cluster:
@ -27,7 +29,7 @@ information, see <<security-settings>>.
<<tls-ldap,encrypt communications between {es} and your LDAP server>>.
For more information about encrypting communications across the Elastic Stack,
see {xpack-ref}/encrypting-communications.html[Encrypting Communications].
see {stack-ov}/encrypting-communications.html[Encrypting Communications].
:edit_url: https://github.com/elastic/elasticsearch/edit/{branch}/docs/reference/security/securing-communications/node-certificates.asciidoc
include::node-certificates.asciidoc[]

View File

@ -3,7 +3,7 @@
=== Separating node-to-node and client traffic
Elasticsearch has the feature of so called {ref}/modules-transport.html[TCP transport profiles]
that allows it to bind to several ports and addresses. {security} extends on this
that allows it to bind to several ports and addresses. The {es} {security-features} extend this
functionality to enhance the security of the cluster by enabling the separation
of node-to-node transport traffic from client transport traffic. This is important
if the client transport traffic is not trusted and could potentially be malicious.

View File

@ -1,12 +1,13 @@
[[ssl-tls]]
=== Setting Up TLS on a Cluster
=== Setting Up TLS on a cluster
{security} enables you to encrypt traffic to, from, and within your {es}
cluster. Connections are secured using Transport Layer Security (TLS), which is
commonly referred to as "SSL".
The {stack} {security-features} enable you to encrypt traffic to, from, and
within your {es} cluster. Connections are secured using Transport Layer Security
(TLS), which is commonly referred to as "SSL".
WARNING: Clusters that do not have encryption enabled send all data in plain text
including passwords and will not be able to install a license that enables {security}.
including passwords and will not be able to install a license that enables
{security-features}.
The following steps describe how to enable encryption across the various
components of the Elastic Stack. You must perform each of the steps that are

View File

@ -5,7 +5,7 @@
To protect the user credentials that are sent for authentication, it's highly
recommended to encrypt communications between {es} and your Active Directory
server. Connecting via SSL/TLS ensures that the identity of the Active Directory
server is authenticated before {security} transmits the user credentials and the
server is authenticated before {es} transmits the user credentials and the
usernames and passwords are encrypted in transit.
Clients and nodes that connect via SSL/TLS to the Active Directory server need
@ -47,11 +47,11 @@ For more information about these settings, see <<ref-ad-settings>>.
. Restart {es}.
NOTE: By default, when you configure {security} to connect to Active Directory
using SSL/TLS, {security} attempts to verify the hostname or IP address
NOTE: By default, when you configure {es} to connect to Active Directory
using SSL/TLS, it attempts to verify the hostname or IP address
specified with the `url` attribute in the realm configuration with the
values in the certificate. If the values in the certificate and realm
configuration do not match, {security} does not allow a connection to the
configuration do not match, {es} does not allow a connection to the
Active Directory server. This is done to protect against man-in-the-middle
attacks. If necessary, you can disable this behavior by setting the
`ssl.verification_mode` property to `certificate`.

View File

@ -1,8 +1,8 @@
[role="xpack"]
[[tls-http]]
==== Encrypting HTTP Client Communications
==== Encrypting HTTP Client communications
When {security} is enabled, you can optionally use TLS to ensure that
When {security-features} are enabled, you can optionally use TLS to ensure that
communication between HTTP clients and the cluster is encrypted.
NOTE: Enabling TLS on the HTTP layer is strongly recommended but is not required.
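A hedged sketch of the corresponding `elasticsearch.yml` settings (the certificate path is a placeholder):
[source,yaml]
--------------------------------------------------
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
--------------------------------------------------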

View File

@ -5,7 +5,7 @@
To protect the user credentials that are sent for authentication in an LDAP
realm, it's highly recommended to encrypt communications between {es} and your
LDAP server. Connecting via SSL/TLS ensures that the identity of the LDAP server
is authenticated before {security} transmits the user credentials and the
is authenticated before {es} transmits the user credentials and the
contents of the connection are encrypted. Clients and nodes that connect via
TLS to the LDAP server need to have the LDAP server's certificate or the
server's root CA certificate installed in their keystore or truststore.
@ -15,7 +15,7 @@ For more information, see <<configuring-ldap-realm>>.
. Configure the realm's TLS settings on each node to trust certificates signed
by the CA that signed your LDAP server certificates. The following example
demonstrates how to trust a CA certificate, `cacert.pem`, located within the
{xpack} configuration directory:
{es} configuration directory (ES_PATH_CONF):
+
--
[source,shell]
@ -45,11 +45,11 @@ protocol and the secure port number. For example, `url: ldaps://ldap.example.com
. Restart {es}.
NOTE: By default, when you configure {security} to connect to an LDAP server
using SSL/TLS, {security} attempts to verify the hostname or IP address
NOTE: By default, when you configure {es} to connect to an LDAP server
using SSL/TLS, it attempts to verify the hostname or IP address
specified with the `url` attribute in the realm configuration with the
values in the certificate. If the values in the certificate and realm
configuration do not match, {security} does not allow a connection to the
configuration do not match, {es} does not allow a connection to the
LDAP server. This is done to protect against man-in-the-middle attacks. If
necessary, you can disable this behavior by setting the
`ssl.verification_mode` property to `certificate`.
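Putting those pieces together, a sketch of what an LDAP realm with TLS might look like in `elasticsearch.yml` (the realm name, URL, and CA path are illustrative; unrelated realm settings are omitted):
[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.ldap.ldap1:
  order: 0
  url: "ldaps://ldap.example.com:636"
  ssl.certificate_authorities: [ "ES_PATH_CONF/cacert.pem" ]
--------------------------------------------------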

View File

@ -1,10 +1,10 @@
[role="xpack"]
[[tls-transport]]
==== Encrypting Communications Between Nodes in a Cluster
==== Encrypting communications between nodes in a cluster
The transport networking layer is used for internal communication between nodes
in a cluster. When {security} is enabled, you must use TLS to ensure that
communication between the nodes is encrypted.
in a cluster. When {security-features} are enabled, you must use TLS to ensure
that communication between the nodes is encrypted.
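Once node certificates exist (the first step below), the resulting transport settings in `elasticsearch.yml` typically look something like this sketch (paths are placeholders):
[source,yaml]
--------------------------------------------------
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
--------------------------------------------------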
. <<node-certificates,Generate node certificates>>.

View File

@ -161,9 +161,9 @@ xpack.security.audit.index.settings:
--
NOTE: These settings apply to the local audit indices, as well as to the
<<remote-audit-settings, remote audit indices>>, but only if the remote cluster
does *not* have {security} installed, or the {es} versions are different.
If the remote cluster has {security} installed, and the versions coincide, the
settings for the audit indices there will take precedence,
does *not* have {security-features} enabled or the {es} versions are different.
If the remote cluster has {security-features} enabled and the versions coincide,
the settings for the audit indices there will take precedence,
even if they are unspecified (i.e. left to defaults).
--

View File

@ -90,9 +90,10 @@ access. Defaults to `true`.
[float]
[[security-automata-settings]]
==== Automata Settings
In places where {security} accepts wildcard patterns (e.g. index patterns in
roles, group matches in the role mapping API), each pattern is compiled into
an Automaton. The follow settings are available to control this behaviour.
In places where the {security-features} accept wildcard patterns (e.g. index
patterns in roles, group matches in the role mapping API), each pattern is
compiled into an Automaton. The following settings are available to control this
behaviour.
`xpack.security.automata.max_determinized_states`::
The upper limit on how many automaton states may be created by a single pattern.
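For example, a hedged sketch of raising that limit in `elasticsearch.yml` (the value is purely illustrative):
[source,yaml]
--------------------------------------------------
xpack.security.automata.max_determinized_states: 150000
--------------------------------------------------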
@ -357,7 +358,7 @@ Defaults to `60s`.
`group_search.base_dn`::
The container DN to search for groups in which the user has membership. When
this element is absent, {security} searches for the attribute specified by
this element is absent, {es} searches for the attribute specified by
`user_group_attribute` set on the user in order to determine group membership.
`group_search.scope`::
@ -391,7 +392,7 @@ YAML role mapping configuration file]. Defaults to
`ES_PATH_CONF/role_mapping.yml`.
`follow_referrals`::
Specifies whether {security} should follow referrals returned
Specifies whether {es} should follow referrals returned
by the LDAP server. Referrals are URLs returned by the server that are to be
used to continue the LDAP operation (for example, search). Defaults to `true`.
@ -517,7 +518,7 @@ The `type` setting must be set to `active_directory`. In addition to the
the following settings:
`url`::
An LDAP URL of the form `ldap[s]://<server>:<port>`. {security} attempts to
An LDAP URL of the form `ldap[s]://<server>:<port>`. {es} attempts to
authenticate against this URL. If the URL is not specified, it is derived from
the `domain_name` setting and assumes an unencrypted connection to port 389.
Defaults to `ldap://<domain_name>:389`. This setting is required when connecting
@ -756,7 +757,7 @@ this realm, so that it only supports user lookups.
Defaults to `true`.
`follow_referrals`::
If set to `true` {security} follows referrals returned by the LDAP server.
If set to `true`, {es} follows referrals returned by the LDAP server.
Referrals are URLs returned by the server that are to be used to continue the
LDAP operation (such as `search`). Defaults to `true`.
@ -832,7 +833,7 @@ capabilities and configuration of the Identity Provider.
If a path is provided, then it is resolved relative to the {es} config
directory.
If a URL is provided, then it must be either a `file` URL or a `https` URL.
{security} automatically polls this metadata resource and reloads
{es} automatically polls this metadata resource and reloads
the IdP configuration when changes are detected.
File based resources are polled at a frequency determined by the global {es}
`resource.reload.interval.high` setting, which defaults to 5 seconds.
@ -864,24 +865,20 @@ The URL of the Single Logout service within {kib}. Typically this is the
`https://kibana.example.com/logout`.
`attributes.principal`::
The Name of the SAML attribute that should be used as the {security} user's
principal (username).
The Name of the SAML attribute that contains the user's principal (username).
`attributes.groups`::
The Name of the SAML attribute that should be used to populate {security}
user's groups.
The Name of the SAML attribute that contains the user's groups.
`attributes.name`::
The Name of the SAML attribute that should be used to populate {security}
user's full name.
The Name of the SAML attribute that contains the user's full name.
`attributes.mail`::
The Name of the SAML attribute that should be used to populate {security}
user's email address.
The Name of the SAML attribute that contains the user's email address.
`attributes.dn`::
The Name of the SAML attribute that should be used to populate {security}
user's X.500 _Distinguished Name_.
The Name of the SAML attribute that contains the user's X.500
_Distinguished Name_.
`attribute_patterns.principal`::
A Java regular expression that is matched against the SAML attribute specified
@ -950,7 +947,7 @@ For more information, see
===== SAML realm signing settings
If a signing key is configured (that is, either `signing.key` or
`signing.keystore.path` is set), then {security} signs outgoing SAML messages.
`signing.keystore.path` is set), then {es} signs outgoing SAML messages.
Signing can be configured using the following settings:
`signing.saml_messages`::
@ -1001,7 +998,7 @@ Defaults to the keystore password.
===== SAML realm encryption settings
If an encryption key is configured (that is, either `encryption.key` or
`encryption.keystore.path` is set), then {security} publishes an encryption
`encryption.keystore.path` is set), then {es} publishes an encryption
certificate when generating metadata and attempts to decrypt incoming SAML
content. Encryption can be configured using the following settings:
@ -1210,8 +1207,8 @@ through the list of URLs will continue until a successful connection is made.
==== Default TLS/SSL settings
You can configure the following TLS/SSL settings in
`elasticsearch.yml`. For more information, see
{stack-ov}/encrypting-communications.html[Encrypting communications]. These settings will be used
for all of {xpack} unless they have been overridden by more specific
{stack-ov}/encrypting-communications.html[Encrypting communications]. These
settings are used unless they have been overridden by more specific
settings such as those for HTTP or Transport.
`xpack.ssl.supported_protocols`::
@ -1262,8 +1259,8 @@ Jurisdiction Policy Files_ has been installed, the default value also includes `
The following settings are used to specify a private key, certificate, and the
trusted certificates that should be used when communicating over an SSL/TLS connection.
If none of the settings below are specified, this will default to the <<ssl-tls-settings, {xpack}
defaults>>. If no trusted certificates are configured, the default certificates that are trusted by the JVM will be
If none of the settings below are specified, the
<<ssl-tls-settings,default settings>> are used. If no trusted certificates are configured, the default certificates that are trusted by the JVM will be
trusted along with the certificate(s) from the <<tls-ssl-key-settings, key settings>>. The key and certificate must be in place
for connections that require client authentication or when acting as an SSL-enabled server.
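As an illustration, the PEM variant of these key settings for the transport layer might look like the following sketch (file names are placeholders):
[source,yaml]
--------------------------------------------------
xpack.security.transport.ssl.key: certs/node.key
xpack.security.transport.ssl.certificate: certs/node.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]
--------------------------------------------------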

View File

@ -110,7 +110,7 @@ Password to the truststore.
===== PKCS#12 Files
{security} can be configured to use PKCS#12 container files (`.p12` or `.pfx` files)
{es} can be configured to use PKCS#12 container files (`.p12` or `.pfx` files)
that contain the private key, certificate and certificates that should be trusted.
PKCS#12 files are configured in the same way as Java Keystore Files:
@ -148,7 +148,7 @@ Password to the PKCS#12 file.
===== PKCS#11 Tokens
{security} can be configured to use a PKCS#11 token that contains the private key,
{es} can be configured to use a PKCS#11 token that contains the private key,
certificate and certificates that should be trusted.
PKCS#11 tokens require additional configuration at the JVM level and can be enabled

View File

@ -21,11 +21,11 @@ on each node in the cluster. For more information, see
=== PKI realm check
//See PkiRealmBootstrapCheckTests.java
If you use {security} and a Public Key Infrastructure (PKI) realm, you must
configure Transport Layer Security (TLS) on your cluster and enable client
authentication on the network layers (either transport or http). For more
information, see {xpack-ref}/pki-realm.html[PKI User Authentication] and
{xpack-ref}/ssl-tls.html[Setting Up TLS on a Cluster].
If you use {es} {security-features} and a Public Key Infrastructure (PKI) realm,
you must configure Transport Layer Security (TLS) on your cluster and enable
client authentication on the network layers (either transport or http). For more
information, see {stack-ov}/pki-realm.html[PKI user authentication] and
{stack-ov}/ssl-tls.html[Setting up TLS on a cluster].
To pass this bootstrap check, if a PKI realm is enabled, you must configure TLS
and enable client authentication on at least one network communication layer.
@ -42,7 +42,7 @@ and copy it to each node in the cluster. By default, role mappings are stored in
`ES_PATH_CONF/role_mapping.yml`. Alternatively, you can specify a
different role mapping file for each type of realm and specify its location in
the `elasticsearch.yml` file. For more information, see
{xpack-ref}/mapping-roles.html#mapping-roles-file[Using Role Mapping Files].
{stack-ov}/mapping-roles.html#mapping-roles-file[Using role mapping files].
To pass this bootstrap check, the role mapping files must exist and must be
valid. The Distinguished Names (DNs) that are listed in the role mappings files
@ -54,24 +54,24 @@ must also be valid.
//See TLSLicenseBootstrapCheck.java
In 6.0 and later releases, if you have a gold, platinum, or enterprise license
and {security} is enabled, you must configure SSL/TLS for
and {es} {security-features} are enabled, you must configure SSL/TLS for
internode-communication.
NOTE: Single-node clusters that use a loopback interface do not have this
requirement. For more information, see
{xpack-ref}/encrypting-communications.html[Encrypting Communications].
{stack-ov}/encrypting-communications.html[Encrypting communications].
To pass this bootstrap check, you must
{xpack-ref}/ssl-tls.html[set up SSL/TLS in your cluster].
{stack-ov}/ssl-tls.html[set up SSL/TLS in your cluster].
[float]
=== Token SSL check
//See TokenSSLBootstrapCheckTests.java
If you use {security} and the built-in token service is enabled, you must
configure your cluster to use SSL/TLS for the HTTP interface. HTTPS is required
in order to use the token service.
If you use {es} {security-features} and the built-in token service is enabled,
you must configure your cluster to use SSL/TLS for the HTTP interface. HTTPS is
required in order to use the token service.
In particular, if `xpack.security.authc.token.enabled` is
set to `true` in the `elasticsearch.yml` file, you must also set
@ -79,4 +79,4 @@ set to `true` in the `elasticsearch.yml` file, you must also set
settings, see <<security-settings>> and <<modules-http>>.
To pass this bootstrap check, you must enable HTTPS or disable the built-in
token service by using the {security} settings.
token service.
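In other words, a configuration that passes this check looks roughly like the following sketch:
[source,yaml]
--------------------------------------------------
xpack.security.authc.token.enabled: true
xpack.security.http.ssl.enabled: true
--------------------------------------------------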

View File

@ -76,12 +76,18 @@ TIP: Ensure the installation machine has access to the internet and that any cor
[[msi-installer-selected-plugins]]
image::images/msi_installer/msi_installer_selected_plugins.png[]
As of version 6.3.0, X-Pack is now https://www.elastic.co/products/x-pack/open[bundled by default]. The final step allows a choice of the type of X-Pack license to install, in addition to security configuration and built-in user configuration:
As of version 6.3.0, {xpack} is now https://www.elastic.co/products/x-pack/open[bundled by default].
The final step allows a choice of the type of license to install, in addition to
security configuration and built-in user configuration:
[[msi-installer-xpack]]
image::images/msi_installer/msi_installer_xpack.png[]
NOTE: X-Pack includes a choice of a Trial or Basic license. A Trial license is valid for 30 days, after which you can obtain one of the available subscriptions. The Basic license is free and perpetual. Consult the https://www.elastic.co/subscriptions[available subscriptions] for further details on which features are available under which license.
NOTE: {xpack} includes a choice of a Trial or Basic license. A Trial license is
valid for 30 days, after which you can obtain one of the available subscriptions.
The Basic license is free and perpetual. Consult the
https://www.elastic.co/subscriptions[available subscriptions] for further
details on which features are available under which license.
After clicking the install button, the installation will begin:
@ -260,7 +266,8 @@ as _properties_ within Windows Installer documentation) that can be passed to `m
`PLUGINS`::
A comma separated list of the plugins to download and install as part of the installation. Defaults to `""`
A comma separated list of the plugins to download and install as part of the
installation. Defaults to `""`
`HTTPSPROXYHOST`::
@ -280,47 +287,47 @@ as _properties_ within Windows Installer documentation) that can be passed to `m
`XPACKLICENSE`::
The type of X-Pack license to install, either `Basic` or `Trial`. Defaults to `Basic`
The type of license to install, either `Basic` or `Trial`. Defaults to `Basic`
`XPACKSECURITYENABLED`::
When installing with a `Trial` license, whether X-Pack Security should be enabled.
Defaults to `true`
When installing with a `Trial` license, whether {security-features} are
enabled. Defaults to `true`
`BOOTSTRAPPASSWORD`::
When installing with a `Trial` license and X-Pack Security enabled, the password to
used to bootstrap the cluster and persisted as the `bootstrap.password` setting in the keystore.
Defaults to a randomized value.
When installing with a `Trial` license and {security-features} are enabled,
the password used to bootstrap the cluster. It is persisted as the
`bootstrap.password` setting in the keystore. Defaults to a randomized value.
`SKIPSETTINGPASSWORDS`::
When installing with a `Trial` license and {security} enabled, whether the
installation should skip setting up the built-in users `elastic`, `kibana`,
`logstash_system`, `apm_system`, and `beats_system`.
When installing with a `Trial` license and {security-features} enabled,
whether the installation should skip setting up the built-in users.
Defaults to `false`
`ELASTICUSERPASSWORD`::
When installing with a `Trial` license and X-Pack Security enabled, the password
to use for the built-in user `elastic`. Defaults to `""`
When installing with a `Trial` license and {security-features} are enabled,
the password to use for the built-in user `elastic`. Defaults to `""`
`KIBANAUSERPASSWORD`::
When installing with a `Trial` license and X-Pack Security enabled, the password
to use for the built-in user `kibana`. Defaults to `""`
When installing with a `Trial` license and {security-features} are enabled,
the password to use for the built-in user `kibana`. Defaults to `""`
`LOGSTASHSYSTEMUSERPASSWORD`::
When installing with a `Trial` license and X-Pack Security enabled, the password
to use for the built-in user `logstash_system`. Defaults to `""`
When installing with a `Trial` license and {security-features} are enabled,
the password to use for the built-in user `logstash_system`. Defaults to `""`
To pass a value, simply append the property name and value using the format `<PROPERTYNAME>="<VALUE>"` to
the installation command. For example, to use a different installation directory to the default one and to install https://www.elastic.co/products/x-pack[X-Pack]:
To pass a value, simply append the property name and value using the format
`<PROPERTYNAME>="<VALUE>"` to the installation command. For example, to use a
different installation directory to the default one:
["source","sh",subs="attributes,callouts"]
--------------------------------------------
start /wait msiexec.exe /i elasticsearch-{version}.msi /qn INSTALLDIR="C:\Custom Install Directory\{version}" PLUGINS="x-pack"
start /wait msiexec.exe /i elasticsearch-{version}.msi /qn INSTALLDIR="C:\Custom Install Directory\{version}"
--------------------------------------------
Consult the https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx[Windows Installer SDK Command-Line Options]
@ -328,10 +335,10 @@ for additional rules related to values containing quotation marks.
ifdef::include-xpack[]
[[msi-installer-enable-indices]]
==== Enable automatic creation of X-Pack indices
==== Enable automatic creation of {xpack} indices
X-Pack will try to automatically create a number of indices within Elasticsearch.
The {stack} features try to automatically create a number of indices within {es}.
include::xpack-indices.asciidoc[]
endif::include-xpack[]

View File

@ -111,5 +111,5 @@ Then in your project's `pom.xml` if using maven, add the following repositories
--------------------------------------------------------------
--
. If you are using {security}, there are more configuration steps. See
{xpack-ref}/java-clients.html[Java Client and Security].
. If you are using {stack} {security-features}, there are more configuration
steps. See {stack-ov}/java-clients.html[Java Client and Security].

View File

@ -2,7 +2,7 @@
[[security-api]]
== Security APIs
You can use the following APIs to perform {security} activities.
You can use the following APIs to perform security activities.
* <<security-api-authenticate>>
* <<security-api-clear-cache>>

View File

@ -63,7 +63,8 @@ The value specified in the field rule can be one of the following types:
The _user object_ against which rules are evaluated has the following fields:
`username`::
(string) The username by which {security} knows this user. For example, `"username": "jsmith"`.
(string) The username by which the {es} {security-features} knows this user. For
example, `"username": "jsmith"`.
`dn`::
(string) The _Distinguished Name_ of the user. For example, `"dn": "cn=jsmith,ou=users,dc=example,dc=com",`.
`groups`::

View File

@ -14,12 +14,12 @@ certificates that are used to encrypt communications in your {es} cluster.
For more information about how certificates are configured in conjunction with
Transport Layer Security (TLS), see
{xpack-ref}/ssl-tls.html[Setting up SSL/TLS on a cluster].
{stack-ov}/ssl-tls.html[Setting up SSL/TLS on a cluster].
The API returns a list that includes certificates from all TLS contexts
including:
* {xpack} default TLS settings
* Default {es} TLS settings
* Settings for transport and HTTP interfaces
* TLS settings that are used within authentication realms
* TLS settings for remote monitoring exporters
@ -32,13 +32,13 @@ that are used for configuring server identity, such as `xpack.ssl.keystore` and
The list does not include certificates that are sourced from the default SSL
context of the Java Runtime Environment (JRE), even if those certificates are in
use within {xpack}.
use within {es}.
NOTE: When a PKCS#11 token is configured as the truststore of the JRE, the API
will return all the certificates that are included in the PKCS#11 token
irrespective of whether they are used in the {es} TLS configuration or not.
If {xpack} is configured to use a keystore or truststore, the API output
If {es} is configured to use a keystore or truststore, the API output
includes all certificates in that store, even though some of the certificates
might not be in active use within the cluster.
@ -56,16 +56,16 @@ single certificate. The fields in each object are:
`subject_dn`:: (string) The Distinguished Name of the certificate's subject.
`serial_number`:: (string) The hexadecimal representation of the certificate's
serial number.
`has_private_key`:: (boolean) If {xpack} has access to the private key for this
`has_private_key`:: (boolean) If {es} has access to the private key for this
certificate, this field has a value of `true`.
`expiry`:: (string) The ISO formatted date of the certificate's expiry
(not-after) date.
==== Authorization
If {security} is enabled, you must have `monitor` cluster privileges to use this
API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
If the {security-features} are enabled, you must have `monitor` cluster
privileges to use this API. For more information, see
{stack-ov}/security-privileges.html[Security Privileges].
==== Examples

View File

@ -20,8 +20,9 @@ related to this watch from the watch history.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the
watch directly from the `.watches` index using the Elasticsearch
DELETE Document API. When {security} is enabled, make sure no `write`
privileges are granted to anyone over the `.watches` index.
DELETE Document API. When {es} {security-features} are enabled, make
sure no `write` privileges are granted to anyone over the `.watches`
index.
[float]
==== Path Parameters

View File

@ -56,7 +56,7 @@ This API supports the following fields:
that will be used during the watch execution
| `ignore_condition` | no | false | When set to `true`, the watch execution uses the
{xpack-ref}/condition-always.html[Always Condition].
{stack-ov}/condition-always.html[Always Condition].
This can also be specified as an HTTP parameter.
| `alternative_input` | no | null | When present, the watch uses this object as a payload
@ -73,7 +73,7 @@ This API supports the following fields:
This can also be specified as an HTTP parameter.
| `watch` | no | null | When present, this
{xpack-ref}/how-watcher-works.html#watch-definition[watch] is used
{stack-ov}/how-watcher-works.html#watch-definition[watch] is used
instead of the one specified in the request. This watch is
not persisted to the index and record_execution cannot be set.
|======
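A hedged sketch of supplying a couple of these fields in the request body (the watch ID is a placeholder and the URL prefix differs between versions):
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
{
  "ignore_condition" : true,
  "record_execution" : true
}
--------------------------------------------------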
@ -91,7 +91,7 @@ are five possible modes an action can be associated with:
| `simulate` | The action execution is simulated. Each action type
define its own simulation operation mode. For example, the
{xpack-ref}/actions-email.html[email] action creates
{stack-ov}/actions-email.html[email] action creates
the email that would have been sent but does not actually
send it. In this mode, the action might be throttled if the
current state of the watch indicates it should be.
@ -116,14 +116,14 @@ are five possible modes an action can be associated with:
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
information, see {stack-ov}/security-privileges.html[Security Privileges].
[float]
==== Security Integration
When {security} is enabled on your Elasticsearch cluster, then watches will be
executed with the privileges of the user that stored the watches. If your user
is allowed to read index `a`, but not index `b`, then the exact same set of
When {es} {security-features} are enabled on your cluster, watches
are executed with the privileges of the user that stored the watches. If your
user is allowed to read index `a`, but not index `b`, then the exact same set of
rules will apply during execution of a watch.
When using the execute watch API, the authorization data of the user that

View File

@ -20,8 +20,8 @@ trigger engine.
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch
directly to the `.watches` index using the Elasticsearch Index API.
If {security} is enabled, make sure no `write` privileges are
granted to anyone over the `.watches` index.
If {es} {security-features} are enabled, make sure no `write`
privileges are granted to anyone over the `.watches` index.
When adding a watch you can also define its initial
{xpack-ref}/how-watcher-works.html#watch-active-state[active state]. You do that
@ -77,9 +77,9 @@ information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Security Integration
When {security} is enabled, your watch will only be able to index or search on
indices for which the user that stored the watch, has privileges. If the user is
able to read index `a`, but not index `b`, the same will apply, when the watch
When {es} {security-features} are enabled, your watch can index or search only
on indices for which the user that stored the watch has privileges. If the user
is able to read index `a`, but not index `b`, the same applies when the watch
is executed.
[float]

View File

@ -29,7 +29,7 @@ The following is a list of the events that can be generated:
| `run_as_denied` | | | Logged when an authenticated user attempts to <<run-as-privilege, run as>>
another user without having the necessary
<<security-reference, privilege>> to do so.
| `tampered_request` | | | Logged when {security} detects that the request has
| `tampered_request` | | | Logged when the {security-features} detect that the request has
been tampered with. Typically relates to `search/scroll`
requests when the scroll ID is believed to have been
tampered with.

View File

@ -38,9 +38,9 @@ xpack.security.audit.index.settings:
These settings apply to the local audit indices, as well as to the
<<forwarding-audit-logfiles, remote audit indices>>, but only if the remote cluster
does *not* have {security} installed, or the {es} versions are different.
If the remote cluster has {security} installed, and the versions coincide, the
settings for the audit indices there will take precedence,
does *not* have {security-features} enabled or the {es} versions are different.
If the remote cluster has {security-features} enabled and the versions coincide,
the settings for the audit indices there will take precedence,
even if they are unspecified (i.e. left to defaults).
NOTE: Audit events are batched for indexing so there is a lag before

View File

@ -13,7 +13,7 @@ Audit logs are **disabled** by default. To enable this functionality, you
must set `xpack.security.audit.enabled` to `true` in `elasticsearch.yml`.
============================================================================
{Security} provides two ways to persist audit logs:
The {es} {security-features} provide two ways to persist audit logs:
* The <<audit-log-output, `logfile`>> output, which persists events to
a dedicated `<clustername>_audit.log` file on the host's file system.

View File

@ -2,13 +2,12 @@
[[configuring-ad-realm]]
=== Configuring an Active Directory realm
You can configure {security} to communicate with Active Directory to authenticate
You can configure {es} to communicate with Active Directory to authenticate
users. To integrate with Active Directory, you configure an `active_directory`
realm and map Active Directory users and groups to {security} roles in the role
mapping file.
realm and map Active Directory users and groups to roles in the role mapping file.
For more information about Active Directory realms, see
{xpack-ref}/active-directory-realm.html[Active Directory User Authentication].
{stack-ov}/active-directory-realm.html[Active Directory User Authentication].
. Add a realm configuration of type `active_directory` to `elasticsearch.yml`
under the `xpack.security.authc.realms.active_directory` namespace.
@ -25,7 +24,7 @@ NOTE: Binding to Active Directory fails if the domain name is not mapped in DNS.
If DNS is not being provided by a Windows DNS server, add a mapping for
the domain in the local `/etc/hosts` file.
For example, the following realm configuration configures {security} to connect
For example, the following realm configuration configures {es} to connect
to `ldaps://example.com:636` to authenticate users through Active Directory:
[source, yaml]
@ -60,7 +59,7 @@ You must also set the `url` setting, since you must authenticate against the
Global Catalog, which uses a different port and might not be running on every
Domain Controller.
For example, the following realm configuration configures {security} to connect
For example, the following realm configuration configures {es} to connect
to specific Domain Controllers on the Global Catalog port with the domain name
set to the forest root:
@ -96,7 +95,7 @@ ports (389 or 636) in order to query the configuration container to retrieve the
domain name from the NetBIOS name.
--
. (Optional) Configure how {security} should interact with multiple Active
. (Optional) Configure how {es} should interact with multiple Active
Directory servers.
+
--
@ -113,14 +112,14 @@ operation are supported: failover and load balancing. See <<ref-ad-settings>>.
+
--
The Active Directory realm authenticates users using an LDAP bind request. By
default, all of the LDAP operations are run by the user that {security} is
default, all of the LDAP operations are run by the user that {es} is
authenticating. In some cases, regular users may not be able to access all of the
necessary items within Active Directory and a _bind user_ is needed. A bind user
can be configured and is used to perform all operations other than the LDAP bind
request, which is required to authenticate the credentials provided by the user.
The use of a bind user enables the
{xpack-ref}/run-as-privilege.html[run as feature] to be used with the Active
{stack-ov}/run-as-privilege.html[run as feature] to be used with the Active
Directory realm and the ability to maintain a set of pooled connections to
Active Directory. These pooled connections reduce the number of resources that
must be created and destroyed with every user authentication.
@ -235,7 +234,7 @@ user:
<4> The Active Directory distinguished name (DN) of the user `John Doe`.
For more information, see
{xpack-ref}/mapping-roles.html[Mapping users and groups to roles].
{stack-ov}/mapping-roles.html[Mapping users and groups to roles].
--
. (Optional) Configure the `metadata` setting in the Active Directory realm to

View File

@ -76,7 +76,8 @@ required changes.
IMPORTANT: As the administrator of the cluster, it is your responsibility to
ensure the same users are defined on every node in the cluster.
{security} does not deliver any mechanism to guarantee this.
The {es} {security-features} do not deliver any mechanisms to
guarantee this.
--
@ -103,7 +104,7 @@ the same changes are made on every node in the cluster.
. (Optional) Change how often the `users` and `users_roles` files are checked.
+
--
By default, {security} checks these files for changes every 5 seconds. You can
By default, {es} checks these files for changes every 5 seconds. You can
change this default behavior by changing the `resource.reload.interval.high`
setting in the `elasticsearch.yml` file (as this is a common setting in {es},
changing its value may affect other schedules in the system).
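For example, a sketch of relaxing that interval (the value is illustrative and, as noted, affects other file reloads too):
[source,yaml]
--------------------------------------------------
resource.reload.interval.high: 30s
--------------------------------------------------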

View File

@ -2,15 +2,14 @@
[[configuring-pki-realm]]
=== Configuring a PKI realm
You can configure {security} to use Public Key Infrastructure (PKI) certificates
to authenticate users in {es}. This requires clients to present X.509
certificates.
You can configure {es} to use Public Key Infrastructure (PKI) certificates
to authenticate users. This requires clients to present X.509 certificates.
NOTE: You cannot use PKI certificates to authenticate users in {kib}.
To use PKI in {es}, you configure a PKI realm, enable client authentication on
the desired network layers (transport or http), and map the Distinguished Names
(DNs) from the user certificates to {security} roles in the
(DNs) from the user certificates to roles in the
<<security-api-role-mapping,role-mapping API>> or role-mapping file.
You can also use a combination of PKI and username/password authentication. For
@ -22,7 +21,7 @@ allow clients without certificates to authenticate with other credentials.
IMPORTANT: You must enable SSL/TLS and enable client authentication to use PKI.
For more information, see {xpack-ref}/pki-realm.html[PKI User Authentication].
For more information, see {stack-ov}/pki-realm.html[PKI User Authentication].
. Add a realm configuration for a `pki` realm to `elasticsearch.yml` under the
`xpack.security.authc.realms.pki` namespace.
@ -75,8 +74,7 @@ xpack:
. Enable client authentication on the desired network layers (transport or http).
+
--
//TBD: This step might need to be split into a separate topic with additional details
//about setting up client authentication.
The PKI realm relies on the TLS settings of the node's network interface. The
realm can be configured to be more restrictive than the underlying network
connection - that is, it is possible to configure the node such that some
@ -174,7 +172,7 @@ the result. The user's distinguished name will be populated under the `pki_dn`
key. You can also use the authenticate API to validate your role mapping.
For more information, see
{xpack-ref}/mapping-roles.html[Mapping Users and Groups to Roles].
{stack-ov}/mapping-roles.html[Mapping Users and Groups to Roles].
NOTE: The PKI realm supports
{stack-ov}/realm-chains.html#authorization_realms[authorization realms] as an

View File

@ -101,7 +101,7 @@ introduction to realms, see {stack-ov}/realms.html[Realms].
It is recommended that the SAML realm be at the bottom of your authentication
chain (that is, it has the _highest_ order).
<4> This is the path to the metadata file that you saved for your identity provider.
The path that you enter here is relative to your `config/` directory. {security}
The path that you enter here is relative to your `config/` directory. {es}
automatically monitors this file for changes and reloads the configuration
whenever it is updated.
<5> This is the identifier (SAML EntityID) that your IdP uses. It should match
@ -218,8 +218,8 @@ When a user authenticates using SAML, they are identified to the {stack},
but this does not automatically grant them access to perform any actions or
access any data.
Your SAML users cannot do anything until they are mapped to {security}
roles. See {stack-ov}/saml-role-mapping.html[Configuring role mappings].
Your SAML users cannot do anything until they are mapped to roles. See
{stack-ov}/saml-role-mapping.html[Configuring role mappings].
NOTE: The SAML realm supports
{stack-ov}/realm-chains.html#authorization_realms[authorization realms] as an

View File

@ -3,9 +3,9 @@
=== Integrating with other authentication systems
If you are using an authentication system that is not supported out-of-the-box
by {security}, you can create a custom realm to interact with it to authenticate
users. You implement a custom realm as an SPI loaded security extension
as part of an ordinary elasticsearch plugin.
by the {es} {security-features}, you can create a custom realm to interact with
it to authenticate users. You implement a custom realm as an SPI-loaded security
extension as part of an ordinary {es} plugin.
[[implementing-custom-realm]]
==== Implementing a custom realm
@ -50,8 +50,8 @@ public AuthenticationFailureHandler getAuthenticationFailureHandler() {
----------------------------------------------------
+
The `getAuthenticationFailureHandler` method is used to optionally provide a
custom `AuthenticationFailureHandler`, which will control how {security} responds
in certain authentication failure events.
custom `AuthenticationFailureHandler`, which will control how the
{es} {security-features} respond in certain authentication failure events.
+
[source,java]
----------------------------------------------------

View File

@ -151,7 +151,7 @@ order::
idp.metadata.path::
This is the path to the metadata file that you saved for your Identity Provider.
The path that you enter here is relative to your `config/` directory.
{security} will automatically monitor this file for changes and will
{es} will automatically monitor this file for changes and will
reload the configuration whenever it is updated.
idp.entity_id::
@ -207,14 +207,14 @@ Attributes in SAML are named using a URI such as
more values associated with them.
These attribute identifiers vary between IdPs, and most IdPs offer ways to
customise the URIs and their associated value.
customize the URIs and their associated value.
{es} uses these attributes to infer information about the user who has
logged in, and they can be used for role mapping (below).
In order for these attributes to be useful, {es} and the IdP need to have a
common value for the names of the attributes. This is done manually, by
configuring the IdP and the {security} SAML realm to use the same URI name for
configuring the IdP and the SAML realm to use the same URI name for
each logical user attribute.
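On the {es} side, this usually boils down to the realm's `attributes.*`
settings, as in the following sketch; the realm name `saml1` is a placeholder
and the attribute names are examples only, so they must match whatever your IdP
actually releases:

[source,yaml]
----------------------------------------
xpack.security.authc.realms.saml.saml1.attributes.principal: "urn:oid:0.9.2342.19200300.100.1.1"
xpack.security.authc.realms.saml.saml1.attributes.groups: "department"
----------------------------------------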
The recommended steps for configuring these SAML attributes are as follows:
@ -469,7 +469,7 @@ or separate keys used for each of those.
The Elastic Stack uses X.509 certificates with RSA private keys for SAML
cryptography. These keys can be generated using any standard SSL tool, including
the `elasticsearch-certutil` tool that ships with {xpack}.
the `elasticsearch-certutil` tool.
Your IdP may require that the Elastic Stack have a cryptographic key for signing
SAML messages, and that you provide the corresponding signing certificate within
@ -518,7 +518,7 @@ Encryption certificates can be generated with the same process.
===== Configuring {es} for signing
By default, {security} will sign _all_ outgoing SAML messages if a signing
By default, {es} will sign _all_ outgoing SAML messages if a signing
key has been configured.
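As the surrounding paragraphs describe in more detail, a PEM-based signing
configuration might look like the following sketch; the realm name and file
paths are placeholders and are resolved relative to the config directory:

[source,yaml]
----------------------------------------
xpack.security.authc.realms.saml.saml1.signing.certificate: saml/signing.crt
xpack.security.authc.realms.saml.saml1.signing.key: saml/signing.key
----------------------------------------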
If you wish to use *PEM formatted* keys and certificates for signing, then
@ -559,17 +559,17 @@ are: `AuthnRequest`, `LogoutRequest` and `LogoutResponse`.
===== Configuring {es} for encrypted messages
{security} supports a single key for message decryption. If a key is
configured, then {security} will attempt to use it to decrypt
The {es} {security-features} support a single key for message decryption. If a
key is configured, then {es} attempts to use it to decrypt
`EncryptedAssertion` and `EncryptedAttribute` elements in Authentication
responses, and `EncryptedID` elements in Logout requests.
{security} will reject any SAML message that contains an `EncryptedAssertion`
{es} rejects any SAML message that contains an `EncryptedAssertion`
that cannot be decrypted.
If an `Assertion` contains both encrypted and plain-text attributes, then
failure to decrypt the encrypted attributes will not cause an automatic
rejection. Rather, {security} will process the available plain-text attributes
rejection. Rather, {es} processes the available plain-text attributes
(and any `EncryptedAttributes` that could be decrypted).
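As with signing, a PEM-based decryption configuration might look like the
following sketch (the realm name and file paths are placeholders); the next
paragraphs cover the details:

[source,yaml]
----------------------------------------
xpack.security.authc.realms.saml.saml1.encryption.certificate: saml/encryption.crt
xpack.security.authc.realms.saml.saml1.encryption.key: saml/encryption.key
----------------------------------------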
If you wish to use *PEM formatted* keys and certificates for SAML encryption,
@ -620,8 +620,8 @@ When a user authenticates using SAML, they are identified to the Elastic Stack,
but this does not automatically grant them access to perform any actions or
access any data.
Your SAML users cannot do anything until they are assigned {security}
roles. This is done through either the
Your SAML users cannot do anything until they are assigned roles. This is done
through either the
{ref}/security-api-put-role-mapping.html[add role mapping API], or with
<<authorization_realms, authorization realms>>.
@ -680,7 +680,7 @@ PUT /_security/role_mapping/saml-finance
// CONSOLE
// TEST
If your users also exist in a repository that can be directly accessed by {security}
If your users also exist in a repository that can be directly accessed by {es}
(such as an LDAP directory) then you can use
<<authorization_realms, authorization realms>> instead of role mappings.

View File

@ -10,18 +10,17 @@ You can configure characteristics of the user cache with the `cache.ttl`,
NOTE: PKI realms do not cache user credentials but do cache the resolved user
object to avoid unnecessarily needing to perform role mapping on each request.
The cached user credentials are hashed in memory. By default, {security} uses a
salted `sha-256` hash algorithm. You can use a different hashing algorithm by
setting the `cache.hash_algo` realm settings. See
The cached user credentials are hashed in memory. By default, the {es}
{security-features} use a salted `sha-256` hash algorithm. You can use a
different hashing algorithm by setting the `cache.hash_algo` realm setting. See
{ref}/security-settings.html#hashing-settings[User cache and password hash algorithms].
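For example, a realm entry that tunes the cache and keeps the default hashing
might look like the following sketch; `ldap1` is a placeholder realm name and
the values are illustrative only:

[source,yaml]
----------------------------------------
xpack.security.authc.realms.ldap.ldap1.cache.ttl: 10m
xpack.security.authc.realms.ldap.ldap1.cache.max_users: 50000
xpack.security.authc.realms.ldap.ldap1.cache.hash_algo: ssha256   # the default salted SHA-256
----------------------------------------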
[[cache-eviction-api]]
==== Evicting users from the cache
{security} exposes a
{ref}/security-api-clear-cache.html[Clear Cache API] you can use
to force the eviction of cached users. For example, the following request evicts
all users from the `ad1` realm:
You can use the {ref}/security-api-clear-cache.html[clear cache API] to force
the eviction of cached users. For example, the following request evicts all
users from the `ad1` realm:
[source, js]
------------------------------------------------------------

View File

@ -4,7 +4,8 @@
Elasticsearch allows you to execute operations against {ref}/indices-aliases.html[index aliases],
which are effectively virtual indices. An alias points to one or more indices,
holds metadata and potentially a filter. {security} treats aliases and indices
holds metadata and potentially a filter. The {es} {security-features} treat
aliases and indices
the same. Privileges for indices actions are granted on specific indices or
aliases. In order for an indices action to be authorized, the user that executes
it needs to have permissions for that action on all the specific indices or

View File

@ -3,7 +3,8 @@
=== Custom roles provider extension
If you need to retrieve user roles from a system not supported out-of-the-box
by {security}, you can create a custom roles provider to retrieve and resolve
by the {es} {security-features}, you can create a custom roles provider to
retrieve and resolve
roles. You implement a custom roles provider as an SPI loaded security extension
as part of an ordinary {es} plugin.

View File

@ -130,7 +130,7 @@ The following describes the structure of an application privileges entry:
<2> The list of the names of the application privileges to grant to this role.
<3> The resources to which those privileges apply. These are handled in the same
way as index name pattern in `indices` permissions. These resources do not
have any special meaning to {security}.
have any special meaning to the {es} {security-features}.
For details about the validation rules for these fields, see the
{ref}/security-api-put-privileges.html[add application privileges API].
@ -176,7 +176,7 @@ Based on the above definition, users owning the `clicks_admin` role can:
TIP: For a complete list of available <<security-privileges, cluster and indices privileges>>
There are two available mechanisms to define roles: using the _Role Management APIs_
or in local files on the {es} nodes. {security} also supports implementing
or in local files on the {es} nodes. You can also implement
custom roles providers. If you need to integrate with another system to retrieve
user roles, you can build a custom roles provider plugin. For more information,
see <<custom-roles-provider, Custom Roles Provider Extension>>.
@ -185,7 +185,7 @@ see <<custom-roles-provider, Custom Roles Provider Extension>>.
[[roles-management-ui]]
=== Role management UI
{security} enables you to easily manage users and roles from within {kib}. To
You can manage users and roles easily in {kib}. To
manage roles, log in to {kib} and go to *Management / Elasticsearch / Roles*.
[float]
@ -242,5 +242,5 @@ click_admins:
query: '{"match": {"category": "click"}}'
-----------------------------------
{security} continuously monitors the `roles.yml` file and automatically picks
{es} continuously monitors the `roles.yml` file and automatically picks
up and applies any changes to it.

View File

@ -10,9 +10,10 @@ For other types of realms, you must create _role-mappings_ that define which
roles should be assigned to each user based on their username, groups, or
other metadata.
{security} allows role-mappings to be defined via an
<<mapping-roles-api, API>>, or managed through <<mapping-roles-file, files>>.
These two sources of role-mapping are combined inside of {security}, so it is
You can define role-mappings via an
<<mapping-roles-api, API>> or manage them through <<mapping-roles-file, files>>.
These two sources of role-mapping are combined inside of the {es}
{security-features}, so it is
possible for a single user to have some roles that have been mapped through
the API, and other roles that are mapped through files.
@ -54,7 +55,7 @@ are values. The mappings can have a many-to-many relationship. When you map role
to groups, the roles of a user in that group are the combination of the roles
assigned to that group and the roles assigned to that user.
By default, {security} checks role mapping files for changes every 5 seconds.
By default, {es} checks role mapping files for changes every 5 seconds.
You can change this default behavior by changing the
`resource.reload.interval.high` setting in the `elasticsearch.yml` file. Since
this is a common setting in Elasticsearch, changing its value might effect other
@ -69,8 +70,8 @@ To specify users and groups in the role mappings, you use their
_Distinguished Names_ (DNs). A DN is a string that uniquely identifies the user
or group, for example `"cn=John Doe,cn=contractors,dc=example,dc=com"`.
NOTE: {security} only supports Active Directory security groups. You cannot map
distribution groups to roles.
NOTE: The {es} {security-features} support only Active Directory security groups.
You cannot map distribution groups to roles.
For example, the following snippet uses the file-based method to map the
`admins` group to the `monitoring` role and map the `John Doe` user, the
@ -85,7 +86,7 @@ user:
- "cn=users,dc=example,dc=com"
- "cn=admins,dc=example,dc=com"
------------------------------------------------------------
<1> The name of a {security} role.
<1> The name of a role.
<2> The distinguished name of an LDAP group or an Active Directory security group.
<3> The distinguished name of an LDAP or Active Directory user.

View File

@ -2,10 +2,11 @@
[[run-as-privilege]]
=== Submitting requests on behalf of other users
{security} supports a permission that enables an authenticated user to submit
The {es} {security-features} support a permission that enables an authenticated
user to submit
requests on behalf of other users. If your application already authenticates
users, you can use the _run as_ mechanism to restrict data access according to
{security} permissions without having to re-authenticate each user through.
{es} permissions without having to re-authenticate each user through.
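For example, a file-based role that allows impersonation might look like the
following sketch; the role name, index pattern, and user name are placeholders:

[source,yaml]
----------------------------------------
# roles.yml sketch
app_proxy_role:
  indices:
    - names: [ "app-data-*" ]
      privileges: [ "read" ]
  run_as: [ "end_user_1" ]
----------------------------------------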
To "run as" (impersonate) another user, you must be able to retrieve the user from
the realm you use to authenticate. Both the internal `native` and `file` realms

View File

@ -15,10 +15,10 @@ secured cluster:
* <<http-clients, HTTP Clients>>
{security} enables you to secure your {es} cluster. But {es} itself is only one
product within the Elastic Stack. It is often the case that other products in
the stack are connected to the cluster and therefore need to be secured as well,
or at least communicate with the cluster in a secured way:
The {es} {security-features} enable you to secure your {es} cluster. But
{es} itself is only one product within the {stack}. It is often the case that
other products in the stack are connected to the cluster and therefore need to
be secured as well, or at least communicate with the cluster in a secured way:
* <<hadoop, Apache Hadoop>>
* {auditbeat-ref}/securing-beats.html[Auditbeat]

View File

@ -3,9 +3,9 @@
See:
* {auditbeat-ref}/securing-beats.html[Auditbeat and {security}]
* {filebeat-ref}/securing-beats.html[Filebeat and {security}]
* {heartbeat-ref}/securing-beats.html[Heartbeat and {security}]
* {metricbeat-ref}/securing-beats.html[Metricbeat and {security}]
* {packetbeat-ref}/securing-beats.html[Packetbeat and {security}]
* {winlogbeat-ref}/securing-beats.html[Winlogbeat and {security}]
* {auditbeat-ref}/securing-beats.html[{auditbeat}]
* {filebeat-ref}/securing-beats.html[{filebeat}]
* {heartbeat-ref}/securing-beats.html[{heartbeat}]
* {metricbeat-ref}/securing-beats.html[{metricbeat}]
* {packetbeat-ref}/securing-beats.html[{packetbeat}]
* {winlogbeat-ref}/securing-beats.html[{winlogbeat}]

View File

@ -1,9 +1,10 @@
[[cross-cluster-configuring]]
=== Cross Cluster Search and Security
=== Cross cluster search and security
{ref}/modules-cross-cluster-search.html[Cross Cluster Search] enables
{ref}/modules-cross-cluster-search.html[Cross cluster search] enables
federated search across multiple clusters. When using cross cluster search
with secured clusters, all clusters must have {security} enabled.
with secured clusters, all clusters must have the {es} {security-features}
enabled.
The local cluster (the cluster used to initiate cross cluster search) must be
allowed to connect to the remote clusters, which means that the CA used to
@ -22,8 +23,8 @@ This feature was added as Beta in {es} `v5.3` with further improvements made in
To use cross cluster search with secured clusters:
* Enable {security} on every node in each connected cluster. For more
information about the `xpack.security.enabled` setting, see
* Enable the {es} {security-features} on every node in each connected cluster.
For more information about the `xpack.security.enabled` setting, see
{ref}/security-settings.html[Security Settings in {es}].
* Enable encryption globally. To encrypt communications, you must enable

View File

@ -1,7 +1,8 @@
[[http-clients]]
=== HTTP/REST Clients and Security
=== HTTP/REST clients and security
{security} works with standard HTTP {wikipedia}/Basic_access_authentication[basic authentication]
The {es} {security-features} work with standard HTTP
{wikipedia}/Basic_access_authentication[basic authentication]
headers to authenticate users. Since Elasticsearch is stateless, this header must
be sent with every request:
@ -48,8 +49,8 @@ curl --user rdeniro:taxidriver -XPUT 'localhost:9200/idx'
[float]
==== Client Libraries over HTTP
For more information about how to use {security} with the language specific clients
please refer to
For more information about using {security-features} with the language
specific clients, refer to
https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport#authentication[Ruby],
http://elasticsearch-py.readthedocs.org/en/master/#ssl-and-authentication[Python],
https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION[Perl],

View File

@ -1,9 +1,9 @@
[[java-clients]]
=== Java Client and Security
=== Java Client and security
deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]
{security} supports the Java http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html[transport client] for Elasticsearch.
The {es} {security-features} support the Java http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html[transport client] for Elasticsearch.
The transport client uses the same transport protocol that the cluster nodes use
for inter-node communication. It is very efficient as it does not have to marshall
and unmarshall JSON requests like a typical REST client.
@ -21,7 +21,8 @@ To use the transport client with a secured cluster, you need to:
. {ref}/setup-xpack-client.html[Configure the {xpack} transport client].
. Configure a user with the privileges required to start the transport client.
A default `transport_client` role is built-in to {xpack} that grants the
A default `transport_client` role is built into the {es} {security-features};
this role grants the
appropriate cluster permissions for the transport client to work with the secured
cluster. The transport client uses the _Nodes Info API_ to fetch information about
the nodes in the cluster.
@ -137,7 +138,7 @@ TransportClient client = new PreBuiltXPackTransportClient(Settings.builder()
[float]
[[disabling-client-auth]]
===== Disabling Client Authentication
===== Disabling client authentication
If you want to disable client authentication, you can use a client-specific
transport protocol. For more information see <<separating-node-client-traffic, Separating Node to Node and Client Traffic>>.
@ -167,7 +168,7 @@ NOTE: If you are using a public CA that is already trusted by the Java runtime,
[float]
[[connecting-anonymously]]
===== Connecting Anonymously
===== Connecting anonymously
To enable the transport client to connect anonymously, you must assign the
anonymous user the privileges defined in the <<java-transport-client-role,transport_client>>
@ -176,14 +177,14 @@ see <<anonymous-access,Enabling Anonymous Access>>.
[float]
[[security-client]]
==== Security Client
==== Security client
{security} exposes its own API through the `SecurityClient` class. To get a hold
of a `SecurityClient` you'll first need to create the `XPackClient`, which is a
wrapper around the existing Elasticsearch clients (any client class implementing
The {stack} {security-features} expose an API through the `SecurityClient` class.
To get a hold of a `SecurityClient` you first need to create the `XPackClient`,
which is a wrapper around the existing {es} clients (any client class implementing
`org.elasticsearch.client.Client`).
The following example shows how you can clear {security}'s realm caches using
The following example shows how you can clear the realm caches using
the `SecurityClient`:
[source,java]

View File

@ -1,15 +1,15 @@
[[secure-monitoring]]
=== Monitoring and Security
=== Monitoring and security
<<xpack-monitoring, {monitoring}>> consists of two components: an agent
that you install on on each {es} and Logstash node, and a Monitoring UI
The <<xpack-monitoring,{stack} {monitor-features}>> consists of two components:
an agent that you install on each {es} and Logstash node, and a Monitoring UI
in {kib}. The monitoring agent collects and indexes metrics from the nodes
and you visualize the data through the Monitoring dashboards in {kib}. The agent
can index data on the same {es} cluster, or send it to an external
monitoring cluster.
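When the data goes to an external monitoring cluster, the agent is typically
pointed at it with an `http` exporter; a sketch with a placeholder exporter
name, host, and credentials might look like this:

[source,yaml]
----------------------------------------
xpack.monitoring.exporters.my_remote:       # "my_remote" is an arbitrary exporter name
  type: http
  host: [ "https://monitoring.example.com:9200" ]
  auth.username: monitoring_agent_user      # placeholder user on the monitoring cluster
  auth.password: changeme                   # placeholder; use a real secret in practice
----------------------------------------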
To use {monitoring} with {security} enabled, you need to
{kibana-ref}/using-kibana-with-security.html[set up {kib} to work with {security}]
To use the {monitor-features} with the {security-features} enabled, you need to
{kibana-ref}/using-kibana-with-security.html[set up {kib} to work with the {security-features}]
and create at least one user for the Monitoring UI. If you are using an external
monitoring cluster, you also need to configure a user for the monitoring agent
and configure the agent to use the appropriate credentials when communicating

View File

@ -2,25 +2,25 @@
[[configuring-security]]
== Configuring security in {es}
++++
<titleabbrev>Configuring Security</titleabbrev>
<titleabbrev>Configuring security</titleabbrev>
++++
{security} enables you to easily secure a cluster. With {security}, you can
The {es} {security-features} enable you to easily secure a cluster. You can
password-protect your data as well as implement more advanced security measures
such as encrypting communications, role-based access control, IP filtering, and
auditing. For more information, see
{xpack-ref}/elasticsearch-security.html[Securing the Elastic Stack].
{stack-ov}/elasticsearch-security.html[Securing the {stack}].
To use {security} in {es}:
To use the {es} {security-features}:
. Verify that you are using a license that includes the {security} feature.
. Verify that you are using a license that includes the {security-features}.
+
--
If you want to try all of the {xpack} features, you can start a 30-day trial. At
the end of the trial period, you can purchase a subscription to keep using the
full functionality of the {xpack} components. For more information, see
If you want to try all of the platinum features, you can start a 30-day trial.
At the end of the trial period, you can purchase a subscription to keep using
the full functionality. For more information, see
https://www.elastic.co/subscriptions and
{xpack-ref}/license-management.html[License Management].
{stack-ov}/license-management.html[License Management].
--
. Verify that the `xpack.security.enabled` setting is `true` on each node in
@ -37,7 +37,7 @@ NOTE: This requirement applies to clusters with more than one node and to
clusters with a single node that listens on an external interface. Single-node
clusters that use a loopback interface do not have this requirement. For more
information, see
{xpack-ref}/encrypting-communications.html[Encrypting Communications].
{stack-ov}/encrypting-communications.html[Encrypting Communications].
--
.. <<node-certificates,Generate node certificates for each of your {es} nodes>>.
@ -49,7 +49,7 @@ information, see
. Set the passwords for all built-in users.
+
--
{security} provides
The {es} {security-features} provide
{stack-ov}/built-in-users.html[built-in users] to
help you get up and running. The +elasticsearch-setup-passwords+ command is the
simplest method to set the built-in users' passwords for the first time.
@ -126,7 +126,7 @@ curl -XPOST -u elastic 'localhost:9200/_security/user/johndoe' -H "Content-Type:
xpack.security.audit.enabled: true
----------------------------
+
For more information, see {xpack-ref}/auditing.html[Auditing Security Events]
For more information, see {stack-ov}/auditing.html[Auditing Security Events]
and <<auditing-settings>>.
.. Restart {es}.

View File

@ -6,7 +6,8 @@ Elasticsearch nodes store data that may be confidential. Attacks on the data may
come from the network. These attacks could include sniffing of the data,
manipulation of the data, and attempts to gain access to the server and thus the
files storing the data. Securing your nodes is required in order to use a production
license that enables {security} and helps reduce the risk from network-based attacks.
license that enables {security-features} and helps reduce the risk from
network-based attacks.
This section shows how to:

View File

@ -5,19 +5,19 @@
You can apply IP filtering to application clients, node clients, or transport
clients, in addition to other nodes that are attempting to join the cluster.
If a node's IP address is on the blacklist, {security} will still allow the
connection to Elasticsearch, but it will be dropped immediately, and no requests
will be processed.
If a node's IP address is on the blacklist, the {es} {security-features} allow
the connection to {es}, but it is dropped immediately and no requests are
processed.
NOTE: Elasticsearch installations are not designed to be publicly accessible
over the Internet. IP Filtering and the other security capabilities of
{security} do not change this condition.
over the Internet. IP Filtering and the other capabilities of the
{es} {security-features} do not change this condition.
[float]
=== Enabling IP filtering
{security} features an access control feature that allows or rejects hosts,
domains, or subnets.
The {es} {security-features} contain an access control feature that allows or
rejects hosts, domains, or subnets.
You configure IP filtering by specifying the `xpack.security.transport.filter.allow` and
`xpack.security.transport.filter.deny` settings in `elasticsearch.yml`. Allow rules
@ -79,7 +79,7 @@ xpack.security.http.filter.enabled: true
=== Specifying TCP transport profiles
{ref}/modules-transport.html[TCP transport profiles]
enable Elasticsearch to bind on multiple hosts. {security} enables you to apply
enable Elasticsearch to bind on multiple hosts. The {es} {security-features} enable you to apply
different IP filtering on different profiles.
[source,yaml]

View File

@ -70,13 +70,13 @@ For example, the following `webhook` action creates a new issue in GitHub:
<1> The username and password for the user creating the issue
NOTE: By default, both the username and the password are stored in the `.watches`
index in plain text. When {security} is enabled, {watcher} can encrypt the
password before storing it.
index in plain text. When the {es} {security-features} are enabled,
{watcher} can encrypt the password before storing it.
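If you opt in to that encryption, the relevant switch is the
`xpack.watcher.encrypt_sensitive_data` setting; the encryption key itself is
stored in the {es} keystore (see the {watcher} encryption settings for the
exact procedure). A minimal sketch:

[source,yaml]
----------------------------------------
# elasticsearch.yml: encrypt credentials and other sensitive data stored in watches
xpack.watcher.encrypt_sensitive_data: true
----------------------------------------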
You can also use PKI-based authentication when submitting requests to a cluster
secured with {security}. When you use PKI-based authentication instead of HTTP
basic auth, you don't need to store any authentication information in the watch
itself. To use PKI-based authentication, you {ref}/notification-settings.html#ssl-notification-settings
that has {es} {security-features} enabled. When you use PKI-based authentication
instead of HTTP basic auth, you don't need to store any authentication
information in the watch itself. To use PKI-based authentication, you
{ref}/notification-settings.html#ssl-notification-settings[configure the SSL key settings]
for {watcher} in `elasticsearch.yml`.
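A minimal sketch of that configuration, assuming PEM files and placeholder
paths (check the SSL notification settings reference for the full list of
supported options):

[source,yaml]
----------------------------------------
xpack.http.ssl.key: certs/watcher-client.key
xpack.http.ssl.certificate: certs/watcher-client.crt
xpack.http.ssl.certificate_authorities: [ "certs/ca.crt" ]
----------------------------------------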

View File

@ -11,8 +11,8 @@ related to this watch from the watch history.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the
watch directly from the `.watches` index using Elasticsearch's DELETE
Document API. I {security} is enabled, make sure no `write` privileges
are granted to anyone over the `.watches` index.
Document API. If the {es} {security-features} are enabled, make sure
no `write` privileges are granted to anyone over the `.watches` index.
The following example deletes a watch with the `my-watch` id:

View File

@ -10,8 +10,8 @@ registered with the relevant trigger engine (typically the scheduler, for the
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch
directly to the `.watches` index using Elasticsearch's Index API.
When {security} is enabled, make sure no `write` privileges are
granted to anyone over the `.watches` index.
When the {es} {security-features} are enabled, make sure no `write`
privileges are granted to anyone over the `.watches` index.
The following example adds a watch with the `my-watch` id that has the following

View File

@ -19,9 +19,9 @@ since {watcher} stores its watches in the `.watches` index, you can list them
by executing a search on this index.
IMPORTANT: You can only perform read actions on the `.watches` index. You must
use the {watcher} APIs to create, update, and delete watches. If
{security} is enabled, we recommend you only grant users `read`
privileges on the `.watches` index.
use the {watcher} APIs to create, update, and delete watches. If {es}
{security-features} are enabled, we recommend you only grant users
`read` privileges on the `.watches` index.
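A file-based role that grants that read-only access might look like the
following sketch (the role name is a placeholder); the search request shown
next can then be issued by any user holding it:

[source,yaml]
----------------------------------------
watch_reader:
  indices:
    - names: [ ".watches" ]
      privileges: [ "read" ]
----------------------------------------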
For example, the following returns the first 100 watches: