[7.x][DOCS] Copies security source files from stack-docs (#47534)

This commit is contained in:
Lisa Cawley 2019-10-04 08:19:10 -07:00 committed by GitHub
parent 3cc8081274
commit 9b3e5409c1
44 changed files with 3742 additions and 3 deletions

@@ -72,7 +72,7 @@ For example, the
following request creates a `remote_monitor` user that has the
`remote_monitoring_agent` role:
-[source, sh]
+[source,console]
---------------------------------------------------------------
POST /_security/user/remote_monitor
{
@@ -81,7 +81,6 @@ POST /_security/user/remote_monitor
"full_name" : "Internal Agent For Remote Monitoring"
}
---------------------------------------------------------------
-// CONSOLE
// TEST[skip:needs-gold+-license]
Alternatively, use the `remote_monitoring_user` {stack-ov}/built-in-users.html[built-in user].

@@ -0,0 +1,80 @@
[role="xpack"]
[[active-directory-realm]]
=== Active Directory user authentication
You can configure {stack} {security-features} to communicate with Active
Directory to authenticate users. To integrate with Active Directory, you
configure an `active_directory` realm and map Active Directory users and groups
to roles in the <<mapping-roles, role mapping file>>.
See {ref}/configuring-ad-realm.html[Configuring an active directory realm].
The {security-features} use LDAP to communicate with Active Directory, so
`active_directory` realms are similar to <<ldap-realm, `ldap` realms>>. Like
LDAP directories, Active Directory stores users and groups hierarchically. The
directory's hierarchy is built from containers such as the _organizational unit_
(`ou`), _organization_ (`o`), and _domain component_ (`dc`).
The path to an entry is a _Distinguished Name_ (DN) that uniquely identifies a
user or group. User and group names typically have attributes such as a
_common name_ (`cn`) or _unique ID_ (`uid`). A DN is specified as a string, for
example `"cn=admin,dc=example,dc=com"` (white spaces are ignored).
The {security-features} support only Active Directory security groups. You
cannot map distribution groups to roles.
NOTE: When you use Active Directory for authentication, the username entered by
the user is expected to match the `sAMAccountName` or `userPrincipalName`,
not the common name.
The Active Directory realm authenticates users using an LDAP bind request. After
authenticating the user, the realm then searches to find the user's entry in
Active Directory. Once the user has been found, the Active Directory realm then
retrieves the user's group memberships from the `tokenGroups` attribute on the
user's entry in Active Directory.
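
For illustration only, a minimal `active_directory` realm in `elasticsearch.yml`
might look like the following sketch; the realm name, domain name, and URL are
placeholders, and the full set of options is described in
{ref}/configuring-ad-realm.html[Configuring an Active Directory realm]:

[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.active_directory.ad1:
  order: 0
  domain_name: ad.example.com        # placeholder: your AD domain
  url: ldaps://ad.example.com:636    # placeholder: your domain controller URL
--------------------------------------------------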
[[ad-load-balancing]]
==== Load balancing and failover
The `load_balance.type` setting can be used at the realm level to configure how
the {security-features} should interact with multiple Active Directory servers.
Two modes of operation are supported: failover and load balancing.
See
{ref}/security-settings.html#load-balancing[Load balancing and failover settings].
[[ad-settings]]
==== Active Directory realm settings
See
{ref}/security-settings.html#ref-ad-settings[Active Directory realm settings].
[[mapping-roles-ad]]
==== Mapping Active Directory users and groups to roles
See {ref}/configuring-ad-realm.html[Configuring an Active Directory realm].
[[ad-user-metadata]]
==== User metadata in Active Directory realms
When a user is authenticated via an Active Directory realm, the following
properties are populated in the user's _metadata_:
|=======================
| Field | Description
| `ldap_dn` | The distinguished name of the user.
| `ldap_groups` | The distinguished name of each of the groups that were
resolved for the user (regardless of whether those
groups were mapped to a role).
|=======================
This metadata is returned in the
{ref}/security-api-authenticate.html[authenticate API] and can be used with
<<templating-role-query, templated queries>> in roles.
Additional metadata can be extracted from the Active Directory server by configuring
the `metadata` setting on the Active Directory realm.
[[active-directory-ssl]]
==== Setting up SSL between Elasticsearch and Active Directory
See
{ref}/configuring-tls.html#tls-active-directory[Encrypting communications between {es} and Active Directory].

@@ -0,0 +1,203 @@
[role="xpack"]
[[built-in-users]]
=== Built-in users
The {stack-security-features} provide built-in user credentials to help you get
up and running. These users have a fixed set of privileges and cannot be
authenticated until their passwords have been set. The `elastic` user can be
used to <<set-built-in-user-passwords,set all of the built-in user passwords>>.
`elastic`:: A built-in _superuser_. See <<built-in-roles>>.
`kibana`:: The user Kibana uses to connect and communicate with Elasticsearch.
`logstash_system`:: The user Logstash uses when storing monitoring information in Elasticsearch.
`beats_system`:: The user the Beats use when storing monitoring information in Elasticsearch.
`apm_system`:: The user the APM server uses when storing monitoring information in {es}.
`remote_monitoring_user`:: The user {metricbeat} uses when collecting and
storing monitoring information in {es}. It has the `remote_monitoring_agent` and
`remote_monitoring_collector` built-in roles.
[float]
[[built-in-user-explanation]]
==== How the built-in users work
These built-in users are stored in a special `.security` index, which is managed
by {es}. If a built-in user is disabled or its password
changes, the change is automatically reflected on each node in the cluster. If
your `.security` index is deleted or restored from a snapshot, however, any
changes you have applied are lost.
Although they share the same API, the built-in users are separate and distinct
from users managed by the <<native-realm, native realm>>. Disabling the native
realm will not have any effect on the built-in users. The built-in users can
be disabled individually, using the
{ref}/security-api-disable-user.html[disable users API].
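
For example, the following request (shown here as a sketch) disables the
`logstash_system` built-in user:

[source,console]
--------------------------------------------------
PUT /_security/user/logstash_system/_disable
--------------------------------------------------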
[float]
[[bootstrap-elastic-passwords]]
==== The Elastic bootstrap password
When you install {es}, if the `elastic` user does not already have a password,
it uses a default bootstrap password. The bootstrap password is a transient
password that enables you to run the tools that set all the built-in user passwords.
By default, the bootstrap password is derived from a randomized `keystore.seed`
setting, which is added to the keystore during installation. You do not need
to know or change this bootstrap password. If you have defined a
`bootstrap.password` setting in the keystore, however, that value is used instead.
For more information about interacting with the keystore, see
{ref}/secure-settings.html[Secure Settings].
NOTE: After you <<set-built-in-user-passwords,set passwords for the built-in users>>,
in particular for the `elastic` user, there is no further use for the bootstrap
password.
[float]
[[set-built-in-user-passwords]]
==== Setting built-in user passwords
You must set the passwords for all built-in users.
The +elasticsearch-setup-passwords+ tool is the simplest method to set the
built-in users' passwords for the first time. It uses the `elastic` user's
bootstrap password to run user management API requests. For example, you can run
the command in an "interactive" mode, which prompts you to enter new passwords
for the `elastic`, `kibana`, `logstash_system`, `beats_system`, `apm_system`,
and `remote_monitoring_user` users:
[source,shell]
--------------------------------------------------
bin/elasticsearch-setup-passwords interactive
--------------------------------------------------
For more information about the command options, see
{ref}/setup-passwords.html[elasticsearch-setup-passwords].
IMPORTANT: After you set a password for the `elastic` user, the bootstrap
password is no longer valid; you cannot run the `elasticsearch-setup-passwords`
command a second time.
Alternatively, you can set the initial passwords for the built-in users by using
the *Management > Users* page in {kib} or the
{ref}/security-api-change-password.html[Change Password API]. These methods are
more complex. You must supply the `elastic` user and its bootstrap password to
log into {kib} or run the API. This requirement means that you cannot use the
default bootstrap password that is derived from the `keystore.seed` setting.
Instead, you must explicitly set a `bootstrap.password` setting in the keystore
before you start {es}. For example, the following command prompts you to enter a
new bootstrap password:
[source,shell]
----------------------------------------------------
bin/elasticsearch-keystore add "bootstrap.password"
----------------------------------------------------
You can then start {es} and {kib} and use the `elastic` user and bootstrap
password to log into {kib} and change the passwords. Alternatively, you can
submit Change Password API requests for each built-in user. These methods are
better suited for changing your passwords after the initial setup is complete,
since at that point the bootstrap password is no longer required.
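
For example, the following request changes the `kibana` user's password; the
password value here is a placeholder:

[source,console]
--------------------------------------------------
POST /_security/user/kibana/_password
{
  "password" : "new-kibana-password"
}
--------------------------------------------------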
[[add-built-in-user-passwords]]
[float]
[[add-built-in-user-kibana]]
==== Adding built-in user passwords to {kib}
After the `kibana` user password is set, you need to update the {kib} server
with the new password by setting `elasticsearch.password` in the `kibana.yml`
configuration file:
[source,yaml]
-----------------------------------------------
elasticsearch.password: kibanapassword
-----------------------------------------------
See {kibana-ref}/using-kibana-with-security.html[Configuring security in {kib}].
[float]
[[add-built-in-user-logstash]]
==== Adding built-in user passwords to {ls}
The `logstash_system` user is used internally within Logstash when
monitoring is enabled for Logstash.
To enable this feature in Logstash, you need to update the Logstash
configuration with the new password by setting `xpack.monitoring.elasticsearch.password` in
the `logstash.yml` configuration file:
[source,yaml]
----------------------------------------------------------
xpack.monitoring.elasticsearch.password: logstashpassword
----------------------------------------------------------
If you have upgraded from an older version of Elasticsearch,
the `logstash_system` user may have defaulted to _disabled_ for security reasons.
Once the password has been changed, you can enable the user via the following API call:
[source,console]
---------------------------------------------------------------------
PUT _security/user/logstash_system/_enable
---------------------------------------------------------------------
See {logstash-ref}/ls-security.html#ls-monitoring-user[Configuring credentials for {ls} monitoring].
[float]
[[add-built-in-user-beats]]
==== Adding built-in user passwords to Beats
The `beats_system` user is used internally within Beats when monitoring is
enabled for Beats.
To enable this feature in Beats, you need to update the configuration for each
of your beats to reference the correct username and password. For example:
[source,yaml]
----------------------------------------------------------
xpack.monitoring.elasticsearch.username: beats_system
xpack.monitoring.elasticsearch.password: beatspassword
----------------------------------------------------------
For example, see {metricbeat-ref}/monitoring.html[Monitoring {metricbeat}].
The `remote_monitoring_user` is used when {metricbeat} collects and stores
monitoring data for the {stack}. See <<monitoring-production>>.
If you have upgraded from an older version of {es}, then you may not have set a
password for the `beats_system` or `remote_monitoring_user` users. If this is
the case, then you should use the *Management > Users* page in {kib} or the
{ref}/security-api-change-password.html[Change Password API] to set a password
for these users.
[float]
[[add-built-in-user-apm]]
==== Adding built-in user passwords to APM
The `apm_system` user is used internally within APM when monitoring is enabled.
To enable this feature in APM, you need to update the
{apm-server-ref-70}/configuring-howto-apm-server.html[APM configuration file] to
reference the correct username and password. For example:
[source,yaml]
----------------------------------------------------------
xpack.monitoring.elasticsearch.username: apm_system
xpack.monitoring.elasticsearch.password: apmserverpassword
----------------------------------------------------------
See {apm-server-ref-70}/monitoring.html[Monitoring APM Server].
If you have upgraded from an older version of {es}, then you may not have set a
password for the `apm_system` user. If this is the case,
then you should use the *Management > Users* page in {kib} or the
{ref}/security-api-change-password.html[Change Password API] to set a password
for this user.
[float]
[[disabling-default-password]]
==== Disabling default password functionality
[IMPORTANT]
=============================================================================
This setting is deprecated. The `elastic` user no longer has a default password.
The password must be set before the user can be used.
See <<bootstrap-elastic-passwords>>.
=============================================================================

@@ -0,0 +1,27 @@
[role="xpack"]
[[file-realm]]
=== File-based user authentication
You can manage and authenticate users with the built-in `file` realm.
With the `file` realm, users are defined in local files on each node in the cluster.
IMPORTANT: As the administrator of the cluster, it is your responsibility to
ensure the same users are defined on every node in the cluster. The {stack}
{security-features} do not deliver any mechanism to guarantee this.
The `file` realm is primarily supported to serve as a fallback/recovery realm. It
is mostly useful in situations where all users have locked themselves out of the
system (no one remembers their username and password). In this type of scenario,
the `file` realm is your only way out: you can define a new `admin` user in the
`file` realm and use it to log in and reset the credentials of all other users.
IMPORTANT: When you configure realms in `elasticsearch.yml`, only the realms you
specify are used for authentication. To use the `file` realm as a fallback, you
must include it in the realm chain.
To define users, the {security-features} provide the
{ref}/users-command.html[users] command-line tool. This tool enables you to add
and remove users, assign user roles, and manage user passwords.
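
For example, the following sketch adds a recovery user with the `superuser`
role; the username and password are placeholders:

[source,shell]
--------------------------------------------------
bin/elasticsearch-users useradd rescue_admin -p l0ng-r4nd0m-p@ssw0rd -r superuser
--------------------------------------------------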
For more information, see
{ref}/configuring-file-realm.html[Configuring a file realm].

@@ -0,0 +1,23 @@
include::overview.asciidoc[]
include::built-in-users.asciidoc[]
include::internal-users.asciidoc[]
include::realms.asciidoc[]
include::realm-chains.asciidoc[]
include::active-directory-realm.asciidoc[]
include::file-realm.asciidoc[]
include::ldap-realm.asciidoc[]
include::native-realm.asciidoc[]
include::pki-realm.asciidoc[]
include::saml-realm.asciidoc[]
include::kerberos-realm.asciidoc[]
include::{xes-repo-dir}/security/authentication/custom-realm.asciidoc[]
include::{xes-repo-dir}/security/authentication/anonymous-access.asciidoc[]
include::{xes-repo-dir}/security/authentication/user-cache.asciidoc[]
include::{xes-repo-dir}/security/authentication/saml-guide.asciidoc[]
include::{xes-repo-dir}/security/authentication/oidc-guide.asciidoc[]

@@ -0,0 +1,14 @@
[role="xpack"]
[[internal-users]]
=== Internal users
The {stack-security-features} use three _internal_ users (`_system`, `_xpack`,
and `_xpack_security`), which are responsible for the operations that take place
inside an {es} cluster.
These users are only used by requests that originate from within the cluster.
For this reason, they cannot be used to authenticate against the API and there
is no password to manage or reset.
From time to time, you may find a reference to one of these users inside your
logs, including <<auditing, audit logs>>.

@@ -0,0 +1,62 @@
[role="xpack"]
[[kerberos-realm]]
=== Kerberos authentication
You can configure the {stack} {security-features} to support Kerberos V5
authentication, an industry-standard protocol to authenticate users in {es}.
NOTE: You cannot use the Kerberos realm to authenticate on the transport network layer.
To authenticate users with Kerberos, you need to
{ref}/configuring-kerberos-realm.html[configure a Kerberos realm] and
<<mapping-roles, map users to roles>>.
For more information on realm settings, see
{ref}/security-settings.html#ref-kerberos-settings[Kerberos realm settings].
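
As a sketch, and assuming a keytab file named `es.keytab` in the {es}
configuration directory, a minimal `kerberos` realm in `elasticsearch.yml`
might look like this:

[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.kerberos.kerb1:
  order: 0
  keytab.path: es.keytab   # assumption: keytab stored in the config directory
--------------------------------------------------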
[[kerberos-terms]]
==== Key concepts
There are a few terms and concepts that you'll encounter when you're setting up
Kerberos realms:
_kdc_::
Key Distribution Center. A service that issues Kerberos tickets.
_principal_::
A Kerberos principal is a unique identity to which Kerberos can assign
tickets. It can be used to identify a user or a service provided by a
server.
+
--
Kerberos V5 principal names are of the format `primary/instance@REALM`, where
`primary` is a user name.
`instance` is an optional string that qualifies the primary and is separated
from it by a slash (`/`). For a user, it is usually not used; for service
hosts, it is the fully qualified domain name of the host.
`REALM` is the Kerberos realm. Usually it is the domain name in upper case.
An example of a typical user principal is `user@ES.DOMAIN.LOCAL`. An example of
a typical service principal is `HTTP/es.domain.local@ES.DOMAIN.LOCAL`.
--
_realm_::
Realms define the administrative boundary within which the authentication server
has authority to authenticate users and services.
_keytab_::
A file that stores pairs of principals and encryption keys.
IMPORTANT: Anyone with read permission on this file can use the credentials
it contains to access other services on the network, so it is important to
protect it with proper file permissions.
_krb5.conf_::
A file that contains Kerberos configuration information, such as the default
realm name, the location of key distribution centers (KDCs), realm information,
mappings from domain names to Kerberos realms, and default configurations for
realm session key encryption types.
_ticket granting ticket (TGT)_::
A TGT is an authentication ticket generated by the Kerberos authentication
server. It contains an encrypted authenticator.

@@ -0,0 +1,88 @@
[role="xpack"]
[[ldap-realm]]
=== LDAP user authentication
You can configure the {stack} {security-features} to communicate with a
Lightweight Directory Access Protocol (LDAP) server to authenticate users. To
integrate with LDAP, you configure an `ldap` realm and map LDAP groups to user
roles in the <<mapping-roles, role mapping file>>.
LDAP stores users and groups hierarchically, similar to the way folders are
grouped in a file system. An LDAP directory's hierarchy is built from containers
such as the _organizational unit_ (`ou`), _organization_ (`o`), and
_domain component_ (`dc`).
The path to an entry is a _Distinguished Name_ (DN) that uniquely identifies a
user or group. User and group names typically have attributes such as a
_common name_ (`cn`) or _unique ID_ (`uid`). A DN is specified as a string,
for example `"cn=admin,dc=example,dc=com"` (white spaces are ignored).
The `ldap` realm supports two modes of operation, a user search mode
and a mode with specific templates for user DNs.
[[ldap-user-search]]
==== User search mode and user DN templates mode
See {ref}/configuring-ldap-realm.html[Configuring an LDAP Realm].
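
For illustration, an `ldap` realm in user DN templates mode might look like
the following sketch in `elasticsearch.yml`; the URL and DN template are
placeholders:

[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.ldap.ldap1:
  order: 0
  url: "ldaps://ldap.example.com:636"       # placeholder LDAP server
  user_dn_templates:
    - "cn={0},ou=users,dc=example,dc=com"   # placeholder DN template
--------------------------------------------------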
[[ldap-load-balancing]]
==== Load balancing and failover
The `load_balance.type` setting can be used at the realm level to configure how
the {security-features} should interact with multiple LDAP servers. The
{security-features} support both failover and load balancing modes of operation.
See
{ref}/security-settings.html#load-balancing[Load balancing and failover settings].
[[ldap-settings]]
==== LDAP realm settings
See {ref}/security-settings.html#ref-ldap-settings[LDAP realm settings].
[[mapping-roles-ldap]]
==== Mapping LDAP groups to roles
An integral part of a realm authentication process is to resolve the roles
associated with the authenticated user. Roles define the privileges a user has
in the cluster.
Because users in the `ldap` realm are managed externally in the LDAP server,
the expectation is that their roles are managed there as well. In fact, LDAP
supports the notion of groups, which often represent user roles for different
systems in the organization.
The `ldap` realm enables you to map LDAP users to roles via their LDAP
groups, or other metadata. This role mapping can be configured via the
{ref}/security-api-put-role-mapping.html[add role mapping API] or by using a
file stored on each node. When a user authenticates with LDAP, the privileges
for that user are the union of all privileges defined by the roles to which
the user is mapped. For more information, see
{ref}/configuring-ldap-realm.html[Configuring an LDAP realm].
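
As a hypothetical example of the file-based approach, a `role_mapping.yml`
entry that maps an LDAP group to the `monitoring` role might look like this
(the group DN is a placeholder):

[source,yaml]
--------------------------------------------------
monitoring:
  - "cn=admins,dc=example,dc=com"
--------------------------------------------------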
[[ldap-user-metadata]]
==== User metadata in LDAP realms
When a user is authenticated via an LDAP realm, the following properties are
populated in the user's _metadata_:
|=======================
| Field | Description
| `ldap_dn` | The distinguished name of the user.
| `ldap_groups` | The distinguished name of each of the groups that were
resolved for the user (regardless of whether those
groups were mapped to a role).
|=======================
This metadata is returned in the
{ref}/security-api-authenticate.html[authenticate API], and can be used with
<<templating-role-query, templated queries>> in roles.
Additional fields can be included in the user's metadata by configuring
the `metadata` setting on the LDAP realm. This metadata is available for use
with the <<mapping-roles-api, role mapping API>> or in
<<templating-role-query, templated role queries>>.
[[ldap-ssl]]
==== Setting up SSL between Elasticsearch and LDAP
See
{ref}/configuring-tls.html#tls-ldap[Encrypting communications between {es} and LDAP].

@@ -0,0 +1,32 @@
[role="xpack"]
[[native-realm]]
=== Native user authentication
The easiest way to manage and authenticate users is with the internal `native`
realm. You can use the REST APIs or Kibana to add and remove users, assign user
roles, and manage user passwords.
[[native-realm-configuration]]
[float]
==== Configuring a native realm
See {ref}/configuring-native-realm.html[Configuring a native realm].
[[native-settings]]
==== Native realm settings
See {ref}/security-settings.html#ref-native-settings[Native realm settings].
[[managing-native-users]]
==== Managing native users
The {stack} {security-features} enable you to easily manage users in {kib} on the
*Management / Security / Users* page.
Alternatively, you can manage users through the `user` API. For more
information and examples, see
{ref}/security-api.html#security-user-apis[user management APIs].
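
For example, the following sketch creates a native user; the password, role,
and name values are placeholders:

[source,console]
--------------------------------------------------
POST /_security/user/jacknich
{
  "password" : "placeholder-password",
  "roles" : [ "monitoring_user" ],
  "full_name" : "Jack Nicholson"
}
--------------------------------------------------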
[[migrating-from-file]]
NOTE: To migrate file-based users to the `native` realm, use the
{ref}/migrate-tool.html[migrate tool].

@@ -0,0 +1,26 @@
[role="xpack"]
[[setting-up-authentication]]
== User authentication
Authentication identifies an individual. To gain access to restricted resources,
a user must prove their identity, via passwords, credentials, or some other
means (typically referred to as authentication tokens).
The {stack} authenticates users by identifying the users behind the requests
that hit the cluster and verifying that they are who they claim to be. The
authentication process is handled by one or more authentication services called
<<realms,_realms_>>.
You can use the native support for managing and authenticating users, or
integrate with external user management systems such as LDAP and Active
Directory.
The {stack-security-features} provide built-in realms such as `native`, `ldap`,
`active_directory`, `pki`, `file`, and `saml`. If none of the built-in realms
meet your needs, you can also build your own custom realm and plug it into the
{stack}.
When {security-features} are enabled, depending on the realms you've configured,
you must attach your user credentials to the requests sent to {es}. For example,
when using realms that support usernames and passwords you can simply attach a
{wikipedia}/Basic_access_authentication[basic authentication] header to the requests.
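
For example, assuming the `elastic` user and a local cluster at
`https://localhost:9200` (both placeholders), a request with basic
authentication might look like this sketch:

[source,shell]
--------------------------------------------------
curl -u elastic:placeholder-password "https://localhost:9200/_security/_authenticate"
--------------------------------------------------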

@@ -0,0 +1,27 @@
[role="xpack"]
[[pki-realm]]
=== PKI user authentication
You can configure {stack} {security-features} to use Public Key Infrastructure
(PKI) certificates to authenticate users in {es}. This requires clients to
present X.509 certificates.
You can use PKI certificates to authenticate users in {es} as well as {kib}.
To use PKI in {es}, you configure a PKI realm, enable client authentication on
the desired network layers (transport or http), and map the Distinguished Names
(DNs) from the user certificates to roles. You create the mappings in a <<pki-role-mapping, role
mapping file>> or use the {ref}/security-api-put-role-mapping.html[create role mappings API]. If you want the same users to also be
authenticated using certificates when they connect to {kib}, you must configure the {es} PKI
realm to
{ref}/configuring-pki-realm.html#pki-realm-for-proxied-clients[allow
delegation] and to
{kibana-ref}/kibana-authentication.html#pki-authentication[enable PKI
authentication in {kib}].
See also {ref}/configuring-pki-realm.html[Configuring a PKI realm].
[[pki-settings]]
==== PKI realm settings
See {ref}/security-settings.html#ref-pki-settings[PKI realm settings].
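
For illustration, a minimal `pki` realm in `elasticsearch.yml` might look like
the following sketch; the CA path is a placeholder:

[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.pki.pki1:
  order: 1
  certificate_authorities: [ "/path/to/ca.crt" ]   # placeholder trust chain
--------------------------------------------------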

@@ -0,0 +1,104 @@
[role="xpack"]
[[realm-chains]]
=== Realm chains
<<realms,Realms>> live within a _realm chain_. It is essentially a prioritized
list of configured realms (typically of various types). Realms are consulted in
ascending order (that is to say, the realm with the lowest `order` value is
consulted first). You should make sure each configured realm has a distinct
`order` setting. In the event that two or more realms have the same `order`,
they are processed in `name` order.
During the authentication process, {stack} {security-features} consult and try
to authenticate the request one realm at a time. Once one of the realms
successfully authenticates the request, the authentication is considered to be
successful. The authenticated user is associated with the request, which then
proceeds to the authorization phase. If a realm cannot authenticate the request,
the next realm in the chain is consulted. If all realms in the chain cannot
authenticate the request, the authentication is considered to be unsuccessful
and an authentication error is returned (as HTTP status code `401`).
NOTE: Some systems (e.g. Active Directory) have a temporary lock-out period
after several successive failed login attempts. If the same username exists in
multiple realms, unintentional account lockouts are possible. For more
information, see <<trouble-shoot-active-directory>>.
The default realm chain contains the `native` and `file` realms. To explicitly
configure a realm chain, you specify the chain in the `elasticsearch.yml` file.
When you configure a realm chain, only the realms you specify are used for
authentication. To use the `native` and `file` realms, you must include them in
the chain.
The following snippet configures a realm chain that includes the `file` and
`native` realms, as well as two LDAP realms and an Active Directory realm.
[source,yaml]
----------------------------------------
xpack.security.authc:
  realms:
    file:
      type: file
      order: 0
    native:
      type: native
      order: 1
    ldap1:
      type: ldap
      order: 2
      enabled: false
      url: 'url_to_ldap1'
      ...
    ldap2:
      type: ldap
      order: 3
      url: 'url_to_ldap2'
      ...
    ad1:
      type: active_directory
      order: 4
      url: 'url_to_ad'
----------------------------------------
As can be seen above, each realm has a unique name that identifies it. Each type
of realm dictates its own set of required and optional settings. That said,
there are
{ref}/security-settings.html#ref-realm-settings[settings that are common to all realms].
[[authorization_realms]]
==== Delegating authorization to another realm
Some realms have the ability to perform _authentication_ internally, but
delegate the lookup and assignment of roles (that is, _authorization_) to
another realm.
For example, you may wish to use a PKI realm to authenticate your users with
TLS client certificates, then lookup that user in an LDAP realm and use their
LDAP group assignments to determine their roles in Elasticsearch.
Any realm that supports retrieving users (without needing their credentials) can
be used as an _authorization realm_ (that is, its name may appear as one of the
values in the list of `authorization_realms`). See <<run-as-privilege>> for
further explanation on which realms support this.
For realms that support this feature, it can be enabled by configuring the
`authorization_realms` setting on the authenticating realm. Check the list of
{ref}/security-settings.html#realm-settings[supported settings] for each realm
to see if they support the `authorization_realms` setting.
If delegated authorization is enabled for a realm, it authenticates the user in
its standard manner (including relevant caching) then looks for that user in the
configured list of authorization realms. It tries each realm in the order they
are specified in the `authorization_realms` setting. The user is retrieved by
principal; the user must have identical usernames in the _authentication_ and
_authorization_ realms. If the user cannot be found in any of the authorization
realms, authentication fails.
See <<configuring-authorization-delegation>> for more details.
NOTE: Delegated authorization requires a
https://www.elastic.co/subscriptions[Platinum or Trial license].

@@ -0,0 +1,67 @@
[role="xpack"]
[[realms]]
=== Realms
Authentication in the {stack} {security-features} is handled by one or more
authentication services called _realms_. A _realm_ is used to resolve and
authenticate users based on authentication tokens. The {security-features}
provide the following built-in realms:
_native_::
An internal realm where users are stored in a dedicated {es} index.
This realm supports an authentication token in the form of username and password,
and is available by default when no realms are explicitly configured. The users
are managed via the {ref}/security-api.html#security-user-apis[user management APIs].
See <<native-realm>>.
_ldap_::
A realm that uses an external LDAP server to authenticate the
users. This realm supports an authentication token in the form of username and
password, and requires explicit configuration in order to be used. See
<<ldap-realm>>.
_active_directory_::
A realm that uses an external Active Directory Server to authenticate the
users. With this realm, users are authenticated by usernames and passwords.
See <<active-directory-realm>>.
_pki_::
A realm that authenticates users using Public Key Infrastructure (PKI). This
realm works in conjunction with SSL/TLS and identifies the users through the
Distinguished Name (DN) of the client's X.509 certificates. See <<pki-realm>>.
_file_::
An internal realm where users are defined in files stored on each node in the
{es} cluster. This realm supports an authentication token in the form
of username and password and is always available. See <<file-realm>>.
_saml_::
A realm that facilitates authentication using the SAML 2.0 Web SSO protocol.
This realm is designed to support authentication through {kib} and is not
intended for use in the REST API. See <<saml-realm>>.
_kerberos_::
A realm that authenticates a user using Kerberos authentication. Users are
authenticated on the basis of Kerberos tickets. See <<kerberos-realm>>.
The {stack} {security-features} also support custom realms. If you need to
integrate with another authentication system, you can build a custom realm
plugin. For more information, see
<<custom-realms>>.
==== Internal and external realms
Realm types can roughly be classified in two categories:
Internal:: Realms that are internal to Elasticsearch and don't require any
communication with external parties. They are fully managed by the {stack}
{security-features}. There can be at most one configured realm per internal
realm type. The {security-features} provide two internal realm types:
`native` and `file`.
External:: Realms that require interaction with parties/components external to
{es}, typically with enterprise-grade identity management systems. Unlike
internal realms, you can configure as many external realms as you like, each
with its own unique name and configuration. The {stack} {security-features}
provide the following external realm types: `ldap`, `active_directory`, `saml`,
`kerberos`, and `pki`.

@@ -0,0 +1,41 @@
[role="xpack"]
[[saml-realm]]
=== SAML authentication
The {stack} {security-features} support user authentication using SAML
single sign-on (SSO). The {security-features} provide this support using the Web
Browser SSO profile of the SAML 2.0 protocol.
This protocol is specifically designed to support authentication via an
interactive web browser, so it does not operate as a standard authentication
realm. Instead, there are {kib} and {es} {security-features} that work
together to enable interactive SAML sessions.
This means that the SAML realm is not suitable for use by standard REST clients.
If you configure a SAML realm for use in {kib}, you should also configure
another realm, such as the <<native-realm, native realm>> in your authentication
chain.
In order to simplify the process of configuring SAML authentication within the
Elastic Stack, there is a step-by-step guide to
<<saml-guide, Configuring Elasticsearch and Kibana to use SAML single sign-on>>.
The remainder of this document describes {es}-specific configuration options
for SAML realms.
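
For orientation, a hypothetical `saml` realm in `elasticsearch.yml` might look
like the following sketch; all entity IDs, URLs, and paths are placeholders:

[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.saml.saml1:
  order: 2
  idp.metadata.path: saml/idp-metadata.xml                   # placeholder path
  idp.entity_id: "https://sso.example.com/"                  # placeholder IdP
  sp.entity_id: "https://kibana.example.com/"                # placeholder SP
  sp.acs: "https://kibana.example.com/api/security/v1/saml"  # placeholder ACS
  attributes.principal: "nameid"
--------------------------------------------------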
[[saml-settings]]
==== SAML realm settings
See {ref}/security-settings.html#ref-saml-settings[SAML realm settings].
==== SAML realm signing settings
See {ref}/security-settings.html#ref-saml-signing-settings[SAML realm signing settings].
==== SAML realm encryption settings
See {ref}/security-settings.html#ref-saml-encryption-settings[SAML realm encryption settings].
==== SAML realm SSL settings
See {ref}/security-settings.html#ref-saml-ssl-settings[SAML realm SSL settings].

@@ -0,0 +1,162 @@
[role="xpack"]
[[built-in-roles]]
=== Built-in roles
The {stack-security-features} apply a default role to all users, including
<<anonymous-access, anonymous users>>. The default role enables users to access
the authenticate endpoint, change their own passwords, and get information about
themselves.
There is also a set of built-in roles you can explicitly assign to users. These
roles have a fixed set of privileges and cannot be updated.
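
Built-in roles are assigned like any other role; for example, the following
sketch creates a user with two of the roles described below (the username and
password are placeholders):

[source,console]
--------------------------------------------------
POST /_security/user/monitoring_bob
{
  "password" : "placeholder-password",
  "roles" : [ "monitoring_user", "kibana_user" ]
}
--------------------------------------------------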
[[built-in-roles-apm-system]] `apm_system` ::
Grants access necessary for the APM system user to send system-level data
(such as monitoring) to {es}.
[[built-in-roles-apm-user]] `apm_user` ::
Grants the privileges required for APM users (such as `read` and
`view_index_metadata` privileges on the `apm-*` and `.ml-anomalies*` indices).
[[built-in-roles-beats-admin]] `beats_admin` ::
Grants access to the `.management-beats` index, which contains configuration
information for the Beats.
[[built-in-roles-beats-system]] `beats_system` ::
Grants access necessary for the Beats system user to send system-level data
(such as monitoring) to {es}.
+
--
[NOTE]
===============================
* This role should not be assigned to users as the granted permissions may
change between releases.
* This role does not provide access to the beats indices and is not
suitable for writing beats output to {es}.
===============================
--
[[built-in-roles-data-frame-transforms-admin]] `data_frame_transforms_admin` ::
Grants `manage_data_frame_transforms` cluster privileges, which enable you to
manage {transforms}. This role also includes all
{kibana-ref}/kibana-privileges.html[Kibana privileges] for the {ml-features}.
[[built-in-roles-data-frame-transforms-user]] `data_frame_transforms_user` ::
Grants `monitor_data_frame_transforms` cluster privileges, which enable you to
use {transforms}. This role also includes all
{kibana-ref}/kibana-privileges.html[Kibana privileges] for the {ml-features}.
[[built-in-roles-ingest-user]] `ingest_admin` ::
Grants access to manage *all* index templates and *all* ingest pipeline configurations.
+
NOTE: This role does *not* provide the ability to create indices; those privileges
must be defined in a separate role.
[[built-in-roles-kibana-dashboard]] `kibana_dashboard_only_user` ::
Grants access to the {kib} Dashboard and read-only permissions to Kibana.
This role does not have access to editing tools in {kib}. For more
information, see
{kibana-ref}/xpack-dashboard-only-mode.html[{kib} Dashboard Only Mode].
[[built-in-roles-kibana-system]] `kibana_system` ::
Grants access necessary for the {kib} system user to read from and write to the
{kib} indices, manage index templates and tokens, and check the availability of
the {es} cluster. This role grants read access to the `.monitoring-*` indices
and read and write access to the `.reporting-*` indices. For more information,
see {kibana-ref}/using-kibana-with-security.html[Configuring Security in {kib}].
+
NOTE: This role should not be assigned to users as the granted permissions may
change between releases.
[[built-in-roles-kibana-user]] `kibana_user`::
Grants access to all features in {kib}. For more information on Kibana authorization,
see {kibana-ref}/xpack-security-authorization.html[Kibana Authorization].
[[built-in-roles-logstash-admin]] `logstash_admin` ::
Grants access to the `.logstash*` indices for managing configurations.
[[built-in-roles-logstash-system]] `logstash_system` ::
Grants access necessary for the Logstash system user to send system-level data
(such as monitoring) to {es}. For more information, see
{logstash-ref}/ls-security.html[Configuring Security in Logstash].
+
--
[NOTE]
===============================
* This role should not be assigned to users as the granted permissions may
change between releases.
* This role does not provide access to the logstash indices and is not
suitable for use within a Logstash pipeline.
===============================
--
[[built-in-roles-ml-admin]] `machine_learning_admin`::
Grants `manage_ml` cluster privileges, read access to `.ml-anomalies*`,
`.ml-notifications*`, `.ml-state*`, `.ml-meta*` indices and write access to
`.ml-annotations*` indices. This role also includes all
{kibana-ref}/kibana-privileges.html[Kibana privileges] for the {ml-features}.
[[built-in-roles-ml-user]] `machine_learning_user`::
Grants the minimum privileges required to view {ml} configuration,
status, and work with results. This role grants `monitor_ml` cluster privileges,
read access to the `.ml-notifications` and `.ml-anomalies*` indices
(which store {ml} results), and write access to `.ml-annotations*` indices.
This role also includes all {kibana-ref}/kibana-privileges.html[Kibana privileges] for the {ml-features}.
[[built-in-roles-monitoring-user]] `monitoring_user`::
Grants the minimum privileges required for any user of {monitoring} other than those
required to use {kib}. This role grants access to the monitoring indices and grants
privileges necessary for reading basic cluster information. This role also includes
all {kibana-ref}/kibana-privileges.html[Kibana privileges] for the {stack-monitor-features}.
Monitoring users should also be assigned the `kibana_user` role.
[[built-in-roles-remote-monitoring-agent]] `remote_monitoring_agent`::
Grants the minimum privileges required to write data into the monitoring indices
(`.monitoring-*`). This role also has the privileges necessary to create
{metricbeat} indices (`metricbeat-*`) and write data into them.
[[built-in-roles-remote-monitoring-collector]] `remote_monitoring_collector`::
Grants the minimum privileges required to collect monitoring data for the {stack}.
[[built-in-roles-reporting-user]] `reporting_user`::
Grants the specific privileges required for users of {reporting} other than those
required to use {kib}. This role grants access to the reporting indices; each
user has access to only their own reports. Reporting users should also be
assigned the `kibana_user` role and a role that grants them access to the data
that will be used to generate reports.
[[built-in-roles-snapshot-user]] `snapshot_user`::
Grants the necessary privileges to create snapshots of **all** the indices and
to view their metadata. This role enables users to view the configuration of
existing snapshot repositories and snapshot details. It does not grant authority
to remove or add repositories or to restore snapshots. Nor does it grant the
ability to change index settings or to read or update index data.
[[built-in-roles-superuser]] `superuser`::
Grants full access to the cluster, including all indices and data. A user with
the `superuser` role can also manage users and roles and
<<run-as-privilege, impersonate>> any other user in the system. Due to the
permissive nature of this role, take extra care when assigning it to a user.
[[built-in-roles-transport-client]] `transport_client`::
Grants the privileges required to access the cluster through the Java Transport
Client. The Java Transport Client fetches information about the nodes in the
cluster using the _Node Liveness API_ and the _Cluster State API_ (when
sniffing is enabled). Assign your users this role if they use the
Transport Client.
+
NOTE: Using the Transport Client effectively means the users are granted access
to the cluster state. This means users can view the metadata of all indices,
index templates, mappings, nodes, and basically everything else about the
cluster. However, this role does not grant permission to view the data in all
indices.
[[built-in-roles-watcher-admin]] `watcher_admin`::
+
Grants read access to the `.watches` index, read access to the watch history and
the triggered watches index, and allows executing all watcher actions.
[[built-in-roles-watcher-user]] `watcher_user`::
+
Grants read access to the `.watches` index, the get watch action, and the
watcher stats.

@ -0,0 +1,95 @@
[role="xpack"]
[[configuring-authorization-delegation]]
=== Configuring authorization delegation
In some cases, after the user has been authenticated by a realm, we may
want to delegate user lookup and assignment of roles to another realm.
Any realm that supports retrieving users (without needing their credentials)
can be used as an authorization realm.
For example, a user that is authenticated by the Kerberos realm can be looked up
in the LDAP realm. The LDAP realm takes on the responsibility of searching for
the user in LDAP and determining their roles. In this case, the LDAP realm acts
as an _authorization realm_.
==== LDAP realm as an authorization realm
Following is an example configuration for the LDAP realm that can be used as
an _authorization realm_. This LDAP realm is configured in user search mode
with a specified filter.
For more information on configuring LDAP realms see <<ldap-realm>>.
[source, yaml]
------------------------------------------------------------
xpack:
  security:
    authc:
      realms:
        ldap:
          ldap1:
            order: 0
            authentication.enabled: true <1>
            user_search:
              base_dn: "dc=example,dc=org"
              filter: "(cn={0})"
            group_search:
              base_dn: "dc=example,dc=org"
            files:
              role_mapping: "ES_PATH_CONF/role_mapping.yml"
            unmapped_groups_as_roles: false
------------------------------------------------------------
<1> Here, we explicitly allow the LDAP realm to be used for authentication
(that is, users can authenticate using their LDAP username and password).
If we wanted this LDAP realm to be used for authorization only, then we
would set this to `false`.
==== Kerberos realm configured to delegate authorization
Following is an example configuration where the Kerberos realm authenticates a
user and then delegates authorization to the LDAP realm. The Kerberos realm
extracts the user principal name (usually of the format `user@REALM`). In this
example, we enable the `remove_realm_name` setting to remove the `@REALM` part
from the user principal name to get the username, which is then used to look up
the user in the configured authorization realms (in this case, the LDAP realm).
For more information on Kerberos realm see <<kerberos-realm>>.
[source, yaml]
------------------------------------------------------------
xpack:
  security:
    authc:
      realms:
        kerberos:
          kerb1:
            order: 1
            keytab.path: "ES_PATH_CONF/es.keytab"
            remove_realm_name: true
            authorization_realms: ldap1
------------------------------------------------------------
==== PKI realm configured to delegate authorization
We can similarly configure the PKI realm to delegate authorization to the LDAP
realm. The user is authenticated by the PKI realm and the authorization is
delegated to the LDAP realm. In this example, the username is the common name
(CN) extracted from the DN of the client certificate. The LDAP realm uses this
username to look up the user and assign roles.
For more information on PKI realms see <<pki-realm>>.
[source, yaml]
------------------------------------------------------------
xpack:
  security:
    authc:
      realms:
        pki:
          pki1:
            order: 2
            authorization_realms: ldap1
------------------------------------------------------------
Similar to the above examples, we can configure realms to delegate authorization
to authorization realms, which have the capability to look up users by username
and assign roles.

@ -0,0 +1,58 @@
[role="xpack"]
[[document-level-security]]
=== Document level security
Document level security restricts the documents that users have read access to.
In particular, it restricts which documents can be accessed from document-based
read APIs.
To enable document level security, you use a query to specify the documents that
each role can access. The document query is associated with a particular index
or index pattern and operates in conjunction with the privileges specified for
the indices.
The following role definition grants read access only to documents that
belong to the `click` category within all the `events-*` indices:
[source,console]
--------------------------------------------------
POST /_security/role/click_role
{
  "indices": [
    {
      "names": [ "events-*" ],
      "privileges": [ "read" ],
      "query": "{\"match\": {\"category\": \"click\"}}"
    }
  ]
}
--------------------------------------------------
NOTE: Omitting the `query` entry entirely disables document level security for
the respective indices permission entry.
The specified `query` expects the same format as if it was defined in the
search request and supports the full {es} {ref}/query-dsl.html[Query DSL].
For example, the following role grants read access only to the documents whose
`department_id` equals `12`:
[source,console]
--------------------------------------------------
POST /_security/role/dept_role
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "query" : {
        "term" : { "department_id" : 12 }
      }
    }
  ]
}
--------------------------------------------------
NOTE: `query` also accepts queries written as string values.
For more information, see <<field-and-document-access-control>>.

@@ -0,0 +1,223 @@
[role="xpack"]
[[field-level-security]]
=== Field level security
Field level security restricts the fields that users have read access to.
In particular, it restricts which fields can be accessed from document-based
read APIs.
To enable field level security, specify the fields that each role can access
as part of the indices permissions in a role definition. Field level security is
thus bound to a well-defined set of indices (and potentially a set of
<<document-level-security, documents>>).
The following role definition grants read access only to the `category`,
`@timestamp`, and `message` fields in all the `events-*` indices.
[source,console]
--------------------------------------------------
POST /_security/role/test_role1
{
  "indices": [
    {
      "names": [ "events-*" ],
      "privileges": [ "read" ],
      "field_security" : {
        "grant" : [ "category", "@timestamp", "message" ]
      }
    }
  ]
}
--------------------------------------------------
Access to the following meta fields is always allowed: `_id`,
`_type`, `_parent`, `_routing`, `_timestamp`, `_ttl`, `_size`, and `_index`. If
you specify an empty list of fields, only these meta fields are accessible.
NOTE: Omitting the fields entry entirely disables field level security.
You can also specify field expressions. For example, the following
example grants read access to all fields that start with an `event_` prefix:
[source,console]
--------------------------------------------------
POST /_security/role/test_role2
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant" : [ "event_*" ]
      }
    }
  ]
}
--------------------------------------------------
Use the dot notations to refer to nested fields in more complex documents. For
example, assuming the following document:
[source,js]
--------------------------------------------------
{
  "customer": {
    "handle": "Jim",
    "email": "jim@mycompany.com",
    "phone": "555-555-5555"
  }
}
--------------------------------------------------
// NOTCONSOLE
The following role definition enables only read access to the customer `handle`
field:
[source,console]
--------------------------------------------------
POST /_security/role/test_role3
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant" : [ "customer.handle" ]
      }
    }
  ]
}
--------------------------------------------------
This is where wildcard support shines. For example, use `customer.*` to enable
only read access to the `customer` data:
[source,console]
--------------------------------------------------
POST /_security/role/test_role4
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant" : [ "customer.*" ]
      }
    }
  ]
}
--------------------------------------------------
You can deny permission to access fields with the following syntax:
[source,console]
--------------------------------------------------
POST /_security/role/test_role5
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant" : [ "*" ],
        "except": [ "customer.handle" ]
      }
    }
  ]
}
--------------------------------------------------
The following rules apply:
* The absence of `field_security` in a role is equivalent to `*` access.
* If permission has been granted explicitly to some fields, you can specify
denied fields. The denied fields must be a subset of the fields to which
permissions were granted.
* Defining denied and granted fields implies access to all granted fields except
those which match the pattern in the denied fields.
For example:
[source,console]
--------------------------------------------------
POST /_security/role/test_role6
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "except": [ "customer.handle" ],
        "grant" : [ "customer.*" ]
      }
    }
  ]
}
--------------------------------------------------
In the above example, users can read all fields with the prefix `customer.`
except for `customer.handle`.
An empty array for `grant` (for example, `"grant" : []`) means that access has
not been granted to any fields.
When a user has several roles that specify field level permissions, the
resulting field level permissions per index are the union of the individual role
permissions. For example, if these two roles are merged:
[source,console]
--------------------------------------------------
POST /_security/role/test_role7
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant": [ "a.*" ],
        "except" : [ "a.b*" ]
      }
    }
  ]
}

POST /_security/role/test_role8
{
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant": [ "a.b*" ],
        "except" : [ "a.b.c*" ]
      }
    }
  ]
}
--------------------------------------------------
The resulting permission is equal to:
[source,js]
--------------------------------------------------
{
  // role 1 + role 2
  ...
  "indices" : [
    {
      "names" : [ "*" ],
      "privileges" : [ "read" ],
      "field_security" : {
        "grant": [ "a.*" ],
        "except" : [ "a.b.c*" ]
      }
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE
NOTE: Field-level security should not be set on {ref}/alias.html[`alias`] fields. To secure a
concrete field, its field name must be used directly.
For more information, see <<field-and-document-access-control>>.

Binary image file added (180 KiB, not shown).

@@ -0,0 +1,24 @@
include::overview.asciidoc[]
include::built-in-roles.asciidoc[]
include::{xes-repo-dir}/security/authorization/managing-roles.asciidoc[]
include::privileges.asciidoc[]
include::document-level-security.asciidoc[]
include::field-level-security.asciidoc[]
include::{xes-repo-dir}/security/authorization/alias-privileges.asciidoc[]
include::{xes-repo-dir}/security/authorization/mapping-roles.asciidoc[]
include::{xes-repo-dir}/security/authorization/field-and-document-access-control.asciidoc[]
include::{xes-repo-dir}/security/authorization/run-as-privilege.asciidoc[]
include::configuring-authorization-delegation.asciidoc[]
include::{xes-repo-dir}/security/authorization/custom-authorization.asciidoc[]

@@ -0,0 +1,75 @@
[role="xpack"]
[[authorization]]
== User authorization
The {stack-security-features} add _authorization_, which is the process of determining whether the user behind an incoming request is allowed to execute
the request.
This process takes place after the user is successfully identified and
<<setting-up-authentication,authenticated>>.
[[roles]]
[float]
=== Role-based access control
The {security-features} provide a role-based access control (RBAC) mechanism,
which enables you to authorize users by assigning privileges to roles and
assigning roles to users or groups.
image::security/authorization/images/authorization.png[This image illustrates role-based access control]
The authorization process revolves around the following constructs:
_Secured Resource_::
A resource to which access is restricted. Indices, aliases, documents, fields,
users, and the {es} cluster itself are all examples of secured resources.
_Privilege_::
A named group of one or more actions that a user may execute against a
secured resource. Each secured resource has its own sets of available privileges.
For example, `read` is an index privilege that represents all actions that enable
reading the indexed/stored data. For a complete list of available privileges
see <<security-privileges>>.
_Permissions_::
A set of one or more privileges against a secured resource. Permissions can
easily be described in words; here are a few examples:
* `read` privilege on the `products` index
* `manage` privilege on the cluster
* `run_as` privilege on `john` user
* `read` privilege on documents that match query X
* `read` privilege on `credit_card` field
_Role_::
A named set of permissions.
_User_::
The authenticated user.
_Group_::
One or more groups to which a user belongs. Groups are not supported in some
realms, such as native, file, or PKI realms.
A role has a unique name and identifies a set of permissions that translate to
privileges on resources. You can associate a user or group with an arbitrary
number of roles. When you map roles to groups, the roles of a user in that group
are the combination of the roles assigned to that group and the roles assigned
to that user. Likewise, the total set of permissions that a user has is defined
by the union of the permissions in all of their roles.
The method for assigning roles to users varies depending on which realms you use
to authenticate users. For more information, see <<mapping-roles>>.
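
To make these constructs concrete, the following sketch defines a role that
grants the `read` privilege on a hypothetical `products` index and the
`monitor` privilege on the cluster:

[source,console]
--------------------------------------------------
POST /_security/role/products_read
{
  "cluster" : [ "monitor" ],
  "indices" : [
    {
      "names" : [ "products" ],
      "privileges" : [ "read" ]
    }
  ]
}
--------------------------------------------------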
[[attributes]]
[float]
=== Attribute-based access control
The {security-features} also provide an attribute-based access control (ABAC)
mechanism, which enables you to use attributes to restrict access to documents
in search queries and aggregations. For example, you can assign attributes to
users and documents, then implement an access policy in a role definition. Users
with that role can read a specific document only if they have all the required
attributes.
For more information, see
https://www.elastic.co/blog/attribute-based-access-control-with-xpack[Document-level attribute-based access control with X-Pack 6.1].

@@ -0,0 +1,240 @@
[role="xpack"]
[[security-privileges]]
=== Security privileges
This section lists the privileges that you can assign to a role.
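
For context, these privileges are granted through role definitions; for
example, the following sketch (with a hypothetical role name) grants two of
the cluster privileges listed below:

[source,console]
--------------------------------------------------
POST /_security/role/snapshot_operator
{
  "cluster" : [ "create_snapshot", "monitor" ]
}
--------------------------------------------------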
[[privileges-list-cluster]]
==== Cluster privileges
[horizontal]
`all`::
All cluster administration operations, like snapshotting, node shutdown/restart,
settings update, rerouting, or managing users and roles.
`create_snapshot`::
Privileges to create snapshots for existing repositories. Can also list and view
details on existing repositories and snapshots.
`manage`::
Builds on `monitor` and adds cluster operations that change values in the cluster.
This includes snapshotting, updating settings, and rerouting. It also includes
obtaining snapshot and restore status. This privilege does not include the
ability to manage security.
`manage_api_key`::
All security-related operations on {es} API keys including
{ref}/security-api-create-api-key.html[creating new API keys],
{ref}/security-api-get-api-key.html[retrieving information about API keys], and
{ref}/security-api-invalidate-api-key.html[invalidating API keys].
+
--
[NOTE]
======
* When you create new API keys, they will always be owned by the authenticated
user.
* When you have this privilege, you can invalidate your own API keys and those
owned by other users.
======
--
`manage_ccr`::
All {ccr} operations related to managing follower indices and auto-follow
patterns. It also includes the authority to grant the privileges necessary to
manage follower indices and auto-follow patterns. This privilege is necessary
only on clusters that contain follower indices.
`manage_data_frame_transforms`::
All operations related to managing {transforms}.
`manage_ilm`::
All {Ilm} operations related to managing policies.
`manage_index_templates`::
All operations on index templates.
`manage_ingest_pipelines`::
All operations on ingest node pipelines.
`manage_ml`::
All {ml} operations, such as creating and deleting {dfeeds}, jobs, and model
snapshots.
+
--
NOTE: {dfeeds-cap} that were created prior to version 6.2 or created when
{security-features} were disabled run as a system user with elevated privileges,
including permission to read all indices. Newer {dfeeds} run with the security
roles of the user who created or updated them.
--
`manage_own_api_key`::
All security-related operations on {es} API keys that are owned by the current
authenticated user. The operations include
{ref}/security-api-create-api-key.html[creating new API keys],
{ref}/security-api-get-api-key.html[retrieving information about API keys], and
{ref}/security-api-invalidate-api-key.html[invalidating API keys].
`manage_pipeline`::
All operations on ingest pipelines.
`manage_rollup`::
All rollup operations, including creating, starting, stopping and deleting
rollup jobs.
`manage_saml`::
Enables the use of internal {es} APIs to initiate and manage SAML authentication
on behalf of other users.
`manage_security`::
All security-related operations such as CRUD operations on users and roles and
cache clearing.
`manage_token`::
All security-related operations on tokens that are generated by the {es} Token
Service.
`manage_watcher`::
All watcher operations, such as putting, executing, activating, or acknowledging watches.
+
--
NOTE: Watches that were created prior to version 6.1 or created when the
{security-features} were disabled run as a system user with elevated privileges,
including permission to read and write all indices. Newer watches run with the
security roles of the user who created or updated them.
--
`monitor`::
All cluster read-only operations, like cluster health and state, hot threads,
node info, node and cluster stats, and pending cluster tasks.
`monitor_data_frame_transforms`::
All read-only operations related to {transforms}.
`monitor_ml`::
All read-only {ml} operations, such as getting information about {dfeeds}, jobs,
model snapshots, or results.
`monitor_rollup`::
All read-only rollup operations, such as viewing the list of historical and
currently running rollup jobs and their capabilities.
`monitor_watcher`::
All read-only watcher operations, such as getting a watch and watcher stats.
`read_ccr`::
All read-only {ccr} operations, such as getting information about indices and
metadata for leader indices in the cluster. It also includes the authority to
check whether users have the appropriate privileges to follow leader indices.
This privilege is necessary only on clusters that contain leader indices.
`read_ilm`::
All read-only {Ilm} operations, such as getting policies and checking the
status of {Ilm}.
`transport_client`::
All privileges necessary for a transport client to connect. Required by the remote
cluster to enable <<cross-cluster-configuring,Cross Cluster Search>>.
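To check which of these cluster privileges the current user holds, you can call
the {ref}/security-api-has-privileges.html[has privileges API]. A quick sketch
(the privilege names are examples):

[source,console]
----
GET /_security/user/_has_privileges
{
  "cluster": [ "monitor", "create_snapshot" ]
}
----

The response reports, per privilege, whether the authenticated user has it.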
[[privileges-list-indices]]
==== Indices privileges
[horizontal]
`all`::
Any action on an index.
`create`::
Privilege to index documents. Also grants access to the update mapping
action.
+
--
NOTE: This privilege does not restrict the index operation to the creation
of documents but instead restricts API use to the index API. The index API allows a user
to overwrite a previously indexed document.
--
`create_index`::
Privilege to create an index. A create index request may contain aliases to be
added to the index once created. In that case the request requires the `manage`
privilege as well, on both the index and the alias names.
`delete`::
Privilege to delete documents.
`delete_index`::
Privilege to delete an index.
`index`::
Privilege to index and update documents. Also grants access to the update
mapping action.
`manage`::
All `monitor` privileges plus index administration (aliases, analyze, cache clear,
close, delete, exists, flush, mapping, open, force merge, refresh, settings,
search shards, templates, validate).
`manage_follow_index`::
All actions that are required to manage the lifecycle of a follower index, which
includes creating a follower index, closing it, and converting it to a regular
index. This privilege is necessary only on clusters that contain follower indices.
`manage_ilm`::
All {Ilm} operations relating to managing the execution of policies on an index.
This includes operations such as retrying policies and removing a policy
from an index.
`manage_leader_index`::
All actions that are required to manage the lifecycle of a leader index, which
includes {ref}/ccr-post-forget-follower.html[forgetting a follower]. This
privilege is necessary only on clusters that contain leader indices.
`monitor`::
All actions that are required for monitoring (recovery, segments info, index
stats and status).
`read`::
Read-only access to actions (count, explain, get, mget, get indexed scripts,
more like this, multi percolate/search/termvector, percolate, scroll,
clear_scroll, search, suggest, tv).
`read_cross_cluster`::
Read-only access to the search action from a <<cross-cluster-configuring,remote cluster>>.
`view_index_metadata`::
Read-only access to index metadata (aliases, aliases exists, get index, exists, field mappings,
mappings, search shards, type exists, validate, warmers, settings, ilm). This
privilege is primarily available for use by {kib} users.
`write`::
Privilege to perform all write operations to documents, which includes the
permission to index, update, and delete documents as well as performing bulk
operations. Also grants access to the update mapping action.
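Index privileges can also be combined with field and document level security in
a role definition. As a sketch (the role, index, field, and query here are
hypothetical), a role can grant `read` access restricted to specific fields and
to documents that match a query:

[source,console]
----
POST /_security/role/orders_reader
{
  "indices": [
    {
      "names": [ "orders" ],
      "privileges": [ "read" ],
      "field_security": {
        "grant": [ "order_id", "total" ]
      },
      "query": { "term": { "status": "public" } }
    }
  ]
}
----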
==== Run as privilege
The `run_as` permission enables an authenticated user to submit requests on
behalf of another user. The value can be a user name or a comma-separated list
of user names. (You can also specify users as an array of strings or a YAML
sequence.) For more information, see
<<run-as-privilege, Submitting Requests on Behalf of Other Users>>.
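For example (a sketch; the role and user names are placeholders), a role that
grants this privilege might look like this:

[source,console]
----
POST /_security/role/run_as_example
{
  "run_as": [ "jacknich" ]
}
----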
[[application-privileges]]
==== Application privileges
Application privileges are managed within {es} and can be retrieved with the
{ref}/security-api-has-privileges.html[has privileges API] and the
{ref}/security-api-get-privileges.html[get application privileges API]. They do
not, however, grant access to any actions or resources within {es}. Their
purpose is to enable applications to represent and store their own privilege
models within {es} roles.
To create application privileges, use the
{ref}/security-api-put-privileges.html[add application privileges API]. You can
then associate these application privileges with roles, as described in
<<defining-roles>>.
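For example, the following sketch registers two privileges for a hypothetical
application named `myapp` (the application, privilege, and action names are
illustrative):

[source,console]
----
PUT /_security/privilege
{
  "myapp": {
    "read": {
      "actions": [ "data:read/*", "action:login" ]
    },
    "write": {
      "actions": [ "data:write/*" ]
    }
  }
}
----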

View File

@ -0,0 +1,39 @@
[[cross-cluster-kibana]]
==== {ccs-cap} and {kib}
When {kib} is used to search across multiple clusters, a two-step authorization
process determines whether or not the user can access indices on a remote
cluster:
* First, the local cluster determines if the user is authorized to access remote
clusters. (The local cluster is the cluster {kib} is connected to.)
* If they are, the remote cluster then determines if the user has access
to the specified indices.
To grant {kib} users access to remote clusters, assign them a local role
with read privileges to indices on the remote clusters. You specify remote
cluster indices as `<remote_cluster_name>:<index_name>`.
To enable users to actually read the remote indices, you must create a matching
role on the remote clusters that grants the `read_cross_cluster` privilege
and access to the appropriate indices.
For example, if {kib} is connected to the cluster where you're actively
indexing {ls} data (your _local cluster_) and you're periodically
offloading older time-based indices to an archive cluster
(your _remote cluster_), and you want to enable {kib} users to search both
clusters:
. On the local cluster, create a `logstash_reader` role that grants
`read` and `view_index_metadata` privileges on the local `logstash-*` indices.
+
NOTE: If you configure the local cluster as another remote in {es}, the
`logstash_reader` role on your local cluster also needs to grant the
`read_cross_cluster` privilege.
. Assign your {kib} users the `kibana_user` role and your `logstash_reader`
role.
. On the remote cluster, create a `logstash_reader` role that grants the
`read_cross_cluster` privilege and `read` and `view_index_metadata` privileges
for the `logstash-*` indices, as sketched below.
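For example, a sketch of this remote-cluster role, created with the
{ref}/security-api-put-role.html[create or update roles API]:

[source,console]
----
POST /_security/role/logstash_reader
{
  "indices": [
    {
      "names": [ "logstash-*" ],
      "privileges": [ "read_cross_cluster", "read", "view_index_metadata" ]
    }
  ]
}
----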

View File

@ -150,4 +150,4 @@ GET two:logs-2017.04/_search <1>
// TEST[skip:todo]
//TBD: Is there a missing description of the <1> callout above?
include::{kib-repo-dir}/user/security/cross-cluster-kibana.asciidoc[]
include::cross-cluster-kibana.asciidoc[]

View File

@ -0,0 +1,33 @@
// tag::create-users[]
There are <<built-in-users,built-in users>> that you can use for specific
administrative purposes: `apm_system`, `beats_system`, `elastic`, `kibana`,
`logstash_system`, and `remote_monitoring_user`.
// end::create-users[]
Before you can use them, you must set their passwords:
. Restart {es}. For example, if you installed {es} with a `.tar.gz` package, run
the following command from the {es} directory:
+
--
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch
----------------------------------------------------------------------
See {ref}/starting-elasticsearch.html[Starting {es}].
--
. Set the built-in users' passwords.
+
--
// tag::create-users[]
Run the following command from the {es} directory:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch-setup-passwords interactive
----------------------------------------------------------------------
// end::create-users[]
--

View File

@ -0,0 +1,35 @@
When you use the basic and trial licenses, the {es} {security-features} are
disabled by default. To enable them:
. Stop {kib}. The method for starting and stopping {kib} varies depending on
how you installed it. For example, if you installed {kib} from an archive
distribution (`.tar.gz` or `.zip`), stop it by entering `Ctrl-C` on the command
line. See {kibana-ref}/start-stop.html[Starting and stopping {kib}].
. Stop {es}. For example, if you installed {es} from an archive distribution,
enter `Ctrl-C` on the command line. See
{ref}/stopping-elasticsearch.html[Stopping {es}].
. Add the `xpack.security.enabled` setting to the
`ES_PATH_CONF/elasticsearch.yml` file.
+
--
TIP: The `ES_PATH_CONF` environment variable contains the path for the {es}
configuration files. If you installed {es} using archive distributions (`zip` or
`tar.gz`), it defaults to `ES_HOME/config`. If you used package distributions
(Debian or RPM), it defaults to `/etc/elasticsearch`. For more information, see
{ref}/settings.html[Configuring {es}].
For example, add the following setting:
[source,yaml]
----
xpack.security.enabled: true
----
TIP: If you have a basic or trial license, the default value for this setting is
`false`. If you have a gold or higher license, the default value is `true`.
Therefore, it is a good idea to explicitly add this setting to avoid confusion
about whether {security-features} are enabled.
--

View File

@ -0,0 +1,62 @@
When the {es} {security-features} are enabled, users must log in to {kib}
with a valid user ID and password.
{kib} also performs some tasks under the covers that require use of the
built-in `kibana` user.
. Configure {kib} to use the built-in `kibana` user and the password that you
created:
** If you don't mind having passwords visible in your configuration file,
uncomment and update the following settings in the `kibana.yml` file in your
{kib} directory:
+
--
TIP: If you installed {kib} using archive distributions (`zip` or
`tar.gz`), the `kibana.yml` configuration file is in `KIBANA_HOME/config`. If
you used package distributions (Debian or RPM), it's in `/etc/kibana`. For more
information, see {kibana-ref}/settings.html[Configuring {kib}].
For example, add the following settings:
[source,yaml]
----
elasticsearch.username: "kibana"
elasticsearch.password: "your_password"
----
Specify the password that you set with the `elasticsearch-setup-passwords`
command, then save your changes to the file.
--
** If you prefer not to put your user ID and password in the `kibana.yml` file,
store them in a keystore instead. Run the following commands to create the {kib}
keystore and add the secure settings:
+
--
// tag::store-kibana-user[]
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/kibana-keystore create
./bin/kibana-keystore add elasticsearch.username
./bin/kibana-keystore add elasticsearch.password
----------------------------------------------------------------------
When prompted, specify the `kibana` built-in user and its password for these
setting values. The settings are automatically applied when you start {kib}.
To learn more, see {kibana-ref}/secure-settings.html[Secure settings].
// end::store-kibana-user[]
--
. Restart {kib}. For example, if you installed
{kib} with a `.tar.gz` package, run the following command from the {kib}
directory:
+
--
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/kibana
----------------------------------------------------------------------
See {kibana-ref}/start-stop.html[Starting and stopping {kib}].
--

View File

@ -0,0 +1,377 @@
[role="xpack"]
[testenv="basic"]
[[security-getting-started]]
== Tutorial: Getting started with security
In this tutorial, you learn how to secure a cluster by configuring users and
roles in {es}, {kib}, {ls}, and {metricbeat}.
[float]
[[get-started-security-prerequisites]]
=== Before you begin
. Install and configure {es}, {kib}, {ls}, and {metricbeat} as described in
{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}].
+
--
IMPORTANT: To complete this tutorial, you must install the default {es} and
{kib} packages, which include role-based access control (RBAC) and native
authentication {security-features}. When you install these products, they apply
basic licenses with no expiration dates. All of the subsequent steps in this
tutorial assume that you are using a basic license. For more information, see
{subscriptions} and <<license-management>>.
--
. Stop {ls}. The method for starting and stopping {ls} varies depending on whether
you are running it from the command line or running it as a service. For example,
if you are running {ls} from the command line, you can stop it by entering
`Ctrl-C`. See {logstash-ref}/shutdown.html[Shutting down {ls}].
. Stop {metricbeat}. For example, enter `Ctrl-C` on the command line where it is
running.
. Launch the {kib} web interface by pointing your browser to port 5601. For
example, http://127.0.0.1:5601[http://127.0.0.1:5601].
[role="xpack"]
[[get-started-enable-security]]
=== Enable {es} {security-features}
include::get-started-enable-security.asciidoc[]
. Enable single-node discovery in the `ES_PATH_CONF/elasticsearch.yml` file.
+
--
This tutorial involves a single node cluster, but if you had multiple
nodes, you would enable {es} {security-features} on every node in the cluster
and configure Transport Layer Security (TLS) for internode-communication, which
is beyond the scope of this tutorial. By enabling single-node discovery, we are
postponing the configuration of TLS. For example, add the following setting:
[source,yaml]
----
discovery.type: single-node
----
For more information, see
{ref}/bootstrap-checks.html#single-node-discovery[Single-node discovery].
--
When you enable {es} {security-features}, basic authentication is enabled by
default. To communicate with the cluster, you must specify a username and
password. Unless you <<anonymous-access,enable anonymous access>>, all requests
that don't include a username and password are rejected.
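For example, you can verify that your credentials are accepted by calling the
{ref}/security-api-authenticate.html[authenticate API], which returns
information about the currently authenticated user:

[source,console]
----
GET /_security/_authenticate
----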
[role="xpack"]
[[get-started-built-in-users]]
=== Create passwords for built-in users
include::get-started-builtin-users.asciidoc[]
You need these built-in users in subsequent steps, so choose passwords that you
can remember!
NOTE: This tutorial does not use the built-in `apm_system`, `logstash_system`,
`beats_system`, and `remote_monitoring_user` users, which are typically
associated with monitoring. For more information, see
{logstash-ref}/ls-security.html#ls-monitoring-user[Configuring credentials for {ls} monitoring]
and {metricbeat-ref}/monitoring.html[Monitoring {metricbeat}].
[role="xpack"]
[[get-started-kibana-user]]
=== Add the built-in user to {kib}
include::get-started-kibana-users.asciidoc[]
[role="xpack"]
[[get-started-authentication]]
=== Configure authentication
Now that you've set up the built-in users, you need to decide how you want to
manage all the other users.
The {stack} _authenticates_ users to ensure that they are valid. The
authentication process is handled by _realms_. You can use one or more built-in
realms, such as the native, file, LDAP, PKI, Active Directory, SAML, or Kerberos
realms. Alternatively, you can create your own custom realms. In this tutorial,
we'll use a native realm.
In general, you configure realms by adding `xpack.security.authc.realms`
settings in the `elasticsearch.yml` file. However, the native realm is available
by default when no other realms are configured. Therefore, you don't need to do
any extra configuration steps in this tutorial. You can jump straight to
creating users!
If you want to learn more about authentication and realms, see
<<setting-up-authentication>>.
[role="xpack"]
[[get-started-users]]
=== Create users
Let's create two users in the native realm.
. Log in to {kib} with the `elastic` built-in user.
. Go to the *Management / Security / Users* page:
+
--
[role="screenshot"]
image::security/images/management-builtin-users.jpg["User management screenshot in Kibana"]
In this example, you can see a list of built-in users.
--
. Click *Create new user*. For example, create a user for yourself:
+
--
[role="screenshot"]
image::security/images/create-user.jpg["Creating a user in Kibana"]
You'll notice that when you create a user, you can assign it a role. Don't
choose a role yet--we'll come back to that in subsequent steps.
--
. Click *Create new user* and create a `logstash_internal` user.
+
--
In {stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}],
you configured {ls} to listen for {metricbeat}
input and to send the events to {es}. You therefore need to create a user
that {ls} can use to communicate with {es}. For example:
[role="screenshot"]
image::security/images/create-logstash-user.jpg["Creating a {ls} user in {kib}"]
--
[role="xpack"]
[[get-started-roles]]
=== Assign roles
By default, all users can change their own passwords, get information about
themselves, and run the `authenticate` API. If you want them to do more than
that, you need to give them one or more _roles_.
Each role defines a specific set of actions (such as read, create, or delete)
that can be performed on specific secured resources (such as indices, aliases,
documents, fields, or clusters). To help you get up and running, there are
built-in roles.
Go to the *Management / Security / Roles* page to see them:
[role="screenshot"]
image::security/images/management-roles.jpg["Role management screenshot in Kibana"]
Select a role to see more information about its privileges. For example, select
the `kibana_system` role to see its list of cluster and index privileges. To
learn more, see <<privileges-list-indices>>.
Let's assign the `kibana_user` role to your user. Go back to the
*Management / Security / Users* page and select your user. Add the `kibana_user`
role and save the change. For example:
[role="screenshot"]
image::security/images/assign-role.jpg["Assigning a role to a user in Kibana"]
This user now has access to all features in {kib}. For more information about
granting access to {kib}, see {kibana-ref}/xpack-security-authorization.html[Kibana Authorization].
If you completed all of the steps in
{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}], you should
have {metricbeat} data stored in {es}. Let's create two roles that grant
different levels of access to that data.
Go to the *Management / Security / Roles* page and click *Create role*.
Create a `metricbeat_reader` role that has `read` and `view_index_metadata`
privileges on the `metricbeat-*` indices:
[role="screenshot"]
image::security/images/create-reader-role.jpg["Creating a role in Kibana"]
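If you prefer the API to the UI, an equivalent request might look like this
(a sketch using the {ref}/security-api-put-role.html[create or update roles API]):

[source,console]
----
POST /_security/role/metricbeat_reader
{
  "indices": [
    {
      "names": [ "metricbeat-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
----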
Create a `metricbeat_writer` role that has `manage_index_templates` and `monitor`
cluster privileges, as well as `write`, `delete`, and `create_index` privileges
on the `metricbeat-*` indices:
[role="screenshot"]
image::security/images/create-writer-role.jpg["Creating another role in Kibana"]
Now go back to the *Management / Security / Users* page and assign these roles
to the appropriate users. Assign the `metricbeat_reader` role to your personal
user. Assign the `metricbeat_writer` role to the `logstash_internal` user.
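Role assignments can likewise be made through the API. For example, this sketch
(the user name and password are placeholders) creates a personal user with both
of the roles it needs:

[source,console]
----
POST /_security/user/jdoe
{
  "password" : "a_secure_password",
  "roles" : [ "kibana_user", "metricbeat_reader" ]
}
----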
The list of users should now contain all of the built-in users as well as the
two you created. It should also show the appropriate roles for your users:
[role="screenshot"]
image::security/images/management-users.jpg["User management screenshot in Kibana"]
If you want to learn more about authorization and roles, see <<authorization>>.
[role="xpack"]
[[get-started-logstash-user]]
=== Add user information in {ls}
In order for {ls} to send data successfully to {es}, you must configure its
authentication credentials in the {ls} configuration file.
. Configure {ls} to use the `logstash_internal` user and the password that you
created:
** If you don't mind having passwords visible in your configuration file, add
the following `user` and `password` settings in the `demo-metrics-pipeline.conf`
file in your {ls} directory:
+
--
[source,ruby]
----
...
output {
elasticsearch {
hosts => "localhost:9200"
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
user => "logstash_internal" <1>
password => "your_password" <2>
}
}
----
<1> Specify the `logstash_internal` user that you created earlier in this tutorial.
<2> Specify the password that you chose for this user ID.
--
** If you prefer not to put your user ID and password in the configuration file,
store them in a keystore instead.
+
--
Run the following commands to create the {ls}
keystore and add the secure settings:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword <1>
set -o history
./bin/logstash-keystore create
./bin/logstash-keystore add ES_USER
./bin/logstash-keystore add ES_PWD
----------------------------------------------------------------------
<1> You can optionally protect access to the {ls} keystore by storing a password
in an environment variable called `LOGSTASH_KEYSTORE_PASS`. For more information,
see {logstash-ref}/keystore.html#keystore-password[Keystore password].
When prompted, specify the `logstash_internal` user and its password for the
`ES_USER` and `ES_PWD` values.
NOTE: The {ls} keystore differs from the {kib} keystore. Whereas the {kib}
keystore enables you to store `kibana.yml` settings by name, the {ls} keystore
enables you to create arbitrary names that you can reference in the {ls}
configuration. To learn more, see
{logstash-ref}/keystore.html[Secrets keystore for secure settings].
You can now use these `ES_USER` and `ES_PWD` keys in your configuration
file. For example, add the `user` and `password` settings in the
`demo-metrics-pipeline.conf` file as follows:
[source,ruby]
----
...
output {
elasticsearch {
hosts => "localhost:9200"
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
user => "${ES_USER}"
password => "${ES_PWD}"
}
}
----
--
. Start {ls} by using the appropriate method for your environment.
+
--
For example, to
run {ls} from a command line, go to the {ls} directory and enter the following
command:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/logstash -f demo-metrics-pipeline.conf
----------------------------------------------------------------------
To start {ls} as a service, see
{logstash-ref}/running-logstash.html[Running {ls} as a service on Debian or RPM].
--
. If you were connecting directly from {metricbeat} to {es}, you would need
to configure authentication credentials for the {es} output in the {metricbeat}
configuration file. In
{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}],
however, you configured
{metricbeat} to send the data to {ls} for additional parsing, so no extra
settings are required in {metricbeat}. For more information, see
{metricbeat-ref}/securing-metricbeat.html[Securing {metricbeat}].
. Start {metricbeat} by using the appropriate method for your environment.
+
--
For example, on macOS, run the following command from the {metricbeat} directory:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./metricbeat -e
----------------------------------------------------------------------
For more methods, see {metricbeat-ref}/metricbeat-starting.html[Starting {metricbeat}].
--
Wait a few minutes for new data to be sent from {metricbeat} to {ls} and {es}.
[role="xpack"]
[[get-started-verify-users]]
=== View system metrics in {kib}
Log in to {kib} with the user ID that has `metricbeat_reader` and `kibana_user`
roles (for example, `jdoe`).
These roles enable the user to see the system metrics in {kib} (for example, on
the *Discover* page or in the
http://localhost:5601/app/kibana#/dashboard/Metricbeat-system-overview[{metricbeat} system overview dashboard]).
[float]
[[gs-security-nextsteps]]
=== What's next?
Congratulations! You've successfully set up authentication and authorization by
using the native realm. You learned how to create user IDs and roles that
prevent unauthorized access to the {stack}.
Later, when you're ready to increase the number of nodes in your cluster, you'll
want to encrypt communications across the {stack}. To learn how, read
<<encrypting-communications>>.
For more detailed information about securing the {stack}, see:
* {ref}/configuring-security.html[Configuring security in {es}]. Encrypt
inter-node communications, set passwords for the built-in users, and manage your
users and roles.
* {kibana-ref}/using-kibana-with-security.html[Configuring security in {kib}].
Set the authentication credentials in {kib} and encrypt communications between
the browser and the {kib} server.
* {logstash-ref}/ls-security.html[Configuring security in Logstash]. Set the
authentication credentials for Logstash and encrypt communications between
Logstash and {es}.
* <<beats,Configuring security in the Beats>>. Configure authentication
credentials and encrypt connections to {es}.
* <<java-clients,Configuring the Java transport client to use encrypted communications>>.
* {hadoop-ref}/security.html[Configuring {es} for Apache Hadoop to use secured transport].

View File

@ -0,0 +1,50 @@
[role="xpack"]
[[how-security-works]]
== How security works
An Elasticsearch cluster is typically made up of many moving parts: the
Elasticsearch nodes that form the cluster, and often Logstash instances,
Kibana instances, Beats agents, and clients, all communicating with the cluster.
It should come as no surprise that securing such a cluster has many facets and
layers.
The {stack-security-features} provide the means to secure the Elastic cluster
on several levels:
* <<setting-up-authentication,User authentication>>
* <<authorization,User authorization and access control>>
* Node/client authentication and channel encryption
* Auditing
[float]
=== Node/client authentication and channel encryption
The {security-features} support configuring SSL/TLS for securing the
communication channels to, from, and within the cluster. This support accounts for:
* Encryption of data transmitted over the wire
* Certificate-based node authentication, which prevents unauthorized nodes and
clients from establishing a connection with the cluster.
For more information, see <<encrypting-communications, Encrypting Communications>>.
The {security-features} also enable you to <<ip-filtering, configure IP filters>>,
which can be seen as a lightweight mechanism for node/client authentication. With
IP filtering, you can restrict the nodes and clients that can connect to the
cluster based on their IP addresses. The IP filter configuration provides
whitelisting and blacklisting of IPs, subnets, and DNS domains.
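For example, assuming the IP filter settings are updated dynamically (the
addresses shown are placeholders), rules can be applied with the cluster update
settings API:

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "xpack.security.transport.filter.allow": "192.168.0.0/24",
    "xpack.security.transport.filter.deny": "_all"
  }
}
----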
[float]
=== Auditing
When dealing with any secure system, it is critical to have an audit trail
mechanism in place. Audit trails log the activities and events that occur in
the system, enabling you to analyze and backtrack past events when things go
wrong (for example, a security breach).
The {security-features} provide such audit trail functionality for all nodes in
the cluster. You can configure the audit level, which determines the types of
events that are logged. These events include failed authentication attempts,
user access denied, node connection denied, and more.
For more information on auditing, see <<auditing>>.

Binary files not shown (eight screenshot images added: 110, 109, 203, 108, 211, 172, 150, and 216 KiB).

View File

@ -0,0 +1,113 @@
[role="xpack"]
[[elasticsearch-security]]
= Securing the {stack}
[partintro]
--
The {stack-security-features} enable you to easily secure a cluster. You can
password-protect your data as well as implement more advanced security
measures such as encrypting communications, role-based access control,
IP filtering, and auditing. This guide describes how to configure the security
features you need, and interact with your secured cluster.
Security protects Elasticsearch clusters by:
* <<preventing-unauthorized-access, Preventing unauthorized access>>
with password protection, role-based access control, and IP filtering.
* <<preserving-data-integrity, Preserving the integrity of your data>>
with message authentication and SSL/TLS encryption.
* <<maintaining-audit-trail, Maintaining an audit trail>>
so you know who's doing what to your cluster and the data it stores.
[float]
[[preventing-unauthorized-access]]
=== Preventing unauthorized access
To prevent unauthorized access to your Elasticsearch cluster, you must have a
way to _authenticate_ users. This simply means that you need a way to validate
that a user is who they claim to be. For example, you have to make sure only
the person named _Kelsey Andorra_ can sign in as the user `kandorra`. The
{es-security-features} provide a standalone authentication mechanism that enables
you to quickly password-protect your cluster. If you're already using
<<ldap-realm, LDAP>>, <<active-directory-realm, Active Directory>>, or
<<pki-realm, PKI>> to manage users in your organization, the {security-features}
are able to integrate with those systems to perform user authentication.
In many cases, simply authenticating users isn't enough. You also need a way to
control what data users have access to and what tasks they can perform. The
{es-security-features} enable you to _authorize_ users by assigning access
_privileges_ to _roles_ and assigning those roles to users. For example, this
<<authorization,role-based access control>> mechanism (also known as RBAC) enables
you to specify that the user `kandorra` can only perform read operations on the
`events` index and can't do anything at all with other indices.
The {security-features} also support <<ip-filtering, IP-based authorization>>.
You can whitelist and blacklist specific IP addresses or subnets to control
network-level access to a server.
[float]
[[preserving-data-integrity]]
=== Preserving data integrity
A critical part of security is keeping confidential data confidential.
Elasticsearch has built-in protections against accidental data loss and
corruption. However, there's nothing to stop deliberate tampering or data
interception. The {stack-security-features} preserve the integrity of your
data by <<ssl-tls, encrypting communications>> to and from nodes. For even
greater protection, you can increase the <<ciphers, encryption strength>> and
<<separating-node-client-traffic, separate client traffic from node-to-node communications>>.
[float]
[[maintaining-audit-trail]]
=== Maintaining an audit trail
Keeping a system secure takes vigilance. By using {stack-security-features} to
maintain an audit trail, you can easily see who is accessing your cluster and
what they're doing. By analyzing access patterns and failed attempts to access
your cluster, you can gain insights into attempted attacks and data breaches.
Keeping an auditable log of the activity in your cluster can also help diagnose
operational issues.
[float]
=== Where to Go Next
* <<security-getting-started, Getting Started>>
steps through how to install and start using the {security-features} for basic
authentication.
* <<how-security-works, How Security Works>>
provides more information about how the {security-features} support user
authentication, authorization, and encryption.
* <<ccs-clients-integrations>>
shows you how to interact with an Elasticsearch cluster protected by the
{stack-security-features}.
[float]
=== Have Comments, Questions, or Feedback?
Head over to our {security-forum}[Security Discussion Forum]
to share your experience, questions, and suggestions.
--
include::how-security-works.asciidoc[]
include::authentication/index.asciidoc[]
include::authorization/index.asciidoc[]
include::{xes-repo-dir}/security/auditing/index.asciidoc[]
include::{xes-repo-dir}/security/securing-communications.asciidoc[]
include::{xes-repo-dir}/security/using-ip-filtering.asciidoc[]
include::{xes-repo-dir}/security/ccs-clients-integrations.asciidoc[]
include::get-started-security.asciidoc[]
include::securing-communications/tutorial-tls-intro.asciidoc[]
include::troubleshooting.asciidoc[]
include::limitations.asciidoc[]

View File

@ -0,0 +1,93 @@
[role="xpack"]
[[security-limitations]]
== Security limitations
[subs="attributes"]
++++
<titleabbrev>Limitations</titleabbrev>
++++
[float]
=== Plugins
Elasticsearch's plugin infrastructure is extremely flexible in terms of what can
be extended. While it opens up Elasticsearch to a wide variety of (often custom)
additional functionality, when it comes to security, this high extensibility level
comes at a cost. We have no control over the third-party plugins' code (open
source or not) and therefore we cannot guarantee their compliance with
{stack-security-features}. For this reason, third-party plugins are not
officially supported on clusters with {security-features} enabled.
[float]
=== Changes in index wildcard behavior
Elasticsearch clusters with the {security-features} enabled apply the `/_all`
wildcard, and all other wildcards, to the indices that the current user has
privileges for, not the set of all indices on the cluster.
[float]
=== Multi document APIs
The multi get and multi term vectors APIs throw an IndexNotFoundException when a
user tries to access a nonexistent index for which they are not authorized. In
doing so, they leak the fact that the index doesn't exist to a user who is not
authorized to know anything about it.
[float]
=== Filtered index aliases
Aliases containing filters are not a secure way to restrict access to individual
documents, due to the limitations described in
<<alias-limitations, Index and field names can be leaked when using aliases>>.
The {stack-security-features} provide a secure way to restrict access to
documents through the
<<field-and-document-access-control, document-level security>> feature.
[float]
=== Field and document level security limitations
When a user's role enables document or field level security for an index:
* The user cannot perform write operations:
** The update API isn't supported.
** Update requests included in bulk requests aren't supported.
* The request cache is disabled for search requests.
When a user's role enables document level security for an index:
* Document level security isn't applied for APIs that aren't document based.
An example is the field stats API.
* Document level security doesn't affect global index statistics that relevancy
scoring uses. This means that scores are computed without taking the role
query into account. Note that documents that don't match the role query are
never returned.
* The `has_child` and `has_parent` queries aren't supported as queries in the
role definition. They can, however, be used in the search API with document
level security enabled.
* Any query that makes remote calls to fetch data to query by isn't supported.
The following queries aren't supported:
** The `terms` query with terms lookup.
** The `geo_shape` query with indexed shapes.
** The `percolate` query.
* If suggesters are specified and document level security is enabled, the
specified suggesters are ignored.
* A search request cannot be profiled if document level security is enabled.
[float]
[[alias-limitations]]
=== Index and field names can be leaked when using aliases
Calling certain Elasticsearch APIs on an alias can potentially leak information
about indices that the user isn't authorized to access. For example, when you get
the mappings for an alias with the `_mapping` API, the response includes the
index name and mappings for each index that the alias applies to.
Until this limitation is addressed, avoid index and field names that contain
confidential or sensitive information.
[float]
=== LDAP realm
The <<ldap-realm, LDAP Realm>> does not currently support the discovery of nested
LDAP Groups. For example, if a user is a member of `group_1` and `group_1` is a
member of `group_2`, only `group_1` will be discovered. However, the
<<active-directory-realm, Active Directory Realm>> *does* support transitive
group membership.

View File

@ -0,0 +1,188 @@
[role="xpack"]
[testenv="basic"]
[[encrypting-communications-hosts]]
=== Add nodes to your cluster
You can add more nodes to your cluster and optionally designate specific
purposes for each node. For example, you can allocate master nodes, data nodes,
ingest nodes, machine learning nodes, and dedicated coordinating nodes. For
details about each node type, see {ref}/modules-node.html[Nodes].
Let's add two nodes to our cluster!
. Install two additional copies of {es}. It's possible to run multiple {es}
nodes using a shared installation. In this tutorial, however, we're keeping
things simple by using the `zip` or `tar.gz` packages and by putting each copy
in a separate folder. You can simply repeat the steps that you used to install
{es} in the
{stack-gs}/get-started-elastic-stack.html#install-elasticsearch[Getting started with the {stack}]
tutorial.
. Generate certificates for the two new nodes.
+
--
For example, run the following command:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch-certutil cert \
--ca elastic-stack-ca.p12 \ <1>
--multiple
----------------------------------------------------------------------
<1> Use the certificate authority that you created in <<encrypting-communications-certificates>>.
You are prompted for information about each new node. Specify `node-2` and
`node-3` for the instance names. For the purposes of this tutorial, specify the
same IP address (`127.0.0.1,::1`) and DNS name (`localhost`) for each node.
You are prompted to enter the password for your CA. You are also prompted to
create a password for each certificate.
By default, the command produces a zip file named `certificate-bundle.zip`,
which contains the generated certificates and keys.
--
. Decompress the `certificate-bundle.zip` file. For example:
+
--
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
unzip certificate-bundle.zip
Archive: certificate-bundle.zip
creating: node-2/
inflating: node-2/node-2.p12
creating: node-3/
inflating: node-3/node-3.p12
----------------------------------------------------------------------
The `certificate-bundle.zip` file contains a folder for each of your nodes. Each
folder contains a single PKCS#12 keystore that includes a node certificate,
node key, and CA certificate.
--
. Create a folder to contain certificates in the configuration directory of each
{es} node. For example, create a `certs` folder in the `config` directory.
. Copy the appropriate certificate to the configuration directory on each node.
For example, copy the `node-2.p12` file into the `config/certs` directory on the
second node and the `node-3.p12` into the `config/certs` directory on the third
node.
. Specify the name of the cluster and give each node a unique name.
+
--
For example, add the following settings to the `ES_PATH_CONF/elasticsearch.yml`
file on the second node:
[source,yaml]
----
cluster.name: test-cluster
node.name: node-2
----
Add the following settings to the `ES_PATH_CONF/elasticsearch.yml`
file on the third node:
[source,yaml]
----
cluster.name: test-cluster
node.name: node-3
----
NOTE: In order to join the same cluster as the first node, the new nodes must
share the same `cluster.name` value.
--
. (Optional) Provide seed addresses to help your nodes discover other nodes with
which to form a cluster.
+
--
For example, add the following setting in the `ES_PATH_CONF/elasticsearch.yml`
file:
[source,yaml]
----
discovery.seed_hosts: ["localhost"]
----
The default value for this setting is `127.0.0.1, [::1]`, therefore it isn't
actually required in this tutorial. When you want to form a cluster with nodes
on other hosts, however, you must use this setting to provide a list of
master-eligible nodes to seed the discovery process. For more information, see
{ref}/modules-discovery-hosts-providers.html[Discovery].
--
. On each node, enable TLS for transport communications. You must also configure
each node to identify itself using its signed certificate.
+
--
include::tutorial-tls-internode.asciidoc[tag=enable-tls]
--
. On each node, store the password for the PKCS#12 file in the {es} keystore.
+
--
include::tutorial-tls-internode.asciidoc[tag=secure-passwords]
On the second node, supply the password that you created for the `node-2.p12`
file. On the third node, supply the password that you created for the
`node-3.p12` file.
--
. Start each {es} node. For example, if you installed {es} with a `.tar.gz`
package, run the following command from each {es} directory:
+
--
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch
----------------------------------------------------------------------
See {ref}/starting-elasticsearch.html[Starting {es}].
If you encounter errors, you can see some common problems and solutions in
<<trb-security-ssl>>.
--
. Verify that your cluster now contains three nodes.
+
--
For example, log into {kib} with the `elastic` built-in user. Go to
*Dev Tools > Console* and run the {ref}/cluster-health.html[cluster health API]:
[source,console]
----------------------------------
GET _cluster/health
----------------------------------
Confirm the `number_of_nodes` in the response from this API.
You can also use the {ref}/cat-nodes.html[cat nodes API] to identify the master
node:
[source,console]
----------------------------------
GET _cat/nodes?v
----------------------------------
The node that has an asterisk (*) in the `master` column is the elected master
node.
--
Now that you have multiple nodes, your data can be distributed across the
cluster in multiple primary and replica shards. For more information about the
concepts of clusters, nodes, and shards, see
{ref}/getting-started.html[Getting started with {es}].
[float]
[[encrypting-internode-nextsteps]]
=== What's next?
Congratulations! You've encrypted communications between the nodes in your
cluster and can pass the
{ref}/bootstrap-checks-xpack.html#bootstrap-checks-tls[TLS bootstrap check].
If you want to encrypt communications between other products in the {stack}, see
<<encrypting-communications>>.

View File

@ -0,0 +1,77 @@
[role="xpack"]
[testenv="basic"]
[[encrypting-communications-certificates]]
=== Generate certificates
In a secured cluster, {es} nodes use certificates to identify themselves when
communicating with other nodes.
The cluster must validate the authenticity of these certificates. The
recommended approach is to trust a specific certificate authority (CA). Thus,
when nodes are added to your cluster, they just need to use a certificate signed
by the same CA.
. Generate a certificate authority for your cluster.
+
--
Run the following command:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch-certutil ca
----------------------------------------------------------------------
You are prompted for an output filename and a password. In this tutorial, we'll
use the default filename (`elastic-stack-ca.p12`).
The output file is a PKCS#12 keystore that contains the public certificate for
your certificate authority and the private key that is used to sign the node
certificates.
TIP: We'll need to use this file again when we add nodes to the cluster, so
remember its location and password. Ideally you should store the file securely,
since it holds the key to your cluster.
For more information about this command, see
{ref}/certutil.html[elasticsearch-certutil].
--
. Create a folder to contain certificates in the configuration directory of your
{es} node. For example, create a `certs` folder in the `config` directory.
. Generate certificates and private keys for the first node in your cluster.
+
--
Run the following command:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch-certutil cert \
--ca elastic-stack-ca.p12 \ <1>
--dns localhost \ <2>
--ip 127.0.0.1,::1 \ <3>
--out config/certs/node-1.p12 <4>
----------------------------------------------------------------------
<1> The `--ca` parameter contains the name of the certificate authority that you
generated for this cluster.
<2> The `--dns` parameter contains a comma-separated list of DNS names for the
node.
<3> The `--ip` parameter contains a comma-separated list of IP addresses for the
node.
<4> The `--out` parameter contains the name and location of the generated
certificate. Ideally the file name matches the `node.name` value in the
`elasticsearch.yml` file.
You are prompted to enter the password for your CA. You are also prompted to
create a password for the certificate.
The output file is a PKCS#12 keystore that includes a node certificate, node key,
and CA certificate.
--
TIP: The {ref}/certutil.html[elasticsearch-certutil] command has a lot more
options. For example, it can generate Privacy Enhanced Mail (PEM) formatted
certificates and keys. It can also generate certificate signing requests (CSRs)
that you can use to obtain signed certificates from a commercial or
organization-specific certificate authority. However, those options are not
covered in this tutorial.

View File

@ -0,0 +1,177 @@
[role="xpack"]
[testenv="trial"]
[[encrypting-internode]]
=== Encrypt internode communications
Now that we've generated a certificate authority and certificates, let's update
the cluster to use these files.
IMPORTANT: When you enable {es} {security-features}, unless you have a trial
license, you must use Transport Layer Security (TLS) to encrypt internode
communication. By following the steps in this tutorial, you learn how
to meet the minimum requirements to pass the
{ref}/bootstrap-checks-xpack.html#bootstrap-checks-tls[TLS bootstrap check].
. (Optional) Name the cluster.
+
--
For example, add the {ref}/cluster.name.html[cluster.name] setting in the
`ES_PATH_CONF/elasticsearch.yml` file:
[source,yaml]
----
cluster.name: test-cluster
----
TIP: The `ES_PATH_CONF` environment variable contains the path for the {es}
configuration files. If you installed {es} using archive distributions (`zip` or
`tar.gz`), it defaults to `ES_HOME/config`. If you used package distributions
(Debian or RPM), it defaults to `/etc/elasticsearch`. For more information, see
{ref}/settings.html[Configuring {es}].
The default cluster name is `elasticsearch`. You should choose a unique name,
however, to ensure that your nodes join the right cluster.
--
. (Optional) Name the {es} node.
+
--
For example, add the {ref}/node.name.html[node.name] setting in the
`ES_PATH_CONF/elasticsearch.yml` file:
[source,yaml]
----
node.name: node-1
----
In this tutorial, the cluster will consist of three nodes that exist on the same
machine and share the same (loopback) IP address and hostname. Therefore, we
must give each node a unique name.
This step is also necessary if you want to use the `node.name` value to define
the location of certificates in subsequent steps.
--
. Disable single-node discovery.
+
--
To enable {es} to form a multi-node cluster, use the default value for the
`discovery.type` setting. If that setting exists in your
`ES_PATH_CONF/elasticsearch.yml` file, remove it.
--
. (Optional) If you are starting the cluster for the first time, specify the
initial set of master-eligible nodes.
+
--
For example, add the following setting in the `ES_PATH_CONF/elasticsearch.yml`
file:
[source,yaml]
----
cluster.initial_master_nodes: ["node-1"]
----
If you start an {es} node without configuring this setting or any other
discovery settings, it will start up in development mode and auto-bootstrap
itself into a new cluster.
TIP: If you are starting a cluster with multiple master-eligible nodes for the
first time, add all of those node names to the `cluster.initial_master_nodes`
setting.
See {ref}/modules-discovery-bootstrap-cluster.html[Bootstrapping a cluster] and
{ref}/discovery-settings.html[Important discovery and cluster formation settings].
--
. Enable Transport Layer Security (TLS/SSL) for transport (internode)
communications.
+
--
// tag::enable-tls[]
For example, add the following settings in the `ES_PATH_CONF/elasticsearch.yml`
file:
[source,yaml]
----
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/${node.name}.p12 <1>
xpack.security.transport.ssl.truststore.path: certs/${node.name}.p12
----
<1> If the file name for your certificate does not match the `node.name` value,
you must put the appropriate file name in the `elasticsearch.yml` file.
// end::enable-tls[]
NOTE: The PKCS#12 keystore that is output by the `elasticsearch-certutil` can be
used as both a keystore and a truststore. If you use other tools to manage and
generate your certificates, you might have different values for these settings,
but that scenario is not covered in this tutorial.
For more information, see <<get-started-enable-security>> and
{ref}/security-settings.html#transport-tls-ssl-settings[Transport TLS settings].
--
. Store the password for the PKCS#12 file in the {es} keystore.
+
--
// tag::secure-passwords[]
For example, run the following commands:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch-keystore create <1>
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
----------------------------------------------------------------------
<1> If the {es} keystore already exists, this command asks whether you want to
overwrite it. You do not need to overwrite it; you can simply add settings to
your existing {es} keystore.
// end::secure-passwords[]
You are prompted to supply the password that you created for the `node-1.p12`
file. We are using this file for both the transport TLS keystore and truststore,
therefore supply the same password for both of these settings.
--
. {ref}/starting-elasticsearch.html[Start {es}].
+
--
For example, if you installed {es} with a `.tar.gz` package, run the following
command from the {es} directory:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/elasticsearch
----------------------------------------------------------------------
--
. Create passwords for the built-in users and configure {kib} to use them.
+
--
NOTE: If you already configured passwords for these users in other tutorials,
you can skip this step.
include::{stack-repo-dir}/security/get-started-builtin-users.asciidoc[tag=create-users]
After you set up the password for the `kibana` built-in user,
<<get-started-kibana-user,configure {kib} to use it>>.
For example, run the following commands to create the {kib} keystore and add the
`kibana` built-in user and its password in secure settings:
include::{stack-repo-dir}/security/get-started-kibana-users.asciidoc[tag=store-kibana-user]
--
. Start {kib}.
+
--
For example, if you installed {kib} with a `.tar.gz` package, run the following
command from the {kib} directory:
["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
./bin/kibana
----------------------------------------------------------------------
See {kibana-ref}/start-stop.html[Starting and stopping {kib}].
--

View File

@ -0,0 +1,47 @@
[role="xpack"]
[testenv="basic"]
[[encrypting-internode-communications]]
== Tutorial: Encrypting communications
In the {stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}]
and <<security-getting-started,Getting started with security>> tutorials, we
used a cluster with a single {es} node to get up and running with the {stack}.
You can add as many nodes as you want to a cluster, but they must be able to
communicate with each other. The communication between nodes in a cluster is
handled by the {ref}/modules-transport.html[transport module]. To secure your
cluster, you must ensure that the internode communications are encrypted.
NOTE: In this tutorial, we add more nodes by installing more copies of {es} on
the same machine. By default, {es} binds to loopback addresses for HTTP and
transport communication. That is fine for the purposes of this tutorial and for
downloading and experimenting with {es} in a test or development environment.
When you are deploying a production environment, however, you are generally
adding nodes on different machines so that your cluster is resilient to outages
and avoids data loss. In a production scenario, there are additional
requirements that are not covered in this tutorial. See
{ref}/bootstrap-checks.html#dev-vs-prod-mode[Development vs production mode] and
{ref}/add-elasticsearch-nodes.html[Adding nodes to your cluster].
[float]
[[encrypting-internode-prerequisites]]
=== Before you begin
Ideally, you should do this tutorial after you complete the
{stack-gs}/get-started-elastic-stack.html[Getting started with the {stack}] and
<<security-getting-started,Getting started with security>> tutorials.
At a minimum, you must install and configure {es} and {kib} in a cluster with a
single {es} node. In particular, this tutorial provides instructions for adding
nodes that work with the `zip` and `tar.gz` packages.
IMPORTANT: To complete this tutorial, you must install the default {es} and
{kib} packages, which include the encrypted communications {security-features}.
When you install these products, they apply basic licenses with no expiration
dates. All of the subsequent steps in this tutorial assume that you are using a
basic license. For more information, see {subscriptions} and
<<license-management>>.
include::tutorial-tls-certificates.asciidoc[]
include::tutorial-tls-internode.asciidoc[]
include::tutorial-tls-addnodes.asciidoc[]

View File

@ -0,0 +1,778 @@
[role="xpack"]
[[security-troubleshooting]]
== Troubleshooting security
++++
<titleabbrev>Troubleshooting</titleabbrev>
++++
Use the information in this section to troubleshoot common problems and find
answers for frequently asked questions.
* <<security-trb-settings>>
* <<security-trb-roles>>
* <<security-trb-extraargs>>
* <<trouble-shoot-active-directory>>
* <<trb-security-maccurl>>
* <<trb-security-sslhandshake>>
* <<trb-security-ssl>>
* <<trb-security-kerberos>>
* <<trb-security-saml>>
* <<trb-security-internalserver>>
* <<trb-security-setup>>
* <<trb-security-path>>
include::{stack-repo-dir}/help.asciidoc[tag=get-help]
[[security-trb-settings]]
=== Some settings are not returned via the nodes settings API
*Symptoms:*
* When you use the {ref}/cluster-nodes-info.html[nodes info API] to retrieve
settings for a node, some information is missing.
*Resolution:*
This is intentional. Some of the settings are considered to be highly
sensitive: all `ssl` settings, LDAP `bind_dn`, and `bind_password`.
For this reason, we filter these settings and do not expose them via
the nodes info API REST endpoint. You can also define additional
sensitive settings that should be hidden using the
`xpack.security.hide_settings` setting. For example, this snippet
hides the `url` settings of the `ldap1` realm and all settings of the
`ad1` realm.
[source,yaml]
------------------------------------------
xpack.security.hide_settings: xpack.security.authc.realms.ldap1.url,
xpack.security.authc.realms.ad1.*
------------------------------------------
[[security-trb-roles]]
=== Authorization exceptions
*Symptoms:*
* I configured the appropriate roles and the users, but I still get an
authorization exception.
* I can authenticate to LDAP, but I still get an authorization exception.
*Resolution:*
. Verify that the role names associated with the users match the roles defined
in the `roles.yml` file. You can use the `elasticsearch-users` tool to list all
the users. Any unknown roles are marked with `*`.
+
--
[source,shell]
------------------------------------------
bin/elasticsearch-users list
rdeniro : admin
alpacino : power_user
jacknich : monitoring,unknown_role* <1>
------------------------------------------
<1> `unknown_role` was not found in `roles.yml`
For more information about this command, see the
{ref}/users-command.html[`elasticsearch-users` command].
--
. If you are authenticating to LDAP, a number of configuration options can cause
this error.
+
--
|======================
|_group identification_ |
Groups are located by either an LDAP search or by the "memberOf" attribute on
the user. Also, if subtree search is turned off, it will search only one
level deep. See the <<ldap-settings, LDAP Settings>> for all the options.
There are many options here and sticking to the defaults will not work for all
scenarios.
| _group to role mapping_|
Either the `role_mapping.yml` file or the location for this file could be
misconfigured. For more information, see {ref}/security-files.html[Security files].
|_role definition_|
The role definition might be missing or invalid.
|======================
To help track down these possibilities, add the following lines to the end of
the `log4j2.properties` configuration file in the `ES_PATH_CONF`:
[source,properties]
----------------
logger.authc.name = org.elasticsearch.xpack.security.authc
logger.authc.level = DEBUG
----------------
A successful authentication should produce debug statements that list groups and
role mappings.
--
[[security-trb-extraargs]]
=== Users command fails due to extra arguments
*Symptoms:*
* The `elasticsearch-users` command fails with the following message:
`ERROR: extra arguments [...] were provided`.
*Resolution:*
This error occurs when the `elasticsearch-users` tool is parsing the input and
finds unexpected arguments. This can happen when there are special characters
used in some of the arguments. For example, on Windows systems the `,` character
is considered a parameter separator; in other words `-r role1,role2` is
translated to `-r role1 role2` and the `elasticsearch-users` tool only
recognizes `role1` as an expected parameter. The solution here is to quote the
parameter: `-r "role1,role2"`.
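For example, a hypothetical invocation with a quoted role list (the username,
password, and role names are placeholders) looks like this:

[source,shell]
------------------------------------------
bin/elasticsearch-users useradd jacknich -p s3cr3t-pass -r "monitoring,kibana_user"
------------------------------------------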
For more information about this command, see
{ref}/users-command.html[`elasticsearch-users` command].
[[trouble-shoot-active-directory]]
=== Users are frequently locked out of Active Directory
*Symptoms:*
* Certain users are being frequently locked out of Active Directory.
*Resolution:*
Check your realm configuration; realms are checked serially, one after another.
If your Active Directory realm is being checked before other realms and there
are usernames that appear in both Active Directory and another realm, a valid
login for one realm might be causing failed login attempts in another realm.
For example, if `UserA` exists in both Active Directory and a file realm, and
the Active Directory realm is checked first and file is checked second, an
attempt to authenticate as `UserA` in the file realm would first attempt to
authenticate against Active Directory and fail, before successfully
authenticating against the `file` realm. Because authentication is verified on
each request, the Active Directory realm would be checked - and fail - on each
request for `UserA` in the `file` realm. In this case, while the authentication
request completed successfully, the account on Active Directory would have
received several failed login attempts, and that account might become
temporarily locked out. Plan the order of your realms accordingly.
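As a sketch, realm order can be controlled in `elasticsearch.yml` with the
`order` setting; the 7.x realm syntax and the realm names `file1` and `ad1`
below are assumptions for illustration:

[source,yaml]
----------------
xpack.security.authc.realms.file.file1.order: 0
xpack.security.authc.realms.active_directory.ad1.order: 1
----------------

With the `file` realm checked first, users that exist only in the file realm no
longer generate failed login attempts against Active Directory.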
Also note that it is not typically necessary to define multiple Active Directory
realms to handle domain controller failures. When using Microsoft DNS, the DNS
entry for the domain should always point to an available domain controller.
[[trb-security-maccurl]]
=== Certificate verification fails for curl on Mac
*Symptoms:*
* `curl` on the Mac returns a certificate verification error even when the
`--cacert` option is used.
*Resolution:*
Apple's integration of `curl` with their keychain technology disables the
`--cacert` option.
See http://curl.haxx.se/mail/archive-2013-10/0036.html for more information.
You can use another tool, such as `wget`, to test certificates. Alternatively, you
can add the certificate for the signing certificate authority to the macOS system
keychain, using a procedure similar to the one detailed at the
http://support.apple.com/kb/PH14003[Apple knowledge base]. Be sure to add the
signing CA's certificate and not the server's certificate.
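For example, a hypothetical `wget` check (the CA file path and URL are
placeholders) looks like this; if the certificate is trusted, the TLS handshake
succeeds even if the request is then rejected with an authentication error:

[source,shell]
------------------------------------------
wget --ca-certificate=/path/to/ca.pem https://localhost:9200/ -O -
------------------------------------------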
[[trb-security-sslhandshake]]
=== SSLHandshakeException causes connections to fail
*Symptoms:*
* A `SSLHandshakeException` causes a connection to a node to fail and indicates
that there is a configuration issue. Some of the common exceptions are shown
below with tips on how to resolve these issues.
*Resolution:*
`java.security.cert.CertificateException: No name matching node01.example.com found`::
+
--
Indicates that a client connection was made to `node01.example.com` but the
certificate returned did not contain the name `node01.example.com`. In most
cases, the issue can be resolved by ensuring the name is specified during
certificate creation. For more information, see <<ssl-tls>>. Another scenario is
when the environment does not wish to use DNS names in certificates at all. In
this scenario, all settings in `elasticsearch.yml` should only use IP addresses
including the `network.publish_host` setting.
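For the first case, a sketch of regenerating a node certificate with the
correct names via `elasticsearch-certutil` follows; the CA file, node name, and
addresses are placeholders:

[source,shell]
------------------------------------------
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 \
  --name node01 --dns node01.example.com --ip 10.0.0.1 \
  --out node01.p12
------------------------------------------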
--
`java.security.cert.CertificateException: No subject alternative names present`::
+
--
Indicates that a client connection was made to an IP address but the returned
certificate did not contain any `SubjectAlternativeName` entries. IP addresses
are only used for hostname verification if they are specified as a
`SubjectAlternativeName` during certificate creation. If the intent was to use
IP addresses for hostname verification, then the certificate will need to be
regenerated with the appropriate IP address. See <<ssl-tls>>.
--
`javax.net.ssl.SSLHandshakeException: null cert chain` and `javax.net.ssl.SSLException: Received fatal alert: bad_certificate`::
+
--
The `SSLHandshakeException` indicates that a self-signed certificate was
returned by the client that is not trusted as it cannot be found in the
`truststore` or `keystore`. This `SSLException` is seen on the client side of
the connection.
--
`sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target` and `javax.net.ssl.SSLException: Received fatal alert: certificate_unknown`::
+
--
This `SunCertPathBuilderException` indicates that a certificate was returned
during the handshake that is not trusted. This message is seen on the client
side of the connection. The `SSLException` is seen on the server side of the
connection. The CA certificate that signed the returned certificate was not
found in the `keystore` or `truststore` and needs to be added to trust this
certificate.
--
`javax.net.ssl.SSLHandshakeException: Invalid ECDH ServerKeyExchange signature`::
+
--
The `Invalid ECDH ServerKeyExchange signature` can indicate that a key and a corresponding certificate don't match and are
causing the handshake to fail.
Verify the contents of each of the files you are using for your configured certificate authorities, certificates and keys. In particular, check that the key and certificate belong to the same key pair.
--
[[trb-security-ssl]]
=== Common SSL/TLS exceptions
*Symptoms:*
* You might see some exceptions related to SSL/TLS in your logs. Some of the
common exceptions are shown below with tips on how to resolve these issues. +
*Resolution:*
`WARN: received plaintext http traffic on a https channel, closing connection`::
+
--
Indicates that there was an incoming plaintext http request. This typically
occurs when an external application attempts to make an unencrypted call to the
REST interface. Please ensure that all applications are using `https` when
calling the REST interface with SSL enabled.
--
`org.elasticsearch.common.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:`::
+
--
Indicates that there was incoming plaintext traffic on an SSL connection. This
typically occurs when a node is not configured to use encrypted communication
and tries to connect to nodes that are using encrypted communication. Please
verify that all nodes are using the same setting for
`xpack.security.transport.ssl.enabled`.
For more information about this setting, see
{ref}/security-settings.html[Security Settings in {es}].
--
`java.io.StreamCorruptedException: invalid internal transport message format, got`::
+
--
Indicates an issue with data received on the transport interface in an unknown
format. This can happen when a node with encrypted communication enabled
connects to a node that has encrypted communication disabled. Please verify that
all nodes are using the same setting for `xpack.security.transport.ssl.enabled`.
For more information about this setting, see
{ref}/security-settings.html[Security Settings in {es}].
--
`java.lang.IllegalArgumentException: empty text`::
+
--
This exception is typically seen when an `https` request is made to a node that
is not using `https`. If `https` is desired, please ensure the following setting
is in `elasticsearch.yml`:
[source,yaml]
----------------
xpack.security.http.ssl.enabled: true
----------------
For more information about this setting, see
{ref}/security-settings.html[Security Settings in {es}].
--
`ERROR: unsupported ciphers [...] were requested but cannot be used in this JVM`::
+
--
This error occurs when an SSL/TLS cipher suite is specified that is not supported
by the JVM that {es} is running in. Security tries to use the specified cipher
suites that are supported by this JVM. This error can occur when using the
Security defaults as some distributions of OpenJDK do not enable the PKCS11
provider by default. In this case, we recommend consulting your JVM
documentation for details on how to enable the PKCS11 provider.
Another common source of this error is requesting cipher suites that use
encryption with a key length greater than 128 bits when running on an Oracle JDK.
In this case, you must install the
<<ciphers, JCE Unlimited Strength Jurisdiction Policy Files>>.
--
[[trb-security-kerberos]]
=== Common Kerberos exceptions
*Symptoms:*
* User authentication fails due to either GSS negotiation failure
or a service login failure (either on the server or in the {es} http client).
Some of the common exceptions are listed below with some tips to help resolve
them.
*Resolution:*
`Failure unspecified at GSS-API level (Mechanism level: Checksum failed)`::
+
--
When you see this error message on the HTTP client side, it may be
related to an incorrect password.
When you see this error message in the {es} server logs, it may be
related to the {es} service keytab. The keytab file is present but it failed
to log in as the user. Please check the keytab expiry. Also check whether the
keytab contains up-to-date credentials; if not, replace them.
You can use tools like `klist` or `ktab` to list principals inside
the keytab and validate them. You can use `kinit` to see if you can acquire
initial tickets using the keytab. Please check the tools and their documentation
in your Kerberos environment.
Kerberos depends on proper hostname resolution, so please check your DNS infrastructure.
Incorrect DNS setup, DNS SRV records or configuration for KDC servers in `krb5.conf`
can cause problems with hostname resolution.
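For example, hypothetical checks with the standard Kerberos tools (the keytab
path and principal are placeholders) look like this:

[source,shell]
------------------------------------------
# List the principals and key timestamps stored in the keytab
klist -k -t /etc/elasticsearch/es.keytab
# Try to acquire an initial ticket using a key from the keytab
kinit -k -t /etc/elasticsearch/es.keytab HTTP/es.example.com@EXAMPLE.COM
------------------------------------------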
--
`Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))`::
`Failure unspecified at GSS-API level (Mechanism level: Clock skew too great (37))`::
+
--
To prevent replay attacks, Kerberos V5 sets a maximum tolerance for computer
clock synchronization and it is typically 5 minutes. Please check whether
the time on the machines within the domain is in sync.
--
`gss_init_sec_context() failed: An unsupported mechanism was requested`::
`No credential found for: 1.2.840.113554.1.2.2 usage: Accept`::
+
--
You would usually see this error message on the client side when using `curl` to
test {es} Kerberos setup. For example, these messages occur when you are using
an old version of curl on the client and therefore Kerberos SPNEGO support is missing.
The Kerberos realm in {es} only supports the SPNEGO mechanism (OID 1.3.6.1.5.5.2);
it does not yet support the Kerberos mechanism (OID 1.2.840.113554.1.2.2).
Make sure that:
* You have installed curl version 7.49 or above as older versions of curl have
known Kerberos bugs.
* The curl installed on your machine has `GSS-API`, `Kerberos` and `SPNEGO`
features listed when you invoke the command `curl -V`. If not, you need to
compile a `curl` version with this support.
To download the latest curl version, visit https://curl.haxx.se/download.html
--
Kerberos logs are often cryptic, and many things can go wrong because Kerberos
depends on external services like DNS and NTP. You might have to enable
additional debug logs to determine the root cause of the issue.
{es} uses a JAAS (Java Authentication and Authorization Service) Kerberos login
module to provide Kerberos support. To enable debug logs on {es} for the login
module, use the following Kerberos realm setting:
[source,yaml]
----------------
xpack.security.authc.realms.<realm-name>.krb.debug: true
----------------
For detailed information, see {ref}/security-settings.html#ref-kerberos-settings[Kerberos realm settings].
Sometimes you may need to go deeper to understand the problem during SPNEGO
GSS context negotiation or look at the Kerberos message exchange. To enable
Kerberos/SPNEGO debug logging on the JVM, add the following JVM system properties:
`-Dsun.security.krb5.debug=true`
`-Dsun.security.spnego.debug=true`
For more information about JVM system properties, see {ref}/jvm-options.html[configuring JVM options].
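For example, one way to set them, sketched here, is to append the flags to the
`jvm.options` file in the config directory (alternatively, you can pass them
through the `ES_JAVA_OPTS` environment variable):

[source,sh]
----------------
# Added to ES_PATH_CONF/jvm.options
-Dsun.security.krb5.debug=true
-Dsun.security.spnego.debug=true
----------------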
[[trb-security-saml]]
=== Common SAML issues
Some of the common SAML problems are shown below with tips on how to resolve
these issues.
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the {es}
logs:
....
Cannot find any matching realm for [SamlPrepareAuthenticationRequest{realmName=saml1,
assertionConsumerServiceURL=https://my.kibana.url/api/security/v1/saml}]
....
*Resolution:*
In order to initiate a SAML authentication, {kib} needs to know which SAML realm
it should use from the ones that are configured in {es}. You can use the
`xpack.security.authc.saml.realm` setting to explicitly set the SAML realm name
in {kib}. It must match the name of the SAML realm that is configured in {es}.
If you get an error like the one above, it possibly means that the value of
`xpack.security.authc.saml.realm` in your {kib} configuration is wrong. Verify
that it matches the name of the configured realm in {es}, which is the string
after `xpack.security.authc.realms.saml.` in your {es} configuration.
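For example, if the realm is configured in {es} under
`xpack.security.authc.realms.saml.saml1`, the corresponding `kibana.yml` entry
is sketched below (`saml1` is a placeholder realm name):

[source,yaml]
----------------
xpack.security.authc.saml.realm: "saml1"
----------------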
--
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the
{es} logs:
....
Authentication to realm saml1 failed - Provided SAML response is not valid for realm
saml/saml1 (Caused by ElasticsearchSecurityException[Conditions [https://some-url-here...]
do not match required audience [https://my.kibana.url]])
....
*Resolution:*
We received a SAML response that is addressed to another SAML Service Provider.
This usually means that the configured SAML Service Provider Entity ID in
`elasticsearch.yml` (`sp.entity_id`) does not match what has been configured as
the SAML Service Provider Entity ID in the SAML Identity Provider documentation.
To resolve this issue, ensure that both the saml realm in {es} and the IdP are
configured with the same string for the SAML Entity ID of the Service Provider.
TIP: These strings are compared as case-sensitive strings and not as
canonicalized URLs even when the values are URL-like. Be mindful of trailing
slashes, port numbers, etc.
--
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the
{es} logs:
....
Cannot find metadata for entity [your:entity.id] in [metadata.xml]
....
*Resolution:*
We could not find the metadata for the SAML Entity ID `your:entity.id` in the
configured metadata file (`metadata.xml`).
.. Ensure that the `metadata.xml` file you are using is indeed the one provided
by your SAML Identity Provider.
.. Ensure that the `metadata.xml` file contains one `<EntityDescriptor>` element
as follows: `<EntityDescriptor ID="0597c9aa-e69b-46e7-a1c6-636c7b8a8070" entityID="https://saml.example.com/f174199a-a96e-4201-88f1-0d57a610c522/" ...`
where the value of the `entityID` attribute is the same as the value of the
`idp.entity_id` that you have set in your SAML realm configuration in
`elasticsearch.yml`.
.. Note that these are also compared as case-sensitive strings and not as
canonicalized URLs even when the values are URL-like.
--
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the {es}
logs:
....
unable to authenticate user [<unauthenticated-saml-user>]
for action [cluster:admin/xpack/security/saml/authenticate]
....
*Resolution:*
This error indicates that {es} failed to process the incoming SAML
authentication message. Since the message can't be processed, {es} is not aware
of who the to-be authenticated user is and the `<unauthenticated-saml-user>`
placeholder is used instead. To diagnose the _actual_ problem, you must check
the {es} logs for further details.
--
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the
{es} logs:
....
Authentication to realm my-saml-realm failed -
Provided SAML response is not valid for realm saml/my-saml-realm
(Caused by ElasticsearchSecurityException[SAML Response is not a 'success' response:
The SAML IdP did not grant the request. It indicated that the Elastic Stack side sent
something invalid (urn:oasis:names:tc:SAML:2.0:status:Requester). Specific status code which might
indicate what the issue is: [urn:oasis:names:tc:SAML:2.0:status:InvalidNameIDPolicy]]
)
....
*Resolution:*
This means that the SAML Identity Provider failed to authenticate the user and
sent a SAML Response to the Service Provider ({stack}) indicating this failure.
The message will convey whether the SAML Identity Provider thinks that the problem
is with the Service Provider ({stack}) or with the Identity Provider itself, and
the specific status code that follows is extremely useful, as it usually indicates
the underlying issue. The list of specific error codes is defined in the
https://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf[SAML 2.0 Core specification - Section 3.2.2.2]
and the most commonly encountered ones are:
. `urn:oasis:names:tc:SAML:2.0:status:AuthnFailed`: The SAML Identity Provider failed to
authenticate the user. There is not much to troubleshoot on the {stack} side for this status; the logs of
the SAML Identity Provider will hopefully offer much more information.
. `urn:oasis:names:tc:SAML:2.0:status:InvalidNameIDPolicy`: The SAML Identity Provider cannot support
releasing a NameID with the requested format. When creating SAML Authentication Requests, {es} sets
the NameIDPolicy element of the Authentication request with the appropriate value. This is controlled
by the {ref}/security-settings.html#ref-saml-settings[`nameid_format`] configuration parameter in
`elasticsearch.yml`, which if not set defaults to `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`.
This instructs the Identity Provider to return a NameID with that specific format in the SAML Response. If
the SAML Identity Provider cannot grant that request, for example because it is configured to release a
NameID format with `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` format instead, it returns this error
indicating an invalid NameID policy. This issue can be resolved by adjusting `nameid_format` to match the format
the SAML Identity Provider can return or by setting it to `urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified`
so that the Identity Provider is allowed to return any format it wants.
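For example, a hypothetical realm configuration in `elasticsearch.yml` that
requests the persistent format (the realm name `saml1` is a placeholder) looks
like this:

[source,yaml]
----------------
xpack.security.authc.realms.saml.saml1.nameid_format: urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
----------------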
--
. *Symptoms:*
+
--
Authentication in {kib} fails and the following error is printed in the
{es} logs:
....
The XML Signature of this SAML message cannot be validated. Please verify that the saml
realm uses the correct SAML metadata file/URL for this Identity Provider
....
*Resolution:*
This means that {es} failed to validate the digital signature of the SAML
message that the Identity Provider sent. {es} uses the public key of the
Identity Provider that is included in the SAML metadata, in order to validate
the signature that the IdP has created using its corresponding private key.
Failure to do so can have a number of causes:
.. As the error message indicates, the most common cause is that the wrong
metadata file is used, so the public key it contains doesn't correspond
to the private key the Identity Provider uses.
.. The configuration of the Identity Provider has changed or the key has been
rotated and the metadata file that {es} is using has not been updated.
.. The SAML Response has been altered in transit and the signature cannot be
validated even though the correct key is used.
NOTE: The private keys, public keys, and self-signed X.509 certificates that
are used in SAML for digital signatures as described above have no relation to
the keys and certificates that are used for TLS either on the transport or the
http layer. A failure such as the one described above has nothing to do with
your `xpack.ssl` related configuration.
--
. *Symptoms:*
+
--
Users are unable to login with a local username and password in {kib} because
SAML is enabled.
*Resolution:*
If you want your users to be able to use local credentials to authenticate to
{kib} in addition to using the SAML realm for Single Sign-On, you must enable
the `basic` `authProvider` in {kib}. The process is documented in the
<<saml-kibana-basic, SAML Guide>>.
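A minimal sketch of the corresponding `kibana.yml` entry, assuming this {kib}
version uses the `authProviders` setting, looks like this:

[source,yaml]
----------------
xpack.security.authProviders: [saml, basic]
----------------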
--
*Logging:*
Very detailed trace logging can be enabled specifically for the SAML realm by
setting the following transient setting:
[source,console]
-----------------------------------------------
PUT /_cluster/settings
{
"transient": {
"logger.org.elasticsearch.xpack.security.authc.saml": "trace"
}
}
-----------------------------------------------
Alternatively, you can add the following lines to the end of the
`log4j2.properties` configuration file in the `ES_PATH_CONF`:
[source,properties]
----------------
logger.saml.name = org.elasticsearch.xpack.security.authc.saml
logger.saml.level = TRACE
----------------
[[trb-security-internalserver]]
=== Internal Server Error in Kibana
*Symptoms:*
* In 5.1.1, an `UnhandledPromiseRejectionWarning` occurs and {kib} displays an
Internal Server Error.
//TBD: Is the same true for later releases?
*Resolution:*
If the Security plugin is enabled in {es} but disabled in {kib}, you must
still set `elasticsearch.username` and `elasticsearch.password` in `kibana.yml`.
Otherwise, {kib} cannot connect to {es}.
[[trb-security-setup]]
=== Setup-passwords command fails due to connection failure
The {ref}/setup-passwords.html[elasticsearch-setup-passwords command] sets
passwords for the built-in users by sending user management API requests. If
your cluster uses SSL/TLS for the HTTP (REST) interface, the command attempts to
establish a connection with the HTTPS protocol. If the connection attempt fails,
the command fails.
*Symptoms:*
. {es} is running HTTPS, but the command fails to detect it and returns the
following errors:
+
--
[source,shell]
------------------------------------------
Cannot connect to elasticsearch node.
java.net.SocketException: Unexpected end of file from server
...
ERROR: Failed to connect to elasticsearch at
http://127.0.0.1:9200/_security/_authenticate?pretty.
Is the URL correct and elasticsearch running?
------------------------------------------
--
. SSL/TLS is configured, but trust cannot be established. The command returns
the following errors:
+
--
[source,shell]
------------------------------------------
SSL connection to
https://127.0.0.1:9200/_security/_authenticate?pretty
failed: sun.security.validator.ValidatorException:
PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
Please check the elasticsearch SSL settings under
xpack.security.http.ssl.
...
ERROR: Failed to establish SSL connection to elasticsearch at
https://127.0.0.1:9200/_security/_authenticate?pretty.
------------------------------------------
--
. The command fails because hostname verification fails, which results in the
following errors:
+
--
[source,shell]
------------------------------------------
SSL connection to
https://idp.localhost.test:9200/_security/_authenticate?pretty
failed: java.security.cert.CertificateException:
No subject alternative DNS name matching
elasticsearch.example.com found.
Please check the elasticsearch SSL settings under
xpack.security.http.ssl.
...
ERROR: Failed to establish SSL connection to elasticsearch at
https://elasticsearch.example.com:9200/_security/_authenticate?pretty.
------------------------------------------
--
*Resolution:*
. If your cluster uses TLS/SSL for the HTTP interface but the
`elasticsearch-setup-passwords` command attempts to establish a non-secure
connection, use the `--url` command option to explicitly specify an HTTPS URL
(see the sketch after this list). Alternatively, set the
`xpack.security.http.ssl.enabled` setting to `true`.
. If the command does not trust the {es} server, verify that you configured the
`xpack.security.http.ssl.certificate_authorities` setting or the
`xpack.security.http.ssl.truststore.path` setting.
. If hostname verification fails, you can disable this verification by setting
`xpack.security.http.ssl.verification_mode` to `certificate`.
For more information about these settings, see
{ref}/security-settings.html[Security Settings in {es}].
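For example, a hypothetical invocation that forces an HTTPS URL (the host name
is a placeholder) looks like this:

[source,shell]
------------------------------------------
bin/elasticsearch-setup-passwords auto --url "https://elasticsearch.example.com:9200"
------------------------------------------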
[[trb-security-path]]
=== Failures due to relocation of the configuration files
*Symptoms:*
* Active Directory or LDAP realms might stop working after upgrading to {es} 6.3
or later releases. In 6.4 or later releases, you might see messages in the {es}
log that indicate a config file is in a deprecated location.
*Resolution:*
By default, in 6.2 and earlier releases, the security configuration files are
located in the `ES_PATH_CONF/x-pack` directory, where `ES_PATH_CONF` is an
environment variable that defines the location of the
{ref}/settings.html#config-files-location[config directory].
In 6.3 and later releases, the config directory no longer contains an `x-pack`
directory. The files that were stored in this folder, such as the
`log4j2.properties`, `role_mapping.yml`, `roles.yml`, `users`, and `users_roles`
files, now exist directly in the config directory.
IMPORTANT: If you upgraded to 6.3 or later releases, your old security
configuration files still exist in an `x-pack` folder. That file path is
deprecated, however, and you should move your files out of that folder.
In 6.3 and later releases, settings such as `files.role_mapping` default to
`ES_PATH_CONF/role_mapping.yml`. If you do not want to use the default locations,
you must update the settings appropriately. See
{ref}/security-settings.html[Security settings in {es}].
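For example, a hypothetical override in `elasticsearch.yml`, using the 7.x
realm syntax with a placeholder realm name and path, looks like this:

[source,yaml]
----------------
xpack.security.authc.realms.ldap.ldap1.files.role_mapping: /home/es/config/role_mapping.yml
----------------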