Merge branch 'master' into feature/sql

Original commit: elastic/x-pack-elasticsearch@c25c179ce6
This commit is contained in:
Nik Everett 2017-09-18 12:32:46 -04:00
commit 52ee02da27
110 changed files with 1578 additions and 1298 deletions

View File

@ -0,0 +1,273 @@
[role="xpack"]
[[certgen]]
== certgen
The `certgen` command simplifies the creation of certificate authorities (CA),
certificate signing requests (CSR), and signed certificates for use with the
Elastic Stack.
[float]
=== Synopsis
[source,shell]
--------------------------------------------------
bin/x-pack/certgen
(([--cert <cert_file>] [--days <n>] [--dn <name>] [--key <key_file>]
[--keysize <bits>] [--pass <password>] [--p12 <password>])
| [--csr])
[-E <KeyValuePair>] [-h, --help] [--in <input_file>] [--out <output_file>]
([-s, --silent] | [-v, --verbose])
--------------------------------------------------
[float]
=== Description
By default, the command runs in interactive mode and you are prompted for
information about each instance. An instance is any piece of the Elastic Stack
that requires a Transport Layer Security (TLS) or SSL certificate. Depending on
your configuration, {es}, Logstash, {kib}, and Beats might all require a
certificate and private key.
The minimum required value for each instance is a name. This can simply be the
hostname, which is used as the Common Name of the certificate. You can also use
a full distinguished name. IP addresses and DNS names are optional. Multiple
values can be specified as a comma-separated string. If no IP addresses or DNS
names are provided, you might need to disable hostname verification in your TLS
or SSL configuration.
Depending on the parameters that you specify, you are also prompted for
necessary information such as the path for the output file and the CA private
key password.
The `certgen` command also supports a silent mode of operation to enable easier
batch operations. For more information, see <<certgen-silent>>.
The output file is a zip file that contains the signed certificates and private
keys for each instance. If you choose to generate a CA, which is the default
behavior, the certificate and private key are included in the output file. If
you choose to generate CSRs, you should provide them to your commercial or
organization-specific certificate authority to obtain signed certificates. The
signed certificates must be in PEM format to work with {security}.
[float]
=== Parameters
`--cert <cert_file>`:: Generates new instance certificates and keys by using an
existing CA certificate, which is provided in the `<cert_file>` argument.
This parameter cannot be used with the `--csr` parameter.
`--csr`:: Specifies to operate in certificate signing request mode.
`--days <n>`::
Specifies an integer value that represents the number of days the generated keys
are valid. The default value is `1095`. This parameter cannot be used with the
`--csr` parameter.
`--dn <name>`::
Defines the _Distinguished Name_ that is used for the generated CA certificate.
The default value is `CN=Elastic Certificate Tool Autogenerated CA`.
This parameter cannot be used with the `--csr` parameter.
`-E <KeyValuePair>`:: Configures a setting.
`-h, --help`:: Returns all of the command parameters.
`--in <input_file>`:: Specifies the file that is used to run in silent mode. The
input file must be a YAML file, as described in <<certgen-silent>>.
`--key <key_file>`:: Specifies the _private-key_ file for the CA certificate.
This parameter is required whenever the `--cert` parameter is used.
`--keysize <bits>`::
Defines the number of bits that are used in generated RSA keys. The default
value is `2048`.
`--out <output_file>`:: Specifies a path for the output file.
`--pass <password>`:: Specifies the password for the CA private key.
If the `--key` parameter is provided, then this is the password for the existing
private key file. Otherwise, it is the password that should be applied to the
generated CA key. This parameter cannot be used with the `--csr` parameter.
`--p12 <password>`::
Generates a PKCS#12 (`.p12` or `.pfx`) container file for each of the instance
certificates and keys. The generated file is protected by the supplied password,
which can be blank. This parameter cannot be used with the `--csr` parameter.
`-s, --silent`:: Shows minimal output.
`-v, --verbose`:: Shows verbose output.
[float]
=== Examples
////
The tool can be used interactively:
[source,shell]
--------------------------------------------------
bin/x-pack/certgen
--------------------------------------------------
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL in the Elastic stack. Depending on the command
line option specified, you may be prompted for the following:
* The path to the output file
* The output file is a zip file containing the signed certificates and
private keys for each instance. If a Certificate Authority was generated,
the certificate and private key will also be included in the output file.
* Information about each instance
* An instance is any piece of the Elastic Stack that requires a SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* Certificate Authority private key password
* The password may be left empty if desired.
Let's get started...
Please enter the desired output file [/home/es/config/x-pack/certificate-bundle.zip]:
Enter instance name: node01
Enter name for directories and files [node01]:
Enter IP Addresses for instance (comma-separated if more than one) []: 10.10.0.1
Enter DNS names for instance (comma-separated if more than one) []: node01.mydomain.com,node01
Would you like to specify another instance? Press 'y' to continue entering instance information: y
Enter instance name: node02
Enter name for directories and files [node02]:
Enter IP Addresses for instance (comma-separated if more than one) []: 10.10.0.2
Enter DNS names for instance (comma-separated if more than one) []: node02.mydomain.com
Would you like to specify another instance? Press 'y' to continue entering instance information:
Certificates written to /home/es/config/x-pack/certificate-bundle.zip
This file should be properly secured as it contains the private keys for all
instances and the certificate authority.
After unzipping the file, there will be a directory for each instance containing
the certificate and private key. Copy the certificate, key, and CA certificate
to the configuration directory of the Elastic product that they will be used for
and follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
....
--------------------------------------------------
In this example, the command generates a zip file with the CA certificate,
private key, two signed certificates and keys in PEM format for `node01` and
`node02`.
////
////
When using a commercial or organization specific CA, the `certgen` tool can be
used to generate certificate signing requests (CSR) for the nodes in your
cluster:
[source,shell]
--------------------------------------------------
....
bin/x-pack/certgen -csr
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL in the Elastic stack. Depending on the command
line option specified, you may be prompted for the following:
* The path to the output file
* The output file is a zip file containing the certificate signing requests
and private keys for each instance.
* Information about each instance
* An instance is any piece of the Elastic Stack that requires a SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
Let's get started...
Please enter the desired output file [/home/es/config/x-pack/csr-bundle.zip]:
Enter instance name: node01
Enter name for directories and files [node01]:
Enter IP Addresses for instance (comma-separated if more than one) []: 10.10.0.1
Enter DNS names for instance (comma-separated if more than one) []: node01.mydomain.com,node01
Would you like to specify another instance? Press 'y' to continue entering instance information: y
Enter instance name: node02
Enter name for directories and files [node02]:
Enter IP Addresses for instance (comma-separated if more than one) []: 10.10.0.2
Enter DNS names for instance (comma-separated if more than one) []: node02.mydomain.com
Would you like to specify another instance? Press 'y' to continue entering instance information:
Certificate signing requests written to /Users/jmodi/dev/tmp/elasticsearch-5.0.0-alpha5-SNAPSHOT/config/x-pack/csr-bundle.zip
This file should be properly secured as it contains the private keys for all
instances.
After unzipping the file, there will be a directory for each instance containing
the certificate signing request and the private key. Provide the certificate
signing requests to your certificate authority. Once you have received the
signed certificate, copy the signed certificate, key, and CA certificate to the
configuration directory of the Elastic product that they will be used for and
follow the SSL configuration instructions in the product guide.
....
--------------------------------------------------
In this case, the command generates a zip file with two CSRs and private
keys. The CSRs should be provided to the CA in order to obtain the signed
certificates. The signed certificates will need to be in PEM format in order to
be used.
////
[float]
[[certgen-silent]]
==== Using `certgen` in Silent Mode
To use the silent mode of operation, you must create a YAML file that contains
information about the instances. It must match the following format:
[source, yaml]
--------------------------------------------------
instances:
  - name: "node1" <1>
    ip: <2>
      - "192.0.2.1"
    dns: <3>
      - "node1.mydomain.com"
  - name: "node2"
    ip:
      - "192.0.2.2"
      - "198.51.100.1"
  - name: "node3"
  - name: "node4"
    dns:
      - "node4.mydomain.com"
      - "node4.internal"
  - name: "CN=node5,OU=IT,DC=mydomain,DC=com"
    filename: "node5" <4>
--------------------------------------------------
<1> The name of the instance. This can be a simple string value or can be a
Distinguished Name (DN). This is the only required field.
<2> An optional array of strings that represent IP Addresses for this instance.
Both IPv4 and IPv6 values are allowed. The values are added as Subject
Alternative Names.
<3> An optional array of strings that represent DNS names for this instance.
The values are added as Subject Alternative Names.
<4> The filename to use for this instance. This name is used as the name of the
directory that contains the instance's files in the output. It is also used in
the names of the files within the directory. This filename should not have an
extension. Note: If the `name` provided for the instance does not represent a
valid filename, then the `filename` field must be present.
When your YAML file is ready, you can use the `certgen` command to generate
certificates or certificate signing requests. Simply use the `--in` parameter to
specify the location of the file. For example:
[source, sh]
--------------------------------------------------
bin/x-pack/certgen --in instances.yml
--------------------------------------------------
This command generates a CA certificate and private key as well as certificates
and private keys for the instances that are listed in the YAML file.

View File

@ -7,10 +7,11 @@
{xpack} includes commands that help you configure security:
//* <<certgen>>
* <<certgen>>
//* <<setup-passwords>>
* <<users-command>>
--
include::certgen.asciidoc[]
include::users-command.asciidoc[]

View File

@ -2,9 +2,37 @@
[[watcher-api-ack-watch]]
=== Ack Watch API
{xpack-ref}/actions.html#actions-ack-throttle[Acknowledging a watch] enables you to manually throttle
execution of the watch's actions. An action's _acknowledgement state_ is stored
in the `status.actions.<id>.ack.state` structure.
{xpack-ref}/actions.html#actions-ack-throttle[Acknowledging a watch] enables you
to manually throttle execution of the watch's actions. An action's
_acknowledgement state_ is stored in the `status.actions.<id>.ack.state`
structure.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_ack` +
`PUT _xpack/watcher/watch/<watch_id>/_ack/<action_id>`
[float]
==== Path Parameters
`action_id`::
(list) A comma-separated list of the action IDs to acknowledge. If you omit
this parameter, all of the actions of the watch are acknowledged.
`watch_id` (required)::
(string) Identifier for the watch.
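For example, to acknowledge a single action you append the action ID to the
request path. A minimal sketch, assuming a watch called `my_watch` with an
action called `my_action` already exists:
[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my_watch/_ack/my_action
--------------------------------------------------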
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
To demonstrate, let's create a new watch:

View File

@ -6,6 +6,26 @@ A watch can be either
{xpack-ref}/how-watcher-works.html#watch-active-state[active or inactive]. This
API enables you to activate a currently inactive watch.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_activate`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
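As an illustrative sketch (assuming an inactive watch called `my_watch`
exists), activating it is a single request:
[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my_watch/_activate
--------------------------------------------------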
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The status of an inactive watch is returned with the watch definition when you
call the <<watcher-api-get-watch, Get Watch API>>:

View File

@ -6,6 +6,25 @@ A watch can be either
{xpack-ref}/how-watcher-works.html#watch-active-state[active or inactive]. This
API enables you to deactivate a currently active watch.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_deactivate`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
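As an illustrative sketch (assuming an active watch called `my_watch` exists),
deactivating it is a single request:
[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my_watch/_deactivate
--------------------------------------------------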
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The status of an active watch is returned with the watch definition when you
call the <<watcher-api-get-watch, Get Watch API>>:

View File

@ -2,18 +2,42 @@
[[watcher-api-delete-watch]]
=== Delete Watch API
The DELETE watch API removes a watch (identified by its `id`) from {watcher}.
Once removed, the document representing the watch in the `.watches` index is
gone and it will never be executed again.
The DELETE watch API removes a watch from {watcher}.
[float]
==== Request
`DELETE _xpack/watcher/watch/<watch_id>`
[float]
==== Description
When the watch is removed, the document representing the watch in the `.watches`
index is gone and it will never be run again.
Please note that deleting a watch **does not** delete any watch execution records
related to this watch from the watch history.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the
watch directly from the `.watches` index using Elasticsearch's
watch directly from the `.watches` index using the Elasticsearch
DELETE Document API. When {security} is enabled, make sure no `write`
privileges are granted to anyone over the `.watches` index.
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example deletes a watch with the `my-watch` id:
[source,js]
@ -34,4 +58,3 @@ Response:
}
--------------------------------------------------
// TESTRESPONSE

View File

@ -6,20 +6,45 @@ The execute watch API forces the execution of a stored watch. It can be used to
force execution of the watch outside of its triggering logic, or to simulate the
watch execution for debugging purposes.
The following example executes the `my_watch` watch:
[float]
==== Request
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
--------------------------------------------------
// CONSOLE
// TEST[setup:my_active_watch]
`POST _xpack/watcher/watch/<watch_id>/_execute` +
For testing and debugging purposes, you also have fine-grained control on how the
watch is executed--execute the watch without executing all of its actions or
alternatively by simulating them. You can also force execution by ignoring the
watch condition and control whether a watch record would be written to the watch
history after execution.
`POST _xpack/watcher/watch/_execute`
[float]
==== Description
For testing and debugging purposes, you also have fine-grained control on how
the watch runs. You can execute the watch without executing all of its actions
or alternatively by simulating them. You can also force execution by ignoring
the watch condition and control whether a watch record would be written to the
watch history after execution.
[float]
[[watcher-api-execute-inline-watch]]
===== Inline Watch Execution
You can use the Execute API to execute watches that are not yet registered by
specifying the watch definition inline. This serves as a great tool for testing
and debugging your watches prior to adding them to {watcher}.
[float]
==== Path Parameters
`watch_id`::
(string) Identifier for the watch.
[float]
==== Query Parameters
`debug`::
(boolean) Defines whether the watch runs in debug mode. The default value is
`false`.
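For instance, based on the parameter described above, a debug run might look
like the following sketch (the watch name `my_watch` is a placeholder):
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute?debug=true
--------------------------------------------------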
[float]
==== Request Body
This API supports the following fields:
@ -53,6 +78,58 @@ This API supports the following fields:
not persisted to the index and record_execution cannot be set.
|======
[float]
[[watcher-api-execute-watch-action-mode]]
===== Action Execution Modes
Action modes define how actions are handled during the watch execution. There
are five possible modes an action can be associated with:
[options="header"]
|======
| Name | Description
| `simulate` | The action execution is simulated. Each action type
defines its own simulation operation mode. For example, the
{xpack-ref}/actions-email.html[email] action creates
the email that would have been sent but does not actually
send it. In this mode, the action might be throttled if the
current state of the watch indicates it should be.
| `force_simulate` | Similar to the `simulate` mode, except the action is
not throttled even if the current state of the watch
indicates it should be.
| `execute` | Executes the action as it would have been executed if the
watch had been triggered by its own trigger. The
execution might be throttled if the current state of the
watch indicates it should be.
| `force_execute` | Similar to the `execute` mode, except the action is not
throttled even if the current state of the watch indicates
it should be.
| `skip` | The action is skipped and is not executed or simulated.
Effectively forces the action to be throttled.
|======
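You can associate each action with one of these modes through the
`action_modes` field of the request body. A hedged sketch, assuming a watch
`my_watch` with actions `my-action` and `other-action`:
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
{
  "action_modes" : {
    "my-action" : "force_simulate",
    "other-action" : "skip"
  }
}
--------------------------------------------------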
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example executes the `my_watch` watch:
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
--------------------------------------------------
// CONSOLE
// TEST[setup:my_active_watch]
The following is a more comprehensive example of executing the `my_watch` watch:
[source,js]
@ -77,14 +154,14 @@ POST _xpack/watcher/watch/my_watch/_execute
// TEST[setup:my_active_watch]
<1> The triggered and schedule times are provided.
<2> The input as defined by the watch is ignored and instead the provided input
will be used as the execution payload.
<3> The condition as defined by the watch will be ignored and will be assumed to
is used as the execution payload.
<3> The condition as defined by the watch is ignored and is assumed to
evaluate to `true`.
<4> Forces the simulation of `my-action`. Forcing the simulation means that
throttling is ignored and the watch is simulated by {watcher} instead of
being executed normally.
<5> The execution of the watch will create a watch record in the watch history,
and the throttling state of the watch will potentially be updated accordingly.
<5> The execution of the watch creates a watch record in the watch history,
and the throttling state of the watch is potentially updated accordingly.
This is an example of the output:
@ -192,40 +269,6 @@ This is an example of the output:
<2> The watch record document as it would be stored in the `.watcher-history` index.
<3> The watch execution results.
[[watcher-api-execute-watch-action-mode]]
==== Action Execution Modes
Action modes define how actions are handled during the watch execution. There
are five possible modes an action can be associated with:
[options="header"]
|======
| Name | Description
| `simulate` | The action execution will be simulated. Each action type
define its own simulation operation mode. For example, the
{xpack-ref}/actions-email.html[email] action will create
the email that would have been sent but will not actually
send it. In this mode, the action may be throttled if the
current state of the watch indicates it should be.
| `force_simulate` | Similar to the the `simulate` mode, except the action will
not be throttled even if the current state of the watch
indicates it should be.
| `execute` | Executes the action as it would have been executed if the
watch would have been triggered by its own trigger. The
execution may be throttled if the current state of the
watch indicates it should be.
| `force_execute` | Similar to the `execute` mode, except the action will not
be throttled even if the current state of the watch
indicates it should be.
| `skip` | The action will be skipped and won't be executed nor
simulated. Effectively forcing the action to be throttled.
|======
You can set a different execution mode for every action by associating the mode
name with the action id:
@ -257,14 +300,6 @@ POST _xpack/watcher/watch/my_watch/_execute
// CONSOLE
// TEST[setup:my_active_watch]
[float]
[[watcher-api-execute-inline-watch]]
==== Inline Watch Execution
You can use the Execute API to execute watches that are not yet registered by
specifying the watch definition inline. This serves as great tool for testing
and debugging your watches prior to adding them to {watcher}.
The following example shows how to execute a watch inline:
[source,js]

View File

@ -2,7 +2,28 @@
[[watcher-api-get-watch]]
=== Get Watch API
This API retrieves a watch by its id.
This API retrieves a watch by its ID.
[float]
==== Request
`GET _xpack/watcher/watch/<watch_id>`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` or `monitor_watcher` cluster privileges to use
this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example gets a watch with the `my-watch` ID:

View File

@ -3,16 +3,80 @@
=== Put Watch API
The PUT watch API either registers a new watch in {watcher} or updates an
existing one. Once registered, a new document will be added to the `.watches`
index, representing the watch, and its trigger will immediately be registered
with the relevant trigger engine (typically the scheduler, for the `schedule`
trigger).
existing one.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>`
[float]
==== Description
When a watch is registered, a new document that represents the watch is added to
the `.watches` index and its trigger is immediately registered with the relevant
trigger engine. Typically for the `schedule` trigger, the scheduler is the
trigger engine.
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch
directly to the `.watches` index using Elasticsearch's Index API.
directly to the `.watches` index using the Elasticsearch Index API.
If {security} is enabled, make sure no `write` privileges are
granted to anyone over the `.watches` index.
When adding a watch you can also define its initial
{xpack-ref}/how-watcher-works.html#watch-active-state[active state]. You do that
by setting the `active` parameter.
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Query Parameters
`active`::
(boolean) Defines whether the watch is active when it is first added. The
default value is `true`.
[float]
==== Request Body
A watch has the following fields:
[options="header"]
|======
| Name | Description
| `trigger` | The {xpack-ref}/trigger.html[trigger] that defines when
the watch should run.
| `input` | The {xpack-ref}/input.html[input] that defines the input
that loads the data for the watch.
| `condition` | The {xpack-ref}/condition.html[condition] that defines if
the actions should be run.
| `actions` | The list of {xpack-ref}/actions.html[actions] that are
run if the condition matches.
| `metadata` | Metadata JSON that is copied into the history entries.
| `throttle_period` | The minimum time between actions being run. The default
is 5 seconds. This default can be changed in the
config file with the setting `xpack.watcher.throttle.period.default_period`.
|======
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example adds a watch with the `my-watch` id that has the following
characteristics:
@ -72,39 +136,10 @@ PUT _xpack/watcher/watch/my-watch
--------------------------------------------------
// CONSOLE
A watch has the following fields:
[options="header"]
|======
| Name | Description
| `trigger` | The {xpack-ref}/trigger.html[trigger] that defines when
the watch should run.
| `input` | The {xpack-ref}/input.html[input] that defines the input
that loads the data for the watch.
| `condition` | The {xpack-ref}/condition.html[condition] that defines if
the actions should be run.
| `actions` | The list of {xpack-ref}/actions.html[actions] that will be
run if the condition matches
| `metadata` | Metadata json that will be copied into the history entries.
| `throttle_period` | The minimum time between actions being run, the default
for this is 5 seconds. This default can be changed in the
config file with the setting `xpack.watcher.throttle.period.default_period`.
|======
[float]
[[watcher-api-put-watch-active-state]]
==== Controlling Default Active State
When adding a watch you can also define its initial
When you add a watch you can also define its initial
{xpack-ref}/how-watcher-works.html#watch-active-state[active state]. You do that
by setting the `active` parameter. The following command add a watch and sets it
to be inactive by default:
by setting the `active` parameter. The following command adds a watch and sets
it to be inactive by default:
[source,js]
--------------------------------------------------

View File

@ -3,7 +3,20 @@
=== Start API
The `start` API starts the {watcher} service if the service is not already
running, as in the following example:
running.
[float]
==== Request
`POST _xpack/watcher/_start`
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
[source,js]
--------------------------------------------------

View File

@ -2,22 +2,70 @@
[[watcher-api-stats]]
=== Stats API
The `stats` API returns the current {watcher} metrics. You can control what
metrics this API returns using the `metric` parameter.
The `stats` API returns the current {watcher} metrics.
The supported metrics are:
[float]
==== Request
[options="header"]
|======
| Metric | Description
| `executing_watches` | Include the current executing watches in the response.
| `queued_watches` | Include the watches queued for execution in the response.
| `_all` | Include all metrics in the response.
|======
`GET _xpack/watcher/stats` +
The {watcher} `stats` API always returns basic metrics regardless of the
`metric` option. The following example calls the `stats` API including the
basic metrics:
`GET _xpack/watcher/stats/<metric>`
[float]
==== Description
This API always returns basic metrics. You can retrieve additional metrics by
using the `metric` parameter.
[float]
===== Current executing watches metric
The current executing watches metric gives insight into the watches that are
currently being executed by {watcher}. Additional information is shared per
watch that is currently executing. This information includes the `watch_id`,
the time its execution started and its current execution phase.
To include this metric, the `metric` option should be set to `executing_watches`
or `_all`. In addition, you can specify the `emit_stacktraces=true` parameter,
which adds stack traces for each watch that is being executed. These stack
traces can give you more insight into the execution of a watch.
[float]
===== Queued watches metric
{watcher} moderates the execution of watches so that their execution does not
put too much pressure on the node and its resources. If too many watches trigger
concurrently and there is not enough capacity to execute them all, some of the
watches are queued, waiting for the currently executing watches to finish their
execution. The queued watches metric gives insight into these queued watches.
To include this metric, the `metric` option should include `queued_watches` or
`_all`.
[float]
==== Path Parameters
`emit_stacktraces`::
(boolean) Defines whether stack traces are generated for each watch that is
running. The default value is `false`.
`metric`::
(enum) Defines which additional metrics are included in the response.
`executing_watches`::: Includes the currently executing watches in the response.
`queued_watches`::: Includes the watches queued for execution in the response.
`_all`::: Includes all metrics in the response.
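For example, combining the parameters described above, the following sketch
requests all metrics and includes stack traces for each running watch:
[source,js]
--------------------------------------------------
GET _xpack/watcher/stats/_all?emit_stacktraces=true
--------------------------------------------------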
[float]
==== Authorization
You must have `manage_watcher` or `monitor_watcher` cluster privileges to use
this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example calls the `stats` API to retrieve basic metrics:
[source,js]
--------------------------------------------------
@ -39,21 +87,11 @@ A successful call returns a JSON structure similar to the following example:
}
--------------------------------------------------
<1> The current state of watcher. May be either `started`, `starting` or `stopped`.
<1> The current state of watcher, which can be `started`, `starting`, or `stopped`.
<2> The number of watches currently registered.
<3> The number of watches that were triggered and currently queued for execution.
<4> The largest size of the execution thread pool indicating the largest number
of concurrent executing watches.
==== Current executing watches metric
The current executing watches metric gives insight into the watches that are
currently being executed by {watcher}. Additional information is shared per
watch that is currently executing. This information includes the `watch_id`,
the time its execution started and its current execution phase.
To include this metric, the `metric` option should be set to `executing_watches`
or `_all`.
<4> The largest size of the execution thread pool, which indicates the largest
number of concurrently executing watches.
The following example specifies the `metric` option as a query string argument
and will include the basic metrics and metrics about the current executing watches:
@ -96,8 +134,8 @@ captures a watch in execution:
}
--------------------------------------------------
<1> A list of all the Watches that are currently being executed by {watcher}.
When no watches are currently executing an empty array is returned. The
<1> A list of all the watches that are currently being executed by {watcher}.
When no watches are currently executing, an empty array is returned. The
captured watches are sorted by execution time in descending order. Thus the
longest running watch is always at the top.
<2> The id of the watch being executed.
@ -108,21 +146,6 @@ captures a watch in execution:
<6> The current watch execution phase. Can be `input`, `condition`, `actions`,
`awaits_execution`, `started`, `watch_transform`, `aborted`, `finished`.
In addition you can also specify the `emit_stacktraces=true` parameter, which
adds stack traces for each watch that is being executed. These stacktraces can
give you more insight into an execution of a watch.
==== Queued watches metric
{watcher} moderates the execution of watches such that their execution won't put
too much pressure on the node and its resources. If too many watches trigger
concurrently and there isn't enough capacity to execute them all, some of the
watches are queued, waiting for the current executing watches to finish their
execution. The queued watches metric gives insight on these queued watches.
To include this metric, the `metric` option should include `queued_watches` or
`_all`.
The following example specifies the `queued_watches` metric option and includes
both the basic metrics and the queued watches:

View File

@ -2,8 +2,21 @@
[[watcher-api-stop]]
=== Stop API
The `stop` API stops the {watcher} service if the service is running, as in the
following example:
The `stop` API stops the {watcher} service if the service is running.
[float]
==== Request
`POST _xpack/watcher/_stop`
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
[source,js]
--------------------------------------------------

View File

@ -18,9 +18,9 @@ IMPORTANT: When you configure realms in `elasticsearch.yml`, only the
realms you specify are used for authentication. To use the
`file` realm as a fallback, you must include it in the realm chain.
To define users, {security} provides the <<managing-file-users, users>> command-line
tool. This tool enables you to add and remove users, assign user roles and manage
user passwords.
To define users, {security} provides the {ref}/users-command.html[users]
command-line tool. This tool enables you to add and remove users, assign user
roles and manage user passwords.
==== Configuring a File Realm
@ -84,152 +84,6 @@ xpack:
(Expert Setting).
|=======================
[[managing-file-users]]
==== Managing Users
The `users` command-line tool is located in `ES_HOME/bin/x-pack` and enables
several administrative tasks for managing users:
* <<file-realm-add-user, Adding users>>
* <<file-realm-list-users, Listing users and roles>>
* <<file-realm-manage-passwd, Managing user passwords>>
* <<file-realm-manage-roles, Managing users' roles>>
* <<file-realm-remove-user, Removing users>>
[[file-realm-add-user]]
===== Adding Users
Use the `useradd` sub-command to add a user to your local node.
NOTE: To ensure that Elasticsearch can read the user and role information at
startup, run `users useradd` as the same user you use to run Elasticsearch.
Running the command as root or some other user will update the permissions
for the `users` and `users_roles` files and prevent Elasticsearch from
accessing them.
[source,shell]
----------------------------------------
bin/x-pack/users useradd <username>
----------------------------------------
Usernames must be at least 1 and no more than 1024 characters. They can
contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation, and
printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
Leading or trailing whitespace is not allowed.
You can specify the user's password at the command-line with the `-p` option.
When this option is absent, the command prompts you for the password. Omit the
`-p` option to keep plaintext passwords out of the terminal session's command
history.
[source,shell]
----------------------------------------------------
bin/x-pack/users useradd <username> -p <secret>
----------------------------------------------------
Passwords must be at least 6 characters long.
You can define a user's roles with the `-r` option. This option accepts a
comma-separated list of role names to assign to the user.
[source,shell]
-------------------------------------------------------------------
bin/x-pack/users useradd <username> -r <comma-separated list of role names>
-------------------------------------------------------------------
The following example adds a new user named `jacknich` to the `file` realm. The
password for this user is `theshining`, and this user is associated with the
`network` and `monitoring` roles.
[source,shell]
-------------------------------------------------------------------
bin/x-pack/users useradd jacknich -p theshining -r network,monitoring
-------------------------------------------------------------------
For valid role names please see <<valid-role-name, Role Definitions>>.
[[file-realm-list-users]]
===== Listing Users
Use the `list` sub-command to list the users registered with the `file` realm
on the local node.
[source, shell]
----------------------------------
bin/x-pack/users list
rdeniro : admin
alpacino : power_user
jacknich : monitoring,network
----------------------------------
Users are in the left-hand column and their corresponding roles are listed in
the right-hand column.
The `list <username>` sub-command lists a specific user. Use this command to
verify that a user was successfully added to the local `file` realm.
[source,shell]
-----------------------------------
bin/x-pack/users list jacknich
jacknich : monitoring,network
-----------------------------------
[[file-realm-manage-passwd]]
===== Managing User Passwords
Use the `passwd` sub-command to reset a user's password. You can specify the new
password directly with the `-p` option. When `-p` option is omitted, the tool
will prompt you to enter and confirm a password in interactive mode.
[source,shell]
--------------------------------------------------
bin/x-pack/users passwd <username>
--------------------------------------------------
[source,shell]
--------------------------------------------------
bin/x-pack/users passwd <username> -p <password>
--------------------------------------------------
[[file-realm-manage-roles]]
===== Assigning Users to Roles
Use the `roles` sub-command to manage the roles of a particular user. The `-a`
option adds a comma-separated list of roles to a user. The `-r` option removes
a comma-separated list of roles from a user. You can combine adding and removing
roles within the same command to change a user's roles.
[source,shell]
------------------------------------------------------------------------------------------------------------
bin/x-pack/users roles <username> -a <commma-separate list of roles> -r <comma-separated list of roles>
------------------------------------------------------------------------------------------------------------
The following command removes the `network` and `monitoring` roles from user
`jacknich` and adds the `user` role:
[source,shell]
------------------------------------------------------------
bin/x-pack/users roles jacknich -r network,monitoring -a user
------------------------------------------------------------
Listing the user displays the new role assignment:
[source,shell]
---------------------------------
bin/x-pack/users list jacknich
jacknich : user
---------------------------------
[[file-realm-remove-user]]
===== Deleting Users
Use the `userdel` sub-command to delete a user.
[source,shell]
--------------------------------------------------
bin/x-pack/users userdel <username>
--------------------------------------------------
==== A Look Under the Hood
All the data about the users for the `file` realm is stored in two files, `users`
@ -255,8 +109,8 @@ Puppet or Chef).
==============================
While it is possible to modify these files directly using any standard text
editor, we strongly recommend using the `bin/x-pack/users` command-line tool
to apply the required changes.
editor, we strongly recommend using the {ref}/users-command.html[`bin/x-pack/users`]
command-line tool to apply the required changes.
[float]
[[users-file]]

View File

@ -156,19 +156,17 @@ A role is defined by the following JSON structure:
[source,js]
-----
{
"name": "...", <1>
"run_as": [ ... ] <2>
"cluster": [ ... ], <3>
"indices": [ ... ] <4>
"run_as": [ ... ], <1>
"cluster": [ ... ], <2>
"indices": [ ... ] <3>
}
-----
<1> The role name, also used as the role ID.
<2> A list of usernames the owners of this role can <<run-as-privilege, impersonate>>.
<3> A list of cluster privileges. These privileges define the
<1> A list of usernames the owners of this role can <<run-as-privilege, impersonate>>.
<2> A list of cluster privileges. These privileges define the
cluster level actions users with this role are able to execute. This field
is optional (missing `cluster` privileges effectively mean no cluster level
permissions).
<4> A list of indices permissions entries. This field is optional (missing `indices`
<3> A list of indices permissions entries. This field is optional (missing `indices`
privileges effectively mean no index level permissions).
[[valid-role-name]]

View File

@ -2,8 +2,8 @@
=== Mapping Users and Groups to Roles
If you authenticate users with the `native` or `file` realms, you can manage
role assignment user the <<managing-native-users, User Management APIs>> or the
<<managing-file-users, file-realm>> command-line tool respectively.
role assignment by using the <<managing-native-users, User Management APIs>> or
the {ref}/users-command.html[users] command-line tool respectively.
For other types of realms, you must create _role-mappings_ that define which
roles should be assigned to each user based on their username, groups, or

View File

@ -98,7 +98,8 @@ IMPORTANT: Once you get these basic security measures in place, we strongly
recommend that you secure communications to and from nodes by
configuring your cluster to use {xpack-ref}/ssl-tls.html[SSL/TLS encryption].
Nodes that do not have encryption enabled send passwords in plain
text!
text and will not be able to install a non-trial license that enables the use
of {security}.
Depending on your security requirements, you might also want to:

View File

@ -47,9 +47,8 @@ _realms_. {security} provides the following built-in realms:
| `file` | | | An internal realm where users are defined in files
stored on each node in the Elasticsearch cluster.
With this realm, users are authenticated by usernames
and passwords. The users are managed via
<<managing-file-users,dedicated tools>> that are
provided by {xpack} on installation.
and passwords. The users are managed via dedicated
tools that are provided by {xpack} on installation.
|======
If none of the built-in realms meets your needs, you can also build your own

View File

@ -4,8 +4,8 @@
Elasticsearch nodes store data that may be confidential. Attacks on the data may
come from the network. These attacks could include sniffing of the data,
manipulation of the data, and attempts to gain access to the server and thus the
files storing the data. Securing your nodes with the procedures below helps to
reduce risk from network-based attacks.
files storing the data. Securing your nodes is required in order to use a
production license that enables {security}, and it helps reduce the risk of
network-based attacks.
This section shows how to:

View File

@ -38,19 +38,6 @@ transport.profiles.client.bind_host: 1.1.1.1 <2>
If separate networks are not available, then <<ip-filtering, IP Filtering>> can
be enabled to limit access to the profiles.
The TCP transport profiles also allow for enabling SSL on a per profile basis.
This is useful if you have a secured network for the node-to-node communication,
but the client is on an unsecured network. To enable SSL on a client profile when
SSL is disabled for node-to-node communication, add the following to
`elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client.xpack.security.ssl.enabled: true <1>
--------------------------------------------------
<1> This enables SSL on the client profile. The default value for this setting
is the value of `xpack.security.transport.ssl.enabled`.
When using SSL for transport, a different set of certificates can also be used
for the client traffic by adding the following to `elasticsearch.yml`:

View File

@ -6,7 +6,7 @@ cluster. Connections are secured using Transport Layer Security (TLS), which is
commonly referred to as "SSL".
WARNING: Clusters that do not have encryption enabled send all data in plain text
including passwords.
including passwords, and cannot install a license that enables {security}.
To enable encryption, you need to perform the following steps on each node in
the cluster:

View File

@ -715,11 +715,11 @@ are also available for each transport profile. By default, the settings for a
transport profile will be the same as the default transport unless they
are specified.
As an example, lets look at the enabled setting. For the default transport
this is `xpack.security.transport.ssl.enabled`. In order to use this setting in a
As an example, let's look at the `key` setting. For the default transport
this is `xpack.security.transport.ssl.key`. In order to use this setting in a
transport profile, use the prefix `transport.profiles.$PROFILE.xpack.security.` and
append the portion of the setting after `xpack.security.transport.`. For the enabled
setting, this would be `transport.profiles.$PROFILE.xpack.security.ssl.enabled`.
append the portion of the setting after `xpack.security.transport.`. For the key
setting, this would be `transport.profiles.$PROFILE.xpack.security.ssl.key`.
[float]
[[ip-filtering-settings]]

View File

@ -228,9 +228,10 @@ You can also set a watch to the _inactive_ state. Inactive watches are not
registered with a trigger engine and can never be triggered.
To set a watch to the inactive state when you create it, set the
{ref}/watcher-api-put-watch.html#watcher-api-put-watch-active-state[`active`]
parameter to _inactive_. To deactivate an existing watch, use the
{ref}/watcher-api-deactivate-watch.html[Deactivate Watch API]. To reactivate an inactive watch, use the
{ref}/watcher-api-put-watch.html[`active`] parameter to _inactive_. To
deactivate an existing watch, use the
{ref}/watcher-api-deactivate-watch.html[Deactivate Watch API]. To reactivate an
inactive watch, use the
{ref}/watcher-api-activate-watch.html[Activate Watch API].
NOTE: You can use the {ref}/watcher-api-execute-watch.html[Execute Watch API]

View File

@ -1,3 +1,4 @@
import org.elasticsearch.gradle.LoggedExec
import org.elasticsearch.gradle.MavenFilteringHack
import org.elasticsearch.gradle.test.NodeInfo
@ -210,7 +211,39 @@ integTestRunner {
systemProperty 'tests.rest.blacklist', 'getting_started/10_monitor_cluster_health/*'
}
// location of generated keystores and certificates
File keystoreDir = new File(project.buildDir, 'keystore')
// Generate the node's keystore
File nodeKeystore = new File(keystoreDir, 'test-node.jks')
task createNodeKeyStore(type: LoggedExec) {
doFirst {
if (nodeKeystore.parentFile.exists() == false) {
nodeKeystore.parentFile.mkdirs()
}
if (nodeKeystore.exists()) {
delete nodeKeystore
}
}
executable = new File(project.javaHome, 'bin/keytool')
standardInput = new ByteArrayInputStream('FirstName LastName\nUnit\nOrganization\nCity\nState\nNL\nyes\n\n'.getBytes('UTF-8'))
args '-genkey',
'-alias', 'test-node',
'-keystore', nodeKeystore,
'-keyalg', 'RSA',
'-keysize', '2048',
'-validity', '712',
'-dname', 'CN=smoke-test-plugins-ssl',
'-keypass', 'keypass',
'-storepass', 'keypass'
}
// Add keystores to test classpath: it expects it there
sourceSets.test.resources.srcDir(keystoreDir)
processTestResources.dependsOn(createNodeKeyStore)
integTestCluster {
dependsOn createNodeKeyStore
setting 'xpack.ml.enabled', 'true'
setting 'logger.org.elasticsearch.xpack.ml.datafeed', 'TRACE'
// Integration tests are supposed to enable/disable exporters before/after each test
@ -218,11 +251,17 @@ integTestCluster {
setting 'xpack.monitoring.exporters._local.enabled', 'false'
setting 'xpack.monitoring.collection.interval', '-1'
setting 'xpack.security.authc.token.enabled', 'true'
setting 'xpack.security.transport.ssl.enabled', 'true'
setting 'xpack.security.transport.ssl.keystore.path', nodeKeystore.name
setting 'xpack.security.transport.ssl.verification_mode', 'certificate'
keystoreSetting 'bootstrap.password', 'x-pack-test-password'
keystoreSetting 'xpack.security.transport.ssl.keystore.secure_password', 'keypass'
distribution = 'zip' // this is important since we use the reindex module in ML
setupCommand 'setupTestUser', 'bin/x-pack/users', 'useradd', 'x_pack_rest_user', '-p', 'x-pack-test-password', '-r', 'superuser'
extraConfigFile nodeKeystore.name, nodeKeystore
waitCondition = { NodeInfo node, AntBuilder ant ->
File tmpFile = new File(node.cwd, 'wait.success')

View File

@ -782,4 +782,23 @@ public class License implements ToXContentObject {
}
}
}
/**
* Returns <code>true</code> iff the license is a production license
*/
public boolean isProductionLicense() {
switch (operationMode()) {
case MISSING:
case TRIAL:
case BASIC:
return false;
case STANDARD:
case GOLD:
case PLATINUM:
return true;
default:
throw new AssertionError("unknown operation mode: " + operationMode());
}
}
}

View File

@ -30,6 +30,7 @@ import org.elasticsearch.env.Environment;
import org.elasticsearch.gateway.GatewayService;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.scheduler.SchedulerEngine;
import java.time.Clock;
@ -207,20 +208,31 @@ public class LicenseService extends AbstractLifecycleComponent implements Cluste
}
}
}
clusterService.submitStateUpdateTask("register license [" + newLicense.uid() + "]", new
AckedClusterStateUpdateTask<PutLicenseResponse>(request, listener) {
@Override
protected PutLicenseResponse newResponse(boolean acknowledged) {
return new PutLicenseResponse(acknowledged, LicensesStatus.VALID);
}
@Override
public ClusterState execute(ClusterState currentState) throws Exception {
MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
mdBuilder.putCustom(LicensesMetaData.TYPE, new LicensesMetaData(newLicense));
return ClusterState.builder(currentState).metaData(mdBuilder).build();
}
});
if (newLicense.isProductionLicense()
&& XPackSettings.SECURITY_ENABLED.get(settings)
&& XPackSettings.TRANSPORT_SSL_ENABLED.get(settings) == false) {
// security is enabled but TLS is not configured, so fail the entire request and throw an exception
throw new IllegalStateException("Can not upgrade to a production license unless TLS is configured or " +
"security is disabled");
// TODO we should really validate that all nodes have xpack installed and are consistently configured, but this
// should happen on a different level and not in this code
} else {
clusterService.submitStateUpdateTask("register license [" + newLicense.uid() + "]", new
AckedClusterStateUpdateTask<PutLicenseResponse>(request, listener) {
@Override
protected PutLicenseResponse newResponse(boolean acknowledged) {
return new PutLicenseResponse(acknowledged, LicensesStatus.VALID);
}
@Override
public ClusterState execute(ClusterState currentState) throws Exception {
MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
mdBuilder.putCustom(LicensesMetaData.TYPE, new LicensesMetaData(newLicense));
return ClusterState.builder(currentState).metaData(mdBuilder).build();
}
});
}
}
}
@ -271,7 +283,7 @@ public class LicenseService extends AbstractLifecycleComponent implements Cluste
}
public License getLicense() {
final License license = getLicense(clusterService.state().metaData().custom(LicensesMetaData.TYPE));
final License license = getLicense(clusterService.state().metaData());
return license == LicensesMetaData.LICENSE_TOMBSTONE ? null : license;
}
@ -469,7 +481,12 @@ public class LicenseService extends AbstractLifecycleComponent implements Cluste
};
}
License getLicense(final LicensesMetaData metaData) {
public static License getLicense(final MetaData metaData) {
final LicensesMetaData licensesMetaData = metaData.custom(LicensesMetaData.TYPE);
return getLicense(licensesMetaData);
}
static License getLicense(final LicensesMetaData metaData) {
if (metaData != null) {
License license = metaData.getLicense();
if (license == LicensesMetaData.LICENSE_TOMBSTONE) {

View File

@ -16,6 +16,7 @@ import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Binder;
@ -44,6 +45,7 @@ import org.elasticsearch.license.Licensing;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.DiscoveryPlugin;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.plugins.Plugin;
@ -128,6 +130,7 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.Supplier;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;
@ -137,7 +140,7 @@ import javax.security.auth.DestroyFailedException;
import static org.elasticsearch.xpack.watcher.Watcher.ENCRYPT_SENSITIVE_DATA_SETTING;
public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin {
public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin, DiscoveryPlugin {
public static final String NAME = "x-pack";
@ -639,4 +642,9 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
public Map<String, Supplier<ClusterState.Custom>> getInitialClusterStateCustomSupplier() {
return security.getInitialClusterStateCustomSupplier();
}
@Override
public BiConsumer<DiscoveryNode, ClusterState> getJoinValidator() {
return security.getJoinValidator();
}
}

View File

@ -5,7 +5,6 @@
*/
package org.elasticsearch.xpack;
import org.elasticsearch.common.Booleans;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
@ -56,20 +55,9 @@ public class XPackSettings {
public static final Setting<Boolean> LOGSTASH_ENABLED = Setting.boolSetting("xpack.logstash.enabled", true,
Setting.Property.NodeScope);
/**
* Legacy setting for enabling or disabling transport ssl. Defaults to true. This is just here to make upgrading easier since the
* user needs to set this setting in 5.x to upgrade
*/
private static final Setting<Boolean> TRANSPORT_SSL_ENABLED =
new Setting<>("xpack.security.transport.ssl.enabled", (s) -> Boolean.toString(true),
(s) -> {
final boolean parsed = Booleans.parseBoolean(s);
if (parsed == false) {
throw new IllegalArgumentException("transport ssl cannot be disabled. Remove setting [" +
XPackPlugin.featureSettingPrefix(XPackPlugin.SECURITY) + ".transport.ssl.enabled]");
}
return true;
}, Property.NodeScope, Property.Deprecated);
/** Setting for enabling or disabling TLS. Defaults to false. */
public static final Setting<Boolean> TRANSPORT_SSL_ENABLED = Setting.boolSetting("xpack.security.transport.ssl.enabled", false,
Property.NodeScope);
/** Setting for enabling or disabling http ssl. Defaults to false. */
public static final Setting<Boolean> HTTP_SSL_ENABLED = Setting.boolSetting("xpack.security.http.ssl.enabled", false,

View File

@ -106,7 +106,7 @@ class AggregationDataExtractor implements DataExtractor {
private void initAggregationProcessor(Aggregations aggs) throws IOException {
aggregationToJsonProcessor = new AggregationToJsonProcessor(context.timeField, context.fields, context.includeDocCount,
context.start);
context.start, getHistogramInterval());
aggregationToJsonProcessor.process(aggs);
}

View File

@ -5,7 +5,9 @@
*/
package org.elasticsearch.xpack.ml.datafeed.extractor.aggregation;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.search.aggregations.Aggregation;
@ -40,13 +42,16 @@ import java.util.TreeMap;
*/
class AggregationToJsonProcessor {
private static final Logger LOGGER = Loggers.getLogger(AggregationToJsonProcessor.class);
private final String timeField;
private final Set<String> fields;
private final boolean includeDocCount;
private final LinkedHashMap<String, Object> keyValuePairs;
private long keyValueWrittenCount;
private SortedMap<Long, List<Map<String, Object>>> docsByBucketTimestamp;
private long startTime;
private final SortedMap<Long, List<Map<String, Object>>> docsByBucketTimestamp;
private final long startTime;
private final long histogramInterval;
/**
* Constructs a processor that processes aggregations into JSON
@ -55,8 +60,9 @@ class AggregationToJsonProcessor {
* @param fields the fields to convert into JSON
* @param includeDocCount whether to include the doc_count
* @param startTime buckets with a timestamp before this time are discarded
* @param histogramInterval the histogram interval
*/
AggregationToJsonProcessor(String timeField, Set<String> fields, boolean includeDocCount, long startTime)
AggregationToJsonProcessor(String timeField, Set<String> fields, boolean includeDocCount, long startTime, long histogramInterval)
throws IOException {
this.timeField = Objects.requireNonNull(timeField);
this.fields = Objects.requireNonNull(fields);
@ -65,6 +71,7 @@ class AggregationToJsonProcessor {
docsByBucketTimestamp = new TreeMap<>();
keyValueWrittenCount = 0;
this.startTime = startTime;
this.histogramInterval = histogramInterval;
}
public void process(Aggregations aggs) throws IOException {
@ -154,8 +161,10 @@ class AggregationToJsonProcessor {
boolean checkBucketTime = true;
for (Histogram.Bucket bucket : agg.getBuckets()) {
if (checkBucketTime) {
if (toHistogramKeyToEpoch(bucket.getKey()) < startTime) {
long bucketTime = toHistogramKeyToEpoch(bucket.getKey());
if (bucketTime + histogramInterval <= startTime) {
// skip buckets outside the required time range
LOGGER.debug("Skipping bucket at [" + bucketTime + "], startTime is [" + startTime + "]");
continue;
} else {
checkBucketTime = false;
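The check above now compares a bucket's end time (key plus histogram interval) against the start time, so a bucket that merely begins before startTime but still overlaps it is kept. A minimal standalone sketch of that boundary condition (class, method, and the numbers used are hypothetical, for illustration only):
public class BucketSkipSketch {
    // Mirrors the new condition: drop a bucket only if its whole interval ends at or before startTime.
    static boolean skip(long bucketTime, long histogramInterval, long startTime) {
        return bucketTime + histogramInterval <= startTime;
    }
    public static void main(String[] args) {
        long interval = 1_000L;   // hypothetical 1s histogram interval
        long startTime = 10_000L;
        System.out.println(skip(8_000L, interval, startTime));  // true  -> bucket ends well before startTime
        System.out.println(skip(9_000L, interval, startTime));  // true  -> ends exactly at startTime, still skipped
        System.out.println(skip(9_500L, interval, startTime));  // false -> overlaps startTime, kept (the old strict key check would have dropped it)
    }
}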

View File

@ -25,8 +25,6 @@ import static org.elasticsearch.common.settings.Setting.timeSetting;
public class MonitoringSettings extends AbstractComponent {
public static final String LEGACY_DATA_INDEX_NAME = ".marvel-es-data";
public static final String HISTORY_DURATION_SETTING_NAME = "history.duration";
/**
* The minimum amount of time allowed for the history duration.

View File

@ -74,6 +74,7 @@ import java.util.stream.StreamSupport;
import static org.elasticsearch.common.Strings.collectionToCommaDelimitedString;
import static org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils.LAST_UPDATED_VERSION;
import static org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils.PIPELINE_IDS;
import static org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils.TEMPLATE_VERSION;
import static org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils.loadPipeline;
import static org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils.pipelineName;
@ -505,11 +506,8 @@ public class LocalExporter extends Exporter implements ClusterStateListener, Cle
if (clusterState != null) {
long expirationTime = expiration.getMillis();
// Get the list of monitoring index patterns
String[] patterns = StreamSupport.stream(getResolvers().spliterator(), false)
.map(MonitoringIndexNameResolver::indexPattern)
.distinct()
.toArray(String[]::new);
// list of index patterns that we clean up; we may add watcher history in the future
final String[] indexPatterns = new String[] { ".monitoring-*" };
MonitoringDoc monitoringDoc = new MonitoringDoc(null, null, null, null, null,
System.currentTimeMillis(), (MonitoringDoc.Node) null);
@ -519,13 +517,15 @@ public class LocalExporter extends Exporter implements ClusterStateListener, Cle
.map(r -> r.index(monitoringDoc))
.collect(Collectors.toSet());
// avoid deleting the current alerts index, but feel free to delete older ones
currents.add(".monitoring-alerts-" + TEMPLATE_VERSION);
Set<String> indices = new HashSet<>();
for (ObjectObjectCursor<String, IndexMetaData> index : clusterState.getMetaData().indices()) {
String indexName = index.key;
if (Regex.simpleMatch(patterns, indexName)) {
// Never delete the data index or a current index
if (Regex.simpleMatch(indexPatterns, indexName)) {
// Never delete any "current" index (e.g., today's index or the most recent version without a timestamp, like alerts)
if (currents.contains(indexName)) {
continue;
}
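The cleanup above now matches the single ".monitoring-*" pattern and always spares "current" indices, including the versioned alerts index. A rough sketch of that decision, assuming Regex.simpleMatch from Elasticsearch core is on the classpath (the index names and version suffix are made up for illustration):
import org.elasticsearch.common.regex.Regex;
import java.util.HashSet;
import java.util.Set;
public class MonitoringCleanupSketch {
    public static void main(String[] args) {
        String[] indexPatterns = new String[] { ".monitoring-*" };
        Set<String> currents = new HashSet<>();
        currents.add(".monitoring-es-6-2017.09.18");  // hypothetical "today" index
        currents.add(".monitoring-alerts-6");         // hypothetical current alerts index, never deleted
        String[] candidates = { ".monitoring-es-6-2017.09.10", ".monitoring-alerts-6", ".watches" };
        for (String index : candidates) {
            // deletable only if it matches a monitoring pattern and is not a "current" index
            boolean deletable = Regex.simpleMatch(indexPatterns, index) && currents.contains(index) == false;
            System.out.println(index + " deletable=" + deletable);
        }
    }
}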

View File

@ -8,6 +8,7 @@ package org.elasticsearch.xpack.security;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.authc.RealmSettings;
import org.elasticsearch.xpack.security.authc.pki.PkiRealm;
import org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4Transport;
@ -46,16 +47,18 @@ class PkiRealmBootstrapCheck implements BootstrapCheck {
}
// Default Transport
final boolean transportSSLEnabled = XPackSettings.TRANSPORT_SSL_ENABLED.get(settings);
final Settings transportSSLSettings = settings.getByPrefix(setting("transport.ssl."));
final boolean clientAuthEnabled = sslService.isSSLClientAuthEnabled(transportSSLSettings);
if (clientAuthEnabled) {
if (transportSSLEnabled && clientAuthEnabled) {
return BootstrapCheckResult.success();
}
// Transport Profiles
Map<String, Settings> groupedSettings = settings.getGroups("transport.profiles.");
for (Map.Entry<String, Settings> entry : groupedSettings.entrySet()) {
if (sslService.isSSLClientAuthEnabled(SecurityNetty4Transport.profileSslSettings(entry.getValue()), transportSSLSettings)) {
if (transportSSLEnabled && sslService.isSSLClientAuthEnabled(
SecurityNetty4Transport.profileSslSettings(entry.getValue()), transportSSLSettings)) {
return BootstrapCheckResult.success();
}
}
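With this change the PKI realm bootstrap check only passes when transport SSL is actually enabled in addition to client authentication being requested. A hedged sketch of node settings that would satisfy the tightened check (the realm name and values are examples only; the keys follow the x-pack settings layout of this era):
import org.elasticsearch.common.settings.Settings;
public class PkiCheckSettingsSketch {
    public static void main(String[] args) {
        // A PKI realm is only usable over transport if SSL is enabled and client certificates are requested.
        Settings settings = Settings.builder()
                .put("xpack.security.authc.realms.pki1.type", "pki")
                .put("xpack.security.transport.ssl.enabled", true)
                .put("xpack.security.transport.ssl.client_authentication", "optional")
                .build();
        System.out.println(settings.getAsBoolean("xpack.security.transport.ssl.enabled", false));               // true
        System.out.println(settings.getByPrefix("xpack.security.transport.ssl.").get("client_authentication")); // optional
    }
}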

View File

@ -16,10 +16,10 @@ import org.elasticsearch.action.support.DestructiveOperations;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.LocalNodeMasterListener;
import org.elasticsearch.cluster.NamedDiff;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Booleans;
@ -52,9 +52,12 @@ import org.elasticsearch.http.HttpServerTransport;
import org.elasticsearch.index.IndexModule;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.ingest.Processor;
import org.elasticsearch.license.License;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.DiscoveryPlugin;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.rest.RestController;
@ -163,9 +166,9 @@ import org.elasticsearch.xpack.security.transport.filter.IPFilter;
import org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport;
import org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4Transport;
import org.elasticsearch.xpack.security.user.AnonymousUser;
import org.elasticsearch.xpack.ssl.SSLBootstrapCheck;
import org.elasticsearch.xpack.ssl.SSLConfigurationSettings;
import org.elasticsearch.xpack.ssl.SSLService;
import org.elasticsearch.xpack.ssl.TLSLicenseBootstrapCheck;
import org.elasticsearch.xpack.template.TemplateUtils;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
@ -195,7 +198,7 @@ import static java.util.Collections.singletonList;
import static org.elasticsearch.xpack.XPackSettings.HTTP_SSL_ENABLED;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_TEMPLATE_NAME;
public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin {
public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin, DiscoveryPlugin {
private static final Logger logger = Loggers.getLogger(XPackPlugin.class);
@ -243,9 +246,9 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
// fetched
final List<BootstrapCheck> checks = new ArrayList<>();
checks.addAll(Arrays.asList(
new SSLBootstrapCheck(sslService, env),
new TokenSSLBootstrapCheck(),
new PkiRealmBootstrapCheck(sslService)));
new PkiRealmBootstrapCheck(sslService),
new TLSLicenseBootstrapCheck()));
checks.addAll(InternalRealms.getBootstrapChecks(settings));
this.bootstrapChecks = Collections.unmodifiableList(checks);
} else {
@ -902,4 +905,25 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
return Collections.emptyMap();
}
}
@Override
public BiConsumer<DiscoveryNode, ClusterState> getJoinValidator() {
return enabled ? new ValidateTLSOnJoin(XPackSettings.TRANSPORT_SSL_ENABLED.get(settings)) : null;
}
static final class ValidateTLSOnJoin implements BiConsumer<DiscoveryNode, ClusterState> {
private final boolean isTLSEnabled;
ValidateTLSOnJoin(boolean isTLSEnabled) {
this.isTLSEnabled = isTLSEnabled;
}
@Override
public void accept(DiscoveryNode node, ClusterState state) {
License license = LicenseService.getLicense(state.metaData());
if (license != null && license.isProductionLicense() && isTLSEnabled == false) {
throw new IllegalStateException("TLS setup is required for license type [" + license.operationMode().name() + "]");
}
}
}
}
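The join validator above rejects a node that has transport TLS disabled once the cluster holds a production license. A simplified, hypothetical rendering of that decision (the real validator reads the license from the cluster state metadata via LicenseService.getLicense):
public class JoinTlsValidationSketch {
    // Simplified stand-in for ValidateTLSOnJoin#accept: production license plus TLS off means the join fails.
    static void validate(boolean clusterHasProductionLicense, boolean localTlsEnabled) {
        if (clusterHasProductionLicense && localTlsEnabled == false) {
            throw new IllegalStateException("TLS setup is required for production licenses");
        }
    }
    public static void main(String[] args) {
        validate(false, false); // trial/basic cluster: joining without TLS is allowed
        validate(true, true);   // production license and TLS enabled: allowed
        try {
            validate(true, false); // production license but TLS disabled locally: rejected
        } catch (IllegalStateException e) {
            System.out.println("join rejected: " + e.getMessage());
        }
    }
}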

View File

@ -180,7 +180,7 @@ public class PkiRealm extends Realm {
}
try (SecureString password = SSL_SETTINGS.truststorePassword.get(settings)) {
String trustStoreAlgorithm = SSL_SETTINGS.truststoreAlgorithm.get(settings);
String trustStoreType = SSL_SETTINGS.truststoreType.get(settings);
String trustStoreType = SSLConfigurationSettings.getKeyStoreType(SSL_SETTINGS.truststoreType, settings, truststorePath);
try {
return CertUtils.trustManager(truststorePath, trustStoreType, password.getChars(), trustStoreAlgorithm, realmConfig.env());
} catch (Exception e) {

View File

@ -171,10 +171,12 @@ public class SecurityServerTransportInterceptor extends AbstractComponent implem
Map<String, ServerTransportFilter> profileFilters = new HashMap<>(profileSettingsMap.size() + 1);
final Settings transportSSLSettings = settings.getByPrefix(setting("transport.ssl."));
final boolean transportSSLEnabled = XPackSettings.TRANSPORT_SSL_ENABLED.get(settings);
for (Map.Entry<String, Settings> entry : profileSettingsMap.entrySet()) {
Settings profileSettings = entry.getValue();
final Settings profileSslSettings = SecurityNetty4Transport.profileSslSettings(profileSettings);
final boolean extractClientCert = sslService.isSSLClientAuthEnabled(profileSslSettings, transportSSLSettings);
final boolean extractClientCert = transportSSLEnabled &&
sslService.isSSLClientAuthEnabled(profileSslSettings, transportSSLSettings);
String type = TRANSPORT_TYPE_SETTING_TEMPLATE.apply(TRANSPORT_TYPE_SETTING_KEY).get(entry.getValue());
switch (type) {
case "client":
@ -193,7 +195,7 @@ public class SecurityServerTransportInterceptor extends AbstractComponent implem
}
if (!profileFilters.containsKey(TcpTransport.DEFAULT_PROFILE)) {
final boolean extractClientCert = sslService.isSSLClientAuthEnabled(transportSSLSettings);
final boolean extractClientCert = transportSSLEnabled && sslService.isSSLClientAuthEnabled(transportSSLSettings);
profileFilters.put(TcpTransport.DEFAULT_PROFILE, new ServerTransportFilter.NodeProfile(authcService, authzService,
threadPool.getThreadContext(), extractClientCert, destructiveOperations, reservedRealmEnabled, securityContext));
}

View File

@ -21,6 +21,7 @@ import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TcpTransport;
import org.elasticsearch.transport.netty4.Netty4Transport;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.ssl.SSLConfiguration;
import org.elasticsearch.xpack.ssl.SSLService;
import org.elasticsearch.xpack.security.transport.filter.IPFilter;
@ -47,6 +48,7 @@ public class SecurityNetty4Transport extends Netty4Transport {
@Nullable private final IPFilter authenticator;
private final SSLConfiguration sslConfiguration;
private final Map<String, SSLConfiguration> profileConfiguration;
private final boolean sslEnabled;
public SecurityNetty4Transport(Settings settings, ThreadPool threadPool, NetworkService networkService, BigArrays bigArrays,
NamedWriteableRegistry namedWriteableRegistry, CircuitBreakerService circuitBreakerService,
@ -54,23 +56,28 @@ public class SecurityNetty4Transport extends Netty4Transport {
super(settings, threadPool, networkService, bigArrays, namedWriteableRegistry, circuitBreakerService);
this.authenticator = authenticator;
this.sslService = sslService;
this.sslEnabled = XPackSettings.TRANSPORT_SSL_ENABLED.get(settings);
final Settings transportSSLSettings = settings.getByPrefix(setting("transport.ssl."));
sslConfiguration = sslService.sslConfiguration(transportSSLSettings, Settings.EMPTY);
Map<String, Settings> profileSettingsMap = settings.getGroups("transport.profiles.", true);
Map<String, SSLConfiguration> profileConfiguration = new HashMap<>(profileSettingsMap.size() + 1);
for (Map.Entry<String, Settings> entry : profileSettingsMap.entrySet()) {
Settings profileSettings = entry.getValue();
final Settings profileSslSettings = profileSslSettings(profileSettings);
SSLConfiguration configuration = sslService.sslConfiguration(profileSslSettings, transportSSLSettings);
profileConfiguration.put(entry.getKey(), configuration);
if (sslEnabled) {
this.sslConfiguration = sslService.sslConfiguration(transportSSLSettings, Settings.EMPTY);
Map<String, Settings> profileSettingsMap = settings.getGroups("transport.profiles.", true);
Map<String, SSLConfiguration> profileConfiguration = new HashMap<>(profileSettingsMap.size() + 1);
for (Map.Entry<String, Settings> entry : profileSettingsMap.entrySet()) {
Settings profileSettings = entry.getValue();
final Settings profileSslSettings = profileSslSettings(profileSettings);
SSLConfiguration configuration = sslService.sslConfiguration(profileSslSettings, transportSSLSettings);
profileConfiguration.put(entry.getKey(), configuration);
}
if (profileConfiguration.containsKey(TcpTransport.DEFAULT_PROFILE) == false) {
profileConfiguration.put(TcpTransport.DEFAULT_PROFILE, sslConfiguration);
}
this.profileConfiguration = Collections.unmodifiableMap(profileConfiguration);
} else {
this.profileConfiguration = Collections.emptyMap();
this.sslConfiguration = null;
}
if (profileConfiguration.containsKey(TcpTransport.DEFAULT_PROFILE) == false) {
profileConfiguration.put(TcpTransport.DEFAULT_PROFILE, sslConfiguration);
}
this.profileConfiguration = Collections.unmodifiableMap(profileConfiguration);
}
@Override
@ -83,11 +90,15 @@ public class SecurityNetty4Transport extends Netty4Transport {
@Override
protected ChannelHandler getServerChannelInitializer(String name) {
SSLConfiguration configuration = profileConfiguration.get(name);
if (configuration == null) {
throw new IllegalStateException("unknown profile: " + name);
if (sslEnabled) {
SSLConfiguration configuration = profileConfiguration.get(name);
if (configuration == null) {
throw new IllegalStateException("unknown profile: " + name);
}
return new SecurityServerChannelInitializer(name, configuration);
} else {
return new IPFilterServerChannelInitializer(name);
}
return new SecurityServerChannelInitializer(name, configuration);
}
@Override
@ -127,13 +138,26 @@ public class SecurityNetty4Transport extends Netty4Transport {
}
}
class SecurityServerChannelInitializer extends ServerChannelInitializer {
class IPFilterServerChannelInitializer extends ServerChannelInitializer {
IPFilterServerChannelInitializer(String name) {
super(name);
}
@Override
protected void initChannel(Channel ch) throws Exception {
super.initChannel(ch);
if (authenticator != null) {
ch.pipeline().addFirst("ipfilter", new IpFilterRemoteAddressFilter(authenticator, name));
}
}
}
class SecurityServerChannelInitializer extends IPFilterServerChannelInitializer {
private final SSLConfiguration configuration;
SecurityServerChannelInitializer(String name, SSLConfiguration configuration) {
super(name);
this.configuration = configuration;
}
@Override
@ -141,9 +165,12 @@ public class SecurityNetty4Transport extends Netty4Transport {
super.initChannel(ch);
SSLEngine serverEngine = sslService.createSSLEngine(configuration, null, -1);
serverEngine.setUseClientMode(false);
ch.pipeline().addFirst(new SslHandler(serverEngine));
if (authenticator != null) {
ch.pipeline().addFirst(new IpFilterRemoteAddressFilter(authenticator, name));
IpFilterRemoteAddressFilter remoteAddressFilter = ch.pipeline().get(IpFilterRemoteAddressFilter.class);
final SslHandler sslHandler = new SslHandler(serverEngine);
if (remoteAddressFilter == null) {
ch.pipeline().addFirst("sslhandler", sslHandler);
} else {
ch.pipeline().addAfter("ipfilter", "sslhandler", sslHandler);
}
}
}
@ -153,13 +180,15 @@ public class SecurityNetty4Transport extends Netty4Transport {
private final boolean hostnameVerificationEnabled;
SecurityClientChannelInitializer() {
this.hostnameVerificationEnabled = sslConfiguration.verificationMode().isHostnameVerificationEnabled();
this.hostnameVerificationEnabled = sslEnabled && sslConfiguration.verificationMode().isHostnameVerificationEnabled();
}
@Override
protected void initChannel(Channel ch) throws Exception {
super.initChannel(ch);
ch.pipeline().addFirst(new ClientSslHandlerInitializer(sslConfiguration, sslService, hostnameVerificationEnabled));
if (sslEnabled) {
ch.pipeline().addFirst(new ClientSslHandlerInitializer(sslConfiguration, sslService, hostnameVerificationEnabled));
}
}
}
@ -197,5 +226,4 @@ public class SecurityNetty4Transport extends Netty4Transport {
public static Settings profileSslSettings(Settings profileSettings) {
return profileSettings.getByPrefix(setting("ssl."));
}
}
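The initializers above keep the "ipfilter" handler first in the pipeline and slot the SSL handler directly after it when both are present; without an IP filter the SSL handler simply goes first. A small sketch of that ordering using Netty's EmbeddedChannel (assuming Netty is on the classpath; the no-op handlers are placeholders):
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.embedded.EmbeddedChannel;
public class PipelineOrderSketch {
    public static void main(String[] args) {
        // With an IP filter installed, the SSL handler is added right after it.
        EmbeddedChannel withFilter = new EmbeddedChannel();
        withFilter.pipeline().addFirst("ipfilter", new ChannelDuplexHandler());
        withFilter.pipeline().addAfter("ipfilter", "sslhandler", new ChannelDuplexHandler());
        System.out.println(withFilter.pipeline().names()); // [ipfilter, sslhandler, ...]
        // Without an IP filter, the SSL handler goes first.
        EmbeddedChannel withoutFilter = new EmbeddedChannel();
        withoutFilter.pipeline().addFirst("sslhandler", new ChannelDuplexHandler());
        System.out.println(withoutFilter.pipeline().names()); // [sslhandler, ...]
    }
}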

View File

@ -1,211 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.bouncycastle.asn1.x509.GeneralNames;
import org.bouncycastle.operator.OperatorCreationException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.hash.MessageDigests;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.Node;
import javax.net.ssl.X509ExtendedKeyManager;
import javax.net.ssl.X509ExtendedTrustManager;
import javax.security.auth.DestroyFailedException;
import javax.security.auth.x500.X500Principal;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.security.KeyPair;
import java.security.KeyStoreException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.UnrecoverableKeyException;
import java.security.cert.Certificate;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;
/**
* Represents a {@link KeyConfig} that is automatically generated on node startup if necessary. This helps with the default user experience
* so that the user does not need to have any knowledge about SSL setup to start a node
*/
final class GeneratedKeyConfig extends KeyConfig {
// these values have been generated using openssl
// For private key: openssl pkcs8 -in private.pem -inform PEM -nocrypt -topk8 -outform DER | openssl dgst -sha256 -hex
// For certificate: openssl x509 -in ca.pem -noout -fingerprint -sha256
private static final String PRIVATE_KEY_SHA256 = "eec5bdb422c17c75d3850ffc64a724e52a99ec64366677da2fe4e782d7426e9f";
private static final String CA_CERT_FINGERPRINT_SHA256 = "A147166C71EB8B61DADFC5B19ECAC8443BE2DB32A56FC1A73BC1623250738598";
private final X509ExtendedKeyManager keyManager;
private final X509ExtendedTrustManager trustManager;
GeneratedKeyConfig(Settings settings) throws NoSuchAlgorithmException, IOException, CertificateException, OperatorCreationException,
UnrecoverableKeyException, KeyStoreException {
final KeyPair keyPair = CertUtils.generateKeyPair(2048);
final X500Principal principal = new X500Principal("CN=" + Node.NODE_NAME_SETTING.get(settings));
final Certificate caCert = readCACert();
final PrivateKey privateKey = readPrivateKey();
final GeneralNames generalNames = CertUtils.getSubjectAlternativeNames(false, getLocalAddresses());
X509Certificate certificate =
CertUtils.generateSignedCertificate(principal, generalNames, keyPair, (X509Certificate) caCert, privateKey, 365);
try {
privateKey.destroy();
} catch (DestroyFailedException e) {
// best effort attempt. This is known to fail for RSA keys on the oracle JDK but maybe they'll fix it in ten years or so...
}
keyManager = CertUtils.keyManager(new Certificate[] { certificate, caCert }, keyPair.getPrivate(), new char[0]);
trustManager = CertUtils.trustManager(new Certificate[] { caCert });
}
@Override
X509ExtendedTrustManager createTrustManager(@Nullable Environment environment) {
return trustManager;
}
@Override
List<Path> filesToMonitor(@Nullable Environment environment) {
// no files to watch
return Collections.emptyList();
}
@Override
public String toString() {
return "Generated Key Config. DO NOT USE IN PRODUCTION";
}
@Override
public boolean equals(Object o) {
return this == o;
}
@Override
public int hashCode() {
return Objects.hash(keyManager, trustManager);
}
@Override
X509ExtendedKeyManager createKeyManager(@Nullable Environment environment) {
return keyManager;
}
@Override
List<PrivateKey> privateKeys(@Nullable Environment environment) {
try {
return Collections.singletonList(readPrivateKey());
} catch (IOException e) {
throw new UncheckedIOException("failed to read key", e);
}
}
/**
* Enumerates all of the loopback and link local addresses so these can be used as SubjectAlternativeNames inside the certificate for
* a good out of the box experience with TLS
*/
private Set<InetAddress> getLocalAddresses() throws SocketException {
Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
Set<InetAddress> inetAddresses = new HashSet<>();
while (networkInterfaces.hasMoreElements()) {
NetworkInterface intf = networkInterfaces.nextElement();
if (intf.isUp()) {
if (intf.isLoopback()) {
inetAddresses.addAll(Collections.list(intf.getInetAddresses()));
} else {
Enumeration<InetAddress> inetAddressEnumeration = intf.getInetAddresses();
while (inetAddressEnumeration.hasMoreElements()) {
InetAddress inetAddress = inetAddressEnumeration.nextElement();
if (inetAddress.isLoopbackAddress() || inetAddress.isLinkLocalAddress()) {
inetAddresses.add(inetAddress);
}
}
}
}
}
return inetAddresses;
}
/**
* Reads the bundled CA private key. This key is used for signing an automatically generated certificate that allows development nodes
* to talk to each other on the same machine.
*
* This private key is the same for every distribution and is only here for a nice out of the box experience. Once in production mode
* this key should not be used!
*/
static PrivateKey readPrivateKey() throws IOException {
try (InputStream inputStream = GeneratedKeyConfig.class.getResourceAsStream("private.pem");
Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
PrivateKey privateKey = CertUtils.readPrivateKey(reader, () -> null);
MessageDigest md = MessageDigests.sha256();
final byte[] privateKeyBytes = privateKey.getEncoded();
try {
final byte[] digest = md.digest(privateKeyBytes);
final byte[] expected = hexStringToByteArray(PRIVATE_KEY_SHA256);
if (Arrays.equals(digest, expected) == false) {
throw new IllegalStateException("private key hash does not match the expected value!");
}
} finally {
Arrays.fill(privateKeyBytes, (byte) 0);
}
return privateKey;
}
}
/**
* Reads the bundled CA certificate
*/
static Certificate readCACert() throws IOException, CertificateException {
try (InputStream inputStream = GeneratedKeyConfig.class.getResourceAsStream("ca.pem");
Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
List<Certificate> certificateList = new ArrayList<>(1);
CertUtils.readCertificates(reader, certificateList, certificateFactory);
if (certificateList.size() != 1) {
throw new IllegalStateException("expected [1] default CA certificate but found [" + certificateList.size() + "]");
}
Certificate certificate = certificateList.get(0);
final byte[] encoded = MessageDigests.sha256().digest(certificate.getEncoded());
final byte[] expected = hexStringToByteArray(CA_CERT_FINGERPRINT_SHA256);
if (Arrays.equals(encoded, expected) == false) {
throw new IllegalStateException("CA certificate fingerprint does not match!");
}
return certificateList.get(0);
}
}
private static byte[] hexStringToByteArray(String hexString) {
if (hexString.length() % 2 != 0) {
throw new IllegalArgumentException("String must be an even length");
}
final int numBytes = hexString.length() / 2;
final byte[] data = new byte[numBytes];
for(int i = 0; i < numBytes; i++) {
final int index = i * 2;
final int index2 = index + 1;
data[i] = (byte) ((Character.digit(hexString.charAt(index), 16) << 4) + Character.digit(hexString.charAt(index2), 16));
}
return data;
}
}

View File

@ -1,99 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.xpack.XPackSettings;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SignatureException;
import java.security.cert.CertificateException;
import java.util.Arrays;
import java.util.Objects;
import java.util.stream.Stream;
/**
* Bootstrap check to ensure that we only use the generated key config in non-production situations. This class is currently public because
* {@link org.elasticsearch.xpack.security.Security} is in a different package and we use package private accessors of the
* {@link SSLService} to get the configuration for the node to node transport
*/
public final class SSLBootstrapCheck implements BootstrapCheck {
private final SSLService sslService;
private final Environment environment;
public SSLBootstrapCheck(SSLService sslService, @Nullable Environment environment) {
this.sslService = sslService;
this.environment = environment;
}
@Override
public BootstrapCheckResult check(BootstrapContext context) {
final Settings transportSSLSettings = context.settings.getByPrefix(XPackSettings.TRANSPORT_SSL_PREFIX);
if (sslService.sslConfiguration(transportSSLSettings).keyConfig() == KeyConfig.NONE
|| isDefaultCACertificateTrusted() || isDefaultPrivateKeyUsed()) {
return BootstrapCheckResult.failure(
"default SSL key and certificate do not provide security; please generate keys and certificates");
} else {
return BootstrapCheckResult.success();
}
}
/**
* Looks at all of the trusted certificates to ensure the default CA is not being trusted. We cannot let this happen in production mode
*/
private boolean isDefaultCACertificateTrusted() {
final PublicKey publicKey;
try {
publicKey = GeneratedKeyConfig.readCACert().getPublicKey();
} catch (IOException | CertificateException e) {
throw new ElasticsearchException("failed to check default CA", e);
}
return sslService.getLoadedSSLConfigurations().stream()
.flatMap(config -> Stream.of(config.keyConfig().createTrustManager(environment),
config.trustConfig().createTrustManager(environment)))
.filter(Objects::nonNull)
.flatMap((tm) -> Arrays.stream(tm.getAcceptedIssuers()))
.anyMatch((cert) -> {
try {
cert.verify(publicKey);
return true;
} catch (CertificateException | NoSuchAlgorithmException | InvalidKeyException | NoSuchProviderException
| SignatureException e) {
// just ignore these
return false;
}
});
}
/**
* Looks at all of the private keys and if there is a key that is equal to the default CA key then we should bail out
*/
private boolean isDefaultPrivateKeyUsed() {
final PrivateKey defaultPrivateKey;
try {
defaultPrivateKey = GeneratedKeyConfig.readPrivateKey();
} catch (IOException e) {
throw new UncheckedIOException("failed to read key", e);
}
return sslService.getLoadedSSLConfigurations().stream()
.flatMap(sslConfiguration -> sslConfiguration.keyConfig().privateKeys(environment).stream())
.anyMatch(defaultPrivateKey::equals);
}
}

View File

@ -15,7 +15,6 @@ import org.elasticsearch.xpack.XPackSettings;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;
import java.nio.file.Path;
import java.security.KeyStore;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
@ -205,8 +204,8 @@ public final class SSLConfiguration {
} else {
SecureString keyStorePassword = SETTINGS_PARSER.keystorePassword.get(settings);
String keyStoreAlgorithm = SETTINGS_PARSER.keystoreAlgorithm.get(settings);
String keyStoreType = SETTINGS_PARSER.keystoreType.get(settings);
SecureString keyStoreKeyPassword = SETTINGS_PARSER.keystoreKeyPassword.get(settings);;
String keyStoreType = SSLConfigurationSettings.getKeyStoreType(SETTINGS_PARSER.keystoreType, settings, keyStorePath);
SecureString keyStoreKeyPassword = SETTINGS_PARSER.keystoreKeyPassword.get(settings);
if (keyStoreKeyPassword.length() == 0) {
keyStoreKeyPassword = keyStorePassword;
}
@ -244,7 +243,7 @@ public final class SSLConfiguration {
} else if (trustStorePath != null) {
SecureString trustStorePassword = SETTINGS_PARSER.truststorePassword.get(settings);
String trustStoreAlgorithm = SETTINGS_PARSER.truststoreAlgorithm.get(settings);
String trustStoreType = SETTINGS_PARSER.truststoreType.get(settings);
String trustStoreType = SSLConfigurationSettings.getKeyStoreType(SETTINGS_PARSER.truststoreType, settings, trustStorePath);
return new StoreTrustConfig(trustStorePath, trustStoreType, trustStorePassword, trustStoreAlgorithm);
} else if (global == null && System.getProperty("javax.net.ssl.trustStore") != null) {
try (SecureString truststorePassword = new SecureString(System.getProperty("javax.net.ssl.trustStorePassword", ""))) {

View File

@ -7,21 +7,20 @@ package org.elasticsearch.xpack.ssl;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;
import java.security.KeyStore;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Locale;
import java.util.Optional;
import java.util.function.Function;
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;
/**
* Bridges {@link SSLConfiguration} into the {@link Settings} framework, using {@link Setting} objects.
@ -33,12 +32,12 @@ public class SSLConfigurationSettings {
public final Setting<Optional<String>> keystorePath;
public final Setting<SecureString> keystorePassword;
public final Setting<String> keystoreAlgorithm;
public final Setting<String> keystoreType;
public final Setting<Optional<String>> keystoreType;
public final Setting<SecureString> keystoreKeyPassword;
public final Setting<Optional<String>> truststorePath;
public final Setting<SecureString> truststorePassword;
public final Setting<String> truststoreAlgorithm;
public final Setting<String> truststoreType;
public final Setting<Optional<String>> truststoreType;
public final Setting<Optional<String>> trustRestrictionsPath;
public final Setting<Optional<String>> keyPath;
public final Setting<SecureString> keyPassword;
@ -62,6 +61,7 @@ public class SSLConfigurationSettings {
* Older versions of X-Pack only supported JKS and never looked at the JVM's configured default.
*/
private static final String DEFAULT_KEYSTORE_TYPE = "jks";
private static final String PKCS12_KEYSTORE_TYPE = "PKCS12";
private static final Function<String, Setting<List<String>>> CIPHERS_SETTING_TEMPLATE = key -> Setting.listSetting(key, Collections
.emptyList(), Function.identity(), Property.NodeScope, Property.Filtered);
@ -132,14 +132,13 @@ public class SSLConfigurationSettings {
public static final Setting<String> TRUST_STORE_ALGORITHM_PROFILES = Setting.affixKeySetting("transport.profiles.",
"xpack.security.ssl.truststore.algorithm", TRUST_STORE_ALGORITHM_TEMPLATE);
private static final Function<String, Setting<String>> KEY_STORE_TYPE_TEMPLATE = key ->
new Setting<>(key, DEFAULT_KEYSTORE_TYPE, Function.identity(), Property.NodeScope, Property.Filtered);
public static final Setting<String> KEY_STORE_TYPE_PROFILES = Setting.affixKeySetting("transport.profiles.",
private static final Function<String, Setting<Optional<String>>> KEY_STORE_TYPE_TEMPLATE = key ->
new Setting<>(key, s -> null, Optional::ofNullable, Property.NodeScope, Property.Filtered);
public static final Setting<Optional<String>> KEY_STORE_TYPE_PROFILES = Setting.affixKeySetting("transport.profiles.",
"xpack.security.ssl.keystore.type", KEY_STORE_TYPE_TEMPLATE);
private static final Function<String, Setting<String>> TRUST_STORE_TYPE_TEMPLATE = key ->
new Setting<>(key, DEFAULT_KEYSTORE_TYPE, Function.identity(), Property.NodeScope, Property.Filtered);
public static final Setting<String> TRUST_STORE_TYPE_PROFILES = Setting.affixKeySetting("transport.profiles.",
private static final Function<String, Setting<Optional<String>>> TRUST_STORE_TYPE_TEMPLATE = KEY_STORE_TYPE_TEMPLATE;
public static final Setting<Optional<String>> TRUST_STORE_TYPE_PROFILES = Setting.affixKeySetting("transport.profiles.",
"xpack.security.ssl.truststore.type", TRUST_STORE_TYPE_TEMPLATE);
private static final Function<String, Setting<Optional<String>>> TRUST_RESTRICTIONS_TEMPLATE = key -> new Setting<>(key, s -> null,
@ -201,7 +200,7 @@ public class SSLConfigurationSettings {
keystoreAlgorithm = KEY_STORE_ALGORITHM_TEMPLATE.apply(prefix + "keystore.algorithm");
truststoreAlgorithm = TRUST_STORE_ALGORITHM_TEMPLATE.apply(prefix + "truststore.algorithm");
keystoreType = KEY_STORE_TYPE_TEMPLATE.apply(prefix + "keystore.type");
truststoreType = KEY_STORE_TYPE_TEMPLATE.apply(prefix + "truststore.type");
truststoreType = TRUST_STORE_TYPE_TEMPLATE.apply(prefix + "truststore.type");
trustRestrictionsPath = TRUST_RESTRICTIONS_TEMPLATE.apply(prefix + "trust_restrictions.path");
keyPath = KEY_PATH_TEMPLATE.apply(prefix + "key");
legacyKeyPassword = LEGACY_KEY_PASSWORD_TEMPLATE.apply(prefix + "key_passphrase");
@ -218,6 +217,19 @@ public class SSLConfigurationSettings {
legacyKeystorePassword, legacyKeystoreKeyPassword, legacyKeyPassword, legacyTruststorePassword);
}
public static String getKeyStoreType(Setting<Optional<String>> setting, Settings settings, String path) {
return setting.get(settings).orElseGet(() -> inferKeyStoreType(path));
}
private static String inferKeyStoreType(String path) {
String name = path == null ? "" : path.toLowerCase(Locale.ROOT);
if (name.endsWith(".p12") || name.endsWith(".pfx") || name.endsWith(".pkcs12")) {
return PKCS12_KEYSTORE_TYPE;
} else {
return DEFAULT_KEYSTORE_TYPE;
}
}
public List<Setting<?>> getAllSettings() {
return allSettings;
}
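getKeyStoreType now prefers an explicitly configured type and otherwise infers PKCS12 from the file extension, falling back to the historical "jks" default. A standalone sketch of that inference (class and method names are illustrative only):
import java.util.Locale;
import java.util.Optional;
public class KeystoreTypeInferenceSketch {
    // Mirrors the inference introduced above: an explicit type wins, otherwise the extension decides.
    static String keyStoreType(Optional<String> explicitType, String path) {
        return explicitType.orElseGet(() -> {
            String name = path == null ? "" : path.toLowerCase(Locale.ROOT);
            return (name.endsWith(".p12") || name.endsWith(".pfx") || name.endsWith(".pkcs12"))
                    ? "PKCS12" : "jks";
        });
    }
    public static void main(String[] args) {
        System.out.println(keyStoreType(Optional.empty(), "certs/node.p12"));   // PKCS12
        System.out.println(keyStoreType(Optional.empty(), "certs/node.jks"));   // jks
        System.out.println(keyStoreType(Optional.of("BKS"), "certs/node.p12")); // BKS (explicit setting wins)
    }
}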

View File

@ -10,7 +10,6 @@ import org.apache.http.nio.conn.ssl.SSLIOSessionStrategy;
import org.apache.lucene.util.SetOnce;
import org.bouncycastle.operator.OperatorCreationException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.CheckedSupplier;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.component.AbstractComponent;
@ -33,7 +32,6 @@ import javax.security.auth.DestroyFailedException;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.file.Path;
import java.security.KeyManagementException;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
@ -447,74 +445,12 @@ public class SSLService extends AbstractComponent {
final SSLConfiguration transportSSLConfiguration = new SSLConfiguration(transportSSLSettings, globalSSLConfiguration);
this.transportSSLConfiguration.set(transportSSLConfiguration);
List<Settings> profileSettings = getTransportProfileSSLSettings(settings);
// if no key is provided for transport we can auto-generate a key with a signed certificate for development use only. There is a
// bootstrap check that prevents this configuration from being used in production (SSLBootstrapCheck)
if (transportSSLConfiguration.keyConfig() == KeyConfig.NONE) {
createDevelopmentTLSConfiguration(sslConfigurations, transportSSLConfiguration, profileSettings);
} else {
sslConfigurations.computeIfAbsent(transportSSLConfiguration, this::createSslContext);
profileSettings.forEach((profileSetting) ->
sslConfigurations.computeIfAbsent(new SSLConfiguration(profileSetting, transportSSLConfiguration), this::createSslContext));
}
sslConfigurations.computeIfAbsent(transportSSLConfiguration, this::createSslContext);
profileSettings.forEach((profileSetting) ->
sslConfigurations.computeIfAbsent(new SSLConfiguration(profileSetting, transportSSLConfiguration), this::createSslContext));
return Collections.unmodifiableMap(sslConfigurations);
}
private void createDevelopmentTLSConfiguration(Map<SSLConfiguration, SSLContextHolder> sslConfigurations,
SSLConfiguration transportSSLConfiguration, List<Settings> profileSettings)
throws NoSuchAlgorithmException, IOException, CertificateException, OperatorCreationException, UnrecoverableKeyException,
KeyStoreException {
// lazily generate key to avoid slowing down startup where we do not need it
final GeneratedKeyConfig generatedKeyConfig = new GeneratedKeyConfig(settings);
final TrustConfig trustConfig =
new TrustConfig.CombiningTrustConfig(Arrays.asList(transportSSLConfiguration.trustConfig(), new TrustConfig() {
@Override
X509ExtendedTrustManager createTrustManager(@Nullable Environment environment) {
return generatedKeyConfig.createTrustManager(environment);
}
@Override
List<Path> filesToMonitor(@Nullable Environment environment) {
return Collections.emptyList();
}
@Override
public String toString() {
return "Generated Trust Config. DO NOT USE IN PRODUCTION";
}
@Override
public boolean equals(Object o) {
return this == o;
}
@Override
public int hashCode() {
return System.identityHashCode(this);
}
}));
X509ExtendedTrustManager extendedTrustManager = trustConfig.createTrustManager(env);
ReloadableTrustManager trustManager = new ReloadableTrustManager(extendedTrustManager, trustConfig);
ReloadableX509KeyManager keyManager =
new ReloadableX509KeyManager(generatedKeyConfig.createKeyManager(env), generatedKeyConfig);
sslConfigurations.put(transportSSLConfiguration, createSslContext(keyManager, trustManager, transportSSLConfiguration));
profileSettings.forEach((profileSetting) -> {
SSLConfiguration configuration = new SSLConfiguration(profileSetting, transportSSLConfiguration);
if (configuration.keyConfig() == KeyConfig.NONE) {
sslConfigurations.compute(configuration, (conf, holder) -> {
if (holder != null && holder.keyManager == keyManager && holder.trustManager == trustManager) {
return holder;
} else {
return createSslContext(keyManager, trustManager, configuration);
}
});
} else {
sslConfigurations.computeIfAbsent(configuration, this::createSslContext);
}
});
}
/**
* This socket factory wraps an existing SSLSocketFactory and sets the protocols and ciphers on each SSLSocket after it is created. This
* is needed even though the SSLContext is configured properly as the configuration does not flow down to the sockets created by the

View File

@ -0,0 +1,30 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.license.License;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.xpack.XPackSettings;
/**
* Bootstrap check to ensure that TLS is enabled if we are starting up with a production license in the local cluster state
*/
public final class TLSLicenseBootstrapCheck implements BootstrapCheck {
@Override
public BootstrapCheckResult check(BootstrapContext context) {
if (XPackSettings.TRANSPORT_SSL_ENABLED.get(context.settings) == false) {
License license = LicenseService.getLicense(context.metaData);
if (license != null && license.isProductionLicense()) {
return BootstrapCheckResult.failure("Transport SSL must be enabled for setups with production licenses. Please set " +
"[xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] " +
"to [false]");
}
}
return BootstrapCheckResult.success();
}
}

View File

@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDWDCCAkCgAwIBAgIJANRlkT/I8aROMA0GCSqGSIb3DQEBCwUAMCYxJDAiBgNV
BAMTG3hwYWNrIHB1YmxpYyBkZXZlbG9wbWVudCBjYTAeFw0xNzAxMDUxNDUyMDNa
Fw00NDA1MjMxNDUyMDNaMCYxJDAiBgNVBAMTG3hwYWNrIHB1YmxpYyBkZXZlbG9w
bWVudCBjYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALBfQEQYZmPW
cAw939i8RRsa27+qxd32ysJu9aKgSEiIDFKU0JwFh6pog1l8frICM4jF0TqILGHv
+QbQYsD2e3jYp0cj8dy2+YN6jgTXMf1N8yh6GYXEzRrEKYhqVTHLpZgbhxEFxsws
gZiEMHiVxn6h5i4uWDmkp6zt4kHlKgvjtIEzZ1xiXWcS7jJvVPb8r0xUFPDu8Qij
BhjxkbkXprzjGEtt4bKqZ8/R+pr+eUuvmApMSMB38dZxDRXxyavbmbJcGDJX+ZKN
4OcECH55B/EtxhPxpfFXmX+y5Lh597vkhgitw8Qhayaa8gF16tt4rUgYude9kGSi
m3hs6Q9mWM8CAwEAAaOBiDCBhTAdBgNVHQ4EFgQUM6+ZLgmnj1FXHEPejFcpiRR+
ANIwVgYDVR0jBE8wTYAUM6+ZLgmnj1FXHEPejFcpiRR+ANKhKqQoMCYxJDAiBgNV
BAMTG3hwYWNrIHB1YmxpYyBkZXZlbG9wbWVudCBjYYIJANRlkT/I8aROMAwGA1Ud
EwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABqgr2p+Ivb3myF56BuiJYYz55Wa
ncm4Aqdw6p/A5qkl3pSXu2zbgSfyFvux7Q1+lowIvw4fAOTBcQQpQkYWJmObkCLg
HMiKbBreFVqPOqScjTBk6t1g/mOdJXfOognc6QRwfunEBqevNVDT2w3sGlNHooEM
3XUPBgyuznE1Olqt7U0tMGsENyBgZv51bUg7ZZCLrV2sdgqc3XYZUqBnttvbBDyU
tozgDMoCXLvVHcpWcKsA+ONd0szbSAu1uF0ZfqgaoSslM1ph9ydPbXEvnD5AFO6Y
VBTW3v4cnluhrxO6TwRqNo43L5ENqZhtX9gVtzQ54exQsuoKzZ8NO5X1uIA=
-----END CERTIFICATE-----

View File

@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAsF9ARBhmY9ZwDD3f2LxFGxrbv6rF3fbKwm71oqBISIgMUpTQ
nAWHqmiDWXx+sgIziMXROogsYe/5BtBiwPZ7eNinRyPx3Lb5g3qOBNcx/U3zKHoZ
hcTNGsQpiGpVMculmBuHEQXGzCyBmIQweJXGfqHmLi5YOaSnrO3iQeUqC+O0gTNn
XGJdZxLuMm9U9vyvTFQU8O7xCKMGGPGRuRemvOMYS23hsqpnz9H6mv55S6+YCkxI
wHfx1nENFfHJq9uZslwYMlf5ko3g5wQIfnkH8S3GE/Gl8VeZf7LkuHn3u+SGCK3D
xCFrJpryAXXq23itSBi5172QZKKbeGzpD2ZYzwIDAQABAoIBADRpKbzSj2Ktr4BD
xsguMk76rUCIq+Ho25npxT69aJ19KERGCrPChO0jv5yQ/UlClDPZrPI60w2LdTIM
LLxwwoJHx3XBfbb7/KuQeLGBjU5bop1tozX4JIcGsdzi1ExG2v+XdoydbdTwiNZc
udark1/AFpm0le0TO+yMiEbSpasAUetmwmBLl0ld1qOoEFNM4ueLtM0/JE4kQHJC
a6a0fS1D+TQsPCdziW80X2hpwCIbg4CF3LqR521SfwIzRscbaXzCzeBNCShJE8Nm
Qun91Szze80aaFBBIwMKbppEx5iYCCKeTyO3yswRuZ44+iBe/piB3F/qRKnjBwNS
LeL9NOECgYEA4xMUueF8HN23QeC/KZ/6LALwyNBtT7JP7YbW6dvo+3F0KSPxbDL1
nMmeOTV8suAlGslsE+GuvPU6M9fUCxpbVnbYH5hEuh0v25WRekFv14H/yEpVF26o
OHeilUIzpRTUOndgkmN8cXNp2xkzs2Yp7F2RSlog2kXQOYgC91YmvjECgYEAxtbC
OzxUUjebeqnolo8wYur42BievUDqyD9pCnaix+2F59SdLDgirJIOEtv31jjpLaIh
nO8akxMCPNwhEgVzelI2k+jJ+Kermi3+tEAnlBBDf/tMEGNav2XE3MnYkDt2jdza
fganfhKQwAufyq2lUHC/Slh+xcLPepTef6zFxv8CgYB6ZEJ7ninDdU3dWEIxMWUq
a7tUweLpXfbu1Arqqfl97bzqn9D0vNLd215I/6di0qWtNnvmi3Ifrx3b660C/wXU
KOJ8xRnmJu0wsgFjn/mkcxFm54nNw3swVGtxf+lORVfO26FVxgHBNLANxBu1yo82
M4ioRsQGYjLFj6XpoqnnQQKBgE8RpYlCs1FCdZxwpmIArMAZKj1chPtDLlnVBWM4
zABuzpni7WFhLUCsj9YmDMbuOKOB3pX2av3jSDeFXc05x7LzsGpe3rn3iwCzm554
CIUTdpQVDSlTKQoFYSRfS7QHQVymX2hQIxi6Lz9/H9rL9Hopa5gX2smvbywSuOvS
e49nAoGAM7TQ9iFBsygXxbxh2EL47nw/LBxbDm86TazpSKrHd9pV6Z/Xv870QEf7
cZJ9T/KRGkxlK8L6B7uzeckpk4uMWuDRiymnbg2pqk94oELKkh0iLnlGSMf3IPO8
qIRFQsQfA3PaU6SG/izaB1lquBRtIj5kAW2ZXI4O9l5V39Y5/n4=
-----END RSA PRIVATE KEY-----

View File

@ -1,9 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsF9ARBhmY9ZwDD3f2LxF
Gxrbv6rF3fbKwm71oqBISIgMUpTQnAWHqmiDWXx+sgIziMXROogsYe/5BtBiwPZ7
eNinRyPx3Lb5g3qOBNcx/U3zKHoZhcTNGsQpiGpVMculmBuHEQXGzCyBmIQweJXG
fqHmLi5YOaSnrO3iQeUqC+O0gTNnXGJdZxLuMm9U9vyvTFQU8O7xCKMGGPGRuRem
vOMYS23hsqpnz9H6mv55S6+YCkxIwHfx1nENFfHJq9uZslwYMlf5ko3g5wQIfnkH
8S3GE/Gl8VeZf7LkuHn3u+SGCK3DxCFrJpryAXXq23itSBi5172QZKKbeGzpD2ZY
zwIDAQAB
-----END PUBLIC KEY-----

View File

@ -18,6 +18,7 @@ import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
@ -61,7 +62,6 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFa
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
@ -73,8 +73,7 @@ public class DocumentLevelSecurityTests extends SecurityIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ParentJoinPlugin.class,
InternalSettingsPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, ParentJoinPlugin.class, InternalSettingsPlugin.class);
}
@Override

View File

@ -18,6 +18,7 @@ import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
@ -72,7 +73,7 @@ public class FieldLevelSecurityTests extends SecurityIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ParentJoinPlugin.class,
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, ParentJoinPlugin.class,
InternalSettingsPlugin.class);
}

View File

@ -34,7 +34,6 @@ import org.junit.BeforeClass;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.file.Path;
import java.util.Arrays;
@ -221,8 +220,8 @@ public abstract class AbstractAdLdapRealmTestCase extends SecurityIntegTestCase
}
@Override
protected boolean useGeneratedSSLConfig() {
return useGlobalSSL == false;
protected boolean transportSSLEnabled() {
return useGlobalSSL;
}
protected final void configureFileRoleMappings(Settings.Builder builder, List<RoleMappingEntry> mappings) {

View File

@ -48,11 +48,11 @@ public abstract class AbstractLicenseServiceTestCase extends ESTestCase {
environment = mock(Environment.class);
}
protected void setInitialState(License license, XPackLicenseState licenseState) {
protected void setInitialState(License license, XPackLicenseState licenseState, Settings settings) {
Path tempDir = createTempDir();
when(environment.configFile()).thenReturn(tempDir);
licenseType = randomBoolean() ? "trial" : "basic";
Settings settings = Settings.builder().put(LicenseService.SELF_GENERATED_LICENSE_TYPE.getKey(), licenseType).build();
settings = Settings.builder().put(settings).put(LicenseService.SELF_GENERATED_LICENSE_TYPE.getKey(), licenseType).build();
licenseService = new LicenseService(settings, clusterService, clock, environment, resourceWatcherService, licenseState);
ClusterState state = mock(ClusterState.class);
final ClusterBlocks noBlock = ClusterBlocks.builder().build();

View File

@ -5,10 +5,12 @@
*/
package org.elasticsearch.license;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CountDownLatch;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;
@ -37,7 +39,7 @@ public abstract class AbstractLicensesIntegrationTestCase extends ESIntegTestCas
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Collections.<Class<? extends Plugin>>singletonList(XPackPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class);
}
@Override

View File

@ -13,6 +13,7 @@ import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.junit.After;
import org.junit.Before;
@ -33,7 +34,7 @@ public class LicenseClusterChangeTests extends AbstractLicenseServiceTestCase {
@Before
public void setup() {
licenseState = new TestUtils.AssertingLicenseState();
setInitialState(null, licenseState);
setInitialState(null, licenseState, Settings.EMPTY);
licenseService.start();
}

View File

@ -8,6 +8,7 @@ package org.elasticsearch.license;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.common.settings.Settings;
import org.mockito.ArgumentCaptor;
import org.mockito.Mockito;
@ -19,7 +20,7 @@ public class LicenseRegistrationTests extends AbstractLicenseServiceTestCase {
public void testTrialLicenseRequestOnEmptyLicenseState() throws Exception {
XPackLicenseState licenseState = new XPackLicenseState();
setInitialState(null, licenseState);
setInitialState(null, licenseState, Settings.EMPTY);
when(discoveryNodes.isLocalNodeElectedMaster()).thenReturn(true);
licenseService.start();

View File

@ -5,6 +5,7 @@
*/
package org.elasticsearch.license;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
@ -36,7 +37,7 @@ public class LicenseServiceClusterNotRecoveredTests extends AbstractLicensesInte
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, Netty4Plugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, Netty4Plugin.class);
}
@Override

View File

@ -5,6 +5,7 @@
*/
package org.elasticsearch.license;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
@ -42,7 +43,7 @@ public class LicenseServiceClusterTests extends AbstractLicensesIntegrationTestC
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, Netty4Plugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, Netty4Plugin.class);
}
@Override

View File

@ -0,0 +1,53 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.license;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.SecurityIntegTestCase;
import org.elasticsearch.transport.Netty4Plugin;
import org.elasticsearch.xpack.XPackPlugin;
import java.util.Arrays;
import java.util.Collection;
import static org.hamcrest.CoreMatchers.equalTo;
/**
* Basic integration test that checks whether a license can be upgraded to a production license when TLS is enabled, and vice versa.
*/
public class LicenseServiceWithSecurityTests extends SecurityIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, Netty4Plugin.class);
}
@Override
protected Collection<Class<? extends Plugin>> transportClientPlugins() {
return nodePlugins();
}
public void testLicenseUpgradeFailsWithoutTLS() throws Exception {
assumeFalse("transport ssl is enabled", isTransportSSLEnabled());
LicensingClient licensingClient = new LicensingClient(client());
License license = licensingClient.prepareGetLicense().get().license();
License prodLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueHours(24));
IllegalStateException ise = expectThrows(IllegalStateException.class, () -> licensingClient.preparePutLicense(prodLicense).get());
assertEquals("Can not upgrade to a production license unless TLS is configured or security is disabled", ise.getMessage());
assertThat(licensingClient.prepareGetLicense().get().license(), equalTo(license));
}
public void testLicenseUpgradeSucceedsWithTLS() throws Exception {
assumeTrue("transport ssl is disabled", isTransportSSLEnabled());
LicensingClient licensingClient = new LicensingClient(client());
License prodLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueHours(24));
PutLicenseResponse putLicenseResponse = licensingClient.preparePutLicense(prodLicense).get();
assertEquals(putLicenseResponse.status(), LicensesStatus.VALID);
assertThat(licensingClient.prepareGetLicense().get().license(), equalTo(prodLicense));
}
}

View File

@ -7,6 +7,7 @@ package org.elasticsearch.license;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import static org.hamcrest.Matchers.equalTo;
@ -20,7 +21,7 @@ public class LicensesAcknowledgementTests extends AbstractLicenseServiceTestCase
public void testAcknowledgment() throws Exception {
XPackLicenseState licenseState = new XPackLicenseState();
setInitialState(TestUtils.generateSignedLicense("trial", TimeValue.timeValueHours(2)), licenseState);
setInitialState(TestUtils.generateSignedLicense("trial", TimeValue.timeValueHours(2)), licenseState, Settings.EMPTY);
licenseService.start();
// try installing a signed license
License signedLicense = TestUtils.generateSignedLicense("basic", TimeValue.timeValueHours(10));
@ -37,6 +38,58 @@ public class LicensesAcknowledgementTests extends AbstractLicenseServiceTestCase
verify(clusterService, times(1)).submitStateUpdateTask(any(String.class), any(ClusterStateUpdateTask.class));
}
public void testRejectUpgradeToProductionWithoutTLS() throws Exception {
XPackLicenseState licenseState = new XPackLicenseState();
setInitialState(TestUtils.generateSignedLicense("trial", TimeValue.timeValueHours(2)), licenseState, Settings.EMPTY);
licenseService.start();
// try installing a signed license
License signedLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueHours(10));
PutLicenseRequest putLicenseRequest = new PutLicenseRequest().license(signedLicense);
// the registration must be rejected outright because TLS is not configured
IllegalStateException ise = expectThrows(IllegalStateException.class, () ->
licenseService.registerLicense(putLicenseRequest, new AssertingLicensesUpdateResponse(false, LicensesStatus.VALID, true)));
assertEquals("Can not upgrade to a production license unless TLS is configured or security is disabled", ise.getMessage());
}
public void testUpgradeToProductionWithoutTLSAndSecurityDisabled() throws Exception {
XPackLicenseState licenseState = new XPackLicenseState();
setInitialState(TestUtils.generateSignedLicense("trial", TimeValue.timeValueHours(2)), licenseState, Settings.builder()
.put("xpack.security.enabled", false).build());
licenseService.start();
// try installing a signed license
License signedLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueHours(10));
PutLicenseRequest putLicenseRequest = new PutLicenseRequest().license(signedLicense);
licenseService.registerLicense(putLicenseRequest, new AssertingLicensesUpdateResponse(false, LicensesStatus.VALID, true));
assertThat(licenseService.getLicense(), not(signedLicense));
verify(clusterService, times(1)).submitStateUpdateTask(any(String.class), any(ClusterStateUpdateTask.class));
// try installing a signed license with acknowledgement
putLicenseRequest = new PutLicenseRequest().license(signedLicense).acknowledge(true);
// ensure license was installed and no acknowledgment message was returned
licenseService.registerLicense(putLicenseRequest, new AssertingLicensesUpdateResponse(true, LicensesStatus.VALID, false));
verify(clusterService, times(2)).submitStateUpdateTask(any(String.class), any(ClusterStateUpdateTask.class));
}
public void testUpgradeToProductionWithTLSAndSecurity() throws Exception {
XPackLicenseState licenseState = new XPackLicenseState();
setInitialState(TestUtils.generateSignedLicense("trial", TimeValue.timeValueHours(2)), licenseState, Settings.builder()
.put("xpack.security.enabled", true)
.put("xpack.security.transport.ssl.enabled", true).build());
licenseService.start();
// try installing a signed license
License signedLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueHours(10));
PutLicenseRequest putLicenseRequest = new PutLicenseRequest().license(signedLicense);
licenseService.registerLicense(putLicenseRequest, new AssertingLicensesUpdateResponse(false, LicensesStatus.VALID, true));
assertThat(licenseService.getLicense(), not(signedLicense));
verify(clusterService, times(1)).submitStateUpdateTask(any(String.class), any(ClusterStateUpdateTask.class));
// try installing a signed license with acknowledgement
putLicenseRequest = new PutLicenseRequest().license(signedLicense).acknowledge(true);
// ensure license was installed and no acknowledgment message was returned
licenseService.registerLicense(putLicenseRequest, new AssertingLicensesUpdateResponse(true, LicensesStatus.VALID, false));
verify(clusterService, times(2)).submitStateUpdateTask(any(String.class), any(ClusterStateUpdateTask.class));
}
private static class AssertingLicensesUpdateResponse implements ActionListener<PutLicenseResponse> {
private final boolean expectedAcknowledgement;
private final LicensesStatus expectedStatus;

View File

@ -77,19 +77,19 @@ public class LicensesManagerServiceTests extends XPackSingleNodeTestCase {
// put gold license
TestUtils.registerAndAckSignedLicenses(licenseService, goldLicense, LicensesStatus.VALID);
LicensesMetaData licensesMetaData = clusterService.state().metaData().custom(LicensesMetaData.TYPE);
assertThat(licenseService.getLicense(licensesMetaData), equalTo(goldLicense));
assertThat(LicenseService.getLicense(licensesMetaData), equalTo(goldLicense));
License platinumLicense = TestUtils.generateSignedLicense("platinum", TimeValue.timeValueSeconds(3));
// put platinum license
TestUtils.registerAndAckSignedLicenses(licenseService, platinumLicense, LicensesStatus.VALID);
licensesMetaData = clusterService.state().metaData().custom(LicensesMetaData.TYPE);
assertThat(licenseService.getLicense(licensesMetaData), equalTo(platinumLicense));
assertThat(LicenseService.getLicense(licensesMetaData), equalTo(platinumLicense));
License basicLicense = TestUtils.generateSignedLicense("basic", TimeValue.timeValueSeconds(3));
// put basic license
TestUtils.registerAndAckSignedLicenses(licenseService, basicLicense, LicensesStatus.VALID);
licensesMetaData = clusterService.state().metaData().custom(LicensesMetaData.TYPE);
assertThat(licenseService.getLicense(licensesMetaData), equalTo(basicLicense));
assertThat(LicenseService.getLicense(licensesMetaData), equalTo(basicLicense));
}
public void testInvalidLicenseStorage() throws Exception {

View File

@ -7,6 +7,7 @@ package org.elasticsearch.license;
import com.carrotsearch.randomizedtesting.RandomizedTest;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.joda.DateMathParser;
import org.elasticsearch.common.joda.FormatDateTimeFormatter;
@ -345,4 +346,8 @@ public class TestUtils {
super.update(mode, active);
}
}
public static void putLicense(MetaData.Builder builder, License license) {
builder.putCustom(LicensesMetaData.TYPE, new LicensesMetaData(license));
}
}
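The new putLicense helper is consumed by the TLS join-validator test added to SecurityTests later in this commit; a minimal usage sketch follows (the sketch class and method names themselves are made up):

import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.License;
import org.elasticsearch.license.TestUtils;

// Illustrative only: build a ClusterState that carries a signed license, the way the
// SecurityTests join-validator test further down does.
final class PutLicenseUsageSketch {
    static ClusterState clusterStateWithLicense() throws Exception {
        MetaData.Builder metaData = MetaData.builder();
        License license = TestUtils.generateSignedLicense(TimeValue.timeValueHours(24));
        TestUtils.putLicense(metaData, license);
        return ClusterState.builder(ClusterName.DEFAULT).metaData(metaData.build()).build();
    }
}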

View File

@ -9,6 +9,7 @@ import org.elasticsearch.action.Action;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
@ -111,6 +112,7 @@ public abstract class TribeTransportTestCase extends ESIntegTestCase {
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(XPackPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}

View File

@ -44,17 +44,16 @@ import org.junit.BeforeClass;
import org.junit.Rule;
import org.junit.rules.ExternalResource;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.function.Function;
import java.util.stream.Collectors;
import static org.elasticsearch.test.SecuritySettingsSource.TEST_PASSWORD_SECURE_STRING;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_INDEX_NAME;
@ -85,7 +84,7 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
@BeforeClass
public static void generateBootstrapPassword() {
BOOTSTRAP_PASSWORD = new SecureString("FOOBAR".toCharArray());
BOOTSTRAP_PASSWORD = TEST_PASSWORD_SECURE_STRING.clone();
}
//UnicastZen requires the number of nodes in a cluster to generate the unicast configuration.
@ -170,12 +169,12 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
case SUITE:
if (customSecuritySettingsSource == null) {
customSecuritySettingsSource =
new CustomSecuritySettingsSource(useGeneratedSSLConfig(), createTempDir(), currentClusterScope);
new CustomSecuritySettingsSource(transportSSLEnabled(), createTempDir(), currentClusterScope);
}
break;
case TEST:
customSecuritySettingsSource =
new CustomSecuritySettingsSource(useGeneratedSSLConfig(), createTempDir(), currentClusterScope);
new CustomSecuritySettingsSource(transportSSLEnabled(), createTempDir(), currentClusterScope);
break;
}
}
@ -326,7 +325,7 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
/**
* Allows subclasses to control whether SSL is enabled on the transport layer
*/
protected boolean useGeneratedSSLConfig() {
protected boolean transportSSLEnabled() {
return randomBoolean();
}
@ -340,8 +339,8 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
private class CustomSecuritySettingsSource extends SecuritySettingsSource {
private CustomSecuritySettingsSource(boolean useGeneratedSSLConfig, Path configDir, Scope scope) {
super(maxNumberOfNodes(), useGeneratedSSLConfig, configDir, scope);
private CustomSecuritySettingsSource(boolean sslEnabled, Path configDir, Scope scope) {
super(maxNumberOfNodes(), sslEnabled, configDir, scope);
}
@Override
@ -520,4 +519,8 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
}
return null;
}
protected boolean isTransportSSLEnabled() {
return customSecuritySettingsSource.isSslEnabled();
}
}
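As the rename suggests, subclasses now opt in to transport TLS rather than opting out of generated SSL configuration. A minimal sketch of the new override pattern (the subclass name and test are hypothetical):

import org.elasticsearch.test.SecurityIntegTestCase;

// Illustrative subclass: force transport TLS on for this suite. Suites that keep the
// randomized default can instead guard individual tests with isTransportSSLEnabled().
public class ExampleTransportSslIT extends SecurityIntegTestCase {

    @Override
    protected boolean transportSSLEnabled() {
        return true; // the old idiom for "use real certs" was useGeneratedSSLConfig() == false
    }

    public void testRequiresTransportTls() throws Exception {
        assumeTrue("transport ssl is disabled", isTransportSSLEnabled());
        // exercise a TLS-only code path here
    }
}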

View File

@ -79,7 +79,7 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
private final Path parentFolder;
private final String subfolderPrefix;
private final boolean useGeneratedSSLConfig;
private final boolean sslEnabled;
private final boolean hostnameVerificationEnabled;
private final boolean usePEM;
@ -87,15 +87,15 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
* Creates a new {@link org.elasticsearch.test.NodeConfigurationSource} for the security configuration.
*
* @param numOfNodes the number of nodes for proper unicast configuration (can be more than actually available)
* @param useGeneratedSSLConfig whether ssl key/cert should be auto-generated
* @param sslEnabled whether ssl is enabled
* @param parentFolder the parent folder that will contain all of the configuration files that need to be created
* @param scope the scope of the test that is requiring an instance of SecuritySettingsSource
*/
public SecuritySettingsSource(int numOfNodes, boolean useGeneratedSSLConfig, Path parentFolder, Scope scope) {
public SecuritySettingsSource(int numOfNodes, boolean sslEnabled, Path parentFolder, Scope scope) {
super(numOfNodes, DEFAULT_SETTINGS);
this.parentFolder = parentFolder;
this.subfolderPrefix = scope.name();
this.useGeneratedSSLConfig = useGeneratedSSLConfig;
this.sslEnabled = sslEnabled;
this.hostnameVerificationEnabled = randomBoolean();
this.usePEM = randomBoolean();
}
@ -203,20 +203,24 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
}
private void addNodeSSLSettings(Settings.Builder builder) {
if (usePEM) {
addSSLSettingsForPEMFiles(builder, "",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.pem", "testnode",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.crt",
Arrays.asList("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode-client-profile.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/active-directory-ca.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/openldap.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.crt"),
useGeneratedSSLConfig, hostnameVerificationEnabled, false);
if (sslEnabled) {
if (usePEM) {
addSSLSettingsForPEMFiles(builder, "",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.pem", "testnode",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.crt",
Arrays.asList("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode-client-profile.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/active-directory-ca.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/openldap.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.crt"),
sslEnabled, hostnameVerificationEnabled, false);
} else {
addSSLSettingsForStore(builder, "", "/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.jks",
"testnode", useGeneratedSSLConfig, hostnameVerificationEnabled, false);
} else {
addSSLSettingsForStore(builder, "", "/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.jks",
"testnode", sslEnabled, hostnameVerificationEnabled, false);
}
} else if (randomBoolean()) {
builder.put(XPackSettings.TRANSPORT_SSL_ENABLED.getKey(), false);
}
}
@ -227,10 +231,10 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt",
Arrays.asList("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.crt",
"/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt"),
useGeneratedSSLConfig, hostnameVerificationEnabled, true);
sslEnabled, hostnameVerificationEnabled, true);
} else {
addSSLSettingsForStore(builder, prefix, "/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.jks",
"testclient", useGeneratedSSLConfig, hostnameVerificationEnabled, true);
"testclient", sslEnabled, hostnameVerificationEnabled, true);
}
}
@ -241,31 +245,30 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
* @param password the password
*/
public static void addSSLSettingsForStore(Settings.Builder builder, String resourcePathToStore, String password) {
addSSLSettingsForStore(builder, "", resourcePathToStore, password, false, true, true);
addSSLSettingsForStore(builder, "", resourcePathToStore, password, true, true, true);
}
private static void addSSLSettingsForStore(Settings.Builder builder, String prefix, String resourcePathToStore, String password,
boolean useGeneratedSSLConfig, boolean hostnameVerificationEnabled,
boolean sslEnabled, boolean hostnameVerificationEnabled,
boolean transportClient) {
Path store = resolveResourcePath(resourcePathToStore);
if (transportClient == false) {
builder.put(prefix + "xpack.security.http.ssl.enabled", false);
}
builder.put(XPackSettings.TRANSPORT_SSL_ENABLED.getKey(), sslEnabled);
builder.put(prefix + "xpack.ssl.verification_mode", hostnameVerificationEnabled ? "full" : "certificate");
if (useGeneratedSSLConfig == false) {
builder.put(prefix + "xpack.ssl.keystore.path", store);
if (transportClient) {
// continue using insecure settings for clients until we figure out what to do there...
builder.put(prefix + "xpack.ssl.keystore.password", password);
} else {
addSecureSettings(builder, secureSettings ->
secureSettings.setString(prefix + "xpack.ssl.keystore.secure_password", password));
}
builder.put(prefix + "xpack.ssl.keystore.path", store);
if (transportClient) {
// continue using insecure settings for clients until we figure out what to do there...
builder.put(prefix + "xpack.ssl.keystore.password", password);
} else {
addSecureSettings(builder, secureSettings ->
secureSettings.setString(prefix + "xpack.ssl.keystore.secure_password", password));
}
if (useGeneratedSSLConfig == false && true /*randomBoolean()*/) {
if (randomBoolean()) {
builder.put(prefix + "xpack.ssl.truststore.path", store);
if (transportClient) {
// continue using insecure settings for clients until we figure out what to do there...
@ -278,29 +281,28 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
}
private static void addSSLSettingsForPEMFiles(Settings.Builder builder, String prefix, String keyPath, String password,
String certificatePath, List<String> trustedCertificates, boolean useGeneratedSSLConfig,
String certificatePath, List<String> trustedCertificates, boolean sslEnabled,
boolean hostnameVerificationEnabled, boolean transportClient) {
if (transportClient == false) {
builder.put(prefix + "xpack.security.http.ssl.enabled", false);
}
builder.put(XPackSettings.TRANSPORT_SSL_ENABLED.getKey(), sslEnabled);
builder.put(prefix + "xpack.ssl.verification_mode", hostnameVerificationEnabled ? "full" : "certificate");
if (useGeneratedSSLConfig == false) {
builder.put(prefix + "xpack.ssl.key", resolveResourcePath(keyPath))
.put(prefix + "xpack.ssl.certificate", resolveResourcePath(certificatePath));
if (transportClient) {
// continue using insecure settings for clients until we figure out what to do there...
builder.put(prefix + "xpack.ssl.key_passphrase", password);
} else {
addSecureSettings(builder, secureSettings ->
secureSettings.setString(prefix + "xpack.ssl.secure_key_passphrase", password));
}
builder.put(prefix + "xpack.ssl.key", resolveResourcePath(keyPath))
.put(prefix + "xpack.ssl.certificate", resolveResourcePath(certificatePath));
if (transportClient) {
// continue using insecure settings for clients until we figure out what to do there...
builder.put(prefix + "xpack.ssl.key_passphrase", password);
} else {
addSecureSettings(builder, secureSettings ->
secureSettings.setString(prefix + "xpack.ssl.secure_key_passphrase", password));
}
if (trustedCertificates.isEmpty() == false) {
builder.put(prefix + "xpack.ssl.certificate_authorities",
Strings.arrayToCommaDelimitedString(resolvePathsToString(trustedCertificates)));
}
if (trustedCertificates.isEmpty() == false) {
builder.put(prefix + "xpack.ssl.certificate_authorities",
Strings.arrayToCommaDelimitedString(resolvePathsToString(trustedCertificates)));
}
}
@ -337,4 +339,8 @@ public class SecuritySettingsSource extends ClusterDiscoveryConfiguration.Unicas
throw new ElasticsearchException("exception while reading the store", e);
}
}
public boolean isSslEnabled() {
return sslEnabled;
}
}
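For orientation, the keystore branch above boils down to node settings of roughly this shape when sslEnabled is true; the keystore path and password below are placeholders for the classpath fixtures, and the secure-settings plumbing plus the randomized truststore and hostname-verification choices are omitted:

import org.elasticsearch.common.settings.Settings;

// Illustrative only: approximate node settings built by the keystore branch when sslEnabled
// is true. When it is false, no key material is configured and
// xpack.security.transport.ssl.enabled is sometimes set to false explicitly.
final class NodeSslSettingsSketch {
    static Settings enabledTransportSsl() {
        return Settings.builder()
                .put("xpack.security.transport.ssl.enabled", true)
                .put("xpack.ssl.verification_mode", "certificate")            // or "full" when hostname verification is on
                .put("xpack.ssl.keystore.path", "/placeholder/testnode.jks")  // resolved from the classpath in the real code
                .put("xpack.ssl.keystore.password", "testnode")               // nodes route this through secure settings
                .build();
    }
}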

View File

@ -87,7 +87,7 @@ public class AggregationDataExtractorTests extends ESTestCase {
types = Arrays.asList("type-1", "type-2");
query = QueryBuilders.matchAllQuery();
aggs = new AggregatorFactories.Builder()
.addAggregator(AggregationBuilders.histogram("time").field("time").subAggregation(
.addAggregator(AggregationBuilders.histogram("time").field("time").interval(1000).subAggregation(
AggregationBuilders.terms("airline").field("airline").subAggregation(
AggregationBuilders.avg("responsetime").field("responsetime"))));
}
@ -122,7 +122,7 @@ public class AggregationDataExtractorTests extends ESTestCase {
String searchRequest = capturedSearchRequests.get(0).toString().replaceAll("\\s", "");
assertThat(searchRequest, containsString("\"size\":0"));
assertThat(searchRequest, containsString("\"query\":{\"bool\":{\"filter\":[{\"match_all\":{\"boost\":1.0}}," +
"{\"range\":{\"time\":{\"from\":1000,\"to\":4000,\"include_lower\":true,\"include_upper\":false," +
"{\"range\":{\"time\":{\"from\":0,\"to\":4000,\"include_lower\":true,\"include_upper\":false," +
"\"format\":\"epoch_millis\",\"boost\":1.0}}}]"));
assertThat(searchRequest,
stringContainsInOrder(Arrays.asList("aggregations", "histogram", "time", "terms", "airline", "avg", "responsetime")));

View File

@ -41,9 +41,12 @@ import static org.mockito.Mockito.when;
public class AggregationToJsonProcessorTests extends ESTestCase {
private long keyValuePairsWritten = 0;
private String timeField = "time";
private boolean includeDocCount = true;
private long startTime = 0;
private long histogramInterval = 1000;
public void testProcessGivenMultipleDateHistograms() {
List<Histogram.Bucket> nestedHistogramBuckets = Arrays.asList(
createHistogramBucket(1000L, 3, Collections.singletonList(createMax("metric1", 1200))),
createHistogramBucket(2000L, 5, Collections.singletonList(createMax("metric1", 2800)))
@ -55,7 +58,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Sets.newHashSet("my_field"), histogramBuckets));
() -> aggToString(Sets.newHashSet("my_field"), histogramBuckets));
assertThat(e.getMessage(), containsString("More than one Date histogram cannot be used in the aggregation. " +
"[buckets] is another instance of a Date histogram"));
}
@ -67,7 +70,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Collections.emptySet(), histogramBuckets));
() -> aggToString(Collections.emptySet(), histogramBuckets));
assertThat(e.getMessage(), containsString("Missing max aggregation for time_field [time]"));
}
@ -78,7 +81,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Collections.emptySet(), histogramBuckets));
() -> aggToString(Collections.emptySet(), histogramBuckets));
assertThat(e.getMessage(), containsString("Missing max aggregation for time_field [time]"));
}
@ -88,7 +91,8 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createHistogramBucket(2000L, 5, Collections.singletonList(createMax("timestamp", 2800)))
);
String json = aggToString("timestamp", Collections.emptySet(), histogramBuckets);
timeField = "timestamp";
String json = aggToString(Collections.emptySet(), histogramBuckets);
assertThat(json, equalTo("{\"timestamp\":1200,\"doc_count\":3} {\"timestamp\":2800,\"doc_count\":5}"));
assertThat(keyValuePairsWritten, equalTo(4L));
@ -100,7 +104,8 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createHistogramBucket(2000L, 5, Collections.singletonList(createMax("time", 2000)))
);
String json = aggToString("time", Collections.emptySet(), false, histogramBuckets, 0L);
includeDocCount = false;
String json = aggToString(Collections.emptySet(), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000} {\"time\":2000}"));
assertThat(keyValuePairsWritten, equalTo(2L));
@ -132,7 +137,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
new Term("B", 2, Collections.singletonList(histogramB)));
String json = aggToString("time", Sets.newHashSet("my_value", "my_field"), true, createAggs(Collections.singletonList(terms)));
String json = aggToString(Sets.newHashSet("my_value", "my_field"), createAggs(Collections.singletonList(terms)));
assertThat(json, equalTo("{\"my_field\":\"A\",\"time\":1000,\"my_value\":1.0,\"doc_count\":3} " +
"{\"my_field\":\"B\",\"time\":1000,\"my_value\":10.0,\"doc_count\":6} " +
"{\"my_field\":\"A\",\"time\":2000,\"my_value\":2.0,\"doc_count\":4} " +
@ -152,7 +157,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createMax("time", 3000), createSingleValue("my_value", 3.0)))
);
String json = aggToString("time", Sets.newHashSet("my_value"), histogramBuckets);
String json = aggToString(Sets.newHashSet("my_value"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000,\"my_value\":1.0,\"doc_count\":3} " +
"{\"time\":2000,\"doc_count\":3} " +
@ -173,7 +178,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createTerms("my_field", new Term("c", 4), new Term("b", 3))))
);
String json = aggToString("time", Sets.newHashSet("time", "my_field"), histogramBuckets);
String json = aggToString(Sets.newHashSet("time", "my_field"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1100,\"my_field\":\"a\",\"doc_count\":1} " +
"{\"time\":1100,\"my_field\":\"b\",\"doc_count\":2} " +
@ -199,7 +204,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createTerms("my_field", new Term("c", 4, "my_value", 41.0), new Term("b", 3, "my_value", 42.0))))
);
String json = aggToString("time", Sets.newHashSet("my_field", "my_value"), histogramBuckets);
String json = aggToString(Sets.newHashSet("my_field", "my_value"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000,\"my_field\":\"a\",\"my_value\":11.0,\"doc_count\":1} " +
"{\"time\":1000,\"my_field\":\"b\",\"my_value\":12.0,\"doc_count\":2} " +
@ -246,7 +251,8 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createTerms("my_field", new Term("c", 4, c4NumericAggs), new Term("b", 3, b4NumericAggs))))
);
String json = aggToString("time", Sets.newHashSet("my_field", "my_value", "my_value2"), false, histogramBuckets, 0L);
includeDocCount = false;
String json = aggToString(Sets.newHashSet("my_field", "my_value", "my_value2"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000,\"my_field\":\"a\",\"my_value\":111.0,\"my_value2\":112.0} " +
"{\"time\":1000,\"my_field\":\"b\",\"my_value2\":122.0} " +
@ -265,7 +271,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
when(histogramBucket.getAggregations()).thenReturn(subAggs);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Sets.newHashSet("nested-agg"), histogramBucket));
() -> aggToString(Sets.newHashSet("nested-agg"), histogramBucket));
assertThat(e.getMessage(), containsString("Unsupported aggregation type [nested-agg]"));
}
@ -279,7 +285,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
when(histogramBucket.getAggregations()).thenReturn(subAggs);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Sets.newHashSet("terms_1", "terms_2"), histogramBucket));
() -> aggToString(Sets.newHashSet("terms_1", "terms_2"), histogramBucket));
assertThat(e.getMessage(), containsString("Multiple bucket aggregations at the same level are not supported"));
}
@ -288,7 +294,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
Max maxAgg = createMax("max_value", 1200);
Histogram.Bucket histogramBucket = createHistogramBucket(1000L, 2, Arrays.asList(terms, createMax("time", 1000), maxAgg));
String json = aggToString("time", Sets.newHashSet("terms", "max_value"), histogramBucket);
String json = aggToString(Sets.newHashSet("terms", "max_value"), histogramBucket);
assertThat(json, equalTo("{\"time\":1000,\"max_value\":1200.0,\"terms\":\"a\",\"doc_count\":1} " +
"{\"time\":1000,\"max_value\":1200.0,\"terms\":\"b\",\"doc_count\":2}"));
}
@ -298,7 +304,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
Terms terms = createTerms("terms", new Term("a", 1), new Term("b", 2));
Histogram.Bucket histogramBucket = createHistogramBucket(1000L, 2, Arrays.asList(createMax("time", 1000), maxAgg, terms));
String json = aggToString("time", Sets.newHashSet("terms", "max_value"), histogramBucket);
String json = aggToString(Sets.newHashSet("terms", "max_value"), histogramBucket);
assertThat(json, equalTo("{\"time\":1000,\"max_value\":1200.0,\"terms\":\"a\",\"doc_count\":1} " +
"{\"time\":1000,\"max_value\":1200.0,\"terms\":\"b\",\"doc_count\":2}"));
}
@ -320,7 +326,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createTerms("my_field", new Term("c", 4), new Term("b", 3))))
);
String json = aggToString("time", Sets.newHashSet("time", "my_value"), histogramBuckets);
String json = aggToString(Sets.newHashSet("time", "my_value"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1100,\"my_value\":1.0,\"doc_count\":4} " +
"{\"time\":2200,\"my_value\":2.0,\"doc_count\":5} " +
@ -339,7 +345,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createMax("time", 4000), createPercentiles("my_field", 4.0)))
);
String json = aggToString("time", Sets.newHashSet("my_field"), histogramBuckets);
String json = aggToString(Sets.newHashSet("my_field"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000,\"my_field\":1.0,\"doc_count\":4} " +
"{\"time\":2000,\"my_field\":2.0,\"doc_count\":7} " +
@ -360,7 +366,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
);
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> aggToString("time", Sets.newHashSet("my_field"), histogramBuckets));
() -> aggToString(Sets.newHashSet("my_field"), histogramBuckets));
assertThat(e.getMessage(), containsString("Multi-percentile aggregation [my_field] is not supported"));
}
@ -368,7 +374,7 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
public void testBucketAggContainsRequiredAgg() throws IOException {
Set<String> fields = new HashSet<>();
fields.add("foo");
AggregationToJsonProcessor processor = new AggregationToJsonProcessor("time", fields, false, 0L);
AggregationToJsonProcessor processor = new AggregationToJsonProcessor("time", fields, false, 0L, 10L);
Terms termsAgg = mock(Terms.class);
when(termsAgg.getBuckets()).thenReturn(Collections.emptyList());
@ -395,6 +401,27 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
assertTrue(processor.bucketAggContainsRequiredAgg(termsAgg));
}
public void testBucketBeforeStartIsPruned() throws IOException {
List<Histogram.Bucket> histogramBuckets = Arrays.asList(
createHistogramBucket(1000L, 4, Arrays.asList(
createMax("time", 1000), createPercentiles("my_field", 1.0))),
createHistogramBucket(2000L, 7, Arrays.asList(
createMax("time", 2000), createPercentiles("my_field", 2.0))),
createHistogramBucket(3000L, 10, Arrays.asList(
createMax("time", 3000), createPercentiles("my_field", 3.0))),
createHistogramBucket(4000L, 14, Arrays.asList(
createMax("time", 4000), createPercentiles("my_field", 4.0)))
);
startTime = 2000;
histogramInterval = 1000;
String json = aggToString(Sets.newHashSet("my_field"), histogramBuckets);
assertThat(json, equalTo("{\"time\":2000,\"my_field\":2.0,\"doc_count\":7} " +
"{\"time\":3000,\"my_field\":3.0,\"doc_count\":10} " +
"{\"time\":4000,\"my_field\":4.0,\"doc_count\":14}"));
}
public void testBucketsBeforeStartArePruned() throws IOException {
List<Histogram.Bucket> histogramBuckets = Arrays.asList(
createHistogramBucket(1000L, 4, Arrays.asList(
@ -407,40 +434,50 @@ public class AggregationToJsonProcessorTests extends ESTestCase {
createMax("time", 4000), createPercentiles("my_field", 4.0)))
);
String json = aggToString("time", Sets.newHashSet("my_field"), true, histogramBuckets, 2000L);
startTime = 3000;
histogramInterval = 1000;
String json = aggToString(Sets.newHashSet("my_field"), histogramBuckets);
assertThat(json, equalTo("{\"time\":2000,\"my_field\":2.0,\"doc_count\":7} " +
assertThat(json, equalTo("{\"time\":3000,\"my_field\":3.0,\"doc_count\":10} " +
"{\"time\":4000,\"my_field\":4.0,\"doc_count\":14}"));
}
public void testFirstBucketIsNotPrunedIfItContainsStartTime() throws IOException {
List<Histogram.Bucket> histogramBuckets = Arrays.asList(
createHistogramBucket(1000L, 4, Arrays.asList(
createMax("time", 1000), createPercentiles("my_field", 1.0))),
createHistogramBucket(2000L, 7, Arrays.asList(
createMax("time", 2000), createPercentiles("my_field", 2.0))),
createHistogramBucket(3000L, 10, Arrays.asList(
createMax("time", 3000), createPercentiles("my_field", 3.0))),
createHistogramBucket(4000L, 14, Arrays.asList(
createMax("time", 4000), createPercentiles("my_field", 4.0)))
);
startTime = 1999;
histogramInterval = 1000;
String json = aggToString(Sets.newHashSet("my_field"), histogramBuckets);
assertThat(json, equalTo("{\"time\":1000,\"my_field\":1.0,\"doc_count\":4} " +
"{\"time\":2000,\"my_field\":2.0,\"doc_count\":7} " +
"{\"time\":3000,\"my_field\":3.0,\"doc_count\":10} " +
"{\"time\":4000,\"my_field\":4.0,\"doc_count\":14}"));
}
private String aggToString(String timeField, Set<String> fields, Histogram.Bucket bucket) throws IOException {
return aggToString(timeField, fields, true, Collections.singletonList(bucket), 0L);
private String aggToString(Set<String> fields, Histogram.Bucket bucket) throws IOException {
return aggToString(fields, Collections.singletonList(bucket));
}
private String aggToString(String timeField, Set<String> fields, List<Histogram.Bucket> buckets) throws IOException {
return aggToString(timeField, fields, true, buckets, 0L);
}
private String aggToString(String timeField, Set<String> fields, boolean includeDocCount, List<Histogram.Bucket> buckets,
long startTime)
throws IOException {
private String aggToString(Set<String> fields, List<Histogram.Bucket> buckets) throws IOException {
Histogram histogram = createHistogramAggregation("buckets", buckets);
return aggToString(timeField, fields, includeDocCount, createAggs(Collections.singletonList(histogram)), startTime);
return aggToString(fields, createAggs(Collections.singletonList(histogram)));
}
private String aggToString(String timeField, Set<String> fields, boolean includeDocCount, Aggregations aggregations)
throws IOException {
return aggToString(timeField, fields, includeDocCount, aggregations, 0L);
}
private String aggToString(String timeField, Set<String> fields, boolean includeDocCount, Aggregations aggregations, long startTime)
throws IOException {
private String aggToString(Set<String> fields, Aggregations aggregations) throws IOException {
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
AggregationToJsonProcessor processor = new AggregationToJsonProcessor(timeField, fields, includeDocCount, startTime);
AggregationToJsonProcessor processor = new AggregationToJsonProcessor(
timeField, fields, includeDocCount, startTime, histogramInterval);
processor.process(aggregations);
processor.writeDocs(10000, outputStream);
keyValuePairsWritten = processor.getKeyValueCount();
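The three pruning tests above pin down a simple rule: a histogram bucket is skipped only when the whole bucket ends on or before the requested start time. A standalone sketch of that rule, reconstructed from the test expectations rather than copied from AggregationToJsonProcessor:

final class BucketPruningSketch {

    // Reconstructed from the pruning tests above; not the actual implementation.
    static boolean isPruned(long bucketStart, long histogramInterval, long startTime) {
        return bucketStart + histogramInterval <= startTime;
    }

    public static void main(String[] args) {
        System.out.println(isPruned(1000L, 1000L, 2000L)); // true: bucket ends exactly at the start time
        System.out.println(isPruned(1000L, 1000L, 1999L)); // false: bucket still contains the start time
        System.out.println(isPruned(2000L, 1000L, 3000L)); // true: bucket lies entirely before the start time
    }
}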

View File

@ -11,6 +11,7 @@ import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
@ -60,6 +61,8 @@ import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.security.authc.TokenMetaData;
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
@ -80,10 +83,20 @@ abstract class MlNativeAutodetectIntegTestCase extends SecurityIntegTestCase {
@Override
protected Settings externalClusterClientSettings() {
Path keyStore;
try {
keyStore = PathUtils.get(getClass().getResource("/test-node.jks").toURI());
} catch (URISyntaxException e) {
throw new IllegalStateException("error trying to get keystore path", e);
}
Settings.Builder builder = Settings.builder();
builder.put(NetworkModule.TRANSPORT_TYPE_KEY, Security.NAME4);
builder.put(Security.USER_SETTING.getKey(), "x_pack_rest_user:" + SecuritySettingsSource.TEST_PASSWORD_SECURE_STRING);
builder.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), true);
builder.put("xpack.security.transport.ssl.enabled", true);
builder.put("xpack.security.transport.ssl.keystore.path", keyStore.toAbsolutePath().toString());
builder.put("xpack.security.transport.ssl.keystore.password", "keypass");
builder.put("xpack.security.transport.ssl.verification_mode", "certificate");
return builder.build();
}

View File

@ -12,6 +12,7 @@ import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.settings.Settings;
@ -93,7 +94,8 @@ public abstract class BaseMlIntegTestCase extends ESIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class,
ReindexPlugin.class);
}
@Override

View File

@ -53,14 +53,15 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
assertIndicesCount(0);
}
public void testIgnoreCurrentDataIndex() throws Exception {
public void testIgnoreCurrentAlertsIndex() throws Exception {
internalCluster().startNode();
// Will be deleted
createTimestampedIndex(now().minusDays(10));
// Won't be deleted
createDataIndex(now().minusDays(10));
createAlertsIndex(now().minusYears(1));
assertIndicesCount(2);
CleanerService.Listener listener = getListener();
@ -68,20 +69,30 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
assertIndicesCount(1);
}
public void testIgnoreDataIndicesInOtherVersions() throws Exception {
public void testDoesNotIgnoreIndicesInOtherVersions() throws Exception {
internalCluster().startNode();
// Will be deleted
createTimestampedIndex(now().minusDays(10));
createIndex(".monitoring-data-2", now().minusDays(10));
createAlertsIndex(now().minusYears(1), MonitoringTemplateUtils.OLD_TEMPLATE_VERSION);
createTimestampedIndex(now().minusDays(10), "0");
createTimestampedIndex(now().minusDays(10), "1");
createTimestampedIndex(now().minusYears(1), MonitoringTemplateUtils.OLD_TEMPLATE_VERSION);
// In the past, this index would not be deleted, but starting in 6.x the monitoring cluster
// will be required to be on a newer template version than the production cluster, so the indices
// pushed to it will never have an "unknown" version (relates to the
// _xpack/monitoring/_setup API)
createTimestampedIndex(now().minusDays(10), String.valueOf(Integer.MAX_VALUE));
// Won't be deleted
createIndex(MonitoringSettings.LEGACY_DATA_INDEX_NAME, now().minusYears(1));
createDataIndex(now().minusDays(10));
assertIndicesCount(3);
createAlertsIndex(now().minusYears(1));
assertIndicesCount(8);
CleanerService.Listener listener = getListener();
listener.onCleanUpIndices(days(0));
assertIndicesCount(2);
assertIndicesCount(1);
}
public void testIgnoreCurrentTimestampedIndex() throws Exception {
@ -92,6 +103,7 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
// Won't be deleted
createTimestampedIndex(now());
assertIndicesCount(2);
CleanerService.Listener listener = getListener();
@ -99,23 +111,6 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
assertIndicesCount(1);
}
public void testIgnoreTimestampedIndicesInOtherVersions() throws Exception {
internalCluster().startNode();
// Will be deleted
createTimestampedIndex(now().minusDays(10));
// Won't be deleted
createTimestampedIndex(now().minusDays(10), "0");
createTimestampedIndex(now().minusDays(10), "1");
createTimestampedIndex(now().minusDays(10), String.valueOf(Integer.MAX_VALUE));
assertIndicesCount(4);
CleanerService.Listener listener = getListener();
listener.onCleanUpIndices(days(0));
assertIndicesCount(3);
}
public void testDeleteIndices() throws Exception {
internalCluster().startNode();
@ -183,10 +178,17 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
}
/**
* Creates a monitoring data index from an earlier version (from when we used to have them).
* Creates a monitoring alerts index from the current version.
*/
protected void createDataIndex(DateTime creationDate) {
createIndex(".monitoring-data-2", creationDate);
protected void createAlertsIndex(final DateTime creationDate) {
createAlertsIndex(creationDate, MonitoringTemplateUtils.TEMPLATE_VERSION);
}
/**
* Creates a monitoring alerts index from the current version.
*/
protected void createAlertsIndex(final DateTime creationDate, final String version) {
createIndex(".monitoring-alerts-" + version, creationDate);
}
/**

View File

@ -26,6 +26,12 @@ public class PkiRealmBootstrapCheckTests extends ESTestCase {
.put("path.home", createTempDir())
.build();
Environment env = new Environment(settings);
assertTrue(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)).isFailure());
// enable transport tls
settings = Settings.builder().put(settings)
.put("xpack.security.transport.ssl.enabled", true)
.build();
assertFalse(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)).isFailure());
// disable client auth default

View File

@ -5,8 +5,6 @@
*/
package org.elasticsearch.xpack.security;
import org.apache.http.HttpEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.Version;
import org.elasticsearch.client.Response;
import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;
@ -14,8 +12,10 @@ import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;
import org.elasticsearch.test.rest.yaml.ObjectPath;
import org.junit.Before;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_TEMPLATE_NAME;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
@ -37,34 +37,18 @@ public abstract class SecurityClusterClientYamlTestCase extends ESClientYamlSuit
}
public static void waitForSecurity() throws Exception {
String masterNode = null;
HttpEntity entity = client().performRequest("GET", "/_cat/nodes?h=id,master").getEntity();
String catNodesResponse = EntityUtils.toString(entity, StandardCharsets.UTF_8);
for (String line : catNodesResponse.split("\n")) {
int indexOfStar = line.indexOf('*'); // * in the node's output denotes it is master
if (indexOfStar != -1) {
masterNode = line.substring(0, indexOfStar).trim();
break;
}
}
assertNotNull(masterNode);
final String masterNodeId = masterNode;
assertBusy(() -> {
try {
Response nodeDetailsResponse = client().performRequest("GET", "/_nodes");
ObjectPath path = ObjectPath.createFromResponse(nodeDetailsResponse);
Map<String, Object> nodes = path.evaluate("nodes");
String masterVersion = null;
for (String key : nodes.keySet()) {
// get the ES version number master is on
if (key.startsWith(masterNodeId)) {
masterVersion = path.evaluate("nodes." + key + ".version");
break;
}
Response nodesResponse = client().performRequest("GET", "/_nodes");
ObjectPath nodesPath = ObjectPath.createFromResponse(nodesResponse);
Map<String, Object> nodes = nodesPath.evaluate("nodes");
Set<Version> nodeVersions = new HashSet<>();
for (String nodeId : nodes.keySet()) {
String nodeVersionPath = "nodes." + nodeId + ".version";
Version nodeVersion = Version.fromString(nodesPath.evaluate(nodeVersionPath));
nodeVersions.add(nodeVersion);
}
assertNotNull(masterVersion);
final String masterTemplateVersion = masterVersion;
Version highestNodeVersion = Collections.max(nodeVersions);
Response response = client().performRequest("GET", "/_cluster/state/metadata");
ObjectPath objectPath = ObjectPath.createFromResponse(response);
@ -74,10 +58,8 @@ public abstract class SecurityClusterClientYamlTestCase extends ESClientYamlSuit
assertThat(mappings.size(), greaterThanOrEqualTo(1));
for (String key : mappings.keySet()) {
String templatePath = mappingsPath + "." + key + "._meta.security-version";
String templateVersion = objectPath.evaluate(templatePath);
final Version mVersion = Version.fromString(masterTemplateVersion);
final Version tVersion = Version.fromString(templateVersion);
assertTrue(mVersion.onOrBefore(tVersion));
Version templateVersion = Version.fromString(objectPath.evaluate(templatePath));
assertEquals(highestNodeVersion, templateVersion);
}
} catch (Exception e) {
throw new AssertionError("failed to get cluster state", e);

View File

@ -8,18 +8,28 @@ package org.elasticsearch.xpack.security;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import org.elasticsearch.Version;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.env.Environment;
import org.elasticsearch.license.License;
import org.elasticsearch.license.TestUtils;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
@ -191,4 +201,38 @@ public class SecurityTests extends ESTestCase {
assertThat(filter, hasItem(Security.setting("authc.realms.*.ssl.truststore.path")));
assertThat(filter, hasItem(Security.setting("authc.realms.*.ssl.truststore.algorithm")));
}
public void testTLSJoinValidatorOnDisabledSecurity() throws Exception {
Settings disabledSettings = Settings.builder().put("xpack.security.enabled", false).build();
createComponents(disabledSettings);
BiConsumer<DiscoveryNode, ClusterState> joinValidator = security.getJoinValidator();
assertNull(joinValidator);
}
public void testTLSJoinValidator() throws Exception {
createComponents(Settings.EMPTY);
BiConsumer<DiscoveryNode, ClusterState> joinValidator = security.getJoinValidator();
assertNotNull(joinValidator);
DiscoveryNode node = new DiscoveryNode("foo", buildNewFakeTransportAddress(), Version.CURRENT);
joinValidator.accept(node, ClusterState.builder(ClusterName.DEFAULT).build());
assertTrue(joinValidator instanceof Security.ValidateTLSOnJoin);
int numIters = randomIntBetween(1,10);
for (int i = 0; i < numIters; i++) {
boolean tlsOn = randomBoolean();
Security.ValidateTLSOnJoin validator = new Security.ValidateTLSOnJoin(tlsOn);
MetaData.Builder builder = MetaData.builder();
License license = TestUtils.generateSignedLicense(TimeValue.timeValueHours(24));
TestUtils.putLicense(builder, license);
ClusterState state = ClusterState.builder(ClusterName.DEFAULT).metaData(builder.build()).build();
EnumSet<License.OperationMode> productionModes = EnumSet.of(License.OperationMode.GOLD, License.OperationMode.PLATINUM,
License.OperationMode.STANDARD);
if (productionModes.contains(license.operationMode()) && tlsOn == false) {
IllegalStateException ise = expectThrows(IllegalStateException.class, () -> validator.accept(node, state));
assertEquals("TLS setup is required for license type [" + license.operationMode().name() + "]", ise.getMessage());
} else {
validator.accept(node, state);
}
validator.accept(node, ClusterState.builder(ClusterName.DEFAULT).metaData(MetaData.builder().build()).build());
}
}
}
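The behaviour this test asserts for the join validator can be approximated as follows. As in the earlier licensing sketch, the OperationMode enum is a stand-in for License.OperationMode, and this is not a copy of Security.ValidateTLSOnJoin:

import java.util.EnumSet;

// Illustrative sketch of the join-time TLS check asserted above.
final class ValidateTlsOnJoinSketch {

    enum OperationMode { BASIC, TRIAL, STANDARD, GOLD, PLATINUM }

    private static final EnumSet<OperationMode> PRODUCTION_MODES =
            EnumSet.of(OperationMode.STANDARD, OperationMode.GOLD, OperationMode.PLATINUM);

    private final boolean transportTlsEnabled;

    ValidateTlsOnJoinSketch(boolean transportTlsEnabled) {
        this.transportTlsEnabled = transportTlsEnabled;
    }

    // Rejects the join when the cluster state already holds a production license but TLS is off;
    // a state without a license, or with a non-production one, is always accepted.
    void validate(OperationMode licenseModeInClusterState) {
        if (licenseModeInClusterState != null
                && PRODUCTION_MODES.contains(licenseModeInClusterState)
                && transportTlsEnabled == false) {
            throw new IllegalStateException(
                    "TLS setup is required for license type [" + licenseModeInClusterState.name() + "]");
        }
    }
}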

View File

@ -69,14 +69,14 @@ public class SecurityTribeIT extends NativeRealmIntegTestCase {
private static final String SECOND_CLUSTER_NODE_PREFIX = "node_cluster2_";
private static InternalTestCluster cluster2;
private static boolean useGeneratedSSL;
private static boolean useSSL;
private Node tribeNode;
private Client tribeClient;
@BeforeClass
public static void setupSSL() {
useGeneratedSSL = randomBoolean();
useSSL = randomBoolean();
}
@Override
@ -84,7 +84,7 @@ public class SecurityTribeIT extends NativeRealmIntegTestCase {
super.setUp();
if (cluster2 == null) {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(defaultMaxNumberOfNodes(), useGeneratedSSL, createTempDir(), Scope.SUITE) {
new SecuritySettingsSource(defaultMaxNumberOfNodes(), useSSL, createTempDir(), Scope.SUITE) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
Settings.Builder builder = Settings.builder()
@ -118,8 +118,8 @@ public class SecurityTribeIT extends NativeRealmIntegTestCase {
}
@Override
public boolean useGeneratedSSLConfig() {
return useGeneratedSSL;
public boolean transportSSLEnabled() {
return useSSL;
}
@AfterClass
@ -216,7 +216,7 @@ public class SecurityTribeIT extends NativeRealmIntegTestCase {
private void setupTribeNode(Settings settings) throws Exception {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(1, useGeneratedSSL, createTempDir(), Scope.TEST) {
new SecuritySettingsSource(1, useSSL, createTempDir(), Scope.TEST) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()

View File

@ -35,7 +35,6 @@ import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import static org.elasticsearch.test.SecuritySettingsSource.TEST_PASSWORD_SECURE_STRING;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;
@ -79,7 +78,7 @@ public class AuditTrailTests extends SecurityIntegTestCase {
}
@Override
public boolean useGeneratedSSLConfig() {
public boolean transportSSLEnabled() {
return true;
}

View File

@ -85,6 +85,7 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
public static final String SECOND_CLUSTER_NODE_PREFIX = "remote_" + SUITE_CLUSTER_NODE_PREFIX;
private static boolean remoteIndexing;
private static boolean useSSL;
private static InternalTestCluster remoteCluster;
private static Settings remoteSettings;
@ -100,6 +101,7 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
@BeforeClass
public static void configureBeforeClass() {
useSSL = randomBoolean();
remoteIndexing = randomBoolean();
if (remoteIndexing == false) {
remoteSettings = Settings.EMPTY;
@ -115,6 +117,11 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
remoteSettings = null;
}
@Override
protected boolean transportSSLEnabled() {
return useSSL;
}
@Before
public void initializeRemoteClusterIfNecessary() throws Exception {
if (remoteIndexing == false) {
@ -132,11 +139,11 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
// Setup a second test cluster with randomization for number of nodes, security enabled, and SSL
final int numNodes = randomIntBetween(1, 2);
final boolean useSecurity = randomBoolean();
final boolean useGeneratedSSL = useSecurity && randomBoolean();
logger.info("--> remote indexing enabled. security enabled: [{}], SSL enabled: [{}], nodes: [{}]", useSecurity, useGeneratedSSL,
final boolean remoteUseSSL = useSecurity && useSSL;
logger.info("--> remote indexing enabled. security enabled: [{}], SSL enabled: [{}], nodes: [{}]", useSecurity, useSSL,
numNodes);
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(numNodes, useGeneratedSSL, createTempDir(), Scope.SUITE) {
new SecuritySettingsSource(numNodes, useSSL, createTempDir(), Scope.SUITE) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
Settings.Builder builder = Settings.builder()
@ -193,8 +200,9 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
.put("xpack.security.audit.index.client.xpack.security.user", SecuritySettingsSource.TEST_USER_NAME + ":" +
SecuritySettingsSource.TEST_PASSWORD);
if (useGeneratedSSL == false) {
if (remoteUseSSL) {
cluster2SettingsSource.addClientSSLSettings(builder, "xpack.security.audit.index.client.");
builder.put("xpack.security.audit.index.client.xpack.security.transport.ssl.enabled", true);
}
if (useSecurity == false && builder.get(NetworkModule.TRANSPORT_TYPE_KEY) == null) {
builder.put("xpack.security.audit.index.client." + NetworkModule.TRANSPORT_TYPE_KEY, getTestTransportType());

View File

@ -28,7 +28,6 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.stream.StreamSupport;
@ -50,13 +49,13 @@ public class RemoteIndexAuditTrailStartingTests extends SecurityIntegTestCase {
private InternalTestCluster remoteCluster;
private final boolean useGeneratedSSL = randomBoolean();
private final boolean sslEnabled = randomBoolean();
private final boolean localAudit = randomBoolean();
private final String outputs = randomFrom("index", "logfile", "index,logfile");
@Override
public boolean useGeneratedSSLConfig() {
return useGeneratedSSL;
public boolean transportSSLEnabled() {
return sslEnabled;
}
@Override
@ -90,7 +89,7 @@ public class RemoteIndexAuditTrailStartingTests extends SecurityIntegTestCase {
// Setup a second test cluster with a single node, security enabled, and SSL
final int numNodes = 1;
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(numNodes, useGeneratedSSL, createTempDir(), Scope.TEST) {
new SecuritySettingsSource(numNodes, sslEnabled, createTempDir(), Scope.TEST) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
Settings.Builder builder = Settings.builder()
@ -104,6 +103,7 @@ public class RemoteIndexAuditTrailStartingTests extends SecurityIntegTestCase {
.put("xpack.security.audit.index.client.xpack.security.user", TEST_USER_NAME + ":" + TEST_PASSWORD);
addClientSSLSettings(builder, "xpack.security.audit.index.client.");
builder.put("xpack.security.audit.index.client.xpack.security.transport.ssl.enabled", sslEnabled);
return builder.build();
}
};

View File

@ -83,8 +83,8 @@ public class RunAsIntegTests extends SecurityIntegTestCase {
}
@Override
public boolean useGeneratedSSLConfig() {
return true;
protected boolean transportSSLEnabled() {
return false;
}
public void testUserImpersonation() throws Exception {

View File

@ -17,7 +17,6 @@ import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.support.CharArrays;
import org.elasticsearch.xpack.security.client.SecurityClient;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.nio.charset.StandardCharsets;
@ -52,9 +51,8 @@ public class ESNativeMigrateToolTests extends NativeRealmIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
// don't use autogenerated when we expect a different cert
return useSSL == false;
protected boolean transportSSLEnabled() {
return useSSL;
}
@Override

View File

@ -15,7 +15,6 @@ import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.http.HttpServerTransport;
@ -73,8 +72,8 @@ public class PkiAuthenticationTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testTransportClientCanAuthenticateViaPki() {

View File

@ -66,8 +66,8 @@ public class PkiOptionalClientAuthTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testRestClientWithoutClientCertificate() throws Exception {

View File

@ -13,8 +13,6 @@ import org.elasticsearch.common.network.NetworkAddress;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.discovery.MasterNotDiscoveredException;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeValidationException;
@ -42,7 +40,6 @@ import java.nio.file.Path;
import java.util.Arrays;
import java.util.concurrent.CountDownLatch;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.test.SecuritySettingsSource.addSSLSettingsForStore;
import static org.elasticsearch.xpack.security.test.SecurityTestUtils.writeFile;
import static org.hamcrest.CoreMatchers.equalTo;
@ -57,10 +54,9 @@ public class ServerTransportFilterIntegrationTests extends SecurityIntegTestCase
randomClientPort = randomIntBetween(49000, 65500); // ephemeral port
}
// don't use it here to simplify the settings we need
@Override
public boolean useGeneratedSSLConfig() {
return false;
public boolean transportSSLEnabled() {
return true;
}
@Override

@ -87,8 +87,8 @@ public class DNSOnlyHostnameVerificationTests extends SecurityIntegTestCase {
}
@Override
public boolean useGeneratedSSLConfig() {
return false;
public boolean transportSSLEnabled() {
return true;
}
@Override

@ -23,8 +23,8 @@ public class IPHostnameVerificationTests extends SecurityIntegTestCase {
Path keystore;
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
@Override

@ -42,6 +42,7 @@ public class SecurityNetty4TransportTests extends ESTestCase {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack.ssl.keystore.secure_password", "testnode");
Settings settings = Settings.builder()
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.ssl.keystore.path", testnodeStore)
.setSecureSettings(secureSettings)
.put("path.home", createTempDir())
@ -51,12 +52,13 @@ public class SecurityNetty4TransportTests extends ESTestCase {
}
private SecurityNetty4Transport createTransport() {
return createTransport(Settings.EMPTY);
return createTransport(Settings.builder().put("xpack.security.transport.ssl.enabled", true).build());
}
private SecurityNetty4Transport createTransport(Settings additionalSettings) {
final Settings settings =
Settings.builder()
.put("xpack.security.transport.ssl.enabled", true)
.put(additionalSettings)
.build();
return new SecurityNetty4Transport(
@ -185,6 +187,7 @@ public class SecurityNetty4TransportTests extends ESTestCase {
secureSettings.setString("xpack.security.transport.ssl.keystore.secure_password", "testnode");
secureSettings.setString("xpack.ssl.truststore.secure_password", "truststore-testnode-only");
Settings.Builder builder = Settings.builder()
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.security.transport.ssl.keystore.path",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testnode.jks"))
.put("xpack.security.transport.ssl.client_authentication", "none")
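
With no auto-generated SSL configuration to fall back on, these transport tests now have to switch transport SSL on explicitly before pointing the keystore settings at a real certificate. A minimal sketch of such a settings block, assuming the same setting keys as the hunks above; the helper class, method name, and the path/password arguments are placeholders for illustration:

import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Settings;

final class TransportSslSettingsExample {

    // Builds node settings with transport SSL enabled explicitly,
    // mirroring the builder calls added in this commit.
    static Settings transportSslSettings(String keystorePath, String keystorePassword, String homeDir) {
        MockSecureSettings secureSettings = new MockSecureSettings();
        secureSettings.setString("xpack.ssl.keystore.secure_password", keystorePassword);
        return Settings.builder()
                .put("xpack.security.transport.ssl.enabled", true) // no longer implied by a generated config
                .put("xpack.ssl.keystore.path", keystorePath)
                .put("path.home", homeDir)
                .setSecureSettings(secureSettings)
                .build();
    }
}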

@ -27,8 +27,8 @@ import static org.hamcrest.Matchers.containsString;
public class SslHostnameVerificationTests extends SecurityIntegTestCase {
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
@Override

@ -66,8 +66,8 @@ public class EllipticCurveSSLTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testConnection() throws Exception {

@ -51,8 +51,8 @@ public class SslIntegrationTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
// no SSL exception as this is the exception returned when connecting

@ -73,8 +73,8 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
private TransportClient createTransportClient(Settings additionalSettings) {
@ -82,6 +82,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
.put(transportClientSettings().filter(s -> s.startsWith("xpack.ssl") == false))
.put("node.name", "programmatic_transport_client")
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.security.transport.ssl.enabled", true)
.put(additionalSettings)
.build();
return new TestXPackTransportClient(settings);
@ -105,6 +106,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
public void testThatStandardTransportClientCanConnectToNoClientAuthProfile() throws Exception {
try(TransportClient transportClient = new TestXPackTransportClient(Settings.builder()
.put(transportClientSettings())
.put("xpack.security.transport.ssl.enabled", true)
.put("node.name", "programmatic_transport_client")
.put("cluster.name", internalCluster().getClusterName())
.build())) {
@ -247,6 +249,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
Settings settings = Settings.builder()
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.ssl.truststore.path",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/truststore-testnode-only.jks"))
.put("xpack.ssl.truststore.password", "truststore-testnode-only")
@ -254,7 +257,6 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
try (TransportClient transportClient = new TestXPackTransportClient(settings)) {
transportClient.addTransportAddress(new TransportAddress(InetAddress.getLoopbackAddress(),
getProfilePort("no_client_auth")));
assertGreenClusterState(transportClient);
}
}
@ -268,6 +270,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
Settings settings = Settings.builder()
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.ssl.client_authentication", SSLClientAuth.REQUIRED)
.put("xpack.ssl.truststore.path",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/truststore-testnode-only.jks"))
@ -292,6 +295,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
Settings settings = Settings.builder()
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.ssl.client_authentication", SSLClientAuth.REQUIRED)
.put("xpack.ssl.truststore.path",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/truststore-testnode-only.jks"))
@ -316,6 +320,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.ssl.client_authentication", SSLClientAuth.REQUIRED)
.put("xpack.security.transport.ssl.enabled", true)
.build();
try (TransportClient transportClient = new TestXPackTransportClient(settings)) {
transportClient.addTransportAddress(randomFrom(internalCluster().getInstance(Transport.class).boundAddress().boundAddresses()));
@ -336,6 +341,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.ssl.client_authentication", SSLClientAuth.REQUIRED)
.put("xpack.security.transport.ssl.enabled", true)
.build();
try (TransportClient transportClient = new TestXPackTransportClient(settings)) {
transportClient.addTransportAddress(new TransportAddress(InetAddress.getLoopbackAddress(), getProfilePort("client")));
@ -356,6 +362,7 @@ public class SslMultiPortTests extends SecurityIntegTestCase {
.put(Security.USER_SETTING.getKey(), TEST_USER_NAME + ":" + TEST_PASSWORD)
.put("cluster.name", internalCluster().getClusterName())
.put("xpack.ssl.client_authentication", SSLClientAuth.REQUIRED)
.put("xpack.security.transport.ssl.enabled", true)
.build();
try (TransportClient transportClient = new TestXPackTransportClient(settings)) {
transportClient.addTransportAddress(new TransportAddress(InetAddress.getLoopbackAddress(),

@ -17,7 +17,7 @@ import org.elasticsearch.test.SecurityIntegTestCase;
public class SslNullCipherTests extends SecurityIntegTestCase {
@Override
public boolean useGeneratedSSLConfig() {
public boolean transportSSLEnabled() {
return true;
}
@ -25,7 +25,7 @@ public class SslNullCipherTests extends SecurityIntegTestCase {
public Settings nodeSettings(int nodeOrdinal) {
Settings settings = super.nodeSettings(nodeOrdinal);
Settings.Builder builder = Settings.builder()
.put(settings.filter((s) -> s.startsWith("xpack.ssl") == false));
.put(settings);
builder.put("xpack.security.transport.ssl.cipher_suites", "TLS_RSA_WITH_NULL_SHA256");
return builder.build();
}
@ -34,7 +34,7 @@ public class SslNullCipherTests extends SecurityIntegTestCase {
public Settings transportClientSettings() {
Settings settings = super.transportClientSettings();
Settings.Builder builder = Settings.builder()
.put(settings.filter((s) -> s.startsWith("xpack.ssl") == false));
.put(settings);
builder.put("xpack.security.transport.ssl.cipher_suites", "TLS_RSA_WITH_NULL_SHA256");
return builder.build();

@ -1,45 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.test.ESTestCase;
import javax.net.ssl.X509ExtendedKeyManager;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import java.security.interfaces.RSAPrivateKey;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
public class GeneratedKeyConfigTests extends ESTestCase {
public void testGenerating() throws Exception {
Settings settings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), randomAlphaOfLengthBetween(1, 8)).build();
GeneratedKeyConfig keyConfig = new GeneratedKeyConfig(settings);
assertThat(keyConfig.filesToMonitor(null), is(empty()));
X509ExtendedKeyManager keyManager = keyConfig.createKeyManager(null);
assertNotNull(keyManager);
assertNotNull(keyConfig.createTrustManager(null));
String[] aliases = keyManager.getServerAliases("RSA", null);
assertEquals(1, aliases.length);
PrivateKey privateKey = keyManager.getPrivateKey(aliases[0]);
assertNotNull(privateKey);
assertThat(privateKey, instanceOf(RSAPrivateKey.class));
X509Certificate[] certificates = keyManager.getCertificateChain(aliases[0]);
assertEquals(2, certificates.length);
assertEquals(GeneratedKeyConfig.readCACert(), certificates[1]);
X509Certificate generatedCertificate = certificates[0];
assertEquals("CN=" + Node.NODE_NAME_SETTING.get(settings), generatedCertificate.getSubjectX500Principal().getName());
assertEquals(certificates[1].getSubjectX500Principal(), generatedCertificate.getIssuerX500Principal());
}
}

@ -1,93 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.test.ESTestCase;
public class SSLBootstrapCheckTests extends ESTestCase {
public void testSSLBootstrapCheckWithNoKey() throws Exception {
SSLService sslService = new SSLService(Settings.EMPTY, null);
SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(sslService, null);
assertTrue(bootstrapCheck.check(new BootstrapContext(Settings.EMPTY, null)).isFailure());
}
public void testSSLBootstrapCheckWithKey() throws Exception {
final String keyPrefix = randomBoolean() ? "security.transport." : "";
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack." + keyPrefix + "ssl.secure_key_passphrase", "testclient");
Settings settings = Settings.builder()
.put("path.home", createTempDir())
.put("xpack." + keyPrefix + "ssl.key",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.pem"))
.put("xpack." + keyPrefix + "ssl.certificate",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt"))
.setSecureSettings(secureSettings)
.build();
final Environment env = randomBoolean() ? new Environment(settings) : null;
SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
assertFalse(bootstrapCheck.check(new BootstrapContext(settings, null)).isFailure());
}
public void testSSLBootstrapCheckWithDefaultCABeingTrusted() throws Exception {
final String keyPrefix = randomBoolean() ? "security.transport." : "";
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack." + keyPrefix + "ssl.secure_key_passphrase", "testclient");
Settings settings = Settings.builder()
.put("path.home", createTempDir())
.put("xpack." + keyPrefix + "ssl.key",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.pem"))
.put("xpack." + keyPrefix + "ssl.certificate",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt"))
.putArray("xpack." + keyPrefix + "ssl.certificate_authorities",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt").toString(),
getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
.setSecureSettings(secureSettings)
.build();
final Environment env = randomBoolean() ? new Environment(settings) : null;
SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)).isFailure());
settings = Settings.builder().put(settings.filter((s) -> s.contains(".certificate_authorities")))
.put("xpack.security.http.ssl.certificate_authorities",
getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
.build();
bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)).isFailure());
}
public void testSSLBootstrapCheckWithDefaultKeyBeingUsed() throws Exception {
final String keyPrefix = randomBoolean() ? "security.transport." : "";
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack." + keyPrefix + "ssl.secure_key_passphrase", "testclient");
Settings settings = Settings.builder()
.put("path.home", createTempDir())
.put("xpack." + keyPrefix + "ssl.key",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.pem"))
.put("xpack." + keyPrefix + "ssl.certificate",
getDataPath("/org/elasticsearch/xpack/security/transport/ssl/certs/simple/testclient.crt"))
.put("xpack.security.http.ssl.key", getDataPath("/org/elasticsearch/xpack/ssl/private.pem").toString())
.put("xpack.security.http.ssl.certificate", getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
.setSecureSettings(secureSettings)
.build();
final Environment env = randomBoolean() ? new Environment(settings) : null;
SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)).isFailure());
settings = Settings.builder().put(settings.filter((s) -> s.contains(".http.ssl.")))
.put("xpack.security.transport.profiles.foo.xpack.security.ssl.key",
getDataPath("/org/elasticsearch/xpack/ssl/private.pem").toString())
.put("xpack.security.transport.profiles.foo.xpack.security.ssl.certificate",
getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
.build();
bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)).isFailure());
}
}

@ -56,8 +56,8 @@ public class SSLClientAuthTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testThatHttpFailsWithoutSslClientAuth() throws IOException {
@ -93,6 +93,7 @@ public class SSLClientAuthTests extends SecurityIntegTestCase {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack.ssl.keystore.secure_password", "testclient-client-profile");
Settings settings = Settings.builder()
.put("xpack.security.transport.ssl.enabled", true)
.put("xpack.ssl.client_authentication", SSLClientAuth.NONE)
.put("xpack.ssl.keystore.path", store)
.setSecureSettings(secureSettings)

@ -219,15 +219,12 @@ public class SSLConfigurationReloaderTests extends ESTestCase {
.setSecureSettings(secureSettings)
.build();
Environment env = randomBoolean() ? null : new Environment(settings);
final X500Principal expectedPrincipal = new X500Principal("CN=xpack public development ca");
final SetOnce<Integer> trustedCount = new SetOnce<>();
final BiConsumer<X509ExtendedTrustManager, SSLConfiguration> trustManagerPreChecks = (trustManager, config) -> {
// trust manager checks
Certificate[] certificates = trustManager.getAcceptedIssuers();
trustedCount.set(certificates.length);
assertTrue(Arrays.stream(trustManager.getAcceptedIssuers())
.anyMatch((cert) -> expectedPrincipal.equals(cert.getSubjectX500Principal())));
};
@ -247,8 +244,6 @@ public class SSLConfigurationReloaderTests extends ESTestCase {
final BiConsumer<X509ExtendedTrustManager, SSLConfiguration> trustManagerPostChecks = (updatedTrustManager, config) -> {
assertThat(trustedCount.get() - updatedTrustManager.getAcceptedIssuers().length, is(5));
assertTrue(Arrays.stream(updatedTrustManager.getAcceptedIssuers())
.anyMatch((cert) -> expectedPrincipal.equals(cert.getSubjectX500Principal())));
};
validateTrustConfigurationIsReloaded(settings, env, trustManagerPreChecks, modifier, trustManagerPostChecks);
@ -267,15 +262,12 @@ public class SSLConfigurationReloaderTests extends ESTestCase {
.put("path.home", createTempDir())
.build();
Environment env = randomBoolean() ? null : new Environment(settings);
final X500Principal expectedPrincipal = new X500Principal("CN=xpack public development ca");
final BiConsumer<X509ExtendedTrustManager, SSLConfiguration> trustManagerPreChecks = (trustManager, config) -> {
// trust manager checks
Certificate[] certificates = trustManager.getAcceptedIssuers();
assertThat(certificates.length, is(2));
assertThat(certificates.length, is(1));
assertThat(((X509Certificate)certificates[0]).getSubjectX500Principal().getName(), containsString("Test Client"));
assertTrue(Arrays.stream(trustManager.getAcceptedIssuers())
.anyMatch((cert) -> expectedPrincipal.equals(cert.getSubjectX500Principal())));
};
final Runnable modifier = () -> {
@ -291,10 +283,8 @@ public class SSLConfigurationReloaderTests extends ESTestCase {
final BiConsumer<X509ExtendedTrustManager, SSLConfiguration> trustManagerPostChecks = (updatedTrustManager, config) -> {
Certificate[] updatedCerts = updatedTrustManager.getAcceptedIssuers();
assertThat(updatedCerts.length, is(2));
assertThat(updatedCerts.length, is(1));
assertThat(((X509Certificate)updatedCerts[0]).getSubjectX500Principal().getName(), containsString("Test Node"));
assertTrue(Arrays.stream(updatedTrustManager.getAcceptedIssuers())
.anyMatch((cert) -> expectedPrincipal.equals(cert.getSubjectX500Principal())));
};
validateTrustConfigurationIsReloaded(settings, env, trustManagerPreChecks, modifier, trustManagerPostChecks);

@ -75,17 +75,20 @@ public class SSLConfigurationSettingsTests extends ESTestCase {
assertThat(ssl.keyPassword.exists(settings), is(false));
assertThat(ssl.keyPath.get(settings).isPresent(), is(false));
assertThat(ssl.keystoreAlgorithm.get(settings), is(KeyManagerFactory.getDefaultAlgorithm()));
assertThat(ssl.keystoreType.get(settings), is("jks"));
assertThat(ssl.keystoreType.get(settings).isPresent(), is(false));
assertThat(ssl.keystoreKeyPassword.exists(settings), is(false));
assertThat(ssl.keystorePassword.exists(settings), is(false));
assertThat(ssl.keystorePath.get(settings).isPresent(), is(false));
assertThat(ssl.supportedProtocols.get(settings).size(), is(0));
assertThat(ssl.truststoreAlgorithm.get(settings), is(TrustManagerFactory.getDefaultAlgorithm()));
assertThat(ssl.truststoreType.get(settings), is("jks"));
assertThat(ssl.truststoreType.get(settings).isPresent(), is(false));
assertThat(ssl.truststorePassword.exists(settings), is(false));
assertThat(ssl.truststorePath.get(settings).isPresent(), is(false));
assertThat(ssl.trustRestrictionsPath.get(settings).isPresent(), is(false));
assertThat(ssl.verificationMode.get(settings).isPresent(), is(false));
assertThat(SSLConfigurationSettings.getKeyStoreType(ssl.keystoreType, settings, null), is("jks"));
assertThat(SSLConfigurationSettings.getKeyStoreType(ssl.truststoreType, settings, null), is("jks"));
}
}

@ -5,6 +5,12 @@
*/
package org.elasticsearch.xpack.ssl;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManager;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
@ -12,17 +18,11 @@ import org.elasticsearch.env.Environment;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.ssl.TrustConfig.CombiningTrustConfig;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManager;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.everyItem;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.isIn;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.isIn;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.sameInstance;
@ -58,6 +58,7 @@ public class SSLConfigurationTests extends ESTestCase {
assertThat(ksKeyInfo.keyStorePath, is(equalTo(path)));
assertThat(ksKeyInfo.keyStorePassword, is(equalTo("testnode")));
assertThat(ksKeyInfo.keyStoreType, is(equalTo("jks")));
assertThat(ksKeyInfo.keyPassword, is(equalTo(ksKeyInfo.keyStorePassword)));
assertThat(ksKeyInfo.keyStoreAlgorithm, is(KeyManagerFactory.getDefaultAlgorithm()));
assertThat(sslConfiguration.trustConfig(), is(instanceOf(CombiningTrustConfig.class)));
@ -123,6 +124,66 @@ public class SSLConfigurationTests extends ESTestCase {
SSLConfiguration.SETTINGS_PARSER.legacyKeystorePassword, SSLConfiguration.SETTINGS_PARSER.legacyKeystoreKeyPassword});
}
public void testInferKeystoreTypeFromJksFile() {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("keystore.secure_password", "password");
secureSettings.setString("keystore.secure_key_password", "keypass");
Settings settings = Settings.builder()
.put("keystore.path", "xpack/tls/path.jks")
.setSecureSettings(secureSettings)
.build();
SSLConfiguration sslConfiguration = new SSLConfiguration(settings);
assertThat(sslConfiguration.keyConfig(), instanceOf(StoreKeyConfig.class));
StoreKeyConfig ksKeyInfo = (StoreKeyConfig) sslConfiguration.keyConfig();
assertThat(ksKeyInfo.keyStoreType, is(equalTo("jks")));
}
public void testInferKeystoreTypeFromPkcs12File() {
final String ext = randomFrom("p12", "pfx", "pkcs12");
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("keystore.secure_password", "password");
secureSettings.setString("keystore.secure_key_password", "keypass");
Settings settings = Settings.builder()
.put("keystore.path", "xpack/tls/path." + ext)
.setSecureSettings(secureSettings)
.build();
SSLConfiguration sslConfiguration = new SSLConfiguration(settings);
assertThat(sslConfiguration.keyConfig(), instanceOf(StoreKeyConfig.class));
StoreKeyConfig ksKeyInfo = (StoreKeyConfig) sslConfiguration.keyConfig();
assertThat(ksKeyInfo.keyStoreType, is(equalTo("PKCS12")));
}
public void testInferKeystoreTypeFromUnrecognised() {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("keystore.secure_password", "password");
secureSettings.setString("keystore.secure_key_password", "keypass");
Settings settings = Settings.builder()
.put("keystore.path", "xpack/tls/path.foo")
.setSecureSettings(secureSettings)
.build();
SSLConfiguration sslConfiguration = new SSLConfiguration(settings);
assertThat(sslConfiguration.keyConfig(), instanceOf(StoreKeyConfig.class));
StoreKeyConfig ksKeyInfo = (StoreKeyConfig) sslConfiguration.keyConfig();
assertThat(ksKeyInfo.keyStoreType, is(equalTo("jks")));
}
public void testExplicitKeystoreType() {
final String ext = randomFrom("p12", "jks");
final String type = randomAlphaOfLengthBetween(2, 8);
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("keystore.secure_password", "password");
secureSettings.setString("keystore.secure_key_password", "keypass");
Settings settings = Settings.builder()
.put("keystore.path", "xpack/tls/path." + ext)
.put("keystore.type", type)
.setSecureSettings(secureSettings)
.build();
SSLConfiguration sslConfiguration = new SSLConfiguration(settings);
assertThat(sslConfiguration.keyConfig(), instanceOf(StoreKeyConfig.class));
StoreKeyConfig ksKeyInfo = (StoreKeyConfig) sslConfiguration.keyConfig();
assertThat(ksKeyInfo.keyStoreType, is(equalTo(type)));
}
public void testThatProfileSettingsOverrideServiceSettings() {
MockSecureSettings profileSecureSettings = new MockSecureSettings();
profileSecureSettings.setString("keystore.secure_password", "password");
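
The new tests above pin down how the keystore type is now resolved: an explicit keystore.type setting wins, otherwise the file extension decides, with .p12, .pfx and .pkcs12 mapping to PKCS12 and everything else (including unrecognised extensions) defaulting to jks. A standalone sketch of that rule, assuming exactly the behaviour the tests assert; the helper class and method name are illustrative, not the actual API:

import java.util.Locale;

final class KeyStoreTypeInference {

    // Explicit type wins; otherwise the keystore path's extension decides
    // between PKCS12 and the jks default, matching the tests above.
    static String inferKeyStoreType(String path, String explicitType) {
        if (explicitType != null) {
            return explicitType;
        }
        String name = path == null ? "" : path.toLowerCase(Locale.ROOT);
        if (name.endsWith(".p12") || name.endsWith(".pfx") || name.endsWith(".pkcs12")) {
            return "PKCS12";
        }
        return "jks";
    }
}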

@ -85,8 +85,8 @@ public class SSLReloadIntegTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testThatSSLConfigurationReloadsOnModification() throws Exception {

@ -134,8 +134,8 @@ public class SSLTrustRestrictionsTests extends SecurityIntegTestCase {
}
@Override
protected boolean useGeneratedSSLConfig() {
return false;
protected boolean transportSSLEnabled() {
return true;
}
public void testCertificateWithTrustedNameIsAccepted() throws Exception {

@ -0,0 +1,45 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ssl;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.License;
import org.elasticsearch.license.TestUtils;
import org.elasticsearch.test.ESTestCase;
import java.util.EnumSet;
public class TLSLicenseBootstrapCheckTests extends ESTestCase {
public void testBootstrapCheck() throws Exception {
assertTrue(new TLSLicenseBootstrapCheck().check(new BootstrapContext(Settings.EMPTY, MetaData.EMPTY_META_DATA)).isSuccess());
assertTrue(new TLSLicenseBootstrapCheck().check(new BootstrapContext(Settings.builder().put("xpack.security.transport.ssl.enabled"
, randomBoolean()).build(), MetaData.EMPTY_META_DATA)).isSuccess());
int numIters = randomIntBetween(1,10);
for (int i = 0; i < numIters; i++) {
License license = TestUtils.generateSignedLicense(TimeValue.timeValueHours(24));
EnumSet<License.OperationMode> productionModes = EnumSet.of(License.OperationMode.GOLD, License.OperationMode.PLATINUM,
License.OperationMode.STANDARD);
MetaData.Builder builder = MetaData.builder();
TestUtils.putLicense(builder, license);
MetaData build = builder.build();
if (productionModes.contains(license.operationMode()) == false) {
assertTrue(new TLSLicenseBootstrapCheck().check(new BootstrapContext(
Settings.builder().put("xpack.security.transport.ssl.enabled", true).build(), build)).isSuccess());
} else {
assertTrue(new TLSLicenseBootstrapCheck().check(new BootstrapContext(
Settings.builder().put("xpack.security.transport.ssl.enabled", false).build(), build)).isFailure());
assertEquals("Transport SSL must be enabled for setups with production licenses. Please set " +
"[xpack.security.transport.ssl.enabled] to [true] or disable security by setting " +
"[xpack.security.enabled] to [false]",
new TLSLicenseBootstrapCheck().check(new BootstrapContext(
Settings.builder().put("xpack.security.transport.ssl.enabled", false).build(), build)).getMessage());
}
}
}
}
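
The new TLSLicenseBootstrapCheck test exercises one rule: startup fails only when a production-grade license (standard, gold or platinum here) is installed while xpack.security.transport.ssl.enabled is false. A condensed sketch of that rule, assuming the semantics asserted above; the class, the method, and the default value used for the setting are illustrative, not the real check's API:

import java.util.EnumSet;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.License;

final class TlsLicenseCheckSketch {

    private static final EnumSet<License.OperationMode> PRODUCTION_MODES = EnumSet.of(
            License.OperationMode.STANDARD, License.OperationMode.GOLD, License.OperationMode.PLATINUM);

    // True when the node should refuse to start: a production license is
    // installed but transport SSL has been left disabled.
    static boolean shouldFail(License license, Settings settings) {
        boolean sslEnabled = settings.getAsBoolean("xpack.security.transport.ssl.enabled", false);
        return license != null && PRODUCTION_MODES.contains(license.operationMode()) && sslEnabled == false;
    }
}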

@ -5,6 +5,7 @@
*/
package org.elasticsearch.xpack.upgrade;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.reindex.ReindexPlugin;
@ -52,7 +53,8 @@ public abstract class IndexUpgradeIntegTestCase extends AbstractLicensesIntegrat
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, MockPainlessScriptEngine.TestPlugin.class);
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, MockPainlessScriptEngine.TestPlugin.class,
CommonAnalysisPlugin.class);
}
@Override

@ -12,6 +12,7 @@ import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.metadata.AliasMetaData;
@ -52,7 +53,7 @@ public class InternalIndexReindexerIT extends IndexUpgradeIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, CustomScriptPlugin.class);
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, CustomScriptPlugin.class, CommonAnalysisPlugin.class);
}
public static class CustomScriptPlugin extends MockScriptPlugin {

@ -97,7 +97,7 @@ teardown:
"Should fail gracefully when body content is not provided":
- do:
catch: request
catch: bad_request
xpack.license.post:
acknowledge: true

@ -180,7 +180,7 @@ setup:
"Test delete with in-use model":
- do:
catch: request
catch: bad_request
xpack.ml.delete_model_snapshot:
job_id: "delete-model-snapshot"
snapshot_id: "active-snapshot"

@ -89,19 +89,19 @@ setup:
"Test invalid param combinations":
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
from: 0
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
size: 1
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
from: 0

Some files were not shown because too many files have changed in this diff.