Merge branch 'master' into tls_6.0

Original commit: elastic/x-pack-elasticsearch@9ce33bc7c3
This commit is contained in:
Simon Willnauer 2017-09-15 09:51:16 +02:00
commit c3066d1a51
96 changed files with 1958 additions and 487 deletions


@ -2,9 +2,37 @@
[[watcher-api-ack-watch]]
=== Ack Watch API
{xpack-ref}/actions.html#actions-ack-throttle[Acknowledging a watch] enables you
to manually throttle execution of the watch's actions. An action's
_acknowledgement state_ is stored in the `status.actions.<id>.ack.state`
structure.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_ack` +
`PUT _xpack/watcher/watch/<watch_id>/_ack/<action_id>`
[float]
==== Path Parameters
`action_id`::
(list) A comma-separated list of the action IDs to acknowledge. If you omit
this parameter, all of the actions of the watch are acknowledged.
`watch_id` (required)::
(string) Identifier for the watch.
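For instance, assuming a watch `my_watch` with two hypothetical actions named `email_admin` and `log_error`, both actions could be acknowledged in one call (a sketch; the action names are illustrative only):

[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my_watch/_ack/email_admin,log_error
--------------------------------------------------
// NOTCONSOLE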
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
To demonstrate, let's create a new watch:


@ -6,6 +6,26 @@ A watch can be either
{xpack-ref}/how-watcher-works.html#watch-active-state[active or inactive]. This
API enables you to activate a currently inactive watch.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_activate`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The status of an inactive watch is returned with the watch definition when you
call the <<watcher-api-get-watch, Get Watch API>>:


@ -6,6 +6,25 @@ A watch can be either
{xpack-ref}/how-watcher-works.html#watch-active-state[active or inactive]. This
API enables you to deactivate a currently active watch.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>/_deactivate`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The status of an active watch is returned with the watch definition when you
call the <<watcher-api-get-watch, Get Watch API>>:


@ -2,18 +2,42 @@
[[watcher-api-delete-watch]]
=== Delete Watch API
The DELETE watch API removes a watch from {watcher}.
[float]
==== Request
`DELETE _xpack/watcher/watch/<watch_id>`
[float]
==== Description
When the watch is removed, the document representing the watch in the `.watches`
index is gone and it will never be run again.
Please note that deleting a watch **does not** delete any watch execution records
related to this watch from the watch history.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the
watch directly from the `.watches` index using the Elasticsearch
DELETE Document API. When {security} is enabled, make sure no `write`
privileges are granted to anyone over the `.watches` index.
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example deletes a watch with the `my-watch` id:
[source,js]
@ -34,4 +58,3 @@ Response:
}
--------------------------------------------------
// TESTRESPONSE


@ -6,20 +6,45 @@ The execute watch API forces the execution of a stored watch. It can be used to
force execution of the watch outside of its triggering logic, or to simulate the
watch execution for debugging purposes.
[float]
==== Request
`POST _xpack/watcher/watch/<watch_id>/_execute` +
`POST _xpack/watcher/watch/_execute`
[float]
==== Description
For testing and debugging purposes, you also have fine-grained control on how
the watch runs. You can execute the watch without executing all of its actions
or alternatively by simulating them. You can also force execution by ignoring
the watch condition and control whether a watch record would be written to the
watch history after execution.
[float]
[[watcher-api-execute-inline-watch]]
===== Inline Watch Execution
You can use the Execute API to execute watches that are not yet registered by
specifying the watch definition inline. This serves as a great tool for testing
and debugging your watches prior to adding them to {watcher}.
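A minimal inline execution might look like the following sketch; the `watch` field holds the inline definition, and the trigger, input, condition, and action shown here are illustrative only:

[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/_execute
{
  "watch" : {
    "trigger" : { "schedule" : { "interval" : "10s" } },
    "input" : { "simple" : { "foo" : "bar" } },
    "condition" : { "always" : {} },
    "actions" : {
      "log_payload" : {
        "logging" : { "text" : "{{ctx.payload}}" }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE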
[float]
==== Path Parameters
`watch_id`::
(string) Identifier for the watch.
[float]
==== Query Parameters
`debug`::
(boolean) Defines whether the watch runs in debug mode. The default value is
`false`.
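For example, a watch could be run in debug mode by passing the parameter on the query string (a sketch, assuming a watch named `my_watch` exists):

[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute?debug=true
--------------------------------------------------
// NOTCONSOLE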
[float]
==== Request Body
This API supports the following fields:
@ -53,6 +78,58 @@ This API supports the following fields:
not persisted to the index and record_execution cannot be set.
|======
[float]
[[watcher-api-execute-watch-action-mode]]
===== Action Execution Modes
Action modes define how actions are handled during the watch execution. There
are five possible modes an action can be associated with:
[options="header"]
|======
| Name | Description
| `simulate` | The action execution is simulated. Each action type
defines its own simulation operation mode. For example, the
{xpack-ref}/actions-email.html[email] action creates
the email that would have been sent but does not actually
send it. In this mode, the action might be throttled if the
current state of the watch indicates it should be.
| `force_simulate` | Similar to the `simulate` mode, except the action is
not throttled even if the current state of the watch
indicates it should be.
| `execute` | Executes the action as it would have been executed if the
watch would have been triggered by its own trigger. The
execution might be throttled if the current state of the
watch indicates it should be.
| `force_execute` | Similar to the `execute` mode, except the action is not
throttled even if the current state of the watch indicates
it should be.
| `skip` | The action is skipped and is not executed or simulated.
Effectively forces the action to be throttled.
|======
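To illustrate, the mode for each action is set in the execute request body, keyed by action id (the action names here are hypothetical):

[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
{
  "action_modes" : {
    "action1" : "force_simulate",
    "action2" : "skip"
  }
}
--------------------------------------------------
// NOTCONSOLE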
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example executes the `my_watch` watch:
[source,js]
--------------------------------------------------
POST _xpack/watcher/watch/my_watch/_execute
--------------------------------------------------
// CONSOLE
// TEST[setup:my_active_watch]
The following example shows a more comprehensive execution of the `my_watch` watch:
[source,js]
@ -77,14 +154,14 @@ POST _xpack/watcher/watch/my_watch/_execute
// TEST[setup:my_active_watch]
<1> The triggered and schedule times are provided.
<2> The input as defined by the watch is ignored and instead the provided input
is used as the execution payload.
<3> The condition as defined by the watch is ignored and is assumed to
evaluate to `true`.
<4> Forces the simulation of `my-action`. Forcing the simulation means that
throttling is ignored and the watch is simulated by {watcher} instead of
being executed normally.
<5> The execution of the watch creates a watch record in the watch history,
and the throttling state of the watch is potentially updated accordingly.
This is an example of the output:
@ -192,40 +269,6 @@ This is an example of the output:
<2> The watch record document as it would be stored in the `.watcher-history` index.
<3> The watch execution results.
You can set a different execution mode for every action by associating the mode
name with the action id:
@ -257,14 +300,6 @@ POST _xpack/watcher/watch/my_watch/_execute
// CONSOLE
// TEST[setup:my_active_watch]
The following example shows how to execute a watch inline:
[source,js]


@ -2,7 +2,28 @@
[[watcher-api-get-watch]]
=== Get Watch API
This API retrieves a watch by its ID.
[float]
==== Request
`GET _xpack/watcher/watch/<watch_id>`
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Authorization
You must have `manage_watcher` or `monitor_watcher` cluster privileges to use
this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example gets a watch with `my-watch` id:


@ -3,16 +3,80 @@
=== Put Watch API
The PUT watch API either registers a new watch in {watcher} or updates an
existing one.
[float]
==== Request
`PUT _xpack/watcher/watch/<watch_id>`
[float]
==== Description
When a watch is registered, a new document that represents the watch is added to
the `.watches` index and its trigger is immediately registered with the relevant
trigger engine. Typically for the `schedule` trigger, the scheduler is the
trigger engine.
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch
directly to the `.watches` index using the Elasticsearch Index API.
If {security} is enabled, make sure no `write` privileges are
granted to anyone over the `.watches` index.
When adding a watch you can also define its initial
{xpack-ref}/how-watcher-works.html#watch-active-state[active state]. You do that
by setting the `active` parameter.
[float]
==== Path Parameters
`watch_id` (required)::
(string) Identifier for the watch.
[float]
==== Query Parameters
`active`::
(boolean) Defines whether the watch is active or inactive by default. The
default value is `true`, which means the watch is active by default.
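For instance, a watch could be registered in an inactive state by passing the parameter on the query string (a sketch; the watch body itself is omitted):

[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my-watch?active=false
--------------------------------------------------
// NOTCONSOLE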
[float]
==== Request Body
A watch has the following fields:
[options="header"]
|======
| Name | Description
| `trigger` | The {xpack-ref}/trigger.html[trigger] that defines when
the watch should run.
| `input` | The {xpack-ref}/input.html[input] that defines the input
that loads the data for the watch.
| `condition` | The {xpack-ref}/condition.html[condition] that defines if
the actions should be run.
| `actions` | The list of {xpack-ref}/actions.html[actions] that are
run if the condition matches.
| `metadata` | Metadata JSON that is copied into the history entries.
| `throttle_period` | The minimum time between actions being run. The default
is 5 seconds. This default can be changed in the config
file with the setting `xpack.watcher.throttle.period.default_period`.
|======
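Putting these fields together, a minimal watch definition might look like the following sketch; the schedule, input values, and action name are illustrative only:

[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my-watch
{
  "trigger" : { "schedule" : { "daily" : { "at" : "noon" } } },
  "input" : { "simple" : { "greeting" : "hello" } },
  "condition" : { "always" : {} },
  "actions" : {
    "log_greeting" : {
      "logging" : { "text" : "{{ctx.payload.greeting}}" }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE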
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example adds a watch with the `my-watch` id that has the following
characteristics:
@ -72,39 +136,10 @@ PUT _xpack/watcher/watch/my-watch
--------------------------------------------------
// CONSOLE
[float]
[[watcher-api-put-watch-active-state]]
==== Controlling Default Active State
When you add a watch you can also define its initial
{xpack-ref}/how-watcher-works.html#watch-active-state[active state]. You do that
by setting the `active` parameter. The following command adds a watch and sets
it to be inactive by default:
[source,js]
--------------------------------------------------


@ -3,7 +3,20 @@
=== Start API
The `start` API starts the {watcher} service if the service is not already
running.
[float]
==== Request
`POST _xpack/watcher/_start`
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
[source,js]
--------------------------------------------------


@ -2,22 +2,70 @@
[[watcher-api-stats]]
=== Stats API
The `stats` API returns the current {watcher} metrics.
[float]
==== Request
`GET _xpack/watcher/stats` +
`GET _xpack/watcher/stats/<metric>`
[float]
==== Description
This API always returns basic metrics. You can retrieve additional metrics by
using the `metric` parameter.
[float]
===== Current executing watches metric
The current executing watches metric gives insight into the watches that are
currently being executed by {watcher}. Additional information is shared per
watch that is currently executing. This information includes the `watch_id`,
the time its execution started and its current execution phase.
To include this metric, the `metric` option should be set to `executing_watches`
or `_all`. In addition you can also specify the `emit_stacktraces=true`
parameter, which adds stack traces for each watch that is being executed. These
stack traces can give you more insight into an execution of a watch.
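For example, both options can be combined on the query string (a sketch):

[source,js]
--------------------------------------------------
GET _xpack/watcher/stats?metric=executing_watches&emit_stacktraces=true
--------------------------------------------------
// NOTCONSOLE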
[float]
===== Queued watches metric
{watcher} moderates the execution of watches such that their execution won't put
too much pressure on the node and its resources. If too many watches trigger
concurrently and there isn't enough capacity to execute them all, some of the
watches are queued, waiting for the current executing watches to finish their
execution. The queued watches metric gives insight into these queued watches.
To include this metric, the `metric` option should include `queued_watches` or
`_all`.
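For example, the metric can be requested as a path parameter (a sketch):

[source,js]
--------------------------------------------------
GET _xpack/watcher/stats/queued_watches
--------------------------------------------------
// NOTCONSOLE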
[float]
==== Path Parameters
`emit_stacktraces`::
(boolean) Defines whether stack traces are generated for each watch that is
running. The default value is `false`.
`metric`::
(enum) Defines which additional metrics are included in the response.
`executing_watches`::: Includes the current executing watches in the response.
`queued_watches`::: Includes the watches queued for execution in the response.
`_all`::: Includes all metrics in the response.
[float]
==== Authorization
You must have `manage_watcher` or `monitor_watcher` cluster privileges to use
this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
The following example calls the `stats` API to retrieve basic metrics:
[source,js]
--------------------------------------------------
@ -39,21 +87,11 @@ A successful call returns a JSON structure similar to the following example:
}
--------------------------------------------------
<1> The current state of watcher, which can be `started`, `starting`, or `stopped`.
<2> The number of watches currently registered.
<3> The number of watches that were triggered and currently queued for execution.
<4> The largest size of the execution thread pool, which indicates the largest
number of concurrent executing watches.
The following example specifies the `metric` option as a query string argument
and includes both the basic metrics and metrics about the currently executing
watches:
@ -96,8 +134,8 @@ captures a watch in execution:
}
--------------------------------------------------
<1> A list of all the watches that are currently being executed by {watcher}.
When no watches are currently executing, an empty array is returned. The
captured watches are sorted by execution time in descending order. Thus the
longest running watch is always at the top.
<2> The id of the watch being executed.
@ -108,21 +146,6 @@ captures a watch in execution:
<6> The current watch execution phase. Can be `input`, `condition`, `actions`,
`awaits_execution`, `started`, `watch_transform`, `aborted`, `finished`.
The following example specifies the `queued_watches` metric option and includes
both the basic metrics and the queued watches:


@ -2,8 +2,21 @@
[[watcher-api-stop]]
=== Stop API
The `stop` API stops the {watcher} service if the service is running.
[float]
==== Request
`POST _xpack/watcher/_stop`
[float]
==== Authorization
You must have `manage_watcher` cluster privileges to use this API. For more
information, see {xpack-ref}/security-privileges.html[Security Privileges].
[float]
==== Examples
[source,js]
--------------------------------------------------


@ -14,13 +14,13 @@ is mostly useful in situations where all users locked themselves out of the syst
realm is your only way out - you can define a new `admin` user in the `file` realm
and use it to log in and reset the credentials of all other users.
IMPORTANT: When you configure realms in `elasticsearch.yml`, only the
realms you specify are used for authentication. To use the
`file` realm as a fallback, you must include it in the realm chain.
To define users, {security} provides the {ref}/users-command.html[users]
command-line tool. This tool enables you to add and remove users, assign user
roles and manage user passwords.
==== Configuring a File Realm
@ -84,152 +84,6 @@ xpack:
(Expert Setting).
|=======================
==== A Look Under the Hood
All the data about the users for the `file` realm is stored in two files, `users`
@ -255,8 +109,8 @@ Puppet or Chef).
==============================
While it is possible to modify these files directly using any standard text
editor, we strongly recommend using the {ref}/users-command.html[`bin/x-pack/users`]
command-line tool to apply the required changes.
[float]
[[users-file]]


@ -2,8 +2,8 @@
=== Mapping Users and Groups to Roles
If you authenticate users with the `native` or `file` realms, you can manage
role assignment by using the <<managing-native-users, User Management APIs>> or
the {ref}/users-command.html[users] command-line tool respectively.
For other types of realms, you must create _role-mappings_ that define which
roles should be assigned to each user based on their username, groups, or


@ -47,9 +47,8 @@ _realms_. {security} provides the following built-in realms:
| `file` | | | An internal realm where users are defined in files
stored on each node in the Elasticsearch cluster.
With this realm, users are authenticated by usernames
and passwords. The users are managed via dedicated
tools that are provided by {xpack} on installation.
|======
If none of the built-in realms meets your needs, you can also build your own


@ -228,9 +228,10 @@ You can also set a watch to the _inactive_ state. Inactive watches are not
registered with a trigger engine and can never be triggered.
To set a watch to the inactive state when you create it, set the
{ref}/watcher-api-put-watch.html[`active`] parameter to _inactive_. To
deactivate an existing watch, use the
{ref}/watcher-api-deactivate-watch.html[Deactivate Watch API]. To reactivate an
inactive watch, use the
{ref}/watcher-api-activate-watch.html[Activate Watch API].
NOTE: You can use the {ref}/watcher-api-execute-watch.html[Execute Watch API]


@ -111,7 +111,7 @@ S3Object getZip() {
return client.getObject('prelert-artifacts', key)
} catch (AmazonServiceException e) {
if (e.getStatusCode() != 403) {
throw new GradleException('Error while trying to get ml-cpp snapshot: ' + e.getMessage(), e)
}
sleep(500)
retries--


@ -286,7 +286,7 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
components.add(licenseState);
try {
components.addAll(security.createComponents(client, threadPool, clusterService, resourceWatcherService,
extensionsService.getExtensions()));
} catch (final Exception e) {
throw new IllegalStateException("security initialization failed", e);
@ -322,7 +322,7 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
components.addAll(logstash.createComponents(internalClient, clusterService));
components.addAll(upgrade.createComponents(internalClient, clusterService, threadPool, resourceWatcherService,
components.addAll(upgrade.createComponents(client, clusterService, threadPool, resourceWatcherService,
scriptService, xContentRegistry));
// just create the reloader as it will pull all of the loaded ssl configurations and start watching them

View File

@ -44,6 +44,7 @@ import org.elasticsearch.xpack.ml.action.DeleteJobAction;
import org.elasticsearch.xpack.ml.action.DeleteModelSnapshotAction;
import org.elasticsearch.xpack.ml.action.FinalizeJobExecutionAction;
import org.elasticsearch.xpack.ml.action.FlushJobAction;
import org.elasticsearch.xpack.ml.action.ForecastJobAction;
import org.elasticsearch.xpack.ml.action.GetBucketsAction;
import org.elasticsearch.xpack.ml.action.GetCategoriesAction;
import org.elasticsearch.xpack.ml.action.GetDatafeedsAction;
@ -108,6 +109,7 @@ import org.elasticsearch.xpack.ml.rest.filter.RestPutFilterAction;
import org.elasticsearch.xpack.ml.rest.job.RestCloseJobAction;
import org.elasticsearch.xpack.ml.rest.job.RestDeleteJobAction;
import org.elasticsearch.xpack.ml.rest.job.RestFlushJobAction;
import org.elasticsearch.xpack.ml.rest.job.RestForecastJobAction;
import org.elasticsearch.xpack.ml.rest.job.RestGetJobStatsAction;
import org.elasticsearch.xpack.ml.rest.job.RestGetJobsAction;
import org.elasticsearch.xpack.ml.rest.job.RestOpenJobAction;
@ -383,7 +385,8 @@ public class MachineLearning implements ActionPlugin {
new RestStartDatafeedAction(settings, restController),
new RestStopDatafeedAction(settings, restController),
new RestDeleteModelSnapshotAction(settings, restController),
new RestDeleteExpiredDataAction(settings, restController)
new RestDeleteExpiredDataAction(settings, restController),
new RestForecastJobAction(settings, restController)
);
}
@ -431,7 +434,8 @@ public class MachineLearning implements ActionPlugin {
new ActionHandler<>(CompletionPersistentTaskAction.INSTANCE, CompletionPersistentTaskAction.TransportAction.class),
new ActionHandler<>(RemovePersistentTaskAction.INSTANCE, RemovePersistentTaskAction.TransportAction.class),
new ActionHandler<>(UpdateProcessAction.INSTANCE, UpdateProcessAction.TransportAction.class),
new ActionHandler<>(DeleteExpiredDataAction.INSTANCE, DeleteExpiredDataAction.TransportAction.class)
new ActionHandler<>(DeleteExpiredDataAction.INSTANCE, DeleteExpiredDataAction.TransportAction.class),
new ActionHandler<>(ForecastJobAction.INSTANCE, ForecastJobAction.TransportAction.class)
);
}

View File

@ -0,0 +1,234 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.action;
import org.elasticsearch.action.Action;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequestBuilder;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.tasks.BaseTasksResponse;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import java.io.IOException;
import java.util.Objects;
import static org.elasticsearch.xpack.ml.action.ForecastJobAction.Request.END_TIME;
public class ForecastJobAction extends Action<ForecastJobAction.Request, ForecastJobAction.Response, ForecastJobAction.RequestBuilder> {
public static final ForecastJobAction INSTANCE = new ForecastJobAction();
public static final String NAME = "cluster:admin/xpack/ml/job/forecast";
private ForecastJobAction() {
super(NAME);
}
@Override
public RequestBuilder newRequestBuilder(ElasticsearchClient client) {
return new RequestBuilder(client, this);
}
@Override
public Response newResponse() {
return new Response();
}
public static class Request extends TransportJobTaskAction.JobTaskRequest<Request> implements ToXContentObject {
public static final ParseField END_TIME = new ParseField("end");
private static final ObjectParser<Request, Void> PARSER = new ObjectParser<>(NAME, Request::new);
static {
PARSER.declareString((request, jobId) -> request.jobId = jobId, Job.ID);
PARSER.declareString(Request::setEndTime, END_TIME);
}
public static Request parseRequest(String jobId, XContentParser parser) {
Request request = PARSER.apply(parser, null);
if (jobId != null) {
request.jobId = jobId;
}
return request;
}
private String endTime;
Request() {
}
public Request(String jobId) {
super(jobId);
}
public String getEndTime() {
return endTime;
}
public void setEndTime(String endTime) {
this.endTime = endTime;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
this.endTime = in.readOptionalString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeOptionalString(endTime);
}
@Override
public int hashCode() {
return Objects.hash(jobId, endTime);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null || getClass() != obj.getClass()) {
return false;
}
Request other = (Request) obj;
return Objects.equals(jobId, other.jobId) && Objects.equals(endTime, other.endTime);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(Job.ID.getPreferredName(), jobId);
if (endTime != null) {
builder.field(END_TIME.getPreferredName(), endTime);
}
builder.endObject();
return builder;
}
}
static class RequestBuilder extends ActionRequestBuilder<Request, Response, RequestBuilder> {
RequestBuilder(ElasticsearchClient client, ForecastJobAction action) {
super(client, action, new Request());
}
}
public static class Response extends BaseTasksResponse implements Writeable, ToXContentObject {
private boolean acknowledged;
private long id;
Response() {
super(null, null);
}
Response(boolean acknowledged, long id) {
super(null, null);
this.acknowledged = acknowledged;
this.id = id;
}
public boolean isAcknowledged() {
return acknowledged;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
acknowledged = in.readBoolean();
id = in.readLong();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeBoolean(acknowledged);
out.writeLong(id);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field("acknowledged", acknowledged);
builder.field("id", id);
builder.endObject();
return builder;
}
@Override
public boolean equals(Object o) {
if (this == o)
return true;
if (o == null || getClass() != o.getClass())
return false;
Response response = (Response) o;
return acknowledged == response.acknowledged && id == response.id;
}
@Override
public int hashCode() {
return Objects.hash(acknowledged, id);
}
}
public static class TransportAction extends TransportJobTaskAction<Request, Response> {
@Inject
public TransportAction(Settings settings, TransportService transportService, ThreadPool threadPool, ClusterService clusterService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
AutodetectProcessManager processManager) {
super(settings, ForecastJobAction.NAME, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ForecastJobAction.Request::new, ForecastJobAction.Response::new, ThreadPool.Names.SAME,
processManager);
// ThreadPool.Names.SAME, because the operation is executed by the
// autodetect worker thread
}
@Override
protected ForecastJobAction.Response readTaskResponse(StreamInput in) throws IOException {
Response response = new Response();
response.readFrom(in);
return response;
}
@Override
protected void taskOperation(Request request, OpenJobAction.JobTask task, ActionListener<Response> listener) {
ForecastParams.Builder paramsBuilder = ForecastParams.builder();
if (request.getEndTime() != null) {
paramsBuilder.endTime(request.getEndTime(), END_TIME);
}
ForecastParams params = paramsBuilder.build();
processManager.forecastJob(task, params, e -> {
if (e == null) {
listener.onResponse(new Response(true, params.getId()));
} else {
listener.onFailure(e);
}
});
}
}
}
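As an aside, the `Request` above serializes `endTime` with `writeOptionalString`: a presence flag followed by the value only when it is set. A minimal standalone sketch of that optional-field wire pattern, using `DataOutputStream#writeUTF` as a stand-in for Elasticsearch's own string encoding (the class and method names below are illustrative, not x-pack APIs):

```java
import java.io.*;

public class OptionalFieldDemo {
    // Presence flag first, payload only when the field is set -- the same
    // shape StreamOutput#writeOptionalString / readOptionalString follow.
    static byte[] writeOptional(String value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeBoolean(value != null);
            if (value != null) {
                out.writeUTF(value);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String readOptional(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            return in.readBoolean() ? in.readUTF() : null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        assert "now+1d".equals(readOptional(writeOptional("now+1d")));
        assert readOptional(writeOptional(null)) == null;
    }
}
```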

View File

@ -17,6 +17,7 @@ import org.elasticsearch.xpack.ml.job.results.AnomalyRecord;
import org.elasticsearch.xpack.ml.job.results.Bucket;
import org.elasticsearch.xpack.ml.job.results.BucketInfluencer;
import org.elasticsearch.xpack.ml.job.results.CategoryDefinition;
import org.elasticsearch.xpack.ml.job.results.Forecast;
import org.elasticsearch.xpack.ml.job.results.Influence;
import org.elasticsearch.xpack.ml.job.results.Influencer;
import org.elasticsearch.xpack.ml.job.results.ModelPlot;
@ -289,6 +290,7 @@ public class ElasticsearchMappings {
.field(TYPE, DOUBLE)
.endObject();
addForecastFieldsToMapping(builder);
addAnomalyRecordFieldsToMapping(builder);
addInfluencerFieldsToMapping(builder);
addModelSizeStatsFieldsToMapping(builder);
@ -320,6 +322,24 @@ public class ElasticsearchMappings {
}
}
private static void addForecastFieldsToMapping(XContentBuilder builder) throws IOException {
// Forecast Output
builder.startObject(Forecast.FORECAST_LOWER.getPreferredName())
.field(TYPE, DOUBLE)
.endObject()
.startObject(Forecast.FORECAST_UPPER.getPreferredName())
.field(TYPE, DOUBLE)
.endObject()
.startObject(Forecast.FORECAST_PREDICTION.getPreferredName())
.field(TYPE, DOUBLE)
.endObject()
.startObject(Forecast.FORECAST_ID.getPreferredName())
.field(TYPE, LONG)
.endObject();
}
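For reference, the builder chain above produces roughly this fragment under the results mapping's `properties` section (field names taken from the `Forecast` ParseFields; a sketch of the generated mapping, not captured output):

```json
{
  "forecast_lower":      { "type": "double" },
  "forecast_upper":      { "type": "double" },
  "forecast_prediction": { "type": "double" },
  "forecast_id":         { "type": "long" }
}
```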
/**
* AnomalyRecord fields to be added under the 'properties' section of the mapping
* @param builder Add properties to this builder

View File

@ -29,6 +29,7 @@ import org.elasticsearch.xpack.ml.job.results.AnomalyRecord;
import org.elasticsearch.xpack.ml.job.results.Bucket;
import org.elasticsearch.xpack.ml.job.results.BucketInfluencer;
import org.elasticsearch.xpack.ml.job.results.CategoryDefinition;
import org.elasticsearch.xpack.ml.job.results.Forecast;
import org.elasticsearch.xpack.ml.job.results.Influencer;
import org.elasticsearch.xpack.ml.job.results.ModelPlot;
@ -151,6 +152,12 @@ public class JobResultsPersister extends AbstractComponent {
return this;
}
public Builder persistForecast(Forecast forecast) {
logger.trace("[{}] ES BULK ACTION: index forecast to index [{}] with ID [{}]", jobId, indexName, forecast.getId());
indexResult(forecast.getId(), forecast, Forecast.RESULT_TYPE_VALUE);
return this;
}
private void indexResult(String id, ToXContent resultDoc, String resultType) {
try (XContentBuilder content = toXContentBuilder(resultDoc)) {
bulkRequest.add(new IndexRequest(indexName, DOC_TYPE, id).source(content));

View File

@ -27,6 +27,7 @@ import org.elasticsearch.xpack.ml.job.process.autodetect.output.AutoDetectResult
import org.elasticsearch.xpack.ml.job.process.autodetect.output.FlushAcknowledgement;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.DataCounts;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
@ -205,6 +206,13 @@ public class AutodetectCommunicator implements Closeable {
}, handler);
}
public void forecastJob(ForecastParams params, BiConsumer<Void, Exception> handler) {
submitOperation(() -> {
autodetectProcess.forecastJob(params);
return null;
}, handler);
}
@Nullable
FlushAcknowledgement waitFlushToCompletion(String flushId) {
LOGGER.debug("[{}] waiting for flush", job.getId());

View File

@ -10,6 +10,7 @@ import org.elasticsearch.xpack.ml.job.config.ModelPlotConfig;
import org.elasticsearch.xpack.ml.job.persistence.StateStreamer;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
import org.elasticsearch.xpack.ml.job.results.AutodetectResult;
@ -85,6 +86,14 @@ public interface AutodetectProcess extends Closeable {
*/
String flushJob(FlushJobParams params) throws IOException;
/**
* Do a forecast on a running job.
*
* @param params The forecast parameters
* @throws IOException If the write fails
*/
void forecastJob(ForecastParams params) throws IOException;
/**
* Flush the output data stream
*/

View File

@ -42,6 +42,7 @@ import org.elasticsearch.xpack.ml.job.process.autodetect.output.FlushAcknowledge
import org.elasticsearch.xpack.ml.job.process.autodetect.params.AutodetectParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.DataCounts;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
@ -49,6 +50,7 @@ import org.elasticsearch.xpack.ml.job.process.normalizer.NormalizerFactory;
import org.elasticsearch.xpack.ml.job.process.normalizer.Renormalizer;
import org.elasticsearch.xpack.ml.job.process.normalizer.ScoresUpdater;
import org.elasticsearch.xpack.ml.job.process.normalizer.ShortCircuitingRenormalizer;
import org.elasticsearch.xpack.ml.job.results.Forecast;
import org.elasticsearch.xpack.ml.notifications.Auditor;
import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData.PersistentTask;
@ -240,6 +242,33 @@ public class AutodetectProcessManager extends AbstractComponent {
});
}
/**
* Do a forecast for the running job.
*
* @param jobTask The job task
* @param params Forecast parameters
*/
public void forecastJob(JobTask jobTask, ForecastParams params, Consumer<Exception> handler) {
logger.debug("Forecasting job {}", jobTask.getJobId());
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
if (communicator == null) {
String message = String.format(Locale.ROOT, "Cannot forecast because job [%s] is not open", jobTask.getJobId());
logger.debug(message);
handler.accept(ExceptionsHelper.conflictStatusException(message));
return;
}
communicator.forecastJob(params, (aVoid, e) -> {
if (e == null) {
handler.accept(null);
} else {
String msg = String.format(Locale.ROOT, "[%s] exception while forecasting job", jobTask.getJobId());
logger.error(msg, e);
handler.accept(ExceptionsHelper.serverError(msg, e));
}
});
}
public void writeUpdateProcessMessage(JobTask jobTask, List<JobUpdate.DetectorUpdate> updates, ModelPlotConfig config,
Consumer<Exception> handler) {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());

View File

@ -11,6 +11,7 @@ import org.elasticsearch.xpack.ml.job.persistence.StateStreamer;
import org.elasticsearch.xpack.ml.job.process.autodetect.output.FlushAcknowledgement;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.Quantiles;
import org.elasticsearch.xpack.ml.job.results.AutodetectResult;
@ -78,7 +79,7 @@ public class BlackHoleAutodetectProcess implements AutodetectProcess {
@Override
public String flushJob(FlushJobParams params) throws IOException {
FlushAcknowledgement flushAcknowledgement = new FlushAcknowledgement(FLUSH_ID, null);
AutodetectResult result = new AutodetectResult(null, null, null, null, null, null, null, null, flushAcknowledgement);
AutodetectResult result = new AutodetectResult(null, null, null, null, null, null, null, null, null, null, flushAcknowledgement);
results.add(result);
return FLUSH_ID;
}
@ -91,7 +92,7 @@ public class BlackHoleAutodetectProcess implements AutodetectProcess {
public void close() throws IOException {
if (open) {
Quantiles quantiles = new Quantiles(jobId, new Date(), "black hole quantiles");
AutodetectResult result = new AutodetectResult(null, null, null, quantiles, null, null, null, null, null);
AutodetectResult result = new AutodetectResult(null, null, null, quantiles, null, null, null, null, null, null, null);
results.add(result);
open = false;
}
@ -147,4 +148,8 @@ public class BlackHoleAutodetectProcess implements AutodetectProcess {
public String readError() {
return "";
}
@Override
public void forecastJob(ForecastParams params) throws IOException {
}
}

View File

@ -17,6 +17,7 @@ import org.elasticsearch.xpack.ml.job.process.autodetect.output.AutodetectResult
import org.elasticsearch.xpack.ml.job.process.autodetect.output.StateProcessor;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
import org.elasticsearch.xpack.ml.job.process.autodetect.writer.ControlMsgToProcessWriter;
import org.elasticsearch.xpack.ml.job.process.autodetect.writer.LengthEncodedWriter;
@ -94,7 +95,9 @@ class NativeAutodetectProcess implements AutodetectProcess {
if (processCloseInitiated == false && processKilled == false) {
// The log message doesn't say "crashed", as the process could have been killed
// by a user or other process (e.g. the Linux OOM killer)
LOGGER.error("[{}] autodetect process stopped unexpectedly", jobId);
String errors = cppLogHandler.getErrors();
LOGGER.error("[{}] autodetect process stopped unexpectedly: {}", jobId, errors);
onProcessCrash.run();
}
}
@ -163,6 +166,12 @@ class NativeAutodetectProcess implements AutodetectProcess {
return writer.writeFlushMessage();
}
@Override
public void forecastJob(ForecastParams params) throws IOException {
ControlMsgToProcessWriter writer = new ControlMsgToProcessWriter(recordWriter, numberOfAnalysisFields);
writer.writeForecastMessage(params);
}
@Override
public void flushStream() throws IOException {
recordWriter.flush();

View File

@ -26,6 +26,8 @@ import org.elasticsearch.xpack.ml.job.results.AnomalyRecord;
import org.elasticsearch.xpack.ml.job.results.AutodetectResult;
import org.elasticsearch.xpack.ml.job.results.Bucket;
import org.elasticsearch.xpack.ml.job.results.CategoryDefinition;
import org.elasticsearch.xpack.ml.job.results.Forecast;
import org.elasticsearch.xpack.ml.job.results.ForecastStats;
import org.elasticsearch.xpack.ml.job.results.Influencer;
import org.elasticsearch.xpack.ml.job.results.ModelPlot;
@ -184,6 +186,20 @@ public class AutoDetectResultProcessor {
if (modelPlot != null) {
context.bulkResultsPersister.persistModelPlot(modelPlot);
}
Forecast forecast = result.getForecast();
if (forecast != null) {
context.bulkResultsPersister.persistForecast(forecast);
}
ForecastStats forecastStats = result.getForecastStats();
if (forecastStats != null) {
// forecast stats are sent by autodetect but do not get persisted,
// still they mark the end of a forecast
LOGGER.trace("Received Forecast Stats [{}]", forecastStats.getId());
// forecast stats mark the end of a forecast, therefore commit whatever we have
context.bulkResultsPersister.executeRequest();
}
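The forecast-stats branch above implements a small "commit on end-marker" pattern: forecast documents are buffered into the bulk request and only flushed when the stats marker arrives. A toy standalone sketch of that pattern (the names are illustrative, not the actual x-pack classes):

```java
import java.util.ArrayList;
import java.util.List;

public class BulkOnMarkerDemo {
    final List<String> buffer = new ArrayList<>();   // stands in for the bulk request
    int commits = 0;

    void onForecast(String doc) {
        buffer.add(doc);    // buffered, not yet visible to readers
    }

    void onForecastStats() {
        // the stats result is not persisted itself; it marks the end of the
        // forecast, so everything buffered so far is committed in one bulk call
        commits++;
        buffer.clear();
    }

    public static void main(String[] args) {
        BulkOnMarkerDemo p = new BulkOnMarkerDemo();
        p.onForecast("bucket-1");
        p.onForecast("bucket-2");
        p.onForecastStats();
        assert p.buffer.isEmpty();
        assert p.commits == 1;
    }
}
```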
ModelSizeStats modelSizeStats = result.getModelSizeStats();
if (modelSizeStats != null) {
LOGGER.trace("[{}] Parsed ModelSizeStats: {} / {} / {} / {} / {} / {}",

View File

@ -0,0 +1,102 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.process.autodetect.params;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.joda.DateMathParser;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.xpack.ml.job.messages.Messages;
import java.util.Objects;
public class ForecastParams {
private final long endTime;
private final long id;
private ForecastParams(long id, long endTime) {
this.id = id;
this.endTime = endTime;
}
/**
* The forecast end time in seconds from the epoch
* @return The end time in seconds from the epoch
*/
public long getEndTime() {
return endTime;
}
/**
* The forecast id
*
* @return The forecast Id
*/
public long getId() {
return id;
}
@Override
public int hashCode() {
return Objects.hash(id, endTime);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null || getClass() != obj.getClass()) {
return false;
}
ForecastParams other = (ForecastParams) obj;
return Objects.equals(id, other.id) && Objects.equals(endTime, other.endTime);
}
public static Builder builder() {
return new Builder();
}
public static class Builder {
private long endTimeEpochSecs;
private long startTime;
private long forecastId;
private Builder() {
startTime = System.currentTimeMillis();
endTimeEpochSecs = tomorrow(startTime);
forecastId = generateId();
}
static long tomorrow(long now) {
// 'now' is epoch milliseconds; the result is epoch seconds, one day ahead
return (now / 1000) + (60 * 60 * 24);
}
private long generateId() {
return startTime;
}
public Builder endTime(String endTime, ParseField paramName) {
DateMathParser dateMathParser = new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER);
try {
endTimeEpochSecs = dateMathParser.parse(endTime, System::currentTimeMillis) / 1000;
} catch (Exception e) {
String msg = Messages.getMessage(Messages.REST_INVALID_DATETIME_PARAMS, paramName.getPreferredName(), endTime);
throw new ElasticsearchParseException(msg, e);
}
return this;
}
public ForecastParams build() {
return new ForecastParams(forecastId, endTimeEpochSecs);
}
}
}
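One subtlety in the `Builder` above: `startTime` is epoch milliseconds while `endTimeEpochSecs` is epoch seconds, and the default end time is one day ahead. Pulling just that arithmetic out (same expression as `tomorrow` above; the wrapper class is illustrative):

```java
public class ForecastDefaultsDemo {
    // Same arithmetic as ForecastParams.Builder#tomorrow: input is epoch
    // milliseconds, output is epoch *seconds* one day later.
    static long tomorrow(long nowMillis) {
        return (nowMillis / 1000) + (60 * 60 * 24);
    }

    public static void main(String[] args) {
        long nowMillis = 1_500_000_000_000L;
        assert tomorrow(nowMillis) == 1_500_000_000L + 86_400L;
    }
}
```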

View File

@ -7,11 +7,13 @@ package org.elasticsearch.xpack.ml.job.process.autodetect.writer;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.xpack.ml.job.config.DetectionRule;
import org.elasticsearch.xpack.ml.job.config.ModelPlotConfig;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.DataLoadParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.FlushJobParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;
import java.io.IOException;
import java.io.OutputStream;
@ -37,6 +39,11 @@ public class ControlMsgToProcessWriter {
*/
private static final String FLUSH_MESSAGE_CODE = "f";
/**
* This must match the code defined in the api::CAnomalyDetector C++ class.
*/
private static final String FORECAST_MESSAGE_CODE = "p";
/**
* This must match the code defined in the api::CAnomalyDetector C++ class.
*/
@ -137,14 +144,32 @@ public class ControlMsgToProcessWriter {
String flushId = Long.toString(ms_FlushNumber.getAndIncrement());
writeMessage(FLUSH_MESSAGE_CODE + flushId);
char[] spaces = new char[FLUSH_SPACES_LENGTH];
Arrays.fill(spaces, ' ');
writeMessage(new String(spaces));
fillCommandBuffer();
lengthEncodedWriter.flush();
return flushId;
}
public void writeForecastMessage(ForecastParams params) throws IOException {
XContentBuilder builder = XContentFactory.jsonBuilder()
.startObject()
.field("forecast_id", params.getId())
.field("end_time", params.getEndTime())
.endObject();
writeMessage(FORECAST_MESSAGE_CODE + builder.string());
fillCommandBuffer();
lengthEncodedWriter.flush();
}
// todo(hendrikm): workaround, see
// https://github.com/elastic/machine-learning-cpp/issues/123
private void fillCommandBuffer() throws IOException {
char[] spaces = new char[FLUSH_SPACES_LENGTH];
Arrays.fill(spaces, ' ');
writeMessage(new String(spaces));
}
public void writeResetBucketsMessage(DataLoadParams params) throws IOException {
writeControlCodeFollowedByTimeRange(RESET_BUCKETS_MESSAGE_CODE, params.getStart(), params.getEnd());
}
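Concretely, `writeForecastMessage` above emits the single-character code `p` followed by a JSON payload. A standalone sketch of the resulting message body, with plain string concatenation standing in for `XContentBuilder` (the class name is illustrative):

```java
public class ForecastMessageDemo {
    // Control code "p" (which must match the C++ side) plus the JSON payload
    // carrying the forecast id and end time.
    static String buildForecastMessage(long forecastId, long endTimeEpochSecs) {
        return "p{\"forecast_id\":" + forecastId
                + ",\"end_time\":" + endTimeEpochSecs + "}";
    }

    public static void main(String[] args) {
        assert buildForecastMessage(42, 1_500_086_400L)
                .equals("p{\"forecast_id\":42,\"end_time\":1500086400}");
    }
}
```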

View File

@ -5,6 +5,7 @@
*/
package org.elasticsearch.xpack.ml.job.results;
import org.elasticsearch.Version;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
@ -31,7 +32,7 @@ public class AutodetectResult implements ToXContentObject, Writeable {
TYPE.getPreferredName(), a -> new AutodetectResult((Bucket) a[0], (List<AnomalyRecord>) a[1], (List<Influencer>) a[2],
(Quantiles) a[3], a[4] == null ? null : ((ModelSnapshot.Builder) a[4]).build(),
a[5] == null ? null : ((ModelSizeStats.Builder) a[5]).build(),
(ModelPlot) a[6], (CategoryDefinition) a[7], (FlushAcknowledgement) a[8]));
(ModelPlot) a[6], (Forecast) a[7], (ForecastStats) a[8], (CategoryDefinition) a[9], (FlushAcknowledgement) a[10]));
static {
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), Bucket.PARSER, Bucket.RESULT_TYPE_FIELD);
@ -42,6 +43,8 @@ public class AutodetectResult implements ToXContentObject, Writeable {
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), ModelSizeStats.PARSER,
ModelSizeStats.RESULT_TYPE_FIELD);
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), ModelPlot.PARSER, ModelPlot.RESULTS_FIELD);
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), Forecast.PARSER, Forecast.RESULTS_FIELD);
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), ForecastStats.PARSER, ForecastStats.RESULTS_FIELD);
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), CategoryDefinition.PARSER, CategoryDefinition.TYPE);
PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), FlushAcknowledgement.PARSER, FlushAcknowledgement.TYPE);
}
@ -53,12 +56,14 @@ public class AutodetectResult implements ToXContentObject, Writeable {
private final ModelSnapshot modelSnapshot;
private final ModelSizeStats modelSizeStats;
private final ModelPlot modelPlot;
private final Forecast forecast;
private final ForecastStats forecastStats;
private final CategoryDefinition categoryDefinition;
private final FlushAcknowledgement flushAcknowledgement;
public AutodetectResult(Bucket bucket, List<AnomalyRecord> records, List<Influencer> influencers, Quantiles quantiles,
ModelSnapshot modelSnapshot, ModelSizeStats modelSizeStats, ModelPlot modelPlot,
CategoryDefinition categoryDefinition, FlushAcknowledgement flushAcknowledgement) {
ModelSnapshot modelSnapshot, ModelSizeStats modelSizeStats, ModelPlot modelPlot, Forecast forecast,
ForecastStats forecastStats, CategoryDefinition categoryDefinition, FlushAcknowledgement flushAcknowledgement) {
this.bucket = bucket;
this.records = records;
this.influencers = influencers;
@ -66,6 +71,8 @@ public class AutodetectResult implements ToXContentObject, Writeable {
this.modelSnapshot = modelSnapshot;
this.modelSizeStats = modelSizeStats;
this.modelPlot = modelPlot;
this.forecast = forecast;
this.forecastStats = forecastStats;
this.categoryDefinition = categoryDefinition;
this.flushAcknowledgement = flushAcknowledgement;
}
@ -116,6 +123,22 @@ public class AutodetectResult implements ToXContentObject, Writeable {
} else {
this.flushAcknowledgement = null;
}
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
if (in.readBoolean()) {
this.forecast = new Forecast(in);
} else {
this.forecast = null;
}
if (in.readBoolean()) {
this.forecastStats = new ForecastStats(in);
} else {
this.forecastStats = null;
}
} else {
this.forecast = null;
this.forecastStats = null;
}
}
@Override
@ -129,6 +152,11 @@ public class AutodetectResult implements ToXContentObject, Writeable {
writeNullable(modelPlot, out);
writeNullable(categoryDefinition, out);
writeNullable(flushAcknowledgement, out);
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
writeNullable(forecast, out);
writeNullable(forecastStats, out);
}
}
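The `onOrAfter(Version.V_6_1_0)` gates above follow the usual wire-compatibility rule: a new field is written only when the other node understands it, and the reader applies the exact same gate so both sides agree on field order. A toy sketch of that rule, with versions modelled as plain ints (names are illustrative, not the Elasticsearch API):

```java
import java.io.*;

public class VersionGateDemo {
    static final int V_6_1_0 = 60100;

    // Write a new optional field only when the destination is new enough.
    static byte[] write(int targetVersion, String forecast) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeUTF("common-field");        // fields every version knows
            if (targetVersion >= V_6_1_0) {      // must mirror the reader's gate
                out.writeBoolean(forecast != null);
                if (forecast != null) {
                    out.writeUTF(forecast);
                }
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String read(int sourceVersion, byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            in.readUTF();                        // common field
            if (sourceVersion >= V_6_1_0 && in.readBoolean()) {
                return in.readUTF();
            }
            return null;                         // old peer never sent the field
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        assert "f1".equals(read(V_6_1_0, write(V_6_1_0, "f1")));
        assert read(60099, write(60099, "f1")) == null;  // gated out on old wire
    }
}
```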
private static void writeNullable(Writeable writeable, StreamOutput out) throws IOException {
@ -157,6 +185,8 @@ public class AutodetectResult implements ToXContentObject, Writeable {
addNullableField(ModelSnapshot.TYPE, modelSnapshot, builder);
addNullableField(ModelSizeStats.RESULT_TYPE_FIELD, modelSizeStats, builder);
addNullableField(ModelPlot.RESULTS_FIELD, modelPlot, builder);
addNullableField(Forecast.RESULTS_FIELD, forecast, builder);
addNullableField(ForecastStats.RESULTS_FIELD, forecastStats, builder);
addNullableField(CategoryDefinition.TYPE, categoryDefinition, builder);
addNullableField(FlushAcknowledgement.TYPE, flushAcknowledgement, builder);
builder.endObject();
@ -203,6 +233,14 @@ public class AutodetectResult implements ToXContentObject, Writeable {
return modelPlot;
}
public Forecast getForecast() {
return forecast;
}
public ForecastStats getForecastStats() {
return forecastStats;
}
public CategoryDefinition getCategoryDefinition() {
return categoryDefinition;
}
@ -213,8 +251,8 @@ public class AutodetectResult implements ToXContentObject, Writeable {
@Override
public int hashCode() {
return Objects.hash(bucket, records, influencers, categoryDefinition, flushAcknowledgement, modelPlot, modelSizeStats,
modelSnapshot, quantiles);
return Objects.hash(bucket, records, influencers, categoryDefinition, flushAcknowledgement, modelPlot, forecast, forecastStats,
modelSizeStats, modelSnapshot, quantiles);
}
@Override
@ -232,6 +270,8 @@ public class AutodetectResult implements ToXContentObject, Writeable {
Objects.equals(categoryDefinition, other.categoryDefinition) &&
Objects.equals(flushAcknowledgement, other.flushAcknowledgement) &&
Objects.equals(modelPlot, other.modelPlot) &&
Objects.equals(forecast, other.forecast) &&
Objects.equals(forecastStats, other.forecastStats) &&
Objects.equals(modelSizeStats, other.modelSizeStats) &&
Objects.equals(modelSnapshot, other.modelSnapshot) &&
Objects.equals(quantiles, other.quantiles);

View File

@ -0,0 +1,308 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.results;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser.ValueType;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser.Token;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.utils.time.TimeUtils;
import java.io.IOException;
import java.util.Date;
import java.util.Objects;
/**
* Model Forecast POJO.
*/
public class Forecast implements ToXContentObject, Writeable {
/**
* Result type
*/
public static final String RESULT_TYPE_VALUE = "model_forecast";
public static final ParseField RESULTS_FIELD = new ParseField(RESULT_TYPE_VALUE);
public static final ParseField FORECAST_ID = new ParseField("forecast_id");
public static final ParseField PARTITION_FIELD_NAME = new ParseField("partition_field_name");
public static final ParseField PARTITION_FIELD_VALUE = new ParseField("partition_field_value");
public static final ParseField OVER_FIELD_NAME = new ParseField("over_field_name");
public static final ParseField OVER_FIELD_VALUE = new ParseField("over_field_value");
public static final ParseField BY_FIELD_NAME = new ParseField("by_field_name");
public static final ParseField BY_FIELD_VALUE = new ParseField("by_field_value");
public static final ParseField MODEL_FEATURE = new ParseField("model_feature");
public static final ParseField FORECAST_LOWER = new ParseField("forecast_lower");
public static final ParseField FORECAST_UPPER = new ParseField("forecast_upper");
public static final ParseField FORECAST_PREDICTION = new ParseField("forecast_prediction");
public static final ParseField BUCKET_SPAN = new ParseField("bucket_span");
public static final ConstructingObjectParser<Forecast, Void> PARSER =
new ConstructingObjectParser<>(RESULT_TYPE_VALUE, a -> new Forecast((String) a[0], (long) a[1], (Date) a[2], (long) a[3]));
static {
PARSER.declareString(ConstructingObjectParser.constructorArg(), Job.ID);
PARSER.declareLong(ConstructingObjectParser.constructorArg(), FORECAST_ID);
PARSER.declareField(ConstructingObjectParser.constructorArg(), p -> {
if (p.currentToken() == Token.VALUE_NUMBER) {
return new Date(p.longValue());
} else if (p.currentToken() == Token.VALUE_STRING) {
return new Date(TimeUtils.dateStringToEpoch(p.text()));
}
throw new IllegalArgumentException("unexpected token [" + p.currentToken() + "] for ["
+ Result.TIMESTAMP.getPreferredName() + "]");
}, Result.TIMESTAMP, ValueType.VALUE);
PARSER.declareLong(ConstructingObjectParser.constructorArg(), BUCKET_SPAN);
PARSER.declareString((modelForecast, s) -> {}, Result.RESULT_TYPE);
PARSER.declareString(Forecast::setPartitionFieldName, PARTITION_FIELD_NAME);
PARSER.declareString(Forecast::setPartitionFieldValue, PARTITION_FIELD_VALUE);
PARSER.declareString(Forecast::setOverFieldName, OVER_FIELD_NAME);
PARSER.declareString(Forecast::setOverFieldValue, OVER_FIELD_VALUE);
PARSER.declareString(Forecast::setByFieldName, BY_FIELD_NAME);
PARSER.declareString(Forecast::setByFieldValue, BY_FIELD_VALUE);
PARSER.declareString(Forecast::setModelFeature, MODEL_FEATURE);
PARSER.declareDouble(Forecast::setForecastLower, FORECAST_LOWER);
PARSER.declareDouble(Forecast::setForecastUpper, FORECAST_UPPER);
PARSER.declareDouble(Forecast::setForecastPrediction, FORECAST_PREDICTION);
}
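The `Result.TIMESTAMP` field above accepts either epoch milliseconds (`VALUE_NUMBER`) or a date string (`VALUE_STRING`). A minimal standalone sketch of that dual-format branch, using `java.time.Instant` in place of the x-pack-internal `TimeUtils.dateStringToEpoch` (so this is an approximation, not the real parser):

```java
import java.time.Instant;
import java.util.Date;

class TimestampParsing {
    // Mirrors the PARSER's timestamp branch: a numeric value is epoch millis,
    // a string is parsed as a date. Instant.parse (ISO-8601 only) stands in
    // for TimeUtils.dateStringToEpoch to keep the sketch self-contained.
    static Date parseTimestamp(Object value) {
        if (value instanceof Long) {
            return new Date((Long) value);
        } else if (value instanceof String) {
            return new Date(Instant.parse((String) value).toEpochMilli());
        }
        throw new IllegalArgumentException("unexpected value [" + value + "] for [timestamp]");
    }
}
```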
private final String jobId;
private final long forecastId;
private final Date timestamp;
private final long bucketSpan;
private String partitionFieldName;
private String partitionFieldValue;
private String overFieldName;
private String overFieldValue;
private String byFieldName;
private String byFieldValue;
private String modelFeature;
private double forecastLower;
private double forecastUpper;
private double forecastPrediction;
public Forecast(String jobId, long forecastId, Date timestamp, long bucketSpan) {
this.jobId = jobId;
this.forecastId = forecastId;
this.timestamp = timestamp;
this.bucketSpan = bucketSpan;
}
public Forecast(StreamInput in) throws IOException {
jobId = in.readString();
forecastId = in.readLong();
timestamp = new Date(in.readLong());
partitionFieldName = in.readOptionalString();
partitionFieldValue = in.readOptionalString();
overFieldName = in.readOptionalString();
overFieldValue = in.readOptionalString();
byFieldName = in.readOptionalString();
byFieldValue = in.readOptionalString();
modelFeature = in.readOptionalString();
forecastLower = in.readDouble();
forecastUpper = in.readDouble();
forecastPrediction = in.readDouble();
bucketSpan = in.readLong();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(jobId);
out.writeLong(forecastId);
out.writeLong(timestamp.getTime());
out.writeOptionalString(partitionFieldName);
out.writeOptionalString(partitionFieldValue);
out.writeOptionalString(overFieldName);
out.writeOptionalString(overFieldValue);
out.writeOptionalString(byFieldName);
out.writeOptionalString(byFieldValue);
out.writeOptionalString(modelFeature);
out.writeDouble(forecastLower);
out.writeDouble(forecastUpper);
out.writeDouble(forecastPrediction);
out.writeLong(bucketSpan);
}
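`writeTo` and the `StreamInput` constructor must emit and consume fields in exactly the same order, with optional strings carrying a presence marker. A reduced, self-contained sketch of that symmetry using plain `java.io` streams (`StreamOutput`/`StreamInput` are Elasticsearch abstractions, so this only models the round-trip idea):

```java
import java.io.*;

class OptionalStringWire {
    // Models writeOptionalString: a boolean presence flag, then the value.
    static void writeOptionalString(DataOutputStream out, String s) throws IOException {
        out.writeBoolean(s != null);
        if (s != null) {
            out.writeUTF(s);
        }
    }

    static String readOptionalString(DataInputStream in) throws IOException {
        return in.readBoolean() ? in.readUTF() : null;
    }

    // Round-trips a value through a byte buffer, as a serialization test would.
    static String roundTrip(String s) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            writeOptionalString(new DataOutputStream(bytes), s);
            return readOptionalString(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Reordering a single field on one side of such a pair silently corrupts every later field, which is why the read and write bodies are kept textually parallel.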
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(Job.ID.getPreferredName(), jobId);
builder.field(FORECAST_ID.getPreferredName(), forecastId);
builder.field(Result.RESULT_TYPE.getPreferredName(), RESULT_TYPE_VALUE);
builder.field(BUCKET_SPAN.getPreferredName(), bucketSpan);
if (timestamp != null) {
builder.dateField(Result.TIMESTAMP.getPreferredName(),
Result.TIMESTAMP.getPreferredName() + "_string", timestamp.getTime());
}
if (partitionFieldName != null) {
builder.field(PARTITION_FIELD_NAME.getPreferredName(), partitionFieldName);
}
if (partitionFieldValue != null) {
builder.field(PARTITION_FIELD_VALUE.getPreferredName(), partitionFieldValue);
}
if (overFieldName != null) {
builder.field(OVER_FIELD_NAME.getPreferredName(), overFieldName);
}
if (overFieldValue != null) {
builder.field(OVER_FIELD_VALUE.getPreferredName(), overFieldValue);
}
if (byFieldName != null) {
builder.field(BY_FIELD_NAME.getPreferredName(), byFieldName);
}
if (byFieldValue != null) {
builder.field(BY_FIELD_VALUE.getPreferredName(), byFieldValue);
}
if (modelFeature != null) {
builder.field(MODEL_FEATURE.getPreferredName(), modelFeature);
}
builder.field(FORECAST_LOWER.getPreferredName(), forecastLower);
builder.field(FORECAST_UPPER.getPreferredName(), forecastUpper);
builder.field(FORECAST_PREDICTION.getPreferredName(), forecastPrediction);
builder.endObject();
return builder;
}
public String getJobId() {
return jobId;
}
public long getForecastId() {
return forecastId;
}
public String getId() {
int valuesHash = Objects.hash(byFieldValue, overFieldValue, partitionFieldValue);
int length = (byFieldValue == null ? 0 : byFieldValue.length()) +
(overFieldValue == null ? 0 : overFieldValue.length()) +
(partitionFieldValue == null ? 0 : partitionFieldValue.length());
return jobId + "_model_forecast_" + forecastId + "_" + timestamp.getTime() + "_" + bucketSpan + "_"
+ (modelFeature == null ? "" : modelFeature) + "_" + valuesHash + "_" + length;
}
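`getId()` derives a deterministic document ID from the forecast coordinates plus a hash and the combined length of the by/over/partition values (the hash alone could collide, so the total length is appended as a cheap discriminator). A standalone sketch of the scheme:

```java
import java.util.Objects;

class ForecastIdScheme {
    // Mirrors Forecast#getId for one (byValue, overValue, partitionValue) triple.
    static String documentId(String jobId, long forecastId, long timestampMillis,
                             long bucketSpan, String modelFeature,
                             String byValue, String overValue, String partitionValue) {
        int valuesHash = Objects.hash(byValue, overValue, partitionValue);
        int length = (byValue == null ? 0 : byValue.length())
                + (overValue == null ? 0 : overValue.length())
                + (partitionValue == null ? 0 : partitionValue.length());
        return jobId + "_model_forecast_" + forecastId + "_" + timestampMillis + "_" + bucketSpan + "_"
                + (modelFeature == null ? "" : modelFeature) + "_" + valuesHash + "_" + length;
    }
}
```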
public Date getTimestamp() {
return timestamp;
}
public long getBucketSpan() {
return bucketSpan;
}
public String getPartitionFieldName() {
return partitionFieldName;
}
public void setPartitionFieldName(String partitionFieldName) {
this.partitionFieldName = partitionFieldName;
}
public String getPartitionFieldValue() {
return partitionFieldValue;
}
public void setPartitionFieldValue(String partitionFieldValue) {
this.partitionFieldValue = partitionFieldValue;
}
public String getOverFieldName() {
return overFieldName;
}
public void setOverFieldName(String overFieldName) {
this.overFieldName = overFieldName;
}
public String getOverFieldValue() {
return overFieldValue;
}
public void setOverFieldValue(String overFieldValue) {
this.overFieldValue = overFieldValue;
}
public String getByFieldName() {
return byFieldName;
}
public void setByFieldName(String byFieldName) {
this.byFieldName = byFieldName;
}
public String getByFieldValue() {
return byFieldValue;
}
public void setByFieldValue(String byFieldValue) {
this.byFieldValue = byFieldValue;
}
public String getModelFeature() {
return modelFeature;
}
public void setModelFeature(String modelFeature) {
this.modelFeature = modelFeature;
}
public double getForecastLower() {
return forecastLower;
}
public void setForecastLower(double forecastLower) {
this.forecastLower = forecastLower;
}
public double getForecastUpper() {
return forecastUpper;
}
public void setForecastUpper(double forecastUpper) {
this.forecastUpper = forecastUpper;
}
public double getForecastPrediction() {
return forecastPrediction;
}
public void setForecastPrediction(double forecastPrediction) {
this.forecastPrediction = forecastPrediction;
}
@Override
public boolean equals(Object other) {
if (this == other) {
return true;
}
if (other instanceof Forecast == false) {
return false;
}
Forecast that = (Forecast) other;
return Objects.equals(this.jobId, that.jobId) &&
forecastId == that.forecastId &&
Objects.equals(this.timestamp, that.timestamp) &&
Objects.equals(this.partitionFieldValue, that.partitionFieldValue) &&
Objects.equals(this.partitionFieldName, that.partitionFieldName) &&
Objects.equals(this.overFieldValue, that.overFieldValue) &&
Objects.equals(this.overFieldName, that.overFieldName) &&
Objects.equals(this.byFieldValue, that.byFieldValue) &&
Objects.equals(this.byFieldName, that.byFieldName) &&
Objects.equals(this.modelFeature, that.modelFeature) &&
this.forecastLower == that.forecastLower &&
this.forecastUpper == that.forecastUpper &&
this.forecastPrediction == that.forecastPrediction &&
this.bucketSpan == that.bucketSpan;
}
@Override
public int hashCode() {
return Objects.hash(jobId, forecastId, timestamp, partitionFieldName, partitionFieldValue,
overFieldName, overFieldValue, byFieldName, byFieldValue,
modelFeature, forecastLower, forecastUpper, forecastPrediction, bucketSpan);
}
}

View File

@ -0,0 +1,114 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.results;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.ml.job.config.Job;
import java.io.IOException;
import java.util.Objects;
/**
* Model ForecastStats POJO.
*
* Note that forecast stats are sent from the autodetect process but are not
* indexed.
*/
public class ForecastStats implements ToXContentObject, Writeable {
/**
* Result type
*/
public static final String RESULT_TYPE_VALUE = "model_forecast_stats";
public static final ParseField RESULTS_FIELD = new ParseField(RESULT_TYPE_VALUE);
public static final ParseField FORECAST_ID = new ParseField("forecast_id");
public static final ParseField RECORD_COUNT = new ParseField("forecast_record_count");
public static final ConstructingObjectParser<ForecastStats, Void> PARSER =
new ConstructingObjectParser<>(RESULT_TYPE_VALUE, a -> new ForecastStats((String) a[0], (long) a[1]));
static {
PARSER.declareString(ConstructingObjectParser.constructorArg(), Job.ID);
PARSER.declareLong(ConstructingObjectParser.constructorArg(), FORECAST_ID);
PARSER.declareString((modelForecastStats, s) -> {}, Result.RESULT_TYPE);
PARSER.declareLong(ForecastStats::setRecordCount, RECORD_COUNT);
}
private final String jobId;
private final long forecastId;
private long recordCount;
public ForecastStats(String jobId, long forecastId) {
this.jobId = jobId;
this.forecastId = forecastId;
}
public ForecastStats(StreamInput in) throws IOException {
jobId = in.readString();
forecastId = in.readLong();
recordCount = in.readLong();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(jobId);
out.writeLong(forecastId);
out.writeLong(recordCount);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(Job.ID.getPreferredName(), jobId);
builder.field(Result.RESULT_TYPE.getPreferredName(), RESULT_TYPE_VALUE);
builder.field(FORECAST_ID.getPreferredName(), forecastId);
builder.field(RECORD_COUNT.getPreferredName(), recordCount);
builder.endObject();
return builder;
}
public String getJobId() {
return jobId;
}
public String getId() {
return jobId + "_model_forecast_stats";
}
public void setRecordCount(long recordCount) {
this.recordCount = recordCount;
}
public long getRecordCount() {
return recordCount;
}
@Override
public boolean equals(Object other) {
if (this == other) {
return true;
}
if (other instanceof ForecastStats == false) {
return false;
}
ForecastStats that = (ForecastStats) other;
return Objects.equals(this.jobId, that.jobId) &&
this.forecastId == that.forecastId &&
this.recordCount == that.recordCount;
}
@Override
public int hashCode() {
return Objects.hash(jobId, forecastId, recordCount);
}
}

View File

@ -127,6 +127,10 @@ public final class ReservedFieldNames {
ModelPlot.MODEL_UPPER.getPreferredName(), ModelPlot.MODEL_MEDIAN.getPreferredName(),
ModelPlot.ACTUAL.getPreferredName(),
Forecast.FORECAST_LOWER.getPreferredName(), Forecast.FORECAST_UPPER.getPreferredName(),
Forecast.FORECAST_PREDICTION.getPreferredName(),
Forecast.FORECAST_ID.getPreferredName(),
ModelSizeStats.MODEL_BYTES_FIELD.getPreferredName(),
ModelSizeStats.TOTAL_BY_FIELD_COUNT_FIELD.getPreferredName(),
ModelSizeStats.TOTAL_OVER_FIELD_COUNT_FIELD.getPreferredName(),

View File

@ -0,0 +1,50 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.rest.job;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.action.RestToXContentListener;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.action.ForecastJobAction;
import org.elasticsearch.xpack.ml.job.config.Job;
import java.io.IOException;
public class RestForecastJobAction extends BaseRestHandler {
public RestForecastJobAction(Settings settings, RestController controller) {
super(settings);
controller.registerHandler(RestRequest.Method.POST,
MachineLearning.BASE_PATH + "anomaly_detectors/{" + Job.ID.getPreferredName() + "}/_forecast", this);
}
@Override
public String getName() {
return "xpack_ml_forecast_job_action";
}
@Override
protected RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException {
String jobId = restRequest.param(Job.ID.getPreferredName());
final ForecastJobAction.Request request;
if (restRequest.hasContentOrSourceParam()) {
XContentParser parser = restRequest.contentOrSourceParamParser();
request = ForecastJobAction.Request.parseRequest(jobId, parser);
} else {
request = new ForecastJobAction.Request(restRequest.param(Job.ID.getPreferredName()));
if (restRequest.hasParam(ForecastJobAction.Request.END_TIME.getPreferredName())) {
request.setEndTime(restRequest.param(ForecastJobAction.Request.END_TIME.getPreferredName()));
}
}
return channel -> client.execute(ForecastJobAction.INSTANCE, request, new RestToXContentListener<>(channel));
}
}

View File

@ -27,6 +27,7 @@ import org.elasticsearch.search.SearchHit;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.authc.Authentication;
import org.elasticsearch.xpack.security.user.User;
import org.elasticsearch.xpack.security.user.XPackUser;
import java.io.IOException;
@ -48,15 +49,21 @@ public class InternalClient extends FilterClient {
private final String nodeName;
private final boolean securityEnabled;
private final User user;
/**
* Constructs an InternalClient.
* If security is enabled, the client is secure; otherwise it is a passthrough.
*/
public InternalClient(Settings settings, ThreadPool threadPool, Client in) {
this(settings, threadPool, in, XPackUser.INSTANCE);
}
InternalClient(Settings settings, ThreadPool threadPool, Client in, User user) {
super(settings, threadPool, in);
this.nodeName = Node.NODE_NAME_SETTING.get(settings);
this.securityEnabled = XPackSettings.SECURITY_ENABLED.get(settings);
this.user = user;
}
@Override
@ -80,7 +87,7 @@ public class InternalClient extends FilterClient {
protected void processContext(ThreadContext threadContext) {
try {
Authentication authentication = new Authentication(XPackUser.INSTANCE,
Authentication authentication = new Authentication(user,
new Authentication.RealmRef("__attach", "__attach", nodeName), null);
authentication.writeToContext(threadContext);
} catch (IOException ioe) {

View File

@ -0,0 +1,23 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.security.user.XPackSecurityUser;
/**
* A special filter client for internal usage by security to modify the security index.
*
* The {@link XPackSecurityUser} user is added to the execution context before each action is executed.
*/
public class InternalSecurityClient extends InternalClient {
public InternalSecurityClient(Settings settings, ThreadPool threadPool, Client in) {
super(settings, threadPool, in, XPackSecurityUser.INSTANCE);
}
}

View File

@ -14,6 +14,7 @@ import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.action.support.DestructiveOperations;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.NamedDiff;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
@ -298,12 +299,13 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
return modules;
}
public Collection<Object> createComponents(InternalClient client, ThreadPool threadPool, ClusterService clusterService,
public Collection<Object> createComponents(Client nodeClient, ThreadPool threadPool, ClusterService clusterService,
ResourceWatcherService resourceWatcherService,
List<XPackExtension> extensions) throws Exception {
if (enabled == false) {
return Collections.emptyList();
}
final InternalSecurityClient client = new InternalSecurityClient(settings, threadPool, nodeClient);
threadContext.set(threadPool.getThreadContext());
List<Object> components = new ArrayList<>();
securityContext.set(new SecurityContext(settings, threadPool.getThreadContext()));

View File

@ -57,7 +57,7 @@ public class SecurityLifecycleService extends AbstractComponent implements Clust
private final IndexLifecycleManager securityIndex;
public SecurityLifecycleService(Settings settings, ClusterService clusterService,
ThreadPool threadPool, InternalClient client,
ThreadPool threadPool, InternalSecurityClient client,
@Nullable IndexAuditTrail indexAuditTrail) {
super(settings);
this.settings = settings;

View File

@ -48,6 +48,7 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.audit.AuditLevel;
import org.elasticsearch.xpack.security.audit.AuditTrail;
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
@ -177,7 +178,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail, Cl
return NAME;
}
public IndexAuditTrail(Settings settings, InternalClient client, ThreadPool threadPool, ClusterService clusterService) {
public IndexAuditTrail(Settings settings, InternalSecurityClient client, ThreadPool threadPool, ClusterService clusterService) {
super(settings);
this.threadPool = threadPool;
this.clusterService = clusterService;

View File

@ -18,6 +18,7 @@ import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.threadpool.ThreadPool.Names;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import java.time.Instant;
@ -30,12 +31,12 @@ import static org.elasticsearch.action.support.TransportActions.isShardNotAvaila
*/
final class ExpiredTokenRemover extends AbstractRunnable {
private final InternalClient client;
private final InternalSecurityClient client;
private final AtomicBoolean inProgress = new AtomicBoolean(false);
private final Logger logger;
private final TimeValue timeout;
ExpiredTokenRemover(Settings settings, InternalClient internalClient) {
ExpiredTokenRemover(Settings settings, InternalSecurityClient internalClient) {
this.client = internalClient;
this.logger = Loggers.getLogger(getClass(), settings);
this.timeout = TokenService.DELETE_TIMEOUT.get(settings);

View File

@ -50,6 +50,7 @@ import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import javax.crypto.Cipher;
@ -132,7 +133,7 @@ public final class TokenService extends AbstractComponent {
private final Clock clock;
private final TimeValue expirationDelay;
private final TimeValue deleteInterval;
private final InternalClient internalClient;
private final InternalSecurityClient internalClient;
private final SecurityLifecycleService lifecycleService;
private final ExpiredTokenRemover expiredTokenRemover;
private final boolean enabled;
@ -148,7 +149,7 @@ public final class TokenService extends AbstractComponent {
* @param clock the clock that will be used for comparing timestamps
* @param internalClient the client to use when checking for revocations
*/
public TokenService(Settings settings, Clock clock, InternalClient internalClient,
public TokenService(Settings settings, Clock clock, InternalSecurityClient internalClient,
SecurityLifecycleService lifecycleService, ClusterService clusterService) throws GeneralSecurityException {
super(settings);
byte[] saltArr = new byte[SALT_BYTES];

View File

@ -36,6 +36,7 @@ import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.action.realm.ClearRealmCacheRequest;
import org.elasticsearch.xpack.security.action.realm.ClearRealmCacheResponse;
@ -73,12 +74,12 @@ public class NativeUsersStore extends AbstractComponent {
private final Hasher hasher = Hasher.BCRYPT;
private final InternalClient client;
private final InternalSecurityClient client;
private final boolean isTribeNode;
private volatile SecurityLifecycleService securityLifecycleService;
public NativeUsersStore(Settings settings, InternalClient client, SecurityLifecycleService securityLifecycleService) {
public NativeUsersStore(Settings settings, InternalSecurityClient client, SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;
this.isTribeNode = XPackPlugin.isTribeNode(settings);

View File

@ -38,6 +38,7 @@ import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.action.rolemapping.DeleteRoleMappingRequest;
import org.elasticsearch.xpack.security.action.rolemapping.PutRoleMappingRequest;
@ -70,12 +71,12 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
private static final String SECURITY_GENERIC_TYPE = "doc";
private final InternalClient client;
private final InternalSecurityClient client;
private final boolean isTribeNode;
private final SecurityLifecycleService securityLifecycleService;
private final List<String> realmsToRefresh = new CopyOnWriteArrayList<>();
public NativeRoleMappingStore(Settings settings, InternalClient client, SecurityLifecycleService securityLifecycleService) {
public NativeRoleMappingStore(Settings settings, InternalSecurityClient client, SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;
this.isTribeNode = XPackPlugin.isTribeNode(settings);

View File

@ -68,6 +68,7 @@ import org.elasticsearch.xpack.security.support.Automatons;
import org.elasticsearch.xpack.security.user.AnonymousUser;
import org.elasticsearch.xpack.security.user.SystemUser;
import org.elasticsearch.xpack.security.user.User;
import org.elasticsearch.xpack.security.user.XPackSecurityUser;
import org.elasticsearch.xpack.security.user.XPackUser;
import static org.elasticsearch.xpack.security.Security.setting;
@ -290,7 +291,6 @@ public class AuthorizationService extends AbstractComponent {
throw denial(authentication, action, request);
} else if (indicesAccessControl.getIndexPermissions(SecurityLifecycleService.SECURITY_INDEX_NAME) != null
&& indicesAccessControl.getIndexPermissions(SecurityLifecycleService.SECURITY_INDEX_NAME).isGranted()
&& XPackUser.is(authentication.getUser()) == false
&& MONITOR_INDEX_PREDICATE.test(action) == false
&& isSuperuser(authentication.getUser()) == false) {
// only the XPackUser is allowed to work with this index, but we should allow indices monitoring actions through for debugging
@ -392,7 +392,11 @@ public class AuthorizationService extends AbstractComponent {
" roles");
}
if (XPackUser.is(user)) {
assert XPackUser.INSTANCE.roles().length == 1 && ReservedRolesStore.SUPERUSER_ROLE.name().equals(XPackUser.INSTANCE.roles()[0]);
assert XPackUser.INSTANCE.roles().length == 1;
roleActionListener.onResponse(XPackUser.ROLE);
return;
}
if (XPackSecurityUser.is(user)) {
roleActionListener.onResponse(ReservedRolesStore.SUPERUSER_ROLE);
return;
}

View File

@ -38,6 +38,7 @@ import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.action.role.ClearRolesCacheRequest;
import org.elasticsearch.xpack.security.action.role.ClearRolesCacheResponse;
@ -79,14 +80,14 @@ public class NativeRolesStore extends AbstractComponent {
TimeValue.timeValueMinutes(20), Property.NodeScope, Property.Deprecated);
private static final String ROLE_DOC_TYPE = "doc";
private final InternalClient client;
private final InternalSecurityClient client;
private final XPackLicenseState licenseState;
private final boolean isTribeNode;
private SecurityClient securityClient;
private final SecurityLifecycleService securityLifecycleService;
public NativeRolesStore(Settings settings, InternalClient client, XPackLicenseState licenseState,
public NativeRolesStore(Settings settings, InternalSecurityClient client, XPackLicenseState licenseState,
SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;

View File

@ -12,6 +12,7 @@ import org.elasticsearch.xpack.security.authz.permission.Role;
import org.elasticsearch.xpack.security.support.MetadataUtils;
import org.elasticsearch.xpack.security.user.KibanaUser;
import org.elasticsearch.xpack.security.user.SystemUser;
import org.elasticsearch.xpack.security.user.XPackUser;
import org.elasticsearch.xpack.watcher.execution.TriggeredWatchStore;
import org.elasticsearch.xpack.watcher.history.HistoryStore;
import org.elasticsearch.xpack.watcher.watch.Watch;
@ -126,7 +127,7 @@ public class ReservedRolesStore {
}
public static boolean isReserved(String role) {
return RESERVED_ROLES.containsKey(role) || SystemUser.ROLE_NAME.equals(role);
return RESERVED_ROLES.containsKey(role) || SystemUser.ROLE_NAME.equals(role) || XPackUser.ROLE_NAME.equals(role);
}
}

View File

@ -38,6 +38,7 @@ import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.template.TemplateUtils;
import org.elasticsearch.xpack.upgrade.IndexUpgradeCheck;
@ -58,7 +59,7 @@ public class IndexLifecycleManager extends AbstractComponent {
private final String indexName;
private final String templateName;
private final InternalClient client;
private final InternalSecurityClient client;
private final List<BiConsumer<ClusterIndexHealth, ClusterIndexHealth>> indexHealthChangeListeners = new CopyOnWriteArrayList<>();
@ -70,7 +71,7 @@ public class IndexLifecycleManager extends AbstractComponent {
private volatile boolean mappingIsUpToDate;
private volatile Version mappingVersion;
public IndexLifecycleManager(Settings settings, InternalClient client, String indexName, String templateName) {
public IndexLifecycleManager(Settings settings, InternalSecurityClient client, String indexName, String templateName) {
super(settings);
this.client = client;
this.indexName = indexName;

View File

@ -184,6 +184,8 @@ public class User implements ToXContentObject {
return SystemUser.INSTANCE;
} else if (XPackUser.is(username)) {
return XPackUser.INSTANCE;
} else if (XPackSecurityUser.is(username)) {
return XPackSecurityUser.INSTANCE;
}
throw new IllegalStateException("user [" + username + "] is not an internal user");
}
@ -214,6 +216,9 @@ public class User implements ToXContentObject {
} else if (XPackUser.is(user)) {
output.writeBoolean(true);
output.writeString(XPackUser.NAME);
} else if (XPackSecurityUser.is(user)) {
output.writeBoolean(true);
output.writeString(XPackSecurityUser.NAME);
} else {
if (user.authenticatedUser == null) {
// no backcompat necessary, since there is no inner user

View File

@ -0,0 +1,38 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security.user;
/**
* Internal user that manages x-pack security. Has all cluster/indices permissions.
*/
public class XPackSecurityUser extends User {
public static final String NAME = "_xpack_security";
public static final XPackSecurityUser INSTANCE = new XPackSecurityUser();
private static final String ROLE_NAME = "superuser";
private XPackSecurityUser() {
super(NAME, ROLE_NAME);
}
@Override
public boolean equals(Object o) {
return INSTANCE == o;
}
@Override
public int hashCode() {
return System.identityHashCode(this);
}
public static boolean is(User user) {
return INSTANCE.equals(user);
}
public static boolean is(String principal) {
return NAME.equals(principal);
}
}
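The `XPackSecurityUser` class added above uses identity-based equality: `equals` compares against the singleton `INSTANCE` with `==`, so only the one canonical object ever matches, even if another `User` carries the same principal name. A minimal stand-alone sketch of that pattern (simplified stand-ins, not the real x-pack classes; the `_internal` name is hypothetical):

```java
// Demo of the identity-equality singleton pattern used by internal users.
class InternalUserDemo {
    public static void main(String[] args) {
        System.out.println(InternalUser.is(InternalUser.INSTANCE));       // true
        System.out.println(InternalUser.is(new User(InternalUser.NAME))); // false: same name, different object
        System.out.println(InternalUser.is("_internal"));                 // true: principal-name check
    }
}

// Simplified stand-in for the real User base class.
class User {
    private final String principal;
    User(String principal) { this.principal = principal; }
}

// Simplified stand-in for an internal singleton user like XPackSecurityUser.
final class InternalUser extends User {
    static final String NAME = "_internal"; // hypothetical principal name
    static final InternalUser INSTANCE = new InternalUser();
    private InternalUser() { super(NAME); }

    @Override
    public boolean equals(Object o) { return INSTANCE == o; } // identity, not field comparison

    @Override
    public int hashCode() { return System.identityHashCode(this); }

    static boolean is(User user) { return INSTANCE.equals(user); }
    static boolean is(String principal) { return NAME.equals(principal); }
}
```

Because the constructor is private and equality is identity-based, a deserialized or hand-built user with the same name can never impersonate the internal user object.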


@ -5,13 +5,22 @@
*/
package org.elasticsearch.xpack.security.user;
import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.security.authz.permission.Role;
import org.elasticsearch.xpack.security.support.MetadataUtils;
/**
* XPack internal user that manages xpack. Has all cluster/indices permissions for x-pack to operate.
* XPack internal user that manages xpack. Has all cluster/indices permissions for x-pack to operate excluding security permissions.
*/
public class XPackUser extends User {
public static final String NAME = "_xpack";
private static final String ROLE_NAME = "superuser";
public static final String ROLE_NAME = NAME;
public static final Role ROLE = Role.builder(new RoleDescriptor(ROLE_NAME, new String[] { "all" },
new RoleDescriptor.IndicesPrivileges[] {
RoleDescriptor.IndicesPrivileges.builder().indices("/@&~(\\.security*)/").privileges("all").build()},
new String[] { "*" },
MetadataUtils.DEFAULT_RESERVED_METADATA), null).build();
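The indices pattern `"/@&~(\\.security*)/"` in the role above is a Lucene regular expression (the `/.../` wrapping marks it as such): `@` matches any string, `&` is intersection, and `~(...)` is complement, so the role grants access to every index except those matching `\.security*`. A minimal sketch of that semantics in plain `java.util.regex`, which lacks those operators, so the complement is expressed as negation instead (all names here are illustrative, not the real x-pack API):

```java
import java.util.regex.Pattern;

// Illustrative stand-in for "all indices except the security index" semantics.
class IndexPatternDemo {
    // Same excluded pattern as in the Lucene expression: \.security*
    static final Pattern EXCLUDED = Pattern.compile("\\.security*");

    static boolean allowed(String indexName) {
        // "/@&~(\.security*)/": any string, minus names matching the excluded pattern.
        return !EXCLUDED.matcher(indexName).matches();
    }

    public static void main(String[] args) {
        System.out.println(allowed("logs-2017")); // true
        System.out.println(allowed(".security")); // false
    }
}
```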
public static final XPackUser INSTANCE = new XPackUser();
private XPackUser() {


@ -8,6 +8,7 @@ package org.elasticsearch.xpack.upgrade;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.node.DiscoveryNodes;
@ -24,6 +25,7 @@ import org.elasticsearch.script.ScriptService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.upgrade.actions.IndexUpgradeAction;
import org.elasticsearch.xpack.upgrade.actions.IndexUpgradeInfoAction;
import org.elasticsearch.xpack.upgrade.rest.RestIndexUpgradeAction;
@ -53,12 +55,13 @@ public class Upgrade implements ActionPlugin {
this.upgradeCheckFactories = new ArrayList<>();
}
public Collection<Object> createComponents(InternalClient internalClient, ClusterService clusterService, ThreadPool threadPool,
public Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool,
ResourceWatcherService resourceWatcherService, ScriptService scriptService,
NamedXContentRegistry xContentRegistry) {
final InternalSecurityClient internalSecurityClient = new InternalSecurityClient(settings, threadPool, client);
List<IndexUpgradeCheck> upgradeChecks = new ArrayList<>(upgradeCheckFactories.size());
for (BiFunction<InternalClient, ClusterService, IndexUpgradeCheck> checkFactory : upgradeCheckFactories) {
upgradeChecks.add(checkFactory.apply(internalClient, clusterService));
upgradeChecks.add(checkFactory.apply(internalSecurityClient, clusterService));
}
return Collections.singletonList(new IndexUpgradeService(settings, Collections.unmodifiableList(upgradeChecks)));
}


@ -18,6 +18,7 @@ import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
@ -61,7 +62,6 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFa
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
@ -73,8 +73,7 @@ public class DocumentLevelSecurityTests extends SecurityIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ParentJoinPlugin.class,
InternalSettingsPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, ParentJoinPlugin.class, InternalSettingsPlugin.class);
}
@Override


@ -18,6 +18,7 @@ import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
@ -72,7 +73,7 @@ public class FieldLevelSecurityTests extends SecurityIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ParentJoinPlugin.class,
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, ParentJoinPlugin.class,
InternalSettingsPlugin.class);
}


@ -5,10 +5,12 @@
*/
package org.elasticsearch.license;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CountDownLatch;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.MetaData;
@ -37,7 +39,7 @@ public abstract class AbstractLicensesIntegrationTestCase extends ESIntegTestCas
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Collections.<Class<? extends Plugin>>singletonList(XPackPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class);
}
@Override


@ -5,6 +5,7 @@
*/
package org.elasticsearch.license;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
@ -36,7 +37,7 @@ public class LicenseServiceClusterNotRecoveredTests extends AbstractLicensesInte
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, Netty4Plugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, Netty4Plugin.class);
}
@Override


@ -5,6 +5,7 @@
*/
package org.elasticsearch.license;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
@ -42,7 +43,7 @@ public class LicenseServiceClusterTests extends AbstractLicensesIntegrationTestC
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, Netty4Plugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class, Netty4Plugin.class);
}
@Override


@ -9,6 +9,7 @@ import org.elasticsearch.action.Action;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
@ -111,6 +112,7 @@ public abstract class TribeTransportTestCase extends ESIntegTestCase {
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(XPackPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}


@ -35,6 +35,7 @@ import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.security.client.SecurityClient;
import org.junit.AfterClass;
@ -436,6 +437,11 @@ public abstract class SecurityIntegTestCase extends ESIntegTestCase {
return internalCluster().getInstance(InternalClient.class);
}
protected InternalSecurityClient internalSecurityClient() {
Client client = client();
return new InternalSecurityClient(client.settings(), client.threadPool(), client);
}
protected SecurityClient securityClient() {
return securityClient(client());
}


@ -0,0 +1,35 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.action;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractStreamableXContentTestCase;
import org.elasticsearch.xpack.ml.action.ForecastJobAction.Request;
public class ForecastJobActionRequestTests extends AbstractStreamableXContentTestCase<Request> {
@Override
protected Request doParseInstance(XContentParser parser) {
return Request.parseRequest(null, parser);
}
@Override
protected boolean supportsUnknownFields() {
return false;
}
@Override
protected Request createTestInstance() {
Request request = new Request(randomAlphaOfLengthBetween(1, 20));
return request;
}
@Override
protected Request createBlankInstance() {
return new Request();
}
}


@ -0,0 +1,23 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.action;
import org.elasticsearch.test.AbstractStreamableTestCase;
import org.elasticsearch.xpack.ml.action.ForecastJobAction.Response;
public class ForecastJobActionResponseTests extends AbstractStreamableTestCase<Response> {
@Override
protected Response createTestInstance() {
return new Response(randomBoolean(), randomNonNegativeLong());
}
@Override
protected Response createBlankInstance() {
return new Response();
}
}


@ -357,47 +357,47 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
private List<AutodetectResult> results = new ArrayList<>();
ResultsBuilder addBucket(Bucket bucket) {
results.add(new AutodetectResult(Objects.requireNonNull(bucket), null, null, null, null, null, null, null, null));
results.add(new AutodetectResult(Objects.requireNonNull(bucket), null, null, null, null, null, null, null, null, null, null));
return this;
}
ResultsBuilder addRecords(List<AnomalyRecord> records) {
results.add(new AutodetectResult(null, records, null, null, null, null, null, null, null));
results.add(new AutodetectResult(null, records, null, null, null, null, null, null, null, null, null));
return this;
}
ResultsBuilder addInfluencers(List<Influencer> influencers) {
results.add(new AutodetectResult(null, null, influencers, null, null, null, null, null, null));
results.add(new AutodetectResult(null, null, influencers, null, null, null, null, null, null, null, null));
return this;
}
ResultsBuilder addCategoryDefinition(CategoryDefinition categoryDefinition) {
results.add(new AutodetectResult(null, null, null, null, null, null, null, categoryDefinition, null));
results.add(new AutodetectResult(null, null, null, null, null, null, null, null, null, categoryDefinition, null));
return this;
}
ResultsBuilder addmodelPlot(ModelPlot modelPlot) {
results.add(new AutodetectResult(null, null, null, null, null, null, modelPlot, null, null));
results.add(new AutodetectResult(null, null, null, null, null, null, modelPlot, null, null, null, null));
return this;
}
ResultsBuilder addModelSizeStats(ModelSizeStats modelSizeStats) {
results.add(new AutodetectResult(null, null, null, null, null, modelSizeStats, null, null, null));
results.add(new AutodetectResult(null, null, null, null, null, modelSizeStats, null, null, null, null, null));
return this;
}
ResultsBuilder addModelSnapshot(ModelSnapshot modelSnapshot) {
results.add(new AutodetectResult(null, null, null, null, modelSnapshot, null, null, null, null));
results.add(new AutodetectResult(null, null, null, null, modelSnapshot, null, null, null, null, null, null));
return this;
}
ResultsBuilder addQuantiles(Quantiles quantiles) {
results.add(new AutodetectResult(null, null, null, quantiles, null, null, null, null, null));
results.add(new AutodetectResult(null, null, null, quantiles, null, null, null, null, null, null, null));
return this;
}
ResultsBuilder addFlushAcknowledgement(FlushAcknowledgement flushAcknowledgement) {
results.add(new AutodetectResult(null, null, null, null, null, null, null, null, flushAcknowledgement));
results.add(new AutodetectResult(null, null, null, null, null, null, null, null, null, null, flushAcknowledgement));
return this;
}


@ -0,0 +1,44 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.process.autodetect.params;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.ml.job.messages.Messages;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.lessThanOrEqualTo;
public class ForecastParamsTests extends ESTestCase {
private static ParseField END = new ParseField("end");
public void testDefault_GivesTomorrowTimeInSeconds() {
long nowSecs = System.currentTimeMillis() / 1000;
nowSecs += 60 * 60 * 24;
ForecastParams params = ForecastParams.builder().build();
assertThat(params.getEndTime(), greaterThanOrEqualTo(nowSecs));
assertThat(params.getEndTime(), lessThanOrEqualTo(nowSecs +1));
}
public void test_UnparseableEndTimeThrows() {
ElasticsearchParseException e =
ESTestCase.expectThrows(ElasticsearchParseException.class, () -> ForecastParams.builder().endTime("bad", END).build());
assertEquals(Messages.getMessage(Messages.REST_INVALID_DATETIME_PARAMS, "end", "bad"), e.getMessage());
}
public void testFormats() {
assertEquals(10L, ForecastParams.builder().endTime("10000", END).build().getEndTime());
assertEquals(1462096800L, ForecastParams.builder().endTime("2016-05-01T10:00:00Z", END).build().getEndTime());
long nowSecs = System.currentTimeMillis() / 1000;
long end = ForecastParams.builder().endTime("now+2H", END).build().getEndTime();
assertThat(end, greaterThanOrEqualTo(nowSecs + 7200));
assertThat(end, lessThanOrEqualTo(nowSecs + 7200 +1));
}
}
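The `testFormats` test above exercises three end-time formats: a plain number treated as epoch milliseconds and truncated to seconds, an ISO-8601 instant, and an Elasticsearch date-math expression like `now+2H`. A hedged stand-alone sketch of those conversions using `java.time` (this is not the actual `ForecastParams` implementation, and the date-math branch handles only the `now+<n>H` shape used in the test):

```java
import java.time.Duration;
import java.time.Instant;

class EndTimeSketch {
    static long toEpochSeconds(String value) {
        try {
            // Plain number: epoch milliseconds, truncated to seconds.
            return Long.parseLong(value) / 1000;
        } catch (NumberFormatException ignored) {
            // Not a plain number; fall through to the other formats.
        }
        if (value.startsWith("now")) {
            // Tiny subset of Elasticsearch date math: only "now+<n>H" is handled here.
            long hours = Long.parseLong(value.substring(4, value.length() - 1));
            return Instant.now().plus(Duration.ofHours(hours)).getEpochSecond();
        }
        // ISO-8601 instant, e.g. "2016-05-01T10:00:00Z".
        return Instant.parse(value).getEpochSecond();
    }

    public static void main(String[] args) {
        System.out.println(toEpochSeconds("10000"));                // 10
        System.out.println(toEpochSeconds("2016-05-01T10:00:00Z")); // 1462096800
    }
}
```

The two printed values match the expectations asserted in `testFormats` above.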


@ -35,6 +35,8 @@ public class AutodetectResultTests extends AbstractSerializingTestCase<Autodetec
ModelSnapshot modelSnapshot;
ModelSizeStats.Builder modelSizeStats;
ModelPlot modelPlot;
Forecast forecast;
ForecastStats forecastStats;
CategoryDefinition categoryDefinition;
FlushAcknowledgement flushAcknowledgement;
String jobId = "foo";
@ -84,6 +86,16 @@ public class AutodetectResultTests extends AbstractSerializingTestCase<Autodetec
} else {
modelPlot = null;
}
if (randomBoolean()) {
forecast = new Forecast(jobId, randomNonNegativeLong(), new Date(randomLong()), randomNonNegativeLong());
} else {
forecast = null;
}
if (randomBoolean()) {
forecastStats = new ForecastStats(jobId, randomNonNegativeLong());
} else {
forecastStats = null;
}
if (randomBoolean()) {
categoryDefinition = new CategoryDefinition(jobId);
categoryDefinition.setCategoryId(randomLong());
@ -96,7 +108,8 @@ public class AutodetectResultTests extends AbstractSerializingTestCase<Autodetec
flushAcknowledgement = null;
}
return new AutodetectResult(bucket, records, influencers, quantiles, modelSnapshot,
modelSizeStats == null ? null : modelSizeStats.build(), modelPlot, categoryDefinition, flushAcknowledgement);
modelSizeStats == null ? null : modelSizeStats.build(), modelPlot, forecast, forecastStats, categoryDefinition,
flushAcknowledgement);
}
@Override


@ -0,0 +1,46 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.results;
import org.elasticsearch.common.io.stream.Writeable.Reader;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractSerializingTestCase;
import java.io.IOException;
public class ForecastStatsTests extends AbstractSerializingTestCase<ForecastStats> {
@Override
protected ForecastStats parseInstance(XContentParser parser) {
return ForecastStats.PARSER.apply(parser, null);
}
@Override
protected ForecastStats createTestInstance() {
return createTestInstance("ForecastStatsTest", randomNonNegativeLong());
}
public ForecastStats createTestInstance(String jobId, long forecastId) {
ForecastStats forecastStats = new ForecastStats(jobId, forecastId);
if (randomBoolean()) {
forecastStats.setRecordCount(randomLong());
}
return forecastStats;
}
@Override
protected Reader<ForecastStats> instanceReader() {
return ForecastStats::new;
}
@Override
protected ForecastStats doParseInstance(XContentParser parser) throws IOException {
return ForecastStats.PARSER.apply(parser, null);
}
}


@ -0,0 +1,68 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.results;
import org.elasticsearch.common.io.stream.Writeable.Reader;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractSerializingTestCase;
import java.io.IOException;
import java.util.Date;
public class ForecastTests extends AbstractSerializingTestCase<Forecast> {
@Override
protected Forecast parseInstance(XContentParser parser) {
return Forecast.PARSER.apply(parser, null);
}
@Override
protected Forecast createTestInstance() {
return createTestInstance("ForecastTest");
}
public Forecast createTestInstance(String jobId) {
Forecast forecast = new Forecast(jobId, randomNonNegativeLong(), new Date(randomLong()), randomNonNegativeLong());
if (randomBoolean()) {
forecast.setByFieldName(randomAlphaOfLengthBetween(1, 20));
}
if (randomBoolean()) {
forecast.setByFieldValue(randomAlphaOfLengthBetween(1, 20));
}
if (randomBoolean()) {
forecast.setPartitionFieldName(randomAlphaOfLengthBetween(1, 20));
}
if (randomBoolean()) {
forecast.setPartitionFieldValue(randomAlphaOfLengthBetween(1, 20));
}
if (randomBoolean()) {
forecast.setModelFeature(randomAlphaOfLengthBetween(1, 20));
}
if (randomBoolean()) {
forecast.setForecastLower(randomDouble());
}
if (randomBoolean()) {
forecast.setForecastUpper(randomDouble());
}
if (randomBoolean()) {
forecast.setForecastPrediction(randomDouble());
}
return forecast;
}
@Override
protected Reader<Forecast> instanceReader() {
return Forecast::new;
}
@Override
protected Forecast doParseInstance(XContentParser parser) throws IOException {
return Forecast.PARSER.apply(parser, null);
}
}


@ -12,6 +12,7 @@ import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.settings.Settings;
@ -93,7 +94,8 @@ public abstract class BaseMlIntegTestCase extends ESIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class);
return Arrays.asList(XPackPlugin.class, CommonAnalysisPlugin.class,
ReindexPlugin.class);
}
@Override


@ -5,8 +5,6 @@
*/
package org.elasticsearch.xpack.security;
import org.apache.http.HttpEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.Version;
import org.elasticsearch.client.Response;
import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;
@ -14,8 +12,10 @@ import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;
import org.elasticsearch.test.rest.yaml.ObjectPath;
import org.junit.Before;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_TEMPLATE_NAME;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
@ -37,34 +37,18 @@ public abstract class SecurityClusterClientYamlTestCase extends ESClientYamlSuit
}
public static void waitForSecurity() throws Exception {
String masterNode = null;
HttpEntity entity = client().performRequest("GET", "/_cat/nodes?h=id,master").getEntity();
String catNodesResponse = EntityUtils.toString(entity, StandardCharsets.UTF_8);
for (String line : catNodesResponse.split("\n")) {
int indexOfStar = line.indexOf('*'); // * in the node's output denotes it is master
if (indexOfStar != -1) {
masterNode = line.substring(0, indexOfStar).trim();
break;
}
}
assertNotNull(masterNode);
final String masterNodeId = masterNode;
assertBusy(() -> {
try {
Response nodeDetailsResponse = client().performRequest("GET", "/_nodes");
ObjectPath path = ObjectPath.createFromResponse(nodeDetailsResponse);
Map<String, Object> nodes = path.evaluate("nodes");
String masterVersion = null;
for (String key : nodes.keySet()) {
// get the ES version number master is on
if (key.startsWith(masterNodeId)) {
masterVersion = path.evaluate("nodes." + key + ".version");
break;
}
Response nodesResponse = client().performRequest("GET", "/_nodes");
ObjectPath nodesPath = ObjectPath.createFromResponse(nodesResponse);
Map<String, Object> nodes = nodesPath.evaluate("nodes");
Set<Version> nodeVersions = new HashSet<>();
for (String nodeId : nodes.keySet()) {
String nodeVersionPath = "nodes." + nodeId + ".version";
Version nodeVersion = Version.fromString(nodesPath.evaluate(nodeVersionPath));
nodeVersions.add(nodeVersion);
}
assertNotNull(masterVersion);
final String masterTemplateVersion = masterVersion;
Version highestNodeVersion = Collections.max(nodeVersions);
Response response = client().performRequest("GET", "/_cluster/state/metadata");
ObjectPath objectPath = ObjectPath.createFromResponse(response);
@ -74,10 +58,8 @@ public abstract class SecurityClusterClientYamlTestCase extends ESClientYamlSuit
assertThat(mappings.size(), greaterThanOrEqualTo(1));
for (String key : mappings.keySet()) {
String templatePath = mappingsPath + "." + key + "._meta.security-version";
String templateVersion = objectPath.evaluate(templatePath);
final Version mVersion = Version.fromString(masterTemplateVersion);
final Version tVersion = Version.fromString(templateVersion);
assertTrue(mVersion.onOrBefore(tVersion));
Version templateVersion = Version.fromString(objectPath.evaluate(templatePath));
assertEquals(highestNodeVersion, templateVersion);
}
} catch (Exception e) {
throw new AssertionError("failed to get cluster state", e);


@ -64,7 +64,7 @@ public class SecurityLifecycleServiceTests extends ESTestCase {
threadPool = new TestThreadPool("security template service tests");
transportClient = new MockTransportClient(Settings.EMPTY);
class IClient extends InternalClient {
class IClient extends InternalSecurityClient {
IClient(Client transportClient) {
super(Settings.EMPTY, null, transportClient);
}
@ -79,7 +79,7 @@ public class SecurityLifecycleServiceTests extends ESTestCase {
}
}
InternalClient client = new IClient(transportClient);
InternalSecurityClient client = new IClient(transportClient);
securityLifecycleService = new SecurityLifecycleService(Settings.EMPTY, clusterService,
threadPool, client, mock(IndexAuditTrail.class));
listeners = new CopyOnWriteArrayList<>();


@ -87,9 +87,9 @@ public class SecurityTests extends ESTestCase {
allowedSettings.addAll(ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
ClusterSettings clusterSettings = new ClusterSettings(settings, allowedSettings);
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
InternalClient client = new InternalClient(Settings.EMPTY, threadPool, mock(Client.class));
when(threadPool.relativeTimeInMillis()).thenReturn(1L);
return security.createComponents(client, threadPool, clusterService, mock(ResourceWatcherService.class), Arrays.asList(extensions));
return security.createComponents(mock(Client.class), threadPool, clusterService, mock(ResourceWatcherService.class),
Arrays.asList(extensions));
}
private <T> T findComponent(Class<T> type, Collection<Object> components) {


@ -140,7 +140,7 @@ public class AuditTrailTests extends SecurityIntegTestCase {
return eventsRef.get();
}
private Collection<Map<String, Object>> getAuditEvents() throws Exception {
final InternalClient client = internalClient();
final InternalClient client = internalSecurityClient();
DateTime now = new DateTime(DateTimeZone.UTC);
String indexName = IndexNameResolver.resolve(IndexAuditTrail.INDEX_NAME_PREFIX, now, IndexNameResolver.Rollover.DAILY);


@ -27,6 +27,7 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.MockTransportClient;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.audit.index.IndexAuditTrail.State;
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
@ -42,7 +43,7 @@ import static org.mockito.Mockito.when;
public class IndexAuditTrailMutedTests extends ESTestCase {
private InternalClient client;
private InternalSecurityClient client;
private TransportClient transportClient;
private ThreadPool threadPool;
private ClusterService clusterService;
@ -61,7 +62,7 @@ public class IndexAuditTrailMutedTests extends ESTestCase {
threadPool = new TestThreadPool("index audit trail tests");
transportClient = new MockTransportClient(Settings.EMPTY);
clientCalled = new AtomicBoolean(false);
class IClient extends InternalClient {
class IClient extends InternalSecurityClient {
IClient(Client transportClient){
super(Settings.EMPTY, threadPool, transportClient);
}


@ -303,7 +303,7 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
when(nodes.isLocalNodeElectedMaster()).thenReturn(true);
threadPool = new TestThreadPool("index audit trail tests");
enqueuedMessage = new SetOnce<>();
auditor = new IndexAuditTrail(settings, internalClient(), threadPool, clusterService) {
auditor = new IndexAuditTrail(settings, internalSecurityClient(), threadPool, clusterService) {
@Override
void enqueue(Message message, String type) {
enqueuedMessage.set(message);


@ -50,7 +50,7 @@ public class IndexAuditTrailUpdateMappingTests extends SecurityIntegTestCase {
when(localNode.getHostAddress()).thenReturn(buildNewFakeTransportAddress().toString());
ClusterService clusterService = mock(ClusterService.class);
when(clusterService.localNode()).thenReturn(localNode);
auditor = new IndexAuditTrail(settings, internalClient(), threadPool, clusterService);
auditor = new IndexAuditTrail(settings, internalSecurityClient(), threadPool, clusterService);
// before starting we add an event
auditor.authenticationFailed(new FakeRestRequest());


@ -48,6 +48,7 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.audit.AuditTrailService;
import org.elasticsearch.xpack.security.authc.Authentication.RealmRef;
@ -135,7 +136,7 @@ public class AuthenticationServiceTests extends ESTestCase {
threadPool = new ThreadPool(settings,
new FixedExecutorBuilder(settings, TokenService.THREAD_POOL_NAME, 1, 1000, "xpack.security.authc.token.thread_pool"));
threadContext = threadPool.getThreadContext();
InternalClient internalClient = new InternalClient(Settings.EMPTY, threadPool, client);
InternalSecurityClient internalClient = new InternalSecurityClient(Settings.EMPTY, threadPool, client);
lifecycleService = mock(SecurityLifecycleService.class);
ClusterService clusterService = new ClusterService(settings, new ClusterSettings(settings, ClusterSettings
.BUILT_IN_CLUSTER_SETTINGS), threadPool, Collections.emptyMap());


@ -56,7 +56,7 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
}
public void testTokenServiceBootstrapOnNodeJoin() throws Exception {
final Client client = internalClient();
final Client client = internalSecurityClient();
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse response = securityClient.prepareCreateToken()
.setGrantType("password")
@ -84,7 +84,7 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
public void testTokenServiceCanRotateKeys() throws Exception {
final Client client = internalClient();
final Client client = internalSecurityClient();
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse response = securityClient.prepareCreateToken()
.setGrantType("password")
@ -116,7 +116,7 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
}
public void testExpiredTokensDeletedAfterExpiration() throws Exception {
final Client client = internalClient();
final Client client = internalSecurityClient();
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse response = securityClient.prepareCreateToken()
.setGrantType("password")


@ -24,6 +24,7 @@ import org.elasticsearch.threadpool.FixedExecutorBuilder;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.Authentication.RealmRef;
import org.elasticsearch.xpack.security.authc.TokenService.BytesKey;
@ -49,7 +50,7 @@ import static org.mockito.Mockito.when;
public class TokenServiceTests extends ESTestCase {
private InternalClient internalClient;
private InternalSecurityClient internalClient;
private static ThreadPool threadPool;
private static final Settings settings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), "TokenServiceTests")
.put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();
@ -63,7 +64,7 @@ public class TokenServiceTests extends ESTestCase {
@Before
public void setupClient() throws GeneralSecurityException {
client = mock(Client.class);
internalClient = new InternalClient(settings, threadPool, client);
internalClient = new InternalSecurityClient(settings, threadPool, client);
lifecycleService = mock(SecurityLifecycleService.class);
when(lifecycleService.isSecurityIndexWriteable()).thenReturn(true);
doAnswer(invocationOnMock -> {

View File

@ -29,6 +29,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.AuthenticationResult;
import org.elasticsearch.xpack.security.authc.support.Hasher;
@ -54,12 +55,12 @@ public class NativeUsersStoreTests extends ESTestCase {
private static final String PASSWORD_FIELD = User.Fields.PASSWORD.getPreferredName();
private static final String BLANK_PASSWORD = "";
private InternalClient internalClient;
private InternalSecurityClient internalClient;
private final List<Tuple<ActionRequest, ActionListener<? extends ActionResponse>>> requests = new CopyOnWriteArrayList<>();
@Before
public void setupMocks() {
internalClient = new InternalClient(Settings.EMPTY, null, null) {
internalClient = new InternalSecurityClient(Settings.EMPTY, null, null) {
@Override
protected <

View File

@ -17,6 +17,7 @@ import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.env.Environment;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.RealmConfig;
import org.elasticsearch.xpack.security.authc.support.UserRoleMapper;
@ -53,7 +54,7 @@ public class NativeUserRoleMapperTests extends ESTestCase {
Collections.singletonList(FieldPredicate.create("cn=mutants,ou=groups,ou=dept_h,o=forces,dc=gc,dc=ca"))),
Arrays.asList("mutants"), Collections.emptyMap(), false);
final InternalClient client = mock(InternalClient.class);
final InternalSecurityClient client = mock(InternalSecurityClient.class);
final SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
when(lifecycleService.isSecurityIndexAvailable()).thenReturn(true);

View File

@ -715,7 +715,7 @@ public class AuthorizationServiceTests extends ESTestCase {
}
}
public void testXPackUserAndSuperusersCanExecuteOperationAgainstSecurityIndex() {
public void testSuperusersCanExecuteOperationAgainstSecurityIndex() {
final User superuser = new User("custom_admin", ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName());
roleMap.put(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName(), ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR);
ClusterState state = mock(ClusterState.class);
@ -726,37 +726,35 @@ public class AuthorizationServiceTests extends ESTestCase {
.numberOfShards(1).numberOfReplicas(0).build(), true)
.build());
for (User user : Arrays.asList(XPackUser.INSTANCE, superuser)) {
List<Tuple<String, TransportRequest>> requests = new ArrayList<>();
requests.add(new Tuple<>(DeleteAction.NAME, new DeleteRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(BulkAction.NAME + "[s]",
createBulkShardRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, DeleteRequest::new)));
requests.add(new Tuple<>(UpdateAction.NAME, new UpdateRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(IndexAction.NAME, new IndexRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(BulkAction.NAME + "[s]",
createBulkShardRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, IndexRequest::new)));
requests.add(new Tuple<>(SearchAction.NAME, new SearchRequest(SecurityLifecycleService.SECURITY_INDEX_NAME)));
requests.add(new Tuple<>(TermVectorsAction.NAME,
new TermVectorsRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(GetAction.NAME, new GetRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(TermVectorsAction.NAME,
new TermVectorsRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(IndicesAliasesAction.NAME, new IndicesAliasesRequest()
.addAliasAction(AliasActions.add().alias("security_alias").index(SecurityLifecycleService.SECURITY_INDEX_NAME))));
requests.add(new Tuple<>(ClusterHealthAction.NAME, new ClusterHealthRequest(SecurityLifecycleService.SECURITY_INDEX_NAME)));
requests.add(new Tuple<>(ClusterHealthAction.NAME,
new ClusterHealthRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "foo", "bar")));
List<Tuple<String, TransportRequest>> requests = new ArrayList<>();
requests.add(new Tuple<>(DeleteAction.NAME, new DeleteRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(BulkAction.NAME + "[s]",
createBulkShardRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, DeleteRequest::new)));
requests.add(new Tuple<>(UpdateAction.NAME, new UpdateRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(IndexAction.NAME, new IndexRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(BulkAction.NAME + "[s]",
createBulkShardRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, IndexRequest::new)));
requests.add(new Tuple<>(SearchAction.NAME, new SearchRequest(SecurityLifecycleService.SECURITY_INDEX_NAME)));
requests.add(new Tuple<>(TermVectorsAction.NAME,
new TermVectorsRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(GetAction.NAME, new GetRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(TermVectorsAction.NAME,
new TermVectorsRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "type", "id")));
requests.add(new Tuple<>(IndicesAliasesAction.NAME, new IndicesAliasesRequest()
.addAliasAction(AliasActions.add().alias("security_alias").index(SecurityLifecycleService.SECURITY_INDEX_NAME))));
requests.add(new Tuple<>(ClusterHealthAction.NAME, new ClusterHealthRequest(SecurityLifecycleService.SECURITY_INDEX_NAME)));
requests.add(new Tuple<>(ClusterHealthAction.NAME,
new ClusterHealthRequest(SecurityLifecycleService.SECURITY_INDEX_NAME, "foo", "bar")));
for (Tuple<String, TransportRequest> requestTuple : requests) {
String action = requestTuple.v1();
TransportRequest request = requestTuple.v2();
authorize(createAuthentication(user), action, request);
verify(auditTrail).accessGranted(user, action, request);
}
for (Tuple<String, TransportRequest> requestTuple : requests) {
String action = requestTuple.v1();
TransportRequest request = requestTuple.v2();
authorize(createAuthentication(superuser), action, request);
verify(auditTrail).accessGranted(superuser, action, request);
}
}
public void testXPackUserAndSuperusersCanExecuteOperationAgainstSecurityIndexWithWildcard() {
public void testSuperusersCanExecuteOperationAgainstSecurityIndexWithWildcard() {
final User superuser = new User("custom_admin", ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName());
roleMap.put(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName(), ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR);
ClusterState state = mock(ClusterState.class);
@ -769,11 +767,6 @@ public class AuthorizationServiceTests extends ESTestCase {
String action = SearchAction.NAME;
SearchRequest request = new SearchRequest("_all");
authorize(createAuthentication(XPackUser.INSTANCE), action, request);
verify(auditTrail).accessGranted(XPackUser.INSTANCE, action, request);
assertThat(request.indices(), arrayContaining(".security"));
request = new SearchRequest("_all");
authorize(createAuthentication(superuser), action, request);
verify(auditTrail).accessGranted(superuser, action, request);
assertThat(request.indices(), arrayContaining(".security"));
@ -1073,7 +1066,7 @@ public class AuthorizationServiceTests extends ESTestCase {
PlainActionFuture<Role> rolesFuture = new PlainActionFuture<>();
authorizationService.roles(XPackUser.INSTANCE, rolesFuture);
final Role roles = rolesFuture.actionGet();
assertThat(roles, equalTo(ReservedRolesStore.SUPERUSER_ROLE));
assertThat(roles, equalTo(XPackUser.ROLE));
verifyZeroInteractions(rolesStore);
}

View File

@ -69,6 +69,7 @@ import org.elasticsearch.xpack.security.authz.store.ReservedRolesStore;
import org.elasticsearch.xpack.security.test.SecurityTestUtils;
import org.elasticsearch.xpack.security.user.AnonymousUser;
import org.elasticsearch.xpack.security.user.User;
import org.elasticsearch.xpack.security.user.XPackSecurityUser;
import org.elasticsearch.xpack.security.user.XPackUser;
import org.junit.Before;
@ -1191,22 +1192,29 @@ public class IndicesAndAliasesResolverTests extends ESTestCase {
}
}
public void testXPackUserHasAccessToSecurityIndex() {
public void testXPackSecurityUserHasAccessToSecurityIndex() {
SearchRequest request = new SearchRequest();
{
final AuthorizedIndices authorizedIndices = buildAuthorizedIndices(XPackUser.INSTANCE, SearchAction.NAME);
final AuthorizedIndices authorizedIndices = buildAuthorizedIndices(XPackSecurityUser.INSTANCE, SearchAction.NAME);
List<String> indices = resolveIndices(request, authorizedIndices).getLocal();
assertThat(indices, hasItem(SecurityLifecycleService.SECURITY_INDEX_NAME));
}
{
IndicesAliasesRequest aliasesRequest = new IndicesAliasesRequest();
aliasesRequest.addAliasAction(AliasActions.add().alias("security_alias").index(SecurityLifecycleService.SECURITY_INDEX_NAME));
final AuthorizedIndices authorizedIndices = buildAuthorizedIndices(XPackUser.INSTANCE, IndicesAliasesAction.NAME);
final AuthorizedIndices authorizedIndices = buildAuthorizedIndices(XPackSecurityUser.INSTANCE, IndicesAliasesAction.NAME);
List<String> indices = resolveIndices(aliasesRequest, authorizedIndices).getLocal();
assertThat(indices, hasItem(SecurityLifecycleService.SECURITY_INDEX_NAME));
}
}
public void testXPackUserDoesNotHaveAccessToSecurityIndex() {
SearchRequest request = new SearchRequest();
final AuthorizedIndices authorizedIndices = buildAuthorizedIndices(XPackUser.INSTANCE, SearchAction.NAME);
List<String> indices = resolveIndices(request, authorizedIndices).getLocal();
assertThat(indices, not(hasItem(SecurityLifecycleService.SECURITY_INDEX_NAME)));
}
public void testNonXPackUserAccessingSecurityIndex() {
User allAccessUser = new User("all_access", "all_access");
roleMap.put("all_access", new RoleDescriptor("all_access", new String[] { "all" },

View File

@ -38,6 +38,7 @@ import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.action.role.PutRoleRequest;
import org.elasticsearch.xpack.security.audit.index.IndexAuditTrail;
@ -184,7 +185,7 @@ public class NativeRolesStoreTests extends ESTestCase {
}
public void testPutOfRoleWithFlsDlsUnlicensed() throws IOException {
final InternalClient internalClient = mock(InternalClient.class);
final InternalSecurityClient internalClient = mock(InternalSecurityClient.class);
final ClusterService clusterService = mock(ClusterService.class);
final XPackLicenseState licenseState = mock(XPackLicenseState.class);
final AtomicBoolean methodCalled = new AtomicBoolean(false);

View File

@ -80,6 +80,7 @@ import org.elasticsearch.xpack.security.authz.accesscontrol.IndicesAccessControl
import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsCache;
import org.elasticsearch.xpack.security.authz.permission.Role;
import org.elasticsearch.xpack.security.user.SystemUser;
import org.elasticsearch.xpack.security.user.XPackUser;
import org.elasticsearch.xpack.watcher.execution.TriggeredWatchStore;
import org.elasticsearch.xpack.watcher.history.HistoryStore;
import org.elasticsearch.xpack.watcher.transport.actions.ack.AckWatchAction;
@ -123,6 +124,7 @@ public class ReservedRolesStoreTests extends ESTestCase {
assertThat(ReservedRolesStore.isReserved("watcher_user"), is(true));
assertThat(ReservedRolesStore.isReserved("watcher_admin"), is(true));
assertThat(ReservedRolesStore.isReserved("kibana_dashboard_only_user"), is(true));
assertThat(ReservedRolesStore.isReserved(XPackUser.ROLE_NAME), is(true));
}
public void testIngestAdminRole() {

View File

@ -45,6 +45,7 @@ import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.security.InternalClient;
import org.elasticsearch.xpack.security.InternalSecurityClient;
import org.elasticsearch.xpack.security.test.SecurityTestUtils;
import org.elasticsearch.xpack.template.TemplateUtils;
import org.hamcrest.Matchers;
@ -71,7 +72,7 @@ public class IndexLifecycleManagerTests extends ESTestCase {
when(threadPool.getThreadContext()).thenReturn(new ThreadContext(Settings.EMPTY));
actions = new LinkedHashMap<>();
final InternalClient client = new InternalClient(Settings.EMPTY, threadPool, mockClient) {
final InternalSecurityClient client = new InternalSecurityClient(Settings.EMPTY, threadPool, mockClient) {
@Override
protected <Request extends ActionRequest,
Response extends ActionResponse,

View File

@ -5,6 +5,7 @@
*/
package org.elasticsearch.xpack.upgrade;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.reindex.ReindexPlugin;
@ -52,7 +53,8 @@ public abstract class IndexUpgradeIntegTestCase extends AbstractLicensesIntegrat
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, MockPainlessScriptEngine.TestPlugin.class);
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, MockPainlessScriptEngine.TestPlugin.class,
CommonAnalysisPlugin.class);
}
@Override

View File

@ -12,6 +12,7 @@ import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.metadata.AliasMetaData;
@ -52,7 +53,7 @@ public class InternalIndexReindexerIT extends IndexUpgradeIntegTestCase {
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, CustomScriptPlugin.class);
return Arrays.asList(XPackPlugin.class, ReindexPlugin.class, CustomScriptPlugin.class, CommonAnalysisPlugin.class);
}
public static class CustomScriptPlugin extends MockScriptPlugin {

View File

@ -157,3 +157,4 @@ indices:data/write/update/byquery
indices:data/write/delete/byquery
indices:data/write/reindex
cluster:admin/xpack/deprecation/info
cluster:admin/xpack/ml/job/forecast

View File

@ -0,0 +1,24 @@
{
"xpack.ml.forecast": {
"methods": [ "POST" ],
"url": {
"path": "/_xpack/ml/anomaly_detectors/{job_id}/_forecast",
"paths": [ "/_xpack/ml/anomaly_detectors/{job_id}/_forecast" ],
"parts": {
"job_id": {
"type": "string",
"required": true,
"description": "The ID of the job to forecast for"
}
},
"params": {
"end": {
"type": "string",
"required": false,
"description": "The end time of the forecast"
}
}
},
"body": null
}
}
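The new REST spec above registers the forecast endpoint. As a minimal sketch of how such a spec is consumed, the path template and its required `job_id` part come from the spec itself; the substitution helper is illustrative, not the real client code:

```python
# Build a concrete request path from the REST spec's path template.
# The template and the required "job_id" part are taken from the spec
# above; this helper is an illustrative sketch.
def build_path(template: str, parts: dict) -> str:
    """Substitute {placeholder} segments in a REST spec path template."""
    path = template
    for name, value in parts.items():
        path = path.replace("{" + name + "}", value)
    return path

template = "/_xpack/ml/anomaly_detectors/{job_id}/_forecast"
print(build_path(template, {"job_id": "forecast-job"}))
# /_xpack/ml/anomaly_detectors/forecast-job/_forecast
```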

View File

@ -97,7 +97,7 @@ teardown:
"Should fail gracefully when body content is not provided":
- do:
catch: request
catch: bad_request
xpack.license.post:
acknowledge: true
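These YAML tests previously used the generic `catch: request`, which accepts any error response; they now assert the specific `bad_request` (HTTP 400). A hedged sketch of the named-catch mapping follows — only `bad_request` and `missing` are exercised in this diff, and the remaining entries are illustrative assumptions:

```python
from typing import Optional

# Hedged sketch of how named `catch:` values in the REST YAML test runner
# map to HTTP status codes. Only "bad_request" (400) and "missing" (404)
# appear in this diff; the other entries are assumptions for illustration.
CATCH_STATUS = {
    "bad_request": 400,
    "unauthorized": 401,
    "forbidden": 403,
    "missing": 404,
    "conflict": 409,
}

def expected_status(catch: str) -> Optional[int]:
    # The legacy "request" value was a catch-all for any error response,
    # so it has no single expected status; return None for it.
    return CATCH_STATUS.get(catch)

print(expected_status("bad_request"))  # 400
```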

View File

@ -180,7 +180,7 @@ setup:
"Test delete with in-use model":
- do:
catch: request
catch: bad_request
xpack.ml.delete_model_snapshot:
job_id: "delete-model-snapshot"
snapshot_id: "active-snapshot"

View File

@ -89,19 +89,19 @@ setup:
"Test invalid param combinations":
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
from: 0
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
size: 1
- do:
catch: request
catch: bad_request
xpack.ml.get_filters:
filter_id: "filter-foo"
from: 0

View File

@ -0,0 +1,41 @@
setup:
- do:
xpack.ml.put_job:
job_id: forecast-job
body: >
{
"description":"A forecast job",
"analysis_config" : {
"detectors" :[{"function":"metric","field_name":"responsetime","by_field_name":"airline"}]
},
"data_description" : {
"format":"xcontent"
}
}
---
"Test forecast unknown job":
- do:
catch: missing
xpack.ml.forecast:
job_id: "non-existing-job"
---
"Test forecast on closed job":
- do:
catch: /status_exception/
xpack.ml.forecast:
job_id: "forecast-job"
---
"Test bad end param errors":
- do:
xpack.ml.open_job:
job_id: "forecast-job"
- do:
catch: /parse_exception/
xpack.ml.forecast:
job_id: "forecast-job"
end: "tomorrow"

View File

@ -348,7 +348,7 @@
}
- do:
catch: request
catch: bad_request
xpack.ml.update_job:
job_id: _all
body: >

View File

@ -173,35 +173,35 @@ setup:
---
"Test mutually-exclusive params":
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
start: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
end: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
from: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
end: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
@ -210,7 +210,7 @@ setup:
---
"Test mutually-exclusive params via body":
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
body:
@ -218,7 +218,7 @@ setup:
start: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
body:
@ -226,7 +226,7 @@ setup:
end: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
body:
@ -234,7 +234,7 @@ setup:
from: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
body:
@ -242,7 +242,7 @@ setup:
end: "2016-05-01T00:00:00Z"
- do:
catch: request
catch: bad_request
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
body:

View File

@ -118,21 +118,21 @@ setup:
---
"Test with invalid param combinations":
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1
from: 0
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1
size: 1
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1
@ -142,7 +142,7 @@ setup:
---
"Test with invalid param combinations via body":
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1
@ -150,7 +150,7 @@ setup:
from: 0
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1
@ -158,7 +158,7 @@ setup:
size: 1
- do:
catch: request
catch: bad_request
xpack.ml.get_categories:
job_id: "jobs-get-result-categories"
category_id: 1

View File

@ -125,7 +125,7 @@
# Missing a system_id causes it to fail
- do:
catch: request
catch: bad_request
xpack.monitoring.bulk:
system_api_version: "6"
interval: "10s"
@ -136,7 +136,7 @@
# Missing a system_api_version causes it to fail
- do:
catch: request
catch: bad_request
xpack.monitoring.bulk:
system_id: "kibana"
interval: "10s"
@ -147,7 +147,7 @@
# Missing an interval causes it to fail
- do:
catch: request
catch: bad_request
xpack.monitoring.bulk:
system_id: "kibana"
system_api_version: "6"

View File

@ -86,7 +86,7 @@ teardown:
---
"Test put user with different username in body":
- do:
catch: request
catch: bad_request
xpack.security.put_user:
username: "joe"
body: >

View File

@ -23,7 +23,7 @@ teardown:
---
"Test creating a user without password":
- do:
catch: request
catch: bad_request
xpack.security.put_user:
username: "no_password_user"
body: >