Merge branch 'master' into feature/sql
Original commit: elastic/x-pack-elasticsearch@67f8321368
@@ -9,7 +9,7 @@ apply plugin: 'elasticsearch.docs-test'
 * only remove entries from this list. When it is empty we'll remove it
 * entirely and have a party! There will be cake and everything.... */
 buildRestTests.expectedUnconvertedCandidates = [
-  'en/ml/getting-started.asciidoc',
+  'en/ml/getting-started-data.asciidoc',
   'en/ml/functions/count.asciidoc',
   'en/ml/functions/geo.asciidoc',
   'en/ml/functions/info.asciidoc',

docs/en/ml/getting-started-data.asciidoc (new file, 219 lines)
@@ -0,0 +1,219 @@
[[ml-gs-data]]
=== Identifying Data for Analysis

For the purposes of this tutorial, we provide sample data that you can play with
and search in {es}. When you consider your own data, however, it's important to
take a moment and think about where the {xpackml} features will be most
impactful.

The first consideration is that it must be time series data. The {ml} features
are designed to model and detect anomalies in time series data.

The second consideration, especially when you are first learning to use {ml},
is the importance of the data and how familiar you are with it. Ideally, it is
information that contains key performance indicators (KPIs) for the health,
security, or success of your business or system. It is information that you need
to monitor and act on when anomalous behavior occurs. You might even have {kib}
dashboards that you're already using to watch this data. The better you know the
data, the quicker you will be able to create {ml} jobs that generate useful
insights.

The final consideration is where the data is located. This tutorial assumes that
your data is stored in {es}. It guides you through the steps required to create
a _{dfeed}_ that passes data to a job. If your own data is outside of {es},
analysis is still possible by using the post data API.

IMPORTANT: If you want to create {ml} jobs in {kib}, you must use {dfeeds}.
That is to say, you must store your input data in {es}. When you create
a job, you select an existing index pattern and {kib} configures the {dfeed}
for you under the covers.


[float]
[[ml-gs-sampledata]]
==== Obtaining a Sample Data Set

In this step we will upload some sample data to {es}. This is standard
{es} functionality, and it sets the stage for using {ml}.

The sample data for this tutorial contains information about the requests that
are received by various applications and services in a system. A system
administrator might use this type of information to track the total number of
requests across all of the infrastructure. If the number of requests increases
or decreases unexpectedly, for example, this might be an indication that there
is a problem or that resources need to be redistributed. By using the {xpack}
{ml} features to model the behavior of this data, it is easier to identify
anomalies and take appropriate action.

Download this sample data by clicking here:
https://download.elastic.co/demos/machine_learning/gettingstarted/server_metrics.tar.gz[server_metrics.tar.gz]

Use the following commands to extract the files:

[source,shell]
----------------------------------
tar -zxvf server_metrics.tar.gz
----------------------------------

Each document in the server-metrics data set has the following schema:

[source,js]
----------------------------------
{
  "index":
  {
    "_index":"server-metrics",
    "_type":"metric",
    "_id":"1177"
  }
}
{
  "@timestamp":"2017-03-23T13:00:00",
  "accept":36320,
  "deny":4156,
  "host":"server_2",
  "response":2.4558210155,
  "service":"app_3",
  "total":40476
}
----------------------------------

TIP: The sample data sets include summarized data. For example, the `total`
value is a sum of the requests that were received by a specific service at a
particular time. If your data is stored in {es}, you can generate
this type of sum or average by using aggregations. One of the benefits of
summarizing data this way is that {es} automatically distributes
these calculations across your cluster. You can then feed this summarized data
into {xpackml} instead of raw results, which reduces the volume
of data that must be considered while detecting anomalies. For the purposes of
this tutorial, however, these summary values are stored in {es}. For more
information, see <<ml-configuring-aggregation>>.
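
After you load the sample data (as described below), you can see what this kind
of summarization looks like in practice. The following request is only an
illustration and is not one of the tutorial steps: it groups the
`server-metrics` documents into 10-minute buckets and sums the `total` field,
which is the kind of sum described in the TIP above.

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password -X GET -H 'Content-Type: application/json' 'http://localhost:9200/server-metrics/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "requests_over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "10m" },
      "aggs": {
        "sum_total": { "sum": { "field": "total" } }
      }
    }
  }
}'
----------------------------------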

Before you load the data set, you need to set up {ref}/mapping.html[_mappings_]
for the fields. Mappings divide the documents in the index into logical groups
and specify a field's characteristics, such as the field's searchability or
whether or not it's _tokenized_, or broken up into separate words.

The sample data includes an `upload_server-metrics.sh` script, which you can use
to create the mappings and load the data set. You can download it by clicking
here: https://download.elastic.co/demos/machine_learning/gettingstarted/upload_server-metrics.sh[upload_server-metrics.sh]
Before you run it, however, you must set the `USERNAME` and `PASSWORD` variables
to your actual user ID and password.

The script runs a command similar to the following example, which sets up a
mapping for the data set:

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password -X PUT -H 'Content-Type: application/json' http://localhost:9200/server-metrics -d '{
  "settings":{
    "number_of_shards":1,
    "number_of_replicas":0
  },
  "mappings":{
    "metric":{
      "properties":{
        "@timestamp":{
          "type":"date"
        },
        "accept":{
          "type":"long"
        },
        "deny":{
          "type":"long"
        },
        "host":{
          "type":"keyword"
        },
        "response":{
          "type":"float"
        },
        "service":{
          "type":"keyword"
        },
        "total":{
          "type":"long"
        }
      }
    }
  }
}'
----------------------------------

NOTE: If you run this command, you must replace `x-pack-test-password` with your
actual password.

////
This mapping specifies the following qualities for the data set:

* The _@timestamp_ field is a date.
//that uses the ISO format `epoch_second`,
//which is the number of seconds since the epoch.
* The _accept_, _deny_, and _total_ fields are long numbers.
* The _host
////

You can then use the {es} `bulk` API to load the data set. The
`upload_server-metrics.sh` script runs commands similar to the following
example, which loads the four JSON files:

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json" http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_1.json"

curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json" http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_2.json"

curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json" http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_3.json"

curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json" http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_4.json"
----------------------------------

TIP: These commands upload a total of 200 MB of data. The data set is split into
four files because there is a 100 MB limit on the size of a single `_bulk`
request.

These commands might take some time to run, depending on the computing resources
available.

You can verify that the data was loaded successfully with the following command:

[source,shell]
----------------------------------
curl 'http://localhost:9200/_cat/indices?v' -u elastic:x-pack-test-password
----------------------------------

You should see output similar to the following:

[source,shell]
----------------------------------
health status index          ... pri rep docs.count ...
green  open   server-metrics ... 1   0   905940     ...
----------------------------------
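
As an extra check, which is not part of the upload script, you can also ask for
the document count directly; the `count` in the response should match the
`docs.count` column shown above:

[source,shell]
----------------------------------
curl 'http://localhost:9200/server-metrics/_count?pretty' -u elastic:x-pack-test-password
----------------------------------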

Next, you must define an index pattern for this data set:

. Open {kib} in your web browser and log in. If you are running {kib}
locally, go to `http://localhost:5601/`.

. Click the **Management** tab, then **{kib}** > **Index Patterns**.

. If you already have index patterns, click **Create Index** to define a new
one. Otherwise, the **Create index pattern** wizard is already open.

. For this tutorial, any pattern that matches the name of the index you've
loaded will work. For example, enter `server-metrics*` as the index pattern.

. In the **Configure settings** step, select the `@timestamp` field in the
**Time Filter field name** list.

. Click **Create index pattern**.

This data set can now be analyzed in {ml} jobs in {kib}.

@@ -32,26 +32,12 @@ To create a multi-metric job in {kib}:
 . Open {kib} in your web browser and log in. If you are running {kib} locally,
 go to `http://localhost:5601/`.
 
-. Click **Machine Learning** in the side navigation, then click **Create new job**. +
-+
---
-[role="screenshot"]
-image::images/ml-kibana.jpg[Job Management]
---
+. Click **Machine Learning** in the side navigation, then click **Create new job**.
 
-. Click **Create multi metric job**. +
-+
---
-[role="screenshot"]
-image::images/ml-create-job2.jpg["Create a multi metric job"]
---
+. Select the index pattern that you created for the sample data. For example,
+`server-metrics*`.
 
-. Click the `server-metrics` index. +
-+
---
-[role="screenshot"]
-image::images/ml-gs-index.jpg["Select an index"]
---
+. In the **Use a wizard** section, click **Multi metric**.
 
 . Configure the job by providing the following job settings: +
 +

docs/en/ml/getting-started-single.asciidoc (new file, 331 lines)
@@ -0,0 +1,331 @@
[[ml-gs-jobs]]
=== Creating Single Metric Jobs

At this point in the tutorial, the goal is to detect anomalies in the
total requests received by your applications and services. The sample data
contains a single key performance indicator (KPI) to track this, which is the
total requests over time. It is therefore logical to start by creating a single
metric job for this KPI.

TIP: If you are using aggregated data, you can create an advanced job
and configure it to use a `summary_count_field_name`. The {ml} algorithms will
make the best possible use of summarized data in this case. For simplicity, in
this tutorial we will not make use of that advanced functionality. For more
information, see <<ml-configuring-aggregation>>.

A single metric job contains a single _detector_. A detector defines the type of
analysis that will occur (for example, `max`, `average`, or `rare` analytical
functions) and the fields that will be analyzed.
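
For example, the job you build in this section pairs the `sum` function with the
`total` field. Expressed in the form that the {ml} APIs use, that detector looks
roughly like the following sketch (the exact configuration that {kib} generates
may contain additional settings):

[source,js]
----------------------------------
{
  "function": "sum",
  "field_name": "total"
}
----------------------------------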

To create a single metric job in {kib}:

. Open {kib} in your web browser and log in. If you are running {kib} locally,
go to `http://localhost:5601/`.

. Click **Machine Learning** in the side navigation.

. Click **Create new job**.

. Select the index pattern that you created for the sample data. For example,
`server-metrics*`.

. In the **Use a wizard** section, click **Single metric**.

. Configure the job by providing the following information: +
+
--
[role="screenshot"]
image::images/ml-gs-single-job.jpg["Create a new job from the server-metrics index"]
--

.. For the **Aggregation**, select `Sum`. This value specifies the analysis
function that is used.
+
--
Some of the analytical functions look for single anomalous data points. For
example, `max` identifies the maximum value that is seen within a bucket.
Others perform some aggregation over the length of the bucket. For example,
`mean` calculates the mean of all the data points seen within the bucket.
Similarly, `count` calculates the total number of data points within the bucket.
In this tutorial, you are using the `sum` function, which calculates the sum of
the specified field's values within the bucket. For descriptions of all the
functions, see <<ml-functions>>.
--

.. For the **Field**, select `total`. This value specifies the field that
the detector uses in the function.
+
--
NOTE: Some functions such as `count` and `rare` do not require fields.
--

.. For the **Bucket span**, enter `10m`. This value specifies the size of the
interval that the analysis is aggregated into.
+
--
The {xpackml} features use the concept of a bucket to divide up the time series
into batches for processing. For example, if you are monitoring
the total number of requests in the system,
using a bucket span of 1 hour would mean that at the end of each hour, it
calculates the sum of the requests for the last hour and computes the
anomalousness of that value compared to previous hours.

The bucket span has two purposes: it dictates over what time span to look for
anomalous features in data, and also determines how quickly anomalies can be
detected. Choosing a shorter bucket span enables anomalies to be detected more
quickly. However, there is a risk of being too sensitive to natural variations
or noise in the input data. Choosing too long a bucket span can mean that
interesting anomalies are averaged away. There is also the possibility that the
aggregation might smooth out some anomalies based on when the bucket starts
in time.

The bucket span has a significant impact on the analysis. When you're trying to
determine what value to use, take into account the granularity at which you
want to perform the analysis, the frequency of the input data, the duration of
typical anomalies, and the frequency at which alerting is required.
--

. Determine whether you want to process all of the data or only part of it. If
you want to analyze all of the existing data, click
**Use full server-metrics* data**. If you want to see what happens when you
stop and start {dfeeds} and process additional data over time, click the time
picker in the {kib} toolbar. Since the sample data spans a period of time
between March 23, 2017 and April 22, 2017, click **Absolute**. Set the start
time to March 23, 2017 and the end time to April 1, 2017, for example. Once
you've got the time range set up, click the **Go** button. +
+
--
[role="screenshot"]
image::images/ml-gs-job1-time.jpg["Setting the time range for the {dfeed}"]
--
+
--
A graph is generated, which represents the total number of requests over time.

Note that the **Estimate bucket span** option is no longer greyed out in the
**Bucket span** field. This is an experimental feature that you can use to help
determine an appropriate bucket span for your data. For the purposes of this
tutorial, we will leave the bucket span at 10 minutes.
--

. Provide a name for the job, for example `total-requests`. The job name must
be unique in your cluster. You can also optionally provide a description of the
job and create a job group.

. Click **Create Job**. +
+
--
[role="screenshot"]
image::images/ml-gs-job1.jpg["A graph of the total number of requests over time"]
--

As the job is created, the graph is updated to give a visual representation of
the progress of {ml} as the data is processed. This view is only available
whilst the job is running.

When the job is created, you can choose to view the results, continue the job
in real time, or create a watch. In this tutorial, we will look at how to
manage jobs and {dfeeds} before we view the results.

TIP: The `create_single_metric.sh` script creates a similar job and {dfeed} by
using the {ml} APIs. You can download that script by clicking
here: https://download.elastic.co/demos/machine_learning/gettingstarted/create_single_metric.sh[create_single_metric.sh]
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].
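
To give a feel for what that script does, the following commands are a sketch of
how a similar job and {dfeed} could be created and started with the {ml} APIs.
The names `total-requests` and `datafeed-total-requests` are simply the examples
used in this tutorial, the request bodies show only a minimal subset of the
available settings, and the exact endpoints and required fields can vary between
releases, so treat this as an outline rather than the exact contents of the
script:

[source,shell]
----------------------------------
# Create a job with a single sum(total) detector and a 10 minute bucket span.
curl -u elastic:x-pack-test-password -X PUT -H 'Content-Type: application/json' http://localhost:9200/_xpack/ml/anomaly_detectors/total-requests -d '{
  "description":"Sum of total requests",
  "analysis_config":{
    "bucket_span":"10m",
    "detectors":[
      {"function":"sum","field_name":"total"}
    ]
  },
  "data_description":{
    "time_field":"@timestamp"
  }
}'

# Create a datafeed that reads the server-metrics index and feeds the job.
curl -u elastic:x-pack-test-password -X PUT -H 'Content-Type: application/json' http://localhost:9200/_xpack/ml/datafeeds/datafeed-total-requests -d '{
  "job_id":"total-requests",
  "indexes":["server-metrics"],
  "types":["metric"]
}'

# Open the job, then start the datafeed.
curl -u elastic:x-pack-test-password -X POST http://localhost:9200/_xpack/ml/anomaly_detectors/total-requests/_open
curl -u elastic:x-pack-test-password -X POST http://localhost:9200/_xpack/ml/datafeeds/datafeed-total-requests/_start
----------------------------------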

[[ml-gs-job1-manage]]
=== Managing Jobs

After you create a job, you can see its status in the **Job Management** tab: +

[role="screenshot"]
image::images/ml-gs-job1-manage1.jpg["Status information for the total-requests job"]

The following information is provided for each job:

Job ID::
The unique identifier for the job.

Description::
The optional description of the job.

Processed records::
The number of records that have been processed by the job.

Memory status::
The status of the mathematical models. When you create jobs by using the APIs or
by using the advanced options in {kib}, you can specify a `model_memory_limit`.
That value is the maximum amount of memory resources that the mathematical
models can use. Once that limit is approached, data pruning becomes more
aggressive. Upon exceeding that limit, new entities are not modeled. For more
information about this setting, see
{ref}/ml-job-resource.html#ml-apilimits[Analysis Limits]. The memory status
field reflects whether you have reached or exceeded the model memory limit. It
can have one of the following values: +
`ok`::: The models stayed below the configured value.
`soft_limit`::: The models used more than 60% of the configured memory limit
and older unused models will be pruned to free up space.
`hard_limit`::: The models used more space than the configured memory limit.
As a result, not all incoming data was processed.

Job state::
The status of the job, which can be one of the following values: +
`opened`::: The job is available to receive and process data.
`closed`::: The job finished successfully with its model state persisted.
The job must be opened before it can accept further data.
`closing`::: The job close action is in progress and has not yet completed.
A closing job cannot accept further data.
`failed`::: The job did not finish successfully due to an error.
This situation can occur due to invalid input data.
If the job has irrevocably failed, it must be force closed and then deleted.
If the {dfeed} can be corrected, the job can be closed and then re-opened.

{dfeed-cap} state::
The status of the {dfeed}, which can be one of the following values: +
`started`::: The {dfeed} is actively receiving data.
`stopped`::: The {dfeed} is stopped and will not receive data until it is
re-started.

Latest timestamp::
The timestamp of the last processed record.
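
Much of this information is also available over the API. For example, the job
stats and {dfeed} stats APIs return the processed record count, memory status,
job state, and {dfeed} state for the job created in this tutorial (again
assuming the names used above):

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password 'http://localhost:9200/_xpack/ml/anomaly_detectors/total-requests/_stats?pretty'
curl -u elastic:x-pack-test-password 'http://localhost:9200/_xpack/ml/datafeeds/datafeed-total-requests/_stats?pretty'
----------------------------------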

If you click the arrow beside the name of the job, you can show or hide
additional information, such as the settings, configuration information, or
messages for the job.

You can also click one of the **Actions** buttons to start the {dfeed}, edit
the job or {dfeed}, and clone or delete the job, for example.

[float]
[[ml-gs-job1-datafeed]]
==== Managing {dfeeds-cap}

A {dfeed} can be started and stopped multiple times throughout its lifecycle.
If you want to retrieve more data from {es} and the {dfeed} is stopped, you must
restart it.

For example, if you did not use the full data when you created the job, you can
now process the remaining data by restarting the {dfeed}:

. In the **Machine Learning** / **Job Management** tab, click the following
button to start the {dfeed}: image:images/ml-start-feed.jpg["Start {dfeed}"]

. Choose a start time and end time. For example,
click **Continue from 2017-04-01 23:59:00** and select **2017-04-30** as the
search end time. Then click **Start**. The date picker defaults to the latest
timestamp of processed data. Be careful not to leave any gaps in the analysis,
otherwise you might miss anomalies. +
+
--
[role="screenshot"]
image::images/ml-gs-job1-datafeed.jpg["Restarting a {dfeed}"]
--

The {dfeed} state changes to `started`, the job state changes to `opened`,
and the number of processed records increases as the new data is analyzed. The
latest timestamp information also increases.

TIP: If your data is being loaded continuously, you can continue running the job
in real time. For this, start your {dfeed} and select **No end time**.

If you want to stop the {dfeed} at this point, you can click the following
button: image:images/ml-stop-feed.jpg["Stop {dfeed}"]
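
If you prefer to drive the {dfeed} from the command line, the start and stop
{dfeed} APIs accept the same kind of time range. The sketch below assumes the
`datafeed-total-requests` name used earlier; check the API reference for the
timestamp formats accepted by your release:

[source,shell]
----------------------------------
# Process data from where the analysis left off up to the end of April 2017.
curl -u elastic:x-pack-test-password -X POST -H 'Content-Type: application/json' http://localhost:9200/_xpack/ml/datafeeds/datafeed-total-requests/_start -d '{
  "start":"2017-04-01T23:59:00Z",
  "end":"2017-04-30T00:00:00Z"
}'

# Stop the datafeed when you no longer want to analyze new data.
curl -u elastic:x-pack-test-password -X POST http://localhost:9200/_xpack/ml/datafeeds/datafeed-total-requests/_stop
----------------------------------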

Now that you have processed all the data, let's start exploring the job results.

[[ml-gs-job1-analyze]]
=== Exploring Single Metric Job Results

The {xpackml} features analyze the input stream of data, model its behavior,
and perform analysis based on the detectors you defined in your job. When an
event occurs outside of the model, that event is identified as an anomaly.

Result records for each anomaly are stored in `.ml-anomalies-*` indices in {es}.
By default, the name of the index where {ml} results are stored is labelled
`shared`, which corresponds to the `.ml-anomalies-shared` index.
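
If you want to work with the results outside of {kib}, you can retrieve them
through the results APIs instead of querying those indices directly. For
example, a get buckets request along the following lines (a sketch; see the API
reference for the full set of parameters) returns the bucket-level results for
the tutorial job, keeping only buckets with an anomaly score of at least 75:

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password -X GET -H 'Content-Type: application/json' http://localhost:9200/_xpack/ml/anomaly_detectors/total-requests/results/buckets -d '{
  "anomaly_score": 75.0,
  "sort": "anomaly_score",
  "desc": true
}'
----------------------------------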

You can use the **Anomaly Explorer** or the **Single Metric Viewer** in {kib} to
view the analysis results.

Anomaly Explorer::
This view contains swim lanes showing the maximum anomaly score over time.
There is an overall swim lane that shows the overall score for the job, and
also swim lanes for each influencer. By selecting a block in a swim lane, the
anomaly details are displayed alongside the original source data (where
applicable).

Single Metric Viewer::
This view contains a chart that represents the actual and expected values over
time. This is only available for jobs that analyze a single time series and
where `model_plot_config` is enabled. As in the **Anomaly Explorer**, anomalous
data points are shown in different colors depending on their score.

By default when you view the results for a single metric job, the
**Single Metric Viewer** opens:
[role="screenshot"]
image::images/ml-gs-job1-analysis.jpg["Single Metric Viewer for total-requests job"]

The blue line in the chart represents the actual data values. The shaded blue
area represents the bounds for the expected values. The area between the upper
and lower bounds covers the most likely values for the model. If a value is
outside of this area, then it can be said to be anomalous.

If you slide the time selector from the beginning of the data to the end of the
data, you can see how the model improves as it processes more data. At the
beginning, the expected range of values is pretty broad and the model is not
capturing the periodicity in the data. But it quickly learns and begins to
reflect the daily variation.

Any data points outside the range that was predicted by the model are marked
as anomalies. When you have high volumes of real-life data, many anomalies
might be found. These vary in probability from very likely to highly unlikely,
that is to say, from not particularly anomalous to highly anomalous. There
can be none, one or two or tens, sometimes hundreds of anomalies found within
each bucket. There can be many thousands found per job. In order to provide
a sensible view of the results, an _anomaly score_ is calculated for each bucket
time interval. The anomaly score is a value from 0 to 100, which indicates
the significance of the observed anomaly compared to previously seen anomalies.
The highly anomalous values are shown in red and the low scored values are
indicated in blue. An interval with a high anomaly score is significant and
requires investigation.

Slide the time selector to a section of the time series that contains a red
anomaly data point. If you hover over the point, you can see more information
about that data point. You can also see details in the **Anomalies** section
of the viewer. For example:
[role="screenshot"]
image::images/ml-gs-job1-anomalies.jpg["Single Metric Viewer Anomalies for total-requests job"]

For each anomaly you can see key details such as the time, the actual and
expected ("typical") values, and their probability.

By default, the table contains all anomalies that have a severity of "warning"
or higher in the selected section of the timeline. If you are only interested in
critical anomalies, for example, you can change the severity threshold for this
table.

The anomalies table also automatically calculates an interval for the data in
the table. If the time difference between the earliest and latest records in the
table is less than two days, the data is aggregated by hour to show the details
of the highest severity anomaly for each detector. Otherwise, it is
aggregated by day. You can change the interval for the table, for example, to
show all anomalies.

You can see the same information in a different format by using the
**Anomaly Explorer**:
[role="screenshot"]
image::images/ml-gs-job1-explorer.jpg["Anomaly Explorer for total-requests job"]

Click one of the red sections in the swim lane to see details about the anomalies
that occurred in that time interval. For example:
[role="screenshot"]
image::images/ml-gs-job1-explorer-anomaly.jpg["Anomaly Explorer details for total-requests job"]

After you have identified anomalies, often the next step is to try to determine
the context of those situations. For example, are there other factors that are
contributing to the problem? Are the anomalies confined to particular
applications or servers? You can begin to troubleshoot these situations by
layering additional jobs or creating multi-metric jobs.

docs/en/ml/getting-started-wizards.asciidoc (new file, 99 lines)
@@ -0,0 +1,99 @@
[[ml-gs-wizards]]
=== Creating Jobs in {kib}
++++
<titleabbrev>Creating Jobs</titleabbrev>
++++

Machine learning jobs contain the configuration information and metadata
necessary to perform an analytical task. They also contain the results of the
analytical task.

[NOTE]
--
This tutorial uses {kib} to create jobs and view results, but you can
alternatively use APIs to accomplish most tasks.
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].

The {xpackml} features in {kib} use pop-ups. You must configure your
web browser so that it does not block pop-up windows, or create an
exception for your {kib} URL.
--

{kib} provides wizards that help you create typical {ml} jobs. For example, you
can use wizards to create single metric, multi-metric, population, and advanced
jobs.

To see the job creation wizards:

. Open {kib} in your web browser and log in. If you are running {kib} locally,
go to `http://localhost:5601/`.

. Click **Machine Learning** in the side navigation.

. Click **Create new job**.

. Click the `server-metrics*` index pattern.

You can then choose from a list of job wizards. For example:

[role="screenshot"]
image::images/ml-create-job.jpg["Job creation wizards in {kib}"]

If you are not certain which wizard to use, there is also a **Data Visualizer**
that can help you explore the fields in your data.

To learn more about the sample data:

. Click **Data Visualizer**. +
+
--
[role="screenshot"]
image::images/ml-data-visualizer.jpg["Data Visualizer in {kib}"]
--

. Select a time period that you're interested in exploring by using the time
picker in the {kib} toolbar. Alternatively, click
**Use full server-metrics* data** to view data over the full time range. In this
sample data, the documents relate to March and April 2017.

. Optional: Change the number of documents per shard that are used in the
visualizations. There is a relatively small number of documents in the sample
data, so you can choose a value of `all`. For larger data sets, keep in mind
that using a large sample size increases query run times and increases the load
on the cluster.

[role="screenshot"]
image::images/ml-data-metrics.jpg["Data Visualizer output for metrics in {kib}"]

The fields in the indices are listed in two sections. The first section contains
the numeric ("metric") fields. The second section contains non-metric fields
(such as `keyword`, `text`, `date`, `boolean`, `ip`, and `geo_point` data types).

For metric fields, the **Data Visualizer** indicates how many documents contain
the field in the selected time period. It also provides information about the
minimum, median, and maximum values, the number of distinct values, and their
distribution. You can use the distribution chart to get a better idea of how
the values in the data are clustered. Alternatively, you can view the top values
for metric fields. For example:

[role="screenshot"]
image::images/ml-data-topmetrics.jpg["Data Visualizer output for top values in {kib}"]
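
Similar summary statistics can also be computed directly with {es} aggregations.
For example, the following request (purely illustrative; it is not necessarily
what the **Data Visualizer** runs) returns minimum, maximum, and average values
plus an approximate distinct count for the `total` field; a `percentiles`
aggregation could be added for the median:

[source,shell]
----------------------------------
curl -u elastic:x-pack-test-password -X GET -H 'Content-Type: application/json' 'http://localhost:9200/server-metrics/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "total_stats": { "stats": { "field": "total" } },
    "total_distinct": { "cardinality": { "field": "total" } }
  }
}'
----------------------------------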

For date fields, the **Data Visualizer** provides the earliest and latest field
values and the number and percentage of documents that contain the field
during the selected time period. For example:

[role="screenshot"]
image::images/ml-data-dates.jpg["Data Visualizer output for date fields in {kib}"]

For keyword fields, the **Data Visualizer** provides the number of distinct
values, a list of the top values, and the number and percentage of documents
that contain the field during the selected time period. For example:

[role="screenshot"]
image::images/ml-data-keywords.jpg["Data Visualizer output for keyword fields in {kib}"]

In this tutorial, you will create single and multi-metric jobs that use the
`total`, `response`, `service`, and `host` fields. Though there is an option to
create an advanced job directly from the **Data Visualizer**, we will use the
single and multi-metric job creation wizards instead.

@@ -1,14 +1,9 @@
 [[ml-getting-started]]
-== Getting Started
+== Getting Started with Machine Learning
+++++
+<titleabbrev>Getting Started</titleabbrev>
+++++
 
-////
-{xpackml} features automatically detect:
-* Anomalies in single or multiple time series
-* Outliers in a population (also known as _entity profiling_)
-* Rare events (also known as _log categorization_)
-
-This tutorial is focuses on an anomaly detection scenario in single time series.
-////
 Ready to get some hands-on experience with the {xpackml} features? This
 tutorial shows you how to:
 
@@ -79,583 +74,8 @@ significant changes to the system. You can alternatively assign the
 
 For more information, see <<built-in-roles>> and <<privileges-list-cluster>>.
 
-[[ml-gs-data]]
-=== Identifying Data for Analysis
+include::getting-started-data.asciidoc[]
+include::getting-started-wizards.asciidoc[]
+include::getting-started-single.asciidoc[]
|
||||||
For the purposes of this tutorial, we provide sample data that you can play with
|
|
||||||
and search in {es}. When you consider your own data, however, it's important to
|
|
||||||
take a moment and think about where the {xpackml} features will be most
|
|
||||||
impactful.
|
|
||||||
|
|
||||||
The first consideration is that it must be time series data. The {ml} features
|
|
||||||
are designed to model and detect anomalies in time series data.
|
|
||||||
|
|
||||||
The second consideration, especially when you are first learning to use {ml},
|
|
||||||
is the importance of the data and how familiar you are with it. Ideally, it is
|
|
||||||
information that contains key performance indicators (KPIs) for the health,
|
|
||||||
security, or success of your business or system. It is information that you need
|
|
||||||
to monitor and act on when anomalous behavior occurs. You might even have {kib}
|
|
||||||
dashboards that you're already using to watch this data. The better you know the
|
|
||||||
data, the quicker you will be able to create {ml} jobs that generate useful
|
|
||||||
insights.
|
|
||||||
|
|
||||||
The final consideration is where the data is located. This tutorial assumes that
|
|
||||||
your data is stored in {es}. It guides you through the steps required to create
|
|
||||||
a _{dfeed}_ that passes data to a job. If your own data is outside of {es},
|
|
||||||
analysis is still possible by using a post data API.
|
|
||||||
|
|
||||||
IMPORTANT: If you want to create {ml} jobs in {kib}, you must use {dfeeds}.
|
|
||||||
That is to say, you must store your input data in {es}. When you create
|
|
||||||
a job, you select an existing index pattern and {kib} configures the {dfeed}
|
|
||||||
for you under the covers.
|
|
||||||
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[[ml-gs-sampledata]]
|
|
||||||
==== Obtaining a Sample Data Set
|
|
||||||
|
|
||||||
In this step we will upload some sample data to {es}. This is standard
|
|
||||||
{es} functionality, and is needed to set the stage for using {ml}.
|
|
||||||
|
|
||||||
The sample data for this tutorial contains information about the requests that
|
|
||||||
are received by various applications and services in a system. A system
|
|
||||||
administrator might use this type of information to track the total number of
|
|
||||||
requests across all of the infrastructure. If the number of requests increases
|
|
||||||
or decreases unexpectedly, for example, this might be an indication that there
|
|
||||||
is a problem or that resources need to be redistributed. By using the {xpack}
|
|
||||||
{ml} features to model the behavior of this data, it is easier to identify
|
|
||||||
anomalies and take appropriate action.
|
|
||||||
|
|
||||||
Download this sample data by clicking here:
|
|
||||||
https://download.elastic.co/demos/machine_learning/gettingstarted/server_metrics.tar.gz[server_metrics.tar.gz]
|
|
||||||
|
|
||||||
Use the following commands to extract the files:
|
|
||||||
|
|
||||||
[source,shell]
|
|
||||||
----------------------------------
|
|
||||||
tar -zxvf server_metrics.tar.gz
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
Each document in the server-metrics data set has the following schema:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
----------------------------------
|
|
||||||
{
|
|
||||||
"index":
|
|
||||||
{
|
|
||||||
"_index":"server-metrics",
|
|
||||||
"_type":"metric",
|
|
||||||
"_id":"1177"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
{
|
|
||||||
"@timestamp":"2017-03-23T13:00:00",
|
|
||||||
"accept":36320,
|
|
||||||
"deny":4156,
|
|
||||||
"host":"server_2",
|
|
||||||
"response":2.4558210155,
|
|
||||||
"service":"app_3",
|
|
||||||
"total":40476
|
|
||||||
}
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
TIP: The sample data sets include summarized data. For example, the `total`
|
|
||||||
value is a sum of the requests that were received by a specific service at a
|
|
||||||
particular time. If your data is stored in {es}, you can generate
|
|
||||||
this type of sum or average by using aggregations. One of the benefits of
|
|
||||||
summarizing data this way is that {es} automatically distributes
|
|
||||||
these calculations across your cluster. You can then feed this summarized data
|
|
||||||
into {xpackml} instead of raw results, which reduces the volume
|
|
||||||
of data that must be considered while detecting anomalies. For the purposes of
|
|
||||||
this tutorial, however, these summary values are stored in {es}. For more
|
|
||||||
information, see <<ml-configuring-aggregation>>.
|
|
||||||
|
|
||||||
Before you load the data set, you need to set up {ref}/mapping.html[_mappings_]
|
|
||||||
for the fields. Mappings divide the documents in the index into logical groups
|
|
||||||
and specify a field's characteristics, such as the field's searchability or
|
|
||||||
whether or not it's _tokenized_, or broken up into separate words.
|
|
||||||
|
|
||||||
The sample data includes an `upload_server-metrics.sh` script, which you can use
|
|
||||||
to create the mappings and load the data set. You can download it by clicking
|
|
||||||
here: https://download.elastic.co/demos/machine_learning/gettingstarted/upload_server-metrics.sh[upload_server-metrics.sh]
|
|
||||||
Before you run it, however, you must edit the USERNAME and PASSWORD variables
|
|
||||||
with your actual user ID and password.
|
|
||||||
|
|
||||||
The script runs a command similar to the following example, which sets up a
|
|
||||||
mapping for the data set:
|
|
||||||
|
|
||||||
[source,shell]
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
curl -u elastic:x-pack-test-password -X PUT -H 'Content-Type: application/json'
|
|
||||||
http://localhost:9200/server-metrics -d '{
|
|
||||||
"settings":{
|
|
||||||
"number_of_shards":1,
|
|
||||||
"number_of_replicas":0
|
|
||||||
},
|
|
||||||
"mappings":{
|
|
||||||
"metric":{
|
|
||||||
"properties":{
|
|
||||||
"@timestamp":{
|
|
||||||
"type":"date"
|
|
||||||
},
|
|
||||||
"accept":{
|
|
||||||
"type":"long"
|
|
||||||
},
|
|
||||||
"deny":{
|
|
||||||
"type":"long"
|
|
||||||
},
|
|
||||||
"host":{
|
|
||||||
"type":"keyword"
|
|
||||||
},
|
|
||||||
"response":{
|
|
||||||
"type":"float"
|
|
||||||
},
|
|
||||||
"service":{
|
|
||||||
"type":"keyword"
|
|
||||||
},
|
|
||||||
"total":{
|
|
||||||
"type":"long"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}'
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
NOTE: If you run this command, you must replace `x-pack-test-password` with your
|
|
||||||
actual password.
|
|
||||||
|
|
||||||
////
|
|
||||||
This mapping specifies the following qualities for the data set:
|
|
||||||
|
|
||||||
* The _@timestamp_ field is a date.
|
|
||||||
//that uses the ISO format `epoch_second`,
|
|
||||||
//which is the number of seconds since the epoch.
|
|
||||||
* The _accept_, _deny_, and _total_ fields are long numbers.
|
|
||||||
* The _host
|
|
||||||
////
|
|
||||||
|
|
||||||
You can then use the {es} `bulk` API to load the data set. The
|
|
||||||
`upload_server-metrics.sh` script runs commands similar to the following
|
|
||||||
example, which loads the four JSON files:
|
|
||||||
|
|
||||||
[source,shell]
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json"
|
|
||||||
http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_1.json"
|
|
||||||
|
|
||||||
curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json"
|
|
||||||
http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_2.json"
|
|
||||||
|
|
||||||
curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json"
|
|
||||||
http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_3.json"
|
|
||||||
|
|
||||||
curl -u elastic:x-pack-test-password -X POST -H "Content-Type: application/json"
|
|
||||||
http://localhost:9200/server-metrics/_bulk --data-binary "@server-metrics_4.json"
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
TIP: This will upload 200MB of data. This is split into 4 files as there is a
|
|
||||||
maximum 100MB limit when using the `_bulk` API.
|
|
||||||
|
|
||||||
These commands might take some time to run, depending on the computing resources
|
|
||||||
available.
|
|
||||||
|
|
||||||
You can verify that the data was loaded successfully with the following command:
|
|
||||||
|
|
||||||
[source,shell]
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
curl 'http://localhost:9200/_cat/indices?v' -u elastic:x-pack-test-password
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
You should see output similar to the following:
|
|
||||||
|
|
||||||
[source,shell]
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
health status index ... pri rep docs.count ...
|
|
||||||
green open server-metrics ... 1 0 905940 ...
|
|
||||||
----------------------------------
|
|
||||||
|
|
||||||
Next, you must define an index pattern for this data set:
|
|
||||||
|
|
||||||
. Open {kib} in your web browser and log in. If you are running {kib}
|
|
||||||
locally, go to `http://localhost:5601/`.
|
|
||||||
|
|
||||||
. Click the **Management** tab, then **Index Patterns**.
|
|
||||||
|
|
||||||
. If you already have index patterns, click the plus sign (+) to define a new
|
|
||||||
one. Otherwise, the **Configure an index pattern** wizard is already open.
|
|
||||||
|
|
||||||
. For this tutorial, any pattern that matches the name of the index you've
|
|
||||||
loaded will work. For example, enter `server-metrics*` as the index pattern.
|
|
||||||
|
|
||||||
. Verify that the **Index contains time-based events** is checked.
|
|
||||||
|
|
||||||
. Select the `@timestamp` field from the **Time-field name** list.
|
|
||||||
|
|
||||||
. Click **Create**.
|
|
||||||
|
|
||||||
This data set can now be analyzed in {ml} jobs in {kib}.
|
|
||||||
|
|
||||||
|
|
||||||
[[ml-gs-jobs]]
|
|
||||||
=== Creating Single Metric Jobs
|
|
||||||
|
|
||||||
Machine learning jobs contain the configuration information and metadata
|
|
||||||
necessary to perform an analytical task. They also contain the results of the
|
|
||||||
analytical task.
|
|
||||||
|
|
||||||
[NOTE]
|
|
||||||
--
|
|
||||||
This tutorial uses {kib} to create jobs and view results, but you can
|
|
||||||
alternatively use APIs to accomplish most tasks.
|
|
||||||
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].
|
|
||||||
|
|
||||||
The {xpackml} features in {kib} use pop-ups. You must configure your
|
|
||||||
web browser so that it does not block pop-up windows or create an
|
|
||||||
exception for your Kibana URL.
|
|
||||||
--
|
|
||||||
|
|
||||||
You can choose to create single metric, multi-metric, or advanced jobs in
|
|
||||||
{kib}. At this point in the tutorial, the goal is to detect anomalies in the
|
|
||||||
total requests received by your applications and services. The sample data
|
|
||||||
contains a single key performance indicator to track this, which is the total
|
|
||||||
requests over time. It is therefore logical to start by creating a single metric
|
|
||||||
job for this KPI.
|
|
||||||
|
|
||||||
TIP: If you are using aggregated data, you can create an advanced job
|
|
||||||
and configure it to use a `summary_count_field_name`. The {ml} algorithms will
|
|
||||||
make the best possible use of summarized data in this case. For simplicity, in
|
|
||||||
this tutorial we will not make use of that advanced functionality.
|
|
||||||
|
|
||||||
//TO-DO: Add link to aggregations.asciidoc: For more information, see <<>>.
|
|
||||||
|
|
||||||
A single metric job contains a single _detector_. A detector defines the type of
|
|
||||||
analysis that will occur (for example, `max`, `average`, or `rare` analytical
|
|
||||||
functions) and the fields that will be analyzed.
|
|
||||||
|
|
||||||
To create a single metric job in {kib}:
|
|
||||||
|
|
||||||
. Open {kib} in your web browser and log in. If you are running {kib} locally,
|
|
||||||
go to `http://localhost:5601/`.
|
|
||||||
|
|
||||||
. Click **Machine Learning** in the side navigation: +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-kibana.jpg[Job Management]
|
|
||||||
--
|
|
||||||
|
|
||||||
. Click **Create new job**.
|
|
||||||
|
|
||||||
. Click **Create single metric job**. +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-create-jobs.jpg["Create a new job"]
|
|
||||||
--
|
|
||||||
|
|
||||||
. Click the `server-metrics` index. +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-gs-index.jpg["Select an index"]
|
|
||||||
--
|
|
||||||
|
|
||||||
. Configure the job by providing the following information: +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-gs-single-job.jpg["Create a new job from the server-metrics index"]
|
|
||||||
--
|
|
||||||
|
|
||||||
.. For the **Aggregation**, select `Sum`. This value specifies the analysis
|
|
||||||
function that is used.
|
|
||||||
+
|
|
||||||
--
|
|
||||||
Some of the analytical functions look for single anomalous data points. For
|
|
||||||
example, `max` identifies the maximum value that is seen within a bucket.
|
|
||||||
Others perform some aggregation over the length of the bucket. For example,
|
|
||||||
`mean` calculates the mean of all the data points seen within the bucket.
|
|
||||||
Similarly, `count` calculates the total number of data points within the bucket.
|
|
||||||
In this tutorial, you are using the `sum` function, which calculates the sum of
|
|
||||||
the specified field's values within the bucket.
|
|
||||||
--
|
|
||||||
|
|
||||||
.. For the **Field**, select `total`. This value specifies the field that
|
|
||||||
the detector uses in the function.
|
|
||||||
+
|
|
||||||
--
|
|
||||||
NOTE: Some functions such as `count` and `rare` do not require fields.
|
|
||||||
--
|
|
||||||
|
|
||||||
.. For the **Bucket span**, enter `10m`. This value specifies the size of the
|
|
||||||
interval that the analysis is aggregated into.
|
|
||||||
+
|
|
||||||
--
|
|
||||||
The {xpackml} features use the concept of a bucket to divide up the time series
|
|
||||||
into batches for processing. For example, if you are monitoring
|
|
||||||
the total number of requests in the system,
|
|
||||||
//and receive a data point every 10 minutes
|
|
||||||
using a bucket span of 1 hour would mean that at the end of each hour, it
|
|
||||||
calculates the sum of the requests for the last hour and computes the
|
|
||||||
anomalousness of that value compared to previous hours.
|
|
||||||
|
|
||||||
The bucket span has two purposes: it dictates over what time span to look for
|
|
||||||
anomalous features in data, and also determines how quickly anomalies can be
|
|
||||||
detected. Choosing a shorter bucket span enables anomalies to be detected more
|
|
||||||
quickly. However, there is a risk of being too sensitive to natural variations
|
|
||||||
or noise in the input data. Choosing too long a bucket span can mean that
|
|
||||||
interesting anomalies are averaged away. There is also the possibility that the
|
|
||||||
aggregation might smooth out some anomalies based on when the bucket starts
|
|
||||||
in time.
|
|
||||||
|
|
||||||
The bucket span has a significant impact on the analysis. When you're trying to
|
|
||||||
determine what value to use, take into account the granularity at which you
|
|
||||||
want to perform the analysis, the frequency of the input data, the duration of
|
|
||||||
typical anomalies and the frequency at which alerting is required.
|
|
||||||
--
|
|
||||||
|
|
||||||
. Determine whether you want to process all of the data or only part of it. If
|
|
||||||
you want to analyze all of the existing data, click
|
|
||||||
**Use full server-metrics* data**. If you want to see what happens when you
|
|
||||||
stop and start {dfeeds} and process additional data over time, click the time
|
|
||||||
picker in the {kib} toolbar. Since the sample data spans a period of time
|
|
||||||
between March 23, 2017 and April 22, 2017, click **Absolute**. Set the start
|
|
||||||
time to March 23, 2017 and the end time to April 1, 2017, for example. Once
|
|
||||||
you've got the time range set up, click the **Go** button. +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-gs-job1-time.jpg["Setting the time range for the {dfeed}"]
|
|
||||||
--
|
|
||||||
+
|
|
||||||
--
|
|
||||||
A graph is generated, which represents the total number of requests over time.
|
|
||||||
--
|
|
||||||
|
|
||||||
. Provide a name for the job, for example `total-requests`. The job name must
|
|
||||||
be unique in your cluster. You can also optionally provide a description of the
|
|
||||||
job.
|
|
||||||
|
|
||||||
. Click **Create Job**. +
|
|
||||||
+
|
|
||||||
--
|
|
||||||
[role="screenshot"]
|
|
||||||
image::images/ml-gs-job1.jpg["A graph of the total number of requests over time"]
|
|
||||||
--
|
|
||||||
|
|
||||||
As the job is created, the graph is updated to give a visual representation of
|
|
||||||
the progress of {ml} as the data is processed. This view is only available whilst the
|
|
||||||
job is running.
|
|
||||||
|
|
||||||
TIP: The `create_single_metic.sh` script creates a similar job and {dfeed} by
|
|
||||||
using the {ml} APIs. You can download that script by clicking
|
|
||||||
here: https://download.elastic.co/demos/machine_learning/gettingstarted/create_single_metric.sh[create_single_metric.sh]
|
|
||||||
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].
|
|
||||||
|
|
||||||
[[ml-gs-job1-manage]]
=== Managing Jobs

After you create a job, you can see its status in the **Job Management** tab: +

[role="screenshot"]
image::images/ml-gs-job1-manage1.jpg["Status information for the total-requests job"]

The following information is provided for each job:

Job ID::
The unique identifier for the job.

Description::
The optional description of the job.

Processed records::
The number of records that have been processed by the job.

Memory status::
The status of the mathematical models. When you create jobs by using the APIs or
by using the advanced options in {kib}, you can specify a `model_memory_limit`
(see the sketch after this list). That value is the maximum amount of memory
resources that the mathematical models can use. Once that limit is approached,
data pruning becomes more aggressive. Upon exceeding that limit, new entities
are not modeled. For more information about this setting, see
{ref}/ml-job-resource.html#ml-apilimits[Analysis Limits]. The memory status
field reflects whether you have reached or exceeded the model memory limit. It
can have one of the following values: +
`ok`::: The models stayed below the configured value.
`soft_limit`::: The models used more than 60% of the configured memory limit
and older unused models will be pruned to free up space.
`hard_limit`::: The models used more space than the configured memory limit.
As a result, not all incoming data was processed.

Job state::
The status of the job, which can be one of the following values: +
`open`::: The job is available to receive and process data.
`closed`::: The job finished successfully with its model state persisted.
The job must be opened before it can accept further data.
`closing`::: The job close action is in progress and has not yet completed.
A closing job cannot accept further data.
`failed`::: The job did not finish successfully due to an error.
This situation can occur due to invalid input data.
If the job has irrevocably failed, it must be force closed and then deleted.
If the {dfeed} can be corrected, the job can be closed and then re-opened.

{dfeed-cap} state::
The status of the {dfeed}, which can be one of the following values: +
started::: The {dfeed} is actively receiving data.
stopped::: The {dfeed} is stopped and will not receive data until it is
restarted.

Latest timestamp::
The timestamp of the last processed record.

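As noted in the **Memory status** description above, the model memory limit is
set in the job's analysis limits when you create the job through the APIs. A
minimal sketch follows; the job name and the `512mb` value are purely
illustrative, not recommendations.

[source,js]
----
PUT _xpack/ml/anomaly_detectors/total-requests-with-limit
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [ { "function": "sum", "field_name": "total" } ]
  },
  "analysis_limits": { "model_memory_limit": "512mb" },
  "data_description": { "time_field": "@timestamp" }
}
----
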
If you click the arrow beside the name of the job, you can show or hide
additional information, such as the settings, configuration information, or
messages for the job.

You can also click one of the **Actions** buttons to start the {dfeed}, edit
the job or {dfeed}, and clone or delete the job, for example.

[float]
[[ml-gs-job1-datafeed]]
==== Managing {dfeeds-cap}

A {dfeed} can be started and stopped multiple times throughout its lifecycle.
If you want to retrieve more data from {es} and the {dfeed} is stopped, you must
restart it.

For example, if you did not use the full data when you created the job, you can
now process the remaining data by restarting the {dfeed}:

. In the **Machine Learning** / **Job Management** tab, click the following
button to start the {dfeed}: image:images/ml-start-feed.jpg["Start {dfeed}"]

. Choose a start time and end time. For example,
click **Continue from 2017-04-01 23:59:00** and select **2017-04-30** as the
search end time. Then click **Start**. The date picker defaults to the latest
timestamp of processed data. Be careful not to leave any gaps in the analysis,
otherwise you might miss anomalies. +
+
--
[role="screenshot"]
image::images/ml-gs-job1-datafeed.jpg["Restarting a {dfeed}"]
--

The {dfeed} state changes to `started`, the job state changes to `opened`,
and the number of processed records increases as the new data is analyzed. The
latest timestamp information also increases. For example:
[role="screenshot"]
image::images/ml-gs-job1-manage2.jpg["Job opened and {dfeed} started"]

TIP: If your data is being loaded continuously, you can continue running the job
in real time. For this, start your {dfeed} and select **No end time**.

If you want to stop the {dfeed} at this point, you can click the following
button: image:images/ml-stop-feed.jpg["Stop {dfeed}"]

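You can also start and stop the {dfeed} through the {ml} APIs instead of the
{kib} buttons. A minimal sketch follows; it assumes the {dfeed} that {kib}
created for the `total-requests` job is named `datafeed-total-requests`, which
may differ in your cluster.

[source,js]
----
POST _xpack/ml/datafeeds/datafeed-total-requests/_start
{
  "start": "2017-04-01T23:59:00Z",
  "end": "2017-04-30T00:00:00Z"
}

POST _xpack/ml/datafeeds/datafeed-total-requests/_stop
----
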
Now that you have processed all the data, let's start exploring the job results.

[[ml-gs-job1-analyze]]
=== Exploring Single Metric Job Results

The {xpackml} features analyze the input stream of data, model its behavior,
and perform analysis based on the detectors you defined in your job. When an
event occurs outside of the model, that event is identified as an anomaly.

Result records for each anomaly are stored in `.ml-anomalies-*` indices in {es}.
By default, the results index is labelled `shared`, which corresponds to the
`.ml-anomalies-shared` index.

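Although this tutorial explores the results in {kib}, you can retrieve the same
information through the {ml} results APIs, for example with the get buckets API.
The following is a minimal sketch that asks for buckets of the `total-requests`
job whose anomaly score is at least 80; the score threshold and time range are
only examples.

[source,js]
----
GET _xpack/ml/anomaly_detectors/total-requests/results/buckets
{
  "anomaly_score": 80,
  "start": "2017-03-23T00:00:00Z",
  "end": "2017-04-30T00:00:00Z"
}
----
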
You can use the **Anomaly Explorer** or the **Single Metric Viewer** in {kib} to
view the analysis results.

Anomaly Explorer::
This view contains swim lanes showing the maximum anomaly score over time.
There is an overall swim lane that shows the overall score for the job, and
also swim lanes for each influencer. By selecting a block in a swim lane, the
anomaly details are displayed alongside the original source data (where
applicable).

Single Metric Viewer::
This view contains a chart that represents the actual and expected values over
time. This is only available for jobs that analyze a single time series and
where `model_plot_config` is enabled (a configuration sketch follows this list).
As in the **Anomaly Explorer**, anomalous data points are shown in different
colors depending on their score.

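For reference, this is a minimal sketch of how `model_plot_config` can be
enabled when a job is created through the APIs. Single metric jobs created in
{kib} may already enable it for you, so treat the example as illustrative only.

[source,js]
----
PUT _xpack/ml/anomaly_detectors/total-requests-with-model-plot
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [ { "function": "sum", "field_name": "total" } ]
  },
  "model_plot_config": { "enabled": true },
  "data_description": { "time_field": "@timestamp" }
}
----
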
By default when you view the results for a single metric job, the
**Single Metric Viewer** opens:
[role="screenshot"]
image::images/ml-gs-job1-analysis.jpg["Single Metric Viewer for total-requests job"]

The blue line in the chart represents the actual data values. The shaded blue
area represents the bounds for the expected values. The area between the upper
and lower bounds covers the most likely values for the model. If a value is
outside of this area, it can be said to be anomalous.

If you slide the time selector from the beginning of the data to the end of the
data, you can see how the model improves as it processes more data. At the
beginning, the expected range of values is pretty broad and the model is not
capturing the periodicity in the data. But it quickly learns and begins to
reflect the daily variation.

Any data points outside the range that was predicted by the model are marked
as anomalies. When you have high volumes of real-life data, many anomalies
might be found. These vary in probability from very likely to highly unlikely,
that is to say, from not particularly anomalous to highly anomalous. There
can be none, one, two, tens, or sometimes hundreds of anomalies found within
each bucket, and many thousands found per job. In order to provide
a sensible view of the results, an _anomaly score_ is calculated for each bucket
time interval. The anomaly score is a value from 0 to 100, which indicates
the significance of the observed anomaly compared to previously seen anomalies.
The highly anomalous values are shown in red and the low scored values are
indicated in blue. An interval with a high anomaly score is significant and
requires investigation.

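If you want to pull the individual anomaly records behind these scores
programmatically, the get records API provides a similar view. A minimal sketch
that returns records for the `total-requests` job with a record score of at
least 75 (the threshold is an arbitrary example):

[source,js]
----
GET _xpack/ml/anomaly_detectors/total-requests/results/records
{
  "record_score": 75,
  "sort": "record_score",
  "desc": true
}
----
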
Slide the time selector to a section of the time series that contains a red
anomaly data point. If you hover over the point, you can see more information
about that data point. You can also see details in the **Anomalies** section
of the viewer. For example:
[role="screenshot"]
image::images/ml-gs-job1-anomalies.jpg["Single Metric Viewer Anomalies for total-requests job"]

For each anomaly you can see key details such as the time, the actual and
expected ("typical") values, and their probability.

By default, the table contains all anomalies that have a severity of "warning"
or higher in the selected section of the timeline. If you are only interested in
critical anomalies, for example, you can change the severity threshold for this
table.

The anomalies table also automatically calculates an interval for the data in
the table. If the time difference between the earliest and latest records in the
table is less than two days, the data is aggregated by hour to show the details
of the highest severity anomaly for each detector. Otherwise, it is
aggregated by day. You can change the interval for the table, for example, to
show all anomalies.

You can see the same information in a different format by using the
**Anomaly Explorer**:
[role="screenshot"]
image::images/ml-gs-job1-explorer.jpg["Anomaly Explorer for total-requests job"]

Click one of the red sections in the swim lane to see details about the anomalies
that occurred in that time interval. For example:
[role="screenshot"]
image::images/ml-gs-job1-explorer-anomaly.jpg["Anomaly Explorer details for total-requests job"]

After you have identified anomalies, often the next step is to try to determine
the context of those situations. For example, are there other factors that are
contributing to the problem? Are the anomalies confined to particular
applications or servers? You can begin to troubleshoot these situations by
layering additional jobs or creating multi-metric jobs.

include::getting-started-multi.asciidoc[]
include::getting-started-next.asciidoc[]
BIN docs/en/ml/images/ml-create-job.jpg (new file): After 187 KiB, Before 64 KiB
BIN docs/en/ml/images/ml-data-dates.jpg (new file): After 17 KiB
BIN docs/en/ml/images/ml-data-keywords.jpg (new file): After 17 KiB
BIN docs/en/ml/images/ml-data-metrics.jpg (new file): After 350 KiB
BIN docs/en/ml/images/ml-data-topmetrics.jpg (new file): After 99 KiB
BIN docs/en/ml/images/ml-data-visualizer.jpg (new file): After 75 KiB, Before 23 KiB
(Additional binary screenshots under docs/en/ml/images were modified or removed; the diff viewer showed only their before/after sizes, without file names.)
@ -9,6 +9,11 @@ Machine Learning::
 * The `max_running_jobs` node property is removed in this release. Use the
 `xpack.ml.max_open_jobs` setting instead. For more information, see <<ml-settings>>.

+Security::
+* The fields returned as part of the mappings section by get index, get
+mappings, get field mappings and field capabilities API are now only the
+ones that the user is authorized to access in case field level security is enabled.
+
 See also:

 * <<release-notes-7.0.0-alpha1,{es} 7.0.0-alpha1 Release Notes>>

@ -26,6 +26,11 @@ Machine Learning::
 * The `max_running_jobs` node property is removed in this release. Use the
 `xpack.ml.max_open_jobs` setting instead. For more information, <<ml-settings>>.

+Security::
+* The fields returned as part of the mappings section by get index, get
+mappings, get field mappings and field capabilities API are now only the ones
+that the user is authorized to access in case field level security is enabled.
+
 See also:

 * <<breaking-changes-7.0,{es} Breaking Changes>>
@ -118,6 +118,18 @@ are five possible modes an action can be associated with:
 You must have `manage_watcher` cluster privileges to use this API. For more
 information, see {xpack-ref}/security-privileges.html[Security Privileges].

+[float]
+==== Security Integration
+
+When {security} is enabled on your Elasticsearch cluster, then watches will be
+executed with the privileges of the user that stored the watches. If your user
+is allowed to read index `a`, but not index `b`, then the exact same set of
+rules will apply during execution of a watch.
+
+When using the execute watch API, the authorization data of the user that
+called the API will be used as a base, instead of the information about the
+user who stored the watch.
+
 [float]
 ==== Examples

@ -74,6 +74,14 @@ A watch has the following fields:
 You must have `manage_watcher` cluster privileges to use this API. For more
 information, see {xpack-ref}/security-privileges.html[Security Privileges].

+[float]
+==== Security Integration
+
+When {security} is enabled, your watch will only be able to index or search on
+indices for which the user that stored the watch has privileges. If the user is
+able to read index `a`, but not index `b`, the same will apply when the watch
+is executed.
+
 [float]
 ==== Examples

@ -1,6 +1,8 @@
 [[java-clients]]
 === Java Client and Security

+deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]
+
 {security} supports the Java http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html[transport client] for Elasticsearch.
 The transport client uses the same transport protocol that the cluster nodes use
 for inter-node communication. It is very efficient as it does not have to marshall
@ -38,7 +38,10 @@ IMPORTANT: If you want to use {ml} features in your cluster, you must have
 default behavior.

 `xpack.ml.max_open_jobs`::
-The maximum number of jobs that can run on a node. Defaults to `10`.
+The maximum number of jobs that can run on a node. Defaults to `20`.
+The maximum number of jobs is also constrained by memory usage, so fewer
+jobs than specified by this setting will run on a node if the estimated
+memory use of the jobs would be higher than allowed.

 `xpack.ml.max_machine_memory_percent`::
 The maximum percentage of the machine's memory that {ml} may use for running

@ -72,15 +72,22 @@ Settings API.

 `xpack.monitoring.history.duration`::

-Sets the retention duration beyond which the indices created by a Monitoring exporter will
-be automatically deleted. Defaults to `7d` (7 days).
-+
-This setting has a minimum value of `1d` (1 day) to ensure that something is being monitored,
-and it cannot be disabled.
+Sets the retention duration beyond which the indices created by a Monitoring
+exporter are automatically deleted. Defaults to `7d` (7 days).
 +
+--
+This setting has a minimum value of `1d` (1 day) to ensure that something is
+being monitored, and it cannot be disabled.

 IMPORTANT: This setting currently only impacts `local`-type exporters. Indices created using
 the `http` exporter will not be deleted automatically.

+If both {monitoring} and {watcher} are enabled, you can use this setting to
+affect the {watcher} cleaner service too. For more information, see the
+`xpack.watcher.history.cleaner_service.enabled` setting in the
+<<notification-settings>>.
+--

 `xpack.monitoring.exporters`::

 Configures where the agent stores monitoring data. By default, the agent uses a

@ -35,9 +35,13 @@ required. For more information, see
 {xpack-ref}/encrypting-data.html[Encrypting sensitive data in {watcher}].

 `xpack.watcher.history.cleaner_service.enabled`::
-Set to `false` (default) to disable the cleaner service, which removes previous
-versions of {watcher} indices (for example, .watcher-history*) when it
-determines that they are old.
+Set to `false` (default) to disable the cleaner service. If this setting is
+`true`, the `xpack.monitoring.enabled` setting must also be set to `true`. The
+cleaner service removes previous versions of {watcher} indices (for example,
+`.watcher-history*`) when it determines that they are old. The duration of
+{watcher} indices is determined by the `xpack.monitoring.history.duration`
+setting, which defaults to 7 days. For more information about that setting,
+see <<monitoring-settings>>.

 `xpack.http.proxy.host`::
 Specifies the address of the proxy server to use to connect to HTTP services.

@ -33,7 +33,7 @@ internet access:
 .. Manually download the {xpack} zip file:
 https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip[
 +https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip+]
-(https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip.sha1[sha1])
+(https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip.sha512[sha512])
 +
 --
 NOTE: The plugins for {es}, {kib}, and Logstash are included in the same zip
@ -2,14 +2,12 @@
 [[setup-xpack-client]]
 == Configuring {xpack} Java Clients

+deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]
+
 If you want to use a Java {javaclient}/transport-client.html[transport client] with a
 cluster where {xpack} is installed, then you must download and configure the
 {xpack} transport client.

-WARNING: The `TransportClient` is aimed to be replaced by the Java High Level REST
-Client, which executes HTTP requests instead of serialized Java requests. The
-`TransportClient` will be deprecated in upcoming versions of {es}.
-
 . Add the {xpack} transport JAR file to your *CLASSPATH*. You can download the {xpack}
 distribution and extract the JAR file manually or you can get it from the
 https://artifacts.elastic.co/maven/org/elasticsearch/client/x-pack-transport/{version}/x-pack-transport-{version}.jar[Elasticsearc Maven repository].

@ -45,7 +45,7 @@ payload as well as an array of contexts to the action.
 "attach_payload" : true,
 "client" : "/foo/bar/{{ctx.watch_id}}",
 "client_url" : "http://www.example.org/",
-"context" : [
+"contexts" : [
 {
 "type": "link",
 "href": "http://acme.pagerduty.com"

@ -1,6 +1,8 @@
 [[api-java]]
 == Java API

+deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]
+
 {xpack} provides a Java client called `WatcherClient` that adds native Java
 support for the {watcher}.

@ -18,3 +18,11 @@ from the page without saving your changes they will be lost without warning.
 Make sure to save your changes before leaving the page.

 image::watcher-ui-edit-watch.png[]
+
+[float]
+=== Security Integration
+
+When {security} is enabled, a watch stores information about what the user who
+stored the watch is allowed to execute **at that time**. This means, if those
+permissions change over time, the watch will still be able to execute with the
+permissions that existed when the watch was created.
@ -7,7 +7,6 @@ package org.elasticsearch.smoketest;

 import org.apache.http.HttpHost;
 import com.carrotsearch.randomizedtesting.annotations.Name;
-import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
 import org.elasticsearch.Version;
 import org.elasticsearch.client.RestClient;
 import org.elasticsearch.common.settings.SecureString;

@ -17,9 +16,8 @@ import org.elasticsearch.test.rest.yaml.ClientYamlDocsTestClient;
 import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;
 import org.elasticsearch.test.rest.yaml.ClientYamlTestClient;
 import org.elasticsearch.test.rest.yaml.ClientYamlTestResponse;
-import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;
 import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestSpec;
-import org.elasticsearch.xpack.ml.integration.MlRestTestStateCleaner;
+import org.elasticsearch.xpack.test.rest.XPackRestIT;
 import org.junit.After;

 import java.io.IOException;

@ -32,18 +30,13 @@ import static java.util.Collections.singletonMap;
 import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
 import static org.hamcrest.Matchers.is;

-public class XDocsClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {
+public class XDocsClientYamlTestSuiteIT extends XPackRestIT {
     private static final String USER_TOKEN = basicAuthHeaderValue("test_admin", new SecureString("x-pack-test-password".toCharArray()));

     public XDocsClientYamlTestSuiteIT(@Name("yaml") ClientYamlTestCandidate testCandidate) {
         super(testCandidate);
     }

-    @ParametersFactory
-    public static Iterable<Object[]> parameters() throws Exception {
-        return ESClientYamlSuiteTestCase.createParameters();
-    }
-
     @Override
     protected void afterIfFailed(List<Throwable> errors) {
         super.afterIfFailed(errors);

@ -93,11 +86,23 @@ public class XDocsClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {
         }
     }

-    private boolean isWatcherTest() {
+    @Override
+    protected boolean isWatcherTest() {
         String testName = getTestName();
         return testName != null && testName.contains("watcher");
     }

+    @Override
+    protected boolean isMonitoringTest() {
+        return false;
+    }
+
+    @Override
+    protected boolean isMachineLearningTest() {
+        String testName = getTestName();
+        return testName != null && testName.contains("ml/");
+    }
+
     /**
      * Deletes users after every test just in case any test adds any.
      */

@ -117,11 +122,6 @@ public class XDocsClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {
         }
     }

-    @After
-    public void cleanMlState() throws Exception {
-        new MlRestTestStateCleaner(logger, adminClient(), this).clearMlMetadata();
-    }
-
     @Override
     protected boolean randomizeContentType() {
         return false;
@ -47,6 +47,7 @@ import org.elasticsearch.plugins.ActionPlugin;
 import org.elasticsearch.plugins.ClusterPlugin;
 import org.elasticsearch.plugins.DiscoveryPlugin;
 import org.elasticsearch.plugins.IngestPlugin;
+import org.elasticsearch.plugins.MapperPlugin;
 import org.elasticsearch.plugins.NetworkPlugin;
 import org.elasticsearch.plugins.Plugin;
 import org.elasticsearch.plugins.ScriptPlugin;

@ -108,12 +109,15 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.function.Predicate;
 import java.util.function.Supplier;
 import java.util.function.UnaryOperator;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;

-public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin, DiscoveryPlugin {
+public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin,
+        DiscoveryPlugin, MapperPlugin {

     public static final String NAME = "x-pack";

@ -565,4 +569,9 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
     public BiConsumer<DiscoveryNode, ClusterState> getJoinValidator() {
         return security.getJoinValidator();
     }
+
+    @Override
+    public Function<String, Predicate<String>> getFieldFilter() {
+        return security.getFieldFilter();
+    }
 }
@ -0,0 +1,34 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.logstash;

import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.security.SecurityExtension;
import org.elasticsearch.xpack.security.support.MetadataUtils;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class LogstashSecurityExtension implements SecurityExtension {
    @Override
    public Map<String, RoleDescriptor> getReservedRoles() {
        Map<String, RoleDescriptor> roles = new HashMap<>();

        roles.put("logstash_admin",
                new RoleDescriptor("logstash_admin",
                        null,
                        new RoleDescriptor.IndicesPrivileges[]{
                                RoleDescriptor.IndicesPrivileges.builder().indices(".logstash*")
                                        .privileges("create", "delete", "index", "manage", "read")
                                        .build()
                        },
                        null,
                        MetadataUtils.DEFAULT_RESERVED_METADATA));

        return Collections.unmodifiableMap(roles);
    }
}
@ -0,0 +1,47 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.ml;

import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.security.SecurityExtension;
import org.elasticsearch.xpack.security.support.MetadataUtils;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MachineLearningSecurityExtension implements SecurityExtension {
    @Override
    public Map<String, RoleDescriptor> getReservedRoles() {
        Map<String, RoleDescriptor> roles = new HashMap<>();

        roles.put("machine_learning_user",
                new RoleDescriptor("machine_learning_user",
                        new String[] { "monitor_ml" },
                        new RoleDescriptor.IndicesPrivileges[] {
                                RoleDescriptor.IndicesPrivileges.builder()
                                        .indices(".ml-anomalies*", ".ml-notifications")
                                        .privileges("view_index_metadata", "read")
                                        .build()
                        },
                        null,
                        MetadataUtils.DEFAULT_RESERVED_METADATA));

        roles.put("machine_learning_admin",
                new RoleDescriptor("machine_learning_admin",
                        new String[] { "manage_ml" },
                        new RoleDescriptor.IndicesPrivileges[] {
                                RoleDescriptor.IndicesPrivileges.builder()
                                        .indices(".ml-*")
                                        .privileges("view_index_metadata", "read")
                                        .build()
                        },
                        null,
                        MetadataUtils.DEFAULT_RESERVED_METADATA));

        return Collections.unmodifiableMap(roles);
    }
}
@ -26,6 +26,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.transport.TransportService;
+import org.elasticsearch.xpack.ml.job.JobManager;
 import org.elasticsearch.xpack.ml.job.config.Job;
 import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;
 import org.elasticsearch.xpack.ml.job.process.autodetect.params.ForecastParams;

@ -34,6 +35,8 @@ import org.elasticsearch.xpack.ml.job.results.Forecast;
 import java.io.IOException;
 import java.util.Objects;

+import static org.elasticsearch.xpack.ml.action.ForecastJobAction.Request.DURATION;
+
 public class ForecastJobAction extends Action<ForecastJobAction.Request, ForecastJobAction.Response, ForecastJobAction.RequestBuilder> {

     public static final ForecastJobAction INSTANCE = new ForecastJobAction();

@ -244,13 +247,16 @@ public class ForecastJobAction extends Action<ForecastJobAction.Request, Forecas

     public static class TransportAction extends TransportJobTaskAction<Request, Response> {

+        private final JobManager jobManager;
+
         @Inject
         public TransportAction(Settings settings, TransportService transportService, ThreadPool threadPool, ClusterService clusterService,
                                ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
-                               AutodetectProcessManager processManager) {
+                               JobManager jobManager, AutodetectProcessManager processManager) {
             super(settings, ForecastJobAction.NAME, threadPool, clusterService, transportService, actionFilters,
                     indexNameExpressionResolver, ForecastJobAction.Request::new, ForecastJobAction.Response::new, ThreadPool.Names.SAME,
                     processManager);
+            this.jobManager = jobManager;
             // ThreadPool.Names.SAME, because operations is executed by autodetect worker thread
         }

@ -265,6 +271,16 @@ public class ForecastJobAction extends Action<ForecastJobAction.Request, Forecas
         protected void taskOperation(Request request, OpenJobAction.JobTask task, ActionListener<Response> listener) {
             ForecastParams.Builder paramsBuilder = ForecastParams.builder();
             if (request.getDuration() != null) {
+                TimeValue duration = request.getDuration();
+                TimeValue bucketSpan = jobManager.getJobOrThrowIfUnknown(task.getJobId()).getAnalysisConfig().getBucketSpan();
+
+                if (duration.compareTo(bucketSpan) < 0) {
+                    throw new IllegalArgumentException(
+                            "[" + DURATION.getPreferredName()
+                                    + "] must be greater or equal to the bucket span: ["
+                                    + duration.getStringRep() + "/" + bucketSpan.getStringRep() + "]");
+                }
+
                 paramsBuilder.duration(request.getDuration());
             }
             if (request.getExpiresIn() != null) {
@ -49,6 +49,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.Index;
 import org.elasticsearch.license.LicenseUtils;
 import org.elasticsearch.license.XPackLicenseState;
+import org.elasticsearch.plugins.MapperPlugin;
 import org.elasticsearch.rest.RestStatus;
 import org.elasticsearch.tasks.Task;
 import org.elasticsearch.tasks.TaskId;

@ -61,8 +62,10 @@ import org.elasticsearch.xpack.ml.MlMetadata;
 import org.elasticsearch.xpack.ml.job.config.Job;
 import org.elasticsearch.xpack.ml.job.config.JobState;
 import org.elasticsearch.xpack.ml.job.config.JobTaskStatus;
+import org.elasticsearch.xpack.ml.job.config.JobUpdate;
 import org.elasticsearch.xpack.ml.job.persistence.AnomalyDetectorsIndex;
 import org.elasticsearch.xpack.ml.job.persistence.ElasticsearchMappings;
+import org.elasticsearch.xpack.ml.job.persistence.JobProvider;
 import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;
 import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
 import org.elasticsearch.xpack.persistent.AllocatedPersistentTask;

@ -390,15 +393,17 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
         private final XPackLicenseState licenseState;
         private final PersistentTasksService persistentTasksService;
         private final Client client;
+        private final JobProvider jobProvider;

         @Inject
         public TransportAction(Settings settings, TransportService transportService, ThreadPool threadPool, XPackLicenseState licenseState,
                                ClusterService clusterService, PersistentTasksService persistentTasksService, ActionFilters actionFilters,
-                               IndexNameExpressionResolver indexNameExpressionResolver, Client client) {
+                               IndexNameExpressionResolver indexNameExpressionResolver, Client client, JobProvider jobProvider) {
             super(settings, NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, Request::new);
             this.licenseState = licenseState;
             this.persistentTasksService = persistentTasksService;
             this.client = client;
+            this.jobProvider = jobProvider;
         }

         @Override

@ -422,10 +427,10 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
         }

         @Override
-        protected void masterOperation(Request request, ClusterState state, ActionListener<Response> listener) throws Exception {
+        protected void masterOperation(Request request, ClusterState state, ActionListener<Response> listener) {
             JobParams jobParams = request.getJobParams();
             if (licenseState.isMachineLearningAllowed()) {
-                // Step 4. Wait for job to be started and respond
+                // Step 5. Wait for job to be started and respond
                 ActionListener<PersistentTask<JobParams>> finalListener = new ActionListener<PersistentTask<JobParams>>() {
                     @Override
                     public void onResponse(PersistentTask<JobParams> task) {

@ -442,11 +447,42 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
                     }
                 };

-                // Step 3. Start job task
-                ActionListener<Boolean> missingMappingsListener = ActionListener.wrap(
+                // Step 4. Start job task
+                ActionListener<PutJobAction.Response> establishedMemoryUpdateListener = ActionListener.wrap(
                         response -> persistentTasksService.startPersistentTask(MlMetadata.jobTaskId(jobParams.jobId),
-                                TASK_NAME, jobParams, finalListener)
-                        , listener::onFailure
+                                TASK_NAME, jobParams, finalListener),
+                        listener::onFailure
+                );
+
+                // Step 3. Update established model memory for pre-6.1 jobs that haven't had it set
+                ActionListener<Boolean> missingMappingsListener = ActionListener.wrap(
+                        response -> {
+                            MlMetadata mlMetadata = clusterService.state().getMetaData().custom(MlMetadata.TYPE);
+                            Job job = mlMetadata.getJobs().get(jobParams.getJobId());
+                            if (job != null) {
+                                Version jobVersion = job.getJobVersion();
+                                Long jobEstablishedModelMemory = job.getEstablishedModelMemory();
+                                if ((jobVersion == null || jobVersion.before(Version.V_6_1_0))
+                                        && (jobEstablishedModelMemory == null || jobEstablishedModelMemory == 0)) {
+                                    jobProvider.getEstablishedMemoryUsage(job.getId(), null, null, establishedModelMemory -> {
+                                        if (establishedModelMemory != null && establishedModelMemory > 0) {
+                                            JobUpdate update = new JobUpdate.Builder(job.getId())
+                                                    .setEstablishedModelMemory(establishedModelMemory).build();
+                                            UpdateJobAction.Request updateRequest = new UpdateJobAction.Request(job.getId(), update);
+
+                                            executeAsyncWithOrigin(client, ML_ORIGIN, UpdateJobAction.INSTANCE, updateRequest,
+                                                    establishedMemoryUpdateListener);
+                                        } else {
+                                            establishedMemoryUpdateListener.onResponse(null);
+                                        }
+                                    }, listener::onFailure);
+                                } else {
+                                    establishedMemoryUpdateListener.onResponse(null);
+                                }
+                            } else {
+                                establishedMemoryUpdateListener.onResponse(null);
+                            }
+                        }, listener::onFailure
                 );

                 // Step 2. Try adding state doc mapping

@ -502,7 +538,13 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
             String[] concreteIndices = aliasOrIndex.getIndices().stream().map(IndexMetaData::getIndex).map(Index::getName)
                     .toArray(String[]::new);

-            String[] indicesThatRequireAnUpdate = mappingRequiresUpdate(state, concreteIndices, Version.CURRENT, logger);
+            String[] indicesThatRequireAnUpdate;
+            try {
+                indicesThatRequireAnUpdate = mappingRequiresUpdate(state, concreteIndices, Version.CURRENT, logger);
+            } catch (IOException e) {
+                listener.onFailure(e);
+                return;
+            }

             if (indicesThatRequireAnUpdate.length > 0) {
                 try (XContentBuilder mapping = mappingSupplier.get()) {

@ -707,7 +749,7 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
                 continue;
             }

-            if (nodeSupportsJobVersion(node.getVersion(), job.getJobVersion()) == false) {
+            if (nodeSupportsJobVersion(node.getVersion()) == false) {
                 String reason = "Not opening job [" + jobId + "] on node [" + node
                         + "], because this node does not support jobs of version [" + job.getJobVersion() + "]";
                 logger.trace(reason);

@ -849,15 +891,16 @@ public class OpenJobAction extends Action<OpenJobAction.Request, OpenJobAction.R
         return unavailableIndices;
     }

-    static boolean nodeSupportsJobVersion(Version nodeVersion, Version jobVersion) {
+    private static boolean nodeSupportsJobVersion(Version nodeVersion) {
         return nodeVersion.onOrAfter(Version.V_5_5_0);
     }

-    static String[] mappingRequiresUpdate(ClusterState state, String[] concreteIndices, Version minVersion, Logger logger) {
+    static String[] mappingRequiresUpdate(ClusterState state, String[] concreteIndices, Version minVersion,
+                                          Logger logger) throws IOException {
         List<String> indicesToUpdate = new ArrayList<>();

         ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> currentMapping = state.metaData().findMappings(concreteIndices,
-                new String[] { ElasticsearchMappings.DOC_TYPE });
+                new String[] { ElasticsearchMappings.DOC_TYPE }, MapperPlugin.NOOP_FIELD_FILTER);

         for (String index : concreteIndices) {
             ImmutableOpenMap<String, MappingMetaData> innerMap = currentMapping.get(index);
@ -54,6 +54,11 @@ import java.util.concurrent.TimeUnit;
  */
 public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements ToXContentObject {

+    private static final int SECONDS_IN_MINUTE = 60;
+    private static final int TWO_MINS_SECONDS = 2 * SECONDS_IN_MINUTE;
+    private static final int TWENTY_MINS_SECONDS = 20 * SECONDS_IN_MINUTE;
+    private static final int HALF_DAY_SECONDS = 12 * 60 * SECONDS_IN_MINUTE;
+
     // Used for QueryPage
     public static final ParseField RESULTS_FIELD = new ParseField("datafeeds");

@ -350,6 +355,53 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
         return Strings.toString(this);
     }

+    /**
+     * Calculates a sensible default frequency for a given bucket span.
+     * <p>
+     * The default depends on the bucket span:
+     * <ul>
+     * <li> <= 2 mins -> 1 min</li>
+     * <li> <= 20 mins -> bucket span / 2</li>
+     * <li> <= 12 hours -> 10 mins</li>
+     * <li> > 12 hours -> 1 hour</li>
+     * </ul>
+     *
+     * If the datafeed has aggregations, the default frequency is the
+     * closest multiple of the histogram interval based on the rules above.
+     *
+     * @param bucketSpan the bucket span
+     * @return the default frequency
+     */
+    public TimeValue defaultFrequency(TimeValue bucketSpan) {
+        TimeValue defaultFrequency = defaultFrequencyTarget(bucketSpan);
+        if (hasAggregations()) {
+            long histogramIntervalMillis = getHistogramIntervalMillis();
+            long targetFrequencyMillis = defaultFrequency.millis();
+            long defaultFrequencyMillis = histogramIntervalMillis > targetFrequencyMillis ? histogramIntervalMillis
+                    : (targetFrequencyMillis / histogramIntervalMillis) * histogramIntervalMillis;
+            defaultFrequency = TimeValue.timeValueMillis(defaultFrequencyMillis);
+        }
+        return defaultFrequency;
+    }
+
+    private TimeValue defaultFrequencyTarget(TimeValue bucketSpan) {
+        long bucketSpanSeconds = bucketSpan.seconds();
+        if (bucketSpanSeconds <= 0) {
+            throw new IllegalArgumentException("Bucket span has to be > 0");
+        }
+
+        if (bucketSpanSeconds <= TWO_MINS_SECONDS) {
+            return TimeValue.timeValueSeconds(SECONDS_IN_MINUTE);
+        }
+        if (bucketSpanSeconds <= TWENTY_MINS_SECONDS) {
+            return TimeValue.timeValueSeconds(bucketSpanSeconds / 2);
+        }
+        if (bucketSpanSeconds <= HALF_DAY_SECONDS) {
+            return TimeValue.timeValueMinutes(10);
+        }
+        return TimeValue.timeValueHours(1);
+    }
+
     public static class Builder {

         private static final int DEFAULT_SCROLL_SIZE = 1000;
@ -11,6 +11,7 @@ import org.elasticsearch.client.Client;
 import org.elasticsearch.common.bytes.BytesArray;
 import org.elasticsearch.common.io.Streams;
 import org.elasticsearch.common.logging.Loggers;
+import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.index.mapper.DateFieldMapper;

@ -96,8 +97,11 @@ class DatafeedJob {

         String msg = Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_STARTED_FROM_TO,
                 DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.printer().print(lookbackStartTimeMs),
-                endTime == null ? "real-time" : DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.printer().print(lookbackEnd));
+                endTime == null ? "real-time" : DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.printer().print(lookbackEnd),
+                TimeValue.timeValueMillis(frequencyMs).getStringRep());
         auditor.info(jobId, msg);
+        LOGGER.info("[{}] {}", jobId, msg);
+
         FlushJobAction.Request request = new FlushJobAction.Request(jobId);
         request.setCalcInterim(true);

@ -114,7 +118,7 @@ class DatafeedJob {
             }
         }
         if (!isIsolated) {
-            LOGGER.debug("Lookback finished after being stopped");
+            LOGGER.debug("[{}] Lookback finished after being stopped", jobId);
         }
         return null;
     }

@ -129,7 +133,7 @@ class DatafeedJob {
             FlushJobAction.Request request = new FlushJobAction.Request(jobId);
             request.setSkipTime(String.valueOf(startTime));
             FlushJobAction.Response flushResponse = flushJob(request);
-            LOGGER.info("Skipped to time [" + flushResponse.getLastFinalizedBucketEnd().getTime() + "]");
+            LOGGER.info("[{}] Skipped to time [{}]", jobId, flushResponse.getLastFinalizedBucketEnd().getTime());
             return flushResponse.getLastFinalizedBucketEnd().getTime();
         }
         return startTime;
@@ -20,7 +20,6 @@ import org.elasticsearch.xpack.ml.job.results.Bucket;
 import org.elasticsearch.xpack.ml.job.results.Result;
 import org.elasticsearch.xpack.ml.notifications.Auditor;
 
-import java.time.Duration;
 import java.util.Collections;
 import java.util.Objects;
 import java.util.function.Consumer;
@@ -47,9 +46,9 @@ public class DatafeedJobBuilder {
 
         // Step 5. Build datafeed job object
         Consumer<Context> contextHanlder = context -> {
-            Duration frequency = getFrequencyOrDefault(datafeed, job);
-            Duration queryDelay = Duration.ofMillis(datafeed.getQueryDelay().millis());
-            DatafeedJob datafeedJob = new DatafeedJob(job.getId(), buildDataDescription(job), frequency.toMillis(), queryDelay.toMillis(),
+            TimeValue frequency = getFrequencyOrDefault(datafeed, job);
+            TimeValue queryDelay = datafeed.getQueryDelay();
+            DatafeedJob datafeedJob = new DatafeedJob(job.getId(), buildDataDescription(job), frequency.millis(), queryDelay.millis(),
                     context.dataExtractorFactory, client, auditor, currentTimeSupplier,
                     context.latestFinalBucketEndMs, context.latestRecordTimeMs);
             listener.onResponse(datafeedJob);
@@ -100,10 +99,13 @@ public class DatafeedJobBuilder {
         });
     }
 
-    private static Duration getFrequencyOrDefault(DatafeedConfig datafeed, Job job) {
+    private static TimeValue getFrequencyOrDefault(DatafeedConfig datafeed, Job job) {
         TimeValue frequency = datafeed.getFrequency();
+        if (frequency == null) {
             TimeValue bucketSpan = job.getAnalysisConfig().getBucketSpan();
-        return frequency == null ? DefaultFrequency.ofBucketSpan(bucketSpan.seconds()) : Duration.ofSeconds(frequency.seconds());
+            return datafeed.defaultFrequency(bucketSpan);
+        }
+        return frequency;
     }
 
     private static DataDescription buildDataDescription(Job job) {
@@ -29,6 +29,7 @@ public final class DatafeedJobValidator {
         if (datafeedConfig.hasAggregations()) {
             checkSummaryCountFieldNameIsSet(analysisConfig);
             checkValidHistogramInterval(datafeedConfig, analysisConfig);
+            checkFrequencyIsMultipleOfHistogramInterval(datafeedConfig);
         }
     }
 
@@ -55,6 +56,18 @@ public final class DatafeedJobValidator {
                     TimeValue.timeValueMillis(histogramIntervalMillis).getStringRep(),
                     TimeValue.timeValueMillis(bucketSpanMillis).getStringRep()));
         }
     }
+
+    private static void checkFrequencyIsMultipleOfHistogramInterval(DatafeedConfig datafeedConfig) {
+        TimeValue frequency = datafeedConfig.getFrequency();
+        if (frequency != null) {
+            long histogramIntervalMillis = datafeedConfig.getHistogramIntervalMillis();
+            long frequencyMillis = frequency.millis();
+            if (frequencyMillis % histogramIntervalMillis != 0) {
+                throw ExceptionsHelper.badRequestException(Messages.getMessage(
+                        Messages.DATAFEED_FREQUENCY_MUST_BE_MULTIPLE_OF_AGGREGATIONS_INTERVAL,
+                        frequency, TimeValue.timeValueMillis(histogramIntervalMillis).getStringRep()));
+            }
+        }
+    }
 }
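
Illustrative sketch, not part of this change: the new check rejects a datafeed whose explicit frequency is not an exact multiple of its aggregation's date_histogram interval. The values below are assumptions chosen for the example.

    // Assumed values, for illustration only.
    long histogramIntervalMillis = 60_000L;  // hypothetical 1-minute date_histogram interval
    long frequencyMillis = 90_000L;          // hypothetical 90-second datafeed frequency
    // 90s % 60s != 0, so checkFrequencyIsMultipleOfHistogramInterval would throw a bad_request
    // exception built from DATAFEED_FREQUENCY_MUST_BE_MULTIPLE_OF_AGGREGATIONS_INTERVAL; 120s would pass.
    boolean isValid = frequencyMillis % histogramIntervalMillis == 0;  // false
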
@@ -5,6 +5,7 @@
  */
 package org.elasticsearch.xpack.ml.datafeed;
 
+import org.elasticsearch.ElasticsearchStatusException;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.client.Client;
 import org.elasticsearch.cluster.ClusterChangedEvent;
@@ -16,7 +17,7 @@ import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.util.concurrent.AbstractRunnable;
 import org.elasticsearch.common.util.concurrent.FutureUtils;
-import org.elasticsearch.index.mapper.DateFieldMapper;
+import org.elasticsearch.rest.RestStatus;
 import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.xpack.ml.MachineLearning;
 import org.elasticsearch.xpack.ml.MlMetadata;
@@ -49,8 +50,6 @@ import static org.elasticsearch.xpack.persistent.PersistentTasksService.WaitForP
 
 public class DatafeedManager extends AbstractComponent {
 
-    private static final String INF_SYMBOL = "forever";
-
     private final Client client;
     private final ClusterService clusterService;
     private final PersistentTasksService persistentTasksService;
@@ -157,9 +156,6 @@ public class DatafeedManager extends AbstractComponent {
     // otherwise if a stop datafeed call is made immediately after the start datafeed call we could cancel
     // the DatafeedTask without stopping datafeed, which causes the datafeed to keep on running.
     private void innerRun(Holder holder, long startTime, Long endTime) {
-        logger.info("Starting datafeed [{}] for job [{}] in [{}, {})", holder.datafeed.getId(), holder.datafeed.getJobId(),
-                DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.printer().print(startTime),
-                endTime == null ? INF_SYMBOL : DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.printer().print(endTime));
         holder.future = threadPool.executor(MachineLearning.DATAFEED_THREAD_POOL_NAME).submit(new AbstractRunnable() {
 
             @Override
@@ -429,8 +425,16 @@ public class DatafeedManager extends AbstractComponent {
 
                 @Override
                 public void onFailure(Exception e) {
+                    // Given that the UI force-deletes the datafeed and then force-deletes the job, it's
+                    // quite likely that the auto-close here will get interrupted by a process kill request,
+                    // and it's misleading/worrying to log an error in this case.
+                    if (e instanceof ElasticsearchStatusException &&
+                            ((ElasticsearchStatusException) e).status() == RestStatus.CONFLICT) {
+                        logger.debug("[{}] {}", getJobId(), e.getMessage());
+                    } else {
                         logger.error("[" + getJobId() + "] failed to auto-close job", e);
+                    }
                 }
             });
         }
@@ -1,55 +0,0 @@
-/*
- * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
- * or more contributor license agreements. Licensed under the Elastic License;
- * you may not use this file except in compliance with the Elastic License.
- */
-package org.elasticsearch.xpack.ml.datafeed;
-
-import java.time.Duration;
-
-/**
- * Factory methods for a sensible default for the datafeed frequency
- */
-public final class DefaultFrequency {
-    private static final int SECONDS_IN_MINUTE = 60;
-    private static final int TWO_MINS_SECONDS = 2 * SECONDS_IN_MINUTE;
-    private static final int TWENTY_MINS_SECONDS = 20 * SECONDS_IN_MINUTE;
-    private static final int HALF_DAY_SECONDS = 12 * 60 * SECONDS_IN_MINUTE;
-    private static final Duration TEN_MINUTES = Duration.ofMinutes(10);
-    private static final Duration ONE_HOUR = Duration.ofHours(1);
-
-    private DefaultFrequency() {
-        // Do nothing
-    }
-
-    /**
-     * Creates a sensible default frequency for a given bucket span.
-     * <p>
-     * The default depends on the bucket span:
-     * <ul>
-     * <li> <= 2 mins -> 1 min</li>
-     * <li> <= 20 mins -> bucket span / 2</li>
-     * <li> <= 12 hours -> 10 mins</li>
-     * <li> > 12 hours -> 1 hour</li>
-     * </ul>
-     *
-     * @param bucketSpanSeconds the bucket span in seconds
-     * @return the default frequency
-     */
-    public static Duration ofBucketSpan(long bucketSpanSeconds) {
-        if (bucketSpanSeconds <= 0) {
-            throw new IllegalArgumentException("Bucket span has to be > 0");
-        }
-
-        if (bucketSpanSeconds <= TWO_MINS_SECONDS) {
-            return Duration.ofSeconds(SECONDS_IN_MINUTE);
-        }
-        if (bucketSpanSeconds <= TWENTY_MINS_SECONDS) {
-            return Duration.ofSeconds(bucketSpanSeconds / 2);
-        }
-        if (bucketSpanSeconds <= HALF_DAY_SECONDS) {
-            return TEN_MINUTES;
-        }
-        return ONE_HOUR;
-    }
-}
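
Illustrative sketch, not part of this change: with DefaultFrequency removed, the same bucket-span thresholds now live in the DatafeedConfig.defaultFrequencyTarget method shown earlier, and the target is additionally aligned to the aggregation's date_histogram interval when one is configured. A worked example with assumed values:

    // Assumed values, for illustration only.
    long targetFrequencyMillis = 10 * 60 * 1000L;   // a 30-minute bucket span falls in the "<= 12 hours -> 10 mins" band
    long histogramIntervalMillis = 3 * 60 * 1000L;  // hypothetical date_histogram interval of 3 minutes
    // Alignment step from DatafeedConfig.defaultFrequency: round the target down to a multiple of the interval.
    long defaultFrequencyMillis = (targetFrequencyMillis / histogramIntervalMillis) * histogramIntervalMillis;  // 540000 ms = 9 minutes
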
@@ -98,8 +98,8 @@ public class ChunkedDataExtractor implements DataExtractor {
                 currentEnd = currentStart;
                 chunkSpan = context.chunkSpan == null ? dataSummary.estimateChunk() : context.chunkSpan.getMillis();
                 chunkSpan = context.timeAligner.alignToCeil(chunkSpan);
-                LOGGER.debug("Chunked search configured: totalHits = {}, dataTimeSpread = {} ms, chunk span = {} ms",
-                        dataSummary.totalHits, dataSummary.getDataTimeSpread(), chunkSpan);
+                LOGGER.debug("[{}]Chunked search configured: totalHits = {}, dataTimeSpread = {} ms, chunk span = {} ms",
+                        context.jobId, dataSummary.totalHits, dataSummary.getDataTimeSpread(), chunkSpan);
             } else {
                 // search is over
                 currentEnd = context.end;
@@ -164,7 +164,7 @@ public class ChunkedDataExtractor implements DataExtractor {
             currentStart = currentEnd;
             currentEnd = Math.min(currentStart + chunkSpan, context.end);
             currentExtractor = dataExtractorFactory.newExtractor(currentStart, currentEnd);
-            LOGGER.trace("advances time to [{}, {})", currentStart, currentEnd);
+            LOGGER.trace("[{}] advances time to [{}, {})", context.jobId, currentStart, currentEnd);
         }
 
         @Override
@@ -7,6 +7,7 @@ package org.elasticsearch.xpack.ml.datafeed.extractor.scroll;
 
 import org.apache.logging.log4j.Logger;
 import org.elasticsearch.action.search.ClearScrollAction;
+import org.elasticsearch.action.search.ClearScrollRequest;
 import org.elasticsearch.action.search.SearchAction;
 import org.elasticsearch.action.search.SearchPhaseExecutionException;
 import org.elasticsearch.action.search.SearchRequestBuilder;
@@ -214,13 +215,15 @@ class ScrollDataExtractor implements DataExtractor {
     }
 
     private void resetScroll() {
-        if (scrollId != null) {
-            clearScroll(scrollId);
-        }
+        clearScroll(scrollId);
         scrollId = null;
     }
 
-    void clearScroll(String scrollId) {
-        ClearScrollAction.INSTANCE.newRequestBuilder(client).addScrollId(scrollId).get();
+    private void clearScroll(String scrollId) {
+        if (scrollId != null) {
+            ClearScrollRequest request = new ClearScrollRequest();
+            request.addScrollId(scrollId);
+            client.execute(ClearScrollAction.INSTANCE, request).actionGet();
+        }
     }
 }
@@ -39,6 +39,8 @@ public final class Messages {
     public static final String DATAFEED_DATA_HISTOGRAM_MUST_HAVE_NESTED_MAX_AGGREGATION =
             "Date histogram must have nested max aggregation for time_field [{0}]";
     public static final String DATAFEED_MISSING_MAX_AGGREGATION_FOR_TIME_FIELD = "Missing max aggregation for time_field [{0}]";
+    public static final String DATAFEED_FREQUENCY_MUST_BE_MULTIPLE_OF_AGGREGATIONS_INTERVAL =
+            "Datafeed frequency [{0}] must be a multiple of the aggregation interval [{1}]";
 
     public static final String INCONSISTENT_ID =
             "Inconsistent {0}; ''{1}'' specified in the body differs from ''{2}'' specified as a URL argument";
@@ -58,7 +60,7 @@ public final class Messages {
     public static final String JOB_AUDIT_DATAFEED_LOOKBACK_NO_DATA = "Datafeed lookback retrieved no data";
     public static final String JOB_AUDIT_DATAFEED_NO_DATA = "Datafeed has been retrieving no data for a while";
     public static final String JOB_AUDIT_DATAFEED_RECOVERED = "Datafeed has recovered data extraction and analysis";
-    public static final String JOB_AUDIT_DATAFEED_STARTED_FROM_TO = "Datafeed started (from: {0} to: {1})";
+    public static final String JOB_AUDIT_DATAFEED_STARTED_FROM_TO = "Datafeed started (from: {0} to: {1}) with frequency [{2}]";
     public static final String JOB_AUDIT_DATAFEED_STARTED_REALTIME = "Datafeed started in real-time";
     public static final String JOB_AUDIT_DATAFEED_STOPPED = "Datafeed stopped";
     public static final String JOB_AUDIT_DELETED = "Job deleted";
@@ -5,6 +5,7 @@
  */
 package org.elasticsearch.xpack.ml.job.persistence;
 
+import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
 import org.apache.logging.log4j.Logger;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
@@ -17,6 +18,7 @@ import org.elasticsearch.action.search.SearchRequest;
 import org.elasticsearch.action.support.IndicesOptions;
 import org.elasticsearch.client.Client;
 import org.elasticsearch.cluster.ClusterState;
+import org.elasticsearch.cluster.metadata.AliasMetaData;
 import org.elasticsearch.common.CheckedConsumer;
 import org.elasticsearch.common.logging.Loggers;
 import org.elasticsearch.common.settings.Settings;
@@ -191,22 +193,14 @@ public class JobStorageDeletionTask extends Task {
         executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, aliasesRequest,
                 ActionListener.<GetAliasesResponse>wrap(
                         getAliasesResponse -> {
-                            Set<String> aliases = new HashSet<>();
-                            getAliasesResponse.getAliases().valuesIt().forEachRemaining(
-                                    metaDataList -> metaDataList.forEach(metadata -> aliases.add(metadata.getAlias())));
-                            if (aliases.isEmpty()) {
+                            // remove the aliases from the concrete indices found in the first step
+                            IndicesAliasesRequest removeRequest = buildRemoveAliasesRequest(getAliasesResponse);
+                            if (removeRequest == null) {
                                 // don't error if the job's aliases have already been deleted - carry on and delete the
                                 // rest of the job's data
                                 finishedHandler.onResponse(true);
                                 return;
                             }
-                            List<String> indices = new ArrayList<>();
-                            getAliasesResponse.getAliases().keysIt().forEachRemaining(indices::add);
-                            // remove the aliases from the concrete indices found in the first step
-                            IndicesAliasesRequest removeRequest = new IndicesAliasesRequest().addAliasAction(
-                                    IndicesAliasesRequest.AliasActions.remove()
-                                            .aliases(aliases.toArray(new String[aliases.size()]))
-                                            .indices(indices.toArray(new String[indices.size()])));
                             executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, removeRequest,
                                     ActionListener.<IndicesAliasesResponse>wrap(removeResponse -> finishedHandler.onResponse(true),
                                             finishedHandler::onFailure),
@@ -214,4 +208,21 @@ public class JobStorageDeletionTask extends Task {
                         },
                         finishedHandler::onFailure), client.admin().indices()::getAliases);
     }
+
+    private IndicesAliasesRequest buildRemoveAliasesRequest(GetAliasesResponse getAliasesResponse) {
+        Set<String> aliases = new HashSet<>();
+        List<String> indices = new ArrayList<>();
+        for (ObjectObjectCursor<String, List<AliasMetaData>> entry : getAliasesResponse.getAliases()) {
+            // The response includes _all_ indices, but only those associated with
+            // the aliases we asked about will have associated AliasMetaData
+            if (entry.value.isEmpty() == false) {
+                indices.add(entry.key);
+                entry.value.forEach(metadata -> aliases.add(metadata.getAlias()));
+            }
+        }
+        return aliases.isEmpty() ? null : new IndicesAliasesRequest().addAliasAction(
+                IndicesAliasesRequest.AliasActions.remove()
+                        .aliases(aliases.toArray(new String[aliases.size()]))
+                        .indices(indices.toArray(new String[indices.size()])));
+    }
 }
@@ -8,7 +8,6 @@ package org.elasticsearch.xpack.ml.job.process.autodetect;
 import org.apache.logging.log4j.Logger;
 import org.apache.logging.log4j.message.ParameterizedMessage;
 import org.elasticsearch.ElasticsearchException;
-import org.elasticsearch.ExceptionsHelper;
 import org.elasticsearch.common.CheckedSupplier;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.logging.Loggers;
@@ -32,6 +31,7 @@ import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSizeStats;
 import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSnapshot;
 import org.elasticsearch.xpack.ml.job.process.autodetect.writer.DataToProcessWriter;
 import org.elasticsearch.xpack.ml.job.process.autodetect.writer.DataToProcessWriterFactory;
+import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
 
 import java.io.Closeable;
 import java.io.IOException;
@@ -154,7 +154,12 @@ public class AutodetectCommunicator implements Closeable {
         } catch (InterruptedException e) {
             Thread.currentThread().interrupt();
         } catch (ExecutionException e) {
-            throw ExceptionsHelper.convertToElastic(e);
+            if (processKilled) {
+                // In this case the original exception is spurious and highly misleading
+                throw ExceptionsHelper.conflictStatusException("Close job interrupted by kill request");
+            } else {
+                throw new ElasticsearchException(e);
+            }
         }
     }
 
@@ -242,19 +247,15 @@ public class AutodetectCommunicator implements Closeable {
      */
     private void checkProcessIsAlive() {
         if (!autodetectProcess.isProcessAlive()) {
-            ParameterizedMessage message =
-                    new ParameterizedMessage("[{}] Unexpected death of autodetect: {}", job.getId(), autodetectProcess.readError());
-            LOGGER.error(message);
-            throw new ElasticsearchException(message.getFormattedMessage());
+            // Don't log here - it just causes double logging when the exception gets logged
+            throw new ElasticsearchException("[{}] Unexpected death of autodetect: {}", job.getId(), autodetectProcess.readError());
         }
     }
 
     private void checkResultsProcessorIsAlive() {
         if (autoDetectResultProcessor.isFailed()) {
-            ParameterizedMessage message =
-                    new ParameterizedMessage("[{}] Unexpected death of the result processor", job.getId());
-            LOGGER.error(message);
-            throw new ElasticsearchException(message.getFormattedMessage());
+            // Don't log here - it just causes double logging when the exception gets logged
+            throw new ElasticsearchException("[{}] Unexpected death of the result processor", job.getId());
         }
     }
 
@@ -88,7 +88,7 @@ public class AutodetectProcessManager extends AbstractComponent {
     // TODO: Remove the deprecated setting in 7.0 and move the default value to the replacement setting
     @Deprecated
     public static final Setting<Integer> MAX_RUNNING_JOBS_PER_NODE =
-            Setting.intSetting("max_running_jobs", 10, 1, 512, Property.NodeScope, Property.Deprecated);
+            Setting.intSetting("max_running_jobs", 20, 1, 512, Property.NodeScope, Property.Deprecated);
     public static final Setting<Integer> MAX_OPEN_JOBS_PER_NODE =
             Setting.intSetting("xpack.ml.max_open_jobs", MAX_RUNNING_JOBS_PER_NODE, 1, Property.NodeScope);
 
@@ -473,6 +473,10 @@ public class AutodetectProcessManager extends AbstractComponent {
             communicator.close(restart, reason);
             processByAllocation.remove(allocationId);
         } catch (Exception e) {
+            // If the close failed because the process has explicitly been killed by us then just pass on that exception
+            if (e instanceof ElasticsearchStatusException && ((ElasticsearchStatusException) e).status() == RestStatus.CONFLICT) {
+                throw e;
+            }
             logger.warn("[" + jobId + "] Exception closing autodetect process", e);
             setJobState(jobTask, JobState.FAILED);
             throw ExceptionsHelper.serverError("Exception closing autodetect process", e);
@@ -13,8 +13,8 @@ public enum MonitoredSystem {
 
     ES("es"),
     KIBANA("kibana"),
-    // TODO: when "BEATS" is re-added, add it to tests where we randomly select "LOGSTASH"
     LOGSTASH("logstash"),
+    BEATS("beats"),
     UNKNOWN("unknown");
 
     private final String system;
@@ -35,6 +35,8 @@ public enum MonitoredSystem {
             return KIBANA;
         case "logstash":
             return LOGSTASH;
+        case "beats":
+            return BEATS;
         default:
             // Return an "unknown" monitored system
             // that can easily be filtered out if
@@ -0,0 +1,79 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License;
+ * you may not use this file except in compliance with the Elastic License.
+ */
+package org.elasticsearch.xpack.monitoring;
+
+import org.elasticsearch.xpack.monitoring.action.MonitoringBulkAction;
+import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+import org.elasticsearch.xpack.security.SecurityExtension;
+import org.elasticsearch.xpack.security.support.MetadataUtils;
+import org.elasticsearch.xpack.security.user.KibanaUser;
+import org.elasticsearch.xpack.security.user.LogstashSystemUser;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+public class MonitoringSecurityExtension implements SecurityExtension {
+    @Override
+    public Map<String, RoleDescriptor> getReservedRoles() {
+        Map<String, RoleDescriptor> roles = new HashMap<>();
+
+        roles.put("monitoring_user",
+                  new RoleDescriptor("monitoring_user",
+                                     null,
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".monitoring-*")
+                                             .privileges("read", "read_cross_cluster")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("remote_monitoring_agent",
+                  new RoleDescriptor("remote_monitoring_agent",
+                                     new String[] {
+                                         "manage_index_templates", "manage_ingest_pipelines", "monitor",
+                                         "cluster:monitor/xpack/watcher/watch/get",
+                                         "cluster:admin/xpack/watcher/watch/put",
+                                         "cluster:admin/xpack/watcher/watch/delete",
+                                     },
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".monitoring-*")
+                                             .privileges("all")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        // TODO(core-infra) put KibanaUser & LogstashSystemUser into a common place for the split and use them here
+        roles.put("logstash_system",
+                  new RoleDescriptor(LogstashSystemUser.ROLE_NAME,
+                                     new String[]{"monitor", MonitoringBulkAction.NAME},
+                                     null,
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("kibana_system",
+                  new RoleDescriptor(KibanaUser.ROLE_NAME,
+                                     new String[] { "monitor", "manage_index_templates", MonitoringBulkAction.NAME },
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".kibana*", ".reporting-*")
+                                             .privileges("all")
+                                             .build(),
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".monitoring-*")
+                                             .privileges("read", "read_cross_cluster")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        return Collections.unmodifiableMap(roles);
+    }
+}
@@ -41,7 +41,7 @@ public final class MonitoringTemplateUtils {
     /**
      * IDs of templates that can be used with {@linkplain #loadTemplate(String) loadTemplate}.
     */
-    public static final String[] TEMPLATE_IDS = { "alerts", "es", "kibana", "logstash" };
+    public static final String[] TEMPLATE_IDS = { "alerts", "es", "kibana", "logstash", "beats" };
 
     /**
      * IDs of templates that can be used with {@linkplain #createEmptyTemplate(String) createEmptyTemplate} that are not managed by a
@@ -56,7 +56,8 @@ public class RestMonitoringBulkAction extends MonitoringRestHandler {
         final Map<MonitoredSystem, List<String>> versionsMap = new HashMap<>();
         versionsMap.put(MonitoredSystem.KIBANA, allVersions);
         versionsMap.put(MonitoredSystem.LOGSTASH, allVersions);
-        // Beats did not report data in the 5.x timeline, so it should never send the original version [when we add it!]
+        // Beats did not report data in the 5.x timeline, so it should never send the original version
+        versionsMap.put(MonitoredSystem.BEATS, Collections.singletonList(MonitoringTemplateUtils.TEMPLATE_VERSION));
         supportedApiVersions = Collections.unmodifiableMap(versionsMap);
     }
 
@@ -61,6 +61,7 @@ import org.elasticsearch.plugins.ActionPlugin;
 import org.elasticsearch.plugins.ClusterPlugin;
 import org.elasticsearch.plugins.DiscoveryPlugin;
 import org.elasticsearch.plugins.IngestPlugin;
+import org.elasticsearch.plugins.MapperPlugin;
 import org.elasticsearch.plugins.NetworkPlugin;
 import org.elasticsearch.rest.RestController;
 import org.elasticsearch.rest.RestHandler;
@@ -139,9 +140,11 @@ import org.elasticsearch.xpack.security.authc.support.mapper.expressiondsl.Expre
 import org.elasticsearch.xpack.security.authz.AuthorizationService;
 import org.elasticsearch.xpack.security.authz.RoleDescriptor;
 import org.elasticsearch.xpack.security.authz.SecuritySearchOperationListener;
+import org.elasticsearch.xpack.security.authz.accesscontrol.IndicesAccessControl;
 import org.elasticsearch.xpack.security.authz.accesscontrol.OptOutQueryCache;
 import org.elasticsearch.xpack.security.authz.accesscontrol.SecurityIndexSearcherWrapper;
 import org.elasticsearch.xpack.security.authz.accesscontrol.SetSecurityUserProcessor;
+import org.elasticsearch.xpack.security.authz.permission.FieldPermissions;
 import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsCache;
 import org.elasticsearch.xpack.security.authz.store.CompositeRolesStore;
 import org.elasticsearch.xpack.security.authz.store.FileRolesStore;
@@ -180,7 +183,6 @@ import org.joda.time.DateTimeZone;
 
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
-import java.security.GeneralSecurityException;
 import java.time.Clock;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -194,6 +196,7 @@ import java.util.Optional;
 import java.util.Set;
 import java.util.function.BiConsumer;
 import java.util.function.Function;
+import java.util.function.Predicate;
 import java.util.function.Supplier;
 import java.util.function.UnaryOperator;
 import java.util.stream.Collectors;
@@ -206,7 +209,7 @@ import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY
 import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_TEMPLATE_NAME;
 import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_INDEX_FORMAT;
 
-public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin, DiscoveryPlugin {
+public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin, DiscoveryPlugin, MapperPlugin {
 
     private static final Logger logger = Loggers.getLogger(XPackPlugin.class);
 
@@ -239,8 +242,7 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
     private final SetOnce<SecurityActionFilter> securityActionFilter = new SetOnce<>();
     private final List<BootstrapCheck> bootstrapChecks;
 
-    public Security(Settings settings, Environment env, XPackLicenseState licenseState, SSLService sslService)
-            throws IOException, GeneralSecurityException {
+    public Security(Settings settings, Environment env, XPackLicenseState licenseState, SSLService sslService) {
         this.settings = settings;
         this.env = env;
         this.transportClientMode = XPackPlugin.transportClientMode(settings);
@@ -343,7 +345,7 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
             }
         }
         final AuditTrailService auditTrailService =
-                new AuditTrailService(settings, auditTrails.stream().collect(Collectors.toList()), licenseState);
+                new AuditTrailService(settings, new ArrayList<>(auditTrails), licenseState);
         components.add(auditTrailService);
         this.auditTrailService.set(auditTrailService);
 
@@ -359,9 +361,8 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
         final AnonymousUser anonymousUser = new AnonymousUser(settings);
         final ReservedRealm reservedRealm = new ReservedRealm(env, settings, nativeUsersStore,
                 anonymousUser, securityLifecycleService, threadPool.getThreadContext());
-        Map<String, Realm.Factory> realmFactories = new HashMap<>();
-        realmFactories.putAll(InternalRealms.getFactories(threadPool, resourceWatcherService, sslService, nativeUsersStore,
-                nativeRoleMappingStore, securityLifecycleService));
+        Map<String, Realm.Factory> realmFactories = new HashMap<>(InternalRealms.getFactories(threadPool, resourceWatcherService,
+                sslService, nativeUsersStore, nativeRoleMappingStore, securityLifecycleService));
         for (XPackExtension extension : extensions) {
             Map<String, Realm.Factory> newRealms = extension.getRealms(resourceWatcherService);
             for (Map.Entry<String, Realm.Factory> entry : newRealms.entrySet()) {
@@ -529,11 +530,8 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
 
 
     public List<String> getSettingsFilter(@Nullable XPackExtensionsService extensionsService) {
-        ArrayList<String> settingsFilter = new ArrayList<>();
         List<String> asArray = settings.getAsList(setting("hide_settings"));
-        for (String pattern : asArray) {
-            settingsFilter.add(pattern);
-        }
+        ArrayList<String> settingsFilter = new ArrayList<>(asArray);
 
         final List<XPackExtension> extensions = extensionsService == null ? Collections.emptyList() : extensionsService.getExtensions();
         settingsFilter.addAll(RealmSettings.getSettingsFilter(extensions));
@@ -775,8 +773,7 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
         }
 
         String[] matches = Strings.commaDelimitedListToStringArray(value);
-        List<String> indices = new ArrayList<>();
-        indices.addAll(SecurityLifecycleService.indexNames());
+        List<String> indices = new ArrayList<>(SecurityLifecycleService.indexNames());
         if (indexAuditingEnabled) {
            DateTime now = new DateTime(DateTimeZone.UTC);
            // just use daily rollover
@@ -941,6 +938,31 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
         }
     }
 
+    @Override
+    public Function<String, Predicate<String>> getFieldFilter() {
+        if (enabled) {
+            return index -> {
+                if (licenseState.isDocumentAndFieldLevelSecurityAllowed() == false) {
+                    return MapperPlugin.NOOP_FIELD_PREDICATE;
+                }
+                IndicesAccessControl indicesAccessControl = threadContext.get().getTransient(AuthorizationService.INDICES_PERMISSIONS_KEY);
+                IndicesAccessControl.IndexAccessControl indexPermissions = indicesAccessControl.getIndexPermissions(index);
+                if (indexPermissions == null) {
+                    return MapperPlugin.NOOP_FIELD_PREDICATE;
+                }
+                if (indexPermissions.isGranted() == false) {
+                    throw new IllegalStateException("unexpected call to getFieldFilter for index [" + index + "] which is not granted");
+                }
+                FieldPermissions fieldPermissions = indexPermissions.getFieldPermissions();
+                if (fieldPermissions == null) {
+                    return MapperPlugin.NOOP_FIELD_PREDICATE;
+                }
+                return fieldPermissions::grantsAccessTo;
+            };
+        }
+        return MapperPlugin.super.getFieldFilter();
+    }
+
     @Override
     public BiConsumer<DiscoveryNode, ClusterState> getJoinValidator() {
         if (enabled) {
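
Illustrative sketch, not part of this change: how a caller might use the field filter that Security now exposes through MapperPlugin.getFieldFilter(). The variable names, index name, and field name below are assumptions.

    // 'security' stands for the Security plugin instance above; requires java.util.function.Function and Predicate.
    Function<String, Predicate<String>> fieldFilter = security.getFieldFilter();
    Predicate<String> visibleInIndex = fieldFilter.apply("customer-index");     // hypothetical index name
    boolean fieldIsVisible = visibleInIndex.test("ssn");                        // false when field level security hides the field
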
@@ -0,0 +1,26 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License;
+ * you may not use this file except in compliance with the Elastic License.
+ */
+package org.elasticsearch.xpack.security;
+
+import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * SPI interface to any plugins that want to provide custom extensions to aid the security module in functioning without
+ * needing to explicitly know about the behavior of the implementing plugin.
+ */
+public interface SecurityExtension {
+
+    /**
+     * Gets a set of reserved roles, consisting of the role name and the descriptor.
+     */
+    default Map<String, RoleDescriptor> getReservedRoles() {
+        return Collections.emptyMap();
+    }
+}
@@ -0,0 +1,68 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License;
+ * you may not use this file except in compliance with the Elastic License.
+ */
+package org.elasticsearch.xpack.security;
+
+import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+import org.elasticsearch.xpack.security.support.MetadataUtils;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+public class StackSecurityExtension implements SecurityExtension {
+    @Override
+    public Map<String, RoleDescriptor> getReservedRoles() {
+        Map<String, RoleDescriptor> roles = new HashMap<>();
+
+        roles.put("transport_client",
+                  new RoleDescriptor("transport_client",
+                                     new String[] { "transport_client" },
+                                     null,
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("kibana_user",
+                  new RoleDescriptor("kibana_user",
+                                     null,
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".kibana*")
+                                             .privileges("manage", "read", "index", "delete")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("ingest_admin",
+                  new RoleDescriptor("ingest_admin",
+                                     new String[] { "manage_index_templates", "manage_pipeline" },
+                                     null,
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        // reporting_user doesn't have any privileges in Elasticsearch, and Kibana authorizes privileges based on this role
+        roles.put("reporting_user",
+                  new RoleDescriptor("reporting_user",
+                                     null,
+                                     null,
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("kibana_dashboard_only_user",
+                  new RoleDescriptor("kibana_dashboard_only_user",
+                                     null,
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(".kibana*")
+                                             .privileges("read", "view_index_metadata")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        return Collections.unmodifiableMap(roles);
+    }
+}
@@ -6,19 +6,20 @@
 package org.elasticsearch.xpack.security.action.user;
 
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.logging.log4j.message.ParameterizedMessage;
 import org.apache.lucene.util.automaton.Automaton;
 import org.apache.lucene.util.automaton.Operations;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.action.support.HandledTransportAction;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
+import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.threadpool.ThreadPool;
@@ -68,10 +69,9 @@ public class TransportHasPrivilegesAction extends HandledTransportAction<HasPriv
 
     private void checkPrivileges(HasPrivilegesRequest request, Role userRole,
                                  ActionListener<HasPrivilegesResponse> listener) {
-        if (logger.isDebugEnabled()) {
-            logger.debug("Check whether role [{}] has privileges cluster=[{}] index=[{}]", userRole.name(),
-                    Arrays.toString(request.clusterPrivileges()), Arrays.toString(request.indexPrivileges()));
-        }
+        logger.debug(() -> new ParameterizedMessage("Check whether role [{}] has privileges cluster=[{}] index=[{}]",
+                Strings.arrayToCommaDelimitedString(userRole.names()), Strings.arrayToCommaDelimitedString(request.clusterPrivileges()),
+                Strings.arrayToCommaDelimitedString(request.indexPrivileges())));
 
         Map<String, Boolean> cluster = new HashMap<>();
         for (String checkAction : request.clusterPrivileges()) {
@@ -93,10 +93,12 @@ public class TransportHasPrivilegesAction extends HandledTransportAction<HasPriv
                 }
                 for (String privilege : check.getPrivileges()) {
                     if (testIndexMatch(index, privilege, userRole, predicateCache)) {
-                        logger.debug("Role [{}] has [{}] on [{}]", userRole.name(), privilege, index);
+                        logger.debug(() -> new ParameterizedMessage("Role [{}] has [{}] on [{}]",
+                                Strings.arrayToCommaDelimitedString(userRole.names()), privilege, index));
                         privileges.put(privilege, true);
                     } else {
-                        logger.debug("Role [{}] does not have [{}] on [{}]", userRole.name(), privilege, index);
+                        logger.debug(() -> new ParameterizedMessage("Role [{}] does not have [{}] on [{}]",
+                                Strings.arrayToCommaDelimitedString(userRole.names()), privilege, index));
                         privileges.put(privilege, false);
                         allMatch = false;
                     }
@ -39,25 +39,9 @@ public interface AuditTrail {
|
|||||||
|
|
||||||
void authenticationFailed(String realm, AuthenticationToken token, RestRequest request);
|
void authenticationFailed(String realm, AuthenticationToken token, RestRequest request);
|
||||||
|
|
||||||
/**
|
void accessGranted(User user, String action, TransportMessage message, String[] roleNames, @Nullable Set<String> specificIndices);
|
||||||
* Access was granted for some request.
|
|
||||||
* @param specificIndices if non-null then the action was authorized
|
|
||||||
* for all indices in this particular set of indices, otherwise
|
|
||||||
* the action was authorized for all indices to which it is
|
|
||||||
* related, if any
|
|
||||||
*/
|
|
||||||
void accessGranted(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices);
|
|
||||||
|
|
||||||
/**
|
void accessDenied(User user, String action, TransportMessage message, String[] roleNames, @Nullable Set<String> specificIndices);
|
||||||
* Access was denied for some request.
|
|
||||||
* @param specificIndices if non-null then the action was denied
|
|
||||||
* for at least one index in this particular set of indices,
|
|
||||||
* otherwise the action was denied for at least one index
|
|
||||||
* to which the request is related. If the request isn't
|
|
||||||
* related to any particular index then the request itself
|
|
||||||
* was denied.
|
|
||||||
*/
|
|
||||||
void accessDenied(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices);
|
|
||||||
|
|
||||||
void tamperedRequest(RestRequest request);
|
void tamperedRequest(RestRequest request);
|
||||||
|
|
||||||
@ -69,9 +53,9 @@ public interface AuditTrail {
|
|||||||
|
|
||||||
void connectionDenied(InetAddress inetAddress, String profile, SecurityIpFilterRule rule);
|
void connectionDenied(InetAddress inetAddress, String profile, SecurityIpFilterRule rule);
|
||||||
|
|
||||||
void runAsGranted(User user, String action, TransportMessage message);
|
void runAsGranted(User user, String action, TransportMessage message, String[] roleNames);
|
||||||
|
|
||||||
void runAsDenied(User user, String action, TransportMessage message);
|
void runAsDenied(User user, String action, TransportMessage message, String[] roleNames);
|
||||||
|
|
||||||
void runAsDenied(User user, RestRequest request);
|
void runAsDenied(User user, RestRequest request, String[] roleNames);
|
||||||
}
|
}
|
||||||
|
@ -132,19 +132,19 @@ public class AuditTrailService extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessGranted(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessGranted(User user, String action, TransportMessage message, String[] roleNames, @Nullable Set<String> specificIndices) {
|
||||||
if (licenseState.isAuditingAllowed()) {
|
if (licenseState.isAuditingAllowed()) {
|
||||||
for (AuditTrail auditTrail : auditTrails) {
|
for (AuditTrail auditTrail : auditTrails) {
|
||||||
auditTrail.accessGranted(user, action, message, specificIndices);
|
auditTrail.accessGranted(user, action, message, roleNames, specificIndices);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessDenied(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessDenied(User user, String action, TransportMessage message, String[] roleNames, @Nullable Set<String> specificIndices) {
|
||||||
if (licenseState.isAuditingAllowed()) {
|
if (licenseState.isAuditingAllowed()) {
|
||||||
for (AuditTrail auditTrail : auditTrails) {
|
for (AuditTrail auditTrail : auditTrails) {
|
||||||
auditTrail.accessDenied(user, action, message, specificIndices);
|
auditTrail.accessDenied(user, action, message, roleNames, specificIndices);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -193,28 +193,28 @@ public class AuditTrailService extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsGranted(User user, String action, TransportMessage message) {
|
public void runAsGranted(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (licenseState.isAuditingAllowed()) {
|
if (licenseState.isAuditingAllowed()) {
|
||||||
for (AuditTrail auditTrail : auditTrails) {
|
for (AuditTrail auditTrail : auditTrails) {
|
||||||
auditTrail.runAsGranted(user, action, message);
|
auditTrail.runAsGranted(user, action, message, roleNames);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, String action, TransportMessage message) {
|
public void runAsDenied(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (licenseState.isAuditingAllowed()) {
|
if (licenseState.isAuditingAllowed()) {
|
||||||
for (AuditTrail auditTrail : auditTrails) {
|
for (AuditTrail auditTrail : auditTrails) {
|
||||||
auditTrail.runAsDenied(user, action, message);
|
auditTrail.runAsDenied(user, action, message, roleNames);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, RestRequest request) {
|
public void runAsDenied(User user, RestRequest request, String[] roleNames) {
|
||||||
if (licenseState.isAuditingAllowed()) {
|
if (licenseState.isAuditingAllowed()) {
|
||||||
for (AuditTrail auditTrail : auditTrails) {
|
for (AuditTrail auditTrail : auditTrails) {
|
||||||
auditTrail.runAsDenied(user, request);
|
auditTrail.runAsDenied(user, request, roleNames);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
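Every overload in AuditTrailService follows the same delegation idiom: check that the license permits auditing, then fan the event out to each configured trail (logfile output, index output, or both). Condensed here with the new role-name parameter in place, restated from the hunks above with comments added:

    @Override
    public void accessGranted(User user, String action, TransportMessage message, String[] roleNames,
                              @Nullable Set<String> specificIndices) {
        if (licenseState.isAuditingAllowed()) {            // auditing is license gated
            for (AuditTrail auditTrail : auditTrails) {    // one event, every configured output
                auditTrail.accessGranted(user, action, message, roleNames, specificIndices);
            }
        }
    }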
@ -48,7 +48,6 @@ import org.elasticsearch.xpack.XPackPlugin;
|
|||||||
import org.elasticsearch.xpack.security.audit.AuditLevel;
|
import org.elasticsearch.xpack.security.audit.AuditLevel;
|
||||||
import org.elasticsearch.xpack.security.audit.AuditTrail;
|
import org.elasticsearch.xpack.security.audit.AuditTrail;
|
||||||
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
|
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
|
||||||
import org.elasticsearch.xpack.security.authz.privilege.SystemPrivilege;
|
|
||||||
import org.elasticsearch.xpack.security.rest.RemoteHostHeader;
|
import org.elasticsearch.xpack.security.rest.RemoteHostHeader;
|
||||||
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
|
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
|
||||||
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
|
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
|
||||||
@ -356,7 +355,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationSuccess(String realm, User user, RestRequest request) {
|
public void authenticationSuccess(String realm, User user, RestRequest request) {
|
||||||
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("authentication_success", realm, user, request), "authentication_success");
|
enqueue(message("authentication_success", realm, user, null, request), "authentication_success");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [authentication_success]", e);
|
logger.warn("failed to index audit event: [authentication_success]", e);
|
||||||
}
|
}
|
||||||
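Each IndexAuditTrail method builds the event into a Message and hands it to enqueue(...), so indexing happens asynchronously and a failure only produces the warn line, never an error back to the request being audited. The queueing machinery itself is not part of this diff; a minimal sketch of the "enqueue now, index later" shape it implies (the queue field, its capacity, and the reuse of the Message type are assumptions for illustration, not the actual implementation):

    // requires java.util.concurrent.BlockingQueue and java.util.concurrent.LinkedBlockingQueue
    private final BlockingQueue<Message> eventQueue = new LinkedBlockingQueue<>(10000);

    void enqueue(Message message, String type) {
        if (eventQueue.offer(message) == false) {
            // auditing must not fail the request it describes; just report the dropped event
            logger.warn("failed to index audit event: [" + type + "]");
        }
    }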
@ -367,7 +366,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationSuccess(String realm, User user, String action, TransportMessage message) {
|
public void authenticationSuccess(String realm, User user, String action, TransportMessage message) {
|
||||||
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("authentication_success", action, user, realm, null, message), "authentication_success");
|
enqueue(message("authentication_success", action, user, null, realm, null, message), "authentication_success");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [authentication_success]", e);
|
logger.warn("failed to index audit event: [authentication_success]", e);
|
||||||
}
|
}
|
||||||
@ -378,7 +377,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void anonymousAccessDenied(String action, TransportMessage message) {
|
public void anonymousAccessDenied(String action, TransportMessage message) {
|
||||||
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("anonymous_access_denied", action, (User) null, null, indices(message), message),
|
enqueue(message("anonymous_access_denied", action, (User) null, null, null, indices(message), message),
|
||||||
"anonymous_access_denied");
|
"anonymous_access_denied");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [anonymous_access_denied]", e);
|
logger.warn("failed to index audit event: [anonymous_access_denied]", e);
|
||||||
@ -401,7 +400,8 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(String action, TransportMessage message) {
|
public void authenticationFailed(String action, TransportMessage message) {
|
||||||
if (events.contains(AUTHENTICATION_FAILED)) {
|
if (events.contains(AUTHENTICATION_FAILED)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("authentication_failed", action, (User) null, null, indices(message), message), "authentication_failed");
|
enqueue(message("authentication_failed", action, (User) null, null, null, indices(message), message),
|
||||||
|
"authentication_failed");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [authentication_failed]", e);
|
logger.warn("failed to index audit event: [authentication_failed]", e);
|
||||||
}
|
}
|
||||||
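Note the convention in the calls above: events that occur before any user or roles are known (anonymous access, authentication failures, tampered requests) pass null in the new role-name slot rather than an empty array. Given the message(...) helper shown further down, the difference is whether the field appears at all:

    // roleNames == null          -> the "roles" field is omitted from the indexed document
    // roleNames == new String[0] -> an empty "roles" array is written
    if (roleNames != null) {
        msg.builder.array(Field.ROLE_NAMES, roleNames);
    }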
@ -473,21 +473,15 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessGranted(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessGranted(User user, String action, TransportMessage message, String[] roleNames,
|
||||||
// special treatment for internal system actions - only log if explicitly told to
|
@Nullable Set<String> specificIndices) {
|
||||||
if ((SystemUser.is(user) && SystemPrivilege.INSTANCE.predicate().test(action))) {
|
final boolean isSystem = SystemUser.is(user) || XPackUser.is(user);
|
||||||
if (events.contains(SYSTEM_ACCESS_GRANTED)) {
|
final boolean logSystemAccessGranted = isSystem && events.contains(SYSTEM_ACCESS_GRANTED);
|
||||||
|
final boolean shouldLog = logSystemAccessGranted || (isSystem == false && events.contains(ACCESS_GRANTED));
|
||||||
|
if (shouldLog) {
|
||||||
try {
|
try {
|
||||||
Set<String> indices = specificIndices == null ? indices(message) : specificIndices;
|
Set<String> indices = specificIndices == null ? indices(message) : specificIndices;
|
||||||
enqueue(message("access_granted", action, user, null, indices, message), "access_granted");
|
enqueue(message("access_granted", action, user, roleNames, null, indices, message), "access_granted");
|
||||||
} catch (Exception e) {
|
|
||||||
logger.warn("failed to index audit event: [access_granted]", e);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} else if (events.contains(ACCESS_GRANTED) && XPackUser.is(user) == false) {
|
|
||||||
try {
|
|
||||||
Set<String> indices = specificIndices == null ? indices(message) : specificIndices;
|
|
||||||
enqueue(message("access_granted", action, user, null, indices, message), "access_granted");
|
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [access_granted]", e);
|
logger.warn("failed to index audit event: [access_granted]", e);
|
||||||
}
|
}
|
||||||
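The rewritten accessGranted above collapses the old two-branch logic (a special case for the internal system user, then everyone else) into a single predicate: the system and X-Pack internal users are logged only when the system_access_granted category is enabled, ordinary users only under access_granted. Note that the old SystemPrivilege action check is dropped; being the system or X-Pack user is now enough to count as system access. The decision, isolated:

    final boolean isSystem = SystemUser.is(user) || XPackUser.is(user);
    // system / xpack user -> requires SYSTEM_ACCESS_GRANTED to be enabled
    // any other user      -> requires ACCESS_GRANTED to be enabled
    final boolean shouldLog = (isSystem && events.contains(SYSTEM_ACCESS_GRANTED))
            || (isSystem == false && events.contains(ACCESS_GRANTED));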
@ -495,11 +489,12 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessDenied(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessDenied(User user, String action, TransportMessage message, String[] roleNames,
|
||||||
|
@Nullable Set<String> specificIndices) {
|
||||||
if (events.contains(ACCESS_DENIED) && XPackUser.is(user) == false) {
|
if (events.contains(ACCESS_DENIED) && XPackUser.is(user) == false) {
|
||||||
try {
|
try {
|
||||||
Set<String> indices = specificIndices == null ? indices(message) : specificIndices;
|
Set<String> indices = specificIndices == null ? indices(message) : specificIndices;
|
||||||
enqueue(message("access_denied", action, user, null, indices, message), "access_denied");
|
enqueue(message("access_denied", action, user, roleNames, null, indices, message), "access_denied");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [access_denied]", e);
|
logger.warn("failed to index audit event: [access_denied]", e);
|
||||||
}
|
}
|
||||||
@ -521,7 +516,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void tamperedRequest(String action, TransportMessage message) {
|
public void tamperedRequest(String action, TransportMessage message) {
|
||||||
if (events.contains(TAMPERED_REQUEST)) {
|
if (events.contains(TAMPERED_REQUEST)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("tampered_request", action, (User) null, null, indices(message), message), "tampered_request");
|
enqueue(message("tampered_request", action, (User) null, null, null, indices(message), message), "tampered_request");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [tampered_request]", e);
|
logger.warn("failed to index audit event: [tampered_request]", e);
|
||||||
}
|
}
|
||||||
@ -532,7 +527,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void tamperedRequest(User user, String action, TransportMessage request) {
|
public void tamperedRequest(User user, String action, TransportMessage request) {
|
||||||
if (events.contains(TAMPERED_REQUEST) && XPackUser.is(user) == false) {
|
if (events.contains(TAMPERED_REQUEST) && XPackUser.is(user) == false) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("tampered_request", action, user, null, indices(request), request), "tampered_request");
|
enqueue(message("tampered_request", action, user, null, null, indices(request), request), "tampered_request");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [tampered_request]", e);
|
logger.warn("failed to index audit event: [tampered_request]", e);
|
||||||
}
|
}
|
||||||
@ -562,10 +557,10 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsGranted(User user, String action, TransportMessage message) {
|
public void runAsGranted(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_GRANTED)) {
|
if (events.contains(RUN_AS_GRANTED)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("run_as_granted", action, user, null, null, message), "run_as_granted");
|
enqueue(message("run_as_granted", action, user, roleNames, null, null, message), "run_as_granted");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [run_as_granted]", e);
|
logger.warn("failed to index audit event: [run_as_granted]", e);
|
||||||
}
|
}
|
||||||
@ -573,10 +568,10 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, String action, TransportMessage message) {
|
public void runAsDenied(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_DENIED)) {
|
if (events.contains(RUN_AS_DENIED)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("run_as_denied", action, user, null, null, message), "run_as_denied");
|
enqueue(message("run_as_denied", action, user, roleNames, null, null, message), "run_as_denied");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [run_as_denied]", e);
|
logger.warn("failed to index audit event: [run_as_denied]", e);
|
||||||
}
|
}
|
||||||
@ -584,17 +579,17 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, RestRequest request) {
|
public void runAsDenied(User user, RestRequest request, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_DENIED)) {
|
if (events.contains(RUN_AS_DENIED)) {
|
||||||
try {
|
try {
|
||||||
enqueue(message("run_as_denied", null, user, request), "run_as_denied");
|
enqueue(message("run_as_denied", null, user, roleNames, request), "run_as_denied");
|
||||||
} catch (Exception e) {
|
} catch (Exception e) {
|
||||||
logger.warn("failed to index audit event: [run_as_denied]", e);
|
logger.warn("failed to index audit event: [run_as_denied]", e);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
private Message message(String type, @Nullable String action, @Nullable User user, @Nullable String realm,
|
private Message message(String type, @Nullable String action, @Nullable User user, @Nullable String[] roleNames, @Nullable String realm,
|
||||||
@Nullable Set<String> indices, TransportMessage message) throws Exception {
|
@Nullable Set<String> indices, TransportMessage message) throws Exception {
|
||||||
|
|
||||||
Message msg = new Message().start();
|
Message msg = new Message().start();
|
||||||
@ -617,6 +612,9 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
} else {
|
} else {
|
||||||
msg.builder.field(Field.PRINCIPAL, user.principal());
|
msg.builder.field(Field.PRINCIPAL, user.principal());
|
||||||
}
|
}
|
||||||
|
if (roleNames != null) {
|
||||||
|
msg.builder.array(Field.ROLE_NAMES, roleNames);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
msg.builder.array(Field.INDICES, indices.toArray(Strings.EMPTY_ARRAY));
|
msg.builder.array(Field.INDICES, indices.toArray(Strings.EMPTY_ARRAY));
|
||||||
@ -689,7 +687,8 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
return msg.end();
|
return msg.end();
|
||||||
}
|
}
|
||||||
|
|
||||||
private Message message(String type, String realm, User user, RestRequest request) throws Exception {
|
private Message message(String type, @Nullable String realm, @Nullable User user, @Nullable String[] roleNames, RestRequest request)
|
||||||
|
throws Exception {
|
||||||
|
|
||||||
Message msg = new Message().start();
|
Message msg = new Message().start();
|
||||||
common("rest", type, msg.builder);
|
common("rest", type, msg.builder);
|
||||||
@ -701,6 +700,9 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
} else {
|
} else {
|
||||||
msg.builder.field(Field.PRINCIPAL, user.principal());
|
msg.builder.field(Field.PRINCIPAL, user.principal());
|
||||||
}
|
}
|
||||||
|
if (roleNames != null) {
|
||||||
|
msg.builder.array(Field.ROLE_NAMES, roleNames);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
if (realm != null) {
|
if (realm != null) {
|
||||||
msg.builder.field(Field.REALM, realm);
|
msg.builder.field(Field.REALM, realm);
|
||||||
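Both message(...) builders now take the role names and, when they are non-null, write them under Field.ROLE_NAMES, which the next hunk defines as the document field "roles". A self-contained sketch of the resulting document shape using the stock XContentBuilder API; the field values are made up for illustration:

    // requires org.elasticsearch.common.xcontent.XContentBuilder and XContentFactory
    static XContentBuilder exampleAuditDoc(String[] roleNames) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder().startObject()
                .field("event_type", "access_granted")
                .field("principal", "jdoe");
        if (roleNames != null) {               // null -> no "roles" field at all
            builder.array("roles", roleNames);
        }
        return builder.endObject();            // {"event_type":"access_granted","principal":"jdoe","roles":[...]}
    }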
@ -1054,6 +1056,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
String ORIGIN_ADDRESS = "origin_address";
|
String ORIGIN_ADDRESS = "origin_address";
|
||||||
String ORIGIN_TYPE = "origin_type";
|
String ORIGIN_TYPE = "origin_type";
|
||||||
String PRINCIPAL = "principal";
|
String PRINCIPAL = "principal";
|
||||||
|
String ROLE_NAMES = "roles";
|
||||||
String RUN_AS_PRINCIPAL = "run_as_principal";
|
String RUN_AS_PRINCIPAL = "run_as_principal";
|
||||||
String RUN_BY_PRINCIPAL = "run_by_principal";
|
String RUN_BY_PRINCIPAL = "run_by_principal";
|
||||||
String ACTION = "action";
|
String ACTION = "action";
|
||||||
|
@ -6,6 +6,8 @@
|
|||||||
package org.elasticsearch.xpack.security.audit.logfile;
|
package org.elasticsearch.xpack.security.audit.logfile;
|
||||||
|
|
||||||
import org.apache.logging.log4j.Logger;
|
import org.apache.logging.log4j.Logger;
|
||||||
|
import org.elasticsearch.cluster.ClusterChangedEvent;
|
||||||
|
import org.elasticsearch.cluster.ClusterStateListener;
|
||||||
import org.elasticsearch.cluster.node.DiscoveryNode;
|
import org.elasticsearch.cluster.node.DiscoveryNode;
|
||||||
import org.elasticsearch.cluster.service.ClusterService;
|
import org.elasticsearch.cluster.service.ClusterService;
|
||||||
import org.elasticsearch.common.Nullable;
|
import org.elasticsearch.common.Nullable;
|
||||||
@ -23,7 +25,6 @@ import org.elasticsearch.transport.TransportMessage;
|
|||||||
import org.elasticsearch.xpack.security.audit.AuditLevel;
|
import org.elasticsearch.xpack.security.audit.AuditLevel;
|
||||||
import org.elasticsearch.xpack.security.audit.AuditTrail;
|
import org.elasticsearch.xpack.security.audit.AuditTrail;
|
||||||
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
|
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
|
||||||
import org.elasticsearch.xpack.security.authz.privilege.SystemPrivilege;
|
|
||||||
import org.elasticsearch.xpack.security.rest.RemoteHostHeader;
|
import org.elasticsearch.xpack.security.rest.RemoteHostHeader;
|
||||||
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
|
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
|
||||||
import org.elasticsearch.xpack.security.user.SystemUser;
|
import org.elasticsearch.xpack.security.user.SystemUser;
|
||||||
@ -37,10 +38,12 @@ import java.util.Arrays;
|
|||||||
import java.util.Collections;
|
import java.util.Collections;
|
||||||
import java.util.EnumSet;
|
import java.util.EnumSet;
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
import java.util.Optional;
|
||||||
import java.util.Set;
|
import java.util.Set;
|
||||||
import java.util.function.Function;
|
import java.util.function.Function;
|
||||||
|
|
||||||
import static org.elasticsearch.common.Strings.collectionToCommaDelimitedString;
|
import static org.elasticsearch.common.Strings.collectionToCommaDelimitedString;
|
||||||
|
import static org.elasticsearch.common.Strings.arrayToCommaDelimitedString;
|
||||||
import static org.elasticsearch.xpack.security.Security.setting;
|
import static org.elasticsearch.xpack.security.Security.setting;
|
||||||
import static org.elasticsearch.xpack.security.audit.AuditLevel.ACCESS_DENIED;
|
import static org.elasticsearch.xpack.security.audit.AuditLevel.ACCESS_DENIED;
|
||||||
import static org.elasticsearch.xpack.security.audit.AuditLevel.ACCESS_GRANTED;
|
import static org.elasticsearch.xpack.security.audit.AuditLevel.ACCESS_GRANTED;
|
||||||
@ -58,7 +61,7 @@ import static org.elasticsearch.xpack.security.audit.AuditLevel.parse;
|
|||||||
import static org.elasticsearch.xpack.security.audit.AuditUtil.indices;
|
import static org.elasticsearch.xpack.security.audit.AuditUtil.indices;
|
||||||
import static org.elasticsearch.xpack.security.audit.AuditUtil.restRequestContent;
|
import static org.elasticsearch.xpack.security.audit.AuditUtil.restRequestContent;
|
||||||
|
|
||||||
public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
public class LoggingAuditTrail extends AbstractComponent implements AuditTrail, ClusterStateListener {
|
||||||
|
|
||||||
public static final String NAME = "logfile";
|
public static final String NAME = "logfile";
|
||||||
public static final Setting<Boolean> HOST_ADDRESS_SETTING =
|
public static final Setting<Boolean> HOST_ADDRESS_SETTING =
|
||||||
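The class declaration above now also implements ClusterStateListener, and in the following hunk the lazily resolved prefix (plus the getPrefix() call on every log statement) gives way to a volatile LocalNodeInfo snapshot that is rebuilt whenever the local node changes. Reduced to the concurrency idea, with simplified names rather than the literal class:

    // Lock-free publication: the snapshot is immutable and swapped as a whole.
    private volatile LocalNodeInfo localNodeInfo;

    // writer: only the cluster state applier thread replaces the snapshot
    void onLocalNodeChanged(DiscoveryNode newLocalNode) {
        localNodeInfo = new LocalNodeInfo(settings, newLocalNode);   // one volatile write
    }

    // readers: each audit method takes a single volatile read and uses that
    // same snapshot (prefix and origin tag) for the whole event
    void logEvent(String what) {
        final LocalNodeInfo info = this.localNodeInfo;
        logger.info("{}[transport] [{}]", info.prefix, what);
    }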
@ -85,12 +88,10 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
Setting.boolSetting(setting("audit.logfile.events.emit_request_body"), false, Property.NodeScope);
|
Setting.boolSetting(setting("audit.logfile.events.emit_request_body"), false, Property.NodeScope);
|
||||||
|
|
||||||
private final Logger logger;
|
private final Logger logger;
|
||||||
private final ClusterService clusterService;
|
|
||||||
private final ThreadContext threadContext;
|
|
||||||
private final EnumSet<AuditLevel> events;
|
private final EnumSet<AuditLevel> events;
|
||||||
private final boolean includeRequestBody;
|
private final boolean includeRequestBody;
|
||||||
|
private final ThreadContext threadContext;
|
||||||
private String prefix;
|
volatile LocalNodeInfo localNodeInfo;
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public String name() {
|
public String name() {
|
||||||
@@ -104,28 +105,22 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
     LoggingAuditTrail(Settings settings, ClusterService clusterService, Logger logger, ThreadContext threadContext) {
         super(settings);
         this.logger = logger;
-        this.clusterService = clusterService;
-        this.threadContext = threadContext;
         this.events = parse(INCLUDE_EVENT_SETTINGS.get(settings), EXCLUDE_EVENT_SETTINGS.get(settings));
         this.includeRequestBody = INCLUDE_REQUEST_BODY.get(settings);
-    }
-
-    private String getPrefix() {
-        if (prefix == null) {
-            prefix = resolvePrefix(settings, clusterService.localNode());
-        }
-        return prefix;
+        this.threadContext = threadContext;
+        this.localNodeInfo = new LocalNodeInfo(settings, null);
+        clusterService.addListener(this);
     }
 
     @Override
     public void authenticationSuccess(String realm, User user, RestRequest request) {
         if (events.contains(AUTHENTICATION_SUCCESS)) {
             if (includeRequestBody) {
-                logger.info("{}[rest] [authentication_success]\t{}, realm=[{}], uri=[{}], params=[{}], request_body=[{}]", getPrefix(),
-                        principal(user), realm, request.uri(), request.params(), restRequestContent(request));
+                logger.info("{}[rest] [authentication_success]\t{}, realm=[{}], uri=[{}], params=[{}], request_body=[{}]",
+                        localNodeInfo.prefix, principal(user), realm, request.uri(), request.params(), restRequestContent(request));
             } else {
-                logger.info("{}[rest] [authentication_success]\t{}, realm=[{}], uri=[{}], params=[{}]", getPrefix(), principal(user), realm,
-                        request.uri(), request.params());
+                logger.info("{}[rest] [authentication_success]\t{}, realm=[{}], uri=[{}], params=[{}]", localNodeInfo.prefix,
+                        principal(user), realm, request.uri(), request.params());
             }
         }
     }
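The constructor now seeds the snapshot with LocalNodeInfo(settings, null), because the local node is not known until the node has joined a cluster, and registers the trail as a cluster state listener so the first clusterChanged notification fills in the real node. An illustrative sequence, not an actual test, with the printed prefix only an example of the resolvePrefix format shown later in this diff:

    LoggingAuditTrail trail = new LoggingAuditTrail(settings, clusterService, logger, threadContext);
    // until a cluster state arrives, the prefix carries no host address or host name
    DiscoveryNode localNode = clusterService.localNode();
    trail.updateLocalNodeInfo(localNode);     // normally driven by clusterChanged(...)
    // subsequent lines are prefixed with something like "[127.0.0.1] [node-1] ",
    // depending on the audit.logfile prefix settings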
@ -133,8 +128,9 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
@Override
|
@Override
|
||||||
public void authenticationSuccess(String realm, User user, String action, TransportMessage message) {
|
public void authenticationSuccess(String realm, User user, String action, TransportMessage message) {
|
||||||
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
if (events.contains(AUTHENTICATION_SUCCESS)) {
|
||||||
logger.info("{}[transport] [authentication_success]\t{}, {}, realm=[{}], action=[{}], request=[{}]", getPrefix(),
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), principal(user), realm, action,
|
logger.info("{}[transport] [authentication_success]\t{}, {}, realm=[{}], action=[{}], request=[{}]",
|
||||||
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), principal(user), realm, action,
|
||||||
message.getClass().getSimpleName());
|
message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -143,13 +139,14 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void anonymousAccessDenied(String action, TransportMessage message) {
|
public void anonymousAccessDenied(String action, TransportMessage message) {
|
||||||
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
||||||
String indices = indicesString(message);
|
String indices = indicesString(message);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [anonymous_access_denied]\t{}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [anonymous_access_denied]\t{}, action=[{}], indices=[{}], request=[{}]",
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action, indices,
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), action, indices,
|
||||||
message.getClass().getSimpleName());
|
message.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [anonymous_access_denied]\t{}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [anonymous_access_denied]\t{}, action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action, message.getClass().getSimpleName());
|
originAttributes(threadContext, message, localNodeInfo), action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -158,10 +155,11 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void anonymousAccessDenied(RestRequest request) {
|
public void anonymousAccessDenied(RestRequest request) {
|
||||||
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [anonymous_access_denied]\t{}, uri=[{}], request_body=[{}]", getPrefix(),
|
logger.info("{}[rest] [anonymous_access_denied]\t{}, uri=[{}], request_body=[{}]", localNodeInfo.prefix,
|
||||||
hostAttributes(request), request.uri(), restRequestContent(request));
|
hostAttributes(request), request.uri(), restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [anonymous_access_denied]\t{}, uri=[{}]", getPrefix(), hostAttributes(request), request.uri());
|
logger.info("{}[rest] [anonymous_access_denied]\t{}, uri=[{}]", localNodeInfo.prefix, hostAttributes(request),
|
||||||
|
request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -170,13 +168,14 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage message) {
|
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage message) {
|
||||||
if (events.contains(AUTHENTICATION_FAILED)) {
|
if (events.contains(AUTHENTICATION_FAILED)) {
|
||||||
String indices = indicesString(message);
|
String indices = indicesString(message);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]",
|
logger.info("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]",
|
||||||
getPrefix(), originAttributes(message, clusterService.localNode(), threadContext), token.principal(),
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), token.principal(), action, indices,
|
||||||
action, indices, message.getClass().getSimpleName());
|
message.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], request=[{}]",
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), token.principal(), action,
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), token.principal(), action,
|
||||||
message.getClass().getSimpleName());
|
message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -187,10 +186,11 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(RestRequest request) {
|
public void authenticationFailed(RestRequest request) {
|
||||||
if (events.contains(AUTHENTICATION_FAILED)) {
|
if (events.contains(AUTHENTICATION_FAILED)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [authentication_failed]\t{}, uri=[{}], request_body=[{}]", getPrefix(), hostAttributes(request),
|
logger.info("{}[rest] [authentication_failed]\t{}, uri=[{}], request_body=[{}]", localNodeInfo.prefix,
|
||||||
request.uri(), restRequestContent(request));
|
hostAttributes(request), request.uri(), restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [authentication_failed]\t{}, uri=[{}]", getPrefix(), hostAttributes(request), request.uri());
|
logger.info("{}[rest] [authentication_failed]\t{}, uri=[{}]", localNodeInfo.prefix, hostAttributes(request),
|
||||||
|
request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -199,13 +199,13 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(String action, TransportMessage message) {
|
public void authenticationFailed(String action, TransportMessage message) {
|
||||||
if (events.contains(AUTHENTICATION_FAILED)) {
|
if (events.contains(AUTHENTICATION_FAILED)) {
|
||||||
String indices = indicesString(message);
|
String indices = indicesString(message);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [authentication_failed]\t{}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [authentication_failed]\t{}, action=[{}], indices=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action, indices,
|
originAttributes(threadContext, message, localNodeInfo), action, indices, message.getClass().getSimpleName());
|
||||||
message.getClass().getSimpleName());
|
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [authentication_failed]\t{}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [authentication_failed]\t{}, action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action, message.getClass().getSimpleName());
|
originAttributes(threadContext, message, localNodeInfo), action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -214,11 +214,11 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
|
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
|
||||||
if (events.contains(AUTHENTICATION_FAILED)) {
|
if (events.contains(AUTHENTICATION_FAILED)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}], request_body=[{}]", getPrefix(),
|
logger.info("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}], request_body=[{}]", localNodeInfo.prefix,
|
||||||
hostAttributes(request), token.principal(), request.uri(), restRequestContent(request));
|
hostAttributes(request), token.principal(), request.uri(), restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}]", getPrefix(), hostAttributes(request),
|
logger.info("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}]", localNodeInfo.prefix,
|
||||||
token.principal(), request.uri());
|
hostAttributes(request), token.principal(), request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -227,14 +227,15 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage message) {
|
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage message) {
|
||||||
if (events.contains(REALM_AUTHENTICATION_FAILED)) {
|
if (events.contains(REALM_AUTHENTICATION_FAILED)) {
|
||||||
String indices = indicesString(message);
|
String indices = indicesString(message);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], indices=[{}], " +
|
logger.info("{}[transport] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], indices=[{}], " +
|
||||||
"request=[{}]", getPrefix(), realm, originAttributes(message, clusterService.localNode(), threadContext),
|
"request=[{}]", localNodeInfo.prefix, realm, originAttributes(threadContext, message, localNodeInfo),
|
||||||
token.principal(), action, indices, message.getClass().getSimpleName());
|
token.principal(), action, indices, message.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], request=[{}]",
|
logger.info("{}[transport] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], request=[{}]",
|
||||||
getPrefix(), realm, originAttributes(message, clusterService.localNode(), threadContext), token.principal(),
|
localNodeInfo.prefix, realm, originAttributes(threadContext, message, localNodeInfo), token.principal(), action,
|
||||||
action, message.getClass().getSimpleName());
|
message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -244,45 +245,51 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
if (events.contains(REALM_AUTHENTICATION_FAILED)) {
|
if (events.contains(REALM_AUTHENTICATION_FAILED)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], uri=[{}], request_body=[{}]",
|
logger.info("{}[rest] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], uri=[{}], request_body=[{}]",
|
||||||
getPrefix(), realm, hostAttributes(request), token.principal(), request.uri(), restRequestContent(request));
|
localNodeInfo.prefix, realm, hostAttributes(request), token.principal(), request.uri(),
|
||||||
|
restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], uri=[{}]", getPrefix(),
|
logger.info("{}[rest] [realm_authentication_failed]\trealm=[{}], {}, principal=[{}], uri=[{}]", localNodeInfo.prefix,
|
||||||
realm, hostAttributes(request), token.principal(), request.uri());
|
realm, hostAttributes(request), token.principal(), request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessGranted(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessGranted(User user, String action, TransportMessage message, String[] roleNames,
|
||||||
final boolean isSystem = (SystemUser.is(user) && SystemPrivilege.INSTANCE.predicate().test(action)) || XPackUser.is(user);
|
@Nullable Set<String> specificIndices) {
|
||||||
|
final boolean isSystem = SystemUser.is(user) || XPackUser.is(user);
|
||||||
final boolean logSystemAccessGranted = isSystem && events.contains(SYSTEM_ACCESS_GRANTED);
|
final boolean logSystemAccessGranted = isSystem && events.contains(SYSTEM_ACCESS_GRANTED);
|
||||||
final boolean shouldLog = logSystemAccessGranted || (isSystem == false && events.contains(ACCESS_GRANTED));
|
final boolean shouldLog = logSystemAccessGranted || (isSystem == false && events.contains(ACCESS_GRANTED));
|
||||||
if (shouldLog) {
|
if (shouldLog) {
|
||||||
String indices = specificIndices == null ? indicesString(message) : collectionToCommaDelimitedString(specificIndices);
|
String indices = specificIndices == null ? indicesString(message) : collectionToCommaDelimitedString(specificIndices);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [access_granted]\t{}, {}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [access_granted]\t{}, {}, roles=[{}], action=[{}], indices=[{}], request=[{}]",
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), principal(user), action, indices,
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), principal(user),
|
||||||
message.getClass().getSimpleName());
|
arrayToCommaDelimitedString(roleNames), action, indices, message.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [access_granted]\t{}, {}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [access_granted]\t{}, {}, roles=[{}], action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), principal(user), action,
|
originAttributes(threadContext, message, localNodeInfo), principal(user), arrayToCommaDelimitedString(roleNames),
|
||||||
message.getClass().getSimpleName());
|
action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void accessDenied(User user, String action, TransportMessage message, @Nullable Set<String> specificIndices) {
|
public void accessDenied(User user, String action, TransportMessage message, String[] roleNames,
|
||||||
|
@Nullable Set<String> specificIndices) {
|
||||||
if (events.contains(ACCESS_DENIED)) {
|
if (events.contains(ACCESS_DENIED)) {
|
||||||
String indices = specificIndices == null ? indicesString(message) : collectionToCommaDelimitedString(specificIndices);
|
String indices = specificIndices == null ? indicesString(message) : collectionToCommaDelimitedString(specificIndices);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
|
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [access_denied]\t{}, {}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [access_denied]\t{}, {}, roles=[{}], action=[{}], indices=[{}], request=[{}]",
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), principal(user), action, indices,
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), principal(user),
|
||||||
message.getClass().getSimpleName());
|
arrayToCommaDelimitedString(roleNames), action, indices, message.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [access_denied]\t{}, {}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [access_denied]\t{}, {}, roles=[{}], action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), principal(user), action,
|
originAttributes(threadContext, message, localNodeInfo), principal(user), arrayToCommaDelimitedString(roleNames),
|
||||||
message.getClass().getSimpleName());
|
action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
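With role names available, the access_granted and access_denied log lines gain a roles=[...] attribute (rendered via arrayToCommaDelimitedString) between the principal and the action; the run_as lines below get the same treatment. Filling the format string above with invented values, a line would look roughly like:

    [node-1] [transport] [access_granted]  origin_type=[rest], origin_address=[10.0.0.5], principal=[jdoe], roles=[kibana_user,monitoring_user], action=[indices:data/read/search], indices=[logs-2017.09.01], request=[SearchRequest]

The exact prefix and principal rendering depend on the audit.logfile settings and on the principal(...) helper, so treat the line as an approximation rather than literal output.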
@ -291,10 +298,10 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void tamperedRequest(RestRequest request) {
|
public void tamperedRequest(RestRequest request) {
|
||||||
if (events.contains(TAMPERED_REQUEST)) {
|
if (events.contains(TAMPERED_REQUEST)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [tampered_request]\t{}, uri=[{}], request_body=[{}]", getPrefix(), hostAttributes(request),
|
logger.info("{}[rest] [tampered_request]\t{}, uri=[{}], request_body=[{}]", localNodeInfo.prefix,
|
||||||
request.uri(), restRequestContent(request));
|
hostAttributes(request), request.uri(), restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [tampered_request]\t{}, uri=[{}]", getPrefix(), hostAttributes(request), request.uri());
|
logger.info("{}[rest] [tampered_request]\t{}, uri=[{}]", localNodeInfo.prefix, hostAttributes(request), request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -303,14 +310,13 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void tamperedRequest(String action, TransportMessage message) {
|
public void tamperedRequest(String action, TransportMessage message) {
|
||||||
if (events.contains(TAMPERED_REQUEST)) {
|
if (events.contains(TAMPERED_REQUEST)) {
|
||||||
String indices = indicesString(message);
|
String indices = indicesString(message);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [tampered_request]\t{}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [tampered_request]\t{}, action=[{}], indices=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action, indices,
|
originAttributes(threadContext, message, localNodeInfo), action, indices, message.getClass().getSimpleName());
|
||||||
message.getClass().getSimpleName());
|
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [tampered_request]\t{}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [tampered_request]\t{}, action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(message, clusterService.localNode(), threadContext), action,
|
originAttributes(threadContext, message, localNodeInfo), action, message.getClass().getSimpleName());
|
||||||
message.getClass().getSimpleName());
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -319,13 +325,14 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
public void tamperedRequest(User user, String action, TransportMessage request) {
|
public void tamperedRequest(User user, String action, TransportMessage request) {
|
||||||
if (events.contains(TAMPERED_REQUEST)) {
|
if (events.contains(TAMPERED_REQUEST)) {
|
||||||
String indices = indicesString(request);
|
String indices = indicesString(request);
|
||||||
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
if (indices != null) {
|
if (indices != null) {
|
||||||
logger.info("{}[transport] [tampered_request]\t{}, {}, action=[{}], indices=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [tampered_request]\t{}, {}, action=[{}], indices=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(request, clusterService.localNode(), threadContext), principal(user), action, indices,
|
originAttributes(threadContext, request, localNodeInfo), principal(user), action, indices,
|
||||||
request.getClass().getSimpleName());
|
request.getClass().getSimpleName());
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[transport] [tampered_request]\t{}, {}, action=[{}], request=[{}]", getPrefix(),
|
logger.info("{}[transport] [tampered_request]\t{}, {}, action=[{}], request=[{}]", localNodeInfo.prefix,
|
||||||
originAttributes(request, clusterService.localNode(), threadContext), principal(user), action,
|
originAttributes(threadContext, request, localNodeInfo), principal(user), action,
|
||||||
request.getClass().getSimpleName());
|
request.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -334,46 +341,49 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
@Override
|
@Override
|
||||||
public void connectionGranted(InetAddress inetAddress, String profile, SecurityIpFilterRule rule) {
|
public void connectionGranted(InetAddress inetAddress, String profile, SecurityIpFilterRule rule) {
|
||||||
if (events.contains(CONNECTION_GRANTED)) {
|
if (events.contains(CONNECTION_GRANTED)) {
|
||||||
logger.info("{}[ip_filter] [connection_granted]\torigin_address=[{}], transport_profile=[{}], rule=[{}]", getPrefix(),
|
logger.info("{}[ip_filter] [connection_granted]\torigin_address=[{}], transport_profile=[{}], rule=[{}]",
|
||||||
NetworkAddress.format(inetAddress), profile, rule);
|
localNodeInfo.prefix, NetworkAddress.format(inetAddress), profile, rule);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void connectionDenied(InetAddress inetAddress, String profile, SecurityIpFilterRule rule) {
|
public void connectionDenied(InetAddress inetAddress, String profile, SecurityIpFilterRule rule) {
|
||||||
if (events.contains(CONNECTION_DENIED)) {
|
if (events.contains(CONNECTION_DENIED)) {
|
||||||
logger.info("{}[ip_filter] [connection_denied]\torigin_address=[{}], transport_profile=[{}], rule=[{}]", getPrefix(),
|
logger.info("{}[ip_filter] [connection_denied]\torigin_address=[{}], transport_profile=[{}], rule=[{}]",
|
||||||
NetworkAddress.format(inetAddress), profile, rule);
|
localNodeInfo.prefix, NetworkAddress.format(inetAddress), profile, rule);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsGranted(User user, String action, TransportMessage message) {
|
public void runAsGranted(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_GRANTED)) {
|
if (events.contains(RUN_AS_GRANTED)) {
|
||||||
logger.info("{}[transport] [run_as_granted]\t{}, principal=[{}], run_as_principal=[{}], action=[{}], request=[{}]",
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
getPrefix(), originAttributes(message, clusterService.localNode(), threadContext), user.authenticatedUser().principal(),
|
logger.info("{}[transport] [run_as_granted]\t{}, principal=[{}], run_as_principal=[{}], roles=[{}], action=[{}], request=[{}]",
|
||||||
user.principal(), action, message.getClass().getSimpleName());
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), user.authenticatedUser().principal(),
|
||||||
|
user.principal(), arrayToCommaDelimitedString(roleNames), action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, String action, TransportMessage message) {
|
public void runAsDenied(User user, String action, TransportMessage message, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_DENIED)) {
|
if (events.contains(RUN_AS_DENIED)) {
|
||||||
logger.info("{}[transport] [run_as_denied]\t{}, principal=[{}], run_as_principal=[{}], action=[{}], request=[{}]",
|
final LocalNodeInfo localNodeInfo = this.localNodeInfo;
|
||||||
getPrefix(), originAttributes(message, clusterService.localNode(), threadContext), user.authenticatedUser().principal(),
|
logger.info("{}[transport] [run_as_denied]\t{}, principal=[{}], run_as_principal=[{}], roles=[{}], action=[{}], request=[{}]",
|
||||||
user.principal(), action, message.getClass().getSimpleName());
|
localNodeInfo.prefix, originAttributes(threadContext, message, localNodeInfo), user.authenticatedUser().principal(),
|
||||||
|
user.principal(), arrayToCommaDelimitedString(roleNames), action, message.getClass().getSimpleName());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void runAsDenied(User user, RestRequest request) {
|
public void runAsDenied(User user, RestRequest request, String[] roleNames) {
|
||||||
if (events.contains(RUN_AS_DENIED)) {
|
if (events.contains(RUN_AS_DENIED)) {
|
||||||
if (includeRequestBody) {
|
if (includeRequestBody) {
|
||||||
logger.info("{}[rest] [run_as_denied]\t{}, principal=[{}], uri=[{}], request_body=[{}]", getPrefix(),
|
logger.info("{}[rest] [run_as_denied]\t{}, principal=[{}], roles=[{}], uri=[{}], request_body=[{}]", localNodeInfo.prefix,
|
||||||
hostAttributes(request), user.principal(), request.uri(), restRequestContent(request));
|
hostAttributes(request), user.principal(), arrayToCommaDelimitedString(roleNames), request.uri(),
|
||||||
|
restRequestContent(request));
|
||||||
} else {
|
} else {
|
||||||
logger.info("{}[rest] [run_as_denied]\t{}, principal=[{}], uri=[{}]", getPrefix(),
|
logger.info("{}[rest] [run_as_denied]\t{}, principal=[{}], roles=[{}], uri=[{}]", localNodeInfo.prefix,
|
||||||
hostAttributes(request), user.principal(), request.uri());
|
hostAttributes(request), user.principal(), arrayToCommaDelimitedString(roleNames), request.uri());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -389,56 +399,29 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
|
|||||||
return "origin_address=[" + formattedAddress + "]";
|
return "origin_address=[" + formattedAddress + "]";
|
||||||
}
|
}
|
||||||
|
|
||||||
static String originAttributes(TransportMessage message, DiscoveryNode localNode, ThreadContext threadContext) {
|
protected static String originAttributes(ThreadContext threadContext, TransportMessage message, LocalNodeInfo localNodeInfo) {
|
||||||
StringBuilder builder = new StringBuilder();
|
return restOriginTag(threadContext).orElse(transportOriginTag(message).orElse(localNodeInfo.localOriginTag));
|
||||||
|
}
|
||||||
|
|
||||||
// first checking if the message originated in a rest call
|
private static Optional<String> restOriginTag(ThreadContext threadContext) {
|
||||||
InetSocketAddress restAddress = RemoteHostHeader.restRemoteAddress(threadContext);
|
InetSocketAddress restAddress = RemoteHostHeader.restRemoteAddress(threadContext);
|
||||||
if (restAddress != null) {
|
if (restAddress == null) {
|
||||||
builder.append("origin_type=[rest], origin_address=[").
|
return Optional.empty();
|
||||||
append(NetworkAddress.format(restAddress.getAddress())).
|
|
||||||
append("]");
|
|
||||||
return builder.toString();
|
|
||||||
}
|
}
|
||||||
|
return Optional.of(new StringBuilder("origin_type=[rest], origin_address=[").append(NetworkAddress.format(restAddress.getAddress()))
|
||||||
// we'll see if was originated in a remote node
|
|
||||||
TransportAddress address = message.remoteAddress();
|
|
||||||
if (address != null) {
|
|
||||||
builder.append("origin_type=[transport], ");
|
|
||||||
builder.append("origin_address=[").
|
|
||||||
append(NetworkAddress.format(address.address().getAddress())).
|
|
||||||
append("]");
|
|
||||||
return builder.toString();
|
|
||||||
}
|
|
||||||
|
|
||||||
// the call was originated locally on this node
|
|
||||||
return builder.append("origin_type=[local_node], origin_address=[")
|
|
||||||
.append(localNode.getHostAddress())
|
|
||||||
.append("]")
|
.append("]")
|
||||||
.toString();
|
.toString());
|
||||||
}
|
}
|
||||||
|
|
||||||
static String resolvePrefix(Settings settings, DiscoveryNode localNode) {
|
private static Optional<String> transportOriginTag(TransportMessage message) {
|
||||||
StringBuilder builder = new StringBuilder();
|
TransportAddress address = message.remoteAddress();
|
||||||
if (HOST_ADDRESS_SETTING.get(settings)) {
|
if (address == null) {
|
||||||
String address = localNode.getHostAddress();
|
return Optional.empty();
|
||||||
if (address != null) {
|
|
||||||
builder.append("[").append(address).append("] ");
|
|
||||||
}
|
}
|
||||||
}
|
return Optional.of(
|
||||||
if (HOST_NAME_SETTING.get(settings)) {
|
new StringBuilder("origin_type=[transport], origin_address=[").append(NetworkAddress.format(address.address().getAddress()))
|
||||||
String hostName = localNode.getHostName();
|
.append("]")
|
||||||
if (hostName != null) {
|
.toString());
|
||||||
builder.append("[").append(hostName).append("] ");
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if (NODE_NAME_SETTING.get(settings)) {
|
|
||||||
String name = settings.get("name");
|
|
||||||
if (name != null) {
|
|
||||||
builder.append("[").append(name).append("] ");
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return builder.toString();
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static String indicesString(TransportMessage message) {
|
static String indicesString(TransportMessage message) {
|
||||||
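The single StringBuilder-based originAttributes(...) above is split into two Optional-returning probes plus a precomputed fallback: the REST remote address stored in the thread context wins, then the transport address on the message, and otherwise the local node's cached tag is used. The control flow, isolated with simplified types (the real helpers are the ones shown above):

    // requires java.util.Optional
    static String originTag(Optional<String> restOrigin, Optional<String> transportOrigin, String localNodeTag) {
        // first present value wins: rest, then transport, then local node
        return restOrigin.orElse(transportOrigin.orElse(localNodeTag));
    }

    // originTag(Optional.empty(),
    //           Optional.of("origin_type=[transport], origin_address=[10.0.0.7]"),
    //           "origin_type=[local_node], origin_address=[127.0.0.1]")
    // -> "origin_type=[transport], origin_address=[10.0.0.7]"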
@@ -463,4 +446,62 @@ public class LoggingAuditTrail extends AbstractComponent implements AuditTrail {
         settings.add(EXCLUDE_EVENT_SETTINGS);
         settings.add(INCLUDE_REQUEST_BODY);
     }
 
+    @Override
+    public void clusterChanged(ClusterChangedEvent event) {
+        updateLocalNodeInfo(event.state().getNodes().getLocalNode());
+    }
+
+    void updateLocalNodeInfo(DiscoveryNode newLocalNode) {
+        // check if local node changed
+        final DiscoveryNode localNode = localNodeInfo.localNode;
+        if (localNode == null || localNode.equals(newLocalNode) == false) {
+            // no need to synchronize, called only from the cluster state applier thread
+            localNodeInfo = new LocalNodeInfo(settings, newLocalNode);
+        }
+    }
+
+    protected static class LocalNodeInfo {
+        private final DiscoveryNode localNode;
+        private final String prefix;
+        private final String localOriginTag;
+
+        LocalNodeInfo(Settings settings, @Nullable DiscoveryNode newLocalNode) {
+            this.localNode = newLocalNode;
+            this.prefix = resolvePrefix(settings, newLocalNode);
+            this.localOriginTag = localOriginTag(newLocalNode);
+        }
+
+        static String resolvePrefix(Settings settings, @Nullable DiscoveryNode localNode) {
+            final StringBuilder builder = new StringBuilder();
+            if (HOST_ADDRESS_SETTING.get(settings)) {
+                String address = localNode != null ? localNode.getHostAddress() : null;
+                if (address != null) {
+                    builder.append("[").append(address).append("] ");
+                }
+            }
+            if (HOST_NAME_SETTING.get(settings)) {
+                String hostName = localNode != null ? localNode.getHostName() : null;
+                if (hostName != null) {
+                    builder.append("[").append(hostName).append("] ");
+                }
+            }
+            if (NODE_NAME_SETTING.get(settings)) {
+                String name = settings.get("name");
+                if (name != null) {
+                    builder.append("[").append(name).append("] ");
+                }
+            }
+            return builder.toString();
+        }
+
+        private static String localOriginTag(@Nullable DiscoveryNode localNode) {
+            if (localNode == null) {
+                return "origin_type=[local_node]";
+            }
+            return new StringBuilder("origin_type=[local_node], origin_address=[").append(localNode.getHostAddress())
+                    .append("]")
+                    .toString();
+        }
+    }
 }
||||||
|
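The LocalNodeInfo block added above caches everything derived from the local node so the audit prefix is recomputed only when the node actually changes. A minimal, JDK-only sketch of that caching pattern; the class and field names here are illustrative stand-ins, not the X-Pack types:

    import java.util.Objects;

    public class CachedNodePrefix {
        private volatile NodeInfo nodeInfo = new NodeInfo(null);

        // Recompute the prefix only when the local node actually changes.
        void updateLocalNode(String newNodeName) {
            NodeInfo current = nodeInfo;
            if (Objects.equals(current.nodeName, newNodeName) == false) {
                nodeInfo = new NodeInfo(newNodeName);
            }
        }

        String prefix() {
            return nodeInfo.prefix;
        }

        private static final class NodeInfo {
            final String nodeName;
            final String prefix;   // computed once per node change

            NodeInfo(String nodeName) {
                this.nodeName = nodeName;
                this.prefix = nodeName == null ? "" : "[" + nodeName + "] ";
            }
        }
    }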
@ -34,6 +34,7 @@ import org.elasticsearch.xpack.security.audit.AuditTrail;
 import org.elasticsearch.xpack.security.audit.AuditTrailService;
 import org.elasticsearch.xpack.security.authc.Authentication.RealmRef;
 import org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken;
+import org.elasticsearch.xpack.security.authz.permission.Role;
 import org.elasticsearch.xpack.security.support.Exceptions;
 import org.elasticsearch.xpack.security.user.AnonymousUser;
 import org.elasticsearch.xpack.security.user.User;
@ -531,7 +532,7 @@ public class AuthenticationService extends AbstractComponent {
 
         @Override
         ElasticsearchSecurityException runAsDenied(User user, AuthenticationToken token) {
-            auditTrail.runAsDenied(user, action, message);
+            auditTrail.runAsDenied(user, action, message, Role.EMPTY.names());
             return failureHandler.failedAuthentication(message, token, action, threadContext);
         }
 
@ -593,7 +594,7 @@ public class AuthenticationService extends AbstractComponent {
 
         @Override
         ElasticsearchSecurityException runAsDenied(User user, AuthenticationToken token) {
-            auditTrail.runAsDenied(user, request);
+            auditTrail.runAsDenied(user, request, Role.EMPTY.names());
             return failureHandler.failedAuthentication(request, token, threadContext);
         }
 
@ -16,6 +16,7 @@ import java.util.function.Function;
 import java.util.stream.Collectors;
 
 import org.elasticsearch.common.settings.AbstractScopedSettings;
+import org.elasticsearch.common.settings.SecureSetting;
 import org.elasticsearch.common.settings.Setting;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.xpack.extensions.XPackExtension;
@ -167,6 +168,11 @@ public class RealmSettings {
             // perfectly aligned
             return;
         }
+
+        // Don't validate secure settings because they might have been cleared already
+        settings = Settings.builder().put(settings, false).build();
+        validSettings.removeIf(s -> s instanceof SecureSetting);
+
         Set<Setting<?>> settingSet = new HashSet<>(validSettings);
         settingSet.add(TYPE_SETTING);
         settingSet.add(ENABLED_SETTING);
@ -88,6 +88,7 @@ public class AuthorizationService extends AbstractComponent {
     public static final String INDICES_PERMISSIONS_KEY = "_indices_permissions";
     public static final String INDICES_PERMISSIONS_RESOLVER_KEY = "_indices_permissions_resolver";
     public static final String ORIGINATING_ACTION_KEY = "_originating_action_name";
+    public static final String ROLE_NAMES_KEY = "_effective_role_names";
 
     private static final Predicate<String> MONITOR_INDEX_PREDICATE = IndexPrivilege.MONITOR.predicate();
     private static final Predicate<String> SAME_USER_PRIVILEGE = Automatons.predicate(
@ -153,12 +154,13 @@ public class AuthorizationService extends AbstractComponent {
 
         // first we need to check if the user is the system. If it is, we'll just authorize the system access
         if (SystemUser.is(authentication.getUser())) {
-            if (SystemUser.isAuthorized(action) && SystemUser.is(authentication.getUser())) {
-                setIndicesAccessControl(IndicesAccessControl.ALLOW_ALL);
-                grant(authentication, action, request, null);
+            if (SystemUser.isAuthorized(action)) {
+                putTransientIfNonExisting(INDICES_PERMISSIONS_KEY, IndicesAccessControl.ALLOW_ALL);
+                putTransientIfNonExisting(ROLE_NAMES_KEY, new String[] { SystemUser.ROLE_NAME });
+                grant(authentication, action, request, new String[] { SystemUser.ROLE_NAME }, null);
                 return;
             }
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, new String[] { SystemUser.ROLE_NAME }, null);
         }
 
         // get the roles of the authenticated user, which may be different than the effective
@ -170,29 +172,30 @@ public class AuthorizationService extends AbstractComponent {
             // if we are running as a user we looked up then the authentication must contain a lookedUpBy. If it doesn't then this user
             // doesn't really exist but the authc service allowed it through to avoid leaking users that exist in the system
             if (authentication.getLookedUpBy() == null) {
-                throw denyRunAs(authentication, action, request);
+                throw denyRunAs(authentication, action, request, permission.names());
             } else if (permission.runAs().check(authentication.getUser().principal())) {
-                grantRunAs(authentication, action, request);
+                grantRunAs(authentication, action, request, permission.names());
                 permission = runAsRole;
             } else {
-                throw denyRunAs(authentication, action, request);
+                throw denyRunAs(authentication, action, request, permission.names());
             }
         }
+        putTransientIfNonExisting(ROLE_NAMES_KEY, permission.names());
 
         // first, we'll check if the action is a cluster action. If it is, we'll only check it against the cluster permissions
         if (ClusterPrivilege.ACTION_MATCHER.test(action)) {
             ClusterPermission cluster = permission.cluster();
             if (cluster.check(action) || checkSameUserPermissions(action, request, authentication)) {
-                setIndicesAccessControl(IndicesAccessControl.ALLOW_ALL);
-                grant(authentication, action, request, null);
+                putTransientIfNonExisting(INDICES_PERMISSIONS_KEY, IndicesAccessControl.ALLOW_ALL);
+                grant(authentication, action, request, permission.names(), null);
                 return;
             }
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         }
 
         // ok... this is not a cluster action, let's verify it's an indices action
         if (!IndexPrivilege.ACTION_MATCHER.test(action)) {
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         }
 
         //composite actions are explicitly listed and will be authorized at the sub-request / shard level
@ -203,16 +206,16 @@ public class AuthorizationService extends AbstractComponent {
             }
             // we check if the user can execute the action, without looking at indices, which will be authorized at the shard level
             if (permission.indices().check(action)) {
-                grant(authentication, action, request, null);
+                grant(authentication, action, request, permission.names(), null);
                 return;
             }
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         } else if (isDelayedIndicesAction(action)) {
             /* We check now if the user can execute the action without looking at indices.
              * The action is itself responsible for checking if the user can access the
              * indices when it runs. */
             if (permission.indices().check(action)) {
-                grant(authentication, action, request, null);
+                grant(authentication, action, request, permission.names(), null);
 
                 /* Now that we know the user can run the action we need to build a function
                  * that we can use later to fetch the user's actual permissions for an
@ -244,19 +247,33 @@ public class AuthorizationService extends AbstractComponent {
                     try {
                         resolvedIndices = indicesAndAliasesResolver.resolve(proxy, metaData, authorizedIndices);
                     } catch (Exception e) {
-                        denial(authentication, action, finalRequest, specificIndices);
+                        denial(authentication, action, finalRequest, finalPermission.names(), specificIndices);
                         throw e;
                     }
 
                     Set<String> localIndices = new HashSet<>(resolvedIndices.getLocal());
-                    IndicesAccessControl indicesAccessControl = authorizeIndices(action, finalRequest, localIndices, specificIndices,
-                            authentication, finalPermission, metaData);
-                    grant(authentication, action, finalRequest, specificIndices);
+                    IndicesAccessControl indicesAccessControl = finalPermission.authorize(action, localIndices,
+                            metaData, fieldPermissionsCache);
+                    if (!indicesAccessControl.isGranted()) {
+                        throw denial(authentication, action, finalRequest, finalPermission.names(), specificIndices);
+                    }
+                    if (hasSecurityIndexAccess(indicesAccessControl)
+                            && MONITOR_INDEX_PREDICATE.test(action) == false
+                            && isSuperuser(authentication.getUser()) == false) {
+                        // only the superusers are allowed to work with this index, but we should allow indices monitoring actions
+                        // through for debugging purposes. These monitor requests also sometimes resolve indices concretely and
+                        // then requests them
+                        logger.debug("user [{}] attempted to directly perform [{}] against the security index [{}]",
+                                authentication.getUser().principal(), action, SecurityLifecycleService.SECURITY_INDEX_NAME);
+                        throw denial(authentication, action, finalRequest, finalPermission.names(), specificIndices);
+                    }
+                    grant(authentication, action, finalRequest, finalPermission.names(), specificIndices);
                     return indicesAccessControl;
                 });
                 return;
             }
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         } else if (isTranslatedToBulkAction(action)) {
             if (request instanceof CompositeIndicesRequest == false) {
                 throw new IllegalStateException("Bulk translated actions must implement " + CompositeIndicesRequest.class.getSimpleName()
@ -264,10 +281,10 @@ public class AuthorizationService extends AbstractComponent {
             }
             // we check if the user can execute the action, without looking at indices, which will be authorized at the shard level
             if (permission.indices().check(action)) {
-                grant(authentication, action, request, null);
+                grant(authentication, action, request, permission.names(), null);
                 return;
             }
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         } else if (TransportActionProxy.isProxyAction(action)) {
             // we authorize proxied actions once they are "unwrapped" on the next node
             if (TransportActionProxy.isProxyRequest(originalRequest) == false) {
@ -275,12 +292,12 @@ public class AuthorizationService extends AbstractComponent {
                         + action + "] is a proxy action");
             }
             if (permission.indices().check(action)) {
-                grant(authentication, action, request, null);
+                grant(authentication, action, request, permission.names(), null);
                 return;
             } else {
                 // we do this here in addition to the denial below since we might run into an assertion on scroll request below if we
                 // don't have permission to read cross cluster but wrap a scroll request.
-                throw denial(authentication, action, request, null);
+                throw denial(authentication, action, request, permission.names(), null);
             }
         }
 
@ -299,18 +316,18 @@ public class AuthorizationService extends AbstractComponent {
                 // index and if they cannot, we can fail the request early before we allow the execution of the action and in
                 // turn the shard actions
                 if (SearchScrollAction.NAME.equals(action) && permission.indices().check(action) == false) {
-                    throw denial(authentication, action, request, null);
+                    throw denial(authentication, action, request, permission.names(), null);
                 } else {
                     // we store the request as a transient in the ThreadContext in case of a authorization failure at the shard
                     // level. If authorization fails we will audit a access_denied message and will use the request to retrieve
                     // information such as the index and the incoming address of the request
-                    grant(authentication, action, request, null);
+                    grant(authentication, action, request, permission.names(), null);
                     return;
                 }
             } else {
                 assert false :
                         "only scroll related requests are known indices api that don't support retrieving the indices they relate to";
-                throw denial(authentication, action, request, null);
+                throw denial(authentication, action, request, permission.names(), null);
             }
         }
 
@ -320,33 +337,45 @@ public class AuthorizationService extends AbstractComponent {
         // If this request does not allow remote indices
         // then the user must have permission to perform this action on at least 1 local index
         if (allowsRemoteIndices == false && permission.indices().check(action) == false) {
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         }
 
         final MetaData metaData = clusterService.state().metaData();
         final AuthorizedIndices authorizedIndices = new AuthorizedIndices(authentication.getUser(), permission, action, metaData);
-        final ResolvedIndices resolvedIndices = resolveIndexNames(authentication, action, request, request, metaData, authorizedIndices);
+        final ResolvedIndices resolvedIndices = resolveIndexNames(authentication, action, request, request,
+                metaData, authorizedIndices, permission);
         assert !resolvedIndices.isEmpty()
                 : "every indices request needs to have its indices set thus the resolved indices must not be empty";
 
         // If this request does reference any remote indices
         // then the user must have permission to perform this action on at least 1 local index
         if (resolvedIndices.getRemote().isEmpty() && permission.indices().check(action) == false) {
-            throw denial(authentication, action, request, null);
+            throw denial(authentication, action, request, permission.names(), null);
         }
 
         //all wildcard expressions have been resolved and only the security plugin could have set '-*' here.
         //'-*' matches no indices so we allow the request to go through, which will yield an empty response
         if (resolvedIndices.isNoIndicesPlaceholder()) {
-            setIndicesAccessControl(IndicesAccessControl.ALLOW_NO_INDICES);
-            grant(authentication, action, request, null);
+            putTransientIfNonExisting(INDICES_PERMISSIONS_KEY, IndicesAccessControl.ALLOW_NO_INDICES);
+            grant(authentication, action, request, permission.names(), null);
             return;
         }
 
         final Set<String> localIndices = new HashSet<>(resolvedIndices.getLocal());
-        IndicesAccessControl indicesAccessControl = authorizeIndices(action, request, localIndices, null, authentication,
-                permission, metaData);
-        setIndicesAccessControl(indicesAccessControl);
+        IndicesAccessControl indicesAccessControl = permission.authorize(action, localIndices, metaData, fieldPermissionsCache);
+        if (!indicesAccessControl.isGranted()) {
+            throw denial(authentication, action, request, permission.names(), null);
+        } else if (hasSecurityIndexAccess(indicesAccessControl)
+                && MONITOR_INDEX_PREDICATE.test(action) == false
+                && isSuperuser(authentication.getUser()) == false) {
+            // only the XPackUser is allowed to work with this index, but we should allow indices monitoring actions through for debugging
+            // purposes. These monitor requests also sometimes resolve indices concretely and then requests them
+            logger.debug("user [{}] attempted to directly perform [{}] against the security index [{}]",
+                    authentication.getUser().principal(), action, SecurityLifecycleService.SECURITY_INDEX_NAME);
+            throw denial(authentication, action, request, permission.names(), null);
+        } else {
+            putTransientIfNonExisting(INDICES_PERMISSIONS_KEY, indicesAccessControl);
+        }
 
         //if we are creating an index we need to authorize potential aliases created at the same time
         if (IndexPrivilege.CREATE_INDEX_MATCHER.test(action)) {
@ -359,7 +388,7 @@ public class AuthorizationService extends AbstractComponent {
             }
             indicesAccessControl = permission.authorize("indices:admin/aliases", aliasesAndIndices, metaData, fieldPermissionsCache);
             if (!indicesAccessControl.isGranted()) {
-                throw denial(authentication, "indices:admin/aliases", request, null);
+                throw denial(authentication, "indices:admin/aliases", request, permission.names(), null);
             }
             // no need to re-add the indicesAccessControl in the context,
             // because the create index call doesn't do anything FLS or DLS
@ -374,7 +403,7 @@ public class AuthorizationService extends AbstractComponent {
             authorizeBulkItems(authentication, (BulkShardRequest) request, permission, metaData, localIndices, authorizedIndices);
         }
 
-        grant(authentication, action, originalRequest, null);
+        grant(authentication, action, originalRequest, permission.names(), null);
     }
 
     private boolean hasSecurityIndexAccess(IndicesAccessControl indicesAccessControl) {
@ -389,13 +418,17 @@ public class AuthorizationService extends AbstractComponent {
 
     /**
      * Performs authorization checks on the items within a {@link BulkShardRequest}.
-     * This inspects the {@link BulkItemRequest items} within the request, computes an <em>implied</em> action for each item's
-     * {@link DocWriteRequest#opType()}, and then checks whether that action is allowed on the targeted index.
-     * Items that fail this checks are {@link BulkItemRequest#abort(String, Exception) aborted}, with an
-     * {@link #denial(Authentication, String, TransportRequest, Set) access denied} exception.
-     * Because a shard level request is for exactly 1 index, and there are a small number of possible item
-     * {@link DocWriteRequest.OpType types}, the number of distinct authorization checks that need to be performed is very small, but the
-     * results must be cached, to avoid adding a high overhead to each bulk request.
+     * This inspects the {@link BulkItemRequest items} within the request, computes
+     * an <em>implied</em> action for each item's {@link DocWriteRequest#opType()},
+     * and then checks whether that action is allowed on the targeted index. Items
+     * that fail this checks are {@link BulkItemRequest#abort(String, Exception)
+     * aborted}, with an
+     * {@link #denial(Authentication, String, TransportRequest, String[], Set) access
+     * denied} exception. Because a shard level request is for exactly 1 index, and
+     * there are a small number of possible item {@link DocWriteRequest.OpType
+     * types}, the number of distinct authorization checks that need to be performed
+     * is very small, but the results must be cached, to avoid adding a high
+     * overhead to each bulk request.
      */
     private void authorizeBulkItems(Authentication authentication, BulkShardRequest request, Role permission,
                                     MetaData metaData, Set<String> indices, AuthorizedIndices authorizedIndices) {
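The reworked Javadoc above describes one authorization check per distinct implied action, cached across the items of a bulk shard request. A minimal sketch of that idea with plain JDK types; the OpType enum and the granted-action predicate below are stand-ins for the real BulkShardRequest/Role machinery, not the actual X-Pack code:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Predicate;

    public class BulkItemAuthSketch {
        enum OpType { INDEX, CREATE, UPDATE, DELETE }

        // The "implied" action for an item, keyed purely off its OpType.
        static String impliedAction(OpType opType) {
            return opType == OpType.DELETE ? "indices:data/write/delete" : "indices:data/write/index";
        }

        // One authorization check per distinct implied action, cached across all items.
        static boolean[] checkItems(OpType[] items, Predicate<String> isActionGranted) {
            Map<String, Boolean> cache = new HashMap<>();
            boolean[] granted = new boolean[items.length];
            for (int i = 0; i < items.length; i++) {
                String action = impliedAction(items[i]);
                granted[i] = cache.computeIfAbsent(action, isActionGranted::test);
            }
            return granted;
        }
    }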
@ -429,7 +462,7 @@ public class AuthorizationService extends AbstractComponent {
                 return itemAccessControl.isGranted();
             });
             if (granted == false) {
-                item.abort(resolvedIndex, denial(authentication, itemAction, request, null));
+                item.abort(resolvedIndex, denial(authentication, itemAction, request, permission.names(), null));
             }
         }
     }
@ -454,20 +487,16 @@ public class AuthorizationService extends AbstractComponent {
     }
 
     private ResolvedIndices resolveIndexNames(Authentication authentication, String action, Object indicesRequest,
                                               TransportRequest mainRequest, MetaData metaData,
-                                              AuthorizedIndices authorizedIndices) {
+                                              AuthorizedIndices authorizedIndices, Role permission) {
         try {
             return indicesAndAliasesResolver.resolve(indicesRequest, metaData, authorizedIndices);
         } catch (Exception e) {
-            auditTrail.accessDenied(authentication.getUser(), action, mainRequest, null);
+            auditTrail.accessDenied(authentication.getUser(), action, mainRequest, permission.names(), null);
             throw e;
         }
     }
 
-    private void setIndicesAccessControl(IndicesAccessControl accessControl) {
-        putTransientIfNonExisting(INDICES_PERMISSIONS_KEY, accessControl);
-    }
-
     /**
      * Sets a function to resolve {@link IndicesAccessControl} to be used by
     * {@link #isDelayedIndicesAction(String) actions} that do not know their
@ -514,7 +543,7 @@ public class AuthorizationService extends AbstractComponent {
 
         if (roleNames.isEmpty()) {
             roleActionListener.onResponse(Role.EMPTY);
-        } else if (roleNames.contains(ReservedRolesStore.SUPERUSER_ROLE.name())) {
+        } else if (roleNames.contains(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName())) {
             roleActionListener.onResponse(ReservedRolesStore.SUPERUSER_ROLE);
         } else {
             rolesStore.roles(roleNames, fieldPermissionsCache, roleActionListener);
@ -562,28 +591,6 @@ public class AuthorizationService extends AbstractComponent {
                 action.equals(SearchTransportService.CLEAR_SCROLL_CONTEXTS_ACTION_NAME);
     }
 
-    /**
-     * Authorize some indices, throwing an exception if they are not authorized and returning
-     * the {@link IndicesAccessControl} if they are.
-     */
-    private IndicesAccessControl authorizeIndices(String action, TransportRequest request, Set<String> localIndices,
-            Set<String> specificIndices, Authentication authentication, Role permission, MetaData metaData) {
-        IndicesAccessControl indicesAccessControl = permission.authorize(action, localIndices, metaData, fieldPermissionsCache);
-        if (!indicesAccessControl.isGranted()) {
-            throw denial(authentication, action, request, specificIndices);
-        }
-        if (hasSecurityIndexAccess(indicesAccessControl)
-                && MONITOR_INDEX_PREDICATE.test(action) == false
-                && isSuperuser(authentication.getUser()) == false) {
-            // only the superusers are allowed to work with this index, but we should allow indices monitoring actions through for debugging
-            // purposes. These monitor requests also sometimes resolve indices concretely and then requests them
-            logger.debug("user [{}] attempted to directly perform [{}] against the security index [{}]",
-                    authentication.getUser().principal(), action, SecurityLifecycleService.SECURITY_INDEX_NAME);
-            throw denial(authentication, action, request, specificIndices);
-        }
-        return indicesAccessControl;
-    }
-
     static boolean checkSameUserPermissions(String action, TransportRequest request, Authentication authentication) {
         final boolean actionAllowed = SAME_USER_PRIVILEGE.test(action);
         if (actionAllowed) {
@ -629,22 +636,24 @@ public class AuthorizationService extends AbstractComponent {
     }
 
     ElasticsearchSecurityException denial(Authentication authentication, String action, TransportRequest request,
-                                          @Nullable Set<String> specificIndices) {
-        auditTrail.accessDenied(authentication.getUser(), action, request, specificIndices);
+                                          String[] roleNames, @Nullable Set<String> specificIndices) {
+        auditTrail.accessDenied(authentication.getUser(), action, request, roleNames, specificIndices);
         return denialException(authentication, action);
     }
 
-    private ElasticsearchSecurityException denyRunAs(Authentication authentication, String action, TransportRequest request) {
-        auditTrail.runAsDenied(authentication.getUser(), action, request);
+    private ElasticsearchSecurityException denyRunAs(Authentication authentication, String action, TransportRequest request,
+                                                     String[] roleNames) {
+        auditTrail.runAsDenied(authentication.getUser(), action, request, roleNames);
         return denialException(authentication, action);
     }
 
-    private void grant(Authentication authentication, String action, TransportRequest request, @Nullable Set<String> specificIndices) {
-        auditTrail.accessGranted(authentication.getUser(), action, request, specificIndices);
+    private void grant(Authentication authentication, String action, TransportRequest request,
+                       String[] roleNames, @Nullable Set<String> specificIndices) {
+        auditTrail.accessGranted(authentication.getUser(), action, request, roleNames, specificIndices);
     }
 
-    private void grantRunAs(Authentication authentication, String action, TransportRequest request) {
-        auditTrail.runAsGranted(authentication.getUser(), action, request);
+    private void grantRunAs(Authentication authentication, String action, TransportRequest request, String[] roleNames) {
+        auditTrail.runAsGranted(authentication.getUser(), action, request, roleNames);
     }
 
     private ElasticsearchSecurityException denialException(Authentication authentication, String action) {
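The widened signatures above thread the effective role names from the authorization decision into every audit call. A minimal sketch of that flow, assuming a simplified audit interface rather than the real org.elasticsearch.xpack.security.audit.AuditTrail:

    public class AuditRoleNamesSketch {
        interface SimpleAuditTrail {
            void accessGranted(String user, String action, String[] roleNames);
            void accessDenied(String user, String action, String[] roleNames);
        }

        // Mirror of grant(...): record the role names alongside the granted action.
        static void grant(SimpleAuditTrail audit, String user, String action, String[] roleNames) {
            audit.accessGranted(user, action, roleNames);
        }

        // Mirror of denial(...): audit first, then hand back an exception for the caller to throw.
        static RuntimeException deny(SimpleAuditTrail audit, String user, String action, String[] roleNames) {
            audit.accessDenied(user, action, roleNames);
            return new SecurityException("action [" + action + "] is unauthorized for user [" + user + "]");
        }
    }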
@ -665,7 +674,7 @@ public class AuthorizationService extends AbstractComponent {
 
     static boolean isSuperuser(User user) {
         return Arrays.stream(user.roles())
-                .anyMatch(ReservedRolesStore.SUPERUSER_ROLE.name()::equals);
+                .anyMatch(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName()::equals);
     }
 
     public static void addSettings(List<Setting<?>> settings) {
@ -16,6 +16,7 @@ import org.elasticsearch.xpack.security.audit.AuditTrailService;
 import org.elasticsearch.xpack.security.authc.Authentication;
 
 import static org.elasticsearch.xpack.security.authz.AuthorizationService.ORIGINATING_ACTION_KEY;
+import static org.elasticsearch.xpack.security.authz.AuthorizationService.ROLE_NAMES_KEY;
 
 /**
  * A {@link SearchOperationListener} that is used to provide authorization for scroll requests.
@ -59,7 +60,8 @@ public final class SecuritySearchOperationListener implements SearchOperationLis
                 final Authentication originalAuth = searchContext.scrollContext().getFromContext(Authentication.AUTHENTICATION_KEY);
                 final Authentication current = Authentication.getAuthentication(threadContext);
                 final String action = threadContext.getTransient(ORIGINATING_ACTION_KEY);
-                ensureAuthenticatedUserIsSame(originalAuth, current, auditTrailService, searchContext.id(), action, request);
+                ensureAuthenticatedUserIsSame(originalAuth, current, auditTrailService, searchContext.id(), action, request,
+                        threadContext.getTransient(ROLE_NAMES_KEY));
             }
         }
     }
@ -71,7 +73,7 @@ public final class SecuritySearchOperationListener implements SearchOperationLis
      * (or lookup) realm. To work around this we compare the username and the originating realm type.
      */
     static void ensureAuthenticatedUserIsSame(Authentication original, Authentication current, AuditTrailService auditTrailService,
-                                              long id, String action, TransportRequest request) {
+                                              long id, String action, TransportRequest request, String[] roleNames) {
         // this is really a best effort attempt since we cannot guarantee principal uniqueness
         // and realm names can change between nodes.
         final boolean samePrincipal = original.getUser().principal().equals(current.getUser().principal());
@ -90,7 +92,7 @@ public final class SecuritySearchOperationListener implements SearchOperationLis
 
         final boolean sameUser = samePrincipal && sameRealmType;
         if (sameUser == false) {
-            auditTrailService.accessDenied(current.getUser(), action, request, null);
+            auditTrailService.accessDenied(current.getUser(), action, request, roleNames, null);
             throw new SearchContextMissingException(id);
         }
     }
 }
@ -115,7 +115,7 @@ public final class FieldPermissions implements Accountable {
         return a.getNumStates() * 5; // wild guess, better than 0
     }
 
-    static Automaton initializePermittedFieldsAutomaton(FieldPermissionsDefinition fieldPermissionsDefinition) {
+    public static Automaton initializePermittedFieldsAutomaton(FieldPermissionsDefinition fieldPermissionsDefinition) {
         Set<FieldGrantExcludeGroup> groups = fieldPermissionsDefinition.getFieldGrantExcludeGroups();
         assert groups.size() > 0 : "there must always be a single group for field inclusion/exclusion";
         List<Automaton> automatonList =
@ -27,20 +27,20 @@ public final class Role {
 
     public static final Role EMPTY = Role.builder("__empty").build();
 
-    private final String name;
+    private final String[] names;
     private final ClusterPermission cluster;
     private final IndicesPermission indices;
     private final RunAsPermission runAs;
 
-    Role(String name, ClusterPermission cluster, IndicesPermission indices, RunAsPermission runAs) {
-        this.name = name;
+    Role(String[] names, ClusterPermission cluster, IndicesPermission indices, RunAsPermission runAs) {
+        this.names = names;
         this.cluster = Objects.requireNonNull(cluster);
         this.indices = Objects.requireNonNull(indices);
         this.runAs = Objects.requireNonNull(runAs);
     }
 
-    public String name() {
-        return name;
+    public String[] names() {
+        return names;
     }
 
     public ClusterPermission cluster() {
@ -55,12 +55,12 @@ public final class Role {
         return runAs;
     }
 
-    public static Builder builder(String name) {
-        return new Builder(name, null);
+    public static Builder builder(String... names) {
+        return new Builder(names, null);
     }
 
-    public static Builder builder(String name, FieldPermissionsCache fieldPermissionsCache) {
-        return new Builder(name, fieldPermissionsCache);
+    public static Builder builder(String[] names, FieldPermissionsCache fieldPermissionsCache) {
+        return new Builder(names, fieldPermissionsCache);
     }
 
     public static Builder builder(RoleDescriptor rd, FieldPermissionsCache fieldPermissionsCache) {
@ -91,19 +91,19 @@ public final class Role {
 
     public static class Builder {
 
-        private final String name;
+        private final String[] names;
         private ClusterPermission cluster = ClusterPermission.NONE;
         private RunAsPermission runAs = RunAsPermission.NONE;
         private List<IndicesPermission.Group> groups = new ArrayList<>();
         private FieldPermissionsCache fieldPermissionsCache = null;
 
-        private Builder(String name, FieldPermissionsCache fieldPermissionsCache) {
-            this.name = name;
+        private Builder(String[] names, FieldPermissionsCache fieldPermissionsCache) {
+            this.names = names;
             this.fieldPermissionsCache = fieldPermissionsCache;
         }
 
         private Builder(RoleDescriptor rd, @Nullable FieldPermissionsCache fieldPermissionsCache) {
-            this.name = rd.getName();
+            this.names = new String[] { rd.getName() };
             this.fieldPermissionsCache = fieldPermissionsCache;
             if (rd.getClusterPrivileges().length == 0) {
                 cluster = ClusterPermission.NONE;
@ -140,7 +140,7 @@ public final class Role {
         public Role build() {
             IndicesPermission indices = groups.isEmpty() ? IndicesPermission.NONE :
                     new IndicesPermission(groups.toArray(new IndicesPermission.Group[groups.size()]));
-            return new Role(name, cluster, indices, runAs);
+            return new Role(names, cluster, indices, runAs);
         }
 
         static List<IndicesPermission.Group> convertFromIndicesPrivileges(RoleDescriptor.IndicesPrivileges[] indicesPrivileges,
@ -5,35 +5,9 @@
  */
 package org.elasticsearch.xpack.security.authz.store;
 
-import org.elasticsearch.action.ActionListener;
-import org.elasticsearch.cluster.health.ClusterHealthStatus;
-import org.elasticsearch.cluster.health.ClusterIndexHealth;
-import org.elasticsearch.common.Strings;
-import org.elasticsearch.common.bytes.BytesReference;
-import org.elasticsearch.common.cache.Cache;
-import org.elasticsearch.common.cache.CacheBuilder;
-import org.elasticsearch.common.component.AbstractComponent;
-import org.elasticsearch.common.inject.internal.Nullable;
-import org.elasticsearch.common.settings.Setting;
-import org.elasticsearch.common.settings.Setting.Property;
-import org.elasticsearch.common.settings.Settings;
-import org.elasticsearch.common.util.concurrent.ThreadContext;
-import org.elasticsearch.license.XPackLicenseState;
-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
-import org.elasticsearch.common.util.concurrent.ReleasableLock;
-import org.elasticsearch.common.util.set.Sets;
-import org.elasticsearch.xpack.common.IteratingActionListener;
-import org.elasticsearch.xpack.security.authz.RoleDescriptor;
-import org.elasticsearch.xpack.security.authz.RoleDescriptor.IndicesPrivileges;
-import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsCache;
-import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsDefinition;
-import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsDefinition.FieldGrantExcludeGroup;
-import org.elasticsearch.xpack.security.authz.permission.Role;
-import org.elasticsearch.xpack.security.authz.privilege.ClusterPrivilege;
-import org.elasticsearch.xpack.security.authz.privilege.IndexPrivilege;
-import org.elasticsearch.xpack.security.authz.privilege.Privilege;
-
+import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
@ -48,6 +22,35 @@ import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.function.BiConsumer;
 import java.util.stream.Collectors;
 
+import org.apache.logging.log4j.message.ParameterizedMessage;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.cluster.health.ClusterHealthStatus;
+import org.elasticsearch.cluster.health.ClusterIndexHealth;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.cache.Cache;
+import org.elasticsearch.common.cache.CacheBuilder;
+import org.elasticsearch.common.component.AbstractComponent;
+import org.elasticsearch.common.inject.internal.Nullable;
+import org.elasticsearch.common.settings.Setting;
+import org.elasticsearch.common.settings.Setting.Property;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
+import org.elasticsearch.common.util.concurrent.ReleasableLock;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
+import org.elasticsearch.common.util.set.Sets;
+import org.elasticsearch.license.XPackLicenseState;
+import org.elasticsearch.xpack.common.IteratingActionListener;
+import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+import org.elasticsearch.xpack.security.authz.RoleDescriptor.IndicesPrivileges;
+import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsCache;
+import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsDefinition;
+import org.elasticsearch.xpack.security.authz.permission.FieldPermissionsDefinition.FieldGrantExcludeGroup;
+import org.elasticsearch.xpack.security.authz.permission.Role;
+import org.elasticsearch.xpack.security.authz.privilege.ClusterPrivilege;
+import org.elasticsearch.xpack.security.authz.privilege.IndexPrivilege;
+import org.elasticsearch.xpack.security.authz.privilege.Privilege;
+
 import static org.elasticsearch.xpack.security.Security.setting;
 
 /**
@ -143,14 +146,21 @@ public class CompositeRolesStore extends AbstractComponent {
     }
 
     private void roleDescriptors(Set<String> roleNames, ActionListener<Set<RoleDescriptor>> roleDescriptorActionListener) {
-        final Set<String> filteredRoleNames =
-                roleNames.stream().filter((s) -> negativeLookupCache.contains(s) == false).collect(Collectors.toSet());
+        final Set<String> filteredRoleNames = roleNames.stream().filter((s) -> {
+            if (negativeLookupCache.contains(s)) {
+                logger.debug("Requested role [{}] does not exist (cached)", s);
+                return false;
+            } else {
+                return true;
+            }
+        }).collect(Collectors.toSet());
         final Set<RoleDescriptor> builtInRoleDescriptors = getBuiltInRoleDescriptors(filteredRoleNames);
         Set<String> remainingRoleNames = difference(filteredRoleNames, builtInRoleDescriptors);
         if (remainingRoleNames.isEmpty()) {
             roleDescriptorActionListener.onResponse(Collections.unmodifiableSet(builtInRoleDescriptors));
         } else {
             nativeRolesStore.getRoleDescriptors(remainingRoleNames.toArray(Strings.EMPTY_ARRAY), ActionListener.wrap((descriptors) -> {
+                logger.debug(() -> new ParameterizedMessage("Roles [{}] were resolved from the native index store", names(descriptors)));
                 builtInRoleDescriptors.addAll(descriptors);
                 callCustomRoleProvidersIfEnabled(builtInRoleDescriptors, filteredRoleNames, roleDescriptorActionListener);
             }, e -> {
@ -169,6 +179,8 @@ public class CompositeRolesStore extends AbstractComponent {
             new IteratingActionListener<>(roleDescriptorActionListener, (rolesProvider, listener) -> {
                 // resolve descriptors with role provider
                 rolesProvider.accept(missing, ActionListener.wrap((resolvedDescriptors) -> {
+                    logger.debug(() ->
+                            new ParameterizedMessage("Roles [{}] were resolved by [{}]", names(resolvedDescriptors), rolesProvider));
                     builtInRoleDescriptors.addAll(resolvedDescriptors);
                     // remove resolved descriptors from the set of roles still needed to be resolved
                     for (RoleDescriptor descriptor : resolvedDescriptors) {
@ -187,6 +199,8 @@ public class CompositeRolesStore extends AbstractComponent {
                 return builtInRoleDescriptors;
             }).run();
         } else {
+            logger.debug(() ->
+                    new ParameterizedMessage("Requested roles [{}] do not exist", Strings.collectionToCommaDelimitedString(missing)));
             negativeLookupCache.addAll(missing);
             roleDescriptorActionListener.onResponse(Collections.unmodifiableSet(builtInRoleDescriptors));
         }
@ -199,15 +213,24 @@ public class CompositeRolesStore extends AbstractComponent {
         final Set<RoleDescriptor> descriptors = reservedRolesStore.roleDescriptors().stream()
                 .filter((rd) -> roleNames.contains(rd.getName()))
                 .collect(Collectors.toCollection(HashSet::new));
+        if (descriptors.size() > 0) {
+            logger.debug(() -> new ParameterizedMessage("Roles [{}] are builtin roles", names(descriptors)));
+        }
         final Set<String> difference = difference(roleNames, descriptors);
         if (difference.isEmpty() == false) {
-            descriptors.addAll(fileRolesStore.roleDescriptors(difference));
+            final Set<RoleDescriptor> fileRoles = fileRolesStore.roleDescriptors(difference);
+            logger.debug(() ->
+                    new ParameterizedMessage("Roles [{}] were resolved from [{}]", names(fileRoles), fileRolesStore.getFile()));
+            descriptors.addAll(fileRoles);
         }
 
         return descriptors;
     }
 
+    private String names(Collection<RoleDescriptor> descriptors) {
+        return descriptors.stream().map(RoleDescriptor::getName).collect(Collectors.joining(","));
+    }
+
     private Set<String> difference(Set<String> roleNames, Set<RoleDescriptor> descriptors) {
         Set<String> foundNames = descriptors.stream().map(RoleDescriptor::getName).collect(Collectors.toSet());
         return Sets.difference(roleNames, foundNames);
@ -217,13 +240,12 @@ public class CompositeRolesStore extends AbstractComponent {
         if (roleDescriptors.isEmpty()) {
             return Role.EMPTY;
         }
-        StringBuilder nameBuilder = new StringBuilder();
         Set<String> clusterPrivileges = new HashSet<>();
         Set<String> runAs = new HashSet<>();
         Map<Set<String>, MergeableIndicesPrivilege> indicesPrivilegesMap = new HashMap<>();
+        List<String> roleNames = new ArrayList<>(roleDescriptors.size());
         for (RoleDescriptor descriptor : roleDescriptors) {
-            nameBuilder.append(descriptor.getName());
-            nameBuilder.append('_');
+            roleNames.add(descriptor.getName());
             if (descriptor.getClusterPrivileges() != null) {
                 clusterPrivileges.addAll(Arrays.asList(descriptor.getClusterPrivileges()));
             }
@@ -254,7 +276,7 @@ public class CompositeRolesStore extends AbstractComponent {

        final Set<String> clusterPrivs = clusterPrivileges.isEmpty() ? null : clusterPrivileges;
        final Privilege runAsPrivilege = runAs.isEmpty() ? Privilege.NONE : new Privilege(runAs, runAs.toArray(Strings.EMPTY_ARRAY));
-       Role.Builder builder = Role.builder(nameBuilder.toString(), fieldPermissionsCache)
+       Role.Builder builder = Role.builder(roleNames.toArray(new String[roleNames.size()]), fieldPermissionsCache)
            .cluster(ClusterPrivilege.get(clusterPrivs))
            .runAs(runAsPrivilege);
        indicesPrivilegesMap.entrySet().forEach((entry) -> {
@@ -112,6 +112,10 @@ public class FileRolesStore extends AbstractComponent {
        }
    }

+   public Path getFile() {
+       return file;
+   }
+
    public static Path resolveFile(Environment env) {
        return XPackPlugin.resolveConfigFile(env, "roles.yml");
    }
@@ -5,26 +5,24 @@
  */
 package org.elasticsearch.xpack.security.authz.store;

-import org.elasticsearch.common.collect.MapBuilder;
-import org.elasticsearch.xpack.monitoring.action.MonitoringBulkAction;
 import org.elasticsearch.xpack.security.authz.RoleDescriptor;
 import org.elasticsearch.xpack.security.authz.permission.Role;
+import org.elasticsearch.xpack.security.SecurityExtension;
 import org.elasticsearch.xpack.security.support.MetadataUtils;
-import org.elasticsearch.xpack.security.user.KibanaUser;
 import org.elasticsearch.xpack.security.user.SystemUser;
 import org.elasticsearch.xpack.security.user.XPackUser;
-import org.elasticsearch.xpack.watcher.execution.TriggeredWatchStore;
-import org.elasticsearch.xpack.watcher.history.HistoryStore;
-import org.elasticsearch.xpack.watcher.watch.Watch;

 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Map;
+import java.util.ServiceLoader;
 import java.util.Set;

 public class ReservedRolesStore {

-    public static final RoleDescriptor SUPERUSER_ROLE_DESCRIPTOR = new RoleDescriptor("superuser", new String[] { "all" },
+    public static final RoleDescriptor SUPERUSER_ROLE_DESCRIPTOR = new RoleDescriptor("superuser",
+            new String[] { "all" },
             new RoleDescriptor.IndicesPrivileges[] {
                     RoleDescriptor.IndicesPrivileges.builder().indices("*").privileges("all").build()},
             new String[] { "*" },
@ -33,82 +31,16 @@ public class ReservedRolesStore {
|
|||||||
private static final Map<String, RoleDescriptor> RESERVED_ROLES = initializeReservedRoles();
|
private static final Map<String, RoleDescriptor> RESERVED_ROLES = initializeReservedRoles();
|
||||||
|
|
||||||
private static Map<String, RoleDescriptor> initializeReservedRoles() {
|
private static Map<String, RoleDescriptor> initializeReservedRoles() {
|
||||||
return MapBuilder.<String, RoleDescriptor>newMapBuilder()
|
Map<String, RoleDescriptor> roles = new HashMap<>();
|
||||||
.put("superuser", new RoleDescriptor("superuser", new String[] { "all" },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
roles.put("superuser", SUPERUSER_ROLE_DESCRIPTOR);
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices("*").privileges("all").build()},
|
|
||||||
new String[] { "*" },
|
// Services are loaded through SPI, and are defined in their META-INF/services
|
||||||
MetadataUtils.DEFAULT_RESERVED_METADATA))
|
for(SecurityExtension ext : ServiceLoader.load(SecurityExtension.class, SecurityExtension.class.getClassLoader())) {
|
||||||
.put("transport_client", new RoleDescriptor("transport_client", new String[] { "transport_client" }, null, null,
|
roles.putAll(ext.getReservedRoles());
|
||||||
MetadataUtils.DEFAULT_RESERVED_METADATA))
|
}
|
||||||
.put("kibana_user", new RoleDescriptor("kibana_user", null, new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(".kibana*").privileges("manage", "read", "index", "delete")
|
return Collections.unmodifiableMap(roles);
|
||||||
.build() }, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("monitoring_user", new RoleDescriptor("monitoring_user", null, new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder()
|
|
||||||
.indices(".monitoring-*").privileges("read", "read_cross_cluster").build()
|
|
||||||
},
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("remote_monitoring_agent", new RoleDescriptor("remote_monitoring_agent",
|
|
||||||
new String[] {
|
|
||||||
"manage_index_templates", "manage_ingest_pipelines", "monitor",
|
|
||||||
"cluster:monitor/xpack/watcher/watch/get",
|
|
||||||
"cluster:admin/xpack/watcher/watch/put",
|
|
||||||
"cluster:admin/xpack/watcher/watch/delete",
|
|
||||||
},
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(".monitoring-*").privileges("all").build() },
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("ingest_admin", new RoleDescriptor("ingest_admin", new String[] { "manage_index_templates", "manage_pipeline" },
|
|
||||||
null, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
// reporting_user doesn't have any privileges in Elasticsearch, and Kibana authorizes privileges based on this role
|
|
||||||
.put("reporting_user", new RoleDescriptor("reporting_user", null, null,
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("kibana_dashboard_only_user", new RoleDescriptor(
|
|
||||||
"kibana_dashboard_only_user",
|
|
||||||
null,
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder()
|
|
||||||
.indices(".kibana*").privileges("read", "view_index_metadata").build()
|
|
||||||
},
|
|
||||||
null,
|
|
||||||
MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put(KibanaUser.ROLE_NAME, new RoleDescriptor(KibanaUser.ROLE_NAME,
|
|
||||||
new String[] { "monitor", "manage_index_templates", MonitoringBulkAction.NAME },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(".kibana*", ".reporting-*").privileges("all").build(),
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder()
|
|
||||||
.indices(".monitoring-*").privileges("read", "read_cross_cluster").build()
|
|
||||||
},
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("logstash_system", new RoleDescriptor("logstash_system", new String[] { "monitor", MonitoringBulkAction.NAME},
|
|
||||||
null, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("machine_learning_user", new RoleDescriptor("machine_learning_user", new String[] { "monitor_ml" },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] { RoleDescriptor.IndicesPrivileges.builder().indices(".ml-anomalies*",
|
|
||||||
".ml-notifications").privileges("view_index_metadata", "read").build() },
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("machine_learning_admin", new RoleDescriptor("machine_learning_admin", new String[] { "manage_ml" },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(".ml-*").privileges("view_index_metadata", "read")
|
|
||||||
.build() }, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("watcher_admin", new RoleDescriptor("watcher_admin", new String[] { "manage_watcher" },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(Watch.INDEX, TriggeredWatchStore.INDEX_NAME,
|
|
||||||
HistoryStore.INDEX_PREFIX + "*").privileges("read").build() },
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("watcher_user", new RoleDescriptor("watcher_user", new String[] { "monitor_watcher" },
|
|
||||||
new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(Watch.INDEX)
|
|
||||||
.privileges("read")
|
|
||||||
.build(),
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(HistoryStore.INDEX_PREFIX + "*")
|
|
||||||
.privileges("read")
|
|
||||||
.build() }, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.put("logstash_admin", new RoleDescriptor("logstash_admin", null, new RoleDescriptor.IndicesPrivileges[] {
|
|
||||||
RoleDescriptor.IndicesPrivileges.builder().indices(".logstash*")
|
|
||||||
.privileges("create", "delete", "index", "manage", "read").build() },
|
|
||||||
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
|
|
||||||
.immutableMap();
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public Map<String, Object> usageStats() {
|
public Map<String, Object> usageStats() {
|
||||||
@@ -7,8 +7,6 @@ package org.elasticsearch.xpack.security.user;

 import org.elasticsearch.xpack.security.support.MetadataUtils;

-import java.util.HashMap;
-import java.util.Map;

 /**
  * The reserved {@code elastic} superuser. Has full permission/access to the cluster/indices and can
@@ -17,7 +15,8 @@ import java.util.Map;
 public class ElasticUser extends User {

     public static final String NAME = "elastic";
-    private static final String ROLE_NAME = "superuser";
+    // used for testing in a different package
+    public static final String ROLE_NAME = "superuser";

     public ElasticUser(boolean enabled) {
         super(NAME, new String[] { ROLE_NAME }, null, null, MetadataUtils.DEFAULT_RESERVED_METADATA, enabled);
@@ -14,7 +14,7 @@ import org.elasticsearch.xpack.security.support.MetadataUtils;
 public class LogstashSystemUser extends User {

     public static final String NAME = "logstash_system";
-    private static final String ROLE_NAME = "logstash_system";
+    public static final String ROLE_NAME = "logstash_system";
     public static final Version DEFINED_SINCE = Version.V_5_2_0;
     public static final BuiltinUserInfo USER_INFO = new BuiltinUserInfo(NAME, ROLE_NAME, DEFINED_SINCE);

@@ -449,7 +449,7 @@ public class Watcher implements ActionPlugin {
            new FixedExecutorBuilder(
                    settings,
                    InternalWatchExecutor.THREAD_POOL_NAME,
-                   5 * EsExecutors.numberOfProcessors(settings),
+                   getWatcherThreadPoolSize(settings),
                    1000,
                    "xpack.watcher.thread_pool");
            return Collections.singletonList(builder);
@@ -457,6 +457,28 @@ public class Watcher implements ActionPlugin {
        return Collections.emptyList();
    }

+   /**
+    * A method to indicate the size of the watcher thread pool
+    * As watches are primarily bound on I/O waiting and execute
+    * synchronously, it makes sense to have a certain minimum of a
+    * threadpool size. This means you should start with a fair number
+    * of threads which is more than the number of CPUs, but you also need
+    * to ensure that this number does not go crazy high if you have really
+    * beefy machines. This can still be configured manually.
+    *
+    * Calculation is as follows:
+    * Use five times the number of processors up until 50, then stick with the
+    * number of processors.
+    *
+    * @param settings The current settings
+    * @return A number between 5 and the number of processors
+    */
+   static int getWatcherThreadPoolSize(Settings settings) {
+       int numberOfProcessors = EsExecutors.numberOfProcessors(settings);
+       long size = Math.max(Math.min(5 * numberOfProcessors, 50), numberOfProcessors);
+       return Math.toIntExact(size);
+   }
+
    @Override
    public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
        if (false == enabled) {
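For illustration only, not part of the commit: the sizing formula above can be sanity-checked in isolation. The class and method names below are hypothetical; the snippet merely restates the same arithmetic outside of Elasticsearch.

class WatcherPoolSizeSketch {
    // Same arithmetic as getWatcherThreadPoolSize: 5x the CPU count, capped at 50,
    // but never below one thread per processor on very large machines.
    static int poolSize(int processors) {
        return Math.toIntExact(Math.max(Math.min(5L * processors, 50), processors));
    }

    public static void main(String[] args) {
        System.out.println(poolSize(1));   // 5
        System.out.println(poolSize(4));   // 20
        System.out.println(poolSize(10));  // 50
        System.out.println(poolSize(16));  // 50 (capped)
        System.out.println(poolSize(64));  // 64 (falls back to processor count)
    }
}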
@@ -0,0 +1,55 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License;
+ * you may not use this file except in compliance with the Elastic License.
+ */
+package org.elasticsearch.xpack.watcher;
+
+import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+import org.elasticsearch.xpack.security.SecurityExtension;
+import org.elasticsearch.xpack.security.support.MetadataUtils;
+import org.elasticsearch.xpack.watcher.execution.TriggeredWatchStore;
+import org.elasticsearch.xpack.watcher.history.HistoryStore;
+import org.elasticsearch.xpack.watcher.watch.Watch;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+public class WatcherSecurityExtension implements SecurityExtension {
+    @Override
+    public Map<String, RoleDescriptor> getReservedRoles() {
+        Map<String, RoleDescriptor> roles = new HashMap<>();
+
+        roles.put("watcher_admin",
+                  new RoleDescriptor("watcher_admin",
+                                     new String[] { "manage_watcher" },
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(Watch.INDEX,
+                                                      TriggeredWatchStore.INDEX_NAME,
+                                                      HistoryStore.INDEX_PREFIX + "*")
+                                             .privileges("read").build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        roles.put("watcher_user",
+                  new RoleDescriptor("watcher_user",
+                                     new String[] { "monitor_watcher" },
+                                     new RoleDescriptor.IndicesPrivileges[] {
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(Watch.INDEX)
+                                             .privileges("read")
+                                             .build(),
+                                         RoleDescriptor.IndicesPrivileges.builder()
+                                             .indices(HistoryStore.INDEX_PREFIX + "*")
+                                             .privileges("read")
+                                             .build()
+                                     },
+                                     null,
+                                     MetadataUtils.DEFAULT_RESERVED_METADATA));
+
+        return Collections.unmodifiableMap(roles);
+    }
+}
@@ -121,7 +121,7 @@ public class IncidentEvent implements ToXContentObject {
            builder.endObject();
        }
        if (contexts != null && contexts.length > 0) {
-           builder.startArray(Fields.CONTEXT.getPreferredName());
+           builder.startArray(Fields.CONTEXTS.getPreferredName());
            for (IncidentEventContext context : contexts) {
                context.toXContent(builder, params);
            }
@@ -154,7 +154,7 @@ public class IncidentEvent implements ToXContentObject {
        }
        builder.field(Fields.ATTACH_PAYLOAD.getPreferredName(), attachPayload);
        if (contexts != null) {
-           builder.startArray(Fields.CONTEXT.getPreferredName());
+           builder.startArray(Fields.CONTEXTS.getPreferredName());
            for (IncidentEventContext context : contexts) {
                context.toXContent(builder, params);
            }
@@ -265,7 +265,7 @@ public class IncidentEvent implements ToXContentObject {
            proxy.toXContent(builder, params);
        }
        if (contexts != null) {
-           builder.startArray(Fields.CONTEXT.getPreferredName());
+           builder.startArray(Fields.CONTEXTS.getPreferredName());
            for (IncidentEventContext.Template context : contexts) {
                context.toXContent(builder, params);
            }
@@ -341,7 +341,7 @@ public class IncidentEvent implements ToXContentObject {
                throw new ElasticsearchParseException("could not parse pager duty event template. failed to parse field [{}], " +
                        "expected a boolean value but found [{}] instead", Fields.ATTACH_PAYLOAD.getPreferredName(), token);
            }
-       } else if (Fields.CONTEXT.match(currentFieldName)) {
+       } else if (Fields.CONTEXTS.match(currentFieldName) || Fields.CONTEXT_DEPRECATED.match(currentFieldName)) {
            if (token == XContentParser.Token.START_ARRAY) {
                List<IncidentEventContext.Template> list = new ArrayList<>();
                while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
@@ -349,7 +349,7 @@ public class IncidentEvent implements ToXContentObject {
                        list.add(IncidentEventContext.Template.parse(parser));
                    } catch (ElasticsearchParseException e) {
                        throw new ElasticsearchParseException("could not parse pager duty event template. failed to parse field " +
-                               "[{}]", Fields.CONTEXT.getPreferredName());
+                               "[{}]", parser.currentName());
                    }
                }
                contexts = list.toArray(new IncidentEventContext.Template[list.size()]);
@@ -438,7 +438,11 @@ public class IncidentEvent implements ToXContentObject {
        ParseField CLIENT = new ParseField("client");
        ParseField CLIENT_URL = new ParseField("client_url");
        ParseField ATTACH_PAYLOAD = new ParseField("attach_payload");
-       ParseField CONTEXT = new ParseField("context");
+       ParseField CONTEXTS = new ParseField("contexts");
+       // this field exists because in versions prior 6.0 we accidentally used context instead of contexts and thus the correct data
+       // was never picked up on the pagerduty side
+       // we need to keep this for BWC
+       ParseField CONTEXT_DEPRECATED = new ParseField("context");

        ParseField SERVICE_KEY = new ParseField("service_key");
        ParseField PAYLOAD = new ParseField("payload");
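For illustration only, not part of the commit: the rename above means watcher now serializes the array under the field name PagerDuty reads ("contexts"), while parsing still accepts the legacy "context" spelling for older watches. A minimal, hypothetical restatement of that acceptance rule (plain strings instead of ParseField):

class ContextsFieldSketch {
    static final String CONTEXTS = "contexts";           // written from now on
    static final String CONTEXT_DEPRECATED = "context";  // pre-6.0 spelling, still parsed

    // Mirrors the "else if" branch in IncidentEvent.parse shown above.
    static boolean isContextsField(String currentFieldName) {
        return CONTEXTS.equals(currentFieldName) || CONTEXT_DEPRECATED.equals(currentFieldName);
    }
}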
@@ -0,0 +1,5 @@
+org.elasticsearch.xpack.logstash.LogstashSecurityExtension
+org.elasticsearch.xpack.ml.MachineLearningSecurityExtension
+org.elasticsearch.xpack.monitoring.MonitoringSecurityExtension
+org.elasticsearch.xpack.security.StackSecurityExtension
+org.elasticsearch.xpack.watcher.WatcherSecurityExtension
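For illustration only, not part of the commit: the file above is a standard META-INF/services registration, and ReservedRolesStore picks the listed implementations up with java.util.ServiceLoader (the for-loop visible earlier in this diff). The sketch below uses hypothetical, simplified names to show the mechanism end to end.

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Simplified stand-in for SecurityExtension; RoleDescriptor is reduced to String here.
interface SecurityExtensionSketch {
    Map<String, String> getReservedRoles();
}

class ReservedRolesLoaderSketch {
    static Map<String, String> loadReservedRoles() {
        Map<String, String> roles = new HashMap<>();
        // Each jar contributes a META-INF/services/<interface-name> file naming its
        // implementations, like the five entries added in this commit.
        for (SecurityExtensionSketch ext : ServiceLoader.load(SecurityExtensionSketch.class)) {
            roles.putAll(ext.getReservedRoles());
        }
        return roles;
    }
}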
@@ -37,10 +37,6 @@
      },
      "name": {
        "type": "keyword"
-     },
-     "timestamp": {
-       "type": "date",
-       "format": "date_time"
      }
    }
  },
@ -71,6 +67,136 @@
|
|||||||
"type": "keyword"
|
"type": "keyword"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
},
|
||||||
|
"metrics": {
|
||||||
|
"properties": {
|
||||||
|
"beat": {
|
||||||
|
"properties": {
|
||||||
|
"memstats": {
|
||||||
|
"properties": {
|
||||||
|
"gc_next": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"memory_alloc": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"memory_total": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"libbeat": {
|
||||||
|
"properties": {
|
||||||
|
"config": {
|
||||||
|
"properties": {
|
||||||
|
"module": {
|
||||||
|
"properties": {
|
||||||
|
"running": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"starts": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"stops": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"reloads": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"output": {
|
||||||
|
"properties": {
|
||||||
|
"events": {
|
||||||
|
"properties": {
|
||||||
|
"acked": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"active": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"batches": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"failed": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"total": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"read": {
|
||||||
|
"properties": {
|
||||||
|
"bytes": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"errors": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"type": {
|
||||||
|
"type": "keyword"
|
||||||
|
},
|
||||||
|
"write": {
|
||||||
|
"properties": {
|
||||||
|
"bytes": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"errors": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"pipeline": {
|
||||||
|
"properties": {
|
||||||
|
"clients": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"events": {
|
||||||
|
"properties": {
|
||||||
|
"active": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"dropped": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"failed": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"filtered": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"published": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"retry": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"total": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"queue": {
|
||||||
|
"properties": {
|
||||||
|
"acked": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@@ -40,6 +40,9 @@
      "principal": {
        "type": "keyword"
      },
+     "roles": {
+       "type": "keyword"
+     },
      "run_by_principal": {
        "type": "keyword"
      },
@@ -5,24 +5,36 @@
  */
 package org.elasticsearch.integration;

+import org.elasticsearch.action.admin.indices.get.GetIndexResponse;
+import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;
+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;
+import org.elasticsearch.action.fieldcaps.FieldCapabilities;
+import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;
+import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
 import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.cluster.metadata.MappingMetaData;
+import org.elasticsearch.common.collect.ImmutableOpenMap;
 import org.elasticsearch.common.settings.SecureString;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.index.IndexModule;
 import org.elasticsearch.index.query.QueryBuilders;
+import org.elasticsearch.indices.IndicesModule;
 import org.elasticsearch.search.sort.SortOrder;
+import org.elasticsearch.test.SecurityIntegTestCase;
 import org.elasticsearch.xpack.XPackSettings;
 import org.elasticsearch.xpack.security.authc.support.Hasher;
-import org.elasticsearch.test.SecurityIntegTestCase;

 import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;

 import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDIATE;
-import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
-import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
 import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
 import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;
 import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;
+import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
+import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
 import static org.hamcrest.Matchers.equalTo;

 public class DocumentAndFieldLevelSecurityTests extends SecurityIntegTestCase {
@@ -92,7 +104,7 @@ public class DocumentAndFieldLevelSecurityTests extends SecurityIntegTestCase {
                .build();
    }

-   public void testSimpleQuery() throws Exception {
+   public void testSimpleQuery() {
        assertAcked(client().admin().indices().prepareCreate("test")
                .addMapping("type1", "field1", "type=text", "field2", "type=text")
        );
@@ -130,7 +142,7 @@ public class DocumentAndFieldLevelSecurityTests extends SecurityIntegTestCase {
        assertThat(response.getHits().getAt(1).getSourceAsMap().get("field2").toString(), equalTo("value2"));
    }

-   public void testDLSIsAppliedBeforeFLS() throws Exception {
+   public void testDLSIsAppliedBeforeFLS() {
        assertAcked(client().admin().indices().prepareCreate("test")
                .addMapping("type1", "field1", "type=text", "field2", "type=text")
        );
@@ -157,7 +169,7 @@ public class DocumentAndFieldLevelSecurityTests extends SecurityIntegTestCase {
        assertHitCount(response, 0);
    }

-   public void testQueryCache() throws Exception {
+   public void testQueryCache() {
        assertAcked(client().admin().indices().prepareCreate("test")
                .setSettings(Settings.builder().put(IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING.getKey(), true))
                .addMapping("type1", "field1", "type=text", "field2", "type=text")
@ -214,4 +226,216 @@ public class DocumentAndFieldLevelSecurityTests extends SecurityIntegTestCase {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public void testGetMappingsIsFiltered() {
|
||||||
|
assertAcked(client().admin().indices().prepareCreate("test")
|
||||||
|
.addMapping("type1", "field1", "type=text", "field2", "type=text")
|
||||||
|
);
|
||||||
|
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
|
||||||
|
{
|
||||||
|
GetMappingsResponse getMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetMappings("test").get();
|
||||||
|
assertExpectedFields(getMappingsResponse.getMappings(), "field1");
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
GetMappingsResponse getMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetMappings("test").get();
|
||||||
|
assertExpectedFields(getMappingsResponse.getMappings(), "field2");
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
GetMappingsResponse getMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetMappings("test").get();
|
||||||
|
assertExpectedFields(getMappingsResponse.getMappings(), "field1");
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
GetMappingsResponse getMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetMappings("test").get();
|
||||||
|
assertExpectedFields(getMappingsResponse.getMappings(), "field1", "field2");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testGetIndexMappingsIsFiltered() {
|
||||||
|
assertAcked(client().admin().indices().prepareCreate("test")
|
||||||
|
.addMapping("type1", "field1", "type=text", "field2", "type=text")
|
||||||
|
);
|
||||||
|
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
|
||||||
|
{
|
||||||
|
GetIndexResponse getIndexResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetIndex().setIndices("test").get();
|
||||||
|
assertExpectedFields(getIndexResponse.getMappings(), "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetIndexResponse getIndexResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetIndex().setIndices("test").get();
|
||||||
|
assertExpectedFields(getIndexResponse.getMappings(), "field2");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetIndexResponse getIndexResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetIndex().setIndices("test").get();
|
||||||
|
assertExpectedFields(getIndexResponse.getMappings(), "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetIndexResponse getIndexResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetIndex().setIndices("test").get();
|
||||||
|
assertExpectedFields(getIndexResponse.getMappings(), "field1", "field2");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testGetFieldMappingsIsFiltered() {
|
||||||
|
assertAcked(client().admin().indices().prepareCreate("test")
|
||||||
|
.addMapping("type1", "field1", "type=text", "field2", "type=text")
|
||||||
|
);
|
||||||
|
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
|
||||||
|
{
|
||||||
|
GetFieldMappingsResponse getFieldMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetFieldMappings("test").setFields("*").get();
|
||||||
|
|
||||||
|
Map<String, Map<String, Map<String, GetFieldMappingsResponse.FieldMappingMetaData>>> mappings =
|
||||||
|
getFieldMappingsResponse.mappings();
|
||||||
|
assertEquals(1, mappings.size());
|
||||||
|
assertExpectedFields(mappings.get("test"), "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetFieldMappingsResponse getFieldMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetFieldMappings("test").setFields("*").get();
|
||||||
|
|
||||||
|
Map<String, Map<String, Map<String, GetFieldMappingsResponse.FieldMappingMetaData>>> mappings =
|
||||||
|
getFieldMappingsResponse.mappings();
|
||||||
|
assertEquals(1, mappings.size());
|
||||||
|
assertExpectedFields(mappings.get("test"), "field2");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetFieldMappingsResponse getFieldMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetFieldMappings("test").setFields("*").get();
|
||||||
|
|
||||||
|
Map<String, Map<String, Map<String, GetFieldMappingsResponse.FieldMappingMetaData>>> mappings =
|
||||||
|
getFieldMappingsResponse.mappings();
|
||||||
|
assertEquals(1, mappings.size());
|
||||||
|
assertExpectedFields(mappings.get("test"), "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
GetFieldMappingsResponse getFieldMappingsResponse = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD)))
|
||||||
|
.admin().indices().prepareGetFieldMappings("test").setFields("*").get();
|
||||||
|
|
||||||
|
Map<String, Map<String, Map<String, GetFieldMappingsResponse.FieldMappingMetaData>>> mappings =
|
||||||
|
getFieldMappingsResponse.mappings();
|
||||||
|
assertEquals(1, mappings.size());
|
||||||
|
assertExpectedFields(mappings.get("test"), "field1", "field2");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testFieldCapabilitiesIsFiltered() {
|
||||||
|
assertAcked(client().admin().indices().prepareCreate("test")
|
||||||
|
.addMapping("type1", "field1", "type=text", "field2", "type=text")
|
||||||
|
);
|
||||||
|
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
|
||||||
|
.setRefreshPolicy(IMMEDIATE)
|
||||||
|
.get();
|
||||||
|
|
||||||
|
{
|
||||||
|
FieldCapabilitiesRequest fieldCapabilitiesRequest = new FieldCapabilitiesRequest().fields("*").indices("test");
|
||||||
|
FieldCapabilitiesResponse response = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD)))
|
||||||
|
.fieldCaps(fieldCapabilitiesRequest).actionGet();
|
||||||
|
assertExpectedFields(response, "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
FieldCapabilitiesRequest fieldCapabilitiesRequest = new FieldCapabilitiesRequest().fields("*").indices("test");
|
||||||
|
FieldCapabilitiesResponse response = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD)))
|
||||||
|
.fieldCaps(fieldCapabilitiesRequest).actionGet();
|
||||||
|
assertExpectedFields(response, "field2");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
FieldCapabilitiesRequest fieldCapabilitiesRequest = new FieldCapabilitiesRequest().fields("*").indices("test");
|
||||||
|
FieldCapabilitiesResponse response = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD)))
|
||||||
|
.fieldCaps(fieldCapabilitiesRequest).actionGet();
|
||||||
|
assertExpectedFields(response, "field1");
|
||||||
|
}
|
||||||
|
{
|
||||||
|
FieldCapabilitiesRequest fieldCapabilitiesRequest = new FieldCapabilitiesRequest().fields("*").indices("test");
|
||||||
|
FieldCapabilitiesResponse response = client().filterWithHeader(
|
||||||
|
Collections.singletonMap(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD)))
|
||||||
|
.fieldCaps(fieldCapabilitiesRequest).actionGet();
|
||||||
|
assertExpectedFields(response, "field1", "field2");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
@SuppressWarnings("unchecked")
|
||||||
|
private static void assertExpectedFields(ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappings,
|
||||||
|
String... fields) {
|
||||||
|
Map<String, Object> sourceAsMap = mappings.get("test").get("type1").getSourceAsMap();
|
||||||
|
assertEquals(1, sourceAsMap.size());
|
||||||
|
Map<String, Object> properties = (Map<String, Object>)sourceAsMap.get("properties");
|
||||||
|
assertEquals(fields.length, properties.size());
|
||||||
|
for (String field : fields) {
|
||||||
|
assertNotNull(properties.get(field));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private static void assertExpectedFields(FieldCapabilitiesResponse fieldCapabilitiesResponse, String... expectedFields) {
|
||||||
|
Map<String, Map<String, FieldCapabilities>> responseMap = fieldCapabilitiesResponse.get();
|
||||||
|
Set<String> builtInMetaDataFields = IndicesModule.getBuiltInMetaDataFields();
|
||||||
|
for (String field : builtInMetaDataFields) {
|
||||||
|
Map<String, FieldCapabilities> remove = responseMap.remove(field);
|
||||||
|
assertNotNull(" expected field [" + field + "] not found", remove);
|
||||||
|
}
|
||||||
|
for (String field : expectedFields) {
|
||||||
|
Map<String, FieldCapabilities> remove = responseMap.remove(field);
|
||||||
|
assertNotNull(" expected field [" + field + "] not found", remove);
|
||||||
|
}
|
||||||
|
assertEquals("Some unexpected fields were returned: " + responseMap.keySet(), 0, responseMap.size());
|
||||||
|
}
|
||||||
|
|
||||||
|
private static void assertExpectedFields(Map<String, Map<String, GetFieldMappingsResponse.FieldMappingMetaData>> mappings,
|
||||||
|
String... expectedFields) {
|
||||||
|
assertEquals(1, mappings.size());
|
||||||
|
Map<String, GetFieldMappingsResponse.FieldMappingMetaData> fields = new HashMap<>(mappings.get("type1"));
|
||||||
|
Set<String> builtInMetaDataFields = IndicesModule.getBuiltInMetaDataFields();
|
||||||
|
for (String field : builtInMetaDataFields) {
|
||||||
|
GetFieldMappingsResponse.FieldMappingMetaData fieldMappingMetaData = fields.remove(field);
|
||||||
|
assertNotNull(" expected field [" + field + "] not found", fieldMappingMetaData);
|
||||||
|
}
|
||||||
|
for (String field : expectedFields) {
|
||||||
|
GetFieldMappingsResponse.FieldMappingMetaData fieldMappingMetaData = fields.remove(field);
|
||||||
|
assertNotNull("expected field [" + field + "] not found", fieldMappingMetaData);
|
||||||
|
}
|
||||||
|
assertEquals("Some unexpected fields were returned: " + fields.keySet(), 0, fields.size());
|
||||||
|
}
|
||||||
}
|
}
|
||||||
@@ -192,7 +192,7 @@ public class OpenJobActionTests extends ESTestCase {
        metaData.putCustom(PersistentTasksCustomMetaData.TYPE, tasks);
        cs.metaData(metaData);
        cs.routingTable(routingTable.build());
-       Assignment result = OpenJobAction.selectLeastLoadedMlNode("job_id2", cs.build(), 2, maxRunningJobsPerNode, 30, logger);
+       Assignment result = OpenJobAction.selectLeastLoadedMlNode("job_id0", cs.build(), 2, maxRunningJobsPerNode, 30, logger);
        assertNull(result.getExecutorNode());
        assertTrue(result.getExplanation().contains("because this node is full. Number of opened jobs [" + maxRunningJobsPerNode
                + "], xpack.ml.max_open_jobs [" + maxRunningJobsPerNode + "]"));
@@ -6,7 +6,6 @@
 package org.elasticsearch.xpack.ml.datafeed;

 import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;
-
 import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
 import org.elasticsearch.common.io.stream.Writeable;
@@ -33,7 +32,6 @@ import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField;
 import org.elasticsearch.test.AbstractSerializingTestCase;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.ml.datafeed.ChunkingConfig.Mode;
-import org.elasticsearch.xpack.ml.job.config.JobTests;
 import org.elasticsearch.xpack.ml.job.messages.Messages;
 import org.joda.time.DateTimeZone;

@@ -79,23 +77,29 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedCon
            }
            builder.setScriptFields(scriptFields);
        }
+       Long aggHistogramInterval = null;
        if (randomBoolean() && addScriptFields == false) {
            // can only test with a single agg as the xcontent order gets randomized by test base class and then
            // the actual xcontent isn't the same and test fail.
            // Testing with a single agg is ok as we don't have special list writeable / xconent logic
            AggregatorFactories.Builder aggs = new AggregatorFactories.Builder();
-           long interval = randomNonNegativeLong();
-           interval = interval > bucketSpanMillis ? bucketSpanMillis : interval;
-           interval = interval <= 0 ? 1 : interval;
+           aggHistogramInterval = randomNonNegativeLong();
+           aggHistogramInterval = aggHistogramInterval> bucketSpanMillis ? bucketSpanMillis : aggHistogramInterval;
+           aggHistogramInterval = aggHistogramInterval <= 0 ? 1 : aggHistogramInterval;
            MaxAggregationBuilder maxTime = AggregationBuilders.max("time").field("time");
-           aggs.addAggregator(AggregationBuilders.dateHistogram("buckets").interval(interval).subAggregation(maxTime).field("time"));
+           aggs.addAggregator(AggregationBuilders.dateHistogram("buckets")
+                   .interval(aggHistogramInterval).subAggregation(maxTime).field("time"));
            builder.setAggregations(aggs);
        }
        if (randomBoolean()) {
            builder.setScrollSize(randomIntBetween(0, Integer.MAX_VALUE));
        }
        if (randomBoolean()) {
+           if (aggHistogramInterval == null) {
                builder.setFrequency(TimeValue.timeValueSeconds(randomIntBetween(1, 1_000_000)));
+           } else {
+               builder.setFrequency(TimeValue.timeValueMillis(randomIntBetween(1, 5) * aggHistogramInterval));
+           }
        }
        if (randomBoolean()) {
            builder.setQueryDelay(TimeValue.timeValueMillis(randomIntBetween(1, 1_000_000)));
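For illustration only, not part of the commit: the testDefaultFrequency_* assertions added further down encode a simple ladder for the no-aggregation case. The helper below is hypothetical; it only mirrors what those assertions check.

import java.time.Duration;

class DefaultFrequencySketch {
    // Short buckets poll every minute, mid-size buckets poll twice per bucket span,
    // long buckets cap at 10 minutes, and anything past 12 hours polls hourly.
    static Duration defaultFrequency(Duration bucketSpan) {
        long seconds = bucketSpan.getSeconds();
        if (seconds <= 120) {
            return Duration.ofMinutes(1);
        } else if (seconds <= 1200) {
            return Duration.ofSeconds(seconds / 2);
        } else if (seconds <= 12 * 3600) {
            return Duration.ofMinutes(10);
        } else {
            return Duration.ofHours(1);
        }
    }
}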
@ -398,6 +402,90 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedCon
|
|||||||
assertEquals("Aggregations can only have 1 date_histogram or histogram aggregation", e.getMessage());
|
assertEquals("Aggregations can only have 1 date_histogram or histogram aggregation", e.getMessage());
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenNegative() {
|
||||||
|
DatafeedConfig datafeed = createTestInstance();
|
||||||
|
ESTestCase.expectThrows(IllegalArgumentException.class, () -> datafeed.defaultFrequency(TimeValue.timeValueSeconds(-1)));
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenNoAggregations() {
|
||||||
|
DatafeedConfig.Builder datafeedBuilder = new DatafeedConfig.Builder("feed", "job");
|
||||||
|
datafeedBuilder.setIndices(Arrays.asList("my_index"));
|
||||||
|
DatafeedConfig datafeed = datafeedBuilder.build();
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(1)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(30)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(60)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(90)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(120)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(121)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueSeconds(61), datafeed.defaultFrequency(TimeValue.timeValueSeconds(122)));
|
||||||
|
assertEquals(TimeValue.timeValueSeconds(75), datafeed.defaultFrequency(TimeValue.timeValueSeconds(150)));
|
||||||
|
assertEquals(TimeValue.timeValueSeconds(150), datafeed.defaultFrequency(TimeValue.timeValueSeconds(300)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueSeconds(1200)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueSeconds(1201)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueSeconds(1800)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(1)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(2)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(12)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(12 * 3600 + 1)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(13)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(24)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(48)));
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenAggregationsWithHistogramInterval_1_Second() {
|
||||||
|
DatafeedConfig datafeed = createDatafeedWithDateHistogram("1s");
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(60)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(90)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(120)));
|
||||||
|
assertEquals(TimeValue.timeValueSeconds(125), datafeed.defaultFrequency(TimeValue.timeValueSeconds(250)));
|
||||||
|
assertEquals(TimeValue.timeValueSeconds(250), datafeed.defaultFrequency(TimeValue.timeValueSeconds(500)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(1)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(13)));
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenAggregationsWithHistogramInterval_1_Minute() {
|
||||||
|
DatafeedConfig datafeed = createDatafeedWithDateHistogram("1m");
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(60)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(90)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(120)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(180)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(2), datafeed.defaultFrequency(TimeValue.timeValueSeconds(240)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueMinutes(20)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueSeconds(20 * 60 + 1)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(6)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueHours(12)));
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(13)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(72)));
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenAggregationsWithHistogramInterval_10_Minutes() {
|
||||||
|
DatafeedConfig datafeed = createDatafeedWithDateHistogram("10m");
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueMinutes(10)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueMinutes(20)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueMinutes(30)));
|
||||||
|
assertEquals(TimeValue.timeValueMinutes(10), datafeed.defaultFrequency(TimeValue.timeValueMinutes(12 * 60)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueMinutes(13 * 60)));
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testDefaultFrequency_GivenAggregationsWithHistogramInterval_1_Hour() {
|
||||||
|
DatafeedConfig datafeed = createDatafeedWithDateHistogram("1h");
|
||||||
|
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(1)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueSeconds(3601)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(2)));
|
||||||
|
assertEquals(TimeValue.timeValueHours(1), datafeed.defaultFrequency(TimeValue.timeValueHours(12)));
|
||||||
|
}
|
||||||
|
|
||||||
public static String randomValidDatafeedId() {
|
public static String randomValidDatafeedId() {
|
||||||
CodepointSetGenerator generator = new CodepointSetGenerator("abcdefghijklmnopqrstuvwxyz".toCharArray());
|
CodepointSetGenerator generator = new CodepointSetGenerator("abcdefghijklmnopqrstuvwxyz".toCharArray());
|
||||||
return generator.ofCodePointsLength(random(), 10, 10);
|
return generator.ofCodePointsLength(random(), 10, 10);
|
||||||
|

@ -141,6 +141,39 @@ public class DatafeedJobValidatorTests extends ESTestCase {
        DatafeedJobValidator.validate(goodDatafeedConfig, job);
    }

    public void testVerify_FrequencyIsMultipleOfHistogramInterval() throws IOException {
        Job.Builder builder = buildJobBuilder("foo");
        AnalysisConfig.Builder ac = createAnalysisConfig();
        ac.setSummaryCountFieldName("some_count");
        ac.setBucketSpan(TimeValue.timeValueMinutes(5));
        builder.setAnalysisConfig(ac);
        Job job = builder.build(new Date());
        DatafeedConfig.Builder datafeedBuilder = createValidDatafeedConfigWithAggs(60 * 1000);

        // Check with multiples
        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(60));
        DatafeedJobValidator.validate(datafeedBuilder.build(), job);
        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(120));
        DatafeedJobValidator.validate(datafeedBuilder.build(), job);
        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(180));
        DatafeedJobValidator.validate(datafeedBuilder.build(), job);
        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(240));
        DatafeedJobValidator.validate(datafeedBuilder.build(), job);
        datafeedBuilder.setFrequency(TimeValue.timeValueHours(1));
        DatafeedJobValidator.validate(datafeedBuilder.build(), job);

        // Now non-multiples
        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(30));
        ElasticsearchStatusException e = ESTestCase.expectThrows(ElasticsearchStatusException.class,
                () -> DatafeedJobValidator.validate(datafeedBuilder.build(), job));
        assertEquals("Datafeed frequency [30s] must be a multiple of the aggregation interval [60000ms]", e.getMessage());

        datafeedBuilder.setFrequency(TimeValue.timeValueSeconds(90));
        e = ESTestCase.expectThrows(ElasticsearchStatusException.class,
                () -> DatafeedJobValidator.validate(datafeedBuilder.build(), job));
        assertEquals("Datafeed frequency [1.5m] must be a multiple of the aggregation interval [60000ms]", e.getMessage());
    }
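
The validation exercised above enforces that a datafeed which aggregates with a date histogram only polls on bucket boundaries. A sketch of the kind of check that would produce the quoted message, assuming a hypothetical helper name and that the interval is already available in milliseconds:

    import org.elasticsearch.ElasticsearchStatusException;
    import org.elasticsearch.common.unit.TimeValue;
    import org.elasticsearch.rest.RestStatus;

    final class FrequencyValidationSketch {

        // Rejects frequencies that do not divide evenly into whole histogram buckets.
        static void checkFrequencyIsMultipleOfInterval(TimeValue frequency, long histogramIntervalMillis) {
            if (frequency.millis() % histogramIntervalMillis != 0) {
                throw new ElasticsearchStatusException(
                        "Datafeed frequency [" + frequency + "] must be a multiple of the aggregation interval ["
                                + histogramIntervalMillis + "ms]",
                        RestStatus.BAD_REQUEST);
            }
        }
    }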

    private static Job.Builder buildJobBuilder(String id) {
        Job.Builder builder = new Job.Builder(id);
        AnalysisConfig.Builder ac = createAnalysisConfig();

@ -1,43 +0,0 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.ml.datafeed;

import org.elasticsearch.test.ESTestCase;

import java.time.Duration;

public class DefaultFrequencyTests extends ESTestCase {

    public void testCalc_GivenNegative() {
        ESTestCase.expectThrows(IllegalArgumentException.class, () -> DefaultFrequency.ofBucketSpan(-1));
    }

    public void testCalc() {
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(1));
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(30));
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(60));
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(90));
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(120));
        assertEquals(Duration.ofMinutes(1), DefaultFrequency.ofBucketSpan(121));

        assertEquals(Duration.ofSeconds(61), DefaultFrequency.ofBucketSpan(122));
        assertEquals(Duration.ofSeconds(75), DefaultFrequency.ofBucketSpan(150));
        assertEquals(Duration.ofSeconds(150), DefaultFrequency.ofBucketSpan(300));
        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(1200));

        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(1201));
        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(1800));
        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(3600));
        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(7200));
        assertEquals(Duration.ofMinutes(10), DefaultFrequency.ofBucketSpan(12 * 3600));

        assertEquals(Duration.ofHours(1), DefaultFrequency.ofBucketSpan(12 * 3600 + 1));
        assertEquals(Duration.ofHours(1), DefaultFrequency.ofBucketSpan(13 * 3600));
        assertEquals(Duration.ofHours(1), DefaultFrequency.ofBucketSpan(24 * 3600));
        assertEquals(Duration.ofHours(1), DefaultFrequency.ofBucketSpan(48 * 3600));
    }
}
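For reference, the mapping these deleted tests asserted can be restated compactly. This is a sketch of the rule they encoded, not the removed implementation itself, and the class name is made up:

    import java.time.Duration;

    final class BucketSpanFrequencySketch {

        // Bucket spans up to 20 minutes poll at half the span (never under a minute),
        // spans up to 12 hours poll every 10 minutes, anything longer polls hourly.
        static Duration ofBucketSpan(int bucketSpanSeconds) {
            if (bucketSpanSeconds <= 0) {
                throw new IllegalArgumentException("Bucket span must be positive: " + bucketSpanSeconds);
            }
            if (bucketSpanSeconds <= 1200) {
                return Duration.ofSeconds(Math.max(60, bucketSpanSeconds / 2));
            }
            return bucketSpanSeconds <= 12 * 3600 ? Duration.ofMinutes(10) : Duration.ofHours(1);
        }
    }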

@ -5,6 +5,10 @@
 */
package org.elasticsearch.xpack.ml.datafeed.extractor.scroll;

import org.elasticsearch.action.ActionFuture;
import org.elasticsearch.action.search.ClearScrollAction;
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.ClearScrollResponse;
import org.elasticsearch.action.search.SearchPhaseExecutionException;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
@ -21,6 +25,7 @@ import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.test.ESTestCase;
import org.junit.Before;
import org.mockito.ArgumentCaptor;

import java.io.BufferedReader;
import java.io.IOException;
@ -42,6 +47,7 @@ import static java.util.Collections.emptyMap;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;
import static org.mockito.Matchers.same;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@ -50,7 +56,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
    private Client client;
    private List<SearchRequestBuilder> capturedSearchRequests;
    private List<String> capturedContinueScrollIds;
    private List<String> capturedClearScrollIds;
    private ArgumentCaptor<ClearScrollRequest> capturedClearScrollRequests;
    private String jobId;
    private ExtractedFields extractedFields;
    private List<String> types;
@ -59,6 +65,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
    private List<SearchSourceBuilder.ScriptField> scriptFields;
    private int scrollSize;
    private long initScrollStartTime;
    private ActionFuture<ClearScrollResponse> clearScrollFuture;

    private class TestDataExtractor extends ScrollDataExtractor {
@ -95,11 +102,6 @@ public class ScrollDataExtractorTests extends ESTestCase {
            }
        }

        @Override
        void clearScroll(String scrollId) {
            capturedClearScrollIds.add(scrollId);
        }

        void setNextResponse(SearchResponse searchResponse) {
            responses.add(searchResponse);
        }
@ -118,7 +120,6 @@ public class ScrollDataExtractorTests extends ESTestCase {
        client = mock(Client.class);
        capturedSearchRequests = new ArrayList<>();
        capturedContinueScrollIds = new ArrayList<>();
        capturedClearScrollIds = new ArrayList<>();
        jobId = "test-job";
        ExtractedField timeField = ExtractedField.newField("time", ExtractedField.ExtractionMethod.DOC_VALUE);
        extractedFields = new ExtractedFields(timeField,
@ -128,6 +129,10 @@ public class ScrollDataExtractorTests extends ESTestCase {
        query = QueryBuilders.matchAllQuery();
        scriptFields = Collections.emptyList();
        scrollSize = 1000;

        clearScrollFuture = mock(ActionFuture.class);
        capturedClearScrollRequests = ArgumentCaptor.forClass(ClearScrollRequest.class);
        when(client.execute(same(ClearScrollAction.INSTANCE), capturedClearScrollRequests.capture())).thenReturn(clearScrollFuture);
    }

    public void testSinglePageExtraction() throws IOException {
@ -164,6 +169,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
        assertThat(capturedContinueScrollIds.size(), equalTo(1));
        assertThat(capturedContinueScrollIds.get(0), equalTo(response1.getScrollId()));

        List<String> capturedClearScrollIds = getCapturedClearScrollIds();
        assertThat(capturedClearScrollIds.size(), equalTo(1));
        assertThat(capturedClearScrollIds.get(0), equalTo(response2.getScrollId()));
    }
@ -215,6 +221,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
        assertThat(capturedContinueScrollIds.get(0), equalTo(response1.getScrollId()));
        assertThat(capturedContinueScrollIds.get(1), equalTo(response2.getScrollId()));

        List<String> capturedClearScrollIds = getCapturedClearScrollIds();
        assertThat(capturedClearScrollIds.size(), equalTo(1));
        assertThat(capturedClearScrollIds.get(0), equalTo(response3.getScrollId()));
    }
@ -252,6 +259,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
        assertThat(asString(stream.get()), equalTo(expectedStream));
        assertThat(extractor.hasNext(), is(false));

        List<String> capturedClearScrollIds = getCapturedClearScrollIds();
        assertThat(capturedClearScrollIds.size(), equalTo(1));
        assertThat(capturedClearScrollIds.get(0), equalTo(response2.getScrollId()));
    }
@ -392,6 +400,7 @@ public class ScrollDataExtractorTests extends ESTestCase {

        expectThrows(IOException.class, () -> extractor.next());

        List<String> capturedClearScrollIds = getCapturedClearScrollIds();
        assertThat(capturedClearScrollIds.isEmpty(), is(true));
    }
@ -445,6 +454,7 @@ public class ScrollDataExtractorTests extends ESTestCase {
        assertThat(capturedContinueScrollIds.size(), equalTo(1));
        assertThat(capturedContinueScrollIds.get(0), equalTo(response1.getScrollId()));

        List<String> capturedClearScrollIds = getCapturedClearScrollIds();
        assertThat(capturedClearScrollIds.size(), equalTo(1));
        assertThat(capturedClearScrollIds.get(0), equalTo(response2.getScrollId()));
    }
@ -500,6 +510,10 @@ public class ScrollDataExtractorTests extends ESTestCase {
        return searchResponse;
    }

    private List<String> getCapturedClearScrollIds() {
        return capturedClearScrollRequests.getAllValues().stream().map(r -> r.getScrollIds().get(0)).collect(Collectors.toList());
    }

    private static String asString(InputStream inputStream) throws IOException {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
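The refactoring above drops the hand-rolled clearScroll override in favour of a Mockito ArgumentCaptor wired into the mocked client, so the test observes the actual ClearScrollAction requests. The capture-by-stubbing pattern in isolation, using a made-up collaborator interface rather than the Elasticsearch client:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;
    import org.mockito.ArgumentCaptor;

    class ClearScrollCaptureSketch {

        // Hypothetical stand-in for the client API the extractor talks to.
        interface ScrollClient {
            boolean clearScroll(String scrollId);
        }

        static List<String> capturedScrollIds() {
            ScrollClient client = mock(ScrollClient.class);
            ArgumentCaptor<String> captor = ArgumentCaptor.forClass(String.class);

            // Stubbing with captor.capture() records every argument passed to clearScroll,
            // which is how capturedClearScrollRequests collects requests in the test above.
            when(client.clearScroll(captor.capture())).thenReturn(true);

            client.clearScroll("scroll-1"); // would normally happen inside the code under test
            client.clearScroll("scroll-2");

            return captor.getAllValues();   // ["scroll-1", "scroll-2"]
        }
    }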

@ -96,7 +96,7 @@ public class BasicDistributedJobsIT extends BaseMlIntegTestCase {
        DatafeedConfig.Builder configBuilder = createDatafeedBuilder("data_feed_id", job.getId(), Collections.singletonList("*"));

        MaxAggregationBuilder maxAggregation = AggregationBuilders.max("time").field("time");
        HistogramAggregationBuilder histogramAggregation = AggregationBuilders.histogram("time").interval(300000)
        HistogramAggregationBuilder histogramAggregation = AggregationBuilders.histogram("time").interval(60000)
                .subAggregation(maxAggregation).field("time");

        configBuilder.setAggregations(AggregatorFactories.builder().addAggregator(histogramAggregation));
@ -216,7 +216,7 @@ public class BasicDistributedJobsIT extends BaseMlIntegTestCase {

        DiscoveryNode node = clusterState.nodes().resolveNode(task.getExecutorNode());
        assertThat(node.getAttributes(), hasEntry(MachineLearning.ML_ENABLED_NODE_ATTR, "true"));
        assertThat(node.getAttributes(), hasEntry(MachineLearning.MAX_OPEN_JOBS_NODE_ATTR, "10"));
        assertThat(node.getAttributes(), hasEntry(MachineLearning.MAX_OPEN_JOBS_NODE_ATTR, "20"));
        JobTaskStatus jobTaskStatus = (JobTaskStatus) task.getStatus();
        assertNotNull(jobTaskStatus);
        assertEquals(JobState.OPENED, jobTaskStatus.getState());
@ -402,7 +402,7 @@ public class BasicDistributedJobsIT extends BaseMlIntegTestCase {
        assertFalse(task.needsReassignment(clusterState.nodes()));
        DiscoveryNode node = clusterState.nodes().resolveNode(task.getExecutorNode());
        assertThat(node.getAttributes(), hasEntry(MachineLearning.ML_ENABLED_NODE_ATTR, "true"));
        assertThat(node.getAttributes(), hasEntry(MachineLearning.MAX_OPEN_JOBS_NODE_ATTR, "10"));
        assertThat(node.getAttributes(), hasEntry(MachineLearning.MAX_OPEN_JOBS_NODE_ATTR, "20"));

        JobTaskStatus jobTaskStatus = (JobTaskStatus) task.getStatus();
        assertNotNull(jobTaskStatus);

@ -9,6 +9,7 @@ import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.cluster.node.hotthreads.NodeHotThreads;
import org.elasticsearch.action.admin.cluster.node.hotthreads.NodesHotThreadsResponse;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.common.util.concurrent.ConcurrentMapLong;
import org.elasticsearch.xpack.ml.action.DeleteDatafeedAction;
@ -17,6 +18,7 @@ import org.elasticsearch.xpack.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.ml.action.KillProcessAction;
import org.elasticsearch.xpack.ml.action.PutJobAction;
import org.elasticsearch.xpack.ml.action.StopDatafeedAction;
import org.elasticsearch.xpack.ml.datafeed.ChunkingConfig;
import org.elasticsearch.xpack.ml.datafeed.DatafeedConfig;
import org.elasticsearch.xpack.ml.datafeed.DatafeedState;
import org.elasticsearch.xpack.ml.job.config.Job;
@ -32,10 +34,12 @@ import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import static org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase.createDatafeed;
import static org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase.createDatafeedBuilder;
import static org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase.createScheduledJob;
import static org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase.getDataCounts;
import static org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase.indexDocs;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;

public class DatafeedJobsIT extends MlNativeAutodetectIntegTestCase {

@ -223,6 +227,59 @@ public class DatafeedJobsIT extends MlNativeAutodetectIntegTestCase {
        });
    }

    /**
     * Stopping a lookback closes the associated job _after_ the stop call returns.
     * This test ensures that a kill request submitted during this close doesn't
     * put the job into the "failed" state.
     */
    public void testStopLookbackFollowedByProcessKill() throws Exception {
        client().admin().indices().prepareCreate("data")
                .addMapping("type", "time", "type=date")
                .get();
        long numDocs = randomIntBetween(1024, 2048);
        long now = System.currentTimeMillis();
        long oneWeekAgo = now - 604800000;
        long twoWeeksAgo = oneWeekAgo - 604800000;
        indexDocs(logger, "data", numDocs, twoWeeksAgo, oneWeekAgo);

        Job.Builder job = createScheduledJob("lookback-job-stopped-then-killed");
        registerJob(job);
        PutJobAction.Response putJobResponse = putJob(job);
        assertTrue(putJobResponse.isAcknowledged());
        assertThat(putJobResponse.getResponse().getJobVersion(), equalTo(Version.CURRENT));
        openJob(job.getId());
        assertBusy(() -> assertEquals(getJobStats(job.getId()).get(0).getState(), JobState.OPENED));

        List<String> t = Collections.singletonList("data");
        DatafeedConfig.Builder datafeedConfigBuilder = createDatafeedBuilder(job.getId() + "-datafeed", job.getId(), t);
        // Use lots of chunks so we have time to stop the lookback before it completes
        datafeedConfigBuilder.setChunkingConfig(ChunkingConfig.newManual(new TimeValue(1, TimeUnit.SECONDS)));
        DatafeedConfig datafeedConfig = datafeedConfigBuilder.build();
        registerDatafeed(datafeedConfig);
        assertTrue(putDatafeed(datafeedConfig).isAcknowledged());

        startDatafeed(datafeedConfig.getId(), 0L, now);
        assertBusy(() -> {
            DataCounts dataCounts = getDataCounts(job.getId());
            assertThat(dataCounts.getProcessedRecordCount(), greaterThan(0L));
        }, 60, TimeUnit.SECONDS);

        stopDatafeed(datafeedConfig.getId());

        // At this point, stopping the datafeed will have submitted a request for the job to close.
        // Depending on thread scheduling, the following kill request might overtake it. The Thread.sleep()
        // call here makes it more likely; to make it inevitable for testing also add a Thread.sleep(10)
        // immediately before the checkProcessIsAlive() call in AutodetectCommunicator.close().
        Thread.sleep(randomIntBetween(1, 9));

        KillProcessAction.Request killRequest = new KillProcessAction.Request(job.getId());
        client().execute(KillProcessAction.INSTANCE, killRequest).actionGet();

        // This should close very quickly, as we killed the process. If the job goes into the "failed"
        // state that's wrong and this test will fail.
        waitUntilJobIsClosed(job.getId(), TimeValue.timeValueSeconds(2));
    }

    private void startRealtime(String jobId) throws Exception {
        client().admin().indices().prepareCreate("data")
                .addMapping("type", "time", "type=date")

@ -16,6 +16,7 @@ import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.test.rest.XPackRestTestHelper;
import org.junit.After;
import org.junit.Before;

@ -720,6 +721,7 @@ public class DatafeedJobsRestIT extends ESRestTestCase {
    @After
    public void clearMlState() throws Exception {
        new MlRestTestStateCleaner(logger, adminClient(), this).clearMlMetadata();
        XPackRestTestHelper.waitForPendingTasks(adminClient());
    }

    private static class DatafeedBuilder {

@ -5,6 +5,7 @@
 */
package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.ml.job.config.AnalysisConfig;
import org.elasticsearch.xpack.ml.job.config.DataDescription;
@ -134,6 +135,27 @@ public class ForecastIT extends MlNativeAutodetectIntegTestCase {
        }
    }

    public void testDurationCannotBeLessThanBucketSpan() throws Exception {
        Detector.Builder detector = new Detector.Builder("mean", "value");

        TimeValue bucketSpan = TimeValue.timeValueHours(1);
        AnalysisConfig.Builder analysisConfig = new AnalysisConfig.Builder(Collections.singletonList(detector.build()));
        analysisConfig.setBucketSpan(bucketSpan);
        DataDescription.Builder dataDescription = new DataDescription.Builder();
        dataDescription.setTimeFormat("epoch");
        Job.Builder job = new Job.Builder("forecast-it-test-duration-bucket-span");
        job.setAnalysisConfig(analysisConfig);
        job.setDataDescription(dataDescription);

        registerJob(job);
        putJob(job);
        openJob(job.getId());
        ElasticsearchException e = expectThrows(ElasticsearchException.class, () -> forecast(job.getId(),
                TimeValue.timeValueMinutes(10), null));
        assertThat(e.getMessage(),
                equalTo("java.lang.IllegalArgumentException: [duration] must be greater or equal to the bucket span: [10m/1h]"));
    }

    private static Map<String, Object> createRecord(long timestamp, double value) {
        Map<String, Object> record = new HashMap<>();
        record.put("time", timestamp);
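The added test asserts the guard that a forecast cannot cover less than one bucket span. A sketch of that check with a hypothetical method name; the message mirrors the one asserted above:

    import org.elasticsearch.common.unit.TimeValue;

    final class ForecastDurationCheckSketch {

        // Rejects forecast durations shorter than the job's bucket span.
        static void validateDuration(TimeValue duration, TimeValue bucketSpan) {
            if (duration.millis() < bucketSpan.millis()) {
                throw new IllegalArgumentException(
                        "[duration] must be greater or equal to the bucket span: [" + duration + "/" + bucketSpan + "]");
            }
        }
    }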

@ -0,0 +1,50 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.xpack.ml.action.DeleteJobAction;
import org.elasticsearch.xpack.ml.action.OpenJobAction;
import org.elasticsearch.xpack.ml.action.PutJobAction;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase;

/**
 * Test that ML does not touch unnecessary indices when removing job index aliases
 */
public class JobStorageDeletionTaskIT extends BaseMlIntegTestCase {

    private static final String UNRELATED_INDEX = "unrelated-data";

    public void testUnrelatedIndexNotTouched() throws Exception {
        internalCluster().ensureAtLeastNumDataNodes(1);
        ensureStableCluster(1);

        client().admin().indices().prepareCreate(UNRELATED_INDEX).get();

        enableIndexBlock(UNRELATED_INDEX, IndexMetaData.SETTING_READ_ONLY);

        Job.Builder job = createJob("delete-aliases-test-job", new ByteSizeValue(2, ByteSizeUnit.MB));
        PutJobAction.Request putJobRequest = new PutJobAction.Request(job);
        PutJobAction.Response putJobResponse = client().execute(PutJobAction.INSTANCE, putJobRequest).actionGet();
        assertTrue(putJobResponse.isAcknowledged());

        OpenJobAction.Request openJobRequest = new OpenJobAction.Request(job.getId());
        client().execute(OpenJobAction.INSTANCE, openJobRequest).actionGet();
        awaitJobOpenedAndAssigned(job.getId(), null);

        DeleteJobAction.Request deleteJobRequest = new DeleteJobAction.Request(job.getId());
        deleteJobRequest.setForce(true);
        client().execute(DeleteJobAction.INSTANCE, deleteJobRequest).actionGet();

        // If the deletion of aliases touches the unrelated index with the block
        // then the line above will throw a ClusterBlockException

        disableIndexBlock(UNRELATED_INDEX, IndexMetaData.SETTING_READ_ONLY);
    }
}

@ -17,6 +17,7 @@ import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.job.persistence.AnomalyDetectorsIndex;
import org.elasticsearch.xpack.test.rest.XPackRestTestHelper;
import org.junit.After;

import java.io.BufferedReader;
@ -648,5 +649,6 @@ public class MlJobIT extends ESRestTestCase {
    @After
    public void clearMlState() throws Exception {
        new MlRestTestStateCleaner(logger, adminClient(), this).clearMlMetadata();
        XPackRestTestHelper.waitForPendingTasks(adminClient());
    }
}

@ -233,7 +233,12 @@ abstract class MlNativeAutodetectIntegTestCase extends SecurityIntegTestCase {
    }

    protected void waitUntilJobIsClosed(String jobId) throws Exception {
        assertBusy(() -> assertThat(getJobStats(jobId).get(0).getState(), equalTo(JobState.CLOSED)), 30, TimeUnit.SECONDS);
        waitUntilJobIsClosed(jobId, TimeValue.timeValueSeconds(30));
    }

    protected void waitUntilJobIsClosed(String jobId, TimeValue waitTime) throws Exception {
        assertBusy(() -> assertThat(getJobStats(jobId).get(0).getState(), equalTo(JobState.CLOSED)),
                waitTime.getMillis(), TimeUnit.MILLISECONDS);
    }

    protected List<Job> getJob(String jobId) {