Merge branch 'master' into feature/sql
Original commit: elastic/x-pack-elasticsearch@9945382d90
commit 858f0b2dac
@@ -166,7 +166,6 @@ setups['my_inactive_watch'] = '''
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        active: false
        body: >
        {
@@ -4,9 +4,11 @@
Use the information in this section to troubleshoot common problems and find
answers for frequently asked questions.

* <<ml-rollingupgrade>>
* <<ml-mappingclash>>

To get help, see <<xpack-help>>.

[float]
[[ml-rollingupgrade]]
=== Machine learning features unavailable after rolling upgrade

@@ -31,3 +33,28 @@ After you upgrade all master-eligible nodes to {es} {version} and {xpack}
features to re-initialize.

For more information, see {ref}/rolling-upgrades.html[Rolling upgrades].

[[ml-mappingclash]]
=== Job creation failure due to mapping clash

This problem occurs when you try to create a job.

*Symptoms:*

* Illegal argument exception occurs when you click *Create Job* in {kib} or run
the create job API. For example:
`Save failed: [status_exception] This job would cause a mapping clash
with existing field [field_name] - avoid the clash by assigning a dedicated
results index` or `Save failed: [illegal_argument_exception] Can't merge a non
object mapping [field_name] with an object mapping [field_name]`.

*Resolution:*

This issue typically occurs when two or more jobs store their results in the
same index and the results contain fields with the same name but different
data types or different `fields` settings.

By default, {ml} results are stored in the `.ml-anomalies-shared` index in {es}.
To resolve this issue, click *Advanced > Use dedicated index* when you create
the job in {kib}. If you are using the create job API, specify an index name in
the `results_index_name` property.
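The following minimal sketch shows how a dedicated results index might be
specified through the create job API. The job ID, detector configuration, and
index name are hypothetical placeholders, not values taken from this commit:

[source,js]
--------------------------------------------------
PUT _xpack/ml/anomaly_detectors/example-job
{
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [ { "function": "count" } ]
  },
  "data_description": { "time_field": "timestamp" },
  "results_index_name": "example-job-results" <1>
}
--------------------------------------------------
// NOTCONSOLE

<1> With a dedicated results index, the job's results no longer share
`.ml-anomalies-shared` with other jobs, so mapping clashes are avoided.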
@@ -2,11 +2,35 @@
[[migration-api-assistance]]
=== Migration Assistance API

The Migration Assistance API analyzes existing indices in the cluster and returns the information
about indices that require some changes before the cluster can be upgraded to the next major version.
The Migration Assistance API analyzes existing indices in the cluster and
returns the information about indices that require some changes before the
cluster can be upgraded to the next major version.

To see a list of indices that needs to be upgraded or reindexed, submit a GET request to the
`/_xpack/migration/assistance` endpoint:
[float]
==== Request

`GET /_xpack/migration/assistance` +

`GET /_xpack/migration/assistance/<index_name>`

//==== Description

[float]
==== Path Parameters

`index_name`::
  (string) Identifier for the index. It can be an index name or a wildcard
  expression.

//==== Query Parameters

//==== Authorization

[float]
==== Examples

To see a list of indices that need to be upgraded or reindexed, submit a GET
request to the `/_xpack/migration/assistance` endpoint:

[source,js]
--------------------------------------------------

@@ -38,8 +62,8 @@ A successful call returns a list of indices that need to be updated or reindexed:
--------------------------------------------------
// NOTCONSOLE

To check a particular index or set of indices, specify this index name or mask as the last part of the
`/_xpack/migration/assistance/index_name` endpoint:
To check a particular index or set of indices, specify this index name or mask
as the last part of the `/_xpack/migration/assistance/index_name` endpoint:

[source,js]
--------------------------------------------------

@@ -48,8 +72,8 @@ GET /_xpack/migration/assistance/my_*
// CONSOLE
// TEST[skip:cannot create an old index in docs test]

A successful call returns a list of indices that needs to updated or reindexed and match the index specified
on the endpoint:
A successful call returns a list of indices that need to be updated or reindexed
and that match the index specified in the endpoint:

[source,js]
--------------------------------------------------

@@ -64,4 +88,4 @@ on the endpoint:
}
}
--------------------------------------------------
// NOTCONSOLE
@@ -2,10 +2,36 @@
[[migration-api-deprecation]]
== Deprecation Info APIs

The deprecation API is to be used to retrieve information about different cluster, node, and index level
settings that use deprecated features that will be removed or changed in the next major version.
The deprecation API retrieves information about cluster-, node-, and
index-level settings that use deprecated features that will be removed or
changed in the next major version.

To see the list of offenders in your cluster, submit a GET request to the `_xpack/migration/deprecations` endpoint:
[float]
=== Request

`GET /_xpack/migration/deprecations` +

`GET /<index_name>/_xpack/migration/deprecations`

//=== Description

[float]
=== Path Parameters

`index_name`::
  (string) Identifier for the index. It can be an index name or a wildcard
  expression. When you specify this parameter, only index-level deprecations for
  the specified indices are returned.

//=== Query Parameters

//=== Authorization

[float]
=== Examples

To see the list of offenders in your cluster, submit a GET request to the
`_xpack/migration/deprecations` endpoint:

[source,js]
--------------------------------------------------

@@ -43,8 +69,9 @@ Example response:
--------------------------------------------------
// NOTCONSOLE

The response you will receive will break down all the specific forward-incompatible settings that your
cluster should resolve before upgrading. Any offending setting will be represented as a deprecation warning.
The response breaks down all the specific forward-incompatible settings that you
should resolve before upgrading your cluster. Any offending setting is
represented as a deprecation warning.

The following is an example deprecation warning:

@@ -59,25 +86,31 @@ The following is an example deprecation warning:
--------------------------------------------------
// NOTCONSOLE

As is shown, there is a `level` property that describes how significant the issue may be.
As is shown, there is a `level` property that describes the significance of the
issue.

|=======
|none | everything is good
|info | An advisory note that something has changed. No action needed
|warning | You can upgrade directly, but you are using deprecated functionality which will not be available in the next major version
|critical | You cannot upgrade without fixing this problem
|none | Everything is good.
|info | An advisory note that something has changed. No action needed.
|warning | You can upgrade directly, but you are using deprecated functionality
which will not be available in the next major version.
|critical | You cannot upgrade without fixing this problem.
|=======

`message` and the optional `details` provide descriptive information about the deprecation warning, while the `url`
property provides a link to the Breaking Changes Documentation, where more information about this change can be found.
The `message` property and the optional `details` property provide descriptive
information about the deprecation warning. The `url` property provides a link to
the Breaking Changes Documentation, where you can find more information about
this change.

Any cluster-level deprecation warnings can be found under
the `cluster_settings` key. Similarly, any node-level warnings will be found under `node_settings`.
Since only a select subset of your nodes may incorporate these settings, it is important to read the
`details` section for more information about which nodes are to be updated. Index warnings are
sectioned off per index and can be filtered using an index-pattern in the query.
Any cluster-level deprecation warnings can be found under the `cluster_settings`
key. Similarly, any node-level warnings are found under `node_settings`. Since
only a select subset of your nodes might incorporate these settings, it is
important to read the `details` section for more information about which nodes
are affected. Index warnings are sectioned off per index and can be filtered
using an index-pattern in the query.

Example request that only shows index-level deprecations of all `logstash-*` indices:
The following example request shows only index-level deprecations of all
`logstash-*` indices:

[source,js]
--------------------------------------------------
@@ -2,10 +2,39 @@
[[migration-api-upgrade]]
=== Migration Upgrade API

The Migration Upgrade API performs the upgrade of internal indices to make them compatible with the next major version.
The Migration Upgrade API performs the upgrade of internal indices to make them
compatible with the next major version.

Indices need to be upgraded one at a time by submitting a POST request to the
`/_xpack/migration/upgrade/index_name` endpoint:
[float]
==== Request

`POST /_xpack/migration/upgrade/<index_name>`

[float]
==== Description

Indices must be upgraded one at a time.

[float]
==== Path Parameters

`index_name`::
  (string) Identifier for the index.

`wait_for_completion`::
  (boolean) Defines whether the upgrade call blocks until the upgrade process is
  finished. The default value is `true`. If set to `false`, the upgrade can be
  performed asynchronously.

//==== Query Parameters

//==== Authorization

[float]
==== Examples

The following example submits a POST request to the
`/_xpack/migration/upgrade/<index_name>` endpoint:

[source,js]
--------------------------------------------------

@@ -38,8 +67,8 @@ A successful call returns the statistics about the upgrade process:
--------------------------------------------------
// NOTCONSOLE

By default, the upgrade call blocks until the upgrade process is finished. For large indices the upgrade can be
performed asynchronously by specifying `wait_for_completion=false` parameter:
The following example upgrades a large index asynchronously by setting the
`wait_for_completion` parameter to `false`:

[source,js]
--------------------------------------------------

@@ -58,7 +87,8 @@ This call should return the id of the upgrade process task:
--------------------------------------------------
// NOTCONSOLE

The status of the running or finished upgrade requests can be obtained using <<tasks,Task API>>:
The status of the running or finished upgrade requests can be obtained by using
the <<tasks,Task API>>:

[source,js]
--------------------------------------------------

@@ -102,7 +132,7 @@ GET _tasks/PFvgv7T6TGumRyFF3vqTFg:1137?detailed=true
--------------------------------------------------
// NOTCONSOLE

<1> `true` in the `completed` field indicates that the upgrade request has finished, `false` means that
request is still executing
<1> If the `completed` field value is `true`, the upgrade request has finished.
If it is `false`, the request is still running.

<2> the `response` field contains the status of the upgrade request
<2> The `response` field contains the status of the upgrade request.
@@ -35,21 +35,3 @@ Response:
--------------------------------------------------
// TESTRESPONSE

[float]
==== Timeouts

When deleting a watch while it is executing, the delete action will block and
wait for the watch execution to finish. Depending on the nature of the watch, in
some situations this can take a while. For this reason, the delete watch action
is associated with a timeout that is set to 10 seconds by default. You can
control this timeout by passing in the `master_timeout` parameter.

The following snippet shows how to change the default timeout of the delete
action to 30 seconds:

[source,js]
--------------------------------------------------
DELETE _xpack/watcher/watch/my_watch?master_timeout=30s
--------------------------------------------------
// CONSOLE
// TEST[setup:my_active_watch]
@@ -98,22 +98,6 @@ A watch has the following fields:
|======

[float]
==== Timeouts

When updating a watch while it is executing, the put action will block and wait
for the watch execution to finish. Depending on the nature of the watch, in some
situations this can take a while. For this reason, the put watch action is
associated with a timeout that is set to 10 seconds by default. You can control
this timeout by passing in the `master_timeout` parameter.

The following snippet shows how to change the default timeout of the put action
to 30 seconds:

[source,js]
--------------------------------------------------
PUT _xpack/watcher/watch/my-watch?master_timeout=30s
--------------------------------------------------

[[watcher-api-put-watch-active-state]]
==== Controlling Default Active State

@@ -16,7 +16,7 @@ roles against its local role definitions to determine which indices the user
is allowed to access.


[WARNING]
This feature was added as Beta in Elasticsearch `v5.3` with further
improvements made in 5.4 and 5.5. It requires gateway eligible nodes to be on
`v5.5` onwards.

@@ -86,22 +86,12 @@ PUT _cluster_settings
Next, set up a role called `cluster_two_logs` on both cluster `one` and
cluster `two`.

On cluster `one`, this role allows the user to query any indices on remote clusters:
On cluster `one`, this role does not need any special privileges:

[source,js]
-----------------------------------------------------------
POST /_xpack/security/role/cluster_two_logs
{
  "indices": [
    {
      "names": [
        "*:*"
      ],
      "privileges": [
        "read"
      ]
    }
  ]
}
-----------------------------------------------------------

@@ -155,5 +145,6 @@ GET two:logs-2017.04/_search <1>
}
}
-----------------------------------------------------------
//TBD: Is there a missing description of the <1> callout above?

include::{xkb-repo-dir}/security/cross-cluster-kibana.asciidoc[]
@@ -76,13 +76,6 @@ Set to `false` to disable the built-in token service. Defaults to `true` unless
`xpack.security.http.ssl.enabled` is `false` and `http.enabled` is `true`.
This prevents sniffing the token from a connection over plain http.

`xpack.security.authc.token.passphrase`::
A secure passphrase that must be the same on each node and greater than
8 characters in length. This passphrase is used to derive a cryptographic key
with which the tokens will be encrypted and authenticated. If this setting is
not used, the cluster automatically generates a key, which is the recommended
method.

`xpack.security.authc.token.timeout`::
The length of time that a token is valid for. By default this value is `20m` or
20 minutes. The maximum value is 1 hour.
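For illustration, a token can be requested from the built-in token service with
the token API; the user name and password below are placeholders, not values
from this commit:

[source,js]
--------------------------------------------------
POST /_xpack/security/oauth2/token
{
  "grant_type" : "password",
  "username"   : "elastic",
  "password"   : "elastic-password"
}
--------------------------------------------------
// NOTCONSOLE

The response includes an `access_token` that can be presented in an
`Authorization: Bearer` header until it expires after the configured timeout.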
@@ -3,11 +3,12 @@ You configure settings for X-Pack features in the `elasticsearch.yml`,

[options="header,footer"]
|=======================
|{xpack} Feature |{es} Settings |{kib} Settings |Logstash Settings
|Graph |No |{kibana-ref}/graph-settings-kb.html[Yes] |No
|Machine learning |{ref}/ml-settings.html[Yes] |{kibana-ref}/ml-settings-kb.html[Yes] |No
|Monitoring |{ref}/monitoring-settings.html[Yes] |{kibana-ref}/monitoring-settings-kb.html[Yes] |{logstash-ref}/settings-xpack.html#monitoring-settings[Yes]
|Reporting |No |{kibana-ref}/reporting-settings-kb.html[Yes] |No
|Security |{ref}/security-settings.html[Yes] |{kibana-ref}/security-settings-kb.html[Yes] |No
|Watcher |{ref}/notification-settings.html[Yes] |No |No
|{xpack} Feature |{es} Settings |{kib} Settings |Logstash Settings
|Development Tools |No |{kibana-ref}/dev-settings-kb.html[Yes] |No
|Graph |No |{kibana-ref}/graph-settings-kb.html[Yes] |No
|Machine learning |{ref}/ml-settings.html[Yes] |{kibana-ref}/ml-settings-kb.html[Yes] |No
|Monitoring |{ref}/monitoring-settings.html[Yes] |{kibana-ref}/monitoring-settings-kb.html[Yes] |{logstash-ref}/settings-xpack.html#monitoring-settings[Yes]
|Reporting |No |{kibana-ref}/reporting-settings-kb.html[Yes] |No
|Security |{ref}/security-settings.html[Yes] |{kibana-ref}/security-settings-kb.html[Yes] |No
|Watcher |{ref}/notification-settings.html[Yes] |No |No
|=======================
@@ -219,7 +219,6 @@ integTestCluster {
  setting 'xpack.monitoring.collection.interval', '-1'
  setting 'xpack.security.authc.token.enabled', 'true'
  keystoreSetting 'bootstrap.password', 'x-pack-test-password'
  keystoreSetting 'xpack.security.authc.token.passphrase', 'x-pack-token-service-password'
  distribution = 'zip' // this is important since we use the reindex module in ML

  setupCommand 'setupTestUser', 'bin/x-pack/users', 'useradd', 'x_pack_rest_user', '-p', 'x-pack-test-password', '-r', 'superuser'
@@ -187,8 +187,8 @@ public class IndexDeprecationChecks {
                indexMetaData.getSettings().get("index.shared_filesystem") != null) {
            return new DeprecationIssue(DeprecationIssue.Level.CRITICAL,
                    "[index.shared_filesystem] setting should be removed",
                    "https://www.elastic.co/guide/en/elasticsearch/reference/master/" +
                    "breaking_60_settings_changes.html#_shadow_replicas_have_been_removed", null);
                    "https://www.elastic.co/guide/en/elasticsearch/reference/6.0/" +
                    "breaking_60_indices_changes.html#_shadow_replicas_have_been_removed", null);

        }
        return null;
@@ -22,6 +22,7 @@ import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.env.Environment;

@@ -160,6 +161,8 @@ public class MachineLearning implements ActionPlugin {
    public static final String MAX_OPEN_JOBS_NODE_ATTR = "ml.max_open_jobs";
    public static final Setting<Integer> CONCURRENT_JOB_ALLOCATIONS =
            Setting.intSetting("xpack.ml.node_concurrent_job_allocations", 2, 0, Property.Dynamic, Property.NodeScope);
    public static final Setting<ByteSizeValue> MAX_MODEL_MEMORY =
            Setting.memorySizeSetting("xpack.ml.max_model_memory_limit", new ByteSizeValue(0), Property.NodeScope);

    public static final TimeValue STATE_PERSIST_RESTORE_TIMEOUT = TimeValue.timeValueMinutes(30);

@@ -186,6 +189,7 @@ public class MachineLearning implements ActionPlugin {
            Arrays.asList(AUTODETECT_PROCESS,
                    ML_ENABLED,
                    CONCURRENT_JOB_ALLOCATIONS,
                    MAX_MODEL_MEMORY,
                    ProcessCtrl.DONT_PERSIST_MODEL_STATE_SETTING,
                    ProcessCtrl.MAX_ANOMALY_RECORDS_SETTING,
                    DataCountsReporter.ACCEPTABLE_PERCENTAGE_DATE_PARSE_ERRORS_SETTING,
@@ -25,7 +25,6 @@ import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.gateway.GatewayService;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.ml.job.persistence.AnomalyDetectorsIndex;
import org.elasticsearch.xpack.ml.job.persistence.ElasticsearchMappings;

@@ -273,9 +272,6 @@ public class MachineLearningTemplateRegistry extends AbstractComponent implement
                // failure we can lose the last 5 seconds of changes, but it's
                // much faster
                .put(IndexSettings.INDEX_TRANSLOG_DURABILITY_SETTING.getKey(), ASYNC)
                // We need to allow fields not mentioned in the mappings to
                // pick up default mappings and be used in queries
                .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), true)
                // set the default all search field
                .put(IndexSettings.DEFAULT_FIELD_SETTING.getKey(), ElasticsearchMappings.ALL_FIELD_VALUES);
    }

@@ -290,10 +286,7 @@ public class MachineLearningTemplateRegistry extends AbstractComponent implement
                // Our indexes are small and one shard puts the
                // least possible burden on Elasticsearch
                .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)
                .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), delayedNodeTimeOutSetting)
                // We need to allow fields not mentioned in the mappings to
                // pick up default mappings and be used in queries
                .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), true);
                .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), delayedNodeTimeOutSetting);
    }

    /**
@@ -7,6 +7,7 @@ package org.elasticsearch.xpack.ml;

import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.unit.TimeValue;

@@ -18,6 +19,7 @@ import org.joda.time.DateTime;
import org.joda.time.chrono.ISOChronology;

import java.util.Objects;
import java.util.Random;
import java.util.concurrent.ScheduledFuture;
import java.util.function.Supplier;

@@ -28,6 +30,8 @@ public class MlDailyMaintenanceService implements Releasable {

    private static final Logger LOGGER = Loggers.getLogger(MlDailyMaintenanceService.class);

    private static final int MAX_TIME_OFFSET_MINUTES = 120;

    private final ThreadPool threadPool;
    private final Client client;

@@ -45,16 +49,26 @@ public class MlDailyMaintenanceService implements Releasable {
        this.schedulerProvider = Objects.requireNonNull(scheduleProvider);
    }

    public MlDailyMaintenanceService(ThreadPool threadPool, Client client) {
        this(threadPool, client, createAfterMidnightScheduleProvider());
    public MlDailyMaintenanceService(ClusterName clusterName, ThreadPool threadPool, Client client) {
        this(threadPool, client, () -> delayToNextTime(clusterName));
    }

    private static Supplier<TimeValue> createAfterMidnightScheduleProvider() {
        return () -> {
            DateTime now = DateTime.now(ISOChronology.getInstance());
            DateTime next = now.plusDays(1).withTimeAtStartOfDay().plusMinutes(30);
            return TimeValue.timeValueMillis(next.getMillis() - now.getMillis());
        };
    /**
     * Calculates the delay until the next time the maintenance should be triggered.
     * The next time is 30 minutes past midnight of the following day plus a random
     * offset. The random offset is added in order to avoid multiple clusters
     * running the maintenance tasks at the same time. A cluster with a given name
     * shall have the same offset throughout its life.
     *
     * @param clusterName the cluster name is used to seed the random offset
     * @return the delay to the next time the maintenance should be triggered
     */
    private static TimeValue delayToNextTime(ClusterName clusterName) {
        Random random = new Random(clusterName.hashCode());
        int minutesOffset = random.ints(0, MAX_TIME_OFFSET_MINUTES).findFirst().getAsInt();
        DateTime now = DateTime.now(ISOChronology.getInstance());
        DateTime next = now.plusDays(1).withTimeAtStartOfDay().plusMinutes(30).plusMinutes(minutesOffset);
        return TimeValue.timeValueMillis(next.getMillis() - now.getMillis());
    }

    public void start() {
@@ -63,7 +63,7 @@ class MlInitializationService extends AbstractComponent implements ClusterStateL
    private void installMlMetadata(MetaData metaData) {
        if (metaData.custom(MlMetadata.TYPE) == null) {
            if (installMlMetadataCheck.compareAndSet(false, true)) {
                threadPool.executor(ThreadPool.Names.GENERIC).execute(() -> {
                threadPool.executor(ThreadPool.Names.GENERIC).execute(() ->
                    clusterService.submitStateUpdateTask("install-ml-metadata", new ClusterStateUpdateTask() {
                        @Override
                        public ClusterState execute(ClusterState currentState) throws Exception {

@@ -83,8 +83,8 @@ class MlInitializationService extends AbstractComponent implements ClusterStateL
                            installMlMetadataCheck.set(false);
                            logger.error("unable to install ml metadata", e);
                        }
                    });
                });
                    })
                );
            }
        } else {
            installMlMetadataCheck.set(false);

@@ -93,7 +93,7 @@ class MlInitializationService extends AbstractComponent implements ClusterStateL

    private void installDailyMaintenanceService() {
        if (mlDailyMaintenanceService == null) {
            mlDailyMaintenanceService = new MlDailyMaintenanceService(threadPool, client);
            mlDailyMaintenanceService = new MlDailyMaintenanceService(clusterService.getClusterName(), threadPool, client);
            mlDailyMaintenanceService.start();
            clusterService.addLifecycleListener(new LifecycleListener() {
                @Override
@@ -72,13 +72,6 @@ public class PutJobAction extends Action<PutJobAction.Request, PutJobAction.Resp
                        jobBuilder.getId(), jobId));
            }

            // Some fields cannot be set at create time
            List<String> invalidJobCreationSettings = jobBuilder.invalidCreateTimeSettings();
            if (invalidJobCreationSettings.isEmpty() == false) {
                throw new IllegalArgumentException(Messages.getMessage(Messages.JOB_CONFIG_INVALID_CREATE_SETTINGS,
                        String.join(",", invalidJobCreationSettings)));
            }

            return new Request(jobBuilder);
        }

@@ -88,11 +81,12 @@ public class PutJobAction extends Action<PutJobAction.Request, PutJobAction.Resp
            // Validate the jobBuilder immediately so that errors can be detected prior to transportation.
            jobBuilder.validateInputFields();

            // In 6.1 we want to make the model memory size limit more prominent, and also reduce the default from
            // 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
            // breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
            // submitted
            jobBuilder.setDefaultMemoryLimitIfUnset();
            // Some fields cannot be set at create time
            List<String> invalidJobCreationSettings = jobBuilder.invalidCreateTimeSettings();
            if (invalidJobCreationSettings.isEmpty() == false) {
                throw new IllegalArgumentException(Messages.getMessage(Messages.JOB_CONFIG_INVALID_CREATE_SETTINGS,
                        String.join(",", invalidJobCreationSettings)));
            }

            this.jobBuilder = jobBuilder;
        }
@@ -41,6 +41,7 @@ import java.util.EnumMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Random;
import java.util.concurrent.TimeUnit;

/**

@@ -352,12 +353,13 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
    public static class Builder {

        private static final int DEFAULT_SCROLL_SIZE = 1000;
        private static final TimeValue DEFAULT_QUERY_DELAY = TimeValue.timeValueMinutes(1);
        private static final TimeValue MIN_DEFAULT_QUERY_DELAY = TimeValue.timeValueMinutes(1);
        private static final TimeValue MAX_DEFAULT_QUERY_DELAY = TimeValue.timeValueMinutes(2);
        private static final int DEFAULT_AGGREGATION_CHUNKING_BUCKETS = 1000;

        private String id;
        private String jobId;
        private TimeValue queryDelay = DEFAULT_QUERY_DELAY;
        private TimeValue queryDelay;
        private TimeValue frequency;
        private List<String> indices = Collections.emptyList();
        private List<String> types = Collections.emptyList();

@@ -460,6 +462,7 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
            }
            validateAggregations();
            setDefaultChunkingConfig();
            setDefaultQueryDelay();
            return new DatafeedConfig(id, jobId, queryDelay, frequency, indices, types, query, aggregations, scriptFields, scrollSize,
                    chunkingConfig);
        }

@@ -530,6 +533,15 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
            }
        }

        private void setDefaultQueryDelay() {
            if (queryDelay == null) {
                Random random = new Random(jobId.hashCode());
                long delayMillis = random.longs(MIN_DEFAULT_QUERY_DELAY.millis(), MAX_DEFAULT_QUERY_DELAY.millis())
                        .findFirst().getAsLong();
                queryDelay = TimeValue.timeValueMillis(delayMillis);
            }
        }

        private static ElasticsearchException invalidOptionValue(String fieldName, Object value) {
            String msg = Messages.getMessage(Messages.DATAFEED_CONFIG_INVALID_OPTION_VALUE, fieldName, value);
            throw ExceptionsHelper.badRequestException(msg);
@@ -274,7 +274,7 @@ class DatafeedJob {

    private long nextRealtimeTimestamp() {
        long epochMs = currentTimeSupplier.get() + frequencyMs;
        return toIntervalStartEpochMs(epochMs) + NEXT_TASK_DELAY_MS;
        return toIntervalStartEpochMs(epochMs) + queryDelayMs + NEXT_TASK_DELAY_MS;
    }

    private long toIntervalStartEpochMs(long epochMs) {
@@ -18,6 +18,7 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.CheckedConsumer;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.MlMetadata;
import org.elasticsearch.xpack.ml.action.DeleteJobAction;
import org.elasticsearch.xpack.ml.action.PutJobAction;

@@ -135,6 +136,13 @@ public class JobManager extends AbstractComponent {
     * Stores a job in the cluster state
     */
    public void putJob(PutJobAction.Request request, ClusterState state, ActionListener<PutJobAction.Response> actionListener) {
        // In 6.1 we want to make the model memory size limit more prominent, and also reduce the default from
        // 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
        // breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
        // submitted
        request.getJobBuilder().validateModelMemoryLimit(MachineLearning.MAX_MODEL_MEMORY.get(settings));


        Job job = request.getJobBuilder().build(new Date());

        MlMetadata currentMlMetadata = state.metaData().custom(MlMetadata.TYPE);

@@ -235,7 +243,7 @@ public class JobManager extends AbstractComponent {
            @Override
            public ClusterState execute(ClusterState currentState) throws Exception {
                Job job = getJobOrThrowIfUnknown(jobId, currentState);
                updatedJob = jobUpdate.mergeWithJob(job);
                updatedJob = jobUpdate.mergeWithJob(job, MachineLearning.MAX_MODEL_MEMORY.get(settings));
                return updateClusterState(updatedJob, true, currentState);
            }

@@ -426,7 +426,7 @@ public class AnalysisConfig implements ToXContentObject, Writeable {

    public static class Builder {

        static final TimeValue DEFAULT_BUCKET_SPAN = TimeValue.timeValueMinutes(5);
        public static final TimeValue DEFAULT_BUCKET_SPAN = TimeValue.timeValueMinutes(5);

        private List<Detector> detectors;
        private TimeValue bucketSpan = DEFAULT_BUCKET_SPAN;
@@ -10,9 +10,12 @@ import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.spi.Message;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser.ValueType;

@@ -716,15 +719,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
        }

        public Builder setAnalysisLimits(AnalysisLimits analysisLimits) {
            if (this.analysisLimits != null) {
                long oldMemoryLimit = this.analysisLimits.getModelMemoryLimit();
                long newMemoryLimit = analysisLimits.getModelMemoryLimit();
                if (newMemoryLimit < oldMemoryLimit) {
                    throw new IllegalArgumentException(
                            Messages.getMessage(Messages.JOB_CONFIG_UPDATE_ANALYSIS_LIMITS_MODEL_MEMORY_LIMIT_CANNOT_BE_DECREASED,
                                    oldMemoryLimit, newMemoryLimit));
                }
            }
            this.analysisLimits = analysisLimits;
            return this;
        }

@@ -1004,14 +998,36 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
         * In 6.1 we want to make the model memory size limit more prominent, and also reduce the default from
         * 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
         * breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
         * submitted
         * submitted.
         * Additionally, the MAX_MODEL_MEM setting limits the value; an exception is thrown if the max limit
         * is exceeded.
         */
        public void setDefaultMemoryLimitIfUnset() {
            if (analysisLimits == null) {
                analysisLimits = new AnalysisLimits((Long) null);
            } else if (analysisLimits.getModelMemoryLimit() == null) {
                analysisLimits = new AnalysisLimits(analysisLimits.getCategorizationExamplesLimit());
        public void validateModelMemoryLimit(ByteSizeValue maxModelMemoryLimit) {

            boolean maxModelMemoryIsSet = maxModelMemoryLimit != null && maxModelMemoryLimit.getMb() > 0;
            Long categorizationExampleLimit = null;
            long modelMemoryLimit;
            if (maxModelMemoryIsSet) {
                modelMemoryLimit = Math.min(maxModelMemoryLimit.getMb(), AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB);
            } else {
                modelMemoryLimit = AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB;
            }

            if (analysisLimits != null) {
                categorizationExampleLimit = analysisLimits.getCategorizationExamplesLimit();

                if (analysisLimits.getModelMemoryLimit() != null) {
                    modelMemoryLimit = analysisLimits.getModelMemoryLimit();

                    if (maxModelMemoryIsSet && modelMemoryLimit > maxModelMemoryLimit.getMb()) {
                        throw new IllegalArgumentException(Messages.getMessage(Messages.JOB_CONFIG_MODEL_MEMORY_LIMIT_GREATER_THAN_MAX,
                                new ByteSizeValue(modelMemoryLimit, ByteSizeUnit.MB),
                                maxModelMemoryLimit));
                    }
                }
            }

            analysisLimits = new AnalysisLimits(modelMemoryLimit, categorizationExampleLimit);
        }

        private void validateGroups() {
@@ -11,16 +11,18 @@ import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.ml.job.messages.Messages;
import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;

@@ -48,6 +50,14 @@ public class JobUpdate implements Writeable, ToXContentObject {
        PARSER.declareString(Builder::setModelSnapshotId, Job.MODEL_SNAPSHOT_ID);
    }

    /**
     * Prior to 6.1 a default model_memory_limit was not enforced in Java.
     * The default of 4GB was used in the C++ code.
     * If model_memory_limit is not defined for a job then the
     * job was created before 6.1 and a value of 4GB is assumed.
     */
    private static final long UNDEFINED_MODEL_MEMORY_LIMIT_DEFAULT = 4096;

    private final String jobId;
    private final List<String> groups;
    private final String description;

@@ -242,9 +252,10 @@ public class JobUpdate implements Writeable, ToXContentObject {
     * Updates {@code source} with the new values in this object returning a new {@link Job}.
     *
     * @param source Source job to be updated
     * @param maxModelMemoryLimit The maximum model memory allowed
     * @return A new job equivalent to {@code source} updated.
     */
    public Job mergeWithJob(Job source) {
    public Job mergeWithJob(Job source, ByteSizeValue maxModelMemoryLimit) {
        Job.Builder builder = new Job.Builder(source);
        if (groups != null) {
            builder.setGroups(groups);

@@ -278,6 +289,36 @@ public class JobUpdate implements Writeable, ToXContentObject {
            builder.setModelPlotConfig(modelPlotConfig);
        }
        if (analysisLimits != null) {
            Long oldMemoryLimit;
            if (source.getAnalysisLimits() != null) {
                oldMemoryLimit = source.getAnalysisLimits().getModelMemoryLimit() != null ?
                        source.getAnalysisLimits().getModelMemoryLimit()
                        : UNDEFINED_MODEL_MEMORY_LIMIT_DEFAULT;
            } else {
                oldMemoryLimit = UNDEFINED_MODEL_MEMORY_LIMIT_DEFAULT;
            }

            Long newMemoryLimit = analysisLimits.getModelMemoryLimit() != null ?
                    analysisLimits.getModelMemoryLimit()
                    : oldMemoryLimit;

            if (newMemoryLimit < oldMemoryLimit) {
                throw ExceptionsHelper.badRequestException(
                        Messages.getMessage(Messages.JOB_CONFIG_UPDATE_ANALYSIS_LIMITS_MODEL_MEMORY_LIMIT_CANNOT_BE_DECREASED,
                                new ByteSizeValue(oldMemoryLimit, ByteSizeUnit.MB),
                                new ByteSizeValue(newMemoryLimit, ByteSizeUnit.MB)));
            }

            boolean maxModelMemoryLimitIsSet = maxModelMemoryLimit != null && maxModelMemoryLimit.getMb() > 0;
            if (maxModelMemoryLimitIsSet) {
                Long modelMemoryLimit = analysisLimits.getModelMemoryLimit();
                if (modelMemoryLimit != null && modelMemoryLimit > maxModelMemoryLimit.getMb()) {
                    throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_MODEL_MEMORY_LIMIT_GREATER_THAN_MAX,
                            new ByteSizeValue(modelMemoryLimit, ByteSizeUnit.MB),
                            maxModelMemoryLimit));
                }
            }

            builder.setAnalysisLimits(analysisLimits);
        }
        if (renormalizationWindowDays != null) {
@@ -5,6 +5,8 @@
 */
package org.elasticsearch.xpack.ml.job.messages;

import org.elasticsearch.xpack.ml.MachineLearning;

import java.text.MessageFormat;
import java.util.Locale;

@@ -105,6 +107,8 @@ public final class Messages {
    public static final String JOB_CONFIG_FIELDNAME_INCOMPATIBLE_FUNCTION = "field_name cannot be used with function ''{0}''";
    public static final String JOB_CONFIG_FIELD_VALUE_TOO_LOW = "{0} cannot be less than {1,number}. Value = {2,number}";
    public static final String JOB_CONFIG_MODEL_MEMORY_LIMIT_TOO_LOW = "model_memory_limit must be at least 1 MiB. Value = {0,number}";
    public static final String JOB_CONFIG_MODEL_MEMORY_LIMIT_GREATER_THAN_MAX =
            "model_memory_limit [{0}] must be less than the value of the " + MachineLearning.MAX_MODEL_MEMORY.getKey() + " setting [{1}]";
    public static final String JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED =
            "The ''{0}'' function cannot be used in jobs that will take pre-summarized input";
    public static final String JOB_CONFIG_FUNCTION_REQUIRES_BYFIELD = "by_field_name must be set when the ''{0}'' function is used";
@@ -44,6 +44,9 @@ import org.elasticsearch.index.query.TermsQueryBuilder;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.SortBuilder;

@@ -100,6 +103,8 @@ public class JobProvider {
    );

    private static final int RECORDS_SIZE_PARAM = 10000;
    private static final int BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE = 20;
    private static final double ESTABLISHED_MEMORY_CV_THRESHOLD = 0.1;

    private final Client client;
    private final Settings settings;

@@ -943,6 +948,96 @@ public class JobProvider {
                .addSort(SortBuilders.fieldSort(ModelSizeStats.LOG_TIME_FIELD.getPreferredName()).order(SortOrder.DESC));
    }

    /**
     * Get the "established" memory usage of a job, if it has one.
     * In order for a job to be considered to have established memory usage it must:
     * - Have generated at least <code>BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE</code> buckets of results
     * - Have generated at least one model size stats document
     * - Have low variability of model bytes in model size stats documents in the time period covered by the last
     *   <code>BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE</code> buckets, which is defined as having a coefficient of variation
     *   of no more than <code>ESTABLISHED_MEMORY_CV_THRESHOLD</code>
     * @param jobId the id of the job for which established memory usage is required
     * @param handler if the method succeeds, this will be passed the established memory usage (in bytes) of the
     *                specified job, or <code>null</code> if memory usage is not yet established
     * @param errorHandler if a problem occurs, the exception will be passed to this handler
     */
    public void getEstablishedMemoryUsage(String jobId, Consumer<Long> handler, Consumer<Exception> errorHandler) {

        String indexName = AnomalyDetectorsIndex.jobResultsAliasedName(jobId);

        // Step 2. Find the count, mean and standard deviation of memory usage over the time span of the last N bucket results,
        // where N is the number of buckets required to consider memory usage "established"
        Consumer<QueryPage<Bucket>> bucketHandler = buckets -> {
            if (buckets.results().size() == 1) {
                String searchFromTimeMs = Long.toString(buckets.results().get(0).getTimestamp().getTime());
                SearchRequestBuilder search = client.prepareSearch(indexName)
                        .setSize(0)
                        .setIndicesOptions(IndicesOptions.lenientExpandOpen())
                        .setQuery(new BoolQueryBuilder()
                                .filter(QueryBuilders.rangeQuery(Result.TIMESTAMP.getPreferredName()).gte(searchFromTimeMs))
                                .filter(QueryBuilders.termQuery(Result.RESULT_TYPE.getPreferredName(), ModelSizeStats.RESULT_TYPE_VALUE)))
                        .addAggregation(AggregationBuilders.extendedStats("es").field(ModelSizeStats.MODEL_BYTES_FIELD.getPreferredName()));
                search.execute(ActionListener.wrap(
                        response -> {
                            List<Aggregation> aggregations = response.getAggregations().asList();
                            if (aggregations.size() == 1) {
                                ExtendedStats extendedStats = (ExtendedStats) aggregations.get(0);
                                long count = extendedStats.getCount();
                                if (count <= 0) {
                                    // model size stats haven't changed in the last N buckets, so the latest (older) ones are established
                                    modelSizeStats(jobId, modelSizeStats -> handleModelBytesOrNull(handler, modelSizeStats), errorHandler);
                                } else if (count == 1) {
                                    // no need to do an extra search in the case of exactly one document being aggregated
                                    handler.accept((long) extendedStats.getAvg());
                                } else {
                                    double coefficientOfVariation = extendedStats.getStdDeviation() / extendedStats.getAvg();
                                    LOGGER.trace("[{}] Coefficient of variation [{}] when calculating established memory use", jobId,
                                            coefficientOfVariation);
                                    // is there sufficient stability in the latest model size stats readings?
                                    if (coefficientOfVariation <= ESTABLISHED_MEMORY_CV_THRESHOLD) {
                                        // yes, so return the latest model size as established
                                        modelSizeStats(jobId, modelSizeStats -> handleModelBytesOrNull(handler, modelSizeStats),
                                                errorHandler);
                                    } else {
                                        // no - we don't have an established model size
                                        handler.accept(null);
                                    }
                                }
                            } else {
                                handler.accept(null);
                            }
                        }, errorHandler
                ));
            } else {
                handler.accept(null);
            }
        };

        // Step 1. Find the time span of the most recent N bucket results, where N is the number of buckets
        // required to consider memory usage "established"
        BucketsQueryBuilder.BucketsQuery bucketQuery = new BucketsQueryBuilder()
                .sortField(Result.TIMESTAMP.getPreferredName())
                .sortDescending(true).from(BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE - 1).size(1)
                .includeInterim(false)
                .build();
        bucketsViaInternalClient(jobId, bucketQuery, bucketHandler, e -> {
            if (e instanceof ResourceNotFoundException) {
                handler.accept(null);
            } else {
                errorHandler.accept(e);
            }
        });
    }

    /**
     * A model size of 0 implies a completely uninitialised model. This method converts 0 to <code>null</code>
     * before calling a handler.
     */
    private static void handleModelBytesOrNull(Consumer<Long> handler, ModelSizeStats modelSizeStats) {
        long modelBytes = modelSizeStats.getModelBytes();
        handler.accept(modelBytes > 0 ? modelBytes : null);
    }

    /**
     * Maps authorization failures when querying ML indexes to job-specific authorization failures attributed to the ML actions.
     * Works by replacing the action name with another provided by the caller, and appending the job ID.
@@ -6,6 +6,7 @@
package org.elasticsearch.xpack.security;

import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.security.authc.RealmSettings;
import org.elasticsearch.xpack.security.authc.pki.PkiRealm;

@@ -20,10 +21,8 @@ import static org.elasticsearch.xpack.security.Security.setting;
class PkiRealmBootstrapCheck implements BootstrapCheck {

    private final SSLService sslService;
    private final Settings settings;

    PkiRealmBootstrapCheck(Settings settings, SSLService sslService) {
        this.settings = settings;
    PkiRealmBootstrapCheck(SSLService sslService) {
        this.sslService = sslService;
    }

@@ -32,7 +31,8 @@ class PkiRealmBootstrapCheck implements BootstrapCheck {
     * least one network communication layer.
     */
    @Override
    public boolean check() {
    public boolean check(BootstrapContext context) {
        final Settings settings = context.settings;
        final boolean pkiRealmEnabled = settings.getGroups(RealmSettings.PREFIX).values().stream()
                .filter(s -> PkiRealm.TYPE.equals(s.get("type")))
                .anyMatch(s -> s.getAsBoolean("enabled", true));
@@ -242,10 +242,9 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
            // fetched
            final List<BootstrapCheck> checks = new ArrayList<>();
            checks.addAll(Arrays.asList(
                    new SSLBootstrapCheck(sslService, settings, env),
                    new TokenPassphraseBootstrapCheck(settings),
                    new TokenSSLBootstrapCheck(settings),
                    new PkiRealmBootstrapCheck(settings, sslService)));
                    new SSLBootstrapCheck(sslService, env),
                    new TokenSSLBootstrapCheck(),
                    new PkiRealmBootstrapCheck(sslService)));
            checks.addAll(InternalRealms.getBootstrapChecks(settings));
            this.bootstrapChecks = Collections.unmodifiableList(checks);
        } else {

@@ -491,7 +490,6 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
        settingsList.add(CompositeRolesStore.CACHE_SIZE_SETTING);
        settingsList.add(FieldPermissionsCache.CACHE_SIZE_SETTING);
        settingsList.add(TokenService.TOKEN_EXPIRATION);
        settingsList.add(TokenService.TOKEN_PASSPHRASE);
        settingsList.add(TokenService.DELETE_INTERVAL);
        settingsList.add(TokenService.DELETE_TIMEOUT);
        settingsList.add(SecurityServerTransportInterceptor.TRANSPORT_TYPE_PROFILE_SETTING);
@@ -1,51 +0,0 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.security;

import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.authc.TokenService;

/**
 * Bootstrap check to ensure that the user has set the token passphrase setting and is not using
 * the default value in production
 */
final class TokenPassphraseBootstrapCheck implements BootstrapCheck {

    static final int MINIMUM_PASSPHRASE_LENGTH = 8;

    private final boolean tokenServiceEnabled;
    private final SecureString tokenPassphrase;

    TokenPassphraseBootstrapCheck(Settings settings) {
        this.tokenServiceEnabled = XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.get(settings);

        this.tokenPassphrase = TokenService.TOKEN_PASSPHRASE.exists(settings) ? TokenService.TOKEN_PASSPHRASE.get(settings) : null;
    }

    @Override
    public boolean check() {
        if (tokenPassphrase == null) { // that's fine we bootstrap it ourself
            return false;
        }
        try (SecureString ignore = tokenPassphrase) {
            if (tokenServiceEnabled) {
                return tokenPassphrase.length() < MINIMUM_PASSPHRASE_LENGTH;
            }
        }
        // service is not enabled so no need to check
        return false;
    }

    @Override
    public String errorMessage() {
        return "Please set a passphrase using the elasticsearch-keystore tool for the setting [" + TokenService.TOKEN_PASSPHRASE.getKey() +
                "] that is at least " + MINIMUM_PASSPHRASE_LENGTH + " characters in length or " +
                "disable the token service using the [" + XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey() + "] setting";
    }
}
@@ -6,6 +6,7 @@
package org.elasticsearch.xpack.security;

import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.XPackSettings;

@@ -15,16 +16,11 @@ import org.elasticsearch.xpack.XPackSettings;
 */
final class TokenSSLBootstrapCheck implements BootstrapCheck {

    private final Settings settings;

    TokenSSLBootstrapCheck(Settings settings) {
        this.settings = settings;
    }

    @Override
    public boolean check() {
        if (NetworkModule.HTTP_ENABLED.get(settings)) {
            return XPackSettings.HTTP_SSL_ENABLED.get(settings) == false && XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.get(settings);
    public boolean check(BootstrapContext context) {
        if (NetworkModule.HTTP_ENABLED.get(context.settings)) {
            return XPackSettings.HTTP_SSL_ENABLED.get(context.settings) == false && XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.get
                    (context.settings);
        }
        return false;
    }
@@ -116,7 +116,6 @@ public final class TokenService extends AbstractComponent {
    private static final String TYPE = "doc";

    public static final String THREAD_POOL_NAME = XPackPlugin.SECURITY + "-token-key";
    public static final Setting<SecureString> TOKEN_PASSPHRASE = SecureSetting.secureString("xpack.security.authc.token.passphrase", null);
    public static final Setting<TimeValue> TOKEN_EXPIRATION = Setting.timeSetting("xpack.security.authc.token.timeout",
            TimeValue.timeValueMinutes(20L), TimeValue.timeValueSeconds(1L), Property.NodeScope);
    public static final Setting<TimeValue> DELETE_INTERVAL = Setting.timeSetting("xpack.security.authc.token.delete.interval",

@@ -155,14 +154,7 @@ public final class TokenService extends AbstractComponent {
        byte[] saltArr = new byte[SALT_BYTES];
        secureRandom.nextBytes(saltArr);

        final SecureString tokenPassphraseValue = TOKEN_PASSPHRASE.get(settings);
        final SecureString tokenPassphrase;
        if (tokenPassphraseValue.length() == 0) {
            tokenPassphrase = generateTokenKey();
        } else {
            tokenPassphrase = tokenPassphraseValue;
        }

        final SecureString tokenPassphrase = generateTokenKey();
        this.clock = clock.withZone(ZoneOffset.UTC);
        this.expirationDelay = TOKEN_EXPIRATION.get(settings);
        this.internalClient = internalClient;

@@ -236,41 +228,32 @@ public final class TokenService extends AbstractComponent {
        } else {
            // the token exists and the value is at least as long as we'd expect
            final Version version = Version.readVersion(in);
            if (version.before(Version.V_5_5_0)) {
                listener.onResponse(null);
            } else {
                final BytesKey decodedSalt = new BytesKey(in.readByteArray());
                final BytesKey passphraseHash;
                if (version.onOrAfter(Version.V_6_0_0_beta2)) {
                    passphraseHash = new BytesKey(in.readByteArray());
                } else {
                    passphraseHash = keyCache.currentTokenKeyHash;
                }
                KeyAndCache keyAndCache = keyCache.get(passphraseHash);
                if (keyAndCache != null) {
                    final SecretKey decodeKey = keyAndCache.getKey(decodedSalt);
                    final byte[] iv = in.readByteArray();
                    if (decodeKey != null) {
                        try {
                            decryptToken(in, getDecryptionCipher(iv, decodeKey, version, decodedSalt), version, listener);
                        } catch (GeneralSecurityException e) {
                            // could happen with a token that is not ours
                            logger.warn("invalid token", e);
                            listener.onResponse(null);
                        }
                    } else {
                        /* As a measure of protected against DOS, we can pass requests requiring a key
                         * computation off to a single thread executor. For normal usage, the initial
                         * request(s) that require a key computation will be delayed and there will be
                         * some additional latency.
                         */
                        internalClient.threadPool().executor(THREAD_POOL_NAME)
                                .submit(new KeyComputingRunnable(in, iv, version, decodedSalt, listener, keyAndCache));
            final BytesKey decodedSalt = new BytesKey(in.readByteArray());
            final BytesKey passphraseHash = new BytesKey(in.readByteArray());
            KeyAndCache keyAndCache = keyCache.get(passphraseHash);
            if (keyAndCache != null) {
                final SecretKey decodeKey = keyAndCache.getKey(decodedSalt);
                final byte[] iv = in.readByteArray();
                if (decodeKey != null) {
                    try {
                        decryptToken(in, getDecryptionCipher(iv, decodeKey, version, decodedSalt), version, listener);
                    } catch (GeneralSecurityException e) {
                        // could happen with a token that is not ours
                        logger.warn("invalid token", e);
                        listener.onResponse(null);
                    }
                } else {
                    logger.debug("invalid key {} key: {}", passphraseHash, keyCache.cache.keySet());
                    listener.onResponse(null);
                    /* As a measure of protection against DOS, we can pass requests requiring a key
                     * computation off to a single thread executor. For normal usage, the initial
                     * request(s) that require a key computation will be delayed and there will be
                     * some additional latency.
                     */
                    internalClient.threadPool().executor(THREAD_POOL_NAME)
                            .submit(new KeyComputingRunnable(in, iv, version, decodedSalt, listener, keyAndCache));
                }
            } else {
                logger.debug("invalid key {} key: {}", passphraseHash, keyCache.cache.keySet());
                listener.onResponse(null);
            }
        }
    }
|
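The comment retained in this hunk explains the DOS protection: deriving a key for an unknown salt is deliberately expensive, so those computations are funneled through the dedicated single-threaded `THREAD_POOL_NAME` executor instead of running on the calling threads. A stand-alone sketch of the same idea in plain `java.util.concurrent` terms; all names here are illustrative, not from this commit:

[source,java]
--------------------------------------------------
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical helper: at most one expensive key derivation runs at a time,
// so a flood of tokens with unknown salts can only ever occupy one thread
// while other requests proceed normally.
final class SingleThreadKeyDeriver {

    private final ExecutorService keyExecutor = Executors.newSingleThreadExecutor();

    void deriveKey(byte[] salt, Consumer<byte[]> listener) {
        // queue the computation; callers are delayed, never blocked in bulk
        keyExecutor.submit(() -> listener.accept(stretch(salt)));
    }

    private byte[] stretch(byte[] salt) {
        // stand-in for a PBKDF2-style computation that takes tens of milliseconds
        return salt.clone();
    }
}
--------------------------------------------------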
@@ -209,6 +209,7 @@ public class PkiRealm extends Realm {
 
         settings.add(SSL_SETTINGS.truststorePath);
         settings.add(SSL_SETTINGS.truststorePassword);
+        settings.add(SSL_SETTINGS.legacyTruststorePassword);
         settings.add(SSL_SETTINGS.truststoreAlgorithm);
         settings.add(SSL_SETTINGS.caPaths);
 

@@ -9,6 +9,7 @@ import java.nio.file.Path;
 
 import org.apache.lucene.util.SetOnce;
 import org.elasticsearch.bootstrap.BootstrapCheck;
+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.xpack.security.authc.RealmConfig;
 
 /**

@@ -27,7 +28,7 @@ public class RoleMappingFileBootstrapCheck implements BootstrapCheck {
     }
 
     @Override
-    public boolean check() {
+    public boolean check(BootstrapContext context) {
         try {
             DnRoleMapper.parseFile(path, realmConfig.logger(getClass()), realmConfig.type(), realmConfig.name(), true);
             return false;

@@ -7,6 +7,7 @@ package org.elasticsearch.xpack.ssl;
 
 import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.bootstrap.BootstrapCheck;
+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.inject.internal.Nullable;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;

@@ -33,18 +34,16 @@ import java.util.stream.Stream;
 public final class SSLBootstrapCheck implements BootstrapCheck {
 
     private final SSLService sslService;
-    private final Settings settings;
     private final Environment environment;
 
-    public SSLBootstrapCheck(SSLService sslService, Settings settings, @Nullable Environment environment) {
+    public SSLBootstrapCheck(SSLService sslService, @Nullable Environment environment) {
         this.sslService = sslService;
-        this.settings = settings;
         this.environment = environment;
     }
 
     @Override
-    public boolean check() {
-        final Settings transportSSLSettings = settings.getByPrefix(XPackSettings.TRANSPORT_SSL_PREFIX);
+    public boolean check(BootstrapContext context) {
+        final Settings transportSSLSettings = context.settings.getByPrefix(XPackSettings.TRANSPORT_SSL_PREFIX);
         return sslService.sslConfiguration(transportSSLSettings).keyConfig() == KeyConfig.NONE
                 || isDefaultCACertificateTrusted() || isDefaultPrivateKeyUsed();
     }

@@ -47,10 +47,12 @@ public class SSLConfigurationSettings {
     public final Setting<Optional<SSLClientAuth>> clientAuth;
     public final Setting<Optional<VerificationMode>> verificationMode;
 
+    // public for PKI realm
+    public final Setting<SecureString> legacyTruststorePassword;
+
     // pkg private for tests
     final Setting<SecureString> legacyKeystorePassword;
     final Setting<SecureString> legacyKeystoreKeyPassword;
-    final Setting<SecureString> legacyTruststorePassword;
     final Setting<SecureString> legacyKeyPassword;
 
     private final List<Setting<?>> allSettings;

@@ -6,6 +6,7 @@
 package org.elasticsearch.xpack.watcher;
 
 import org.elasticsearch.bootstrap.BootstrapCheck;
+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;
 import org.elasticsearch.xpack.XPackPlugin;

@@ -15,17 +16,16 @@ import java.nio.file.Path;
 
 final class EncryptSensitiveDataBootstrapCheck implements BootstrapCheck {
 
-    private final Settings settings;
     private final Environment environment;
 
-    EncryptSensitiveDataBootstrapCheck(Settings settings, Environment environment) {
-        this.settings = settings;
+    EncryptSensitiveDataBootstrapCheck(Environment environment) {
         this.environment = environment;
     }
 
     @Override
-    public boolean check() {
-        return Watcher.ENCRYPT_SENSITIVE_DATA_SETTING.get(settings) && Watcher.ENCRYPTION_KEY_SETTING.exists(settings) == false;
+    public boolean check(BootstrapContext context) {
+        return Watcher.ENCRYPT_SENSITIVE_DATA_SETTING.get(context.settings)
+                && Watcher.ENCRYPTION_KEY_SETTING.exists(context.settings) == false;
     }
 
     @Override

@@ -518,6 +518,6 @@ public class Watcher implements ActionPlugin {
     }
 
     public List<BootstrapCheck> getBootstrapChecks() {
-        return Collections.singletonList(new EncryptSensitiveDataBootstrapCheck(settings, new Environment(settings)));
+        return Collections.singletonList(new EncryptSensitiveDataBootstrapCheck(new Environment(settings)));
     }
 }
@@ -192,23 +192,18 @@ final class WatcherIndexingListener extends AbstractComponent implements Indexin
      */
     @Override
     public void clusterChanged(ClusterChangedEvent event) {
-        boolean isWatchExecutionDistributed = WatcherLifeCycleService.isWatchExecutionDistributed(event.state());
-        if (isWatchExecutionDistributed) {
-            if (event.state().nodes().getLocalNode().isDataNode() && event.metaDataChanged()) {
-                try {
-                    IndexMetaData metaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
-                    if (metaData == null) {
-                        configuration = INACTIVE;
-                    } else {
-                        checkWatchIndexHasChanged(metaData, event);
-                    }
-                } catch (IllegalStateException e) {
-                    logger.error("error loading watches index: [{}]", e.getMessage());
+        if (event.state().nodes().getLocalNode().isDataNode() && event.metaDataChanged()) {
+            try {
+                IndexMetaData metaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
+                if (metaData == null) {
+                    configuration = INACTIVE;
+                } else {
+                    checkWatchIndexHasChanged(metaData, event);
+                }
+            } catch (IllegalStateException e) {
+                logger.error("error loading watches index: [{}]", e.getMessage());
                 configuration = INACTIVE;
             }
         } else {
             configuration = INACTIVE;
         }
     }

@@ -5,7 +5,6 @@
  */
 package org.elasticsearch.xpack.watcher;
 
-import org.elasticsearch.Version;
 import org.elasticsearch.cluster.ClusterChangedEvent;
 import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.ClusterStateListener;
@@ -115,90 +114,60 @@ public class WatcherLifeCycleService extends AbstractComponent implements Cluste
         if (currentWatcherStopped) {
             executor.execute(() -> this.stop("watcher manually marked to shutdown in cluster state update, shutting down"));
         } else {
-            // if there are old nodes in the cluster hosting the watch index shards, we cannot run distributed, only on the master node
-            boolean isDistributedWatchExecutionEnabled = isWatchExecutionDistributed(event.state());
-            if (isDistributedWatchExecutionEnabled) {
-                if (watcherService.state() == WatcherState.STARTED && event.state().nodes().getLocalNode().isDataNode()) {
-                    DiscoveryNode localNode = event.state().nodes().getLocalNode();
-                    RoutingNode routingNode = event.state().getRoutingNodes().node(localNode.getId());
-                    IndexMetaData watcherIndexMetaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
-
-                    // no watcher index, time to pause, as there are for sure no shards on this node
-                    if (watcherIndexMetaData == null) {
-                        if (previousAllocationIds.get().isEmpty() == false) {
-                            previousAllocationIds.set(Collections.emptyList());
-                            executor.execute(() -> watcherService.pauseExecution("no watcher index found"));
-                        }
-                        return;
-                    }
-
-                    String watchIndex = watcherIndexMetaData.getIndex().getName();
-                    List<ShardRouting> localShards = routingNode.shardsWithState(watchIndex, RELOCATING, STARTED);
-
-                    // no local shards, empty out watcher and not waste resources!
-                    if (localShards.isEmpty()) {
-                        if (previousAllocationIds.get().isEmpty() == false) {
-                            executor.execute(() -> watcherService.pauseExecution("no local watcher shards"));
-                            previousAllocationIds.set(Collections.emptyList());
-                        }
-                        return;
-                    }
-
-                    List<String> currentAllocationIds = localShards.stream()
-                            .map(ShardRouting::allocationId)
-                            .map(AllocationId::getId)
-                            .collect(Collectors.toList());
-                    Collections.sort(currentAllocationIds);
-
-                    if (previousAllocationIds.get().equals(currentAllocationIds) == false) {
-                        previousAllocationIds.set(currentAllocationIds);
-                        executor.execute(() -> watcherService.reload(event.state(), "different shard allocation ids"));
-                    }
-                } else if (watcherService.state() != WatcherState.STARTED && watcherService.state() != WatcherState.STARTING) {
-                    IndexMetaData watcherIndexMetaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
-                    IndexMetaData triggeredWatchesIndexMetaData = WatchStoreUtils.getConcreteIndex(TriggeredWatchStore.INDEX_NAME,
-                            event.state().metaData());
-                    boolean isIndexInternalFormatWatchIndex = watcherIndexMetaData == null ||
-                            Upgrade.checkInternalIndexFormat(watcherIndexMetaData);
-                    boolean isIndexInternalFormatTriggeredWatchIndex = triggeredWatchesIndexMetaData == null ||
-                            Upgrade.checkInternalIndexFormat(triggeredWatchesIndexMetaData);
-                    if (isIndexInternalFormatTriggeredWatchIndex && isIndexInternalFormatWatchIndex) {
-                        executor.execute(() -> start(event.state(), false));
-                    } else {
-                        logger.warn("not starting watcher, upgrade API run required: .watches[{}], .triggered_watches[{}]",
-                                isIndexInternalFormatWatchIndex, isIndexInternalFormatTriggeredWatchIndex);
-                    }
-                }
-            } else {
-                if (event.localNodeMaster()) {
-                    if (watcherService.state() != WatcherState.STARTED && watcherService.state() != WatcherState.STARTING) {
-                        executor.execute(() -> start(event.state(), false));
-                    }
-                }
+            if (watcherService.state() == WatcherState.STARTED && event.state().nodes().getLocalNode().isDataNode()) {
+                DiscoveryNode localNode = event.state().nodes().getLocalNode();
+                RoutingNode routingNode = event.state().getRoutingNodes().node(localNode.getId());
+                IndexMetaData watcherIndexMetaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
+
+                // no watcher index, time to pause, as there are for sure no shards on this node
+                if (watcherIndexMetaData == null) {
+                    if (previousAllocationIds.get().isEmpty() == false) {
+                        previousAllocationIds.set(Collections.emptyList());
+                        executor.execute(() -> watcherService.pauseExecution("no watcher index found"));
+                    }
+                    return;
+                }
+
+                String watchIndex = watcherIndexMetaData.getIndex().getName();
+                List<ShardRouting> localShards = routingNode.shardsWithState(watchIndex, RELOCATING, STARTED);
+
+                // no local shards, empty out watcher and not waste resources!
+                if (localShards.isEmpty()) {
+                    if (previousAllocationIds.get().isEmpty() == false) {
+                        executor.execute(() -> watcherService.pauseExecution("no local watcher shards"));
+                        previousAllocationIds.set(Collections.emptyList());
+                    }
+                    return;
+                }
+
+                List<String> currentAllocationIds = localShards.stream()
+                        .map(ShardRouting::allocationId)
+                        .map(AllocationId::getId)
+                        .collect(Collectors.toList());
+                Collections.sort(currentAllocationIds);
+
+                if (previousAllocationIds.get().equals(currentAllocationIds) == false) {
+                    previousAllocationIds.set(currentAllocationIds);
+                    executor.execute(() -> watcherService.reload(event.state(), "different shard allocation ids"));
+                }
+            } else if (watcherService.state() != WatcherState.STARTED && watcherService.state() != WatcherState.STARTING) {
+                IndexMetaData watcherIndexMetaData = WatchStoreUtils.getConcreteIndex(Watch.INDEX, event.state().metaData());
+                IndexMetaData triggeredWatchesIndexMetaData = WatchStoreUtils.getConcreteIndex(TriggeredWatchStore.INDEX_NAME,
+                        event.state().metaData());
+                boolean isIndexInternalFormatWatchIndex = watcherIndexMetaData == null ||
+                        Upgrade.checkInternalIndexFormat(watcherIndexMetaData);
+                boolean isIndexInternalFormatTriggeredWatchIndex = triggeredWatchesIndexMetaData == null ||
+                        Upgrade.checkInternalIndexFormat(triggeredWatchesIndexMetaData);
+                if (isIndexInternalFormatTriggeredWatchIndex && isIndexInternalFormatWatchIndex) {
+                    executor.execute(() -> start(event.state(), false));
+                } else {
+                    if (watcherService.state() == WatcherState.STARTED || watcherService.state() == WatcherState.STARTING) {
+                        executor.execute(() -> watcherService.pauseExecution("Pausing watcher, cluster contains old nodes not supporting" +
+                                " distributed watch execution"));
+                    }
+                    logger.warn("not starting watcher, upgrade API run required: .watches[{}], .triggered_watches[{}]",
+                            isIndexInternalFormatWatchIndex, isIndexInternalFormatTriggeredWatchIndex);
+                }
             }
         }
     }
 
-    /**
-     * Checks if the preconditions are given to run watcher with distributed watch execution.
-     * The following requirements need to be fulfilled
-     *
-     * 1. The master node must run on a version greather than or equal 6.0
-     * 2. The nodes holding the watcher shards must run on a version greater than or equal 6.0
-     *
-     * @param state The cluster to check against
-     * @return true, if the above requirements are fulfilled, false otherwise
-     */
-    public static boolean isWatchExecutionDistributed(ClusterState state) {
-        // short circuit if all nodes are on 6.x, should be standard after upgrade
-        return state.nodes().getMinNodeVersion().onOrAfter(Version.V_6_0_0_beta1);
-    }
-
     public WatcherMetaData watcherMetaData() {
         return watcherMetaData;
     }
@@ -29,6 +29,7 @@ import static org.elasticsearch.rest.RestRequest.Method.PUT;
  * The rest action to ack a watch
  */
 public class RestAckWatchAction extends WatcherRestHandler {
 
     public RestAckWatchAction(Settings settings, RestController controller) {
         super(settings);
         controller.registerHandler(POST, URI_BASE + "/watch/{id}/_ack", this);

@@ -49,7 +50,6 @@ public class RestAckWatchAction extends WatcherRestHandler {
         if (actions != null) {
             ackWatchRequest.setActionIds(actions);
         }
-        ackWatchRequest.masterNodeTimeout(request.paramAsTime("master_timeout", ackWatchRequest.masterNodeTimeout()));
         return channel -> client.ackWatch(ackWatchRequest, new RestBuilderListener<AckWatchResponse>(channel) {
             @Override
             public RestResponse buildResponse(AckWatchResponse response, XContentBuilder builder) throws Exception {

@@ -60,5 +60,4 @@ public class RestAckWatchAction extends WatcherRestHandler {
             }
         });
     }
-
 }

@@ -38,7 +38,6 @@ public class RestDeleteWatchAction extends WatcherRestHandler {
     @Override
     protected RestChannelConsumer doPrepareRequest(final RestRequest request, WatcherClient client) throws IOException {
         DeleteWatchRequest deleteWatchRequest = new DeleteWatchRequest(request.param("id"));
-        deleteWatchRequest.masterNodeTimeout(request.paramAsTime("master_timeout", deleteWatchRequest.masterNodeTimeout()));
         return channel -> client.deleteWatch(deleteWatchRequest, new RestBuilderListener<DeleteWatchResponse>(channel) {
             @Override
             public RestResponse buildResponse(DeleteWatchResponse response, XContentBuilder builder) throws Exception {

@@ -46,7 +46,6 @@ public class RestPutWatchAction extends WatcherRestHandler implements RestReques
     protected RestChannelConsumer doPrepareRequest(final RestRequest request, WatcherClient client) throws IOException {
         PutWatchRequest putWatchRequest =
                 new PutWatchRequest(request.param("id"), request.content(), request.getXContentType());
-        putWatchRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putWatchRequest.masterNodeTimeout()));
         putWatchRequest.setActive(request.paramAsBoolean("active", putWatchRequest.isActive()));
         return channel -> client.putWatch(putWatchRequest, new RestBuilderListener<PutWatchResponse>(channel) {
             @Override

@@ -6,16 +6,11 @@
 package org.elasticsearch.xpack.watcher.transport.actions;
 
 import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionResponse;
 import org.elasticsearch.action.support.ActionFilters;
-import org.elasticsearch.action.support.master.MasterNodeRequest;
-import org.elasticsearch.action.support.master.TransportMasterNodeAction;
-import org.elasticsearch.cluster.ClusterState;
-import org.elasticsearch.cluster.block.ClusterBlockException;
-import org.elasticsearch.cluster.block.ClusterBlockLevel;
-import org.elasticsearch.cluster.metadata.IndexMetaData;
+import org.elasticsearch.action.support.HandledTransportAction;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.license.LicenseUtils;
 import org.elasticsearch.license.XPackLicenseState;

@@ -23,54 +18,25 @@ import org.elasticsearch.tasks.Task;
 import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.transport.TransportService;
 import org.elasticsearch.xpack.XPackPlugin;
-import org.elasticsearch.xpack.watcher.WatcherLifeCycleService;
-import org.elasticsearch.xpack.watcher.watch.Watch;
-import org.elasticsearch.xpack.watcher.watch.WatchStoreUtils;
 
 import java.util.function.Supplier;
 
-public abstract class WatcherTransportAction<Request extends MasterNodeRequest<Request>, Response extends ActionResponse>
-        extends TransportMasterNodeAction<Request, Response> {
+public abstract class WatcherTransportAction<Request extends ActionRequest, Response extends ActionResponse>
+        extends HandledTransportAction<Request, Response> {
 
     protected final XPackLicenseState licenseState;
-    private final ClusterService clusterService;
-    private final Supplier<Response> response;
 
     public WatcherTransportAction(Settings settings, String actionName, TransportService transportService, ThreadPool threadPool,
                                   ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
-                                  XPackLicenseState licenseState, ClusterService clusterService, Supplier<Request> request,
-                                  Supplier<Response> response) {
-        super(settings, actionName, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, request);
+                                  XPackLicenseState licenseState, Supplier<Request> request) {
+        super(settings, actionName, threadPool, transportService, actionFilters, indexNameExpressionResolver, request);
         this.licenseState = licenseState;
-        this.clusterService = clusterService;
-        this.response = response;
     }
 
-    protected String executor() {
-        return ThreadPool.Names.GENERIC;
-    }
-
-    @Override
-    protected Response newResponse() {
-        return response.get();
-    }
-
-    protected abstract void masterOperation(Request request, ClusterState state, ActionListener<Response> listener) throws Exception;
-
-    protected boolean localExecute(Request request) {
-        return WatcherLifeCycleService.isWatchExecutionDistributed(clusterService.state());
-    }
-
-    @Override
-    protected ClusterBlockException checkBlock(Request request, ClusterState state) {
-        IndexMetaData index = WatchStoreUtils.getConcreteIndex(Watch.INDEX, state.metaData());
-        if (index != null) {
-            return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, index.getIndex().getName());
-        } else {
-            return state.blocks().globalBlockedException(ClusterBlockLevel.WRITE);
-        }
-    }
-
     @Override
     protected void doExecute(Task task, final Request request, ActionListener<Response> listener) {
         if (licenseState.isWatcherAllowed()) {
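This hunk is the pivot of the watcher refactor: `WatcherTransportAction` drops `TransportMasterNodeAction` (master routing, cluster-block checks, the `newResponse()` supplier) in favor of `HandledTransportAction`, so watch requests execute on whichever node receives them. A sketch of what a converted action looks like under the new base class; the `Fake*` types are placeholders, not classes from this commit:

[source,java]
--------------------------------------------------
// Hypothetical subclass illustrating the new contract: override the local
// doExecute(...) instead of masterOperation(...), and pass only the request
// supplier to the superclass.
public class TransportFakeWatchAction extends WatcherTransportAction<FakeWatchRequest, FakeWatchResponse> {

    @Inject
    public TransportFakeWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool,
                                    ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
                                    XPackLicenseState licenseState) {
        super(settings, FakeWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
                licenseState, FakeWatchRequest::new);
    }

    @Override
    protected void doExecute(FakeWatchRequest request, ActionListener<FakeWatchResponse> listener) {
        // runs on the coordinating node; no master hand-off and no
        // checkBlock(...) / newResponse() overrides are required anymore
        listener.onResponse(new FakeWatchResponse());
    }
}
--------------------------------------------------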
@@ -5,13 +5,12 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.ack;
 
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeRequest;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.xpack.watcher.watch.Watch;
 
 import java.io.IOException;

@@ -20,9 +19,7 @@ import java.util.Locale;
 /**
  * A ack watch request to ack a watch by name (id)
  */
-public class AckWatchRequest extends MasterNodeRequest<AckWatchRequest> {
-
-    private static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueSeconds(10);
+public class AckWatchRequest extends ActionRequest {
 
     private String watchId;
     private String[] actionIds = Strings.EMPTY_ARRAY;

@@ -34,7 +31,6 @@ public class AckWatchRequest extends MasterNodeRequest<AckWatchRequest> {
     public AckWatchRequest(String watchId, String... actionIds) {
         this.watchId = watchId;
         this.actionIds = actionIds;
-        masterNodeTimeout(DEFAULT_TIMEOUT);
     }
 
     /**

@@ -5,13 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.ack;
 
-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 
 /**
  * A ack watch action request builder.
  */
-public class AckWatchRequestBuilder extends MasterNodeOperationRequestBuilder<AckWatchRequest, AckWatchResponse, AckWatchRequestBuilder> {
+public class AckWatchRequestBuilder extends ActionRequestBuilder<AckWatchRequest, AckWatchResponse, AckWatchRequestBuilder> {
 
     public AckWatchRequestBuilder(ElasticsearchClient client) {
         super(client, AckWatchAction.INSTANCE, new AckWatchRequest());

@@ -25,6 +25,4 @@ public class AckWatchRequestBuilder extends MasterNodeOperationRequestBuilder<Ac
         request.setActionIds(actionIds);
         return this;
     }
-
-
 }

@@ -12,10 +12,8 @@ import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.action.support.WriteRequest;
 import org.elasticsearch.action.update.UpdateRequest;
 import org.elasticsearch.client.Client;
-import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
 import org.elasticsearch.cluster.routing.Preference;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.ToXContent;

@@ -46,17 +44,16 @@ public class TransportAckWatchAction extends WatcherTransportAction<AckWatchRequ
     @Inject
     public TransportAckWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool, ActionFilters actionFilters,
                                    IndexNameExpressionResolver indexNameExpressionResolver, Clock clock, XPackLicenseState licenseState,
-                                   Watch.Parser parser, InternalClient client, ClusterService clusterService) {
+                                   Watch.Parser parser, InternalClient client) {
         super(settings, AckWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
-                licenseState, clusterService, AckWatchRequest::new, AckWatchResponse::new);
+                licenseState, AckWatchRequest::new);
         this.clock = clock;
         this.parser = parser;
         this.client = client;
     }
 
     @Override
-    protected void masterOperation(AckWatchRequest request, ClusterState state,
-                                   ActionListener<AckWatchResponse> listener) throws Exception {
+    protected void doExecute(AckWatchRequest request, ActionListener<AckWatchResponse> listener) {
         GetRequest getRequest = new GetRequest(Watch.INDEX, Watch.DOC_TYPE, request.getWatchId())
                 .preference(Preference.LOCAL.type()).realtime(true);
 
@@ -5,12 +5,11 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.activate;
 
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeRequest;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.xpack.watcher.watch.Watch;
 
 import java.io.IOException;

@@ -18,9 +17,7 @@ import java.io.IOException;
 /**
  * A ack watch request to ack a watch by name (id)
  */
-public class ActivateWatchRequest extends MasterNodeRequest<ActivateWatchRequest> {
-
-    private static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueSeconds(10);
+public class ActivateWatchRequest extends ActionRequest {
 
     private String watchId;
     private boolean activate;

@@ -32,7 +29,6 @@ public class ActivateWatchRequest extends MasterNodeRequest<ActivateWatchRequest
     public ActivateWatchRequest(String watchId, boolean activate) {
         this.watchId = watchId;
         this.activate = activate;
-        masterNodeTimeout(DEFAULT_TIMEOUT);
     }
 
     /**

@@ -5,13 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.activate;
 
-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 
 /**
  * A activate watch action request builder.
  */
-public class ActivateWatchRequestBuilder extends MasterNodeOperationRequestBuilder<ActivateWatchRequest, ActivateWatchResponse,
+public class ActivateWatchRequestBuilder extends ActionRequestBuilder<ActivateWatchRequest, ActivateWatchResponse,
         ActivateWatchRequestBuilder> {
 
     public ActivateWatchRequestBuilder(ElasticsearchClient client) {

@@ -12,10 +12,8 @@ import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.action.support.WriteRequest;
 import org.elasticsearch.action.update.UpdateRequest;
 import org.elasticsearch.client.Client;
-import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
 import org.elasticsearch.cluster.routing.Preference;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentBuilder;

@@ -25,7 +23,6 @@ import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.transport.TransportService;
 import org.elasticsearch.xpack.security.InternalClient;
 import org.elasticsearch.xpack.watcher.transport.actions.WatcherTransportAction;
-import org.elasticsearch.xpack.watcher.trigger.TriggerService;
 import org.elasticsearch.xpack.watcher.watch.Watch;
 import org.elasticsearch.xpack.watcher.watch.WatchStatus;
 import org.joda.time.DateTime;

@@ -45,25 +42,20 @@ public class TransportActivateWatchAction extends WatcherTransportAction<Activat
     private final Clock clock;
     private final Watch.Parser parser;
     private final Client client;
-    private final TriggerService triggerService;
 
     @Inject
     public TransportActivateWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool,
                                         ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Clock clock,
-                                        XPackLicenseState licenseState, Watch.Parser parser, ClusterService clusterService,
-                                        InternalClient client, TriggerService triggerService) {
+                                        XPackLicenseState licenseState, Watch.Parser parser, InternalClient client) {
         super(settings, ActivateWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
-                licenseState, clusterService, ActivateWatchRequest::new, ActivateWatchResponse::new);
+                licenseState, ActivateWatchRequest::new);
         this.clock = clock;
         this.parser = parser;
         this.client = client;
-        this.triggerService = triggerService;
     }
 
     @Override
-    protected void masterOperation(ActivateWatchRequest request, ClusterState state, ActionListener<ActivateWatchResponse> listener)
-            throws Exception {
-
+    protected void doExecute(ActivateWatchRequest request, ActionListener<ActivateWatchResponse> listener) {
         try {
             DateTime now = new DateTime(clock.millis(), UTC);
             UpdateRequest updateRequest = new UpdateRequest(Watch.INDEX, Watch.DOC_TYPE, request.getWatchId());

@@ -84,13 +76,6 @@ public class TransportActivateWatchAction extends WatcherTransportAction<Activat
                         XContentType.JSON);
                 watch.version(getResponse.getVersion());
                 watch.status().version(getResponse.getVersion());
-                if (localExecute(request)) {
-                    if (watch.status().state().isActive()) {
-                        triggerService.add(watch);
-                    } else {
-                        triggerService.remove(watch.id());
-                    }
-                }
                 listener.onResponse(new ActivateWatchResponse(watch.status()));
             } else {
                 listener.onFailure(new ResourceNotFoundException("Watch with id [{}] does not exist", request.getWatchId()));

@@ -114,5 +99,4 @@ public class TransportActivateWatchAction extends WatcherTransportAction<Activat
         return builder;
     }
     }
-
 }
@@ -5,13 +5,12 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.delete;
 
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeRequest;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.lucene.uid.Versions;
-import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.xpack.watcher.watch.Watch;
 
 import java.io.IOException;

@@ -19,9 +18,7 @@ import java.io.IOException;
 /**
  * A delete watch request to delete an watch by name (id)
  */
-public class DeleteWatchRequest extends MasterNodeRequest<DeleteWatchRequest> {
-
-    private static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueSeconds(10);
+public class DeleteWatchRequest extends ActionRequest {
 
     private String id;
     private long version = Versions.MATCH_ANY;

@@ -32,7 +29,6 @@ public class DeleteWatchRequest extends MasterNodeRequest<DeleteWatchRequest> {
 
     public DeleteWatchRequest(String id) {
         this.id = id;
-        masterNodeTimeout(DEFAULT_TIMEOUT);
     }
 
     /**

@@ -5,13 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.delete;
 
-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 
 /**
  * A delete document action request builder.
  */
-public class DeleteWatchRequestBuilder extends MasterNodeOperationRequestBuilder<DeleteWatchRequest, DeleteWatchResponse,
+public class DeleteWatchRequestBuilder extends ActionRequestBuilder<DeleteWatchRequest, DeleteWatchResponse,
         DeleteWatchRequestBuilder> {
 
     public DeleteWatchRequestBuilder(ElasticsearchClient client) {

@@ -5,15 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.execute;
 
-import org.elasticsearch.Version;
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeReadRequest;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.xpack.watcher.client.WatchSourceBuilder;
 import org.elasticsearch.xpack.watcher.execution.ActionExecutionMode;

@@ -28,7 +26,7 @@ import java.util.Map;
 /**
  * An execute watch request to execute a watch by id
  */
-public class ExecuteWatchRequest extends MasterNodeReadRequest<ExecuteWatchRequest> {
+public class ExecuteWatchRequest extends ActionRequest {
 
     public static final String INLINE_WATCH_ID = "_inlined_";
 

@@ -240,11 +238,7 @@ public class ExecuteWatchRequest extends MasterNodeReadRequest<ExecuteWatchReque
         }
         if (in.readBoolean()) {
             watchSource = in.readBytesReference();
-            if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
-                xContentType = XContentType.readFrom(in);
-            } else {
-                xContentType = XContentFactory.xContentType(watchSource);
-            }
+            xContentType = XContentType.readFrom(in);
         }
         debug = in.readBoolean();
     }

@@ -272,9 +266,7 @@ public class ExecuteWatchRequest extends MasterNodeReadRequest<ExecuteWatchReque
         out.writeBoolean(watchSource != null);
         if (watchSource != null) {
             out.writeBytesReference(watchSource);
-            if (out.getVersion().onOrAfter(Version.V_5_3_0)) {
-                xContentType.writeTo(out);
-            }
+            xContentType.writeTo(out);
         }
         out.writeBoolean(debug);
     }
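The two serialization hunks above delete the `Version.V_5_3_0` guards: once the minimum wire-compatible version is at or above the version that introduced a field, the conditional read/write collapses to the unconditional form. For reference, the general shape of the gate being removed, mirroring the deleted lines (`watchSource` and `xContentType` are the fields of `ExecuteWatchRequest`):

[source,java]
--------------------------------------------------
// The pre-6.0 pattern these hunks remove: only exchange the field with peers
// that are new enough to know about it, and reconstruct it locally otherwise.
@Override
public void readFrom(StreamInput in) throws IOException {
    super.readFrom(in);
    if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
        xContentType = XContentType.readFrom(in);                  // peer sent it
    } else {
        xContentType = XContentFactory.xContentType(watchSource);  // sniff from bytes
    }
}

@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    if (out.getVersion().onOrAfter(Version.V_5_3_0)) {
        xContentType.writeTo(out);                                 // older peers get nothing
    }
}
--------------------------------------------------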
@@ -5,7 +5,7 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.execute;
 
-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.xcontent.XContentType;

@@ -19,7 +19,7 @@ import java.util.Map;
 /**
  * A execute watch action request builder.
  */
-public class ExecuteWatchRequestBuilder extends MasterNodeOperationRequestBuilder<ExecuteWatchRequest, ExecuteWatchResponse,
+public class ExecuteWatchRequestBuilder extends ActionRequestBuilder<ExecuteWatchRequest, ExecuteWatchResponse,
         ExecuteWatchRequestBuilder> {
 
     public ExecuteWatchRequestBuilder(ElasticsearchClient client) {

@@ -12,10 +12,8 @@ import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.get.GetRequest;
 import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.client.Client;
-import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
 import org.elasticsearch.cluster.routing.Preference;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentBuilder;

@@ -62,10 +60,9 @@ public class TransportExecuteWatchAction extends WatcherTransportAction<ExecuteW
     public TransportExecuteWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool,
                                        ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
                                        ExecutionService executionService, Clock clock, XPackLicenseState licenseState,
-                                       Watch.Parser watchParser, InternalClient client, TriggerService triggerService,
-                                       ClusterService clusterService) {
+                                       Watch.Parser watchParser, InternalClient client, TriggerService triggerService) {
         super(settings, ExecuteWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
-                licenseState, clusterService, ExecuteWatchRequest::new, ExecuteWatchResponse::new);
+                licenseState, ExecuteWatchRequest::new);
         this.executionService = executionService;
         this.clock = clock;
         this.triggerService = triggerService;

@@ -74,8 +71,7 @@ public class TransportExecuteWatchAction extends WatcherTransportAction<ExecuteW
     }
 
     @Override
-    protected void masterOperation(ExecuteWatchRequest request, ClusterState state,
-                                   ActionListener<ExecuteWatchResponse> listener) throws Exception {
+    protected void doExecute(ExecuteWatchRequest request, ActionListener<ExecuteWatchResponse> listener) {
         if (request.getId() != null) {
             GetRequest getRequest = new GetRequest(Watch.INDEX, Watch.DOC_TYPE, request.getId())
                     .preference(Preference.LOCAL.type()).realtime(true);

@@ -139,5 +135,4 @@ public class TransportExecuteWatchAction extends WatcherTransportAction<ExecuteW
         }
         });
     }
-
 }

@@ -5,9 +5,9 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.get;
 
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeReadRequest;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.xpack.watcher.watch.Watch;

@@ -17,7 +17,7 @@ import java.io.IOException;
 /**
  * The request to get the watch by name (id)
  */
-public class GetWatchRequest extends MasterNodeReadRequest<GetWatchRequest> {
+public class GetWatchRequest extends ActionRequest {
 
     private String id;
 

@@ -5,13 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.get;
 
-import org.elasticsearch.action.support.master.MasterNodeReadOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 
 /**
  * A delete document action request builder.
  */
-public class GetWatchRequestBuilder extends MasterNodeReadOperationRequestBuilder<GetWatchRequest, GetWatchResponse,
+public class GetWatchRequestBuilder extends ActionRequestBuilder<GetWatchRequest, GetWatchResponse,
         GetWatchRequestBuilder> {
 
     public GetWatchRequestBuilder(ElasticsearchClient client, String id) {

@@ -9,10 +9,8 @@ import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.get.GetRequest;
 import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.client.Client;
-import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
 import org.elasticsearch.cluster.routing.Preference;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentBuilder;

@@ -41,17 +39,16 @@ public class TransportGetWatchAction extends WatcherTransportAction<GetWatchRequ
     @Inject
     public TransportGetWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool, ActionFilters actionFilters,
                                    IndexNameExpressionResolver indexNameExpressionResolver, XPackLicenseState licenseState,
-                                   Watch.Parser parser, Clock clock, InternalClient client, ClusterService clusterService) {
+                                   Watch.Parser parser, Clock clock, InternalClient client) {
         super(settings, GetWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
-                licenseState, clusterService, GetWatchRequest::new, GetWatchResponse::new);
+                licenseState, GetWatchRequest::new);
         this.parser = parser;
         this.clock = clock;
         this.client = client;
     }
 
     @Override
-    protected void masterOperation(GetWatchRequest request, ClusterState state,
-                                   ActionListener<GetWatchResponse> listener) throws Exception {
+    protected void doExecute(GetWatchRequest request, ActionListener<GetWatchResponse> listener) {
         GetRequest getRequest = new GetRequest(Watch.INDEX, Watch.DOC_TYPE, request.getId())
                 .preference(Preference.LOCAL.type()).realtime(true);
 
@@ -6,15 +6,12 @@
 package org.elasticsearch.xpack.watcher.transport.actions.put;
 
 
-import org.elasticsearch.Version;
+import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionRequestValidationException;
 import org.elasticsearch.action.ValidateActions;
-import org.elasticsearch.action.support.master.MasterNodeRequest;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.unit.TimeValue;
-import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.xpack.watcher.client.WatchSourceBuilder;
 import org.elasticsearch.xpack.watcher.watch.Watch;

@@ -25,9 +22,7 @@ import java.io.IOException;
  * This request class contains the data needed to create a watch along with the name of the watch.
  * The name of the watch will become the ID of the indexed document.
  */
-public class PutWatchRequest extends MasterNodeRequest<PutWatchRequest> {
-
-    private static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueSeconds(10);
+public class PutWatchRequest extends ActionRequest {
 
     private String id;
     private BytesReference source;

@@ -45,7 +40,6 @@ public class PutWatchRequest extends MasterNodeRequest<PutWatchRequest> {
         this.id = id;
         this.source = source;
         this.xContentType = xContentType;
-        masterNodeTimeout(DEFAULT_TIMEOUT);
     }
 
     /**

@@ -125,11 +119,7 @@ public class PutWatchRequest extends MasterNodeRequest<PutWatchRequest> {
         id = in.readString();
         source = in.readBytesReference();
         active = in.readBoolean();
-        if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
-            xContentType = XContentType.readFrom(in);
-        } else {
-            xContentType = XContentFactory.xContentType(source);
-        }
+        xContentType = XContentType.readFrom(in);
     }
 
     @Override

@@ -138,9 +128,6 @@ public class PutWatchRequest extends MasterNodeRequest<PutWatchRequest> {
         out.writeString(id);
         out.writeBytesReference(source);
         out.writeBoolean(active);
-        if (out.getVersion().onOrAfter(Version.V_5_3_0)) {
-            xContentType.writeTo(out);
-        }
+        xContentType.writeTo(out);
     }
 
 }

@@ -5,13 +5,13 @@
  */
 package org.elasticsearch.xpack.watcher.transport.actions.put;
 
-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;
+import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.xpack.watcher.client.WatchSourceBuilder;
 
-public class PutWatchRequestBuilder extends MasterNodeOperationRequestBuilder<PutWatchRequest, PutWatchResponse, PutWatchRequestBuilder> {
+public class PutWatchRequestBuilder extends ActionRequestBuilder<PutWatchRequest, PutWatchResponse, PutWatchRequestBuilder> {
 
     public PutWatchRequestBuilder(ElasticsearchClient client) {
         super(client, PutWatchAction.INSTANCE, new PutWatchRequest());
@@ -10,9 +10,7 @@ import org.elasticsearch.action.DocWriteResponse;
 import org.elasticsearch.action.index.IndexRequest;
 import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.action.support.WriteRequest;
-import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
-import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;

@@ -24,7 +22,6 @@ import org.elasticsearch.transport.TransportService;
 import org.elasticsearch.xpack.security.InternalClient;
 import org.elasticsearch.xpack.watcher.support.xcontent.WatcherParams;
 import org.elasticsearch.xpack.watcher.transport.actions.WatcherTransportAction;
-import org.elasticsearch.xpack.watcher.trigger.TriggerService;
 import org.elasticsearch.xpack.watcher.watch.Payload;
 import org.elasticsearch.xpack.watcher.watch.Watch;
 import org.joda.time.DateTime;

@@ -39,24 +36,20 @@ public class TransportPutWatchAction extends WatcherTransportAction<PutWatchRequ
     private final Clock clock;
     private final Watch.Parser parser;
     private final InternalClient client;
-    private final TriggerService triggerService;
 
     @Inject
     public TransportPutWatchAction(Settings settings, TransportService transportService, ThreadPool threadPool, ActionFilters actionFilters,
                                    IndexNameExpressionResolver indexNameExpressionResolver, Clock clock, XPackLicenseState licenseState,
-                                   Watch.Parser parser, InternalClient client, ClusterService clusterService,
-                                   TriggerService triggerService) {
+                                   Watch.Parser parser, InternalClient client) {
         super(settings, PutWatchAction.NAME, transportService, threadPool, actionFilters, indexNameExpressionResolver,
-                licenseState, clusterService, PutWatchRequest::new, PutWatchResponse::new);
+                licenseState, PutWatchRequest::new);
         this.clock = clock;
         this.parser = parser;
         this.client = client;
-        this.triggerService = triggerService;
     }
 
     @Override
-    protected void masterOperation(PutWatchRequest request, ClusterState state,
-                                   ActionListener<PutWatchResponse> listener) throws Exception {
+    protected void doExecute(PutWatchRequest request, ActionListener<PutWatchResponse> listener) {
         try {
             DateTime now = new DateTime(clock.millis(), UTC);
             Watch watch = parser.parseWithSecrets(request.getId(), false, request.getSource(), now, request.xContentType());

@@ -73,9 +66,6 @@ public class TransportPutWatchAction extends WatcherTransportAction<PutWatchRequ
 
             client.index(indexRequest, ActionListener.wrap(indexResponse -> {
                 boolean created = indexResponse.getResult() == DocWriteResponse.Result.CREATED;
-                if (localExecute(request) == false && watch.status().state().isActive()) {
-                    triggerService.add(watch);
-                }
                 listener.onResponse(new PutWatchResponse(indexResponse.getId(), indexResponse.getVersion(), created));
             }, listener::onFailure));
         }

@@ -2,8 +2,7 @@
   "index_patterns": ".security_audit_log*",
   "order": 2147483647,
   "settings": {
-    "index.format": 6,
-    "index.mapper.dynamic" : false
+    "index.format": 6
   },
   "mappings": {
     "doc": {

@@ -3,7 +3,6 @@
   "order": 2147483647,
   "settings": {
     "index.number_of_shards": 1,
-    "index.mapper.dynamic" : false,
     "index.refresh_interval" : "-1",
     "index.format": 6,
     "index.priority": 900

@@ -4,8 +4,7 @@
   "settings": {
     "xpack.watcher.template.version": "${xpack.watcher.template.version}",
     "index.number_of_shards": 1,
-    "index.format": 6,
-    "index.mapper.dynamic": false
+    "index.format": 6
   },
   "mappings": {
     "doc": {

@@ -3,7 +3,6 @@
   "order": 2147483647,
   "settings": {
     "index.number_of_shards": 1,
-    "index.mapper.dynamic" : false,
     "index.format": 6,
     "index.priority": 800
   },
@@ -142,11 +142,11 @@ public class IndexDeprecationChecksTests extends ESTestCase {
     }
     public void testStoreThrottleSettingsCheck() {
         assertSettingsAndIssue("index.store.throttle.max_bytes_per_sec", "32",
-            new DeprecationIssue(DeprecationIssue.Level.CRITICAL,
-                "index.store.throttle settings are no longer recognized. these settings should be removed",
-                "https://www.elastic.co/guide/en/elasticsearch/reference/master/" +
-                    "breaking_60_settings_changes.html#_store_throttling_settings",
-                "present settings: [index.store.throttle.max_bytes_per_sec]"));
+                new DeprecationIssue(DeprecationIssue.Level.CRITICAL,
+                        "index.store.throttle settings are no longer recognized. these settings should be removed",
+                        "https://www.elastic.co/guide/en/elasticsearch/reference/master/" +
+                                "breaking_60_settings_changes.html#_store_throttling_settings",
+                        "present settings: [index.store.throttle.max_bytes_per_sec]"));
         assertSettingsAndIssue("index.store.throttle.type", "none",
             new DeprecationIssue(DeprecationIssue.Level.CRITICAL,
                 "index.store.throttle settings are no longer recognized. these settings should be removed",

@@ -159,7 +159,7 @@ public class IndexDeprecationChecksTests extends ESTestCase {
         assertSettingsAndIssue("index.shared_filesystem", "true",
             new DeprecationIssue(DeprecationIssue.Level.CRITICAL,
                 "[index.shared_filesystem] setting should be removed",
-                "https://www.elastic.co/guide/en/elasticsearch/reference/master/" +
-                    "breaking_60_settings_changes.html#_shadow_replicas_have_been_removed", null));
+                "https://www.elastic.co/guide/en/elasticsearch/reference/6.0/" +
+                    "breaking_60_indices_changes.html#_shadow_replicas_have_been_removed", null));
     }
 }
@@ -149,10 +149,9 @@ public class MachineLearningTemplateRegistryTests extends ESTestCase {
                 new MachineLearningTemplateRegistry(createSettings(), clusterService, client, threadPool);
         Settings settings = templateRegistry.mlResultsIndexSettings().build();
 
-        assertEquals(4, settings.size());
+        assertEquals(3, settings.size());
         assertThat(settings.get("index.number_of_shards"), is(nullValue()));
         assertEquals("async", settings.get("index.translog.durability"));
-        assertEquals("true", settings.get("index.mapper.dynamic"));
         assertEquals("all_field_values", settings.get("index.query.default_field"));
         assertEquals("2s", settings.get("index.unassigned.node_left.delayed_timeout"));
     }

@@ -162,9 +161,8 @@ public class MachineLearningTemplateRegistryTests extends ESTestCase {
                 new MachineLearningTemplateRegistry(createSettings(), clusterService, client, threadPool);
         Settings settings = templateRegistry.mlNotificationIndexSettings().build();
 
-        assertEquals(3, settings.size());
+        assertEquals(2, settings.size());
         assertEquals("1", settings.get("index.number_of_shards"));
-        assertEquals("true", settings.get("index.mapper.dynamic"));
         assertEquals("2s", settings.get("index.unassigned.node_left.delayed_timeout"));
     }
 
@@ -40,6 +40,8 @@ import static org.mockito.Mockito.when;
 
 public class MlInitializationServiceTests extends ESTestCase {
 
+    private static final ClusterName CLUSTER_NAME = new ClusterName("my_cluster");
+
     private ThreadPool threadPool;
     private ExecutorService executorService;
     private ClusterService clusterService;

@@ -60,6 +62,8 @@ public class MlInitializationServiceTests extends ESTestCase {
 
         ScheduledFuture scheduledFuture = mock(ScheduledFuture.class);
         when(threadPool.schedule(any(), any(), any())).thenReturn(scheduledFuture);
+
+        when(clusterService.getClusterName()).thenReturn(CLUSTER_NAME);
     }
 
     public void testInitialize() throws Exception {

@@ -93,7 +97,6 @@ public class MlInitializationServiceTests extends ESTestCase {
     }
 
     public void testInitialize_alreadyInitialized() throws Exception {
-        ClusterService clusterService = mock(ClusterService.class);
         MlInitializationService initializationService = new MlInitializationService(Settings.EMPTY, threadPool, clusterService, client);
 
         ClusterState cs = ClusterState.builder(new ClusterName("_name"))

@@ -113,7 +116,6 @@ public class MlInitializationServiceTests extends ESTestCase {
     }
 
     public void testInitialize_onlyOnce() throws Exception {
-        ClusterService clusterService = mock(ClusterService.class);
         MlInitializationService initializationService = new MlInitializationService(Settings.EMPTY, threadPool, clusterService, client);
 
         ClusterState cs = ClusterState.builder(new ClusterName("_name"))
@@ -33,18 +33,23 @@ import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField;
 import org.elasticsearch.test.AbstractSerializingTestCase;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.ml.datafeed.ChunkingConfig.Mode;
 import org.elasticsearch.xpack.ml.job.config.JobTests;
 import org.elasticsearch.xpack.ml.job.messages.Messages;
 import org.joda.time.DateTimeZone;

 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 import java.util.TimeZone;

 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.greaterThanOrEqualTo;
 import static org.hamcrest.Matchers.is;
+import static org.hamcrest.Matchers.lessThan;
+import static org.hamcrest.Matchers.not;

 public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedConfig> {

@@ -162,15 +167,36 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedConfig> {
         }
     }

-    public void testFillDefaults() {
-        DatafeedConfig.Builder expectedDatafeedConfig = new DatafeedConfig.Builder("datafeed1", "job1");
-        expectedDatafeedConfig.setIndices(Collections.singletonList("index"));
-        expectedDatafeedConfig.setQueryDelay(TimeValue.timeValueMinutes(1));
-        expectedDatafeedConfig.setScrollSize(1000);
-        DatafeedConfig.Builder defaultedDatafeedConfig = new DatafeedConfig.Builder("datafeed1", "job1");
-        defaultedDatafeedConfig.setIndices(Collections.singletonList("index"));
+    public void testDefaults() {
+        DatafeedConfig.Builder defaultFeedBuilder = new DatafeedConfig.Builder("datafeed1", "job1");
+        defaultFeedBuilder.setIndices(Collections.singletonList("index"));
+        DatafeedConfig defaultFeed = defaultFeedBuilder.build();

-        assertEquals(expectedDatafeedConfig.build(), defaultedDatafeedConfig.build());
+        assertThat(defaultFeed.getScrollSize(), equalTo(1000));
+        assertThat(defaultFeed.getQueryDelay().seconds(), greaterThanOrEqualTo(60L));
+        assertThat(defaultFeed.getQueryDelay().seconds(), lessThan(120L));
     }

+    public void testDefaultQueryDelay() {
+        DatafeedConfig.Builder feedBuilder1 = new DatafeedConfig.Builder("datafeed1", "job1");
+        feedBuilder1.setIndices(Arrays.asList("foo"));
+        DatafeedConfig.Builder feedBuilder2 = new DatafeedConfig.Builder("datafeed2", "job1");
+        feedBuilder2.setIndices(Arrays.asList("foo"));
+        DatafeedConfig.Builder feedBuilder3 = new DatafeedConfig.Builder("datafeed3", "job2");
+        feedBuilder3.setIndices(Arrays.asList("foo"));
+        DatafeedConfig feed1 = feedBuilder1.build();
+        DatafeedConfig feed2 = feedBuilder2.build();
+        DatafeedConfig feed3 = feedBuilder3.build();
+
+        // Two datafeeds with the same job id should have the same random query delay
+        assertThat(feed1.getQueryDelay(), equalTo(feed2.getQueryDelay()));
+        // But the query delay of a datafeed with a different job id should differ
+        assertThat(feed1.getQueryDelay(), not(equalTo(feed3.getQueryDelay())));
+    }
+
     public void testCheckValid_GivenNullIndices() throws IOException {
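The new DatafeedConfigTests above assert that a datafeed's default query delay is randomized within [1m, 2m) yet identical for all datafeeds of one job. A minimal sketch of how such behavior can be obtained, assuming a hash of the job id seeds the offset (the helper name and exact derivation are hypothetical, not the actual DatafeedConfig code):

[source,java]
----
public class DefaultQueryDelaySketch {
    // Hypothetical helper: derive the delay deterministically from the job id so
    // every datafeed of one job gets the same pseudo-random delay in [60s, 120s).
    static long defaultQueryDelaySeconds(String jobId) {
        int offset = Math.floorMod(jobId.hashCode(), 60); // stable per job id, 0..59
        return 60 + offset;
    }

    public static void main(String[] args) {
        // Same job id -> same delay; different job ids -> usually different delays.
        System.out.println(defaultQueryDelaySeconds("job1") == defaultQueryDelaySeconds("job1")); // true
        System.out.println(defaultQueryDelaySeconds("job1") == defaultQueryDelaySeconds("job2")); // almost always false
    }
}
----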
@@ -116,7 +116,7 @@ public class DatafeedJobTests extends ESTestCase {
         long queryDelayMs = 500;
         DatafeedJob datafeedJob = createDatafeedJob(frequencyMs, queryDelayMs, -1, -1);
         long next = datafeedJob.runLookBack(0L, null);
-        assertEquals(2000 + frequencyMs + 100, next);
+        assertEquals(2000 + frequencyMs + queryDelayMs + 100, next);

         verify(dataExtractorFactory).newExtractor(0L, 1500L);
         FlushJobAction.Request flushRequest = new FlushJobAction.Request("_job_id");
@@ -138,7 +138,7 @@ public class DatafeedJobTests extends ESTestCase {
         long queryDelayMs = 500;
         DatafeedJob datafeedJob = createDatafeedJob(frequencyMs, queryDelayMs, latestFinalBucketEndTimeMs, latestRecordTimeMs);
         long next = datafeedJob.runLookBack(0L, null);
-        assertEquals(10000 + frequencyMs + 100, next);
+        assertEquals(10000 + frequencyMs + queryDelayMs + 100, next);

         verify(dataExtractorFactory).newExtractor(5000 + 1L, currentTime - queryDelayMs);
         assertThat(flushJobRequests.getAllValues().size(), equalTo(1));
@@ -185,7 +185,7 @@ public class DatafeedJobTests extends ESTestCase {
         long queryDelayMs = 1000;
         DatafeedJob datafeedJob = createDatafeedJob(frequencyMs, queryDelayMs, 1000, -1);
         long next = datafeedJob.runRealtime();
-        assertEquals(currentTime + frequencyMs + 100, next);
+        assertEquals(currentTime + frequencyMs + queryDelayMs + 100, next);

         verify(dataExtractorFactory).newExtractor(1000L + 1L, currentTime - queryDelayMs);
         FlushJobAction.Request flushRequest = new FlushJobAction.Request("_job_id");
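The three changed assertions in DatafeedJobTests fold the query delay into the expected next-run time. A hedged reading of that arithmetic (the helper below and its boundary rule are an assumption, not the actual DatafeedJob implementation):

[source,java]
----
public class NextRunTimeSketch {
    // Assumed rule: advance to the next frequency boundary after the base time,
    // then add the query delay and a fixed 100 ms buffer.
    static long nextRunTimeMs(long baseMs, long frequencyMs, long queryDelayMs) {
        long nextBoundary = (baseMs / frequencyMs + 1) * frequencyMs;
        return nextBoundary + queryDelayMs + 100;
    }

    public static void main(String[] args) {
        // When the base time (2000 ms) is itself a boundary, this reproduces the
        // asserted shape: base + frequency + queryDelay + 100.
        System.out.println(nextRunTimeMs(2000, 1000, 500)); // 3600
    }
}
----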
@@ -81,6 +81,7 @@ public class CategorizationIT extends MlNativeAutodetectIntegTestCase {
         client().admin().indices().prepareRefresh("*").get();
     }

+    @AwaitsFix(bugUrl = "https://github.com/elastic/machine-learning-cpp/issues/279")
     public void testBasicCategorization() throws Exception {
         Job.Builder job = newJobBuilder("categorization", Collections.emptyList());
         registerJob(job);
@@ -0,0 +1,230 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.ml.action.PutJobAction;
import org.elasticsearch.xpack.ml.job.config.AnalysisConfig;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.persistence.JobProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobResultsPersister;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.ml.job.results.Bucket;
import org.elasticsearch.xpack.ml.support.BaseMlIntegTestCase;
import org.junit.Before;

import java.util.Date;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.nullValue;

public class EstablishedMemUsageIT extends BaseMlIntegTestCase {

    private long bucketSpan = AnalysisConfig.Builder.DEFAULT_BUCKET_SPAN.getMillis();

    private JobProvider jobProvider;
    private JobResultsPersister jobResultsPersister;

    @Before
    public void createComponents() {
        Settings settings = nodeSettings(0);
        jobProvider = new JobProvider(client(), settings);
        jobResultsPersister = new JobResultsPersister(settings, client());
    }

    public void testEstablishedMem_givenNoResults() throws Exception {
        String jobId = "no-results-established-mem-job";

        initClusterAndJob(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    public void testEstablishedMem_givenNoStatsLongHistory() throws Exception {
        String jobId = "no-stats-long-history-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 25);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    public void testEstablishedMem_givenNoStatsShortHistory() throws Exception {
        String jobId = "no-stats-short-history-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 5);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    public void testEstablishedMem_givenHistoryTooShort() throws Exception {
        String jobId = "too-short-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 19);
        createModelSizeStats(jobId, 1, 19000L);
        createModelSizeStats(jobId, 10, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    public void testEstablishedMem_givenHistoryJustEnoughLowVariation() throws Exception {
        String jobId = "just-enough-low-cv-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 20);
        createModelSizeStats(jobId, 1, 19000L);
        createModelSizeStats(jobId, 10, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), equalTo(20000L));
    }

    public void testEstablishedMem_givenHistoryJustEnoughHighVariation() throws Exception {
        String jobId = "just-enough-high-cv-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 20);
        createModelSizeStats(jobId, 1, 1000L);
        createModelSizeStats(jobId, 10, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    public void testEstablishedMem_givenLongEstablished() throws Exception {
        String jobId = "long-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 25);
        createModelSizeStats(jobId, 1, 10000L);
        createModelSizeStats(jobId, 2, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), equalTo(20000L));
    }

    public void testEstablishedMem_givenOneRecentChange() throws Exception {
        String jobId = "one-recent-change-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 25);
        createModelSizeStats(jobId, 1, 10000L);
        createModelSizeStats(jobId, 10, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), equalTo(20000L));
    }

    public void testEstablishedMem_givenOneRecentChangeOnly() throws Exception {
        String jobId = "one-recent-change-only-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 25);
        createModelSizeStats(jobId, 10, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), equalTo(20000L));
    }

    public void testEstablishedMem_givenHistoricHighVariationRecentLowVariation() throws Exception {
        String jobId = "historic-high-cv-recent-low-cv-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 40);
        createModelSizeStats(jobId, 1, 1000L);
        createModelSizeStats(jobId, 3, 2000L);
        createModelSizeStats(jobId, 10, 6000L);
        createModelSizeStats(jobId, 19, 9000L);
        createModelSizeStats(jobId, 30, 19000L);
        createModelSizeStats(jobId, 35, 20000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), equalTo(20000L));
    }

    public void testEstablishedMem_givenHistoricLowVariationRecentHighVariation() throws Exception {
        String jobId = "historic-low-cv-recent-high-cv-established-mem-job";

        initClusterAndJob(jobId);

        createBuckets(jobId, 40);
        createModelSizeStats(jobId, 1, 19000L);
        createModelSizeStats(jobId, 3, 20000L);
        createModelSizeStats(jobId, 25, 21000L);
        createModelSizeStats(jobId, 27, 39000L);
        createModelSizeStats(jobId, 30, 67000L);
        createModelSizeStats(jobId, 35, 95000L);
        jobResultsPersister.commitResultWrites(jobId);

        assertThat(queryEstablishedMemoryUsage(jobId), nullValue());
    }

    private void initClusterAndJob(String jobId) {
        internalCluster().ensureAtLeastNumDataNodes(1);
        ensureStableCluster(1);

        Job.Builder job = createJob(jobId);
        PutJobAction.Request putJobRequest = new PutJobAction.Request(job);
        PutJobAction.Response putJobResponse = client().execute(PutJobAction.INSTANCE, putJobRequest).actionGet();
        assertTrue(putJobResponse.isAcknowledged());
    }

    private void createBuckets(String jobId, int count) {
        JobResultsPersister.Builder builder = jobResultsPersister.bulkPersisterBuilder(jobId);
        for (int i = 1; i <= count; ++i) {
            Bucket bucket = new Bucket(jobId, new Date(bucketSpan * i), bucketSpan);
            builder.persistBucket(bucket);
        }
        builder.executeRequest();
    }

    private void createModelSizeStats(String jobId, int bucketNum, long modelBytes) {
        ModelSizeStats.Builder modelSizeStats = new ModelSizeStats.Builder(jobId);
        modelSizeStats.setTimestamp(new Date(bucketSpan * bucketNum));
        modelSizeStats.setLogTime(new Date(bucketSpan * bucketNum + randomIntBetween(1, 1000)));
        modelSizeStats.setModelBytes(modelBytes);
        jobResultsPersister.persistModelSizeStats(modelSizeStats.build());
    }

    private Long queryEstablishedMemoryUsage(String jobId) throws Exception {
        AtomicReference<Long> establishedModelMemoryUsage = new AtomicReference<>();
        AtomicReference<Exception> exception = new AtomicReference<>();

        CountDownLatch latch = new CountDownLatch(1);

        jobProvider.getEstablishedMemoryUsage(jobId, memUse -> {
            establishedModelMemoryUsage.set(memUse);
            latch.countDown();
        }, e -> {
            exception.set(e);
            latch.countDown();
        });

        latch.await();

        if (exception.get() != null) {
            throw exception.get();
        }

        return establishedModelMemoryUsage.get();
    }
}
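The new EstablishedMemUsageIT encodes a rule of thumb: model memory only counts as "established" once there is enough bucket history and the recent model size stats vary little. A compact sketch of that rule under assumed constants (20 buckets minimum, 10% coefficient of variation; neither the constants nor the method name come from the actual JobProvider):

[source,java]
----
import java.util.List;

public class EstablishedMemorySketch {
    static final int MIN_BUCKETS = 20;                 // assumed minimum history
    static final double MAX_COEFF_OF_VARIATION = 0.1;  // assumed "low variation" bound

    /** Latest model size if established, otherwise null (mirrors the test expectations). */
    static Long establishedMemory(int bucketCount, List<Long> recentModelBytes) {
        if (bucketCount < MIN_BUCKETS || recentModelBytes.isEmpty()) {
            return null; // not enough history
        }
        double mean = recentModelBytes.stream().mapToLong(Long::longValue).average().orElse(0);
        double variance = recentModelBytes.stream()
                .mapToDouble(b -> (b - mean) * (b - mean)).average().orElse(0);
        double cv = mean == 0 ? 0 : Math.sqrt(variance) / mean;
        return cv <= MAX_COEFF_OF_VARIATION ? recentModelBytes.get(recentModelBytes.size() - 1) : null;
    }

    public static void main(String[] args) {
        System.out.println(establishedMemory(25, List.of(19000L, 20000L))); // 20000 (low variation)
        System.out.println(establishedMemory(25, List.of(1000L, 20000L)));  // null (high variation)
        System.out.println(establishedMemory(5, List.of(20000L)));          // null (short history)
    }
}
----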
@@ -9,6 +9,8 @@ import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;
 import org.elasticsearch.Version;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.xcontent.NamedXContentRegistry;
 import org.elasticsearch.common.xcontent.XContentFactory;
@@ -17,6 +19,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.test.AbstractSerializingTestCase;
 import org.elasticsearch.test.ESTestCase;
+import org.elasticsearch.xpack.ml.MachineLearning;
 import org.elasticsearch.xpack.ml.job.messages.Messages;
 import org.elasticsearch.xpack.ml.job.persistence.AnomalyDetectorsIndex;

@@ -109,12 +112,42 @@ public class JobTests extends AbstractSerializingTestCase<Job> {

     public void testEnsureModelMemoryLimitSet() {
         Job.Builder builder = buildJobBuilder("foo");
-        builder.setDefaultMemoryLimitIfUnset();
+        builder.setAnalysisLimits(null);
+        builder.validateModelMemoryLimit(new ByteSizeValue(0L));
         Job job = builder.build();

         assertEquals("foo", job.getId());
         assertNotNull(job.getAnalysisLimits());
         assertThat(job.getAnalysisLimits().getModelMemoryLimit(), equalTo(AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB));
         assertNull(job.getAnalysisLimits().getCategorizationExamplesLimit());
+
+        builder.setAnalysisLimits(new AnalysisLimits(AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB * 2, 4L));
+        builder.validateModelMemoryLimit(null);
+        job = builder.build();
+        assertNotNull(job.getAnalysisLimits());
+        assertThat(job.getAnalysisLimits().getModelMemoryLimit(), equalTo(AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB * 2));
+        assertThat(job.getAnalysisLimits().getCategorizationExamplesLimit(), equalTo(4L));
     }

+    public void testValidateModelMemoryLimit_whenMaxIsLessThanTheDefault() {
+        Job.Builder builder = buildJobBuilder("foo");
+        builder.setAnalysisLimits(null);
+        builder.validateModelMemoryLimit(new ByteSizeValue(512L, ByteSizeUnit.MB));
+
+        Job job = builder.build();
+        assertNotNull(job.getAnalysisLimits());
+        assertThat(job.getAnalysisLimits().getModelMemoryLimit(), equalTo(512L));
+        assertNull(job.getAnalysisLimits().getCategorizationExamplesLimit());
+    }
+
+    public void testValidateModelMemoryLimit_throwsWhenMaxLimitIsExceeded() {
+        Job.Builder builder = buildJobBuilder("foo");
+        builder.setAnalysisLimits(new AnalysisLimits(4096L, null));
+        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
+                () -> builder.validateModelMemoryLimit(new ByteSizeValue(1000L, ByteSizeUnit.MB)));
+        assertEquals("model_memory_limit [4gb] must be less than the value of the " +
+                MachineLearning.MAX_MODEL_MEMORY.getKey() + " setting [1000mb]", e.getMessage());
+
+        builder.validateModelMemoryLimit(new ByteSizeValue(8192L, ByteSizeUnit.MB));
+    }
+
     public void testEquals_GivenDifferentClass() {
@@ -204,15 +237,6 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
         assertFalse(jobDetails1.build().equals(jobDetails2.build()));
     }

-    public void testSetAnalysisLimits() {
-        Job.Builder builder = new Job.Builder();
-        builder.setAnalysisLimits(new AnalysisLimits(42L, null));
-        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
-                () -> builder.setAnalysisLimits(new AnalysisLimits(41L, null)));
-        assertEquals("Invalid update value for analysis_limits: model_memory_limit cannot be decreased; existing is 42, update had 41",
-                e.getMessage());
-    }
-
     // JobConfigurationTests:

     /**
@@ -5,7 +5,10 @@
  */
 package org.elasticsearch.xpack.ml.job.config;

+import org.elasticsearch.ElasticsearchStatusException;
 import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.test.AbstractSerializingTestCase;
@@ -140,7 +143,7 @@ public class JobUpdateTests extends AbstractSerializingTestCase<JobUpdate> {
         jobBuilder.setDataDescription(new DataDescription.Builder());
         jobBuilder.setCreateTime(new Date());

-        Job updatedJob = update.mergeWithJob(jobBuilder.build());
+        Job updatedJob = update.mergeWithJob(jobBuilder.build(), new ByteSizeValue(0L));

         assertEquals(update.getGroups(), updatedJob.getGroups());
         assertEquals(update.getDescription(), updatedJob.getDescription());
@@ -171,4 +174,75 @@ public class JobUpdateTests extends AbstractSerializingTestCase<JobUpdate> {
         update = new JobUpdate.Builder("foo").setDetectorUpdates(Collections.singletonList(mock(JobUpdate.DetectorUpdate.class))).build();
         assertTrue(update.isAutodetectProcessUpdate());
     }
+
+    public void testUpdateAnalysisLimitWithLowerValue() {
+        Job.Builder jobBuilder = new Job.Builder("foo");
+        Detector.Builder d1 = new Detector.Builder("info_content", "domain");
+        d1.setOverFieldName("mlcategory");
+        Detector.Builder d2 = new Detector.Builder("min", "field");
+        d2.setOverFieldName("host");
+        AnalysisConfig.Builder ac = new AnalysisConfig.Builder(Arrays.asList(d1.build(), d2.build()));
+        ac.setCategorizationFieldName("cat_field");
+        jobBuilder.setAnalysisConfig(ac);
+        jobBuilder.setDataDescription(new DataDescription.Builder());
+        jobBuilder.setCreateTime(new Date());
+        jobBuilder.setAnalysisLimits(new AnalysisLimits(42L, null));
+
+        JobUpdate update = new JobUpdate.Builder("foo").setAnalysisLimits(new AnalysisLimits(41L, null)).build();
+
+        ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class,
+                () -> update.mergeWithJob(jobBuilder.build(), new ByteSizeValue(0L)));
+        assertEquals("Invalid update value for analysis_limits: model_memory_limit cannot be decreased; existing is 42mb, update had 41mb",
+                e.getMessage());
+    }
+
+    public void testUpdateAnalysisLimitWithValueGreaterThanMax() {
+        Job.Builder jobBuilder = new Job.Builder("foo");
+        Detector.Builder d1 = new Detector.Builder("info_content", "domain");
+        d1.setOverFieldName("mlcategory");
+        Detector.Builder d2 = new Detector.Builder("min", "field");
+        d2.setOverFieldName("host");
+        AnalysisConfig.Builder ac = new AnalysisConfig.Builder(Arrays.asList(d1.build(), d2.build()));
+        ac.setCategorizationFieldName("cat_field");
+        jobBuilder.setAnalysisConfig(ac);
+        jobBuilder.setDataDescription(new DataDescription.Builder());
+        jobBuilder.setCreateTime(new Date());
+        jobBuilder.setAnalysisLimits(new AnalysisLimits(256L, null));
+
+        JobUpdate update = new JobUpdate.Builder("foo").setAnalysisLimits(new AnalysisLimits(1024L, null)).build();
+
+        ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class,
+                () -> update.mergeWithJob(jobBuilder.build(), new ByteSizeValue(512L, ByteSizeUnit.MB)));
+        assertEquals("model_memory_limit [1gb] must be less than the value of the xpack.ml.max_model_memory_limit setting [512mb]",
+                e.getMessage());
+    }
+
+    public void testUpdate_withAnalysisLimitsPreviouslyUndefined() {
+        Job.Builder jobBuilder = new Job.Builder("foo");
+        Detector.Builder d1 = new Detector.Builder("info_content", "domain");
+        AnalysisConfig.Builder ac = new AnalysisConfig.Builder(Collections.singletonList(d1.build()));
+        jobBuilder.setAnalysisConfig(ac);
+        jobBuilder.setDataDescription(new DataDescription.Builder());
+        jobBuilder.setCreateTime(new Date());
+
+        JobUpdate update = new JobUpdate.Builder("foo").setAnalysisLimits(new AnalysisLimits(null, null)).build();
+        Job updated = update.mergeWithJob(jobBuilder.build(), new ByteSizeValue(0L));
+        assertNull(updated.getAnalysisLimits().getModelMemoryLimit());
+
+        JobUpdate updateWithLimit = new JobUpdate.Builder("foo").setAnalysisLimits(new AnalysisLimits(2048L, null)).build();
+
+        ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class,
+                () -> updateWithLimit.mergeWithJob(jobBuilder.build(), new ByteSizeValue(8000L, ByteSizeUnit.MB)));
+        assertEquals("Invalid update value for analysis_limits: model_memory_limit cannot be decreased; existing is 4gb, update had 2gb",
+                e.getMessage());
+
+        JobUpdate updateAboveMaxLimit = new JobUpdate.Builder("foo").setAnalysisLimits(new AnalysisLimits(8000L, null)).build();
+
+        e = expectThrows(ElasticsearchStatusException.class,
+                () -> updateAboveMaxLimit.mergeWithJob(jobBuilder.build(), new ByteSizeValue(5000L, ByteSizeUnit.MB)));
+        assertEquals("model_memory_limit [7.8gb] must be less than the value of the xpack.ml.max_model_memory_limit setting [4.8gb]",
+                e.getMessage());
+
+        updateAboveMaxLimit.mergeWithJob(jobBuilder.build(), new ByteSizeValue(10000L, ByteSizeUnit.MB));
+    }
 }
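The added JobUpdate tests pin down two validation rules for model_memory_limit updates: the limit may not shrink, and it must stay below the xpack.ml.max_model_memory_limit setting when one is configured. A simplified sketch of those rules (plain IllegalArgumentException in place of the ElasticsearchStatusException the tests expect, sizes in whole megabytes):

[source,java]
----
public class MemoryLimitUpdateSketch {
    static void validateUpdate(Long currentMb, Long updatedMb, long maxMb) {
        if (updatedMb == null) {
            return; // no limit in the update: nothing to validate
        }
        if (currentMb != null && updatedMb < currentMb) {
            throw new IllegalArgumentException(
                    "Invalid update value for analysis_limits: model_memory_limit cannot be decreased; "
                            + "existing is " + currentMb + "mb, update had " + updatedMb + "mb");
        }
        if (maxMb > 0 && updatedMb >= maxMb) {
            throw new IllegalArgumentException("model_memory_limit [" + updatedMb
                    + "mb] must be less than the value of the xpack.ml.max_model_memory_limit setting ["
                    + maxMb + "mb]");
        }
    }

    public static void main(String[] args) {
        validateUpdate(42L, 43L, 0); // fine: growing, no max configured
        try {
            validateUpdate(42L, 41L, 0); // rejected: shrinking
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
----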
@@ -5,6 +5,7 @@
  */
 package org.elasticsearch.xpack.security;

+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;
 import org.elasticsearch.test.ESTestCase;
@@ -14,8 +15,9 @@ import org.elasticsearch.xpack.ssl.SSLService;
 public class PkiRealmBootstrapCheckTests extends ESTestCase {

     public void testPkiRealmBootstrapDefault() throws Exception {
-        assertFalse(new PkiRealmBootstrapCheck(Settings.EMPTY, new SSLService(Settings.EMPTY,
-                new Environment(Settings.builder().put("path.home", createTempDir()).build()))).check());
+        assertFalse(new PkiRealmBootstrapCheck(new SSLService(Settings.EMPTY,
+                new Environment(Settings.builder().put("path.home", createTempDir()).build()))).check((new BootstrapContext(Settings
+                        .EMPTY, null))));
     }

     public void testBootstrapCheckWithPkiRealm() throws Exception {
@@ -24,42 +26,42 @@ public class PkiRealmBootstrapCheckTests extends ESTestCase {
                 .put("path.home", createTempDir())
                 .build();
         Environment env = new Environment(settings);
-        assertFalse(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertFalse(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // disable client auth default
         settings = Settings.builder().put(settings)
                 .put("xpack.ssl.client_authentication", "none")
                 .build();
         env = new Environment(settings);
-        assertTrue(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertTrue(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // enable ssl for http
         settings = Settings.builder().put(settings)
                 .put("xpack.security.http.ssl.enabled", true)
                 .build();
         env = new Environment(settings);
-        assertTrue(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertTrue(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // enable client auth for http
         settings = Settings.builder().put(settings)
                 .put("xpack.security.http.ssl.client_authentication", randomFrom("required", "optional"))
                 .build();
         env = new Environment(settings);
-        assertFalse(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertFalse(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // disable http ssl
         settings = Settings.builder().put(settings)
                 .put("xpack.security.http.ssl.enabled", false)
                 .build();
         env = new Environment(settings);
-        assertTrue(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertTrue(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // set transport client auth
         settings = Settings.builder().put(settings)
                 .put("xpack.security.transport.client_authentication", randomFrom("required", "optional"))
                 .build();
         env = new Environment(settings);
-        assertTrue(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertTrue(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));

         // test with transport profile
         settings = Settings.builder().put(settings)
@@ -67,7 +69,7 @@ public class PkiRealmBootstrapCheckTests extends ESTestCase {
                 .put("transport.profiles.foo.xpack.security.ssl.client_authentication", randomFrom("required", "optional"))
                 .build();
         env = new Environment(settings);
-        assertFalse(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertFalse(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));
     }

     public void testBootstrapCheckWithDisabledRealm() throws Exception {
@@ -78,6 +80,6 @@ public class PkiRealmBootstrapCheckTests extends ESTestCase {
                 .put("path.home", createTempDir())
                 .build();
         Environment env = new Environment(settings);
-        assertFalse(new PkiRealmBootstrapCheck(settings, new SSLService(settings, env)).check());
+        assertFalse(new PkiRealmBootstrapCheck(new SSLService(settings, env)).check(new BootstrapContext(settings, null)));
     }
 }
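The security and watcher test changes from here on all follow one migration: bootstrap checks no longer capture Settings in their constructors but receive them through the BootstrapContext passed to check(). A minimal hypothetical check in that shape (the interface methods and the public settings field are as used in this diff; the check logic itself is invented for illustration):

[source,java]
----
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.bootstrap.BootstrapContext;

public class HttpsRequiredCheck implements BootstrapCheck {
    @Override
    public boolean check(BootstrapContext context) {
        // Fail (return true) when the token service is enabled but HTTPS is not.
        boolean tokenService = context.settings.getAsBoolean("xpack.security.authc.token.enabled", false);
        boolean httpsEnabled = context.settings.getAsBoolean("xpack.security.http.ssl.enabled", false);
        return tokenService && httpsEnabled == false;
    }

    @Override
    public String errorMessage() {
        return "HTTPS is required when the token service is enabled";
    }
}
----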
@@ -1,57 +0,0 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.security;

import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.security.TokenPassphraseBootstrapCheck;
import org.elasticsearch.xpack.security.authc.TokenService;

import static org.elasticsearch.xpack.security.TokenPassphraseBootstrapCheck.MINIMUM_PASSPHRASE_LENGTH;

public class TokenPassphraseBootstrapCheckTests extends ESTestCase {

    public void testTokenPassphraseCheck() throws Exception {
        assertFalse(new TokenPassphraseBootstrapCheck(Settings.EMPTY).check());
        MockSecureSettings secureSettings = new MockSecureSettings();
        secureSettings.setString("foo", "bar"); // leniency in setSecureSettings... if its empty it's skipped
        Settings settings = Settings.builder()
                .put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).setSecureSettings(secureSettings).build();
        assertFalse(new TokenPassphraseBootstrapCheck(settings).check());

        secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(MINIMUM_PASSPHRASE_LENGTH, 30));
        assertFalse(new TokenPassphraseBootstrapCheck(settings).check());

        secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, MINIMUM_PASSPHRASE_LENGTH - 1));
        assertTrue(new TokenPassphraseBootstrapCheck(settings).check());
    }

    public void testTokenPassphraseCheckServiceDisabled() throws Exception {
        Settings settings = Settings.builder().put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), false)
                .put(XPackSettings.HTTP_SSL_ENABLED.getKey(), true).build();
        assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
        MockSecureSettings secureSettings = new MockSecureSettings();
        secureSettings.setString("foo", "bar"); // leniency in setSecureSettings... if its empty it's skipped
        settings = Settings.builder().put(settings).setSecureSettings(secureSettings).build();
        assertFalse(new TokenPassphraseBootstrapCheck(settings).check());

        secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, 30));
        assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
    }

    public void testTokenPassphraseCheckAfterSecureSettingsClosed() throws Exception {
        Settings settings = Settings.builder().put(XPackSettings.HTTP_SSL_ENABLED.getKey(), true).build();
        MockSecureSettings secureSettings = new MockSecureSettings();
        secureSettings.setString("foo", "bar"); // leniency in setSecureSettings... if its empty it's skipped
        settings = Settings.builder().put(settings).setSecureSettings(secureSettings).build();
        secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, MINIMUM_PASSPHRASE_LENGTH - 1));
        final TokenPassphraseBootstrapCheck check = new TokenPassphraseBootstrapCheck(settings);
        secureSettings.close();
        assertTrue(check.check());
    }
}
@@ -5,39 +5,40 @@
  */
 package org.elasticsearch.xpack.security;

+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.network.NetworkModule;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.XPackSettings;
 import org.elasticsearch.xpack.security.TokenSSLBootstrapCheck;

 public class TokenSSLBootsrapCheckTests extends ESTestCase {

     public void testTokenSSLBootstrapCheck() {
         Settings settings = Settings.EMPTY;
-        assertFalse(new TokenSSLBootstrapCheck(settings).check());
+        assertFalse(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));

         settings = Settings.builder()
                 .put(NetworkModule.HTTP_ENABLED.getKey(), false)
                 .put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();
-        assertFalse(new TokenSSLBootstrapCheck(settings).check());
+        assertFalse(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));

         settings = Settings.builder().put(XPackSettings.HTTP_SSL_ENABLED.getKey(), true).build();
-        assertFalse(new TokenSSLBootstrapCheck(settings).check());
+        assertFalse(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));

         // XPackSettings.HTTP_SSL_ENABLED default false
         settings = Settings.builder().put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();
-        assertTrue(new TokenSSLBootstrapCheck(settings).check());
+        assertTrue(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));

         settings = Settings.builder()
                 .put(XPackSettings.HTTP_SSL_ENABLED.getKey(), false)
                 .put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();
-        assertTrue(new TokenSSLBootstrapCheck(settings).check());
+        assertTrue(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));

         settings = Settings.builder()
                 .put(XPackSettings.HTTP_SSL_ENABLED.getKey(), false)
                 .put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true)
                 .put(NetworkModule.HTTP_ENABLED.getKey(), false).build();
-        assertFalse(new TokenSSLBootstrapCheck(settings).check());
+        assertFalse(new TokenSSLBootstrapCheck().check(new BootstrapContext(settings, null)));
     }
 }
@@ -6,7 +6,6 @@
 package org.elasticsearch.xpack.security.authc;

 import org.elasticsearch.ElasticsearchSecurityException;
-import org.elasticsearch.Version;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.get.GetAction;
 import org.elasticsearch.action.get.GetRequest;
@@ -15,7 +14,6 @@ import org.elasticsearch.action.support.PlainActionFuture;
 import org.elasticsearch.client.Client;
 import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.settings.ClusterSettings;
-import org.elasticsearch.common.settings.MockSecureSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
@@ -36,12 +34,10 @@ import org.junit.Before;
 import org.junit.BeforeClass;

 import javax.crypto.SecretKey;
 import java.io.IOException;
 import java.security.GeneralSecurityException;
 import java.time.Clock;
 import java.util.Base64;
 import java.util.Collections;
 import java.util.concurrent.ExecutionException;

 import static org.elasticsearch.repositories.ESBlobStoreTestCase.randomBytes;
 import static org.hamcrest.Matchers.containsString;
@@ -281,10 +277,8 @@ public class TokenServiceTests extends ESTestCase {

         try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
             // verify a second separate token service with its own passphrase cannot verify
-            MockSecureSettings secureSettings = new MockSecureSettings();
-            secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(8, 30));
-            Settings settings = Settings.builder().setSecureSettings(secureSettings).build();
-            TokenService anotherService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
+            TokenService anotherService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService,
+                    clusterService);
             PlainActionFuture<UserToken> future = new PlainActionFuture<>();
             anotherService.getAndValidateToken(requestContext, future);
             assertNull(future.get());
@@ -456,27 +450,4 @@ public class TokenServiceTests extends ESTestCase {
             assertNull(future.get());
         }
     }
-
-    public void testDecodePre6xToken() throws GeneralSecurityException, ExecutionException, InterruptedException, IOException {
-        String token = "g+y0AiDWsbLNzUGTywPa3VCz053RUPW7wAx4xTAonlcqjOmO1AzMhQDTUku/+ZtdtMgDobKqIrNdNvchvFMX0pvZLY6i4nAG2OhkApSstPfQQP" +
-                "J1fxg/JZNQDPufePg1GxV/RAQm2Gr8mYAelijEVlWIdYaQ3R76U+P/w6Q1v90dGVZQn6DKMOfgmkfwAFNY";
-        MockSecureSettings secureSettings = new MockSecureSettings();
-        secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), "xpack_token_passpharse");
-        Settings settings = Settings.builder().put(XPackSettings.HTTP_SSL_ENABLED.getKey(), true).setSecureSettings(secureSettings).build();
-        TokenService tokenService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
-        Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
-        ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
-        requestContext.putHeader("Authorization", "Bearer " + token);
-
-        try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
-            PlainActionFuture<UserToken> future = new PlainActionFuture<>();
-            tokenService.decodeToken(tokenService.getFromHeader(requestContext), future);
-            UserToken serialized = future.get();
-            assertNotNull(serialized);
-            assertEquals("joe", serialized.getAuthentication().getUser().principal());
-            assertEquals(Version.V_5_6_0, serialized.getAuthentication().getVersion());
-            assertArrayEquals(new String[] {"admin"}, serialized.getAuthentication().getUser().roles());
-        }
-    }
 }
@@ -11,25 +11,31 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.security.cert.CertificateFactory;
 import java.security.cert.X509Certificate;
+import java.util.ArrayList;
 import java.util.Collections;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
 import java.util.regex.Pattern;

 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.support.PlainActionFuture;
+import org.elasticsearch.common.settings.ClusterSettings;
 import org.elasticsearch.common.settings.MockSecureSettings;
 import org.elasticsearch.common.settings.SecureString;
+import org.elasticsearch.common.settings.Setting;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.env.Environment;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.security.authc.AuthenticationResult;
 import org.elasticsearch.xpack.security.authc.RealmConfig;
+import org.elasticsearch.xpack.security.authc.RealmSettings;
 import org.elasticsearch.xpack.security.authc.support.UserRoleMapper;
 import org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken;
 import org.elasticsearch.xpack.security.support.NoOpLogger;
 import org.elasticsearch.xpack.security.user.User;
+import org.elasticsearch.xpack.ssl.SSLConfigurationSettings;
 import org.junit.Before;
 import org.mockito.Mockito;

@@ -248,6 +254,20 @@ public class PkiRealmTests extends ESTestCase {
         assertThat(token.dn(), is("EMAILADDRESS=pki@elastic.co, CN=PKI Client, OU=Security"));
     }

+    public void testPKIRealmSettingsPassValidation() throws Exception {
+        Settings settings = Settings.builder()
+                .put("xpack.security.authc.realms.pki1.type", "pki")
+                .put("xpack.security.authc.realms.pki1.truststore.path", "/foo/bar")
+                .put("xpack.security.authc.realms.pki1.truststore.password", "supersecret")
+                .build();
+        List<Setting<?>> settingList = new ArrayList<>();
+        RealmSettings.addSettings(settingList, Collections.emptyList());
+        ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(settingList));
+        clusterSettings.validate(settings);
+
+        assertSettingDeprecationsAndWarnings(new Setting[] { SSLConfigurationSettings.withoutPrefix().legacyTruststorePassword });
+    }
+
     static X509Certificate readCert(Path path) throws Exception {
         try (InputStream in = Files.newInputStream(path)) {
             CertificateFactory factory = CertificateFactory.getInstance("X.509");
@@ -12,6 +12,7 @@ import java.nio.file.Path;
 import java.util.Collections;

 import org.elasticsearch.bootstrap.BootstrapCheck;
+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.test.ESTestCase;
@@ -45,7 +46,7 @@ public class RoleMappingFileBootstrapCheckTests extends ESTestCase {
         final BootstrapCheck check = RoleMappingFileBootstrapCheck.create(config);
         assertThat(check, notNullValue());
         assertThat(check.alwaysEnforce(), equalTo(true));
-        assertThat(check.check(), equalTo(false));
+        assertThat(check.check(new BootstrapContext(settings, null)), equalTo(false));
     }

     public void testBootstrapCheckOfMissingFile() {
@@ -58,7 +59,7 @@ public class RoleMappingFileBootstrapCheckTests extends ESTestCase {
         final BootstrapCheck check = RoleMappingFileBootstrapCheck.create(config);
         assertThat(check, notNullValue());
         assertThat(check.alwaysEnforce(), equalTo(true));
-        assertThat(check.check(), equalTo(true));
+        assertThat(check.check(new BootstrapContext(settings, null)), equalTo(true));
         assertThat(check.errorMessage(), containsString("the-realm-name"));
         assertThat(check.errorMessage(), containsString(fileName));
         assertThat(check.errorMessage(), containsString("does not exist"));
@@ -76,7 +77,7 @@ public class RoleMappingFileBootstrapCheckTests extends ESTestCase {
         final BootstrapCheck check = RoleMappingFileBootstrapCheck.create(config);
         assertThat(check, notNullValue());
         assertThat(check.alwaysEnforce(), equalTo(true));
-        assertThat(check.check(), equalTo(true));
+        assertThat(check.check(new BootstrapContext(settings, null)), equalTo(true));
         assertThat(check.errorMessage(), containsString("the-realm-name"));
         assertThat(check.errorMessage(), containsString(file.toString()));
         assertThat(check.errorMessage(), containsString("could not read"));
@@ -94,7 +95,7 @@ public class RoleMappingFileBootstrapCheckTests extends ESTestCase {
         final BootstrapCheck check = RoleMappingFileBootstrapCheck.create(config);
         assertThat(check, notNullValue());
         assertThat(check.alwaysEnforce(), equalTo(true));
-        assertThat(check.check(), equalTo(true));
+        assertThat(check.check(new BootstrapContext(settings, null)), equalTo(true));
         assertThat(check.errorMessage(), containsString("the-realm-name"));
         assertThat(check.errorMessage(), containsString(file.toString()));
         assertThat(check.errorMessage(), containsString("invalid DN"));
@@ -5,6 +5,7 @@
  */
 package org.elasticsearch.xpack.ssl;

+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.settings.MockSecureSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;
@@ -14,8 +15,8 @@ public class SSLBootstrapCheckTests extends ESTestCase {

     public void testSSLBootstrapCheckWithNoKey() throws Exception {
         SSLService sslService = new SSLService(Settings.EMPTY, null);
-        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(sslService, Settings.EMPTY, null);
-        assertTrue(bootstrapCheck.check());
+        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(sslService, null);
+        assertTrue(bootstrapCheck.check(new BootstrapContext(Settings.EMPTY, null)));
     }

     public void testSSLBootstrapCheckWithKey() throws Exception {
@@ -31,8 +32,8 @@ public class SSLBootstrapCheckTests extends ESTestCase {
                 .setSecureSettings(secureSettings)
                 .build();
         final Environment env = randomBoolean() ? new Environment(settings) : null;
-        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), settings, env);
-        assertFalse(bootstrapCheck.check());
+        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
+        assertFalse(bootstrapCheck.check(new BootstrapContext(settings, null)));
     }

     public void testSSLBootstrapCheckWithDefaultCABeingTrusted() throws Exception {
@@ -51,15 +52,15 @@ public class SSLBootstrapCheckTests extends ESTestCase {
                 .setSecureSettings(secureSettings)
                 .build();
         final Environment env = randomBoolean() ? new Environment(settings) : null;
-        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), settings, env);
-        assertTrue(bootstrapCheck.check());
+        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
+        assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)));

         settings = Settings.builder().put(settings.filter((s) -> s.contains(".certificate_authorities")))
                 .put("xpack.security.http.ssl.certificate_authorities",
                         getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
                 .build();
-        bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), settings, env);
-        assertTrue(bootstrapCheck.check());
+        bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
+        assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)));
     }

     public void testSSLBootstrapCheckWithDefaultKeyBeingUsed() throws Exception {
@@ -77,8 +78,8 @@ public class SSLBootstrapCheckTests extends ESTestCase {
                 .setSecureSettings(secureSettings)
                 .build();
         final Environment env = randomBoolean() ? new Environment(settings) : null;
-        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), settings, env);
-        assertTrue(bootstrapCheck.check());
+        SSLBootstrapCheck bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
+        assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)));

         settings = Settings.builder().put(settings.filter((s) -> s.contains(".http.ssl.")))
                 .put("xpack.security.transport.profiles.foo.xpack.security.ssl.key",
@@ -86,7 +87,7 @@ public class SSLBootstrapCheckTests extends ESTestCase {
                 .put("xpack.security.transport.profiles.foo.xpack.security.ssl.certificate",
                         getDataPath("/org/elasticsearch/xpack/ssl/ca.pem").toString())
                 .build();
-        bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), settings, env);
-        assertTrue(bootstrapCheck.check());
+        bootstrapCheck = new SSLBootstrapCheck(new SSLService(settings, env), env);
+        assertTrue(bootstrapCheck.check(new BootstrapContext(settings, null)));
     }
 }
@@ -5,6 +5,7 @@
  */
 package org.elasticsearch.xpack.watcher;

+import org.elasticsearch.bootstrap.BootstrapContext;
 import org.elasticsearch.common.settings.MockSecureSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;
@@ -16,8 +17,8 @@ public class EncryptSensitiveDataBootstrapCheckTests extends ESTestCase {
     public void testDefaultIsFalse() {
         Settings settings = Settings.builder().put("path.home", createTempDir()).build();
         Environment env = new Environment(settings);
-        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(settings, env);
-        assertFalse(check.check());
+        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(env);
+        assertFalse(check.check(new BootstrapContext(settings, null)));
         assertTrue(check.alwaysEnforce());
     }

@@ -27,8 +28,8 @@ public class EncryptSensitiveDataBootstrapCheckTests extends ESTestCase {
                 .put(Watcher.ENCRYPT_SENSITIVE_DATA_SETTING.getKey(), true)
                 .build();
         Environment env = new Environment(settings);
-        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(settings, env);
-        assertTrue(check.check());
+        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(env);
+        assertTrue(check.check(new BootstrapContext(settings, null)));
     }

     public void testKeyInKeystore() {
@@ -40,7 +41,7 @@ public class EncryptSensitiveDataBootstrapCheckTests extends ESTestCase {
                 .setSecureSettings(secureSettings)
                 .build();
         Environment env = new Environment(settings);
-        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(settings, env);
-        assertFalse(check.check());
+        EncryptSensitiveDataBootstrapCheck check = new EncryptSensitiveDataBootstrapCheck(env);
+        assertFalse(check.check(new BootstrapContext(settings, null)));
     }
 }
@@ -41,7 +41,6 @@ import java.util.concurrent.ExecutorService;
 import static java.util.Arrays.asList;
 import static org.elasticsearch.cluster.routing.ShardRoutingState.RELOCATING;
 import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;
-import static org.hamcrest.Matchers.is;
 import static org.mockito.Matchers.any;
 import static org.mockito.Matchers.anyObject;
 import static org.mockito.Matchers.anyString;
@@ -406,43 +405,6 @@ public class WatcherLifeCycleServiceTests extends ESTestCase {
         verify(watcherService, never()).start(any(ClusterState.class));
     }

-    public void testWatcherPausesOnNonMasterWhenOldNodesHoldWatcherIndex() {
-        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
-                .masterNodeId("node_1").localNodeId("node_2")
-                .add(newNode("node_1"))
-                .add(newNode("node_2"))
-                .add(newNode("oldNode", VersionUtils.randomVersionBetween(random(), Version.V_5_5_0, Version.V_6_0_0_alpha2)))
-                .build();
-
-        Index index = new Index(Watch.INDEX, "foo");
-        ShardId shardId = new ShardId(index, 0);
-        IndexRoutingTable routingTable = IndexRoutingTable.builder(index)
-                .addShard(TestShardRouting.newShardRouting(shardId, "node_2", true, STARTED))
-                .addShard(TestShardRouting.newShardRouting(shardId, "oldNode", false, STARTED)).build();
-
-        Settings.Builder indexSettings = Settings.builder()
-                .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)
-                .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)
-                .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)
-                .put(IndexMetaData.INDEX_FORMAT_SETTING.getKey(), 6);
-
-        IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(Watch.INDEX).settings(indexSettings);
-
-        ClusterState state = ClusterState.builder(new ClusterName("my-cluster"))
-                .nodes(nodes)
-                .routingTable(RoutingTable.builder().add(routingTable).build())
-                .metaData(MetaData.builder().put(indexMetaDataBuilder))
-                .build();
-
-        WatcherState watcherState = randomFrom(WatcherState.values());
-        when(watcherService.state()).thenReturn(watcherState);
-
-        lifeCycleService.clusterChanged(new ClusterChangedEvent("any", state, state));
-        if (watcherState == WatcherState.STARTED || watcherState == WatcherState.STARTING) {
-            verify(watcherService).pauseExecution(any(String.class));
-        }
-    }
-
     public void testWatcherStartsOnlyOnMasterWhenOldNodesAreInCluster() throws Exception {
         DiscoveryNodes nodes = new DiscoveryNodes.Builder()
                 .masterNodeId("node_1").localNodeId("node_1")
@@ -459,29 +421,6 @@ public class WatcherLifeCycleServiceTests extends ESTestCase {
         verify(watcherService).start(any(ClusterState.class));
     }

-    public void testDistributedWatchExecutionDisabledWith5xNodesInCluster() throws Exception {
-        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
-                .masterNodeId("node_1").localNodeId("node_1")
-                .add(newNode("node_1"))
-                .add(newNode("node_2", VersionUtils.randomVersionBetween(random(), Version.V_5_5_0, Version.V_6_0_0_alpha2)))
-                .build();
-
-        ClusterState state = ClusterState.builder(new ClusterName("my-cluster")).nodes(nodes).build();
-
-        assertThat(WatcherLifeCycleService.isWatchExecutionDistributed(state), is(false));
-    }
-
-    public void testDistributedWatchExecutionEnabled() throws Exception {
-        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
-                .masterNodeId("master_node").localNodeId("master_node")
-                .add(newNode("master_node", VersionUtils.randomVersionBetween(random(), Version.V_6_0_0_beta1, Version.CURRENT)))
-                .add(newNode("data_node_6x", VersionUtils.randomVersionBetween(random(), Version.V_6_0_0_beta1, Version.CURRENT)))
-                .build();
-        ClusterState state = ClusterState.builder(new ClusterName("my-cluster")).nodes(nodes).build();
-
-        assertThat(WatcherLifeCycleService.isWatchExecutionDistributed(state), is(true));
-    }
-
     private static DiscoveryNode newNode(String nodeName) {
         return newNode(nodeName, Version.CURRENT);
     }
@@ -5,7 +5,6 @@
  */
 package org.elasticsearch.xpack.watcher.transport.action.put;

-import org.elasticsearch.Version;
 import org.elasticsearch.common.bytes.BytesArray;
 import org.elasticsearch.common.io.stream.BytesStreamOutput;
 import org.elasticsearch.common.io.stream.StreamInput;
@@ -14,9 +13,6 @@ import org.elasticsearch.common.xcontent.json.JsonXContent;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.watcher.transport.actions.put.PutWatchRequest;

-import java.io.IOException;
-import java.util.Base64;
-
 import static org.hamcrest.Matchers.is;

 public class PutWatchSerializationTests extends ESTestCase {
@@ -61,25 +57,4 @@ public class PutWatchSerializationTests extends ESTestCase {
         assertThat(readRequest.getSource(), is(request.getSource()));
         assertThat(readRequest.xContentType(), is(XContentType.JSON));
     }
-
-    public void testPutWatchSerializationXContentBwc() throws IOException {
-        final byte[] data = Base64.getDecoder().decode("ADwDAmlkDXsiZm9vIjoiYmFyIn0BAAAA");
-        final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,
-                Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);
-        try (StreamInput in = StreamInput.wrap(data)) {
-            in.setVersion(version);
-            PutWatchRequest request = new PutWatchRequest();
-            request.readFrom(in);
-            assertEquals(XContentType.JSON, request.xContentType());
-            assertEquals("id", request.getId());
-            assertTrue(request.isActive());
-            assertEquals("{\"foo\":\"bar\"}", request.getSource().utf8ToString());
-
-            try (BytesStreamOutput out = new BytesStreamOutput()) {
-                out.setVersion(version);
-                request.writeTo(out);
-                assertArrayEquals(data, out.bytes().toBytesRef().bytes);
-            }
-        }
-    }
 }
@ -1,172 +0,0 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.watcher.transport.actions;

import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.IndexRoutingTable;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.TestShardRouting;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.EsExecutors;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.tasks.TaskId;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.VersionUtils;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.watcher.watch.Watch;
import org.junit.Before;
import org.mockito.ArgumentCaptor;

import java.util.Collections;
import java.util.HashSet;

import static java.util.Arrays.asList;
import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;
import static org.hamcrest.Matchers.is;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class WatcherTransportActionTests extends ESTestCase {

    private MyTransportAction transportAction;
    private ClusterService clusterService;
    private TransportService transportService;

    @Before
    public void createTransportAction() {
        ThreadPool threadPool = mock(ThreadPool.class);
        when(threadPool.executor(any())).thenReturn(EsExecutors.newDirectExecutorService());
        clusterService = mock(ClusterService.class);
        transportService = mock(TransportService.class);
        transportAction = new MyTransportAction(transportService, threadPool, clusterService);
    }

    public void testThatRequestIsExecutedLocallyWithDistributedExecutionEnabled() throws Exception {
        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
                .masterNodeId("master_node").localNodeId("data_node")
                .add(newNode("master_node", Version.CURRENT))
                .add(newNode("data_node", Version.CURRENT))
                .build();

        Index watchIndex = new Index(Watch.INDEX, "foo");
        ShardId shardId = new ShardId(watchIndex, 0);
        IndexRoutingTable routingTable = IndexRoutingTable.builder(watchIndex)
                .addShard(TestShardRouting.newShardRouting(shardId, "data_node", true, STARTED)).build();

        ClusterState state = ClusterState.builder(new ClusterName("my-cluster"))
                .nodes(nodes)
                .routingTable(RoutingTable.builder().add(routingTable).build())
                .build();

        when(clusterService.state()).thenReturn(state);
        when(clusterService.localNode()).thenReturn(state.nodes().getLocalNode());
        MyActionRequest request = new MyActionRequest();
        PlainActionFuture<MyActionResponse> future = PlainActionFuture.newFuture();

        Task task = request.createTask(1, "type", "action", new TaskId("parent", 0));
        transportAction.doExecute(task, request, future);
        MyActionResponse response = future.actionGet(1000);
        assertThat(response.request, is(request));
    }

    public void testThatRequestIsExecutedByMasterWithDistributedExecutionDisabled() throws Exception {
        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
                .masterNodeId("master_node").localNodeId("master_node")
                .add(newNode("master_node", VersionUtils.randomVersionBetween(random(), Version.V_5_6_0, Version.V_6_0_0_alpha2)))
                .build();
        ClusterState state = ClusterState.builder(new ClusterName("my-cluster")).nodes(nodes).build();

        when(clusterService.state()).thenReturn(state);
        when(clusterService.localNode()).thenReturn(state.nodes().getLocalNode());
        MyActionRequest request = new MyActionRequest();
        PlainActionFuture<MyActionResponse> future = PlainActionFuture.newFuture();

        Task task = request.createTask(1, "type", "action", new TaskId("parent", 0));
        transportAction.doExecute(task, request, future);
        MyActionResponse response = future.actionGet(1000);
        assertThat(response.request, is(request));
    }

    public void testThatRequestIsForwardedToMasterWithDistributedExecutionDisabled() throws Exception {
        DiscoveryNodes nodes = new DiscoveryNodes.Builder()
                .masterNodeId("master_node").localNodeId("non_master_node")
                .add(newNode("master_node", VersionUtils.randomVersionBetween(random(), Version.V_5_6_0, Version.V_6_0_0_alpha2)))
                .add(newNode("non_master_node", Version.CURRENT))
                .build();

        ClusterState state = ClusterState.builder(new ClusterName("my-cluster")).nodes(nodes).build();

        when(clusterService.state()).thenReturn(state);
        when(clusterService.localNode()).thenReturn(state.nodes().getLocalNode());
        MyActionRequest request = new MyActionRequest();
        Task task = request.createTask(1, "type", "action", new TaskId("parent", 0));
        transportAction.doExecute(task, request, PlainActionFuture.newFuture());
        // dont wait for the future here, we would need to stub the action listener of the sendRequest call

        ArgumentCaptor<DiscoveryNode> nodeArgumentCaptor = ArgumentCaptor.forClass(DiscoveryNode.class);
        verify(transportService).sendRequest(nodeArgumentCaptor.capture(), eq("my_action_name"), eq(request), any());
        assertThat(nodeArgumentCaptor.getValue().getId(), is("master_node"));
    }

    private static DiscoveryNode newNode(String nodeName, Version version) {
        return new DiscoveryNode(nodeName, ESTestCase.buildNewFakeTransportAddress(), Collections.emptyMap(),
                new HashSet<>(asList(DiscoveryNode.Role.values())), version);
    }

    private final class MyTransportAction extends WatcherTransportAction<MyActionRequest, MyActionResponse> {

        MyTransportAction(TransportService transportService, ThreadPool threadPool, ClusterService clusterService) {
            super(Settings.EMPTY, "my_action_name", transportService, threadPool, new ActionFilters(Collections.emptySet()),
                    new IndexNameExpressionResolver(Settings.EMPTY), new XPackLicenseState(), clusterService, MyActionRequest::new,
                    MyActionResponse::new);
        }

        @Override
        protected void masterOperation(MyActionRequest request, ClusterState state,
                                       ActionListener<MyActionResponse> listener) throws Exception {
            listener.onResponse(new MyActionResponse(request));
        }
    }

    private static final class MyActionResponse extends ActionResponse {

        MyActionRequest request;

        MyActionResponse(MyActionRequest request) {
            super();
            this.request = request;
        }

        MyActionResponse() {}
    }

    private static final class MyActionRequest extends MasterNodeRequest<MyActionRequest> {

        @Override
        public ActionRequestValidationException validate() {
            return null;
        }
    }
}
@ -15,12 +15,6 @@
        "type" : "list",
        "description" : "A comma-separated list of the action ids to be acked"
      }
    },
    "params": {
      "master_timeout": {
        "type": "time",
        "description": "Explicit operation timeout for connection to master node"
      }
    }
  },
  "body": null
@ -11,12 +11,6 @@
        "description" : "Watch ID",
        "required" : true
      }
    },
    "params": {
      "master_timeout": {
        "type": "time",
        "description": "Explicit operation timeout for connection to master node"
      }
    }
  },
  "body": null
@ -11,12 +11,6 @@
        "description" : "Watch ID",
        "required" : true
      }
    },
    "params": {
      "master_timeout": {
        "type": "time",
        "description": "Explicit operation timeout for connection to master node"
      }
    }
  },
  "body": null
@ -12,12 +12,6 @@
        "description" : "Watch ID",
        "required" : true
      }
    },
    "params": {
      "master_timeout": {
        "type": "time",
        "description": "Explicit operation timeout for connection to master node"
      }
    }
  },
  "body": null
@ -13,10 +13,6 @@
      }
    },
    "params": {
      "master_timeout": {
        "type": "time",
        "description": "Explicit operation timeout for connection to master node"
      },
      "active": {
        "type": "boolean",
        "description": "Specify whether the watch is in/active by default"
@ -336,6 +336,17 @@
  - match: { model_snapshot_retention_days: 30 }
  - match: { results_retention_days: 40 }

  - do:
      catch: "/Invalid update value for analysis_limits: model_memory_limit cannot be decreased; existing is 20mb, update had 1mb/"
      xpack.ml.update_job:
        job_id: jobs-crud-update-job
        body: >
          {
            "analysis_limits": {
              "model_memory_limit": "1mb"
            }
          }

  - do:
      catch: request
      xpack.ml.update_job:
@ -7,7 +7,6 @@
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger" : {
@ -7,7 +7,6 @@
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger" : {
@ -7,7 +7,6 @@
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -7,7 +7,6 @@
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch1"
        master_timeout: "40s"
        body: >
          {
            "throttle_period" : "10s",
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch1"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        active: false
        body: >
          {
@ -8,7 +8,6 @@
      catch: /Configured URL is empty/
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -48,7 +47,6 @@
      catch: /Malformed URL/
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch1"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -16,7 +16,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -65,7 +64,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -114,7 +112,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {
@ -174,7 +171,6 @@ teardown:
  - do:
      xpack.watcher.put_watch:
        id: "my_watch"
        master_timeout: "40s"
        body: >
          {
            "trigger": {