Merge remote-tracking branch 'remotes/upstream/master' into feature/sql

Original commit: elastic/x-pack-elasticsearch@2ac0dab27b
This commit is contained in:
Costin Leau 2017-10-05 17:06:26 +03:00
commit 31a952993a
64 changed files with 616 additions and 718 deletions

View File

@ -17,6 +17,9 @@ include::setup-xes.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared2.asciidoc[]
:edit_url!:
include::release-notes/xpack-breaking.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared3.asciidoc[]
@ -33,5 +36,8 @@ include::commands/index.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared4.asciidoc[]
:edit_url!:
include::release-notes/xpack-xes.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared5.asciidoc[]

View File

@ -163,7 +163,7 @@ This command generates a zip file with the CA certificate, private key, and
signed certificates and keys in the PEM format for each node that you specify.
If you want to use a commercial or organization-specific CA, you can use the
`-csr` parameter to generate certificate signing requests (CSRs) for the nodes
in your cluster. For more information, see <<certgen>>.
TIP: For easier setup, use the node name as the instance name when you run
this tool.
@ -217,7 +217,7 @@ that is no longer valid after the command runs successfully. You cannot run the
the **Management > Users** UI in {kib} or use the security user API.
For more information, see
{ref}/setting-up-authentication.html#set-built-in-user-passwords[Setting Built-in User Passwords].
{xpack-ref}/setting-up-authentication.html#set-built-in-user-passwords[Setting Built-in User Passwords].
--
. {kibana-ref}/installing-xpack-kb.html[Install {xpack} on {kib}].

View File

@ -0,0 +1,15 @@
[[xes-7.0.0-alpha1]]
== {es} {xpack} 7.0.0-alpha1 Release Notes
[float]
[[xes-breaking-7.0.0-alpha1]]
=== Breaking Changes
Machine Learning::
* The `max_running_jobs` node property is removed in this release. Use the
`xpack.ml.max_open_jobs` setting instead. For more information, see <<ml-settings>>.
See also:
* <<release-notes-7.0.0-alpha1,{es} 7.0.0-alpha1 Release Notes>>
* {logstash-ref}/xls-7.0.0-alpha1.html[Logstash {xpack} 7.0.0-alpha1 Release Notes]

View File

@ -0,0 +1,31 @@
[role="xpack"]
[[breaking-changes-xes]]
= {xpack} Breaking Changes
[partintro]
--
This section summarizes the changes that you need to be aware of when migrating
your application from one version of {xpack} to another.
* <<breaking-7.0.0-xes>>
See also:
* <<breaking-changes,{es} Breaking Changes>>
* {kibana-ref}/breaking-changes-xpackkb.html[{kib} {xpack} Breaking Changes]
* {logstash-ref}/breaking-changes-xls.html[Logstash {xpack} Breaking Changes]
--
[role="xpack"]
[[breaking-7.0.0-xes]]
== {xpack} Breaking Changes in 7.0.0
Machine Learning::
* The `max_running_jobs` node property is removed in this release. Use the
`xpack.ml.max_open_jobs` setting instead. For more information, see <<ml-settings>>.
See also:
* <<breaking-changes-7.0,{es} Breaking Changes>>

View File

@ -0,0 +1,20 @@
[role="xpack"]
[[release-notes-xes]]
= {xpack} Release Notes
[partintro]
--
This section summarizes the changes in each release for all of the {xpack}
components in {es}.
* <<xes-7.0.0-alpha1>>
See also:
* <<es-release-notes,{es} Release Notes>>
* {kibana-ref}/release-notes-xpackkb.html[{kib} {xpack} Release Notes]
* {logstash-ref}/release-notes-xls.html[Logstash {xpack} Release Notes]
--
include::7.0.0-alpha1.asciidoc[]

View File

@ -24,7 +24,7 @@ For more information about categories, see
(string) Identifier for the job.
`category_id`::
(string) Identifier for the category. If you do not specify this optional parameter,
(long) Identifier for the category. If you do not specify this optional parameter,
the API returns information about all categories in the job.

View File

@ -33,6 +33,12 @@ All operations on index templates.
`manage_ml`::
All {ml} operations, such as creating and deleting {dfeeds}, jobs, and model
snapshots.
+
--
NOTE: Datafeeds run as a system user with elevated privileges, including
permission to read all indices.
--
`manage_pipeline`::
All operations on ingest pipelines.
@ -43,6 +49,12 @@ cache clearing.
`manage_watcher`::
All watcher operations, such as putting, executing, activating, and acknowledging watches.
+
--
NOTE: Watches run as a system user with elevated privileges, including permission
to read and write all indices.
--
`transport_client`::
All privileges necessary for a transport client to connect. Required by the remote

View File

@ -44,7 +44,7 @@ The following snippet shows a simple `index` action definition:
| `doc_id` | no | - | The optional `_id` of the document.
| `execution_time_field` | no | - | The field that will store/index the watch execution
time.
| `timeout` | no | 60s | The timeout for waiting for the index api call to
return. If no response is returned within this time,
@ -73,3 +73,6 @@ a document and the index action indexes all of them in a bulk.
An `_id` value can be added per document to dynamically set the ID of the indexed
document.
NOTE: The index action runs as a system user with elevated privileges, including
permission to write all indices.

View File

@ -3,8 +3,9 @@
[partintro]
--
You can watch for changes or anomalies in your data and perform the necessary
actions in response. For example, you might want to:
{xpack} alerting is a set of administrative features that enable you to watch
for changes or anomalies in your data and perform the necessary actions in
response. For example, you might want to:
* Monitor social media as another way to detect failures in user-facing
automated systems like ATMs or ticketing systems. When the number of tweets
@ -62,6 +63,11 @@ A full history of all watches is maintained in an Elasticsearch index. This
history keeps track of each time a watch is triggered and records the results
from the query, whether the condition was met, and what actions were taken.
NOTE: Watches run with elevated privileges. Review users mapped to the built-in
`watcher_admin` role, or to any other role that includes the `manage_watcher` cluster
privilege, and grant these roles only to personnel trusted to read and write all
indices.
--
include::getting-started.asciidoc[]
@ -81,5 +87,5 @@ include::transform.asciidoc[]
include::java.asciidoc[]
include::managing-watches.asciidoc[]
include::example-watches.asciidoc[]

View File

@ -2,17 +2,17 @@
=== Search Input
Use the `search` input to load the results of an Elasticsearch search request
into the execution context when the watch is triggered. See
<<search-input-attributes, Search Input Attributes>> for all of the
supported attributes.
In the search input's `request` object, you specify:
* The indices you want to search
* The {ref}/search-request-search-type.html[search type]
* The search request body
The search request body supports the full Elasticsearch Query DSL--it's the
same as the body of an Elasticsearch `_search` request.
For example, the following input retrieves all `event`
@ -33,7 +33,7 @@ documents from the `logs` index:
}
--------------------------------------------------
You can use date math and wildcards when specifying indices. For example,
the following input loads the latest VIXZ quote from today's daily quotes index:
[source,js]
@ -42,7 +42,7 @@ the following input loads the latest VIXZ quote from today's daily quotes index:
"input" : {
"search" : {
"request" : {
"indices" : [ "<stock-quotes-{now/d}>" ],
"indices" : [ "<stock-quotes-{now/d}>" ],
"body" : {
"size" : 1,
"sort" : {
@ -108,8 +108,8 @@ parameter:
==== Applying Conditions
The `search` input is often used in conjunction with the <<condition-script,
`script`>> condition. For example, the following snippet adds a condition to
check if the search returned more than five hits:
[source,js]
@ -200,4 +200,7 @@ specifying the request `body`:
| `ctx.trigger.triggered_time` | The time this watch was triggered.
| `ctx.trigger.scheduled_time` | The time this watch was supposed to be triggered.
| `ctx.metadata.*` | Any metadata associated with the watch.
|======
NOTE: The search input runs as a system user with elevated privileges, including
permission to read all indices.

View File

@ -163,8 +163,8 @@ public class MachineLearning implements ActionPlugin {
public static final String MAX_OPEN_JOBS_NODE_ATTR = "ml.max_open_jobs";
public static final Setting<Integer> CONCURRENT_JOB_ALLOCATIONS =
Setting.intSetting("xpack.ml.node_concurrent_job_allocations", 2, 0, Property.Dynamic, Property.NodeScope);
public static final Setting<ByteSizeValue> MAX_MODEL_MEMORY =
Setting.memorySizeSetting("xpack.ml.max_model_memory_limit", new ByteSizeValue(0), Property.NodeScope);
public static final Setting<ByteSizeValue> MAX_MODEL_MEMORY_LIMIT =
Setting.memorySizeSetting("xpack.ml.max_model_memory_limit", new ByteSizeValue(0), Property.Dynamic, Property.NodeScope);
public static final TimeValue STATE_PERSIST_RESTORE_TIMEOUT = TimeValue.timeValueMinutes(30);
@ -191,7 +191,7 @@ public class MachineLearning implements ActionPlugin {
Arrays.asList(AUTODETECT_PROCESS,
ML_ENABLED,
CONCURRENT_JOB_ALLOCATIONS,
MAX_MODEL_MEMORY,
MAX_MODEL_MEMORY_LIMIT,
ProcessCtrl.DONT_PERSIST_MODEL_STATE_SETTING,
ProcessCtrl.MAX_ANOMALY_RECORDS_SETTING,
DataCountsReporter.ACCEPTABLE_PERCENTAGE_DATE_PARSE_ERRORS_SETTING,
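
The practical effect of adding `Property.Dynamic` to the renamed `MAX_MODEL_MEMORY_LIMIT` setting is that `xpack.ml.max_model_memory_limit` can now be changed on a live cluster rather than only at node startup. A minimal sketch of such an update through the Java admin client (the `client` instance and the `512mb` value are assumptions for illustration):

[source,java]
--------------------------------------------------
// Hypothetical sketch: raise the now-dynamic ML memory limit at runtime.
// Assumes an existing org.elasticsearch.client.Client instance.
client.admin().cluster().prepareUpdateSettings()
        .setPersistentSettings(Settings.builder()
                .put("xpack.ml.max_model_memory_limit", "512mb"))
        .get();
--------------------------------------------------

`JobManager` (below) registers a settings-update consumer for this setting and caches the value in a volatile field, so job creation and updates do not have to re-read `Settings`.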

View File

@ -410,7 +410,7 @@ public class GetBucketsAction extends Action<GetBucketsAction.Request, GetBucket
query.start(request.start);
query.end(request.end);
}
jobProvider.buckets(request.jobId, query.build(), q -> listener.onResponse(new Response(q)), listener::onFailure, client);
jobProvider.buckets(request.jobId, query, q -> listener.onResponse(new Response(q)), listener::onFailure, client);
}
}

View File

@ -323,7 +323,7 @@ public class GetRecordsAction extends Action<GetRecordsAction.Request, GetRecord
jobManager.getJobOrThrowIfUnknown(request.getJobId());
RecordsQueryBuilder.RecordsQuery query = new RecordsQueryBuilder()
RecordsQueryBuilder query = new RecordsQueryBuilder()
.includeInterim(request.excludeInterim == false)
.epochStart(request.start)
.epochEnd(request.end)
@ -331,8 +331,7 @@ public class GetRecordsAction extends Action<GetRecordsAction.Request, GetRecord
.size(request.pageParams.getSize())
.recordScore(request.recordScoreFilter)
.sortField(request.sort)
.sortDescending(request.descending)
.build();
.sortDescending(request.descending);
jobProvider.records(request.jobId, query, page -> listener.onResponse(new Response(page)), listener::onFailure, client);
}
}

View File

@ -83,11 +83,10 @@ public class DatafeedJobBuilder {
};
// Step 1. Collect latest bucket
BucketsQueryBuilder.BucketsQuery latestBucketQuery = new BucketsQueryBuilder()
BucketsQueryBuilder latestBucketQuery = new BucketsQueryBuilder()
.sortField(Result.TIMESTAMP.getPreferredName())
.sortDescending(true).size(1)
.includeInterim(false)
.build();
.includeInterim(false);
jobProvider.bucketsViaInternalClient(job.getId(), latestBucketQuery, bucketsHandler, e -> {
if (e instanceof ResourceNotFoundException) {
QueryPage<Bucket> empty = new QueryPage<>(Collections.emptyList(), 0, Bucket.RESULT_TYPE_FIELD);

View File

@ -18,6 +18,7 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.CheckedConsumer;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.MlMetadata;
import org.elasticsearch.xpack.ml.action.DeleteJobAction;
@ -63,6 +64,8 @@ public class JobManager extends AbstractComponent {
private final Client client;
private final UpdateJobProcessNotifier updateJobProcessNotifier;
private volatile ByteSizeValue maxModelMemoryLimit;
/**
* Create a JobManager
*/
@ -74,6 +77,14 @@ public class JobManager extends AbstractComponent {
this.auditor = Objects.requireNonNull(auditor);
this.client = Objects.requireNonNull(client);
this.updateJobProcessNotifier = updateJobProcessNotifier;
maxModelMemoryLimit = MachineLearning.MAX_MODEL_MEMORY_LIMIT.get(settings);
clusterService.getClusterSettings()
.addSettingsUpdateConsumer(MachineLearning.MAX_MODEL_MEMORY_LIMIT, this::setMaxModelMemoryLimit);
}
private void setMaxModelMemoryLimit(ByteSizeValue maxModelMemoryLimit) {
this.maxModelMemoryLimit = maxModelMemoryLimit;
}
/**
@ -140,7 +151,7 @@ public class JobManager extends AbstractComponent {
// 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
// breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
// submitted
request.getJobBuilder().validateModelMemoryLimit(MachineLearning.MAX_MODEL_MEMORY.get(settings));
request.getJobBuilder().validateModelMemoryLimit(maxModelMemoryLimit);
Job job = request.getJobBuilder().build(new Date());
@ -243,7 +254,7 @@ public class JobManager extends AbstractComponent {
@Override
public ClusterState execute(ClusterState currentState) throws Exception {
Job job = getJobOrThrowIfUnknown(jobId, currentState);
updatedJob = jobUpdate.mergeWithJob(job, MachineLearning.MAX_MODEL_MEMORY.get(settings));
updatedJob = jobUpdate.mergeWithJob(job, maxModelMemoryLimit);
return updateClusterState(updatedJob, true, currentState);
}

View File

@ -108,7 +108,8 @@ public final class Messages {
public static final String JOB_CONFIG_FIELD_VALUE_TOO_LOW = "{0} cannot be less than {1,number}. Value = {2,number}";
public static final String JOB_CONFIG_MODEL_MEMORY_LIMIT_TOO_LOW = "model_memory_limit must be at least 1 MiB. Value = {0,number}";
public static final String JOB_CONFIG_MODEL_MEMORY_LIMIT_GREATER_THAN_MAX =
"model_memory_limit [{0}] must be less than the value of the " + MachineLearning.MAX_MODEL_MEMORY.getKey() + " setting [{1}]";
"model_memory_limit [{0}] must be less than the value of the " + MachineLearning.MAX_MODEL_MEMORY_LIMIT.getKey() +
" setting [{1}]";
public static final String JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED =
"The ''{0}'' function cannot be used in jobs that will take pre-summarized input";
public static final String JOB_CONFIG_FUNCTION_REQUIRES_BYFIELD = "by_field_name must be set when the ''{0}'' function is used";

View File

@ -5,11 +5,16 @@
*/
package org.elasticsearch.xpack.ml.job.persistence;
import org.elasticsearch.common.Strings;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.SortBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.xpack.ml.job.results.Bucket;
import org.elasticsearch.xpack.ml.job.results.Result;
import java.util.Objects;
/**
* One time query builder for buckets.
* <ul>
@ -34,52 +39,59 @@ import java.util.Objects;
public final class BucketsQueryBuilder {
public static final int DEFAULT_SIZE = 100;
private BucketsQuery bucketsQuery = new BucketsQuery();
private int from = 0;
private int size = DEFAULT_SIZE;
private boolean expand = false;
private boolean includeInterim = false;
private double anomalyScoreFilter = 0.0;
private String start;
private String end;
private String timestamp;
private String sortField = Result.TIMESTAMP.getPreferredName();
private boolean sortDescending = false;
public BucketsQueryBuilder from(int from) {
bucketsQuery.from = from;
this.from = from;
return this;
}
public BucketsQueryBuilder size(int size) {
bucketsQuery.size = size;
this.size = size;
return this;
}
public BucketsQueryBuilder expand(boolean expand) {
bucketsQuery.expand = expand;
this.expand = expand;
return this;
}
public boolean isExpand() {
return expand;
}
public BucketsQueryBuilder includeInterim(boolean include) {
bucketsQuery.includeInterim = include;
this.includeInterim = include;
return this;
}
public boolean isIncludeInterim() {
return includeInterim;
}
public BucketsQueryBuilder anomalyScoreThreshold(Double anomalyScoreFilter) {
if (anomalyScoreFilter != null) {
bucketsQuery.anomalyScoreFilter = anomalyScoreFilter;
}
return this;
}
/**
* @param partitionValue Not set if null or empty
*/
public BucketsQueryBuilder partitionValue(String partitionValue) {
if (!Strings.isNullOrEmpty(partitionValue)) {
bucketsQuery.partitionValue = partitionValue;
this.anomalyScoreFilter = anomalyScoreFilter;
}
return this;
}
public BucketsQueryBuilder sortField(String sortField) {
bucketsQuery.sortField = sortField;
this.sortField = sortField;
return this;
}
public BucketsQueryBuilder sortDescending(boolean sortDescending) {
bucketsQuery.sortDescending = sortDescending;
this.sortDescending = sortDescending;
return this;
}
@ -87,7 +99,7 @@ public final class BucketsQueryBuilder {
* If startTime &lt;= 0 the parameter is not set
*/
public BucketsQueryBuilder start(String startTime) {
bucketsQuery.start = startTime;
this.start = startTime;
return this;
}
@ -95,121 +107,52 @@ public final class BucketsQueryBuilder {
* If endTime &lt;= 0 the parameter is not set
*/
public BucketsQueryBuilder end(String endTime) {
bucketsQuery.end = endTime;
this.end = endTime;
return this;
}
public BucketsQueryBuilder timestamp(String timestamp) {
bucketsQuery.timestamp = timestamp;
bucketsQuery.size = 1;
this.timestamp = timestamp;
this.size = 1;
return this;
}
public BucketsQueryBuilder.BucketsQuery build() {
if (bucketsQuery.timestamp != null && (bucketsQuery.start != null || bucketsQuery.end != null)) {
public boolean hasTimestamp() {
return timestamp != null;
}
public SearchSourceBuilder build() {
if (timestamp != null && (start != null || end != null)) {
throw new IllegalStateException("Either specify timestamp or start/end");
}
return bucketsQuery;
}
public void clear() {
bucketsQuery = new BucketsQueryBuilder.BucketsQuery();
}
public class BucketsQuery {
private int from = 0;
private int size = DEFAULT_SIZE;
private boolean expand = false;
private boolean includeInterim = false;
private double anomalyScoreFilter = 0.0;
private String start;
private String end;
private String timestamp;
private String partitionValue = null;
private String sortField = Result.TIMESTAMP.getPreferredName();
private boolean sortDescending = false;
public int getFrom() {
return from;
ResultsFilterBuilder rfb = new ResultsFilterBuilder();
if (hasTimestamp()) {
rfb.timeRange(Result.TIMESTAMP.getPreferredName(), timestamp);
} else {
rfb.timeRange(Result.TIMESTAMP.getPreferredName(), start, end)
.score(Bucket.ANOMALY_SCORE.getPreferredName(), anomalyScoreFilter)
.interim(includeInterim);
}
public int getSize() {
return size;
}
public boolean isExpand() {
return expand;
}
public boolean isIncludeInterim() {
return includeInterim;
}
public double getAnomalyScoreFilter() {
return anomalyScoreFilter;
}
public String getStart() {
return start;
}
public String getEnd() {
return end;
}
public String getTimestamp() {
return timestamp;
}
/**
* @return Null if not set
*/
public String getPartitionValue() {
return partitionValue;
}
public String getSortField() {
return sortField;
}
public boolean isSortDescending() {
return sortDescending;
}
@Override
public int hashCode() {
return Objects.hash(from, size, expand, includeInterim, anomalyScoreFilter, start, end,
timestamp, partitionValue, sortField, sortDescending);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
BucketsQuery other = (BucketsQuery) obj;
return Objects.equals(from, other.from) &&
Objects.equals(size, other.size) &&
Objects.equals(expand, other.expand) &&
Objects.equals(includeInterim, other.includeInterim) &&
Objects.equals(start, other.start) &&
Objects.equals(end, other.end) &&
Objects.equals(timestamp, other.timestamp) &&
Objects.equals(anomalyScoreFilter, other.anomalyScoreFilter) &&
Objects.equals(partitionValue, other.partitionValue) &&
Objects.equals(sortField, other.sortField) &&
this.sortDescending == other.sortDescending;
SortBuilder<?> sortBuilder = new FieldSortBuilder(sortField)
.order(sortDescending ? SortOrder.DESC : SortOrder.ASC);
QueryBuilder boolQuery = new BoolQueryBuilder()
.filter(rfb.build())
.filter(QueryBuilders.termQuery(Result.RESULT_TYPE.getPreferredName(), Bucket.RESULT_TYPE_VALUE));
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.sort(sortBuilder);
searchSourceBuilder.query(boolQuery);
searchSourceBuilder.from(from);
searchSourceBuilder.size(size);
// If not using the default sort field (timestamp) add it as a secondary sort
if (Result.TIMESTAMP.getPreferredName().equals(sortField) == false) {
searchSourceBuilder.sort(Result.TIMESTAMP.getPreferredName(), sortDescending ? SortOrder.DESC : SortOrder.ASC);
}
return searchSourceBuilder;
}
}
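
With the `BucketsQuery` inner class removed, the builder itself is what callers pass around, and `build()` produces the `SearchSourceBuilder` directly. A usage sketch (the values and the `anomaly_score` sort field are illustrative):

[source,java]
--------------------------------------------------
// Sketch: the 50 highest-scoring, non-interim buckets in a time range.
BucketsQueryBuilder query = new BucketsQueryBuilder()
        .start("1478649600000")        // epoch millis, as a string
        .end("1478736000000")
        .anomalyScoreThreshold(75.0)
        .includeInterim(false)
        .sortField("anomaly_score")
        .sortDescending(true)
        .size(50);
// Throws IllegalStateException if timestamp and start/end are mixed.
SearchSourceBuilder searchSource = query.build();
--------------------------------------------------

Note that `timestamp(...)` also forces `size` to 1, since a timestamp identifies a single bucket.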

View File

@ -49,7 +49,6 @@ import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.SortBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.xpack.ml.MlMetaIndex;
@ -60,7 +59,6 @@ import org.elasticsearch.xpack.ml.action.GetRecordsAction;
import org.elasticsearch.xpack.ml.action.util.QueryPage;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.config.MlFilter;
import org.elasticsearch.xpack.ml.job.persistence.BucketsQueryBuilder.BucketsQuery;
import org.elasticsearch.xpack.ml.job.persistence.InfluencersQueryBuilder.InfluencersQuery;
import org.elasticsearch.xpack.ml.job.process.autodetect.params.AutodetectParams;
import org.elasticsearch.xpack.ml.job.process.autodetect.state.CategorizerState;
@ -93,15 +91,6 @@ import java.util.function.Supplier;
public class JobProvider {
private static final Logger LOGGER = Loggers.getLogger(JobProvider.class);
private static final List<String> SECONDARY_SORT = Arrays.asList(
AnomalyRecord.RECORD_SCORE.getPreferredName(),
AnomalyRecord.OVER_FIELD_VALUE.getPreferredName(),
AnomalyRecord.PARTITION_FIELD_VALUE.getPreferredName(),
AnomalyRecord.BY_FIELD_VALUE.getPreferredName(),
AnomalyRecord.FIELD_NAME.getPreferredName(),
AnomalyRecord.FUNCTION.getPreferredName()
);
private static final int RECORDS_SIZE_PARAM = 10000;
private static final int BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE = 20;
private static final double ESTABLISHED_MEMORY_CV_THRESHOLD = 0.1;
@ -449,7 +438,7 @@ public class JobProvider {
* Search for buckets with the parameters in the {@link BucketsQueryBuilder}
* Uses the internal client, so runs as the _xpack user
*/
public void bucketsViaInternalClient(String jobId, BucketsQuery query, Consumer<QueryPage<Bucket>> handler,
public void bucketsViaInternalClient(String jobId, BucketsQueryBuilder query, Consumer<QueryPage<Bucket>> handler,
Consumer<Exception> errorHandler) {
buckets(jobId, query, handler, errorHandler, client);
}
@ -458,62 +447,28 @@ public class JobProvider {
* Search for buckets with the parameters in the {@link BucketsQueryBuilder}
* Uses a supplied client, so may run as the currently authenticated user
*/
public void buckets(String jobId, BucketsQuery query, Consumer<QueryPage<Bucket>> handler, Consumer<Exception> errorHandler,
public void buckets(String jobId, BucketsQueryBuilder query, Consumer<QueryPage<Bucket>> handler, Consumer<Exception> errorHandler,
Client client) throws ResourceNotFoundException {
ResultsFilterBuilder rfb = new ResultsFilterBuilder();
if (query.getTimestamp() != null) {
rfb.timeRange(Result.TIMESTAMP.getPreferredName(), query.getTimestamp());
} else {
rfb.timeRange(Result.TIMESTAMP.getPreferredName(), query.getStart(), query.getEnd())
.score(Bucket.ANOMALY_SCORE.getPreferredName(), query.getAnomalyScoreFilter())
.interim(query.isIncludeInterim());
}
SortBuilder<?> sortBuilder = new FieldSortBuilder(query.getSortField())
.order(query.isSortDescending() ? SortOrder.DESC : SortOrder.ASC);
QueryBuilder boolQuery = new BoolQueryBuilder()
.filter(rfb.build())
.filter(QueryBuilders.termQuery(Result.RESULT_TYPE.getPreferredName(), Bucket.RESULT_TYPE_VALUE));
String indexName = AnomalyDetectorsIndex.jobResultsAliasedName(jobId);
SearchRequest searchRequest = new SearchRequest(indexName);
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.sort(sortBuilder);
searchSourceBuilder.query(boolQuery);
searchSourceBuilder.from(query.getFrom());
searchSourceBuilder.size(query.getSize());
// If not using the default sort field (timestamp) add it as a secondary sort
if (Result.TIMESTAMP.getPreferredName().equals(query.getSortField()) == false) {
searchSourceBuilder.sort(Result.TIMESTAMP.getPreferredName(), query.isSortDescending() ? SortOrder.DESC : SortOrder.ASC);
}
searchRequest.source(searchSourceBuilder);
searchRequest.source(query.build());
searchRequest.indicesOptions(addIgnoreUnavailable(SearchRequest.DEFAULT_INDICES_OPTIONS));
client.search(searchRequest, ActionListener.wrap(searchResponse -> {
SearchHits hits = searchResponse.getHits();
if (query.getTimestamp() != null) {
if (hits.getTotalHits() == 0) {
throw QueryPage.emptyQueryPage(Bucket.RESULTS_FIELD);
} else if (hits.getTotalHits() > 1) {
LOGGER.error("Found more than one bucket with timestamp [{}] from index {}", query.getTimestamp(), indexName);
}
}
List<Bucket> results = new ArrayList<>();
for (SearchHit hit : hits.getHits()) {
BytesReference source = hit.getSourceRef();
try (XContentParser parser = XContentFactory.xContent(source).createParser(NamedXContentRegistry.EMPTY, source)) {
Bucket bucket = Bucket.PARSER.apply(parser, null);
if (query.isIncludeInterim() || bucket.isInterim() == false) {
results.add(bucket);
}
results.add(bucket);
} catch (IOException e) {
throw new ElasticsearchParseException("failed to parse bucket", e);
}
}
if (query.getTimestamp() != null && results.isEmpty()) {
if (query.hasTimestamp() && results.isEmpty()) {
throw QueryPage.emptyQueryPage(Bucket.RESULTS_FIELD);
}
@ -529,11 +484,11 @@ public class JobProvider {
}, e -> errorHandler.accept(mapAuthFailure(e, jobId, GetBucketsAction.NAME))));
}
private void expandBuckets(String jobId, BucketsQuery query, QueryPage<Bucket> buckets, Iterator<Bucket> bucketsToExpand,
private void expandBuckets(String jobId, BucketsQueryBuilder query, QueryPage<Bucket> buckets, Iterator<Bucket> bucketsToExpand,
Consumer<QueryPage<Bucket>> handler, Consumer<Exception> errorHandler, Client client) {
if (bucketsToExpand.hasNext()) {
Consumer<Integer> c = i -> expandBuckets(jobId, query, buckets, bucketsToExpand, handler, errorHandler, client);
expandBucket(jobId, query.isIncludeInterim(), bucketsToExpand.next(), query.getPartitionValue(), c, errorHandler, client);
expandBucket(jobId, query.isIncludeInterim(), bucketsToExpand.next(), c, errorHandler, client);
} else {
handler.accept(buckets);
}
@ -569,43 +524,33 @@ public class JobProvider {
// This now gets the first 10K records for a bucket. The rate of records per bucket
// is controlled by a parameter in the C++ process and its default value is 500. Users may
// change that. Issue elastic/machine-learning-cpp#73 is open to prevent this.
public void expandBucket(String jobId, boolean includeInterim, Bucket bucket, String partitionFieldValue,
Consumer<Integer> consumer, Consumer<Exception> errorHandler, Client client) {
public void expandBucket(String jobId, boolean includeInterim, Bucket bucket, Consumer<Integer> consumer,
Consumer<Exception> errorHandler, Client client) {
Consumer<QueryPage<AnomalyRecord>> h = page -> {
bucket.getRecords().addAll(page.results());
if (partitionFieldValue != null) {
bucket.setAnomalyScore(bucket.partitionAnomalyScore(partitionFieldValue));
}
consumer.accept(bucket.getRecords().size());
};
bucketRecords(jobId, bucket, 0, RECORDS_SIZE_PARAM, includeInterim, AnomalyRecord.PROBABILITY.getPreferredName(),
false, partitionFieldValue, h, errorHandler, client);
false, h, errorHandler, client);
}
void bucketRecords(String jobId, Bucket bucket, int from, int size, boolean includeInterim, String sortField,
boolean descending, String partitionFieldValue, Consumer<QueryPage<AnomalyRecord>> handler,
boolean descending, Consumer<QueryPage<AnomalyRecord>> handler,
Consumer<Exception> errorHandler, Client client) {
// Find the records using the time stamp rather than a parent-child
// relationship. The parent-child filter involves two queries behind
// the scenes, and Elasticsearch documentation claims it's significantly
// slower. Here we rely on the record timestamps being identical to the
// bucket timestamp.
QueryBuilder recordFilter = QueryBuilders.termQuery(Result.TIMESTAMP.getPreferredName(), bucket.getTimestamp().getTime());
RecordsQueryBuilder recordsQueryBuilder = new RecordsQueryBuilder()
.timestamp(bucket.getTimestamp())
.from(from)
.size(size)
.includeInterim(includeInterim)
.sortField(sortField)
.sortDescending(descending);
ResultsFilterBuilder builder = new ResultsFilterBuilder(recordFilter).interim(includeInterim);
if (partitionFieldValue != null) {
builder.term(AnomalyRecord.PARTITION_FIELD_VALUE.getPreferredName(), partitionFieldValue);
}
recordFilter = builder.build();
FieldSortBuilder sb = null;
if (sortField != null) {
sb = new FieldSortBuilder(sortField)
.missing("_last")
.order(descending ? SortOrder.DESC : SortOrder.ASC);
}
records(jobId, from, size, recordFilter, sb, SECONDARY_SORT, descending, handler, errorHandler, client);
records(jobId, recordsQueryBuilder, handler, errorHandler, client);
}
/**
@ -659,55 +604,19 @@ public class JobProvider {
/**
* Search for anomaly records with the parameters in the
* {@link org.elasticsearch.xpack.ml.job.persistence.RecordsQueryBuilder.RecordsQuery}
* {@link org.elasticsearch.xpack.ml.job.persistence.RecordsQueryBuilder}
* Uses a supplied client, so may run as the currently authenticated user
*/
public void records(String jobId, RecordsQueryBuilder.RecordsQuery query, Consumer<QueryPage<AnomalyRecord>> handler,
Consumer<Exception> errorHandler, Client client) {
QueryBuilder fb = new ResultsFilterBuilder()
.timeRange(Result.TIMESTAMP.getPreferredName(), query.getStart(), query.getEnd())
.score(AnomalyRecord.RECORD_SCORE.getPreferredName(), query.getRecordScoreThreshold())
.interim(query.isIncludeInterim())
.term(AnomalyRecord.PARTITION_FIELD_VALUE.getPreferredName(), query.getPartitionFieldValue()).build();
FieldSortBuilder sb = null;
if (query.getSortField() != null) {
sb = new FieldSortBuilder(query.getSortField())
.missing("_last")
.order(query.isSortDescending() ? SortOrder.DESC : SortOrder.ASC);
}
records(jobId, query.getFrom(), query.getSize(), fb, sb, SECONDARY_SORT, query.isSortDescending(), handler, errorHandler, client);
}
/**
* The returned records have their id set.
*/
private void records(String jobId, int from, int size,
QueryBuilder recordFilter, FieldSortBuilder sb, List<String> secondarySort,
boolean descending, Consumer<QueryPage<AnomalyRecord>> handler,
public void records(String jobId, RecordsQueryBuilder recordsQueryBuilder, Consumer<QueryPage<AnomalyRecord>> handler,
Consumer<Exception> errorHandler, Client client) {
String indexName = AnomalyDetectorsIndex.jobResultsAliasedName(jobId);
recordFilter = new BoolQueryBuilder()
.filter(recordFilter)
.filter(new TermsQueryBuilder(Result.RESULT_TYPE.getPreferredName(), AnomalyRecord.RESULT_TYPE_VALUE));
SearchSourceBuilder searchSourceBuilder = recordsQueryBuilder.build();
SearchRequest searchRequest = new SearchRequest(indexName);
searchRequest.indicesOptions(addIgnoreUnavailable(searchRequest.indicesOptions()));
searchRequest.source(new SearchSourceBuilder()
.from(from)
.size(size)
.query(recordFilter)
.sort(sb == null ? SortBuilders.fieldSort(ElasticsearchMappings.ES_DOC) : sb)
.fetchSource(true)
);
searchRequest.source(recordsQueryBuilder.build());
for (String sortField : secondarySort) {
searchRequest.source().sort(sortField, descending ? SortOrder.DESC : SortOrder.ASC);
}
LOGGER.trace("ES API CALL: search all of records from index {}{}{} with filter after sort from {} size {}",
indexName, (sb != null) ? " with sort" : "",
secondarySort.isEmpty() ? "" : " with secondary sort", from, size);
LOGGER.trace("ES API CALL: search all of records from index {} with query {}", indexName, searchSourceBuilder);
client.search(searchRequest, ActionListener.wrap(searchResponse -> {
List<AnomalyRecord> results = new ArrayList<>();
for (SearchHit hit : searchResponse.getHits().getHits()) {
@ -1015,11 +924,10 @@ public class JobProvider {
// Step 1. Find the time span of the most recent N bucket results, where N is the number of buckets
// required to consider memory usage "established"
BucketsQueryBuilder.BucketsQuery bucketQuery = new BucketsQueryBuilder()
BucketsQueryBuilder bucketQuery = new BucketsQueryBuilder()
.sortField(Result.TIMESTAMP.getPreferredName())
.sortDescending(true).from(BUCKETS_FOR_ESTABLISHED_MEMORY_SIZE - 1).size(1)
.includeInterim(false)
.build();
.includeInterim(false);
bucketsViaInternalClient(jobId, bucketQuery, bucketHandler, e -> {
if (e instanceof ResourceNotFoundException) {
handler.accept(null);

View File

@ -5,6 +5,21 @@
*/
package org.elasticsearch.xpack.ml.job.persistence;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.TermsQueryBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.xpack.ml.job.results.AnomalyRecord;
import org.elasticsearch.xpack.ml.job.results.Result;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
/**
* One time query builder for records. Sets default values for the following
* parameters:
@ -31,109 +46,105 @@ public final class RecordsQueryBuilder {
public static final int DEFAULT_SIZE = 100;
private RecordsQuery recordsQuery = new RecordsQuery();
private static final List<String> SECONDARY_SORT = Arrays.asList(
AnomalyRecord.RECORD_SCORE.getPreferredName(),
AnomalyRecord.OVER_FIELD_VALUE.getPreferredName(),
AnomalyRecord.PARTITION_FIELD_VALUE.getPreferredName(),
AnomalyRecord.BY_FIELD_VALUE.getPreferredName(),
AnomalyRecord.FIELD_NAME.getPreferredName(),
AnomalyRecord.FUNCTION.getPreferredName()
);
private int from = 0;
private int size = DEFAULT_SIZE;
private boolean includeInterim = false;
private String sortField;
private boolean sortDescending = true;
private double recordScore = 0.0;
private String start;
private String end;
private Date timestamp;
public RecordsQueryBuilder from(int from) {
recordsQuery.from = from;
this.from = from;
return this;
}
public RecordsQueryBuilder size(int size) {
recordsQuery.size = size;
this.size = size;
return this;
}
public RecordsQueryBuilder epochStart(String startTime) {
recordsQuery.start = startTime;
this.start = startTime;
return this;
}
public RecordsQueryBuilder epochEnd(String endTime) {
recordsQuery.end = endTime;
this.end = endTime;
return this;
}
public RecordsQueryBuilder includeInterim(boolean include) {
recordsQuery.includeInterim = include;
this.includeInterim = include;
return this;
}
public RecordsQueryBuilder sortField(String fieldname) {
recordsQuery.sortField = fieldname;
this.sortField = fieldname;
return this;
}
public RecordsQueryBuilder sortDescending(boolean sortDescending) {
recordsQuery.sortDescending = sortDescending;
this.sortDescending = sortDescending;
return this;
}
public RecordsQueryBuilder recordScore(double recordScore) {
recordsQuery.recordScore = recordScore;
this.recordScore = recordScore;
return this;
}
public RecordsQueryBuilder partitionFieldValue(String partitionFieldValue) {
recordsQuery.partitionFieldValue = partitionFieldValue;
public RecordsQueryBuilder timestamp(Date timestamp) {
this.timestamp = timestamp;
return this;
}
public RecordsQuery build() {
return recordsQuery;
}
public SearchSourceBuilder build() {
QueryBuilder query = new ResultsFilterBuilder()
.timeRange(Result.TIMESTAMP.getPreferredName(), start, end)
.score(AnomalyRecord.RECORD_SCORE.getPreferredName(), recordScore)
.interim(includeInterim)
.build();
public void clear() {
recordsQuery = new RecordsQuery();
}
public class RecordsQuery {
private int from = 0;
private int size = DEFAULT_SIZE;
private boolean includeInterim = false;
private String sortField;
private boolean sortDescending = true;
private double recordScore = 0.0;
private String partitionFieldValue;
private String start;
private String end;
public int getSize() {
return size;
FieldSortBuilder sb;
if (sortField != null) {
sb = new FieldSortBuilder(sortField)
.missing("_last")
.order(sortDescending ? SortOrder.DESC : SortOrder.ASC);
} else {
sb = SortBuilders.fieldSort(ElasticsearchMappings.ES_DOC);
}
public boolean isIncludeInterim() {
return includeInterim;
BoolQueryBuilder recordFilter = new BoolQueryBuilder()
.filter(query)
.filter(new TermsQueryBuilder(Result.RESULT_TYPE.getPreferredName(), AnomalyRecord.RESULT_TYPE_VALUE));
if (timestamp != null) {
recordFilter.filter(QueryBuilders.termQuery(Result.TIMESTAMP.getPreferredName(), timestamp.getTime()));
}
public String getSortField() {
return sortField;
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder()
.from(from)
.size(size)
.query(recordFilter)
.sort(sb)
.fetchSource(true);
for (String sortField : SECONDARY_SORT) {
searchSourceBuilder.sort(sortField, sortDescending ? SortOrder.DESC : SortOrder.ASC);
}
public boolean isSortDescending() {
return sortDescending;
}
public double getRecordScoreThreshold() {
return recordScore;
}
public String getPartitionFieldValue() {
return partitionFieldValue;
}
public int getFrom() {
return from;
}
public String getStart() {
return start;
}
public String getEnd() {
return end;
}
return searchSourceBuilder;
}
}
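
As with `BucketsQueryBuilder`, callers now hand the builder to `JobProvider#records` and the search source is materialized at search time. A sketch mirroring the `bucketRecords` call above (the result and error handlers are hypothetical):

[source,java]
--------------------------------------------------
// Sketch: high-scoring records for one bucket, highest scores first.
RecordsQueryBuilder recordsQuery = new RecordsQueryBuilder()
        .timestamp(bucket.getTimestamp())  // pins records to the bucket's timestamp
        .includeInterim(false)
        .recordScore(75.0)
        .sortField("record_score")
        .sortDescending(true)
        .size(100);
jobProvider.records(jobId, recordsQuery,
        page -> page.results().forEach(r -> process(r)),   // hypothetical handler
        e -> logger.error("records query failed", e),
        client);
--------------------------------------------------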

View File

@ -20,19 +20,19 @@ import java.util.List;
* This builder facilitates the creation of a {@link QueryBuilder} with common
* characteristics to both buckets and records.
*/
class ResultsFilterBuilder {
public class ResultsFilterBuilder {
private final List<QueryBuilder> queries;
ResultsFilterBuilder() {
public ResultsFilterBuilder() {
queries = new ArrayList<>();
}
ResultsFilterBuilder(QueryBuilder queryBuilder) {
public ResultsFilterBuilder(QueryBuilder queryBuilder) {
this();
queries.add(queryBuilder);
}
ResultsFilterBuilder timeRange(String field, Object start, Object end) {
public ResultsFilterBuilder timeRange(String field, Object start, Object end) {
if (start != null || end != null) {
RangeQueryBuilder timeRange = QueryBuilders.rangeQuery(field);
if (start != null) {
@ -46,12 +46,12 @@ class ResultsFilterBuilder {
return this;
}
ResultsFilterBuilder timeRange(String field, String timestamp) {
public ResultsFilterBuilder timeRange(String field, String timestamp) {
addQuery(QueryBuilders.matchQuery(field, timestamp));
return this;
}
ResultsFilterBuilder score(String fieldName, double threshold) {
public ResultsFilterBuilder score(String fieldName, double threshold) {
if (threshold > 0.0) {
RangeQueryBuilder scoreFilter = QueryBuilders.rangeQuery(fieldName);
scoreFilter.gte(threshold);
@ -78,7 +78,7 @@ class ResultsFilterBuilder {
return this;
}
ResultsFilterBuilder term(String fieldName, String fieldValue) {
public ResultsFilterBuilder term(String fieldName, String fieldValue) {
if (Strings.isNullOrEmpty(fieldName) || Strings.isNullOrEmpty(fieldValue)) {
return this;
}
@ -88,7 +88,7 @@ class ResultsFilterBuilder {
return this;
}
ResultsFilterBuilder resultType(String resultType) {
public ResultsFilterBuilder resultType(String resultType) {
return term(Result.RESULT_TYPE.getPreferredName(), resultType);
}
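
Making `ResultsFilterBuilder` public lets both query builders share one way of composing result filters. A minimal sketch, with illustrative field names and thresholds:

[source,java]
--------------------------------------------------
// Sketch: non-interim results in a time range, above a score threshold.
QueryBuilder filter = new ResultsFilterBuilder()
        .timeRange("timestamp", "now-1d", "now")
        .score("anomaly_score", 50.0)  // only applied when the threshold is > 0
        .interim(false)
        .resultType("bucket")
        .build();
--------------------------------------------------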

View File

@ -6,6 +6,7 @@
package org.elasticsearch.xpack.ml.rest.results;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.rest.BaseRestHandler;
@ -45,20 +46,23 @@ public class RestGetBucketsAction extends BaseRestHandler {
@Override
protected RestChannelConsumer prepareRequest(RestRequest restRequest, NodeClient client) throws IOException {
String jobId = restRequest.param(Job.ID.getPreferredName());
String timestamp = restRequest.param(GetBucketsAction.Request.TIMESTAMP.getPreferredName());
final GetBucketsAction.Request request;
if (restRequest.hasContentOrSourceParam()) {
XContentParser parser = restRequest.contentOrSourceParamParser();
request = GetBucketsAction.Request.parseRequest(jobId, parser);
// A timestamp in the URL overrides any timestamp that may also have been set in the body
if (!Strings.isNullOrEmpty(timestamp)) {
request.setTimestamp(timestamp);
}
} else {
request = new GetBucketsAction.Request(jobId);
// Check if the REST param is set first so mutually exclusive
// options will only cause an error if set
if (restRequest.hasParam(GetBucketsAction.Request.TIMESTAMP.getPreferredName())) {
String timestamp = restRequest.param(GetBucketsAction.Request.TIMESTAMP.getPreferredName());
if (timestamp != null && !timestamp.isEmpty()) {
request.setTimestamp(timestamp);
}
// options will cause an error if set
if (!Strings.isNullOrEmpty(timestamp)) {
request.setTimestamp(timestamp);
}
// multiple bucket options
if (restRequest.hasParam(PageParams.FROM.getPreferredName()) || restRequest.hasParam(PageParams.SIZE.getPreferredName())) {

View File

@ -119,6 +119,9 @@ public class NodeStatsMonitoringDoc extends FilteredMonitoringDoc {
"node_stats.os.cgroup.cpu.stat.number_of_elapsed_periods",
"node_stats.os.cgroup.cpu.stat.number_of_times_throttled",
"node_stats.os.cgroup.cpu.stat.time_throttled_nanos",
"node_stats.os.cgroup.memory.control_group",
"node_stats.os.cgroup.memory.limit_in_bytes",
"node_stats.os.cgroup.memory.usage_in_bytes",
"node_stats.os.cpu.load_average.1m",
"node_stats.os.cpu.load_average.5m",
"node_stats.os.cpu.load_average.15m",
@ -155,4 +158,4 @@ public class NodeStatsMonitoringDoc extends FilteredMonitoringDoc {
"node_stats.thread_pool.watcher.threads",
"node_stats.thread_pool.watcher.queue",
"node_stats.thread_pool.watcher.rejected");
}

View File

@ -26,7 +26,7 @@ public final class MonitoringTemplateUtils {
* <p>
* It may be possible for this to diverge between templates and pipelines, but for now they're the same.
*/
public static final int LAST_UPDATED_VERSION = Version.V_6_0_0_beta1.id;
public static final int LAST_UPDATED_VERSION = Version.V_7_0_0_alpha1.id;
/**
* Current version of templates used in their name to differentiate from breaking changes (separate from product version).

View File

@ -7,6 +7,7 @@ package org.elasticsearch.xpack.notification.email;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.SpecialPermission;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsException;
import org.elasticsearch.common.unit.TimeValue;
@ -237,8 +238,8 @@ public class Account {
replace(builder, "use_rset", "userset");
settings = builder.build();
Properties props = new Properties();
for (Map.Entry<String, String> entry : settings.getAsMap().entrySet()) {
props.setProperty(SMTP_SETTINGS_PREFIX + entry.getKey(), entry.getValue());
for (String key : settings.keySet()) {
props.setProperty(SMTP_SETTINGS_PREFIX + key, settings.get(key));
}
return props;
}

View File

@ -212,7 +212,7 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
static final Setting<List<String>> AUDIT_OUTPUTS_SETTING =
Setting.listSetting(setting("audit.outputs"),
s -> s.getAsMap().containsKey(setting("audit.outputs")) ?
s -> s.keySet().contains(setting("audit.outputs")) ?
Collections.emptyList() : Collections.singletonList(LoggingAuditTrail.NAME),
Function.identity(), Property.NodeScope);
@ -680,7 +680,6 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
return;
}
final Map<String, String> settingsMap = settings.getAsMap();
for (Map.Entry<String, Settings> tribeSettings : tribesSettings.entrySet()) {
String tribePrefix = "tribe." + tribeSettings.getKey() + ".";
@ -701,12 +700,11 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, Clus
}
// we passed all the checks now we need to copy in all of the x-pack security settings
for (Map.Entry<String, String> entry : settingsMap.entrySet()) {
String key = entry.getKey();
if (key.startsWith("xpack.security.")) {
settingsBuilder.put(tribePrefix + key, entry.getValue());
settings.keySet().forEach(k -> {
if (k.startsWith("xpack.security.")) {
settingsBuilder.copy(tribePrefix + k, k, settings);
}
}
});
}
Map<String, Settings> realmsSettings = settings.getGroups(setting("authc.realms"), true);
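
This hunk is part of a wider migration in this merge away from `Settings#getAsMap()`: keys are enumerated with `keySet()`, values are moved with `Settings.Builder#copy`, and subsets are taken with `Settings#filter` (see `IndexAuditTrail` and the LDAP test cases below). A condensed sketch of the renaming idiom used here, with `source` as an assumed `Settings` instance:

[source,java]
--------------------------------------------------
// Sketch: copy all security settings under a tribe prefix without
// materializing values as strings (so it also works for non-string values).
Settings.Builder builder = Settings.builder();
source.keySet().forEach(key -> {
    if (key.startsWith("xpack.security.")) {
        builder.copy("tribe.t1." + key, key, source);  // newKey, sourceKey, source
    }
});
Settings copied = builder.build();
--------------------------------------------------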

View File

@ -840,14 +840,14 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail, Cl
// Filter out forbidden settings:
Settings.Builder builder = Settings.builder();
for (Map.Entry<String, String> entry : newSettings.getAsMap().entrySet()) {
String name = "index." + entry.getKey();
builder.put(newSettings.filter(k -> {
String name = "index." + k;
if (FORBIDDEN_INDEX_SETTING.equals(name)) {
logger.warn("overriding the default [{}} setting is forbidden. ignoring...", name);
continue;
return false;
}
builder.put(name, entry.getValue());
}
return true;
}));
return builder.build();
}

View File

@ -70,8 +70,10 @@ import org.elasticsearch.xpack.sql.plugin.sql.action.SqlTranslateAction;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiFunction;
import java.util.function.Predicate;
@ -368,27 +370,45 @@ public class AuthorizationService extends AbstractComponent {
assert request instanceof BulkShardRequest
: "Action " + action + " requires " + BulkShardRequest.class + " but was " + request.getClass();
if (localIndices.size() != 1) {
throw new IllegalStateException("Action " + action + " should operate on exactly 1 local index but was "
+ localIndices.size());
}
String index = localIndices.iterator().next();
BulkShardRequest bulk = (BulkShardRequest) request;
for (BulkItemRequest item : bulk.items()) {
final String itemAction = getAction(item);
final IndicesAccessControl itemAccessControl = permission.authorize(itemAction, localIndices, metaData,
fieldPermissionsCache);
if (itemAccessControl.isGranted() == false) {
item.abort(index, denial(authentication, itemAction, request, null));
}
}
authorizeBulkItems(authentication, action, (BulkShardRequest) request, permission, metaData, localIndices);
}
grant(authentication, action, originalRequest, null);
}
private String getAction(BulkItemRequest item) {
/**
* Performs authorization checks on the items within a {@link BulkShardRequest}.
* This inspects the {@link BulkItemRequest items} within the request, computes an <em>implied</em> action for each item's
* {@link DocWriteRequest#opType()}, and then checks whether that action is allowed on the targeted index.
Items that fail these checks are {@link BulkItemRequest#abort(String, Exception) aborted}, with an
* {@link #denial(Authentication, String, TransportRequest, Set) access denied} exception.
* Because a shard level request is for exactly 1 index, and there are a small number of possible item
* {@link DocWriteRequest.OpType types}, the number of distinct authorization checks that need to be performed is very small, but the
* results must be cached, to avoid adding a high overhead to each bulk request.
*/
private void authorizeBulkItems(Authentication authentication, String action, BulkShardRequest request, Role permission,
MetaData metaData, Set<String> indices) {
if (indices.size() != 1) {
final String message = "Action " + action + " should operate on exactly 1 local index but was " + indices.size();
assert false : message;
throw new IllegalStateException(message);
}
final String index = indices.iterator().next();
final Map<String, Boolean> actionAuthority = new HashMap<>();
for (BulkItemRequest item : request.items()) {
final String itemAction = getAction(item);
final boolean granted = actionAuthority.computeIfAbsent(itemAction, key -> {
final IndicesAccessControl itemAccessControl = permission.authorize(itemAction, indices, metaData, fieldPermissionsCache);
return itemAccessControl.isGranted();
});
if (granted == false) {
item.abort(index, denial(authentication, itemAction, request, null));
}
}
}
private static String getAction(BulkItemRequest item) {
final DocWriteRequest docWriteRequest = item.request();
switch (docWriteRequest.opType()) {
case INDEX:

View File

@ -44,7 +44,7 @@ public class RestPutRoleAction extends SecurityBaseRestHandler {
@Override
public RestChannelConsumer innerPrepareRequest(RestRequest request, NodeClient client) throws IOException {
PutRoleRequestBuilder requestBuilder = new SecurityClient(client)
.preparePutRole(request.param("name"), request.content(), request.getXContentType())
.preparePutRole(request.param("name"), request.requiredContent(), request.getXContentType())
.setRefreshPolicy(request.param("refresh"));
return channel -> requestBuilder.execute(new RestBuilderListener<PutRoleResponse>(channel) {
@Override

View File

@ -47,7 +47,7 @@ public class RestPutRoleMappingAction extends SecurityBaseRestHandler {
public RestChannelConsumer innerPrepareRequest(RestRequest request, NodeClient client) throws IOException {
final String name = request.param("name");
PutRoleMappingRequestBuilder requestBuilder = new SecurityClient(client)
.preparePutRoleMapping(name, request.content(), request.getXContentType())
.preparePutRoleMapping(name, request.requiredContent(), request.getXContentType())
.setRefreshPolicy(request.param("refresh"));
return channel -> requestBuilder.execute(
new RestBuilderListener<PutRoleMappingResponse>(channel) {

View File

@ -61,7 +61,7 @@ public class RestChangePasswordAction extends SecurityBaseRestHandler implements
final String refresh = request.param("refresh");
return channel ->
new SecurityClient(client)
.prepareChangePassword(username, request.content(), request.getXContentType())
.prepareChangePassword(username, request.requiredContent(), request.getXContentType())
.setRefreshPolicy(refresh)
.execute(new RestBuilderListener<ChangePasswordResponse>(channel) {
@Override

View File

@ -54,7 +54,7 @@ public class RestHasPrivilegesAction extends SecurityBaseRestHandler {
public RestChannelConsumer innerPrepareRequest(RestRequest request, NodeClient client) throws IOException {
final String username = getUsername(request);
HasPrivilegesRequestBuilder requestBuilder = new SecurityClient(client)
.prepareHasPrivileges(username, request.content(), request.getXContentType());
.prepareHasPrivileges(username, request.requiredContent(), request.getXContentType());
return channel -> requestBuilder.execute(new HasPrivilegesRestResponseBuilder(username, channel));
}

View File

@ -48,7 +48,7 @@ public class RestPutUserAction extends SecurityBaseRestHandler implements RestRe
@Override
public RestChannelConsumer innerPrepareRequest(RestRequest request, NodeClient client) throws IOException {
PutUserRequestBuilder requestBuilder = new SecurityClient(client)
.preparePutUser(request.param("username"), request.content(), request.getXContentType())
.preparePutUser(request.param("username"), request.requiredContent(), request.getXContentType())
.setRefreshPolicy(request.param("refresh"));
return channel -> requestBuilder.execute(new RestBuilderListener<PutUserResponse>(channel) {
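
The `content()` to `requiredContent()` switch, repeated across these security REST handlers, makes a missing request body fail fast at the handler boundary instead of surfacing later as a confusing parse error on empty content. A sketch of the contract difference (the exception type is stated as commonly documented for `RestRequest`; treat it as an assumption):

[source,java]
--------------------------------------------------
// content() may return an empty BytesReference that only fails mid-parse.
BytesReference body = request.content();
// requiredContent() rejects a missing body up front
// (throwing an ElasticsearchParseException per the RestRequest contract).
BytesReference required = request.requiredContent();
--------------------------------------------------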

View File

@ -1,6 +1,6 @@
{
"index_patterns": ".monitoring-alerts-${monitoring.template.version}",
"version": 6000026,
"version": 7000001,
"settings": {
"index": {
"number_of_shards": 1,

View File

@ -639,6 +639,19 @@
}
}
}
},
"memory": {
"properties": {
"control_group": {
"type": "keyword"
},
"limit_in_bytes": {
"type": "keyword"
},
"usage_in_bytes": {
"type": "keyword"
}
}
}
}
},

View File

@ -103,7 +103,7 @@ public abstract class AbstractAdLdapRealmTestCase extends SecurityIntegTestCase
roleMappings = realmConfig.selectRoleMappings(ESTestCase::randomBoolean);
useGlobalSSL = randomBoolean();
ESLoggerFactory.getLogger("test").info("running test with realm configuration [{}], with direct group to role mapping [{}]. " +
"Settings [{}]", realmConfig, realmConfig.mapGroupsAsRoles, realmConfig.settings.getAsMap());
"Settings [{}]", realmConfig, realmConfig.mapGroupsAsRoles, realmConfig.settings);
}
@AfterClass
@ -119,11 +119,7 @@ public abstract class AbstractAdLdapRealmTestCase extends SecurityIntegTestCase
if (useGlobalSSL) {
// don't use filter since it returns a prefixed secure setting instead of mock!
Settings settingsToAdd = super.nodeSettings(nodeOrdinal);
for (Map.Entry<String, String> settingsEntry : settingsToAdd.getAsMap().entrySet()) {
if (settingsEntry.getKey().startsWith("xpack.ssl.") == false) {
builder.put(settingsEntry.getKey(), settingsEntry.getValue());
}
}
builder.put(settingsToAdd.filter(k -> k.startsWith("xpack.ssl.") == false), false);
MockSecureSettings mockSecureSettings = (MockSecureSettings) Settings.builder().put(settingsToAdd).getSecureSettings();
if (mockSecureSettings != null) {
MockSecureSettings filteredSecureSettings = new MockSecureSettings();

View File

@ -36,7 +36,7 @@ public class MultipleAdRealmTests extends AbstractAdLdapRealmTestCase {
secondaryRealmConfig = randomFrom(configs);
ESLoggerFactory.getLogger("test")
.info("running test with secondary realm configuration [{}], with direct group to role mapping [{}]. Settings [{}]",
secondaryRealmConfig, secondaryRealmConfig.mapGroupsAsRoles, secondaryRealmConfig.settings.getAsMap());
secondaryRealmConfig, secondaryRealmConfig.mapGroupsAsRoles, secondaryRealmConfig.settings);
// It's easier to test 2 realms when using file based role mapping, and for the purposes of
// this test, there's no need to test native mappings.
@ -51,9 +51,9 @@ public class MultipleAdRealmTests extends AbstractAdLdapRealmTestCase {
Path store = getDataPath(TESTNODE_KEYSTORE);
final List<RoleMappingEntry> secondaryRoleMappings = secondaryRealmConfig.selectRoleMappings(() -> true);
final Settings secondarySettings = super.buildRealmSettings(secondaryRealmConfig, secondaryRoleMappings, store);
secondarySettings.getAsMap().forEach((name, value) -> {
name = name.replace(XPACK_SECURITY_AUTHC_REALMS_EXTERNAL, XPACK_SECURITY_AUTHC_REALMS_EXTERNAL + "2");
builder.put(name, value);
secondarySettings.keySet().forEach(name -> {
String newName = name.replace(XPACK_SECURITY_AUTHC_REALMS_EXTERNAL, XPACK_SECURITY_AUTHC_REALMS_EXTERNAL + "2");
builder.copy(newName, name, secondarySettings);
});
return builder.build();

View File

@ -160,16 +160,17 @@ public abstract class TribeTransportTestCase extends ESIntegTestCase {
assertAcked(cluster2.client().admin().indices().prepareCreate("test2").get());
ensureYellow(internalCluster());
ensureYellow(cluster2);
Map<String,String> asMap = internalCluster().getDefaultSettings().getAsMap();
// Map<String,String> asMap = internalCluster().getDefaultSettings().getAsMap();
Settings.Builder tribe1Defaults = Settings.builder();
Settings.Builder tribe2Defaults = Settings.builder();
for (Map.Entry<String, String> entry : asMap.entrySet()) {
if (entry.getKey().startsWith("path.")) {
continue;
internalCluster().getDefaultSettings().keySet().forEach(k -> {
if (k.startsWith("path.") == false) {
tribe1Defaults.copy(k, internalCluster().getDefaultSettings());
tribe2Defaults.copy(k, internalCluster().getDefaultSettings());
}
tribe1Defaults.put("tribe.t1." + entry.getKey(), entry.getValue());
tribe2Defaults.put("tribe.t2." + entry.getKey(), entry.getValue());
}
});
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.normalizePrefix("tribe.t2.");
// give each tribe it's unicast hosts to connect to
tribe1Defaults.putArray("tribe.t1." + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.getKey(),
getUnicastHosts(internalCluster().client()));

View File

@ -141,7 +141,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
resultProcessor.process(builder.buildTestProcess());
resultProcessor.awaitCompletion();
BucketsQueryBuilder.BucketsQuery bucketsQuery = new BucketsQueryBuilder().includeInterim(true).build();
BucketsQueryBuilder bucketsQuery = new BucketsQueryBuilder().includeInterim(true);
QueryPage<Bucket> persistedBucket = getBucketQueryPage(bucketsQuery);
assertEquals(1, persistedBucket.count());
// Records are not persisted to Elasticsearch as an array within the bucket
@@ -149,7 +149,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
bucket.setRecords(Collections.emptyList());
assertEquals(bucket, persistedBucket.results().get(0));
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().build());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder());
assertResultsAreSame(records, persistedRecords);
QueryPage<Influencer> persistedInfluencers = getInfluencers();
@@ -190,7 +190,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
resultProcessor.process(resultBuilder.buildTestProcess());
resultProcessor.awaitCompletion();
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true).build());
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true));
assertEquals(1, persistedBucket.count());
// Records are not persisted to Elasticsearch as an array within the bucket
// documents, so remove them from the expected bucket before comparing
@@ -200,7 +200,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
QueryPage<Influencer> persistedInfluencers = getInfluencers();
assertEquals(0, persistedInfluencers.count());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().includeInterim(true).build());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().includeInterim(true));
assertEquals(0, persistedRecords.count());
}
@@ -222,14 +222,14 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
resultProcessor.process(resultBuilder.buildTestProcess());
resultProcessor.awaitCompletion();
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true).build());
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true));
assertEquals(1, persistedBucket.count());
// Records are not persisted to Elasticsearch as an array within the bucket
// documents, so remove them from the expected bucket before comparing
finalBucket.setRecords(Collections.emptyList());
assertEquals(finalBucket, persistedBucket.results().get(0));
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().includeInterim(true).build());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().includeInterim(true));
assertResultsAreSame(finalAnomalyRecords, persistedRecords);
}
@@ -246,10 +246,10 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
resultProcessor.process(resultBuilder.buildTestProcess());
resultProcessor.awaitCompletion();
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true).build());
QueryPage<Bucket> persistedBucket = getBucketQueryPage(new BucketsQueryBuilder().includeInterim(true));
assertEquals(1, persistedBucket.count());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().size(200).includeInterim(true).build());
QueryPage<AnomalyRecord> persistedRecords = getRecords(new RecordsQueryBuilder().size(200).includeInterim(true));
List<AnomalyRecord> allRecords = new ArrayList<>(firstSetOfRecords);
allRecords.addAll(secondSetOfRecords);
assertResultsAreSame(allRecords, persistedRecords);
@@ -419,7 +419,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
assertEquals(0, expectedSet.size());
}
private QueryPage<Bucket> getBucketQueryPage(BucketsQueryBuilder.BucketsQuery bucketsQuery) throws Exception {
private QueryPage<Bucket> getBucketQueryPage(BucketsQueryBuilder bucketsQuery) throws Exception {
AtomicReference<Exception> errorHolder = new AtomicReference<>();
AtomicReference<QueryPage<Bucket>> resultHolder = new AtomicReference<>();
CountDownLatch latch = new CountDownLatch(1);
@@ -491,7 +491,7 @@ public class AutodetectResultProcessorIT extends XPackSingleNodeTestCase {
return resultHolder.get();
}
private QueryPage<AnomalyRecord> getRecords(RecordsQueryBuilder.RecordsQuery recordsQuery) throws Exception {
private QueryPage<AnomalyRecord> getRecords(RecordsQueryBuilder recordsQuery) throws Exception {
AtomicReference<Exception> errorHolder = new AtomicReference<>();
AtomicReference<QueryPage<AnomalyRecord>> resultHolder = new AtomicReference<>();
CountDownLatch latch = new CountDownLatch(1);
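The pattern running through this file: helper methods now accept the BucketsQueryBuilder/RecordsQueryBuilder itself, so the trailing .build() calls disappear from call sites. A generic, illustrative sketch of that shape (these stand-in types are not the real ML API):

final class Query {
    final boolean includeInterim;
    Query(boolean includeInterim) { this.includeInterim = includeInterim; }
}

final class QueryBuilderSketch {
    private boolean includeInterim;
    QueryBuilderSketch includeInterim(boolean v) { this.includeInterim = v; return this; }
    Query build() { return new Query(includeInterim); }
}

final class ProviderSketch {
    void search(QueryBuilderSketch qb) {
        Query query = qb.build(); // built once, inside the provider
        // ... execute the search with 'query' ...
    }
}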

View File

@@ -14,9 +14,11 @@ import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.MlMetadata;
import org.elasticsearch.xpack.ml.action.PutJobAction;
import org.elasticsearch.xpack.ml.action.util.QueryPage;
@@ -40,6 +42,7 @@ import static org.hamcrest.Matchers.lessThanOrEqualTo;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class JobManagerTests extends ESTestCase {
@@ -161,6 +164,8 @@ public class JobManagerTests extends ESTestCase {
private JobManager createJobManager() {
Settings settings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()).build();
ClusterSettings clusterSettings = new ClusterSettings(settings, Collections.singleton(MachineLearning.MAX_MODEL_MEMORY_LIMIT));
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
UpdateJobProcessNotifier notifier = mock(UpdateJobProcessNotifier.class);
return new JobManager(settings, jobProvider, clusterService, auditor, client, notifier);
}
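The two added lines matter because JobManager evidently consults clusterService.getClusterSettings(), and a bare Mockito mock would return null there. A minimal sketch of the stubbing pattern (the helper name is made up):

import java.util.Collections;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class ClusterSettingsStubSketch {
    // Hand the mock a real ClusterSettings registry that already knows
    // about the dynamic setting under test.
    static ClusterService stubClusterService(Setting<?> dynamicSetting) {
        ClusterService clusterService = mock(ClusterService.class);
        ClusterSettings clusterSettings =
                new ClusterSettings(Settings.EMPTY, Collections.singleton(dynamicSetting));
        when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
        return clusterService;
    }
}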

View File

@@ -145,7 +145,7 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> builder.validateModelMemoryLimit(new ByteSizeValue(1000L, ByteSizeUnit.MB)));
assertEquals("model_memory_limit [4gb] must be less than the value of the " +
MachineLearning.MAX_MODEL_MEMORY.getKey() + " setting [1000mb]", e.getMessage());
MachineLearning.MAX_MODEL_MEMORY_LIMIT.getKey() + " setting [1000mb]", e.getMessage());
builder.validateModelMemoryLimit(new ByteSizeValue(8192L, ByteSizeUnit.MB));
}

View File

@@ -1,99 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.persistence;
import org.elasticsearch.test.ESTestCase;
public class BucketsQueryBuilderTests extends ESTestCase {
public void testDefaultBuild() throws Exception {
BucketsQueryBuilder.BucketsQuery query = new BucketsQueryBuilder().build();
assertEquals(0, query.getFrom());
assertEquals(BucketsQueryBuilder.DEFAULT_SIZE, query.getSize());
assertEquals(false, query.isIncludeInterim());
assertEquals(false, query.isExpand());
assertEquals(0.0, query.getAnomalyScoreFilter(), 0.0001);
assertNull(query.getStart());
assertNull(query.getEnd());
assertEquals("timestamp", query.getSortField());
assertFalse(query.isSortDescending());
}
public void testAll() {
BucketsQueryBuilder.BucketsQuery query = new BucketsQueryBuilder()
.from(20)
.size(40)
.includeInterim(true)
.expand(true)
.anomalyScoreThreshold(50.0d)
.start("1000")
.end("2000")
.partitionValue("foo")
.sortField("anomaly_score")
.sortDescending(true)
.build();
assertEquals(20, query.getFrom());
assertEquals(40, query.getSize());
assertEquals(true, query.isIncludeInterim());
assertEquals(true, query.isExpand());
assertEquals(50.0d, query.getAnomalyScoreFilter(), 0.00001);
assertEquals("1000", query.getStart());
assertEquals("2000", query.getEnd());
assertEquals("foo", query.getPartitionValue());
assertEquals("anomaly_score", query.getSortField());
assertTrue(query.isSortDescending());
}
public void testEqualsHash() {
BucketsQueryBuilder query = new BucketsQueryBuilder()
.from(20)
.size(40)
.includeInterim(true)
.expand(true)
.anomalyScoreThreshold(50.0d)
.start("1000")
.end("2000")
.partitionValue("foo");
BucketsQueryBuilder query2 = new BucketsQueryBuilder()
.from(20)
.size(40)
.includeInterim(true)
.expand(true)
.anomalyScoreThreshold(50.0d)
.start("1000")
.end("2000")
.partitionValue("foo");
assertEquals(query.build(), query2.build());
assertEquals(query.build().hashCode(), query2.build().hashCode());
query2.clear();
assertFalse(query.build().equals(query2.build()));
query2.from(20)
.size(40)
.includeInterim(true)
.expand(true)
.anomalyScoreThreshold(50.0d)
.start("1000")
.end("2000")
.partitionValue("foo");
assertEquals(query.build(), query2.build());
query2.clear();
query2.from(20)
.size(40)
.includeInterim(true)
.expand(true)
.anomalyScoreThreshold(50.1d)
.start("1000")
.end("2000")
.partitionValue("foo");
assertFalse(query.build().equals(query2.build()));
}
}

View File

@@ -72,7 +72,6 @@ import static org.mockito.Mockito.when;
public class JobProviderTests extends ESTestCase {
private static final String CLUSTER_NAME = "myCluster";
private static final String JOB_ID = "foo";
@SuppressWarnings("unchecked")
public void testCreateJobResultsIndex() {
@@ -253,7 +252,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<Bucket>[] holder = new QueryPage[1];
provider.buckets(jobId, bq.build(), r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
provider.buckets(jobId, bq, r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
QueryPage<Bucket> buckets = holder[0];
assertEquals(1L, buckets.count());
QueryBuilder query = queryBuilderHolder[0];
@@ -288,7 +287,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<Bucket>[] holder = new QueryPage[1];
provider.buckets(jobId, bq.build(), r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
provider.buckets(jobId, bq, r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
QueryPage<Bucket> buckets = holder[0];
assertEquals(1L, buckets.count());
QueryBuilder query = queryBuilderHolder[0];
@@ -325,7 +324,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<Bucket>[] holder = new QueryPage[1];
provider.buckets(jobId, bq.build(), r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
provider.buckets(jobId, bq, r -> holder[0] = r, e -> {throw new RuntimeException(e);}, client);
QueryPage<Bucket> buckets = holder[0];
assertEquals(1L, buckets.count());
QueryBuilder query = queryBuilderHolder[0];
@@ -334,7 +333,7 @@ public class JobProviderTests extends ESTestCase {
assertFalse(queryString.matches("(?s).*is_interim.*"));
}
public void testBucket_NoBucketNoExpandNoInterim()
public void testBucket_NoBucketNoExpand()
throws InterruptedException, ExecutionException, IOException {
String jobId = "TestJobIdentification";
Long timestamp = 98765432123456789L;
@@ -348,11 +347,11 @@ public class JobProviderTests extends ESTestCase {
BucketsQueryBuilder bq = new BucketsQueryBuilder();
bq.timestamp(Long.toString(timestamp));
Exception[] holder = new Exception[1];
provider.buckets(jobId, bq.build(), q -> {}, e -> holder[0] = e, client);
provider.buckets(jobId, bq, q -> {}, e -> holder[0] = e, client);
assertEquals(ResourceNotFoundException.class, holder[0].getClass());
}
public void testBucket_OneBucketNoExpandNoInterim()
public void testBucket_OneBucketNoExpand()
throws InterruptedException, ExecutionException, IOException {
String jobId = "TestJobIdentification";
Date now = new Date();
@@ -373,37 +372,12 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<Bucket>[] bucketHolder = new QueryPage[1];
provider.buckets(jobId, bq.build(), q -> bucketHolder[0] = q, e -> {}, client);
provider.buckets(jobId, bq, q -> bucketHolder[0] = q, e -> {}, client);
assertThat(bucketHolder[0].count(), equalTo(1L));
Bucket b = bucketHolder[0].results().get(0);
assertEquals(now, b.getTimestamp());
}
public void testBucket_OneBucketNoExpandInterim()
throws InterruptedException, ExecutionException, IOException {
String jobId = "TestJobIdentification";
Date now = new Date();
List<Map<String, Object>> source = new ArrayList<>();
Map<String, Object> map = new HashMap<>();
map.put("job_id", "foo");
map.put("timestamp", now.getTime());
map.put("bucket_span", 22);
map.put("is_interim", true);
source.add(map);
SearchResponse response = createSearchResponse(source);
Client client = getMockedClient(queryBuilder -> {}, response);
JobProvider provider = createProvider(client);
BucketsQueryBuilder bq = new BucketsQueryBuilder();
bq.timestamp(Long.toString(now.getTime()));
Exception[] holder = new Exception[1];
provider.buckets(jobId, bq.build(), q -> {}, e -> holder[0] = e, client);
assertEquals(ResourceNotFoundException.class, holder[0].getClass());
}
public void testRecords() throws InterruptedException, ExecutionException, IOException {
String jobId = "TestJobIdentification";
Date now = new Date();
@@ -439,7 +413,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<AnomalyRecord>[] holder = new QueryPage[1];
provider.records(jobId, rqb.build(), page -> holder[0] = page, RuntimeException::new, client);
provider.records(jobId, rqb, page -> holder[0] = page, RuntimeException::new, client);
QueryPage<AnomalyRecord> recordPage = holder[0];
assertEquals(2L, recordPage.count());
List<AnomalyRecord> records = recordPage.results();
@@ -493,7 +467,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<AnomalyRecord>[] holder = new QueryPage[1];
provider.records(jobId, rqb.build(), page -> holder[0] = page, RuntimeException::new, client);
provider.records(jobId, rqb, page -> holder[0] = page, RuntimeException::new, client);
QueryPage<AnomalyRecord> recordPage = holder[0];
assertEquals(2L, recordPage.count());
List<AnomalyRecord> records = recordPage.results();
@@ -538,7 +512,7 @@ public class JobProviderTests extends ESTestCase {
@SuppressWarnings({"unchecked", "rawtypes"})
QueryPage<AnomalyRecord>[] holder = new QueryPage[1];
provider.bucketRecords(jobId, bucket, from, size, true, sortfield, true, "", page -> holder[0] = page, RuntimeException::new,
provider.bucketRecords(jobId, bucket, from, size, true, sortfield, true, page -> holder[0] = page, RuntimeException::new,
client);
QueryPage<AnomalyRecord> recordPage = holder[0];
assertEquals(2L, recordPage.count());
@@ -574,8 +548,7 @@ public class JobProviderTests extends ESTestCase {
JobProvider provider = createProvider(client);
Integer[] holder = new Integer[1];
provider.expandBucket(jobId, false, bucket, null, records -> holder[0] = records, RuntimeException::new,
client);
provider.expandBucket(jobId, false, bucket, records -> holder[0] = records, RuntimeException::new, client);
int records = holder[0];
assertEquals(400L, records);
}

View File

@@ -188,6 +188,11 @@ public class NodeStatsMonitoringDocTests extends BaseFilteredMonitoringDocTestCa
+ "\"number_of_times_throttled\":45,"
+ "\"time_throttled_nanos\":46"
+ "}"
+ "},"
+ "\"memory\":{"
+ "\"control_group\":\"_memory_ctrl_group\","
+ "\"limit_in_bytes\":\"2000000000\","
+ "\"usage_in_bytes\":\"1000000000\""
+ "}"
+ "}"
+ "},"
@@ -327,7 +332,8 @@ public class NodeStatsMonitoringDocTests extends BaseFilteredMonitoringDocTestCa
// Os
final OsStats.Cpu osCpu = new OsStats.Cpu((short) no, new double[]{++iota, ++iota, ++iota});
final OsStats.Cgroup.CpuStat osCpuStat = new OsStats.Cgroup.CpuStat(++iota, ++iota, ++iota);
final OsStats.Cgroup osCgroup = new OsStats.Cgroup("_cpu_acct_ctrl_group", ++iota, "_cpu_ctrl_group", ++iota, ++iota, osCpuStat);
final OsStats.Cgroup osCgroup = new OsStats.Cgroup("_cpu_acct_ctrl_group", ++iota, "_cpu_ctrl_group", ++iota, ++iota, osCpuStat,
"_memory_ctrl_group", "2000000000", "1000000000");
final OsStats.Mem osMem = new OsStats.Mem(no, no);
final OsStats.Swap osSwap = new OsStats.Swap(no, no);

View File

@@ -193,10 +193,10 @@ public class ExportersTests extends ESTestCase {
exporters.start();
assertThat(settingsHolder.get(), notNullValue());
Map<String, String> settings = settingsHolder.get().getAsMap();
Settings settings = settingsHolder.get();
assertThat(settings.size(), is(2));
assertThat(settings, hasEntry("_name0.type", "_type"));
assertThat(settings, hasEntry("_name1.type", "_type"));
assertEquals(settings.get("_name0.type"), "_type");
assertEquals(settings.get("_name1.type"), "_type");
Settings update = Settings.builder()
.put("xpack.monitoring.exporters._name0.foo", "bar")
@@ -204,12 +204,12 @@ public class ExportersTests extends ESTestCase {
.build();
clusterSettings.applySettings(update);
assertThat(settingsHolder.get(), notNullValue());
settings = settingsHolder.get().getAsMap();
settings = settingsHolder.get();
assertThat(settings.size(), is(4));
assertThat(settings, hasEntry("_name0.type", "_type"));
assertThat(settings, hasEntry("_name0.foo", "bar"));
assertThat(settings, hasEntry("_name1.type", "_type"));
assertThat(settings, hasEntry("_name1.foo", "bar"));
assertEquals(settings.get("_name0.type"), "_type");
assertEquals(settings.get("_name0.foo"), "bar");
assertEquals(settings.get("_name1.type"), "_type");
assertEquals(settings.get("_name1.foo"), "bar");
}
public void testExporterBlocksOnClusterState() {

View File

@@ -5,6 +5,7 @@
*/
package org.elasticsearch.xpack.monitoring.exporter.http;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
@@ -72,6 +73,7 @@ import static org.hamcrest.Matchers.notNullValue;
@ESIntegTestCase.ClusterScope(scope = Scope.TEST,
numDataNodes = 1, numClientNodes = 0, transportClientRatio = 0.0, supportsDedicatedMasters = false)
@AwaitsFix(bugUrl = "https://github.com/elastic/x-pack-elasticsearch/issues/2671")
public class HttpExporterIT extends MonitoringIntegTestCase {
private final boolean templatesExistsAlready = randomBoolean();

View File

@@ -22,7 +22,6 @@ import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.CountDown;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.SecuritySettingsSource;
@@ -41,7 +40,6 @@ import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.security.authc.file.FileRealm;
import org.elasticsearch.xpack.security.authc.support.Hasher;
import org.elasticsearch.xpack.watcher.WatcherLifeCycleService;
import org.hamcrest.Matcher;
import org.junit.After;
import org.junit.Before;
@@ -249,23 +247,10 @@ public abstract class MonitoringIntegTestCase extends ESIntegTestCase {
assertAcked(client().admin().indices().prepareDelete(ALL_MONITORING_INDICES));
}
protected void awaitMonitoringDocsCountOnPrimary(Matcher<Long> matcher, String... types) throws Exception {
assertBusy(() -> assertMonitoringDocsCountOnPrimary(matcher, types), 30, TimeUnit.SECONDS);
}
protected void ensureMonitoringIndicesYellow() {
ensureYellow(".monitoring-es-*");
}
protected void assertMonitoringDocsCountOnPrimary(Matcher<Long> matcher, String... types) {
flushAndRefresh(ALL_MONITORING_INDICES);
long count = client().prepareSearch(ALL_MONITORING_INDICES).setSize(0)
.setQuery(QueryBuilders.termsQuery("type", types))
.setPreference("_primary").get().getHits().getTotalHits();
logger.trace("--> searched for [{}] documents on primary, found [{}]", Strings.arrayToCommaDelimitedString(types), count);
assertThat(count, matcher);
}
protected List<Tuple<String, String>> monitoringTemplates() {
return Arrays.stream(MonitoringTemplateUtils.TEMPLATE_IDS)
.map(id -> new Tuple<>(MonitoringTemplateUtils.templateName(id), MonitoringTemplateUtils.loadTemplate(id)))

View File

@@ -154,8 +154,8 @@ public class PagerDutyAccountsTests extends ESTestCase {
private void addAccountSettings(String name, Settings.Builder builder) {
builder.put("xpack.notification.pagerduty.account." + name + ".service_api_key", randomAlphaOfLength(50));
Settings defaults = SlackMessageDefaultsTests.randomSettings();
for (Map.Entry<String, String> setting : defaults.getAsMap().entrySet()) {
builder.put("xpack.notification.pagerduty.message_defaults." + setting.getKey(), setting.getValue());
for (String setting : defaults.keySet()) {
builder.copy("xpack.notification.pagerduty.message_defaults." + setting, setting, defaults);
}
}
}

View File

@@ -138,8 +138,8 @@ public class SlackAccountsTests extends ESTestCase {
private void addAccountSettings(String name, Settings.Builder builder) {
builder.put("xpack.notification.slack.account." + name + ".url", "https://hooks.slack.com/services/" + randomAlphaOfLength(50));
Settings defaults = SlackMessageDefaultsTests.randomSettings();
for (Map.Entry<String, String> setting : defaults.getAsMap().entrySet()) {
builder.put("xpack.notification.slack.message_defaults." + setting.getKey(), setting.getValue());
for (String setting : defaults.keySet()) {
builder.copy("xpack.notification.slack.message_defaults." + setting, setting, defaults);
}
}
}

View File

@@ -238,10 +238,10 @@ public class SecurityTribeIT extends NativeRealmIntegTestCase {
}
return true;
});
for (Map.Entry<String, String> entry : tribeSettings.getAsMap().entrySet()) {
tribe1Defaults.put("tribe.t1." + entry.getKey(), entry.getValue());
tribe2Defaults.put("tribe.t2." + entry.getKey(), entry.getValue());
}
tribe1Defaults.put(tribeSettings, false);
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.put(tribeSettings, false);
tribe2Defaults.normalizePrefix("tribe.t2.");
// TODO: rethink how these settings are generated for tribes once we support more than just string settings...
MockSecureSettings secureSettingsTemplate =
(MockSecureSettings) Settings.builder().put(settingsTemplate).getSecureSettings();

View File

@@ -291,7 +291,7 @@ public class IndexAuditTrailTests extends SecurityIntegTestCase {
}
Settings settings = builder.put(settings(rollover, includes, excludes)).build();
logger.info("--> settings: [{}]", settings.getAsMap().toString());
logger.info("--> settings: [{}]", settings);
DiscoveryNode localNode = mock(DiscoveryNode.class);
when(localNode.getHostAddress()).thenReturn(remoteAddress.getAddress());
when(localNode.getHostName()).thenReturn(remoteAddress.getAddress());

View File

@@ -397,9 +397,10 @@ public class LdapUserSearchSessionFactoryTests extends LdapTestCase {
.build();
Settings.Builder builder = Settings.builder()
.put(globalSettings);
for (Map.Entry<String, String> entry : settings.getAsMap().entrySet()) {
builder.put("xpack.security.authc.realms.ldap." + entry.getKey(), entry.getValue());
}
settings.keySet().forEach(k -> {
builder.copy("xpack.security.authc.realms.ldap." + k, k, settings);
});
Settings fullSettings = builder.build();
sslService = new SSLService(fullSettings, new Environment(fullSettings));
RealmConfig config = new RealmConfig("ad-as-ldap-test", settings, globalSettings, new Environment(globalSettings), new ThreadContext(globalSettings));

View File

@@ -144,16 +144,10 @@ public class PkiAuthenticationTests extends SecurityIntegTestCase {
private TransportClient createTransportClient(Settings additionalSettings) {
Settings clientSettings = transportClientSettings();
if (additionalSettings.getByPrefix("xpack.ssl.").isEmpty() == false) {
Settings.Builder builder = Settings.builder();
for (Entry<String, String> entry : clientSettings.getAsMap().entrySet()) {
if (entry.getKey().startsWith("xpack.ssl.") == false) {
builder.put(entry.getKey(), entry.getValue());
}
}
clientSettings = builder.build();
clientSettings = clientSettings.filter(k -> k.startsWith("xpack.ssl.") == false);
}
Settings.Builder builder = Settings.builder().put(clientSettings)
Settings.Builder builder = Settings.builder().put(clientSettings, false)
.put(additionalSettings)
.put("cluster.name", internalCluster().getClusterName());
builder.remove(Security.USER_SETTING.getKey());

View File

@@ -81,8 +81,8 @@ public class IpFilteringUpdateTests extends SecurityIntegTestCase {
ClusterState clusterState = client().admin().cluster().prepareState().get().getState();
assertThat(clusterState.metaData().settings().get("xpack.security.transport.filter.allow"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("xpack.security.transport.filter.deny"), is("127.0.0.8"));
assertThat(clusterState.metaData().settings().get("xpack.security.http.filter.allow.0"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("xpack.security.http.filter.deny.0"), is("127.0.0.8"));
assertArrayEquals(new String[] {"127.0.0.1"}, clusterState.metaData().settings().getAsArray("xpack.security.http.filter.allow"));
assertArrayEquals(new String[] {"127.0.0.8"}, clusterState.metaData().settings().getAsArray("xpack.security.http.filter.deny"));
assertThat(clusterState.metaData().settings().get("transport.profiles.client.xpack.security.filter.allow"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("transport.profiles.client.xpack.security.filter.deny"), is("127.0.0.8"));
@@ -99,8 +99,8 @@ public class IpFilteringUpdateTests extends SecurityIntegTestCase {
clusterState = client().admin().cluster().prepareState().get().getState();
assertThat(clusterState.metaData().settings().get("xpack.security.transport.filter.allow"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("xpack.security.transport.filter.deny"), is("127.0.0.8"));
assertThat(clusterState.metaData().settings().get("xpack.security.http.filter.allow.0"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("xpack.security.http.filter.deny.0"), is("127.0.0.8"));
assertArrayEquals(new String[] {"127.0.0.1"}, clusterState.metaData().settings().getAsArray("xpack.security.http.filter.allow"));
assertArrayEquals(new String[] {"127.0.0.8"}, clusterState.metaData().settings().getAsArray("xpack.security.http.filter.deny"));
assertThat(clusterState.metaData().settings().get("transport.profiles.client.xpack.security.filter.allow"), is("127.0.0.1"));
assertThat(clusterState.metaData().settings().get("transport.profiles.client.xpack.security.filter.deny"), is("127.0.0.8"));
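Both assertion changes in this file appear to reflect the same underlying change: array-valued settings are no longer surfaced as flattened "key.0" entries, so the test reads them back with getAsArray(...). A compact sketch (the setting value is illustrative):

import java.util.Arrays;
import org.elasticsearch.common.settings.Settings;

public class ArraySettingSketch {
    public static void main(String[] args) {
        Settings s = Settings.builder()
                .putArray("xpack.security.http.filter.allow", "127.0.0.1")
                .build();
        // The flattened form no longer resolves:
        System.out.println(s.get("xpack.security.http.filter.allow.0")); // null
        System.out.println(Arrays.toString(s.getAsArray("xpack.security.http.filter.allow")));
        // prints [127.0.0.1]
    }
}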

View File

@@ -69,15 +69,7 @@ public class IPHostnameVerificationTests extends SecurityIntegTestCase {
@Override
protected Settings transportClientSettings() {
Settings clientSettings = super.transportClientSettings();
Settings.Builder builder = Settings.builder();
for (Entry<String, String> entry : clientSettings.getAsMap().entrySet()) {
if (entry.getKey().startsWith("xpack.ssl.") == false) {
builder.put(entry.getKey(), entry.getValue());
}
}
clientSettings = builder.build();
return Settings.builder().put(clientSettings)
return Settings.builder().put(clientSettings.filter(k -> k.startsWith("xpack.ssl.") == false))
.put("xpack.ssl.verification_mode", "certificate")
.put("xpack.ssl.keystore.path", keystore.toAbsolutePath())
.put("xpack.ssl.keystore.password", "testnode-ip-only")

View File

@@ -35,11 +35,7 @@ public class SslHostnameVerificationTests extends SecurityIntegTestCase {
protected Settings nodeSettings(int nodeOrdinal) {
Settings settings = super.nodeSettings(nodeOrdinal);
Settings.Builder settingsBuilder = Settings.builder();
for (Entry<String, String> entry : settings.getAsMap().entrySet()) {
if (entry.getKey().startsWith("xpack.ssl.") == false) {
settingsBuilder.put(entry.getKey(), entry.getValue());
}
}
settingsBuilder.put(settings.filter(k -> k.startsWith("xpack.ssl.") == false), false);
Path keystore;
try {
/*
@@ -71,12 +67,7 @@ public class SslHostnameVerificationTests extends SecurityIntegTestCase {
Settings settings = super.transportClientSettings();
// remove all ssl settings
Settings.Builder builder = Settings.builder();
for (Entry<String, String> entry : settings.getAsMap().entrySet()) {
String key = entry.getKey();
if (key.startsWith("xpack.ssl.") == false) {
builder.put(key, entry.getValue());
}
}
builder.put(settings.filter(k -> k.startsWith("xpack.ssl.") == false), false);
builder.put("xpack.ssl.verification_mode", "certificate")
.put("xpack.ssl.keystore.path", keystore.toAbsolutePath()) // settings for client keystore

View File

@@ -32,6 +32,8 @@ import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import static java.util.Collections.emptyMap;
import static java.util.Collections.singletonMap;
@@ -223,11 +225,11 @@ public class ExecutableJiraActionTests extends ESTestCase {
}
public void testExecutionFieldsStringArrays() throws Exception {
Map<String, String> defaults = Settings.builder()
Settings build = Settings.builder()
.putArray("k0", "a", "b", "c")
.put("k1", "v1")
.build()
.getAsMap();
.build();
Map<String, String> defaults = build.keySet().stream().collect(Collectors.toMap(Function.identity(), k -> build.get(k)));
Map<String, Object> fields = new HashMap<>();
fields.put("k2", "v2");
@@ -241,11 +243,10 @@ public class ExecutableJiraActionTests extends ESTestCase {
}
public void testExecutionFieldsStringArraysNotOverridden() throws Exception {
Map<String, String> defaults = Settings.builder()
Settings build = Settings.builder()
.putArray("k0", "a", "b", "c")
.build()
.getAsMap();
.build();
Map<String, String> defaults = build.keySet().stream().collect(Collectors.toMap(Function.identity(), k -> build.get(k)));
Map<String, Object> fields = new HashMap<>();
fields.put("k1", "v1");
fields.put("k0", new String[]{"d", "e", "f"}); // should not be overridden byt the defaults

View File

@@ -72,7 +72,7 @@ import static java.util.Arrays.asList;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
@@ -152,7 +152,7 @@ public class ExecutionServiceTests extends ESTestCase {
Condition.Result conditionResult = AlwaysCondition.RESULT_INSTANCE;
Condition condition = mock(Condition.class);
// introduce a very short sleep time which we can use to check if the duration in milliseconds is correctly created
long randomConditionDurationMs = randomIntBetween(1, 10);
long randomConditionDurationMs = randomIntBetween(5, 10);
when(condition.execute(any(WatchExecutionContext.class))).then(invocationOnMock -> {
Thread.sleep(randomConditionDurationMs);
return conditionResult;
@@ -227,8 +227,9 @@ public class ExecutionServiceTests extends ESTestCase {
verify(watchTransform, times(1)).execute(context, payload);
verify(action, times(1)).execute("_action", context, payload);
// test execution duration
assertThat(watchRecord.result().executionDurationMs(), is(greaterThanOrEqualTo(randomConditionDurationMs)));
// test execution duration, make sure it is set at all
// no exact duration check here, as different platforms handle sleep differently, so this might not be exact
assertThat(watchRecord.result().executionDurationMs(), is(greaterThan(0L)));
assertThat(watchRecord.result().executionTime(), is(notNullValue()));
// test stats

View File

@@ -6,101 +6,69 @@
package org.elasticsearch.xpack.watcher.history;
import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.script.MockScriptPlugin;
import org.elasticsearch.test.junit.annotations.TestLogging;
import org.elasticsearch.xpack.watcher.condition.AlwaysCondition;
import org.elasticsearch.xpack.watcher.execution.ExecutionState;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.xpack.watcher.test.AbstractWatcherIntegrationTestCase;
import org.elasticsearch.xpack.watcher.transport.actions.put.PutWatchResponse;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;
import java.util.stream.Collectors;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.elasticsearch.xpack.watcher.actions.ActionBuilders.loggingAction;
import static org.elasticsearch.xpack.watcher.client.WatchSourceBuilders.watchBuilder;
import static org.elasticsearch.xpack.watcher.input.InputBuilders.simpleInput;
import static org.elasticsearch.xpack.watcher.transform.TransformBuilders.scriptTransform;
import static org.elasticsearch.xpack.watcher.test.WatcherTestUtils.templateRequest;
import static org.elasticsearch.xpack.watcher.transform.TransformBuilders.searchTransform;
import static org.elasticsearch.xpack.watcher.trigger.TriggerBuilders.schedule;
import static org.elasticsearch.xpack.watcher.trigger.schedule.Schedules.interval;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.is;
/**
* This test makes sure that the http host and path fields in the watch_record action result are
* not analyzed so they can be used in aggregations
*/
@TestLogging("org.elasticsearch.xpack.watcher:DEBUG,org.elasticsearch.xpack.watcher.WatcherIndexingListener:TRACE")
public class HistoryTemplateTransformMappingsTests extends AbstractWatcherIntegrationTestCase {
@Override
protected List<Class<? extends Plugin>> pluginTypes() {
List<Class<? extends Plugin>> types = super.pluginTypes();
types.add(CustomScriptPlugin.class);
return types;
}
public static class CustomScriptPlugin extends MockScriptPlugin {
@Override
@SuppressWarnings("unchecked")
protected Map<String, Function<Map<String, Object>, Object>> pluginScripts() {
Map<String, Function<Map<String, Object>, Object>> scripts = new HashMap<>();
scripts.put("return [ 'key' : 'value1' ];", vars -> singletonMap("key", "value1"));
scripts.put("return [ 'key' : 'value2' ];", vars -> singletonMap("key", "value2"));
scripts.put("return [ 'key' : [ 'key1' : 'value1' ] ];", vars -> singletonMap("key", singletonMap("key1", "value1")));
scripts.put("return [ 'key' : [ 'key1' : 'value2' ] ];", vars -> singletonMap("key", singletonMap("key1", "value2")));
return scripts;
}
@Override
public String pluginScriptLang() {
return WATCHER_LANG;
}
}
@Override
protected boolean timeWarped() {
return true; // just to have better control over the triggers
}
public void testTransformFields() throws Exception {
PutWatchResponse putWatchResponse = watcherClient().preparePutWatch("_id1").setSource(watchBuilder()
assertAcked(client().admin().indices().prepareCreate("idx").addMapping("doc",
jsonBuilder().startObject()
.startObject("properties")
.startObject("foo")
.field("type", "object")
.field("enabled", false)
.endObject()
.endObject()
.endObject()));
client().prepareBulk().setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
.add(client().prepareIndex("idx", "doc", "1")
.setSource(jsonBuilder().startObject().field("name", "first").field("foo", "bar").endObject()))
.add(client().prepareIndex("idx", "doc", "2")
.setSource(jsonBuilder().startObject().field("name", "second")
.startObject("foo").field("what", "ever").endObject().endObject()))
.get();
watcherClient().preparePutWatch("_first").setSource(watchBuilder()
.trigger(schedule(interval("5s")))
.input(simpleInput())
.condition(AlwaysCondition.INSTANCE)
.transform(scriptTransform("return [ 'key' : 'value1' ];"))
.addAction("logger", scriptTransform("return [ 'key' : 'value2' ];"), loggingAction("indexed")))
.transform(searchTransform(templateRequest(searchSource().query(QueryBuilders.termQuery("name", "first")), "idx")))
.addAction("logger",
searchTransform(templateRequest(searchSource().query(QueryBuilders.termQuery("name", "first")), "idx")),
loggingAction("indexed")))
.get();
assertThat(putWatchResponse.isCreated(), is(true));
timeWarp().trigger("_id1");
// adding another watch which with a transform that should conflict with the preview watch. Since the
// mapping for the transform construct is disabled, there should be nor problems.
putWatchResponse = watcherClient().preparePutWatch("_id2").setSource(watchBuilder()
// execute another watch with a transform that should conflict with the previous watch. Since the
// mapping for the transform construct is disabled, there should be no problems.
watcherClient().preparePutWatch("_second").setSource(watchBuilder()
.trigger(schedule(interval("5s")))
.input(simpleInput())
.condition(AlwaysCondition.INSTANCE)
.transform(scriptTransform("return [ 'key' : [ 'key1' : 'value1' ] ];"))
.addAction("logger", scriptTransform("return [ 'key' : [ 'key1' : 'value2' ] ];"), loggingAction("indexed")))
.transform(searchTransform(templateRequest(searchSource().query(QueryBuilders.termQuery("name", "second")), "idx")))
.addAction("logger",
searchTransform(templateRequest(searchSource().query(QueryBuilders.termQuery("name", "second")), "idx")),
loggingAction("indexed")))
.get();
assertThat(putWatchResponse.isCreated(), is(true));
timeWarp().trigger("_id2");
flush();
refresh();
assertWatchWithMinimumActionsCount("_id1", ExecutionState.EXECUTED, 1);
assertWatchWithMinimumActionsCount("_id2", ExecutionState.EXECUTED, 1);
refresh();
watcherClient().prepareExecuteWatch("_first").setRecordExecution(true).get();
watcherClient().prepareExecuteWatch("_second").setRecordExecution(true).get();
assertBusy(() -> {
GetFieldMappingsResponse response = client().admin().indices()

View File

@@ -13,6 +13,7 @@
},
"snapshot_id": {
"type": "string",
"required": true,
"description": "The ID of the snapshot to revert to"
}
},

View File

@@ -941,3 +941,76 @@
}
}
---
"Test max model memory limit":
- do:
headers:
Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser
cluster.put_settings:
body:
transient:
xpack.ml.max_model_memory_limit: "9g"
flat_settings: true
- match: {transient: {xpack.ml.max_model_memory_limit: "9g"}}
- do:
xpack.ml.put_job:
job_id: job-model-memory-limit-below-global-max
body: >
{
"analysis_config" : {
"detectors" :[{"function":"count"}]
},
"data_description" : {
},
"analysis_limits": {
"model_memory_limit": "8g"
}
}
- match: { job_id: "job-model-memory-limit-below-global-max" }
- match: { analysis_limits.model_memory_limit: "8192mb" }
- do:
catch: /model_memory_limit \[10gb\] must be less than the value of the xpack.ml.max_model_memory_limit setting \[9gb\]/
xpack.ml.put_job:
job_id: job-model-memory-limit-above-global-max
body: >
{
"analysis_config" : {
"detectors" :[{"function":"count"}]
},
"data_description" : {
},
"analysis_limits": {
"model_memory_limit": "10g"
}
}
- do:
headers:
Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser
cluster.put_settings:
body:
transient:
xpack.ml.max_model_memory_limit: null
flat_settings: true
- match: {transient: {}}
- do:
xpack.ml.put_job:
job_id: job-model-memory-limit-above-removed-global-max
body: >
{
"analysis_config" : {
"detectors" :[{"function":"count"}]
},
"data_description" : {
},
"analysis_limits": {
"model_memory_limit": "10g"
}
}
- match: { job_id: "job-model-memory-limit-above-removed-global-max" }
- match: { analysis_limits.model_memory_limit: "10240mb" }

View File

@@ -170,6 +170,18 @@ setup:
- match: { buckets.0.job_id: jobs-get-result-buckets }
- match: { buckets.0.result_type: bucket}
---
"Test result single bucket api with empty body":
- do:
xpack.ml.get_buckets:
job_id: "jobs-get-result-buckets"
timestamp: "2016-06-01T00:00:00Z"
body: {}
- match: { buckets.0.timestamp: 1464739200000}
- match: { buckets.0.job_id: jobs-get-result-buckets }
- match: { buckets.0.result_type: bucket}
---
"Test mutually-exclusive params":
- do:

View File

@@ -136,6 +136,10 @@ subprojects {
clusterName = 'full-cluster-restart'
setupCommand 'setupTestUser', 'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser'
waitCondition = waitWithAuth
// some tests rely on the translog not being flushed
setting 'indices.memory.shard_inactive_time', '20m'
setting 'xpack.security.transport.ssl.enabled', 'true'
setting 'xpack.ssl.keystore.path', 'testnode.jks'
setting 'xpack.ssl.keystore.password', 'testnode'
@@ -179,6 +183,10 @@ subprojects {
cleanShared = false // We want to keep snapshots made by the old cluster!
setupCommand 'setupTestUser', 'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser'
waitCondition = waitWithAuth
// some tests rely on the translog not being flushed
setting 'indices.memory.shard_inactive_time', '20m'
setting 'xpack.ssl.keystore.path', 'testnode.jks'
keystoreSetting 'xpack.ssl.keystore.secure_password', 'testnode'
dependsOn copyTestNodeKeystore

View File

@@ -10,14 +10,12 @@ import org.apache.http.entity.StringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.utils.DomainSplitFunction;
import org.joda.time.DateTime;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
@@ -27,7 +25,6 @@ import java.util.regex.Matcher;
import java.util.regex.Pattern;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.equalTo;
public class PainlessDomainSplitIT extends ESRestTestCase {
@@ -181,21 +178,6 @@ public class PainlessDomainSplitIT extends ESRestTestCase {
tests.add(new TestConfiguration(null, "shishi.xn--fiqs8s","shishi.xn--fiqs8s"));
}
private void assertOK(Response response) {
assertThat(response.getStatusLine().getStatusCode(), anyOf(equalTo(200), equalTo(201)));
}
private void createIndex(String name, Settings settings) throws IOException {
assertOK(client().performRequest("PUT", name, Collections.emptyMap(),
new StringEntity("{ \"settings\": " + Strings.toString(settings) + " }", ContentType.APPLICATION_JSON)));
}
private void createIndex(String name, Settings settings, String mapping) throws IOException {
assertOK(client().performRequest("PUT", name, Collections.emptyMap(),
new StringEntity("{ \"settings\": " + Strings.toString(settings)
+ ", \"mappings\" : {" + mapping + "} }", ContentType.APPLICATION_JSON)));
}
public void testIsolated() throws Exception {
Settings.Builder settings = Settings.builder()
.put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1)

View File

@@ -69,9 +69,7 @@ public class OpenLdapUserSearchSessionFactoryTests extends ESTestCase {
.build(), globalSettings, new Environment(globalSettings), new ThreadContext(globalSettings));
Settings.Builder builder = Settings.builder()
.put(globalSettings);
for (Map.Entry<String, String> entry : config.settings().getAsMap().entrySet()) {
builder.put("xpack.security.authc.realms.ldap." + entry.getKey(), entry.getValue());
}
builder.put(Settings.builder().put(config.settings()).normalizePrefix("xpack.security.authc.realms.ldap.").build());
Settings settings = builder.build();
SSLService sslService = new SSLService(settings, new Environment(settings));