Merge remote-tracking branch 'origin/master' into feature/sql_2

Original commit: elastic/x-pack-elasticsearch@4176773659
Lee Hinman 2018-01-31 10:49:25 -07:00
commit 1b36133988
65 changed files with 725 additions and 3619 deletions

View File

@@ -35,13 +35,6 @@ At this time, you cannot use cross cluster search in either the {ml} APIs or the
 For more information about cross cluster search,
 see {ref}/modules-cross-cluster-search.html[Cross Cluster Search].
-[float]
-=== {xpackml} features are not supported on tribe nodes
-You cannot use {ml} features on tribe nodes. For more information about that
-type of node, see
-{ref}/modules-tribe.html[Tribe node].
 [float]
 === Anomaly Explorer omissions and limitations
 //See x-pack-elasticsearch/#844 and x-pack-kibana/#1461

View File

@@ -12,5 +12,4 @@ can also adjust how monitoring data is displayed. For more information, see
 <<es-monitoring>>.
 include::indices.asciidoc[]
-include::tribe.asciidoc[]
 include::{xes-repo-dir}/settings/monitoring-settings.asciidoc[]

View File

@@ -1,40 +0,0 @@
-[role="xpack"]
-[[monitoring-tribe]]
-=== Configuring a Tribe Node to Work with Monitoring
-If you connect to a cluster through a <<modules-tribe,tribe node>>,
-and you want to monitor the tribe node, then you will need to install {xpack} on
-that node as well.
-With this configuration, the tribe node is included in the node count displayed
-in the Monitoring UI, but is not included in the node list because it does not
-export any data to the monitoring cluster.
-To include the tribe node in the monitoring data, enable Monitoring data
-collection at the tribe level:
-[source,yaml]
-----------------------------------
-node.name: my-tribe-node1
-tribe:
-  on_conflict: prefer_cluster1
-  c1:
-    cluster.name: cluster1
-    discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300", "cluster1-node2:9300" ]
-    xpack.monitoring.enabled: true <1>
-  c2:
-    cluster.name: cluster2
-    discovery.zen.ping.unicast.hosts: [ "cluster2-node3:9300", "cluster2-node3:9300", "cluster2-node3:9300" ]
-    xpack.monitoring: <2>
-      enabled: true
-      exporters:
-        id1:
-          type: http
-          host: [ "monitoring-cluster:9200" ]
-----------------------------------
-<1> Enable data collection from the tribe node using a Local Exporter.
-<2> Enable data collection from the tribe node using an HTTP Exporter.
-When you enable data collection from the tribe node, it is included in both the
-node count and node list.

View File

@@ -1,12 +1,11 @@
 [[ccs-tribe-clients-integrations]]
-== Cross Cluster Search, Tribe, Clients and Integrations
-When using {ref}/modules-cross-cluster-search.html[Cross Cluster Search] or
-{ref}/modules-tribe.html[Tribe Nodes] you need to take extra steps to secure
-communications with the connected clusters.
+== Cross Cluster Search, Clients and Integrations
+When using {ref}/modules-cross-cluster-search.html[Cross Cluster Search]
+you need to take extra steps to secure communications with the connected
+clusters.
 * <<cross-cluster-configuring, Cross Cluster Search and Security>>
-* <<tribe-node-configuring, Tribe Nodes and Security>>
 You will need to update the configuration for several clients to work with a
 secured cluster:
@@ -34,8 +33,6 @@ or at least communicate with the cluster in a secured way:
 include::tribe-clients-integrations/cross-cluster.asciidoc[]
-include::tribe-clients-integrations/tribe.asciidoc[]
 include::tribe-clients-integrations/java.asciidoc[]
 include::tribe-clients-integrations/http.asciidoc[]

View File

@@ -1,101 +0,0 @@
-[[tribe-node-configuring]]
-=== Tribe Nodes and Security
-{ref}/modules-tribe.html[Tribe nodes] act as a federated client across multiple
-clusters. When using tribe nodes with secured clusters, all clusters must have
-{security} enabled and share the same security configuration (users, roles,
-user-role mappings, SSL/TLS CA). The tribe node itself also must be configured
-to grant access to actions and indices on all of the connected clusters, as
-security checks on incoming requests are primarily done on the tribe node
-itself.
-IMPORTANT: Support for tribe nodes in Kibana was added in v5.2.
-To use a tribe node with secured clusters:
-. Install {xpack} on the tribe node and every node in each connected cluster.
-. Enable encryption globally. To encrypt communications, you must enable
-  <<ssl-tls,enable SSL/TLS>> on every node.
-+
-TIP: To simplify SSL/TLS configuration, use the same certificate authority to
-generate certificates for all connected clusters.
-. Configure the tribe in the tribe node's `elasticsearch.yml` file. You must
-specify each cluster that is a part of the tribe and configure discovery and
-encryption settings per cluster. For example, the following configuration adds
-two clusters to the tribe:
-+
-[source,yml]
------------------------------------------------------------
-tribe:
-  on_conflict: prefer_cluster1 <1>
-  c1: <2>
-    cluster.name: cluster1
-    discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300"]
-    xpack.ssl.key: /home/es/config/x-pack/es-tribe-01.key
-    xpack.ssl.certificate: /home/es/config/x-pack/es-tribe-01.crt
-    xpack.ssl.certificate_authorities: [ "/home/es/config/x-pack/ca.crt" ]
-    xpack.security.transport.ssl.enabled: true
-    xpack.security.http.ssl.enabled: true
-  c2:
-    cluster.name: cluster2
-    discovery.zen.ping.unicast.hosts: [ "cluster2-node1:9300", "cluster2-node2:9300"]
-    xpack.ssl.key: /home/es/config/x-pack/es-tribe-01.key
-    xpack.ssl.certificate: /home/es/config/x-pack/es-tribe-01.crt
-    xpack.ssl.certificate_authorities: [ "/home/es/config/x-pack/ca.crt" ]
-    xpack.security.transport.ssl.enabled: true
-    xpack.security.http.ssl.enabled: true
------------------------------------------------------------
-<1> Results are returned from the preferred cluster if the named index exists
-    in multiple clusters. A preference is *required* when using {security} on
-    a tribe node.
-<2> An arbitrary name that represents the connection to the cluster.
-. Configure the same index privileges for your users on all nodes, including the
-tribe node. The nodes in each cluster must grant access to indices in other
-connected clusters as well as their own.
-+
-For example, let's assume `cluster1` and `cluster2` each have a indices `index1`
-and `index2`. To enable a user to submit a request through the tribe node to
-search both clusters:
-+
---
-.. On the tribe node and both clusters, <<defining-roles, define a `tribe_user` role>>
-that has read access to `index1` and `index2`:
-+
-[source,yaml]
------------------------------------------------------------
-tribe_user:
-  indices:
-    'index*': search
------------------------------------------------------------
-.. Assign the `tribe_user` role to a user on the tribe node and both clusters.
-For example, run the following command on each node to create `my_tribe_user`
-and assign the `tribe_user` role:
-+
-[source,shell]
------------------------------------------------------------
-./bin/shield/users useradd my_tribe_user -p password -r tribe_user
------------------------------------------------------------
-+
-NOTE: Each cluster needs to have its own users with admin privileges.
-You cannot perform administration tasks such as create index through
-the tribe node, you must send the request directly to the appropriate
-cluster.
---
-. To enable selected users to retrieve merged cluster state information
-for the tribe from the tribe node, grant them the cluster
-<<privileges-list-cluster, `monitor` privilege>> on the tribe node. For example,
-you could create a `tribe_monitor` role that assigns the `monitor` privilege:
-+
-[source,yaml]
------------------------------------------------------------
-tribe_monitor:
-  cluster: monitor
------------------------------------------------------------
-. Start the tribe node. If you've made configuration changes to the nodes in the
-connected clusters, they also need to be restarted.

View File

@@ -76,7 +76,7 @@ public class License implements ToXContentObject {
      * <p>
      * Note: The mode indicates features that should be made available, but it does not indicate whether the license is active!
      *
-     * The id byte is used for ordering operation modes (used for merging license md in tribe node)
+     * The id byte is used for ordering operation modes
      */
     public enum OperationMode {
         MISSING((byte) 0),

View File

@@ -29,15 +29,12 @@ import java.util.Collections;
 import java.util.List;
 import java.util.function.Supplier;
-import static org.elasticsearch.xpack.core.XPackClientActionPlugin.isTribeNode;
 import static org.elasticsearch.xpack.core.XPackPlugin.transportClientMode;
 public class Licensing implements ActionPlugin {
     public static final String NAME = "license";
     protected final Settings settings;
-    private final boolean isTransportClient;
-    private final boolean isTribeNode;
     // Until this is moved out to its own plugin (its currently in XPackPlugin.java, we need to make sure that any edits to this file
     // are also carried out in XPackClientPlugin.java
@@ -60,15 +57,10 @@ public class Licensing implements ActionPlugin {
     public Licensing(Settings settings) {
         this.settings = settings;
-        isTransportClient = transportClientMode(settings);
-        isTribeNode = isTribeNode(settings);
     }
     @Override
     public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
-        if (isTribeNode) {
-            return Collections.singletonList(new ActionHandler<>(GetLicenseAction.INSTANCE, TransportGetLicenseAction.class));
-        }
         return Arrays.asList(new ActionHandler<>(PutLicenseAction.INSTANCE, TransportPutLicenseAction.class),
                 new ActionHandler<>(GetLicenseAction.INSTANCE, TransportGetLicenseAction.class),
                 new ActionHandler<>(DeleteLicenseAction.INSTANCE, TransportDeleteLicenseAction.class),
@@ -82,12 +74,10 @@ public class Licensing implements ActionPlugin {
                                              Supplier<DiscoveryNodes> nodesInCluster) {
         List<RestHandler> handlers = new ArrayList<>();
         handlers.add(new RestGetLicenseAction(settings, restController));
-        if (false == isTribeNode) {
-            handlers.add(new RestPutLicenseAction(settings, restController));
-            handlers.add(new RestDeleteLicenseAction(settings, restController));
-            handlers.add(new RestGetTrialStatus(settings, restController));
-            handlers.add(new RestPostStartTrialLicense(settings, restController));
-        }
+        handlers.add(new RestPutLicenseAction(settings, restController));
+        handlers.add(new RestDeleteLicenseAction(settings, restController));
+        handlers.add(new RestGetTrialStatus(settings, restController));
+        handlers.add(new RestPostStartTrialLicense(settings, restController));
         return handlers;
     }

View File

@@ -1,19 +0,0 @@
-/*
- * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
- * or more contributor license agreements. Licensed under the Elastic License;
- * you may not use this file except in compliance with the Elastic License.
- */
-package org.elasticsearch.xpack.core;
-
-import org.elasticsearch.common.settings.Settings;
-
-public interface XPackClientActionPlugin {
-
-    static boolean isTribeNode(Settings settings) {
-        return settings.getGroups("tribe", true).isEmpty() == false;
-    }
-
-    static boolean isTribeClientNode(Settings settings) {
-        return settings.get("tribe.name") != null;
-    }
-}

View File

@@ -30,10 +30,8 @@ public class XPackSettings {
     /** Setting for enabling or disabling security. Defaults to true. */
     public static final Setting<Boolean> SECURITY_ENABLED = Setting.boolSetting("xpack.security.enabled", true, Setting.Property.NodeScope);
-    /** Setting for enabling or disabling monitoring. Defaults to true if not a tribe node. */
-    public static final Setting<Boolean> MONITORING_ENABLED = Setting.boolSetting("xpack.monitoring.enabled",
-            // By default, monitoring is disabled on tribe nodes
-            s -> String.valueOf(XPackClientActionPlugin.isTribeNode(s) == false && XPackClientActionPlugin.isTribeClientNode(s) == false),
+    /** Setting for enabling or disabling monitoring. */
+    public static final Setting<Boolean> MONITORING_ENABLED = Setting.boolSetting("xpack.monitoring.enabled", true,
             Setting.Property.NodeScope);
     /** Setting for enabling or disabling watcher. Defaults to true. */

View File

@@ -601,9 +601,9 @@ public class AnalysisConfig implements ToXContentObject, Writeable {
     private void verifyNoMetricFunctionsWhenSummaryCountFieldNameIsSet() {
         if (Strings.isNullOrEmpty(summaryCountFieldName) == false &&
-                detectors.stream().anyMatch(d -> Detector.METRIC.equals(d.getFunction()))) {
+                detectors.stream().anyMatch(d -> DetectorFunction.METRIC.equals(d.getFunction()))) {
             throw ExceptionsHelper.badRequestException(
-                    Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED, Detector.METRIC));
+                    Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED, DetectorFunction.METRIC));
         }
     }
@@ -763,7 +763,7 @@ public class AnalysisConfig implements ToXContentObject, Writeable {
         // If any detector function is rare/freq_rare, mustn't use overlapping buckets
         boolean mustNotUse = false;
-        List<String> illegalFunctions = new ArrayList<>();
+        List<DetectorFunction> illegalFunctions = new ArrayList<>();
         for (Detector d : detectors) {
             if (Detector.NO_OVERLAPPING_BUCKETS_FUNCTIONS.contains(d.getFunction())) {
                 illegalFunctions.add(d.getFunction());

View File

@@ -41,7 +41,7 @@ public final class DefaultDetectorDescription {
      * @param sb the {@code StringBuilder} to append to
      */
     public static void appendOn(Detector detector, StringBuilder sb) {
-        if (isNotNullOrEmpty(detector.getFunction())) {
+        if (isNotNullOrEmpty(detector.getFunction().getFullName())) {
             sb.append(detector.getFunction());
             if (isNotNullOrEmpty(detector.getFieldName())) {
                 sb.append('(').append(quoteField(detector.getFieldName()))

View File

@@ -27,6 +27,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.EnumMap;
+import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Locale;
@@ -117,200 +118,96 @@ public class Detector implements ToXContentObject, Writeable {
         }
     }
-    public static final String COUNT = "count";
-    public static final String HIGH_COUNT = "high_count";
-    public static final String LOW_COUNT = "low_count";
-    public static final String NON_ZERO_COUNT = "non_zero_count";
-    public static final String LOW_NON_ZERO_COUNT = "low_non_zero_count";
-    public static final String HIGH_NON_ZERO_COUNT = "high_non_zero_count";
-    public static final String NZC = "nzc";
-    public static final String LOW_NZC = "low_nzc";
-    public static final String HIGH_NZC = "high_nzc";
-    public static final String DISTINCT_COUNT = "distinct_count";
-    public static final String LOW_DISTINCT_COUNT = "low_distinct_count";
-    public static final String HIGH_DISTINCT_COUNT = "high_distinct_count";
-    public static final String DC = "dc";
-    public static final String LOW_DC = "low_dc";
-    public static final String HIGH_DC = "high_dc";
-    public static final String RARE = "rare";
-    public static final String FREQ_RARE = "freq_rare";
-    public static final String INFO_CONTENT = "info_content";
-    public static final String LOW_INFO_CONTENT = "low_info_content";
-    public static final String HIGH_INFO_CONTENT = "high_info_content";
-    public static final String METRIC = "metric";
-    public static final String MEAN = "mean";
-    public static final String MEDIAN = "median";
-    public static final String LOW_MEDIAN = "low_median";
-    public static final String HIGH_MEDIAN = "high_median";
-    public static final String HIGH_MEAN = "high_mean";
-    public static final String LOW_MEAN = "low_mean";
-    public static final String AVG = "avg";
-    public static final String HIGH_AVG = "high_avg";
-    public static final String LOW_AVG = "low_avg";
-    public static final String MIN = "min";
-    public static final String MAX = "max";
-    public static final String SUM = "sum";
-    public static final String LOW_SUM = "low_sum";
-    public static final String HIGH_SUM = "high_sum";
-    public static final String NON_NULL_SUM = "non_null_sum";
-    public static final String LOW_NON_NULL_SUM = "low_non_null_sum";
-    public static final String HIGH_NON_NULL_SUM = "high_non_null_sum";
     public static final String BY = "by";
     public static final String OVER = "over";
-    /**
-     * Population variance is called varp to match Splunk
-     */
-    public static final String POPULATION_VARIANCE = "varp";
-    public static final String LOW_POPULATION_VARIANCE = "low_varp";
-    public static final String HIGH_POPULATION_VARIANCE = "high_varp";
-    public static final String TIME_OF_DAY = "time_of_day";
-    public static final String TIME_OF_WEEK = "time_of_week";
-    public static final String LAT_LONG = "lat_long";
-    /**
-     * The set of valid function names.
-     */
-    public static final Set<String> ANALYSIS_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    // The convention here is that synonyms (only) go on the same line
-                    COUNT,
-                    HIGH_COUNT,
-                    LOW_COUNT,
-                    NON_ZERO_COUNT, NZC,
-                    LOW_NON_ZERO_COUNT, LOW_NZC,
-                    HIGH_NON_ZERO_COUNT, HIGH_NZC,
-                    DISTINCT_COUNT, DC,
-                    LOW_DISTINCT_COUNT, LOW_DC,
-                    HIGH_DISTINCT_COUNT, HIGH_DC,
-                    RARE,
-                    FREQ_RARE,
-                    INFO_CONTENT,
-                    LOW_INFO_CONTENT,
-                    HIGH_INFO_CONTENT,
-                    METRIC,
-                    MEAN, AVG,
-                    HIGH_MEAN, HIGH_AVG,
-                    LOW_MEAN, LOW_AVG,
-                    MEDIAN,
-                    LOW_MEDIAN,
-                    HIGH_MEDIAN,
-                    MIN,
-                    MAX,
-                    SUM,
-                    LOW_SUM,
-                    HIGH_SUM,
-                    NON_NULL_SUM,
-                    LOW_NON_NULL_SUM,
-                    HIGH_NON_NULL_SUM,
-                    POPULATION_VARIANCE,
-                    LOW_POPULATION_VARIANCE,
-                    HIGH_POPULATION_VARIANCE,
-                    TIME_OF_DAY,
-                    TIME_OF_WEEK,
-                    LAT_LONG
-            ));
     /**
      * The set of functions that do not require a field, by field or over field
      */
-    public static final Set<String> COUNT_WITHOUT_FIELD_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    COUNT,
-                    HIGH_COUNT,
-                    LOW_COUNT,
-                    NON_ZERO_COUNT, NZC,
-                    LOW_NON_ZERO_COUNT, LOW_NZC,
-                    HIGH_NON_ZERO_COUNT, HIGH_NZC,
-                    TIME_OF_DAY,
-                    TIME_OF_WEEK
-            ));
+    public static final EnumSet<DetectorFunction> COUNT_WITHOUT_FIELD_FUNCTIONS = EnumSet.of(
+            DetectorFunction.COUNT,
+            DetectorFunction.HIGH_COUNT,
+            DetectorFunction.LOW_COUNT,
+            DetectorFunction.NON_ZERO_COUNT,
+            DetectorFunction.LOW_NON_ZERO_COUNT,
+            DetectorFunction.HIGH_NON_ZERO_COUNT,
+            DetectorFunction.TIME_OF_DAY,
+            DetectorFunction.TIME_OF_WEEK
+    );
     /**
      * The set of functions that require a fieldname
      */
-    public static final Set<String> FIELD_NAME_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    DISTINCT_COUNT, DC,
-                    LOW_DISTINCT_COUNT, LOW_DC,
-                    HIGH_DISTINCT_COUNT, HIGH_DC,
-                    INFO_CONTENT,
-                    LOW_INFO_CONTENT,
-                    HIGH_INFO_CONTENT,
-                    METRIC,
-                    MEAN, AVG,
-                    HIGH_MEAN, HIGH_AVG,
-                    LOW_MEAN, LOW_AVG,
-                    MEDIAN,
-                    LOW_MEDIAN,
-                    HIGH_MEDIAN,
-                    MIN,
-                    MAX,
-                    SUM,
-                    LOW_SUM,
-                    HIGH_SUM,
-                    NON_NULL_SUM,
-                    LOW_NON_NULL_SUM,
-                    HIGH_NON_NULL_SUM,
-                    POPULATION_VARIANCE,
-                    LOW_POPULATION_VARIANCE,
-                    HIGH_POPULATION_VARIANCE,
-                    LAT_LONG
-            ));
+    public static final EnumSet<DetectorFunction> FIELD_NAME_FUNCTIONS = EnumSet.of(
+            DetectorFunction.DISTINCT_COUNT,
+            DetectorFunction.LOW_DISTINCT_COUNT,
+            DetectorFunction.HIGH_DISTINCT_COUNT,
+            DetectorFunction.INFO_CONTENT,
+            DetectorFunction.LOW_INFO_CONTENT,
+            DetectorFunction.HIGH_INFO_CONTENT,
+            DetectorFunction.METRIC,
+            DetectorFunction.MEAN, DetectorFunction.AVG,
+            DetectorFunction.HIGH_MEAN, DetectorFunction.HIGH_AVG,
+            DetectorFunction.LOW_MEAN, DetectorFunction.LOW_AVG,
+            DetectorFunction.MEDIAN,
+            DetectorFunction.LOW_MEDIAN,
+            DetectorFunction.HIGH_MEDIAN,
+            DetectorFunction.MIN,
+            DetectorFunction.MAX,
+            DetectorFunction.SUM,
+            DetectorFunction.LOW_SUM,
+            DetectorFunction.HIGH_SUM,
+            DetectorFunction.NON_NULL_SUM,
+            DetectorFunction.LOW_NON_NULL_SUM,
+            DetectorFunction.HIGH_NON_NULL_SUM,
+            DetectorFunction.VARP,
+            DetectorFunction.LOW_VARP,
+            DetectorFunction.HIGH_VARP,
+            DetectorFunction.LAT_LONG
+    );
     /**
      * The set of functions that require a by fieldname
      */
-    public static final Set<String> BY_FIELD_NAME_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    RARE,
-                    FREQ_RARE
-            ));
+    public static final EnumSet<DetectorFunction> BY_FIELD_NAME_FUNCTIONS = EnumSet.of(
+            DetectorFunction.RARE,
+            DetectorFunction.FREQ_RARE
+    );
     /**
      * The set of functions that require a over fieldname
      */
-    public static final Set<String> OVER_FIELD_NAME_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    FREQ_RARE
-            ));
-    /**
-     * The set of functions that cannot have a by fieldname
-     */
-    public static final Set<String> NO_BY_FIELD_NAME_FUNCTIONS =
-            new HashSet<>();
+    public static final EnumSet<DetectorFunction> OVER_FIELD_NAME_FUNCTIONS = EnumSet.of(
+            DetectorFunction.FREQ_RARE
+    );
     /**
      * The set of functions that cannot have an over fieldname
      */
-    public static final Set<String> NO_OVER_FIELD_NAME_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    NON_ZERO_COUNT, NZC,
-                    LOW_NON_ZERO_COUNT, LOW_NZC,
-                    HIGH_NON_ZERO_COUNT, HIGH_NZC
-            ));
+    public static final EnumSet<DetectorFunction> NO_OVER_FIELD_NAME_FUNCTIONS = EnumSet.of(
+            DetectorFunction.NON_ZERO_COUNT,
+            DetectorFunction.LOW_NON_ZERO_COUNT,
+            DetectorFunction.HIGH_NON_ZERO_COUNT
+    );
     /**
      * The set of functions that must not be used with overlapping buckets
      */
-    public static final Set<String> NO_OVERLAPPING_BUCKETS_FUNCTIONS =
-            new HashSet<>(Arrays.asList(
-                    RARE,
-                    FREQ_RARE
-            ));
+    public static final EnumSet<DetectorFunction> NO_OVERLAPPING_BUCKETS_FUNCTIONS = EnumSet.of(
+            DetectorFunction.RARE,
+            DetectorFunction.FREQ_RARE
+    );
     /**
      * The set of functions that should not be used with overlapping buckets
      * as they gain no benefit but have overhead
     */
-    public static final Set<String> OVERLAPPING_BUCKETS_FUNCTIONS_NOT_NEEDED =
-            new HashSet<>(Arrays.asList(
-                    MIN,
-                    MAX,
-                    TIME_OF_DAY,
-                    TIME_OF_WEEK
-            ));
+    public static final EnumSet<DetectorFunction> OVERLAPPING_BUCKETS_FUNCTIONS_NOT_NEEDED = EnumSet.of(
+            DetectorFunction.MIN,
+            DetectorFunction.MAX,
+            DetectorFunction.TIME_OF_DAY,
+            DetectorFunction.TIME_OF_WEEK
+    );
     /**
      * field names cannot contain any of these characters
@@ -323,7 +220,7 @@ public class Detector implements ToXContentObject, Writeable {
     private final String detectorDescription;
-    private final String function;
+    private final DetectorFunction function;
     private final String fieldName;
     private final String byFieldName;
     private final String overFieldName;
@@ -335,7 +232,7 @@ public class Detector implements ToXContentObject, Writeable {
     public Detector(StreamInput in) throws IOException {
         detectorDescription = in.readString();
-        function = in.readString();
+        function = DetectorFunction.fromString(in.readString());
         fieldName = in.readOptionalString();
         byFieldName = in.readOptionalString();
         overFieldName = in.readOptionalString();
@@ -354,7 +251,7 @@ public class Detector implements ToXContentObject, Writeable {
     @Override
     public void writeTo(StreamOutput out) throws IOException {
         out.writeString(detectorDescription);
-        out.writeString(function);
+        out.writeString(function.getFullName());
         out.writeOptionalString(fieldName);
         out.writeOptionalString(byFieldName);
         out.writeOptionalString(overFieldName);
@@ -406,7 +303,7 @@ public class Detector implements ToXContentObject, Writeable {
         return builder;
     }
-    private Detector(String detectorDescription, String function, String fieldName, String byFieldName, String overFieldName,
+    private Detector(String detectorDescription, DetectorFunction function, String fieldName, String byFieldName, String overFieldName,
                      String partitionFieldName, boolean useNull, ExcludeFrequent excludeFrequent, List<DetectionRule> rules,
                      int detectorIndex) {
         this.function = function;
@@ -426,12 +323,11 @@ public class Detector implements ToXContentObject, Writeable {
     }
     /**
-     * The analysis function used e.g. count, rare, min etc. There is no
-     * validation to check this value is one a predefined set
+     * The analysis function used e.g. count, rare, min etc.
      *
      * @return The function or <code>null</code> if not set
      */
-    public String getFunction() {
+    public DetectorFunction getFunction() {
         return function;
     }
@@ -577,10 +473,11 @@ public class Detector implements ToXContentObject, Writeable {
      * error-prone
      * </ul>
      */
-    static final Set<String> FUNCTIONS_WITHOUT_RULE_SUPPORT = new HashSet<>(Arrays.asList(Detector.LAT_LONG, Detector.METRIC));
+    static final EnumSet<DetectorFunction> FUNCTIONS_WITHOUT_RULE_SUPPORT = EnumSet.of(
+            DetectorFunction.LAT_LONG, DetectorFunction.METRIC);
     private String detectorDescription;
-    private String function;
+    private DetectorFunction function;
     private String fieldName;
     private String byFieldName;
     private String overFieldName;
@@ -608,6 +505,10 @@ public class Detector implements ToXContentObject, Writeable {
     }
     public Builder(String function, String fieldName) {
+        this(DetectorFunction.fromString(function), fieldName);
+    }
+    public Builder(DetectorFunction function, String fieldName) {
         this.function = function;
         this.fieldName = fieldName;
     }
@@ -617,7 +518,7 @@ public class Detector implements ToXContentObject, Writeable {
     }
     public void setFunction(String function) {
-        this.function = function;
+        this.function = DetectorFunction.fromString(function);
     }
     public void setFieldName(String fieldName) {
@@ -657,13 +558,10 @@ public class Detector implements ToXContentObject, Writeable {
         boolean emptyByField = Strings.isEmpty(byFieldName);
         boolean emptyOverField = Strings.isEmpty(overFieldName);
         boolean emptyPartitionField = Strings.isEmpty(partitionFieldName);
-        if (Detector.ANALYSIS_FUNCTIONS.contains(function) == false) {
-            throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_UNKNOWN_FUNCTION, function));
-        }
         if (emptyField && emptyByField && emptyOverField) {
             if (!Detector.COUNT_WITHOUT_FIELD_FUNCTIONS.contains(function)) {
-                throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_NO_ANALYSIS_FIELD_NOT_COUNT));
+                throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_ANALYSIS_FIELD_MUST_BE_SET));
             }
         }
@@ -682,11 +580,6 @@ public class Detector implements ToXContentObject, Writeable {
             throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_REQUIRES_BYFIELD, function));
         }
-        if (!emptyByField && Detector.NO_BY_FIELD_NAME_FUNCTIONS.contains(function)) {
-            throw ExceptionsHelper.badRequestException(
-                    Messages.getMessage(Messages.JOB_CONFIG_BYFIELD_INCOMPATIBLE_FUNCTION, function));
-        }
         if (emptyOverField && Detector.OVER_FIELD_NAME_FUNCTIONS.contains(function)) {
             throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_REQUIRES_OVERFIELD, function));
         }
@@ -702,7 +595,7 @@ public class Detector implements ToXContentObject, Writeable {
             verifyFieldName(field);
         }
-        String function = this.function == null ? Detector.METRIC : this.function;
+        DetectorFunction function = this.function == null ? DetectorFunction.METRIC : this.function;
         if (rules.isEmpty() == false) {
             if (FUNCTIONS_WITHOUT_RULE_SUPPORT.contains(function)) {
                 String msg = Messages.getMessage(Messages.JOB_CONFIG_DETECTION_RULE_NOT_SUPPORTED_BY_FUNCTION, function);
@@ -733,13 +626,13 @@ public class Detector implements ToXContentObject, Writeable {
         }
         // by/over field names cannot be "count", "over', "by" - this requirement dates back to the early
-        // days of the Splunk app and could be removed now BUT ONLY IF THE C++ CODE IS CHANGED
+        // days of the ML code and could be removed now BUT ONLY IF THE C++ CODE IS CHANGED
         // FIRST - see https://github.com/elastic/x-pack-elasticsearch/issues/858
-        if (COUNT.equals(byFieldName)) {
+        if (DetectorFunction.COUNT.getFullName().equals(byFieldName)) {
             throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_DETECTOR_COUNT_DISALLOWED,
                     BY_FIELD_NAME_FIELD.getPreferredName()));
         }
-        if (COUNT.equals(overFieldName)) {
+        if (DetectorFunction.COUNT.getFullName().equals(overFieldName)) {
             throw ExceptionsHelper.badRequestException(Messages.getMessage(Messages.JOB_CONFIG_DETECTOR_COUNT_DISALLOWED,
                     OVER_FIELD_NAME_FIELD.getPreferredName()));
         }
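Taken together, the Detector changes replace raw function strings with the typed enum while keeping the string-based entry points. A minimal usage sketch of the two builder overloads shown above (the wrapper class and field names such as `responsetime`, `uri`, and `clientip` are hypothetical):

[source,java]
-----------------------------------------------------------
import org.elasticsearch.xpack.core.ml.job.config.Detector;
import org.elasticsearch.xpack.core.ml.job.config.DetectorFunction;

public class DetectorBuilderSketch {
    public static void main(String[] args) {
        // The String overload still works; it now resolves through DetectorFunction.fromString.
        Detector byName = new Detector.Builder("mean", "responsetime").build();

        // The typed overload avoids string parsing entirely.
        Detector.Builder builder = new Detector.Builder(DetectorFunction.FREQ_RARE, null);
        builder.setByFieldName("uri");        // freq_rare requires a by field
        builder.setOverFieldName("clientip"); // ... and an over field
        Detector typed = builder.build();

        // getFunction() now returns the enum rather than a raw string.
        System.out.println(typed.getFunction().getFullName()); // freq_rare
        System.out.println(byName.getFunction());              // mean
    }
}
-----------------------------------------------------------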

View File

@@ -0,0 +1,84 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License;
+ * you may not use this file except in compliance with the Elastic License.
+ */
+package org.elasticsearch.xpack.core.ml.job.config;
+
+import org.elasticsearch.xpack.core.ml.job.messages.Messages;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Locale;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public enum DetectorFunction {
+
+    COUNT,
+    LOW_COUNT,
+    HIGH_COUNT,
+    NON_ZERO_COUNT("nzc"),
+    LOW_NON_ZERO_COUNT("low_nzc"),
+    HIGH_NON_ZERO_COUNT("high_nzc"),
+    DISTINCT_COUNT("dc"),
+    LOW_DISTINCT_COUNT("low_dc"),
+    HIGH_DISTINCT_COUNT("high_dc"),
+    RARE,
+    FREQ_RARE,
+    INFO_CONTENT,
+    LOW_INFO_CONTENT,
+    HIGH_INFO_CONTENT,
+    METRIC,
+    MEAN,
+    LOW_MEAN,
+    HIGH_MEAN,
+    AVG,
+    LOW_AVG,
+    HIGH_AVG,
+    MEDIAN,
+    LOW_MEDIAN,
+    HIGH_MEDIAN,
+    MIN,
+    MAX,
+    SUM,
+    LOW_SUM,
+    HIGH_SUM,
+    NON_NULL_SUM,
+    LOW_NON_NULL_SUM,
+    HIGH_NON_NULL_SUM,
+    VARP,
+    LOW_VARP,
+    HIGH_VARP,
+    TIME_OF_DAY,
+    TIME_OF_WEEK,
+    LAT_LONG;
+
+    private Set<String> shortcuts;
+
+    DetectorFunction() {
+        shortcuts = Collections.emptySet();
+    }
+
+    DetectorFunction(String... shortcuts) {
+        this.shortcuts = Arrays.stream(shortcuts).collect(Collectors.toSet());
+    }
+
+    public String getFullName() {
+        return name().toLowerCase(Locale.ROOT);
+    }
+
+    @Override
+    public String toString() {
+        return getFullName();
+    }
+
+    public static DetectorFunction fromString(String op) {
+        for (DetectorFunction function : values()) {
+            if (function.getFullName().equals(op) || function.shortcuts.contains(op)) {
+                return function;
+            }
+        }
+        throw new IllegalArgumentException(Messages.getMessage(Messages.JOB_CONFIG_UNKNOWN_FUNCTION, op));
+    }
+}
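The enum's shortcut handling is what preserves compatibility with the old synonym strings. A short sketch of the round-trip behaviour (the wrapper class is illustrative only):

[source,java]
-----------------------------------------------------------
import org.elasticsearch.xpack.core.ml.job.config.DetectorFunction;

public class DetectorFunctionSketch {
    public static void main(String[] args) {
        // Shortcuts resolve to the same constant as the full name.
        System.out.println(DetectorFunction.fromString("nzc"));            // non_zero_count
        System.out.println(DetectorFunction.fromString("non_zero_count")); // non_zero_count
        System.out.println(DetectorFunction.fromString("dc"));             // distinct_count

        // getFullName() lower-cases the constant name, so serialization round-trips.
        System.out.println(DetectorFunction.LAT_LONG.getFullName());       // lat_long

        // Unknown names fail fast with the JOB_CONFIG_UNKNOWN_FUNCTION message.
        try {
            DetectorFunction.fromString("made_up_function");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Unknown function 'made_up_function'
        }
    }
}
-----------------------------------------------------------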

View File

@@ -69,7 +69,6 @@ public final class Messages {
     public static final String JOB_AUDIT_REVERTED = "Job model snapshot reverted to ''{0}''";
     public static final String JOB_AUDIT_SNAPSHOT_DELETED = "Model snapshot [{0}] with description ''{1}'' deleted";
-    public static final String JOB_CONFIG_BYFIELD_INCOMPATIBLE_FUNCTION = "by_field_name cannot be used with function ''{0}''";
     public static final String JOB_CONFIG_CATEGORIZATION_FILTERS_CONTAINS_DUPLICATES = "categorization_filters contain duplicates";
     public static final String JOB_CONFIG_CATEGORIZATION_FILTERS_CONTAINS_EMPTY =
             "categorization_filters are not allowed to contain empty strings";
@@ -137,8 +136,8 @@ public final class Messages {
     public static final String JOB_CONFIG_MISSING_DATA_DESCRIPTION = "A data_description must be set";
     public static final String JOB_CONFIG_MULTIPLE_BUCKETSPANS_MUST_BE_MULTIPLE =
             "Multiple bucket_span ''{0}'' must be a multiple of the main bucket_span ''{1}''";
-    public static final String JOB_CONFIG_NO_ANALYSIS_FIELD_NOT_COUNT =
-            "Unless the function is 'count' one of field_name, by_field_name or over_field_name must be set";
+    public static final String JOB_CONFIG_ANALYSIS_FIELD_MUST_BE_SET =
+            "Unless a count or temporal function is used one of field_name, by_field_name or over_field_name must be set";
     public static final String JOB_CONFIG_NO_DETECTORS = "No detectors configured";
     public static final String JOB_CONFIG_OVERFIELD_INCOMPATIBLE_FUNCTION =
             "over_field_name cannot be used with function ''{0}''";

View File

@@ -8,6 +8,7 @@ package org.elasticsearch.xpack.core.watcher.support.xcontent;
 import org.apache.lucene.util.BytesRef;
 import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.common.Nullable;
+import org.elasticsearch.common.xcontent.DeprecationHandler;
 import org.elasticsearch.common.xcontent.NamedXContentRegistry;
 import org.elasticsearch.common.xcontent.XContentLocation;
 import org.elasticsearch.common.xcontent.XContentParser;
@@ -293,4 +294,9 @@ public class WatcherXContentParser implements XContentParser {
     public void close() throws ElasticsearchException {
         parser.close();
     }
+
+    @Override
+    public DeprecationHandler getDeprecationHandler() {
+        return parser.getDeprecationHandler();
+    }
 }

View File

@@ -526,42 +526,6 @@ public class AnalysisConfigTests extends AbstractSerializingTestCase<AnalysisCon
         return new AnalysisConfig.Builder(Collections.singletonList(new Detector.Builder("min", "count").build()));
     }
-    public void testVerify_throws() {
-        // count works with no fields
-        Detector d = new Detector.Builder("count", null).build();
-        new AnalysisConfig.Builder(Collections.singletonList(d)).build();
-        try {
-            d = new Detector.Builder("distinct_count", null).build();
-            new AnalysisConfig.Builder(Collections.singletonList(d)).build();
-            assertTrue(false); // shouldn't get here
-        } catch (ElasticsearchException e) {
-            assertEquals("Unless the function is 'count' one of field_name, by_field_name or over_field_name must be set", e.getMessage());
-        }
-        // should work now
-        Detector.Builder builder = new Detector.Builder("distinct_count", "somefield");
-        builder.setOverFieldName("over_field");
-        new AnalysisConfig.Builder(Collections.singletonList(builder.build())).build();
-        builder = new Detector.Builder("info_content", "somefield");
-        builder.setOverFieldName("over_field");
-        builder.build();
-        new AnalysisConfig.Builder(Collections.singletonList(builder.build())).build();
-        builder.setByFieldName("by_field");
-        new AnalysisConfig.Builder(Collections.singletonList(builder.build())).build();
-        try {
-            builder = new Detector.Builder("made_up_function", "somefield");
-            builder.setOverFieldName("over_field");
-            new AnalysisConfig.Builder(Collections.singletonList(builder.build())).build();
-            assertTrue(false); // shouldn't get here
-        } catch (ElasticsearchException e) {
-            assertEquals("Unknown function 'made_up_function'", e.getMessage());
-        }
-    }
     public void testVerify_GivenNegativeBucketSpan() {
         AnalysisConfig.Builder config = createValidConfig();
         config.setBucketSpan(TimeValue.timeValueSeconds(-1));
@@ -728,7 +692,7 @@ public class AnalysisConfigTests extends AbstractSerializingTestCase<AnalysisCon
         AnalysisConfig.Builder ac = new AnalysisConfig.Builder(Collections.singletonList(d));
         ac.setSummaryCountFieldName("my_summary_count");
         ElasticsearchException e = ESTestCase.expectThrows(ElasticsearchException.class, ac::build);
-        assertEquals(Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED, Detector.METRIC), e.getMessage());
+        assertEquals(Messages.getMessage(Messages.JOB_CONFIG_FUNCTION_INCOMPATIBLE_PRESUMMARIZED, DetectorFunction.METRIC), e.getMessage());
     }
     public void testMultipleBucketsConfig() {

View File

@@ -6,21 +6,23 @@
 package org.elasticsearch.xpack.core.ml.job.config;
 import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.ElasticsearchStatusException;
 import org.elasticsearch.common.io.stream.Writeable.Reader;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.test.AbstractSerializingTestCase;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.xpack.core.ml.job.messages.Messages;
 import org.elasticsearch.xpack.core.ml.job.process.autodetect.writer.RecordWriter;
-import org.junit.Assert;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
-import java.util.Set;
+import static org.hamcrest.Matchers.equalTo;
 public class DetectorTests extends AbstractSerializingTestCase<Detector> {
@@ -151,12 +153,12 @@ public class DetectorTests extends AbstractSerializingTestCase<Detector> {
     @Override
     protected Detector createTestInstance() {
-        String function;
+        DetectorFunction function;
         Detector.Builder detector;
         if (randomBoolean()) {
             detector = new Detector.Builder(function = randomFrom(Detector.COUNT_WITHOUT_FIELD_FUNCTIONS), null);
         } else {
-            Set<String> functions = new HashSet<>(Detector.FIELD_NAME_FUNCTIONS);
+            EnumSet<DetectorFunction> functions = EnumSet.copyOf(Detector.FIELD_NAME_FUNCTIONS);
             functions.removeAll(Detector.Builder.FUNCTIONS_WITHOUT_RULE_SUPPORT);
             detector = new Detector.Builder(function = randomFrom(functions), randomAlphaOfLengthBetween(1, 20));
         }
@@ -168,7 +170,7 @@ public class DetectorTests extends AbstractSerializingTestCase<Detector> {
             detector.setPartitionFieldName(fieldName = randomAlphaOfLengthBetween(6, 20));
         } else if (randomBoolean() && Detector.NO_OVER_FIELD_NAME_FUNCTIONS.contains(function) == false) {
             detector.setOverFieldName(fieldName = randomAlphaOfLengthBetween(6, 20));
-        } else if (randomBoolean() && Detector.NO_BY_FIELD_NAME_FUNCTIONS.contains(function) == false) {
+        } else if (randomBoolean()) {
             detector.setByFieldName(fieldName = randomAlphaOfLengthBetween(6, 20));
         }
         if (randomBoolean()) {
@ -295,217 +297,169 @@ public class DetectorTests extends AbstractSerializingTestCase<Detector> {
}); });
} }
public void testVerify() throws Exception { public void testVerify_GivenFunctionOnly() {
// if nothing else is set the count functions (excluding distinct count) // if nothing else is set the count functions (excluding distinct count)
// are the only allowable functions // are the only allowable functions
new Detector.Builder(Detector.COUNT, null).build(); new Detector.Builder(DetectorFunction.COUNT, null).build();
Set<String> difference = new HashSet<String>(Detector.ANALYSIS_FUNCTIONS); EnumSet<DetectorFunction> difference = EnumSet.allOf(DetectorFunction.class);
difference.remove(Detector.COUNT); difference.remove(DetectorFunction.COUNT);
difference.remove(Detector.HIGH_COUNT); difference.remove(DetectorFunction.HIGH_COUNT);
difference.remove(Detector.LOW_COUNT); difference.remove(DetectorFunction.LOW_COUNT);
difference.remove(Detector.NON_ZERO_COUNT); difference.remove(DetectorFunction.NON_ZERO_COUNT);
difference.remove(Detector.NZC); difference.remove(DetectorFunction.LOW_NON_ZERO_COUNT);
difference.remove(Detector.LOW_NON_ZERO_COUNT); difference.remove(DetectorFunction.HIGH_NON_ZERO_COUNT);
difference.remove(Detector.LOW_NZC); difference.remove(DetectorFunction.TIME_OF_DAY);
difference.remove(Detector.HIGH_NON_ZERO_COUNT); difference.remove(DetectorFunction.TIME_OF_WEEK);
difference.remove(Detector.HIGH_NZC); for (DetectorFunction f : difference) {
difference.remove(Detector.TIME_OF_DAY); ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class,
difference.remove(Detector.TIME_OF_WEEK); () -> new Detector.Builder(f, null).build());
for (String f : difference) { assertThat(e.getMessage(), equalTo("Unless a count or temporal function is used one of field_name," +
try { " by_field_name or over_field_name must be set"));
new Detector.Builder(f, null).build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
} }
}
// certain fields aren't allowed with certain functions public void testVerify_GivenFunctionsNotSupportingOverField() {
// first do the over field EnumSet<DetectorFunction> noOverFieldFunctions = EnumSet.of(
for (String f : new String[]{Detector.NON_ZERO_COUNT, Detector.NZC, DetectorFunction.NON_ZERO_COUNT,
Detector.LOW_NON_ZERO_COUNT, Detector.LOW_NZC, Detector.HIGH_NON_ZERO_COUNT, DetectorFunction.LOW_NON_ZERO_COUNT,
Detector.HIGH_NZC}) { DetectorFunction.HIGH_NON_ZERO_COUNT
);
for (DetectorFunction f: noOverFieldFunctions) {
Detector.Builder builder = new Detector.Builder(f, null); Detector.Builder builder = new Detector.Builder(f, null);
builder.setOverFieldName("over_field"); builder.setOverFieldName("over_field");
try { ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class, () -> builder.build());
builder.build(); assertThat(e.getMessage(), equalTo("over_field_name cannot be used with function '" + f + "'"));
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
} }
}
// these functions cannot have just an over field public void testVerify_GivenFunctionsCannotHaveJustOverField() {
difference = new HashSet<>(Detector.ANALYSIS_FUNCTIONS); EnumSet<DetectorFunction> difference = EnumSet.allOf(DetectorFunction.class);
difference.remove(Detector.COUNT); difference.remove(DetectorFunction.COUNT);
difference.remove(Detector.HIGH_COUNT); difference.remove(DetectorFunction.LOW_COUNT);
difference.remove(Detector.LOW_COUNT); difference.remove(DetectorFunction.HIGH_COUNT);
difference.remove(Detector.TIME_OF_DAY); difference.remove(DetectorFunction.TIME_OF_DAY);
difference.remove(Detector.TIME_OF_WEEK); difference.remove(DetectorFunction.TIME_OF_WEEK);
for (String f : difference) { for (DetectorFunction f: difference) {
Detector.Builder builder = new Detector.Builder(f, null); Detector.Builder builder = new Detector.Builder(f, null);
builder.setOverFieldName("over_field"); builder.setOverFieldName("over_field");
try { expectThrows(ElasticsearchStatusException.class, () -> builder.build());
builder.build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
} }
}
// these functions can have just an over field public void testVerify_GivenFunctionsCanHaveJustOverField() {
for (String f : new String[]{Detector.COUNT, Detector.HIGH_COUNT, EnumSet<DetectorFunction> noOverFieldFunctions = EnumSet.of(
Detector.LOW_COUNT}) { DetectorFunction.COUNT,
DetectorFunction.LOW_COUNT,
DetectorFunction.HIGH_COUNT
);
for (DetectorFunction f: noOverFieldFunctions) {
Detector.Builder builder = new Detector.Builder(f, null); Detector.Builder builder = new Detector.Builder(f, null);
builder.setOverFieldName("over_field"); builder.setOverFieldName("over_field");
builder.build(); builder.build();
} }
}
for (String f : new String[]{Detector.RARE, Detector.FREQ_RARE}) { public void testVerify_GivenFunctionsCannotHaveFieldName() {
for (DetectorFunction f : Detector.COUNT_WITHOUT_FIELD_FUNCTIONS) {
Detector.Builder builder = new Detector.Builder(f, "field");
builder.setByFieldName("b");
ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class, () -> builder.build());
assertThat(e.getMessage(), equalTo("field_name cannot be used with function '" + f + "'"));
}
// Nor rare
{
Detector.Builder builder = new Detector.Builder(DetectorFunction.RARE, "field");
builder.setByFieldName("b");
builder.setOverFieldName("over_field");
ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class, () -> builder.build());
assertThat(e.getMessage(), equalTo("field_name cannot be used with function 'rare'"));
}
// Nor freq_rare
{
Detector.Builder builder = new Detector.Builder(DetectorFunction.FREQ_RARE, "field");
builder.setByFieldName("b");
builder.setOverFieldName("over_field");
ElasticsearchStatusException e = expectThrows(ElasticsearchStatusException.class, () -> builder.build());
assertThat(e.getMessage(), equalTo("field_name cannot be used with function 'freq_rare'"));
}
}
public void testVerify_GivenFunctionsRequiringFieldName() {
// some functions require a fieldname
for (DetectorFunction f : Detector.FIELD_NAME_FUNCTIONS) {
Detector.Builder builder = new Detector.Builder(f, "f");
builder.build();
}
}
public void testVerify_GivenFieldNameFunctionsAndOverField() {
// some functions require a fieldname
for (DetectorFunction f : Detector.FIELD_NAME_FUNCTIONS) {
Detector.Builder builder = new Detector.Builder(f, "f");
builder.setOverFieldName("some_over_field");
builder.build();
}
}
public void testVerify_GivenFieldNameFunctionsAndByField() {
// some functions require a fieldname
for (DetectorFunction f : Detector.FIELD_NAME_FUNCTIONS) {
Detector.Builder builder = new Detector.Builder(f, "f");
builder.setByFieldName("some_by_field");
builder.build();
}
}
public void testVerify_GivenCountFunctionsWithByField() {
// some functions require a fieldname
for (DetectorFunction f : Detector.COUNT_WITHOUT_FIELD_FUNCTIONS) {
Detector.Builder builder = new Detector.Builder(f, null);
builder.setByFieldName("some_by_field");
builder.build();
}
}
public void testVerify_GivenCountFunctionsWithOverField() {
EnumSet<DetectorFunction> functions = EnumSet.copyOf(Detector.COUNT_WITHOUT_FIELD_FUNCTIONS);
functions.removeAll(Detector.NO_OVER_FIELD_NAME_FUNCTIONS);
for (DetectorFunction f : functions) {
Detector.Builder builder = new Detector.Builder(f, null);
builder.setOverFieldName("some_over_field");
builder.build();
}
}
public void testVerify_GivenCountFunctionsWithByAndOverFields() {
EnumSet<DetectorFunction> functions = EnumSet.copyOf(Detector.COUNT_WITHOUT_FIELD_FUNCTIONS);
functions.removeAll(Detector.NO_OVER_FIELD_NAME_FUNCTIONS);
for (DetectorFunction f : functions) {
Detector.Builder builder = new Detector.Builder(f, null);
builder.setByFieldName("some_over_field");
builder.setOverFieldName("some_by_field");
builder.build();
}
}
public void testVerify_GivenRareAndFreqRareWithByAndOverFields() {
for (DetectorFunction f : EnumSet.of(DetectorFunction.RARE, DetectorFunction.FREQ_RARE)) {
Detector.Builder builder = new Detector.Builder(f, null); Detector.Builder builder = new Detector.Builder(f, null);
builder.setOverFieldName("over_field"); builder.setOverFieldName("over_field");
builder.setByFieldName("by_field"); builder.setByFieldName("by_field");
builder.build(); builder.build();
} }
}
// some functions require a fieldname public void testVerify_GivenFunctionsThatCanHaveByField() {
for (String f : new String[]{Detector.DISTINCT_COUNT, Detector.DC, for (DetectorFunction f : EnumSet.of(DetectorFunction.COUNT, DetectorFunction.HIGH_COUNT, DetectorFunction.LOW_COUNT,
Detector.HIGH_DISTINCT_COUNT, Detector.HIGH_DC, Detector.LOW_DISTINCT_COUNT, Detector.LOW_DC, DetectorFunction.RARE, DetectorFunction.NON_ZERO_COUNT, DetectorFunction.LOW_NON_ZERO_COUNT,
Detector.INFO_CONTENT, Detector.LOW_INFO_CONTENT, Detector.HIGH_INFO_CONTENT, DetectorFunction.HIGH_NON_ZERO_COUNT)) {
Detector.METRIC, Detector.MEAN, Detector.HIGH_MEAN, Detector.LOW_MEAN, Detector.AVG,
Detector.HIGH_AVG, Detector.LOW_AVG, Detector.MAX, Detector.MIN, Detector.SUM,
Detector.LOW_SUM, Detector.HIGH_SUM, Detector.NON_NULL_SUM,
Detector.LOW_NON_NULL_SUM, Detector.HIGH_NON_NULL_SUM, Detector.POPULATION_VARIANCE,
Detector.LOW_POPULATION_VARIANCE, Detector.HIGH_POPULATION_VARIANCE}) {
Detector.Builder builder = new Detector.Builder(f, "f");
builder.setOverFieldName("over_field");
builder.build();
}
// these functions cannot have a field name
difference = new HashSet<>(Detector.ANALYSIS_FUNCTIONS);
difference.remove(Detector.METRIC);
difference.remove(Detector.MEAN);
difference.remove(Detector.LOW_MEAN);
difference.remove(Detector.HIGH_MEAN);
difference.remove(Detector.AVG);
difference.remove(Detector.LOW_AVG);
difference.remove(Detector.HIGH_AVG);
difference.remove(Detector.MEDIAN);
difference.remove(Detector.LOW_MEDIAN);
difference.remove(Detector.HIGH_MEDIAN);
difference.remove(Detector.MIN);
difference.remove(Detector.MAX);
difference.remove(Detector.SUM);
difference.remove(Detector.LOW_SUM);
difference.remove(Detector.HIGH_SUM);
difference.remove(Detector.NON_NULL_SUM);
difference.remove(Detector.LOW_NON_NULL_SUM);
difference.remove(Detector.HIGH_NON_NULL_SUM);
difference.remove(Detector.POPULATION_VARIANCE);
difference.remove(Detector.LOW_POPULATION_VARIANCE);
difference.remove(Detector.HIGH_POPULATION_VARIANCE);
difference.remove(Detector.DISTINCT_COUNT);
difference.remove(Detector.HIGH_DISTINCT_COUNT);
difference.remove(Detector.LOW_DISTINCT_COUNT);
difference.remove(Detector.DC);
difference.remove(Detector.LOW_DC);
difference.remove(Detector.HIGH_DC);
difference.remove(Detector.INFO_CONTENT);
difference.remove(Detector.LOW_INFO_CONTENT);
difference.remove(Detector.HIGH_INFO_CONTENT);
difference.remove(Detector.LAT_LONG);
for (String f : difference) {
Detector.Builder builder = new Detector.Builder(f, "f");
builder.setOverFieldName("over_field");
try {
builder.build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
}
// these can have a by field
for (String f : new String[]{Detector.COUNT, Detector.HIGH_COUNT,
Detector.LOW_COUNT, Detector.RARE,
Detector.NON_ZERO_COUNT, Detector.NZC}) {
Detector.Builder builder = new Detector.Builder(f, null); Detector.Builder builder = new Detector.Builder(f, null);
builder.setByFieldName("b"); builder.setByFieldName("b");
builder.build(); builder.build();
} }
Detector.Builder builder = new Detector.Builder(Detector.FREQ_RARE, null);
builder.setOverFieldName("over_field");
builder.setByFieldName("b");
builder.build();
builder = new Detector.Builder(Detector.FREQ_RARE, null);
builder.setOverFieldName("over_field");
builder.setByFieldName("b");
builder.build();
// some functions require a fieldname
int testedFunctionsCount = 0;
for (String f : Detector.FIELD_NAME_FUNCTIONS) {
testedFunctionsCount++;
builder = new Detector.Builder(f, "f");
builder.setByFieldName("b");
builder.build();
}
Assert.assertEquals(Detector.FIELD_NAME_FUNCTIONS.size(), testedFunctionsCount);
// these functions don't work with fieldname
testedFunctionsCount = 0;
for (String f : Detector.COUNT_WITHOUT_FIELD_FUNCTIONS) {
testedFunctionsCount++;
try {
builder = new Detector.Builder(f, "field");
builder.setByFieldName("b");
builder.build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
}
Assert.assertEquals(Detector.COUNT_WITHOUT_FIELD_FUNCTIONS.size(), testedFunctionsCount);
builder = new Detector.Builder(Detector.FREQ_RARE, "field");
builder.setByFieldName("b");
builder.setOverFieldName("over_field");
try {
builder.build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
for (String f : new String[]{Detector.HIGH_COUNT,
Detector.LOW_COUNT, Detector.NON_ZERO_COUNT, Detector.NZC}) {
builder = new Detector.Builder(f, null);
builder.setByFieldName("by_field");
builder.build();
}
for (String f : new String[]{Detector.COUNT, Detector.HIGH_COUNT,
Detector.LOW_COUNT}) {
builder = new Detector.Builder(f, null);
builder.setOverFieldName("over_field");
builder.build();
}
for (String f : new String[]{Detector.HIGH_COUNT,
Detector.LOW_COUNT}) {
builder = new Detector.Builder(f, null);
builder.setByFieldName("by_field");
builder.setOverFieldName("over_field");
builder.build();
}
for (String f : new String[]{Detector.NON_ZERO_COUNT, Detector.NZC}) {
try {
builder = new Detector.Builder(f, "field");
builder.setByFieldName("by_field");
builder.setOverFieldName("over_field");
builder.build();
Assert.fail("ElasticsearchException not thrown when expected");
} catch (ElasticsearchException e) {
}
}
} }
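The refactoring above replaces hand-maintained String[] function lists with EnumSet membership checks. A minimal, self-contained sketch of that rule-table idea (function names and the error message are illustrative, not the actual Detector implementation):

[source,java]
----------------------------------
import java.util.EnumSet;
import java.util.Set;

public class DetectorRules {
    // illustrative subset of detector functions
    enum Fn { COUNT, HIGH_COUNT, LOW_COUNT, RARE, NON_ZERO_COUNT, FREQ_RARE, LAT_LONG }

    // rule table: which functions may carry a by_field_name
    static final Set<Fn> CAN_HAVE_BY_FIELD =
            EnumSet.of(Fn.COUNT, Fn.HIGH_COUNT, Fn.LOW_COUNT, Fn.RARE, Fn.NON_ZERO_COUNT, Fn.FREQ_RARE);

    static void validateByField(Fn function, String byFieldName) {
        if (byFieldName != null && CAN_HAVE_BY_FIELD.contains(function) == false) {
            throw new IllegalArgumentException(
                    "by_field_name cannot be used with function [" + function + "]");
        }
    }
}
----------------------------------

Encoding the rules as EnumSets keeps the validation and the tests in sync: adding a function to the set is the whole change.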
public void testVerify_GivenInvalidRuleTargetFieldName() { public void testVerify_GivenInvalidRuleTargetFieldName() {

View File

@ -49,7 +49,6 @@ import org.elasticsearch.threadpool.ExecutorBuilder;
import org.elasticsearch.threadpool.FixedExecutorBuilder; import org.elasticsearch.threadpool.FixedExecutorBuilder;
import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.watcher.ResourceWatcherService; import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackPlugin; import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings; import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.ml.MachineLearningField; import org.elasticsearch.xpack.core.ml.MachineLearningField;
@ -261,8 +260,6 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
private final Environment env; private final Environment env;
private final boolean enabled; private final boolean enabled;
private final boolean transportClientMode; private final boolean transportClientMode;
private final boolean tribeNode;
private final boolean tribeNodeClient;
private final SetOnce<AutodetectProcessManager> autodetectProcessManager = new SetOnce<>(); private final SetOnce<AutodetectProcessManager> autodetectProcessManager = new SetOnce<>();
private final SetOnce<DatafeedManager> datafeedManager = new SetOnce<>(); private final SetOnce<DatafeedManager> datafeedManager = new SetOnce<>();
@ -272,8 +269,6 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
this.enabled = XPackSettings.MACHINE_LEARNING_ENABLED.get(settings); this.enabled = XPackSettings.MACHINE_LEARNING_ENABLED.get(settings);
this.transportClientMode = XPackPlugin.transportClientMode(settings); this.transportClientMode = XPackPlugin.transportClientMode(settings);
this.env = transportClientMode ? null : new Environment(settings, configPath); this.env = transportClientMode ? null : new Environment(settings, configPath);
this.tribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.tribeNodeClient = XPackClientActionPlugin.isTribeClientNode(settings);
} }
protected XPackLicenseState getLicenseState() { return XPackPlugin.getSharedLicenseState(); } protected XPackLicenseState getLicenseState() { return XPackPlugin.getSharedLicenseState(); }
@ -299,7 +294,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
String maxOpenJobsPerNodeNodeAttrName = "node.attr." + MAX_OPEN_JOBS_NODE_ATTR; String maxOpenJobsPerNodeNodeAttrName = "node.attr." + MAX_OPEN_JOBS_NODE_ATTR;
String machineMemoryAttrName = "node.attr." + MACHINE_MEMORY_NODE_ATTR; String machineMemoryAttrName = "node.attr." + MACHINE_MEMORY_NODE_ATTR;
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) { if (enabled == false || transportClientMode) {
disallowMlNodeAttributes(mlEnabledNodeAttrName, maxOpenJobsPerNodeNodeAttrName, machineMemoryAttrName); disallowMlNodeAttributes(mlEnabledNodeAttrName, maxOpenJobsPerNodeNodeAttrName, machineMemoryAttrName);
return Settings.EMPTY; return Settings.EMPTY;
} }
@ -384,7 +379,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
// this method can replace the entire contents of the overridden createComponents() method // this method can replace the entire contents of the overridden createComponents() method
private Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool, private Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool,
NamedXContentRegistry xContentRegistry, Environment environment) { NamedXContentRegistry xContentRegistry, Environment environment) {
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) { if (enabled == false || transportClientMode) {
return emptyList(); return emptyList();
} }
@ -449,7 +444,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
} }
public List<PersistentTasksExecutor<?>> createPersistentTasksExecutors(ClusterService clusterService) { public List<PersistentTasksExecutor<?>> createPersistentTasksExecutors(ClusterService clusterService) {
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) { if (enabled == false || transportClientMode) {
return emptyList(); return emptyList();
} }
@ -462,7 +457,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
public Collection<Module> createGuiceModules() { public Collection<Module> createGuiceModules() {
List<Module> modules = new ArrayList<>(); List<Module> modules = new ArrayList<>();
if (tribeNodeClient || transportClientMode) { if (transportClientMode) {
return modules; return modules;
} }
@ -478,7 +473,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter,
IndexNameExpressionResolver indexNameExpressionResolver, IndexNameExpressionResolver indexNameExpressionResolver,
Supplier<DiscoveryNodes> nodesInCluster) { Supplier<DiscoveryNodes> nodesInCluster) {
if (false == enabled || tribeNodeClient || tribeNode) { if (false == enabled) {
return emptyList(); return emptyList();
} }
return Arrays.asList( return Arrays.asList(
@ -528,7 +523,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
@Override @Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() { public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
if (false == enabled || tribeNodeClient || tribeNode) { if (false == enabled) {
return emptyList(); return emptyList();
} }
return Arrays.asList( return Arrays.asList(
@ -587,7 +582,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
} }
@Override @Override
public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) { public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) {
if (false == enabled || tribeNode || tribeNodeClient || transportClientMode) { if (false == enabled || transportClientMode) {
return emptyList(); return emptyList();
} }
int maxNumberOfJobs = AutodetectProcessManager.MAX_OPEN_JOBS_PER_NODE.get(settings); int maxNumberOfJobs = AutodetectProcessManager.MAX_OPEN_JOBS_PER_NODE.get(settings);

View File

@ -19,7 +19,6 @@ import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.env.Environment; import org.elasticsearch.env.Environment;
import org.elasticsearch.license.XPackLicenseState; import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.Platforms; import org.elasticsearch.plugins.Platforms;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackFeatureSet; import org.elasticsearch.xpack.core.XPackFeatureSet;
import org.elasticsearch.xpack.core.XPackPlugin; import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings; import org.elasticsearch.xpack.core.XPackSettings;
@ -72,9 +71,8 @@ public class MachineLearningFeatureSet implements XPackFeatureSet {
Map<String, Object> nativeCodeInfo = NativeController.UNKNOWN_NATIVE_CODE_INFO; Map<String, Object> nativeCodeInfo = NativeController.UNKNOWN_NATIVE_CODE_INFO;
// Don't try to get the native code version if ML is disabled - it causes too much controversy // Don't try to get the native code version if ML is disabled - it causes too much controversy
// if ML has been disabled because of some OS incompatibility. Also don't try to get the native // if ML has been disabled because of some OS incompatibility. Also don't try to get the native
// code version in the transport or tribe client - the controller process won't be running. // code version in the transport client - the controller process won't be running.
if (enabled && XPackPlugin.transportClientMode(environment.settings()) == false if (enabled && XPackPlugin.transportClientMode(environment.settings()) == false) {
&& XPackClientActionPlugin.isTribeClientNode(environment.settings()) == false) {
try { try {
if (isRunningOnMlPlatform(true)) { if (isRunningOnMlPlatform(true)) {
NativeController nativeController = NativeControllerHolder.getNativeController(environment); NativeController nativeController = NativeControllerHolder.getNativeController(environment);

View File

@ -0,0 +1,28 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.config;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.core.ml.job.config.DetectorFunction;
import static org.hamcrest.Matchers.equalTo;
public class DetectorFunctionTests extends ESTestCase {
public void testShortcuts() {
assertThat(DetectorFunction.fromString("nzc").getFullName(), equalTo("non_zero_count"));
assertThat(DetectorFunction.fromString("low_nzc").getFullName(), equalTo("low_non_zero_count"));
assertThat(DetectorFunction.fromString("high_nzc").getFullName(), equalTo("high_non_zero_count"));
assertThat(DetectorFunction.fromString("dc").getFullName(), equalTo("distinct_count"));
assertThat(DetectorFunction.fromString("low_dc").getFullName(), equalTo("low_distinct_count"));
assertThat(DetectorFunction.fromString("high_dc").getFullName(), equalTo("high_distinct_count"));
}
public void testFromString_GivenInvalidFunction() {
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> DetectorFunction.fromString("invalid"));
assertThat(e.getMessage(), equalTo("Unknown function 'invalid'"));
}
}
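These tests pin down the shortcut-to-full-name expansion. A minimal sketch of how such an enum can resolve shortcuts (hypothetical; the real DetectorFunction source is not part of this diff):

[source,java]
----------------------------------
import java.util.Arrays;
import java.util.Locale;

public enum SketchFunction {
    NON_ZERO_COUNT("nzc"),
    LOW_NON_ZERO_COUNT("low_nzc"),
    DISTINCT_COUNT("dc");

    private final String[] shortcuts;

    SketchFunction(String... shortcuts) {
        this.shortcuts = shortcuts;
    }

    public String getFullName() {
        // the canonical name is the lower-cased enum constant
        return name().toLowerCase(Locale.ROOT);
    }

    public static SketchFunction fromString(String op) {
        for (SketchFunction f : values()) {
            if (f.getFullName().equals(op) || Arrays.asList(f.shortcuts).contains(op)) {
                return f;
            }
        }
        throw new IllegalArgumentException("Unknown function '" + op + "'");
    }
}
----------------------------------

Under this sketch, SketchFunction.fromString("nzc").getFullName() yields "non_zero_count", matching the assertions above.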

View File

@ -31,7 +31,6 @@ import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.ScriptService;
import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.watcher.ResourceWatcherService; import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackPlugin; import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings; import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.monitoring.MonitoringField; import org.elasticsearch.xpack.core.monitoring.MonitoringField;
@ -68,10 +67,9 @@ import static java.util.Collections.singletonList;
import static org.elasticsearch.common.settings.Setting.boolSetting; import static org.elasticsearch.common.settings.Setting.boolSetting;
/** /**
* This class activates/deactivates the monitoring modules depending if we're running a node client, transport client or tribe client: * This class activates/deactivates the monitoring modules depending on whether we're running a node client or a transport client:
* - node clients: all modules are binded * - node clients: all modules are bound
* - transport clients: only action/transport actions are binded * - transport clients: only action/transport actions are bound
* - tribe clients: everything is disables by default but can be enabled per tribe cluster
*/ */
public class Monitoring extends Plugin implements ActionPlugin { public class Monitoring extends Plugin implements ActionPlugin {
@ -85,13 +83,11 @@ public class Monitoring extends Plugin implements ActionPlugin {
protected final Settings settings; protected final Settings settings;
private final boolean enabled; private final boolean enabled;
private final boolean transportClientMode; private final boolean transportClientMode;
private final boolean tribeNode;
public Monitoring(Settings settings) { public Monitoring(Settings settings) {
this.settings = settings; this.settings = settings;
this.transportClientMode = XPackPlugin.transportClientMode(settings); this.transportClientMode = XPackPlugin.transportClientMode(settings);
this.enabled = XPackSettings.MONITORING_ENABLED.get(settings); this.enabled = XPackSettings.MONITORING_ENABLED.get(settings);
this.tribeNode = XPackClientActionPlugin.isTribeNode(settings);
} }
// overridable by tests // overridable by tests
@ -112,7 +108,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
List<Module> modules = new ArrayList<>(); List<Module> modules = new ArrayList<>();
modules.add(b -> { modules.add(b -> {
XPackPlugin.bindFeatureSet(b, MonitoringFeatureSet.class); XPackPlugin.bindFeatureSet(b, MonitoringFeatureSet.class);
if (transportClientMode || enabled == false || tribeNode) { if (transportClientMode || enabled == false) {
b.bind(Exporters.class).toProvider(Providers.of(null)); b.bind(Exporters.class).toProvider(Providers.of(null));
} }
}); });
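With the tribe check gone, the binding decision reduces to the enabled flag and transport-client mode. A small sketch of the null-provider binding pattern used here, written against plain Guice (Elasticsearch ships its own injector, so treat this API as an assumed equivalent; the Exporters stand-in is illustrative):

[source,java]
----------------------------------
import com.google.inject.AbstractModule;
import com.google.inject.util.Providers;

public class MonitoringModuleSketch extends AbstractModule {
    static class Exporters {} // stand-in for the real Exporters component

    private final boolean enabled;
    private final boolean transportClientMode;

    MonitoringModuleSketch(boolean enabled, boolean transportClientMode) {
        this.enabled = enabled;
        this.transportClientMode = transportClientMode;
    }

    @Override
    protected void configure() {
        if (transportClientMode || enabled == false) {
            // the binding resolves to null, so injection points still wire up
            // even though the feature is inactive
            bind(Exporters.class).toProvider(Providers.of((Exporters) null));
        } else {
            bind(Exporters.class).toInstance(new Exporters());
        }
    }
}
----------------------------------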
@ -124,7 +120,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
ResourceWatcherService resourceWatcherService, ScriptService scriptService, ResourceWatcherService resourceWatcherService, ScriptService scriptService,
NamedXContentRegistry xContentRegistry, Environment environment, NamedXContentRegistry xContentRegistry, Environment environment,
NodeEnvironment nodeEnvironment, NamedWriteableRegistry namedWriteableRegistry) { NodeEnvironment nodeEnvironment, NamedWriteableRegistry namedWriteableRegistry) {
if (enabled == false || tribeNode) { if (enabled == false) {
return Collections.emptyList(); return Collections.emptyList();
} }
@ -153,7 +149,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
@Override @Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() { public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
if (false == enabled || tribeNode) { if (false == enabled) {
return emptyList(); return emptyList();
} }
return singletonList(new ActionHandler<>(MonitoringBulkAction.INSTANCE, TransportMonitoringBulkAction.class)); return singletonList(new ActionHandler<>(MonitoringBulkAction.INSTANCE, TransportMonitoringBulkAction.class));
@ -163,7 +159,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
public List<RestHandler> getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings, public List<RestHandler> getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings,
IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver, IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver,
Supplier<DiscoveryNodes> nodesInCluster) { Supplier<DiscoveryNodes> nodesInCluster) {
if (false == enabled || tribeNode) { if (false == enabled) {
return emptyList(); return emptyList();
} }
return singletonList(new RestMonitoringBulkAction(settings, restController)); return singletonList(new RestMonitoringBulkAction(settings, restController));

View File

@ -85,7 +85,7 @@ public class HttpExporter extends Exporter {
*/ */
public static final Setting.AffixSetting<TimeValue> BULK_TIMEOUT_SETTING = public static final Setting.AffixSetting<TimeValue> BULK_TIMEOUT_SETTING =
Setting.affixKeySetting("xpack.monitoring.exporters.","bulk.timeout", Setting.affixKeySetting("xpack.monitoring.exporters.","bulk.timeout",
(key) -> Setting.timeSetting(key, TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope)); (key) -> Setting.timeSetting(key, TimeValue.MINUS_ONE, Property.Dynamic, Property.NodeScope));
/** /**
* Timeout used for initiating a connection. * Timeout used for initiating a connection.
*/ */
@ -147,7 +147,7 @@ public class HttpExporter extends Exporter {
*/ */
public static final Setting.AffixSetting<TimeValue> TEMPLATE_CHECK_TIMEOUT_SETTING = public static final Setting.AffixSetting<TimeValue> TEMPLATE_CHECK_TIMEOUT_SETTING =
Setting.affixKeySetting("xpack.monitoring.exporters.","index.template.master_timeout", Setting.affixKeySetting("xpack.monitoring.exporters.","index.template.master_timeout",
(key) -> Setting.timeSetting(key, TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope)); (key) -> Setting.timeSetting(key, TimeValue.MINUS_ONE, Property.Dynamic, Property.NodeScope));
/** /**
* A boolean setting to enable or disable whether to create placeholders for the old templates. * A boolean setting to enable or disable whether to create placeholders for the old templates.
*/ */
@ -159,7 +159,7 @@ public class HttpExporter extends Exporter {
*/ */
public static final Setting.AffixSetting<TimeValue> PIPELINE_CHECK_TIMEOUT_SETTING = public static final Setting.AffixSetting<TimeValue> PIPELINE_CHECK_TIMEOUT_SETTING =
Setting.affixKeySetting("xpack.monitoring.exporters.","index.pipeline.master_timeout", Setting.affixKeySetting("xpack.monitoring.exporters.","index.pipeline.master_timeout",
(key) -> Setting.timeSetting(key, TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope)); (key) -> Setting.timeSetting(key, TimeValue.MINUS_ONE, Property.Dynamic, Property.NodeScope));
/** /**
* Minimum supported version of the remote monitoring cluster (same major). * Minimum supported version of the remote monitoring cluster (same major).
@ -517,8 +517,8 @@ public class HttpExporter extends Exporter {
final MapBuilder<String, String> params = new MapBuilder<>(); final MapBuilder<String, String> params = new MapBuilder<>();
if (bulkTimeout != null) { if (TimeValue.MINUS_ONE.equals(bulkTimeout) == false) {
params.put("master_timeout", bulkTimeout.toString()); params.put("timeout", bulkTimeout.toString());
} }
// allow the use of ingest pipelines to be completely optional // allow the use of ingest pipelines to be completely optional
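The settings above switch their defaults from a hard-coded 10s to TimeValue.MINUS_ONE, a sentinel meaning "not configured": a query parameter is only attached to the request when the user set one explicitly, letting the server-side default apply otherwise. A plain-Java sketch of the same pattern (class name and the -1 sentinel are illustrative):

[source,java]
----------------------------------
import java.util.HashMap;
import java.util.Map;

public class TimeoutParams {
    // analogue of TimeValue.MINUS_ONE: "the user never configured this"
    static final long UNSET = -1L;

    static Map<String, String> bulkParams(long bulkTimeoutMillis) {
        Map<String, String> params = new HashMap<>();
        if (bulkTimeoutMillis != UNSET) {
            // only send what was configured explicitly
            params.put("timeout", bulkTimeoutMillis + "ms");
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(bulkParams(UNSET)); // {} -> server default applies
        System.out.println(bulkParams(5000L)); // {timeout=5000ms}
    }
}
----------------------------------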

View File

@ -108,7 +108,7 @@ public abstract class PublishableHttpResource extends HttpResource {
* Create a new {@link PublishableHttpResource}. * Create a new {@link PublishableHttpResource}.
* *
* @param resourceOwnerName The user-recognizable name. * @param resourceOwnerName The user-recognizable name.
* @param masterTimeout Master timeout to use with any request. * @param masterTimeout Timeout to use with any request.
* @param baseParameters The base parameters to specify for the request. * @param baseParameters The base parameters to specify for the request.
* @param dirty Whether the resource is dirty or not * @param dirty Whether the resource is dirty or not
*/ */
@ -116,7 +116,7 @@ public abstract class PublishableHttpResource extends HttpResource {
final Map<String, String> baseParameters, final boolean dirty) { final Map<String, String> baseParameters, final boolean dirty) {
super(resourceOwnerName, dirty); super(resourceOwnerName, dirty);
if (masterTimeout != null) { if (masterTimeout != null && TimeValue.MINUS_ONE.equals(masterTimeout) == false) {
final Map<String, String> parameters = new HashMap<>(baseParameters.size() + 1); final Map<String, String> parameters = new HashMap<>(baseParameters.size() + 1);
parameters.putAll(baseParameters); parameters.putAll(baseParameters);

View File

@ -43,7 +43,7 @@ public abstract class AbstractPublishableHttpResourceTestCase extends ESTestCase
protected final String owner = getClass().getSimpleName(); protected final String owner = getClass().getSimpleName();
@Nullable @Nullable
protected final TimeValue masterTimeout = randomFrom(TimeValue.timeValueMinutes(5), null); protected final TimeValue masterTimeout = randomFrom(TimeValue.timeValueMinutes(5), TimeValue.MINUS_ONE, null);
protected final RestClient client = mock(RestClient.class); protected final RestClient client = mock(RestClient.class);
@ -182,8 +182,10 @@ public abstract class AbstractPublishableHttpResourceTestCase extends ESTestCase
protected void assertParameters(final PublishableHttpResource resource) { protected void assertParameters(final PublishableHttpResource resource) {
final Map<String, String> parameters = new HashMap<>(resource.getParameters()); final Map<String, String> parameters = new HashMap<>(resource.getParameters());
if (masterTimeout != null) { if (masterTimeout != null && TimeValue.MINUS_ONE.equals(masterTimeout) == false) {
assertThat(parameters.remove("master_timeout"), is(masterTimeout.toString())); assertThat(parameters.remove("master_timeout"), is(masterTimeout.toString()));
} else {
assertFalse(parameters.containsKey("master_timeout"));
} }
assertThat(parameters.remove("filter_path"), is("$NONE")); assertThat(parameters.remove("filter_path"), is("$NONE"));
@ -193,8 +195,10 @@ public abstract class AbstractPublishableHttpResourceTestCase extends ESTestCase
protected void assertVersionParameters(final PublishableHttpResource resource) { protected void assertVersionParameters(final PublishableHttpResource resource) {
final Map<String, String> parameters = new HashMap<>(resource.getParameters()); final Map<String, String> parameters = new HashMap<>(resource.getParameters());
if (masterTimeout != null) { if (masterTimeout != null && TimeValue.MINUS_ONE.equals(masterTimeout) == false) {
assertThat(parameters.remove("master_timeout"), is(masterTimeout.toString())); assertThat(parameters.remove("master_timeout"), is(masterTimeout.toString()));
} else {
assertFalse(parameters.containsKey("master_timeout"));
} }
assertThat(parameters.remove("filter_path"), is("*.version")); assertThat(parameters.remove("filter_path"), is("*.version"));

View File

@ -626,7 +626,7 @@ public class HttpExporterIT extends MonitoringIntegTestCase {
} }
private String resourceVersionQueryString() { private String resourceVersionQueryString() {
return "master_timeout=10s&filter_path=" + FILTER_PATH_RESOURCE_VERSION; return "filter_path=" + FILTER_PATH_RESOURCE_VERSION;
} }
private String watcherCheckQueryString() { private String watcherCheckQueryString() {
@ -636,7 +636,7 @@ public class HttpExporterIT extends MonitoringIntegTestCase {
private String bulkQueryString() { private String bulkQueryString() {
final String pipelineName = MonitoringTemplateUtils.pipelineName(TEMPLATE_VERSION); final String pipelineName = MonitoringTemplateUtils.pipelineName(TEMPLATE_VERSION);
return "master_timeout=10s&pipeline=" + pipelineName + "&filter_path=" + "errors,items.*.error"; return "pipeline=" + pipelineName + "&filter_path=" + "errors,items.*.error";
} }
private void enqueueGetClusterVersionResponse(Version v) throws IOException { private void enqueueGetClusterVersionResponse(Version v) throws IOException {

View File

@ -416,9 +416,9 @@ public class HttpExporterTests extends ESTestCase {
assertThat(parameters.remove("filter_path"), equalTo("errors,items.*.error")); assertThat(parameters.remove("filter_path"), equalTo("errors,items.*.error"));
if (bulkTimeout != null) { if (bulkTimeout != null) {
assertThat(parameters.remove("master_timeout"), equalTo(bulkTimeout.toString())); assertThat(parameters.remove("timeout"), equalTo(bulkTimeout.toString()));
} else { } else {
assertThat(parameters.remove("master_timeout"), equalTo("10s")); assertNull(parameters.remove("timeout"));
} }
if (useIngest) { if (useIngest) {

View File

@ -524,7 +524,6 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
SecurityNetty4HttpServerTransport.overrideSettings(builder, settings); SecurityNetty4HttpServerTransport.overrideSettings(builder, settings);
} }
builder.put(SecuritySettings.addUserSettings(settings)); builder.put(SecuritySettings.addUserSettings(settings));
addTribeSettings(settings, builder);
return builder.build(); return builder.build();
} else { } else {
return Settings.EMPTY; return Settings.EMPTY;
@ -720,64 +719,6 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
return Collections.singletonMap(SetSecurityUserProcessor.TYPE, new SetSecurityUserProcessor.Factory(parameters.threadContext)); return Collections.singletonMap(SetSecurityUserProcessor.TYPE, new SetSecurityUserProcessor.Factory(parameters.threadContext));
} }
/**
* If the current node is a tribe node, we inject additional settings on each tribe client. We do this to make sure
* that every tribe cluster has x-pack installed and security is enabled. We do that by:
*
* - making it mandatory on the tribe client (this means that the tribe node will fail at startup if x-pack is
* not loaded on any tribe due to missing mandatory plugin)
*
* - forcibly enabling it (that means it's not possible to disable security on the tribe clients)
*/
private static void addTribeSettings(Settings settings, Settings.Builder settingsBuilder) {
Map<String, Settings> tribesSettings = settings.getGroups("tribe", true);
if (tribesSettings.isEmpty()) {
// it's not a tribe node
return;
}
for (Map.Entry<String, Settings> tribeSettings : tribesSettings.entrySet()) {
final String tribeName = tribeSettings.getKey();
final String tribePrefix = "tribe." + tribeName + ".";
if ("blocks".equals(tribeName) || "on_conflict".equals(tribeName) || "name".equals(tribeName)) {
continue;
}
final String tribeEnabledSetting = tribePrefix + XPackSettings.SECURITY_ENABLED.getKey();
if (settings.get(tribeEnabledSetting) != null) {
boolean enabled = XPackSettings.SECURITY_ENABLED.get(tribeSettings.getValue());
if (!enabled) {
throw new IllegalStateException("tribe setting [" + tribeEnabledSetting + "] must be set to true but the value is ["
+ settings.get(tribeEnabledSetting) + "]");
}
} else {
//x-pack security must be enabled on every tribe if it's enabled on the tribe node
settingsBuilder.put(tribeEnabledSetting, true);
}
// we passed all the checks now we need to copy in all of the x-pack security settings
settings.keySet().forEach(k -> {
if (k.startsWith("xpack.security.")) {
settingsBuilder.copy(tribePrefix + k, k, settings);
}
});
}
Map<String, Settings> realmsSettings = settings.getGroups(SecurityField.setting("authc.realms"), true);
final boolean hasNativeRealm = XPackSettings.RESERVED_REALM_ENABLED_SETTING.get(settings) ||
realmsSettings.isEmpty() ||
realmsSettings.entrySet().stream()
.anyMatch((e) -> NativeRealmSettings.TYPE.equals(e.getValue().get("type")) && e.getValue()
.getAsBoolean("enabled", true));
if (hasNativeRealm) {
if (settings.get("tribe.on_conflict", "").startsWith("prefer_") == false) {
throw new IllegalArgumentException("use of security on tribe nodes requires setting [tribe.on_conflict] to specify the " +
"name of the tribe to prefer such as [prefer_t1] as the security index can exist in multiple tribes but only one" +
" can be used by the tribe node");
}
}
}
static boolean indexAuditLoggingEnabled(Settings settings) { static boolean indexAuditLoggingEnabled(Settings settings) {
if (XPackSettings.AUDIT_ENABLED.get(settings)) { if (XPackSettings.AUDIT_ENABLED.get(settings)) {

View File

@ -36,7 +36,6 @@ import org.elasticsearch.index.engine.DocumentMissingException;
import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.SearchHit;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper; import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheRequest; import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheRequest;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse; import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse;
@ -83,14 +82,12 @@ public class NativeUsersStore extends AbstractComponent {
private final Hasher hasher = Hasher.BCRYPT; private final Hasher hasher = Hasher.BCRYPT;
private final Client client; private final Client client;
private final boolean isTribeNode;
private volatile SecurityLifecycleService securityLifecycleService; private volatile SecurityLifecycleService securityLifecycleService;
public NativeUsersStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) { public NativeUsersStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) {
super(settings); super(settings);
this.client = client; this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityLifecycleService = securityLifecycleService; this.securityLifecycleService = securityLifecycleService;
} }
@ -195,48 +192,44 @@ public class NativeUsersStore extends AbstractComponent {
public void changePassword(final ChangePasswordRequest request, final ActionListener<Void> listener) { public void changePassword(final ChangePasswordRequest request, final ActionListener<Void> listener) {
final String username = request.username(); final String username = request.username();
assert SystemUser.NAME.equals(username) == false && XPackUser.NAME.equals(username) == false : username + " is internal!"; assert SystemUser.NAME.equals(username) == false && XPackUser.NAME.equals(username) == false : username + " is internal!";
if (isTribeNode) { final String docType;
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node")); if (ClientReservedRealm.isReserved(username, settings)) {
docType = RESERVED_USER_TYPE;
} else { } else {
final String docType; docType = USER_DOC_TYPE;
if (ClientReservedRealm.isReserved(username, settings)) {
docType = RESERVED_USER_TYPE;
} else {
docType = USER_DOC_TYPE;
}
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE, getIdForUser(docType, username))
.setDoc(Requests.INDEX_CONTENT_TYPE, Fields.PASSWORD.getPreferredName(),
String.valueOf(request.passwordHash()))
.setRefreshPolicy(request.getRefreshPolicy()).request(),
new ActionListener<UpdateResponse>() {
@Override
public void onResponse(UpdateResponse updateResponse) {
assert updateResponse.getResult() == DocWriteResponse.Result.UPDATED;
clearRealmCache(request.username(), listener, null);
}
@Override
public void onFailure(Exception e) {
if (isIndexNotFoundOrDocumentMissing(e)) {
if (docType.equals(RESERVED_USER_TYPE)) {
createReservedUser(username, request.passwordHash(), request.getRefreshPolicy(), listener);
} else {
logger.debug((org.apache.logging.log4j.util.Supplier<?>) () ->
new ParameterizedMessage("failed to change password for user [{}]", request.username()), e);
ValidationException validationException = new ValidationException();
validationException.addValidationError("user must exist in order to change password");
listener.onFailure(validationException);
}
} else {
listener.onFailure(e);
}
}
}, client::update);
});
} }
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE, getIdForUser(docType, username))
.setDoc(Requests.INDEX_CONTENT_TYPE, Fields.PASSWORD.getPreferredName(),
String.valueOf(request.passwordHash()))
.setRefreshPolicy(request.getRefreshPolicy()).request(),
new ActionListener<UpdateResponse>() {
@Override
public void onResponse(UpdateResponse updateResponse) {
assert updateResponse.getResult() == DocWriteResponse.Result.UPDATED;
clearRealmCache(request.username(), listener, null);
}
@Override
public void onFailure(Exception e) {
if (isIndexNotFoundOrDocumentMissing(e)) {
if (docType.equals(RESERVED_USER_TYPE)) {
createReservedUser(username, request.passwordHash(), request.getRefreshPolicy(), listener);
} else {
logger.debug((org.apache.logging.log4j.util.Supplier<?>) () ->
new ParameterizedMessage("failed to change password for user [{}]", request.username()), e);
ValidationException validationException = new ValidationException();
validationException.addValidationError("user must exist in order to change password");
listener.onFailure(validationException);
}
} else {
listener.onFailure(e);
}
}
}, client::update);
});
} }
/** /**
@ -273,9 +266,7 @@ public class NativeUsersStore extends AbstractComponent {
* method will not modify the enabled value. * method will not modify the enabled value.
*/ */
public void putUser(final PutUserRequest request, final ActionListener<Boolean> listener) { public void putUser(final PutUserRequest request, final ActionListener<Boolean> listener) {
if (isTribeNode) { if (request.passwordHash() == null) {
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node"));
} else if (request.passwordHash() == null) {
updateUserWithoutPassword(request, listener); updateUserWithoutPassword(request, listener);
} else { } else {
indexUser(request, listener); indexUser(request, listener);
@ -366,9 +357,7 @@ public class NativeUsersStore extends AbstractComponent {
*/ */
public void setEnabled(final String username, final boolean enabled, final RefreshPolicy refreshPolicy, public void setEnabled(final String username, final boolean enabled, final RefreshPolicy refreshPolicy,
final ActionListener<Void> listener) { final ActionListener<Void> listener) {
if (isTribeNode) { if (ClientReservedRealm.isReserved(username, settings)) {
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node"));
} else if (ClientReservedRealm.isReserved(username, settings)) {
setReservedUserEnabled(username, enabled, refreshPolicy, true, listener); setReservedUserEnabled(username, enabled, refreshPolicy, true, listener);
} else { } else {
setRegularUserEnabled(username, enabled, refreshPolicy, listener); setRegularUserEnabled(username, enabled, refreshPolicy, listener);
@ -442,28 +431,24 @@ public class NativeUsersStore extends AbstractComponent {
} }
public void deleteUser(final DeleteUserRequest deleteUserRequest, final ActionListener<Boolean> listener) { public void deleteUser(final DeleteUserRequest deleteUserRequest, final ActionListener<Boolean> listener) {
if (isTribeNode) { securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
listener.onFailure(new UnsupportedOperationException("users may not be deleted using a tribe node")); DeleteRequest request = client.prepareDelete(SECURITY_INDEX_NAME,
} else { INDEX_TYPE, getIdForUser(USER_DOC_TYPE, deleteUserRequest.username())).request();
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> { request.setRefreshPolicy(deleteUserRequest.getRefreshPolicy());
DeleteRequest request = client.prepareDelete(SECURITY_INDEX_NAME, executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request,
INDEX_TYPE, getIdForUser(USER_DOC_TYPE, deleteUserRequest.username())).request(); new ActionListener<DeleteResponse>() {
request.setRefreshPolicy(deleteUserRequest.getRefreshPolicy()); @Override
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request, public void onResponse(DeleteResponse deleteResponse) {
new ActionListener<DeleteResponse>() { clearRealmCache(deleteUserRequest.username(), listener,
@Override deleteResponse.getResult() == DocWriteResponse.Result.DELETED);
public void onResponse(DeleteResponse deleteResponse) { }
clearRealmCache(deleteUserRequest.username(), listener,
deleteResponse.getResult() == DocWriteResponse.Result.DELETED);
}
@Override @Override
public void onFailure(Exception e) { public void onFailure(Exception e) {
listener.onFailure(e); listener.onFailure(e);
} }
}, client::delete); }, client::delete);
}); });
}
} }
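The un-nesting above makes the common shape of these store methods easier to see: prepare the backing index if needed, then run the request and route the outcome through the caller's listener rather than throwing. A plain-Java sketch of that callback flow (all names illustrative, not the Elasticsearch API):

[source,java]
----------------------------------
import java.util.function.Consumer;

public class AsyncStoreSketch {
    interface Listener<T> {
        void onResponse(T response);
        void onFailure(Exception e);
    }

    // prepare the backing index, then run the action; failures go to the callback
    static void prepareIndexThenExecute(Consumer<Exception> onFailure, Runnable andThen) {
        try {
            // stand-in for "create/upgrade the security index if needed"
            andThen.run();
        } catch (Exception e) {
            onFailure.accept(e);
        }
    }

    static void deleteUser(String username, Listener<Boolean> listener) {
        prepareIndexThenExecute(listener::onFailure, () -> {
            boolean deleted = true; // stand-in for the real delete request
            listener.onResponse(deleted);
        });
    }
}
----------------------------------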
/** /**

View File

@ -25,7 +25,6 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper; import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheAction; import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheAction;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse; import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse;
@ -81,14 +80,12 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
private static final String SECURITY_GENERIC_TYPE = "doc"; private static final String SECURITY_GENERIC_TYPE = "doc";
private final Client client; private final Client client;
private final boolean isTribeNode;
private final SecurityLifecycleService securityLifecycleService; private final SecurityLifecycleService securityLifecycleService;
private final List<String> realmsToRefresh = new CopyOnWriteArrayList<>(); private final List<String> realmsToRefresh = new CopyOnWriteArrayList<>();
public NativeRoleMappingStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) { public NativeRoleMappingStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) {
super(settings); super(settings);
this.client = client; this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityLifecycleService = securityLifecycleService; this.securityLifecycleService = securityLifecycleService;
} }
@ -164,9 +161,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
private <Request, Result> void modifyMapping(String name, CheckedBiConsumer<Request, ActionListener<Result>, Exception> inner, private <Request, Result> void modifyMapping(String name, CheckedBiConsumer<Request, ActionListener<Result>, Exception> inner,
Request request, ActionListener<Result> listener) { Request request, ActionListener<Result> listener) {
if (isTribeNode) { if (securityLifecycleService.isSecurityIndexOutOfDate()) {
listener.onFailure(new UnsupportedOperationException("role-mappings may not be modified using a tribe node"));
} else if (securityLifecycleService.isSecurityIndexOutOfDate()) {
listener.onFailure(new IllegalStateException( listener.onFailure(new IllegalStateException(
"Security index is not on the current version - the native realm will not be operational until " + "Security index is not on the current version - the native realm will not be operational until " +
"the upgrade API is run on the security index")); "the upgrade API is run on the security index"));

View File

@ -35,7 +35,6 @@ import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.license.LicenseUtils; import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.XPackLicenseState; import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper; import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheRequest; import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheRequest;
import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheResponse; import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheResponse;
@ -84,7 +83,6 @@ public class NativeRolesStore extends AbstractComponent {
private final Client client; private final Client client;
private final XPackLicenseState licenseState; private final XPackLicenseState licenseState;
private final boolean isTribeNode;
private SecurityClient securityClient; private SecurityClient securityClient;
private final SecurityLifecycleService securityLifecycleService; private final SecurityLifecycleService securityLifecycleService;
@ -93,7 +91,6 @@ public class NativeRolesStore extends AbstractComponent {
SecurityLifecycleService securityLifecycleService) { SecurityLifecycleService securityLifecycleService) {
super(settings); super(settings);
this.client = client; this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityClient = new SecurityClient(client); this.securityClient = new SecurityClient(client);
this.licenseState = licenseState; this.licenseState = licenseState;
this.securityLifecycleService = securityLifecycleService; this.securityLifecycleService = securityLifecycleService;
@ -136,35 +133,29 @@ public class NativeRolesStore extends AbstractComponent {
} }
public void deleteRole(final DeleteRoleRequest deleteRoleRequest, final ActionListener<Boolean> listener) { public void deleteRole(final DeleteRoleRequest deleteRoleRequest, final ActionListener<Boolean> listener) {
if (isTribeNode) { securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
listener.onFailure(new UnsupportedOperationException("roles may not be deleted using a tribe node")); DeleteRequest request = client.prepareDelete(SecurityLifecycleService.SECURITY_INDEX_NAME,
} else { ROLE_DOC_TYPE, getIdForUser(deleteRoleRequest.name())).request();
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> { request.setRefreshPolicy(deleteRoleRequest.getRefreshPolicy());
DeleteRequest request = client.prepareDelete(SecurityLifecycleService.SECURITY_INDEX_NAME, executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request,
ROLE_DOC_TYPE, getIdForUser(deleteRoleRequest.name())).request(); new ActionListener<DeleteResponse>() {
request.setRefreshPolicy(deleteRoleRequest.getRefreshPolicy()); @Override
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request, public void onResponse(DeleteResponse deleteResponse) {
new ActionListener<DeleteResponse>() { clearRoleCache(deleteRoleRequest.name(), listener,
@Override deleteResponse.getResult() == DocWriteResponse.Result.DELETED);
public void onResponse(DeleteResponse deleteResponse) { }
clearRoleCache(deleteRoleRequest.name(), listener,
deleteResponse.getResult() == DocWriteResponse.Result.DELETED);
}
@Override @Override
public void onFailure(Exception e) { public void onFailure(Exception e) {
logger.error("failed to delete role from the index", e); logger.error("failed to delete role from the index", e);
listener.onFailure(e); listener.onFailure(e);
} }
}, client::delete); }, client::delete);
}); });
}
} }
public void putRole(final PutRoleRequest request, final RoleDescriptor role, final ActionListener<Boolean> listener) { public void putRole(final PutRoleRequest request, final RoleDescriptor role, final ActionListener<Boolean> listener) {
if (isTribeNode) { if (licenseState.isDocumentAndFieldLevelSecurityAllowed()) {
listener.onFailure(new UnsupportedOperationException("roles may not be created or modified using a tribe node"));
} else if (licenseState.isDocumentAndFieldLevelSecurityAllowed()) {
innerPutRole(request, role, listener); innerPutRole(request, role, listener);
} else if (role.isUsingDocumentOrFieldLevelSecurity()) { } else if (role.isUsingDocumentOrFieldLevelSecurity()) {
listener.onFailure(LicenseUtils.newComplianceException("field and document level security")); listener.onFailure(LicenseUtils.newComplianceException("field and document level security"));

View File

@ -21,124 +21,6 @@ import static org.hamcrest.Matchers.nullValue;
public class SecuritySettingsTests extends ESTestCase { public class SecuritySettingsTests extends ESTestCase {
private static final String TRIBE_T1_SECURITY_ENABLED = "tribe.t1." + XPackSettings.SECURITY_ENABLED.getKey();
private static final String TRIBE_T2_SECURITY_ENABLED = "tribe.t2." + XPackSettings.SECURITY_ENABLED.getKey();
public void testSecurityIsEnabledByDefaultOnTribes() {
Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put("tribe.on_conflict", "prefer_t1")
.build();
Settings additionalSettings = Security.additionalSettings(settings, true, false);
assertThat(additionalSettings.getAsBoolean(TRIBE_T1_SECURITY_ENABLED, null), equalTo(true));
assertThat(additionalSettings.getAsBoolean(TRIBE_T2_SECURITY_ENABLED, null), equalTo(true));
}
public void testSecurityDisabledOnATribe() {
Settings settings = Settings.builder().put("tribe.t1.cluster.name", "non_existing")
.put(TRIBE_T1_SECURITY_ENABLED, false)
.put("tribe.t2.cluster.name", "non_existing").build();
try {
Security.additionalSettings(settings, true, false);
fail("security cannot change the value of a setting that is already defined, so a exception should be thrown");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString(TRIBE_T1_SECURITY_ENABLED));
}
}
public void testSecurityDisabledOnTribesSecurityAlreadyMandatory() {
Settings settings = Settings.builder().put("tribe.t1.cluster.name", "non_existing")
.put(TRIBE_T1_SECURITY_ENABLED, false)
.put("tribe.t2.cluster.name", "non_existing")
.putList("tribe.t1.plugin.mandatory", "test_plugin", XPackField.NAME).build();
try {
Security.additionalSettings(settings, true, false);
fail("security cannot change the value of a setting that is already defined, so a exception should be thrown");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString(TRIBE_T1_SECURITY_ENABLED));
}
}
public void testSecuritySettingsCopiedForTribeNodes() {
Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing")
.put("tribe.on_conflict", "prefer_" + randomFrom("t1", "t2"))
.put("xpack.security.foo", "bar")
.put("xpack.security.bar", "foo")
.putList("xpack.security.something.else.here", new String[] { "foo", "bar" })
.build();
Settings additionalSettings = Security.additionalSettings(settings, true, false);
assertThat(additionalSettings.get("xpack.security.foo"), nullValue());
assertThat(additionalSettings.get("xpack.security.bar"), nullValue());
assertThat(additionalSettings.getAsList("xpack.security.something.else.here"), is(Collections.emptyList()));
assertThat(additionalSettings.get("tribe.t1.xpack.security.foo"), is("bar"));
assertThat(additionalSettings.get("tribe.t1.xpack.security.bar"), is("foo"));
assertThat(additionalSettings.getAsList("tribe.t1.xpack.security.something.else.here"), contains("foo", "bar"));
assertThat(additionalSettings.get("tribe.t2.xpack.security.foo"), is("bar"));
assertThat(additionalSettings.get("tribe.t2.xpack.security.bar"), is("foo"));
assertThat(additionalSettings.getAsList("tribe.t2.xpack.security.something.else.here"), contains("foo", "bar"));
assertThat(additionalSettings.get("tribe.on_conflict"), nullValue());
assertThat(additionalSettings.get("tribe.t1.on_conflict"), nullValue());
assertThat(additionalSettings.get("tribe.t2.on_conflict"), nullValue());
}
public void testOnConflictMustBeSetOnTribe() {
final Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.build();
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(settings, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
final Settings badOnConflict = Settings.builder().put(settings).put("tribe.on_conflict", randomFrom("any", "drop")).build();
e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(badOnConflict, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
Settings goodOnConflict = Settings.builder().put(settings).put("tribe.on_conflict", "prefer_" + randomFrom("t1", "t2")).build();
Settings additionalSettings = Security.additionalSettings(goodOnConflict, true, false);
assertNotNull(additionalSettings);
}
public void testOnConflictWithNoNativeRealms() {
final Settings noNative = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put(XPackSettings.RESERVED_REALM_ENABLED_SETTING.getKey(), false)
.put("xpack.security.authc.realms.foo.type", randomFrom("ldap", "pki", randomAlphaOfLengthBetween(1, 6)))
.build();
Settings additionalSettings = Security.additionalSettings(noNative, true, false);
assertNotNull(additionalSettings);
// still with the reserved realm
final Settings withReserved = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put("xpack.security.authc.realms.foo.type", randomFrom("ldap", "pki", randomAlphaOfLengthBetween(1, 6)))
.build();
IllegalArgumentException e = expectThrows(
IllegalArgumentException.class,
() -> Security.additionalSettings(withReserved, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
// reserved disabled but no realms defined
final Settings reservedDisabled = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put(XPackSettings.RESERVED_REALM_ENABLED_SETTING.getKey(), false)
.build();
e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(reservedDisabled, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
}
public void testValidAutoCreateIndex() { public void testValidAutoCreateIndex() {
Security.validateAutoCreateIndex(Settings.EMPTY); Security.validateAutoCreateIndex(Settings.EMPTY);
Security.validateAutoCreateIndex(Settings.builder().put("action.auto_create_index", true).build()); Security.validateAutoCreateIndex(Settings.builder().put("action.auto_create_index", true).build());

View File

@ -5,6 +5,7 @@
*/ */
package org.elasticsearch.xpack.ssl; package org.elasticsearch.xpack.ssl;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.bouncycastle.asn1.x509.GeneralNames; import org.bouncycastle.asn1.x509.GeneralNames;
import org.bouncycastle.openssl.jcajce.JcaPEMWriter; import org.bouncycastle.openssl.jcajce.JcaPEMWriter;
import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchException;
@ -26,7 +27,6 @@ import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.SSLSocket; import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.SSLSocketFactory;
import javax.security.auth.x500.X500Principal; import javax.security.auth.x500.X500Principal;
import java.io.BufferedWriter; import java.io.BufferedWriter;
import java.io.IOException; import java.io.IOException;
import java.net.SocketException; import java.net.SocketException;
@ -147,6 +147,8 @@ public class SSLTrustRestrictionsTests extends SecurityIntegTestCase {
try { try {
tryConnect(trustedCert); tryConnect(trustedCert);
} catch (SSLHandshakeException | SocketException ex) { } catch (SSLHandshakeException | SocketException ex) {
logger.warn(new ParameterizedMessage("unexpected handshake failure with certificate [{}] [{}]",
trustedCert.certificate.getSubjectDN(), trustedCert.certificate.getSubjectAlternativeNames()), ex);
fail("handshake should have been successful, but failed with " + ex); fail("handshake should have been successful, but failed with " + ex);
} }
} }

View File

@ -1254,3 +1254,19 @@
} }
] ]
} }
---
"Test function shortcut expansion":
- do:
xpack.ml.put_job:
job_id: jobs-function-shortcut-expansion
body: >
{
"analysis_config" : {
"bucket_span": "1h",
"detectors" :[{"function":"nzc","by_field_name":"airline"}]
},
"data_description" : {}
}
- match: { job_id: "jobs-function-shortcut-expansion" }
- match: { analysis_config.detectors.0.function: "non_zero_count"}

View File

@ -65,6 +65,7 @@ public class HttpClient extends AbstractComponent {
private final CloseableHttpClient client; private final CloseableHttpClient client;
private final Integer proxyPort; private final Integer proxyPort;
private final String proxyHost; private final String proxyHost;
private final String proxyScheme;
private final TimeValue defaultConnectionTimeout; private final TimeValue defaultConnectionTimeout;
private final TimeValue defaultReadTimeout; private final TimeValue defaultReadTimeout;
private final ByteSizeValue maxResponseSize; private final ByteSizeValue maxResponseSize;
@ -78,6 +79,7 @@ public class HttpClient extends AbstractComponent {
// proxy setup // proxy setup
this.proxyHost = HttpSettings.PROXY_HOST.get(settings); this.proxyHost = HttpSettings.PROXY_HOST.get(settings);
this.proxyScheme = HttpSettings.PROXY_SCHEME.exists(settings) ? HttpSettings.PROXY_SCHEME.get(settings) : null;
this.proxyPort = HttpSettings.PROXY_PORT.get(settings); this.proxyPort = HttpSettings.PROXY_PORT.get(settings);
if (proxyPort != 0 && Strings.hasText(proxyHost)) { if (proxyPort != 0 && Strings.hasText(proxyHost)) {
logger.info("Using default proxy for http input and slack/hipchat/pagerduty/webhook actions [{}:{}]", proxyHost, proxyPort); logger.info("Using default proxy for http input and slack/hipchat/pagerduty/webhook actions [{}:{}]", proxyHost, proxyPort);
@ -139,10 +141,14 @@ public class HttpClient extends AbstractComponent {
// proxy // proxy
if (request.proxy != null && request.proxy.equals(HttpProxy.NO_PROXY) == false) { if (request.proxy != null && request.proxy.equals(HttpProxy.NO_PROXY) == false) {
HttpHost proxy = new HttpHost(request.proxy.getHost(), request.proxy.getPort(), request.scheme.scheme()); // if a proxy scheme is configured use it, but fall back to the scheme of the request in case no special
// configuration was given
String scheme = request.proxy.getScheme() != null ? request.proxy.getScheme().scheme() : request.scheme.scheme();
HttpHost proxy = new HttpHost(request.proxy.getHost(), request.proxy.getPort(), scheme);
config.setProxy(proxy); config.setProxy(proxy);
} else if (proxyPort != null && Strings.hasText(proxyHost)) { } else if (proxyPort != null && Strings.hasText(proxyHost)) {
HttpHost proxy = new HttpHost(proxyHost, proxyPort, request.scheme.scheme()); String scheme = proxyScheme != null ? proxyScheme : request.scheme.scheme();
HttpHost proxy = new HttpHost(proxyHost, proxyPort, scheme);
config.setProxy(proxy); config.setProxy(proxy);
} }
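The proxy now honours a dedicated scheme setting and only falls back to the request's scheme when none is configured. A sketch of that fallback, assuming Apache HttpClient's HttpHost (which the Watcher HTTP client builds on); the host names are placeholders:

[source,java]
----------------------------------
import org.apache.http.HttpHost;

public class ProxySchemeFallback {
    static HttpHost proxyFor(String proxyHost, int proxyPort,
                             String configuredProxyScheme, String requestScheme) {
        // a proxy-specific scheme wins; otherwise inherit the request's scheme
        String scheme = configuredProxyScheme != null ? configuredProxyScheme : requestScheme;
        return new HttpHost(proxyHost, proxyPort, scheme);
    }

    public static void main(String[] args) {
        // no proxy scheme configured -> https, inherited from the request
        System.out.println(proxyFor("proxy.example.com", 8080, null, "https"));
        // explicit proxy scheme -> http, regardless of the request
        System.out.println(proxyFor("proxy.example.com", 8080, "http", "https"));
    }
}
----------------------------------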

View File

@ -22,34 +22,37 @@ import java.net.Proxy;
import java.net.UnknownHostException;
import java.util.Objects;
- public class HttpProxy implements ToXContentFragment, Streamable {
public class HttpProxy implements ToXContentFragment {
- public static final HttpProxy NO_PROXY = new HttpProxy(null, null);
public static final HttpProxy NO_PROXY = new HttpProxy(null, null, null);
private static final ParseField HOST = new ParseField("host");
private static final ParseField PORT = new ParseField("port");
private static final ParseField SCHEME = new ParseField("scheme");
private String host;
private Integer port;
private Scheme scheme;
public HttpProxy(String host, Integer port) {
this.host = host;
this.port = port;
}
- @Override
- public void readFrom(StreamInput in) throws IOException {
- host = in.readOptionalString();
- port = in.readOptionalVInt();
- }
- @Override
- public void writeTo(StreamOutput out) throws IOException {
- out.writeOptionalString(host);
- out.writeOptionalVInt(port);
- }
public HttpProxy(String host, Integer port, Scheme scheme) {
this.host = host;
this.port = port;
this.scheme = scheme;
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
if (Strings.hasText(host) && port != null) {
- builder.startObject("proxy").field("host", host).field("port", port).endObject();
builder.startObject("proxy").field("host", host).field("port", port);
if (scheme != null) {
builder.field("scheme", scheme.scheme());
}
builder.endObject();
}
return builder;
}
@ -62,12 +65,8 @@ public class HttpProxy implements ToXContentFragment, Streamable {
return port;
}
- public Proxy proxy() throws UnknownHostException {
- if (Strings.hasText(host) && port != null) {
- return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(InetAddress.getByName(host), port));
- }
- return Proxy.NO_PROXY;
- }
public Scheme getScheme() {
return scheme;
}
@Override
@ -77,12 +76,12 @@ public class HttpProxy implements ToXContentFragment, Streamable {
HttpProxy that = (HttpProxy) o;
- return Objects.equals(port, that.port) && Objects.equals(host, that.host);
return Objects.equals(port, that.port) && Objects.equals(host, that.host) && Objects.equals(scheme, that.scheme);
}
@Override
public int hashCode() {
- return Objects.hash(host, port);
return Objects.hash(host, port, scheme);
}
@ -91,13 +90,16 @@ public class HttpProxy implements ToXContentFragment, Streamable {
String currentFieldName = null;
String host = null;
Integer port = null;
Scheme scheme = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
- } else if (Field.HOST.match(currentFieldName)) {
} else if (HOST.match(currentFieldName)) {
host = parser.text();
- } else if (Field.PORT.match(currentFieldName)) {
} else if (SCHEME.match(currentFieldName)) {
scheme = Scheme.parse(parser.text());
} else if (PORT.match(currentFieldName)) {
port = parser.intValue();
if (port <= 0 || port >= 65535) {
throw new ElasticsearchParseException("Proxy port must be between 1 and 65534, but was " + port);
@ -109,11 +111,6 @@ public class HttpProxy implements ToXContentFragment, Streamable {
throw new ElasticsearchParseException("Proxy must contain 'port' and 'host' field"); throw new ElasticsearchParseException("Proxy must contain 'port' and 'host' field");
} }
return new HttpProxy(host, port); return new HttpProxy(host, port, scheme);
}
public interface Field {
ParseField HOST = new ParseField("host");
ParseField PORT = new ParseField("port");
} }
} }
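With the optional `scheme` field parsed above, a proxy can now use a different scheme than the request it carries. A minimal sketch, assuming the Watcher `http` input request syntax (hosts and ports are illustrative); per the `getScheme()` fallback above, omitting `scheme` keeps the request's own scheme:

[source,yaml]
----
input:
  http:
    request:
      host: example.org
      port: 80
      proxy:
        host: localhost
        port: 3128
        scheme: https
----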

View File

@ -6,6 +6,7 @@
package org.elasticsearch.xpack.watcher.common.http;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
@ -23,22 +24,24 @@ public class HttpSettings {
private static final TimeValue DEFAULT_CONNECTION_TIMEOUT = DEFAULT_READ_TIMEOUT;
static final Setting<TimeValue> READ_TIMEOUT = Setting.timeSetting("xpack.http.default_read_timeout",
- DEFAULT_READ_TIMEOUT, Setting.Property.NodeScope);
DEFAULT_READ_TIMEOUT, Property.NodeScope);
static final Setting<TimeValue> CONNECTION_TIMEOUT = Setting.timeSetting("xpack.http.default_connection_timeout",
- DEFAULT_CONNECTION_TIMEOUT, Setting.Property.NodeScope);
DEFAULT_CONNECTION_TIMEOUT, Property.NodeScope);
- static final String PROXY_HOST_KEY = "xpack.http.proxy.host";
- static final String PROXY_PORT_KEY = "xpack.http.proxy.port";
- static final String SSL_KEY_PREFIX = "xpack.http.ssl.";
private static final String PROXY_HOST_KEY = "xpack.http.proxy.host";
private static final String PROXY_PORT_KEY = "xpack.http.proxy.port";
private static final String PROXY_SCHEME_KEY = "xpack.http.proxy.scheme";
private static final String SSL_KEY_PREFIX = "xpack.http.ssl.";
- static final Setting<String> PROXY_HOST = Setting.simpleString(PROXY_HOST_KEY, Setting.Property.NodeScope);
- static final Setting<Integer> PROXY_PORT = Setting.intSetting(PROXY_PORT_KEY, 0, 0, 0xFFFF, Setting.Property.NodeScope);
static final Setting<String> PROXY_HOST = Setting.simpleString(PROXY_HOST_KEY, Property.NodeScope);
static final Setting<String> PROXY_SCHEME = Setting.simpleString(PROXY_SCHEME_KEY, (v, s) -> Scheme.parse(v), Property.NodeScope);
static final Setting<Integer> PROXY_PORT = Setting.intSetting(PROXY_PORT_KEY, 0, 0, 0xFFFF, Property.NodeScope);
static final Setting<ByteSizeValue> MAX_HTTP_RESPONSE_SIZE = Setting.byteSizeSetting("xpack.http.max_response_size",
new ByteSizeValue(10, ByteSizeUnit.MB), // default
new ByteSizeValue(1, ByteSizeUnit.BYTES), // min
new ByteSizeValue(50, ByteSizeUnit.MB), // max
- Setting.Property.NodeScope);
Property.NodeScope);
private static final SSLConfigurationSettings SSL = SSLConfigurationSettings.withPrefix(SSL_KEY_PREFIX);
@ -49,6 +52,7 @@ public class HttpSettings {
settings.add(CONNECTION_TIMEOUT);
settings.add(PROXY_HOST);
settings.add(PROXY_PORT);
settings.add(PROXY_SCHEME);
settings.add(MAX_HTTP_RESPONSE_SIZE);
return settings;
}
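These keys provide node-wide proxy defaults for all Watcher HTTP traffic. A minimal `elasticsearch.yml` sketch using the setting keys registered above (the proxy host is a placeholder):

[source,yaml]
----
xpack.http.proxy.host: proxy.example.org
xpack.http.proxy.port: 3128
xpack.http.proxy.scheme: https
----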

View File

@ -323,6 +323,50 @@ public class HttpClientTests extends ESTestCase {
}
}
public void testProxyCanHaveDifferentSchemeThanRequest() throws Exception {
// this test fakes a proxy server that sends a response instead of forwarding it to the mock web server
// on top of that the proxy request is HTTPS but the real request is HTTP only
MockSecureSettings serverSecureSettings = new MockSecureSettings();
// We can't use the client created above for the server since it is only a truststore
serverSecureSettings.setString("xpack.ssl.keystore.secure_password", "testnode");
Settings serverSettings = Settings.builder()
.put("xpack.ssl.keystore.path", getDataPath("/org/elasticsearch/xpack/security/keystore/testnode.jks"))
.setSecureSettings(serverSecureSettings)
.build();
TestsSSLService sslService = new TestsSSLService(serverSettings, environment);
try (MockWebServer proxyServer = new MockWebServer(sslService.sslContext(), false)) {
proxyServer.enqueue(new MockResponse().setResponseCode(200).setBody("fullProxiedContent"));
proxyServer.start();
Path resource = getDataPath("/org/elasticsearch/xpack/security/keystore/truststore-testnode-only.jks");
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("xpack.http.ssl.truststore.secure_password", "truststore-testnode-only");
Settings settings = Settings.builder()
.put(HttpSettings.PROXY_HOST.getKey(), "localhost")
.put(HttpSettings.PROXY_PORT.getKey(), proxyServer.getPort())
.put(HttpSettings.PROXY_SCHEME.getKey(), "https")
.put("xpack.http.ssl.truststore.path", resource.toString())
.setSecureSettings(secureSettings)
.build();
HttpClient httpClient = new HttpClient(settings, authRegistry, new SSLService(settings, environment));
HttpRequest.Builder requestBuilder = HttpRequest.builder("localhost", webServer.getPort())
.method(HttpMethod.GET)
.scheme(Scheme.HTTP)
.path("/");
HttpResponse response = httpClient.execute(requestBuilder.build());
assertThat(response.status(), equalTo(200));
assertThat(response.body().utf8ToString(), equalTo("fullProxiedContent"));
// ensure we hit the proxyServer and not the webserver
assertThat(webServer.requests(), hasSize(0));
assertThat(proxyServer.requests(), hasSize(1));
}
}
public void testThatProxyCanBeOverriddenByRequest() throws Exception {
// this test fakes a proxy server that sends a response instead of forwarding it to the mock web server
try (MockWebServer proxyServer = new MockWebServer()) {
@ -331,12 +375,13 @@ public class HttpClientTests extends ESTestCase {
Settings settings = Settings.builder()
.put(HttpSettings.PROXY_HOST.getKey(), "localhost")
.put(HttpSettings.PROXY_PORT.getKey(), proxyServer.getPort() + 1)
.put(HttpSettings.PROXY_SCHEME.getKey(), "https")
.build();
HttpClient httpClient = new HttpClient(settings, authRegistry, new SSLService(settings, environment));
HttpRequest.Builder requestBuilder = HttpRequest.builder("localhost", webServer.getPort())
.method(HttpMethod.GET)
- .proxy(new HttpProxy("localhost", proxyServer.getPort()))
.proxy(new HttpProxy("localhost", proxyServer.getPort(), Scheme.HTTP))
.path("/");
HttpResponse response = httpClient.execute(requestBuilder.build());
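A request-level proxy overrides those node defaults, which is what the test above exercises. A hedged sketch of the equivalent watch `webhook` action definition (names and ports are illustrative):

[source,yaml]
----
actions:
  notify:
    webhook:
      scheme: http
      host: localhost
      port: 9200
      path: /
      proxy:
        host: localhost
        port: 8080
        scheme: http
----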

View File

@ -0,0 +1,109 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.watcher.common.http;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.test.ESTestCase;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.nullValue;
public class HttpProxyTests extends ESTestCase {
public void testParser() throws Exception {
int port = randomIntBetween(1, 65000);
String host = randomAlphaOfLength(10);
XContentBuilder builder = jsonBuilder().startObject().field("host", host).field("port", port);
boolean isSchemeConfigured = randomBoolean();
String scheme = null;
if (isSchemeConfigured) {
scheme = randomFrom(Scheme.values()).scheme();
builder.field("scheme", scheme);
}
builder.endObject();
try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)
.createParser(NamedXContentRegistry.EMPTY, builder.bytes())) {
parser.nextToken();
HttpProxy proxy = HttpProxy.parse(parser);
assertThat(proxy.getHost(), is(host));
assertThat(proxy.getPort(), is(port));
if (isSchemeConfigured) {
assertThat(proxy.getScheme().scheme(), is(scheme));
} else {
assertThat(proxy.getScheme(), is(nullValue()));
}
}
}
public void testParserValidScheme() throws Exception {
XContentBuilder builder = jsonBuilder().startObject()
.field("host", "localhost").field("port", 12345).field("scheme", "invalid")
.endObject();
try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)
.createParser(NamedXContentRegistry.EMPTY, builder.bytes())) {
parser.nextToken();
expectThrows(IllegalArgumentException.class, () -> HttpProxy.parse(parser));
}
}
public void testParserValidPortRange() throws Exception {
XContentBuilder builder = jsonBuilder().startObject()
.field("host", "localhost").field("port", -1)
.endObject();
try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)
.createParser(NamedXContentRegistry.EMPTY, builder.bytes())) {
parser.nextToken();
expectThrows(ElasticsearchParseException.class, () -> HttpProxy.parse(parser));
}
}
public void testParserNoHost() throws Exception {
XContentBuilder builder = jsonBuilder().startObject()
.field("port", -1)
.endObject();
try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)
.createParser(NamedXContentRegistry.EMPTY, builder.bytes())) {
parser.nextToken();
expectThrows(ElasticsearchParseException.class, () -> HttpProxy.parse(parser));
}
}
public void testParserNoPort() throws Exception {
XContentBuilder builder = jsonBuilder().startObject()
.field("host", "localhost")
.endObject();
try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)
.createParser(NamedXContentRegistry.EMPTY, builder.bytes())) {
parser.nextToken();
expectThrows(ElasticsearchParseException.class, () -> HttpProxy.parse(parser));
}
}
public void testToXContent() throws Exception {
try (XContentBuilder builder = jsonBuilder()) {
builder.startObject();
HttpProxy proxy = new HttpProxy("localhost", 3128);
proxy.toXContent(builder, ToXContent.EMPTY_PARAMS);
builder.endObject();
assertThat(builder.string(), is("{\"proxy\":{\"host\":\"localhost\",\"port\":3128}}"));
}
try (XContentBuilder builder = jsonBuilder()) {
builder.startObject();
HttpProxy httpsProxy = new HttpProxy("localhost", 3128, Scheme.HTTPS);
httpsProxy.toXContent(builder, ToXContent.EMPTY_PARAMS);
builder.endObject();
assertThat(builder.string(), is("{\"proxy\":{\"host\":\"localhost\",\"port\":3128,\"scheme\":\"https\"}}"));
}
}
}

View File

@ -122,13 +122,10 @@ public class EmailSecretsIntegrationTests extends AbstractWatcherIntegrationTest
assertThat(value, nullValue());
// now we restart, to make sure the watches and their secrets are reloaded from the index properly
- assertAcked(watcherClient.prepareWatchService().stop().get());
- ensureWatcherStopped();
- assertAcked(watcherClient.prepareWatchService().start().get());
- ensureWatcherStarted();
stopWatcher();
startWatcher();
// now lets execute the watch manually
final CountDownLatch latch = new CountDownLatch(1);
server.addListener(message -> {
assertThat(message.getSubject(), is("_subject"));

View File

@ -469,7 +469,7 @@ public abstract class AbstractWatcherIntegrationTestCase extends ESIntegTestCase
});
}
- protected void ensureWatcherStarted() throws Exception {
protected void startWatcher() throws Exception {
ensureWatcherTemplatesAdded();
assertBusy(() -> {
WatcherStatsResponse watcherStatsResponse = watcherClient().prepareWatcherStats().get();
@ -479,6 +479,11 @@ public abstract class AbstractWatcherIntegrationTestCase extends ESIntegTestCase
.collect(Collectors.toList());
List<WatcherState> states = currentStatesFromStatsRequest.stream().map(Tuple::v2).collect(Collectors.toList());
boolean isStateStarted = states.stream().allMatch(w -> w == WatcherState.STARTED);
if (isStateStarted == false) {
assertAcked(watcherClient().prepareWatchService().start().get());
}
String message = String.format(Locale.ROOT, "Expected watcher to be started, but state was %s", currentStatesFromStatsRequest);
assertThat(message, states, everyItem(is(WatcherState.STARTED)));
});
@ -493,7 +498,7 @@ public abstract class AbstractWatcherIntegrationTestCase extends ESIntegTestCase
});
}
- protected void ensureWatcherStopped() throws Exception {
protected void stopWatcher() throws Exception {
assertBusy(() -> {
WatcherStatsResponse watcherStatsResponse = watcherClient().prepareWatcherStats().get();
assertThat(watcherStatsResponse.hasFailures(), is(false));
@ -502,21 +507,16 @@ public abstract class AbstractWatcherIntegrationTestCase extends ESIntegTestCase
.collect(Collectors.toList());
List<WatcherState> states = currentStatesFromStatsRequest.stream().map(Tuple::v2).collect(Collectors.toList());
boolean isStateStopped = states.stream().allMatch(w -> w == WatcherState.STOPPED);
if (isStateStopped == false) {
assertAcked(watcherClient().prepareWatchService().stop().get());
}
String message = String.format(Locale.ROOT, "Expected watcher to be stopped, but state was %s", currentStatesFromStatsRequest);
assertThat(message, states, everyItem(is(WatcherState.STOPPED)));
});
}
- protected void startWatcher() throws Exception {
- assertAcked(watcherClient().prepareWatchService().start().get());
- ensureWatcherStarted();
- }
- protected void stopWatcher() throws Exception {
- assertAcked(watcherClient().prepareWatchService().stop().get());
- ensureWatcherStopped();
- }
public static class NoopEmailService extends EmailService {
public NoopEmailService() {

View File

@ -78,7 +78,7 @@ public class BasicWatcherTests extends AbstractWatcherIntegrationTestCase {
.addAction("_logger", loggingAction("_logging") .addAction("_logger", loggingAction("_logging")
.setCategory("_category"))) .setCategory("_category")))
.get(); .get();
ensureWatcherStarted();
timeWarp().trigger("_name"); timeWarp().trigger("_name");
assertWatchWithMinimumPerformedActionsCount("_name", 1); assertWatchWithMinimumPerformedActionsCount("_name", 1);

View File

@ -138,7 +138,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
.setRefreshPolicy(IMMEDIATE)
.get();
- ensureWatcherStarted();
stopWatcher();
startWatcher();
@ -148,7 +147,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
@AwaitsFix(bugUrl = "https://github.com/elastic/x-pack-elasticsearch/issues/1915")
public void testLoadExistingWatchesUponStartup() throws Exception {
- ensureWatcherStarted();
stopWatcher();
int numWatches = scaledRandomIntBetween(16, 128);
@ -202,7 +200,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
).get();
}
- ensureWatcherStarted();
stopWatcher();
DateTime now = DateTime.now(UTC);
@ -249,7 +246,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
.defaultThrottlePeriod(TimeValue.timeValueMillis(0))
).get();
- ensureWatcherStarted();
stopWatcher();
DateTime now = DateTime.now(UTC);
@ -300,7 +296,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
}
public void testManuallyStopped() throws Exception {
- ensureWatcherStarted();
WatcherStatsResponse response = watcherClient().prepareWatcherStats().get();
assertThat(response.watcherMetaData().manuallyStopped(), is(false));
stopWatcher();
@ -323,7 +318,6 @@ public class BootStrapTests extends AbstractWatcherIntegrationTestCase {
final String watchRecordIndex = HistoryStoreField.getHistoryIndexNameForTime(triggeredTime);
logger.info("Stopping watcher");
- ensureWatcherStarted();
stopWatcher();
BulkRequestBuilder bulkRequestBuilder = client().prepareBulk();

View File

@ -90,7 +90,6 @@ public class HistoryIntegrationTests extends AbstractWatcherIntegrationTestCase
.addAction("_logger", loggingAction("#### randomLogging"))) .addAction("_logger", loggingAction("#### randomLogging")))
.get(); .get();
ensureWatcherStarted();
watcherClient().prepareExecuteWatch("test_watch").setRecordExecution(true).get(); watcherClient().prepareExecuteWatch("test_watch").setRecordExecution(true).get();
refresh(".watcher-history*"); refresh(".watcher-history*");

View File

@ -237,11 +237,8 @@ public class WatchAckTests extends AbstractWatcherIntegrationTestCase {
private void restartWatcherRandomly() throws Exception {
if (randomBoolean()) {
- ensureWatcherStarted();
stopWatcher();
- ensureWatcherStopped();
startWatcher();
- ensureWatcherStarted();
}
}
}

View File

@ -157,7 +157,6 @@ public class ActivateWatchTests extends AbstractWatcherIntegrationTestCase {
assertThat(indexResponse.getId(), is("_id"));
// now, let's restart
- ensureWatcherStarted();
stopWatcher();
startWatcher();

View File

@ -41,8 +41,6 @@ public class DeleteWatchTests extends AbstractWatcherIntegrationTestCase {
// Also the watch history is checked, that the error has been marked as deleted
// The mock webserver does not support count down latches, so we have to use sleep - sorry!
public void testWatchDeletionDuringExecutionWorks() throws Exception {
- ensureWatcherStarted();
MockResponse response = new MockResponse();
response.setBody("foo");
response.setResponseCode(200);

View File

@ -192,3 +192,18 @@
},
"data_description" : {}
}
---
"Test function shortcut expansion":
- do:
xpack.ml.put_job:
job_id: old-cluster-function-shortcut-expansion
body: >
{
"analysis_config" : {
"bucket_span": "1h",
"detectors" :[{"function":"nzc","by_field_name":"airline"}]
},
"data_description" : {}
}
- match: { job_id: "old-cluster-function-shortcut-expansion" }

View File

@ -118,3 +118,12 @@ setup:
}
]
}
---
"Test get job with function shortcut should expand":
- do:
xpack.ml.get_jobs:
job_id: old-cluster-function-shortcut-expansion
- match: { count: 1 }
- match: { jobs.0.analysis_config.detectors.0.function: "non_zero_count" }

View File

@ -1,176 +0,0 @@
<?xml version="1.0"?>
<!--
~ ELASTICSEARCH CONFIDENTIAL
~ __________________
~
~ [2014] Elasticsearch Incorporated. All Rights Reserved.
~
~ NOTICE: All information contained herein is, and remains
~ the property of Elasticsearch Incorporated and its suppliers,
~ if any. The intellectual and technical concepts contained
~ herein are proprietary to Elasticsearch Incorporated
~ and its suppliers and may be covered by U.S. and Foreign Patents,
~ patents in process, and are protected by trade secret or copyright law.
~ Dissemination of this information or reproduction of this material
~ is strictly forbidden unless prior written permission is obtained
~ from Elasticsearch Incorporated.
-->
<project name="smoke-test-tribe-node-with-security"
xmlns:ac="antlib:net.sf.antcontrib">
<taskdef name="xhttp" classname="org.elasticsearch.ant.HttpTask" classpath="${test_classpath}" />
<typedef name="xhttp" classname="org.elasticsearch.ant.HttpCondition" classpath="${test_classpath}"/>
<import file="${elasticsearch.integ.antfile.default}"/>
<import file="${elasticsearch.tools.directory}/ant/security-overrides.xml"/>
<property name="tribe_node.pidfile" location="${integ.scratch}/tribe-node.pid"/>
<available property="tribe_node.pidfile.exists" file="${tribe_node.pidfile}"/>
<property name="cluster1.pidfile" location="${integ.scratch}/cluster1.pid"/>
<available property="cluster1.pidfile.exists" file="${cluster1.pidfile}"/>
<property name="cluster2.pidfile" location="${integ.scratch}/cluster2.pid"/>
<available property="cluster2.pidfile.exists" file="${cluster2.pidfile}"/>
<macrodef name="create-index">
<attribute name="name" />
<attribute name="port" />
<sequential>
<xhttp uri="http://127.0.0.1:@{port}/@{name}" method="PUT" username="test_admin" password="x-pack-test-password" />
<waitfor maxwait="30" maxwaitunit="second"
checkevery="500" checkeveryunit="millisecond"
timeoutproperty="@{timeoutproperty}">
<xhttp uri="http://127.0.0.1:@{port}/_cluster/health/@{name}?wait_for_status=yellow" username="test_admin" password="x-pack-test-password" />
</waitfor>
</sequential>
</macrodef>
<target name="start-tribe-node-and-2-clusters-with-security" depends="setup-workspace">
<ac:for list="${xplugins.list}" param="xplugin.name">
<sequential>
<fail message="Expected @{xplugin.name}-${version}.zip as a dependency, but could not be found in ${integ.deps}/plugins}">
<condition>
<not>
<available file="${integ.deps}/plugins/@{xplugin.name}-${elasticsearch.version}.zip"/>
</not>
</condition>
</fail>
</sequential>
</ac:for>
<ac:for param="file">
<path>
<fileset dir="${integ.deps}/plugins"/>
</path>
<sequential>
<local name="plugin.name"/>
<convert-plugin-name file="@{file}" outputproperty="plugin.name"/>
<install-plugin name="${plugin.name}" file="@{file}"/>
</sequential>
</ac:for>
<local name="home"/>
<property name="home" location="${integ.scratch}/elasticsearch-${elasticsearch.version}"/>
<echo>Adding roles.yml</echo>
<copy file="roles.yml" tofile="${home}/config/x-pack/roles.yml" overwrite="true"/>
<echo>Adding security users...</echo>
<run-script script="${home}/bin/x-pack/esusers">
<nested>
<arg value="useradd"/>
<arg value="test_admin"/>
<arg value="-p"/>
<arg value="x-pack-test-password"/>
<arg value="-r"/>
<arg value="admin"/>
</nested>
</run-script>
<echo>Starting two nodes, each node in a different cluster</echo>
<ac:trycatch property="failure.message">
<ac:try>
<startup-elasticsearch es.transport.tcp.port="9600"
es.http.port="9700"
es.pidfile="${cluster1.pidfile}"
es.unicast.hosts="127.0.0.1:9600"
es.cluster.name="cluster1"/>
</ac:try>
<ac:catch>
<echo>Failed to start cluster1 with message: ${failure.message}</echo>
<stop-node es.pidfile="${cluster1.pidfile}"/>
</ac:catch>
</ac:trycatch>
<ac:trycatch property="failure.message">
<ac:try>
<startup-elasticsearch es.transport.tcp.port="9800"
es.http.port="9900"
es.pidfile="${cluster2.pidfile}"
es.unicast.hosts="127.0.0.1:9800"
es.cluster.name="cluster2"/>
</ac:try>
<ac:catch>
<echo>Failed to start cluster2 with message: ${failure.message}</echo>
<stop-node es.pidfile="${cluster1.pidfile}"/>
<stop-node es.pidfile="${cluster2.pidfile}"/>
</ac:catch>
</ac:trycatch>
<ac:trycatch property="failure.message">
<ac:try>
<echo>Starting a tribe node, configured to connect to cluster1 and cluster2</echo>
<startup-elasticsearch es.pidfile="${tribe_node.pidfile}">
<additional-args>
<arg value="-Des.tribe.cluster1.cluster.name=cluster1"/>
<arg value="-Des.tribe.cluster1.discovery.zen.ping.unicast.hosts=127.0.0.1:9600"/>
<arg value="-Des.tribe.cluster2.cluster.name=cluster2"/>
<arg value="-Des.tribe.cluster2.discovery.zen.ping.unicast.hosts=127.0.0.1:9800"/>
</additional-args>
</startup-elasticsearch>
<xhttp uri="http://127.0.0.1:${integ.http.port}/_cluster/health?wait_for_nodes=5" username="test_admin" password="changeme" />
<!--
From the rest tests we only connect to the tribe node, so we need create the indices externally:
By creating the index after the tribe node has started we can be sure that the tribe node knows
about it. See: https://github.com/elastic/elasticsearch/issues/13292
-->
<echo>Creating index1 in cluster1</echo>
<create-index name="index1" port="9700"/>
<!-- TODO: remove this after we know why on CI the shards of index1 don't get into a started state -->
<loadfile property="cluster1-logs" srcFile="${integ.scratch}/elasticsearch-${elasticsearch.version}/logs/cluster1.log" />
<echo>post index1 creation es logs: ${cluster1-logs}</echo>
<echo>Creating index2 in cluster2</echo>
<create-index name="index2" port="9900"/>
<!-- TODO: remove this after we know why on CI the shards of index2 don't get into a started state -->
<loadfile property="cluster2-logs" srcFile="${integ.scratch}/elasticsearch-${elasticsearch.version}/logs/cluster2.log" />
<echo>post index2 creation es logs: ${cluster2-logs}</echo>
</ac:try>
<ac:catch>
<echo>Failed to start tribe node with message: ${failure.message}</echo>
<stop-node es.pidfile="${tribe_node.pidfile}"/>
<stop-node es.pidfile="${cluster1.pidfile}"/>
<stop-node es.pidfile="${cluster2.pidfile}"/>
</ac:catch>
</ac:trycatch>
</target>
<target name="stop-tribe-node" if="tribe_node.pidfile.exists">
<stop-node es.pidfile="${tribe_node.pidfile}"/>
</target>
<target name="stop-cluster1" if="cluster1.pidfile.exists">
<stop-node es.pidfile="${cluster1.pidfile}"/>
</target>
<target name="stop-cluster2" if="cluster2.pidfile.exists">
<stop-node es.pidfile="${cluster2.pidfile}"/>
</target>
<target name="stop-tribe-node-and-all-clusters" depends="stop-tribe-node,stop-cluster1,stop-cluster2"/>
</project>

View File

@ -1,27 +0,0 @@
---
"Tribe node search":
- do:
index:
index: index1
type: test
id: 1
body: { foo: bar }
- do:
index:
index: index2
type: test
id: 1
body: { foo: bar }
- do:
indices.refresh: {}
- do:
search:
index: index1,index2
body:
query: { term: { foo: bar }}
- match: { hits.total: 2 }

View File

@ -1,4 +0,0 @@
admin:
cluster: all
indices:
'*': all

View File

@ -1,25 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.ant;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.taskdefs.condition.Condition;
public class HttpCondition extends HttpTask implements Condition {
private int expectedResponseCode = 200;
@Override
public boolean eval() throws BuildException {
int responseCode = executeHttpRequest();
getProject().log("response code=" + responseCode);
return responseCode == expectedResponseCode;
}
public void setExpectedResponseCode(int expectedResponseCode) {
this.expectedResponseCode = expectedResponseCode;
}
}

View File

@ -1,82 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.ant;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;
import org.elasticsearch.common.Base64;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class HttpTask extends Task {
private String uri;
private String method;
private String body;
private String username;
private String password;
@Override
public void execute() throws BuildException {
int responseCode = executeHttpRequest();
getProject().log("response code=" + responseCode);
}
protected int executeHttpRequest() {
try {
URI uri = new URI(this.uri);
URL url = uri.toURL();
getProject().log("url=" + url);
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
if (method != null) {
urlConnection.setRequestMethod(method);
}
if (username != null) {
String basicAuth = "Basic " + Base64.encodeBytes((username + ":" + password).getBytes(StandardCharsets.UTF_8));
urlConnection.setRequestProperty("Authorization", basicAuth);
}
if (body != null) {
urlConnection.setDoOutput(true);
urlConnection.setRequestProperty("Accept-Charset", StandardCharsets.UTF_8.name());
byte[] bytes = body.getBytes(StandardCharsets.UTF_8.name());
urlConnection.setRequestProperty("Content-Length", String.valueOf(bytes.length));
urlConnection.getOutputStream().write(bytes);
urlConnection.getOutputStream().close();
}
urlConnection.connect();
int responseCode = urlConnection.getResponseCode();
urlConnection.disconnect();
return responseCode;
} catch (Exception e) {
throw new BuildException(e);
}
}
public void setUri(String uri) {
this.uri = uri;
}
public void setMethod(String method) {
this.method = method;
}
public void setBody(String body) {
this.body = body;
}
public void setUsername(String username) {
this.username = username;
}
public void setPassword(String password) {
this.password = password;
}
}

View File

@ -1,43 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import com.carrotsearch.randomizedtesting.annotations.Name;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.elasticsearch.client.support.Headers;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.security.authc.support.SecuredString;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.RestTestCandidate;
import org.elasticsearch.test.rest.parser.RestTestParseException;
import java.io.IOException;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
public class RestIT extends TribeRestTestCase {
private static final String USER = "test_admin";
private static final String PASS = "x-pack-test-password";
public RestIT(@Name("yaml") RestTestCandidate testCandidate) {
super(testCandidate);
}
@ParametersFactory
public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {
return ESRestTestCase.createParameters(0, 1);
}
@Override
protected Settings restClientSettings() {
String token = basicAuthHeaderValue(USER, new SecuredString(PASS.toCharArray()));
return Settings.builder()
.put(Headers.PREFIX + ".Authorization", token)
.build();
}
}

View File

@ -1,371 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import com.carrotsearch.randomizedtesting.RandomizedTest;
import com.carrotsearch.randomizedtesting.annotations.TestGroup;
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.LuceneTestCase.SuppressCodecs;
import org.apache.lucene.util.LuceneTestCase.SuppressFsync;
import org.apache.lucene.util.TimeUnits;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.RestTestCandidate;
import org.elasticsearch.test.rest.RestTestExecutionContext;
import org.elasticsearch.test.rest.client.RestException;
import org.elasticsearch.test.rest.parser.RestTestParseException;
import org.elasticsearch.test.rest.parser.RestTestSuiteParser;
import org.elasticsearch.test.rest.section.DoSection;
import org.elasticsearch.test.rest.section.ExecutableSection;
import org.elasticsearch.test.rest.section.RestTestSuite;
import org.elasticsearch.test.rest.section.SkipSection;
import org.elasticsearch.test.rest.section.TestSection;
import org.elasticsearch.test.rest.spec.RestApi;
import org.elasticsearch.test.rest.spec.RestSpec;
import org.elasticsearch.test.rest.support.FileUtils;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.UnknownHostException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Set;
/**
* Forked from RestTestCase with changes required to run rest tests via a tribe node
*
* Reasons for forking:
* 1) Always communicate via the tribe node from the tests. The original class in core connects to any endpoint it can see via the nodes info api and that would mean also the nodes part of the other clusters would be just as entry point. This should not happen for the tribe tests
* 2) The original class in core executes delete calls after each test, but the tribe node can't handle master level write operations. These api calls hang for 1m and then just fail.
* 3) The indices in cluster1 and cluster2 are created from the ant integ file and because of that the original class in core would just remove that in between tests.
* 4) extends ESTestCase instead if ESIntegTestCase and doesn't setup a test cluster and just connects to the one endpoint defined in the tests.rest.cluster.
*/
@ESRestTestCase.Rest
@SuppressFsync // we aren't trying to test this here, and it can make the test slow
@SuppressCodecs("*") // requires custom completion postings format
@ClusterScope(randomDynamicTemplates = false)
@TimeoutSuite(millis = 40 * TimeUnits.MINUTE) // timeout the suite after 40min and fail the test.
public abstract class TribeRestTestCase extends ESTestCase {
/**
* Property that allows to control whether the REST tests are run (default) or not
*/
public static final String TESTS_REST = "tests.rest";
public static final String TESTS_REST_CLUSTER = "tests.rest.cluster";
/**
* Annotation for REST tests
*/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@TestGroup(enabled = true, sysProperty = ESRestTestCase.TESTS_REST)
public @interface Rest {
}
/**
* Property that allows to control which REST tests get run. Supports comma separated list of tests
* or directories that contain tests e.g. -Dtests.rest.suite=index,get,create/10_with_id
*/
public static final String REST_TESTS_SUITE = "tests.rest.suite";
/**
* Property that allows to blacklist some of the REST tests based on a comma separated list of globs
* e.g. -Dtests.rest.blacklist=get/10_basic/*
*/
public static final String REST_TESTS_BLACKLIST = "tests.rest.blacklist";
/**
* Property that allows to control whether spec validation is enabled or not (default true).
*/
public static final String REST_TESTS_VALIDATE_SPEC = "tests.rest.validate_spec";
/**
* Property that allows to control where the REST spec files need to be loaded from
*/
public static final String REST_TESTS_SPEC = "tests.rest.spec";
public static final String REST_LOAD_PACKAGED_TESTS = "tests.rest.load_packaged";
private static final String DEFAULT_TESTS_PATH = "/rest-api-spec/test";
private static final String DEFAULT_SPEC_PATH = "/rest-api-spec/api";
private static final String PATHS_SEPARATOR = ",";
private final PathMatcher[] blacklistPathMatchers;
private static RestTestExecutionContext restTestExecutionContext;
private final RestTestCandidate testCandidate;
public TribeRestTestCase(RestTestCandidate testCandidate) {
this.testCandidate = testCandidate;
String[] blacklist = resolvePathsProperty(REST_TESTS_BLACKLIST, null);
if (blacklist != null) {
blacklistPathMatchers = new PathMatcher[blacklist.length];
int i = 0;
for (String glob : blacklist) {
blacklistPathMatchers[i++] = PathUtils.getDefaultFileSystem().getPathMatcher("glob:" + glob);
}
} else {
blacklistPathMatchers = new PathMatcher[0];
}
}
@Override
protected void afterIfFailed(List<Throwable> errors) {
logger.info("Stash dump on failure [{}]", XContentHelper.toString(restTestExecutionContext.stash()));
super.afterIfFailed(errors);
}
public static Iterable<Object[]> createParameters(int id, int count) throws IOException, RestTestParseException {
TestGroup testGroup = Rest.class.getAnnotation(TestGroup.class);
String sysProperty = TestGroup.Utilities.getSysProperty(Rest.class);
boolean enabled;
try {
enabled = RandomizedTest.systemPropertyAsBoolean(sysProperty, testGroup.enabled());
} catch (IllegalArgumentException e) {
// Ignore malformed system property, disable the group if malformed though.
enabled = false;
}
if (!enabled) {
return new ArrayList<>();
}
//parse tests only if rest test group is enabled, otherwise rest tests might not even be available on file system
List<RestTestCandidate> restTestCandidates = collectTestCandidates(id, count);
List<Object[]> objects = new ArrayList<>();
for (RestTestCandidate restTestCandidate : restTestCandidates) {
objects.add(new Object[]{restTestCandidate});
}
return objects;
}
private static List<RestTestCandidate> collectTestCandidates(int id, int count) throws RestTestParseException, IOException {
List<RestTestCandidate> testCandidates = new ArrayList<>();
FileSystem fileSystem = getFileSystem();
// don't make a try-with, getFileSystem returns null
// ... and you can't close() the default filesystem
try {
String[] paths = resolvePathsProperty(REST_TESTS_SUITE, DEFAULT_TESTS_PATH);
Map<String, Set<Path>> yamlSuites = FileUtils.findYamlSuites(fileSystem, DEFAULT_TESTS_PATH, paths);
RestTestSuiteParser restTestSuiteParser = new RestTestSuiteParser();
//yaml suites are grouped by directory (effectively by api)
for (String api : yamlSuites.keySet()) {
List<Path> yamlFiles = new ArrayList<>(yamlSuites.get(api));
for (Path yamlFile : yamlFiles) {
String key = api + yamlFile.getFileName().toString();
if (mustExecute(key, id, count)) {
RestTestSuite restTestSuite = restTestSuiteParser.parse(api, yamlFile);
for (TestSection testSection : restTestSuite.getTestSections()) {
testCandidates.add(new RestTestCandidate(restTestSuite, testSection));
}
}
}
}
} finally {
IOUtils.close(fileSystem);
}
//sort the candidates so they will always be in the same order before being shuffled, for repeatability
Collections.sort(testCandidates, new Comparator<RestTestCandidate>() {
@Override
public int compare(RestTestCandidate o1, RestTestCandidate o2) {
return o1.getTestPath().compareTo(o2.getTestPath());
}
});
return testCandidates;
}
private static boolean mustExecute(String test, int id, int count) {
int hash = (int) (Math.abs((long)test.hashCode()) % count);
return hash == id;
}
private static String[] resolvePathsProperty(String propertyName, String defaultValue) {
String property = System.getProperty(propertyName);
if (!Strings.hasLength(property)) {
return defaultValue == null ? null : new String[]{defaultValue};
} else {
return property.split(PATHS_SEPARATOR);
}
}
/**
* Returns a new FileSystem to read REST resources, or null if they
* are available from classpath.
*/
@SuppressForbidden(reason = "proper use of URL, hack around a JDK bug")
static FileSystem getFileSystem() throws IOException {
// REST suite handling is currently complicated, with lots of filtering and so on
// For now, to work embedded in a jar, return a ZipFileSystem over the jar contents.
URL codeLocation = FileUtils.class.getProtectionDomain().getCodeSource().getLocation();
boolean loadPackaged = RandomizedTest.systemPropertyAsBoolean(REST_LOAD_PACKAGED_TESTS, true);
if (codeLocation.getFile().endsWith(".jar") && loadPackaged) {
try {
// hack around a bug in the zipfilesystem implementation before java 9,
// its checkWritable was incorrect and it won't work without write permissions.
// if we add the permission, it will open jars r/w, which is too scary! so copy to a safe r-w location.
Path tmp = Files.createTempFile(null, ".jar");
try (InputStream in = codeLocation.openStream()) {
Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
}
return FileSystems.newFileSystem(new URI("jar:" + tmp.toUri()), Collections.<String,Object>emptyMap());
} catch (URISyntaxException e) {
throw new IOException("couldn't open zipfilesystem: ", e);
}
} else {
return null;
}
}
@BeforeClass
public static void initExecutionContext() throws IOException, RestException {
String[] specPaths = resolvePathsProperty(REST_TESTS_SPEC, DEFAULT_SPEC_PATH);
RestSpec restSpec = null;
FileSystem fileSystem = getFileSystem();
// don't make a try-with, getFileSystem returns null
// ... and you can't close() the default filesystem
try {
restSpec = RestSpec.parseFrom(fileSystem, DEFAULT_SPEC_PATH, specPaths);
} finally {
IOUtils.close(fileSystem);
}
validateSpec(restSpec);
restTestExecutionContext = new RestTestExecutionContext(restSpec);
}
private static void validateSpec(RestSpec restSpec) {
boolean validateSpec = RandomizedTest.systemPropertyAsBoolean(REST_TESTS_VALIDATE_SPEC, true);
if (validateSpec) {
StringBuilder errorMessage = new StringBuilder();
for (RestApi restApi : restSpec.getApis()) {
if (restApi.getMethods().contains("GET") && restApi.isBodySupported()) {
if (!restApi.getMethods().contains("POST")) {
errorMessage.append("\n- ").append(restApi.getName()).append(" supports GET with a body but doesn't support POST");
}
}
}
if (errorMessage.length() > 0) {
throw new IllegalArgumentException(errorMessage.toString());
}
}
}
@AfterClass
public static void close() {
if (restTestExecutionContext != null) {
restTestExecutionContext.close();
restTestExecutionContext = null;
}
}
/**
* Used to obtain settings for the REST client that is used to send REST requests.
*/
protected Settings restClientSettings() {
return Settings.EMPTY;
}
protected InetSocketAddress[] httpAddresses() {
String clusterAddresses = System.getProperty(TESTS_REST_CLUSTER);
String[] stringAddresses = clusterAddresses.split(",");
InetSocketAddress[] transportAddresses = new InetSocketAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
String[] split = stringAddress.split(":");
if (split.length < 2) {
throw new IllegalArgumentException("address [" + clusterAddresses + "] not valid");
}
try {
transportAddresses[i++] = new InetSocketAddress(InetAddress.getByName(split[0]), Integer.valueOf(split[1]));
} catch (NumberFormatException e) {
throw new IllegalArgumentException("port is not valid, expected number but was [" + split[1] + "]");
} catch (UnknownHostException e) {
throw new IllegalArgumentException("unknown host [" + split[0] + "]", e);
}
}
return transportAddresses;
}
@Before
public void reset() throws IOException, RestException {
//skip test if it matches one of the blacklist globs
for (PathMatcher blacklistedPathMatcher : blacklistPathMatchers) {
//we need to replace a few characters otherwise the test section name can't be parsed as a path on windows
String testSection = testCandidate.getTestSection().getName().replace("*", "").replace("\\", "/").replaceAll("\\s+/", "/").replace(":", "").trim();
String testPath = testCandidate.getSuitePath() + "/" + testSection;
assumeFalse("[" + testCandidate.getTestPath() + "] skipped, reason: blacklisted", blacklistedPathMatcher.matches(PathUtils.get(testPath)));
}
//The client needs non static info to get initialized, therefore it can't be initialized in the before class
restTestExecutionContext.initClient(httpAddresses(), restClientSettings());
restTestExecutionContext.clear();
//skip test if the whole suite (yaml file) is disabled
assumeFalse(buildSkipMessage(testCandidate.getSuitePath(), testCandidate.getSetupSection().getSkipSection()),
testCandidate.getSetupSection().getSkipSection().skip(restTestExecutionContext.esVersion()));
//skip test if test section is disabled
assumeFalse(buildSkipMessage(testCandidate.getTestPath(), testCandidate.getTestSection().getSkipSection()),
testCandidate.getTestSection().getSkipSection().skip(restTestExecutionContext.esVersion()));
}
private static String buildSkipMessage(String description, SkipSection skipSection) {
StringBuilder messageBuilder = new StringBuilder();
if (skipSection.isVersionCheck()) {
messageBuilder.append("[").append(description).append("] skipped, reason: [").append(skipSection.getReason()).append("] ");
} else {
messageBuilder.append("[").append(description).append("] skipped, reason: features ").append(skipSection.getFeatures()).append(" not supported");
}
return messageBuilder.toString();
}
public void test() throws IOException {
//let's check that there is something to run, otherwise there might be a problem with the test section
if (testCandidate.getTestSection().getExecutableSections().size() == 0) {
throw new IllegalArgumentException("No executable sections loaded for [" + testCandidate.getTestPath() + "]");
}
if (!testCandidate.getSetupSection().isEmpty()) {
logger.info("start setup test [{}]", testCandidate.getTestPath());
for (DoSection doSection : testCandidate.getSetupSection().getDoSections()) {
doSection.execute(restTestExecutionContext);
}
logger.info("end setup test [{}]", testCandidate.getTestPath());
}
restTestExecutionContext.clear();
for (ExecutableSection executableSection : testCandidate.getTestSection().getExecutableSections()) {
executableSection.execute(restTestExecutionContext);
}
}
}

View File

@ -1,132 +0,0 @@
import org.elasticsearch.gradle.test.ClusterConfiguration
import org.elasticsearch.gradle.test.ClusterFormationTasks
import org.elasticsearch.gradle.test.NodeInfo
apply plugin: 'elasticsearch.standalone-test'
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.rest-test'
dependencies {
testCompile project(path: ':modules:tribe', configuration: 'runtime')
testCompile project(path: xpackProject('plugin').path, configuration: 'testArtifacts')
// TODO: remove all these test deps, this is completely bogus, guava is being force upgraded
testCompile project(path: xpackModule('deprecation'), configuration: 'runtime')
testCompile project(path: xpackModule('graph'), configuration: 'runtime')
testCompile project(path: xpackModule('logstash'), configuration: 'runtime')
testCompile project(path: xpackModule('ml'), configuration: 'runtime')
testCompile project(path: xpackModule('monitoring'), configuration: 'runtime')
testCompile project(path: xpackModule('security'), configuration: 'runtime')
testCompile project(path: xpackModule('upgrade'), configuration: 'runtime')
testCompile project(path: xpackModule('watcher'), configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'testArtifacts')
testCompile project(path: xpackModule('monitoring'), configuration: 'testArtifacts')
}
compileTestJava.options.compilerArgs << "-Xlint:-rawtypes,-unchecked"
namingConventions.skipIntegTestInDisguise = true
test {
/*
* We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
* other if we allow them to set the number of available processors as it's set-once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
include '**/*Tests.class'
}
String licensePath = xpackProject('license-tools').projectDir.toPath().resolve('src/test/resources').toString()
sourceSets {
test {
resources {
srcDirs += [licensePath]
}
}
}
project.forbiddenPatterns {
exclude '**/*.key'
}
task setupClusterOne {}
ClusterConfiguration cluster1Config = new ClusterConfiguration(project)
cluster1Config.clusterName = 'cluster1'
cluster1Config.setting('node.name', 'cluster1-node1')
// x-pack
cluster1Config.plugin(xpackProject('plugin').path)
cluster1Config.setting('xpack.monitoring.enabled', false)
cluster1Config.setting('xpack.security.enabled', false)
cluster1Config.setting('xpack.watcher.enabled', false)
cluster1Config.setting('xpack.graph.enabled', false)
cluster1Config.setting('xpack.ml.enabled', false)
List<NodeInfo> cluster1Nodes = ClusterFormationTasks.setup(project, 'clusterOne', setupClusterOne, cluster1Config)
task setupClusterTwo {}
ClusterConfiguration cluster2Config = new ClusterConfiguration(project)
cluster2Config.clusterName = 'cluster2'
cluster2Config.setting('node.name', 'cluster2-node1')
// x-pack
cluster2Config.plugin(xpackProject('plugin').path)
cluster2Config.setting('xpack.monitoring.enabled', false)
cluster2Config.setting('xpack.monitoring.enabled', false)
cluster2Config.setting('xpack.security.enabled', false)
cluster2Config.setting('xpack.watcher.enabled', false)
cluster2Config.setting('xpack.graph.enabled', false)
cluster2Config.setting('xpack.ml.enabled', false)
List<NodeInfo> cluster2Nodes = ClusterFormationTasks.setup(project, 'clusterTwo', setupClusterTwo, cluster2Config)
integTestCluster {
dependsOn setupClusterOne, setupClusterTwo
setting 'node.name', 'tribe-node'
setting 'tribe.on_conflict', 'prefer_cluster1'
setting 'tribe.cluster1.cluster.name', 'cluster1'
setting 'tribe.cluster1.discovery.zen.ping.unicast.hosts', "'${-> cluster1Nodes.get(0).transportUri()}'"
setting 'tribe.cluster1.http.enabled', 'true'
setting 'tribe.cluster1.xpack.monitoring.enabled', false
setting 'tribe.cluster1.xpack.monitoring.enabled', false
setting 'tribe.cluster1.xpack.security.enabled', false
setting 'tribe.cluster1.xpack.watcher.enabled', false
setting 'tribe.cluster1.xpack.graph.enabled', false
setting 'tribe.cluster1.xpack.ml.enabled', false
setting 'tribe.cluster2.cluster.name', 'cluster2'
setting 'tribe.cluster2.discovery.zen.ping.unicast.hosts', "'${-> cluster2Nodes.get(0).transportUri()}'"
setting 'tribe.cluster2.http.enabled', 'true'
setting 'tribe.cluster2.xpack.monitoring.enabled', false
setting 'tribe.cluster2.xpack.security.enabled', false
setting 'tribe.cluster2.xpack.watcher.enabled', false
setting 'tribe.cluster2.xpack.graph.enabled', false
setting 'tribe.cluster2.xpack.ml.enabled', false
// x-pack
plugin xpackProject('plugin').path
setting 'xpack.monitoring.enabled', false
setting 'xpack.security.enabled', false
setting 'xpack.watcher.enabled', false
setting 'xpack.graph.enabled', false
setting 'xpack.ml.enabled', false
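// Each x-pack feature is disabled both on the tribe node itself and under every
// tribe.<cluster>. prefix, because the tribe node starts one internal client node per
// configured cluster and those internal nodes read the prefixed settings.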
waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
// 5 nodes: tribe + clusterOne (1 node + tribe internal node) + clusterTwo (1 node + tribe internal node)
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=5&wait_for_status=yellow",
dest: tmpFile.toString(),
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
}
integTestRunner {
/*
* We have to disable setting the number of available processors because tests in the same JVM randomize the processor
* count and would step on each other, as the value can be set only once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
systemProperty 'tests.cluster', "${-> cluster1Nodes.get(0).transportUri()}"
systemProperty 'tests.cluster2', "${-> cluster2Nodes.get(0).transportUri()}"
systemProperty 'tests.tribe', "${-> integTest.nodes.get(0).transportUri()}"
finalizedBy 'clusterOne#stop'
finalizedBy 'clusterTwo#stop'
}

View File

@ -1,44 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.license;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import static org.elasticsearch.license.TestUtils.generateSignedLicense;
public class LicenseTribeTests extends TribeTransportTestCase {
@Override
protected void verifyActionOnClientNode(Client client) throws Exception {
assertLicenseTransportActionsWorks(client);
}
@Override
protected void verifyActionOnMasterNode(Client masterClient) throws Exception {
assertLicenseTransportActionsWorks(masterClient);
}
@Override
protected void verifyActionOnDataNode(Client dataNodeClient) throws Exception {
assertLicenseTransportActionsWorks(dataNodeClient);
}
private static void assertLicenseTransportActionsWorks(Client client) throws Exception {
client.execute(GetLicenseAction.INSTANCE, new GetLicenseRequest()).get();
client.execute(PutLicenseAction.INSTANCE, new PutLicenseRequest()
.license(generateSignedLicense(TimeValue.timeValueHours(1))));
client.execute(DeleteLicenseAction.INSTANCE, new DeleteLicenseRequest());
}
@Override
protected void verifyActionOnTribeNode(Client tribeClient) throws Exception {
// The get license action should work, but everything else should fail
tribeClient.execute(GetLicenseAction.INSTANCE, new GetLicenseRequest()).get();
failAction(tribeClient, PutLicenseAction.INSTANCE);
failAction(tribeClient, DeleteLicenseAction.INSTANCE);
}
}

View File

@ -1,322 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.license;
import org.elasticsearch.action.Action;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.discovery.zen.UnicastZenPing;
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.ESIntegTestCase.Scope;
import org.elasticsearch.test.InternalTestCluster;
import org.elasticsearch.test.NodeConfigurationSource;
import org.elasticsearch.test.TestCluster;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.xpack.CompositeTestingXPackPlugin;
import org.elasticsearch.xpack.core.XPackClientPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.deprecation.Deprecation;
import org.elasticsearch.xpack.graph.Graph;
import org.elasticsearch.xpack.logstash.Logstash;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.core.ml.MachineLearningField;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.upgrade.Upgrade;
import org.elasticsearch.xpack.watcher.Watcher;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
@ClusterScope(scope = Scope.TEST, transportClientRatio = 0, numClientNodes = 1, numDataNodes = 2)
public abstract class TribeTransportTestCase extends ESIntegTestCase {
protected List<String> enabledFeatures() {
return Collections.emptyList();
}
@Override
protected final Settings nodeSettings(int nodeOrdinal) {
final Settings.Builder builder = Settings.builder()
.put(NetworkModule.HTTP_ENABLED.getKey(), false)
.put("transport.type", getTestTransportType());
List<String> enabledFeatures = enabledFeatures();
builder.put(XPackSettings.SECURITY_ENABLED.getKey(), enabledFeatures.contains(XPackField.SECURITY));
builder.put(XPackSettings.MONITORING_ENABLED.getKey(), enabledFeatures.contains(XPackField.MONITORING));
builder.put(XPackSettings.WATCHER_ENABLED.getKey(), enabledFeatures.contains(XPackField.WATCHER));
builder.put(XPackSettings.GRAPH_ENABLED.getKey(), enabledFeatures.contains(XPackField.GRAPH));
builder.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), enabledFeatures.contains(XPackField.MACHINE_LEARNING));
builder.put(MachineLearningField.AUTODETECT_PROCESS.getKey(), false);
return builder.build();
}
@Override
protected boolean ignoreExternalCluster() {
return true;
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
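// Builds the tribe's internal per-cluster client node as a MockNode so it runs with
// the same plugins as the rest of the internal test cluster.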
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>();
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(CompositeTestingXPackPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}
@Override
protected final Collection<Class<? extends Plugin>> transportClientPlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>();
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(XPackClientPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}
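/**
 * Spins up a second cluster and a tribe node connected to both, then verifies transport
 * action behaviour on the tribe node and on a data, master and client node of the clusters.
 */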
public void testTribeSetup() throws Exception {
NodeConfigurationSource nodeConfigurationSource = new NodeConfigurationSource() {
@Override
public Settings nodeSettings(int nodeOrdinal) {
return TribeTransportTestCase.this.nodeSettings(nodeOrdinal);
}
@Override
public Path nodeConfigPath(int nodeOrdinal) {
return null;
}
@Override
public Collection<Class<? extends Plugin>> nodePlugins() {
return TribeTransportTestCase.this.nodePlugins();
}
@Override
public Settings transportClientSettings() {
return TribeTransportTestCase.this.transportClientSettings();
}
@Override
public Collection<Class<? extends Plugin>> transportClientPlugins() {
return TribeTransportTestCase.this.transportClientPlugins();
}
};
final InternalTestCluster cluster2 = new InternalTestCluster(
randomLong(), createTempDir(), true, true, 2, 2,
UUIDs.randomBase64UUID(random()), nodeConfigurationSource, 1, false, "tribe_node2",
getMockPlugins(), getClientWrapper());
cluster2.beforeTest(random(), 0.0);
logger.info("create 2 indices, test1 on t1, and test2 on t2");
assertAcked(internalCluster().client().admin().indices().prepareCreate("test1").get());
assertAcked(cluster2.client().admin().indices().prepareCreate("test2").get());
ensureYellow(internalCluster());
ensureYellow(cluster2);
Settings.Builder tribe1Defaults = Settings.builder();
Settings.Builder tribe2Defaults = Settings.builder();
internalCluster().getDefaultSettings().keySet().forEach(k -> {
if (k.startsWith("path.") == false) {
tribe1Defaults.copy(k, internalCluster().getDefaultSettings());
tribe2Defaults.copy(k, internalCluster().getDefaultSettings());
}
});
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.normalizePrefix("tribe.t2.");
// give each tribe its unicast hosts to connect to
tribe1Defaults.putList("tribe.t1." + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.getKey(),
getUnicastHosts(internalCluster().client()));
tribe2Defaults.putList("tribe.t2." + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.getKey(),
getUnicastHosts(cluster2.client()));
Settings merged = Settings.builder()
.put("tribe.t1.cluster.name", internalCluster().getClusterName())
.put("tribe.t2.cluster.name", cluster2.getClusterName())
.put("tribe.t1.transport.type", getTestTransportType())
.put("tribe.t2.transport.type", getTestTransportType())
.put("tribe.blocks.write", false)
.put(tribe1Defaults.build())
.put(tribe2Defaults.build())
.put(NetworkModule.HTTP_ENABLED.getKey(), false)
.put(internalCluster().getDefaultSettings())
.put(XPackSettings.SECURITY_ENABLED.getKey(), false) // otherwise it conflicts with mock transport
.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put(MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("tribe.t1." + XPackSettings.SECURITY_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.SECURITY_ENABLED.getKey(), false)
.put("tribe.t1." + XPackSettings.WATCHER_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.WATCHER_ENABLED.getKey(), false)
.put("tribe.t1." + XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put("tribe.t1." + MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("tribe.t2." + MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("node.name", "tribe_node") // make sure we can identify threads from this node
.put("transport.type", getTestTransportType())
.build();
final List<Class<? extends Plugin>> mockPlugins = Arrays.asList(MockTribePlugin.class, TribeAwareTestZenDiscoveryPlugin.class,
getTestTransportPlugin(), Deprecation.class, Graph.class, Logstash.class, MachineLearning.class, Monitoring.class,
Security.class, Upgrade.class, Watcher.class, XPackPlugin.class);
final Node tribeNode = new MockNode(merged, mockPlugins).start();
Client tribeClient = tribeNode.client();
logger.info("wait till tribe has the same nodes as the 2 clusters");
assertBusy(() -> {
DiscoveryNodes tribeNodes = tribeNode.client().admin().cluster().prepareState().get().getState().getNodes();
assertThat(countDataNodesForTribe("t1", tribeNodes),
equalTo(internalCluster().client().admin().cluster().prepareState().get().getState()
.getNodes().getDataNodes().size()));
assertThat(countDataNodesForTribe("t2", tribeNodes),
equalTo(cluster2.client().admin().cluster().prepareState().get().getState()
.getNodes().getDataNodes().size()));
});
logger.info(" --> verify transport actions for tribe node");
verifyActionOnTribeNode(tribeClient);
logger.info(" --> verify transport actions for data node");
verifyActionOnDataNode((randomBoolean() ? internalCluster() : cluster2).dataNodeClient());
logger.info(" --> verify transport actions for master node");
verifyActionOnMasterNode((randomBoolean() ? internalCluster() : cluster2).masterClient());
logger.info(" --> verify transport actions for client node");
verifyActionOnClientNode((randomBoolean() ? internalCluster() : cluster2).coordOnlyNodeClient());
try {
cluster2.wipe(Collections.<String>emptySet());
} finally {
cluster2.afterTest();
}
tribeNode.close();
cluster2.close();
}
/**
* Verify transport action behaviour on client node
*/
protected abstract void verifyActionOnClientNode(Client client) throws Exception;
/**
* Verify transport action behaviour on master node
*/
protected abstract void verifyActionOnMasterNode(Client masterClient) throws Exception;
/**
* Verify transport action behaviour on data node
*/
protected abstract void verifyActionOnDataNode(Client dataNodeClient) throws Exception;
/**
* Verify transport action behaviour on tribe node
*/
protected abstract void verifyActionOnTribeNode(Client tribeClient) throws Exception;
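/**
 * Executes the action and expects it to fail: the tribe node does not register most
 * transport actions, so execution throws an IllegalStateException whose message
 * contains "failed to find action".
 */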
protected void failAction(Client client, Action action) {
try {
client.execute(action, action.newRequestBuilder(client).request());
fail("expected [" + action.name() + "] to fail");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString("failed to find action"));
}
}
private void ensureYellow(TestCluster testCluster) {
ClusterHealthResponse actionGet = testCluster.client().admin().cluster()
.health(Requests.clusterHealthRequest().waitForYellowStatus()
.waitForEvents(Priority.LANGUID).waitForNoRelocatingShards(true)).actionGet();
if (actionGet.isTimedOut()) {
logger.info("ensureGreen timed out, cluster state:\n{}\n{}", testCluster.client().admin().cluster()
.prepareState().get().getState(),
testCluster.client().admin().cluster().preparePendingClusterTasks().get());
assertThat("timed out waiting for yellow state", actionGet.isTimedOut(), equalTo(false));
}
assertThat(actionGet.getStatus(), anyOf(equalTo(ClusterHealthStatus.YELLOW), equalTo(ClusterHealthStatus.GREEN)));
}
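/**
 * Counts the data nodes in the tribe's cluster state that belong to the given tribe,
 * identified by the "tribe.name" node attribute.
 */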
private int countDataNodesForTribe(String tribeName, DiscoveryNodes nodes) {
int count = 0;
for (DiscoveryNode node : nodes) {
if (!node.isDataNode()) {
continue;
}
if (tribeName.equals(node.getAttributes().get("tribe.name"))) {
count++;
}
}
return count;
}
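/**
 * Returns the publish transport addresses of all nodes in the given cluster as
 * "host:port" strings, suitable for the unicast hosts setting.
 */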
private static String[] getUnicastHosts(Client client) {
ArrayList<String> unicastHosts = new ArrayList<>();
NodesInfoResponse nodeInfos = client.admin().cluster().prepareNodesInfo().clear().setTransport(true).get();
for (NodeInfo info : nodeInfos.getNodes()) {
TransportAddress address = info.getTransport().getAddress().publishAddress();
unicastHosts.add(address.getAddress() + ":" + address.getPort());
}
return unicastHosts.toArray(new String[unicastHosts.size()]);
}
}

View File

@ -1,177 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.test;
import org.elasticsearch.Build;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.license.GetLicenseResponse;
import org.elasticsearch.license.License;
import org.elasticsearch.license.LicensesStatus;
import org.elasticsearch.license.LicensingClient;
import org.elasticsearch.license.PutLicenseResponse;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.xpack.core.LocalStateCompositeXPackPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.junit.AfterClass;
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.Collections;
import static org.hamcrest.CoreMatchers.equalTo;
public class LicensingTribeIT extends ESIntegTestCase {
private static TestCluster cluster2;
private static TestCluster tribeNode;
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Collections.singletonList(XPackPlugin.class);
}
@Override
protected Collection<Class<? extends Plugin>> transportClientPlugins() {
return Collections.singletonList(LocalStateCompositeXPackPlugin.class);
}
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
cluster2 = buildExternalCluster(System.getProperty("tests.cluster2"));
}
if (tribeNode == null) {
tribeNode = buildExternalCluster(System.getProperty("tests.tribe"));
}
}
@AfterClass
public static void tearDownExternalClusters() throws IOException {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
if (tribeNode != null) {
try {
tribeNode.close();
} finally {
tribeNode = null;
}
}
}
@Override
protected Settings externalClusterClientSettings() {
Settings.Builder builder = Settings.builder();
builder.put(XPackSettings.SECURITY_ENABLED.getKey(), false);
builder.put(XPackSettings.MONITORING_ENABLED.getKey(), false);
builder.put(XPackSettings.WATCHER_ENABLED.getKey(), false);
builder.put(XPackSettings.GRAPH_ENABLED.getKey(), false);
builder.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false);
return builder.build();
}
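// Parses a comma-separated list of "host:port" transport addresses (supplied through the
// tests.cluster2/tests.tribe system properties) into an external test cluster handle.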
private ExternalTestCluster buildExternalCluster(String clusterAddresses) throws IOException {
String[] stringAddresses = clusterAddresses.split(",");
TransportAddress[] transportAddresses = new TransportAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
URL url = new URL("http://" + stringAddress);
InetAddress inetAddress = InetAddress.getByName(url.getHost());
transportAddresses[i++] = new TransportAddress(new InetSocketAddress(inetAddress, url.getPort()));
}
return new ExternalTestCluster(createTempDir(), externalClusterClientSettings(), transportClientPlugins(), transportAddresses);
}
public void testLicensePropagateToTribeNode() throws Exception {
assumeTrue("License is only valid when tested against snapshot/test keys", Build.CURRENT.isSnapshot());
// test that auto-generated trial license propagates to tribe
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.TRIAL));
});
// test that signed license put in one cluster propagates to tribe
LicensingClient cluster1Client = new LicensingClient(client());
PutLicenseResponse licenseResponse = cluster1Client
.preparePutLicense(License.fromSource(new BytesArray(BASIC_LICENSE.getBytes(StandardCharsets.UTF_8)), XContentType.JSON))
.setAcknowledge(true).get();
assertThat(licenseResponse.isAcknowledged(), equalTo(true));
assertThat(licenseResponse.status(), equalTo(LicensesStatus.VALID));
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.BASIC));
});
// test that signed license with higher operation mode takes precedence
LicensingClient cluster2Client = new LicensingClient(cluster2.client());
licenseResponse = cluster2Client
.preparePutLicense(License.fromSource(new BytesArray(PLATINUM_LICENSE.getBytes(StandardCharsets.UTF_8)), XContentType.JSON))
.setAcknowledge(true).get();
assertThat(licenseResponse.isAcknowledged(), equalTo(true));
assertThat(licenseResponse.status(), equalTo(LicensesStatus.VALID));
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.PLATINUM));
});
// test that removing the signed license falls back to the remaining basic license
assertTrue(cluster2Client.prepareDeleteLicense().get().isAcknowledged());
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.BASIC));
});
}
public void testDummy() throws Exception {
// this test is here so that testLicensePropagateToTribeNode's assumption
// doesn't leave this suite with no tests run, which would trigger a build failure
}
private static final String PLATINUM_LICENSE = "{\"license\":{\"uid\":\"1\",\"type\":\"platinum\"," +
"\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1," +
"\"issued_to\":\"issuedTo\",\"issuer\":\"issuer\"," +
"\"signature\":\"AAAAAwAAAA2hWlkvKcxQIpdVWdCtAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1" +
"PSmkxakxZdW5IMlhlTHNoN1N2MXMvRFk4d3JTZEx3R3RRZ0pzU3lobWJKZnQvSEFva0ppTHBkWkprZWZSQi9iNmRQNkw1SlpLN0l" +
"DalZCS095MXRGN1lIZlpYcVVTTnFrcTE2dzhJZmZrdFQrN3JQeGwxb0U0MXZ0dDJHSERiZTVLOHNzSDByWnpoZEphZHBEZjUrTVB" +
"xRENNSXNsWWJjZllaODdzVmEzUjNiWktNWGM5TUhQV2plaUo4Q1JOUml4MXNuL0pSOEhQaVB2azhmUk9QVzhFeTFoM1Q0RnJXSG5" +
"3MWk2K055c28zSmRnVkF1b2JSQkFLV2VXUmVHNDZ2R3o2VE1qbVNQS2lxOHN5bUErZlNIWkZSVmZIWEtaSU9wTTJENDVvT1NCYkla" +
"cUYyK2FwRW9xa0t6dldMbmMzSGtQc3FWOTgzZ3ZUcXMvQkt2RUZwMFJnZzlvL2d2bDRWUzh6UG5pdENGWFRreXNKNkE9PQAAAQBWg" +
"u3yZp0KOBG//92X4YVmau3P5asvx0FAPDX2Ze734Tap/nc30X6Rt4yEEm+6bCQr/ibBOqWboJKRbbTZLBQfYFmL1ZqvAY3bJJ1/Xs" +
"8NyDfxKGztlUt/IIOzHPzxs0f8Bv4OJeK48vjovWaDc1Vmo4n1SGyyL0JcEbOWC6A3U3mBsWn7wLUe+hW9+akVAYOO5TIcm60ub7k" +
"H/LIZNOhvGglSVDbl3p8EBkNMy0CV7urQ0wdG1nLCnvf8/BiT15lC5nLrM9Dt5w3pzciPlASzw4iksW/CzvYy5tjOoWKEnxi2EZOB" +
"9dKyT4mTdvyBOrTHLdgr4lmHd3qYAEgcTCaQ\",\"start_date_in_millis\":-1}}";
private static final String BASIC_LICENSE = "{\"license\":{\"uid\":\"1\",\"type\":\"basic\"," +
"\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1," +
"\"issued_to\":\"issuedTo\",\"issuer\":\"issuer\",\"signature\":\"AAA" + "AAwAAAA2is2oANL3mZGS883l9AAAB" +
"mC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxakxZdW5IMlhlTHNoN1N2MXMvRFk4d3JTZEx3R3RRZ0pzU3" +
"lobWJKZnQvSEFva0ppTHBkWkprZWZSQi9iNmRQNkw1SlpLN0lDalZCS095MXRGN1lIZlpYcVVTTnFrcTE2dzhJZmZrdFQrN3JQeGwx" +
"b0U0MXZ0dDJHSERiZTVLOHNzSDByWnpoZEphZHBEZjUrTVBxRENNSXNsWWJjZllaODdzVmEzUjNiWktNWGM5TUhQV2plaUo4Q1JOUm" +
"l4MXNuL0pSOEhQaVB2azhmUk9QVzhFeTFoM1Q0RnJXSG53MWk2K055c28zSmRnVkF1b2JSQkFLV2VXUmVHNDZ2R3o2VE1qbVNQS2lx" +
"OHN5bUErZlNIWkZSVmZIWEtaSU9wTTJENDVvT1NCYklacUYyK2FwRW9xa0t6dldMbmMzSGtQc3FWOTgzZ3ZUcXMvQkt2RUZwMFJnZz" +
"lvL2d2bDRWUzh6UG5pdENGWFRreXNKNkE9PQAAAQCjL9HJnHrHVRq39yO5OFrOS0fY+mf+KqLh8i+RK4s9Hepdi/VQ3SHTEonEUCCB" +
"1iFO35eykW3t+poCMji9VGkslQyJ+uWKzUqn0lmioy8ukpjETcmKH8TSWTqcC7HNZ0NKc1XMTxwkIi/chQTsPUz+h3gfCHZRQwGnRz" +
"JPmPjCJf4293hsMFUlsFQU3tYKDH+kULMdNx1Cg+3PhbUCNrUyQJMb5p4XDrwOaanZUM6HdifS1Y/qjxLXC/B1wHGFEpvrEPFyBuSe" +
"GnJ9uxkrBSv28iG0qsyHrFhHQXIMVFlQKCPaMKikfuZyRhxzE5ntTcGJMn84llCaIyX/kmzqoZHQ\",\"start_date_in_millis\":-1}}\n";
}

View File

@ -1,72 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.core.LocalStateCompositeXPackPlugin;
import org.elasticsearch.xpack.core.ssl.SSLService;
import org.elasticsearch.xpack.deprecation.Deprecation;
import org.elasticsearch.xpack.graph.Graph;
import org.elasticsearch.xpack.logstash.Logstash;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.watcher.Watcher;
import java.nio.file.Path;
public class CompositeTestingXPackPlugin extends LocalStateCompositeXPackPlugin {
public CompositeTestingXPackPlugin(final Settings settings, final Path configPath) throws Exception {
super(settings, configPath);
CompositeTestingXPackPlugin thisVar = this;
plugins.add(new Deprecation());
plugins.add(new Graph(settings));
plugins.add(new Logstash(settings));
plugins.add(new MachineLearning(settings, configPath) {
@Override
protected XPackLicenseState getLicenseState() {
return super.getLicenseState();
}
});
plugins.add(new Monitoring(settings) {
@Override
protected SSLService getSslService() {
return thisVar.getSslService();
}
@Override
protected LicenseService getLicenseService() {
return thisVar.getLicenseService();
}
@Override
protected XPackLicenseState getLicenseState() {
return thisVar.getLicenseState();
}
});
plugins.add(new Watcher(settings) {
@Override
protected SSLService getSslService() {
return thisVar.getSslService();
}
@Override
protected XPackLicenseState getLicenseState() {
return thisVar.getLicenseState();
}
});
plugins.add(new Security(settings, configPath) {
@Override
protected SSLService getSslService() { return thisVar.getSslService(); }
@Override
protected XPackLicenseState getLicenseState() { return thisVar.getLicenseState(); }
});
}
}

View File

@ -1,174 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.PluginInfo;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collection;
import java.util.function.Function;
import static org.elasticsearch.test.ESIntegTestCase.Scope.TEST;
import static org.hamcrest.Matchers.equalTo;
@ClusterScope(scope = TEST, transportClientRatio = 0, numClientNodes = 0, numDataNodes = 0)
public class MonitoringPluginTests extends MonitoringIntegTestCase {
public MonitoringPluginTests() throws Exception {
super();
}
@Override
protected void startMonitoringService() {
// do nothing as monitoring is sometimes unbound
}
@Override
protected void stopMonitoringService() {
// do nothing as monitoring is sometimes unbound
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
@Override
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(MonitoringService.INTERVAL.getKey(), "-1")
.put(XPackSettings.SECURITY_ENABLED.getKey(), false)
.put(XPackSettings.WATCHER_ENABLED.getKey(), false)
.build();
}
@Override
protected Settings transportClientSettings() {
return Settings.builder()
.put(super.transportClientSettings())
.put(XPackSettings.SECURITY_ENABLED.getKey(), false)
.build();
}
public void testMonitoringEnabled() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), true)
.build());
assertPluginIsLoaded();
assertServiceIsBound(MonitoringService.class);
}
public void testMonitoringDisabled() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), false)
.build());
assertPluginIsLoaded();
assertServiceIsNotBound(MonitoringService.class);
}
public void testMonitoringEnabledOnTribeNode() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), true)
.put("tribe.name", "t1")
.build());
assertPluginIsLoaded();
assertServiceIsBound(MonitoringService.class);
}
public void testMonitoringDisabledOnTribeNode() {
internalCluster().startNode(Settings.builder().put("tribe.name", "t1").build());
assertPluginIsLoaded();
assertServiceIsNotBound(MonitoringService.class);
}
private void assertPluginIsLoaded() {
NodesInfoResponse response = client().admin().cluster().prepareNodesInfo().setPlugins(true).get();
for (NodeInfo nodeInfo : response.getNodes()) {
assertNotNull(nodeInfo.getPlugins());
boolean found = false;
for (PluginInfo plugin : nodeInfo.getPlugins().getPluginInfos()) {
assertNotNull(plugin);
if (LocalStateMonitoring.class.getName().equals(plugin.getName())) {
found = true;
break;
}
}
assertThat("xpack plugin not found", found, equalTo(true));
}
}
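// The service is bound through Guice only when the corresponding feature is enabled;
// otherwise the injector lookup fails with "Could not find a suitable constructor".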
private void assertServiceIsBound(Class<?> klass) {
try {
Object binding = internalCluster().getDataNodeInstance(klass);
assertNotNull(binding);
assertTrue(klass.isInstance(binding));
} catch (Exception e) {
fail("no service bound for class " + klass.getSimpleName());
}
}
private void assertServiceIsNotBound(Class<?> klass) {
try {
internalCluster().getDataNodeInstance(klass);
fail("should have thrown an exception about missing implementation");
} catch (Exception ce) {
assertThat("message contains error about missing implementation: " + ce.getMessage(),
ce.getMessage().contains("Could not find a suitable constructor"), equalTo(true));
}
}
}

View File

@ -1,47 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.client.Client;
import org.elasticsearch.license.TribeTransportTestCase;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.core.monitoring.action.MonitoringBulkAction;
import org.elasticsearch.xpack.core.monitoring.action.MonitoringBulkRequest;
import java.util.Collections;
import java.util.List;
public class MonitoringTribeTests extends TribeTransportTestCase {
@Override
protected List<String> enabledFeatures() {
return Collections.singletonList(XPackField.MONITORING);
}
@Override
protected void verifyActionOnClientNode(Client client) throws Exception {
assertMonitoringTransportActionsWorks(client);
}
@Override
protected void verifyActionOnMasterNode(Client masterClient) throws Exception {
assertMonitoringTransportActionsWorks(masterClient);
}
@Override
protected void verifyActionOnDataNode(Client dataNodeClient) throws Exception {
assertMonitoringTransportActionsWorks(dataNodeClient);
}
private static void assertMonitoringTransportActionsWorks(Client client) throws Exception {
client.execute(MonitoringBulkAction.INSTANCE, new MonitoringBulkRequest());
}
@Override
protected void verifyActionOnTribeNode(Client tribeClient) {
failAction(tribeClient, MonitoringBulkAction.INSTANCE);
}
}

View File

@ -1,128 +0,0 @@
import org.elasticsearch.gradle.test.ClusterConfiguration
import org.elasticsearch.gradle.test.ClusterFormationTasks
import org.elasticsearch.gradle.test.NodeInfo
apply plugin: 'elasticsearch.standalone-test'
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.rest-test'
dependencies {
testCompile project(path: ':modules:tribe', configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'testArtifacts')
testCompile project(path: xpackModule('security'), configuration: 'testArtifacts')
testCompile project(path: ':modules:analysis-common', configuration: 'runtime')
}
namingConventions.skipIntegTestInDisguise = true
compileTestJava.options.compilerArgs << "-Xlint:-try"
String xpackPath = project(xpackModule('core')).projectDir.toPath().resolve('src/test/resources').toString()
sourceSets {
test {
resources {
srcDirs += [xpackPath]
}
}
}
forbiddenPatterns {
exclude '**/*.key'
exclude '**/*.p12'
exclude '**/*.der'
exclude '**/*.zip'
}
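// Empty anchor tasks that ClusterFormationTasks.setup attaches each cluster's
// start-up tasks to.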
task setupClusterOne {}
ClusterConfiguration configOne = new ClusterConfiguration(project)
configOne.clusterName = 'cluster1'
configOne.setting('node.name', 'cluster1-node1')
configOne.setting('xpack.monitoring.enabled', false)
configOne.setting('xpack.ml.enabled', false)
configOne.plugin(xpackProject('plugin').path)
configOne.module(project.project(':modules:analysis-common'))
configOne.setupCommand('setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser')
configOne.waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=1&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
List<NodeInfo> cluster1Nodes = ClusterFormationTasks.setup(project, 'clusterOne', setupClusterOne, configOne)
task setupClusterTwo {}
ClusterConfiguration configTwo = new ClusterConfiguration(project)
configTwo.clusterName = 'cluster2'
configTwo.setting('node.name', 'cluster2-node1')
configTwo.setting('xpack.monitoring.enabled', false)
configTwo.setting('xpack.ml.enabled', false)
configTwo.plugin(xpackProject('plugin').path)
configTwo.module(project.project(':modules:analysis-common'))
configTwo.setupCommand('setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser')
configTwo.waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=1&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
List<NodeInfo> cluster2Nodes = ClusterFormationTasks.setup(project, 'clusterTwo', setupClusterTwo, configTwo)
integTestCluster {
dependsOn setupClusterOne, setupClusterTwo
plugin xpackProject('plugin').path
nodeStartupWaitSeconds 45
setupCommand 'setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser'
setting 'xpack.monitoring.enabled', false
setting 'xpack.ml.enabled', false
setting 'node.name', 'tribe-node'
setting 'tribe.on_conflict', 'prefer_cluster1'
setting 'tribe.cluster1.cluster.name', 'cluster1'
setting 'tribe.cluster1.discovery.zen.ping.unicast.hosts', "'${-> cluster1Nodes.get(0).transportUri()}'"
setting 'tribe.cluster1.http.enabled', 'true'
setting 'tribe.cluster1.xpack.ml.enabled', 'false'
setting 'tribe.cluster2.cluster.name', 'cluster2'
setting 'tribe.cluster2.discovery.zen.ping.unicast.hosts', "'${-> cluster2Nodes.get(0).transportUri()}'"
setting 'tribe.cluster2.http.enabled', 'true'
setting 'tribe.cluster2.xpack.ml.enabled', 'false'
keystoreSetting 'bootstrap.password', 'x-pack-test-password'
waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
// 5 nodes: tribe + clusterOne (1 node + tribe internal node) + clusterTwo (1 node + tribe internal node)
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=5&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
}
test {
/*
* We have to disable setting the number of available processors because tests in the same JVM randomize the processor
* count and would step on each other, as the value can be set only once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
include '**/*Tests.class'
}
integTestRunner {
systemProperty 'tests.cluster', "${-> cluster1Nodes.get(0).transportUri()}"
systemProperty 'tests.cluster2', "${-> cluster2Nodes.get(0).transportUri()}"
systemProperty 'tests.tribe', "${-> integTest.nodes.get(0).transportUri()}"
finalizedBy 'clusterOne#stop'
finalizedBy 'clusterTwo#stop'
}

View File

@ -1,233 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.test;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.xpack.core.security.SecurityField;
import org.elasticsearch.xpack.core.security.action.role.GetRolesResponse;
import org.elasticsearch.xpack.core.security.action.role.PutRoleResponse;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.security.Security;
import org.junit.After;
import org.junit.AfterClass;
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_SECURITY_INDEX;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.core.IsCollectionContaining.hasItem;
public class TribeWithSecurityIT extends SecurityIntegTestCase {
private static TestCluster cluster2;
private static TestCluster tribeNode;
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
cluster2 = buildExternalCluster(System.getProperty("tests.cluster2"));
}
if (tribeNode == null) {
tribeNode = buildExternalCluster(System.getProperty("tests.tribe"));
}
}
/**
* TODO: this entire class should be removed. SecurityIntegTestCase is meant for tests, but we run against real xpack
*/
@Override
public void doAssertXPackIsInstalled() {
NodesInfoResponse nodeInfos = client().admin().cluster().prepareNodesInfo().clear().setPlugins(true).get();
for (NodeInfo nodeInfo : nodeInfos.getNodes()) {
// TODO: disable this assertion for now, due to random runs with mock plugins. perhaps run without mock plugins?
// assertThat(nodeInfo.getPlugins().getInfos(), hasSize(2));
Collection<String> pluginNames =
nodeInfo.getPlugins().getPluginInfos().stream().map(p -> p.getClassname()).collect(Collectors.toList());
assertThat("plugin [" + Security.class.getName() + "] not found in [" + pluginNames + "]", pluginNames,
hasItem(Security.class.getName()));
}
}
@AfterClass
public static void tearDownExternalClusters() throws IOException {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
if (tribeNode != null) {
try {
tribeNode.close();
} finally {
tribeNode = null;
}
}
}
@After
public void removeSecurityIndex() {
if (client().admin().indices().prepareExists(INTERNAL_SECURITY_INDEX).get().isExists()) {
client().admin().indices().prepareDelete(INTERNAL_SECURITY_INDEX).get();
}
if (cluster2.client().admin().indices().prepareExists(INTERNAL_SECURITY_INDEX).get().isExists()) {
cluster2.client().admin().indices().prepareDelete(INTERNAL_SECURITY_INDEX).get();
}
securityClient(client()).prepareClearRealmCache().get();
securityClient(cluster2.client()).prepareClearRealmCache().get();
}
@Override
protected Settings externalClusterClientSettings() {
Settings.Builder builder = Settings.builder().put(super.externalClusterClientSettings());
builder.put(NetworkModule.TRANSPORT_TYPE_KEY, SecurityField.NAME4);
return builder.build();
}
private ExternalTestCluster buildExternalCluster(String clusterAddresses) throws IOException {
String[] stringAddresses = clusterAddresses.split(",");
TransportAddress[] transportAddresses = new TransportAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
URL url = new URL("http://" + stringAddress);
InetAddress inetAddress = InetAddress.getByName(url.getHost());
transportAddresses[i++] = new TransportAddress(new InetSocketAddress(inetAddress, url.getPort()));
}
return new ExternalTestCluster(createTempDir(), externalClusterClientSettings(), transportClientPlugins(), transportAddresses);
}
public void testThatTribeCanAuthenticateElasticUser() throws Exception {
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", BOOTSTRAP_PASSWORD)))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testThatTribeCanAuthenticateElasticUserWithChangedPassword() throws Exception {
assertSecurityIndexActive();
securityClient(client()).prepareChangePassword("elastic", "password".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testThatTribeClustersHaveDifferentPasswords() throws Exception {
assertSecurityIndexActive();
assertSecurityIndexActive(cluster2);
securityClient().prepareChangePassword("elastic", "password".toCharArray()).get();
securityClient(cluster2.client()).prepareChangePassword("elastic", "password2".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testUserModificationUsingTribeNodeAreDisabled() throws Exception {
SecurityClient securityClient = securityClient(tribeNode.client());
NotSerializableExceptionWrapper e = expectThrows(NotSerializableExceptionWrapper.class,
() -> securityClient.preparePutUser("joe", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class, () -> securityClient.prepareSetEnabled("elastic", randomBoolean()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class,
() -> securityClient.prepareChangePassword("elastic", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class, () -> securityClient.prepareDeleteUser("joe").get());
assertThat(e.getMessage(), containsString("users may not be deleted using a tribe node"));
}
// note the tribe node has tribe.on_conflict set to prefer_cluster1
public void testRetrieveRolesOnPreferredClusterOnly() throws Exception {
final int randomRoles = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
assertSecurityIndexActive();
for (int i = 0; i < randomRoles; i++) {
final String rolename = "preferredClusterRole" + i;
PutRoleResponse response = securityClient(client()).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
shouldBeSuccessfulRoles.add(rolename);
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeNode.client());
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
}
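/**
 * Waits until the tribe node's cluster state contains every index from both clusters,
 * each with routing table entries and all primary shards active.
 */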
private void assertTribeNodeHasAllIndices() throws Exception {
assertBusy(() -> {
Set<String> indices = new HashSet<>();
client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
cluster2.client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
ClusterState state = tribeNode.client().admin().cluster().prepareState().setRoutingTable(true)
.setMetaData(true).get().getState();
Set<String> tribeIndices = new HashSet<>();
for (ObjectCursor<IndexMetaData> cursor : state.getMetaData().getIndices().values()) {
tribeIndices.add(cursor.value.getIndex().getName());
}
assertThat("cluster indices [" + indices + "] tribe indices [" + tribeIndices + "]",
state.getMetaData().getIndices().size(), equalTo(indices.size()));
for (String index : indices) {
assertTrue(state.getMetaData().hasIndex(index));
assertTrue(state.getRoutingTable().hasIndex(index));
assertTrue(state.getRoutingTable().index(index).allPrimaryShardsActive());
}
});
}
}

View File

@ -1,556 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateObserver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.InternalTestCluster;
import org.elasticsearch.test.NativeRealmIntegTestCase;
import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.test.SecuritySettingsSourceField;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.tribe.TribeService;
import org.elasticsearch.xpack.core.security.SecurityLifecycleServiceField;
import org.elasticsearch.xpack.core.security.action.role.GetRolesResponse;
import org.elasticsearch.xpack.core.security.action.role.PutRoleResponse;
import org.elasticsearch.xpack.core.security.action.user.PutUserResponse;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.function.Function;
import java.util.function.Predicate;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.startsWith;
/**
* Tests security with tribe nodes
*/
public class SecurityTribeTests extends NativeRealmIntegTestCase {
private static final String SECOND_CLUSTER_NODE_PREFIX = "node_cluster2_";
private static InternalTestCluster cluster2;
private static boolean useSSL;
private Node tribeNode;
private Client tribeClient;
@BeforeClass
public static void setupSSL() {
useSSL = randomBoolean();
}
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(defaultMaxNumberOfNodes(), useSSL, createTempDir(), Scope.SUITE) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
Settings.Builder builder = Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), true);
if (builder.getSecureSettings() == null) {
builder.setSecureSettings(new MockSecureSettings());
}
((MockSecureSettings) builder.getSecureSettings()).setString("bootstrap.password",
BOOTSTRAP_PASSWORD.toString());
return builder.build();
}
@Override
public Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
};
cluster2 = new InternalTestCluster(randomLong(), createTempDir(), true, true, 1, 2,
UUIDs.randomBase64UUID(random()), cluster2SettingsSource, 0, false, SECOND_CLUSTER_NODE_PREFIX, getMockPlugins(),
getClientWrapper());
cluster2.beforeTest(random(), 0.1);
cluster2.ensureAtLeastNumDataNodes(2);
}
assertSecurityIndexActive(cluster2);
}
@Override
public boolean transportSSLEnabled() {
return useSSL;
}
@AfterClass
public static void tearDownSecondCluster() {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
}
/**
* We intentionally do not override {@link ESIntegTestCase#tearDown()} as doing so causes the ensure cluster size check to time out
*/
@After
public void tearDownTribeNodeAndWipeCluster() throws Exception {
if (cluster2 != null) {
try {
cluster2.wipe(Collections.singleton(SecurityLifecycleServiceField.SECURITY_TEMPLATE_NAME));
try {
// this is a hack to clean up the .security index since only the XPackSecurity user or superusers can delete it
final Client cluster2Client = cluster2.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(SecuritySettingsSource.TEST_SUPERUSER,
SecuritySettingsSourceField.TEST_PASSWORD_SECURE_STRING)));
cluster2Client.admin().indices().prepareDelete(IndexLifecycleManager.INTERNAL_SECURITY_INDEX).get();
} catch (IndexNotFoundException e) {
// ignore it since not all tests create this index...
}
// Clear the realm cache for all realms since we use a SUITE scoped cluster
SecurityClient client = securityClient(cluster2.client());
client.prepareClearRealmCache().get();
} finally {
cluster2.afterTest();
}
}
if (tribeNode != null) {
tribeNode.close();
tribeNode = null;
}
}
@Override
protected boolean ignoreExternalCluster() {
return true;
}
@Override
protected boolean shouldSetReservedUserPasswords() {
return false;
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
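/**
 * Starts a tribe node wired to the internal cluster (t1) and cluster2 (t2): copies the
 * node defaults under each tribe prefix, duplicates any secure settings per tribe, and
 * then blocks until the merged cluster state contains both clusters' nodes, the two
 * internal tribe client nodes and the tribe node itself.
 */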
private void setupTribeNode(Settings settings) throws Exception {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(1, useSSL, createTempDir(), Scope.TEST) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), true)
.build();
}
};
final Settings settingsTemplate = cluster2SettingsSource.nodeSettings(0);
Settings.Builder tribe1Defaults = Settings.builder();
Settings.Builder tribe2Defaults = Settings.builder();
Settings tribeSettings = settingsTemplate.filter(k -> {
if (k.equals(NodeEnvironment.MAX_LOCAL_STORAGE_NODES_SETTING.getKey())) {
return false;
} else if (k.startsWith("path.")) {
return false;
} else if (k.equals("transport.tcp.port")) {
return false;
}
return true;
});
tribe1Defaults.put(tribeSettings, false);
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.put(tribeSettings, false);
tribe2Defaults.normalizePrefix("tribe.t2.");
// TODO: rethink how these settings are generated for tribes once we support more than just string settings...
MockSecureSettings secureSettingsTemplate =
(MockSecureSettings) Settings.builder().put(settingsTemplate).getSecureSettings();
MockSecureSettings secureSettings = new MockSecureSettings();
if (secureSettingsTemplate != null) {
for (String settingName : secureSettingsTemplate.getSettingNames()) {
String settingValue = secureSettingsTemplate.getString(settingName).toString();
secureSettings.setString(settingName, settingValue);
secureSettings.setString("tribe.t1." + settingName, settingValue);
secureSettings.setString("tribe.t2." + settingName, settingValue);
}
}
Settings merged = Settings.builder()
.put(internalCluster().getDefaultSettings())
.put(tribeSettings, false)
.put("tribe.t1.cluster.name", internalCluster().getClusterName())
.put("tribe.t2.cluster.name", cluster2.getClusterName())
.put("tribe.blocks.write", false)
.put("tribe.on_conflict", "prefer_t1")
.put(tribe1Defaults.build())
.put(tribe2Defaults.build())
.put(settings)
.put("node.name", "tribe_node") // make sure we can identify threads from this node
.setSecureSettings(secureSettings)
.build();
final List<Class<? extends Plugin>> classpathPlugins = new ArrayList<>(nodePlugins());
classpathPlugins.addAll(getMockPlugins());
tribeNode = new MockNode(merged, classpathPlugins, cluster2SettingsSource.nodeConfigPath(0)).start();
tribeClient = getClientWrapper().apply(tribeNode.client());
ClusterService tribeClusterService = tribeNode.injector().getInstance(ClusterService.class);
ClusterState clusterState = tribeClusterService.state();
ClusterStateObserver observer = new ClusterStateObserver(clusterState, tribeClusterService, null,
logger, new ThreadContext(settings));
final int cluster1Nodes = internalCluster().size();
final int cluster2Nodes = cluster2.size();
logger.info("waiting for [{}] nodes to be added to the tribe cluster state", cluster1Nodes + cluster2Nodes + 2);
final Predicate<ClusterState> nodeCountPredicate = state -> state.nodes().getSize() == cluster1Nodes + cluster2Nodes + 3;
if (nodeCountPredicate.test(clusterState) == false) {
CountDownLatch latch = new CountDownLatch(1);
observer.waitForNextChange(new ClusterStateObserver.Listener() {
@Override
public void onNewClusterState(ClusterState state) {
latch.countDown();
}
@Override
public void onClusterServiceClose() {
fail("tribe cluster service closed");
latch.countDown();
}
@Override
public void onTimeout(TimeValue timeout) {
fail("timed out waiting for nodes to be added to tribe's cluster state");
latch.countDown();
}
}, nodeCountPredicate);
latch.await();
}
assertTribeNodeHasAllIndices();
}
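// The elastic user bootstrapped on cluster1 should authenticate through the tribe node.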
public void testThatTribeCanAuthenticateElasticUser() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
setupTribeNode(Settings.EMPTY);
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", getReservedPassword())))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
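// Only one cluster is bootstrapped, so its .security index is the only one in the merged
// view; a password change there should be honored when authenticating via the tribe node.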
public void testThatTribeCanAuthenticateElasticUserWithChangedPassword() throws Exception {
InternalTestCluster cluster = randomBoolean() ? internalCluster() : cluster2;
ensureElasticPasswordBootstrapped(cluster);
setupTribeNode(Settings.EMPTY);
securityClient(cluster.client()).prepareChangePassword("elastic", "password".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
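// Both clusters hold a .security index with different elastic passwords; the default
// tribe.on_conflict of prefer_t1 (set in setupTribeNode) means cluster1's password wins.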
public void testThatTribeClustersHaveDifferentPasswords() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
setupTribeNode(Settings.EMPTY);
securityClient().prepareChangePassword("elastic", "password".toCharArray()).get();
securityClient(cluster2.client()).prepareChangePassword("elastic", "password2".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
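// Users are created on random clusters; only those stored in the preferred cluster's
// .security index should authenticate through the tribe node, the others are masked.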
public void testUsersInBothTribes() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
final String preferredTribe = randomBoolean() ? "t1" : "t2";
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomUsers = scaledRandomIntBetween(3, 8);
final Client cluster1Client = client();
final Client cluster2Client = cluster2.client();
List<String> shouldBeSuccessfulUsers = new ArrayList<>();
List<String> shouldFailUsers = new ArrayList<>();
final Client preferredClient = "t1".equals(preferredTribe) ? cluster1Client : cluster2Client;
for (int i = 0; i < randomUsers; i++) {
final String username = "user" + i;
Client clusterClient = randomBoolean() ? cluster1Client : cluster2Client;
PutUserResponse response =
securityClient(clusterClient).preparePutUser(username, "password".toCharArray(), "superuser").get();
assertTrue(response.created());
// users created on the preferred cluster should authenticate through the tribe node; the rest should not
if (preferredClient == clusterClient) {
shouldBeSuccessfulUsers.add(username);
} else {
shouldFailUsers.add(username);
}
}
assertTribeNodeHasAllIndices();
for (String username : shouldBeSuccessfulUsers) {
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
for (String username : shouldFailUsers) {
ElasticsearchSecurityException e = expectThrows(ElasticsearchSecurityException.class, () ->
tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get());
assertThat(e.getMessage(), containsString("authenticate"));
}
}
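// With a .security index only on the non-preferred cluster there is no index conflict,
// so all of its users should authenticate through the tribe node.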
public void testUsersInNonPreferredClusterOnly() throws Exception {
final String preferredTribe = randomBoolean() ? "t1" : "t2";
// only create users on the non-preferred cluster
final InternalTestCluster nonPreferredCluster = "t1".equals(preferredTribe) ? cluster2 : internalCluster();
ensureElasticPasswordBootstrapped(nonPreferredCluster);
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomUsers = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulUsers = new ArrayList<>();
for (int i = 0; i < randomUsers; i++) {
final String username = "user" + i;
PutUserResponse response =
securityClient(nonPreferredCluster.client()).preparePutUser(username, "password".toCharArray(), "superuser").get();
assertTrue(response.created());
shouldBeSuccessfulUsers.add(username);
}
assertTribeNodeHasAllIndices();
for (String username : shouldBeSuccessfulUsers) {
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
}
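// Sets the reserved user passwords over the REST layer; required because
// shouldSetReservedUserPasswords() is overridden to return false for these tests.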
private void ensureElasticPasswordBootstrapped(InternalTestCluster cluster) {
NodesInfoResponse nodesInfoResponse = cluster.client().admin().cluster().prepareNodesInfo().get();
assertFalse(nodesInfoResponse.hasFailures());
try (RestClient restClient = createRestClient(nodesInfoResponse.getNodes(), null, "http")) {
setupReservedPasswords(restClient);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
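// Every user write operation (put, enable/disable, change password, delete) must be
// rejected when issued through the tribe node.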
public void testUserModificationUsingTribeNodeAreDisabled() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
setupTribeNode(Settings.EMPTY);
SecurityClient securityClient = securityClient(tribeClient);
UnsupportedOperationException e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.preparePutUser("joe", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareSetEnabled("elastic", randomBoolean()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.prepareChangePassword("elastic", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareDeleteUser("joe").get());
assertThat(e.getMessage(), containsString("users may not be deleted using a tribe node"));
}
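// Role counterpart of testUsersInBothTribes: only roles stored in the preferred cluster's
// .security index should be retrievable through the tribe node.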
public void testRetrieveRolesOnTribeNode() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
final String preferredTribe = randomBoolean() ? "t1" : "t2";
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomRoles = scaledRandomIntBetween(3, 8);
final Client cluster1Client = client();
final Client cluster2Client = cluster2.client();
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
List<String> shouldFailRoles = new ArrayList<>();
final Client preferredClient = "t1".equals(preferredTribe) ? cluster1Client : cluster2Client;
for (int i = 0; i < randomRoles; i++) {
final String rolename = "role" + i;
Client clusterClient = randomBoolean() ? cluster1Client : cluster2Client;
PutRoleResponse response = securityClient(clusterClient).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
// roles created on the preferred cluster should be visible through the tribe node; the rest should not
if (preferredClient == clusterClient) {
shouldBeSuccessfulRoles.add(rolename);
} else {
shouldFailRoles.add(rolename);
}
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeClient);
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
for (String rolename : shouldFailRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertFalse(response.hasRoles());
}
}
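// As with users, roles are fully retrievable through the tribe node when only the
// non-preferred cluster has a .security index.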
public void testRetrieveRolesOnNonPreferredClusterOnly() throws Exception {
final String preferredTribe = randomBoolean() ? "t1" : "t2";
final InternalTestCluster nonPreferredCluster = "t1".equals(preferredTribe) ? cluster2 : internalCluster();
ensureElasticPasswordBootstrapped(nonPreferredCluster);
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomRoles = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
Client nonPreferredClient = nonPreferredCluster.client();
for (int i = 0; i < randomRoles; i++) {
final String rolename = "role" + i;
PutRoleResponse response = securityClient(nonPreferredClient).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
shouldBeSuccessfulRoles.add(rolename);
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeClient);
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
}
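// Role write operations, like user writes, must be rejected on the tribe node.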
public void testRoleModificationUsingTribeNodeAreDisabled() throws Exception {
setupTribeNode(Settings.EMPTY);
SecurityClient securityClient = securityClient(getClientWrapper().apply(tribeClient));
UnsupportedOperationException e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.preparePutRole("role").cluster("all").get());
assertThat(e.getMessage(), containsString("roles may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareDeleteRole("role").get());
assertThat(e.getMessage(), containsString("roles may not be deleted using a tribe node"));
}
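// Guard against new tribe.* settings being introduced without updating the security code
// that special-cases them; only the known tribe setting prefixes are expected here.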
public void testTribeSettingNames() throws Exception {
TribeService.TRIBE_SETTING_KEYS
.forEach(s -> assertThat("a new setting has been introduced for tribe that security needs to know about in Security.java",
s, anyOf(startsWith("tribe.blocks"), startsWith("tribe.name"), startsWith("tribe.on_conflict"))));
}
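/**
 * Waits until the tribe node's merged cluster state contains every index of both clusters,
 * with matching metadata, a routing table entry, and active primaries for each index.
 */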
private void assertTribeNodeHasAllIndices() throws Exception {
assertBusy(() -> {
Set<String> indices = new HashSet<>();
client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
cluster2.client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
ClusterState state = tribeClient.admin().cluster().prepareState().setRoutingTable(true).setMetaData(true).get().getState();
assertThat(state.getMetaData().getIndices().size(), equalTo(indices.size()));
for (String index : indices) {
assertTrue(state.getMetaData().hasIndex(index));
assertTrue(state.getRoutingTable().hasIndex(index));
assertTrue(state.getRoutingTable().index(index).allPrimaryShardsActive());
}
});
}
}