Remove all tribe related code, comments and documentation (elastic/x-pack-elasticsearch#3784)

Relates to elastic/elasticsearch#28443

Original commit: elastic/x-pack-elasticsearch@5c4e7fccc7
Simon Willnauer 2018-01-30 20:40:46 +01:00 committed by GitHub
parent 5b7c38da7f
commit 570411c2dc
34 changed files with 102 additions and 3115 deletions

View File

@ -35,13 +35,6 @@ At this time, you cannot use cross cluster search in either the {ml} APIs or the
For more information about cross cluster search,
see {ref}/modules-cross-cluster-search.html[Cross Cluster Search].
[float]
=== {xpackml} features are not supported on tribe nodes
You cannot use {ml} features on tribe nodes. For more information about that
type of node, see
{ref}/modules-tribe.html[Tribe node].
[float]
=== Anomaly Explorer omissions and limitations
//See x-pack-elasticsearch/#844 and x-pack-kibana/#1461

View File

@ -12,5 +12,4 @@ can also adjust how monitoring data is displayed. For more information, see
<<es-monitoring>>.
include::indices.asciidoc[]
include::tribe.asciidoc[]
include::{xes-repo-dir}/settings/monitoring-settings.asciidoc[]

View File

@ -1,40 +0,0 @@
[role="xpack"]
[[monitoring-tribe]]
=== Configuring a Tribe Node to Work with Monitoring
If you connect to a cluster through a <<modules-tribe,tribe node>>,
and you want to monitor the tribe node, then you will need to install {xpack} on
that node as well.
With this configuration, the tribe node is included in the node count displayed
in the Monitoring UI, but is not included in the node list because it does not
export any data to the monitoring cluster.
To include the tribe node in the monitoring data, enable Monitoring data
collection at the tribe level:
[source,yaml]
----------------------------------
node.name: my-tribe-node1
tribe:
  on_conflict: prefer_cluster1
  c1:
    cluster.name: cluster1
    discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300" ]
    xpack.monitoring.enabled: true <1>
  c2:
    cluster.name: cluster2
    discovery.zen.ping.unicast.hosts: [ "cluster2-node1:9300", "cluster2-node2:9300" ]
    xpack.monitoring: <2>
      enabled: true
      exporters:
        id1:
          type: http
          host: [ "monitoring-cluster:9200" ]
----------------------------------
<1> Enable data collection from the tribe node using a Local Exporter.
<2> Enable data collection from the tribe node using an HTTP Exporter.
When you enable data collection from the tribe node, it is included in both the
node count and node list.

View File

@ -1,12 +1,11 @@
[[ccs-tribe-clients-integrations]]
== Cross Cluster Search, Tribe, Clients and Integrations
== Cross Cluster Search, Clients and Integrations
When using {ref}/modules-cross-cluster-search.html[Cross Cluster Search] or
{ref}/modules-tribe.html[Tribe Nodes] you need to take extra steps to secure
communications with the connected clusters.
When using {ref}/modules-cross-cluster-search.html[Cross Cluster Search]
you need to take extra steps to secure communications with the connected
clusters.
* <<cross-cluster-configuring, Cross Cluster Search and Security>>
* <<tribe-node-configuring, Tribe Nodes and Security>>
You will need to update the configuration for several clients to work with a
secured cluster:
@ -34,8 +33,6 @@ or at least communicate with the cluster in a secured way:
include::tribe-clients-integrations/cross-cluster.asciidoc[]
include::tribe-clients-integrations/tribe.asciidoc[]
include::tribe-clients-integrations/java.asciidoc[]
include::tribe-clients-integrations/http.asciidoc[]

View File

@ -1,101 +0,0 @@
[[tribe-node-configuring]]
=== Tribe Nodes and Security
{ref}/modules-tribe.html[Tribe nodes] act as a federated client across multiple
clusters. When using tribe nodes with secured clusters, all clusters must have
{security} enabled and share the same security configuration (users, roles,
user-role mappings, SSL/TLS CA). The tribe node itself also must be configured
to grant access to actions and indices on all of the connected clusters, as
security checks on incoming requests are primarily done on the tribe node
itself.
IMPORTANT: Support for tribe nodes in Kibana was added in v5.2.
To use a tribe node with secured clusters:
. Install {xpack} on the tribe node and every node in each connected cluster.
. Enable encryption globally. To encrypt communications, you must
<<ssl-tls,enable SSL/TLS>> on every node.
+
TIP: To simplify SSL/TLS configuration, use the same certificate authority to
generate certificates for all connected clusters.
. Configure the tribe in the tribe node's `elasticsearch.yml` file. You must
specify each cluster that is a part of the tribe and configure discovery and
encryption settings per cluster. For example, the following configuration adds
two clusters to the tribe:
+
[source,yml]
-----------------------------------------------------------
tribe:
  on_conflict: prefer_cluster1 <1>
  c1: <2>
    cluster.name: cluster1
    discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300" ]
    xpack.ssl.key: /home/es/config/x-pack/es-tribe-01.key
    xpack.ssl.certificate: /home/es/config/x-pack/es-tribe-01.crt
    xpack.ssl.certificate_authorities: [ "/home/es/config/x-pack/ca.crt" ]
    xpack.security.transport.ssl.enabled: true
    xpack.security.http.ssl.enabled: true
  c2:
    cluster.name: cluster2
    discovery.zen.ping.unicast.hosts: [ "cluster2-node1:9300", "cluster2-node2:9300" ]
    xpack.ssl.key: /home/es/config/x-pack/es-tribe-01.key
    xpack.ssl.certificate: /home/es/config/x-pack/es-tribe-01.crt
    xpack.ssl.certificate_authorities: [ "/home/es/config/x-pack/ca.crt" ]
    xpack.security.transport.ssl.enabled: true
    xpack.security.http.ssl.enabled: true
-----------------------------------------------------------
<1> Results are returned from the preferred cluster if the named index exists
in multiple clusters. A preference is *required* when using {security} on
a tribe node.
<2> An arbitrary name that represents the connection to the cluster.
. Configure the same index privileges for your users on all nodes, including the
tribe node. The nodes in each cluster must grant access to indices in other
connected clusters as well as their own.
+
For example, let's assume `cluster1` and `cluster2` each have the indices `index1`
and `index2`. To enable a user to submit a request through the tribe node to
search both clusters:
+
--
.. On the tribe node and both clusters, <<defining-roles, define a `tribe_user` role>>
that has read access to `index1` and `index2`:
+
[source,yaml]
-----------------------------------------------------------
tribe_user:
  indices:
    'index*': search
-----------------------------------------------------------
.. Assign the `tribe_user` role to a user on the tribe node and both clusters.
For example, run the following command on each node to create `my_tribe_user`
and assign the `tribe_user` role:
+
[source,shell]
-----------------------------------------------------------
./bin/shield/users useradd my_tribe_user -p password -r tribe_user
-----------------------------------------------------------
+
NOTE: Each cluster needs to have its own users with admin privileges.
You cannot perform administration tasks such as creating an index through
the tribe node; you must send the request directly to the appropriate
cluster.
--
. To enable selected users to retrieve merged cluster state information
for the tribe from the tribe node, grant them the cluster
<<privileges-list-cluster, `monitor` privilege>> on the tribe node. For example,
you could create a `tribe_monitor` role that assigns the `monitor` privilege:
+
[source,yaml]
-----------------------------------------------------------
tribe_monitor:
  cluster: monitor
-----------------------------------------------------------
. Start the tribe node. If you've made configuration changes to the nodes in the
connected clusters, they also need to be restarted.

View File

@ -76,7 +76,7 @@ public class License implements ToXContentObject {
* <p>
* Note: The mode indicates features that should be made available, but it does not indicate whether the license is active!
*
* The id byte is used for ordering operation modes (used for merging license md in tribe node)
* The id byte is used for ordering operation modes
*/
public enum OperationMode {
MISSING((byte) 0),

View File

@ -29,15 +29,12 @@ import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;
import static org.elasticsearch.xpack.core.XPackClientActionPlugin.isTribeNode;
import static org.elasticsearch.xpack.core.XPackPlugin.transportClientMode;
public class Licensing implements ActionPlugin {
public static final String NAME = "license";
protected final Settings settings;
private final boolean isTransportClient;
private final boolean isTribeNode;
// Until this is moved out to its own plugin (it's currently in XPackPlugin.java), we need to make sure that any edits to this file
// are also carried out in XPackClientPlugin.java
@ -60,15 +57,10 @@ public class Licensing implements ActionPlugin {
public Licensing(Settings settings) {
this.settings = settings;
isTransportClient = transportClientMode(settings);
isTribeNode = isTribeNode(settings);
}
@Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
if (isTribeNode) {
return Collections.singletonList(new ActionHandler<>(GetLicenseAction.INSTANCE, TransportGetLicenseAction.class));
}
return Arrays.asList(new ActionHandler<>(PutLicenseAction.INSTANCE, TransportPutLicenseAction.class),
new ActionHandler<>(GetLicenseAction.INSTANCE, TransportGetLicenseAction.class),
new ActionHandler<>(DeleteLicenseAction.INSTANCE, TransportDeleteLicenseAction.class),
@ -82,12 +74,10 @@ public class Licensing implements ActionPlugin {
Supplier<DiscoveryNodes> nodesInCluster) {
List<RestHandler> handlers = new ArrayList<>();
handlers.add(new RestGetLicenseAction(settings, restController));
if (false == isTribeNode) {
handlers.add(new RestPutLicenseAction(settings, restController));
handlers.add(new RestDeleteLicenseAction(settings, restController));
handlers.add(new RestGetTrialStatus(settings, restController));
handlers.add(new RestPostStartTrialLicense(settings, restController));
}
return handlers;
}

View File

@ -1,19 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.core;
import org.elasticsearch.common.settings.Settings;
public interface XPackClientActionPlugin {
static boolean isTribeNode(Settings settings) {
return settings.getGroups("tribe", true).isEmpty() == false;
}
static boolean isTribeClientNode(Settings settings) {
return settings.get("tribe.name") != null;
}
}
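The removed interface above carried the tribe-node detection in two static helpers. As a self-contained illustration, the same checks can be sketched over a plain `Map<String, String>`, which here stands in for Elasticsearch's `Settings`; the group matching below only approximates `Settings.getGroups` and is not the deleted implementation:

```java
import java.util.Map;

// Hypothetical stand-in for the removed XPackClientActionPlugin helpers:
// a node is a tribe node if any "tribe.<name>.*" setting group is configured,
// and a tribe *client* node if the internal "tribe.name" setting is present.
public class TribeDetection {

    static boolean isTribeNode(Map<String, String> settings) {
        // Approximates settings.getGroups("tribe", true): matches keys such as
        // "tribe.t1.cluster.name" but not the non-group key "tribe.name".
        // (Unlike getGroups, this would also count standalone keys like
        // "tribe.on_conflict" - acceptable for a sketch.)
        return settings.keySet().stream()
                .anyMatch(k -> k.startsWith("tribe.") && !k.equals("tribe.name"));
    }

    static boolean isTribeClientNode(Map<String, String> settings) {
        return settings.get("tribe.name") != null;
    }

    public static void main(String[] args) {
        System.out.println(isTribeNode(Map.of("tribe.t1.cluster.name", "cluster1"))); // true
        System.out.println(isTribeClientNode(Map.of("tribe.name", "t1")));            // true
        System.out.println(isTribeNode(Map.of("cluster.name", "prod")));              // false
    }
}
```

With the tribe concept gone, both helpers and every call site shown in this commit disappear.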

View File

@ -30,10 +30,8 @@ public class XPackSettings {
/** Setting for enabling or disabling security. Defaults to true. */
public static final Setting<Boolean> SECURITY_ENABLED = Setting.boolSetting("xpack.security.enabled", true, Setting.Property.NodeScope);
/** Setting for enabling or disabling monitoring. Defaults to true if not a tribe node. */
public static final Setting<Boolean> MONITORING_ENABLED = Setting.boolSetting("xpack.monitoring.enabled",
// By default, monitoring is disabled on tribe nodes
s -> String.valueOf(XPackClientActionPlugin.isTribeNode(s) == false && XPackClientActionPlugin.isTribeClientNode(s) == false),
/** Setting for enabling or disabling monitoring. */
public static final Setting<Boolean> MONITORING_ENABLED = Setting.boolSetting("xpack.monitoring.enabled", true,
Setting.Property.NodeScope);
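The hunk above swaps a computed default for a constant one. A hedged sketch of that difference, with a plain `Map` and a `Function`-valued default standing in for Elasticsearch's `Settings` and `Setting.boolSetting` machinery:

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of the default-value change for xpack.monitoring.enabled. In
// Elasticsearch, Setting.boolSetting accepts either a constant default or a
// function computed from the node's other settings; the diff replaces the
// computed tribe-aware default with the constant true. Types here are
// simplified stand-ins, not the real Setting infrastructure.
public class MonitoringDefault {

    static boolean getBool(Map<String, String> settings, String key,
                           Function<Map<String, String>, String> defaultValue) {
        return Boolean.parseBoolean(
                settings.getOrDefault(key, defaultValue.apply(settings)));
    }

    // Old default: false on tribe (and tribe client) nodes. Simplified: the
    // real check went through XPackClientActionPlugin.isTribeNode/isTribeClientNode.
    static final Function<Map<String, String>, String> OLD_DEFAULT =
            s -> String.valueOf(s.keySet().stream().noneMatch(k -> k.startsWith("tribe.")));

    // New default: plain true, now that tribe nodes no longer exist.
    static final Function<Map<String, String>, String> NEW_DEFAULT = s -> "true";
}
```

An explicit `xpack.monitoring.enabled` value still wins under either default; only the fallback changes.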
/** Setting for enabling or disabling watcher. Defaults to true. */

View File

@ -49,7 +49,6 @@ import org.elasticsearch.threadpool.ExecutorBuilder;
import org.elasticsearch.threadpool.FixedExecutorBuilder;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.ml.MachineLearningField;
@ -261,8 +260,6 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
private final Environment env;
private final boolean enabled;
private final boolean transportClientMode;
private final boolean tribeNode;
private final boolean tribeNodeClient;
private final SetOnce<AutodetectProcessManager> autodetectProcessManager = new SetOnce<>();
private final SetOnce<DatafeedManager> datafeedManager = new SetOnce<>();
@ -272,8 +269,6 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
this.enabled = XPackSettings.MACHINE_LEARNING_ENABLED.get(settings);
this.transportClientMode = XPackPlugin.transportClientMode(settings);
this.env = transportClientMode ? null : new Environment(settings, configPath);
this.tribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.tribeNodeClient = XPackClientActionPlugin.isTribeClientNode(settings);
}
protected XPackLicenseState getLicenseState() { return XPackPlugin.getSharedLicenseState(); }
@ -299,7 +294,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
String maxOpenJobsPerNodeNodeAttrName = "node.attr." + MAX_OPEN_JOBS_NODE_ATTR;
String machineMemoryAttrName = "node.attr." + MACHINE_MEMORY_NODE_ATTR;
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) {
if (enabled == false || transportClientMode) {
disallowMlNodeAttributes(mlEnabledNodeAttrName, maxOpenJobsPerNodeNodeAttrName, machineMemoryAttrName);
return Settings.EMPTY;
}
@ -384,7 +379,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
// this method can replace the entire contents of the overridden createComponents() method
private Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool,
NamedXContentRegistry xContentRegistry, Environment environment) {
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) {
if (enabled == false || transportClientMode) {
return emptyList();
}
@ -449,7 +444,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
}
public List<PersistentTasksExecutor<?>> createPersistentTasksExecutors(ClusterService clusterService) {
if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) {
if (enabled == false || transportClientMode) {
return emptyList();
}
@ -462,7 +457,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
public Collection<Module> createGuiceModules() {
List<Module> modules = new ArrayList<>();
if (tribeNodeClient || transportClientMode) {
if (transportClientMode) {
return modules;
}
@ -478,7 +473,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter,
IndexNameExpressionResolver indexNameExpressionResolver,
Supplier<DiscoveryNodes> nodesInCluster) {
if (false == enabled || tribeNodeClient || tribeNode) {
if (false == enabled) {
return emptyList();
}
return Arrays.asList(
@ -528,7 +523,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
@Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
if (false == enabled || tribeNodeClient || tribeNode) {
if (false == enabled) {
return emptyList();
}
return Arrays.asList(
@ -587,7 +582,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
}
@Override
public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) {
if (false == enabled || tribeNode || tribeNodeClient || transportClientMode) {
if (false == enabled || transportClientMode) {
return emptyList();
}
int maxNumberOfJobs = AutodetectProcessManager.MAX_OPEN_JOBS_PER_NODE.get(settings);

View File

@ -19,7 +19,6 @@ import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.env.Environment;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.Platforms;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackFeatureSet;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
@ -72,9 +71,8 @@ public class MachineLearningFeatureSet implements XPackFeatureSet {
Map<String, Object> nativeCodeInfo = NativeController.UNKNOWN_NATIVE_CODE_INFO;
// Don't try to get the native code version if ML is disabled - it causes too much controversy
// if ML has been disabled because of some OS incompatibility. Also don't try to get the native
// code version in the transport or tribe client - the controller process won't be running.
if (enabled && XPackPlugin.transportClientMode(environment.settings()) == false
&& XPackClientActionPlugin.isTribeClientNode(environment.settings()) == false) {
// code version in the transport client - the controller process won't be running.
if (enabled && XPackPlugin.transportClientMode(environment.settings()) == false) {
try {
if (isRunningOnMlPlatform(true)) {
NativeController nativeController = NativeControllerHolder.getNativeController(environment);

View File

@ -31,7 +31,6 @@ import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.monitoring.MonitoringField;
@ -68,10 +67,9 @@ import static java.util.Collections.singletonList;
import static org.elasticsearch.common.settings.Setting.boolSetting;
/**
* This class activates/deactivates the monitoring modules depending if we're running a node client, transport client or tribe client:
* - node clients: all modules are binded
* - transport clients: only action/transport actions are binded
* - tribe clients: everything is disables by default but can be enabled per tribe cluster
* This class activates/deactivates the monitoring modules depending on whether we're running a node client or a transport client:
* - node clients: all modules are bound
* - transport clients: only action/transport actions are bound
*/
public class Monitoring extends Plugin implements ActionPlugin {
@ -85,13 +83,11 @@ public class Monitoring extends Plugin implements ActionPlugin {
protected final Settings settings;
private final boolean enabled;
private final boolean transportClientMode;
private final boolean tribeNode;
public Monitoring(Settings settings) {
this.settings = settings;
this.transportClientMode = XPackPlugin.transportClientMode(settings);
this.enabled = XPackSettings.MONITORING_ENABLED.get(settings);
this.tribeNode = XPackClientActionPlugin.isTribeNode(settings);
}
// overridable by tests
@ -112,7 +108,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
List<Module> modules = new ArrayList<>();
modules.add(b -> {
XPackPlugin.bindFeatureSet(b, MonitoringFeatureSet.class);
if (transportClientMode || enabled == false || tribeNode) {
if (transportClientMode || enabled == false) {
b.bind(Exporters.class).toProvider(Providers.of(null));
}
});
@ -124,7 +120,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
ResourceWatcherService resourceWatcherService, ScriptService scriptService,
NamedXContentRegistry xContentRegistry, Environment environment,
NodeEnvironment nodeEnvironment, NamedWriteableRegistry namedWriteableRegistry) {
if (enabled == false || tribeNode) {
if (enabled == false) {
return Collections.emptyList();
}
@ -153,7 +149,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
@Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
if (false == enabled || tribeNode) {
if (false == enabled) {
return emptyList();
}
return singletonList(new ActionHandler<>(MonitoringBulkAction.INSTANCE, TransportMonitoringBulkAction.class));
@ -163,7 +159,7 @@ public class Monitoring extends Plugin implements ActionPlugin {
public List<RestHandler> getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings,
IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver,
Supplier<DiscoveryNodes> nodesInCluster) {
if (false == enabled || tribeNode) {
if (false == enabled) {
return emptyList();
}
return singletonList(new RestMonitoringBulkAction(settings, restController));

View File

@ -524,7 +524,6 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
SecurityNetty4HttpServerTransport.overrideSettings(builder, settings);
}
builder.put(SecuritySettings.addUserSettings(settings));
addTribeSettings(settings, builder);
return builder.build();
} else {
return Settings.EMPTY;
@ -720,64 +719,6 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
return Collections.singletonMap(SetSecurityUserProcessor.TYPE, new SetSecurityUserProcessor.Factory(parameters.threadContext));
}
/**
* If the current node is a tribe node, we inject additional settings on each tribe client. We do this to make sure
* that every tribe cluster has x-pack installed and security is enabled. We do that by:
*
* - making it mandatory on the tribe client (this means that the tribe node will fail at startup if x-pack is
* not loaded on any tribe due to missing mandatory plugin)
*
* - forcibly enabling it (that means it's not possible to disable security on the tribe clients)
*/
private static void addTribeSettings(Settings settings, Settings.Builder settingsBuilder) {
Map<String, Settings> tribesSettings = settings.getGroups("tribe", true);
if (tribesSettings.isEmpty()) {
// it's not a tribe node
return;
}
for (Map.Entry<String, Settings> tribeSettings : tribesSettings.entrySet()) {
final String tribeName = tribeSettings.getKey();
final String tribePrefix = "tribe." + tribeName + ".";
if ("blocks".equals(tribeName) || "on_conflict".equals(tribeName) || "name".equals(tribeName)) {
continue;
}
final String tribeEnabledSetting = tribePrefix + XPackSettings.SECURITY_ENABLED.getKey();
if (settings.get(tribeEnabledSetting) != null) {
boolean enabled = XPackSettings.SECURITY_ENABLED.get(tribeSettings.getValue());
if (!enabled) {
throw new IllegalStateException("tribe setting [" + tribeEnabledSetting + "] must be set to true but the value is ["
+ settings.get(tribeEnabledSetting) + "]");
}
} else {
//x-pack security must be enabled on every tribe if it's enabled on the tribe node
settingsBuilder.put(tribeEnabledSetting, true);
}
// we passed all the checks now we need to copy in all of the x-pack security settings
settings.keySet().forEach(k -> {
if (k.startsWith("xpack.security.")) {
settingsBuilder.copy(tribePrefix + k, k, settings);
}
});
}
Map<String, Settings> realmsSettings = settings.getGroups(SecurityField.setting("authc.realms"), true);
final boolean hasNativeRealm = XPackSettings.RESERVED_REALM_ENABLED_SETTING.get(settings) ||
realmsSettings.isEmpty() ||
realmsSettings.entrySet().stream()
.anyMatch((e) -> NativeRealmSettings.TYPE.equals(e.getValue().get("type")) && e.getValue()
.getAsBoolean("enabled", true));
if (hasNativeRealm) {
if (settings.get("tribe.on_conflict", "").startsWith("prefer_") == false) {
throw new IllegalArgumentException("use of security on tribe nodes requires setting [tribe.on_conflict] to specify the " +
"name of the tribe to prefer such as [prefer_t1] as the security index can exist in multiple tribes but only one" +
" can be used by the tribe node");
}
}
}
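For readers tracing what this deletion changes, the removed `addTribeSettings` behavior can be approximated as a pure function over a flat settings map. This is a hedged sketch with stand-in types (`Map` instead of `Settings`/`Settings.Builder`), not the deleted implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the removed addTribeSettings: for every configured tribe client
// (tribe.<name>.*), force-enable security and copy the node's own
// xpack.security.* settings under the tribe prefix. Returns only the
// additional settings that would be injected.
public class TribeSettingsCopy {

    static Map<String, String> addTribeSettings(Map<String, String> settings) {
        Map<String, String> extra = new HashMap<>();
        Set<String> nonTribeKeys = Set.of("blocks", "on_conflict", "name");
        for (String key : settings.keySet()) {
            if (!key.startsWith("tribe.")) continue;
            String tribeName = key.substring("tribe.".length()).split("\\.")[0];
            if (nonTribeKeys.contains(tribeName)) continue;
            String prefix = "tribe." + tribeName + ".";
            String enabledKey = prefix + "xpack.security.enabled";
            String explicit = settings.get(enabledKey);
            if (explicit != null) {
                if (!Boolean.parseBoolean(explicit)) {
                    // Security cannot be disabled on a tribe client.
                    throw new IllegalStateException(
                            "tribe setting [" + enabledKey + "] must be set to true");
                }
            } else {
                extra.put(enabledKey, "true"); // force security on every tribe client
            }
            for (String k : settings.keySet()) {
                if (k.startsWith("xpack.security.")) {
                    extra.put(prefix + k, settings.get(k)); // propagate security config
                }
            }
        }
        return extra;
    }
}
```

The real method additionally validated `tribe.on_conflict` when a native realm was in use, as shown in the deleted hunk above.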
static boolean indexAuditLoggingEnabled(Settings settings) {
if (XPackSettings.AUDIT_ENABLED.get(settings)) {

View File

@ -36,7 +36,6 @@ import org.elasticsearch.index.engine.DocumentMissingException;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheRequest;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse;
@ -83,14 +82,12 @@ public class NativeUsersStore extends AbstractComponent {
private final Hasher hasher = Hasher.BCRYPT;
private final Client client;
private final boolean isTribeNode;
private volatile SecurityLifecycleService securityLifecycleService;
public NativeUsersStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityLifecycleService = securityLifecycleService;
}
@ -195,9 +192,6 @@ public class NativeUsersStore extends AbstractComponent {
public void changePassword(final ChangePasswordRequest request, final ActionListener<Void> listener) {
final String username = request.username();
assert SystemUser.NAME.equals(username) == false && XPackUser.NAME.equals(username) == false : username + " is internal!";
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node"));
} else {
final String docType;
if (ClientReservedRealm.isReserved(username, settings)) {
docType = RESERVED_USER_TYPE;
@ -237,7 +231,6 @@ public class NativeUsersStore extends AbstractComponent {
}, client::update);
});
}
}
/**
* Asynchronous method to create a reserved user with the given password hash. The cache for the user will be cleared after the document
@ -273,9 +266,7 @@ public class NativeUsersStore extends AbstractComponent {
* method will not modify the enabled value.
*/
public void putUser(final PutUserRequest request, final ActionListener<Boolean> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node"));
} else if (request.passwordHash() == null) {
if (request.passwordHash() == null) {
updateUserWithoutPassword(request, listener);
} else {
indexUser(request, listener);
@ -366,9 +357,7 @@ public class NativeUsersStore extends AbstractComponent {
*/
public void setEnabled(final String username, final boolean enabled, final RefreshPolicy refreshPolicy,
final ActionListener<Void> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("users may not be created or modified using a tribe node"));
} else if (ClientReservedRealm.isReserved(username, settings)) {
if (ClientReservedRealm.isReserved(username, settings)) {
setReservedUserEnabled(username, enabled, refreshPolicy, true, listener);
} else {
setRegularUserEnabled(username, enabled, refreshPolicy, listener);
@ -442,9 +431,6 @@ public class NativeUsersStore extends AbstractComponent {
}
public void deleteUser(final DeleteUserRequest deleteUserRequest, final ActionListener<Boolean> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("users may not be deleted using a tribe node"));
} else {
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
DeleteRequest request = client.prepareDelete(SECURITY_INDEX_NAME,
INDEX_TYPE, getIdForUser(USER_DOC_TYPE, deleteUserRequest.username())).request();
@ -464,7 +450,6 @@ public class NativeUsersStore extends AbstractComponent {
}, client::delete);
});
}
}
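The hunks above all delete instances of a single guard pattern: native-store writes failed fast on tribe nodes through the call's listener before touching the security index. A minimal sketch of that shape, where `Listener` and `doDelete` are hypothetical stand-ins for `ActionListener` and the real index delete request:

```java
// Sketch of the guard pattern removed throughout NativeUsersStore (and the
// role and role-mapping stores in the following files).
public class WriteGuard {

    interface Listener<T> {
        void onResponse(T value);
        void onFailure(Exception e);
    }

    private final boolean isTribeNode;

    WriteGuard(boolean isTribeNode) {
        this.isTribeNode = isTribeNode;
    }

    void deleteUser(String username, Listener<Boolean> listener) {
        if (isTribeNode) {
            // Old behavior: reject the write before touching the security index.
            listener.onFailure(new UnsupportedOperationException(
                    "users may not be deleted using a tribe node"));
        } else {
            // With tribe nodes removed, only this branch survives.
            listener.onResponse(doDelete(username));
        }
    }

    private boolean doDelete(String username) {
        return true; // stand-in for the real delete-by-id request
    }
}
```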
/**
* This method is used to verify the username and credentials against those stored in the system.

View File

@ -25,7 +25,6 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheAction;
import org.elasticsearch.xpack.core.security.action.realm.ClearRealmCacheResponse;
@ -81,14 +80,12 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
private static final String SECURITY_GENERIC_TYPE = "doc";
private final Client client;
private final boolean isTribeNode;
private final SecurityLifecycleService securityLifecycleService;
private final List<String> realmsToRefresh = new CopyOnWriteArrayList<>();
public NativeRoleMappingStore(Settings settings, Client client, SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityLifecycleService = securityLifecycleService;
}
@ -164,9 +161,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
private <Request, Result> void modifyMapping(String name, CheckedBiConsumer<Request, ActionListener<Result>, Exception> inner,
Request request, ActionListener<Result> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("role-mappings may not be modified using a tribe node"));
} else if (securityLifecycleService.isSecurityIndexOutOfDate()) {
if (securityLifecycleService.isSecurityIndexOutOfDate()) {
listener.onFailure(new IllegalStateException(
"Security index is not on the current version - the native realm will not be operational until " +
"the upgrade API is run on the security index"));

View File

@ -35,7 +35,6 @@ import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.core.XPackClientActionPlugin;
import org.elasticsearch.xpack.core.security.ScrollHelper;
import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheRequest;
import org.elasticsearch.xpack.core.security.action.role.ClearRolesCacheResponse;
@ -84,7 +83,6 @@ public class NativeRolesStore extends AbstractComponent {
private final Client client;
private final XPackLicenseState licenseState;
private final boolean isTribeNode;
private SecurityClient securityClient;
private final SecurityLifecycleService securityLifecycleService;
@ -93,7 +91,6 @@ public class NativeRolesStore extends AbstractComponent {
SecurityLifecycleService securityLifecycleService) {
super(settings);
this.client = client;
this.isTribeNode = XPackClientActionPlugin.isTribeNode(settings);
this.securityClient = new SecurityClient(client);
this.licenseState = licenseState;
this.securityLifecycleService = securityLifecycleService;
@ -136,9 +133,6 @@ public class NativeRolesStore extends AbstractComponent {
}
public void deleteRole(final DeleteRoleRequest deleteRoleRequest, final ActionListener<Boolean> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("roles may not be deleted using a tribe node"));
} else {
securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
DeleteRequest request = client.prepareDelete(SecurityLifecycleService.SECURITY_INDEX_NAME,
ROLE_DOC_TYPE, getIdForUser(deleteRoleRequest.name())).request();
@ -159,12 +153,9 @@ public class NativeRolesStore extends AbstractComponent {
}, client::delete);
});
}
}
public void putRole(final PutRoleRequest request, final RoleDescriptor role, final ActionListener<Boolean> listener) {
if (isTribeNode) {
listener.onFailure(new UnsupportedOperationException("roles may not be created or modified using a tribe node"));
} else if (licenseState.isDocumentAndFieldLevelSecurityAllowed()) {
if (licenseState.isDocumentAndFieldLevelSecurityAllowed()) {
innerPutRole(request, role, listener);
} else if (role.isUsingDocumentOrFieldLevelSecurity()) {
listener.onFailure(LicenseUtils.newComplianceException("field and document level security"));


@@ -21,124 +21,6 @@ import static org.hamcrest.Matchers.nullValue;
public class SecuritySettingsTests extends ESTestCase {
private static final String TRIBE_T1_SECURITY_ENABLED = "tribe.t1." + XPackSettings.SECURITY_ENABLED.getKey();
private static final String TRIBE_T2_SECURITY_ENABLED = "tribe.t2." + XPackSettings.SECURITY_ENABLED.getKey();
public void testSecurityIsEnabledByDefaultOnTribes() {
Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put("tribe.on_conflict", "prefer_t1")
.build();
Settings additionalSettings = Security.additionalSettings(settings, true, false);
assertThat(additionalSettings.getAsBoolean(TRIBE_T1_SECURITY_ENABLED, null), equalTo(true));
assertThat(additionalSettings.getAsBoolean(TRIBE_T2_SECURITY_ENABLED, null), equalTo(true));
}
public void testSecurityDisabledOnATribe() {
Settings settings = Settings.builder().put("tribe.t1.cluster.name", "non_existing")
.put(TRIBE_T1_SECURITY_ENABLED, false)
.put("tribe.t2.cluster.name", "non_existing").build();
try {
Security.additionalSettings(settings, true, false);
fail("security cannot change the value of a setting that is already defined, so an exception should be thrown");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString(TRIBE_T1_SECURITY_ENABLED));
}
}
public void testSecurityDisabledOnTribesSecurityAlreadyMandatory() {
Settings settings = Settings.builder().put("tribe.t1.cluster.name", "non_existing")
.put(TRIBE_T1_SECURITY_ENABLED, false)
.put("tribe.t2.cluster.name", "non_existing")
.putList("tribe.t1.plugin.mandatory", "test_plugin", XPackField.NAME).build();
try {
Security.additionalSettings(settings, true, false);
fail("security cannot change the value of a setting that is already defined, so an exception should be thrown");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString(TRIBE_T1_SECURITY_ENABLED));
}
}
public void testSecuritySettingsCopiedForTribeNodes() {
Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing")
.put("tribe.on_conflict", "prefer_" + randomFrom("t1", "t2"))
.put("xpack.security.foo", "bar")
.put("xpack.security.bar", "foo")
.putList("xpack.security.something.else.here", new String[] { "foo", "bar" })
.build();
Settings additionalSettings = Security.additionalSettings(settings, true, false);
assertThat(additionalSettings.get("xpack.security.foo"), nullValue());
assertThat(additionalSettings.get("xpack.security.bar"), nullValue());
assertThat(additionalSettings.getAsList("xpack.security.something.else.here"), is(Collections.emptyList()));
assertThat(additionalSettings.get("tribe.t1.xpack.security.foo"), is("bar"));
assertThat(additionalSettings.get("tribe.t1.xpack.security.bar"), is("foo"));
assertThat(additionalSettings.getAsList("tribe.t1.xpack.security.something.else.here"), contains("foo", "bar"));
assertThat(additionalSettings.get("tribe.t2.xpack.security.foo"), is("bar"));
assertThat(additionalSettings.get("tribe.t2.xpack.security.bar"), is("foo"));
assertThat(additionalSettings.getAsList("tribe.t2.xpack.security.something.else.here"), contains("foo", "bar"));
assertThat(additionalSettings.get("tribe.on_conflict"), nullValue());
assertThat(additionalSettings.get("tribe.t1.on_conflict"), nullValue());
assertThat(additionalSettings.get("tribe.t2.on_conflict"), nullValue());
}
public void testOnConflictMustBeSetOnTribe() {
final Settings settings = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.build();
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(settings, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
final Settings badOnConflict = Settings.builder().put(settings).put("tribe.on_conflict", randomFrom("any", "drop")).build();
e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(badOnConflict, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
Settings goodOnConflict = Settings.builder().put(settings).put("tribe.on_conflict", "prefer_" + randomFrom("t1", "t2")).build();
Settings additionalSettings = Security.additionalSettings(goodOnConflict, true, false);
assertNotNull(additionalSettings);
}
public void testOnConflictWithNoNativeRealms() {
final Settings noNative = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put(XPackSettings.RESERVED_REALM_ENABLED_SETTING.getKey(), false)
.put("xpack.security.authc.realms.foo.type", randomFrom("ldap", "pki", randomAlphaOfLengthBetween(1, 6)))
.build();
Settings additionalSettings = Security.additionalSettings(noNative, true, false);
assertNotNull(additionalSettings);
// still with the reserved realm
final Settings withReserved = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put("xpack.security.authc.realms.foo.type", randomFrom("ldap", "pki", randomAlphaOfLengthBetween(1, 6)))
.build();
IllegalArgumentException e = expectThrows(
IllegalArgumentException.class,
() -> Security.additionalSettings(withReserved, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
// reserved disabled but no realms defined
final Settings reservedDisabled = Settings.builder()
.put("tribe.t1.cluster.name", "non_existing")
.put("tribe.t2.cluster.name", "non_existing2")
.put(XPackSettings.RESERVED_REALM_ENABLED_SETTING.getKey(), false)
.build();
e = expectThrows(IllegalArgumentException.class, () -> Security.additionalSettings(reservedDisabled, true, false));
assertThat(e.getMessage(), containsString("tribe.on_conflict"));
}
public void testValidAutoCreateIndex() {
Security.validateAutoCreateIndex(Settings.EMPTY);
Security.validateAutoCreateIndex(Settings.builder().put("action.auto_create_index", true).build());


@@ -1,176 +0,0 @@
<?xml version="1.0"?>
<!--
~ ELASTICSEARCH CONFIDENTIAL
~ __________________
~
~ [2014] Elasticsearch Incorporated. All Rights Reserved.
~
~ NOTICE: All information contained herein is, and remains
~ the property of Elasticsearch Incorporated and its suppliers,
~ if any. The intellectual and technical concepts contained
~ herein are proprietary to Elasticsearch Incorporated
~ and its suppliers and may be covered by U.S. and Foreign Patents,
~ patents in process, and are protected by trade secret or copyright law.
~ Dissemination of this information or reproduction of this material
~ is strictly forbidden unless prior written permission is obtained
~ from Elasticsearch Incorporated.
-->
<project name="smoke-test-tribe-node-with-security"
xmlns:ac="antlib:net.sf.antcontrib">
<taskdef name="xhttp" classname="org.elasticsearch.ant.HttpTask" classpath="${test_classpath}" />
<typedef name="xhttp" classname="org.elasticsearch.ant.HttpCondition" classpath="${test_classpath}"/>
<import file="${elasticsearch.integ.antfile.default}"/>
<import file="${elasticsearch.tools.directory}/ant/security-overrides.xml"/>
<property name="tribe_node.pidfile" location="${integ.scratch}/tribe-node.pid"/>
<available property="tribe_node.pidfile.exists" file="${tribe_node.pidfile}"/>
<property name="cluster1.pidfile" location="${integ.scratch}/cluster1.pid"/>
<available property="cluster1.pidfile.exists" file="${cluster1.pidfile}"/>
<property name="cluster2.pidfile" location="${integ.scratch}/cluster2.pid"/>
<available property="cluster2.pidfile.exists" file="${cluster2.pidfile}"/>
<macrodef name="create-index">
<attribute name="name" />
<attribute name="port" />
<sequential>
<xhttp uri="http://127.0.0.1:@{port}/@{name}" method="PUT" username="test_admin" password="x-pack-test-password" />
<waitfor maxwait="30" maxwaitunit="second"
checkevery="500" checkeveryunit="millisecond"
timeoutproperty="@{timeoutproperty}">
<xhttp uri="http://127.0.0.1:@{port}/_cluster/health/@{name}?wait_for_status=yellow" username="test_admin" password="x-pack-test-password" />
</waitfor>
</sequential>
</macrodef>
<target name="start-tribe-node-and-2-clusters-with-security" depends="setup-workspace">
<ac:for list="${xplugins.list}" param="xplugin.name">
<sequential>
<fail message="Expected @{xplugin.name}-${version}.zip as a dependency, but could not be found in ${integ.deps}/plugins}">
<condition>
<not>
<available file="${integ.deps}/plugins/@{xplugin.name}-${elasticsearch.version}.zip"/>
</not>
</condition>
</fail>
</sequential>
</ac:for>
<ac:for param="file">
<path>
<fileset dir="${integ.deps}/plugins"/>
</path>
<sequential>
<local name="plugin.name"/>
<convert-plugin-name file="@{file}" outputproperty="plugin.name"/>
<install-plugin name="${plugin.name}" file="@{file}"/>
</sequential>
</ac:for>
<local name="home"/>
<property name="home" location="${integ.scratch}/elasticsearch-${elasticsearch.version}"/>
<echo>Adding roles.yml</echo>
<copy file="roles.yml" tofile="${home}/config/x-pack/roles.yml" overwrite="true"/>
<echo>Adding security users...</echo>
<run-script script="${home}/bin/x-pack/esusers">
<nested>
<arg value="useradd"/>
<arg value="test_admin"/>
<arg value="-p"/>
<arg value="x-pack-test-password"/>
<arg value="-r"/>
<arg value="admin"/>
</nested>
</run-script>
<echo>Starting two nodes, each node in a different cluster</echo>
<ac:trycatch property="failure.message">
<ac:try>
<startup-elasticsearch es.transport.tcp.port="9600"
es.http.port="9700"
es.pidfile="${cluster1.pidfile}"
es.unicast.hosts="127.0.0.1:9600"
es.cluster.name="cluster1"/>
</ac:try>
<ac:catch>
<echo>Failed to start cluster1 with message: ${failure.message}</echo>
<stop-node es.pidfile="${cluster1.pidfile}"/>
</ac:catch>
</ac:trycatch>
<ac:trycatch property="failure.message">
<ac:try>
<startup-elasticsearch es.transport.tcp.port="9800"
es.http.port="9900"
es.pidfile="${cluster2.pidfile}"
es.unicast.hosts="127.0.0.1:9800"
es.cluster.name="cluster2"/>
</ac:try>
<ac:catch>
<echo>Failed to start cluster2 with message: ${failure.message}</echo>
<stop-node es.pidfile="${cluster1.pidfile}"/>
<stop-node es.pidfile="${cluster2.pidfile}"/>
</ac:catch>
</ac:trycatch>
<ac:trycatch property="failure.message">
<ac:try>
<echo>Starting a tribe node, configured to connect to cluster1 and cluster2</echo>
<startup-elasticsearch es.pidfile="${tribe_node.pidfile}">
<additional-args>
<arg value="-Des.tribe.cluster1.cluster.name=cluster1"/>
<arg value="-Des.tribe.cluster1.discovery.zen.ping.unicast.hosts=127.0.0.1:9600"/>
<arg value="-Des.tribe.cluster2.cluster.name=cluster2"/>
<arg value="-Des.tribe.cluster2.discovery.zen.ping.unicast.hosts=127.0.0.1:9800"/>
</additional-args>
</startup-elasticsearch>
<xhttp uri="http://127.0.0.1:${integ.http.port}/_cluster/health?wait_for_nodes=5" username="test_admin" password="changeme" />
<!--
From the rest tests we only connect to the tribe node, so we need create the indices externally:
By creating the index after the tribe node has started we can be sure that the tribe node knows
about it. See: https://github.com/elastic/elasticsearch/issues/13292
-->
<echo>Creating index1 in cluster1</echo>
<create-index name="index1" port="9700"/>
<!-- TODO: remove this after we know why on CI the shards of index1 don't get into a started state -->
<loadfile property="cluster1-logs" srcFile="${integ.scratch}/elasticsearch-${elasticsearch.version}/logs/cluster1.log" />
<echo>post index1 creation es logs: ${cluster1-logs}</echo>
<echo>Creating index2 in cluster2</echo>
<create-index name="index2" port="9900"/>
<!-- TODO: remove this after we know why on CI the shards of index2 don't get into a started state -->
<loadfile property="cluster2-logs" srcFile="${integ.scratch}/elasticsearch-${elasticsearch.version}/logs/cluster2.log" />
<echo>post index2 creation es logs: ${cluster2-logs}</echo>
</ac:try>
<ac:catch>
<echo>Failed to start tribe node with message: ${failure.message}</echo>
<stop-node es.pidfile="${tribe_node.pidfile}"/>
<stop-node es.pidfile="${cluster1.pidfile}"/>
<stop-node es.pidfile="${cluster2.pidfile}"/>
</ac:catch>
</ac:trycatch>
</target>
<target name="stop-tribe-node" if="tribe_node.pidfile.exists">
<stop-node es.pidfile="${tribe_node.pidfile}"/>
</target>
<target name="stop-cluster1" if="cluster1.pidfile.exists">
<stop-node es.pidfile="${cluster1.pidfile}"/>
</target>
<target name="stop-cluster2" if="cluster2.pidfile.exists">
<stop-node es.pidfile="${cluster2.pidfile}"/>
</target>
<target name="stop-tribe-node-and-all-clusters" depends="stop-tribe-node,stop-cluster1,stop-cluster2"/>
</project>


@@ -1,27 +0,0 @@
---
"Tribe node search":
- do:
index:
index: index1
type: test
id: 1
body: { foo: bar }
- do:
index:
index: index2
type: test
id: 1
body: { foo: bar }
- do:
indices.refresh: {}
- do:
search:
index: index1,index2
body:
query: { term: { foo: bar }}
- match: { hits.total: 2 }


@@ -1,4 +0,0 @@
admin:
cluster: all
indices:
'*': all


@@ -1,25 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.ant;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.taskdefs.condition.Condition;
public class HttpCondition extends HttpTask implements Condition {
private int expectedResponseCode = 200;
@Override
public boolean eval() throws BuildException {
int responseCode = executeHttpRequest();
getProject().log("response code=" + responseCode);
return responseCode == expectedResponseCode;
}
public void setExpectedResponseCode(int expectedResponseCode) {
this.expectedResponseCode = expectedResponseCode;
}
}


@@ -1,82 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.ant;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;
import org.elasticsearch.common.Base64;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class HttpTask extends Task {
private String uri;
private String method;
private String body;
private String username;
private String password;
@Override
public void execute() throws BuildException {
int responseCode = executeHttpRequest();
getProject().log("response code=" + responseCode);
}
protected int executeHttpRequest() {
try {
URI uri = new URI(this.uri);
URL url = uri.toURL();
getProject().log("url=" + url);
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
if (method != null) {
urlConnection.setRequestMethod(method);
}
if (username != null) {
String basicAuth = "Basic " + Base64.encodeBytes((username + ":" + password).getBytes(StandardCharsets.UTF_8));
urlConnection.setRequestProperty("Authorization", basicAuth);
}
if (body != null) {
urlConnection.setDoOutput(true);
urlConnection.setRequestProperty("Accept-Charset", StandardCharsets.UTF_8.name());
byte[] bytes = body.getBytes(StandardCharsets.UTF_8.name());
urlConnection.setRequestProperty("Content-Length", String.valueOf(bytes.length));
urlConnection.getOutputStream().write(bytes);
urlConnection.getOutputStream().close();
}
urlConnection.connect();
int responseCode = urlConnection.getResponseCode();
urlConnection.disconnect();
return responseCode;
} catch (Exception e) {
throw new BuildException(e);
}
}
public void setUri(String uri) {
this.uri = uri;
}
public void setMethod(String method) {
this.method = method;
}
public void setBody(String body) {
this.body = body;
}
public void setUsername(String username) {
this.username = username;
}
public void setPassword(String password) {
this.password = password;
}
}
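The removed `HttpTask` above builds its `Authorization` header with the now-removed `org.elasticsearch.common.Base64` helper. A minimal, standalone sketch of the same Basic-auth header construction using the JDK's `java.util.Base64` (the class name `BasicAuth` is illustrative, not from the original code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class BasicAuth {
    // Builds an HTTP Basic Authorization header value from a username/password pair,
    // equivalent to what HttpTask set on its HttpURLConnection.
    static String headerValue(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(headerValue("user", "pass")); // Basic dXNlcjpwYXNz
    }
}
```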


@@ -1,43 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import com.carrotsearch.randomizedtesting.annotations.Name;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.elasticsearch.client.support.Headers;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.security.authc.support.SecuredString;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.RestTestCandidate;
import org.elasticsearch.test.rest.parser.RestTestParseException;
import java.io.IOException;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
public class RestIT extends TribeRestTestCase {
private static final String USER = "test_admin";
private static final String PASS = "x-pack-test-password";
public RestIT(@Name("yaml") RestTestCandidate testCandidate) {
super(testCandidate);
}
@ParametersFactory
public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {
return ESRestTestCase.createParameters(0, 1);
}
@Override
protected Settings restClientSettings() {
String token = basicAuthHeaderValue(USER, new SecuredString(PASS.toCharArray()));
return Settings.builder()
.put(Headers.PREFIX + ".Authorization", token)
.build();
}
}


@@ -1,371 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import com.carrotsearch.randomizedtesting.RandomizedTest;
import com.carrotsearch.randomizedtesting.annotations.TestGroup;
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.LuceneTestCase.SuppressCodecs;
import org.apache.lucene.util.LuceneTestCase.SuppressFsync;
import org.apache.lucene.util.TimeUnits;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.RestTestCandidate;
import org.elasticsearch.test.rest.RestTestExecutionContext;
import org.elasticsearch.test.rest.client.RestException;
import org.elasticsearch.test.rest.parser.RestTestParseException;
import org.elasticsearch.test.rest.parser.RestTestSuiteParser;
import org.elasticsearch.test.rest.section.DoSection;
import org.elasticsearch.test.rest.section.ExecutableSection;
import org.elasticsearch.test.rest.section.RestTestSuite;
import org.elasticsearch.test.rest.section.SkipSection;
import org.elasticsearch.test.rest.section.TestSection;
import org.elasticsearch.test.rest.spec.RestApi;
import org.elasticsearch.test.rest.spec.RestSpec;
import org.elasticsearch.test.rest.support.FileUtils;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.UnknownHostException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Set;
/**
* Forked from RestTestCase with changes required to run rest tests via a tribe node
*
* Reasons for forking:
* 1) Always communicate via the tribe node from the tests. The original class in core connects to any endpoint it can see via the nodes info API, which would also make the nodes of the other clusters entry points. That must not happen for the tribe tests.
* 2) The original class in core executes delete calls after each test, but the tribe node can't handle master-level write operations. These API calls hang for 1m and then just fail.
* 3) The indices in cluster1 and cluster2 are created from the Ant integ file, and because of that the original class in core would just remove them in between tests.
* 4) extends ESTestCase instead of ESIntegTestCase and doesn't set up a test cluster; it just connects to the one endpoint defined in tests.rest.cluster.
*/
@ESRestTestCase.Rest
@SuppressFsync // we aren't trying to test this here, and it can make the test slow
@SuppressCodecs("*") // requires custom completion postings format
@ClusterScope(randomDynamicTemplates = false)
@TimeoutSuite(millis = 40 * TimeUnits.MINUTE) // timeout the suite after 40min and fail the test.
public abstract class TribeRestTestCase extends ESTestCase {
/**
* Property that allows to control whether the REST tests are run (default) or not
*/
public static final String TESTS_REST = "tests.rest";
public static final String TESTS_REST_CLUSTER = "tests.rest.cluster";
/**
* Annotation for REST tests
*/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@TestGroup(enabled = true, sysProperty = ESRestTestCase.TESTS_REST)
public @interface Rest {
}
/**
* Property that allows to control which REST tests get run. Supports comma separated list of tests
* or directories that contain tests e.g. -Dtests.rest.suite=index,get,create/10_with_id
*/
public static final String REST_TESTS_SUITE = "tests.rest.suite";
/**
* Property that allows to blacklist some of the REST tests based on a comma separated list of globs
* e.g. -Dtests.rest.blacklist=get/10_basic/*
*/
public static final String REST_TESTS_BLACKLIST = "tests.rest.blacklist";
/**
* Property that allows to control whether spec validation is enabled or not (default true).
*/
public static final String REST_TESTS_VALIDATE_SPEC = "tests.rest.validate_spec";
/**
* Property that allows to control where the REST spec files need to be loaded from
*/
public static final String REST_TESTS_SPEC = "tests.rest.spec";
public static final String REST_LOAD_PACKAGED_TESTS = "tests.rest.load_packaged";
private static final String DEFAULT_TESTS_PATH = "/rest-api-spec/test";
private static final String DEFAULT_SPEC_PATH = "/rest-api-spec/api";
private static final String PATHS_SEPARATOR = ",";
private final PathMatcher[] blacklistPathMatchers;
private static RestTestExecutionContext restTestExecutionContext;
private final RestTestCandidate testCandidate;
public TribeRestTestCase(RestTestCandidate testCandidate) {
this.testCandidate = testCandidate;
String[] blacklist = resolvePathsProperty(REST_TESTS_BLACKLIST, null);
if (blacklist != null) {
blacklistPathMatchers = new PathMatcher[blacklist.length];
int i = 0;
for (String glob : blacklist) {
blacklistPathMatchers[i++] = PathUtils.getDefaultFileSystem().getPathMatcher("glob:" + glob);
}
} else {
blacklistPathMatchers = new PathMatcher[0];
}
}
@Override
protected void afterIfFailed(List<Throwable> errors) {
logger.info("Stash dump on failure [{}]", XContentHelper.toString(restTestExecutionContext.stash()));
super.afterIfFailed(errors);
}
public static Iterable<Object[]> createParameters(int id, int count) throws IOException, RestTestParseException {
TestGroup testGroup = Rest.class.getAnnotation(TestGroup.class);
String sysProperty = TestGroup.Utilities.getSysProperty(Rest.class);
boolean enabled;
try {
enabled = RandomizedTest.systemPropertyAsBoolean(sysProperty, testGroup.enabled());
} catch (IllegalArgumentException e) {
// Ignore malformed system property, disable the group if malformed though.
enabled = false;
}
if (!enabled) {
return new ArrayList<>();
}
//parse tests only if rest test group is enabled, otherwise rest tests might not even be available on file system
List<RestTestCandidate> restTestCandidates = collectTestCandidates(id, count);
List<Object[]> objects = new ArrayList<>();
for (RestTestCandidate restTestCandidate : restTestCandidates) {
objects.add(new Object[]{restTestCandidate});
}
return objects;
}
private static List<RestTestCandidate> collectTestCandidates(int id, int count) throws RestTestParseException, IOException {
List<RestTestCandidate> testCandidates = new ArrayList<>();
FileSystem fileSystem = getFileSystem();
// don't make a try-with, getFileSystem returns null
// ... and you can't close() the default filesystem
try {
String[] paths = resolvePathsProperty(REST_TESTS_SUITE, DEFAULT_TESTS_PATH);
Map<String, Set<Path>> yamlSuites = FileUtils.findYamlSuites(fileSystem, DEFAULT_TESTS_PATH, paths);
RestTestSuiteParser restTestSuiteParser = new RestTestSuiteParser();
//yaml suites are grouped by directory (effectively by api)
for (String api : yamlSuites.keySet()) {
List<Path> yamlFiles = new ArrayList<>(yamlSuites.get(api));
for (Path yamlFile : yamlFiles) {
String key = api + yamlFile.getFileName().toString();
if (mustExecute(key, id, count)) {
RestTestSuite restTestSuite = restTestSuiteParser.parse(api, yamlFile);
for (TestSection testSection : restTestSuite.getTestSections()) {
testCandidates.add(new RestTestCandidate(restTestSuite, testSection));
}
}
}
}
} finally {
IOUtils.close(fileSystem);
}
//sort the candidates so they will always be in the same order before being shuffled, for repeatability
Collections.sort(testCandidates, new Comparator<RestTestCandidate>() {
@Override
public int compare(RestTestCandidate o1, RestTestCandidate o2) {
return o1.getTestPath().compareTo(o2.getTestPath());
}
});
return testCandidates;
}
private static boolean mustExecute(String test, int id, int count) {
int hash = (int) (Math.abs((long)test.hashCode()) % count);
return hash == id;
}
private static String[] resolvePathsProperty(String propertyName, String defaultValue) {
String property = System.getProperty(propertyName);
if (!Strings.hasLength(property)) {
return defaultValue == null ? null : new String[]{defaultValue};
} else {
return property.split(PATHS_SEPARATOR);
}
}
/**
* Returns a new FileSystem to read REST resources, or null if they
* are available from classpath.
*/
@SuppressForbidden(reason = "proper use of URL, hack around a JDK bug")
static FileSystem getFileSystem() throws IOException {
// REST suite handling is currently complicated, with lots of filtering and so on
// For now, to work embedded in a jar, return a ZipFileSystem over the jar contents.
URL codeLocation = FileUtils.class.getProtectionDomain().getCodeSource().getLocation();
boolean loadPackaged = RandomizedTest.systemPropertyAsBoolean(REST_LOAD_PACKAGED_TESTS, true);
if (codeLocation.getFile().endsWith(".jar") && loadPackaged) {
try {
// hack around a bug in the zipfilesystem implementation before java 9,
// its checkWritable was incorrect and it won't work without write permissions.
// if we add the permission, it will open jars r/w, which is too scary! so copy to a safe r-w location.
Path tmp = Files.createTempFile(null, ".jar");
try (InputStream in = codeLocation.openStream()) {
Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
}
return FileSystems.newFileSystem(new URI("jar:" + tmp.toUri()), Collections.<String,Object>emptyMap());
} catch (URISyntaxException e) {
throw new IOException("couldn't open zipfilesystem: ", e);
}
} else {
return null;
}
}
@BeforeClass
public static void initExecutionContext() throws IOException, RestException {
String[] specPaths = resolvePathsProperty(REST_TESTS_SPEC, DEFAULT_SPEC_PATH);
RestSpec restSpec = null;
FileSystem fileSystem = getFileSystem();
// don't make a try-with, getFileSystem returns null
// ... and you can't close() the default filesystem
try {
restSpec = RestSpec.parseFrom(fileSystem, DEFAULT_SPEC_PATH, specPaths);
} finally {
IOUtils.close(fileSystem);
}
validateSpec(restSpec);
restTestExecutionContext = new RestTestExecutionContext(restSpec);
}
private static void validateSpec(RestSpec restSpec) {
boolean validateSpec = RandomizedTest.systemPropertyAsBoolean(REST_TESTS_VALIDATE_SPEC, true);
if (validateSpec) {
StringBuilder errorMessage = new StringBuilder();
for (RestApi restApi : restSpec.getApis()) {
if (restApi.getMethods().contains("GET") && restApi.isBodySupported()) {
if (!restApi.getMethods().contains("POST")) {
errorMessage.append("\n- ").append(restApi.getName()).append(" supports GET with a body but doesn't support POST");
}
}
}
if (errorMessage.length() > 0) {
throw new IllegalArgumentException(errorMessage.toString());
}
}
}
@AfterClass
public static void close() {
if (restTestExecutionContext != null) {
restTestExecutionContext.close();
restTestExecutionContext = null;
}
}
/**
* Used to obtain settings for the REST client that is used to send REST requests.
*/
protected Settings restClientSettings() {
return Settings.EMPTY;
}
protected InetSocketAddress[] httpAddresses() {
String clusterAddresses = System.getProperty(TESTS_REST_CLUSTER);
String[] stringAddresses = clusterAddresses.split(",");
InetSocketAddress[] transportAddresses = new InetSocketAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
String[] split = stringAddress.split(":");
if (split.length < 2) {
throw new IllegalArgumentException("address [" + clusterAddresses + "] not valid");
}
try {
transportAddresses[i++] = new InetSocketAddress(InetAddress.getByName(split[0]), Integer.valueOf(split[1]));
} catch (NumberFormatException e) {
throw new IllegalArgumentException("port is not valid, expected number but was [" + split[1] + "]");
} catch (UnknownHostException e) {
throw new IllegalArgumentException("unknown host [" + split[0] + "]", e);
}
}
return transportAddresses;
}
@Before
public void reset() throws IOException, RestException {
//skip test if it matches one of the blacklist globs
for (PathMatcher blacklistedPathMatcher : blacklistPathMatchers) {
//we need to replace a few characters otherwise the test section name can't be parsed as a path on windows
String testSection = testCandidate.getTestSection().getName().replace("*", "").replace("\\", "/").replaceAll("\\s+/", "/").replace(":", "").trim();
String testPath = testCandidate.getSuitePath() + "/" + testSection;
assumeFalse("[" + testCandidate.getTestPath() + "] skipped, reason: blacklisted", blacklistedPathMatcher.matches(PathUtils.get(testPath)));
}
//The client needs non static info to get initialized, therefore it can't be initialized in the before class
restTestExecutionContext.initClient(httpAddresses(), restClientSettings());
restTestExecutionContext.clear();
//skip test if the whole suite (yaml file) is disabled
assumeFalse(buildSkipMessage(testCandidate.getSuitePath(), testCandidate.getSetupSection().getSkipSection()),
testCandidate.getSetupSection().getSkipSection().skip(restTestExecutionContext.esVersion()));
//skip test if test section is disabled
assumeFalse(buildSkipMessage(testCandidate.getTestPath(), testCandidate.getTestSection().getSkipSection()),
testCandidate.getTestSection().getSkipSection().skip(restTestExecutionContext.esVersion()));
}
private static String buildSkipMessage(String description, SkipSection skipSection) {
StringBuilder messageBuilder = new StringBuilder();
if (skipSection.isVersionCheck()) {
messageBuilder.append("[").append(description).append("] skipped, reason: [").append(skipSection.getReason()).append("] ");
} else {
messageBuilder.append("[").append(description).append("] skipped, reason: features ").append(skipSection.getFeatures()).append(" not supported");
}
return messageBuilder.toString();
}
public void test() throws IOException {
//let's check that there is something to run, otherwise there might be a problem with the test section
if (testCandidate.getTestSection().getExecutableSections().size() == 0) {
throw new IllegalArgumentException("No executable sections loaded for [" + testCandidate.getTestPath() + "]");
}
if (!testCandidate.getSetupSection().isEmpty()) {
logger.info("start setup test [{}]", testCandidate.getTestPath());
for (DoSection doSection : testCandidate.getSetupSection().getDoSections()) {
doSection.execute(restTestExecutionContext);
}
logger.info("end setup test [{}]", testCandidate.getTestPath());
}
restTestExecutionContext.clear();
for (ExecutableSection executableSection : testCandidate.getTestSection().getExecutableSections()) {
executableSection.execute(restTestExecutionContext);
}
}
}


@@ -1,132 +0,0 @@
import org.elasticsearch.gradle.test.ClusterConfiguration
import org.elasticsearch.gradle.test.ClusterFormationTasks
import org.elasticsearch.gradle.test.NodeInfo
apply plugin: 'elasticsearch.standalone-test'
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.rest-test'
dependencies {
testCompile project(path: ':modules:tribe', configuration: 'runtime')
testCompile project(path: xpackProject('plugin').path, configuration: 'testArtifacts')
// TODO: remove all these test deps, this is completely bogus, guava is being force upgraded
testCompile project(path: xpackModule('deprecation'), configuration: 'runtime')
testCompile project(path: xpackModule('graph'), configuration: 'runtime')
testCompile project(path: xpackModule('logstash'), configuration: 'runtime')
testCompile project(path: xpackModule('ml'), configuration: 'runtime')
testCompile project(path: xpackModule('monitoring'), configuration: 'runtime')
testCompile project(path: xpackModule('security'), configuration: 'runtime')
testCompile project(path: xpackModule('upgrade'), configuration: 'runtime')
testCompile project(path: xpackModule('watcher'), configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'testArtifacts')
testCompile project(path: xpackModule('monitoring'), configuration: 'testArtifacts')
}
compileTestJava.options.compilerArgs << "-Xlint:-rawtypes,-unchecked"
namingConventions.skipIntegTestInDisguise = true
test {
/*
* We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
* other if we allow them to set the number of available processors as it's set-once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
include '**/*Tests.class'
}
String licensePath = xpackProject('license-tools').projectDir.toPath().resolve('src/test/resources').toString()
sourceSets {
test {
resources {
srcDirs += [licensePath]
}
}
}
project.forbiddenPatterns {
exclude '**/*.key'
}
task setupClusterOne {}
ClusterConfiguration cluster1Config = new ClusterConfiguration(project)
cluster1Config.clusterName = 'cluster1'
cluster1Config.setting('node.name', 'cluster1-node1')
// x-pack
cluster1Config.plugin(xpackProject('plugin').path)
cluster1Config.setting('xpack.monitoring.enabled', false)
cluster1Config.setting('xpack.security.enabled', false)
cluster1Config.setting('xpack.watcher.enabled', false)
cluster1Config.setting('xpack.graph.enabled', false)
cluster1Config.setting('xpack.ml.enabled', false)
List<NodeInfo> cluster1Nodes = ClusterFormationTasks.setup(project, 'clusterOne', setupClusterOne, cluster1Config)
task setupClusterTwo {}
ClusterConfiguration cluster2Config = new ClusterConfiguration(project)
cluster2Config.clusterName = 'cluster2'
cluster2Config.setting('node.name', 'cluster2-node1')
// x-pack
cluster2Config.plugin(xpackProject('plugin').path)
cluster2Config.setting('xpack.monitoring.enabled', false)
cluster2Config.setting('xpack.security.enabled', false)
cluster2Config.setting('xpack.watcher.enabled', false)
cluster2Config.setting('xpack.graph.enabled', false)
cluster2Config.setting('xpack.ml.enabled', false)
List<NodeInfo> cluster2Nodes = ClusterFormationTasks.setup(project, 'clusterTwo', setupClusterTwo, cluster2Config)
integTestCluster {
dependsOn setupClusterOne, setupClusterTwo
setting 'node.name', 'tribe-node'
setting 'tribe.on_conflict', 'prefer_cluster1'
setting 'tribe.cluster1.cluster.name', 'cluster1'
setting 'tribe.cluster1.discovery.zen.ping.unicast.hosts', "'${-> cluster1Nodes.get(0).transportUri()}'"
setting 'tribe.cluster1.http.enabled', 'true'
setting 'tribe.cluster1.xpack.monitoring.enabled', false
setting 'tribe.cluster1.xpack.security.enabled', false
setting 'tribe.cluster1.xpack.watcher.enabled', false
setting 'tribe.cluster1.xpack.graph.enabled', false
setting 'tribe.cluster1.xpack.ml.enabled', false
setting 'tribe.cluster2.cluster.name', 'cluster2'
setting 'tribe.cluster2.discovery.zen.ping.unicast.hosts', "'${-> cluster2Nodes.get(0).transportUri()}'"
setting 'tribe.cluster2.http.enabled', 'true'
setting 'tribe.cluster2.xpack.monitoring.enabled', false
setting 'tribe.cluster2.xpack.security.enabled', false
setting 'tribe.cluster2.xpack.watcher.enabled', false
setting 'tribe.cluster2.xpack.graph.enabled', false
setting 'tribe.cluster2.xpack.ml.enabled', false
// x-pack
plugin xpackProject('plugin').path
setting 'xpack.monitoring.enabled', false
setting 'xpack.security.enabled', false
setting 'xpack.watcher.enabled', false
setting 'xpack.graph.enabled', false
setting 'xpack.ml.enabled', false
waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
// 5 nodes: tribe + clusterOne (1 node + tribe internal node) + clusterTwo (1 node + tribe internal node)
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=5&wait_for_status=yellow",
dest: tmpFile.toString(),
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
}
integTestRunner {
/*
* We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
* other if we allow them to set the number of available processors as it's set-once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
systemProperty 'tests.cluster', "${-> cluster1Nodes.get(0).transportUri()}"
systemProperty 'tests.cluster2', "${-> cluster2Nodes.get(0).transportUri()}"
systemProperty 'tests.tribe', "${-> integTest.nodes.get(0).transportUri()}"
finalizedBy 'clusterOne#stop'
finalizedBy 'clusterTwo#stop'
}


@@ -1,44 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.license;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import static org.elasticsearch.license.TestUtils.generateSignedLicense;
public class LicenseTribeTests extends TribeTransportTestCase {
@Override
protected void verifyActionOnClientNode(Client client) throws Exception {
assertLicenseTransportActionsWorks(client);
}
@Override
protected void verifyActionOnMasterNode(Client masterClient) throws Exception {
assertLicenseTransportActionsWorks(masterClient);
}
@Override
protected void verifyActionOnDataNode(Client dataNodeClient) throws Exception {
assertLicenseTransportActionsWorks(dataNodeClient);
}
private static void assertLicenseTransportActionsWorks(Client client) throws Exception {
client.execute(GetLicenseAction.INSTANCE, new GetLicenseRequest()).get();
client.execute(PutLicenseAction.INSTANCE, new PutLicenseRequest()
.license(generateSignedLicense(TimeValue.timeValueHours(1))));
client.execute(DeleteLicenseAction.INSTANCE, new DeleteLicenseRequest());
}
@Override
protected void verifyActionOnTribeNode(Client tribeClient) throws Exception {
// The get license action should work, but everything else should fail
tribeClient.execute(GetLicenseAction.INSTANCE, new GetLicenseRequest()).get();
failAction(tribeClient, PutLicenseAction.INSTANCE);
failAction(tribeClient, DeleteLicenseAction.INSTANCE);
}
}


@@ -1,322 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.license;
import org.elasticsearch.action.Action;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.analysis.common.CommonAnalysisPlugin;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.discovery.zen.UnicastZenPing;
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.ESIntegTestCase.Scope;
import org.elasticsearch.test.InternalTestCluster;
import org.elasticsearch.test.NodeConfigurationSource;
import org.elasticsearch.test.TestCluster;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.xpack.CompositeTestingXPackPlugin;
import org.elasticsearch.xpack.core.XPackClientPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.deprecation.Deprecation;
import org.elasticsearch.xpack.graph.Graph;
import org.elasticsearch.xpack.logstash.Logstash;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.core.ml.MachineLearningField;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.upgrade.Upgrade;
import org.elasticsearch.xpack.watcher.Watcher;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
@ClusterScope(scope = Scope.TEST, transportClientRatio = 0, numClientNodes = 1, numDataNodes = 2)
public abstract class TribeTransportTestCase extends ESIntegTestCase {
protected List<String> enabledFeatures() {
return Collections.emptyList();
}
@Override
protected final Settings nodeSettings(int nodeOrdinal) {
final Settings.Builder builder = Settings.builder()
.put(NetworkModule.HTTP_ENABLED.getKey(), false)
.put("transport.type", getTestTransportType());
List<String> enabledFeatures = enabledFeatures();
builder.put(XPackSettings.SECURITY_ENABLED.getKey(), enabledFeatures.contains(XPackField.SECURITY));
builder.put(XPackSettings.MONITORING_ENABLED.getKey(), enabledFeatures.contains(XPackField.MONITORING));
builder.put(XPackSettings.WATCHER_ENABLED.getKey(), enabledFeatures.contains(XPackField.WATCHER));
builder.put(XPackSettings.GRAPH_ENABLED.getKey(), enabledFeatures.contains(XPackField.GRAPH));
builder.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), enabledFeatures.contains(XPackField.MACHINE_LEARNING));
builder.put(MachineLearningField.AUTODETECT_PROCESS.getKey(), false);
return builder.build();
}
@Override
protected boolean ignoreExternalCluster() {
return true;
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>();
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(CompositeTestingXPackPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}
@Override
protected final Collection<Class<? extends Plugin>> transportClientPlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>();
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
plugins.add(XPackClientPlugin.class);
plugins.add(CommonAnalysisPlugin.class);
return plugins;
}
public void testTribeSetup() throws Exception {
NodeConfigurationSource nodeConfigurationSource = new NodeConfigurationSource() {
@Override
public Settings nodeSettings(int nodeOrdinal) {
return TribeTransportTestCase.this.nodeSettings(nodeOrdinal);
}
@Override
public Path nodeConfigPath(int nodeOrdinal) {
return null;
}
@Override
public Collection<Class<? extends Plugin>> nodePlugins() {
return TribeTransportTestCase.this.nodePlugins();
}
@Override
public Settings transportClientSettings() {
return TribeTransportTestCase.this.transportClientSettings();
}
@Override
public Collection<Class<? extends Plugin>> transportClientPlugins() {
return TribeTransportTestCase.this.transportClientPlugins();
}
};
final InternalTestCluster cluster2 = new InternalTestCluster(
randomLong(), createTempDir(), true, true, 2, 2,
UUIDs.randomBase64UUID(random()), nodeConfigurationSource, 1, false, "tribe_node2",
getMockPlugins(), getClientWrapper());
cluster2.beforeTest(random(), 0.0);
logger.info("create 2 indices, test1 on t1, and test2 on t2");
assertAcked(internalCluster().client().admin().indices().prepareCreate("test1").get());
assertAcked(cluster2.client().admin().indices().prepareCreate("test2").get());
ensureYellow(internalCluster());
ensureYellow(cluster2);
Settings.Builder tribe1Defaults = Settings.builder();
Settings.Builder tribe2Defaults = Settings.builder();
internalCluster().getDefaultSettings().keySet().forEach(k -> {
if (k.startsWith("path.") == false) {
tribe1Defaults.copy(k, internalCluster().getDefaultSettings());
tribe2Defaults.copy(k, internalCluster().getDefaultSettings());
}
});
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.normalizePrefix("tribe.t2.");
// give each tribe its unicast hosts to connect to
tribe1Defaults.putList("tribe.t1." + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.getKey(),
getUnicastHosts(internalCluster().client()));
tribe2Defaults.putList("tribe.t2." + UnicastZenPing.DISCOVERY_ZEN_PING_UNICAST_HOSTS_SETTING.getKey(),
getUnicastHosts(cluster2.client()));
Settings merged = Settings.builder()
.put("tribe.t1.cluster.name", internalCluster().getClusterName())
.put("tribe.t2.cluster.name", cluster2.getClusterName())
.put("tribe.t1.transport.type", getTestTransportType())
.put("tribe.t2.transport.type", getTestTransportType())
.put("tribe.blocks.write", false)
.put(tribe1Defaults.build())
.put(tribe2Defaults.build())
.put(NetworkModule.HTTP_ENABLED.getKey(), false)
.put(internalCluster().getDefaultSettings())
.put(XPackSettings.SECURITY_ENABLED.getKey(), false) // otherwise it conflicts with mock transport
.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put(MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("tribe.t1." + XPackSettings.SECURITY_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.SECURITY_ENABLED.getKey(), false)
.put("tribe.t1." + XPackSettings.WATCHER_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.WATCHER_ENABLED.getKey(), false)
.put("tribe.t1." + XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put("tribe.t2." + XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
.put("tribe.t1." + MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("tribe.t2." + MachineLearningField.AUTODETECT_PROCESS.getKey(), false)
.put("node.name", "tribe_node") // make sure we can identify threads from this node
.put("transport.type", getTestTransportType())
.build();
final List<Class<? extends Plugin>> mockPlugins = Arrays.asList(MockTribePlugin.class, TribeAwareTestZenDiscoveryPlugin.class,
getTestTransportPlugin(), Deprecation.class, Graph.class, Logstash.class, MachineLearning.class, Monitoring.class,
Security.class, Upgrade.class, Watcher.class, XPackPlugin.class);
final Node tribeNode = new MockNode(merged, mockPlugins).start();
Client tribeClient = tribeNode.client();
logger.info("wait till tribe has the same nodes as the 2 clusters");
assertBusy(() -> {
DiscoveryNodes tribeNodes = tribeNode.client().admin().cluster().prepareState().get().getState().getNodes();
assertThat(countDataNodesForTribe("t1", tribeNodes),
equalTo(internalCluster().client().admin().cluster().prepareState().get().getState()
.getNodes().getDataNodes().size()));
assertThat(countDataNodesForTribe("t2", tribeNodes),
equalTo(cluster2.client().admin().cluster().prepareState().get().getState()
.getNodes().getDataNodes().size()));
});
logger.info(" --> verify transport actions for tribe node");
verifyActionOnTribeNode(tribeClient);
logger.info(" --> verify transport actions for data node");
verifyActionOnDataNode((randomBoolean() ? internalCluster() : cluster2).dataNodeClient());
logger.info(" --> verify transport actions for master node");
verifyActionOnMasterNode((randomBoolean() ? internalCluster() : cluster2).masterClient());
logger.info(" --> verify transport actions for client node");
verifyActionOnClientNode((randomBoolean() ? internalCluster() : cluster2).coordOnlyNodeClient());
try {
cluster2.wipe(Collections.<String>emptySet());
} finally {
cluster2.afterTest();
}
tribeNode.close();
cluster2.close();
}
/**
* Verify transport action behaviour on client node
*/
protected abstract void verifyActionOnClientNode(Client client) throws Exception;
/**
* Verify transport action behaviour on master node
*/
protected abstract void verifyActionOnMasterNode(Client masterClient) throws Exception;
/**
* Verify transport action behaviour on data node
*/
protected abstract void verifyActionOnDataNode(Client dataNodeClient) throws Exception;
/**
* Verify transport action behaviour on tribe node
*/
protected abstract void verifyActionOnTribeNode(Client tribeClient) throws Exception;
protected void failAction(Client client, Action action) {
try {
client.execute(action, action.newRequestBuilder(client).request());
fail("expected [" + action.name() + "] to fail");
} catch (IllegalStateException e) {
assertThat(e.getMessage(), containsString("failed to find action"));
}
}
private void ensureYellow(TestCluster testCluster) {
ClusterHealthResponse actionGet = testCluster.client().admin().cluster()
.health(Requests.clusterHealthRequest().waitForYellowStatus()
.waitForEvents(Priority.LANGUID).waitForNoRelocatingShards(true)).actionGet();
if (actionGet.isTimedOut()) {
logger.info("ensureYellow timed out, cluster state:\n{}\n{}", testCluster.client().admin().cluster()
.prepareState().get().getState(),
testCluster.client().admin().cluster().preparePendingClusterTasks().get());
assertThat("timed out waiting for yellow state", actionGet.isTimedOut(), equalTo(false));
}
assertThat(actionGet.getStatus(), anyOf(equalTo(ClusterHealthStatus.YELLOW), equalTo(ClusterHealthStatus.GREEN)));
}
private int countDataNodesForTribe(String tribeName, DiscoveryNodes nodes) {
int count = 0;
for (DiscoveryNode node : nodes) {
if (!node.isDataNode()) {
continue;
}
if (tribeName.equals(node.getAttributes().get("tribe.name"))) {
count++;
}
}
return count;
}
private static String[] getUnicastHosts(Client client) {
ArrayList<String> unicastHosts = new ArrayList<>();
NodesInfoResponse nodeInfos = client.admin().cluster().prepareNodesInfo().clear().setTransport(true).get();
for (NodeInfo info : nodeInfos.getNodes()) {
TransportAddress address = info.getTransport().getAddress().publishAddress();
unicastHosts.add(address.getAddress() + ":" + address.getPort());
}
return unicastHosts.toArray(new String[unicastHosts.size()]);
}
}


@@ -1,177 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.test;
import org.elasticsearch.Build;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.license.GetLicenseResponse;
import org.elasticsearch.license.License;
import org.elasticsearch.license.LicensesStatus;
import org.elasticsearch.license.LicensingClient;
import org.elasticsearch.license.PutLicenseResponse;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.xpack.core.LocalStateCompositeXPackPlugin;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.junit.AfterClass;
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.Collections;
import static org.hamcrest.CoreMatchers.equalTo;
public class LicensingTribeIT extends ESIntegTestCase {
private static TestCluster cluster2;
private static TestCluster tribeNode;
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
return Collections.singletonList(XPackPlugin.class);
}
@Override
protected Collection<Class<? extends Plugin>> transportClientPlugins() {
return Collections.singletonList(LocalStateCompositeXPackPlugin.class);
}
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
cluster2 = buildExternalCluster(System.getProperty("tests.cluster2"));
}
if (tribeNode == null) {
tribeNode = buildExternalCluster(System.getProperty("tests.tribe"));
}
}
@AfterClass
public static void tearDownExternalClusters() throws IOException {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
if (tribeNode != null) {
try {
tribeNode.close();
} finally {
tribeNode = null;
}
}
}
@Override
protected Settings externalClusterClientSettings() {
Settings.Builder builder = Settings.builder();
builder.put(XPackSettings.SECURITY_ENABLED.getKey(), false);
builder.put(XPackSettings.MONITORING_ENABLED.getKey(), false);
builder.put(XPackSettings.WATCHER_ENABLED.getKey(), false);
builder.put(XPackSettings.GRAPH_ENABLED.getKey(), false);
builder.put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false);
return builder.build();
}
private ExternalTestCluster buildExternalCluster(String clusterAddresses) throws IOException {
String[] stringAddresses = clusterAddresses.split(",");
TransportAddress[] transportAddresses = new TransportAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
URL url = new URL("http://" + stringAddress);
InetAddress inetAddress = InetAddress.getByName(url.getHost());
transportAddresses[i++] = new TransportAddress(new InetSocketAddress(inetAddress, url.getPort()));
}
return new ExternalTestCluster(createTempDir(), externalClusterClientSettings(), transportClientPlugins(), transportAddresses);
}
public void testLicensePropagateToTribeNode() throws Exception {
assumeTrue("License is only valid when tested against snapshot/test keys", Build.CURRENT.isSnapshot());
// test that auto-generated trial license propagates to tribe
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.TRIAL));
});
// test that signed license put in one cluster propagates to tribe
LicensingClient cluster1Client = new LicensingClient(client());
PutLicenseResponse licenseResponse = cluster1Client
.preparePutLicense(License.fromSource(new BytesArray(BASIC_LICENSE.getBytes(StandardCharsets.UTF_8)), XContentType.JSON))
.setAcknowledge(true).get();
assertThat(licenseResponse.isAcknowledged(), equalTo(true));
assertThat(licenseResponse.status(), equalTo(LicensesStatus.VALID));
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.BASIC));
});
// test that signed license with higher operation mode takes precedence
LicensingClient cluster2Client = new LicensingClient(cluster2.client());
licenseResponse = cluster2Client
.preparePutLicense(License.fromSource(new BytesArray(PLATINUM_LICENSE.getBytes(StandardCharsets.UTF_8)), XContentType.JSON))
.setAcknowledge(true).get();
assertThat(licenseResponse.isAcknowledged(), equalTo(true));
assertThat(licenseResponse.status(), equalTo(LicensesStatus.VALID));
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.PLATINUM));
});
// test removing signed license falls back works
assertTrue(cluster2Client.prepareDeleteLicense().get().isAcknowledged());
assertBusy(() -> {
GetLicenseResponse getLicenseResponse = new LicensingClient(tribeNode.client()).prepareGetLicense().get();
assertNotNull(getLicenseResponse.license());
assertThat(getLicenseResponse.license().operationMode(), equalTo(License.OperationMode.BASIC));
});
}
public void testDummy() throws Exception {
// this test is here so that testLicensePropagateToTribeNode's assumption
// doesn't cause this test suite to run no tests and trigger a build failure
}
private static final String PLATINUM_LICENSE = "{\"license\":{\"uid\":\"1\",\"type\":\"platinum\"," +
"\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1," +
"\"issued_to\":\"issuedTo\",\"issuer\":\"issuer\"," +
"\"signature\":\"AAAAAwAAAA2hWlkvKcxQIpdVWdCtAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1" +
"PSmkxakxZdW5IMlhlTHNoN1N2MXMvRFk4d3JTZEx3R3RRZ0pzU3lobWJKZnQvSEFva0ppTHBkWkprZWZSQi9iNmRQNkw1SlpLN0l" +
"DalZCS095MXRGN1lIZlpYcVVTTnFrcTE2dzhJZmZrdFQrN3JQeGwxb0U0MXZ0dDJHSERiZTVLOHNzSDByWnpoZEphZHBEZjUrTVB" +
"xRENNSXNsWWJjZllaODdzVmEzUjNiWktNWGM5TUhQV2plaUo4Q1JOUml4MXNuL0pSOEhQaVB2azhmUk9QVzhFeTFoM1Q0RnJXSG5" +
"3MWk2K055c28zSmRnVkF1b2JSQkFLV2VXUmVHNDZ2R3o2VE1qbVNQS2lxOHN5bUErZlNIWkZSVmZIWEtaSU9wTTJENDVvT1NCYkla" +
"cUYyK2FwRW9xa0t6dldMbmMzSGtQc3FWOTgzZ3ZUcXMvQkt2RUZwMFJnZzlvL2d2bDRWUzh6UG5pdENGWFRreXNKNkE9PQAAAQBWg" +
"u3yZp0KOBG//92X4YVmau3P5asvx0FAPDX2Ze734Tap/nc30X6Rt4yEEm+6bCQr/ibBOqWboJKRbbTZLBQfYFmL1ZqvAY3bJJ1/Xs" +
"8NyDfxKGztlUt/IIOzHPzxs0f8Bv4OJeK48vjovWaDc1Vmo4n1SGyyL0JcEbOWC6A3U3mBsWn7wLUe+hW9+akVAYOO5TIcm60ub7k" +
"H/LIZNOhvGglSVDbl3p8EBkNMy0CV7urQ0wdG1nLCnvf8/BiT15lC5nLrM9Dt5w3pzciPlASzw4iksW/CzvYy5tjOoWKEnxi2EZOB" +
"9dKyT4mTdvyBOrTHLdgr4lmHd3qYAEgcTCaQ\",\"start_date_in_millis\":-1}}";
private static final String BASIC_LICENSE = "{\"license\":{\"uid\":\"1\",\"type\":\"basic\"," +
"\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1," +
"\"issued_to\":\"issuedTo\",\"issuer\":\"issuer\",\"signature\":\"AAA" + "AAwAAAA2is2oANL3mZGS883l9AAAB" +
"mC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxakxZdW5IMlhlTHNoN1N2MXMvRFk4d3JTZEx3R3RRZ0pzU3" +
"lobWJKZnQvSEFva0ppTHBkWkprZWZSQi9iNmRQNkw1SlpLN0lDalZCS095MXRGN1lIZlpYcVVTTnFrcTE2dzhJZmZrdFQrN3JQeGwx" +
"b0U0MXZ0dDJHSERiZTVLOHNzSDByWnpoZEphZHBEZjUrTVBxRENNSXNsWWJjZllaODdzVmEzUjNiWktNWGM5TUhQV2plaUo4Q1JOUm" +
"l4MXNuL0pSOEhQaVB2azhmUk9QVzhFeTFoM1Q0RnJXSG53MWk2K055c28zSmRnVkF1b2JSQkFLV2VXUmVHNDZ2R3o2VE1qbVNQS2lx" +
"OHN5bUErZlNIWkZSVmZIWEtaSU9wTTJENDVvT1NCYklacUYyK2FwRW9xa0t6dldMbmMzSGtQc3FWOTgzZ3ZUcXMvQkt2RUZwMFJnZz" +
"lvL2d2bDRWUzh6UG5pdENGWFRreXNKNkE9PQAAAQCjL9HJnHrHVRq39yO5OFrOS0fY+mf+KqLh8i+RK4s9Hepdi/VQ3SHTEonEUCCB" +
"1iFO35eykW3t+poCMji9VGkslQyJ+uWKzUqn0lmioy8ukpjETcmKH8TSWTqcC7HNZ0NKc1XMTxwkIi/chQTsPUz+h3gfCHZRQwGnRz" +
"JPmPjCJf4293hsMFUlsFQU3tYKDH+kULMdNx1Cg+3PhbUCNrUyQJMb5p4XDrwOaanZUM6HdifS1Y/qjxLXC/B1wHGFEpvrEPFyBuSe" +
"GnJ9uxkrBSv28iG0qsyHrFhHQXIMVFlQKCPaMKikfuZyRhxzE5ntTcGJMn84llCaIyX/kmzqoZHQ\",\"start_date_in_millis\":-1}}\n";
}


@@ -1,72 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.core.LocalStateCompositeXPackPlugin;
import org.elasticsearch.xpack.core.ssl.SSLService;
import org.elasticsearch.xpack.deprecation.Deprecation;
import org.elasticsearch.xpack.graph.Graph;
import org.elasticsearch.xpack.logstash.Logstash;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.watcher.Watcher;
import java.nio.file.Path;
public class CompositeTestingXPackPlugin extends LocalStateCompositeXPackPlugin {
public CompositeTestingXPackPlugin(final Settings settings, final Path configPath) throws Exception {
super(settings, configPath);
CompositeTestingXPackPlugin thisVar = this;
plugins.add(new Deprecation());
plugins.add(new Graph(settings));
plugins.add(new Logstash(settings));
plugins.add(new MachineLearning(settings, configPath) {
@Override
protected XPackLicenseState getLicenseState() {
return super.getLicenseState();
}
});
plugins.add(new Monitoring(settings) {
@Override
protected SSLService getSslService() {
return thisVar.getSslService();
}
@Override
protected LicenseService getLicenseService() {
return thisVar.getLicenseService();
}
@Override
protected XPackLicenseState getLicenseState() {
return thisVar.getLicenseState();
}
});
plugins.add(new Watcher(settings) {
@Override
protected SSLService getSslService() {
return thisVar.getSslService();
}
@Override
protected XPackLicenseState getLicenseState() {
return thisVar.getLicenseState();
}
});
plugins.add(new Security(settings, configPath) {
@Override
protected SSLService getSslService() { return thisVar.getSslService(); }
@Override
protected XPackLicenseState getLicenseState() { return thisVar.getLicenseState(); }
});
}
}


@@ -1,174 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.PluginInfo;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collection;
import java.util.function.Function;
import static org.elasticsearch.test.ESIntegTestCase.Scope.TEST;
import static org.hamcrest.Matchers.equalTo;
@ClusterScope(scope = TEST, transportClientRatio = 0, numClientNodes = 0, numDataNodes = 0)
public class MonitoringPluginTests extends MonitoringIntegTestCase {
public MonitoringPluginTests() throws Exception {
super();
}
@Override
protected void startMonitoringService() {
// do nothing as monitoring is sometimes unbound
}
@Override
protected void stopMonitoringService() {
// do nothing as monitoring is sometimes unbound
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
@Override
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(MonitoringService.INTERVAL.getKey(), "-1")
.put(XPackSettings.SECURITY_ENABLED.getKey(), false)
.put(XPackSettings.WATCHER_ENABLED.getKey(), false)
.build();
}
@Override
protected Settings transportClientSettings() {
return Settings.builder()
.put(super.transportClientSettings())
.put(XPackSettings.SECURITY_ENABLED.getKey(), false)
.build();
}
public void testMonitoringEnabled() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), true)
.build());
assertPluginIsLoaded();
assertServiceIsBound(MonitoringService.class);
}
public void testMonitoringDisabled() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), false)
.build());
assertPluginIsLoaded();
assertServiceIsNotBound(MonitoringService.class);
}
public void testMonitoringEnabledOnTribeNode() {
internalCluster().startNode(Settings.builder()
.put(XPackSettings.MONITORING_ENABLED.getKey(), true)
.put("tribe.name", "t1")
.build());
assertPluginIsLoaded();
assertServiceIsBound(MonitoringService.class);
}
public void testMonitoringDisabledOnTribeNode() {
internalCluster().startNode(Settings.builder().put("tribe.name", "t1").build());
assertPluginIsLoaded();
assertServiceIsNotBound(MonitoringService.class);
}
private void assertPluginIsLoaded() {
NodesInfoResponse response = client().admin().cluster().prepareNodesInfo().setPlugins(true).get();
for (NodeInfo nodeInfo : response.getNodes()) {
assertNotNull(nodeInfo.getPlugins());
boolean found = false;
for (PluginInfo plugin : nodeInfo.getPlugins().getPluginInfos()) {
assertNotNull(plugin);
if (LocalStateMonitoring.class.getName().equals(plugin.getName())) {
found = true;
break;
}
}
assertThat("xpack plugin not found", found, equalTo(true));
}
}
private void assertServiceIsBound(Class<?> klass) {
try {
Object binding = internalCluster().getDataNodeInstance(klass);
assertNotNull(binding);
assertTrue(klass.isInstance(binding));
} catch (Exception e) {
fail("no service bound for class " + klass.getSimpleName());
}
}
private void assertServiceIsNotBound(Class<?> klass) {
try {
internalCluster().getDataNodeInstance(klass);
fail("should have thrown an exception about missing implementation");
} catch (Exception ce) {
assertThat("message contains error about missing implementation: " + ce.getMessage(),
ce.getMessage().contains("Could not find a suitable constructor"), equalTo(true));
}
}
}

@@ -1,47 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.client.Client;
import org.elasticsearch.license.TribeTransportTestCase;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.core.monitoring.action.MonitoringBulkAction;
import org.elasticsearch.xpack.core.monitoring.action.MonitoringBulkRequest;
import java.util.Collections;
import java.util.List;
public class MonitoringTribeTests extends TribeTransportTestCase {
@Override
protected List<String> enabledFeatures() {
return Collections.singletonList(XPackField.MONITORING);
}
@Override
protected void verifyActionOnClientNode(Client client) throws Exception {
assertMonitoringTransportActionsWorks(client);
}
@Override
protected void verifyActionOnMasterNode(Client masterClient) throws Exception {
assertMonitoringTransportActionsWorks(masterClient);
}
@Override
protected void verifyActionOnDataNode(Client dataNodeClient) throws Exception {
assertMonitoringTransportActionsWorks(dataNodeClient);
}
private static void assertMonitoringTransportActionsWorks(Client client) throws Exception {
client.execute(MonitoringBulkAction.INSTANCE, new MonitoringBulkRequest());
}
@Override
protected void verifyActionOnTribeNode(Client tribeClient) {
failAction(tribeClient, MonitoringBulkAction.INSTANCE);
}
}

@@ -1,128 +0,0 @@
import org.elasticsearch.gradle.test.ClusterConfiguration
import org.elasticsearch.gradle.test.ClusterFormationTasks
import org.elasticsearch.gradle.test.NodeInfo
apply plugin: 'elasticsearch.standalone-test'
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.rest-test'
dependencies {
testCompile project(path: ':modules:tribe', configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'runtime')
testCompile project(path: xpackModule('core'), configuration: 'testArtifacts')
testCompile project(path: xpackModule('security'), configuration: 'testArtifacts')
testCompile project(path: ':modules:analysis-common', configuration: 'runtime')
}
namingConventions.skipIntegTestInDisguise = true
compileTestJava.options.compilerArgs << "-Xlint:-try"
String xpackPath = project(xpackModule('core')).projectDir.toPath().resolve('src/test/resources').toString()
sourceSets {
test {
resources {
srcDirs += [xpackPath]
}
}
}
forbiddenPatterns {
exclude '**/*.key'
exclude '**/*.p12'
exclude '**/*.der'
exclude '**/*.zip'
}
task setupClusterOne {}
ClusterConfiguration configOne = new ClusterConfiguration(project)
configOne.clusterName = 'cluster1'
configOne.setting('node.name', 'cluster1-node1')
configOne.setting('xpack.monitoring.enabled', false)
configOne.setting('xpack.ml.enabled', false)
configOne.plugin(xpackProject('plugin').path)
configOne.module(project.project(':modules:analysis-common'))
configOne.setupCommand('setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser')
configOne.waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=1&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
List<NodeInfo> cluster1Nodes = ClusterFormationTasks.setup(project, 'clusterOne', setupClusterOne, configOne)
task setupClusterTwo {}
ClusterConfiguration configTwo = new ClusterConfiguration(project)
configTwo.clusterName = 'cluster2'
configTwo.setting('node.name', 'cluster2-node1')
configTwo.setting('xpack.monitoring.enabled', false)
configTwo.setting('xpack.ml.enabled', false)
configTwo.plugin(xpackProject('plugin').path)
configTwo.module(project.project(':modules:analysis-common'))
configTwo.setupCommand('setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser')
configTwo.waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=1&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
List<NodeInfo> cluster2Nodes = ClusterFormationTasks.setup(project, 'clusterTwo', setupClusterTwo, configTwo)
integTestCluster {
dependsOn setupClusterOne, setupClusterTwo
plugin xpackProject('plugin').path
nodeStartupWaitSeconds 45
setupCommand 'setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser'
setting 'xpack.monitoring.enabled', false
setting 'xpack.ml.enabled', false
setting 'node.name', 'tribe-node'
setting 'tribe.on_conflict', 'prefer_cluster1'
setting 'tribe.cluster1.cluster.name', 'cluster1'
setting 'tribe.cluster1.discovery.zen.ping.unicast.hosts', "'${-> cluster1Nodes.get(0).transportUri()}'"
setting 'tribe.cluster1.http.enabled', 'true'
setting 'tribe.cluster1.xpack.ml.enabled', 'false'
setting 'tribe.cluster2.cluster.name', 'cluster2'
setting 'tribe.cluster2.discovery.zen.ping.unicast.hosts', "'${-> cluster2Nodes.get(0).transportUri()}'"
setting 'tribe.cluster2.http.enabled', 'true'
setting 'tribe.cluster2.xpack.ml.enabled', 'false'
keystoreSetting 'bootstrap.password', 'x-pack-test-password'
waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
// 5 nodes: tribe + clusterOne (1 node + tribe internal node) + clusterTwo (1 node + tribe internal node)
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=5&wait_for_status=yellow&timeout=60s",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
}
test {
/*
* We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
* other if we allow them to set the number of available processors as it's set-once in Netty.
*/
systemProperty 'es.set.netty.runtime.available.processors', 'false'
include '**/*Tests.class'
}
integTestRunner {
systemProperty 'tests.cluster', "${-> cluster1Nodes.get(0).transportUri()}"
systemProperty 'tests.cluster2', "${-> cluster2Nodes.get(0).transportUri()}"
systemProperty 'tests.tribe', "${-> integTest.nodes.get(0).transportUri()}"
finalizedBy 'clusterOne#stop'
finalizedBy 'clusterTwo#stop'
}

@@ -1,233 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.test;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.xpack.core.security.SecurityField;
import org.elasticsearch.xpack.core.security.action.role.GetRolesResponse;
import org.elasticsearch.xpack.core.security.action.role.PutRoleResponse;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.security.Security;
import org.junit.After;
import org.junit.AfterClass;
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_SECURITY_INDEX;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.core.IsCollectionContaining.hasItem;
public class TribeWithSecurityIT extends SecurityIntegTestCase {
private static TestCluster cluster2;
private static TestCluster tribeNode;
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
cluster2 = buildExternalCluster(System.getProperty("tests.cluster2"));
}
if (tribeNode == null) {
tribeNode = buildExternalCluster(System.getProperty("tests.tribe"));
}
}
/**
* TODO: this entire class should be removed. SecurityIntegTestCase is meant for tests, but we run against real xpack
*/
@Override
public void doAssertXPackIsInstalled() {
NodesInfoResponse nodeInfos = client().admin().cluster().prepareNodesInfo().clear().setPlugins(true).get();
for (NodeInfo nodeInfo : nodeInfos.getNodes()) {
// TODO: disable this assertion for now, due to random runs with mock plugins. perhaps run without mock plugins?
// assertThat(nodeInfo.getPlugins().getInfos(), hasSize(2));
Collection<String> pluginNames =
nodeInfo.getPlugins().getPluginInfos().stream().map(p -> p.getClassname()).collect(Collectors.toList());
assertThat("plugin [" + Security.class.getName() + "] not found in [" + pluginNames + "]", pluginNames,
hasItem(Security.class.getName()));
}
}
@AfterClass
public static void tearDownExternalClusters() throws IOException {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
if (tribeNode != null) {
try {
tribeNode.close();
} finally {
tribeNode = null;
}
}
}
@After
public void removeSecurityIndex() {
if (client().admin().indices().prepareExists(INTERNAL_SECURITY_INDEX).get().isExists()) {
client().admin().indices().prepareDelete(INTERNAL_SECURITY_INDEX).get();
}
if (cluster2.client().admin().indices().prepareExists(INTERNAL_SECURITY_INDEX).get().isExists()) {
cluster2.client().admin().indices().prepareDelete(INTERNAL_SECURITY_INDEX).get();
}
securityClient(client()).prepareClearRealmCache().get();
securityClient(cluster2.client()).prepareClearRealmCache().get();
}
@Override
protected Settings externalClusterClientSettings() {
Settings.Builder builder = Settings.builder().put(super.externalClusterClientSettings());
builder.put(NetworkModule.TRANSPORT_TYPE_KEY, SecurityField.NAME4);
return builder.build();
}
private ExternalTestCluster buildExternalCluster(String clusterAddresses) throws IOException {
String[] stringAddresses = clusterAddresses.split(",");
TransportAddress[] transportAddresses = new TransportAddress[stringAddresses.length];
int i = 0;
for (String stringAddress : stringAddresses) {
URL url = new URL("http://" + stringAddress);
InetAddress inetAddress = InetAddress.getByName(url.getHost());
transportAddresses[i++] = new TransportAddress(new InetSocketAddress(inetAddress, url.getPort()));
}
return new ExternalTestCluster(createTempDir(), externalClusterClientSettings(), transportClientPlugins(), transportAddresses);
}
public void testThatTribeCanAuthenticateElasticUser() throws Exception {
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", BOOTSTRAP_PASSWORD)))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testThatTribeCanAuthenticateElasticUserWithChangedPassword() throws Exception {
assertSecurityIndexActive();
securityClient(client()).prepareChangePassword("elastic", "password".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testThatTribeClustersHaveDifferentPasswords() throws Exception {
assertSecurityIndexActive();
assertSecurityIndexActive(cluster2);
securityClient().prepareChangePassword("elastic", "password".toCharArray()).get();
securityClient(cluster2.client()).prepareChangePassword("elastic", "password2".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeNode.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
public void testUserModificationUsingTribeNodeAreDisabled() throws Exception {
SecurityClient securityClient = securityClient(tribeNode.client());
NotSerializableExceptionWrapper e = expectThrows(NotSerializableExceptionWrapper.class,
() -> securityClient.preparePutUser("joe", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class, () -> securityClient.prepareSetEnabled("elastic", randomBoolean()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class,
() -> securityClient.prepareChangePassword("elastic", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(NotSerializableExceptionWrapper.class, () -> securityClient.prepareDeleteUser("joe").get());
assertThat(e.getMessage(), containsString("users may not be deleted using a tribe node"));
}
// note: the tribe node has tribe.on_conflict set to prefer_cluster1
public void testRetrieveRolesOnPreferredClusterOnly() throws Exception {
final int randomRoles = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
assertSecurityIndexActive();
for (int i = 0; i < randomRoles; i++) {
final String rolename = "preferredClusterRole" + i;
PutRoleResponse response = securityClient(client()).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
shouldBeSuccessfulRoles.add(rolename);
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeNode.client());
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
}
private void assertTribeNodeHasAllIndices() throws Exception {
assertBusy(() -> {
Set<String> indices = new HashSet<>();
client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
cluster2.client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
ClusterState state = tribeNode.client().admin().cluster().prepareState().setRoutingTable(true)
.setMetaData(true).get().getState();
StringBuilder sb = new StringBuilder();
for (String index : indices) {
if (sb.length() == 0) {
sb.append("[");
sb.append(index);
} else {
sb.append(",");
sb.append(index);
}
}
sb.append("]");
Set<String> tribeIndices = new HashSet<>();
for (ObjectCursor<IndexMetaData> cursor : state.getMetaData().getIndices().values()) {
tribeIndices.add(cursor.value.getIndex().getName());
}
assertThat("cluster indices [" + indices + "] tribe indices [" + tribeIndices + "]",
state.getMetaData().getIndices().size(), equalTo(indices.size()));
for (String index : indices) {
assertTrue(state.getMetaData().hasIndex(index));
assertTrue(state.getRoutingTable().hasIndex(index));
assertTrue(state.getRoutingTable().index(index).allPrimaryShardsActive());
}
});
}
}

@@ -1,556 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateObserver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.node.MockNode;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.InternalTestCluster;
import org.elasticsearch.test.NativeRealmIntegTestCase;
import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.test.SecuritySettingsSourceField;
import org.elasticsearch.test.discovery.TestZenDiscovery;
import org.elasticsearch.tribe.TribePlugin;
import org.elasticsearch.tribe.TribeService;
import org.elasticsearch.xpack.core.security.SecurityLifecycleServiceField;
import org.elasticsearch.xpack.core.security.action.role.GetRolesResponse;
import org.elasticsearch.xpack.core.security.action.role.PutRoleResponse;
import org.elasticsearch.xpack.core.security.action.user.PutUserResponse;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.function.Function;
import java.util.function.Predicate;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.startsWith;
/**
* Tests security with tribe nodes
*/
public class SecurityTribeTests extends NativeRealmIntegTestCase {
private static final String SECOND_CLUSTER_NODE_PREFIX = "node_cluster2_";
private static InternalTestCluster cluster2;
private static boolean useSSL;
private Node tribeNode;
private Client tribeClient;
@BeforeClass
public static void setupSSL() {
useSSL = randomBoolean();
}
@Override
public void setUp() throws Exception {
super.setUp();
if (cluster2 == null) {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(defaultMaxNumberOfNodes(), useSSL, createTempDir(), Scope.SUITE) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
Settings.Builder builder = Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), true);
if (builder.getSecureSettings() == null) {
builder.setSecureSettings(new MockSecureSettings());
}
((MockSecureSettings) builder.getSecureSettings()).setString("bootstrap.password",
BOOTSTRAP_PASSWORD.toString());
return builder.build();
}
@Override
public Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
};
cluster2 = new InternalTestCluster(randomLong(), createTempDir(), true, true, 1, 2,
UUIDs.randomBase64UUID(random()), cluster2SettingsSource, 0, false, SECOND_CLUSTER_NODE_PREFIX, getMockPlugins(),
getClientWrapper());
cluster2.beforeTest(random(), 0.1);
cluster2.ensureAtLeastNumDataNodes(2);
}
assertSecurityIndexActive(cluster2);
}
@Override
public boolean transportSSLEnabled() {
return useSSL;
}
@AfterClass
public static void tearDownSecondCluster() {
if (cluster2 != null) {
try {
cluster2.close();
} finally {
cluster2 = null;
}
}
}
/**
 * We intentionally do not override {@link ESIntegTestCase#tearDown()} as doing so causes the ensure-cluster-size check to time out
*/
@After
public void tearDownTribeNodeAndWipeCluster() throws Exception {
if (cluster2 != null) {
try {
cluster2.wipe(Collections.singleton(SecurityLifecycleServiceField.SECURITY_TEMPLATE_NAME));
try {
// this is a hack to clean up the .security index since only the XPackSecurity user or superusers can delete it
final Client cluster2Client = cluster2.client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(SecuritySettingsSource.TEST_SUPERUSER,
SecuritySettingsSourceField.TEST_PASSWORD_SECURE_STRING)));
cluster2Client.admin().indices().prepareDelete(IndexLifecycleManager.INTERNAL_SECURITY_INDEX).get();
} catch (IndexNotFoundException e) {
// ignore it since not all tests create this index...
}
// Clear the realm cache for all realms since we use a SUITE scoped cluster
SecurityClient client = securityClient(cluster2.client());
client.prepareClearRealmCache().get();
} finally {
cluster2.afterTest();
}
}
if (tribeNode != null) {
tribeNode.close();
tribeNode = null;
}
}
@Override
protected boolean ignoreExternalCluster() {
return true;
}
@Override
protected boolean shouldSetReservedUserPasswords() {
return false;
}
@Override
protected boolean addTestZenDiscovery() {
return false;
}
public static class TribeAwareTestZenDiscoveryPlugin extends TestZenDiscovery.TestPlugin {
public TribeAwareTestZenDiscoveryPlugin(Settings settings) {
super(settings);
}
@Override
public Settings additionalSettings() {
if (settings.getGroups("tribe", true).isEmpty()) {
return super.additionalSettings();
} else {
return Settings.EMPTY;
}
}
}
public static class MockTribePlugin extends TribePlugin {
public MockTribePlugin(Settings settings) {
super(settings);
}
protected Function<Settings, Node> nodeBuilder(Path configPath) {
return settings -> new MockNode(new Environment(settings, configPath), internalCluster().getPlugins());
}
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(MockTribePlugin.class);
plugins.add(TribeAwareTestZenDiscoveryPlugin.class);
return plugins;
}
private void setupTribeNode(Settings settings) throws Exception {
SecuritySettingsSource cluster2SettingsSource =
new SecuritySettingsSource(1, useSSL, createTempDir(), Scope.TEST) {
@Override
public Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), true)
.build();
}
};
final Settings settingsTemplate = cluster2SettingsSource.nodeSettings(0);
Settings.Builder tribe1Defaults = Settings.builder();
Settings.Builder tribe2Defaults = Settings.builder();
Settings tribeSettings = settingsTemplate.filter(k -> {
if (k.equals(NodeEnvironment.MAX_LOCAL_STORAGE_NODES_SETTING.getKey())) {
return false;
} else if (k.startsWith("path.")) {
return false;
} else if (k.equals("transport.tcp.port")) {
return false;
}
return true;
});
tribe1Defaults.put(tribeSettings, false);
tribe1Defaults.normalizePrefix("tribe.t1.");
tribe2Defaults.put(tribeSettings, false);
tribe2Defaults.normalizePrefix("tribe.t2.");
// TODO: rethink how these settings are generated for tribes once we support more than just string settings...
MockSecureSettings secureSettingsTemplate =
(MockSecureSettings) Settings.builder().put(settingsTemplate).getSecureSettings();
MockSecureSettings secureSettings = new MockSecureSettings();
if (secureSettingsTemplate != null) {
for (String settingName : secureSettingsTemplate.getSettingNames()) {
String settingValue = secureSettingsTemplate.getString(settingName).toString();
secureSettings.setString(settingName, settingValue);
secureSettings.setString("tribe.t1." + settingName, settingValue);
secureSettings.setString("tribe.t2." + settingName, settingValue);
}
}
Settings merged = Settings.builder()
.put(internalCluster().getDefaultSettings())
.put(tribeSettings, false)
.put("tribe.t1.cluster.name", internalCluster().getClusterName())
.put("tribe.t2.cluster.name", cluster2.getClusterName())
.put("tribe.blocks.write", false)
.put("tribe.on_conflict", "prefer_t1")
.put(tribe1Defaults.build())
.put(tribe2Defaults.build())
.put(settings)
.put("node.name", "tribe_node") // make sure we can identify threads from this node
.setSecureSettings(secureSettings)
.build();
final List<Class<? extends Plugin>> classpathPlugins = new ArrayList<>(nodePlugins());
classpathPlugins.addAll(getMockPlugins());
tribeNode = new MockNode(merged, classpathPlugins, cluster2SettingsSource.nodeConfigPath(0)).start();
tribeClient = getClientWrapper().apply(tribeNode.client());
ClusterService tribeClusterService = tribeNode.injector().getInstance(ClusterService.class);
ClusterState clusterState = tribeClusterService.state();
ClusterStateObserver observer = new ClusterStateObserver(clusterState, tribeClusterService, null,
logger, new ThreadContext(settings));
final int cluster1Nodes = internalCluster().size();
final int cluster2Nodes = cluster2.size();
logger.info("waiting for [{}] nodes to be added to the tribe cluster state", cluster1Nodes + cluster2Nodes + 3);
final Predicate<ClusterState> nodeCountPredicate = state -> state.nodes().getSize() == cluster1Nodes + cluster2Nodes + 3;
if (nodeCountPredicate.test(clusterState) == false) {
CountDownLatch latch = new CountDownLatch(1);
observer.waitForNextChange(new ClusterStateObserver.Listener() {
@Override
public void onNewClusterState(ClusterState state) {
latch.countDown();
}
@Override
public void onClusterServiceClose() {
fail("tribe cluster service closed");
latch.countDown();
}
@Override
public void onTimeout(TimeValue timeout) {
fail("timed out waiting for nodes to be added to tribe's cluster state");
latch.countDown();
}
}, nodeCountPredicate);
latch.await();
}
assertTribeNodeHasAllIndices();
}

public void testThatTribeCanAuthenticateElasticUser() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
setupTribeNode(Settings.EMPTY);
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", getReservedPassword())))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}

public void testThatTribeCanAuthenticateElasticUserWithChangedPassword() throws Exception {
InternalTestCluster cluster = randomBoolean() ? internalCluster() : cluster2;
ensureElasticPasswordBootstrapped(cluster);
setupTribeNode(Settings.EMPTY);
securityClient(cluster.client()).prepareChangePassword("elastic", "password".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}

public void testThatTribeClustersHaveDifferentPasswords() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
setupTribeNode(Settings.EMPTY);
securityClient().prepareChangePassword("elastic", "password".toCharArray()).get();
securityClient(cluster2.client()).prepareChangePassword("elastic", "password2".toCharArray()).get();
assertTribeNodeHasAllIndices();
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue("elastic", new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}

public void testUsersInBothTribes() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
final String preferredTribe = randomBoolean() ? "t1" : "t2";
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomUsers = scaledRandomIntBetween(3, 8);
final Client cluster1Client = client();
final Client cluster2Client = cluster2.client();
List<String> shouldBeSuccessfulUsers = new ArrayList<>();
List<String> shouldFailUsers = new ArrayList<>();
final Client preferredClient = "t1".equals(preferredTribe) ? cluster1Client : cluster2Client;
for (int i = 0; i < randomUsers; i++) {
final String username = "user" + i;
Client clusterClient = randomBoolean() ? cluster1Client : cluster2Client;
PutUserResponse response =
securityClient(clusterClient).preparePutUser(username, "password".toCharArray(), "superuser").get();
assertTrue(response.created());
// users created on the preferred cluster should authenticate successfully through the tribe node
if (preferredClient == clusterClient) {
shouldBeSuccessfulUsers.add(username);
} else {
shouldFailUsers.add(username);
}
}
assertTribeNodeHasAllIndices();
for (String username : shouldBeSuccessfulUsers) {
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
for (String username : shouldFailUsers) {
ElasticsearchSecurityException e = expectThrows(ElasticsearchSecurityException.class, () ->
tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get());
assertThat(e.getMessage(), containsString("authenticate"));
}
}

public void testUsersInNonPreferredClusterOnly() throws Exception {
final String preferredTribe = randomBoolean() ? "t1" : "t2";
// only create users on the non-preferred cluster
final InternalTestCluster nonPreferredCluster = "t1".equals(preferredTribe) ? cluster2 : internalCluster();
ensureElasticPasswordBootstrapped(nonPreferredCluster);
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomUsers = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulUsers = new ArrayList<>();
for (int i = 0; i < randomUsers; i++) {
final String username = "user" + i;
PutUserResponse response =
securityClient(nonPreferredCluster.client()).preparePutUser(username, "password".toCharArray(), "superuser").get();
assertTrue(response.created());
shouldBeSuccessfulUsers.add(username);
}
assertTribeNodeHasAllIndices();
for (String username : shouldBeSuccessfulUsers) {
ClusterHealthResponse response = tribeClient.filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(username, new SecureString("password".toCharArray()))))
.admin().cluster().prepareHealth().get();
assertNoTimeout(response);
}
}

// bootstrap the reserved elastic user's password via the REST API so subsequent requests can authenticate
private void ensureElasticPasswordBootstrapped(InternalTestCluster cluster) {
NodesInfoResponse nodesInfoResponse = cluster.client().admin().cluster().prepareNodesInfo().get();
assertFalse(nodesInfoResponse.hasFailures());
try (RestClient restClient = createRestClient(nodesInfoResponse.getNodes(), null, "http")) {
setupReservedPasswords(restClient);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}

public void testUserModificationUsingTribeNodeAreDisabled() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
setupTribeNode(Settings.EMPTY);
SecurityClient securityClient = securityClient(tribeClient);
UnsupportedOperationException e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.preparePutUser("joe", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareSetEnabled("elastic", randomBoolean()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.prepareChangePassword("elastic", "password".toCharArray()).get());
assertThat(e.getMessage(), containsString("users may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareDeleteUser("joe").get());
assertThat(e.getMessage(), containsString("users may not be deleted using a tribe node"));
}

public void testRetrieveRolesOnTribeNode() throws Exception {
ensureElasticPasswordBootstrapped(internalCluster());
ensureElasticPasswordBootstrapped(cluster2);
final String preferredTribe = randomBoolean() ? "t1" : "t2";
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomRoles = scaledRandomIntBetween(3, 8);
final Client cluster1Client = client();
final Client cluster2Client = cluster2.client();
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
List<String> shouldFailRoles = new ArrayList<>();
final Client preferredClient = "t1".equals(preferredTribe) ? cluster1Client : cluster2Client;
for (int i = 0; i < randomRoles; i++) {
final String rolename = "role" + i;
Client clusterClient = randomBoolean() ? cluster1Client : cluster2Client;
PutRoleResponse response = securityClient(clusterClient).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
// roles created on the preferred cluster should be retrievable through the tribe node
if (preferredClient == clusterClient) {
shouldBeSuccessfulRoles.add(rolename);
} else {
shouldFailRoles.add(rolename);
}
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeClient);
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
for (String rolename : shouldFailRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertFalse(response.hasRoles());
}
}

public void testRetrieveRolesOnNonPreferredClusterOnly() throws Exception {
final String preferredTribe = randomBoolean() ? "t1" : "t2";
final InternalTestCluster nonPreferredCluster = "t1".equals(preferredTribe) ? cluster2 : internalCluster();
ensureElasticPasswordBootstrapped(nonPreferredCluster);
setupTribeNode(Settings.builder().put("tribe.on_conflict", "prefer_" + preferredTribe).build());
final int randomRoles = scaledRandomIntBetween(3, 8);
List<String> shouldBeSuccessfulRoles = new ArrayList<>();
Client nonPreferredClient = nonPreferredCluster.client();
for (int i = 0; i < randomRoles; i++) {
final String rolename = "role" + i;
PutRoleResponse response = securityClient(nonPreferredClient).preparePutRole(rolename).cluster("monitor").get();
assertTrue(response.isCreated());
shouldBeSuccessfulRoles.add(rolename);
}
assertTribeNodeHasAllIndices();
SecurityClient securityClient = securityClient(tribeClient);
for (String rolename : shouldBeSuccessfulRoles) {
GetRolesResponse response = securityClient.prepareGetRoles(rolename).get();
assertTrue(response.hasRoles());
assertEquals(1, response.roles().length);
assertThat(response.roles()[0].getClusterPrivileges(), arrayContaining("monitor"));
}
}

public void testRoleModificationUsingTribeNodeAreDisabled() throws Exception {
setupTribeNode(Settings.EMPTY);
SecurityClient securityClient = securityClient(tribeClient); // tribeClient is already wrapped in setupTribeNode
UnsupportedOperationException e = expectThrows(UnsupportedOperationException.class,
() -> securityClient.preparePutRole("role").cluster("all").get());
assertThat(e.getMessage(), containsString("roles may not be created or modified using a tribe node"));
e = expectThrows(UnsupportedOperationException.class, () -> securityClient.prepareDeleteRole("role").get());
assertThat(e.getMessage(), containsString("roles may not be deleted using a tribe node"));
}

public void testTribeSettingNames() throws Exception {
TribeService.TRIBE_SETTING_KEYS
.forEach(s -> assertThat("a new setting has been introduced for tribe that security needs to know about in Security.java",
s, anyOf(startsWith("tribe.blocks"), startsWith("tribe.name"), startsWith("tribe.on_conflict"))));
}

// the tribe node's merged cluster state should eventually contain every index from both clusters
private void assertTribeNodeHasAllIndices() throws Exception {
assertBusy(() -> {
Set<String> indices = new HashSet<>();
client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
cluster2.client().admin().cluster().prepareState().setMetaData(true).get()
.getState().getMetaData().getIndices().keysIt().forEachRemaining(indices::add);
ClusterState state = tribeClient.admin().cluster().prepareState().setRoutingTable(true).setMetaData(true).get().getState();
assertThat(state.getMetaData().getIndices().size(), equalTo(indices.size()));
for (String index : indices) {
assertTrue(state.getMetaData().hasIndex(index));
assertTrue(state.getRoutingTable().hasIndex(index));
assertTrue(state.getRoutingTable().index(index).allPrimaryShardsActive());
}
});
}
}