Added document and field level security

This commit adds document and field level security to Shield.

Field level security can be enabled by adding the `fields` option to a role in the `roles.yml` file.

For example:

```yaml
customer_care:
  indices:
    '*':
      privileges: read
      fields:
        - issue_id
        - description
        - customer_handle
        - customer_email
        - customer_address
        - customer_phone
```

The `fields` list is an inclusive list that controls which fields are accessible for that role. By default all meta fields (`_uid`, `_type`, `_source`, `_ttl`, etc.) are also included; otherwise ES or specific features would stop working. The `_all` field, if configured, isn't included by default, since it actually contains data from all the other fields. If the `_all` field is required, it needs to be added to the `fields` list of the role. For the content of the `_source` field and the `_field_names` field, special filtering is in place so that only the content relevant for the role is returned.
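
For example, a role that needs the `_all` field simply lists it together with the regular fields (a sketch; the role name and other field names are illustrative):

```yaml
support:
  indices:
    '*':
      privileges: read
      fields:
        - _all
        - issue_id
        - description
```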

If no `fields` option is specified, field level security is disabled for that role and all fields in an index are accessible.

Field level security can be set up per index group.
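
For example, a single role can define a different `fields` list per index group (a sketch; the index patterns and field names are illustrative):

```yaml
customer_care:
  indices:
    'issues-*':
      privileges: read
      fields:
        - issue_id
        - description
    'customers-*':
      privileges: read
      fields:
        - customer_handle
        - customer_email
```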

Field level security is implemented at the Lucene level by wrapping a directory index reader that hides away the fields that aren't in the `fields` list defined for the role of the current user. It is as if the other fields never existed. Enabling field level security has the following consequences:

* Any `realtime` read operation from the translog is disabled. Instead these operations fall back to the Lucene index, which makes them compatible with field level security, but they aren't realtime.
* If a user with role A executes a query first and the result gets cached, and a user with role B then executes the same query, the results from the query executed with role A would be returned. This is bad, and therefore the query cache is disabled.
* For the same reason the request cache is also disabled.
* The update API is blocked. An update request needs to be executed via a role that doesn't have field level security enabled.
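
The caching problem in the second point can be illustrated with a minimal sketch (hypothetical code, not Shield's implementation; it shows how a cache keyed only by the query, ignoring the caller's role, leaks results between roles):

```python
# Hypothetical sketch: per-role visible documents and a naive shared cache.
results_by_role = {
    "role_a": ["doc1", "doc2"],  # role A is allowed to see both documents
    "role_b": ["doc1"],          # role B is only allowed to see doc1
}

cache = {}

def search(role, query):
    # Bug being illustrated: the cache key does not include the role.
    if query not in cache:
        cache[query] = results_by_role[role]
    return cache[query]

print(search("role_a", "match_all"))  # ['doc1', 'doc2']
print(search("role_b", "match_all"))  # ['doc1', 'doc2'] -- role A's results leak to role B
```

Keying the cache on the role as well (or disabling it, as Shield does) avoids the leak.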

Document level security can be enabled by adding the `query` option to a role in the `roles.yml` file:
```yaml
customer_care:
  indices:
    '*':
      privileges: read
      query:
        term:
         department_id: 12
```

Document level security is implemented as a filter that filters out documents that don't match the role query. This is like index aliases, but better, because the role query is embedded at the lowest level possible in ES (Engine level), so in all places that acquire an IndexSearcher the role query will always be included, while alias filters are applied at a higher level (after the searcher has been acquired).

Document level security can be set up per index group.
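
For example, a single role can define a different role query per index group (a sketch; the index patterns, field names and values are illustrative):

```yaml
customer_care:
  indices:
    'issues-*':
      privileges: read
      query:
        term:
          department_id: 12
    'archive-*':
      privileges: read
      query:
        term:
          visibility: public
```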

Right now, like alias filters, document level security isn't applied on all APIs, for example the get API and the term vector API, which also ignore the alias filter. These APIs do acquire an IndexSearcher but don't use the IndexSearcher itself; they directly use the index reader to access the inverted index, thereby bypassing the role query. If these APIs need document level security too, the implementation of document level security needs to change.

Closes elastic/elasticsearch#341

Original commit: elastic/x-pack-elasticsearch@fac085dca6
This commit is contained in:
Martijn van Groningen 2015-08-27 17:53:10 +02:00
parent 64bbc110ff
commit 5f01f793d5
39 changed files with 3949 additions and 80 deletions


@ -131,4 +131,6 @@ configured roles.
include::granting-alias-privileges.asciidoc[]
include::mapping-roles.asciidoc[]
include::mapping-roles.asciidoc[]
include::setting-up-field-and-document-level-security.asciidoc[]


@ -0,0 +1,105 @@
[[setting-up-field-and-document-level-security]]
=== Setting Up Field and Document Level Security
You can control access to data within an index by adding field and document level security permissions to a role.
Field level security permissions restrict access to particular fields within a document.
Document level security permissions restrict access to particular documents within an index.
Field and document level permissions are specified separately, but a role can define both field and document level permissions.
Field and document level security permissions can be configured on a per-index basis.
==== Field Level Security
To enable field level security, you specify the fields that each role can access in the `roles.yml` file.
You list the allowed fields with the `fields` option. Fields are associated with a particular index or index pattern and
operate in conjunction with the privileges specified for the indices.
[source,yaml]
--------------------------------------------------
<role_name>:
indices:
<index_permission_expression>:
privileges: <privileges>
fields:
- <allowed_field_1>
- <allowed_field_2>
- <allowed_field_N>
--------------------------------------------------
To allow access to the `_all` meta field, you must explicitly list it as an allowed field. Access to the following meta fields
is always allowed: _id, _type, _parent, _routing, _timestamp, _ttl, _size and _index. If you specify an empty list of fields,
only these meta fields are accessible.
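For example, the following role (the role name is illustrative) allows searching any index but exposes only the always-allowed meta fields:
[source,yaml]
--------------------------------------------------
metadata_only:
  indices:
    '*':
      privileges: read
      fields: []
--------------------------------------------------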
NOTE: Omitting the fields entry entirely disables field-level security.
For example, the following `customer_care` role grants read access to six fields in any index:
[source,yaml]
--------------------------------------------------
customer_care:
indices:
'*':
privileges: read
fields:
- issue_id
- description
- customer_handle
- customer_email
- customer_address
- customer_phone
--------------------------------------------------
===== Limitations
When field level security is enabled for an index:
* The get, multi get, termsvector and multi termsvector APIs aren't executed in real time. The realtime option for these APIs is forcefully set to false.
* The query cache and the request cache are disabled for search requests.
* The update API is blocked. An update request needs to be executed via a role that doesn't have field level security enabled.
==== Document Level Security
Enabling document level security restricts which documents can be accessed from any Elasticsearch query API.
To enable document level security, you use a query to specify the documents that each role can access in the `roles.yml` file.
You specify the document query with the `query` option. The document query is associated with a particular index or index pattern and
operates in conjunction with the privileges specified for the indices.
[source,yaml]
--------------------------------------------------
<role_name>:
indices:
<index_permission_expression>:
privileges: <privileges>
query:
<query>
--------------------------------------------------
NOTE: Omitting the `query` entry entirely disables document-level security.
The `query` follows the same format as a query defined in the request body of a search request, but here it is
specified in YAML. Any query from the query DSL can be used in the `query` entry.
For example, the following `customer_care` role grants read access to all indices, but restricts access to documents whose `department_id` equals `12`.
[source,yaml]
--------------------------------------------------
customer_care:
indices:
'*':
privileges: read
query:
term:
department_id: 12
--------------------------------------------------
Alternatively, the query can be defined in JSON as a string. This makes it easier to reuse queries that have already
been defined in the JSON body of a search request elsewhere.
[source,yaml]
--------------------------------------------------
customer_care:
indices:
'*':
privileges: read
query: '{"term" : {"field2" : "value2"}}'
--------------------------------------------------


@ -16,6 +16,8 @@ import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.http.HttpServerModule;
import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.rest.RestModule;
import org.elasticsearch.shield.action.ShieldActionFilter;
import org.elasticsearch.shield.action.ShieldActionModule;
@ -24,11 +26,12 @@ import org.elasticsearch.shield.action.authc.cache.TransportClearRealmCacheActio
import org.elasticsearch.shield.audit.AuditTrailModule;
import org.elasticsearch.shield.audit.index.IndexAuditUserHolder;
import org.elasticsearch.shield.authc.AuthenticationModule;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.shield.authc.Realms;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.shield.authc.support.UsernamePasswordToken;
import org.elasticsearch.shield.authz.AuthorizationModule;
import org.elasticsearch.shield.authz.accesscontrol.AccessControlShardModule;
import org.elasticsearch.shield.authz.accesscontrol.OptOutQueryCache;
import org.elasticsearch.shield.authz.store.FileRolesStore;
import org.elasticsearch.shield.crypto.CryptoModule;
import org.elasticsearch.shield.crypto.InternalCryptoService;
@ -61,6 +64,8 @@ public class ShieldPlugin extends Plugin {
public static final String ENABLED_SETTING_NAME = NAME + ".enabled";
public static final String OPT_OUT_QUERY_CACHE = "opt_out_cache";
private final Settings settings;
private final boolean enabled;
private final boolean clientMode;
@ -69,6 +74,9 @@ public class ShieldPlugin extends Plugin {
this.settings = settings;
this.enabled = shieldEnabled(settings);
this.clientMode = clientMode(settings);
if (enabled && clientMode == false) {
failIfShieldQueryCacheIsNotActive(settings, true);
}
}
@Override
@ -87,20 +95,38 @@ public class ShieldPlugin extends Plugin {
return Collections.<Module>singletonList(new ShieldDisabledModule(settings));
} else if (clientMode) {
return Arrays.<Module>asList(
new ShieldTransportModule(settings),
new SSLModule(settings));
new ShieldTransportModule(settings),
new SSLModule(settings));
} else {
return Arrays.<Module>asList(
new ShieldModule(settings),
new LicenseModule(settings),
new CryptoModule(settings),
new AuthenticationModule(settings),
new AuthorizationModule(settings),
new AuditTrailModule(settings),
new ShieldRestModule(settings),
new ShieldActionModule(settings),
new ShieldTransportModule(settings),
new SSLModule(settings));
new ShieldModule(settings),
new LicenseModule(settings),
new CryptoModule(settings),
new AuthenticationModule(settings),
new AuthorizationModule(settings),
new AuditTrailModule(settings),
new ShieldRestModule(settings),
new ShieldActionModule(settings),
new ShieldTransportModule(settings),
new SSLModule(settings));
}
}
@Override
public Collection<Module> indexModules(Settings settings) {
if (enabled && clientMode == false) {
failIfShieldQueryCacheIsNotActive(settings, false);
}
return ImmutableList.of();
}
@Override
public Collection<Module> shardModules(Settings settings) {
if (enabled && clientMode == false) {
failIfShieldQueryCacheIsNotActive(settings, false);
return ImmutableList.<Module>of(new AccessControlShardModule(settings));
} else {
return ImmutableList.of();
}
}
@ -122,6 +148,7 @@ public class ShieldPlugin extends Plugin {
Settings.Builder settingsBuilder = Settings.settingsBuilder();
addUserSettings(settingsBuilder);
addTribeSettings(settingsBuilder);
addQueryCacheSettings(settingsBuilder);
return settingsBuilder.build();
}
@ -178,6 +205,12 @@ public class ShieldPlugin extends Plugin {
}
}
public void onModule(IndexCacheModule module) {
if (enabled && clientMode == false) {
module.registerQueryCache(OPT_OUT_QUERY_CACHE, OptOutQueryCache.class);
}
}
private void addUserSettings(Settings.Builder settingsBuilder) {
String authHeaderSettingName = Headers.PREFIX + "." + UsernamePasswordToken.BASIC_AUTH_HEADER;
if (settings.get(authHeaderSettingName) != null) {
@ -231,6 +264,16 @@ public class ShieldPlugin extends Plugin {
}
}
/*
We need to forcefully overwrite the query cache implementation with Shield's opt-out query cache implementation.
This implementation disables the query cache if field level security is used for a particular request. If we didn't
forcefully overwrite the query cache implementation, we would leave the system vulnerable to leaking data to
unauthorized users.
*/
private void addQueryCacheSettings(Settings.Builder settingsBuilder) {
settingsBuilder.put(IndexCacheModule.QUERY_CACHE_TYPE, OPT_OUT_QUERY_CACHE);
}
private static boolean isShieldMandatory(String[] existingMandatoryPlugins) {
for (String existingMandatoryPlugin : existingMandatoryPlugins) {
if (NAME.equals(existingMandatoryPlugin)) {
@ -255,4 +298,19 @@ public class ShieldPlugin extends Plugin {
public static boolean shieldEnabled(Settings settings) {
return settings.getAsBoolean(ENABLED_SETTING_NAME, true);
}
private void failIfShieldQueryCacheIsNotActive(Settings settings, boolean nodeSettings) {
String queryCacheImplementation;
if (nodeSettings) {
// in case these are node settings, the plugin's additional settings have not been applied yet,
// so we use 'opt_out_cache' as the default. In that case we only fail if the node settings contain
// another cache impl than 'opt_out_cache'.
queryCacheImplementation = settings.get(IndexCacheModule.QUERY_CACHE_TYPE, OPT_OUT_QUERY_CACHE);
} else {
queryCacheImplementation = settings.get(IndexCacheModule.QUERY_CACHE_TYPE);
}
if (OPT_OUT_QUERY_CACHE.equals(queryCacheImplementation) == false) {
throw new IllegalStateException("shield does not support a user specified query cache. remove the setting [" + IndexCacheModule.QUERY_CACHE_TYPE + "] with value [" + queryCacheImplementation + "]");
}
}
}


@ -19,6 +19,7 @@ import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.plugin.core.LicenseUtils;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.action.interceptor.RequestInterceptor;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AuthenticationService;
import org.elasticsearch.shield.authz.AuthorizationService;
@ -28,8 +29,7 @@ import org.elasticsearch.shield.license.LicenseEventsNotifier;
import org.elasticsearch.shield.license.LicenseService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.*;
import static org.elasticsearch.shield.support.Exceptions.authorizationError;
@ -45,12 +45,13 @@ public class ShieldActionFilter extends AbstractComponent implements ActionFilte
private final CryptoService cryptoService;
private final AuditTrail auditTrail;
private final ShieldActionMapper actionMapper;
private final Set<RequestInterceptor> requestInterceptors;
private volatile boolean licenseEnabled = true;
@Inject
public ShieldActionFilter(Settings settings, AuthenticationService authcService, AuthorizationService authzService, CryptoService cryptoService,
AuditTrail auditTrail, LicenseEventsNotifier licenseEventsNotifier, ShieldActionMapper actionMapper) {
AuditTrail auditTrail, LicenseEventsNotifier licenseEventsNotifier, ShieldActionMapper actionMapper, Set<RequestInterceptor> requestInterceptors) {
super(settings);
this.authcService = authcService;
this.authzService = authzService;
@ -68,6 +69,7 @@ public class ShieldActionFilter extends AbstractComponent implements ActionFilte
licenseEnabled = false;
}
});
this.requestInterceptors = requestInterceptors;
}
@Override
@ -100,6 +102,12 @@ public class ShieldActionFilter extends AbstractComponent implements ActionFilte
User user = authcService.authenticate(shieldAction, request, User.SYSTEM);
authzService.authorize(user, shieldAction, request);
request = unsign(user, shieldAction, request);
for (RequestInterceptor interceptor : requestInterceptors) {
if (interceptor.supports(request)) {
interceptor.intercept(request, user);
}
}
chain.proceed(action, request, new SigningListener(this, listener));
} catch (Throwable t) {
listener.onFailure(t);


@ -5,7 +5,12 @@
*/
package org.elasticsearch.shield.action;
import org.elasticsearch.common.inject.multibindings.Multibinder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.action.interceptor.RealtimeRequestInterceptor;
import org.elasticsearch.shield.action.interceptor.RequestInterceptor;
import org.elasticsearch.shield.action.interceptor.SearchRequestInterceptor;
import org.elasticsearch.shield.action.interceptor.UpdateRequestInterceptor;
import org.elasticsearch.shield.support.AbstractShieldModule;
public class ShieldActionModule extends AbstractShieldModule.Node {
@ -19,5 +24,10 @@ public class ShieldActionModule extends AbstractShieldModule.Node {
bind(ShieldActionMapper.class).asEagerSingleton();
// we need to ensure that there's only a single instance of this filter.
bind(ShieldActionFilter.class).asEagerSingleton();
Multibinder<RequestInterceptor> multibinder
= Multibinder.newSetBinder(binder(), RequestInterceptor.class);
multibinder.addBinding().to(RealtimeRequestInterceptor.class);
multibinder.addBinding().to(SearchRequestInterceptor.class);
multibinder.addBinding().to(UpdateRequestInterceptor.class);
}
}


@ -0,0 +1,56 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.interceptor;
import org.elasticsearch.action.CompositeIndicesRequest;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.authz.accesscontrol.IndicesAccessControl;
import org.elasticsearch.shield.authz.InternalAuthorizationService;
import org.elasticsearch.transport.TransportRequest;
import java.util.Collections;
import java.util.List;
/**
* Base class for interceptors that disable features when field level security is configured for the indices a request
* is going to execute on.
*/
public abstract class FieldSecurityRequestInterceptor<Request> extends AbstractComponent implements RequestInterceptor<Request> {
public FieldSecurityRequestInterceptor(Settings settings) {
super(settings);
}
public void intercept(Request request, User user) {
List<? extends IndicesRequest> indicesRequests;
if (request instanceof CompositeIndicesRequest) {
indicesRequests = ((CompositeIndicesRequest) request).subRequests();
} else if (request instanceof IndicesRequest) {
indicesRequests = Collections.singletonList((IndicesRequest) request);
} else {
return;
}
IndicesAccessControl indicesAccessControl = ((TransportRequest) request).getFromContext(InternalAuthorizationService.INDICES_PERMISSIONS_KEY);
for (IndicesRequest indicesRequest : indicesRequests) {
for (String index : indicesRequest.indices()) {
IndicesAccessControl.IndexAccessControl indexAccessControl = indicesAccessControl.getIndexPermissions(index);
if (indexAccessControl != null && indexAccessControl.getFields() != null) {
logger.debug("intercepted request for index [{}] with field level security enabled, disabling features", index);
disableFeatures(request);
return;
} else {
logger.trace("intercepted request for index [{}] with field level security not enabled, doing nothing", index);
}
}
}
}
protected abstract void disableFeatures(Request request);
}


@ -0,0 +1,33 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.interceptor;
import org.elasticsearch.action.RealtimeRequest;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.transport.TransportRequest;
/**
* If field level security is enabled this interceptor disables the realtime feature of get, multi get, termsvector and
* multi termsvector requests.
*/
public class RealtimeRequestInterceptor extends FieldSecurityRequestInterceptor<RealtimeRequest> {
@Inject
public RealtimeRequestInterceptor(Settings settings) {
super(settings);
}
@Override
public void disableFeatures(RealtimeRequest request) {
request.realtime(false);
}
@Override
public boolean supports(TransportRequest request) {
return request instanceof RealtimeRequest;
}
}


@ -0,0 +1,27 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.interceptor;
import org.elasticsearch.shield.User;
import org.elasticsearch.transport.TransportRequest;
/**
* A request interceptor can introspect a request and modify it.
*/
public interface RequestInterceptor<Request> {
/**
* If {@link #supports(TransportRequest)} returns <code>true</code> this interceptor will introspect the request
* and potentially modify it.
*/
void intercept(Request request, User user);
/**
* Returns whether this request interceptor should intercept the specified request.
*/
boolean supports(TransportRequest request);
}


@ -0,0 +1,32 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.interceptor;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.transport.TransportRequest;
/**
* If field level security is enabled this interceptor disables the request cache for search requests.
*/
public class SearchRequestInterceptor extends FieldSecurityRequestInterceptor<SearchRequest> {
@Inject
public SearchRequestInterceptor(Settings settings) {
super(settings);
}
@Override
public void disableFeatures(SearchRequest request) {
request.requestCache(false);
}
@Override
public boolean supports(TransportRequest request) {
return request instanceof SearchRequest;
}
}


@ -0,0 +1,38 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.interceptor;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.transport.TransportRequest;
/**
* A request interceptor that fails update requests if field level security is enabled.
*
It could be dangerous if documents were to be updated via a role that has field level security enabled, because only
the fields that the role can see would be used to perform the update, and the user may unknowingly remove the fields
that aren't visible to him from the document being updated.
*/
public class UpdateRequestInterceptor extends FieldSecurityRequestInterceptor<UpdateRequest> {
@Inject
public UpdateRequestInterceptor(Settings settings) {
super(settings);
}
@Override
protected void disableFeatures(UpdateRequest updateRequest) {
throw new ElasticsearchSecurityException("Can't execute an update request if field level security is enabled", RestStatus.BAD_REQUEST);
}
@Override
public boolean supports(TransportRequest request) {
return request instanceof UpdateRequest;
}
}


@ -33,7 +33,7 @@ public class InternalAuthenticationService extends AbstractComponent implements
public static final String SETTING_SIGN_USER_HEADER = "shield.authc.sign_user_header";
static final String TOKEN_KEY = "_shield_token";
static final String USER_KEY = "_shield_user";
public static final String USER_KEY = "_shield_user";
private final Realms realms;
private final AuditTrail auditTrail;


@ -17,6 +17,7 @@ import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.search.ClearScrollAction;
import org.elasticsearch.action.search.SearchScrollAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.AliasOrIndex;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.component.AbstractComponent;
@ -27,15 +28,15 @@ import org.elasticsearch.shield.User;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AnonymousService;
import org.elasticsearch.shield.authc.AuthenticationFailureHandler;
import org.elasticsearch.shield.authz.indicesresolver.DefaultIndicesResolver;
import org.elasticsearch.shield.authz.indicesresolver.IndicesResolver;
import org.elasticsearch.shield.authz.accesscontrol.IndicesAccessControl;
import org.elasticsearch.shield.authz.indicesresolver.DefaultIndicesAndAliasesResolver;
import org.elasticsearch.shield.authz.indicesresolver.IndicesAndAliasesResolver;
import org.elasticsearch.shield.authz.store.RolesStore;
import org.elasticsearch.transport.TransportRequest;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.shield.support.Exceptions.authenticationError;
import static org.elasticsearch.shield.support.Exceptions.authorizationError;
/**
@ -43,10 +44,12 @@ import static org.elasticsearch.shield.support.Exceptions.authorizationError;
*/
public class InternalAuthorizationService extends AbstractComponent implements AuthorizationService {
public static final String INDICES_PERMISSIONS_KEY = "_indices_permissions";
private final ClusterService clusterService;
private final RolesStore rolesStore;
private final AuditTrail auditTrail;
private final IndicesResolver[] indicesResolvers;
private final IndicesAndAliasesResolver[] indicesAndAliasesResolvers;
private final AnonymousService anonymousService;
private final AuthenticationFailureHandler authcFailureHandler;
@ -57,8 +60,8 @@ public class InternalAuthorizationService extends AbstractComponent implements A
this.rolesStore = rolesStore;
this.clusterService = clusterService;
this.auditTrail = auditTrail;
this.indicesResolvers = new IndicesResolver[] {
new DefaultIndicesResolver(this)
this.indicesAndAliasesResolvers = new IndicesAndAliasesResolver[]{
new DefaultIndicesAndAliasesResolver(this)
};
this.anonymousService = anonymousService;
this.authcFailureHandler = authcFailureHandler;
@ -97,6 +100,7 @@ public class InternalAuthorizationService extends AbstractComponent implements A
// first we need to check if the user is the system. If it is, we'll just authorize the system access
if (user.isSystem()) {
if (SystemRole.INSTANCE.check(action)) {
request.putInContext(INDICES_PERMISSIONS_KEY, IndicesAccessControl.ALLOW_ALL);
grant(user, action, request);
return;
}
@ -116,6 +120,7 @@ public class InternalAuthorizationService extends AbstractComponent implements A
if (Privilege.Cluster.ACTION_MATCHER.apply(action)) {
Permission.Cluster cluster = permission.cluster();
if (cluster != null && cluster.check(action)) {
request.putInContext(INDICES_PERMISSIONS_KEY, IndicesAccessControl.ALLOW_ALL);
grant(user, action, request);
return;
}
@ -149,11 +154,15 @@ public class InternalAuthorizationService extends AbstractComponent implements A
throw denial(user, action, request);
}
Set<String> indexNames = resolveIndices(user, action, request);
ClusterState clusterState = clusterService.state();
Set<String> indexNames = resolveIndices(user, action, request, clusterState);
assert !indexNames.isEmpty() : "every indices request needs to have its indices set thus the resolved indices must not be empty";
if (!authorizeIndices(action, indexNames, permission.indices())) {
MetaData metaData = clusterState.metaData();
IndicesAccessControl indicesAccessControl = permission.authorize(action, indexNames, metaData);
if (!indicesAccessControl.isGranted()) {
throw denial(user, action, request);
} else {
request.putInContext(INDICES_PERMISSIONS_KEY, indicesAccessControl);
}
//if we are creating an index we need to authorize potential aliases created at the same time
@ -165,33 +174,18 @@ public class InternalAuthorizationService extends AbstractComponent implements A
for (Alias alias : aliases) {
aliasesAndIndices.add(alias.name());
}
if (!authorizeIndices("indices:admin/aliases", aliasesAndIndices, permission.indices())) {
indicesAccessControl = permission.authorize("indices:admin/aliases", aliasesAndIndices, metaData);
if (!indicesAccessControl.isGranted()) {
throw denial(user, "indices:admin/aliases", request);
}
// no need to re-add the indicesAccessControl in the context,
// because the create index call doesn't do anything FLS or DLS
}
}
grant(user, action, request);
}
private boolean authorizeIndices(String action, Set<String> requestIndices, Permission.Indices permission) {
// now... every index that is associated with the request, must be granted
// by at least one indices permission group
for (String index : requestIndices) {
boolean granted = false;
for (Permission.Indices.Group group : permission) {
if (group.check(action, index)) {
granted = true;
break;
}
}
if (!granted) {
return false;
}
}
return true;
}
private Permission.Global permission(User user) {
String[] roleNames = user.roles();
if (roleNames.length == 0) {
@ -215,9 +209,9 @@ public class InternalAuthorizationService extends AbstractComponent implements A
return roles.build();
}
private Set<String> resolveIndices(User user, String action, TransportRequest request) {
MetaData metaData = clusterService.state().metaData();
for (IndicesResolver resolver : indicesResolvers) {
private Set<String> resolveIndices(User user, String action, TransportRequest request, ClusterState clusterState) {
MetaData metaData = clusterState.metaData();
for (IndicesAndAliasesResolver resolver : indicesAndAliasesResolvers) {
if (resolver.requestType().isInstance(request)) {
return resolver.resolve(user, action, request, metaData);
}


@ -9,14 +9,17 @@ import com.google.common.base.Predicate;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterators;
import com.google.common.collect.UnmodifiableIterator;
import com.google.common.collect.*;
import org.elasticsearch.cluster.metadata.AliasOrIndex;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.shield.authz.accesscontrol.IndicesAccessControl;
import org.elasticsearch.shield.support.AutomatonPredicate;
import org.elasticsearch.shield.support.Automatons;
import java.util.Collections;
import java.util.Iterator;
import java.util.*;
/**
* Represents a permission in the system. There are 3 types of permissions:
@ -63,6 +66,27 @@ public interface Permission {
return (cluster == null || cluster.isEmpty()) && (indices == null || indices.isEmpty());
}
/**
* Returns whether at least one group encapsulated by these indices permissions is authorized to execute the
* specified action with the requested indices/aliases. At the same time, if field and/or document level security
* is configured for any group, the allowed fields and role queries are also resolved.
*/
public IndicesAccessControl authorize(String action, Set<String> requestedIndicesOrAliases, MetaData metaData) {
ImmutableMap<String, IndicesAccessControl.IndexAccessControl> indexPermissions = indices.authorize(
action, requestedIndicesOrAliases, metaData
);
// At least one role / indices permission set needs to match all of the requested indices/aliases:
boolean granted = true;
for (Map.Entry<String, IndicesAccessControl.IndexAccessControl> entry : indexPermissions.entrySet()) {
if (!entry.getValue().isGranted()) {
granted = false;
break;
}
}
return new IndicesAccessControl(granted, indexPermissions);
}
public static class Role extends Global {
private final String name;
@@ -106,7 +130,12 @@ public interface Permission {
}
public Builder add(Privilege.Index privilege, String... indices) {
groups.add(new Indices.Group(privilege, null, null, indices));
return this;
}
public Builder add(List<String> fields, BytesReference query, Privilege.Index privilege, String... indices) {
groups.add(new Indices.Group(privilege, fields, query, indices));
return this;
}
@@ -226,6 +255,8 @@ public interface Permission {
static interface Indices extends Permission, Iterable<Indices.Group> {
ImmutableMap<String, IndicesAccessControl.IndexAccessControl> authorize(String action, Set<String> requestedIndicesOrAliases, MetaData metaData);
public static class Core implements Indices {
public static final Core NONE = new Core() {
@@ -281,6 +312,82 @@ public interface Permission {
public Predicate<String> allowedIndicesMatcher(String action) {
return allowedIndicesMatchersForAction.getUnchecked(action);
}
@Override
public ImmutableMap<String, IndicesAccessControl.IndexAccessControl> authorize(String action, Set<String> requestedIndicesOrAliases, MetaData metaData) {
// now... every index that is associated with the request, must be granted
// by at least one indices permission group
SortedMap<String, AliasOrIndex> allAliasesAndIndices = metaData.getAliasAndIndexLookup();
Map<String, ImmutableSet.Builder<String>> fieldsBuilder = new HashMap<>();
Map<String, ImmutableSet.Builder<BytesReference>> queryBuilder = new HashMap<>();
Map<String, Boolean> grantedBuilder = new HashMap<>();
for (String indexOrAlias : requestedIndicesOrAliases) {
boolean granted = false;
Set<String> concreteIndices = new HashSet<>();
AliasOrIndex aliasOrIndex = allAliasesAndIndices.get(indexOrAlias);
if (aliasOrIndex != null) {
for (IndexMetaData indexMetaData : aliasOrIndex.getIndices()) {
concreteIndices.add(indexMetaData.getIndex());
}
}
for (Permission.Indices.Group group : groups) {
if (group.check(action, indexOrAlias)) {
granted = true;
for (String index : concreteIndices) {
if (group.getFields() != null) {
ImmutableSet.Builder<String> roleFieldsBuilder = fieldsBuilder.get(index);
if (roleFieldsBuilder == null) {
roleFieldsBuilder = ImmutableSet.builder();
fieldsBuilder.put(index, roleFieldsBuilder);
}
roleFieldsBuilder.addAll(group.getFields());
}
if (group.getQuery() != null) {
ImmutableSet.Builder<BytesReference> roleQueriesBuilder = queryBuilder.get(index);
if (roleQueriesBuilder == null) {
roleQueriesBuilder = ImmutableSet.builder();
queryBuilder.put(index, roleQueriesBuilder);
}
roleQueriesBuilder.add(group.getQuery());
}
}
}
}
if (concreteIndices.isEmpty()) {
grantedBuilder.put(indexOrAlias, granted);
} else {
for (String concreteIndex : concreteIndices) {
grantedBuilder.put(concreteIndex, granted);
}
}
}
ImmutableMap.Builder<String, IndicesAccessControl.IndexAccessControl> indexPermissions = ImmutableMap.builder();
for (Map.Entry<String, Boolean> entry : grantedBuilder.entrySet()) {
String index = entry.getKey();
ImmutableSet.Builder<BytesReference> roleQueriesBuilder = queryBuilder.get(index);
ImmutableSet.Builder<String> roleFieldsBuilder = fieldsBuilder.get(index);
final ImmutableSet<String> roleFields;
if (roleFieldsBuilder != null) {
roleFields = roleFieldsBuilder.build();
} else {
roleFields = null;
}
final ImmutableSet<BytesReference> roleQueries;
if (roleQueriesBuilder != null) {
roleQueries = roleQueriesBuilder.build();
} else {
roleQueries = null;
}
indexPermissions.put(index, new IndicesAccessControl.IndexAccessControl(entry.getValue(), roleFields, roleQueries));
}
return indexPermissions.build();
}
}
public static class Globals implements Indices {
@@ -311,6 +418,36 @@ public interface Permission {
return true;
}
@Override
public ImmutableMap<String, IndicesAccessControl.IndexAccessControl> authorize(String action, Set<String> requestedIndicesOrAliases, MetaData metaData) {
if (isEmpty()) {
return ImmutableMap.of();
}
// What this code does is just merge `IndexAccessControl` instances from the permissions this class holds:
Map<String, IndicesAccessControl.IndexAccessControl> indicesAccessControl = null;
for (Global permission : globals) {
ImmutableMap<String, IndicesAccessControl.IndexAccessControl> temp = permission.indices().authorize(action, requestedIndicesOrAliases, metaData);
if (indicesAccessControl == null) {
indicesAccessControl = new HashMap<>(temp);
} else {
for (Map.Entry<String, IndicesAccessControl.IndexAccessControl> entry : temp.entrySet()) {
IndicesAccessControl.IndexAccessControl existing = indicesAccessControl.get(entry.getKey());
if (existing != null) {
indicesAccessControl.put(entry.getKey(), existing.merge(entry.getValue()));
} else {
indicesAccessControl.put(entry.getKey(), entry.getValue());
}
}
}
}
if (indicesAccessControl == null) {
return ImmutableMap.of();
} else {
return ImmutableMap.copyOf(indicesAccessControl);
}
}
static class Iter extends UnmodifiableIterator<Group> {
private final Iterator<Global> globals;
@@ -361,13 +498,17 @@ public interface Permission {
private final Predicate<String> actionMatcher;
private final String[] indices;
private final Predicate<String> indexNameMatcher;
private final List<String> fields;
private final BytesReference query;
public Group(Privilege.Index privilege, @Nullable List<String> fields, @Nullable BytesReference query, String... indices) {
assert indices.length != 0;
this.privilege = privilege;
this.actionMatcher = privilege.predicate();
this.indices = indices;
this.indexNameMatcher = new AutomatonPredicate(Automatons.patterns(indices));
this.fields = fields;
this.query = query;
}
public Privilege.Index privilege() {
@@ -378,6 +519,20 @@ public interface Permission {
return indices;
}
@Nullable
public List<String> getFields() {
return fields;
}
@Nullable
public BytesReference getQuery() {
return query;
}
public boolean indexNameMatch(String index) {
return indexNameMatcher.apply(index);
}
public boolean check(String action, String index) {
assert index != null;
return actionMatcher.apply(action) && indexNameMatcher.apply(index);
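The `check` method above combines two predicates: the action must match the privilege's pattern and the index name must match one of the group's index patterns. A minimal stand-in sketch (plain regexes instead of Shield's `AutomatonPredicate`, with hypothetical names) could look like:

```java
import java.util.function.Predicate;
import java.util.regex.Pattern;

// Hypothetical sketch: a permission group grants an action on an index only when
// both the action pattern and one of the index name patterns match. Shield uses
// Lucene automatons for this; plain regexes stand in for them here.
final class GroupCheckSketch {

    // translate a simple '*' wildcard pattern into a full-match predicate
    static Predicate<String> wildcard(String pattern) {
        Pattern p = Pattern.compile(("\\Q" + pattern + "\\E").replace("*", "\\E.*\\Q"));
        return s -> p.matcher(s).matches();
    }

    static boolean check(String actionPattern, String[] indexPatterns, String action, String index) {
        if (!wildcard(actionPattern).test(action)) {
            return false; // the privilege doesn't cover this action
        }
        for (String indexPattern : indexPatterns) {
            if (wildcard(indexPattern).test(index)) {
                return true; // at least one index pattern covers the index
            }
        }
        return false;
    }
}
```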


@@ -0,0 +1,25 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.elasticsearch.common.inject.multibindings.Multibinder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.engine.IndexSearcherWrapper;
import org.elasticsearch.shield.support.AbstractShieldModule;
public class AccessControlShardModule extends AbstractShieldModule.Node {
public AccessControlShardModule(Settings settings) {
super(settings);
}
@Override
protected void configureNode() {
Multibinder<IndexSearcherWrapper> multibinder
= Multibinder.newSetBinder(binder(), IndexSearcherWrapper.class);
multibinder.addBinding().to(ShieldIndexSearcherWrapper.class);
}
}


@@ -0,0 +1,354 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.apache.lucene.index.*;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.FilterIterator;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;
import org.elasticsearch.index.mapper.internal.SourceFieldMapper;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
/**
* A {@link FilterLeafReader} that exposes only a subset
* of fields from the underlying wrapped reader.
*/
// based on lucene/test-framework's FieldFilterLeafReader.
public final class FieldSubsetReader extends FilterLeafReader {
/**
* Wraps a provided DirectoryReader, exposing a subset of fields.
* <p>
* Note that for convenience, the returned reader
* can be used normally (e.g. passed to {@link DirectoryReader#openIfChanged(DirectoryReader)})
* and so on.
* @param in reader to filter
* @param fieldNames fields to filter.
*/
public static DirectoryReader wrap(DirectoryReader in, Set<String> fieldNames) throws IOException {
return new FieldSubsetDirectoryReader(in, fieldNames);
}
// wraps subreaders with fieldsubsetreaders.
static class FieldSubsetDirectoryReader extends FilterDirectoryReader {
private final Set<String> fieldNames;
FieldSubsetDirectoryReader(DirectoryReader in, final Set<String> fieldNames) throws IOException {
super(in, new FilterDirectoryReader.SubReaderWrapper() {
@Override
public LeafReader wrap(LeafReader reader) {
return new FieldSubsetReader(reader, fieldNames);
}
});
this.fieldNames = fieldNames;
}
@Override
protected DirectoryReader doWrapDirectoryReader(DirectoryReader in) throws IOException {
return new FieldSubsetDirectoryReader(in, fieldNames);
}
}
/** List of filtered fields */
private final FieldInfos fieldInfos;
/** List of filtered fields; this is used for _source filtering */
private final String[] fieldNames;
/**
* Wrap a single segment, exposing a subset of its fields.
*/
FieldSubsetReader(LeafReader in, Set<String> fieldNames) {
super(in);
ArrayList<FieldInfo> filteredInfos = new ArrayList<>();
for (FieldInfo fi : in.getFieldInfos()) {
if (fieldNames.contains(fi.name)) {
filteredInfos.add(fi);
}
}
fieldInfos = new FieldInfos(filteredInfos.toArray(new FieldInfo[filteredInfos.size()]));
this.fieldNames = fieldNames.toArray(new String[fieldNames.size()]);
}
/** returns true if this field is allowed. */
boolean hasField(String field) {
return fieldInfos.fieldInfo(field) != null;
}
@Override
public FieldInfos getFieldInfos() {
return fieldInfos;
}
@Override
public Fields getTermVectors(int docID) throws IOException {
Fields f = super.getTermVectors(docID);
if (f == null) {
return null;
}
f = new FieldFilterFields(f);
// we need to check for emptiness, so we can return null:
return f.iterator().hasNext() ? f : null;
}
@Override
public void document(final int docID, final StoredFieldVisitor visitor) throws IOException {
super.document(docID, new StoredFieldVisitor() {
@Override
public void binaryField(FieldInfo fieldInfo, byte[] value) throws IOException {
if (SourceFieldMapper.NAME.equals(fieldInfo.name)) {
// for _source: parse it, keep only the allowed fields, and serialize it back downstream
BytesReference bytes = new BytesArray(value);
Tuple<XContentType, Map<String, Object>> result = XContentHelper.convertToMap(bytes, true);
Map<String, Object> transformedSource = XContentMapValues.filter(result.v2(), fieldNames, null);
XContentBuilder xContentBuilder = XContentBuilder.builder(result.v1().xContent()).map(transformedSource);
visitor.binaryField(fieldInfo, xContentBuilder.bytes().toBytes());
} else {
visitor.binaryField(fieldInfo, value);
}
}
@Override
public void stringField(FieldInfo fieldInfo, byte[] value) throws IOException {
visitor.stringField(fieldInfo, value);
}
@Override
public void intField(FieldInfo fieldInfo, int value) throws IOException {
visitor.intField(fieldInfo, value);
}
@Override
public void longField(FieldInfo fieldInfo, long value) throws IOException {
visitor.longField(fieldInfo, value);
}
@Override
public void floatField(FieldInfo fieldInfo, float value) throws IOException {
visitor.floatField(fieldInfo, value);
}
@Override
public void doubleField(FieldInfo fieldInfo, double value) throws IOException {
visitor.doubleField(fieldInfo, value);
}
@Override
public Status needsField(FieldInfo fieldInfo) throws IOException {
return hasField(fieldInfo.name) ? visitor.needsField(fieldInfo) : Status.NO;
}
});
}
@Override
public Fields fields() throws IOException {
return new FieldFilterFields(super.fields());
}
@Override
public NumericDocValues getNumericDocValues(String field) throws IOException {
return hasField(field) ? super.getNumericDocValues(field) : null;
}
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
return hasField(field) ? super.getBinaryDocValues(field) : null;
}
@Override
public SortedDocValues getSortedDocValues(String field) throws IOException {
return hasField(field) ? super.getSortedDocValues(field) : null;
}
@Override
public SortedNumericDocValues getSortedNumericDocValues(String field) throws IOException {
return hasField(field) ? super.getSortedNumericDocValues(field) : null;
}
@Override
public SortedSetDocValues getSortedSetDocValues(String field) throws IOException {
return hasField(field) ? super.getSortedSetDocValues(field) : null;
}
@Override
public NumericDocValues getNormValues(String field) throws IOException {
return hasField(field) ? super.getNormValues(field) : null;
}
@Override
public Bits getDocsWithField(String field) throws IOException {
return hasField(field) ? super.getDocsWithField(field) : null;
}
// we share core cache keys (for e.g. fielddata)
@Override
public Object getCombinedCoreAndDeletesKey() {
return in.getCombinedCoreAndDeletesKey();
}
@Override
public Object getCoreCacheKey() {
return in.getCoreCacheKey();
}
/**
* Filters the Fields instance from the postings.
* <p>
* In addition to only returning fields allowed in this subset,
 * the ES internal _field_names field (used by the exists filter) gets special handling,
 * to hide terms for fields that are filtered out.
*/
class FieldFilterFields extends FilterFields {
public FieldFilterFields(Fields in) {
super(in);
}
@Override
public int size() {
// this information is not cheap, return -1 like MultiFields does:
return -1;
}
@Override
public Iterator<String> iterator() {
return new FilterIterator<String, String>(super.iterator()) {
@Override
protected boolean predicateFunction(String field) {
return hasField(field);
}
};
}
@Override
public Terms terms(String field) throws IOException {
if (!hasField(field)) {
return null;
} else if (FieldNamesFieldMapper.NAME.equals(field)) {
// for the _field_names field, fields for the document
// are encoded as postings, where term is the field.
// so we hide terms for fields we filter out.
Terms terms = super.terms(field);
if (terms != null) {
// Check for null: just because a field is present in the FieldInfos with "indexed=true"
// doesn't mean a Terms instance can be retrieved for it. It only means that at some
// point a document with that field was indexed; the field infos aren't updated or
// removed even when no documents refer to the field anymore.
terms = new FieldNamesTerms(terms);
}
return terms;
} else {
return super.terms(field);
}
}
}
/**
* Terms impl for _field_names (used by exists filter) that filters out terms
* representing fields that should not be visible in this reader.
*/
class FieldNamesTerms extends FilterTerms {
FieldNamesTerms(Terms in) {
super(in);
}
@Override
public TermsEnum iterator() throws IOException {
return new FieldNamesTermsEnum(in.iterator());
}
// we don't support field statistics (since we filter out terms)
// but this isn't really a big deal: _field_names is not used for ranking.
@Override
public int getDocCount() throws IOException {
return -1;
}
@Override
public long getSumDocFreq() throws IOException {
return -1;
}
@Override
public long getSumTotalTermFreq() throws IOException {
return -1;
}
@Override
public long size() throws IOException {
return -1;
}
}
/**
* TermsEnum impl for _field_names (used by exists filter) that filters out terms
* representing fields that should not be visible in this reader.
*/
class FieldNamesTermsEnum extends FilterTermsEnum {
FieldNamesTermsEnum(TermsEnum in) {
super(in);
}
/** Return true if term is accepted (matches a field name in this reader). */
boolean accept(BytesRef term) {
return hasField(term.utf8ToString());
}
@Override
public boolean seekExact(BytesRef term) throws IOException {
return accept(term) && in.seekExact(term);
}
@Override
public SeekStatus seekCeil(BytesRef term) throws IOException {
SeekStatus status = in.seekCeil(term);
if (status == SeekStatus.END || accept(term())) {
return status;
}
return next() == null ? SeekStatus.END : SeekStatus.NOT_FOUND;
}
@Override
public BytesRef next() throws IOException {
BytesRef next;
while ((next = in.next()) != null) {
if (accept(next)) {
break;
}
}
return next;
}
// we don't support ordinals, but _field_names is not used in this way
@Override
public void seekExact(long ord) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public long ord() throws IOException {
throw new UnsupportedOperationException();
}
}
}
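The filtering in `FieldNamesTermsEnum#next()` can be illustrated without Lucene: advance through the underlying (sorted) term stream and skip every term that names a field outside the allowed subset. A hypothetical stdlib-only sketch:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the FieldNamesTermsEnum filtering loop: the _field_names
// field stores field names as terms, so hiding a field means skipping its term.
final class FieldNamesFilterSketch {
    static List<String> visibleTerms(Iterator<String> termStream, Set<String> allowedFields) {
        List<String> visible = new ArrayList<>();
        while (termStream.hasNext()) {
            String term = termStream.next();
            if (allowedFields.contains(term)) { // accept(term) in the reader above
                visible.add(term);
            }
        }
        return visible;
    }
}
```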


@@ -0,0 +1,122 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.bytes.BytesReference;
import java.util.HashSet;
import java.util.Set;
/**
* Encapsulates the field and document permissions per concrete index based on the current request.
*/
public class IndicesAccessControl {
public static final IndicesAccessControl ALLOW_ALL = new IndicesAccessControl(true, ImmutableMap.<String, IndexAccessControl>of());
private final boolean granted;
private final ImmutableMap<String, IndexAccessControl> indexPermissions;
public IndicesAccessControl(boolean granted, ImmutableMap<String, IndexAccessControl> indexPermissions) {
this.granted = granted;
this.indexPermissions = indexPermissions;
}
/**
 * @return The document and field permissions for an index, if they exist; otherwise <code>null</code>.
 * A <code>null</code> return value means that there are no field or document level restrictions.
*/
@Nullable
public IndexAccessControl getIndexPermissions(String index) {
return indexPermissions.get(index);
}
/**
* @return Whether any role / permission group is allowed to access all indices.
*/
public boolean isGranted() {
return granted;
}
/**
* Encapsulates the field and document permissions for an index.
*/
public static class IndexAccessControl {
private final boolean granted;
private final ImmutableSet<String> fields;
private final ImmutableSet<BytesReference> queries;
public IndexAccessControl(boolean granted, ImmutableSet<String> fields, ImmutableSet<BytesReference> queries) {
this.granted = granted;
this.fields = fields;
this.queries = queries;
}
/**
 * @return Whether any role / permission group is allowed to access this index.
*/
public boolean isGranted() {
return granted;
}
/**
 * @return The allowed fields for this index permission. If <code>null</code> is returned,
 * there are no field level restrictions.
*/
@Nullable
public ImmutableSet<String> getFields() {
return fields;
}
/**
 * @return The allowed documents, expressed as queries, for this index permission. If <code>null</code> is returned,
 * there are no document level restrictions.
*/
@Nullable
public ImmutableSet<BytesReference> getQueries() {
return queries;
}
public IndexAccessControl merge(IndexAccessControl other) {
boolean granted = this.granted;
if (!granted) {
granted = other.isGranted();
}
// this code is a bit cumbersome, but right now we can't just initialize an empty set,
// because an empty set means no permissions on fields whereas
// <code>null</code> means no field level security
ImmutableSet<String> fields = null;
if (this.fields != null || other.getFields() != null) {
Set<String> _fields = new HashSet<>();
if (this.fields != null) {
_fields.addAll(this.fields);
}
if (other.getFields() != null) {
_fields.addAll(other.getFields());
}
fields = ImmutableSet.copyOf(_fields);
}
ImmutableSet<BytesReference> queries = null;
if (this.queries != null || other.getQueries() != null) {
Set<BytesReference> _queries = new HashSet<>();
if (this.queries != null) {
_queries.addAll(this.queries);
}
if (other.getQueries() != null) {
_queries.addAll(other.getQueries());
}
queries = ImmutableSet.copyOf(_queries);
}
return new IndexAccessControl(granted, fields, queries);
}
}
}
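The `merge` logic above hinges on the null-versus-empty distinction: `null` means field level security is disabled altogether, while an empty set means no fields are allowed. A stdlib-only sketch of the fields half (hypothetical names, mirroring the code above):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of IndexAccessControl#merge for the fields set:
// null = field level security disabled, empty set = no fields allowed.
final class MergeFieldsSketch {
    static Set<String> mergeFields(Set<String> a, Set<String> b) {
        if (a == null && b == null) {
            return null; // neither permission enables field level security
        }
        Set<String> merged = new HashSet<>();
        if (a != null) {
            merged.addAll(a);
        }
        if (b != null) {
            merged.addAll(b);
        }
        return merged; // union of the allowed fields of both permissions
    }
}
```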


@@ -0,0 +1,79 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.Weight;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.broadcast.BroadcastShardRequest;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.AbstractIndexComponent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.cache.query.QueryCache;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.cache.query.IndicesQueryCache;
import org.elasticsearch.search.internal.ShardSearchRequest;
import org.elasticsearch.shield.authz.InternalAuthorizationService;
/**
* Opts out of the query cache if field level security is active for the current request.
*/
public final class OptOutQueryCache extends AbstractIndexComponent implements QueryCache {
final IndicesQueryCache indicesQueryCache;
@Inject
public OptOutQueryCache(Index index, @IndexSettings Settings indexSettings, IndicesQueryCache indicesQueryCache) {
super(index, indexSettings);
this.indicesQueryCache = indicesQueryCache;
}
@Override
public void close() throws ElasticsearchException {
clear("close");
}
@Override
public void clear(String reason) {
logger.debug("full cache clear, reason [{}]", reason);
indicesQueryCache.clearIndex(index.getName());
}
@Override
public Weight doCache(Weight weight, QueryCachingPolicy policy) {
final RequestContext context = RequestContext.current();
if (context == null) {
throw new IllegalStateException("opting out of the query cache. current request can't be found");
}
final IndicesAccessControl indicesAccessControl = context.getRequest().getFromContext(InternalAuthorizationService.INDICES_PERMISSIONS_KEY);
if (indicesAccessControl == null) {
logger.debug("opting out of the query cache. current request doesn't hold indices permissions");
return weight;
}
// Only shard level requests are available at this level, so derive the index from the request type:
final String index;
if (context.getRequest() instanceof ShardSearchRequest) {
index = ((ShardSearchRequest) context.getRequest()).index();
} else if (context.getRequest() instanceof BroadcastShardRequest) {
index = ((BroadcastShardRequest) context.getRequest()).shardId().getIndex();
} else {
return weight;
}
IndicesAccessControl.IndexAccessControl indexAccessControl = indicesAccessControl.getIndexPermissions(index);
if (indexAccessControl != null && indexAccessControl.getFields() != null) {
logger.debug("opting out of the query cache. request for index [{}] has field level security enabled", index);
// If in the future there is a Query#extractFields() then we can be smart on when to skip the query cache.
// (only cache if all fields in the query also are defined in the role)
return weight;
} else {
logger.trace("not opting out of the query cache. request for index [{}] has field level security disabled", index);
return indicesQueryCache.doCache(weight, policy);
}
}
}
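The caching decision in `doCache` boils down to: cache only when the request carries index permissions and field level security is off for that index. As a hypothetical boolean sketch:

```java
import java.util.Set;

// Hypothetical sketch of the OptOutQueryCache decision: a cached entry built with
// one role's field subset must never be served to another role, so caching is
// refused whenever a field restriction (or a missing permission context) is present.
final class OptOutSketch {
    static boolean mayCache(boolean permissionsInContext, Set<String> allowedFields) {
        if (!permissionsInContext) {
            return false; // request holds no indices permissions: opt out
        }
        return allowedFields == null; // null = field level security disabled
    }
}
```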


@@ -0,0 +1,52 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.elasticsearch.transport.TransportRequest;
/**
 * A thread local based holder of the current {@link TransportRequest} instance.
*/
public final class RequestContext {
// Need thread local to make the current transport request available to places in the code that
// don't have direct access to the current transport request
private static final ThreadLocal<RequestContext> current = new ThreadLocal<>();
/**
* If set then this returns the current {@link RequestContext} with the current {@link TransportRequest}.
*/
public static RequestContext current() {
return current.get();
}
/**
* Invoked by the transport service to set the current transport request in the thread local
*/
public static void setCurrent(RequestContext value) {
current.set(value);
}
/**
* Invoked by the transport service to remove the current request from the thread local
*/
public static void removeCurrent() {
current.remove();
}
private final TransportRequest request;
public RequestContext(TransportRequest request) {
this.request = request;
}
/**
* @return current {@link TransportRequest}
*/
public TransportRequest getRequest() {
return request;
}
}
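The intended usage pattern for `RequestContext` is set/handle/remove in the transport layer. The sketch below mimics it with a plain `ThreadLocal` (the `ContextSketch` name and `String` payload are illustrative stand-ins):

```java
// Hypothetical mirror of the RequestContext thread-local holder, showing the
// set/handle/remove discipline the transport service is expected to follow.
final class ContextSketch {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void setCurrent(String request) { CURRENT.set(request); }
    static String current() { return CURRENT.get(); }
    static void removeCurrent() { CURRENT.remove(); }

    static String handle(String request) {
        setCurrent(request);
        try {
            return current(); // downstream code reads the context here
        } finally {
            removeCurrent(); // always clear, or pooled threads leak stale state
        }
    }
}
```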


@@ -0,0 +1,235 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.logging.support.LoggerMessageFormat;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.engine.EngineConfig;
import org.elasticsearch.index.engine.EngineException;
import org.elasticsearch.index.engine.IndexSearcherWrapper;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.DocumentTypeListener;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.internal.ParentFieldMapper;
import org.elasticsearch.index.query.IndexQueryParserService;
import org.elasticsearch.index.query.ParsedQuery;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.ShardUtils;
import org.elasticsearch.indices.IndicesLifecycle;
import org.elasticsearch.shield.authz.InternalAuthorizationService;
import org.elasticsearch.shield.support.Exceptions;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import static org.apache.lucene.search.BooleanClause.Occur.FILTER;
import static org.apache.lucene.search.BooleanClause.Occur.MUST;
/**
* An {@link IndexSearcherWrapper} implementation that is used for field and document level security.
*
* Based on the {@link RequestContext} this class will enable field and/or document level security.
*
* Field level security is enabled by wrapping the original {@link DirectoryReader} in a {@link FieldSubsetReader}
* in the {@link #wrap(DirectoryReader)} method.
*
* Document level security is enabled by replacing the original {@link IndexSearcher} with a {@link ShieldIndexSearcherWrapper.ShieldIndexSearcher}
* instance.
*/
public final class ShieldIndexSearcherWrapper extends AbstractIndexShardComponent implements IndexSearcherWrapper, DocumentTypeListener {
private final IndexQueryParserService parserService;
private volatile Set<String> allowedMetaFields;
private volatile boolean shardStarted = false;
@Inject
public ShieldIndexSearcherWrapper(ShardId shardId, @IndexSettings Settings indexSettings, IndexQueryParserService parserService, IndicesLifecycle indicesLifecycle, MapperService mapperService) {
super(shardId, indexSettings);
this.parserService = parserService;
indicesLifecycle.addListener(new ShardLifecycleListener());
mapperService.addTypeListener(this);
Set<String> allowedMetaFields = new HashSet<>();
allowedMetaFields.addAll(Arrays.asList(MapperService.getAllMetaFields()));
allowedMetaFields.add("_source"); // TODO: add _source to MapperService#META_FIELDS?
allowedMetaFields.add("_version"); // TODO: add _version to MapperService#META_FIELDS?
allowedMetaFields.remove("_all"); // The _all field contains actual data and we can't include that by default.
for (DocumentMapper mapper : mapperService.docMappers(false)) {
ParentFieldMapper parentFieldMapper = mapper.parentFieldMapper();
if (parentFieldMapper.active()) {
String joinField = ParentFieldMapper.joinField(parentFieldMapper.type());
allowedMetaFields.add(joinField);
}
}
this.allowedMetaFields = Collections.unmodifiableSet(allowedMetaFields);
}
@Override
public void beforeCreate(DocumentMapper mapper) {
Set<String> allowedMetaFields = new HashSet<>(this.allowedMetaFields);
ParentFieldMapper parentFieldMapper = mapper.parentFieldMapper();
if (parentFieldMapper.active()) {
String joinField = ParentFieldMapper.joinField(parentFieldMapper.type());
if (allowedMetaFields.add(joinField)) {
this.allowedMetaFields = Collections.unmodifiableSet(allowedMetaFields);
}
}
}
@Override
public DirectoryReader wrap(DirectoryReader reader) {
final Set<String> allowedMetaFields = this.allowedMetaFields;
try {
RequestContext context = RequestContext.current();
if (context == null) {
if (shardStarted == false) {
// The shard this index searcher wrapper has been created for hasn't started yet.
// We may load some initial data, such as previously stored percolator queries, during recovery,
// so for this reason we should provide access to all fields:
return reader;
} else {
logger.debug("couldn't locate the current request, field level security will only allow meta fields");
return FieldSubsetReader.wrap(reader, allowedMetaFields);
}
}
IndicesAccessControl indicesAccessControl = context.getRequest().getFromContext(InternalAuthorizationService.INDICES_PERMISSIONS_KEY);
if (indicesAccessControl == null) {
throw Exceptions.authorizationError("no indices permissions found");
}
ShardId shardId = ShardUtils.extractShardId(reader);
if (shardId == null) {
throw new IllegalStateException(LoggerMessageFormat.format("couldn't extract shardId from reader [{}]", reader));
}
IndicesAccessControl.IndexAccessControl permissions = indicesAccessControl.getIndexPermissions(shardId.getIndex());
// Either no permissions have been defined for an index or no fields have been configured for a role permission
if (permissions == null || permissions.getFields() == null) {
return reader;
}
// now add the allowed fields based on the current granted permissions:
Set<String> fields = new HashSet<>(allowedMetaFields);
fields.addAll(permissions.getFields());
return FieldSubsetReader.wrap(reader, fields);
} catch (IOException e) {
logger.error("Unable to apply field level security", e);
throw ExceptionsHelper.convertToElastic(e);
}
}
@Override
public IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException {
RequestContext context = RequestContext.current();
if (context == null) {
if (shardStarted == false) {
// The shard this index searcher wrapper has been created for hasn't started yet.
// Work such as loading previously stored percolator queries and recovery may run at this point,
// so we should provide access to all documents:
return searcher;
} else {
logger.debug("couldn't locate the current request, document level security hides all documents");
return new ShieldIndexSearcher(engineConfig, searcher, new MatchNoDocsQuery());
}
}
ShardId shardId = ShardUtils.extractShardId(searcher.getIndexReader());
if (shardId == null) {
throw new IllegalStateException(LoggerMessageFormat.format("couldn't extract shardId from reader [{}]", searcher.getIndexReader()));
}
IndicesAccessControl indicesAccessControl = context.getRequest().getFromContext(InternalAuthorizationService.INDICES_PERMISSIONS_KEY);
if (indicesAccessControl == null) {
throw Exceptions.authorizationError("no indices permissions found");
}
IndicesAccessControl.IndexAccessControl permissions = indicesAccessControl.getIndexPermissions(shardId.getIndex());
if (permissions == null) {
return searcher;
} else if (permissions.getQueries() == null) {
return searcher;
}
final Query roleQuery;
switch (permissions.getQueries().size()) {
case 0:
roleQuery = new MatchNoDocsQuery();
break;
case 1:
roleQuery = parserService.parse(permissions.getQueries().iterator().next()).query();
break;
default:
BooleanQuery bq = new BooleanQuery();
for (BytesReference bytesReference : permissions.getQueries()) {
ParsedQuery parsedQuery = parserService.parse(bytesReference);
bq.add(parsedQuery.query(), MUST);
}
roleQuery = bq;
break;
}
return new ShieldIndexSearcher(engineConfig, searcher, roleQuery);
}
/**
* An {@link IndexSearcher} implementation that applies the role query for document level security during
* query rewrite and disables the query cache when field level security requires it.
*/
static final class ShieldIndexSearcher extends IndexSearcher {
private final Query roleQuery;
private ShieldIndexSearcher(EngineConfig engineConfig, IndexSearcher in, Query roleQuery) {
super(in.getIndexReader());
setSimilarity(in.getSimilarity(true));
setQueryCache(engineConfig.getQueryCache());
setQueryCachingPolicy(engineConfig.getQueryCachingPolicy());
this.roleQuery = roleQuery;
}
@Override
public Query rewrite(Query original) throws IOException {
return super.rewrite(wrap(original));
}
@Override
public String toString() {
return "ShieldIndexSearcher(" + super.toString() + ")";
}
private Query wrap(Query original) {
BooleanQuery bq = new BooleanQuery();
bq.add(original, MUST);
bq.add(roleQuery, FILTER);
return bq;
}
}
private class ShardLifecycleListener extends IndicesLifecycle.Listener {
@Override
public void afterIndexShardPostRecovery(IndexShard indexShard) {
if (shardId.equals(indexShard.shardId())) {
shardStarted = true;
}
}
}
}
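The `ShieldIndexSearcher.wrap` method above combines the user's query (a `MUST` clause, which contributes to scoring) with the role query (a `FILTER` clause, which restricts matches without affecting scores). A minimal, hypothetical sketch of that intersection semantics in plain Java — this models the idea with predicates, it is not Shield's actual Lucene code:

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical model: document level security is the intersection of the
// user's query with the role query. In ShieldIndexSearcher.wrap() this is a
// BooleanQuery with the user query as MUST and the role query as FILTER.
public class RoleQuerySketch {
    static List<Map<String, String>> search(List<Map<String, String>> docs,
                                            Predicate<Map<String, String>> userQuery,
                                            Predicate<Map<String, String>> roleQuery) {
        List<Map<String, String>> hits = new ArrayList<>();
        for (Map<String, String> doc : docs) {
            // a document is a hit only if BOTH the user query and the role query match
            if (userQuery.test(doc) && roleQuery.test(doc)) {
                hits.add(doc);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<Map<String, String>> docs = Arrays.asList(
                Map.of("field1", "value1"),
                Map.of("field2", "value2"));
        // the user asks for everything, but the role only permits field1 == value1
        List<Map<String, String>> hits = search(docs,
                doc -> true,
                doc -> "value1".equals(doc.get("field1")));
        System.out.println(hits.size());
    }
}
```

Because the role query is a filter rather than a scoring clause, documents outside the role's visibility simply never match, regardless of what the user asked for.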


@@ -28,11 +28,11 @@ import java.util.*;
/**
*
*/
public class DefaultIndicesResolver implements IndicesResolver<TransportRequest> {
public class DefaultIndicesAndAliasesResolver implements IndicesAndAliasesResolver<TransportRequest> {
private final AuthorizationService authzService;
public DefaultIndicesResolver(AuthorizationService authzService) {
public DefaultIndicesAndAliasesResolver(AuthorizationService authzService) {
this.authzService = authzService;
}
@@ -56,15 +56,15 @@ public class DefaultIndicesResolver implements IndicesResolver<TransportRequest>
Set<String> indices = Sets.newHashSet();
CompositeIndicesRequest compositeIndicesRequest = (CompositeIndicesRequest) request;
for (IndicesRequest indicesRequest : compositeIndicesRequest.subRequests()) {
indices.addAll(resolveIndices(user, action, indicesRequest, metaData));
indices.addAll(resolveIndicesAndAliases(user, action, indicesRequest, metaData));
}
return indices;
}
return resolveIndices(user, action, (IndicesRequest) request, metaData);
return resolveIndicesAndAliases(user, action, (IndicesRequest) request, metaData);
}
private Set<String> resolveIndices(User user, String action, IndicesRequest indicesRequest, MetaData metaData) {
private Set<String> resolveIndicesAndAliases(User user, String action, IndicesRequest indicesRequest, MetaData metaData) {
if (indicesRequest.indicesOptions().expandWildcardsOpen() || indicesRequest.indicesOptions().expandWildcardsClosed()) {
if (indicesRequest instanceof IndicesRequest.Replaceable) {
ImmutableList<String> authorizedIndices = authzService.authorizedIndicesAndAliases(user, action);


@@ -14,7 +14,7 @@ import java.util.Set;
/**
*
*/
public interface IndicesResolver<Request extends TransportRequest> {
public interface IndicesAndAliasesResolver<Request extends TransportRequest> {
Class<Request> requestType();


@@ -11,11 +11,16 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.common.xcontent.yaml.YamlXContent;
import org.elasticsearch.env.Environment;
import org.elasticsearch.shield.ShieldPlugin;
@@ -239,6 +244,65 @@ public class FileRolesStore extends AbstractLifecycleComponent<RolesStore> imple
if (!names.isEmpty()) {
name = new Privilege.Name(names);
}
} else if (token == XContentParser.Token.START_OBJECT) {
List<String> fields = null;
BytesReference query = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
} else if ("fields".equals(currentFieldName)) {
if (token == XContentParser.Token.START_ARRAY) {
fields = (List) parser.list();
} else if (token.isValue()) {
String field = parser.text();
if (field.trim().isEmpty()) {
// The yaml parser doesn't emit null token if the key is empty...
fields = Collections.emptyList();
} else {
fields = Collections.singletonList(field);
}
}
} else if ("query".equals(currentFieldName)) {
if (token == XContentParser.Token.START_OBJECT) {
XContentBuilder builder = JsonXContent.contentBuilder();
XContentHelper.copyCurrentStructure(builder.generator(), parser);
query = builder.bytes();
} else if (token == XContentParser.Token.VALUE_STRING) {
query = new BytesArray(parser.text());
}
} else if ("privileges".equals(currentFieldName)) {
if (token == XContentParser.Token.VALUE_STRING) {
String namesStr = parser.text().trim();
if (Strings.hasLength(namesStr)) {
String[] names = COMMA_DELIM.split(namesStr);
name = new Privilege.Name(names);
}
} else if (token == XContentParser.Token.START_ARRAY) {
Set<String> names = new HashSet<>();
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
if (token == XContentParser.Token.VALUE_STRING) {
names.add(parser.text());
} else {
logger.error("invalid role definition [{}] in roles file [{}]. could not parse " +
"[{}] as index privilege. privilege names must be strings. skipping role...", roleName, path.toAbsolutePath(), token);
return null;
}
}
if (!names.isEmpty()) {
name = new Privilege.Name(names);
}
}
}
}
if (name != null) {
try {
permission.add(fields, query, Privilege.Index.get(name), indices);
} catch (IllegalArgumentException e) {
logger.error("invalid role definition [{}] in roles file [{}]. could not resolve indices privileges [{}]. skipping role...", roleName, path.toAbsolutePath(), name);
return null;
}
}
continue;
} else {
logger.error("invalid role definition [{}] in roles file [{}]. could not parse [{}] as index privileges. privilege lists must either " +
"be a comma delimited string or an array of strings. skipping role...", roleName, path.toAbsolutePath(), token);
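The parser above accepts the new `fields` and `query` options in several shapes: `fields` as a list or a single string, and `query` as an inline object (copied to JSON) or a JSON string. A hypothetical role file illustrating the variants (the role names, index patterns, and values are made up):

```yaml
role_with_object_query:
  indices:
    'index-*':
      privileges: read
      fields:
        - field1
        - field2
      query:
        term:
          field1: value1
role_with_string_query:
  indices:
    'index-*':
      privileges: read
      fields: field1
      query: '{"term" : {"field1" : "value1"}}'
```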


@@ -12,6 +12,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.action.ShieldActionMapper;
import org.elasticsearch.shield.authc.AuthenticationService;
import org.elasticsearch.shield.authz.AuthorizationService;
import org.elasticsearch.shield.authz.accesscontrol.RequestContext;
import org.elasticsearch.shield.transport.netty.ShieldNettyTransport;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.*;
@@ -109,7 +110,7 @@ public class ShieldServerTransportService extends TransportService {
protected final TransportRequestHandler<T> handler;
private final Map<String, ServerTransportFilter> profileFilters;
public ProfileSecuredRequestHandler(String action, TransportRequestHandler handler, Map<String, ServerTransportFilter> profileFilters) {
public ProfileSecuredRequestHandler(String action, TransportRequestHandler<T> handler, Map<String, ServerTransportFilter> profileFilters) {
this.action = action;
this.handler = handler;
this.profileFilters = profileFilters;
@@ -132,11 +133,15 @@ public class ShieldServerTransportService extends TransportService {
}
assert filter != null;
filter.inbound(action, request, channel);
RequestContext context = new RequestContext(request);
RequestContext.setCurrent(context);
handler.messageReceived(request, channel);
} catch (Throwable t) {
channel.sendResponse(t);
return;
} finally {
RequestContext.removeCurrent();
}
handler.messageReceived(request, channel);
}
}
}
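The transport change above binds a `RequestContext` to the handling thread before the handler runs and always detaches it in `finally`, which is what lets the index searcher wrapper find the current request without it being passed through every call. A hypothetical sketch of that thread-bound context pattern (names are illustrative, not Shield's):

```java
// Hypothetical sketch: attach the request to the current thread for the
// duration of handling, and always clear it afterwards so the thread can be
// safely reused for an unrelated request.
public class ContextSketch {
    static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static String handle(String request) {
        CURRENT.set(request);      // RequestContext.setCurrent(context)
        try {
            return doWork();       // handler.messageReceived(request, channel)
        } finally {
            CURRENT.remove();      // RequestContext.removeCurrent()
        }
    }

    static String doWork() {
        // deep in the call stack, the current request is still reachable
        return "handled " + CURRENT.get();
    }

    public static void main(String[] args) {
        System.out.println(handle("req-1"));
        System.out.println(CURRENT.get()); // cleared after handling
    }
}
```

The `finally` block matters: without it, a failed request would leak its context to whatever request the pooled transport thread handles next.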


@@ -0,0 +1,119 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ShieldIntegTestCase;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;
import static org.hamcrest.Matchers.equalTo;
/**
*/
public class DocumentAndFieldLevelSecurityTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(USERS_PASSWD));
@Override
protected String configUsers() {
return super.configUsers() +
"user1:" + USERS_PASSWD_HASHED + "\n" +
"user2:" + USERS_PASSWD_HASHED + "\n" +
"user3:" + USERS_PASSWD_HASHED + "\n" ;
}
@Override
protected String configUsersRoles() {
return super.configUsersRoles() +
"role1:user1\n" +
"role2:user2\n" +
"role3:user3\n";
}
@Override
protected String configRoles() {
return super.configRoles() +
"\nrole1:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: field1\n" +
" query: '{\"term\" : {\"field1\" : \"value1\"}}'\n" +
"role2:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: field2\n" +
" query: '{\"term\" : {\"field2\" : \"value2\"}}'\n" +
"role3:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: field2\n" +
" query: '{\"term\" : {\"field1\" : \"value1\"}}'\n";
}
public void testSimpleQuery() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
.setRefresh(true)
.get();
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
.setRefresh(true)
.get();
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "1");
assertThat(response.getHits().getAt(0).getSource().size(), equalTo(1));
assertThat(response.getHits().getAt(0).getSource().get("field1").toString(), equalTo("value1"));
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "2");
assertThat(response.getHits().getAt(0).getSource().size(), equalTo(1));
assertThat(response.getHits().getAt(0).getSource().get("field2").toString(), equalTo("value2"));
}
public void testQueryCache() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.setSettings(Settings.builder().put(IndexCacheModule.QUERY_CACHE_EVERYTHING, true))
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// Both users have the same role query, but user3 has access to field2 and not field1, which should result in zero hits:
int max = scaledRandomIntBetween(4, 32);
for (int i = 0; i < max; i++) {
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertHitCount(response, 0);
}
}
}
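The `testQueryCache` test above exercises the hazard that motivates disabling the query cache: two users share a role query but see different fields, so a cache entry keyed only by the query would leak one role's hits to the other. A hypothetical sketch of that failure mode (the cache and roles here are illustrative, not Elasticsearch's actual cache):

```java
import java.util.*;

// Hypothetical sketch of the hazard the query cache test guards against:
// a cache keyed only by the query would hand role A's results to role B.
public class QueryCacheHazard {
    static final Map<String, List<String>> cache = new HashMap<>();

    // visibleDocs simulates what the executing role is allowed to see
    static List<String> search(String query, List<String> visibleDocs) {
        // UNSAFE: the cache key ignores which role executed the query
        return cache.computeIfAbsent(query, q -> visibleDocs);
    }

    public static void main(String[] args) {
        List<String> roleA = Arrays.asList("doc1");   // role A sees doc1
        List<String> roleB = Collections.emptyList(); // role B sees nothing
        System.out.println(search("match_all", roleA));
        // role B now gets role A's cached hits - a security leak, which is
        // why the query cache (and request cache) are disabled for such roles
        System.out.println(search("match_all", roleB));
    }
}
```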


@@ -0,0 +1,96 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequestBuilder;
import org.elasticsearch.action.index.IndexRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ShieldIntegTestCase;
import java.util.ArrayList;
import java.util.List;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.hamcrest.Matchers.equalTo;
/**
*/
public class DocumentLevelSecurityRandomTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(USERS_PASSWD));
// can't add a second test method, because each test run creates a new instance of this class and that will result
// in a new random value:
private final int numberOfRoles = scaledRandomIntBetween(3, 99);
@Override
protected String configUsers() {
StringBuilder builder = new StringBuilder(super.configUsers());
for (int i = 1; i <= numberOfRoles; i++) {
builder.append("user").append(i).append(':').append(USERS_PASSWD_HASHED).append('\n');
}
return builder.toString();
}
@Override
protected String configUsersRoles() {
StringBuilder builder = new StringBuilder(super.configUsersRoles());
for (int i = 1; i <= numberOfRoles; i++) {
builder.append("role").append(i).append(":user").append(i).append('\n');
}
return builder.toString();
}
@Override
protected String configRoles() {
StringBuilder builder = new StringBuilder(super.configRoles());
builder.append('\n');
for (int i = 1; i <= numberOfRoles; i++) {
builder.append("role").append(i).append(":\n");
builder.append(" cluster: all\n");
builder.append(" indices:\n");
builder.append(" '*':\n");
builder.append(" privileges: ALL\n");
builder.append(" query: \n");
builder.append(" term: \n");
builder.append(" field1: value").append(i).append('\n');
}
return builder.toString();
}
public void testDuelWithAliasFilters() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
List<IndexRequestBuilder> requests = new ArrayList<>(numberOfRoles);
IndicesAliasesRequestBuilder builder = client().admin().indices().prepareAliases();
for (int i = 1; i <= numberOfRoles; i++) {
String value = "value" + i;
requests.add(client().prepareIndex("test", "type1", value).setSource("field1", value));
builder.addAlias("test", "alias" + i, QueryBuilders.termQuery("field1", value));
}
indexRandom(true, requests);
builder.get();
for (int roleI = 1; roleI <= numberOfRoles; roleI++) {
SearchResponse searchResponse1 = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user" + roleI, USERS_PASSWD))
.get();
SearchResponse searchResponse2 = client().prepareSearch("alias" + roleI).get();
assertThat(searchResponse1.getHits().getTotalHits(), equalTo(searchResponse2.getHits().getTotalHits()));
for (int hitI = 0; hitI < searchResponse1.getHits().getHits().length; hitI++) {
assertThat(searchResponse1.getHits().getAt(hitI).getId(), equalTo(searchResponse2.getHits().getAt(hitI).getId()));
}
}
}
}


@@ -0,0 +1,270 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.action.percolate.PercolateResponse;
import org.elasticsearch.action.percolate.PercolateSourceBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.children.Children;
import org.elasticsearch.search.aggregations.bucket.global.Global;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ShieldIntegTestCase;
import static org.elasticsearch.index.query.QueryBuilders.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;
import static org.hamcrest.Matchers.equalTo;
/**
*/
public class DocumentLevelSecurityTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(USERS_PASSWD));
@Override
protected String configUsers() {
return super.configUsers() +
"user1:" + USERS_PASSWD_HASHED + "\n" +
"user2:" + USERS_PASSWD_HASHED + "\n" ;
}
@Override
protected String configUsersRoles() {
return super.configUsersRoles() +
"role1:user1\n" +
"role2:user2\n";
}
@Override
protected String configRoles() {
return super.configRoles() +
"\nrole1:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" query: \n" +
" term: \n" +
" field1: value1\n" +
"role2:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" query: '{\"term\" : {\"field2\" : \"value2\"}}'"; // <-- query defined as json in a string
}
public void testSimpleQuery() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
.setRefresh(true)
.get();
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
.setRefresh(true)
.get();
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "1");
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "2");
}
public void testGlobalAggregation() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
.setRefresh(true)
.get();
client().prepareIndex("test", "type1", "2").setSource("field2", "value2")
.setRefresh(true)
.get();
SearchResponse response = client().prepareSearch("test")
.addAggregation(AggregationBuilders.global("global").subAggregation(AggregationBuilders.terms("field2").field("field2")))
.get();
assertHitCount(response, 2);
assertSearchHits(response, "1", "2");
Global globalAgg = response.getAggregations().get("global");
assertThat(globalAgg.getDocCount(), equalTo(2l));
Terms termsAgg = globalAgg.getAggregations().get("field2");
assertThat(termsAgg.getBuckets().get(0).getKeyAsString(), equalTo("value2"));
assertThat(termsAgg.getBuckets().get(0).getDocCount(), equalTo(1l));
response = client().prepareSearch("test")
.addAggregation(AggregationBuilders.global("global").subAggregation(AggregationBuilders.terms("field2").field("field2")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "1");
globalAgg = response.getAggregations().get("global");
assertThat(globalAgg.getDocCount(), equalTo(1l));
termsAgg = globalAgg.getAggregations().get("field2");
assertThat(termsAgg.getBuckets().size(), equalTo(0));
}
public void testChildrenAggregation() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
.addMapping("type2", "_parent", "type=type1", "field3", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1")
.setRefresh(true)
.get();
client().prepareIndex("test", "type2", "2").setSource("field3", "value3")
.setParent("1")
.setRefresh(true)
.get();
SearchResponse response = client().prepareSearch("test")
.setTypes("type1")
.addAggregation(AggregationBuilders.children("children").childType("type2")
.subAggregation(AggregationBuilders.terms("field3").field("field3")))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "1");
Children children = response.getAggregations().get("children");
assertThat(children.getDocCount(), equalTo(1l));
Terms termsAgg = children.getAggregations().get("field3");
assertThat(termsAgg.getBuckets().get(0).getKeyAsString(), equalTo("value3"));
assertThat(termsAgg.getBuckets().get(0).getDocCount(), equalTo(1l));
response = client().prepareSearch("test")
.setTypes("type1")
.addAggregation(AggregationBuilders.children("children").childType("type2")
.subAggregation(AggregationBuilders.terms("field3").field("field3")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
assertSearchHits(response, "1");
children = response.getAggregations().get("children");
assertThat(children.getDocCount(), equalTo(0l));
termsAgg = children.getAggregations().get("field3");
assertThat(termsAgg.getBuckets().size(), equalTo(0));
}
public void testParentChild() {
assertAcked(prepareCreate("test")
.addMapping("parent")
.addMapping("child", "_parent", "type=parent", "field1", "type=string", "field2", "type=string"));
ensureGreen();
// index simple data
client().prepareIndex("test", "parent", "p1").setSource("field1", "value1").get();
client().prepareIndex("test", "child", "c1").setSource("field2", "value2").setParent("p1").get();
client().prepareIndex("test", "child", "c2").setSource("field2", "value2").setParent("p1").get();
refresh();
SearchResponse searchResponse = client().prepareSearch("test")
.setQuery(hasChildQuery("child", matchAllQuery()))
.get();
assertHitCount(searchResponse, 1l);
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
searchResponse = client().prepareSearch("test")
.setQuery(hasParentQuery("parent", matchAllQuery()))
.addSort("_id", SortOrder.ASC)
.get();
assertHitCount(searchResponse, 2l);
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("c1"));
assertThat(searchResponse.getHits().getAt(1).id(), equalTo("c2"));
// user1 can only see the parent doc and user2 can only see the child docs, so no parent/child query should yield results:
searchResponse = client().prepareSearch("test")
.setQuery(hasChildQuery("child", matchAllQuery()))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 0l);
searchResponse = client().prepareSearch("test")
.setQuery(hasChildQuery("child", matchAllQuery()))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 0l);
searchResponse = client().prepareSearch("test")
.setQuery(hasParentQuery("parent", matchAllQuery()))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 0l);
searchResponse = client().prepareSearch("test")
.setQuery(hasParentQuery("parent", matchAllQuery()))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 0l);
}
public void testPercolateApi() {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping(".percolator", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", ".percolator", "1")
.setSource("{\"query\" : { \"match_all\" : {} }, \"field1\" : \"value1\"}")
.setRefresh(true)
.get();
// Percolator without a query just evaluates all percolator queries that are loaded, so we have a match:
PercolateResponse response = client().preparePercolate()
.setDocumentType("type")
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(1l));
assertThat(response.getMatches()[0].getId().string(), equalTo("1"));
// Percolator with a query on a document that the current user can see. Percolator will have one query to evaluate, so there is a match:
response = client().preparePercolate()
.setDocumentType("type")
.setPercolateQuery(termQuery("field1", "value1"))
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(1l));
assertThat(response.getMatches()[0].getId().string(), equalTo("1"));
// Percolator with a query on a document that the current user can't see. Percolator will not have queries to evaluate, so there is no match:
response = client().preparePercolate()
.setDocumentType("type")
.setPercolateQuery(termQuery("field1", "value1"))
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(0l));
assertAcked(client().admin().indices().prepareClose("test"));
assertAcked(client().admin().indices().prepareOpen("test"));
ensureGreen("test");
// Ensure that the query loading that happens at startup has permissions to load the percolator queries:
response = client().preparePercolate()
.setDocumentType("type")
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(1l));
assertThat(response.getMatches()[0].getId().string(), equalTo("1"));
}
}


@@ -0,0 +1,216 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.action.index.IndexRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ShieldIntegTestCase;
import java.util.*;
import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;
import static org.hamcrest.Matchers.equalTo;
public class FieldLevelSecurityRandomTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(USERS_PASSWD));
private static Set<String> allowedFields;
private static Set<String> disAllowedFields;
@Override
protected String configUsers() {
return super.configUsers() +
"user1:" + USERS_PASSWD_HASHED + "\n" +
"user2:" + USERS_PASSWD_HASHED + "\n" +
"user3:" + USERS_PASSWD_HASHED + "\n" +
"user4:" + USERS_PASSWD_HASHED + "\n" ;
}
@Override
protected String configUsersRoles() {
return super.configUsersRoles() +
"role1:user1\n" +
"role2:user2\n" +
"role3:user3\n" +
"role4:user4\n";
}
@Override
protected String configRoles() {
if (allowedFields == null) {
allowedFields = new HashSet<>();
disAllowedFields = new HashSet<>();
int numFields = scaledRandomIntBetween(5, 50);
for (int i = 0; i < numFields; i++) {
String field = "field" + i;
if (i % 2 == 0) {
allowedFields.add(field);
} else {
disAllowedFields.add(field);
}
}
}
StringBuilder roleFields = new StringBuilder();
for (String field : allowedFields) {
roleFields.append(" - ").append(field).append('\n');
}
return super.configRoles() +
"\nrole1:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields:\n" + roleFields.toString() +
"role2:\n" +
" cluster: all\n" +
" indices:\n" +
" test:\n" +
" privileges: ALL\n" +
" fields:\n" +
" - field1\n" +
"role3:\n" +
" cluster: all\n" +
" indices:\n" +
" test:\n" +
" privileges: ALL\n" +
" fields:\n" +
" - field2\n" +
"role4:\n" +
" cluster: all\n" +
" indices:\n" +
" test:\n" +
" privileges: ALL\n" +
" fields:\n" +
" - field3\n";
}
public void testRandom() throws Exception {
int j = 0;
Map<String, Object> doc = new HashMap<>();
String[] fieldMappers = new String[(allowedFields.size() + disAllowedFields.size()) * 2];
for (String field : allowedFields) {
fieldMappers[j++] = field;
fieldMappers[j++] = "type=string";
doc.put(field, "value");
}
for (String field : disAllowedFields) {
fieldMappers[j++] = field;
fieldMappers[j++] = "type=string";
doc.put(field, "value");
}
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", fieldMappers)
);
client().prepareIndex("test", "type1", "1").setSource(doc).setRefresh(true).get();
for (String allowedField : allowedFields) {
logger.info("Checking allowed field [{}]", allowedField);
SearchResponse response = client().prepareSearch("test")
.setQuery(matchQuery(allowedField, "value"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
}
for (String disallowedField : disAllowedFields) {
logger.info("Checking disallowed field [{}]", disallowedField);
SearchResponse response = client().prepareSearch("test")
.setQuery(matchQuery(disallowedField, "value"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 0);
}
}
public void testDuel() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string", "field3", "type=string")
);
int numDocs = scaledRandomIntBetween(32, 128);
List<IndexRequestBuilder> requests = new ArrayList<>(numDocs);
for (int i = 1; i <= numDocs; i++) {
String field = randomFrom("field1", "field2", "field3");
String value = "value";
requests.add(client().prepareIndex("test", "type1", value).setSource(field, value));
}
indexRandom(true, requests);
SearchResponse actual = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field1", "value"))
.should(QueryBuilders.termQuery("field2", "value"))
.should(QueryBuilders.termQuery("field3", "value"))
)
.get();
SearchResponse expected = client().prepareSearch("test")
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field1", "value"))
)
.get();
assertThat(actual.getHits().getTotalHits(), equalTo(expected.getHits().getTotalHits()));
assertThat(actual.getHits().getHits().length, equalTo(expected.getHits().getHits().length));
for (int i = 0; i < actual.getHits().getHits().length; i++) {
assertThat(actual.getHits().getAt(i).getId(), equalTo(expected.getHits().getAt(i).getId()));
}
actual = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field1", "value"))
.should(QueryBuilders.termQuery("field2", "value"))
.should(QueryBuilders.termQuery("field3", "value"))
)
.get();
expected = client().prepareSearch("test")
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field2", "value"))
)
.get();
assertThat(actual.getHits().getTotalHits(), equalTo(expected.getHits().getTotalHits()));
assertThat(actual.getHits().getHits().length, equalTo(expected.getHits().getHits().length));
for (int i = 0; i < actual.getHits().getHits().length; i++) {
assertThat(actual.getHits().getAt(i).getId(), equalTo(expected.getHits().getAt(i).getId()));
}
actual = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field1", "value"))
.should(QueryBuilders.termQuery("field2", "value"))
.should(QueryBuilders.termQuery("field3", "value"))
)
.get();
expected = client().prepareSearch("test")
.addSort("_uid", SortOrder.ASC)
.setQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("field3", "value"))
)
.get();
assertThat(actual.getHits().getTotalHits(), equalTo(expected.getHits().getTotalHits()));
assertThat(actual.getHits().getHits().length, equalTo(expected.getHits().getHits().length));
for (int i = 0; i < actual.getHits().getHits().length; i++) {
assertThat(actual.getHits().getAt(i).getId(), equalTo(expected.getHits().getAt(i).getId()));
}
}
}
@ -0,0 +1,771 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.Version;
import org.elasticsearch.action.fieldstats.FieldStatsResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.percolate.PercolateResponse;
import org.elasticsearch.action.percolate.PercolateSourceBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsResponse;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.indices.cache.request.IndicesRequestCache;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.ShieldIntegTestCase;
import static org.elasticsearch.index.query.QueryBuilders.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;
import static org.hamcrest.Matchers.*;
// The random usage of meta fields such as _timestamp adds noise to the test, so disable random index templates:
@ESIntegTestCase.ClusterScope(randomDynamicTemplates = false)
public class FieldLevelSecurityTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(new SecuredString("change_me".toCharArray())));
@Override
protected String configUsers() {
return super.configUsers() +
"user1:" + USERS_PASSWD_HASHED + "\n" +
"user2:" + USERS_PASSWD_HASHED + "\n" +
"user3:" + USERS_PASSWD_HASHED + "\n" +
"user4:" + USERS_PASSWD_HASHED + "\n" +
"user5:" + USERS_PASSWD_HASHED + "\n";
}
@Override
protected String configUsersRoles() {
return super.configUsersRoles() +
"role1:user1\n" +
"role2:user2\n" +
"role3:user3\n" +
"role4:user4\n" +
"role5:user5\n";
}
@Override
protected String configRoles() {
return super.configRoles() +
"\nrole1:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: field1\n" +
"role2:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: field2\n" +
"role3:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields: \n" +
" - field1\n" +
" - field2\n" +
"role4:\n" +
" cluster: all\n" +
" indices:\n" +
" '*':\n" +
" privileges: ALL\n" +
" fields:\n" +
"role5:\n" +
" cluster: all\n" +
" indices:\n" +
" '*': ALL\n";
}
public void testQuery() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// user1 has access to field1, so the query should match with the document:
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.setQuery(matchQuery("field1", "value1"))
.get();
assertHitCount(response, 1);
// user2 has no access to field1, so the query should not match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.setQuery(matchQuery("field1", "value1"))
.get();
assertHitCount(response, 0);
// user3 has access to field1 and field2, so the query should match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.setQuery(matchQuery("field1", "value1"))
.get();
assertHitCount(response, 1);
// user4 has access to no fields, so the query should not match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.setQuery(matchQuery("field1", "value1"))
.get();
assertHitCount(response, 0);
// user5 has no field level security configured, so the query should match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.setQuery(matchQuery("field1", "value1"))
.get();
assertHitCount(response, 1);
// user1 has no access to field2, so the query should not match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.setQuery(matchQuery("field2", "value2"))
.get();
assertHitCount(response, 0);
// user2 has access to field2, so the query should match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.setQuery(matchQuery("field2", "value2"))
.get();
assertHitCount(response, 1);
// user3 has access to field1 and field2, so the query should match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.setQuery(matchQuery("field2", "value2"))
.get();
assertHitCount(response, 1);
// user4 has access to no fields, so the query should not match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.setQuery(matchQuery("field2", "value2"))
.get();
assertHitCount(response, 0);
// user5 has no field level security configured, so the query should match with the document:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.setQuery(matchQuery("field2", "value2"))
.get();
assertHitCount(response, 1);
}
public void testGetApi() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.get();
Boolean realtime = randomFrom(true, false, null);
// user1 is granted access to field1 only:
GetResponse response = client().prepareGet("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getSource().size(), equalTo(1));
assertThat(response.getSource().get("field1").toString(), equalTo("value1"));
// user2 is granted access to field2 only:
response = client().prepareGet("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getSource().size(), equalTo(1));
assertThat(response.getSource().get("field2").toString(), equalTo("value2"));
// user3 is granted access to field1 and field2:
response = client().prepareGet("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getSource().size(), equalTo(2));
assertThat(response.getSource().get("field1").toString(), equalTo("value1"));
assertThat(response.getSource().get("field2").toString(), equalTo("value2"));
// user4 is granted access to no fields, so the get response says the doc exists, but no fields are returned:
response = client().prepareGet("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getSource().size(), equalTo(0));
// user5 has no field level security configured, so all fields are returned:
response = client().prepareGet("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getSource().size(), equalTo(2));
assertThat(response.getSource().get("field1").toString(), equalTo("value1"));
assertThat(response.getSource().get("field2").toString(), equalTo("value2"));
}
public void testMGetApi() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.get();
Boolean realtime = randomFrom(true, false, null);
// user1 is granted access to field1 only:
MultiGetResponse response = client().prepareMultiGet()
.add("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getResponses()[0].isFailed(), is(false));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getSource().size(), equalTo(1));
assertThat(response.getResponses()[0].getResponse().getSource().get("field1").toString(), equalTo("value1"));
// user2 is granted access to field2 only:
response = client().prepareMultiGet()
.add("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getResponses()[0].isFailed(), is(false));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getSource().size(), equalTo(1));
assertThat(response.getResponses()[0].getResponse().getSource().get("field2").toString(), equalTo("value2"));
// user3 is granted access to field1 and field2:
response = client().prepareMultiGet()
.add("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.getResponses()[0].isFailed(), is(false));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getSource().size(), equalTo(2));
assertThat(response.getResponses()[0].getResponse().getSource().get("field1").toString(), equalTo("value1"));
assertThat(response.getResponses()[0].getResponse().getSource().get("field2").toString(), equalTo("value2"));
// user4 is granted access to no fields, so the get response says the doc exists, but no fields are returned:
response = client().prepareMultiGet()
.add("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.getResponses()[0].isFailed(), is(false));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getSource().size(), equalTo(0));
// user5 has no field level security configured, so all fields are returned:
response = client().prepareMultiGet()
.add("test", "type1", "1")
.setRealtime(realtime)
.setRefresh(true)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.get();
assertThat(response.getResponses()[0].isFailed(), is(false));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getSource().size(), equalTo(2));
assertThat(response.getResponses()[0].getResponse().getSource().get("field1").toString(), equalTo("value1"));
assertThat(response.getResponses()[0].getResponse().getSource().get("field2").toString(), equalTo("value2"));
}
public void testFieldStatsApi() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// user1 is granted access to field1 only:
FieldStatsResponse response = client().prepareFieldStats()
.setFields("field1", "field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getAllFieldStats().size(), equalTo(1));
assertThat(response.getAllFieldStats().get("field1").getDocCount(), equalTo(1l));
// user2 is granted access to field2 only:
response = client().prepareFieldStats()
.setFields("field1", "field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getAllFieldStats().size(), equalTo(1));
assertThat(response.getAllFieldStats().get("field2").getDocCount(), equalTo(1l));
// user3 is granted access to field1 and field2:
response = client().prepareFieldStats()
.setFields("field1", "field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.getAllFieldStats().size(), equalTo(2));
assertThat(response.getAllFieldStats().get("field1").getDocCount(), equalTo(1l));
assertThat(response.getAllFieldStats().get("field2").getDocCount(), equalTo(1l));
// user4 is granted access to no fields:
response = client().prepareFieldStats()
.setFields("field1", "field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.getAllFieldStats().size(), equalTo(0));
// user5 has no field level security configured:
response = client().prepareFieldStats()
.setFields("field1", "field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.get();
assertThat(response.getAllFieldStats().size(), equalTo(2));
assertThat(response.getAllFieldStats().get("field1").getDocCount(), equalTo(1l));
assertThat(response.getAllFieldStats().get("field2").getDocCount(), equalTo(1l));
}
public void testQueryCache() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.setSettings(Settings.builder().put(IndexCacheModule.QUERY_CACHE_EVERYTHING, true))
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
int max = scaledRandomIntBetween(4, 32);
for (int i = 0; i < max; i++) {
SearchResponse response = client().prepareSearch("test")
.setQuery(constantScoreQuery(termQuery("field1", "value1")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(response, 1);
response = client().prepareSearch("test")
.setQuery(constantScoreQuery(termQuery("field1", "value1")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(response, 0);
}
}
public void testRequestCache() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED, true))
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
int max = scaledRandomIntBetween(4, 32);
for (int i = 0; i < max; i++) {
Boolean requestCache = randomFrom(true, null);
SearchResponse response = client().prepareSearch("test")
.setSize(0)
.setQuery(termQuery("field1", "value1"))
.setRequestCache(requestCache)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertNoFailures(response);
assertHitCount(response, 1);
response = client().prepareSearch("test")
.setSize(0)
.setQuery(termQuery("field1", "value1"))
.setRequestCache(requestCache)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertNoFailures(response);
assertHitCount(response, 0);
}
}
public void testFields() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string,store=yes", "field2", "type=string,store=yes")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// user1 is granted access to field1 only:
SearchResponse response = client().prepareSearch("test")
.addField("field1")
.addField("field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).fields().size(), equalTo(1));
assertThat(response.getHits().getAt(0).fields().get("field1").<String>getValue(), equalTo("value1"));
// user2 is granted access to field2 only:
response = client().prepareSearch("test")
.addField("field1")
.addField("field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).fields().size(), equalTo(1));
assertThat(response.getHits().getAt(0).fields().get("field2").<String>getValue(), equalTo("value2"));
// user3 is granted access to field1 and field2:
response = client().prepareSearch("test")
.addField("field1")
.addField("field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).fields().size(), equalTo(2));
assertThat(response.getHits().getAt(0).fields().get("field1").<String>getValue(), equalTo("value1"));
assertThat(response.getHits().getAt(0).fields().get("field2").<String>getValue(), equalTo("value2"));
// user4 is granted access to no fields:
response = client().prepareSearch("test")
.addField("field1")
.addField("field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).fields().size(), equalTo(0));
// user5 has no field level security configured:
response = client().prepareSearch("test")
.addField("field1")
.addField("field2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).fields().size(), equalTo(2));
assertThat(response.getHits().getAt(0).fields().get("field1").<String>getValue(), equalTo("value1"));
assertThat(response.getHits().getAt(0).fields().get("field2").<String>getValue(), equalTo("value2"));
}
public void testSource() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// user1 is granted access to field1 only:
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).sourceAsMap().size(), equalTo(1));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field1").toString(), equalTo("value1"));
// user2 is granted access to field2 only:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).sourceAsMap().size(), equalTo(1));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field2").toString(), equalTo("value2"));
// user3 is granted access to field1 and field2:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).sourceAsMap().size(), equalTo(2));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field1").toString(), equalTo("value1"));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field2").toString(), equalTo("value2"));
// user4 is granted access to no fields:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).sourceAsMap().size(), equalTo(0));
// user5 has no field level security configured:
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user5", USERS_PASSWD))
.get();
assertThat(response.getHits().getAt(0).sourceAsMap().size(), equalTo(2));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field1").toString(), equalTo("value1"));
assertThat(response.getHits().getAt(0).sourceAsMap().get("field2").toString(), equalTo("value2"));
}
public void testSort() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=long", "field2", "type=long")
);
client().prepareIndex("test", "type1", "1").setSource("field1", 1d, "field2", 2d)
.setRefresh(true)
.get();
// user1 is granted access to field1, so it is included in the sort values
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.addSort("field1", SortOrder.ASC)
.get();
assertThat((Long) response.getHits().getAt(0).sortValues()[0], equalTo(1L));
// user2 is not granted access to field1, so the default missing sort value is used
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.addSort("field1", SortOrder.ASC)
.get();
assertThat((Long) response.getHits().getAt(0).sortValues()[0], equalTo(Long.MAX_VALUE));
// user1 is not granted access to field2, so the default missing sort value is used
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.addSort("field2", SortOrder.ASC)
.get();
assertThat((Long) response.getHits().getAt(0).sortValues()[0], equalTo(Long.MAX_VALUE));
// user2 is granted access to field2, so it is included in the sort values
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.addSort("field2", SortOrder.ASC)
.get();
assertThat((Long) response.getHits().getAt(0).sortValues()[0], equalTo(2L));
}
public void testAggs() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
// user1 is authorized to use field1, so buckets are included for a terms agg on field1
SearchResponse response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.addAggregation(AggregationBuilders.terms("_name").field("field1"))
.get();
assertThat(((Terms) response.getAggregations().get("_name")).getBucketByKey("value1").getDocCount(), equalTo(1l));
// user2 is not authorized to use field1, so no buckets are included for a terms agg on field1
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.addAggregation(AggregationBuilders.terms("_name").field("field1"))
.get();
assertThat(((Terms) response.getAggregations().get("_name")).getBucketByKey("value1"), nullValue());
// user1 is not authorized to use field2, so no buckets are included for a terms agg on field2
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.addAggregation(AggregationBuilders.terms("_name").field("field2"))
.get();
assertThat(((Terms) response.getAggregations().get("_name")).getBucketByKey("value2"), nullValue());
// user2 is authorized to use field2, so buckets are included for a terms agg on field2
response = client().prepareSearch("test")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.addAggregation(AggregationBuilders.terms("_name").field("field2"))
.get();
assertThat(((Terms) response.getAggregations().get("_name")).getBucketByKey("value2").getDocCount(), equalTo(1l));
}
public void testTVApi() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string,term_vector=with_positions_offsets_payloads", "field2", "type=string,term_vector=with_positions_offsets_payloads")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
Boolean realtime = randomFrom(true, false, null);
TermVectorsResponse response = client().prepareTermVectors("test", "type1", "1")
.setRealtime(realtime)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getFields().size(), equalTo(1));
assertThat(response.getFields().terms("field1").size(), equalTo(1l));
response = client().prepareTermVectors("test", "type1", "1")
.setRealtime(realtime)
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getFields().size(), equalTo(1));
assertThat(response.getFields().terms("field2").size(), equalTo(1l));
response = client().prepareTermVectors("test", "type1", "1")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.setRealtime(realtime)
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getFields().size(), equalTo(2));
assertThat(response.getFields().terms("field1").size(), equalTo(1l));
assertThat(response.getFields().terms("field2").size(), equalTo(1l));
response = client().prepareTermVectors("test", "type1", "1")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.setRealtime(realtime)
.get();
assertThat(response.isExists(), is(true));
assertThat(response.getFields().size(), equalTo(0));
}
public void testMTVApi() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string,term_vector=with_positions_offsets_payloads", "field2", "type=string,term_vector=with_positions_offsets_payloads")
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2")
.setRefresh(true)
.get();
Boolean realtime = randomFrom(true, false, null);
MultiTermVectorsResponse response = client().prepareMultiTermVectors()
.add(new TermVectorsRequest("test", "type1", "1").realtime(realtime))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(response.getResponses().length, equalTo(1));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getFields().size(), equalTo(1));
assertThat(response.getResponses()[0].getResponse().getFields().terms("field1").size(), equalTo(1l));
response = client().prepareMultiTermVectors()
.add(new TermVectorsRequest("test", "type1", "1").realtime(realtime))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getResponses().length, equalTo(1));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getFields().size(), equalTo(1));
assertThat(response.getResponses()[0].getResponse().getFields().terms("field2").size(), equalTo(1l));
response = client().prepareMultiTermVectors()
.add(new TermVectorsRequest("test", "type1", "1").realtime(realtime))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user3", USERS_PASSWD))
.get();
assertThat(response.getResponses().length, equalTo(1));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getFields().size(), equalTo(2));
assertThat(response.getResponses()[0].getResponse().getFields().terms("field1").size(), equalTo(1L));
assertThat(response.getResponses()[0].getResponse().getFields().terms("field2").size(), equalTo(1L));
response = client().prepareMultiTermVectors()
.add(new TermVectorsRequest("test", "type1", "1").realtime(realtime))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user4", USERS_PASSWD))
.get();
assertThat(response.getResponses().length, equalTo(1));
assertThat(response.getResponses()[0].getResponse().isExists(), is(true));
assertThat(response.getResponses()[0].getResponse().getFields().size(), equalTo(0));
}
public void testPercolateApi() {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping(".percolator", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", ".percolator", "1")
.setSource("{\"query\" : { \"match_all\" : {} }, \"field1\" : \"value1\"}")
.setRefresh(true)
.get();
// Percolator without a query just evaluates all percolator queries that are loaded, so we have a match:
PercolateResponse response = client().preparePercolate()
.setDocumentType("type")
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(1L));
assertThat(response.getMatches()[0].getId().string(), equalTo("1"));
// Percolator with a query on a field that the current user can't see. Percolator will not have queries to evaluate, so there is no match:
response = client().preparePercolate()
.setDocumentType("type")
.setPercolateQuery(termQuery("field1", "value1"))
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(0L));
assertAcked(client().admin().indices().prepareClose("test"));
assertAcked(client().admin().indices().prepareOpen("test"));
ensureGreen("test");
// Ensure that the query loading that happens at startup has permissions to load the percolator queries:
response = client().preparePercolate()
.setDocumentType("type")
.setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc("{}"))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertThat(response.getCount(), equalTo(1L));
assertThat(response.getMatches()[0].getId().string(), equalTo("1"));
}
public void testParentChild() {
// There are two parent/child impls:
// pre 2.0 parent/child uses the _uid and _parent fields
// 2.0 and beyond parent/child uses dedicated doc values join fields
// Both impls need to be tested with field level security, which is why the index version is randomized here.
Version version = randomFrom(Version.V_1_7_2, Version.CURRENT);
logger.info("Testing parent/child with field level security on an index created with version[{}]", version);
assertAcked(prepareCreate("test")
.setSettings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version))
.addMapping("parent")
.addMapping("child", "_parent", "type=parent"));
ensureGreen();
// index simple data
client().prepareIndex("test", "parent", "p1").setSource("{}").get();
client().prepareIndex("test", "child", "c1").setSource("field1", "red").setParent("p1").get();
client().prepareIndex("test", "child", "c2").setSource("field1", "yellow").setParent("p1").get();
refresh();
SearchResponse searchResponse = client().prepareSearch("test")
.setQuery(hasChildQuery("child", termQuery("field1", "yellow")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 1L);
assertThat(searchResponse.getHits().totalHits(), equalTo(1L));
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
searchResponse = client().prepareSearch("test")
.setQuery(hasChildQuery("child", termQuery("field1", "yellow")))
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user2", USERS_PASSWD))
.get();
assertHitCount(searchResponse, 0L);
}
public void testUpdateApiIsBlocked() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type", "field1", "type=string", "field2", "type=string")
);
client().prepareIndex("test", "type", "1")
.setSource("field1", "value1", "field2", "value1")
.setRefresh(true)
.get();
// With field level security enabled the update is not allowed:
try {
client().prepareUpdate("test", "type", "1").setDoc("field2", "value2")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
fail("update request should have been rejected because field level security is enabled");
} catch (ElasticsearchSecurityException e) {
assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));
assertThat(e.getMessage(), equalTo("Can't execute an update request if field level security is enabled"));
}
assertThat(client().prepareGet("test", "type", "1").get().getSource().get("field2").toString(), equalTo("value1"));
// With no field level security enabled the update is allowed:
client().prepareUpdate("test", "type", "1").setDoc("field2", "value2")
.get();
assertThat(client().prepareGet("test", "type", "1").get().getSource().get("field2").toString(), equalTo("value2"));
}
}


@@ -0,0 +1,83 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.integration;
import org.elasticsearch.action.admin.indices.alias.Alias;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.test.ShieldIntegTestCase;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.BASIC_AUTH_HEADER;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.hamcrest.Matchers.equalTo;
/**
*/
public class IndicesPermissionsWithAliasesWildcardsAndRegexsTests extends ShieldIntegTestCase {
protected static final SecuredString USERS_PASSWD = new SecuredString("change_me".toCharArray());
protected static final String USERS_PASSWD_HASHED = new String(Hasher.BCRYPT.hash(new SecuredString("change_me".toCharArray())));
@Override
protected String configUsers() {
return super.configUsers() +
"user1:" + USERS_PASSWD_HASHED + "\n";
}
@Override
protected String configUsersRoles() {
return super.configUsersRoles() +
"role1:user1\n";
}
@Override
protected String configRoles() {
return super.configRoles() +
"\nrole1:\n" +
" cluster: all\n" +
" indices:\n" +
" 't*':\n" +
" privileges: ALL\n" +
" fields: field1\n" +
" 'my_alias':\n" +
" privileges: ALL\n" +
" fields: field2\n" +
" '/an_.*/':\n" +
" privileges: ALL\n" +
" fields: field3\n";
}
public void testResolveWildcardsRegexs() throws Exception {
assertAcked(client().admin().indices().prepareCreate("test")
.addMapping("type1", "field1", "type=string", "field2", "type=string")
.addAlias(new Alias("my_alias"))
.addAlias(new Alias("an_alias"))
);
client().prepareIndex("test", "type1", "1").setSource("field1", "value1", "field2", "value2", "field3", "value3")
.setRefresh(true)
.get();
GetResponse getResponse = client().prepareGet("test", "type1", "1")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(getResponse.getSource().size(), equalTo(1));
assertThat((String) getResponse.getSource().get("field1"), equalTo("value1"));
getResponse = client().prepareGet("my_alias", "type1", "1")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(getResponse.getSource().size(), equalTo(1));
assertThat((String) getResponse.getSource().get("field2"), equalTo("value2"));
getResponse = client().prepareGet("an_alias", "type1", "1")
.putHeader(BASIC_AUTH_HEADER, basicAuthHeaderValue("user1", USERS_PASSWD))
.get();
assertThat(getResponse.getSource().size(), equalTo(1));
assertThat((String) getResponse.getSource().get("field3"), equalTo("value3"));
}
}


@@ -12,6 +1,7 @@ import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.action.support.ActionFilterChain;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.action.interceptor.RequestInterceptor;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AuthenticationService;
import org.elasticsearch.shield.authz.AuthorizationService;
@@ -21,6 +22,8 @@ import org.elasticsearch.test.ESTestCase;
import org.junit.Before;
import org.junit.Test;
import java.util.HashSet;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Matchers.eq;
import static org.mockito.Matchers.isA;
@@ -45,7 +48,7 @@ public class ShieldActionFilterTests extends ESTestCase {
cryptoService = mock(CryptoService.class);
auditTrail = mock(AuditTrail.class);
licenseEventsNotifier = new MockLicenseEventsNotifier();
- filter = new ShieldActionFilter(Settings.EMPTY, authcService, authzService, cryptoService, auditTrail, licenseEventsNotifier, new ShieldActionMapper());
+ filter = new ShieldActionFilter(Settings.EMPTY, authcService, authzService, cryptoService, auditTrail, licenseEventsNotifier, new ShieldActionMapper(), new HashSet<RequestInterceptor>());
}
@Test


@@ -22,6 +22,7 @@ import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.LocalTransportAddress;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.shield.ShieldPlugin;
@@ -144,10 +145,15 @@ public class IndexAuditTrailTests extends ShieldIntegTestCase {
ShieldSettingsSource cluster2SettingsSource = new ShieldSettingsSource(numNodes, useSSL, systemKey(), createTempDir(), Scope.SUITE) {
@Override
public Settings node(int nodeOrdinal) {
- return Settings.builder()
+ Settings.Builder builder = Settings.builder()
.put(super.node(nodeOrdinal))
- .put(ShieldPlugin.ENABLED_SETTING_NAME, useShield)
- .build();
+ .put(ShieldPlugin.ENABLED_SETTING_NAME, useShield);
+ // For tests we forcefully configure Shield's custom query cache because the test framework randomizes the query cache impl,
+ // but if shield is disabled then we don't need to forcefully set the query cache
+ if (useShield == false) {
+ builder.remove(IndexCacheModule.QUERY_CACHE_TYPE);
+ }
+ return builder.build();
}
};
cluster2 = new InternalTestCluster("network", randomLong(), createTempDir(), numNodes, numNodes, cluster2Name, cluster2SettingsSource, 0, false, SECOND_CLUSTER_NODE_PREFIX);


@@ -33,7 +33,6 @@ import org.junit.Test;
import static org.elasticsearch.test.ShieldTestsUtils.assertAuthenticationException;
import static org.elasticsearch.test.ShieldTestsUtils.assertAuthorizationException;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.*;
import static org.mockito.Mockito.*;
@@ -207,7 +206,7 @@ public class InternalAuthorizationServiceTests extends ESTestCase {
assertAuthorizationException(e, containsString("action [indices:a] is unauthorized for user [test user]"));
verify(auditTrail).accessDenied(user, "indices:a", request);
verify(clusterService, times(2)).state();
- verify(state, times(2)).metaData();
+ verify(state, times(3)).metaData();
}
}
@@ -228,7 +227,7 @@ public class InternalAuthorizationServiceTests extends ESTestCase {
assertAuthorizationException(e, containsString("action [" + IndicesAliasesAction.NAME + "] is unauthorized for user [test user]"));
verify(auditTrail).accessDenied(user, IndicesAliasesAction.NAME, request);
verify(clusterService).state();
- verify(state).metaData();
+ verify(state, times(2)).metaData();
}
}
@@ -247,7 +246,7 @@ public class InternalAuthorizationServiceTests extends ESTestCase {
verify(auditTrail).accessGranted(user, CreateIndexAction.NAME, request);
verifyNoMoreInteractions(auditTrail);
verify(clusterService).state();
- verify(state).metaData();
+ verify(state, times(2)).metaData();
}
@Test
@@ -304,7 +303,7 @@ public class InternalAuthorizationServiceTests extends ESTestCase {
assertAuthorizationException(e, containsString("action [indices:a] is unauthorized for user [" + anonymousService.anonymousUser().principal() + "]"));
verify(auditTrail).accessDenied(anonymousService.anonymousUser(), "indices:a", request);
verify(clusterService, times(2)).state();
- verify(state, times(2)).metaData();
+ verify(state, times(3)).metaData();
}
}
@@ -329,7 +328,8 @@
assertAuthenticationException(e, containsString("action [indices:a] requires authentication"));
verify(auditTrail).accessDenied(anonymousService.anonymousUser(), "indices:a", request);
verify(clusterService, times(2)).state();
- verify(state, times(2)).metaData();
+ verify(state, times(3)).metaData();
}
}
}


@@ -0,0 +1,770 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.index.TermsEnum.SeekStatus;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.TestUtil;
import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;
import org.elasticsearch.index.mapper.internal.SourceFieldMapper;
import org.elasticsearch.test.ESTestCase;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
/** Simple tests for the FieldSubsetReader filter reader */
public class FieldSubsetReaderTests extends ESTestCase {
/**
* test filtering two string fields
*/
public void testIndexed() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
Set<String> seenFields = new HashSet<>();
for (String field : segmentReader.fields()) {
seenFields.add(field);
}
assertEquals(Collections.singleton("fieldA"), seenFields);
assertNotNull(segmentReader.terms("fieldA"));
assertNull(segmentReader.terms("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (string)
*/
public void testStoredFieldsString() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", "testA"));
doc.add(new StoredField("fieldB", "testB"));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals("testA", d2.get("fieldA"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (binary)
*/
public void testStoredFieldsBinary() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", new BytesRef("testA")));
doc.add(new StoredField("fieldB", new BytesRef("testB")));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals(new BytesRef("testA"), d2.getBinaryValue("fieldA"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (int)
*/
public void testStoredFieldsInt() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", 1));
doc.add(new StoredField("fieldB", 2));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals(1, d2.getField("fieldA").numericValue());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (long)
*/
public void testStoredFieldsLong() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", 1L));
doc.add(new StoredField("fieldB", 2L));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals(1L, d2.getField("fieldA").numericValue());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (float)
*/
public void testStoredFieldsFloat() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", 1F));
doc.add(new StoredField("fieldB", 2F));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals(1F, d2.getField("fieldA").numericValue());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two stored fields (double)
*/
public void testStoredFieldsDouble() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StoredField("fieldA", 1D));
doc.add(new StoredField("fieldB", 2D));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals(1D, d2.getField("fieldA").numericValue());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two vector fields
*/
public void testVectors() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
FieldType ft = new FieldType(StringField.TYPE_NOT_STORED);
ft.setStoreTermVectors(true);
doc.add(new Field("fieldA", "testA", ft));
doc.add(new Field("fieldB", "testB", ft));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Fields vectors = ir.getTermVectors(0);
Set<String> seenFields = new HashSet<>();
for (String field : vectors) {
seenFields.add(field);
}
assertEquals(Collections.singleton("fieldA"), seenFields);
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two text fields
*/
public void testNorms() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new TextField("fieldA", "test", Field.Store.NO));
doc.add(new TextField("fieldB", "test", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
assertNotNull(segmentReader.getNormValues("fieldA"));
assertNull(segmentReader.getNormValues("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two numeric dv fields
*/
public void testNumericDocValues() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new NumericDocValuesField("fieldA", 1));
doc.add(new NumericDocValuesField("fieldB", 2));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
assertNotNull(segmentReader.getNumericDocValues("fieldA"));
assertEquals(1, segmentReader.getNumericDocValues("fieldA").get(0));
assertNull(segmentReader.getNumericDocValues("fieldB"));
// check docs with field
assertNotNull(segmentReader.getDocsWithField("fieldA"));
assertNull(segmentReader.getDocsWithField("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two binary dv fields
*/
public void testBinaryDocValues() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new BinaryDocValuesField("fieldA", new BytesRef("testA")));
doc.add(new BinaryDocValuesField("fieldB", new BytesRef("testB")));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
assertNotNull(segmentReader.getBinaryDocValues("fieldA"));
assertEquals(new BytesRef("testA"), segmentReader.getBinaryDocValues("fieldA").get(0));
assertNull(segmentReader.getBinaryDocValues("fieldB"));
// check docs with field
assertNotNull(segmentReader.getDocsWithField("fieldA"));
assertNull(segmentReader.getDocsWithField("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two sorted dv fields
*/
public void testSortedDocValues() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new SortedDocValuesField("fieldA", new BytesRef("testA")));
doc.add(new SortedDocValuesField("fieldB", new BytesRef("testB")));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
assertNotNull(segmentReader.getSortedDocValues("fieldA"));
assertEquals(new BytesRef("testA"), segmentReader.getSortedDocValues("fieldA").get(0));
assertNull(segmentReader.getSortedDocValues("fieldB"));
// check docs with field
assertNotNull(segmentReader.getDocsWithField("fieldA"));
assertNull(segmentReader.getDocsWithField("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two sortedset dv fields
*/
public void testSortedSetDocValues() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new SortedSetDocValuesField("fieldA", new BytesRef("testA")));
doc.add(new SortedSetDocValuesField("fieldB", new BytesRef("testB")));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
SortedSetDocValues dv = segmentReader.getSortedSetDocValues("fieldA");
assertNotNull(dv);
dv.setDocument(0);
assertEquals(0, dv.nextOrd());
assertEquals(SortedSetDocValues.NO_MORE_ORDS, dv.nextOrd());
assertEquals(new BytesRef("testA"), dv.lookupOrd(0));
assertNull(segmentReader.getSortedSetDocValues("fieldB"));
// check docs with field
assertNotNull(segmentReader.getDocsWithField("fieldA"));
assertNull(segmentReader.getDocsWithField("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering two sortednumeric dv fields
*/
public void testSortedNumericDocValues() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new SortedNumericDocValuesField("fieldA", 1));
doc.add(new SortedNumericDocValuesField("fieldB", 2));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
SortedNumericDocValues dv = segmentReader.getSortedNumericDocValues("fieldA");
assertNotNull(dv);
dv.setDocument(0);
assertEquals(1, dv.count());
assertEquals(1, dv.valueAt(0));
assertNull(segmentReader.getSortedNumericDocValues("fieldB"));
// check docs with field
assertNotNull(segmentReader.getDocsWithField("fieldA"));
assertNull(segmentReader.getDocsWithField("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test we have correct fieldinfos metadata
*/
public void testFieldInfos() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
FieldInfos infos = segmentReader.getFieldInfos();
assertEquals(1, infos.size());
assertNotNull(infos.fieldInfo("fieldA"));
assertNull(infos.fieldInfo("fieldB"));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test special handling for _source field.
*/
public void testSourceFiltering() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "testA", Field.Store.NO));
doc.add(new StringField("fieldB", "testB", Field.Store.NO));
byte bytes[] = "{\"fieldA\":\"testA\", \"fieldB\":\"testB\"}".getBytes(StandardCharsets.UTF_8);
doc.add(new StoredField(SourceFieldMapper.NAME, bytes, 0, bytes.length));
iw.addDocument(doc);
// open reader
Set<String> fields = new HashSet<>();
fields.add("fieldA");
fields.add(SourceFieldMapper.NAME);
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
Document d2 = ir.document(0);
assertEquals(1, d2.getFields().size());
assertEquals("{\"fieldA\":\"testA\"}", d2.getBinaryValue(SourceFieldMapper.NAME).utf8ToString());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test special handling for _field_names field.
*/
public void testFieldNames() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldA", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldB", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = new HashSet<>();
fields.add("fieldA");
fields.add(FieldNamesFieldMapper.NAME);
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
Terms terms = segmentReader.terms(FieldNamesFieldMapper.NAME);
TermsEnum termsEnum = terms.iterator();
assertEquals(new BytesRef("fieldA"), termsEnum.next());
assertNull(termsEnum.next());
// seekExact
termsEnum = terms.iterator();
assertTrue(termsEnum.seekExact(new BytesRef("fieldA")));
assertFalse(termsEnum.seekExact(new BytesRef("fieldB")));
// seekCeil
termsEnum = terms.iterator();
assertEquals(SeekStatus.FOUND, termsEnum.seekCeil(new BytesRef("fieldA")));
assertEquals(SeekStatus.NOT_FOUND, termsEnum.seekCeil(new BytesRef("field0000")));
assertEquals(new BytesRef("fieldA"), termsEnum.term());
assertEquals(SeekStatus.END, termsEnum.seekCeil(new BytesRef("fieldAAA")));
assertEquals(SeekStatus.END, termsEnum.seekCeil(new BytesRef("fieldB")));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test special handling for _field_names field (three fields, to exercise termsenum better)
*/
public void testFieldNamesThreeFields() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 3 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
doc.add(new StringField("fieldC", "test", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldA", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldB", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldC", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = new HashSet<>();
fields.add("fieldA");
fields.add("fieldC");
fields.add(FieldNamesFieldMapper.NAME);
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only two fields
LeafReader segmentReader = ir.leaves().get(0).reader();
Terms terms = segmentReader.terms(FieldNamesFieldMapper.NAME);
TermsEnum termsEnum = terms.iterator();
assertEquals(new BytesRef("fieldA"), termsEnum.next());
assertEquals(new BytesRef("fieldC"), termsEnum.next());
assertNull(termsEnum.next());
// seekExact
termsEnum = terms.iterator();
assertTrue(termsEnum.seekExact(new BytesRef("fieldA")));
assertFalse(termsEnum.seekExact(new BytesRef("fieldB")));
assertTrue(termsEnum.seekExact(new BytesRef("fieldC")));
// seekCeil
termsEnum = terms.iterator();
assertEquals(SeekStatus.FOUND, termsEnum.seekCeil(new BytesRef("fieldA")));
assertEquals(SeekStatus.NOT_FOUND, termsEnum.seekCeil(new BytesRef("fieldB")));
assertEquals(new BytesRef("fieldC"), termsEnum.term());
assertEquals(SeekStatus.END, termsEnum.seekCeil(new BytesRef("fieldD")));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test _field_names where a field is permitted, but doesn't exist in the segment.
*/
public void testFieldNamesMissing() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldA", Field.Store.NO));
doc.add(new StringField(FieldNamesFieldMapper.NAME, "fieldB", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = new HashSet<>();
fields.add("fieldA");
fields.add("fieldC");
fields.add(FieldNamesFieldMapper.NAME);
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
Terms terms = segmentReader.terms(FieldNamesFieldMapper.NAME);
// seekExact
TermsEnum termsEnum = terms.iterator();
assertFalse(termsEnum.seekExact(new BytesRef("fieldC")));
// seekCeil
termsEnum = terms.iterator();
assertEquals(SeekStatus.END, termsEnum.seekCeil(new BytesRef("fieldC")));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test where _field_names does not exist
*/
public void testFieldNamesOldIndex() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
doc.add(new StringField("fieldA", "test", Field.Store.NO));
doc.add(new StringField("fieldB", "test", Field.Store.NO));
iw.addDocument(doc);
// open reader
Set<String> fields = new HashSet<>();
fields.add("fieldA");
fields.add(FieldNamesFieldMapper.NAME);
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see only one field
LeafReader segmentReader = ir.leaves().get(0).reader();
assertNull(segmentReader.terms(FieldNamesFieldMapper.NAME));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/** test that core cache key (needed for NRT) is working */
public void testCoreCacheKey() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
iwc.setMaxBufferedDocs(100);
iwc.setMergePolicy(NoMergePolicy.INSTANCE);
IndexWriter iw = new IndexWriter(dir, iwc);
// add two docs, id:0 and id:1
Document doc = new Document();
Field idField = new StringField("id", "", Field.Store.NO);
doc.add(idField);
idField.setStringValue("0");
iw.addDocument(doc);
idField.setStringValue("1");
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("id");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
assertEquals(2, ir.numDocs());
assertEquals(1, ir.leaves().size());
// delete id:0 and reopen
iw.deleteDocuments(new Term("id", "0"));
DirectoryReader ir2 = DirectoryReader.openIfChanged(ir);
// we should have the same cache key as before
assertEquals(1, ir2.numDocs());
assertEquals(1, ir2.leaves().size());
assertSame(ir.leaves().get(0).reader().getCoreCacheKey(), ir2.leaves().get(0).reader().getCoreCacheKey());
// this is kind of stupid, but for now it's here
assertNotSame(ir.leaves().get(0).reader().getCombinedCoreAndDeletesKey(), ir2.leaves().get(0).reader().getCombinedCoreAndDeletesKey());
TestUtil.checkReader(ir);
IOUtils.close(ir, ir2, iw, dir);
}
/**
* test filtering away the only field that has term vectors
*/
public void testFilterAwayAllVectors() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
// add document with 2 fields
Document doc = new Document();
FieldType ft = new FieldType(StringField.TYPE_NOT_STORED);
ft.setStoreTermVectors(true);
doc.add(new Field("fieldA", "testA", ft));
doc.add(new StringField("fieldB", "testB", Field.Store.NO)); // no vectors
iw.addDocument(doc);
// open reader
Set<String> fields = Collections.singleton("fieldB");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// sees no term vectors
assertNull(ir.getTermVectors(0));
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
/**
* test filtering an index with no fields
*/
public void testEmpty() throws Exception {
Directory dir = newDirectory();
IndexWriterConfig iwc = new IndexWriterConfig(null);
IndexWriter iw = new IndexWriter(dir, iwc);
iw.addDocument(new Document());
// open reader
Set<String> fields = Collections.singleton("fieldA");
DirectoryReader ir = FieldSubsetReader.wrap(DirectoryReader.open(iw, true), fields);
// see no fields
LeafReader segmentReader = ir.leaves().get(0).reader();
Fields f = segmentReader.fields();
assertNotNull(f); // 5.x contract
Set<String> seenFields = new HashSet<>();
for (String field : segmentReader.fields()) {
seenFields.add(field);
}
assertEquals(0, seenFields.size());
// see no vectors
assertNull(segmentReader.getTermVectors(0));
// see no stored fields
Document document = segmentReader.document(0);
assertEquals(0, document.getFields().size());
TestUtil.checkReader(ir);
IOUtils.close(ir, iw, dir);
}
}
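The seekExact/seekCeil assertions above all follow the same contract: after filtering, the enum behaves as if the hidden terms never existed. As a stdlib-only illustration (a `TreeSet` standing in for the filtered term dictionary — this is a hypothetical sketch, not the Lucene `TermsEnum` API), the seekCeil outcomes exercised by these tests look like:

```java
import java.util.Arrays;
import java.util.TreeSet;

public class SeekCeilSketch {
    enum SeekStatus { FOUND, NOT_FOUND, END }

    // Hypothetical stand-in: TreeSet.ceiling() returns the smallest element
    // >= target, mirroring the positioning behavior the tests assert on.
    static SeekStatus seekCeil(TreeSet<String> visibleTerms, String target) {
        String ceil = visibleTerms.ceiling(target);
        if (ceil == null) {
            return SeekStatus.END;        // target sorts past the last visible term
        }
        return ceil.equals(target) ? SeekStatus.FOUND : SeekStatus.NOT_FOUND;
    }

    public static void main(String[] args) {
        // same shape as testFieldNamesThreeFields: fieldB is filtered away
        TreeSet<String> visible = new TreeSet<>(Arrays.asList("fieldA", "fieldC"));
        System.out.println(seekCeil(visible, "fieldA")); // FOUND
        System.out.println(seekCeil(visible, "fieldB")); // NOT_FOUND, positions on fieldC
        System.out.println(seekCeil(visible, "fieldD")); // END
    }
}
```

The key point the tests pin down is that a seek to a hidden term (`fieldB`) must not find it, and must position on the next visible term instead.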

View File

@@ -0,0 +1,75 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authz.accesscontrol;
import com.google.common.collect.Sets;
import org.elasticsearch.Version;
import org.elasticsearch.action.search.SearchAction;
import org.elasticsearch.cluster.metadata.AliasMetaData;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.authz.Permission;
import org.elasticsearch.shield.authz.Privilege;
import org.elasticsearch.test.ESTestCase;
import java.util.Arrays;
import java.util.List;
import static org.hamcrest.Matchers.*;
public class IndicesPermissionTests extends ESTestCase {
public void testAuthorize() {
IndexMetaData.Builder imbBuilder = IndexMetaData.builder("_index")
.settings(Settings.builder()
.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)
.put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)
)
.putAlias(AliasMetaData.builder("_alias"));
MetaData md = MetaData.builder().put(imbBuilder).build();
// basics:
BytesReference query = new BytesArray("{}");
List<String> fields = Arrays.asList("_field");
Permission.Global.Role role = Permission.Global.Role.builder("_role").add(fields, query, Privilege.Index.ALL, "_index").build();
IndicesAccessControl permissions = role.authorize(SearchAction.NAME, Sets.newHashSet("_index"), md);
assertThat(permissions.getIndexPermissions("_index"), notNullValue());
assertThat(permissions.getIndexPermissions("_index").getFields().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getFields().iterator().next(), equalTo("_field"));
assertThat(permissions.getIndexPermissions("_index").getQueries().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getQueries().iterator().next(), equalTo(query));
// no document level security:
role = Permission.Global.Role.builder("_role").add(fields, null, Privilege.Index.ALL, "_index").build();
permissions = role.authorize(SearchAction.NAME, Sets.newHashSet("_index"), md);
assertThat(permissions.getIndexPermissions("_index"), notNullValue());
assertThat(permissions.getIndexPermissions("_index").getFields().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getFields().iterator().next(), equalTo("_field"));
assertThat(permissions.getIndexPermissions("_index").getQueries(), nullValue());
// no field level security:
role = Permission.Global.Role.builder("_role").add(null, query, Privilege.Index.ALL, "_index").build();
permissions = role.authorize(SearchAction.NAME, Sets.newHashSet("_index"), md);
assertThat(permissions.getIndexPermissions("_index"), notNullValue());
assertThat(permissions.getIndexPermissions("_index").getFields(), nullValue());
assertThat(permissions.getIndexPermissions("_index").getQueries().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getQueries().iterator().next(), equalTo(query));
// index group associated with an alias:
role = Permission.Global.Role.builder("_role").add(fields, query, Privilege.Index.ALL, "_alias").build();
permissions = role.authorize(SearchAction.NAME, Sets.newHashSet("_alias"), md);
assertThat(permissions.getIndexPermissions("_index"), notNullValue());
assertThat(permissions.getIndexPermissions("_index").getFields().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getFields().iterator().next(), equalTo("_field"));
assertThat(permissions.getIndexPermissions("_index").getQueries().size(), equalTo(1));
assertThat(permissions.getIndexPermissions("_index").getQueries().iterator().next(), equalTo(query));
}
}
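The permission tests above distinguish three cases: fields plus query, fields only, and query only (`getFields()` returns null when field level security is off). The commit description spells out the field-visibility rule — the `fields` list is inclusive, meta fields are always added, and `_all` is visible only if listed explicitly. A stdlib-only sketch of that rule (the helper name and meta-field list are illustrative, not the actual Shield implementation):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FieldPermissionSketch {
    // Illustrative subset of meta fields; the real set is defined by Shield.
    private static final Set<String> META_FIELDS =
            new HashSet<>(Arrays.asList("_uid", "_type", "_source", "_ttl", "_field_names"));

    // Hypothetical helper: is `field` visible under a role's `fields` list?
    static boolean isVisible(Set<String> roleFields, String field) {
        if (roleFields == null) {
            return true; // no `fields` in the role: field level security disabled
        }
        if (roleFields.contains(field)) {
            return true; // explicitly listed, including _all if the role names it
        }
        // meta fields are always included; _all is deliberately NOT treated as one,
        // since it aggregates data from every other field
        return META_FIELDS.contains(field);
    }

    public static void main(String[] args) {
        Set<String> role = new HashSet<>(Arrays.asList("issue_id", "customer_email"));
        System.out.println(isVisible(role, "issue_id"));  // true: listed in the role
        System.out.println(isVisible(role, "_source"));   // true: meta field
        System.out.println(isVisible(role, "_all"));      // false: not listed
        System.out.println(isVisible(null, "anything"));  // true: FLS disabled
    }
}
```

This mirrors why `getFields()` being null in the test means "all fields accessible" rather than "no fields accessible".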

View File

@@ -44,7 +44,7 @@ public class DefaultIndicesResolverTests extends ESTestCase {
private User user;
private User userNoIndices;
private MetaData metaData;
-private DefaultIndicesResolver defaultIndicesResolver;
+private DefaultIndicesAndAliasesResolver defaultIndicesResolver;
@Before
public void setup() {
@@ -82,7 +82,7 @@ public class DefaultIndicesResolverTests extends ESTestCase {
when(authzService.authorizedIndicesAndAliases(userNoIndices, SearchAction.NAME)).thenReturn(ImmutableList.<String>of());
when(authzService.authorizedIndicesAndAliases(userNoIndices, MultiSearchAction.NAME)).thenReturn(ImmutableList.<String>of());
-defaultIndicesResolver = new DefaultIndicesResolver(authzService);
+defaultIndicesResolver = new DefaultIndicesAndAliasesResolver(authzService);
}
@Test

View File

@@ -24,7 +24,7 @@ import java.util.List;
import static org.elasticsearch.test.ShieldTestsUtils.assertAuthorizationException;
import static org.hamcrest.CoreMatchers.*;
-public class IndicesResolverIntegrationTests extends ShieldIntegTestCase {
+public class IndicesAndAliasesResolverIntegrationTests extends ShieldIntegTestCase {
@Override
protected String configRoles() {

View File

@@ -5,7 +5,6 @@
*/
package org.elasticsearch.shield.transport;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.inject.AbstractModule;

View File

@@ -9,6 +9,7 @@ import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.client.support.Headers;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.license.plugin.LicensePlugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.shield.ShieldPlugin;
@@ -120,6 +121,9 @@ public class ShieldSettingsSource extends ClusterDiscoveryConfiguration.UnicastZ
.put("shield.authc.realms.esusers.files.users", writeFile(folder, "users", configUsers()))
.put("shield.authc.realms.esusers.files.users_roles", writeFile(folder, "users_roles", configUsersRoles()))
.put("shield.authz.store.files.roles", writeFile(folder, "roles.yml", configRoles()))
+// Test framework sometimes randomly selects the 'index' or 'none' query cache and that makes the
+// validation in ShieldPlugin fail.
+.put(IndexCacheModule.QUERY_CACHE_TYPE, ShieldPlugin.OPT_OUT_QUERY_CACHE)
.put(getNodeSSLSettings());
setUser(builder, nodeClientUsername(), nodeClientPassword());

View File

@@ -20,10 +20,10 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.Callback;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
+import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.license.plugin.LicensePlugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.PluginsService;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.shield.ShieldPlugin;
@@ -620,6 +620,9 @@ public abstract class AbstractWatcherIntegrationTests extends ESIntegTestCase {
.put("shield.system_key.file", writeFile(folder, "system_key.yml", systemKey))
.put("shield.authc.sign_user_header", false)
.put("shield.audit.enabled", auditLogsEnabled)
+// Test framework sometimes randomly selects the 'index' or 'none' query cache and that makes the
+// validation in ShieldPlugin fail. Shield can only run with this query cache impl
+.put(IndexCacheModule.QUERY_CACHE_TYPE, ShieldPlugin.OPT_OUT_QUERY_CACHE)
.build();
} catch (IOException ex) {
throw new RuntimeException("failed to build settings for shield", ex);