Merge branch 'master' into ccr

* master:
  Adjust BWC version on mapping version
  Token API supports the client_credentials grant (#33106)
  Build: forked compiler max memory matches jvmArgs (#33138)
  Introduce mapping version to index metadata (#33147)
  SQL: Enable aggregations to create a separate bucket for missing values (#32832)
  Fix grammar in contributing docs
  SECURITY: Fix Compile Error in ReservedRealmTests (#33166)
  APM server monitoring (#32515)
  Support only string `format` in date, root object & date range (#28117)
  [Rollup] Move toBuilders() methods out of rollup config objects (#32585)
  Fix forbiddenapis on java 11  (#33116)
  Apply publishing to genreate pom (#33094)
  Have circuit breaker succeed on unknown mem usage
  Do not lose default mapper on metadata updates (#33153)
  Fix a mappings update test (#33146)
  Reload Secure Settings REST specs & docs (#32990)
  Refactor CachingUsernamePassword realm (#32646)
Jason Tedor 2018-08-27 13:39:38 -04:00
commit 0e5d42ca38
GPG Key ID: FA89F05560F16BC5
76 changed files with 1598 additions and 463 deletions


@ -95,7 +95,7 @@ Contributing to the Elasticsearch codebase
JDK 10 is required to build Elasticsearch. You must have a JDK 10 installation
with the environment variable `JAVA_HOME` referencing the path to Java home for
your JDK 10 installation. By default, tests use the same runtime as `JAVA_HOME`.
However, since Elasticsearch, supports JDK 8 the build supports compiling with
However, since Elasticsearch supports JDK 8, the build supports compiling with
JDK 10 and testing on a JDK 8 runtime; to do this, set `RUNTIME_JAVA_HOME`
pointing to the Java home of a JDK 8 installation. Note that this mechanism can
be used to test against other JDKs as well, this is not only limited to JDK 8.


@ -601,7 +601,6 @@ class BuildPlugin implements Plugin<Project> {
} else {
options.fork = true
options.forkOptions.javaHome = compilerJavaHomeFile
options.forkOptions.memoryMaximumSize = "512m"
}
if (targetCompatibilityVersion == JavaVersion.VERSION_1_8) {
// compile with compact 3 profile by default


@ -23,6 +23,8 @@ import org.gradle.api.Action;
import org.gradle.api.DefaultTask;
import org.gradle.api.JavaVersion;
import org.gradle.api.file.FileCollection;
import org.gradle.api.logging.Logger;
import org.gradle.api.logging.Logging;
import org.gradle.api.tasks.Input;
import org.gradle.api.tasks.InputFiles;
import org.gradle.api.tasks.OutputFile;
@ -41,6 +43,7 @@ import java.util.Set;
public class ForbiddenApisCliTask extends DefaultTask {
private final Logger logger = Logging.getLogger(ForbiddenApisCliTask.class);
private FileCollection signaturesFiles;
private List<String> signatures = new ArrayList<>();
private Set<String> bundledSignatures = new LinkedHashSet<>();
@ -49,12 +52,21 @@ public class ForbiddenApisCliTask extends DefaultTask {
private FileCollection classesDirs;
private Action<JavaExecSpec> execAction;
@Input
public JavaVersion getTargetCompatibility() {
return targetCompatibility;
}
public void setTargetCompatibility(JavaVersion targetCompatibility) {
this.targetCompatibility = targetCompatibility;
if (targetCompatibility.compareTo(JavaVersion.VERSION_1_10) > 0) {
logger.warn(
"Target compatibility is set to {} but forbiddenapis only supports up to 10. Will cap at 10.",
targetCompatibility
);
this.targetCompatibility = JavaVersion.VERSION_1_10;
} else {
this.targetCompatibility = targetCompatibility;
}
}
public Action<JavaExecSpec> getExecAction() {
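The capping rule above reduces to a single comparison on Gradle's `JavaVersion` enum. A minimal standalone sketch of the same rule (plain Java against the Gradle API; the class name is hypothetical, not the task itself):

[source,java]
--------------------------------------------------
import org.gradle.api.JavaVersion;

class ForbiddenApisTargetCap {
    // forbiddenapis understands class files only up to Java 10, so any newer
    // target is capped (the real task above also logs a warning when it does this)
    static JavaVersion cap(final JavaVersion requested) {
        return requested.compareTo(JavaVersion.VERSION_1_10) > 0
                ? JavaVersion.VERSION_1_10
                : requested;
    }
}
--------------------------------------------------

For example, `cap(JavaVersion.VERSION_11)` returns `JavaVersion.VERSION_1_10`, while `cap(JavaVersion.VERSION_1_8)` comes back unchanged.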


@ -685,6 +685,7 @@ public class RestHighLevelClientTests extends ESTestCase {
"nodes.stats",
"nodes.hot_threads",
"nodes.usage",
"nodes.reload_secure_settings",
"search_shards",
};
Set<String> deprecatedMethods = new HashSet<>();


@ -0,0 +1,55 @@
[[cluster-nodes-reload-secure-settings]]
== Nodes Reload Secure Settings
The cluster nodes reload secure settings API is used to re-read the
local node's encrypted keystore. Specifically, it prompts the keystore to be
decrypted and read across the cluster. The keystore's plain content is
used to reinitialize all compatible plugins. A compatible plugin can be
reinitialized without restarting the node. The operation is
complete when all compatible plugins have finished reinitializing. Subsequently,
the keystore is closed and any changes to it will not be reflected on the node.
[source,js]
--------------------------------------------------
POST _nodes/reload_secure_settings
POST _nodes/nodeId1,nodeId2/reload_secure_settings
--------------------------------------------------
// CONSOLE
// TEST[setup:node]
// TEST[s/nodeId1,nodeId2/*/]
The first command reloads the keystore on each node. The second selectively
targets `nodeId1` and `nodeId2`. The node selection options are
detailed <<cluster-nodes,here>>.

Note: It is an error for secure settings to be inconsistent across the cluster
nodes, yet this consistency is not enforced in any way. Hence, reloading only
specific nodes is not standard practice; it is justified mainly when retrying a
reload operation that failed on some of the nodes, for example
`POST _nodes/nodeId1/reload_secure_settings` to retry a single node.
[float]
[[rest-reload-secure-settings]]
==== REST Reload Secure Settings Response
The response contains a `nodes` object, which is a map keyed by
node id. Each value has the node `name` and an optional `reload_exception`
field. The `reload_exception` field is a serialization of the exception
that was thrown during the reload process, if any.
[source,js]
--------------------------------------------------
{
"_nodes": {
"total": 1,
"successful": 1,
"failed": 0
},
"cluster_name": "my_cluster",
"nodes": {
"pQHNt5rXTTWNvUgOrdynKg": {
"name": "node-0"
}
}
}
--------------------------------------------------
// TESTRESPONSE[s/"my_cluster"/$body.cluster_name/]
// TESTRESPONSE[s/"pQHNt5rXTTWNvUgOrdynKg"/\$node_name/]


@ -4,7 +4,7 @@
== elasticsearch-setup-passwords
The `elasticsearch-setup-passwords` command sets the passwords for the built-in
`elastic`, `kibana`, `logstash_system`, and `beats_system` users.
`elastic`, `kibana`, `logstash_system`, `beats_system`, and `apm_system` users.
[float]
=== Synopsis


@ -105,12 +105,12 @@ route monitoring data:
[options="header"]
|=======================
| Template | Purpose
| `.monitoring-alerts` | All cluster alerts for monitoring data.
| `.monitoring-beats` | All Beats monitoring data.
| `.monitoring-es` | All {es} monitoring data.
| `.monitoring-kibana` | All {kib} monitoring data.
| `.monitoring-logstash` | All Logstash monitoring data.
| Template | Purpose
| `.monitoring-alerts` | All cluster alerts for monitoring data.
| `.monitoring-beats` | All Beats monitoring data.
| `.monitoring-es` | All {es} monitoring data.
| `.monitoring-kibana` | All {kib} monitoring data.
| `.monitoring-logstash` | All Logstash monitoring data.
|=======================
The templates are ordinary {es} templates that control the default settings and


@ -1,2 +1,3 @@
org.gradle.daemon=false
org.gradle.jvmargs=-Xmx2g
options.forkOptions.memoryMaximumSize=2g


@ -0,0 +1,23 @@
{
"nodes.reload_secure_settings": {
"documentation": "http://www.elastic.co/guide/en/elasticsearch/reference/master/cluster-nodes-reload-secure-settings.html",
"methods": ["POST"],
"url": {
"path": "/_nodes/reload_secure_settings",
"paths": ["/_nodes/reload_secure_settings", "/_nodes/{node_id}/reload_secure_settings"],
"parts": {
"node_id": {
"type": "list",
"description": "A comma-separated list of node IDs to span the reload/reinit call. Should stay empty because reloading usually involves all cluster nodes."
}
},
"params": {
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
}
}
},
"body": null
}
}


@ -0,0 +1,8 @@
---
"node_reload_secure_settings test":
- do:
nodes.reload_secure_settings: {}
- is_true: nodes
- is_true: cluster_name


@ -284,7 +284,7 @@ public class ClusterState implements ToXContentFragment, Diffable<ClusterState>
final String TAB = " ";
for (IndexMetaData indexMetaData : metaData) {
sb.append(TAB).append(indexMetaData.getIndex());
sb.append(": v[").append(indexMetaData.getVersion()).append("]\n");
sb.append(": v[").append(indexMetaData.getVersion()).append("], mv[").append(indexMetaData.getMappingVersion()).append("]\n");
for (int shard = 0; shard < indexMetaData.getNumberOfShards(); shard++) {
sb.append(TAB).append(TAB).append(shard).append(": ");
sb.append("p_term [").append(indexMetaData.primaryTerm(shard)).append("], ");


@ -24,6 +24,7 @@ import com.carrotsearch.hppc.cursors.IntObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.Assertions;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.indices.rollover.RolloverInfo;
import org.elasticsearch.action.support.ActiveShardCount;
@ -291,6 +292,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
public static final String KEY_IN_SYNC_ALLOCATIONS = "in_sync_allocations";
static final String KEY_VERSION = "version";
static final String KEY_MAPPING_VERSION = "mapping_version";
static final String KEY_ROUTING_NUM_SHARDS = "routing_num_shards";
static final String KEY_SETTINGS = "settings";
static final String KEY_STATE = "state";
@ -309,6 +311,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
private final Index index;
private final long version;
private final long mappingVersion;
private final long[] primaryTerms;
private final State state;
@ -336,7 +341,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
private final ActiveShardCount waitForActiveShards;
private final ImmutableOpenMap<String, RolloverInfo> rolloverInfos;
private IndexMetaData(Index index, long version, long[] primaryTerms, State state, int numberOfShards, int numberOfReplicas, Settings settings,
private IndexMetaData(Index index, long version, long mappingVersion, long[] primaryTerms, State state, int numberOfShards, int numberOfReplicas, Settings settings,
ImmutableOpenMap<String, MappingMetaData> mappings, ImmutableOpenMap<String, AliasMetaData> aliases,
ImmutableOpenMap<String, Custom> customs, ImmutableOpenIntMap<Set<String>> inSyncAllocationIds,
DiscoveryNodeFilters requireFilters, DiscoveryNodeFilters initialRecoveryFilters, DiscoveryNodeFilters includeFilters, DiscoveryNodeFilters excludeFilters,
@ -345,6 +350,8 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
this.index = index;
this.version = version;
assert mappingVersion >= 0 : mappingVersion;
this.mappingVersion = mappingVersion;
this.primaryTerms = primaryTerms;
assert primaryTerms.length == numberOfShards;
this.state = state;
@ -394,6 +401,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
return this.version;
}
public long getMappingVersion() {
return mappingVersion;
}
/**
* The term of the current selected primary. This is a non-negative number incremented when
@ -644,6 +654,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
private final String index;
private final int routingNumShards;
private final long version;
private final long mappingVersion;
private final long[] primaryTerms;
private final State state;
private final Settings settings;
@ -656,6 +667,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
IndexMetaDataDiff(IndexMetaData before, IndexMetaData after) {
index = after.index.getName();
version = after.version;
mappingVersion = after.mappingVersion;
routingNumShards = after.routingNumShards;
state = after.state;
settings = after.settings;
@ -672,6 +684,11 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
index = in.readString();
routingNumShards = in.readInt();
version = in.readLong();
if (in.getVersion().onOrAfter(Version.V_6_5_0)) {
mappingVersion = in.readVLong();
} else {
mappingVersion = 1;
}
state = State.fromId(in.readByte());
settings = Settings.readSettingsFromStream(in);
primaryTerms = in.readVLongArray();
@ -707,6 +724,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
out.writeString(index);
out.writeInt(routingNumShards);
out.writeLong(version);
if (out.getVersion().onOrAfter(Version.V_6_5_0)) {
out.writeVLong(mappingVersion);
}
out.writeByte(state.id);
Settings.writeSettingsToStream(settings, out);
out.writeVLongArray(primaryTerms);
@ -723,6 +743,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
public IndexMetaData apply(IndexMetaData part) {
Builder builder = builder(index);
builder.version(version);
builder.mappingVersion(mappingVersion);
builder.setRoutingNumShards(routingNumShards);
builder.state(state);
builder.settings(settings);
@ -739,6 +760,11 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
public static IndexMetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder(in.readString());
builder.version(in.readLong());
if (in.getVersion().onOrAfter(Version.V_6_5_0)) {
builder.mappingVersion(in.readVLong());
} else {
builder.mappingVersion(1);
}
builder.setRoutingNumShards(in.readInt());
builder.state(State.fromId(in.readByte()));
builder.settings(readSettingsFromStream(in));
@ -778,6 +804,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
public void writeTo(StreamOutput out) throws IOException {
out.writeString(index.getName()); // uuid will come as part of settings
out.writeLong(version);
if (out.getVersion().onOrAfter(Version.V_6_5_0)) {
out.writeVLong(mappingVersion);
}
out.writeInt(routingNumShards);
out.writeByte(state.id());
writeSettingsToStream(settings, out);
@ -821,6 +850,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
private String index;
private State state = State.OPEN;
private long version = 1;
private long mappingVersion = 1;
private long[] primaryTerms = null;
private Settings settings = Settings.Builder.EMPTY_SETTINGS;
private final ImmutableOpenMap.Builder<String, MappingMetaData> mappings;
@ -843,6 +873,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
this.index = indexMetaData.getIndex().getName();
this.state = indexMetaData.state;
this.version = indexMetaData.version;
this.mappingVersion = indexMetaData.mappingVersion;
this.settings = indexMetaData.getSettings();
this.primaryTerms = indexMetaData.primaryTerms.clone();
this.mappings = ImmutableOpenMap.builder(indexMetaData.mappings);
@ -1009,6 +1040,15 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
return this;
}
public long mappingVersion() {
return mappingVersion;
}
public Builder mappingVersion(final long mappingVersion) {
this.mappingVersion = mappingVersion;
return this;
}
/**
* returns the primary term for the given shard.
* See {@link IndexMetaData#primaryTerm(int)} for more information.
@ -1136,7 +1176,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
final String uuid = settings.get(SETTING_INDEX_UUID, INDEX_UUID_NA_VALUE);
return new IndexMetaData(new Index(index, uuid), version, primaryTerms, state, numberOfShards, numberOfReplicas, tmpSettings, mappings.build(),
return new IndexMetaData(new Index(index, uuid), version, mappingVersion, primaryTerms, state, numberOfShards, numberOfReplicas, tmpSettings, mappings.build(),
tmpAliases.build(), customs.build(), filledInSyncAllocationIds.build(), requireFilters, initialRecoveryFilters, includeFilters, excludeFilters,
indexCreatedVersion, indexUpgradedVersion, getRoutingNumShards(), routingPartitionSize, waitForActiveShards, rolloverInfos.build());
}
@ -1145,6 +1185,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
builder.startObject(indexMetaData.getIndex().getName());
builder.field(KEY_VERSION, indexMetaData.getVersion());
builder.field(KEY_MAPPING_VERSION, indexMetaData.getMappingVersion());
builder.field(KEY_ROUTING_NUM_SHARDS, indexMetaData.getRoutingNumShards());
builder.field(KEY_STATE, indexMetaData.getState().toString().toLowerCase(Locale.ENGLISH));
@ -1218,6 +1259,7 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
if (token != XContentParser.Token.START_OBJECT) {
throw new IllegalArgumentException("expected object but got a " + token);
}
boolean mappingVersion = false;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
@ -1316,6 +1358,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
builder.state(State.fromString(parser.text()));
} else if (KEY_VERSION.equals(currentFieldName)) {
builder.version(parser.longValue());
} else if (KEY_MAPPING_VERSION.equals(currentFieldName)) {
mappingVersion = true;
builder.mappingVersion(parser.longValue());
} else if (KEY_ROUTING_NUM_SHARDS.equals(currentFieldName)) {
builder.setRoutingNumShards(parser.intValue());
} else {
@ -1325,6 +1370,9 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContentFragmen
throw new IllegalArgumentException("Unexpected token " + token);
}
}
if (Assertions.ENABLED && Version.indexCreated(builder.settings).onOrAfter(Version.V_6_5_0)) {
assert mappingVersion : "mapping version should be present for indices created on or after 6.5.0";
}
return builder.build();
}
}


@ -287,6 +287,7 @@ public class MetaDataMappingService extends AbstractComponent {
MetaData.Builder builder = MetaData.builder(metaData);
boolean updated = false;
for (IndexMetaData indexMetaData : updateList) {
boolean updatedMapping = false;
// do the actual merge here on the master, and update the mapping source
// we use the exact same indexService and metadata we used to validate above here to actually apply the update
final Index index = indexMetaData.getIndex();
@ -303,7 +304,7 @@ public class MetaDataMappingService extends AbstractComponent {
if (existingSource.equals(updatedSource)) {
// same source, no changes, ignore it
} else {
updated = true;
updatedMapping = true;
// use the merged mapping source
if (logger.isDebugEnabled()) {
logger.debug("{} update_mapping [{}] with source [{}]", index, mergedMapper.type(), updatedSource);
@ -313,7 +314,7 @@ public class MetaDataMappingService extends AbstractComponent {
}
} else {
updated = true;
updatedMapping = true;
if (logger.isDebugEnabled()) {
logger.debug("{} create_mapping [{}] with source [{}]", index, mappingType, updatedSource);
} else if (logger.isInfoEnabled()) {
@ -329,7 +330,16 @@ public class MetaDataMappingService extends AbstractComponent {
indexMetaDataBuilder.putMapping(new MappingMetaData(mapper.mappingSource()));
}
}
if (updatedMapping) {
indexMetaDataBuilder.mappingVersion(1 + indexMetaDataBuilder.mappingVersion());
}
/*
* This implicitly increments the index metadata version and builds the index metadata. This means that we need to have
* already incremented the mapping version if necessary. Therefore, the mapping version increment must remain before this
* statement.
*/
builder.put(indexMetaDataBuilder);
updated |= updatedMapping;
}
if (updated) {
return ClusterState.builder(currentState).metaData(builder).build();


@ -522,8 +522,8 @@ public class IndexService extends AbstractIndexComponent implements IndicesClust
}
@Override
public boolean updateMapping(IndexMetaData indexMetaData) throws IOException {
return mapperService().updateMapping(indexMetaData);
public boolean updateMapping(final IndexMetaData currentIndexMetaData, final IndexMetaData newIndexMetaData) throws IOException {
return mapperService().updateMapping(currentIndexMetaData, newIndexMetaData);
}
private class StoreCloseListener implements Store.OnClose {


@ -25,6 +25,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.DelegatingAnalyzerWrapper;
import org.apache.lucene.index.Term;
import org.elasticsearch.Assertions;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.metadata.IndexMetaData;
@ -192,8 +193,8 @@ public class MapperService extends AbstractIndexComponent implements Closeable {
/**
* Update mapping by only merging the metadata that is different between received and stored entries
*/
public boolean updateMapping(IndexMetaData indexMetaData) throws IOException {
assert indexMetaData.getIndex().equals(index()) : "index mismatch: expected " + index() + " but was " + indexMetaData.getIndex();
public boolean updateMapping(final IndexMetaData currentIndexMetaData, final IndexMetaData newIndexMetaData) throws IOException {
assert newIndexMetaData.getIndex().equals(index()) : "index mismatch: expected " + index() + " but was " + newIndexMetaData.getIndex();
// go over and add the relevant mappings (or update them)
Set<String> existingMappers = new HashSet<>();
if (mapper != null) {
@ -205,7 +206,7 @@ public class MapperService extends AbstractIndexComponent implements Closeable {
final Map<String, DocumentMapper> updatedEntries;
try {
// only update entries if needed
updatedEntries = internalMerge(indexMetaData, MergeReason.MAPPING_RECOVERY, true);
updatedEntries = internalMerge(newIndexMetaData, MergeReason.MAPPING_RECOVERY, true);
} catch (Exception e) {
logger.warn(() -> new ParameterizedMessage("[{}] failed to apply mappings", index()), e);
throw e;
@ -213,9 +214,11 @@ public class MapperService extends AbstractIndexComponent implements Closeable {
boolean requireRefresh = false;
assertMappingVersion(currentIndexMetaData, newIndexMetaData, updatedEntries);
for (DocumentMapper documentMapper : updatedEntries.values()) {
String mappingType = documentMapper.type();
CompressedXContent incomingMappingSource = indexMetaData.mapping(mappingType).source();
CompressedXContent incomingMappingSource = newIndexMetaData.mapping(mappingType).source();
String op = existingMappers.contains(mappingType) ? "updated" : "added";
if (logger.isDebugEnabled() && incomingMappingSource.compressed().length < 512) {
@ -240,6 +243,45 @@ public class MapperService extends AbstractIndexComponent implements Closeable {
return requireRefresh;
}
private void assertMappingVersion(
final IndexMetaData currentIndexMetaData,
final IndexMetaData newIndexMetaData,
final Map<String, DocumentMapper> updatedEntries) {
if (Assertions.ENABLED
&& currentIndexMetaData != null
&& currentIndexMetaData.getCreationVersion().onOrAfter(Version.V_6_5_0)) {
if (currentIndexMetaData.getMappingVersion() == newIndexMetaData.getMappingVersion()) {
// if the mapping version is unchanged, then there should not be any updates and all mappings should be the same
assert updatedEntries.isEmpty() : updatedEntries;
for (final ObjectCursor<MappingMetaData> mapping : newIndexMetaData.getMappings().values()) {
final CompressedXContent currentSource = currentIndexMetaData.mapping(mapping.value.type()).source();
final CompressedXContent newSource = mapping.value.source();
assert currentSource.equals(newSource) :
"expected current mapping [" + currentSource + "] for type [" + mapping.value.type() + "] "
+ "to be the same as new mapping [" + newSource + "]";
}
} else {
// if the mapping version is changed, it should increase, there should be updates, and the mapping should be different
final long currentMappingVersion = currentIndexMetaData.getMappingVersion();
final long newMappingVersion = newIndexMetaData.getMappingVersion();
assert currentMappingVersion < newMappingVersion :
"expected current mapping version [" + currentMappingVersion + "] "
+ "to be less than new mapping version [" + newMappingVersion + "]";
assert updatedEntries.isEmpty() == false;
for (final DocumentMapper documentMapper : updatedEntries.values()) {
final MappingMetaData currentMapping = currentIndexMetaData.mapping(documentMapper.type());
if (currentMapping != null) {
final CompressedXContent currentSource = currentMapping.source();
final CompressedXContent newSource = documentMapper.mappingSource();
assert currentSource.equals(newSource) == false :
"expected current mapping [" + currentSource + "] for type [" + documentMapper.type() + "] " +
"to be different than new mapping";
}
}
}
}
}
public void merge(Map<String, Map<String, Object>> mappings, MergeReason reason) {
Map<String, CompressedXContent> mappingSourcesCompressed = new LinkedHashMap<>(mappings.size());
for (Map.Entry<String, Map<String, Object>> entry : mappings.entrySet()) {
@ -468,11 +510,11 @@ public class MapperService extends AbstractIndexComponent implements Closeable {
// commit the change
if (defaultMappingSource != null) {
this.defaultMappingSource = defaultMappingSource;
this.defaultMapper = defaultMapper;
}
if (newMapper != null) {
this.mapper = newMapper;
}
this.defaultMapper = defaultMapper;
this.fieldTypes = fieldTypes;
this.hasNested = hasNested;
this.fullPathObjectMappers = fullPathObjectMappers;


@ -264,7 +264,10 @@ public class TypeParsers {
}
public static FormatDateTimeFormatter parseDateTimeFormatter(Object node) {
return Joda.forPattern(node.toString());
if (node instanceof String) {
return Joda.forPattern((String) node);
}
throw new IllegalArgumentException("Invalid format: [" + node.toString() + "]: expected string value");
}
public static void parseTermVector(String fieldName, String termVector, FieldMapper.Builder builder) throws MapperParsingException {


@ -251,7 +251,16 @@ public class HierarchyCircuitBreakerService extends CircuitBreakerService {
//package private to allow overriding it in tests
long currentMemoryUsage() {
return MEMORY_MX_BEAN.getHeapMemoryUsage().getUsed();
try {
return MEMORY_MX_BEAN.getHeapMemoryUsage().getUsed();
} catch (IllegalArgumentException ex) {
// This exception can happen (rarely) due to a race condition in the JVM when determining usage of memory pools. We do not want
// to fail requests because of this and thus return zero memory usage in this case. While we could also return the most
// recently determined memory usage, we would overestimate memory usage immediately after a garbage collection event.
assert ex.getMessage().matches("committed = \\d+ should be < max = \\d+");
logger.info("Cannot determine current memory usage due to JDK-8207200.", ex);
return 0;
}
}
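The guard above is self-contained enough to illustrate outside Elasticsearch. A minimal sketch of the same fallback pattern against the plain JDK (hypothetical class, not the breaker service itself):

[source,java]
--------------------------------------------------
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapUsageProbe {
    private static final MemoryMXBean MEMORY_MX_BEAN = ManagementFactory.getMemoryMXBean();

    static long currentMemoryUsage() {
        try {
            return MEMORY_MX_BEAN.getHeapMemoryUsage().getUsed();
        } catch (IllegalArgumentException ex) {
            // JDK-8207200: a race while sampling memory pools can transiently
            // report committed > max; treat usage as unknown instead of failing
            return 0;
        }
    }

    public static void main(final String[] args) {
        System.out.println("heap used: " + currentMemoryUsage() + " bytes");
    }
}
--------------------------------------------------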
/**


@ -456,7 +456,7 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent imple
AllocatedIndex<? extends Shard> indexService = null;
try {
indexService = indicesService.createIndex(indexMetaData, buildInIndexListener);
if (indexService.updateMapping(indexMetaData) && sendRefreshMapping) {
if (indexService.updateMapping(null, indexMetaData) && sendRefreshMapping) {
nodeMappingRefreshAction.nodeMappingRefresh(state.nodes().getMasterNode(),
new NodeMappingRefreshAction.NodeMappingRefreshRequest(indexMetaData.getIndex().getName(),
indexMetaData.getIndexUUID(), state.nodes().getLocalNodeId())
@ -490,7 +490,7 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent imple
if (ClusterChangedEvent.indexMetaDataChanged(currentIndexMetaData, newIndexMetaData)) {
indexService.updateMetaData(newIndexMetaData);
try {
if (indexService.updateMapping(newIndexMetaData) && sendRefreshMapping) {
if (indexService.updateMapping(currentIndexMetaData, newIndexMetaData) && sendRefreshMapping) {
nodeMappingRefreshAction.nodeMappingRefresh(state.nodes().getMasterNode(),
new NodeMappingRefreshAction.NodeMappingRefreshRequest(newIndexMetaData.getIndex().getName(),
newIndexMetaData.getIndexUUID(), state.nodes().getLocalNodeId())
@ -778,7 +778,7 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent imple
/**
* Checks if index requires refresh from master.
*/
boolean updateMapping(IndexMetaData indexMetaData) throws IOException;
boolean updateMapping(IndexMetaData currentIndexMetaData, IndexMetaData newIndexMetaData) throws IOException;
/**
* Returns shard with given id.


@ -292,6 +292,7 @@ public class RestoreService extends AbstractComponent implements ClusterStateApp
// Index exists and it's closed - open it in metadata and start recovery
IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(snapshotIndexMetaData).state(IndexMetaData.State.OPEN);
indexMdBuilder.version(Math.max(snapshotIndexMetaData.getVersion(), currentIndexMetaData.getVersion() + 1));
indexMdBuilder.mappingVersion(Math.max(snapshotIndexMetaData.getMappingVersion(), currentIndexMetaData.getMappingVersion() + 1));
if (!request.includeAliases()) {
// Remove all snapshot aliases
if (!snapshotIndexMetaData.getAliases().isEmpty()) {


@ -16,12 +16,15 @@
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster.metadata;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingClusterStateUpdateRequest;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateTaskExecutor;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.compress.CompressedXContent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESSingleNodeTestCase;
@ -31,6 +34,7 @@ import java.util.Collection;
import java.util.Collections;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.not;
public class MetaDataMappingServiceTests extends ESSingleNodeTestCase {
@ -47,8 +51,18 @@ public class MetaDataMappingServiceTests extends ESSingleNodeTestCase {
final ClusterService clusterService = getInstanceFromNode(ClusterService.class);
// TODO - it will be nice to get a random mapping generator
final PutMappingClusterStateUpdateRequest request = new PutMappingClusterStateUpdateRequest().type("type");
request.source("{ \"properties\" { \"field\": { \"type\": \"text\" }}}");
mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request));
request.indices(new Index[] {indexService.index()});
request.source("{ \"properties\": { \"field\": { \"type\": \"text\" }}}");
final ClusterStateTaskExecutor.ClusterTasksResult<PutMappingClusterStateUpdateRequest> result =
mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request));
// the task completed successfully
assertThat(result.executionResults.size(), equalTo(1));
assertTrue(result.executionResults.values().iterator().next().isSuccess());
// the task really was a mapping update
assertThat(
indexService.mapperService().documentMapper("type").mappingSource(),
not(equalTo(result.resultingState.metaData().index("test").mapping("type").source())));
// since we never committed the cluster state update, the in-memory state is unchanged
assertThat(indexService.mapperService().documentMapper("type").mappingSource(), equalTo(currentMapping));
}
@ -69,4 +83,35 @@ public class MetaDataMappingServiceTests extends ESSingleNodeTestCase {
assertSame(result, result2);
}
public void testMappingVersion() throws Exception {
final IndexService indexService = createIndex("test", client().admin().indices().prepareCreate("test").addMapping("type"));
final long previousVersion = indexService.getMetaData().getMappingVersion();
final MetaDataMappingService mappingService = getInstanceFromNode(MetaDataMappingService.class);
final ClusterService clusterService = getInstanceFromNode(ClusterService.class);
final PutMappingClusterStateUpdateRequest request = new PutMappingClusterStateUpdateRequest().type("type");
request.indices(new Index[] {indexService.index()});
request.source("{ \"properties\": { \"field\": { \"type\": \"text\" }}}");
final ClusterStateTaskExecutor.ClusterTasksResult<PutMappingClusterStateUpdateRequest> result =
mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request));
assertThat(result.executionResults.size(), equalTo(1));
assertTrue(result.executionResults.values().iterator().next().isSuccess());
assertThat(result.resultingState.metaData().index("test").getMappingVersion(), equalTo(1 + previousVersion));
}
public void testMappingVersionUnchanged() throws Exception {
final IndexService indexService = createIndex("test", client().admin().indices().prepareCreate("test").addMapping("type"));
final long previousVersion = indexService.getMetaData().getMappingVersion();
final MetaDataMappingService mappingService = getInstanceFromNode(MetaDataMappingService.class);
final ClusterService clusterService = getInstanceFromNode(ClusterService.class);
final PutMappingClusterStateUpdateRequest request = new PutMappingClusterStateUpdateRequest().type("type");
request.indices(new Index[] {indexService.index()});
request.source("{ \"properties\": {}}");
final ClusterStateTaskExecutor.ClusterTasksResult<PutMappingClusterStateUpdateRequest> result =
mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request));
assertThat(result.executionResults.size(), equalTo(1));
assertTrue(result.executionResults.values().iterator().next().isSuccess());
assertThat(result.resultingState.metaData().index("test").getMappingVersion(), equalTo(previousVersion));
}
}


@ -267,6 +267,7 @@ public class MetaDataStateFormatTests extends ESTestCase {
IndexMetaData deserialized = indices.get(original.getIndex().getName());
assertThat(deserialized, notNullValue());
assertThat(deserialized.getVersion(), equalTo(original.getVersion()));
assertThat(deserialized.getMappingVersion(), equalTo(original.getMappingVersion()));
assertThat(deserialized.getNumberOfReplicas(), equalTo(original.getNumberOfReplicas()));
assertThat(deserialized.getNumberOfShards(), equalTo(original.getNumberOfShards()));
}


@ -414,4 +414,22 @@ public class DateFieldMapperTests extends ESSingleNodeTestCase {
() -> mapper.merge(update.mapping()));
assertEquals("mapper [date] of different type, current_type [date], merged_type [text]", e.getMessage());
}
public void testIllegalFormatField() throws Exception {
String mapping = Strings.toString(XContentFactory.jsonBuilder()
.startObject()
.startObject("type")
.startObject("properties")
.startObject("field")
.field("type", "date")
.array("format", "test_format")
.endObject()
.endObject()
.endObject()
.endObject());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> parser.parse("type", new CompressedXContent(mapping)));
assertEquals("Invalid format: [[test_format]]: expected string value", e.getMessage());
}
}


@ -22,6 +22,7 @@ import org.apache.lucene.index.IndexOptions;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.compress.CompressedXContent;
@ -745,4 +746,13 @@ public class DynamicMappingTests extends ESSingleNodeTestCase {
client().prepareIndex("test", "type", "1").setSource("foo", "abc").get();
assertThat(index.mapperService().fullName("foo"), instanceOf(KeywordFieldMapper.KeywordFieldType.class));
}
public void testMappingVersionAfterDynamicMappingUpdate() {
createIndex("test", client().admin().indices().prepareCreate("test").addMapping("type"));
final ClusterService clusterService = getInstanceFromNode(ClusterService.class);
final long previousVersion = clusterService.state().metaData().index("test").getMappingVersion();
client().prepareIndex("test", "type", "1").setSource("field", "text").get();
assertThat(clusterService.state().metaData().index("test").getMappingVersion(), equalTo(1 + previousVersion));
}
}


@ -21,13 +21,16 @@ package org.elasticsearch.index.mapper;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.compress.CompressedXContent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.mapper.KeywordFieldMapper.KeywordFieldType;
import org.elasticsearch.index.mapper.MapperService.MergeReason;
@ -119,6 +122,35 @@ public class MapperServiceTests extends ESSingleNodeTestCase {
assertNull(indexService.mapperService().documentMapper(MapperService.DEFAULT_MAPPING));
}
public void testIndexMetaDataUpdateDoesNotLoseDefaultMapper() throws IOException {
final IndexService indexService =
createIndex("test", Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_6_3_0).build());
try (XContentBuilder builder = JsonXContent.contentBuilder()) {
builder.startObject();
{
builder.startObject(MapperService.DEFAULT_MAPPING);
{
builder.field("date_detection", false);
}
builder.endObject();
}
builder.endObject();
final PutMappingRequest putMappingRequest = new PutMappingRequest();
putMappingRequest.indices("test");
putMappingRequest.type(MapperService.DEFAULT_MAPPING);
putMappingRequest.source(builder);
client().admin().indices().preparePutMapping("test").setType(MapperService.DEFAULT_MAPPING).setSource(builder).get();
}
assertNotNull(indexService.mapperService().documentMapper(MapperService.DEFAULT_MAPPING));
final Settings zeroReplicasSettings = Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build();
client().admin().indices().prepareUpdateSettings("test").setSettings(zeroReplicasSettings).get();
/*
* This assertion is a guard against a previous bug that would lose the default mapper when applying a metadata update that did not
* update the default mapping.
*/
assertNotNull(indexService.mapperService().documentMapper(MapperService.DEFAULT_MAPPING));
}
public void testTotalFieldsExceedsLimit() throws Throwable {
Function<String, String> mapping = type -> {
try {


@ -443,4 +443,22 @@ public class RangeFieldMapperTests extends AbstractNumericFieldMapperTestCase {
}
}
public void testIllegalFormatField() throws Exception {
String mapping = Strings.toString(XContentFactory.jsonBuilder()
.startObject()
.startObject("type")
.startObject("properties")
.startObject("field")
.field("type", "date_range")
.array("format", "test_format")
.endObject()
.endObject()
.endObject()
.endObject());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> parser.parse("type", new CompressedXContent(mapping)));
assertEquals("Invalid format: [[test_format]]: expected string value", e.getMessage());
}
}


@ -159,4 +159,30 @@ public class RootObjectMapperTests extends ESSingleNodeTestCase {
mapper = mapperService.merge("type", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE);
assertEquals(mapping3, mapper.mappingSource().toString());
}
public void testIllegalFormatField() throws Exception {
String dynamicMapping = Strings.toString(XContentFactory.jsonBuilder()
.startObject()
.startObject("type")
.startArray("dynamic_date_formats")
.startArray().value("test_format").endArray()
.endArray()
.endObject()
.endObject());
String mapping = Strings.toString(XContentFactory.jsonBuilder()
.startObject()
.startObject("type")
.startArray("date_formats")
.startArray().value("test_format").endArray()
.endArray()
.endObject()
.endObject());
DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser();
for (String m : Arrays.asList(mapping, dynamicMapping)) {
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> parser.parse("type", new CompressedXContent(m)));
assertEquals("Invalid format: [[test_format]]: expected string value", e.getMessage());
}
}
}


@ -19,6 +19,8 @@
package org.elasticsearch.index.mapper;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.compress.CompressedXContent;
@ -30,6 +32,7 @@ import org.elasticsearch.index.mapper.MapperService.MergeReason;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESSingleNodeTestCase;
import org.elasticsearch.test.InternalSettingsPlugin;
import org.hamcrest.Matchers;
import java.io.IOException;
import java.util.Collection;
@ -188,4 +191,30 @@ public class UpdateMappingTests extends ESSingleNodeTestCase {
() -> mapperService2.merge("type", new CompressedXContent(mapping1), MergeReason.MAPPING_UPDATE));
assertThat(e.getMessage(), equalTo("mapper [foo] of different type, current_type [long], merged_type [ObjectMapper]"));
}
public void testMappingVersion() {
createIndex("test", client().admin().indices().prepareCreate("test").addMapping("type"));
final ClusterService clusterService = getInstanceFromNode(ClusterService.class);
{
final long previousVersion = clusterService.state().metaData().index("test").getMappingVersion();
final PutMappingRequest request = new PutMappingRequest();
request.indices("test");
request.type("type");
request.source("field", "type=text");
client().admin().indices().putMapping(request).actionGet();
assertThat(clusterService.state().metaData().index("test").getMappingVersion(), Matchers.equalTo(1 + previousVersion));
}
{
final long previousVersion = clusterService.state().metaData().index("test").getMappingVersion();
final PutMappingRequest request = new PutMappingRequest();
request.indices("test");
request.type("type");
request.source("field", "type=text");
client().admin().indices().putMapping(request).actionGet();
// the version should be unchanged after putting the same mapping again
assertThat(clusterService.state().metaData().index("test").getMappingVersion(), Matchers.equalTo(previousVersion));
}
}
}


@ -273,7 +273,7 @@ public abstract class AbstractIndicesClusterStateServiceTestCase extends ESTestC
}
@Override
public boolean updateMapping(IndexMetaData indexMetaData) throws IOException {
public boolean updateMapping(final IndexMetaData currentIndexMetaData, final IndexMetaData newIndexMetaData) throws IOException {
failRandomly();
return false;
}


@ -38,16 +38,19 @@ The following parameters can be specified in the body of a POST request and
pertain to creating a token:
`grant_type`::
(string) The type of grant. Valid grant types are: `password` and `refresh_token`.
(string) The type of grant. Supported grant types are: `password`,
`client_credentials` and `refresh_token`.
`password`::
(string) The user's password. If you specify the `password` grant type, this
parameter is required.
parameter is required. This parameter is not valid with any other supported
grant type.
`refresh_token`::
(string) If you specify the `refresh_token` grant type, this parameter is
required. It contains the string that was returned when you created the token
and enables you to extend its life.
and enables you to extend its life. This parameter is not valid with any other
supported grant type.
`scope`::
(string) The scope of the token. Currently tokens are only issued for a scope of
@ -55,11 +58,48 @@ and enables you to extend its life.
`username`::
(string) The username that identifies the user. If you specify the `password`
grant type, this parameter is required.
grant type, this parameter is required. This parameter is not valid with any
other supported grant type.
==== Examples
The following example obtains a token for the `test_admin` user:
The following example obtains a token using the `client_credentials` grant type,
which simply creates a token as the authenticated user:
[source,js]
--------------------------------------------------
POST /_xpack/security/oauth2/token
{
"grant_type" : "client_credentials"
}
--------------------------------------------------
// CONSOLE
The following example output contains the access token, the amount of time (in
seconds) that the token expires in, and the type:
[source,js]
--------------------------------------------------
{
"access_token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
"type" : "Bearer",
"expires_in" : 1200
}
--------------------------------------------------
// TESTRESPONSE[s/dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==/$body.access_token/]
The token returned by this API can be used by sending a request with an
`Authorization` header with a value of `Bearer ` followed by the value of the
`access_token`.
[source,shell]
--------------------------------------------------
curl -H "Authorization: Bearer dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==" http://localhost:9200/_cluster/health
--------------------------------------------------
// NOTCONSOLE
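The same `Authorization` header can be attached from Java with the low-level
REST client's `RequestOptions`; a sketch, assuming a node on `localhost:9200`
and a token already obtained from the API above (the environment variable is
hypothetical):

[source,java]
--------------------------------------------------
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class BearerTokenExample {
    public static void main(final String[] args) throws Exception {
        final String accessToken = System.getenv("ES_ACCESS_TOKEN"); // token from the create token API
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            final Request request = new Request("GET", "/_cluster/health");
            final RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
            options.addHeader("Authorization", "Bearer " + accessToken);
            request.setOptions(options.build());
            final Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
--------------------------------------------------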
The following example obtains a token for the `test_admin` user using the
`password` grant type:
[source,js]
--------------------------------------------------
@ -73,7 +113,7 @@ POST /_xpack/security/oauth2/token
// CONSOLE
The following example output contains the access token, the amount of time (in
seconds) that the token expires in, and the type:
seconds) that the token expires in, the type, and the refresh token:
[source,js]
--------------------------------------------------
@ -87,19 +127,10 @@ seconds) that the token expires in, and the type:
// TESTRESPONSE[s/dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==/$body.access_token/]
// TESTRESPONSE[s/vLBPvmAB6KvwvJZr27cS/$body.refresh_token/]
The token returned by this API can be used by sending a request with a
`Authorization` header with a value having the prefix `Bearer ` followed
by the value of the `access_token`.
[source,shell]
--------------------------------------------------
curl -H "Authorization: Bearer dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==" http://localhost:9200/_cluster/health
--------------------------------------------------
// NOTCONSOLE
[[security-api-refresh-token]]
To extend the life of an existing token, you can call the API again with the
refresh token within 24 hours of the token's creation. For example:
To extend the life of an existing token obtained using the `password` grant type,
you can call the API again with the refresh token within 24 hours of the token's
creation. For example:
[source,js]
--------------------------------------------------


@ -55,8 +55,8 @@ help you get up and running. The +elasticsearch-setup-passwords+ command is the
simplest method to set the built-in users' passwords for the first time.
For example, you can run the command in an "interactive" mode, which prompts you
to enter new passwords for the `elastic`, `kibana`, `beats_system`, and
`logstash_system` users:
to enter new passwords for the `elastic`, `kibana`, `beats_system`,
`logstash_system`, and `apm_system` users:
[source,shell]
--------------------------------------------------


@ -70,7 +70,7 @@ public class FollowIndexActionTests extends ESTestCase {
IndexMetaData followIMD = createIMD("index2", State.OPEN, "{\"properties\": {\"field\": {\"type\": \"text\"}}}", 5,
Settings.builder().put(CcrSettings.CCR_FOLLOWING_INDEX_SETTING.getKey(), true).build());
MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), Settings.EMPTY, "index2");
mapperService.updateMapping(followIMD);
mapperService.updateMapping(null, followIMD);
Exception e = expectThrows(IllegalArgumentException.class,
() -> FollowIndexAction.validate(request, leaderIMD, followIMD, mapperService));
assertThat(e.getMessage(), equalTo("mapper [field] of different type, current_type [text], merged_type [keyword]"));
@ -99,7 +99,7 @@ public class FollowIndexActionTests extends ESTestCase {
IndexMetaData followIMD = createIMD("index2", 5, followingIndexSettings);
MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(),
followingIndexSettings, "index2");
mapperService.updateMapping(followIMD);
mapperService.updateMapping(null, followIMD);
IllegalArgumentException error = expectThrows(IllegalArgumentException.class,
() -> FollowIndexAction.validate(request, leaderIMD, followIMD, mapperService));
assertThat(error.getMessage(), equalTo("the following index [index2] is not ready to follow; " +
@ -112,7 +112,7 @@ public class FollowIndexActionTests extends ESTestCase {
IndexMetaData followIMD = createIMD("index2", 5, Settings.builder()
.put(CcrSettings.CCR_FOLLOWING_INDEX_SETTING.getKey(), true).build());
MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), Settings.EMPTY, "index2");
mapperService.updateMapping(followIMD);
mapperService.updateMapping(null, followIMD);
FollowIndexAction.validate(request, leaderIMD, followIMD, mapperService);
}
{
@ -128,7 +128,7 @@ public class FollowIndexActionTests extends ESTestCase {
.put("index.analysis.analyzer.my_analyzer.tokenizer", "standard").build());
MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(),
followIMD.getSettings(), "index2");
mapperService.updateMapping(followIMD);
mapperService.updateMapping(null, followIMD);
FollowIndexAction.validate(request, leaderIMD, followIMD, mapperService);
}
{
@ -146,7 +146,7 @@ public class FollowIndexActionTests extends ESTestCase {
.put("index.analysis.analyzer.my_analyzer.tokenizer", "standard").build());
MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(),
followIMD.getSettings(), "index2");
mapperService.updateMapping(followIMD);
mapperService.updateMapping(null, followIMD);
FollowIndexAction.validate(request, leaderIMD, followIMD, mapperService);
}
}


@ -20,16 +20,11 @@ import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.DateHistogramValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.elasticsearch.xpack.core.rollup.RollupField;
import org.joda.time.DateTimeZone;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
@ -182,19 +177,6 @@ public class DateHistogramGroupConfig implements Writeable, ToXContentObject {
return createRounding(interval.toString(), timeZone);
}
/**
* This returns a set of aggregation builders which represent the configured
* set of date histograms. Used by the rollup indexer to iterate over historical data
*/
public List<CompositeValuesSourceBuilder<?>> toBuilders() {
DateHistogramValuesSourceBuilder vsBuilder =
new DateHistogramValuesSourceBuilder(RollupField.formatIndexerAggName(field, DateHistogramAggregationBuilder.NAME));
vsBuilder.dateHistogramInterval(interval);
vsBuilder.field(field);
vsBuilder.timeZone(toDateTimeZone(timeZone));
return Collections.singletonList(vsBuilder);
}
public void validateMappings(Map<String, Map<String, FieldCapabilities>> fieldCapsResponse,
ActionRequestValidationException validationException) {


@ -16,18 +16,13 @@ import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.HistogramValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
import org.elasticsearch.xpack.core.rollup.RollupField;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;
@ -85,25 +80,6 @@ public class HistogramGroupConfig implements Writeable, ToXContentObject {
return fields;
}
/**
* This returns a set of aggregation builders which represent the configured
* set of histograms. Used by the rollup indexer to iterate over historical data
*/
public List<CompositeValuesSourceBuilder<?>> toBuilders() {
if (fields.length == 0) {
return Collections.emptyList();
}
return Arrays.stream(fields).map(f -> {
HistogramValuesSourceBuilder vsBuilder
= new HistogramValuesSourceBuilder(RollupField.formatIndexerAggName(f, HistogramAggregationBuilder.NAME));
vsBuilder.interval(interval);
vsBuilder.field(f);
vsBuilder.missingBucket(true);
return vsBuilder;
}).collect(Collectors.toList());
}
public void validateMappings(Map<String, Map<String, FieldCapabilities>> fieldCapsResponse,
ActionRequestValidationException validationException) {


@ -16,18 +16,9 @@ import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder;
import org.elasticsearch.search.aggregations.support.ValueType;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;
import org.elasticsearch.xpack.core.rollup.RollupField;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
@ -53,11 +44,11 @@ import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constru
public class MetricConfig implements Writeable, ToXContentObject {
// TODO: replace these with an enum
private static final ParseField MIN = new ParseField("min");
private static final ParseField MAX = new ParseField("max");
private static final ParseField SUM = new ParseField("sum");
private static final ParseField AVG = new ParseField("avg");
private static final ParseField VALUE_COUNT = new ParseField("value_count");
public static final ParseField MIN = new ParseField("min");
public static final ParseField MAX = new ParseField("max");
public static final ParseField SUM = new ParseField("sum");
public static final ParseField AVG = new ParseField("avg");
public static final ParseField VALUE_COUNT = new ParseField("value_count");
static final String NAME = "metrics";
private static final String FIELD = "field";
@ -111,46 +102,6 @@ public class MetricConfig implements Writeable, ToXContentObject {
return metrics;
}
/**
* This returns a set of aggregation builders which represent the configured
* set of metrics. Used by the rollup indexer to iterate over historical data
*/
public List<ValuesSourceAggregationBuilder.LeafOnly> toBuilders() {
if (metrics.size() == 0) {
return Collections.emptyList();
}
List<ValuesSourceAggregationBuilder.LeafOnly> aggs = new ArrayList<>(metrics.size());
for (String metric : metrics) {
ValuesSourceAggregationBuilder.LeafOnly newBuilder;
if (metric.equals(MIN.getPreferredName())) {
newBuilder = new MinAggregationBuilder(RollupField.formatFieldName(field, MinAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(MAX.getPreferredName())) {
newBuilder = new MaxAggregationBuilder(RollupField.formatFieldName(field, MaxAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(AVG.getPreferredName())) {
// Avgs are sum + count
newBuilder = new SumAggregationBuilder(RollupField.formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.VALUE));
ValuesSourceAggregationBuilder.LeafOnly countBuilder
= new ValueCountAggregationBuilder(
RollupField.formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.COUNT_FIELD), ValueType.NUMERIC);
countBuilder.field(field);
aggs.add(countBuilder);
} else if (metric.equals(SUM.getPreferredName())) {
newBuilder = new SumAggregationBuilder(RollupField.formatFieldName(field, SumAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(VALUE_COUNT.getPreferredName())) {
// TODO allow non-numeric value_counts.
// Hardcoding this is fine for now since the job validation guarantees that all metric fields are numerics
newBuilder = new ValueCountAggregationBuilder(
RollupField.formatFieldName(field, ValueCountAggregationBuilder.NAME, RollupField.VALUE), ValueType.NUMERIC);
} else {
throw new IllegalArgumentException("Unsupported metric type [" + metric + "]");
}
newBuilder.field(field);
aggs.add(newBuilder);
}
return aggs;
}
public void validateMappings(Map<String, Map<String, FieldCapabilities>> fieldCapsResponse,
ActionRequestValidationException validationException) {


@ -18,16 +18,11 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.TextFieldMapper;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.xpack.core.rollup.RollupField;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;
@ -79,20 +74,6 @@ public class TermsGroupConfig implements Writeable, ToXContentObject {
return fields;
}
/**
* This returns a set of aggregation builders which represent the configured
* set of terms. Used by the rollup indexer to iterate over historical data
*/
public List<CompositeValuesSourceBuilder<?>> toBuilders() {
return Arrays.stream(fields).map(f -> {
TermsValuesSourceBuilder vsBuilder
= new TermsValuesSourceBuilder(RollupField.formatIndexerAggName(f, TermsAggregationBuilder.NAME));
vsBuilder.field(f);
vsBuilder.missingBucket(true);
return vsBuilder;
}).collect(Collectors.toList());
}
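Note that every source above sets missingBucket(true). A minimal sketch of the assumed composite-source semantics (general composite aggregation behavior, not specific to this commit):

    TermsValuesSourceBuilder vsBuilder = new TermsValuesSourceBuilder("my_field.terms");
    vsBuilder.field("my_field");
    vsBuilder.missingBucket(true); // docs without "my_field" land in a null-key bucket instead of being dropped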
public void validateMappings(Map<String, Map<String, FieldCapabilities>> fieldCapsResponse,
ActionRequestValidationException validationException) {


@ -19,6 +19,10 @@ import org.elasticsearch.common.CharArrays;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;
import java.util.stream.Collectors;
import static org.elasticsearch.action.ValidateActions.addValidationError;
@ -29,6 +33,37 @@ import static org.elasticsearch.action.ValidateActions.addValidationError;
*/
public final class CreateTokenRequest extends ActionRequest {
public enum GrantType {
PASSWORD("password"),
REFRESH_TOKEN("refresh_token"),
AUTHORIZATION_CODE("authorization_code"),
CLIENT_CREDENTIALS("client_credentials");
private final String value;
GrantType(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public static GrantType fromString(String grantType) {
if (grantType != null) {
for (GrantType type : values()) {
if (type.getValue().equals(grantType)) {
return type;
}
}
}
return null;
}
}
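The parser above is deliberately lenient; a quick illustration of its behavior:

    GrantType.fromString("client_credentials"); // -> GrantType.CLIENT_CREDENTIALS
    GrantType.fromString("implicit");           // -> null, so validate() reports the supported values
    GrantType.fromString(null);                 // -> null as well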
private static final Set<GrantType> SUPPORTED_GRANT_TYPES = Collections.unmodifiableSet(
EnumSet.of(GrantType.PASSWORD, GrantType.REFRESH_TOKEN, GrantType.CLIENT_CREDENTIALS));
private String grantType;
private String username;
private SecureString password;
@ -49,33 +84,58 @@ public final class CreateTokenRequest extends ActionRequest {
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if ("password".equals(grantType)) {
if (Strings.isNullOrEmpty(username)) {
validationException = addValidationError("username is missing", validationException);
}
if (password == null || password.getChars() == null || password.getChars().length == 0) {
validationException = addValidationError("password is missing", validationException);
}
if (refreshToken != null) {
validationException =
addValidationError("refresh_token is not supported with the password grant_type", validationException);
}
} else if ("refresh_token".equals(grantType)) {
if (username != null) {
validationException =
addValidationError("username is not supported with the refresh_token grant_type", validationException);
}
if (password != null) {
validationException =
addValidationError("password is not supported with the refresh_token grant_type", validationException);
}
if (refreshToken == null) {
validationException = addValidationError("refresh_token is missing", validationException);
GrantType type = GrantType.fromString(grantType);
if (type != null) {
switch (type) {
case PASSWORD:
if (Strings.isNullOrEmpty(username)) {
validationException = addValidationError("username is missing", validationException);
}
if (password == null || password.getChars() == null || password.getChars().length == 0) {
validationException = addValidationError("password is missing", validationException);
}
if (refreshToken != null) {
validationException =
addValidationError("refresh_token is not supported with the password grant_type", validationException);
}
break;
case REFRESH_TOKEN:
if (username != null) {
validationException =
addValidationError("username is not supported with the refresh_token grant_type", validationException);
}
if (password != null) {
validationException =
addValidationError("password is not supported with the refresh_token grant_type", validationException);
}
if (refreshToken == null) {
validationException = addValidationError("refresh_token is missing", validationException);
}
break;
case CLIENT_CREDENTIALS:
if (username != null) {
validationException =
addValidationError("username is not supported with the client_credentials grant_type", validationException);
}
if (password != null) {
validationException =
addValidationError("password is not supported with the client_credentials grant_type", validationException);
}
if (refreshToken != null) {
validationException = addValidationError("refresh_token is not supported with the client_credentials grant_type",
validationException);
}
break;
default:
validationException = addValidationError("grant_type only supports the values: [" +
SUPPORTED_GRANT_TYPES.stream().map(GrantType::getValue).collect(Collectors.joining(", ")) + "]",
validationException);
}
} else {
validationException = addValidationError("grant_type only supports the values: [password, refresh_token]", validationException);
validationException = addValidationError("grant_type only supports the values: [" +
SUPPORTED_GRANT_TYPES.stream().map(GrantType::getValue).collect(Collectors.joining(", ")) + "]",
validationException);
}
return validationException;
}
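A hedged usage sketch of the rules above (the no-arg constructor and setter names are assumed from the test hunks further down):

    CreateTokenRequest request = new CreateTokenRequest();
    request.setGrantType("client_credentials");
    assert request.validate() == null;   // nothing else is required for this grant
    request.setRefreshToken("deadbeef"); // illustrative value
    assert request.validate() != null;   // refresh_token is not supported with client_credentials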
@ -126,6 +186,11 @@ public final class CreateTokenRequest extends ActionRequest {
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
if (out.getVersion().before(Version.V_7_0_0_alpha1) && GrantType.CLIENT_CREDENTIALS.getValue().equals(grantType)) {
throw new IllegalArgumentException("a request with the client_credentials grant_type cannot be sent to version [" +
out.getVersion() + "]");
}
out.writeString(grantType);
if (out.getVersion().onOrAfter(Version.V_6_2_0)) {
out.writeOptionalString(username);


@ -59,8 +59,14 @@ public final class CreateTokenResponse extends ActionResponse implements ToXCont
out.writeString(tokenString);
out.writeTimeValue(expiresIn);
out.writeOptionalString(scope);
if (out.getVersion().onOrAfter(Version.V_6_2_0)) {
out.writeString(refreshToken);
if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) { // TODO change to V_6_5_0 after backport
out.writeOptionalString(refreshToken);
} else if (out.getVersion().onOrAfter(Version.V_6_2_0)) {
if (refreshToken == null) {
out.writeString("");
} else {
out.writeString(refreshToken);
}
}
}
@ -70,7 +76,9 @@ public final class CreateTokenResponse extends ActionResponse implements ToXCont
tokenString = in.readString();
expiresIn = in.readTimeValue();
scope = in.readOptionalString();
if (in.getVersion().onOrAfter(Version.V_6_2_0)) {
if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) { // TODO change to V_6_5_0 after backport
refreshToken = in.readOptionalString();
} else if (in.getVersion().onOrAfter(Version.V_6_2_0)) {
refreshToken = in.readString();
}
}
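Restating the wire-compat rules above in one place (versions taken from these hunks; per the TODOs, the 7.0 gate is meant to drop to V_6_5_0 after backport):

    // stream version                refresh_token on the wire
    // >= V_7_0_0_alpha1 (TODO 6.5)  writeOptionalString / readOptionalString
    // V_6_2_0 .. pre-7.0            writeString ("" substituted for null) / readString
    // <  V_6_2_0                    not written at all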
@ -90,4 +98,20 @@ public final class CreateTokenResponse extends ActionResponse implements ToXCont
}
return builder.endObject();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
CreateTokenResponse that = (CreateTokenResponse) o;
return Objects.equals(tokenString, that.tokenString) &&
Objects.equals(expiresIn, that.expiresIn) &&
Objects.equals(scope, that.scope) &&
Objects.equals(refreshToken, that.refreshToken);
}
@Override
public int hashCode() {
return Objects.hash(tokenString, expiresIn, scope, refreshToken);
}
}


@ -19,6 +19,7 @@ public class ClientReservedRealm {
case UsernamesField.KIBANA_NAME:
case UsernamesField.LOGSTASH_NAME:
case UsernamesField.BEATS_NAME:
case UsernamesField.APM_NAME:
return XPackSettings.RESERVED_REALM_ENABLED_SETTING.get(settings);
default:
return AnonymousUser.isAnonymousUsername(username, settings);


@ -112,6 +112,8 @@ public class ReservedRolesStore {
null, MetadataUtils.DEFAULT_RESERVED_METADATA))
.put(UsernamesField.BEATS_ROLE, new RoleDescriptor(UsernamesField.BEATS_ROLE,
new String[] { "monitor", MonitoringBulkAction.NAME}, null, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
.put(UsernamesField.APM_ROLE, new RoleDescriptor(UsernamesField.APM_ROLE,
new String[] { "monitor", MonitoringBulkAction.NAME}, null, null, MetadataUtils.DEFAULT_RESERVED_METADATA))
.put("machine_learning_user", new RoleDescriptor("machine_learning_user", new String[] { "monitor_ml" },
new RoleDescriptor.IndicesPrivileges[] { RoleDescriptor.IndicesPrivileges.builder().indices(".ml-anomalies*",
".ml-notifications").privileges("view_index_metadata", "read").build() },


@ -0,0 +1,25 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.core.security.user;
import org.elasticsearch.Version;
import org.elasticsearch.protocol.xpack.security.User;
import org.elasticsearch.xpack.core.security.support.MetadataUtils;
/**
* Built-in user for APM server internals. Currently used for APM server monitoring.
*/
public class APMSystemUser extends User {
public static final String NAME = UsernamesField.APM_NAME;
public static final String ROLE_NAME = UsernamesField.APM_ROLE;
public static final Version DEFINED_SINCE = Version.V_6_5_0;
public static final BuiltinUserInfo USER_INFO = new BuiltinUserInfo(NAME, ROLE_NAME, DEFINED_SINCE);
public APMSystemUser(boolean enabled) {
super(NAME, new String[]{ ROLE_NAME }, null, null, MetadataUtils.DEFAULT_RESERVED_METADATA, enabled);
}
}
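For orientation, a small sketch of what this class exposes (accessor names from the User base class; the enabled flag is supplied by the reserved realm, as in the hunks further down):

    User apm = new APMSystemUser(true);
    assert "apm_system".equals(apm.principal());
    assert "apm_system".equals(apm.roles()[0]);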


@ -20,6 +20,8 @@ public final class UsernamesField {
public static final String LOGSTASH_ROLE = "logstash_system";
public static final String BEATS_NAME = "beats_system";
public static final String BEATS_ROLE = "beats_system";
public static final String APM_NAME = "apm_system";
public static final String APM_ROLE = "apm_system";
private UsernamesField() {}
}


@ -9,19 +9,16 @@ import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.fieldcaps.FieldCapabilities;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.test.AbstractSerializingTestCase;
import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.xpack.core.rollup.ConfigTestHelpers.randomTermsGroupConfig;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class TermsGroupConfigSerializingTests extends AbstractSerializingTestCase<TermsGroupConfig> {
@ -77,62 +74,4 @@ public class TermsGroupConfigSerializingTests extends AbstractSerializingTestCas
assertThat(e.validationErrors().get(0), equalTo("The field referenced by a terms group must be a [numeric] or " +
"[keyword/text] type, but found [geo_point] for field [my_field]"));
}
public void testValidateFieldMatchingNotAggregatable() {
ActionRequestValidationException e = new ActionRequestValidationException();
Map<String, Map<String, FieldCapabilities>> responseMap = new HashMap<>();
// Have to mock fieldcaps because the ctors aren't public...
FieldCapabilities fieldCaps = mock(FieldCapabilities.class);
when(fieldCaps.isAggregatable()).thenReturn(false);
responseMap.put("my_field", Collections.singletonMap(getRandomType(), fieldCaps));
TermsGroupConfig config = new TermsGroupConfig("my_field");
config.validateMappings(responseMap, e);
assertThat(e.validationErrors().get(0), equalTo("The field [my_field] must be aggregatable across all indices, but is not."));
}
public void testValidateMatchingField() {
ActionRequestValidationException e = new ActionRequestValidationException();
Map<String, Map<String, FieldCapabilities>> responseMap = new HashMap<>();
String type = getRandomType();
// Have to mock fieldcaps because the ctors aren't public...
FieldCapabilities fieldCaps = mock(FieldCapabilities.class);
when(fieldCaps.isAggregatable()).thenReturn(true);
responseMap.put("my_field", Collections.singletonMap(type, fieldCaps));
TermsGroupConfig config = new TermsGroupConfig("my_field");
config.validateMappings(responseMap, e);
if (e.validationErrors().size() != 0) {
fail(e.getMessage());
}
List<CompositeValuesSourceBuilder<?>> builders = config.toBuilders();
assertThat(builders.size(), equalTo(1));
}
private String getRandomType() {
int n = randomIntBetween(0, 8);
if (n == 0) {
return "keyword";
} else if (n == 1) {
return "text";
} else if (n == 2) {
return "long";
} else if (n == 3) {
return "integer";
} else if (n == 4) {
return "short";
} else if (n == 5) {
return "float";
} else if (n == 6) {
return "double";
} else if (n == 7) {
return "scaled_float";
} else if (n == 8) {
return "half_float";
}
return "long";
}
}


@ -3,7 +3,7 @@
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security.action.token;
package org.elasticsearch.xpack.core.security.action.token;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.common.settings.SecureString;
@ -20,7 +20,7 @@ public class CreateTokenRequestTests extends ESTestCase {
ActionRequestValidationException ve = request.validate();
assertNotNull(ve);
assertEquals(1, ve.validationErrors().size());
assertThat(ve.validationErrors().get(0), containsString("[password, refresh_token]"));
assertThat(ve.validationErrors().get(0), containsString("[password, refresh_token, client_credentials]"));
assertThat(ve.validationErrors().get(0), containsString("grant_type"));
request.setGrantType("password");
@ -72,5 +72,19 @@ public class CreateTokenRequestTests extends ESTestCase {
assertNotNull(ve);
assertEquals(1, ve.validationErrors().size());
assertThat(ve.validationErrors(), hasItem("refresh_token is missing"));
request.setGrantType("client_credentials");
ve = request.validate();
assertNull(ve);
request.setUsername(randomAlphaOfLengthBetween(1, 32));
request.setPassword(new SecureString(randomAlphaOfLengthBetween(1, 32).toCharArray()));
request.setRefreshToken(randomAlphaOfLengthBetween(1, 32));
ve = request.validate();
assertNotNull(ve);
assertEquals(3, ve.validationErrors().size());
assertThat(ve.validationErrors(), hasItem(containsString("username is not supported")));
assertThat(ve.validationErrors(), hasItem(containsString("password is not supported")));
assertThat(ve.validationErrors(), hasItem(containsString("refresh_token is not supported")));
}
}


@ -0,0 +1,92 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.core.security.action.token;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.VersionUtils;
public class CreateTokenResponseTests extends ESTestCase {
public void testSerialization() throws Exception {
CreateTokenResponse response = new CreateTokenResponse(randomAlphaOfLengthBetween(1, 10), TimeValue.timeValueMinutes(20L),
randomBoolean() ? null : "FULL", randomAlphaOfLengthBetween(1, 10));
try (BytesStreamOutput output = new BytesStreamOutput()) {
response.writeTo(output);
try (StreamInput input = output.bytes().streamInput()) {
CreateTokenResponse serialized = new CreateTokenResponse();
serialized.readFrom(input);
assertEquals(response, serialized);
}
}
response = new CreateTokenResponse(randomAlphaOfLengthBetween(1, 10), TimeValue.timeValueMinutes(20L),
randomBoolean() ? null : "FULL", null);
try (BytesStreamOutput output = new BytesStreamOutput()) {
response.writeTo(output);
try (StreamInput input = output.bytes().streamInput()) {
CreateTokenResponse serialized = new CreateTokenResponse();
serialized.readFrom(input);
assertEquals(response, serialized);
}
}
}
public void testSerializationToPre62Version() throws Exception {
CreateTokenResponse response = new CreateTokenResponse(randomAlphaOfLengthBetween(1, 10), TimeValue.timeValueMinutes(20L),
randomBoolean() ? null : "FULL", randomBoolean() ? null : randomAlphaOfLengthBetween(1, 10));
final Version version = VersionUtils.randomVersionBetween(random(), Version.V_6_0_0, Version.V_6_1_4);
try (BytesStreamOutput output = new BytesStreamOutput()) {
output.setVersion(version);
response.writeTo(output);
try (StreamInput input = output.bytes().streamInput()) {
input.setVersion(version);
CreateTokenResponse serialized = new CreateTokenResponse();
serialized.readFrom(input);
assertNull(serialized.getRefreshToken());
assertEquals(response.getTokenString(), serialized.getTokenString());
assertEquals(response.getExpiresIn(), serialized.getExpiresIn());
assertEquals(response.getScope(), serialized.getScope());
}
}
}
public void testSerializationToPost62Pre65Version() throws Exception {
CreateTokenResponse response = new CreateTokenResponse(randomAlphaOfLengthBetween(1, 10), TimeValue.timeValueMinutes(20L),
randomBoolean() ? null : "FULL", randomAlphaOfLengthBetween(1, 10));
final Version version = VersionUtils.randomVersionBetween(random(), Version.V_6_2_0, Version.V_6_4_0);
try (BytesStreamOutput output = new BytesStreamOutput()) {
output.setVersion(version);
response.writeTo(output);
try (StreamInput input = output.bytes().streamInput()) {
input.setVersion(version);
CreateTokenResponse serialized = new CreateTokenResponse();
serialized.readFrom(input);
assertEquals(response, serialized);
}
}
// no refresh token
response = new CreateTokenResponse(randomAlphaOfLengthBetween(1, 10), TimeValue.timeValueMinutes(20L),
randomBoolean() ? null : "FULL", null);
try (BytesStreamOutput output = new BytesStreamOutput()) {
output.setVersion(version);
response.writeTo(output);
try (StreamInput input = output.bytes().streamInput()) {
input.setVersion(version);
CreateTokenResponse serialized = new CreateTokenResponse();
serialized.readFrom(input);
assertEquals("", serialized.getRefreshToken());
assertEquals(response.getTokenString(), serialized.getTokenString());
assertEquals(response.getExpiresIn(), serialized.getExpiresIn());
assertEquals(response.getScope(), serialized.getScope());
}
}
}
}


@ -94,6 +94,7 @@ import org.elasticsearch.xpack.core.security.authz.permission.FieldPermissionsCa
import org.elasticsearch.xpack.core.security.authz.permission.Role;
import org.elasticsearch.xpack.core.security.authz.privilege.ApplicationPrivilege;
import org.elasticsearch.xpack.core.security.authz.privilege.ApplicationPrivilegeDescriptor;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.LogstashSystemUser;
import org.elasticsearch.xpack.core.security.user.SystemUser;
@ -147,6 +148,7 @@ public class ReservedRolesStoreTests extends ESTestCase {
assertThat(ReservedRolesStore.isReserved(XPackUser.ROLE_NAME), is(true));
assertThat(ReservedRolesStore.isReserved(LogstashSystemUser.ROLE_NAME), is(true));
assertThat(ReservedRolesStore.isReserved(BeatsSystemUser.ROLE_NAME), is(true));
assertThat(ReservedRolesStore.isReserved(APMSystemUser.ROLE_NAME), is(true));
}
public void testIngestAdminRole() {
@ -628,6 +630,30 @@ public class ReservedRolesStoreTests extends ESTestCase {
is(false));
}
public void testAPMSystemRole() {
final TransportRequest request = mock(TransportRequest.class);
RoleDescriptor roleDescriptor = new ReservedRolesStore().roleDescriptor(APMSystemUser.ROLE_NAME);
assertNotNull(roleDescriptor);
assertThat(roleDescriptor.getMetadata(), hasEntry("_reserved", true));
Role APMSystemRole = Role.builder(roleDescriptor, null).build();
assertThat(APMSystemRole.cluster().check(ClusterHealthAction.NAME, request), is(true));
assertThat(APMSystemRole.cluster().check(ClusterStateAction.NAME, request), is(true));
assertThat(APMSystemRole.cluster().check(ClusterStatsAction.NAME, request), is(true));
assertThat(APMSystemRole.cluster().check(PutIndexTemplateAction.NAME, request), is(false));
assertThat(APMSystemRole.cluster().check(ClusterRerouteAction.NAME, request), is(false));
assertThat(APMSystemRole.cluster().check(ClusterUpdateSettingsAction.NAME, request), is(false));
assertThat(APMSystemRole.cluster().check(MonitoringBulkAction.NAME, request), is(true));
assertThat(APMSystemRole.runAs().check(randomAlphaOfLengthBetween(1, 30)), is(false));
assertThat(APMSystemRole.indices().allowedIndicesMatcher(IndexAction.NAME).test("foo"), is(false));
assertThat(APMSystemRole.indices().allowedIndicesMatcher(IndexAction.NAME).test(".reporting"), is(false));
assertThat(APMSystemRole.indices().allowedIndicesMatcher("indices:foo").test(randomAlphaOfLengthBetween(8, 24)),
is(false));
}
public void testMachineLearningAdminRole() {
final TransportRequest request = mock(TransportRequest.class);


@ -15,19 +15,35 @@ import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.RangeQueryBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregation;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.DateHistogramValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.HistogramValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder;
import org.elasticsearch.search.aggregations.support.ValueType;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.xpack.core.rollup.RollupField;
import org.elasticsearch.xpack.core.rollup.job.DateHistogramGroupConfig;
import org.elasticsearch.xpack.core.rollup.job.GroupConfig;
import org.elasticsearch.xpack.core.rollup.job.HistogramGroupConfig;
import org.elasticsearch.xpack.core.rollup.job.IndexerState;
import org.elasticsearch.xpack.core.rollup.job.MetricConfig;
import org.elasticsearch.xpack.core.rollup.job.RollupJob;
import org.elasticsearch.xpack.core.rollup.job.RollupJobConfig;
import org.elasticsearch.xpack.core.rollup.job.RollupJobStats;
import org.elasticsearch.xpack.core.rollup.job.TermsGroupConfig;
import org.joda.time.DateTimeZone;
import java.util.ArrayList;
import java.util.Arrays;
@ -38,6 +54,10 @@ import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import static java.util.Collections.singletonList;
import static java.util.Collections.unmodifiableList;
import static org.elasticsearch.xpack.core.rollup.RollupField.formatFieldName;
/**
* An abstract class that builds a rollup index incrementally. A background job can be launched using {@link #maybeTriggerAsyncJob(long)};
* it will create the rollup index from the source index up to the last complete bucket that is allowed to be built (based on the current
@ -392,21 +412,12 @@ public abstract class RollupIndexer {
*/
private CompositeAggregationBuilder createCompositeBuilder(RollupJobConfig config) {
final GroupConfig groupConfig = config.getGroupConfig();
List<CompositeValuesSourceBuilder<?>> builders = new ArrayList<>();
// Add all the agg builders to our request in order: date_histo -> histo -> terms
if (groupConfig != null) {
builders.addAll(groupConfig.getDateHistogram().toBuilders());
if (groupConfig.getHistogram() != null) {
builders.addAll(groupConfig.getHistogram().toBuilders());
}
if (groupConfig.getTerms() != null) {
builders.addAll(groupConfig.getTerms().toBuilders());
}
}
List<CompositeValuesSourceBuilder<?>> builders = createValueSourceBuilders(groupConfig);
CompositeAggregationBuilder composite = new CompositeAggregationBuilder(AGGREGATION_NAME, builders);
config.getMetricsConfig().forEach(m -> m.toBuilders().forEach(composite::subAggregation));
List<AggregationBuilder> aggregations = createAggregationBuilders(config.getMetricsConfig());
aggregations.forEach(composite::subAggregation);
final Map<String, Object> metadata = createMetadata(groupConfig);
if (metadata.isEmpty() == false) {
@ -456,5 +467,112 @@ public abstract class RollupIndexer {
}
return metadata;
}
public static List<CompositeValuesSourceBuilder<?>> createValueSourceBuilders(final GroupConfig groupConfig) {
final List<CompositeValuesSourceBuilder<?>> builders = new ArrayList<>();
// Add all the agg builders to our request in order: date_histo -> histo -> terms
if (groupConfig != null) {
final DateHistogramGroupConfig dateHistogram = groupConfig.getDateHistogram();
builders.addAll(createValueSourceBuilders(dateHistogram));
final HistogramGroupConfig histogram = groupConfig.getHistogram();
builders.addAll(createValueSourceBuilders(histogram));
final TermsGroupConfig terms = groupConfig.getTerms();
builders.addAll(createValueSourceBuilders(terms));
}
return unmodifiableList(builders);
}
public static List<CompositeValuesSourceBuilder<?>> createValueSourceBuilders(final DateHistogramGroupConfig dateHistogram) {
final String dateHistogramField = dateHistogram.getField();
final String dateHistogramName = RollupField.formatIndexerAggName(dateHistogramField, DateHistogramAggregationBuilder.NAME);
final DateHistogramValuesSourceBuilder dateHistogramBuilder = new DateHistogramValuesSourceBuilder(dateHistogramName);
dateHistogramBuilder.dateHistogramInterval(dateHistogram.getInterval());
dateHistogramBuilder.field(dateHistogramField);
dateHistogramBuilder.timeZone(toDateTimeZone(dateHistogram.getTimeZone()));
return singletonList(dateHistogramBuilder);
}
public static List<CompositeValuesSourceBuilder<?>> createValueSourceBuilders(final HistogramGroupConfig histogram) {
final List<CompositeValuesSourceBuilder<?>> builders = new ArrayList<>();
if (histogram != null) {
for (String field : histogram.getFields()) {
final String histogramName = RollupField.formatIndexerAggName(field, HistogramAggregationBuilder.NAME);
final HistogramValuesSourceBuilder histogramBuilder = new HistogramValuesSourceBuilder(histogramName);
histogramBuilder.interval(histogram.getInterval());
histogramBuilder.field(field);
histogramBuilder.missingBucket(true);
builders.add(histogramBuilder);
}
}
return unmodifiableList(builders);
}
public static List<CompositeValuesSourceBuilder<?>> createValueSourceBuilders(final TermsGroupConfig terms) {
final List<CompositeValuesSourceBuilder<?>> builders = new ArrayList<>();
if (terms != null) {
for (String field : terms.getFields()) {
final String termsName = RollupField.formatIndexerAggName(field, TermsAggregationBuilder.NAME);
final TermsValuesSourceBuilder termsBuilder = new TermsValuesSourceBuilder(termsName);
termsBuilder.field(field);
termsBuilder.missingBucket(true);
builders.add(termsBuilder);
}
}
return unmodifiableList(builders);
}
/**
* This returns a set of aggregation builders which represent the configured
* set of metrics. Used to iterate over historical data.
*/
static List<AggregationBuilder> createAggregationBuilders(final List<MetricConfig> metricsConfigs) {
final List<AggregationBuilder> builders = new ArrayList<>();
if (metricsConfigs != null) {
for (MetricConfig metricConfig : metricsConfigs) {
final List<String> metrics = metricConfig.getMetrics();
if (metrics.isEmpty() == false) {
final String field = metricConfig.getField();
for (String metric : metrics) {
ValuesSourceAggregationBuilder.LeafOnly newBuilder;
if (metric.equals(MetricConfig.MIN.getPreferredName())) {
newBuilder = new MinAggregationBuilder(formatFieldName(field, MinAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(MetricConfig.MAX.getPreferredName())) {
newBuilder = new MaxAggregationBuilder(formatFieldName(field, MaxAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(MetricConfig.AVG.getPreferredName())) {
// Avgs are sum + count
newBuilder = new SumAggregationBuilder(formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.VALUE));
ValuesSourceAggregationBuilder.LeafOnly countBuilder
= new ValueCountAggregationBuilder(
formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.COUNT_FIELD), ValueType.NUMERIC);
countBuilder.field(field);
builders.add(countBuilder);
} else if (metric.equals(MetricConfig.SUM.getPreferredName())) {
newBuilder = new SumAggregationBuilder(formatFieldName(field, SumAggregationBuilder.NAME, RollupField.VALUE));
} else if (metric.equals(MetricConfig.VALUE_COUNT.getPreferredName())) {
// TODO allow non-numeric value_counts.
// Hardcoding this is fine for now since the job validation guarantees that all metric fields are numerics
newBuilder = new ValueCountAggregationBuilder(
formatFieldName(field, ValueCountAggregationBuilder.NAME, RollupField.VALUE), ValueType.NUMERIC);
} else {
throw new IllegalArgumentException("Unsupported metric type [" + metric + "]");
}
newBuilder.field(field);
builders.add(newBuilder);
}
}
}
}
return unmodifiableList(builders);
}
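The non-obvious branch above is avg, which never emits an avg aggregation. For a single avg metric, createAggregationBuilders produces two builders:

    // MetricConfig(field, ["avg"]) yields:
    //   a SumAggregationBuilder named formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.VALUE)
    //   a ValueCountAggregationBuilder named formatFieldName(field, AvgAggregationBuilder.NAME, RollupField.COUNT_FIELD)
    // so the average can later be recomposed as rolled-up sum / rolled-up count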
private static DateTimeZone toDateTimeZone(final String timezone) {
try {
return DateTimeZone.forOffsetHours(Integer.parseInt(timezone));
} catch (NumberFormatException e) {
return DateTimeZone.forID(timezone);
}
}
}
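The helper above tries a numeric offset first and falls back to a zone ID; for example:

    toDateTimeZone("-7");               // DateTimeZone.forOffsetHours(-7)
    toDateTimeZone("America/New_York"); // DateTimeZone.forID("America/New_York")
    toDateTimeZone("bogus");            // NumberFormatException caught, then forID throws IllegalArgumentException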


@ -0,0 +1,83 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.rollup.action.job;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.fieldcaps.FieldCapabilities;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.core.rollup.job.TermsGroupConfig;
import org.elasticsearch.xpack.rollup.job.RollupIndexer;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class RollupIndexTests extends ESTestCase {
public void testValidateMatchingField() {
ActionRequestValidationException e = new ActionRequestValidationException();
Map<String, Map<String, FieldCapabilities>> responseMap = new HashMap<>();
String type = getRandomType();
// Have to mock fieldcaps because the ctors aren't public...
FieldCapabilities fieldCaps = mock(FieldCapabilities.class);
when(fieldCaps.isAggregatable()).thenReturn(true);
responseMap.put("my_field", Collections.singletonMap(type, fieldCaps));
TermsGroupConfig config = new TermsGroupConfig("my_field");
config.validateMappings(responseMap, e);
if (e.validationErrors().size() != 0) {
fail(e.getMessage());
}
List<CompositeValuesSourceBuilder<?>> builders = RollupIndexer.createValueSourceBuilders(config);
assertThat(builders.size(), equalTo(1));
}
public void testValidateFieldMatchingNotAggregatable() {
ActionRequestValidationException e = new ActionRequestValidationException();
Map<String, Map<String, FieldCapabilities>> responseMap = new HashMap<>();
// Have to mock fieldcaps because the ctors aren't public...
FieldCapabilities fieldCaps = mock(FieldCapabilities.class);
when(fieldCaps.isAggregatable()).thenReturn(false);
responseMap.put("my_field", Collections.singletonMap(getRandomType(), fieldCaps));
TermsGroupConfig config = new TermsGroupConfig("my_field");
config.validateMappings(responseMap, e);
assertThat(e.validationErrors().get(0), equalTo("The field [my_field] must be aggregatable across all indices, but is not."));
}
private String getRandomType() {
int n = randomIntBetween(0, 8);
if (n == 0) {
return "keyword";
} else if (n == 1) {
return "text";
} else if (n == 2) {
return "long";
} else if (n == 3) {
return "integer";
} else if (n == 4) {
return "short";
} else if (n == 5) {
return "float";
} else if (n == 6) {
return "double";
} else if (n == 7) {
return "scaled_float";
} else if (n == 8) {
return "half_float";
}
return "long";
}
}


@ -21,6 +21,7 @@ import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorTestCase;
@ -57,6 +58,7 @@ import static java.util.Collections.singletonList;
import static org.elasticsearch.xpack.core.rollup.ConfigTestHelpers.randomDateHistogramGroupConfig;
import static org.elasticsearch.xpack.core.rollup.ConfigTestHelpers.randomGroupConfig;
import static org.elasticsearch.xpack.core.rollup.ConfigTestHelpers.randomHistogramGroupConfig;
import static org.elasticsearch.xpack.rollup.job.RollupIndexer.createAggregationBuilders;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@ -101,9 +103,11 @@ public class IndexerUtilsTests extends AggregatorTestCase {
//TODO swap this over to DateHistoConfig.Builder once DateInterval is in
DateHistogramGroupConfig dateHistoGroupConfig = new DateHistogramGroupConfig(timestampField, DateHistogramInterval.DAY);
CompositeAggregationBuilder compositeBuilder =
new CompositeAggregationBuilder(RollupIndexer.AGGREGATION_NAME, dateHistoGroupConfig.toBuilders());
new CompositeAggregationBuilder(RollupIndexer.AGGREGATION_NAME,
RollupIndexer.createValueSourceBuilders(dateHistoGroupConfig));
MetricConfig metricConfig = new MetricConfig("does_not_exist", singletonList("max"));
metricConfig.toBuilders().forEach(compositeBuilder::subAggregation);
List<AggregationBuilder> metricAgg = createAggregationBuilders(singletonList(metricConfig));
metricAgg.forEach(compositeBuilder::subAggregation);
Aggregator aggregator = createAggregator(compositeBuilder, indexSearcher, timestampFieldType, valueFieldType);
aggregator.preCollection();
@ -170,7 +174,8 @@ public class IndexerUtilsTests extends AggregatorTestCase {
singletonList(dateHisto));
MetricConfig metricConfig = new MetricConfig(valueField, singletonList("max"));
metricConfig.toBuilders().forEach(compositeBuilder::subAggregation);
List<AggregationBuilder> metricAgg = createAggregationBuilders(singletonList(metricConfig));
metricAgg.forEach(compositeBuilder::subAggregation);
Aggregator aggregator = createAggregator(compositeBuilder, indexSearcher, timestampFieldType, valueFieldType);
aggregator.preCollection();
@ -226,7 +231,8 @@ public class IndexerUtilsTests extends AggregatorTestCase {
singletonList(terms));
MetricConfig metricConfig = new MetricConfig(valueField, singletonList("max"));
metricConfig.toBuilders().forEach(compositeBuilder::subAggregation);
List<AggregationBuilder> metricAgg = createAggregationBuilders(singletonList(metricConfig));
metricAgg.forEach(compositeBuilder::subAggregation);
Aggregator aggregator = createAggregator(compositeBuilder, indexSearcher, valueFieldType);
aggregator.preCollection();
@ -292,7 +298,8 @@ public class IndexerUtilsTests extends AggregatorTestCase {
singletonList(dateHisto));
MetricConfig metricConfig = new MetricConfig("another_field", Arrays.asList("avg", "sum"));
metricConfig.toBuilders().forEach(compositeBuilder::subAggregation);
List<AggregationBuilder> metricAgg = createAggregationBuilders(singletonList(metricConfig));
metricAgg.forEach(compositeBuilder::subAggregation);
Aggregator aggregator = createAggregator(compositeBuilder, indexSearcher, timestampFieldType, valueFieldType);
aggregator.preCollection();
@ -523,11 +530,13 @@ public class IndexerUtilsTests extends AggregatorTestCase {
// Setup the composite agg
TermsGroupConfig termsGroupConfig = new TermsGroupConfig(valueField);
CompositeAggregationBuilder compositeBuilder = new CompositeAggregationBuilder(RollupIndexer.AGGREGATION_NAME,
termsGroupConfig.toBuilders()).size(numDocs*2);
CompositeAggregationBuilder compositeBuilder =
new CompositeAggregationBuilder(RollupIndexer.AGGREGATION_NAME, RollupIndexer.createValueSourceBuilders(termsGroupConfig))
.size(numDocs*2);
MetricConfig metricConfig = new MetricConfig(metricField, singletonList("max"));
metricConfig.toBuilders().forEach(compositeBuilder::subAggregation);
List<AggregationBuilder> metricAgg = createAggregationBuilders(singletonList(metricConfig));
metricAgg.forEach(compositeBuilder::subAggregation);
Aggregator aggregator = createAggregator(compositeBuilder, indexSearcher, valueFieldType, metricFieldType);
aggregator.preCollection();


@ -1,6 +1,7 @@
evaluationDependsOn(xpackModule('core'))
apply plugin: 'elasticsearch.esplugin'
apply plugin: 'nebula.maven-scm'
esplugin {
name 'x-pack-security'
description 'Elasticsearch Expanded Pack Plugin - Security'


@ -61,7 +61,7 @@ public final class TransportSamlAuthenticateAction extends HandledTransportActio
final TimeValue expiresIn = tokenService.getExpirationDelay();
listener.onResponse(
new SamlAuthenticateResponse(authentication.getUser().principal(), tokenString, tuple.v2(), expiresIn));
}, listener::onFailure), tokenMeta);
}, listener::onFailure), tokenMeta, true);
}, e -> {
logger.debug(() -> new ParameterizedMessage("SamlToken [{}] could not be authenticated", saml), e);
listener.onFailure(e);


@ -22,6 +22,7 @@ import org.elasticsearch.xpack.security.authc.AuthenticationService;
import org.elasticsearch.xpack.security.authc.TokenService;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import java.io.IOException;
import java.util.Collections;
/**
@ -48,29 +49,52 @@ public final class TransportCreateTokenAction extends HandledTransportAction<Cre
@Override
protected void doExecute(Task task, CreateTokenRequest request, ActionListener<CreateTokenResponse> listener) {
CreateTokenRequest.GrantType type = CreateTokenRequest.GrantType.fromString(request.getGrantType());
assert type != null : "type should have been validated in the action";
switch (type) {
case PASSWORD:
authenticateAndCreateToken(request, listener);
break;
case CLIENT_CREDENTIALS:
Authentication authentication = Authentication.getAuthentication(threadPool.getThreadContext());
createToken(request, authentication, authentication, false, listener);
break;
default:
listener.onFailure(new IllegalStateException("grant_type [" + request.getGrantType() +
"] is not supported by the create token action"));
break;
}
}
private void authenticateAndCreateToken(CreateTokenRequest request, ActionListener<CreateTokenResponse> listener) {
Authentication originatingAuthentication = Authentication.getAuthentication(threadPool.getThreadContext());
try (ThreadContext.StoredContext ignore = threadPool.getThreadContext().stashContext()) {
final UsernamePasswordToken authToken = new UsernamePasswordToken(request.getUsername(), request.getPassword());
authenticationService.authenticate(CreateTokenAction.NAME, request, authToken,
ActionListener.wrap(authentication -> {
request.getPassword().close();
tokenService.createUserToken(authentication, originatingAuthentication, ActionListener.wrap(tuple -> {
final String tokenStr = tokenService.getUserTokenString(tuple.v1());
final String scope = getResponseScopeValue(request.getScope());
ActionListener.wrap(authentication -> {
request.getPassword().close();
createToken(request, authentication, originatingAuthentication, true, listener);
}, e -> {
// clear the request password
request.getPassword().close();
listener.onFailure(e);
}));
}
}
final CreateTokenResponse response =
new CreateTokenResponse(tokenStr, tokenService.getExpirationDelay(), scope, tuple.v2());
listener.onResponse(response);
}, e -> {
// clear the request password
request.getPassword().close();
listener.onFailure(e);
}), Collections.emptyMap());
}, e -> {
// clear the request password
request.getPassword().close();
listener.onFailure(e);
}));
private void createToken(CreateTokenRequest request, Authentication authentication, Authentication originatingAuth,
boolean includeRefreshToken, ActionListener<CreateTokenResponse> listener) {
try {
tokenService.createUserToken(authentication, originatingAuth, ActionListener.wrap(tuple -> {
final String tokenStr = tokenService.getUserTokenString(tuple.v1());
final String scope = getResponseScopeValue(request.getScope());
final CreateTokenResponse response =
new CreateTokenResponse(tokenStr, tokenService.getExpirationDelay(), scope, tuple.v2());
listener.onResponse(response);
}, listener::onFailure), Collections.emptyMap(), includeRefreshToken);
} catch (IOException e) {
listener.onFailure(e);
}
}
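The net effect of the refactor above, read together with the validation changes:

    // grant_type=password           -> createToken(..., includeRefreshToken=true,  ...)
    // grant_type=client_credentials -> createToken(..., includeRefreshToken=false, ...)
    // so client_credentials responses carry an access token but no refresh token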


@ -212,7 +212,8 @@ public final class TokenService extends AbstractComponent {
* The created token will be stored in the security index.
*/
public void createUserToken(Authentication authentication, Authentication originatingClientAuth,
ActionListener<Tuple<UserToken, String>> listener, Map<String, Object> metadata) throws IOException {
ActionListener<Tuple<UserToken, String>> listener, Map<String, Object> metadata,
boolean includeRefreshToken) throws IOException {
ensureEnabled();
if (authentication == null) {
listener.onFailure(new IllegalArgumentException("authentication must be provided"));
@ -226,13 +227,14 @@ public final class TokenService extends AbstractComponent {
new Authentication(authentication.getUser(), authentication.getAuthenticatedBy(), authentication.getLookedUpBy(),
version);
final UserToken userToken = new UserToken(version, matchingVersionAuth, expiration, metadata);
final String refreshToken = UUIDs.randomBase64UUID();
final String refreshToken = includeRefreshToken ? UUIDs.randomBase64UUID() : null;
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
builder.startObject();
builder.field("doc_type", "token");
builder.field("creation_time", created.toEpochMilli());
builder.startObject("refresh_token")
if (includeRefreshToken) {
builder.startObject("refresh_token")
.field("token", refreshToken)
.field("invalidated", false)
.field("refreshed", false)
@ -242,6 +244,7 @@ public final class TokenService extends AbstractComponent {
.field("realm", originatingClientAuth.getAuthenticatedBy().getName())
.endObject()
.endObject();
}
builder.startObject("access_token")
.field("invalidated", false)
.field("user_token", userToken)
@ -734,7 +737,7 @@ public final class TokenService extends AbstractComponent {
.request();
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, updateRequest,
ActionListener.<UpdateResponse>wrap(
updateResponse -> createUserToken(authentication, userAuth, listener, metadata),
updateResponse -> createUserToken(authentication, userAuth, listener, metadata, true),
e -> {
Throwable cause = ExceptionsHelper.unwrapCause(e);
if (cause instanceof VersionConflictEngineException ||


@ -24,6 +24,7 @@ import org.elasticsearch.xpack.core.security.authc.esnative.ClientReservedRealm;
import org.elasticsearch.xpack.core.security.authc.support.Hasher;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.support.Exceptions;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.AnonymousUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
@ -149,6 +150,8 @@ public class ReservedRealm extends CachingUsernamePasswordRealm {
return new LogstashSystemUser(userInfo.enabled);
case BeatsSystemUser.NAME:
return new BeatsSystemUser(userInfo.enabled);
case APMSystemUser.NAME:
return new APMSystemUser(userInfo.enabled);
default:
if (anonymousEnabled && anonymousUser.principal().equals(username)) {
return anonymousUser;
@ -177,6 +180,9 @@ public class ReservedRealm extends CachingUsernamePasswordRealm {
userInfo = reservedUserInfos.get(BeatsSystemUser.NAME);
users.add(new BeatsSystemUser(userInfo == null || userInfo.enabled));
userInfo = reservedUserInfos.get(APMSystemUser.NAME);
users.add(new APMSystemUser(userInfo == null || userInfo.enabled));
if (anonymousEnabled) {
users.add(anonymousUser);
}
@ -228,6 +234,8 @@ public class ReservedRealm extends CachingUsernamePasswordRealm {
switch (username) {
case BeatsSystemUser.NAME:
return BeatsSystemUser.DEFINED_SINCE;
case APMSystemUser.NAME:
return APMSystemUser.DEFINED_SINCE;
default:
return Version.V_6_0_0;
}


@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.env.Environment;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.security.support.Validation;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
import org.elasticsearch.xpack.core.security.user.KibanaUser;
@ -63,7 +64,8 @@ import static java.util.Arrays.asList;
public class SetupPasswordTool extends LoggingAwareMultiCommand {
private static final char[] CHARS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789").toCharArray();
public static final List<String> USERS = asList(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
public static final List<String> USERS = asList(ElasticUser.NAME, APMSystemUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME);
private final BiFunction<Environment, Settings, CommandLineHttpClient> clientFunction;
private final CheckedFunction<Environment, KeyStoreWrapper, Exception> keyStoreFunction;


@ -5,11 +5,9 @@
*/
package org.elasticsearch.xpack.security.authc.support;
import org.apache.lucene.util.SetOnce;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.cache.Cache;
import org.elasticsearch.common.cache.CacheBuilder;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ListenableFuture;
@ -30,7 +28,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm implements CachingRealm {
private final Cache<String, ListenableFuture<Tuple<AuthenticationResult, UserWithHash>>> cache;
private final Cache<String, ListenableFuture<UserWithHash>> cache;
private final ThreadPool threadPool;
final Hasher cacheHasher;
@ -38,9 +36,9 @@ public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm
super(type, config);
cacheHasher = Hasher.resolve(CachingUsernamePasswordRealmSettings.CACHE_HASH_ALGO_SETTING.get(config.settings()));
this.threadPool = threadPool;
TimeValue ttl = CachingUsernamePasswordRealmSettings.CACHE_TTL_SETTING.get(config.settings());
final TimeValue ttl = CachingUsernamePasswordRealmSettings.CACHE_TTL_SETTING.get(config.settings());
if (ttl.getNanos() > 0) {
cache = CacheBuilder.<String, ListenableFuture<Tuple<AuthenticationResult, UserWithHash>>>builder()
cache = CacheBuilder.<String, ListenableFuture<UserWithHash>>builder()
.setExpireAfterWrite(ttl)
.setMaximumWeight(CachingUsernamePasswordRealmSettings.CACHE_MAX_USERS_SETTING.get(config.settings()))
.build();
@ -49,6 +47,7 @@ public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm
}
}
@Override
public final void expire(String username) {
if (cache != null) {
logger.trace("invalidating cache for user [{}] in realm [{}]", username, name());
@ -56,6 +55,7 @@ public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm
}
}
@Override
public final void expireAll() {
if (cache != null) {
logger.trace("invalidating cache for all users in realm [{}]", name());
@ -72,108 +72,84 @@ public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm
*/
@Override
public final void authenticate(AuthenticationToken authToken, ActionListener<AuthenticationResult> listener) {
UsernamePasswordToken token = (UsernamePasswordToken) authToken;
final UsernamePasswordToken token = (UsernamePasswordToken) authToken;
try {
if (cache == null) {
doAuthenticate(token, listener);
} else {
authenticateWithCache(token, listener);
}
} catch (Exception e) {
} catch (final Exception e) {
// each realm should handle exceptions; if we get one here it should be considered fatal
listener.onFailure(e);
}
}
/**
* This validates the {@code token} while making sure there is only one inflight
* request to the authentication source. Only successful responses are cached
* and any subsequent requests, bearing the <b>same</b> password, will succeed
* without reaching out to the authentication source. A different password in a
* subsequent request, however, will clear the cache and <b>try</b> to reach
* the authentication source.
*
* @param token The authentication token
* @param listener to be called at completion
*/
private void authenticateWithCache(UsernamePasswordToken token, ActionListener<AuthenticationResult> listener) {
try {
final SetOnce<User> authenticatedUser = new SetOnce<>();
final AtomicBoolean createdAndStartedFuture = new AtomicBoolean(false);
final ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> future = cache.computeIfAbsent(token.principal(), k -> {
final ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> created = new ListenableFuture<>();
if (createdAndStartedFuture.compareAndSet(false, true) == false) {
throw new IllegalStateException("something else already started this. how?");
}
return created;
final AtomicBoolean authenticationInCache = new AtomicBoolean(true);
final ListenableFuture<UserWithHash> listenableCacheEntry = cache.computeIfAbsent(token.principal(), k -> {
authenticationInCache.set(false);
return new ListenableFuture<>();
});
if (createdAndStartedFuture.get()) {
doAuthenticate(token, ActionListener.wrap(result -> {
if (result.isAuthenticated()) {
final User user = result.getUser();
authenticatedUser.set(user);
final UserWithHash userWithHash = new UserWithHash(user, token.credentials(), cacheHasher);
future.onResponse(new Tuple<>(result, userWithHash));
} else {
future.onResponse(new Tuple<>(result, null));
}
}, future::onFailure));
}
future.addListener(ActionListener.wrap(tuple -> {
if (tuple != null) {
final UserWithHash userWithHash = tuple.v2();
final boolean performedAuthentication = createdAndStartedFuture.get() && userWithHash != null &&
tuple.v2().user == authenticatedUser.get();
handleResult(future, createdAndStartedFuture.get(), performedAuthentication, token, tuple, listener);
} else {
handleFailure(future, createdAndStartedFuture.get(), token, new IllegalStateException("unknown error authenticating"),
listener);
}
}, e -> handleFailure(future, createdAndStartedFuture.get(), token, e, listener)),
threadPool.executor(ThreadPool.Names.GENERIC));
} catch (ExecutionException e) {
listener.onResponse(AuthenticationResult.unsuccessful("", e));
}
}
private void handleResult(ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> future, boolean createdAndStartedFuture,
boolean performedAuthentication, UsernamePasswordToken token,
Tuple<AuthenticationResult, UserWithHash> result, ActionListener<AuthenticationResult> listener) {
final AuthenticationResult authResult = result.v1();
if (authResult == null) {
// this was from a lookup; clear and redo
cache.invalidate(token.principal(), future);
authenticateWithCache(token, listener);
} else if (authResult.isAuthenticated()) {
if (performedAuthentication) {
listener.onResponse(authResult);
} else {
UserWithHash userWithHash = result.v2();
if (userWithHash.verify(token.credentials())) {
if (userWithHash.user.enabled()) {
User user = userWithHash.user;
logger.debug("realm [{}] authenticated user [{}], with roles [{}]",
name(), token.principal(), user.roles());
if (authenticationInCache.get()) {
// there is a cached or an inflight authenticate request
listenableCacheEntry.addListener(ActionListener.wrap(authenticatedUserWithHash -> {
if (authenticatedUserWithHash != null && authenticatedUserWithHash.verify(token.credentials())) {
// cached credential hash matches the credential hash for this forestalled request
final User user = authenticatedUserWithHash.user;
logger.debug("realm [{}] authenticated user [{}], with roles [{}], from cache", name(), token.principal(),
user.roles());
listener.onResponse(AuthenticationResult.success(user));
} else {
// re-auth to see if user has been enabled
cache.invalidate(token.principal(), future);
// The inflight request has failed or its credential hash does not match the
// hash of the credential for this forestalled request.
// clear the cache and try the authentication source again because the password
// might have changed there and the locally cached hash has gone stale
cache.invalidate(token.principal(), listenableCacheEntry);
authenticateWithCache(token, listener);
}
} else {
// could be a password change?
cache.invalidate(token.principal(), future);
}, e -> {
// the inflight request failed, so try again, but first (always) make sure cache
// is cleared of the failed authentication
cache.invalidate(token.principal(), listenableCacheEntry);
authenticateWithCache(token, listener);
}
}
} else {
cache.invalidate(token.principal(), future);
if (createdAndStartedFuture) {
listener.onResponse(authResult);
}), threadPool.executor(ThreadPool.Names.GENERIC));
} else {
authenticateWithCache(token, listener);
// attempt authentication against the authentication source
doAuthenticate(token, ActionListener.wrap(authResult -> {
if (authResult.isAuthenticated() && authResult.getUser().enabled()) {
// compute the credential hash of this successful authentication request
final UserWithHash userWithHash = new UserWithHash(authResult.getUser(), token.credentials(), cacheHasher);
// notify any forestalled request listeners; they will not reach to the
// authentication request and instead will use this hash for comparison
listenableCacheEntry.onResponse(userWithHash);
} else {
// notify any forestalled request listeners; they will retry the request
listenableCacheEntry.onResponse(null);
}
// notify the listener of the inflight authentication request; this request is not retried
listener.onResponse(authResult);
}, e -> {
// notify any staved off listeners; they will retry the request
listenableCacheEntry.onFailure(e);
// notify the listener of the inflight authentication request; this request is not retried
listener.onFailure(e);
}));
}
}
}
private void handleFailure(ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> future, boolean createdAndStarted,
UsernamePasswordToken token, Exception e, ActionListener<AuthenticationResult> listener) {
cache.invalidate(token.principal(), future);
if (createdAndStarted) {
} catch (final ExecutionException e) {
listener.onFailure(e);
} else {
authenticateWithCache(token, listener);
}
}
@@ -193,38 +169,57 @@ public abstract class CachingUsernamePasswordRealm extends UsernamePasswordRealm
    @Override
    public final void lookupUser(String username, ActionListener<User> listener) {
        if (cache != null) {
            try {
                ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> future = cache.computeIfAbsent(username, key -> {
                    ListenableFuture<Tuple<AuthenticationResult, UserWithHash>> created = new ListenableFuture<>();
                    doLookupUser(username, ActionListener.wrap(user -> {
                        if (user != null) {
                            UserWithHash userWithHash = new UserWithHash(user, null, null);
                            created.onResponse(new Tuple<>(null, userWithHash));
                        } else {
                            created.onResponse(new Tuple<>(null, null));
                        }
                    }, created::onFailure));
                    return created;
                });
                future.addListener(ActionListener.wrap(tuple -> {
                    if (tuple != null) {
                        if (tuple.v2() == null) {
                            cache.invalidate(username, future);
                            listener.onResponse(null);
                        } else {
                            listener.onResponse(tuple.v2().user);
                        }
                    } else {
                        listener.onResponse(null);
                    }
                }, listener::onFailure), threadPool.executor(ThreadPool.Names.GENERIC));
            } catch (ExecutionException e) {
                listener.onFailure(e);
            }
        } else {
            doLookupUser(username, listener);
        }
    }

    @Override
    public final void lookupUser(String username, ActionListener<User> listener) {
        try {
            if (cache == null) {
                doLookupUser(username, listener);
            } else {
                lookupWithCache(username, listener);
            }
        } catch (final Exception e) {
            // each realm should handle exceptions, if we get one here it should be
            // considered fatal
            listener.onFailure(e);
        }
    }

    private void lookupWithCache(String username, ActionListener<User> listener) {
        try {
            final AtomicBoolean lookupInCache = new AtomicBoolean(true);
            final ListenableFuture<UserWithHash> listenableCacheEntry = cache.computeIfAbsent(username, key -> {
                lookupInCache.set(false);
                return new ListenableFuture<>();
            });
            if (false == lookupInCache.get()) {
                // attempt lookup against the user directory
                doLookupUser(username, ActionListener.wrap(user -> {
                    if (user != null) {
                        // user found
                        final UserWithHash userWithHash = new UserWithHash(user, null, null);
                        // notify forestalled request listeners
                        listenableCacheEntry.onResponse(userWithHash);
                    } else {
                        // user not found, invalidate cache so that subsequent requests are forwarded to
                        // the user directory
                        cache.invalidate(username, listenableCacheEntry);
                        // notify forestalled request listeners
                        listenableCacheEntry.onResponse(null);
                    }
                }, e -> {
                    // the next request should be forwarded, not halted by a failed lookup attempt
                    cache.invalidate(username, listenableCacheEntry);
                    // notify forestalled listeners
                    listenableCacheEntry.onFailure(e);
                }));
            }
            listenableCacheEntry.addListener(ActionListener.wrap(userWithHash -> {
                if (userWithHash != null) {
                    listener.onResponse(userWithHash.user);
                } else {
                    listener.onResponse(null);
                }
            }, listener::onFailure), threadPool.executor(ThreadPool.Names.GENERIC));
        } catch (final ExecutionException e) {
            listener.onFailure(e);
        }
    }
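Taken together, the refactored methods implement a single-flight cache: the first request for a principal installs a ListenableFuture in the cache and alone contacts the authentication source, while concurrent ("forestalled") requests merely subscribe to that future. A minimal sketch of the pattern using only JDK types; all names below are illustrative, not the Elasticsearch classes above:

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch of the single-flight cache idea behind the refactored realm.
    class SingleFlightAuthCache {
        private final Map<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

        CompletableFuture<String> authenticate(String principal) {
            AtomicBoolean owner = new AtomicBoolean(false);
            // the first caller installs the future; everyone else just subscribes to it
            CompletableFuture<String> entry = cache.computeIfAbsent(principal, k -> {
                owner.set(true);
                return new CompletableFuture<>();
            });
            if (owner.get()) {
                // stand-in for the slow call to the authentication source
                entry.complete("hash-of-credentials-for-" + principal);
            }
            return entry;
        }

        // a failed or stale entry must be removed so the next request retries the source
        void invalidate(String principal, CompletableFuture<String> entry) {
            cache.remove(principal, entry);
        }

        public static void main(String[] args) {
            SingleFlightAuthCache c = new SingleFlightAuthCache();
            c.authenticate("joe").thenAccept(hash -> System.out.println("joe -> " + hash));
            c.authenticate("joe").thenAccept(hash -> System.out.println("coalesced -> " + hash));
        }
    }

The invalidate-by-entry signature mirrors the real code's cache.invalidate(key, listenableCacheEntry): removal is conditional on the exact entry, so a retry that has already installed a fresh future is not clobbered.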

View File

@@ -12,6 +12,7 @@ import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
import org.elasticsearch.xpack.core.security.user.KibanaUser;
@@ -88,7 +89,7 @@ public abstract class NativeRealmIntegTestCase extends SecurityIntegTestCase {
RequestOptions.Builder optionsBuilder = RequestOptions.DEFAULT.toBuilder();
optionsBuilder.addHeader("Authorization", UsernamePasswordToken.basicAuthHeaderValue(ElasticUser.NAME, reservedPassword));
RequestOptions options = optionsBuilder.build();
for (String username : Arrays.asList(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME)) {
for (String username : Arrays.asList(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME, APMSystemUser.NAME)) {
Request request = new Request("PUT", "/_xpack/security/user/" + username + "/_password");
request.setJsonEntity("{\"password\": \"" + new String(reservedPassword.getChars()) + "\"}");
request.setOptions(options);

View File

@@ -316,7 +316,7 @@ public class TransportSamlInvalidateSessionActionTests extends SamlTestCase {
new RealmRef("native", NativeRealmSettings.TYPE, "node01"), null);
final Map<String, Object> metadata = samlRealm.createTokenMetadata(nameId, session);
final PlainActionFuture<Tuple<UserToken, String>> future = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, future, metadata);
tokenService.createUserToken(authentication, authentication, future, metadata, true);
return future.actionGet();
}

View File

@@ -222,7 +222,7 @@ public class TransportSamlLogoutActionTests extends SamlTestCase {
new SamlNameId(NameID.TRANSIENT, nameId, null, null, null), session);
final PlainActionFuture<Tuple<UserToken, String>> future = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, future, tokenMetaData);
tokenService.createUserToken(authentication, authentication, future, tokenMetaData, true);
final UserToken userToken = future.actionGet().v1();
mockGetTokenFromId(userToken, client);
final String tokenString = tokenService.getUserTokenString(userToken);

View File

@@ -0,0 +1,195 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security.action.token;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetAction;
import org.elasticsearch.action.get.GetRequestBuilder;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetAction;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetRequestBuilder;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexAction;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexRequestBuilder;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.action.update.UpdateAction;
import org.elasticsearch.action.update.UpdateRequestBuilder;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.protocol.xpack.security.User;
import org.elasticsearch.test.ClusterServiceUtils;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.security.action.token.CreateTokenAction;
import org.elasticsearch.xpack.core.security.action.token.CreateTokenRequest;
import org.elasticsearch.xpack.core.security.action.token.CreateTokenResponse;
import org.elasticsearch.xpack.core.security.authc.Authentication;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.security.authc.AuthenticationService;
import org.elasticsearch.xpack.security.authc.TokenService;
import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.After;
import org.junit.Before;
import java.time.Clock;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyString;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class TransportCreateTokenActionTests extends ESTestCase {
private static final Settings SETTINGS = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), "TokenServiceTests")
.put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();
private ThreadPool threadPool;
private Client client;
private SecurityIndexManager securityIndex;
private ClusterService clusterService;
private AtomicReference<IndexRequest> idxReqReference;
private AuthenticationService authenticationService;
@Before
public void setupClient() {
threadPool = new TestThreadPool(getTestName());
client = mock(Client.class);
idxReqReference = new AtomicReference<>();
authenticationService = mock(AuthenticationService.class);
when(client.threadPool()).thenReturn(threadPool);
when(client.settings()).thenReturn(SETTINGS);
doAnswer(invocationOnMock -> {
GetRequestBuilder builder = new GetRequestBuilder(client, GetAction.INSTANCE);
builder.setIndex((String) invocationOnMock.getArguments()[0])
.setType((String) invocationOnMock.getArguments()[1])
.setId((String) invocationOnMock.getArguments()[2]);
return builder;
}).when(client).prepareGet(anyString(), anyString(), anyString());
when(client.prepareMultiGet()).thenReturn(new MultiGetRequestBuilder(client, MultiGetAction.INSTANCE));
doAnswer(invocationOnMock -> {
ActionListener<MultiGetResponse> listener = (ActionListener<MultiGetResponse>) invocationOnMock.getArguments()[1];
MultiGetResponse response = mock(MultiGetResponse.class);
MultiGetItemResponse[] responses = new MultiGetItemResponse[2];
when(response.getResponses()).thenReturn(responses);
GetResponse oldGetResponse = mock(GetResponse.class);
when(oldGetResponse.isExists()).thenReturn(false);
responses[0] = new MultiGetItemResponse(oldGetResponse, null);
GetResponse getResponse = mock(GetResponse.class);
responses[1] = new MultiGetItemResponse(getResponse, null);
when(getResponse.isExists()).thenReturn(false);
listener.onResponse(response);
return Void.TYPE;
}).when(client).multiGet(any(MultiGetRequest.class), any(ActionListener.class));
when(client.prepareIndex(any(String.class), any(String.class), any(String.class)))
.thenReturn(new IndexRequestBuilder(client, IndexAction.INSTANCE));
when(client.prepareUpdate(any(String.class), any(String.class), any(String.class)))
.thenReturn(new UpdateRequestBuilder(client, UpdateAction.INSTANCE));
doAnswer(invocationOnMock -> {
idxReqReference.set((IndexRequest) invocationOnMock.getArguments()[1]);
ActionListener<IndexResponse> responseActionListener = (ActionListener<IndexResponse>) invocationOnMock.getArguments()[2];
responseActionListener.onResponse(new IndexResponse());
return null;
}).when(client).execute(eq(IndexAction.INSTANCE), any(IndexRequest.class), any(ActionListener.class));
// setup lifecycle service
securityIndex = mock(SecurityIndexManager.class);
doAnswer(invocationOnMock -> {
Runnable runnable = (Runnable) invocationOnMock.getArguments()[1];
runnable.run();
return null;
}).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
doAnswer(invocationOnMock -> {
UsernamePasswordToken token = (UsernamePasswordToken) invocationOnMock.getArguments()[2];
User user = new User(token.principal());
Authentication authentication = new Authentication(user, new Authentication.RealmRef("fake", "mock", "n1"), null);
authentication.writeToContext(threadPool.getThreadContext());
ActionListener<Authentication> authListener = (ActionListener<Authentication>) invocationOnMock.getArguments()[3];
authListener.onResponse(authentication);
return Void.TYPE;
}).when(authenticationService).authenticate(eq(CreateTokenAction.NAME), any(CreateTokenRequest.class),
any(UsernamePasswordToken.class), any(ActionListener.class));
this.clusterService = ClusterServiceUtils.createClusterService(threadPool);
}
@After
public void stopThreadPool() throws Exception {
if (threadPool != null) {
terminate(threadPool);
}
}
public void testClientCredentialsCreatesWithoutRefreshToken() throws Exception {
final TokenService tokenService = new TokenService(SETTINGS, Clock.systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe"), new Authentication.RealmRef("realm", "type", "node"), null);
authentication.writeToContext(threadPool.getThreadContext());
final TransportCreateTokenAction action = new TransportCreateTokenAction(SETTINGS, threadPool,
mock(TransportService.class), new ActionFilters(Collections.emptySet()), tokenService,
authenticationService);
final CreateTokenRequest createTokenRequest = new CreateTokenRequest();
createTokenRequest.setGrantType("client_credentials");
PlainActionFuture<CreateTokenResponse> tokenResponseFuture = new PlainActionFuture<>();
action.doExecute(null, createTokenRequest, tokenResponseFuture);
CreateTokenResponse createTokenResponse = tokenResponseFuture.get();
assertNull(createTokenResponse.getRefreshToken());
assertNotNull(createTokenResponse.getTokenString());
assertNotNull(idxReqReference.get());
Map<String, Object> sourceMap = idxReqReference.get().sourceAsMap();
assertNotNull(sourceMap);
assertNotNull(sourceMap.get("access_token"));
assertNull(sourceMap.get("refresh_token"));
}
public void testPasswordGrantTypeCreatesWithRefreshToken() throws Exception {
final TokenService tokenService = new TokenService(SETTINGS, Clock.systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe"), new Authentication.RealmRef("realm", "type", "node"), null);
authentication.writeToContext(threadPool.getThreadContext());
final TransportCreateTokenAction action = new TransportCreateTokenAction(SETTINGS, threadPool,
mock(TransportService.class), new ActionFilters(Collections.emptySet()), tokenService,
authenticationService);
final CreateTokenRequest createTokenRequest = new CreateTokenRequest();
createTokenRequest.setGrantType("password");
createTokenRequest.setUsername("user");
createTokenRequest.setPassword(new SecureString("password".toCharArray()));
PlainActionFuture<CreateTokenResponse> tokenResponseFuture = new PlainActionFuture<>();
action.doExecute(null, createTokenRequest, tokenResponseFuture);
CreateTokenResponse createTokenResponse = tokenResponseFuture.get();
assertNotNull(createTokenResponse.getRefreshToken());
assertNotNull(createTokenResponse.getTokenString());
assertNotNull(idxReqReference.get());
Map<String, Object> sourceMap = idxReqReference.get().sourceAsMap();
assertNotNull(sourceMap);
assertNotNull(sourceMap.get("access_token"));
assertNotNull(sourceMap.get("refresh_token"));
}
}
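The setup above repeats one Mockito idiom: doAnswer captures a request argument and completes its listener synchronously, so the action under test runs without a cluster. A stripped-down sketch of that idiom against a hypothetical listener-style interface (nothing here is from the test itself):

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Consumer;
    import static org.mockito.Matchers.any;
    import static org.mockito.Mockito.doAnswer;
    import static org.mockito.Mockito.mock;

    // Capture the request, then drive the callback inline with a canned response.
    public class DoAnswerIdiomExample {
        interface AsyncService {
            void execute(String request, Consumer<String> onResponse);
        }

        public static void main(String[] args) {
            AsyncService service = mock(AsyncService.class);
            AtomicReference<String> captured = new AtomicReference<>();
            doAnswer(invocation -> {
                captured.set((String) invocation.getArguments()[0]);
                @SuppressWarnings("unchecked")
                Consumer<String> listener = (Consumer<String>) invocation.getArguments()[1];
                listener.accept("canned response");
                return null;
            }).when(service).execute(any(String.class), any(Consumer.class));

            service.execute("ping", response -> System.out.println(captured.get() + " -> " + response));
        }
    }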

View File

@@ -896,7 +896,7 @@ public class AuthenticationServiceTests extends ESTestCase {
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
try (ThreadContext.StoredContext ctx = threadContext.stashContext()) {
Authentication originatingAuth = new Authentication(new User("creator"), new RealmRef("test", "test", "test"), null);
tokenService.createUserToken(expected, originatingAuth, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(expected, originatingAuth, tokenFuture, Collections.emptyMap(), true);
}
String token = tokenService.getUserTokenString(tokenFuture.get().v1());
mockGetTokenFromId(tokenFuture.get().v1(), client);
@@ -975,7 +975,7 @@ public class AuthenticationServiceTests extends ESTestCase {
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
try (ThreadContext.StoredContext ctx = threadContext.stashContext()) {
Authentication originatingAuth = new Authentication(new User("creator"), new RealmRef("test", "test", "test"), null);
tokenService.createUserToken(expected, originatingAuth, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(expected, originatingAuth, tokenFuture, Collections.emptyMap(), true);
}
String token = tokenService.getUserTokenString(tokenFuture.get().v1());
mockGetTokenFromId(tokenFuture.get().v1(), client);

View File

@@ -341,6 +341,39 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
assertEquals(SecuritySettingsSource.TEST_USER_NAME, response.user().principal());
}
public void testClientCredentialsGrant() throws Exception {
Client client = client().filterWithHeader(Collections.singletonMap("Authorization",
UsernamePasswordToken.basicAuthHeaderValue(SecuritySettingsSource.TEST_SUPERUSER,
SecuritySettingsSourceField.TEST_PASSWORD_SECURE_STRING)));
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse createTokenResponse = securityClient.prepareCreateToken()
.setGrantType("client_credentials")
.get();
assertNull(createTokenResponse.getRefreshToken());
AuthenticateRequest request = new AuthenticateRequest();
request.username(SecuritySettingsSource.TEST_SUPERUSER);
PlainActionFuture<AuthenticateResponse> authFuture = new PlainActionFuture<>();
client.filterWithHeader(Collections.singletonMap("Authorization", "Bearer " + createTokenResponse.getTokenString()))
.execute(AuthenticateAction.INSTANCE, request, authFuture);
AuthenticateResponse response = authFuture.get();
assertEquals(SecuritySettingsSource.TEST_SUPERUSER, response.user().principal());
// invalidate
PlainActionFuture<InvalidateTokenResponse> invalidateResponseFuture = new PlainActionFuture<>();
InvalidateTokenRequest invalidateTokenRequest =
new InvalidateTokenRequest(createTokenResponse.getTokenString(), InvalidateTokenRequest.Type.ACCESS_TOKEN);
securityClient.invalidateToken(invalidateTokenRequest, invalidateResponseFuture);
assertTrue(invalidateResponseFuture.get().isCreated());
ElasticsearchSecurityException e = expectThrows(ElasticsearchSecurityException.class, () -> {
PlainActionFuture<AuthenticateResponse> responseFuture = new PlainActionFuture<>();
client.filterWithHeader(Collections.singletonMap("Authorization", "Bearer " + createTokenResponse.getTokenString()))
.execute(AuthenticateAction.INSTANCE, request, responseFuture);
responseFuture.actionGet();
});
}
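For reference, the grant exercised above maps to a single REST call. A hedged sketch with the low-level REST client; the /_xpack/security/oauth2/token path and the bare JSON body are assumptions drawn from the 6.x token API documentation rather than from this diff, and the basic-auth header that would authorize the call is omitted:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class ClientCredentialsGrantSketch {
        public static void main(String[] args) throws Exception {
            try (RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
                Request request = new Request("POST", "/_xpack/security/oauth2/token");
                // client_credentials sends no username/password in the body; the credentials
                // already on the connection authorize the grant, and the response carries an
                // access_token but no refresh_token (hence the assertNull above)
                request.setJsonEntity("{\"grant_type\":\"client_credentials\"}");
                Response response = restClient.performRequest(request);
                System.out.println(response.getStatusLine());
            }
        }
    }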
@Before
public void waitForSecurityIndexWritable() throws Exception {
assertSecurityIndexActive();

View File

@@ -157,7 +157,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);
@@ -203,7 +203,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);
@@ -227,7 +227,7 @@ public class TokenServiceTests extends ESTestCase {
}
PlainActionFuture<Tuple<UserToken, String>> newTokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, newTokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, newTokenFuture, Collections.emptyMap(), true);
final UserToken newToken = newTokenFuture.get().v1();
assertNotNull(newToken);
assertNotEquals(tokenService.getUserTokenString(newToken), tokenService.getUserTokenString(token));
@@ -262,7 +262,7 @@ public class TokenServiceTests extends ESTestCase {
otherTokenService.refreshMetaData(tokenService.getTokenMetaData());
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);
@@ -292,7 +292,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);
@@ -322,7 +322,7 @@ public class TokenServiceTests extends ESTestCase {
}
PlainActionFuture<Tuple<UserToken, String>> newTokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, newTokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, newTokenFuture, Collections.emptyMap(), true);
final UserToken newToken = newTokenFuture.get().v1();
assertNotNull(newToken);
assertNotEquals(tokenService.getUserTokenString(newToken), tokenService.getUserTokenString(token));
@@ -353,7 +353,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);
@@ -383,7 +383,7 @@ public class TokenServiceTests extends ESTestCase {
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
UserToken token = tokenFuture.get().v1();
assertThat(tokenService.getUserTokenString(token), notNullValue());
@@ -397,7 +397,7 @@ public class TokenServiceTests extends ESTestCase {
new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
doAnswer(invocationOnMock -> {
@@ -451,7 +451,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(tokenServiceEnabledSettings, clock, client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
mockGetTokenFromId(token);
@@ -501,7 +501,8 @@ public class TokenServiceTests extends ESTestCase {
.put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), false)
.build(),
Clock.systemUTC(), client, securityIndex, clusterService);
IllegalStateException e = expectThrows(IllegalStateException.class, () -> tokenService.createUserToken(null, null, null, null));
IllegalStateException e = expectThrows(IllegalStateException.class,
() -> tokenService.createUserToken(null, null, null, null, true));
assertEquals("tokens are not enabled", e.getMessage());
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
@@ -559,7 +560,7 @@ public class TokenServiceTests extends ESTestCase {
new TokenService(tokenServiceEnabledSettings, systemUTC(), client, securityIndex, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap());
tokenService.createUserToken(authentication, authentication, tokenFuture, Collections.emptyMap(), true);
final UserToken token = tokenFuture.get().v1();
assertNotNull(token);
mockGetTokenFromId(token);

View File

@@ -26,6 +26,7 @@ import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.core.security.authc.AuthenticationResult;
import org.elasticsearch.xpack.core.security.authc.support.Hasher;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
import org.elasticsearch.xpack.core.security.user.KibanaUser;
@@ -81,7 +82,8 @@ public class NativeUsersStoreTests extends ESTestCase {
public void testPasswordUpsertWhenSetEnabledOnReservedUser() throws Exception {
final NativeUsersStore nativeUsersStore = startNativeUsersStore();
final String user = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
final String user = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME, APMSystemUser.NAME);
final PlainActionFuture<Void> future = new PlainActionFuture<>();
nativeUsersStore.setEnabled(user, true, WriteRequest.RefreshPolicy.IMMEDIATE, future);
@@ -99,7 +101,8 @@ public class NativeUsersStoreTests extends ESTestCase {
public void testBlankPasswordInIndexImpliesDefaultPassword() throws Exception {
final NativeUsersStore nativeUsersStore = startNativeUsersStore();
final String user = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
final String user = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME, APMSystemUser.NAME);
final Map<String, Object> values = new HashMap<>();
values.put(ENABLED_FIELD, Boolean.TRUE);
values.put(PASSWORD_FIELD, BLANK_PASSWORD);

View File

@@ -13,6 +13,7 @@ import org.elasticsearch.test.NativeRealmIntegTestCase;
import org.elasticsearch.xpack.core.security.action.user.ChangePasswordResponse;
import org.elasticsearch.xpack.core.security.authc.support.Hasher;
import org.elasticsearch.xpack.core.security.client.SecurityClient;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
import org.elasticsearch.xpack.core.security.user.KibanaUser;
@@ -20,6 +21,7 @@ import org.elasticsearch.xpack.core.security.user.LogstashSystemUser;
import org.junit.BeforeClass;
import java.util.Arrays;
import java.util.List;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
@@ -49,7 +51,9 @@ public class ReservedRealmIntegTests extends NativeRealmIntegTestCase {
}
public void testAuthenticate() {
for (String username : Arrays.asList(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME)) {
final List<String> usernames = Arrays.asList(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME, APMSystemUser.NAME);
for (String username : usernames) {
ClusterHealthResponse response = client()
.filterWithHeader(singletonMap("Authorization", basicAuthHeaderValue(username, getReservedPassword())))
.admin()
@@ -67,7 +71,9 @@ public class ReservedRealmIntegTests extends NativeRealmIntegTestCase {
*/
public void testAuthenticateAfterEnablingUser() {
final SecurityClient c = securityClient();
for (String username : Arrays.asList(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME)) {
final List<String> usernames = Arrays.asList(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME, APMSystemUser.NAME);
for (String username : usernames) {
c.prepareSetEnabled(username, true).get();
ClusterHealthResponse response = client()
.filterWithHeader(singletonMap("Authorization", basicAuthHeaderValue(username, getReservedPassword())))
@@ -81,7 +87,8 @@ public class ReservedRealmIntegTests extends NativeRealmIntegTestCase {
}
public void testChangingPassword() {
String username = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
String username = randomFrom(ElasticUser.NAME, KibanaUser.NAME, LogstashSystemUser.NAME,
BeatsSystemUser.NAME, APMSystemUser.NAME);
final char[] newPassword = "supersecretvalue".toCharArray();
if (randomBoolean()) {

View File

@@ -21,6 +21,7 @@ import org.elasticsearch.xpack.core.security.authc.AuthenticationResult;
import org.elasticsearch.xpack.core.security.authc.esnative.ClientReservedRealm;
import org.elasticsearch.xpack.core.security.authc.support.Hasher;
import org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.core.security.user.APMSystemUser;
import org.elasticsearch.xpack.core.security.user.AnonymousUser;
import org.elasticsearch.xpack.core.security.user.BeatsSystemUser;
import org.elasticsearch.xpack.core.security.user.ElasticUser;
@@ -262,7 +263,8 @@ public class ReservedRealmTests extends ESTestCase {
PlainActionFuture<Collection<User>> userFuture = new PlainActionFuture<>();
reservedRealm.users(userFuture);
assertThat(userFuture.actionGet(),
containsInAnyOrder(new ElasticUser(true), new KibanaUser(true), new LogstashSystemUser(true), new BeatsSystemUser(true)));
containsInAnyOrder(new ElasticUser(true), new KibanaUser(true), new LogstashSystemUser(true),
new BeatsSystemUser(true), new APMSystemUser(true)));
}
public void testGetUsersDisabled() {
@@ -394,7 +396,7 @@ public class ReservedRealmTests extends ESTestCase {
new AnonymousUser(Settings.EMPTY), securityIndex, threadPool);
PlainActionFuture<AuthenticationResult> listener = new PlainActionFuture<>();
final String principal = randomFrom(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
final String principal = randomFrom(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME, APMSystemUser.NAME);
doAnswer((i) -> {
ActionListener callback = (ActionListener) i.getArguments()[1];
callback.onResponse(null);
@@ -416,14 +418,15 @@ public class ReservedRealmTests extends ESTestCase {
new AnonymousUser(Settings.EMPTY), securityIndex, threadPool);
PlainActionFuture<AuthenticationResult> listener = new PlainActionFuture<>();
final String principal = randomFrom(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME);
final String principal = randomFrom(KibanaUser.NAME, LogstashSystemUser.NAME, BeatsSystemUser.NAME, APMSystemUser.NAME);
reservedRealm.doAuthenticate(new UsernamePasswordToken(principal, mockSecureSettings.getString("bootstrap.password")), listener);
final AuthenticationResult result = listener.get();
assertThat(result.getStatus(), is(AuthenticationResult.Status.TERMINATE));
}
private User randomReservedUser(boolean enabled) {
return randomFrom(new ElasticUser(enabled), new KibanaUser(enabled), new LogstashSystemUser(enabled), new BeatsSystemUser(enabled));
return randomFrom(new ElasticUser(enabled), new KibanaUser(enabled), new LogstashSystemUser(enabled),
new BeatsSystemUser(enabled), new APMSystemUser(enabled));
}
/*
@@ -452,6 +455,10 @@ public class ReservedRealmTests extends ESTestCase {
assertThat(versionPredicate.test(Version.V_6_2_3), is(false));
assertThat(versionPredicate.test(Version.V_6_3_0), is(true));
break;
case APMSystemUser.NAME:
assertThat(versionPredicate.test(Version.V_6_4_0), is(false));
assertThat(versionPredicate.test(Version.V_6_5_0), is(true));
break;
default:
assertThat(versionPredicate.test(Version.V_6_3_0), is(true));
break;

View File

@@ -25,7 +25,8 @@ public class GroupByColumnKey extends GroupByKey {
public TermsValuesSourceBuilder asValueSource() {
return new TermsValuesSourceBuilder(id())
.field(fieldName())
.order(direction().asOrder());
.order(direction().asOrder())
.missingBucket(true);
}
@Override

View File

@@ -44,7 +44,8 @@ public class GroupByDateKey extends GroupByKey {
return new DateHistogramValuesSourceBuilder(id())
.field(fieldName())
.dateHistogramInterval(new DateHistogramInterval(interval))
.timeZone(DateTimeZone.forTimeZone(timeZone));
.timeZone(DateTimeZone.forTimeZone(timeZone))
.missingBucket(true);
}
@Override

View File

@@ -36,7 +36,8 @@ public class GroupByScriptKey extends GroupByKey {
public TermsValuesSourceBuilder asValueSource() {
TermsValuesSourceBuilder builder = new TermsValuesSourceBuilder(id())
.script(script.toPainless())
.order(direction().asOrder());
.order(direction().asOrder())
.missingBucket(true);
if (script.outputType().isNumeric()) {
builder.valueType(ValueType.NUMBER);
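All three SQL GroupBy keys now ask the composite aggregation for a missing bucket, which is what lets the new agg_nulls specs further below group documents whose key field is absent. A small sketch of the underlying aggregation API; the builder types are the real composite-source classes, while the aggregation and field names are illustrative:

    import java.util.Arrays;

    import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
    import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
    import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;

    public class MissingBucketExample {
        public static void main(String[] args) {
            // documents without a value for "gender" now land in a null-keyed bucket
            // instead of being dropped from the composite result
            TermsValuesSourceBuilder gender = new TermsValuesSourceBuilder("gender")
                    .field("gender")
                    .missingBucket(true);
            CompositeAggregationBuilder composite = new CompositeAggregationBuilder("groups",
                    Arrays.<CompositeValuesSourceBuilder<?>>asList(gender));
            System.out.println(composite);
        }
    }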

View File

@@ -158,6 +158,7 @@ subprojects {
} else {
String systemKeyFile = version.before('6.3.0') ? 'x-pack/system_key' : 'system_key'
extraConfigFile systemKeyFile, "${mainProject.projectDir}/src/test/resources/system_key"
keystoreSetting 'xpack.security.authc.token.passphrase', 'token passphrase'
}
setting 'xpack.watcher.encrypt_sensitive_data', 'true'
}
@@ -199,6 +200,9 @@ subprojects {
setting 'xpack.watcher.encrypt_sensitive_data', 'true'
keystoreFile 'xpack.watcher.encryption_key', "${mainProject.projectDir}/src/test/resources/system_key"
}
if (version.before('6.0.0')) {
keystoreSetting 'xpack.security.authc.token.passphrase', 'token passphrase'
}
}
}

View File

@@ -98,7 +98,7 @@ public class SetupPasswordToolIT extends ESRestTestCase {
}
});
assertEquals(4, userPasswordMap.size());
assertEquals(5, userPasswordMap.size());
userPasswordMap.entrySet().forEach(entry -> {
final String basicHeader = "Basic " +
Base64.getEncoder().encodeToString((entry.getKey() + ":" + entry.getValue()).getBytes(StandardCharsets.UTF_8));

View File

@@ -42,14 +42,15 @@ public class DataLoader {
}
protected static void loadEmpDatasetIntoEs(RestClient client) throws Exception {
loadEmpDatasetIntoEs(client, "test_emp");
loadEmpDatasetIntoEs(client, "test_emp_copy");
loadEmpDatasetIntoEs(client, "test_emp", "employees");
loadEmpDatasetIntoEs(client, "test_emp_copy", "employees");
loadEmpDatasetIntoEs(client, "test_emp_with_nulls", "employees_with_nulls");
makeAlias(client, "test_alias", "test_emp", "test_emp_copy");
makeAlias(client, "test_alias_emp", "test_emp", "test_emp_copy");
}
public static void loadDocsDatasetIntoEs(RestClient client) throws Exception {
loadEmpDatasetIntoEs(client, "emp");
loadEmpDatasetIntoEs(client, "emp", "employees");
loadLibDatasetIntoEs(client, "library");
makeAlias(client, "employees", "emp");
}
@@ -62,7 +63,7 @@ public class DataLoader {
.endObject();
}
protected static void loadEmpDatasetIntoEs(RestClient client, String index) throws Exception {
protected static void loadEmpDatasetIntoEs(RestClient client, String index, String fileName) throws Exception {
Request request = new Request("PUT", "/" + index);
XContentBuilder createIndex = JsonXContent.contentBuilder().startObject();
createIndex.startObject("settings");
@@ -129,15 +130,18 @@ public class DataLoader {
request = new Request("POST", "/" + index + "/emp/_bulk");
request.addParameter("refresh", "true");
StringBuilder bulk = new StringBuilder();
csvToLines("employees", (titles, fields) -> {
csvToLines(fileName, (titles, fields) -> {
bulk.append("{\"index\":{}}\n");
bulk.append('{');
String emp_no = fields.get(1);
            for (int f = 0; f < fields.size(); f++) {
                if (f != 0) {
                    bulk.append(',');
                }
                bulk.append('"').append(titles.get(f)).append("\":\"").append(fields.get(f)).append('"');
                // an empty value in the csv file is treated as 'null', thus skipping it in the bulk request
                if (fields.get(f).trim().length() > 0) {
                    if (f != 0) {
                        bulk.append(',');
                    }
                    bulk.append('"').append(titles.get(f)).append("\":\"").append(fields.get(f)).append('"');
                }
            }
// append department
List<List<String>> list = dep_emp.get(emp_no);

View File

@@ -25,7 +25,10 @@ public abstract class SqlSpecTestCase extends SpecBaseIntegrationTestCase {
private String query;
@ClassRule
public static LocalH2 H2 = new LocalH2((c) -> c.createStatement().execute("RUNSCRIPT FROM 'classpath:/setup_test_emp.sql'"));
public static LocalH2 H2 = new LocalH2((c) -> {
c.createStatement().execute("RUNSCRIPT FROM 'classpath:/setup_test_emp.sql'");
c.createStatement().execute("RUNSCRIPT FROM 'classpath:/setup_test_emp_with_nulls.sql'");
});
@ParametersFactory(argumentFormatting = PARAM_FORMATTING)
public static List<Object[]> readScriptSpec() throws Exception {
@@ -39,6 +42,7 @@ public abstract class SqlSpecTestCase extends SpecBaseIntegrationTestCase {
tests.addAll(readScriptSpec("/arithmetic.sql-spec", parser));
tests.addAll(readScriptSpec("/string-functions.sql-spec", parser));
tests.addAll(readScriptSpec("/case-functions.sql-spec", parser));
tests.addAll(readScriptSpec("/agg_nulls.sql-spec", parser));
return tests;
}

View File

@@ -0,0 +1,14 @@
selectGenderWithNullsAndGroupByGender
SELECT gender, COUNT(*) count FROM test_emp_with_nulls GROUP BY gender ORDER BY gender;
selectFirstNameWithNullsAndGroupByFirstName
SELECT first_name FROM test_emp_with_nulls GROUP BY first_name ORDER BY first_name;
selectCountWhereIsNull
SELECT COUNT(*) count FROM test_emp_with_nulls WHERE first_name IS NULL;
selectLanguagesCountWithNullsAndGroupByLanguage
SELECT languages l, COUNT(*) c FROM test_emp_with_nulls GROUP BY languages ORDER BY languages;
selectHireDateGroupByHireDate
SELECT hire_date HD, COUNT(*) c FROM test_emp_with_nulls GROUP BY hire_date ORDER BY hire_date DESC;
selectHireDateGroupByHireDate
SELECT hire_date HD, COUNT(*) c FROM test_emp_with_nulls GROUP BY hire_date ORDER BY hire_date DESC;
selectSalaryGroupBySalary
SELECT salary, COUNT(*) c FROM test_emp_with_nulls GROUP BY salary ORDER BY salary DESC;

View File

@@ -86,6 +86,7 @@ test_alias | ALIAS
test_alias_emp | ALIAS
test_emp | BASE TABLE
test_emp_copy | BASE TABLE
test_emp_with_nulls | BASE TABLE
;
testGroupByOnAlias
@@ -98,10 +99,10 @@ F | 10099.28
;
testGroupByOnPattern
SELECT gender, PERCENTILE(emp_no, 97) p1 FROM test_* GROUP BY gender;
SELECT gender, PERCENTILE(emp_no, 97) p1 FROM test_* WHERE gender is NOT NULL GROUP BY gender;
gender:s | p1:d
F | 10099.28
M | 10095.75
F | 10099.32
M | 10095.98
;

View File

@@ -0,0 +1,101 @@
birth_date,emp_no,first_name,gender,hire_date,languages,last_name,salary
1953-09-02T00:00:00Z,10001,Georgi,,1986-06-26T00:00:00Z,2,Facello,57305
1964-06-02T00:00:00Z,10002,Bezalel,,1985-11-21T00:00:00Z,5,Simmel,56371
1959-12-03T00:00:00Z,10003,Parto,,1986-08-28T00:00:00Z,4,Bamford,61805
1954-05-01T00:00:00Z,10004,Chirstian,,1986-12-01T00:00:00Z,5,Koblick,36174
1955-01-21T00:00:00Z,10005,Kyoichi,,1989-09-12T00:00:00Z,1,Maliniak,63528
1953-04-20T00:00:00Z,10006,Anneke,,1989-06-02T00:00:00Z,3,Preusig,60335
1957-05-23T00:00:00Z,10007,Tzvetan,,1989-02-10T00:00:00Z,4,Zielinski,74572
1958-02-19T00:00:00Z,10008,Saniya,,1994-09-15T00:00:00Z,2,Kalloufi,43906
1952-04-19T00:00:00Z,10009,Sumant,,1985-02-18T00:00:00Z,1,Peac,66174
1963-06-01T00:00:00Z,10010,Duangkaew,,1989-08-24T00:00:00Z,4,Piveteau,45797
1953-11-07T00:00:00Z,10011,Mary,F,1990-01-22T00:00:00Z,5,Sluis,31120
1960-10-04T00:00:00Z,10012,Patricio,M,1992-12-18T00:00:00Z,5,Bridgland,48942
1963-06-07T00:00:00Z,10013,Eberhardt,M,1985-10-20T00:00:00Z,1,Terkki,48735
1956-02-12T00:00:00Z,10014,Berni,M,1987-03-11T00:00:00Z,5,Genin,37137
1959-08-19T00:00:00Z,10015,Guoxiang,M,1987-07-02T00:00:00Z,5,Nooteboom,25324
1961-05-02T00:00:00Z,10016,Kazuhito,M,1995-01-27T00:00:00Z,2,Cappelletti,61358
1958-07-06T00:00:00Z,10017,Cristinel,F,1993-08-03T00:00:00Z,2,Bouloucos,58715
1954-06-19T00:00:00Z,10018,Kazuhide,F,1993-08-03T00:00:00Z,2,Peha,56760
1953-01-23T00:00:00Z,10019,Lillian,M,1993-08-03T00:00:00Z,1,Haddadi,73717
1952-12-24T00:00:00Z,10020,,M,1991-01-26T00:00:00Z,3,Warwick,40031
1960-02-20T00:00:00Z,10021,,M,1989-12-17T00:00:00Z,5,Erde,60408
1952-07-08T00:00:00Z,10022,,M,1995-08-22T00:00:00Z,3,Famili,48233
1953-09-29T00:00:00Z,10023,,F,1989-12-17T00:00:00Z,2,Montemayor,47896
1958-09-05T00:00:00Z,10024,,F,1997-05-19T00:00:00Z,3,Pettey,64675
1958-10-31T00:00:00Z,10025,Prasadram,M,1987-08-17T00:00:00Z,5,Heyers,47411
1953-04-03T00:00:00Z,10026,Yongqiao,M,1995-03-20T00:00:00Z,3,Berztiss,28336
1962-07-10T00:00:00Z,10027,Divier,F,1989-07-07T00:00:00Z,5,Reistad,73851
1963-11-26T00:00:00Z,10028,Domenick,M,1991-10-22T00:00:00Z,1,Tempesti,39356
1956-12-13T00:00:00Z,10029,Otmar,M,1985-11-20T00:00:00Z,,Herbst,74999
1958-07-14T00:00:00Z,10030,Elvis,M,1994-02-17T00:00:00Z,,Demeyer,67492
1959-01-27T00:00:00Z,10031,Karsten,M,1994-02-17T00:00:00Z,,Joslin,37716
1960-08-09T00:00:00Z,10032,Jeong,F,1990-06-20T00:00:00Z,,Reistad,62233
1956-11-14T00:00:00Z,10033,Arif,M,1987-03-18T00:00:00Z,,Merlo,70011
1962-12-29T00:00:00Z,10034,Bader,M,1988-09-05T00:00:00Z,,Swan,39878
1953-02-08T00:00:00Z,10035,Alain,M,1988-09-05T00:00:00Z,,Chappelet,25945
1959-08-10T00:00:00Z,10036,Adamantios,M,1992-01-03T00:00:00Z,,Portugali,60781
1963-07-22T00:00:00Z,10037,Pradeep,M,1990-12-05T00:00:00Z,,Makrucki,37691
1960-07-20T00:00:00Z,10038,Huan,M,1989-09-20T00:00:00Z,,Lortz,35222
1959-10-01T00:00:00Z,10039,Alejandro,M,1988-01-19T00:00:00Z,,Brender,36051
1959-09-13T00:00:00Z,10040,Weiyi,F,1993-02-14T00:00:00Z,,Meriste,37112
1959-08-27T00:00:00Z,10041,Uri,F,1989-11-12T00:00:00Z,1,Lenart,56415
1956-02-26T00:00:00Z,10042,Magy,F,1993-03-21T00:00:00Z,3,Stamatiou,30404
1960-09-19T00:00:00Z,10043,Yishay,M,1990-10-20T00:00:00Z,1,Tzvieli,34341
1961-09-21T00:00:00Z,10044,Mingsen,F,1994-05-21T00:00:00Z,1,Casley,39728
1957-08-14T00:00:00Z,10045,Moss,M,1989-09-02T00:00:00Z,3,Shanbhogue,74970
1960-07-23T00:00:00Z,10046,Lucien,M,1992-06-20T00:00:00Z,4,Rosenbaum,50064
1952-06-29T00:00:00Z,10047,Zvonko,M,1989-03-31T00:00:00Z,4,Nyanchama,42716
1963-07-11T00:00:00Z,10048,Florian,M,1985-02-24T00:00:00Z,3,Syrotiuk,26436
1961-04-24T00:00:00Z,10049,Basil,F,1992-05-04T00:00:00Z,5,Tramer,37853
1958-05-21T00:00:00Z,10050,Yinghua,M,1990-12-25T00:00:00Z,2,Dredge,43026
1953-07-28T00:00:00Z,10051,Hidefumi,M,1992-10-15T00:00:00Z,3,Caine,58121
1961-02-26T00:00:00Z,10052,Heping,M,1988-05-21T00:00:00Z,1,Nitsch,55360
1954-09-13T00:00:00Z,10053,Sanjiv,F,1986-02-04T00:00:00Z,3,Zschoche,54462
1957-04-04T00:00:00Z,10054,Mayumi,M,1995-03-13T00:00:00Z,4,Schueller,65367
1956-06-06T00:00:00Z,10055,Georgy,M,1992-04-27T00:00:00Z,5,Dredge,49281
1961-09-01T00:00:00Z,10056,Brendon,F,1990-02-01T00:00:00Z,2,Bernini,33370
1954-05-30T00:00:00Z,10057,Ebbe,F,1992-01-15T00:00:00Z,4,Callaway,27215
1954-10-01T00:00:00Z,10058,Berhard,M,1987-04-13T00:00:00Z,3,McFarlin,38376
1953-09-19T00:00:00Z,10059,Alejandro,F,1991-06-26T00:00:00Z,2,McAlpine,44307
1961-10-15T00:00:00Z,10060,Breannda,M,1987-11-02T00:00:00Z,2,Billingsley,29175
1962-10-19T00:00:00Z,10061,Tse,M,1985-09-17T00:00:00Z,1,Herber,49095
1961-11-02T00:00:00Z,10062,Anoosh,M,1991-08-30T00:00:00Z,3,Peyn,65030
1952-08-06T00:00:00Z,10063,Gino,F,1989-04-08T00:00:00Z,3,Leonhardt,52121
1959-04-07T00:00:00Z,10064,Udi,M,1985-11-20T00:00:00Z,5,Jansch,33956
1963-04-14T00:00:00Z,10065,Satosi,M,1988-05-18T00:00:00Z,2,Awdeh,50249
1952-11-13T00:00:00Z,10066,Kwee,M,1986-02-26T00:00:00Z,5,Schusler,31897
1953-01-07T00:00:00Z,10067,Claudi,M,1987-03-04T00:00:00Z,2,Stavenow,52044
1962-11-26T00:00:00Z,10068,Charlene,M,1987-08-07T00:00:00Z,3,Brattka,28941
1960-09-06T00:00:00Z,10069,Margareta,F,1989-11-05T00:00:00Z,5,Bierman,41933
1955-08-20T00:00:00Z,10070,Reuven,M,1985-10-14T00:00:00Z,3,Garigliano,54329
1958-01-21T00:00:00Z,10071,Hisao,M,1987-10-01T00:00:00Z,2,Lipner,40612
1952-05-15T00:00:00Z,10072,Hironoby,F,1988-07-21T00:00:00Z,5,Sidou,54518
1954-02-23T00:00:00Z,10073,Shir,M,1991-12-01T00:00:00Z,4,McClurg,32568
1955-08-28T00:00:00Z,10074,Mokhtar,F,1990-08-13T00:00:00Z,5,Bernatsky,38992
1960-03-09T00:00:00Z,10075,Gao,F,1987-03-19T00:00:00Z,5,Dolinsky,51956
1952-06-13T00:00:00Z,10076,Erez,F,1985-07-09T00:00:00Z,3,Ritzmann,62405
1964-04-18T00:00:00Z,10077,Mona,M,1990-03-02T00:00:00Z,5,Azuma,46595
1959-12-25T00:00:00Z,10078,Danel,F,1987-05-26T00:00:00Z,2,Mondadori,69904
1961-10-05T00:00:00Z,10079,Kshitij,F,1986-03-27T00:00:00Z,2,Gils,32263
1957-12-03T00:00:00Z,10080,Premal,M,1985-11-19T00:00:00Z,5,Baek,52833
1960-12-17T00:00:00Z,10081,Zhongwei,M,1986-10-30T00:00:00Z,2,Rosen,50128
1963-09-09T00:00:00Z,10082,Parviz,M,1990-01-03T00:00:00Z,4,Lortz,49818
1959-07-23T00:00:00Z,10083,Vishv,M,1987-03-31T00:00:00Z,1,Zockler,
1960-05-25T00:00:00Z,10084,Tuval,M,1995-12-15T00:00:00Z,1,Kalloufi,
1962-11-07T00:00:00Z,10085,Kenroku,M,1994-04-09T00:00:00Z,5,Malabarba,
1962-11-19T00:00:00Z,10086,Somnath,M,1990-02-16T00:00:00Z,1,Foote,
1959-07-23T00:00:00Z,10087,Xinglin,F,1986-09-08T00:00:00Z,5,Eugenio,
1954-02-25T00:00:00Z,10088,Jungsoon,F,1988-09-02T00:00:00Z,5,Syrzycki,
1963-03-21T00:00:00Z,10089,Sudharsan,F,1986-08-12T00:00:00Z,4,Flasterstein,
1961-05-30T00:00:00Z,10090,Kendra,M,1986-03-14T00:00:00Z,2,Hofting,44956
1955-10-04T00:00:00Z,10091,Amabile,M,1992-11-18T00:00:00Z,3,Gomatam,38645
1964-10-18T00:00:00Z,10092,Valdiodio,F,1989-09-22T00:00:00Z,1,Niizuma,25976
1964-06-11T00:00:00Z,10093,Sailaja,M,1996-11-05T00:00:00Z,3,Desikan,45656
1957-05-25T00:00:00Z,10094,Arumugam,F,1987-04-18T00:00:00Z,5,Ossenbruggen,66817
1965-01-03T00:00:00Z,10095,Hilari,M,1986-07-15T00:00:00Z,4,Morton,37702
1954-09-16T00:00:00Z,10096,Jayson,M,1990-01-14T00:00:00Z,4,Mandell,43889
1952-02-27T00:00:00Z,10097,Remzi,M,1990-09-15T00:00:00Z,3,Waschkowski,71165
1961-09-23T00:00:00Z,10098,Sreekrishna,F,1985-05-13T00:00:00Z,4,Servieres,44817
1956-05-25T00:00:00Z,10099,Valter,F,1988-10-18T00:00:00Z,2,Sullins,73578
1953-04-21T00:00:00Z,10100,Hironobu,F,1987-09-21T00:00:00Z,4,Haraldson,68431

View File

@@ -0,0 +1,12 @@
DROP TABLE IF EXISTS "test_emp_with_nulls";
CREATE TABLE "test_emp_with_nulls" (
"birth_date" TIMESTAMP WITH TIME ZONE,
"emp_no" INT,
"first_name" VARCHAR(50),
"gender" VARCHAR(1),
"hire_date" TIMESTAMP WITH TIME ZONE,
"languages" TINYINT,
"last_name" VARCHAR(50),
"salary" INT
)
AS SELECT * FROM CSVREAD('classpath:/employees_with_nulls.csv');