Merge branch 'master' into feature/sql

Original commit: elastic/x-pack-elasticsearch@1f74db9cde
Nik Everett 2017-08-18 09:56:29 -04:00
commit a68d839edc
33 changed files with 1277 additions and 104 deletions


@ -213,6 +213,12 @@ typical and actual values and the influencers that contributed to the anomaly.
image::images/ml-gs-job2-explorer-table.jpg["Job results table"]
Notice that there are anomalies for both detectors, that is to say for both the
`high_mean(response)` and the `sum(total)` metrics in this time interval. By
`high_mean(response)` and the `sum(total)` metrics in this time interval. The
table aggregates the anomalies to show the highest severity anomaly per detector
and entity, which is the by, over, or partition field value that is displayed
in the **found for** column. To view all the anomalies without any aggregation,
set the **Interval** to `Show all`.
By
investigating multiple metrics in a single job, you might see relationships
between events in your data that would otherwise be overlooked.


@ -629,10 +629,21 @@ of the viewer. For example:
[role="screenshot"]
image::images/ml-gs-job1-anomalies.jpg["Single Metric Viewer Anomalies for total-requests job"]
For each anomaly you can see key details such as the time, the actual and
expected ("typical") values, and their probability.
By default, the table contains all anomalies that have a severity of "warning"
or higher in the selected section of the timeline. If you are only interested in
critical anomalies, for example, you can change the severity threshold for this
table.
The anomalies table also automatically calculates an interval for the data in
the table. If the time difference between the earliest and latest records in the
table is less than two days, the data is aggregated by hour to show the details
of the highest severity anomaly for each detector. Otherwise, it is
aggregated by day. You can change the interval for the table, for example, to
show all anomalies.
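As a rough illustration of these two rules (the highest-severity roll-up per detector and entity, and the hourly-or-daily interval), here is a standalone sketch; it is not Kibana's implementation, and every name in it is hypothetical:

import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of the anomalies-table roll-up described above.
public class AnomalyTableSketch {
    record Anomaly(String detector, String entity, Instant time, double severity) {}

    static Map<String, Anomaly> aggregate(List<Anomaly> anomalies) {
        Instant earliest = anomalies.stream().map(Anomaly::time).min(Instant::compareTo).orElseThrow();
        Instant latest = anomalies.stream().map(Anomaly::time).max(Instant::compareTo).orElseThrow();
        // Hourly buckets when the table spans less than two days, daily buckets otherwise.
        Duration bucket = Duration.between(earliest, latest).toDays() < 2
                ? Duration.ofHours(1) : Duration.ofDays(1);
        Map<String, Anomaly> highest = new HashMap<>();
        for (Anomaly a : anomalies) {
            long slot = a.time().getEpochSecond() / bucket.getSeconds();
            String key = a.detector() + "|" + a.entity() + "|" + slot;
            // Keep only the highest-severity anomaly per detector, entity, and bucket.
            highest.merge(key, a, (x, y) -> x.severity() >= y.severity() ? x : y);
        }
        return highest;
    }
}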
You can see the same information in a different format by using the
**Anomaly Explorer**:
[role="screenshot"]


@ -39,6 +39,10 @@ so do not set the `background_persist_interval` value too low.
(string) If the job closed or failed, this is the time the job finished,
otherwise it is `null`.
`groups`::
(array of strings) A list of job groups. A job can belong to no groups or
many. For example, `["group1", "group2"]`.
`job_id`::
(string) The unique identifier for the job.


@ -23,29 +23,32 @@ The create job API enables you to instantiate a job.
See <<ml-analysisconfig, analysis configuration objects>>.
`analysis_limits`::
Optionally specifies runtime limits for the job. See <<ml-apilimits,analysis limits>>.
(object) Specifies runtime limits for the job. See
<<ml-apilimits,analysis limits>>.
`data_description` (required)::
(object) Describes the format of the input data. This object is required, but
it can be empty (`{}`). See <<ml-datadescription,data description objects>>.
`description`::
(string) An optional description of the job.
(string) A description of the job.
`groups`::
(array of strings) A list of job groups. See <<ml-job-resource>>.
`model_plot`::
(object) This advanced configuration option stores model information along with the
results. This adds overhead to the performance of the system and
is not feasible for jobs with many entities; see <<ml-apimodelplotconfig>>.
(object) This advanced configuration option stores model information along
with the results. This adds overhead to the performance of the system and is
not feasible for jobs with many entities; see <<ml-apimodelplotconfig>>.
`model_snapshot_retention_days`::
(long) The time in days that model snapshots are retained for the job.
Older snapshots are deleted. The default value is 1 day.
For more information about model snapshots, see <<ml-snapshot-resource>>.
Older snapshots are deleted. The default value is 1 day. For more information
about model snapshots, see <<ml-snapshot-resource>>.
`results_index_name`::
(string) The name of the index in which to store the {ml} results.
The default value is `shared`, which corresponds to the index name
`.ml-anomalies-shared`.
(string) The name of the index in which to store the {ml} results. The default
value is `shared`, which corresponds to the index name `.ml-anomalies-shared`.
==== Authorization
@ -90,8 +93,9 @@ When the job is created, you receive the following results:
{
"job_id": "it-ops-kpi",
"job_type": "anomaly_detector",
"job_version": "7.0.0-alpha1",
"description": "First simple job",
"create_time": 1491948238874,
"create_time": 1502832478794,
"analysis_config": {
"bucket_span": "5m",
"latency": "0ms",


@ -30,8 +30,9 @@ each periodic persistence of the model. See <<ml-job-resource>>. | Yes
|`custom_settings` |Contains custom meta data about the job. | No
|`description` |An optional description of the job.
See <<ml-job-resource>>. | No
|`description` |A description of the job. See <<ml-job-resource>>. | No
|`groups` |A list of job groups. See <<ml-job-resource>>. | No
|`model_plot_config`: `enabled` |If true, enables calculation and storage of the
model bounds for each entity that is being analyzed.
@ -87,6 +88,7 @@ The following example updates the `it_ops_new_logs` job:
POST _xpack/ml/anomaly_detectors/it_ops_new_logs/_update
{
"description":"An updated job",
"groups": ["group1","group2"],
"model_plot_config": {
"enabled": true
},
@ -116,9 +118,13 @@ information, including the updated property values. For example:
{
"job_id": "it_ops_new_logs",
"job_type": "anomaly_detector",
"job_version": "7.0.0-alpha1",
"groups": [
"group1",
"group2"
],
"description": "An updated job",
"create_time": 1493678314204,
"finished_time": 1493678315850,
"create_time": 1502904685360,
"analysis_config": {
"bucket_span": "1800s",
"categorization_field_name": "message",


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -5,6 +5,7 @@ rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
setlocal enabledelayedexpansion
setlocal enableextensions
call "%~dp0..\elasticsearch-env.bat" || exit /b 1
@ -19,3 +20,4 @@ call "%~dp0x-pack-env.bat" || exit /b 1
%*
endlocal
endlocal


@ -13,6 +13,7 @@ import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
@ -42,6 +43,7 @@ import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.Licensing;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.plugins.Plugin;
@ -135,7 +137,7 @@ import javax.security.auth.DestroyFailedException;
import static org.elasticsearch.xpack.watcher.Watcher.ENCRYPT_SENSITIVE_DATA_SETTING;
public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin {
public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin {
public static final String NAME = "x-pack";
@ -632,4 +634,8 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
.collect(Collectors.toList()));
}
@Override
public Map<String, Supplier<ClusterState.Custom>> getInitialClusterStateCustomSupplier() {
return security.getInitialClusterStateCustomSupplier();
}
}


@ -88,6 +88,12 @@ public class PutJobAction extends Action<PutJobAction.Request, PutJobAction.Resp
// Validate the jobBuilder immediately so that errors can be detected prior to transportation.
jobBuilder.validateInputFields();
// In 6.1 we want to make the model memory size limit more prominent, and also reduce the default from
// 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
// breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
// submitted
jobBuilder.setDefaultMemoryLimitIfUnset();
this.jobBuilder = jobBuilder;
}


@ -31,6 +31,16 @@ import java.util.Objects;
* If an option has not been set, it shouldn't be used, so the default value is picked up instead.
*/
public class AnalysisLimits implements ToXContentObject, Writeable {
/**
* Prior to 6.1 the default model memory size limit was 4GB, and defined in the C++ code. The default
* is now 1GB and defined here in the Java code. However, changing the meaning of a null model memory
* limit for existing jobs would be a breaking change, so instead the meaning of <code>null</code> is
* still to use the default from the C++ code, but newly created jobs will have this explicit setting
* added if none is provided.
*/
static final long DEFAULT_MODEL_MEMORY_LIMIT_MB = 1024L;
/**
* Serialisation field names
*/
@ -66,7 +76,9 @@ public class AnalysisLimits implements ToXContentObject, Writeable {
/**
* The model memory limit in MiBs.
* It is initialised to <code>null</code>.
* A value of <code>null</code> will result in the default being used.
* A value of <code>null</code> will result in the default defined in the C++ code being used.
* However, for jobs created in version 6.1 or higher this will rarely be <code>null</code> because
* the put_job action sets it to a new default defined in the Java code.
*/
private final Long modelMemoryLimit;
@ -76,6 +88,10 @@ public class AnalysisLimits implements ToXContentObject, Writeable {
*/
private final Long categorizationExamplesLimit;
public AnalysisLimits(Long categorizationExamplesLimit) {
this(DEFAULT_MODEL_MEMORY_LIMIT_MB, categorizationExamplesLimit);
}
public AnalysisLimits(Long modelMemoryLimit, Long categorizationExamplesLimit) {
if (modelMemoryLimit != null && modelMemoryLimit < 1) {
String msg = Messages.getMessage(Messages.JOB_CONFIG_MODEL_MEMORY_LIMIT_TOO_LOW, modelMemoryLimit);


@ -1000,6 +1000,20 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
// Creation time is NOT required in user input, hence validated only on build
}
/**
* In 6.1 we want to make the model memory size limit more prominent, and also reduce the default from
* 4GB to 1GB. However, changing the meaning of a null model memory limit for existing jobs would be a
* breaking change, so instead we add an explicit limit to newly created jobs that didn't have one when
* submitted
*/
public void setDefaultMemoryLimitIfUnset() {
if (analysisLimits == null) {
analysisLimits = new AnalysisLimits((Long) null);
} else if (analysisLimits.getModelMemoryLimit() == null) {
analysisLimits = new AnalysisLimits(analysisLimits.getCategorizationExamplesLimit());
}
}
private void validateGroups() {
for (String group : this.groups) {
if (MlStrings.isValidId(group) == false) {
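To make the effect of the new default concrete, here is a minimal standalone sketch of the rule implemented by `setDefaultMemoryLimitIfUnset` and the one-argument `AnalysisLimits` constructor; the class and field names below are hypothetical stand-ins, not the x-pack classes:

// Standalone sketch of the default model memory limit behaviour (hypothetical
// names; the real logic lives in Job.Builder and AnalysisLimits).
final class MemoryLimitDefaultSketch {
    static final long DEFAULT_MODEL_MEMORY_LIMIT_MB = 1024L; // the new 6.1 default

    record Limits(Long modelMemoryLimitMb, Long categorizationExamplesLimit) {}

    // Mirrors setDefaultMemoryLimitIfUnset: only newly created jobs are touched,
    // so a null limit on an existing job still means "use the C++ default".
    static Limits applyDefault(Limits submitted) {
        if (submitted == null) {
            return new Limits(DEFAULT_MODEL_MEMORY_LIMIT_MB, null);
        } else if (submitted.modelMemoryLimitMb() == null) {
            return new Limits(DEFAULT_MODEL_MEMORY_LIMIT_MB, submitted.categorizationExamplesLimit());
        }
        return submitted; // an explicit limit is always kept as-is
    }

    public static void main(String[] args) {
        assert applyDefault(null).modelMemoryLimitMb() == 1024L;
        assert applyDefault(new Limits(null, 4L)).categorizationExamplesLimit() == 4L;
        assert applyDefault(new Limits(2048L, null)).modelMemoryLimitMb() == 2048L;
    }
}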


@ -14,6 +14,9 @@ import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.action.support.DestructiveOperations;
import org.elasticsearch.bootstrap.BootstrapCheck;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.LocalNodeMasterListener;
import org.elasticsearch.cluster.NamedDiff;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
@ -50,6 +53,7 @@ import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.ingest.Processor;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.rest.RestController;
@ -115,6 +119,7 @@ import org.elasticsearch.xpack.security.authc.InternalRealms;
import org.elasticsearch.xpack.security.authc.Realm;
import org.elasticsearch.xpack.security.authc.RealmSettings;
import org.elasticsearch.xpack.security.authc.Realms;
import org.elasticsearch.xpack.security.authc.TokenMetaData;
import org.elasticsearch.xpack.security.authc.TokenService;
import org.elasticsearch.xpack.security.authc.esnative.NativeRealm;
import org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore;
@ -189,7 +194,7 @@ import static java.util.Collections.singletonList;
import static org.elasticsearch.xpack.XPackSettings.HTTP_SSL_ENABLED;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_TEMPLATE_NAME;
public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin {
public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin {
private static final Logger logger = Loggers.getLogger(XPackPlugin.class);
@ -218,6 +223,7 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin {
private final SetOnce<AuditTrailService> auditTrailService = new SetOnce<>();
private final SetOnce<SecurityContext> securityContext = new SetOnce<>();
private final SetOnce<ThreadContext> threadContext = new SetOnce<>();
private final SetOnce<TokenService> tokenService = new SetOnce<>();
private final List<BootstrapCheck> bootstrapChecks;
public Security(Settings settings, Environment env, XPackLicenseState licenseState, SSLService sslService)
@ -332,7 +338,8 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin {
final SecurityLifecycleService securityLifecycleService =
new SecurityLifecycleService(settings, clusterService, threadPool, client, indexAuditTrail);
final TokenService tokenService = new TokenService(settings, Clock.systemUTC(), client, securityLifecycleService);
final TokenService tokenService = new TokenService(settings, Clock.systemUTC(), client, securityLifecycleService, clusterService);
this.tokenService.set(tokenService);
components.add(tokenService);
// realms construction
@ -855,7 +862,11 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin {
}
public static List<NamedWriteableRegistry.Entry> getNamedWriteables() {
return Arrays.asList(ExpressionParser.NAMED_WRITEABLES);
List<NamedWriteableRegistry.Entry> entries = new ArrayList<>();
entries.add(new NamedWriteableRegistry.Entry(ClusterState.Custom.class, TokenMetaData.TYPE, TokenMetaData::new));
entries.add(new NamedWriteableRegistry.Entry(NamedDiff.class, TokenMetaData.TYPE, TokenMetaData::readDiffFrom));
entries.addAll(Arrays.asList(ExpressionParser.NAMED_WRITEABLES));
return entries;
}
public List<ExecutorBuilder<?>> getExecutorBuilders(final Settings settings) {
@ -882,4 +893,13 @@ public class Security implements ActionPlugin, IngestPlugin, NetworkPlugin {
return templates;
};
}
@Override
public Map<String, Supplier<ClusterState.Custom>> getInitialClusterStateCustomSupplier() {
if (enabled) {
return Collections.singletonMap(TokenMetaData.TYPE, () -> tokenService.get().getTokenMetaData());
} else {
return Collections.emptyMap();
}
}
}


@ -24,14 +24,18 @@ final class TokenPassphraseBootstrapCheck implements BootstrapCheck {
TokenPassphraseBootstrapCheck(Settings settings) {
this.tokenServiceEnabled = XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.get(settings);
this.tokenPassphrase = TokenService.TOKEN_PASSPHRASE.get(settings);
this.tokenPassphrase = TokenService.TOKEN_PASSPHRASE.exists(settings) ? TokenService.TOKEN_PASSPHRASE.get(settings) : null;
}
@Override
public boolean check() {
if (tokenPassphrase == null) { // that's fine, we bootstrap it ourselves
return false;
}
try (SecureString ignore = tokenPassphrase) {
if (tokenServiceEnabled) {
return tokenPassphrase.length() < MINIMUM_PASSPHRASE_LENGTH || tokenPassphrase.equals(TokenService.DEFAULT_PASSPHRASE);
return tokenPassphrase.length() < MINIMUM_PASSPHRASE_LENGTH;
}
}
// service is not enabled so no need to check
@ -41,7 +45,7 @@ final class TokenPassphraseBootstrapCheck implements BootstrapCheck {
@Override
public String errorMessage() {
return "Please set a passphrase using the elasticsearch-keystore tool for the setting [" + TokenService.TOKEN_PASSPHRASE.getKey() +
"] that is at least " + MINIMUM_PASSPHRASE_LENGTH + " characters in length and does not match the default passphrase or " +
"] that is at least " + MINIMUM_PASSPHRASE_LENGTH + " characters in length or " +
"disable the token service using the [" + XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey() + "] setting";
}
}
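In summary, the check now treats a missing passphrase as acceptable (the service generates its own key) and only enforces a minimum length when the token service is enabled. A minimal sketch of that decision, assuming the same convention where returning true means the check fails; the constant value below is illustrative, not the real one:

// Standalone sketch of the revised bootstrap-check logic.
final class PassphraseCheckSketch {
    static final int MINIMUM_PASSPHRASE_LENGTH = 8; // assumed value, for illustration only

    static boolean checkFails(boolean tokenServiceEnabled, String passphrase) {
        if (passphrase == null) {
            return false; // setting absent: the token service bootstraps a key itself
        }
        if (tokenServiceEnabled) {
            // Only length matters now; the hard-coded default passphrase is gone.
            return passphrase.length() < MINIMUM_PASSPHRASE_LENGTH;
        }
        return false; // token service disabled, nothing to validate
    }
}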


@ -0,0 +1,93 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security.authc;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.AbstractNamedDiffable;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.NamedDiff;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
public final class TokenMetaData extends AbstractNamedDiffable<ClusterState.Custom> implements ClusterState.Custom {
/**
* The type of {@link ClusterState} data.
*/
public static final String TYPE = "security_tokens";
final List<TokenService.KeyAndTimestamp> keys;
final byte[] currentKeyHash;
TokenMetaData(List<TokenService.KeyAndTimestamp> keys, byte[] currentKeyHash) {
this.keys = keys;
this.currentKeyHash = currentKeyHash;
}
public TokenMetaData(StreamInput input) throws IOException {
currentKeyHash = input.readByteArray();
keys = Collections.unmodifiableList(input.readList(TokenService.KeyAndTimestamp::new));
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeByteArray(currentKeyHash);
out.writeList(keys);
}
public static NamedDiff<ClusterState.Custom> readDiffFrom(StreamInput in) throws IOException {
return readDiffFrom(ClusterState.Custom.class, TYPE, in);
}
@Override
public String getWriteableName() {
return TYPE;
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
// never render this to the user
return builder;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
TokenMetaData that = (TokenMetaData)o;
return keys.equals(that.keys) && Arrays.equals(currentKeyHash, that.currentKeyHash);
}
@Override
public int hashCode() {
int result = keys.hashCode();
result = 31 * result + Arrays.hashCode(currentKeyHash);
return result;
}
@Override
public String toString() {
return "TokenMetaData{ everything is secret }";
}
@Override
public Version getMinimalSupportedVersion() {
return Version.V_7_0_0_alpha1;
}
@Override
public boolean isPrivate() {
// never send this to a client
return true;
}
}


@ -6,8 +6,12 @@
package org.elasticsearch.xpack.security.authc;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.lucene.util.ArrayUtil;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefBuilder;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.StringHelper;
import org.apache.lucene.util.UnicodeUtil;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
@ -17,6 +21,12 @@ import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.TransportActions;
import org.elasticsearch.action.support.WriteRequest.RefreshPolicy;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ack.AckedRequest;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.cache.Cache;
import org.elasticsearch.common.cache.CacheBuilder;
@ -25,6 +35,7 @@ import org.elasticsearch.common.io.stream.InputStreamStreamInput;
import org.elasticsearch.common.io.stream.OutputStreamStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;
@ -33,6 +44,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.iterable.Iterables;
import org.elasticsearch.index.engine.VersionConflictEngineException;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.xpack.XPackPlugin;
@ -51,21 +63,31 @@ import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Base64;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicLong;
import static org.elasticsearch.gateway.GatewayService.STATE_NOT_RECOVERED_BLOCK;
/**
* Service responsible for the creation, validation, and other management of {@link UserToken}
@ -83,6 +105,7 @@ public final class TokenService extends AbstractComponent {
private static final int ITERATIONS = 100000;
private static final String KDF_ALGORITHM = "PBKDF2withHMACSHA512";
private static final int SALT_BYTES = 32;
private static final int KEY_BYTES = 64;
private static final int IV_BYTES = 12;
private static final int VERSION_BYTES = 4;
private static final String ENCRYPTION_CIPHER = "AES/GCM/NoPadding";
@ -100,27 +123,25 @@ public final class TokenService extends AbstractComponent {
TimeValue.timeValueMinutes(30L), Property.NodeScope);
public static final Setting<TimeValue> DELETE_TIMEOUT = Setting.timeSetting("xpack.security.authc.token.delete.timeout",
TimeValue.MINUS_ONE, Property.NodeScope);
public static final String DEFAULT_PASSPHRASE = "changeme is a terrible password, so let's not use it anymore!";
static final String DOC_TYPE = "invalidated-token";
static final int MINIMUM_BYTES = VERSION_BYTES + SALT_BYTES + IV_BYTES + 1;
static final int MINIMUM_BASE64_BYTES = Double.valueOf(Math.ceil((4.0 * MINIMUM_BYTES) / 3)).intValue();
private final SecureRandom secureRandom = new SecureRandom();
private final Cache<BytesKey, SecretKey> keyCache;
private final SecureString tokenPassphrase;
private final ClusterService clusterService;
private final Clock clock;
private final TimeValue expirationDelay;
private final TimeValue deleteInterval;
private final BytesKey salt;
private final InternalClient internalClient;
private final SecurityLifecycleService lifecycleService;
private final ExpiredTokenRemover expiredTokenRemover;
private final boolean enabled;
private final byte[] currentVersionBytes;
private volatile TokenKeys keyCache;
private volatile long lastExpirationRunMs;
private final AtomicLong createdTimeStamps = new AtomicLong(-1);
private static final Version TOKEN_SERVICE_VERSION = Version.CURRENT;
/**
* Creates a new token service
@ -129,21 +150,17 @@ public final class TokenService extends AbstractComponent {
* @param internalClient the client to use when checking for revocations
*/
public TokenService(Settings settings, Clock clock, InternalClient internalClient,
SecurityLifecycleService lifecycleService) throws GeneralSecurityException {
SecurityLifecycleService lifecycleService, ClusterService clusterService) throws GeneralSecurityException {
super(settings);
byte[] saltArr = new byte[SALT_BYTES];
secureRandom.nextBytes(saltArr);
this.salt = new BytesKey(saltArr);
this.keyCache = CacheBuilder.<BytesKey, SecretKey>builder()
.setExpireAfterAccess(TimeValue.timeValueMinutes(60L))
.setMaximumWeight(500L)
.build();
final SecureString tokenPassphraseValue = TOKEN_PASSPHRASE.get(settings);
final SecureString tokenPassphrase;
if (tokenPassphraseValue.length() == 0) {
// setting didn't exist - we should only be in a non-production mode for this
this.tokenPassphrase = new SecureString(DEFAULT_PASSPHRASE.toCharArray());
tokenPassphrase = generateTokenKey();
} else {
this.tokenPassphrase = tokenPassphraseValue;
tokenPassphrase = tokenPassphraseValue;
}
this.clock = clock.withZone(ZoneOffset.UTC);
@ -154,13 +171,17 @@ public final class TokenService extends AbstractComponent {
this.deleteInterval = DELETE_INTERVAL.get(settings);
this.enabled = XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.get(settings);
this.expiredTokenRemover = new ExpiredTokenRemover(settings, internalClient);
this.currentVersionBytes = ByteBuffer.allocate(4).putInt(Version.CURRENT.id).array();
this.currentVersionBytes = ByteBuffer.allocate(4).putInt(TOKEN_SERVICE_VERSION.id).array();
ensureEncryptionCiphersSupported();
try (SecureString closeableChars = tokenPassphrase.clone()) {
keyCache.put(salt, computeSecretKey(closeableChars.getChars(), salt.bytes));
}
KeyAndCache keyAndCache = new KeyAndCache(new KeyAndTimestamp(tokenPassphrase.clone(), createdTimeStamps.incrementAndGet()),
new BytesKey(saltArr));
keyCache = new TokenKeys(Collections.singletonMap(keyAndCache.getKeyHash(), keyAndCache), keyAndCache.getKeyHash());
this.clusterService = clusterService;
initialize(clusterService);
getTokenMetaData();
}
/**
* Create a token based on the provided authentication
*/
@ -219,14 +240,22 @@ public final class TokenService extends AbstractComponent {
listener.onResponse(null);
} else {
final BytesKey decodedSalt = new BytesKey(in.readByteArray());
final SecretKey decodeKey = keyCache.get(decodedSalt);
final BytesKey passphraseHash;
if (version.onOrAfter(Version.V_7_0_0_alpha1)) {
passphraseHash = new BytesKey(in.readByteArray());
} else {
passphraseHash = keyCache.currentTokenKeyHash;
}
KeyAndCache keyAndCache = keyCache.get(passphraseHash);
if (keyAndCache != null) {
final SecretKey decodeKey = keyAndCache.getKey(decodedSalt);
final byte[] iv = in.readByteArray();
if (decodeKey != null) {
try {
decryptToken(in, getDecryptionCipher(iv, decodeKey, version, decodedSalt), version, listener);
} catch (GeneralSecurityException e) {
// could happen with a token that is not ours
logger.debug("invalid token", e);
logger.warn("invalid token", e);
listener.onResponse(null);
}
} else {
@ -236,13 +265,18 @@ public final class TokenService extends AbstractComponent {
* some additional latency.
*/
internalClient.threadPool().executor(THREAD_POOL_NAME)
.submit(new KeyComputingRunnable(in, iv, version, decodedSalt, listener));
.submit(new KeyComputingRunnable(in, iv, version, decodedSalt, listener, keyAndCache));
}
} else {
logger.debug("invalid key {} key: {}", passphraseHash, keyCache.cache.keySet());
listener.onResponse(null);
}
}
}
}
private void decryptToken(StreamInput in, Cipher cipher, Version version, ActionListener<UserToken> listener) throws IOException {
private static void decryptToken(StreamInput in, Cipher cipher, Version version, ActionListener<UserToken> listener) throws
IOException {
try (CipherInputStream cis = new CipherInputStream(in, cipher); StreamInput decryptedInput = new InputStreamStreamInput(cis)) {
decryptedInput.setVersion(version);
listener.onResponse(new UserToken(decryptedInput));
@ -384,7 +418,7 @@ public final class TokenService extends AbstractComponent {
* Gets the token from the <code>Authorization</code> header if the header begins with
* <code>Bearer </code>
*/
private String getFromHeader(ThreadContext threadContext) {
String getFromHeader(ThreadContext threadContext) {
String header = threadContext.getHeader("Authorization");
if (Strings.hasLength(header) && header.startsWith("Bearer ")
&& header.length() > "Bearer ".length()) {
@ -401,11 +435,13 @@ public final class TokenService extends AbstractComponent {
try (ByteArrayOutputStream os = new ByteArrayOutputStream(MINIMUM_BASE64_BYTES);
OutputStream base64 = Base64.getEncoder().wrap(os);
StreamOutput out = new OutputStreamStreamOutput(base64)) {
Version.writeVersion(Version.CURRENT, out);
out.writeByteArray(salt.bytes);
KeyAndCache keyAndCache = keyCache.activeKeyCache;
Version.writeVersion(TOKEN_SERVICE_VERSION, out);
out.writeByteArray(keyAndCache.getSalt().bytes);
out.writeByteArray(keyAndCache.getKeyHash().bytes); // TODO this requires a BWC layer in 5.6
final byte[] initializationVector = getNewInitializationVector();
out.writeByteArray(initializationVector);
try (CipherOutputStream encryptedOutput = new CipherOutputStream(out, getEncryptionCipher(initializationVector));
try (CipherOutputStream encryptedOutput = new CipherOutputStream(out, getEncryptionCipher(initializationVector, keyAndCache));
StreamOutput encryptedStreamOutput = new OutputStreamStreamOutput(encryptedOutput)) {
userToken.writeTo(encryptedStreamOutput);
encryptedStreamOutput.close();
@ -419,9 +455,10 @@ public final class TokenService extends AbstractComponent {
SecretKeyFactory.getInstance(KDF_ALGORITHM);
}
private Cipher getEncryptionCipher(byte[] iv) throws GeneralSecurityException {
private Cipher getEncryptionCipher(byte[] iv, KeyAndCache keyAndCache) throws GeneralSecurityException {
Cipher cipher = Cipher.getInstance(ENCRYPTION_CIPHER);
cipher.init(Cipher.ENCRYPT_MODE, keyCache.get(salt), new GCMParameterSpec(128, iv), secureRandom);
BytesKey salt = keyAndCache.getSalt();
cipher.init(Cipher.ENCRYPT_MODE, keyAndCache.getKey(salt), new GCMParameterSpec(128, iv), secureRandom);
cipher.updateAAD(currentVersionBytes);
cipher.updateAAD(salt.bytes);
return cipher;
@ -492,23 +529,22 @@ public final class TokenService extends AbstractComponent {
private final BytesKey decodedSalt;
private final ActionListener<UserToken> listener;
private final byte[] iv;
private final KeyAndCache keyAndCache;
KeyComputingRunnable(StreamInput input, byte[] iv, Version version, BytesKey decodedSalt, ActionListener<UserToken> listener) {
KeyComputingRunnable(StreamInput input, byte[] iv, Version version, BytesKey decodedSalt, ActionListener<UserToken> listener,
KeyAndCache keyAndCache) {
this.in = input;
this.version = version;
this.decodedSalt = decodedSalt;
this.listener = listener;
this.iv = iv;
this.keyAndCache = keyAndCache;
}
@Override
protected void doRun() {
try {
final SecretKey computedKey = keyCache.computeIfAbsent(decodedSalt, (salt) -> {
try (SecureString closeableChars = tokenPassphrase.clone()) {
return computeSecretKey(closeableChars.getChars(), decodedSalt.bytes);
}
});
final SecretKey computedKey = keyAndCache.getOrComputeKey(decodedSalt);
decryptToken(in, getDecryptionCipher(iv, computedKey, version, decodedSalt), version, listener);
} catch (ExecutionException e) {
if (e.getCause() != null &&
@ -569,5 +605,337 @@ public final class TokenService extends AbstractComponent {
BytesKey otherBytes = (BytesKey) other;
return Arrays.equals(otherBytes.bytes, bytes);
}
@Override
public String toString() {
return new BytesRef(bytes).toString();
}
}
/**
* Creates a new key, unless one newer than the current active key already exists, and returns the corresponding metadata. Note:
* this method doesn't modify the metadata used in this token service. See {@link #refreshMetaData(TokenMetaData)}
*/
synchronized TokenMetaData generateSpareKey() {
KeyAndCache maxKey = keyCache.cache.values().stream().max(Comparator.comparingLong(v -> v.keyAndTimestamp.timestamp)).get();
KeyAndCache currentKey = keyCache.activeKeyCache;
if (currentKey == maxKey) {
long timestamp = createdTimeStamps.incrementAndGet();
while (true) {
byte[] saltArr = new byte[SALT_BYTES];
secureRandom.nextBytes(saltArr);
SecureString tokenKey = generateTokenKey();
KeyAndCache keyAndCache = new KeyAndCache(new KeyAndTimestamp(tokenKey, timestamp), new BytesKey(saltArr));
if (keyCache.cache.containsKey(keyAndCache.getKeyHash())) {
continue; // collision -- generate a new key
}
return newTokenMetaData(keyCache.currentTokenKeyHash, Iterables.concat(keyCache.cache.values(),
Collections.singletonList(keyAndCache)));
}
}
return newTokenMetaData(keyCache.currentTokenKeyHash, keyCache.cache.values());
}
/**
* Rotate the current active key to the spare key created in the previous {@link #generateSpareKey()} call.
*/
synchronized TokenMetaData rotateToSpareKey() {
KeyAndCache maxKey = keyCache.cache.values().stream().max(Comparator.comparingLong(v -> v.keyAndTimestamp.timestamp)).get();
if (maxKey == keyCache.activeKeyCache) {
throw new IllegalStateException("call generateSpareKey first");
}
return newTokenMetaData(maxKey.getKeyHash(), keyCache.cache.values());
}
/**
* Prunes the keys and keeps up to the latest N keys around
* @param numKeysToKeep the number of keys to keep.
*/
synchronized TokenMetaData pruneKeys(int numKeysToKeep) {
if (keyCache.cache.size() <= numKeysToKeep) {
return getTokenMetaData(); // nothing to do
}
Map<BytesKey, KeyAndCache> map = new HashMap<>(keyCache.cache.size() + 1);
KeyAndCache currentKey = keyCache.get(keyCache.currentTokenKeyHash);
ArrayList<KeyAndCache> entries = new ArrayList<>(keyCache.cache.values());
Collections.sort(entries,
(left, right) -> Long.compare(right.keyAndTimestamp.timestamp, left.keyAndTimestamp.timestamp));
for (KeyAndCache value: entries) {
if (map.size() < numKeysToKeep || value.keyAndTimestamp.timestamp >= currentKey
.keyAndTimestamp.timestamp) {
logger.debug("keeping key {} ", value.getKeyHash());
map.put(value.getKeyHash(), value);
} else {
logger.debug("prune key {} ", value.getKeyHash());
}
}
assert map.isEmpty() == false;
assert map.containsKey(keyCache.currentTokenKeyHash);
return newTokenMetaData(keyCache.currentTokenKeyHash, map.values());
}
/**
* Returns the current in-use metadata of this {@link TokenService}
*/
public synchronized TokenMetaData getTokenMetaData() {
return newTokenMetaData(keyCache.currentTokenKeyHash, keyCache.cache.values());
}
private TokenMetaData newTokenMetaData(BytesKey activeTokenKey, Iterable<KeyAndCache> iterable) {
List<KeyAndTimestamp> list = new ArrayList<>();
for (KeyAndCache v : iterable) {
list.add(v.keyAndTimestamp);
}
return new TokenMetaData(list, activeTokenKey.bytes);
}
/**
* Refreshes the current in-use metadata.
*/
synchronized void refreshMetaData(TokenMetaData metaData) {
BytesKey currentUsedKeyHash = new BytesKey(metaData.currentKeyHash);
byte[] saltArr = new byte[SALT_BYTES];
Map<BytesKey, KeyAndCache> map = new HashMap<>(metaData.keys.size());
long maxTimestamp = createdTimeStamps.get();
for (KeyAndTimestamp key : metaData.keys) {
secureRandom.nextBytes(saltArr);
KeyAndCache keyAndCache = new KeyAndCache(key, new BytesKey(saltArr));
maxTimestamp = Math.max(keyAndCache.keyAndTimestamp.timestamp, maxTimestamp);
if (keyCache.cache.containsKey(keyAndCache.getKeyHash()) == false) {
map.put(keyAndCache.getKeyHash(), keyAndCache);
} else {
map.put(keyAndCache.getKeyHash(), keyCache.get(keyAndCache.getKeyHash())); // maintain the cache we already have
}
}
if (map.containsKey(currentUsedKeyHash) == false) {
// this won't leak any secrets; it's only exposing the current set of hashes
throw new IllegalStateException("Current key is not in the map: " + map.keySet() + " key: " + currentUsedKeyHash);
}
createdTimeStamps.set(maxTimestamp);
keyCache = new TokenKeys(Collections.unmodifiableMap(map), currentUsedKeyHash);
logger.debug("refreshed keys current: {}, keys: {}", currentUsedKeyHash, keyCache.cache.keySet());
}
private SecureString generateTokenKey() {
byte[] keyBytes = new byte[KEY_BYTES];
byte[] encode = new byte[0];
char[] ref = new char[0];
try {
secureRandom.nextBytes(keyBytes);
encode = Base64.getUrlEncoder().withoutPadding().encode(keyBytes);
ref = new char[encode.length];
int len = UnicodeUtil.UTF8toUTF16(encode, 0, encode.length, ref);
return new SecureString(Arrays.copyOfRange(ref, 0, len));
} finally {
Arrays.fill(keyBytes, (byte) 0x00);
Arrays.fill(encode, (byte) 0x00);
Arrays.fill(ref, (char) 0x00);
}
}
synchronized String getActiveKeyHash() {
return new BytesRef(Base64.getUrlEncoder().withoutPadding().encode(this.keyCache.currentTokenKeyHash.bytes)).utf8ToString();
}
void rotateKeysOnMaster(ActionListener<ClusterStateUpdateResponse> listener) {
logger.info("rotate keys on master");
TokenMetaData tokenMetaData = generateSpareKey();
clusterService.submitStateUpdateTask("publish next key to prepare key rotation",
new TokenMetadataPublishAction(
ActionListener.wrap((res) -> {
if (res.isAcknowledged()) {
TokenMetaData metaData = rotateToSpareKey();
clusterService.submitStateUpdateTask("publish next key to prepare key rotation",
new TokenMetadataPublishAction(listener, metaData));
} else {
listener.onFailure(new IllegalStateException("not acked"));
}
}, listener::onFailure), tokenMetaData));
}
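// Note on sequencing: the rotation above is deliberately two-phase.
// generateSpareKey() only publishes the new key to the cluster state; every
// node installs it into its key cache via refreshMetaData() while the old key
// stays active. Only once that first update is acknowledged does
// rotateToSpareKey() flip the active key, so tokens signed with the new key
// are decryptable cluster-wide before any node starts issuing them.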
private final class TokenMetadataPublishAction extends AckedClusterStateUpdateTask<ClusterStateUpdateResponse> {
private final TokenMetaData tokenMetaData;
protected TokenMetadataPublishAction(ActionListener<ClusterStateUpdateResponse> listener, TokenMetaData tokenMetaData) {
super(new AckedRequest() {
@Override
public TimeValue ackTimeout() {
return AcknowledgedRequest.DEFAULT_ACK_TIMEOUT;
}
@Override
public TimeValue masterNodeTimeout() {
return AcknowledgedRequest.DEFAULT_MASTER_NODE_TIMEOUT;
}
}, listener);
this.tokenMetaData = tokenMetaData;
}
@Override
public ClusterState execute(ClusterState currentState) throws Exception {
if (tokenMetaData.equals(currentState.custom(TokenMetaData.TYPE))) {
return currentState;
}
return ClusterState.builder(currentState).putCustom(TokenMetaData.TYPE, tokenMetaData).build();
}
@Override
protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {
return new ClusterStateUpdateResponse(acknowledged);
}
}
private void initialize(ClusterService clusterService) {
clusterService.addListener(event -> {
ClusterState state = event.state();
if (state.getBlocks().hasGlobalBlock(STATE_NOT_RECOVERED_BLOCK)) {
return;
}
TokenMetaData custom = event.state().custom(TokenMetaData.TYPE);
if (custom != null && custom.equals(getTokenMetaData()) == false) {
logger.info("refresh keys");
try {
refreshMetaData(custom);
} catch (Exception e) {
logger.warn(e);
}
logger.info("refreshed keys");
}
});
}
static final class KeyAndTimestamp implements Writeable {
private final SecureString key;
private final long timestamp;
private KeyAndTimestamp(SecureString key, long timestamp) {
this.key = key;
this.timestamp = timestamp;
}
KeyAndTimestamp(StreamInput input) throws IOException {
timestamp = input.readVLong();
byte[] keyBytes = input.readByteArray();
final char[] ref = new char[keyBytes.length];
int len = UnicodeUtil.UTF8toUTF16(keyBytes, 0, keyBytes.length, ref);
key = new SecureString(Arrays.copyOfRange(ref, 0, len));
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVLong(timestamp);
BytesRef bytesRef = new BytesRef(key);
out.writeVInt(bytesRef.length);
out.writeBytes(bytesRef.bytes, bytesRef.offset, bytesRef.length);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KeyAndTimestamp that = (KeyAndTimestamp) o;
if (timestamp != that.timestamp) return false;
return key.equals(that.key);
}
@Override
public int hashCode() {
int result = key.hashCode();
result = 31 * result + (int) (timestamp ^ (timestamp >>> 32));
return result;
}
}
static final class KeyAndCache implements Closeable {
private final KeyAndTimestamp keyAndTimestamp;
private final Cache<BytesKey, SecretKey> keyCache;
private final BytesKey salt;
private final BytesKey keyHash;
private KeyAndCache(KeyAndTimestamp keyAndTimestamp, BytesKey salt) {
this.keyAndTimestamp = keyAndTimestamp;
keyCache = CacheBuilder.<BytesKey, SecretKey>builder()
.setExpireAfterAccess(TimeValue.timeValueMinutes(60L))
.setMaximumWeight(500L)
.build();
try {
SecretKey secretKey = computeSecretKey(keyAndTimestamp.key.getChars(), salt.bytes);
keyCache.put(salt, secretKey);
} catch (Exception e) {
throw new IllegalStateException(e);
}
this.salt = salt;
this.keyHash = calculateKeyHash(keyAndTimestamp.key);
}
private SecretKey getKey(BytesKey salt) {
return keyCache.get(salt);
}
public SecretKey getOrComputeKey(BytesKey decodedSalt) throws ExecutionException {
return keyCache.computeIfAbsent(decodedSalt, (salt) -> {
try (SecureString closeableChars = keyAndTimestamp.key.clone()) {
return computeSecretKey(closeableChars.getChars(), salt.bytes);
}
});
}
@Override
public void close() throws IOException {
keyAndTimestamp.key.close();
}
BytesKey getKeyHash() {
return keyHash;
}
private static BytesKey calculateKeyHash(SecureString key) {
MessageDigest messageDigest = null;
try {
messageDigest = MessageDigest.getInstance("SHA-256");
} catch (NoSuchAlgorithmException e) {
throw new AssertionError(e);
}
BytesRefBuilder b = new BytesRefBuilder();
try {
b.copyChars(key);
BytesRef bytesRef = b.toBytesRef();
try {
messageDigest.update(bytesRef.bytes, bytesRef.offset, bytesRef.length);
return new BytesKey(Arrays.copyOfRange(messageDigest.digest(), 0, 8));
} finally {
Arrays.fill(bytesRef.bytes, (byte) 0x00);
}
} finally {
Arrays.fill(b.bytes(), (byte) 0x00);
}
}
BytesKey getSalt() {
return salt;
}
}
private static final class TokenKeys {
final Map<BytesKey, KeyAndCache> cache;
final BytesKey currentTokenKeyHash;
final KeyAndCache activeKeyCache;
private TokenKeys(Map<BytesKey, KeyAndCache> cache, BytesKey currentTokenKeyHash) {
this.cache = cache;
this.currentTokenKeyHash = currentTokenKeyHash;
this.activeKeyCache = cache.get(currentTokenKeyHash);
}
KeyAndCache get(BytesKey passphraseHash) {
return cache.get(passphraseHash);
}
}
}
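Reading getUserTokenString and getEncryptionCipher together, a token string is roughly a base64-encoded stream of version id, key salt, key hash (new in this change), IV, and the AES-GCM-encrypted UserToken, with the version and salt also bound as GCM associated data. The following is a simplified standalone sketch of that framing; it omits the StreamOutput length prefixes and the BWC handling, and all names are illustrative:

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative framing of the token stream produced by getUserTokenString.
final class TokenFramingSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String encode(int versionId, byte[] salt, byte[] keyHash,
                         SecretKey key, byte[] payload) throws GeneralSecurityException {
        byte[] iv = new byte[12];                       // IV_BYTES in the real code
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv), RANDOM);
        // The version and salt are bound as associated data, as in getEncryptionCipher.
        cipher.updateAAD(ByteBuffer.allocate(4).putInt(versionId).array());
        cipher.updateAAD(salt);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes(ByteBuffer.allocate(4).putInt(versionId).array()); // version
        out.writeBytes(salt);                                             // key salt
        out.writeBytes(keyHash);    // hash of the signing key (new in this change)
        out.writeBytes(iv);                                               // GCM IV
        out.writeBytes(cipher.doFinal(payload));       // encrypted UserToken payload
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }
}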


@ -57,6 +57,7 @@ import org.elasticsearch.xpack.persistent.PersistentTaskParams;
import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData;
import org.elasticsearch.xpack.persistent.PersistentTasksNodeService;
import org.elasticsearch.xpack.security.Security;
import org.elasticsearch.xpack.security.authc.TokenMetaData;
import java.io.IOException;
import java.util.ArrayList;
@ -279,6 +280,7 @@ abstract class MlNativeAutodetectIntegTestCase extends SecurityIntegTestCase {
PersistentTasksNodeService.Status::new));
entries.add(new NamedWriteableRegistry.Entry(Task.Status.class, JobTaskStatus.NAME, JobTaskStatus::new));
entries.add(new NamedWriteableRegistry.Entry(Task.Status.class, DatafeedState.NAME, DatafeedState::fromStream));
entries.add(new NamedWriteableRegistry.Entry(ClusterState.Custom.class, TokenMetaData.TYPE, TokenMetaData::new));
final NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(entries);
ClusterState masterClusterState = client().admin().cluster().prepareState().all().get().getState();
byte[] masterClusterStateBytes = ClusterState.Builder.toBytes(masterClusterState);


@ -106,6 +106,11 @@ public class AnalysisLimitsTests extends AbstractSerializingTestCase<AnalysisLim
assertThat(limits.getModelMemoryLimit(), equalTo(1L));
}
public void testModelMemoryDefault() {
AnalysisLimits limits = new AnalysisLimits(randomNonNegativeLong());
assertThat(limits.getModelMemoryLimit(), equalTo(AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB));
}
public void testEquals_GivenEqual() {
AnalysisLimits analysisLimits1 = new AnalysisLimits(10L, 20L);
AnalysisLimits analysisLimits2 = new AnalysisLimits(10L, 20L);


@ -107,6 +107,16 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
expectThrows(IllegalArgumentException.class, () -> buildJobBuilder("").build());
}
public void testEnsureModelMemoryLimitSet() {
Job.Builder builder = buildJobBuilder("foo");
builder.setDefaultMemoryLimitIfUnset();
Job job = builder.build();
assertEquals("foo", job.getId());
assertNotNull(job.getAnalysisLimits());
assertThat(job.getAnalysisLimits().getModelMemoryLimit(), equalTo(AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB));
}
public void testEquals_GivenDifferentClass() {
Job job = buildJobBuilder("foo").build();
assertFalse(job.equals("a string"));
@ -459,7 +469,7 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
builder.setFinishedTime(new Date());
builder.setLastDataTime(new Date());
Set<String> expected = new HashSet();
Set<String> expected = new HashSet<>();
expected.add(Job.CREATE_TIME.getPreferredName());
expected.add(Job.FINISHED_TIME.getPreferredName());
expected.add(Job.LAST_DATA_TIME.getPreferredName());
@ -471,16 +481,14 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
public void testEmptyGroup() {
Job.Builder builder = buildJobBuilder("foo");
builder.setGroups(Arrays.asList("foo-group", ""));
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> builder.build());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, builder::build);
assertThat(e.getMessage(), containsString("Invalid group id ''"));
}
public void testInvalidGroup() {
Job.Builder builder = buildJobBuilder("foo");
builder.setGroups(Arrays.asList("foo-group", "$$$"));
IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
() -> builder.build());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, builder::build);
assertThat(e.getMessage(), containsString("Invalid group id '$$$'"));
}


@ -17,18 +17,15 @@ import static org.elasticsearch.xpack.security.TokenPassphraseBootstrapCheck.MIN
public class TokenPassphraseBootstrapCheckTests extends ESTestCase {
public void testTokenPassphraseCheck() throws Exception {
assertTrue(new TokenPassphraseBootstrapCheck(Settings.EMPTY).check());
assertFalse(new TokenPassphraseBootstrapCheck(Settings.EMPTY).check());
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("foo", "bar"); // leniency in setSecureSettings... if its empty it's skipped
Settings settings = Settings.builder().setSecureSettings(secureSettings).build();
assertTrue(new TokenPassphraseBootstrapCheck(settings).check());
assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(MINIMUM_PASSPHRASE_LENGTH, 30));
assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), TokenService.DEFAULT_PASSPHRASE);
assertTrue(new TokenPassphraseBootstrapCheck(settings).check());
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, MINIMUM_PASSPHRASE_LENGTH - 1));
assertTrue(new TokenPassphraseBootstrapCheck(settings).check());
}
@ -44,8 +41,6 @@ public class TokenPassphraseBootstrapCheckTests extends ESTestCase {
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, 30));
assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), TokenService.DEFAULT_PASSPHRASE);
assertFalse(new TokenPassphraseBootstrapCheck(settings).check());
}
public void testTokenPassphraseCheckAfterSecureSettingsClosed() throws Exception {
@ -53,7 +48,7 @@ public class TokenPassphraseBootstrapCheckTests extends ESTestCase {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString("foo", "bar"); // leniency in setSecureSettings... if its empty it's skipped
settings = Settings.builder().put(settings).setSecureSettings(secureSettings).build();
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), TokenService.DEFAULT_PASSPHRASE);
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(1, MINIMUM_PASSPHRASE_LENGTH - 1));
final TokenPassphraseBootstrapCheck check = new TokenPassphraseBootstrapCheck(settings);
secureSettings.close();
assertTrue(check.check());


@ -26,9 +26,11 @@ import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
@ -133,7 +135,9 @@ public class AuthenticationServiceTests extends ESTestCase {
threadContext = threadPool.getThreadContext();
InternalClient internalClient = new InternalClient(Settings.EMPTY, threadPool, client);
lifecycleService = mock(SecurityLifecycleService.class);
tokenService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService);
ClusterService clusterService = new ClusterService(settings, new ClusterSettings(settings, ClusterSettings
.BUILT_IN_CLUSTER_SETTINGS), threadPool, Collections.emptyMap());
tokenService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
service = new AuthenticationService(settings, realms, auditTrail,
new DefaultAuthenticationFailureHandler(), threadPool, new AnonymousUser(settings), tokenService);
}


@ -6,14 +6,17 @@
package org.elasticsearch.xpack.security.authc;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.SecurityIntegTestCase;
import org.elasticsearch.test.SecuritySettingsSource;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
@ -45,6 +48,66 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
.build();
}
public void testTokenServiceBootstrapOnNodeJoin() throws Exception {
final Client client = internalClient();
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse response = securityClient.prepareCreateToken()
.setGrantType("password")
.setUsername(SecuritySettingsSource.TEST_USER_NAME)
.setPassword(new SecureString(SecuritySettingsSource.TEST_PASSWORD.toCharArray()))
.get();
for (TokenService tokenService : internalCluster().getInstances(TokenService.class)) {
PlainActionFuture<UserToken> userTokenFuture = new PlainActionFuture<>();
tokenService.decodeToken(response.getTokenString(), userTokenFuture);
assertNotNull(userTokenFuture.actionGet());
}
// start a new node and see if it can decrypt the token
String nodeName = internalCluster().startNode();
for (TokenService tokenService : internalCluster().getInstances(TokenService.class)) {
PlainActionFuture<UserToken> userTokenFuture = new PlainActionFuture<>();
tokenService.decodeToken(response.getTokenString(), userTokenFuture);
assertNotNull(userTokenFuture.actionGet());
}
TokenService tokenService = internalCluster().getInstance(TokenService.class, nodeName);
PlainActionFuture<UserToken> userTokenFuture = new PlainActionFuture<>();
tokenService.decodeToken(response.getTokenString(), userTokenFuture);
assertNotNull(userTokenFuture.actionGet());
}
public void testTokenServiceCanRotateKeys() throws Exception {
final Client client = internalClient();
SecurityClient securityClient = new SecurityClient(client);
CreateTokenResponse response = securityClient.prepareCreateToken()
.setGrantType("password")
.setUsername(SecuritySettingsSource.TEST_USER_NAME)
.setPassword(new SecureString(SecuritySettingsSource.TEST_PASSWORD.toCharArray()))
.get();
String masterName = internalCluster().getMasterName();
TokenService masterTokenService = internalCluster().getInstance(TokenService.class, masterName);
String activeKeyHash = masterTokenService.getActiveKeyHash();
for (TokenService tokenService : internalCluster().getInstances(TokenService.class)) {
PlainActionFuture<UserToken> userTokenFuture = new PlainActionFuture<>();
tokenService.decodeToken(response.getTokenString(), userTokenFuture);
assertNotNull(userTokenFuture.actionGet());
assertEquals(activeKeyHash, tokenService.getActiveKeyHash());
}
client().admin().cluster().prepareHealth().execute().get();
PlainActionFuture<ClusterStateUpdateResponse> rotateActionFuture = new PlainActionFuture<>();
logger.info("rotate on master: {}", masterName);
masterTokenService.rotateKeysOnMaster(rotateActionFuture);
assertTrue(rotateActionFuture.actionGet().isAcknowledged());
assertNotEquals(activeKeyHash, masterTokenService.getActiveKeyHash());
for (TokenService tokenService : internalCluster().getInstances(TokenService.class)) {
PlainActionFuture<UserToken> userTokenFuture = new PlainActionFuture<>();
tokenService.decodeToken(response.getTokenString(), userTokenFuture);
assertNotNull(userTokenFuture.actionGet());
assertNotEquals(activeKeyHash, tokenService.getActiveKeyHash());
}
}
public void testExpiredTokensDeletedAfterExpiration() throws Exception {
final Client client = internalClient();
SecurityClient securityClient = new SecurityClient(client);
@ -53,6 +116,7 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
.setUsername(SecuritySettingsSource.TEST_USER_NAME)
.setPassword(new SecureString(SecuritySettingsSource.TEST_PASSWORD.toCharArray()))
.get();
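// remember the creation time so the test can reason about when the expired token should be deleted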
Instant created = Instant.now();
InvalidateTokenResponse invalidateResponse = securityClient.prepareInvalidateToken(response.getTokenString()).get();
@ -126,4 +190,9 @@ public class TokenAuthIntegTests extends SecurityIntegTestCase {
assertTrue(done);
}
}
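// the token metadata contains key material, so it must never be serialized back to transport clients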
public void testMetadataIsNotSentToClient() {
ClusterStateResponse clusterStateResponse = client().admin().cluster().prepareState().setCustoms(true).get();
assertFalse(clusterStateResponse.getState().customs().containsKey(TokenMetaData.TYPE));
}
}

View File

@ -6,12 +6,15 @@
package org.elasticsearch.xpack.security.authc;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetAction;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.MockSecureSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
@ -29,11 +32,17 @@ import org.elasticsearch.xpack.security.authc.TokenService.BytesKey;
import org.elasticsearch.xpack.security.user.User;
import org.elasticsearch.xpack.support.clock.ClockMock;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import javax.crypto.SecretKey;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.time.Clock;
import java.util.Base64;
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import static org.elasticsearch.repositories.ESBlobStoreTestCase.randomBytes;
import static org.hamcrest.Matchers.containsString;
@ -46,16 +55,16 @@ import static org.mockito.Mockito.when;
public class TokenServiceTests extends ESTestCase {
private InternalClient internalClient;
private ThreadPool threadPool;
private static ThreadPool threadPool;
private static final Settings settings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), "TokenServiceTests").build();
private Client client;
private SecurityLifecycleService lifecycleService;
private ClusterService clusterService;
@Before
public void setupClient() {
client = mock(Client.class);
Settings settings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), "TokenServiceTests").build();
threadPool = new ThreadPool(settings,
new FixedExecutorBuilder(settings, TokenService.THREAD_POOL_NAME, 1, 1000, "xpack.security.authc.token.thread_pool"));
internalClient = new InternalClient(settings, threadPool, client);
lifecycleService = mock(SecurityLifecycleService.class);
when(lifecycleService.isSecurityIndexWriteable()).thenReturn(true);
@ -67,15 +76,24 @@ public class TokenServiceTests extends ESTestCase {
return Void.TYPE;
}).when(client).execute(eq(GetAction.INSTANCE), any(GetRequest.class), any(ActionListener.class));
when(client.threadPool()).thenReturn(threadPool);
this.clusterService = new ClusterService(settings, new ClusterSettings(settings, ClusterSettings
.BUILT_IN_CLUSTER_SETTINGS), threadPool, Collections.emptyMap());
}
@After
public void shutdownThreadpool() throws InterruptedException {
@BeforeClass
public static void startThreadPool() {
threadPool = new ThreadPool(settings,
new FixedExecutorBuilder(settings, TokenService.THREAD_POOL_NAME, 1, 1000, "xpack.security.authc.token.thread_pool"));
}
@AfterClass
public static void shutdownThreadpool() throws InterruptedException {
terminate(threadPool);
threadPool = null;
}
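// round trip: create a token, attach it to the request context, and verify it decodes back to the same authentication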
public void testAttachAndGetToken() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
@ -92,7 +110,9 @@ public class TokenServiceTests extends ESTestCase {
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
// verify a second separate token service with its own salt can also verify
TokenService anotherService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService anotherService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService,
clusterService);
anotherService.refreshMetaData(tokenService.getTokenMetaData());
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
anotherService.getAndValidateToken(requestContext, future);
UserToken fromOtherService = future.get();
@ -100,8 +120,144 @@ public class TokenServiceTests extends ESTestCase {
}
}
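// tokens issued before a key rotation must still validate, while new tokens are encrypted with the new active key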
public void testRotateKey() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + tokenService.getUserTokenString(token));
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
rotateKeys(tokenService);
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
final UserToken newToken = tokenService.createUserToken(authentication);
assertNotNull(newToken);
assertNotEquals(tokenService.getUserTokenString(newToken), tokenService.getUserTokenString(token));
requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + tokenService.getUserTokenString(newToken));
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
}
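// helper: generate a spare key, then promote it to be the active encryption key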
private void rotateKeys(TokenService tokenService) {
TokenMetaData tokenMetaData = tokenService.generateSpareKey();
tokenService.refreshMetaData(tokenMetaData);
tokenMetaData = tokenService.rotateToSpareKey();
tokenService.refreshMetaData(tokenMetaData);
}
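// a second token service that is given the first service's key metadata must be able to validate its tokens, even across rotations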
public void testKeyExchange() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
int numRotations = randomIntBetween(1, 5);
for (int i = 0; i < numRotations; i++) {
rotateKeys(tokenService);
}
TokenService otherTokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService,
clusterService);
otherTokenService.refreshMetaData(tokenService.getTokenMetaData());
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + tokenService.getUserTokenString(token));
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
otherTokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
rotateKeys(tokenService);
otherTokenService.refreshMetaData(tokenService.getTokenMetaData());
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
otherTokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
}
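// pruning keeps only the most recent keys, so a token encrypted with a pruned key must stop validating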
public void testPruneKeys() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + tokenService.getUserTokenString(token));
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
TokenMetaData metaData = tokenService.pruneKeys(randomIntBetween(0, 100));
tokenService.refreshMetaData(metaData);
int numIterations = scaledRandomIntBetween(1, 5);
for (int i = 0; i < numIterations; i++) {
rotateKeys(tokenService);
}
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
final UserToken newToken = tokenService.createUserToken(authentication);
assertNotNull(newToken);
assertNotEquals(tokenService.getUserTokenString(newToken), tokenService.getUserTokenString(token));
metaData = tokenService.pruneKeys(1);
tokenService.refreshMetaData(metaData);
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertNull(serialized);
}
requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + tokenService.getUserTokenString(newToken));
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());
}
}
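// a token service configured with a different passphrase must not be able to decode the token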
public void testPassphraseWorks() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
@ -121,7 +277,7 @@ public class TokenServiceTests extends ESTestCase {
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), randomAlphaOfLengthBetween(8, 30));
Settings settings = Settings.builder().setSecureSettings(secureSettings).build();
TokenService anotherService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService);
TokenService anotherService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
anotherService.getAndValidateToken(requestContext, future);
assertNull(future.get());
@ -130,7 +286,7 @@ public class TokenServiceTests extends ESTestCase {
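// a token that has been invalidated in the security index must be rejected even though it still decrypts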
public void testInvalidatedToken() throws Exception {
when(lifecycleService.isSecurityIndexAvailable()).thenReturn(true);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
@ -160,14 +316,14 @@ public class TokenServiceTests extends ESTestCase {
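// computeSecretKey must be deterministic: the same passphrase and salt always produce the same key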
public void testComputeSecretKeyIsConsistent() throws Exception {
byte[] saltArr = new byte[32];
random().nextBytes(saltArr);
SecretKey key = TokenService.computeSecretKey(TokenService.DEFAULT_PASSPHRASE.toCharArray(), saltArr);
SecretKey key2 = TokenService.computeSecretKey(TokenService.DEFAULT_PASSPHRASE.toCharArray(), saltArr);
SecretKey key = TokenService.computeSecretKey("some random passphrase".toCharArray(), saltArr);
SecretKey key2 = TokenService.computeSecretKey("some random passphrase".toCharArray(), saltArr);
assertArrayEquals(key.getEncoded(), key2.getEncoded());
}
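// a frozen clock lets the test deterministically advance time past the token expiration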
public void testTokenExpiry() throws Exception {
ClockMock clock = ClockMock.frozen();
TokenService tokenService = new TokenService(Settings.EMPTY, clock, internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, clock, internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
@ -215,7 +371,7 @@ public class TokenServiceTests extends ESTestCase {
TokenService tokenService = new TokenService(Settings.builder()
.put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), false)
.build(),
Clock.systemUTC(), internalClient, lifecycleService);
Clock.systemUTC(), internalClient, lifecycleService, clusterService);
IllegalStateException e = expectThrows(IllegalStateException.class, () -> tokenService.createUserToken(null));
assertEquals("tokens are not enabled", e.getMessage());
@ -257,7 +413,7 @@ public class TokenServiceTests extends ESTestCase {
final int numBytes = randomIntBetween(1, TokenService.MINIMUM_BYTES + 32);
final byte[] randomBytes = new byte[numBytes];
random().nextBytes(randomBytes);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + Base64.getEncoder().encodeToString(randomBytes));
@ -270,7 +426,7 @@ public class TokenServiceTests extends ESTestCase {
}
public void testIndexNotAvailable() throws Exception {
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService);
TokenService tokenService = new TokenService(Settings.EMPTY, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
final UserToken token = tokenService.createUserToken(authentication);
assertNotNull(token);
@ -291,4 +447,27 @@ public class TokenServiceTests extends ESTestCase {
assertNull(future.get());
}
}
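// backwards compatibility: a token created by a 5.6 token service must still decode after the upgrade;
// the passphrase below must match the one used to generate the hard-coded token, including its spelling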
public void testDecodePre6xToken() throws GeneralSecurityException, ExecutionException, InterruptedException, IOException {
String token = "g+y0AiDWsbLNzUGTywPa3VCz053RUPW7wAx4xTAonlcqjOmO1AzMhQDTUku/+ZtdtMgDobKqIrNdNvchvFMX0pvZLY6i4nAG2OhkApSstPfQQP" +
"J1fxg/JZNQDPufePg1GxV/RAQm2Gr8mYAelijEVlWIdYaQ3R76U+P/w6Q1v90dGVZQn6DKMOfgmkfwAFNY";
MockSecureSettings secureSettings = new MockSecureSettings();
secureSettings.setString(TokenService.TOKEN_PASSPHRASE.getKey(), "xpack_token_passpharse");
Settings settings = Settings.builder().setSecureSettings(secureSettings).build();
TokenService tokenService = new TokenService(settings, Clock.systemUTC(), internalClient, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);
ThreadContext requestContext = new ThreadContext(Settings.EMPTY);
requestContext.putHeader("Authorization", "Bearer " + token);
try (ThreadContext.StoredContext ignore = requestContext.newStoredContext(true)) {
PlainActionFuture<UserToken> future = new PlainActionFuture<>();
tokenService.decodeToken(tokenService.getFromHeader(requestContext), future);
UserToken serialized = future.get();
assertNotNull(serialized);
assertEquals("joe", serialized.getAuthentication().getUser().principal());
assertEquals(Version.V_5_6_0, serialized.getAuthentication().getVersion());
assertArrayEquals(new String[] {"admin"}, serialized.getAuthentication().getUser().roles());
}
}
}

View File

@ -34,6 +34,44 @@ import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.ml.MlMetaIndex;
import org.elasticsearch.xpack.ml.action.CloseJobAction;
import org.elasticsearch.xpack.ml.action.DeleteDatafeedAction;
import org.elasticsearch.xpack.ml.action.DeleteExpiredDataAction;
import org.elasticsearch.xpack.ml.action.DeleteFilterAction;
import org.elasticsearch.xpack.ml.action.DeleteJobAction;
import org.elasticsearch.xpack.ml.action.DeleteModelSnapshotAction;
import org.elasticsearch.xpack.ml.action.FinalizeJobExecutionAction;
import org.elasticsearch.xpack.ml.action.FlushJobAction;
import org.elasticsearch.xpack.ml.action.GetBucketsAction;
import org.elasticsearch.xpack.ml.action.GetCategoriesAction;
import org.elasticsearch.xpack.ml.action.GetDatafeedsAction;
import org.elasticsearch.xpack.ml.action.GetDatafeedsStatsAction;
import org.elasticsearch.xpack.ml.action.GetFiltersAction;
import org.elasticsearch.xpack.ml.action.GetInfluencersAction;
import org.elasticsearch.xpack.ml.action.GetJobsAction;
import org.elasticsearch.xpack.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.ml.action.GetModelSnapshotsAction;
import org.elasticsearch.xpack.ml.action.GetRecordsAction;
import org.elasticsearch.xpack.ml.action.IsolateDatafeedAction;
import org.elasticsearch.xpack.ml.action.KillProcessAction;
import org.elasticsearch.xpack.ml.action.OpenJobAction;
import org.elasticsearch.xpack.ml.action.PostDataAction;
import org.elasticsearch.xpack.ml.action.PreviewDatafeedAction;
import org.elasticsearch.xpack.ml.action.PutDatafeedAction;
import org.elasticsearch.xpack.ml.action.PutFilterAction;
import org.elasticsearch.xpack.ml.action.PutJobAction;
import org.elasticsearch.xpack.ml.action.RevertModelSnapshotAction;
import org.elasticsearch.xpack.ml.action.StartDatafeedAction;
import org.elasticsearch.xpack.ml.action.StopDatafeedAction;
import org.elasticsearch.xpack.ml.action.UpdateDatafeedAction;
import org.elasticsearch.xpack.ml.action.UpdateJobAction;
import org.elasticsearch.xpack.ml.action.UpdateModelSnapshotAction;
import org.elasticsearch.xpack.ml.action.UpdateProcessAction;
import org.elasticsearch.xpack.ml.action.ValidateDetectorAction;
import org.elasticsearch.xpack.ml.action.ValidateJobConfigAction;
import org.elasticsearch.xpack.ml.job.persistence.AnomalyDetectorsIndex;
import org.elasticsearch.xpack.ml.notifications.Auditor;
import org.elasticsearch.xpack.monitoring.action.MonitoringBulkAction;
import org.elasticsearch.xpack.security.action.role.PutRoleAction;
import org.elasticsearch.xpack.security.action.user.PutUserAction;
@ -80,6 +118,8 @@ public class ReservedRolesStoreTests extends ESTestCase {
assertThat(ReservedRolesStore.isReserved("remote_monitoring_agent"), is(true));
assertThat(ReservedRolesStore.isReserved("monitoring_user"), is(true));
assertThat(ReservedRolesStore.isReserved("reporting_user"), is(true));
assertThat(ReservedRolesStore.isReserved("machine_learning_user"), is(true));
assertThat(ReservedRolesStore.isReserved("machine_learning_admin"), is(true));
assertThat(ReservedRolesStore.isReserved("watcher_user"), is(true));
assertThat(ReservedRolesStore.isReserved("watcher_admin"), is(true));
}
@ -386,6 +426,106 @@ public class ReservedRolesStoreTests extends ESTestCase {
is(false));
}
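// machine_learning_admin can manage jobs and datafeeds but must not be able to call internal-only actions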
public void testMachineLearningAdminRole() {
RoleDescriptor roleDescriptor = new ReservedRolesStore().roleDescriptor("machine_learning_admin");
assertNotNull(roleDescriptor);
assertThat(roleDescriptor.getMetadata(), hasEntry("_reserved", true));
Role role = Role.builder(roleDescriptor, null).build();
assertThat(role.cluster().check(CloseJobAction.NAME), is(true));
assertThat(role.cluster().check(DeleteDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(DeleteExpiredDataAction.NAME), is(true));
assertThat(role.cluster().check(DeleteFilterAction.NAME), is(true));
assertThat(role.cluster().check(DeleteJobAction.NAME), is(true));
assertThat(role.cluster().check(DeleteModelSnapshotAction.NAME), is(true));
assertThat(role.cluster().check(FinalizeJobExecutionAction.NAME), is(false)); // internal use only
assertThat(role.cluster().check(FlushJobAction.NAME), is(true));
assertThat(role.cluster().check(GetBucketsAction.NAME), is(true));
assertThat(role.cluster().check(GetCategoriesAction.NAME), is(true));
assertThat(role.cluster().check(GetDatafeedsAction.NAME), is(true));
assertThat(role.cluster().check(GetDatafeedsStatsAction.NAME), is(true));
assertThat(role.cluster().check(GetFiltersAction.NAME), is(true));
assertThat(role.cluster().check(GetInfluencersAction.NAME), is(true));
assertThat(role.cluster().check(GetJobsAction.NAME), is(true));
assertThat(role.cluster().check(GetJobsStatsAction.NAME), is(true));
assertThat(role.cluster().check(GetModelSnapshotsAction.NAME), is(true));
assertThat(role.cluster().check(GetRecordsAction.NAME), is(true));
assertThat(role.cluster().check(IsolateDatafeedAction.NAME), is(false)); // internal use only
assertThat(role.cluster().check(KillProcessAction.NAME), is(false)); // internal use only
assertThat(role.cluster().check(OpenJobAction.NAME), is(true));
assertThat(role.cluster().check(PostDataAction.NAME), is(true));
assertThat(role.cluster().check(PreviewDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(PutDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(PutFilterAction.NAME), is(true));
assertThat(role.cluster().check(PutJobAction.NAME), is(true));
assertThat(role.cluster().check(RevertModelSnapshotAction.NAME), is(true));
assertThat(role.cluster().check(StartDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(StopDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(UpdateDatafeedAction.NAME), is(true));
assertThat(role.cluster().check(UpdateJobAction.NAME), is(true));
assertThat(role.cluster().check(UpdateModelSnapshotAction.NAME), is(true));
assertThat(role.cluster().check(UpdateProcessAction.NAME), is(false)); // internal use only
assertThat(role.cluster().check(ValidateDetectorAction.NAME), is(true));
assertThat(role.cluster().check(ValidateJobConfigAction.NAME), is(true));
assertThat(role.runAs().check(randomAlphaOfLengthBetween(1, 30)), is(false));
assertNoAccessAllowed(role, "foo");
assertOnlyReadAllowed(role, MlMetaIndex.INDEX_NAME);
assertOnlyReadAllowed(role, AnomalyDetectorsIndex.jobStateIndexName());
assertOnlyReadAllowed(role, AnomalyDetectorsIndex.RESULTS_INDEX_PREFIX + AnomalyDetectorsIndex.RESULTS_INDEX_DEFAULT);
assertOnlyReadAllowed(role, Auditor.NOTIFICATIONS_INDEX);
}
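// machine_learning_user is read-only: it can read results and job metadata but cannot modify any configuration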
public void testMachineLearningUserRole() {
RoleDescriptor roleDescriptor = new ReservedRolesStore().roleDescriptor("machine_learning_user");
assertNotNull(roleDescriptor);
assertThat(roleDescriptor.getMetadata(), hasEntry("_reserved", true));
Role role = Role.builder(roleDescriptor, null).build();
assertThat(role.cluster().check(CloseJobAction.NAME), is(false));
assertThat(role.cluster().check(DeleteDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(DeleteExpiredDataAction.NAME), is(false));
assertThat(role.cluster().check(DeleteFilterAction.NAME), is(false));
assertThat(role.cluster().check(DeleteJobAction.NAME), is(false));
assertThat(role.cluster().check(DeleteModelSnapshotAction.NAME), is(false));
assertThat(role.cluster().check(FinalizeJobExecutionAction.NAME), is(false));
assertThat(role.cluster().check(FlushJobAction.NAME), is(false));
assertThat(role.cluster().check(GetBucketsAction.NAME), is(true));
assertThat(role.cluster().check(GetCategoriesAction.NAME), is(true));
assertThat(role.cluster().check(GetDatafeedsAction.NAME), is(true));
assertThat(role.cluster().check(GetDatafeedsStatsAction.NAME), is(true));
assertThat(role.cluster().check(GetFiltersAction.NAME), is(false));
assertThat(role.cluster().check(GetInfluencersAction.NAME), is(true));
assertThat(role.cluster().check(GetJobsAction.NAME), is(true));
assertThat(role.cluster().check(GetJobsStatsAction.NAME), is(true));
assertThat(role.cluster().check(GetModelSnapshotsAction.NAME), is(true));
assertThat(role.cluster().check(GetRecordsAction.NAME), is(true));
assertThat(role.cluster().check(IsolateDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(KillProcessAction.NAME), is(false));
assertThat(role.cluster().check(OpenJobAction.NAME), is(false));
assertThat(role.cluster().check(PostDataAction.NAME), is(false));
assertThat(role.cluster().check(PreviewDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(PutDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(PutFilterAction.NAME), is(false));
assertThat(role.cluster().check(PutJobAction.NAME), is(false));
assertThat(role.cluster().check(RevertModelSnapshotAction.NAME), is(false));
assertThat(role.cluster().check(StartDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(StopDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(UpdateDatafeedAction.NAME), is(false));
assertThat(role.cluster().check(UpdateJobAction.NAME), is(false));
assertThat(role.cluster().check(UpdateModelSnapshotAction.NAME), is(false));
assertThat(role.cluster().check(UpdateProcessAction.NAME), is(false));
assertThat(role.cluster().check(ValidateDetectorAction.NAME), is(false));
assertThat(role.cluster().check(ValidateJobConfigAction.NAME), is(false));
assertThat(role.runAs().check(randomAlphaOfLengthBetween(1, 30)), is(false));
assertNoAccessAllowed(role, "foo");
assertNoAccessAllowed(role, MlMetaIndex.INDEX_NAME);
assertNoAccessAllowed(role, AnomalyDetectorsIndex.jobStateIndexName());
assertOnlyReadAllowed(role, AnomalyDetectorsIndex.RESULTS_INDEX_PREFIX + AnomalyDetectorsIndex.RESULTS_INDEX_DEFAULT);
assertOnlyReadAllowed(role, Auditor.NOTIFICATIONS_INDEX);
}
public void testWatcherAdminRole() {
RoleDescriptor roleDescriptor = new ReservedRolesStore().roleDescriptor("watcher_admin");
assertNotNull(roleDescriptor);
@ -449,6 +589,18 @@ public class ReservedRolesStoreTests extends ESTestCase {
assertThat(role.indices().allowedIndicesMatcher(BulkAction.NAME).test(index), is(false));
}
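// helper: asserts that the role has no read or write privileges of any kind on the given index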
private void assertNoAccessAllowed(Role role, String index) {
assertThat(role.indices().allowedIndicesMatcher(DeleteIndexAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(CreateIndexAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(UpdateSettingsAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(SearchAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(GetAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(IndexAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(UpdateAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(DeleteAction.NAME).test(index), is(false));
assertThat(role.indices().allowedIndicesMatcher(BulkAction.NAME).test(index), is(false));
}
public void testLogstashAdminRole() {
RoleDescriptor roleDescriptor = new ReservedRolesStore().roleDescriptor("logstash_admin");
assertNotNull(roleDescriptor);

View File

@ -52,12 +52,14 @@
}
}
- match: { job_id: "job-crud-test-apis" }
- match: { analysis_limits.model_memory_limit: "1024mb" }
- do:
xpack.ml.get_jobs:
job_id: "job-crud-test-apis"
- match: { count: 1 }
- match: { jobs.0.job_id: "job-crud-test-apis" }
- match: { jobs.0.analysis_limits.model_memory_limit: "1024mb" }
- do:
indices.get_alias:

View File

@ -124,6 +124,8 @@ subprojects {
setting 'xpack.security.transport.ssl.enabled', 'true'
setting 'xpack.ssl.keystore.path', 'testnode.jks'
setting 'xpack.ssl.keystore.password', 'testnode'
// this is needed until the token service changes are backported to 6.x
keystoreSetting 'xpack.security.authc.token.passphrase', 'xpack_token_passphrase'
dependsOn copyTestNodeKeystore
extraConfigFile 'testnode.jks', new File(outputDir + '/testnode.jks')
if (withSystemKey) {
@ -160,6 +162,7 @@ subprojects {
waitCondition = waitWithAuth
setting 'xpack.ssl.keystore.path', 'testnode.jks'
keystoreSetting 'xpack.ssl.keystore.secure_password', 'testnode'
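// use the same token passphrase on every node so tokens created before the upgrade stay valid across the cluster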
keystoreSetting 'xpack.security.authc.token.passphrase', 'xpack_token_passphrase'
setting 'node.attr.upgraded', 'first'
dependsOn copyTestNodeKeystore
extraConfigFile 'testnode.jks', new File(outputDir + '/testnode.jks')
@ -190,6 +193,7 @@ subprojects {
waitCondition = waitWithAuth
setting 'xpack.ssl.keystore.path', 'testnode.jks'
keystoreSetting 'xpack.ssl.keystore.secure_password', 'testnode'
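// keep the token passphrase consistent with the rest of the upgraded cluster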
keystoreSetting 'xpack.security.authc.token.passphrase', 'xpack_token_passphrase'
dependsOn copyTestNodeKeystore
extraConfigFile 'testnode.jks', new File(outputDir + '/testnode.jks')
if (withSystemKey) {

View File

@ -0,0 +1,50 @@
---
"Get the indexed token and use if to authenticate":
- skip:
features: headers
- do:
get:
index: token_index
type: doc
id: "6"
- match: { _index: token_index }
- match: { _type: doc }
- match: { _id: "6" }
- is_true: _source.token
- set: { _source.token : token }
- do:
headers:
Authorization: Bearer ${token}
xpack.security.authenticate: {}
- match: { username: "token_user" }
- match: { roles.0: "superuser" }
- match: { full_name: "Token User" }
- do:
headers:
Authorization: Bearer ${token}
search:
index: token_index
- match: { hits.total: 6 }
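# the search is repeated so that requests land on different nodes of the mixed cluster during the rolling upgrade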
- do:
headers:
Authorization: Bearer ${token}
search:
index: token_index
- match: { hits.total: 6 }
- do:
headers:
Authorization: Bearer ${token}
search:
index: token_index
- match: { hits.total: 6 }

View File

@ -0,0 +1,91 @@
---
"Create a token and reuse it across the upgrade":
- skip:
features: headers
- do:
cluster.health:
wait_for_status: yellow
- do:
xpack.security.put_user:
username: "token_user"
body: >
{
"password" : "x-pack-test-password",
"roles" : [ "superuser" ],
"full_name" : "Token User"
}
- do:
xpack.security.get_token:
body:
grant_type: "password"
username: "token_user"
password: "x-pack-test-password"
- match: { type: "Bearer" }
- is_true: access_token
- set: { access_token: token }
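# 1200 seconds is the default 20 minute token expiration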
- match: { expires_in: 1200 }
- is_false: scope
- do:
headers:
Authorization: Bearer ${token}
xpack.security.authenticate: {}
- match: { username: "token_user" }
- match: { roles.0: "superuser" }
- match: { full_name: "Token User" }
- do:
indices.create:
index: token_index
wait_for_active_shards : all
body:
settings:
index:
number_of_replicas: 1
- do:
headers:
Authorization: Bearer ${token}
bulk:
refresh: true
body:
- '{"index": {"_index": "token_index", "_type": "doc", "_id" : "1"}}'
- '{"f1": "v1_old", "f2": 0}'
- '{"index": {"_index": "token_index", "_type": "doc", "_id" : "2"}}'
- '{"f1": "v2_old", "f2": 1}'
- '{"index": {"_index": "token_index", "_type": "doc", "_id" : "3"}}'
- '{"f1": "v3_old", "f2": 2}'
- '{"index": {"_index": "token_index", "_type": "doc", "_id" : "4"}}'
- '{"f1": "v4_old", "f2": 3}'
- '{"index": {"_index": "token_index", "_type": "doc", "_id" : "5"}}'
- '{"f1": "v5_old", "f2": 4}'
- do:
headers:
Authorization: Bearer ${token}
indices.flush:
index: token_index
- do:
headers:
Authorization: Bearer ${token}
search:
index: token_index
- match: { hits.total: 5 }
# store the token in the index so that it can be reused once the cluster
# has been upgraded
- do:
headers:
Authorization: Bearer ${token}
index:
index: token_index
type: doc
id: "6"
body: { "token" : "${token}"}

View File

@ -0,0 +1,40 @@
---
"Get the indexed token and use if to authenticate":
- skip:
features: headers
- do:
get:
index: token_index
type: doc
id: "6"
- match: { _index: token_index }
- match: { _type: doc }
- match: { _id: "6" }
- is_true: _source.token
- set: { _source.token : token }
- do:
headers:
Authorization: Bearer ${token}
xpack.security.authenticate: {}
- match: { username: "token_user" }
- match: { roles.0: "superuser" }
- match: { full_name: "Token User" }
- do:
headers:
Authorization: Bearer ${token}
search:
index: token_index
- match: { hits.total: 6 }
# counterexample: an invalid token must be rejected, proving that authentication is actually enforced
- do:
headers:
Authorization: Bearer boom
catch: /missing authentication token/
search:
index: token_index