Merge remote-tracking branch 'elastic/master' into feature/sql_2

Original commit: elastic/x-pack-elasticsearch@d6451b0f6b
Igor Motov 2018-01-23 12:53:13 -05:00
commit 2be9cabbe9
35 changed files with 943 additions and 359 deletions


@@ -1,12 +1,26 @@
 [float]
 [[ml-buckets]]
 === Buckets
+++++
+<titleabbrev>Buckets</titleabbrev>
+++++
 
-The {xpackml} features use the concept of a bucket to divide the time
-series into batches for processing. The _bucket span_ is part of the
-configuration information for a job. It defines the time interval that is used
-to summarize and model the data. This is typically between 5 minutes to 1 hour
-and it depends on your data characteristics. When you set the bucket span,
-take into account the granularity at which you want to analyze, the frequency
-of the input data, the typical duration of the anomalies, and the frequency at
-which alerting is required.
+The {xpackml} features use the concept of a _bucket_ to divide the time series
+into batches for processing.
+
+The _bucket span_ is part of the configuration information for a job. It defines
+the time interval that is used to summarize and model the data. This is
+typically between 5 minutes and 1 hour, and it depends on your data
+characteristics. When you set the bucket span, take into account the granularity
+at which you want to analyze, the frequency of the input data, the typical
+duration of the anomalies, and the frequency at which alerting is required.
+
+When you view your {ml} results, each bucket has an anomaly score. This score is
+a statistically aggregated and normalized view of the combined anomalousness of
+all the record results in the bucket. If you have more than one job, you can
+also obtain overall bucket results, which combine and correlate anomalies from
+multiple jobs into an overall score. When you view the results for job groups
+in {kib}, it provides the overall bucket scores.
+
+For more information, see
+{ref}/ml-results-resource.html[Results Resources] and
+{ref}/ml-get-overall-buckets.html[Get Overall Buckets API].
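The overall bucket scores added above are exposed through the get overall buckets API. A minimal request sketch in the docs' console style; the `_all` job selector and the `overall_score`/`top_n` parameters are assumptions about that API, not taken from this commit:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/_all/results/overall_buckets
{
  "overall_score": 75,
  "top_n": 2
}
--------------------------------------------------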


@@ -9,10 +9,15 @@ The {ml} model is not ill-affected and you do not receive spurious results.
 
 You can create calendars and scheduled events in the **Settings** pane on the
 **Machine Learning** page in {kib} or by using {ref}/ml-apis.html[{ml} APIs].
 
-A scheduled event must have a start time, end time, and description. You can
-identify zero or more scheduled events in a calendar. Jobs can then subscribe to
-calendars and the {ml} analytics handle all subsequent scheduled events
-appropriately.
+A scheduled event must have a start time, end time, and description. In general,
+scheduled events are short in duration (typically lasting from a few hours to a
+day) and occur infrequently. If you have regularly occurring events, such as
+weekly maintenance periods, you do not need to create scheduled events for these
+circumstances; they are already handled by the {ml} analytics.
+
+You can identify zero or more scheduled events in a calendar. Jobs can then
+subscribe to calendars and the {ml} analytics handle all subsequent scheduled
+events appropriately.
 
 If you want to add multiple scheduled events at once, you can import an
 iCalendar (`.ics`) file in {kib} or a JSON file in the
@@ -21,8 +26,16 @@ add events to calendar API
 //]
 .
 
-NOTE: Bucket results are generated during scheduled events but they have an
-anomaly score of zero. For more information about bucket results, see
-{ref}/ml-results-resource.html[Results Resources].
-
-//TO-DO: Add screenshot showing special events in Single Metric Viewer?
+[NOTE]
+--
+* If your iCalendar file contains recurring events, only the first occurrence is
+imported.
+* Bucket results are generated during scheduled events but they have an
+anomaly score of zero. For more information about bucket results, see
+{ref}/ml-results-resource.html[Results Resources].
+* If you use long or frequent scheduled events, it might take longer for the
+{ml} analytics to learn to model your data and some anomalous behavior might be
+missed.
+--
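The JSON alternative to an iCalendar import goes through the add events to calendar API mentioned above. A hedged sketch of the request shape; the calendar name is hypothetical, and the `events` wrapper and epoch-millisecond timestamp fields are assumptions about that API:

[source,js]
--------------------------------------------------
POST _xpack/ml/calendars/planned-outages/events
{
  "events": [
    {
      "description": "quarterly maintenance window",
      "start_time": 1514160000000,
      "end_time": 1514246400000
    }
  ]
}
--------------------------------------------------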


@@ -3,6 +3,7 @@
 
 include::analyzing.asciidoc[]
 include::forecasting.asciidoc[]
+include::buckets.asciidoc[]
 include::calendars.asciidoc[]
 
 [[ml-concepts]]
@@ -16,5 +17,4 @@ concepts from the outset will tremendously help ease the learning process.
 
 include::jobs.asciidoc[]
 include::datafeeds.asciidoc[]
-include::buckets.asciidoc[]
 include::architecture.asciidoc[]


@@ -304,7 +304,7 @@ The format of a log entry is:
 `<local_node_info>` :: Information about the local node that generated
                        the log entry. You can control what node information
                        is included by configuring the
-                       <<audit-log-entry-local-node-info, local node info settings>>.
+                       {ref}/auditing-settings.html#node-audit-settings[local node info settings].
 `<layer>`           :: The layer from which this event originated:
                        `rest`, `transport` or `ip_filter`.
 `<entry_type>`      :: The type of event that occurred: `anonymous_access_denied`,
@@ -321,35 +321,13 @@ The format of a log entry is:
 === Logfile Output Settings
 
 The events and some other information about what gets logged can be
-controlled using settings in the `elasticsearch.yml` file.
-
-.Audited Event Settings
-[cols="4,^2,4",options="header"]
-|======
-| Name | Default | Description
-| `xpack.security.audit.logfile.events.include` | `access_denied`, `access_granted`, `anonymous_access_denied`, `authentication_failed`, `connection_denied`, `tampered_request`, `run_as_denied`, `run_as_granted` | Includes the specified events in the output.
-| `xpack.security.audit.logfile.events.exclude` | | Excludes the specified events from the output.
-| `xpack.security.audit.logfile.events.emit_request_body`| false | Include or exclude the request body from REST requests
-on certain event types such as `authentication_failed`.
-|======
+controlled using settings in the `elasticsearch.yml` file. See
+{ref}/auditing-settings.html#event-audit-settings[Audited Event Settings] and
+{ref}/auditing-settings.html#node-audit-settings[Local Node Info Settings].
 
 IMPORTANT: No filtering is performed when auditing, so sensitive data may be
 audited in plain text when including the request body in audit events.
 
-[[audit-log-entry-local-node-info]]
-.Local Node Info Settings
-[cols="4,^2,4",options="header"]
-|======
-| Name | Default | Description
-| `xpack.security.audit.logfile.prefix.emit_node_name` | true | Include or exclude the node's name
-from the local node info.
-| `xpack.security.audit.logfile.prefix.emit_node_host_address` | false | Include or exclude the node's IP address
-from the local node info.
-| `xpack.security.audit.logfile.prefix.emit_node_host_name` | false | Include or exclude the node's host name
-from the local node info.
-|======
-
 [[logging-file]]
 You can also configure how the logfile is written in the `log4j2.properties`
 file located in `CONFIG_DIR/x-pack`. By default, audit information is appended to the
@@ -450,19 +428,8 @@ in the `elasticsearch.yml` file:
 xpack.security.audit.outputs: [ index, logfile ]
 ----------------------------
 
-.Audit Log Indexing Configuration
-[options="header"]
-|======
-| Attribute | Default Setting | Description
-| `xpack.security.audit.index.bulk_size` | `1000` | Controls how many audit events are batched into a single write.
-| `xpack.security.audit.index.flush_interval` | `1s` | Controls how often buffered events are flushed to the index.
-| `xpack.security.audit.index.rollover` | `daily` | Controls how often to roll over to a new index:
-`hourly`, `daily`, `weekly`, or `monthly`.
-| `xpack.security.audit.index.events.include` | `anonymous_access_denied`, `authentication_failed`, `realm_authentication_failed`, `access_granted`, `access_denied`, `tampered_request`, `connection_granted`, `connection_denied`, `run_as_granted`, `run_as_denied` | The audit events to be indexed. See <<audit-event-types, Audit Entry Types>> for the complete list.
-| `xpack.security.audit.index.events.exclude` | | The audit events to exclude from indexing.
-| `xpack.security.audit.index.events.emit_request_body`| false | Include or exclude the request body from REST requests
-on certain event types such as `authentication_failed`.
-|======
+For more configuration options, see
+{ref}/auditing-settings.html#index-audit-settings[Audit Log Indexing Configuration Settings].
 
 IMPORTANT: No filtering is performed when auditing, so sensitive data may be
 audited in plain text when including the request body in audit events.
@@ -487,18 +454,14 @@ xpack.security.audit.index.settings:
 ==== Forwarding Audit Logs to a Remote Cluster
 
 To index audit events to a remote Elasticsearch cluster, you configure
-the following `xpack.security.audit.index.client` settings.
+the following `xpack.security.audit.index.client` settings:
 
-.Remote Audit Log Indexing Configuration
-[options="header"]
-|======
-| Attribute | Description
-| `xpack.security.audit.index.client.hosts` | Comma-separated list of `host:port` pairs. These hosts
-should be nodes in the remote cluster.
-| `xpack.security.audit.index.client.cluster.name` | The name of the remote cluster.
-| `xpack.security.audit.index.client.xpack.security.user` | The `username:password` pair to use to authenticate with
-the remote cluster.
-|======
+* `xpack.security.audit.index.client.hosts`
+* `xpack.security.audit.index.client.cluster.name`
+* `xpack.security.audit.index.client.xpack.security.user`
+
+For more information about these settings, see
+{ref}/auditing-settings.html#remote-audit-settings[Remote Audit Log Indexing Configuration Settings].
 
 You can pass additional settings to the remote client by specifying them in the
 `xpack.security.audit.index.client` namespace. For example, to allow the remote
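Put together, a minimal `elasticsearch.yml` sketch for remote forwarding might look as follows. The setting names and their meanings (comma-separated `host:port` pairs, remote cluster name, `username:password` credentials) come from this section; the concrete values are placeholders:

```yaml
# Ship audit events to a separate cluster (placeholder hosts and credentials).
xpack.security.audit.outputs: [ index ]
xpack.security.audit.index.client.hosts: remote-node1:9300, remote-node2:9300
xpack.security.audit.index.client.cluster.name: remote-audit-cluster
xpack.security.audit.index.client.xpack.security.user: audit_client:changeme
```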


@@ -19,10 +19,6 @@ es_meta_plugin {
             'ml', 'monitoring', 'security', 'upgrade', 'watcher', 'sql']
 }
 
-ext.expansions = [
-    'project.version': version,
-]
-
 dependencies {
     testCompile project(path: ':x-pack-elasticsearch:plugin:core', configuration: 'testArtifacts')
 }
@@ -41,6 +37,13 @@ artifacts {
 }
 
 integTestRunner {
+    /*
+     * We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
+     * other if we allow them to set the number of available processors as it's set-once in Netty.
+     */
+    systemProperty 'es.set.netty.runtime.available.processors', 'false'
+
     // TODO: fix this rest test to not depend on a hardcoded port!
     def blacklist = ['getting_started/10_monitor_cluster_health/*']
     boolean snapshot = "true".equals(System.getProperty("build.snapshot", "true"))
@@ -141,14 +144,6 @@ integTestCluster {
     }
 }
 
-integTestRunner {
-    /*
-     * We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
-     * other if we allow them to set the number of available processors as it's set-once in Netty.
-     */
-    systemProperty 'es.set.netty.runtime.available.processors', 'false'
-}
-
 run {
     setting 'xpack.ml.enabled', 'true'
     setting 'xpack.graph.enabled', 'true'


@@ -5,6 +5,9 @@ import java.nio.file.Path
 import java.nio.file.StandardCopyOption
 
 apply plugin: 'elasticsearch.esplugin'
+
+archivesBaseName = 'x-pack-core'
+
 esplugin {
     name 'x-pack-core'
     description 'Elasticsearch Expanded Pack Plugin - Core'
@@ -18,27 +21,16 @@ esplugin {
 integTest.enabled = false
 
 dependencyLicenses {
-    mapping from: /netty-.*/, to: 'netty'
     mapping from: /bc.*/, to: 'bouncycastle'
-    mapping from: /owasp-java-html-sanitizer.*/, to: 'owasp-java-html-sanitizer'
-    mapping from: /transport-netty.*/, to: 'elasticsearch'
-    mapping from: /transport-nio.*/, to: 'elasticsearch'
-    mapping from: /elasticsearch-nio.*/, to: 'elasticsearch'
-    mapping from: /elasticsearch-rest-client.*/, to: 'elasticsearch'
     mapping from: /http.*/, to: 'httpclient' // pulled in by rest client
     mapping from: /commons-.*/, to: 'commons' // pulled in by rest client
-    ignoreSha 'elasticsearch-rest-client'
-    ignoreSha 'transport-netty4'
-    ignoreSha 'transport-nio'
-    ignoreSha 'elasticsearch-nio'
-    ignoreSha 'elasticsearch-rest-client-sniffer'
-    ignoreSha 'x-pack-core'
 }
 
 dependencies {
     provided "org.elasticsearch:elasticsearch:${version}"
     compile "org.apache.httpcomponents:httpclient:${versions.httpclient}"
     compile "org.apache.httpcomponents:httpcore:${versions.httpcore}"
-    compile "org.apache.httpcomponents:httpcore-nio:${versions.httpcore}"
     compile "org.apache.httpcomponents:httpasyncclient:${versions.httpasyncclient}"
     compile "commons-logging:commons-logging:${versions.commonslogging}"
@@ -50,7 +42,6 @@ dependencies {
     compile 'org.bouncycastle:bcpkix-jdk15on:1.58'
     compile project(path: ':modules:transport-netty4', configuration: 'runtime')
 
-    //testCompile project(path: ':core:cli', configuration: 'runtime')
     testCompile 'org.elasticsearch:securemock:1.2'
     testCompile "org.elasticsearch:mocksocket:${versions.mocksocket}"
     testCompile "org.apache.logging.log4j:log4j-slf4j-impl:${versions.log4j}"
@@ -60,6 +51,10 @@ dependencies {
     testCompile project(path: ':modules:analysis-common', configuration: 'runtime')
 }
 
+ext.expansions = [
+    'project.version': version
+]
+
 processResources {
     from(sourceSets.main.resources.srcDirs) {
         exclude '**/public.key'
@@ -81,14 +76,9 @@ forbiddenPatterns {
     exclude '**/*.zip'
 }
 
-archivesBaseName = 'x-pack-core'
-
 compileJava.options.compilerArgs << "-Xlint:-deprecation,-rawtypes,-serial,-try,-unchecked"
 compileTestJava.options.compilerArgs << "-Xlint:-deprecation,-rawtypes,-serial,-try,-unchecked"
 
-// TODO: fix these!
-thirdPartyAudit.enabled = false
-
 licenseHeaders {
     approvedLicenses << 'BCrypt (BSD-like)'
     additionalLicense 'BCRYP', 'BCrypt (BSD-like)', 'Copyright (c) 2006 Damien Miller <djm@mindrot.org>'
@@ -100,6 +90,7 @@ sourceSets.test.java {
     srcDir '../../license-tools/src/main/java'
 }
 
+// TODO: remove this jar once xpack extensions have been removed
 // assemble the API JAR for the transport-client and extension authors; this JAR is the core JAR by another name
 project.afterEvaluate {
     task apiJar {
@@ -131,108 +122,6 @@ project.afterEvaluate {
     }
 }
//
// integTestRunner {
// // TODO: fix this rest test to not depend on a hardcoded port!
// def blacklist = ['getting_started/10_monitor_cluster_health/*']
// boolean snapshot = "true".equals(System.getProperty("build.snapshot", "true"))
// if (!snapshot) {
// // these tests attempt to install basic/internal licenses signed against the dev/public.key
// // Since there is no infrastructure in place (anytime soon) to generate licenses using the production
// // private key, these tests are whitelisted in non-snapshot test runs
// blacklist.addAll(['xpack/15_basic/*', 'license/20_put_license/*'])
// }
// systemProperty 'tests.rest.blacklist', blacklist.join(',')
// }
// // location of generated keystores and certificates
// File keystoreDir = new File(project.buildDir, 'keystore')
// // Generate the node's keystore
// File nodeKeystore = new File(keystoreDir, 'test-node.jks')
// task createNodeKeyStore(type: LoggedExec) {
// doFirst {
// if (nodeKeystore.parentFile.exists() == false) {
// nodeKeystore.parentFile.mkdirs()
// }
// if (nodeKeystore.exists()) {
// delete nodeKeystore
// }
// }
// executable = new File(project.javaHome, 'bin/keytool')
// standardInput = new ByteArrayInputStream('FirstName LastName\nUnit\nOrganization\nCity\nState\nNL\nyes\n\n'.getBytes('UTF-8'))
// args '-genkey',
// '-alias', 'test-node',
// '-keystore', nodeKeystore,
// '-keyalg', 'RSA',
// '-keysize', '2048',
// '-validity', '712',
// '-dname', 'CN=smoke-test-plugins-ssl',
// '-keypass', 'keypass',
// '-storepass', 'keypass'
// }
// Add keystores to test classpath: it expects it there
//sourceSets.test.resources.srcDir(keystoreDir)
//processTestResources.dependsOn(createNodeKeyStore)
// integTestCluster {
// dependsOn createNodeKeyStore
// setting 'xpack.ml.enabled', 'true'
// setting 'logger.org.elasticsearch.xpack.ml.datafeed', 'TRACE'
// // Integration tests are supposed to enable/disable exporters before/after each test
// setting 'xpack.monitoring.exporters._local.type', 'local'
// setting 'xpack.monitoring.exporters._local.enabled', 'false'
// setting 'xpack.monitoring.collection.interval', '-1'
// setting 'xpack.security.authc.token.enabled', 'true'
// setting 'xpack.security.transport.ssl.enabled', 'true'
// setting 'xpack.security.transport.ssl.keystore.path', nodeKeystore.name
// setting 'xpack.security.transport.ssl.verification_mode', 'certificate'
// setting 'xpack.security.audit.enabled', 'true'
// keystoreSetting 'bootstrap.password', 'x-pack-test-password'
// keystoreSetting 'xpack.security.transport.ssl.keystore.secure_password', 'keypass'
// distribution = 'zip' // this is important since we use the reindex module in ML
// setupCommand 'setupTestUser', 'bin/x-pack/users', 'useradd', 'x_pack_rest_user', '-p', 'x-pack-test-password', '-r', 'superuser'
// extraConfigFile nodeKeystore.name, nodeKeystore
// waitCondition = { NodeInfo node, AntBuilder ant ->
// File tmpFile = new File(node.cwd, 'wait.success')
// for (int i = 0; i < 10; i++) {
// // we use custom wait logic here as the elastic user is not available immediately and ant.get will fail when a 401 is returned
// HttpURLConnection httpURLConnection = null;
// try {
// httpURLConnection = (HttpURLConnection) new URL("http://${node.httpUri()}/_cluster/health?wait_for_nodes=${numNodes}&wait_for_status=yellow").openConnection();
// httpURLConnection.setRequestProperty("Authorization", "Basic " +
// Base64.getEncoder().encodeToString("x_pack_rest_user:x-pack-test-password".getBytes(StandardCharsets.UTF_8)));
// httpURLConnection.setRequestMethod("GET");
// httpURLConnection.connect();
// if (httpURLConnection.getResponseCode() == 200) {
// tmpFile.withWriter StandardCharsets.UTF_8.name(), {
// it.write(httpURLConnection.getInputStream().getText(StandardCharsets.UTF_8.name()))
// }
// }
// } catch (Exception e) {
// if (i == 9) {
// logger.error("final attempt of calling cluster health failed", e)
// } else {
// logger.debug("failed to call cluster health", e)
// }
// } finally {
// if (httpURLConnection != null) {
// httpURLConnection.disconnect();
// }
// }
// // did not start, so wait a bit before trying again
// Thread.sleep(500L);
// }
// return tmpFile.exists()
// }
//}
 test {
     /*
      * We have to disable setting the number of available processors as tests in the same JVM randomize processors and will step on each
@@ -249,6 +138,7 @@ integTestRunner {
     systemProperty 'es.set.netty.runtime.available.processors', 'false'
 }
 
 // TODO: don't publish test artifacts just to run messy tests, fix the tests!
 // https://github.com/elastic/x-plugins/issues/724
 configurations {
@@ -264,38 +154,12 @@ artifacts {
     testArtifacts testJar
 }
 
-// pulled in as external dependency to work on java 9
-if (JavaVersion.current() <= JavaVersion.VERSION_1_8) {
-    thirdPartyAudit.excludes += [
-        'com.sun.activation.registries.MailcapParseException',
-        'javax.activation.ActivationDataFlavor',
-        'javax.activation.CommandInfo',
-        'javax.activation.CommandMap',
-        'javax.activation.CommandObject',
-        'javax.activation.DataContentHandler',
-        'javax.activation.DataContentHandlerFactory',
-        'javax.activation.DataHandler$1',
-        'javax.activation.DataHandler',
-        'javax.activation.DataHandlerDataSource',
-        'javax.activation.DataSource',
-        'javax.activation.DataSourceDataContentHandler',
-        'javax.activation.FileDataSource',
-        'javax.activation.FileTypeMap',
-        'javax.activation.MimeType',
-        'javax.activation.MimeTypeParameterList',
-        'javax.activation.MimeTypeParseException',
-        'javax.activation.ObjectDataContentHandler',
-        'javax.activation.SecuritySupport$1',
-        'javax.activation.SecuritySupport$2',
-        'javax.activation.SecuritySupport$3',
-        'javax.activation.SecuritySupport$4',
-        'javax.activation.SecuritySupport$5',
-        'javax.activation.SecuritySupport',
-        'javax.activation.URLDataSource',
-        'javax.activation.UnsupportedDataTypeException'
-    ]
-}
-
-run {
-    distribution = 'zip'
-}
+thirdPartyAudit.excludes = [
+    // commons-logging optional dependencies
+    'org.apache.avalon.framework.logger.Logger',
+    'org.apache.log.Hierarchy',
+    'org.apache.log.Logger',
+    // commons-logging provided dependencies
+    'javax.servlet.ServletContextEvent',
+    'javax.servlet.ServletContextListener'
+]


@@ -8,24 +8,16 @@ package org.elasticsearch.xpack.extensions;
 
 import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.function.BiConsumer;
-
-import org.elasticsearch.action.ActionListener;
-import org.elasticsearch.common.settings.Setting;
-import org.elasticsearch.common.settings.Settings;
-import org.elasticsearch.watcher.ResourceWatcherService;
-import org.elasticsearch.xpack.security.authc.AuthenticationFailureHandler;
-import org.elasticsearch.xpack.security.authc.Realm;
-import org.elasticsearch.xpack.security.authc.RealmConfig;
-import org.elasticsearch.xpack.security.authz.RoleDescriptor;
+
+import org.elasticsearch.plugins.Plugin;
+import org.elasticsearch.xpack.security.SecurityExtension;
 
 /**
  * An extension point allowing to plug in custom functionality in x-pack authentication module.
+ * @deprecated use {@link SecurityExtension} via SPI instead
  */
-public abstract class XPackExtension {
+@Deprecated
+public abstract class XPackExtension implements SecurityExtension {
     /**
      * The name of the plugin.
      */
@@ -43,72 +35,21 @@ public abstract class XPackExtension {
         return Collections.emptyList();
     }
/**
* Returns authentication realm implementations added by this extension.
*
* The key of the returned {@link Map} is the type name of the realm, and the value
* is a {@link org.elasticsearch.xpack.security.authc.Realm.Factory} which will construct
* that realm for use in authentication when that realm type is configured.
*
* @param resourceWatcherService Use to watch configuration files for changes
*/
public Map<String, Realm.Factory> getRealms(ResourceWatcherService resourceWatcherService) {
return Collections.emptyMap();
}
/**
* Returns the set of {@link Setting settings} that may be configured for the each type of realm.
*
* Each <em>setting key</em> must be unqualified and is in the same format as will be provided via {@link RealmConfig#settings()}.
* If a given realm-type is not present in the returned map, then it will be treated as if it supported <em>all</em> possible settings.
*
* The life-cycle of an extension dictates that this method will be called before {@link #getRealms(ResourceWatcherService)}
*/
public Map<String, Set<Setting<?>>> getRealmSettings() { return Collections.emptyMap(); }
/**
* Returns a handler for authentication failures, or null to use the default handler.
*
* Only one installed extension may have an authentication failure handler. If more than
* one extension returns a non-null handler, an error is raised.
*/
public AuthenticationFailureHandler getAuthenticationFailureHandler() {
return null;
}
     /**
      * Returns a list of settings that should be filtered from API calls. In most cases,
      * these settings are sensitive such as passwords.
      *
      * The value should be the full name of the setting or a wildcard that matches the
      * desired setting.
+     * @deprecated use {@link Plugin#getSettingsFilter()} via SPI extension instead
      */
+    @Deprecated
     public List<String> getSettingsFilter() {
         return Collections.emptyList();
     }
-    /**
-     * Returns an ordered list of role providers that are used to resolve role names
-     * to {@link RoleDescriptor} objects. Each provider is invoked in order to
-     * resolve any role names not resolved by the reserved or native roles stores.
+    @Override
+    public String toString() {
+        return name();
+    }
*
* Each role provider is represented as a {@link BiConsumer} which takes a set
* of roles to resolve as the first parameter to consume and an {@link ActionListener}
* as the second parameter to consume. The implementation of the role provider
* should be asynchronous if the computation is lengthy or any disk and/or network
* I/O is involved. The implementation is responsible for resolving whatever roles
* it can into a set of {@link RoleDescriptor} instances. If successful, the
* implementation must invoke {@link ActionListener#onResponse(Object)} to pass along
* the resolved set of role descriptors. If a failure was encountered, the
* implementation must invoke {@link ActionListener#onFailure(Exception)}.
*
* By default, an empty list is returned.
*
* @param settings The configured settings for the node
* @param resourceWatcherService Use to watch configuration files for changes
*/
public List<BiConsumer<Set<String>, ActionListener<Set<RoleDescriptor>>>>
getRolesProviders(Settings settings, ResourceWatcherService resourceWatcherService) {
return Collections.emptyList();
-    }
 }
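Since `XPackExtension` is now deprecated in favor of `SecurityExtension` loaded via SPI (the new `SecurityExtension` file in this commit iterates providers with `SPIClassIterator`), an extension jar would declare its implementation through the standard Java service-provider file. A sketch; the implementation class name is hypothetical:

```text
# src/main/resources/META-INF/services/org.elasticsearch.xpack.security.SecurityExtension
# One fully qualified implementation class per line.
org.example.security.MyRealmExtension
```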


@@ -11,6 +11,7 @@ import org.apache.logging.log4j.util.Supplier;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;
 import org.elasticsearch.common.Nullable;
+import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.tasks.CancellableTask;
 import org.elasticsearch.tasks.Task;
 import org.elasticsearch.tasks.TaskCancelledException;
@@ -19,6 +20,7 @@ import org.elasticsearch.tasks.TaskManager;
 
 import java.util.Map;
 import java.util.concurrent.atomic.AtomicReference;
+import java.util.function.Predicate;
 
 /**
  * Represents an executor node operation that corresponds to a persistent task
@@ -105,6 +107,15 @@ public class AllocatedPersistentTask extends CancellableTask {
         COMPLETED    // the task is done running and trying to notify caller
     }
 
+    /**
+     * Waits for this persistent task to have the desired state.
+     */
+    public void waitForPersistentTaskStatus(Predicate<PersistentTasksCustomMetaData.PersistentTask<?>> predicate,
+                                            @Nullable TimeValue timeout,
+                                            PersistentTasksService.WaitForPersistentTaskStatusListener<?> listener) {
+        persistentTasksService.waitForPersistentTaskStatus(persistentTaskId, predicate, timeout, listener);
+    }
+
     public void markAsCompleted() {
         completeAndNotifyIfNeeded(null);
     }
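The new `waitForPersistentTaskStatus` hook blocks callers until a predicate over the task's state matches, with an optional timeout. A self-contained sketch of that wait-on-predicate pattern in plain Java (not the Elasticsearch API; the class and method names here are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;

public final class PredicateWait {

    /**
     * Polls {@code state} until {@code predicate} matches or {@code timeoutMillis}
     * elapses. Returns true if the desired state was reached in time.
     */
    public static <T> boolean waitFor(AtomicReference<T> state,
                                      Predicate<T> predicate,
                                      long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!predicate.test(state.get())) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out before reaching the desired state
            }
            Thread.sleep(10); // back off briefly before re-checking
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> taskState = new AtomicReference<>("STARTED");
        // A background thread flips the state, as a task executor would.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            taskState.set("COMPLETED");
        });
        worker.start();
        System.out.println(waitFor(taskState, "COMPLETED"::equals, 1000));
        worker.join();
    }
}
```

The real method notifies a listener asynchronously rather than returning a boolean; the polling loop above is only meant to show the predicate-plus-timeout contract.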

View File

@@ -0,0 +1,106 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.security;
import org.apache.lucene.util.SPIClassIterator;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.security.authc.AuthenticationFailureHandler;
import org.elasticsearch.xpack.security.authc.Realm;
import org.elasticsearch.xpack.security.authc.RealmConfig;
import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.ServiceConfigurationError;
import java.util.Set;
import java.util.function.BiConsumer;
/**
* An SPI extension point allowing to plug in custom functionality in x-pack authentication module.
*/
public interface SecurityExtension {
/**
* Returns authentication realm implementations added by this extension.
*
* The key of the returned {@link Map} is the type name of the realm, and the value
* is a {@link Realm.Factory} which will construct
* that realm for use in authentication when that realm type is configured.
*
* @param resourceWatcherService Use to watch configuration files for changes
*/
default Map<String, Realm.Factory> getRealms(ResourceWatcherService resourceWatcherService) {
return Collections.emptyMap();
}
/**
* Returns the set of {@link Setting settings} that may be configured for the each type of realm.
*
* Each <em>setting key</em> must be unqualified and is in the same format as will be provided via {@link RealmConfig#settings()}.
* If a given realm-type is not present in the returned map, then it will be treated as if it supported <em>all</em> possible settings.
*
* The life-cycle of an extension dictates that this method will be called before {@link #getRealms(ResourceWatcherService)}
*/
default Map<String, Set<Setting<?>>> getRealmSettings() { return Collections.emptyMap(); }
/**
* Returns a handler for authentication failures, or null to use the default handler.
*
* Only one installed extension may have an authentication failure handler. If more than
* one extension returns a non-null handler, an error is raised.
*/
default AuthenticationFailureHandler getAuthenticationFailureHandler() {
return null;
}
/**
* Returns an ordered list of role providers that are used to resolve role names
* to {@link RoleDescriptor} objects. Each provider is invoked in order to
* resolve any role names not resolved by the reserved or native roles stores.
*
* Each role provider is represented as a {@link BiConsumer} which takes a set
* of roles to resolve as the first parameter to consume and an {@link ActionListener}
* as the second parameter to consume. The implementation of the role provider
* should be asynchronous if the computation is lengthy or any disk and/or network
* I/O is involved. The implementation is responsible for resolving whatever roles
* it can into a set of {@link RoleDescriptor} instances. If successful, the
* implementation must invoke {@link ActionListener#onResponse(Object)} to pass along
* the resolved set of role descriptors. If a failure was encountered, the
* implementation must invoke {@link ActionListener#onFailure(Exception)}.
*
* By default, an empty list is returned.
*
* @param settings The configured settings for the node
* @param resourceWatcherService Use to watch configuration files for changes
*/
default List<BiConsumer<Set<String>, ActionListener<Set<RoleDescriptor>>>>
getRolesProviders(Settings settings, ResourceWatcherService resourceWatcherService) {
return Collections.emptyList();
}
/**
* Loads the XPackSecurityExtensions from the given class loader
*/
static List<SecurityExtension> loadExtensions(ClassLoader loader) {
SPIClassIterator<SecurityExtension> iterator = SPIClassIterator.get(SecurityExtension.class, loader);
List<SecurityExtension> extensions = new ArrayList<>();
while (iterator.hasNext()) {
final Class<? extends SecurityExtension> c = iterator.next();
try {
extensions.add(c.getConstructor().newInstance());
} catch (Exception e) {
throw new ServiceConfigurationError("failed to load security extension [" + c.getName() + "]", e);
}
}
return extensions;
}
}
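The reflective loading pattern in `loadExtensions` above can be sketched in isolation: classes implementing a hypothetical `Extension` interface are instantiated through their no-arg constructors, with any failure wrapped in `ServiceConfigurationError`, just as the diff does. The names here (`Extension`, `DummyExtension`, `loadExtensions` over an explicit class list rather than `SPIClassIterator`) are illustrative stand-ins, not part of the commit:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceConfigurationError;

public class ExtensionLoaderSketch {
    // Hypothetical extension point, standing in for SecurityExtension
    public interface Extension {
        String name();
    }

    public static class DummyExtension implements Extension {
        public DummyExtension() {}

        @Override
        public String name() { return "dummy"; }
    }

    // Mirrors SecurityExtension.loadExtensions: instantiate each discovered class
    // via its no-arg constructor, wrapping any failure in ServiceConfigurationError.
    static List<Extension> loadExtensions(List<Class<? extends Extension>> classes) {
        List<Extension> extensions = new ArrayList<>();
        for (Class<? extends Extension> c : classes) {
            try {
                extensions.add(c.getConstructor().newInstance());
            } catch (Exception e) {
                throw new ServiceConfigurationError("failed to load extension [" + c.getName() + "]", e);
            }
        }
        return extensions;
    }

    public static void main(String[] args) {
        List<Extension> loaded = loadExtensions(List.of(DummyExtension.class));
        System.out.println(loaded.get(0).name());
    }
}
```

Discovery of the class list itself is what `SPIClassIterator` (or `java.util.ServiceLoader` in plain Java) provides; only the instantiation-and-wrap step is shown here.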
View File
@@ -360,8 +360,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
         PersistentTasksService persistentTasksService = new PersistentTasksService(settings, clusterService, threadPool, client);
-        components.addAll(createComponents(client, clusterService, threadPool, xContentRegistry, environment, resourceWatcherService,
-                persistentTasksService));
+        components.addAll(createComponents(client, clusterService, threadPool, xContentRegistry, environment));

         // This was lifted from the XPackPlugins createComponents when it got split
         // This is not extensible and anyone copying this code needs to instead make this work
@@ -379,10 +378,11 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
         return components;
     }

-    public Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool,
-                                               NamedXContentRegistry xContentRegistry, Environment environment,
-                                               ResourceWatcherService resourceWatcherService,
-                                               PersistentTasksService persistentTasksService) {
+    // TODO: once initialization of the PersistentTasksClusterService, PersistentTasksService
+    // and PersistentTasksExecutorRegistry has been moved somewhere else the entire contents of
+    // this method can replace the entire contents of the overridden createComponents() method
+    private Collection<Object> createComponents(Client client, ClusterService clusterService, ThreadPool threadPool,
+                                                NamedXContentRegistry xContentRegistry, Environment environment) {
         if (enabled == false || transportClientMode || tribeNode || tribeNodeClient) {
             return emptyList();
         }
@@ -426,12 +426,13 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
         this.autodetectProcessManager.set(autodetectProcessManager);
         DatafeedJobBuilder datafeedJobBuilder = new DatafeedJobBuilder(client, jobProvider, auditor, System::currentTimeMillis);
         DatafeedManager datafeedManager = new DatafeedManager(threadPool, client, clusterService, datafeedJobBuilder,
-                System::currentTimeMillis, auditor, persistentTasksService);
+                System::currentTimeMillis, auditor);
         this.datafeedManager.set(datafeedManager);
         MlLifeCycleService mlLifeCycleService = new MlLifeCycleService(environment, clusterService, datafeedManager,
                 autodetectProcessManager);
-        InvalidLicenseEnforcer invalidLicenseEnforcer =
-                new InvalidLicenseEnforcer(settings, getLicenseState(), threadPool, datafeedManager, autodetectProcessManager);
+        // This object's constructor attaches to the license state, so there's no need to retain another reference to it
+        new InvalidLicenseEnforcer(settings, getLicenseState(), threadPool, datafeedManager, autodetectProcessManager);

         return Arrays.asList(
                 mlLifeCycleService,
@@ -442,7 +443,6 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
                 jobDataCountsPersister,
                 datafeedManager,
                 auditor,
-                invalidLicenseEnforcer,
                 new MlAssignmentNotifier(settings, auditor, clusterService)
         );
     }
View File
@@ -22,6 +22,7 @@ import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.index.engine.VersionConflictEngineException;
 import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.transport.TransportService;
 import org.elasticsearch.xpack.ml.MLMetadataField;
@@ -83,8 +84,13 @@ public class TransportPutCalendarAction extends HandledTransportAction<PutCalend
                 @Override
                 public void onFailure(Exception e) {
-                    listener.onFailure(
-                            ExceptionsHelper.serverError("Error putting calendar with id [" + calendar.getId() + "]", e));
+                    if (e instanceof VersionConflictEngineException) {
+                        listener.onFailure(ExceptionsHelper.badRequestException("Cannot create calendar with id [" +
+                                calendar.getId() + "] as it already exists"));
+                    } else {
+                        listener.onFailure(
+                                ExceptionsHelper.serverError("Error putting calendar with id [" + calendar.getId() + "]", e));
+                    }
                 }
             });
     }
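The `onFailure` change above follows a common pattern: translate a low-level storage conflict into a client-facing 400 (the caller tried to create something that already exists) instead of a generic 500. A minimal sketch of that branching, with hypothetical exception types standing in for `VersionConflictEngineException` and the `ExceptionsHelper` results:

```java
public class ConflictMappingSketch {
    // Stand-ins for the real Elasticsearch exception types (hypothetical names)
    static class VersionConflictException extends RuntimeException {}

    static class BadRequestException extends RuntimeException {
        BadRequestException(String msg) { super(msg); }
    }

    static class ServerErrorException extends RuntimeException {
        ServerErrorException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Mirrors the rewritten onFailure: a version conflict means the document
    // already exists, which is the caller's error, not the server's.
    static RuntimeException translate(String calendarId, Exception e) {
        if (e instanceof VersionConflictException) {
            return new BadRequestException("Cannot create calendar with id [" + calendarId + "] as it already exists");
        }
        return new ServerErrorException("Error putting calendar with id [" + calendarId + "]", e);
    }

    public static void main(String[] args) {
        System.out.println(translate("mayan", new VersionConflictException()).getClass().getSimpleName());
    }
}
```

This is also why the YAML test later in the commit changes from `catch: /version_conflict_engine_exception/` to `catch: bad_request`: the API contract now surfaces the conflict as a 400.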
View File
@@ -32,7 +32,6 @@ import org.elasticsearch.xpack.ml.job.messages.Messages;
 import org.elasticsearch.xpack.ml.notifications.Auditor;
 import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData;
 import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData.PersistentTask;
-import org.elasticsearch.xpack.persistent.PersistentTasksService;

 import java.util.ArrayList;
 import java.util.Iterator;
@@ -55,7 +54,6 @@ public class DatafeedManager extends AbstractComponent {

     private final Client client;
     private final ClusterService clusterService;
-    private final PersistentTasksService persistentTasksService;
     private final ThreadPool threadPool;
     private final Supplier<Long> currentTimeSupplier;
     private final Auditor auditor;
@@ -66,14 +64,13 @@ public class DatafeedManager extends AbstractComponent {
     private volatile boolean isolated;

     public DatafeedManager(ThreadPool threadPool, Client client, ClusterService clusterService, DatafeedJobBuilder datafeedJobBuilder,
-                           Supplier<Long> currentTimeSupplier, Auditor auditor, PersistentTasksService persistentTasksService) {
+                           Supplier<Long> currentTimeSupplier, Auditor auditor) {
         super(Settings.EMPTY);
         this.client = Objects.requireNonNull(client);
         this.clusterService = Objects.requireNonNull(clusterService);
         this.threadPool = threadPool;
         this.currentTimeSupplier = Objects.requireNonNull(currentTimeSupplier);
         this.auditor = Objects.requireNonNull(auditor);
-        this.persistentTasksService = Objects.requireNonNull(persistentTasksService);
         this.datafeedJobBuilder = Objects.requireNonNull(datafeedJobBuilder);
         clusterService.addListener(taskRunner);
     }
@@ -91,8 +88,7 @@ public class DatafeedManager extends AbstractComponent {
         ActionListener<DatafeedJob> datafeedJobHandler = ActionListener.wrap(
                 datafeedJob -> {
-                    Holder holder = new Holder(task.getPersistentTaskId(), task.getAllocationId(), datafeed, datafeedJob,
-                            task.isLookbackOnly(), new ProblemTracker(auditor, job.getId()), taskHandler);
+                    Holder holder = new Holder(task, datafeed, datafeedJob, new ProblemTracker(auditor, job.getId()), taskHandler);
                     runningDatafeedsOnThisNode.put(task.getAllocationId(), holder);
                     task.updatePersistentStatus(DatafeedState.STARTED, new ActionListener<PersistentTask<?>>() {
                         @Override
@@ -279,7 +275,7 @@ public class DatafeedManager extends AbstractComponent {

     public class Holder {

-        private final String taskId;
+        private final TransportStartDatafeedAction.DatafeedTask task;
         private final long allocationId;
         private final DatafeedConfig datafeed;
         // To ensure that we wait until loopback / realtime search has completed before we stop the datafeed
@@ -291,13 +287,13 @@ public class DatafeedManager extends AbstractComponent {
         volatile Future<?> future;
         private volatile boolean isRelocating;

-        Holder(String taskId, long allocationId, DatafeedConfig datafeed, DatafeedJob datafeedJob, boolean autoCloseJob,
+        Holder(TransportStartDatafeedAction.DatafeedTask task, DatafeedConfig datafeed, DatafeedJob datafeedJob,
                ProblemTracker problemTracker, Consumer<Exception> handler) {
-            this.taskId = taskId;
-            this.allocationId = allocationId;
+            this.task = task;
+            this.allocationId = task.getAllocationId();
             this.datafeed = datafeed;
             this.datafeedJob = datafeedJob;
-            this.autoCloseJob = autoCloseJob;
+            this.autoCloseJob = task.isLookbackOnly();
             this.problemTracker = problemTracker;
             this.handler = handler;
         }
@@ -397,7 +393,7 @@ public class DatafeedManager extends AbstractComponent {
                 return;
             }
-            persistentTasksService.waitForPersistentTaskStatus(taskId, Objects::isNull, TimeValue.timeValueSeconds(20),
+            task.waitForPersistentTaskStatus(Objects::isNull, TimeValue.timeValueSeconds(20),
                     new WaitForPersistentTaskStatusListener<StartDatafeedAction.DatafeedParams>() {
                         @Override
                         public void onResponse(PersistentTask<StartDatafeedAction.DatafeedParams> persistentTask) {
View File
@@ -99,8 +99,8 @@ class AggregationDataExtractor implements DataExtractor {
     private Aggregations search() throws IOException {
         LOGGER.debug("[{}] Executing aggregated search", context.jobId);
         SearchResponse searchResponse = executeSearchRequest(buildSearchRequest());
+        LOGGER.debug("[{}] Search response was obtained", context.jobId);
         ExtractorUtils.checkSearchWasSuccessful(context.jobId, searchResponse);
         return validateAggs(searchResponse.getAggregations());
     }
View File
@@ -117,6 +117,7 @@ public class ChunkedDataExtractor implements DataExtractor {
                 .addAggregation(AggregationBuilders.max(LATEST_TIME).field(context.timeField));

         SearchResponse response = executeSearchRequest(searchRequestBuilder);
+        LOGGER.debug("[{}] Data summary response was obtained", context.jobId);
         ExtractorUtils.checkSearchWasSuccessful(context.jobId, response);
View File
@@ -95,6 +95,7 @@ class ScrollDataExtractor implements DataExtractor {
     protected InputStream initScroll(long startTimestamp) throws IOException {
         LOGGER.debug("[{}] Initializing scroll", context.jobId);
         SearchResponse searchResponse = executeSearchRequest(buildSearchRequest(startTimestamp));
+        LOGGER.debug("[{}] Search response was obtained", context.jobId);
         return processSearchResponse(searchResponse);
     }
@@ -195,6 +196,7 @@ class ScrollDataExtractor implements DataExtractor {
                 throw searchExecutionException;
             }
         }
+        LOGGER.debug("[{}] Search response was obtained", context.jobId);
         return processSearchResponse(searchResponse);
     }
View File
@@ -10,7 +10,7 @@ import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.rest.BaseRestHandler;
 import org.elasticsearch.rest.RestController;
 import org.elasticsearch.rest.RestRequest;
-import org.elasticsearch.rest.action.AcknowledgedRestListener;
+import org.elasticsearch.rest.action.RestToXContentListener;
 import org.elasticsearch.xpack.ml.MachineLearning;
 import org.elasticsearch.xpack.ml.action.UpdateCalendarJobAction;
 import org.elasticsearch.xpack.ml.calendars.Calendar;
@@ -39,6 +39,6 @@ public class RestDeleteCalendarJobAction extends BaseRestHandler {
         String jobId = restRequest.param(Job.ID.getPreferredName());
         UpdateCalendarJobAction.Request request =
                 new UpdateCalendarJobAction.Request(calendarId, Collections.emptySet(), Collections.singleton(jobId));
-        return channel -> client.execute(UpdateCalendarJobAction.INSTANCE, request, new AcknowledgedRestListener<>(channel));
+        return channel -> client.execute(UpdateCalendarJobAction.INSTANCE, request, new RestToXContentListener<>(channel));
     }
 }
View File
@@ -41,7 +41,6 @@ import org.elasticsearch.xpack.ml.notifications.Auditor;
 import org.elasticsearch.xpack.ml.notifications.AuditorField;
 import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData;
 import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData.PersistentTask;
-import org.elasticsearch.xpack.persistent.PersistentTasksService;
 import org.junit.Before;
 import org.mockito.ArgumentCaptor;
@@ -122,8 +121,6 @@ public class DatafeedManagerTests extends ESTestCase {
         when(threadPool.executor(MachineLearning.DATAFEED_THREAD_POOL_NAME)).thenReturn(executorService);
         when(threadPool.executor(ThreadPool.Names.GENERIC)).thenReturn(executorService);

-        PersistentTasksService persistentTasksService = mock(PersistentTasksService.class);
-
         datafeedJob = mock(DatafeedJob.class);
         when(datafeedJob.isRunning()).thenReturn(true);
         when(datafeedJob.stop()).thenReturn(true);
@@ -135,8 +132,7 @@ public class DatafeedManagerTests extends ESTestCase {
             return null;
         }).when(datafeedJobBuilder).build(any(), any(), any());
-        datafeedManager = new DatafeedManager(threadPool, client, clusterService, datafeedJobBuilder, () -> currentTime, auditor,
-                persistentTasksService);
+        datafeedManager = new DatafeedManager(threadPool, client, clusterService, datafeedJobBuilder, () -> currentTime, auditor);

         verify(clusterService).addListener(capturedClusterStateListener.capture());
     }
View File
@@ -51,7 +51,7 @@ public abstract class LocalExporterIntegTestCase extends MonitoringIntegTestCase
                 .put("xpack.monitoring.exporters." + exporterName + ".type", LocalExporter.TYPE)
                 .put("xpack.monitoring.exporters." + exporterName + ".enabled", false)
                 .put("xpack.monitoring.exporters." + exporterName + "." + CLUSTER_ALERTS_MANAGEMENT_SETTING, false)
-                // .put(XPackSettings.WATCHER_ENABLED.getKey(), false)
+                .put(XPackSettings.MACHINE_LEARNING_ENABLED.getKey(), false)
                 .put(NetworkModule.HTTP_ENABLED.getKey(), false)
                 .build();
     }
View File
@@ -264,6 +264,8 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
     private final SetOnce<SecurityActionFilter> securityActionFilter = new SetOnce<>();
     private final List<BootstrapCheck> bootstrapChecks;
     private final XPackExtensionsService extensionsService;
+    private final List<SecurityExtension> securityExtensions = new ArrayList<>();

     public Security(Settings settings, final Path configPath) {
         this.settings = settings;
@@ -350,15 +352,16 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
         try {
             return createComponents(client, threadPool, clusterService, resourceWatcherService,
-                    extensionsService.getExtensions());
+                    extensionsService.getExtensions().stream().collect(Collectors.toList()));
         } catch (final Exception e) {
             throw new IllegalStateException("security initialization failed", e);
         }
     }

-    public Collection<Object> createComponents(Client client, ThreadPool threadPool, ClusterService clusterService,
-                                               ResourceWatcherService resourceWatcherService,
-                                               List<XPackExtension> extensions) throws Exception {
+    // pkg private for testing - tests want to pass in their set of extensions hence we are not using the extension service directly
+    Collection<Object> createComponents(Client client, ThreadPool threadPool, ClusterService clusterService,
+                                        ResourceWatcherService resourceWatcherService,
+                                        List<SecurityExtension> extensions) throws Exception {
         if (enabled == false) {
             return Collections.emptyList();
         }
@@ -411,7 +414,10 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
                 anonymousUser, securityLifecycleService, threadPool.getThreadContext());
         Map<String, Realm.Factory> realmFactories = new HashMap<>(InternalRealms.getFactories(threadPool, resourceWatcherService,
                 getSslService(), nativeUsersStore, nativeRoleMappingStore, securityLifecycleService));
-        for (XPackExtension extension : extensions) {
+        for (SecurityExtension extension : securityExtensions) {
+            extensions.add(extension);
+        }
+        for (SecurityExtension extension : extensions) {
             Map<String, Realm.Factory> newRealms = extension.getRealms(resourceWatcherService);
             for (Map.Entry<String, Realm.Factory> entry : newRealms.entrySet()) {
                 if (realmFactories.put(entry.getKey(), entry.getValue()) != null) {
@@ -427,14 +433,14 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
         AuthenticationFailureHandler failureHandler = null;
         String extensionName = null;
-        for (XPackExtension extension : extensions) {
+        for (SecurityExtension extension : extensions) {
             AuthenticationFailureHandler extensionFailureHandler = extension.getAuthenticationFailureHandler();
             if (extensionFailureHandler != null && failureHandler != null) {
-                throw new IllegalStateException("Extensions [" + extensionName + "] and [" + extension.name() + "] " +
+                throw new IllegalStateException("Extensions [" + extensionName + "] and [" + extension.toString() + "] " +
                         "both set an authentication failure handler");
             }
             failureHandler = extensionFailureHandler;
-            extensionName = extension.name();
+            extensionName = extension.toString();
         }
         if (failureHandler == null) {
             logger.debug("Using default authentication failure handler");
@@ -451,7 +457,7 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
         final NativeRolesStore nativeRolesStore = new NativeRolesStore(settings, client, getLicenseState(), securityLifecycleService);
         final ReservedRolesStore reservedRolesStore = new ReservedRolesStore();
         List<BiConsumer<Set<String>, ActionListener<Set<RoleDescriptor>>>> rolesProviders = new ArrayList<>();
-        for (XPackExtension extension : extensions) {
+        for (SecurityExtension extension : extensions) {
             rolesProviders.addAll(extension.getRolesProviders(settings, resourceWatcherService));
         }
         final CompositeRolesStore allRolesStore = new CompositeRolesStore(settings, fileRolesStore, nativeRolesStore,
@@ -1043,4 +1049,9 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
             }
         }
     }
+
+    @Override
+    public void reloadSPI(ClassLoader loader) {
+        securityExtensions.addAll(SecurityExtension.loadExtensions(loader));
+    }
 }
View File
@@ -31,7 +31,6 @@ import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.threadpool.ThreadPool;
 import org.elasticsearch.watcher.ResourceWatcherService;
 import org.elasticsearch.xpack.XPackSettings;
-import org.elasticsearch.xpack.extensions.XPackExtension;
 import org.elasticsearch.xpack.security.audit.AuditTrailService;
 import org.elasticsearch.xpack.security.audit.index.IndexAuditTrail;
 import org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail;
@@ -75,26 +74,18 @@ public class SecurityTests extends ESTestCase {
     private ThreadContext threadContext = null;
     private TestUtils.UpdatableLicenseState licenseState;

-    public static class DummyExtension extends XPackExtension {
+    public static class DummyExtension implements SecurityExtension {
         private String realmType;

         DummyExtension(String realmType) {
             this.realmType = realmType;
         }

-        @Override
-        public String name() {
-            return "dummy";
-        }
-
-        @Override
-        public String description() {
-            return "dummy";
-        }
-
         @Override
         public Map<String, Realm.Factory> getRealms(ResourceWatcherService resourceWatcherService) {
             return Collections.singletonMap(realmType, config -> null);
         }
     }

-    private Collection<Object> createComponents(Settings testSettings, XPackExtension... extensions) throws Exception {
+    private Collection<Object> createComponents(Settings testSettings, SecurityExtension... extensions) throws Exception {
         if (security != null) {
             throw new IllegalStateException("Security object already exists (" + security + ")");
         }
View File
@@ -152,7 +152,7 @@
         calendar_id: "mayan"

   - do:
-      catch: /version_conflict_engine_exception/
+      catch: bad_request
       xpack.ml.put_calendar:
         calendar_id: "mayan"
@@ -237,6 +237,8 @@
       xpack.ml.delete_calendar_job:
         calendar_id: "wildlife"
         job_id: "tiger"
+  - match: { calendar_id: "wildlife" }
+  - length: { job_ids: 0 }

   - do:
       xpack.ml.get_calendars:
View File
@@ -16,6 +16,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
 import org.elasticsearch.cluster.routing.Preference;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.concurrent.AbstractRunnable;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.common.xcontent.XContentType;
@@ -109,8 +110,14 @@ public class TransportExecuteWatchAction extends WatcherTransportAction<ExecuteW
     private void executeWatch(ExecuteWatchRequest request, ActionListener<ExecuteWatchResponse> listener,
                               Watch watch, boolean knownWatch) {
-        threadPool.executor(XPackField.WATCHER).submit(() -> {
-            try {
+        threadPool.executor(XPackField.WATCHER).submit(new AbstractRunnable() {
+            @Override
+            public void onFailure(Exception e) {
+                listener.onFailure(e);
+            }
+
+            @Override
+            protected void doRun() throws Exception {
                 // ensure that the headers from the incoming request are used instead those of the stored watch
                 // otherwise the watch would run as the user who stored the watch, but it needs to be run as the user who
                 // executes this request
@@ -141,8 +148,6 @@ public class TransportExecuteWatchAction extends WatcherTransportAction<ExecuteW
                 record.toXContent(builder, WatcherParams.builder().hideSecrets(true).debug(request.isDebug()).build());
                 listener.onResponse(new ExecuteWatchResponse(record.id().value(), builder.bytes(), XContentType.JSON));
-            } catch (IOException e) {
-                listener.onFailure(e);
             }
         });
     }

View File

@ -0,0 +1,50 @@
apply plugin: 'elasticsearch.esplugin'
esplugin {
name 'spi-extension'
description 'An example spi extension pluing for xpack security'
classname 'org.elasticsearch.example.SpiExtensionPlugin'
extendedPlugins = ['x-pack-security']
}
dependencies {
provided project(path: ':x-pack-elasticsearch:plugin:core', configuration: 'runtime')
testCompile project(path: ':x-pack-elasticsearch:transport-client', configuration: 'runtime')
}
integTestRunner {
systemProperty 'tests.security.manager', 'false'
}
integTestCluster {
dependsOn buildZip
plugin ':x-pack-elasticsearch:plugin'
setting 'xpack.security.authc.realms.custom.order', '0'
setting 'xpack.security.authc.realms.custom.type', 'custom'
setting 'xpack.security.authc.realms.custom.filtered_setting', 'should be filtered'
setting 'xpack.security.authc.realms.esusers.order', '1'
setting 'xpack.security.authc.realms.esusers.type', 'file'
setting 'xpack.security.authc.realms.native.type', 'native'
setting 'xpack.security.authc.realms.native.order', '2'
setting 'xpack.ml.enabled', 'false'
// This is important, so that all the modules are available too.
// There are index templates that use token filters that are in analysis-module and
// processors are being used that are in ingest-common module.
distribution = 'zip'
setupCommand 'setupDummyUser',
'bin/x-pack/users', 'useradd', 'test_user', '-p', 'x-pack-test-password', '-r', 'superuser'
waitCondition = { node, ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}&wait_for_status=yellow",
dest: tmpFile.toString(),
username: 'test_user',
password: 'x-pack-test-password',
ignoreerrors: true,
retries: 10)
return tmpFile.exists()
}
}
check.dependsOn integTest

View File

@ -0,0 +1,67 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.example.realm.CustomAuthenticationFailureHandler;
import org.elasticsearch.example.realm.CustomRealm;
import org.elasticsearch.example.role.CustomInMemoryRolesProvider;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xpack.security.SecurityExtension;
import org.elasticsearch.xpack.security.authc.AuthenticationFailureHandler;
import org.elasticsearch.xpack.security.authc.Realm;
import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import static org.elasticsearch.example.role.CustomInMemoryRolesProvider.ROLE_A;
import static org.elasticsearch.example.role.CustomInMemoryRolesProvider.ROLE_B;
/**
* An example x-pack extension for testing custom realms and custom role providers.
*/
public class ExampleSecurityExtension implements SecurityExtension {
static {
// check that the extension's policy works.
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
System.getSecurityManager().checkPrintJobAccess();
return null;
});
}
@Override
public Map<String, Realm.Factory> getRealms(ResourceWatcherService resourceWatcherService) {
return Collections.singletonMap(CustomRealm.TYPE, CustomRealm::new);
}
@Override
public AuthenticationFailureHandler getAuthenticationFailureHandler() {
return new CustomAuthenticationFailureHandler();
}
@Override
public List<BiConsumer<Set<String>, ActionListener<Set<RoleDescriptor>>>>
getRolesProviders(Settings settings, ResourceWatcherService resourceWatcherService) {
CustomInMemoryRolesProvider rp1 = new CustomInMemoryRolesProvider(settings, Collections.singletonMap(ROLE_A, "read"));
Map<String, String> roles = new HashMap<>();
roles.put(ROLE_A, "all");
roles.put(ROLE_B, "all");
CustomInMemoryRolesProvider rp2 = new CustomInMemoryRolesProvider(settings, roles);
return Arrays.asList(rp1, rp2);
}
}

View File

@ -0,0 +1,31 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example;
import org.elasticsearch.example.realm.CustomRealm;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.Plugin;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
/**
* An example x-pack extension for testing custom realms and custom role providers.
*/
public class SpiExtensionPlugin extends Plugin implements ActionPlugin {
@Override
public Collection<String> getRestHeaders() {
return Arrays.asList(CustomRealm.USER_HEADER, CustomRealm.PW_HEADER);
}
@Override
public List<String> getSettingsFilter() {
return Collections.singletonList("xpack.security.authc.realms.*.filtered_setting");
}
}

View File

@ -0,0 +1,50 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.realm;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
import org.elasticsearch.xpack.security.authc.DefaultAuthenticationFailureHandler;
import org.elasticsearch.transport.TransportMessage;
public class CustomAuthenticationFailureHandler extends DefaultAuthenticationFailureHandler {
@Override
public ElasticsearchSecurityException failedAuthentication(RestRequest request, AuthenticationToken token,
ThreadContext context) {
ElasticsearchSecurityException e = super.failedAuthentication(request, token, context);
// set a custom header
e.addHeader("WWW-Authenticate", "custom-challenge");
return e;
}
@Override
public ElasticsearchSecurityException failedAuthentication(TransportMessage message, AuthenticationToken token, String action,
ThreadContext context) {
ElasticsearchSecurityException e = super.failedAuthentication(message, token, action, context);
// set a custom header
e.addHeader("WWW-Authenticate", "custom-challenge");
return e;
}
@Override
public ElasticsearchSecurityException missingToken(RestRequest request, ThreadContext context) {
ElasticsearchSecurityException e = super.missingToken(request, context);
// set a custom header
e.addHeader("WWW-Authenticate", "custom-challenge");
return e;
}
@Override
public ElasticsearchSecurityException missingToken(TransportMessage message, String action, ThreadContext context) {
ElasticsearchSecurityException e = super.missingToken(message, action, context);
// set a custom header
e.addHeader("WWW-Authenticate", "custom-challenge");
return e;
}
}

View File

@ -0,0 +1,70 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.realm;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.xpack.security.authc.AuthenticationResult;
import org.elasticsearch.xpack.security.authc.AuthenticationToken;
import org.elasticsearch.xpack.security.authc.Realm;
import org.elasticsearch.xpack.security.authc.RealmConfig;
import org.elasticsearch.xpack.security.authc.support.CharArrays;
import org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.security.user.User;
public class CustomRealm extends Realm {
public static final String TYPE = "custom";
public static final String USER_HEADER = "User";
public static final String PW_HEADER = "Password";
public static final String KNOWN_USER = "custom_user";
public static final SecureString KNOWN_PW = new SecureString("x-pack-test-password".toCharArray());
static final String[] ROLES = new String[] { "superuser" };
public CustomRealm(RealmConfig config) {
super(TYPE, config);
}
@Override
public boolean supports(AuthenticationToken token) {
return token instanceof UsernamePasswordToken;
}
@Override
public UsernamePasswordToken token(ThreadContext threadContext) {
String user = threadContext.getHeader(USER_HEADER);
if (user != null) {
String password = threadContext.getHeader(PW_HEADER);
if (password != null) {
return new UsernamePasswordToken(user, new SecureString(password.toCharArray()));
}
}
return null;
}
@Override
public void authenticate(AuthenticationToken authToken, ActionListener<AuthenticationResult> listener) {
UsernamePasswordToken token = (UsernamePasswordToken)authToken;
final String actualUser = token.principal();
if (KNOWN_USER.equals(actualUser)) {
if (CharArrays.constantTimeEquals(token.credentials().getChars(), KNOWN_PW.getChars())) {
listener.onResponse(AuthenticationResult.success(new User(actualUser, ROLES)));
} else {
listener.onResponse(AuthenticationResult.unsuccessful("Invalid password for user " + actualUser, null));
}
} else {
listener.onResponse(AuthenticationResult.notHandled());
}
}
@Override
public void lookupUser(String username, ActionListener<User> listener) {
listener.onResponse(null);
}
}

View File

@ -0,0 +1,57 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.role;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.security.authz.RoleDescriptor;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
/**
* A custom roles provider implementation for testing that serves
* static roles from memory.
*/
public class CustomInMemoryRolesProvider
extends AbstractComponent
implements BiConsumer<Set<String>, ActionListener<Set<RoleDescriptor>>> {
public static final String INDEX = "foo";
public static final String ROLE_A = "roleA";
public static final String ROLE_B = "roleB";
private final Map<String, String> rolePermissionSettings;
public CustomInMemoryRolesProvider(Settings settings, Map<String, String> rolePermissionSettings) {
super(settings);
this.rolePermissionSettings = rolePermissionSettings;
}
@Override
public void accept(Set<String> roles, ActionListener<Set<RoleDescriptor>> listener) {
Set<RoleDescriptor> roleDescriptors = new HashSet<>();
for (String role : roles) {
if (rolePermissionSettings.containsKey(role)) {
roleDescriptors.add(
new RoleDescriptor(role, new String[] { "all" },
new RoleDescriptor.IndicesPrivileges[] {
RoleDescriptor.IndicesPrivileges.builder()
.privileges(rolePermissionSettings.get(role))
.indices(INDEX)
.grantedFields("*")
.build()
}, null)
);
}
}
listener.onResponse(roleDescriptors);
}
}

View File

@ -0,0 +1,3 @@
grant {
permission java.lang.RuntimePermission "queuePrintJob";
};

View File

@ -0,0 +1 @@
org.elasticsearch.example.ExampleSecurityExtension

View File

@ -0,0 +1,121 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.realm;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.env.Environment;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.xpack.XPackClientPlugin;
import org.elasticsearch.xpack.client.PreBuiltXPackTransportClient;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import static org.hamcrest.Matchers.is;
/**
* Integration test to test authentication with the custom realm
*/
public class CustomRealmIT extends ESIntegTestCase {
@Override
protected Settings externalClusterClientSettings() {
return Settings.builder()
.put(ThreadContext.PREFIX + "." + CustomRealm.USER_HEADER, CustomRealm.KNOWN_USER)
.put(ThreadContext.PREFIX + "." + CustomRealm.PW_HEADER, CustomRealm.KNOWN_PW.toString())
.put(NetworkModule.TRANSPORT_TYPE_KEY, "security4")
.build();
}
@Override
protected Collection<Class<? extends Plugin>> transportClientPlugins() {
return Collections.<Class<? extends Plugin>>singleton(XPackClientPlugin.class);
}
public void testHttpConnectionWithNoAuthentication() throws Exception {
try {
getRestClient().performRequest("GET", "/");
fail("request should have failed");
} catch(ResponseException e) {
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), is(401));
String value = response.getHeader("WWW-Authenticate");
assertThat(value, is("custom-challenge"));
}
}
public void testHttpAuthentication() throws Exception {
Response response = getRestClient().performRequest("GET", "/",
new BasicHeader(CustomRealm.USER_HEADER, CustomRealm.KNOWN_USER),
new BasicHeader(CustomRealm.PW_HEADER, CustomRealm.KNOWN_PW.toString()));
assertThat(response.getStatusLine().getStatusCode(), is(200));
}
public void testTransportClient() throws Exception {
NodesInfoResponse nodeInfos = client().admin().cluster().prepareNodesInfo().get();
List<NodeInfo> nodes = nodeInfos.getNodes();
assertTrue(nodes.isEmpty() == false);
TransportAddress publishAddress = randomFrom(nodes).getTransport().address().publishAddress();
String clusterName = nodeInfos.getClusterName().value();
Settings settings = Settings.builder()
.put("cluster.name", clusterName)
.put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString())
.put(ThreadContext.PREFIX + "." + CustomRealm.USER_HEADER, CustomRealm.KNOWN_USER)
.put(ThreadContext.PREFIX + "." + CustomRealm.PW_HEADER, CustomRealm.KNOWN_PW.toString())
.build();
try (TransportClient client = new PreBuiltXPackTransportClient(settings)) {
client.addTransportAddress(publishAddress);
ClusterHealthResponse response = client.admin().cluster().prepareHealth().execute().actionGet();
assertThat(response.isTimedOut(), is(false));
}
}
public void testTransportClientWrongAuthentication() throws Exception {
NodesInfoResponse nodeInfos = client().admin().cluster().prepareNodesInfo().get();
List<NodeInfo> nodes = nodeInfos.getNodes();
assertTrue(nodes.isEmpty() == false);
TransportAddress publishAddress = randomFrom(nodes).getTransport().address().publishAddress();
String clusterName = nodeInfos.getClusterName().value();
Settings settings = Settings.builder()
.put("cluster.name", clusterName)
.put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString())
.put(ThreadContext.PREFIX + "." + CustomRealm.USER_HEADER, CustomRealm.KNOWN_USER + randomAlphaOfLength(1))
.put(ThreadContext.PREFIX + "." + CustomRealm.PW_HEADER, CustomRealm.KNOWN_PW.toString())
.build();
try (TransportClient client = new PreBuiltXPackTransportClient(settings)) {
client.addTransportAddress(publishAddress);
client.admin().cluster().prepareHealth().execute().actionGet();
fail("authentication failure should have resulted in a NoNodesAvailableException");
} catch (NoNodeAvailableException e) {
// expected
}
}
public void testSettingsFiltering() throws Exception {
NodesInfoResponse nodeInfos = client().admin().cluster().prepareNodesInfo().clear().setSettings(true).get();
for(NodeInfo info : nodeInfos.getNodes()) {
Settings settings = info.getSettings();
assertNotNull(settings);
assertNull(settings.get("xpack.security.authc.realms.custom.filtered_setting"));
assertEquals(CustomRealm.TYPE, settings.get("xpack.security.authc.realms.custom.type"));
}
}
}

View File

@ -0,0 +1,48 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.realm;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.env.TestEnvironment;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.security.authc.AuthenticationResult;
import org.elasticsearch.xpack.security.authc.RealmConfig;
import org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.security.user.User;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;
public class CustomRealmTests extends ESTestCase {
public void testAuthenticate() {
Settings globalSettings = Settings.builder().put("path.home", createTempDir()).build();
CustomRealm realm = new CustomRealm(new RealmConfig("test", Settings.EMPTY, globalSettings,
TestEnvironment.newEnvironment(globalSettings), new ThreadContext(globalSettings)));
SecureString password = CustomRealm.KNOWN_PW.clone();
UsernamePasswordToken token = new UsernamePasswordToken(CustomRealm.KNOWN_USER, password);
PlainActionFuture<AuthenticationResult> plainActionFuture = new PlainActionFuture<>();
realm.authenticate(token, plainActionFuture);
User user = plainActionFuture.actionGet().getUser();
assertThat(user, notNullValue());
assertThat(user.roles(), equalTo(CustomRealm.ROLES));
assertThat(user.principal(), equalTo(CustomRealm.KNOWN_USER));
}
public void testAuthenticateBadUser() {
Settings globalSettings = Settings.builder().put("path.home", createTempDir()).build();
CustomRealm realm = new CustomRealm(new RealmConfig("test", Settings.EMPTY, globalSettings,
TestEnvironment.newEnvironment(globalSettings), new ThreadContext(globalSettings)));
SecureString password = CustomRealm.KNOWN_PW.clone();
UsernamePasswordToken token = new UsernamePasswordToken(CustomRealm.KNOWN_USER + "1", password);
PlainActionFuture<AuthenticationResult> plainActionFuture = new PlainActionFuture<>();
realm.authenticate(token, plainActionFuture);
final AuthenticationResult result = plainActionFuture.actionGet();
assertThat(result.getStatus(), equalTo(AuthenticationResult.Status.CONTINUE));
}
}

View File

@ -0,0 +1,95 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.example.role;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.example.realm.CustomRealm;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.xpack.XPackClientPlugin;
import org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken;
import org.elasticsearch.xpack.security.client.SecurityClient;
import java.util.Collection;
import java.util.Collections;
import static org.elasticsearch.example.role.CustomInMemoryRolesProvider.INDEX;
import static org.elasticsearch.example.role.CustomInMemoryRolesProvider.ROLE_A;
import static org.elasticsearch.example.role.CustomInMemoryRolesProvider.ROLE_B;
import static org.elasticsearch.xpack.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.hamcrest.Matchers.is;
/**
* Integration test for custom roles providers.
*/
public class CustomRolesProviderIT extends ESIntegTestCase {
private static final String TEST_USER = "test_user";
private static final String TEST_PWD = "change_me";
@Override
protected Settings externalClusterClientSettings() {
return Settings.builder()
.put(ThreadContext.PREFIX + "." + CustomRealm.USER_HEADER, CustomRealm.KNOWN_USER)
.put(ThreadContext.PREFIX + "." + CustomRealm.PW_HEADER, CustomRealm.KNOWN_PW.toString())
.put(NetworkModule.TRANSPORT_TYPE_KEY, "security4")
.build();
}
@Override
protected Collection<Class<? extends Plugin>> transportClientPlugins() {
return Collections.singleton(XPackClientPlugin.class);
}
public void setupTestUser(String role) {
SecurityClient securityClient = new SecurityClient(client());
securityClient.preparePutUser(TEST_USER, TEST_PWD.toCharArray(), role).get();
}
public void testAuthorizedCustomRoleSucceeds() throws Exception {
setupTestUser(ROLE_B);
// roleB has all permissions on index "foo", so creating "foo" should succeed
Response response = getRestClient().performRequest("PUT", "/" + INDEX, authHeader());
assertThat(response.getStatusLine().getStatusCode(), is(200));
}
public void testFirstResolvedRoleTakesPrecedence() throws Exception {
// the first custom roles provider has set ROLE_A to only have read permission on the index,
// the second custom roles provider has set ROLE_A to have all permissions, but since
// the first custom role provider appears first in order, it should take precedence and deny
// permission to create the index
setupTestUser(ROLE_A);
// roleB has all permissions on index "foo", so creating "foo" should succeed
try {
getRestClient().performRequest("PUT", "/" + INDEX, authHeader());
fail(ROLE_A + " should not be authorized to create index " + INDEX);
} catch (ResponseException e) {
assertThat(e.getResponse().getStatusLine().getStatusCode(), is(403));
}
}
public void testUnresolvedRoleDoesntSucceed() throws Exception {
setupTestUser("unknown");
// roleB has all permissions on index "foo", so creating "foo" should succeed
try {
getRestClient().performRequest("PUT", "/" + INDEX, authHeader());
fail(ROLE_A + " should not be authorized to create index " + INDEX);
} catch (ResponseException e) {
assertThat(e.getResponse().getStatusLine().getStatusCode(), is(403));
}
}
private BasicHeader authHeader() {
return new BasicHeader(UsernamePasswordToken.BASIC_AUTH_HEADER,
basicAuthHeaderValue(TEST_USER, new SecureString(TEST_PWD.toCharArray())));
}
}

View File

@ -34,3 +34,77 @@
- is_true: error.script_stack - is_true: error.script_stack
- match: { status: 500 } - match: { status: 500 }
---
"Test painless exceptions are returned when logging a broken response":
- do:
cluster.health:
wait_for_status: green
- do:
xpack.watcher.execute_watch:
body: >
{
"watch" : {
"trigger": {
"schedule": {
"interval": "1d"
}
},
"input": {
"simple": {
"foo": "bar"
}
},
"actions": {
"my-logging": {
"transform": {
"script": {
"source": "def x = [:] ; def y = [:] ; x.a = y ; y.a = x ; return x"
}
},
"logging": {
"text": "{{ctx}}"
}
}
}
}
}
- match: { watch_record.watch_id: "_inlined_" }
- match: { watch_record.trigger_event.type: "manual" }
- match: { watch_record.state: "executed" }
- match: { watch_record.result.actions.0.status: "failure" }
- match: { watch_record.result.actions.0.error.caused_by.caused_by.type: "illegal_argument_exception" }
- match: { watch_record.result.actions.0.error.caused_by.caused_by.reason: "Iterable object is self-referencing itself" }
- do:
catch: bad_request
xpack.watcher.execute_watch:
body: >
{
"watch": {
"trigger": {
"schedule": {
"interval": "10s"
}
},
"input": {
"simple": {
"foo": "bar"
}
},
"actions": {
"my-logging": {
"transform": {
"script": {
"source": "def x = [:] ; def y = [:] ; x.a = y ; y.a = x ; return x"
}
},
"logging": {
"text": "{{#join}}ctx.payload{{/join}}"
}
}
}
}
}