Merge remote-tracking branch 'elastic/master' into feature/sql

Original commit: elastic/x-pack-elasticsearch@273e0a110e
This commit is contained in:
Igor Motov 2017-09-26 11:59:49 -04:00
commit fdd98f01ed
57 changed files with 854 additions and 833 deletions

View File

@@ -8,10 +8,11 @@
{xpack} includes commands that help you configure security:
* <<certgen>>
//* <<setup-passwords>>
* <<setup-passwords>>
* <<users-command>>
--
include::certgen.asciidoc[]
include::setup-passwords.asciidoc[]
include::users-command.asciidoc[]

View File

@@ -0,0 +1,47 @@
[role="xpack"]
[[setup-passwords]]
== setup-passwords
The `setup-passwords` command sets the passwords for the built-in `elastic`,
`kibana`, and `logstash_system` users.
[float]
=== Synopsis
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords auto|interactive [-u "<URL>"]
--------------------------------------------------
[float]
=== Description
This command is intended for use only during the initial configuration of
{xpack}. It uses the
{xpack-ref}/setting-up-authentication.html#bootstrap-elastic-passwords[`elastic` bootstrap password]
to run user management API requests. After you set a password for the `elastic`
user, the bootstrap password is no longer active and you cannot use this command.
Instead, you can change passwords by using the *Management > Users* UI in {kib}
or the <<security-api-change-password,Change Password API>>.
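As a sketch of that later workflow, such a password change can also be issued from the command line with `curl`; the node URL and password value below are placeholders, not defaults:

```shell
# Illustrative only: prompts for the elastic user's current password,
# then replaces it. Substitute your own node URL and a strong password.
curl -u elastic -XPUT 'localhost:9200/_xpack/security/user/elastic/_password' \
  -H 'Content-Type: application/json' \
  -d '{ "password" : "s3cr3t-placeholder" }'
```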
[float]
=== Parameters
`auto`:: Outputs randomly generated passwords to the console.
`interactive`:: Prompts you to manually enter passwords.
`-u "<URL>"`:: Specifies the URL that the tool uses to submit the user management API
requests. The default value is determined from the settings in your
`elasticsearch.yml` file.
[float]
=== Examples
The following example uses the `-u` parameter to tell the tool where to submit
its user management API requests:
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords auto -u "http://localhost:9201"
--------------------------------------------------

View File

@@ -197,19 +197,27 @@ bin/elasticsearch
----------------------------------------------------------
--
.. Set the passwords for all built-in users. You can update passwords from the
**Management > Users** UI in {kib}, use the `setup-passwords` tool, or use the
security user API. For example:
.. Set the passwords for all built-in users. The +setup-passwords+ command is
the simplest method to set the built-in users' passwords for the first time.
+
--
For example, you can run the command in an "interactive" mode, which prompts you
to enter new passwords for the `elastic`, `kibana`, and `logstash_system` users:
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords interactive
--------------------------------------------------
If you prefer to have randomly generated passwords, specify `auto` instead of
`interactive`. If the node is not listening on "http://localhost:9200", use the
`-u` parameter to specify the appropriate URL. For more information,
see {xpack-ref}/setting-up-authentication.html[Setting Up User Authentication].
For more information about the command options, see <<setup-passwords>>.
IMPORTANT: The `setup-passwords` command uses a transient bootstrap password
that is no longer valid after the command runs successfully. You cannot run the
`setup-passwords` command a second time. Instead, you can update passwords from
the **Management > Users** UI in {kib} or use the security user API.
For more information, see
{ref}/setting-up-authentication.html#set-built-in-user-passwords[Setting Built-in User Passwords].
--
. {kibana-ref}/installing-xpack-kb.html[Install {xpack} on {kib}].

View File

@@ -19,13 +19,10 @@ These users have a fixed set of privileges and cannot be authenticated until the
passwords have been set. The `elastic` user can be used to
<<set-built-in-user-passwords,set all of the built-in user passwords>>.
.{security} Built-in Users
|========
| Name | Description
| `elastic` | A built-in _superuser_. See <<built-in-roles>>.
| `kibana` | The user Kibana uses to connect and communicate with Elasticsearch.
| `logstash_system` | The user Logstash uses when storing monitoring information in Elasticsearch.
|========
`elastic`:: A built-in _superuser_. See <<built-in-roles>>.
`kibana`:: The user Kibana uses to connect and communicate with Elasticsearch.
`logstash_system`:: The user Logstash uses when storing monitoring information in Elasticsearch.
[float]
[[built-in-user-explanation]]
@@ -43,74 +40,84 @@ realm will not have any effect on the built-in users. The built-in users can
be disabled individually, using the
{ref}/security-api-users.html[user management API].
[float]
[[bootstrap-elastic-passwords]]
==== The Elastic Bootstrap Password
When you install {xpack}, if the `elastic` user does not already have a password,
it uses a default bootstrap password. The bootstrap password is a transient
password that enables you to run the tools that set all the built-in user passwords.
By default, the bootstrap password is derived from a randomized `keystore.seed`
setting, which is added to the keystore when you install {xpack}. You do not need
to know or change this bootstrap password. If you have defined a
`bootstrap.password` setting in the keystore, however, that value is used instead.
For more information about interacting with the keystore, see
{ref}/secure-settings.html[Secure Settings].
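For instance, you can inspect the keystore with the same tool; the commands below assume the default install layout and are a sketch, not required setup:

```shell
# List keystore entries; keystore.seed is added when {xpack} is installed
bin/elasticsearch-keystore list
# Optionally define an explicit bootstrap password (you are prompted for it)
bin/elasticsearch-keystore add "bootstrap.password"
```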
////
//TBD: Is the following still true?
As the `elastic` user is stored in the native realm, the password will be
synced to all the nodes in a cluster. It is safe to bootstrap the password with
multiple nodes as long as the password is the same. If different passwords are
set with different nodes, it is unpredictable which password will be bootstrapped.
////
NOTE: After you <<set-built-in-user-passwords,set passwords for the built-in users>>,
in particular for the `elastic` user, there is no further use for the bootstrap
password.
[float]
[[set-built-in-user-passwords]]
==== Set Built-in User Passwords
[IMPORTANT]
=============================================================================
==== Setting Built-in User Passwords
You must set the passwords for all built-in users.
You can update passwords from the *Management > Users* UI in Kibana, using the
`setup-passwords` tool, or with the security user API.
The setup-passwords tool is a command line tool that is provided to assist with
setup. When it is run, it will use the `elastic` user to execute API requests
that will change the passwords of the `elastic`, `kibana`, and
`logstash_system` users. In "auto" mode the passwords will be generated randomly and
printed to the console.
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords auto
--------------------------------------------------
There is also an "interactive" mode that will prompt you to manually enter passwords.
The +setup-passwords+ tool is the simplest method to set the built-in users'
passwords for the first time. It uses the `elastic` user's bootstrap password to
run user management API requests. For example, you can run the command in
an "interactive" mode, which prompts you to enter new passwords for the
`elastic`, `kibana`, and `logstash_system` users:
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords interactive
--------------------------------------------------
If the node is not listening at "http://localhost:9200", you will need to pass the URL parameter
to tell the tool where to submit the requests.
For more information about the command options, see
{ref}/setup-passwords.html[setup-passwords].
IMPORTANT: After you set a password for the `elastic` user, the bootstrap
password is no longer valid; you cannot run the `setup-passwords` command a
second time.
Alternatively, you can set the initial passwords for the built-in users by using
the *Management > Users* page in {kib} or the
{ref}/security-api-change-password.html[Change Password API]. These methods are
more complex. You must supply the `elastic` user and its bootstrap password to
log into {kib} or run the API. This requirement means that you cannot use the
default bootstrap password that is derived from the `keystore.seed` setting.
Instead, you must explicitly set a `bootstrap.password` setting in the keystore
before you start {es}. For example, the following command prompts you to enter a
new bootstrap password:
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords auto -u "http://localhost:9201"
--------------------------------------------------
----------------------------------------------------
bin/elasticsearch-keystore add "bootstrap.password"
----------------------------------------------------
The {ref}/security-api-users.html#security-api-reset-user-password[Reset Password API] can
also be used to change the passwords manually.
You can then start {es} and {kib} and use the `elastic` user and bootstrap
password to log into {kib} and change the passwords. Alternatively, you can
submit Change Password API requests for each built-in user. These methods are
better suited for changing your passwords after the initial setup is complete,
since at that point the bootstrap password is no longer required.
[source,js]
---------------------------------------------------------------------
PUT _xpack/security/user/elastic/_password
{
"password": "elasticpassword"
}
---------------------------------------------------------------------
// CONSOLE
[float]
[[add-built-in-user-passwords]]
==== Adding Built-in User Passwords To {kib} and Logstash
[source,js]
---------------------------------------------------------------------
PUT _xpack/security/user/kibana/_password
{
"password": "kibanapassword"
}
---------------------------------------------------------------------
// CONSOLE
[source,js]
---------------------------------------------------------------------
PUT _xpack/security/user/logstash_system/_password
{
"password": "logstashpassword"
}
---------------------------------------------------------------------
// CONSOLE
Once the `kibana` user password is reset, you need to update the Kibana server
with the new password by setting `elasticsearch.password` in the
`kibana.yml` configuration file:
After the `kibana` user password is set, you need to update the {kib} server
with the new password by setting `elasticsearch.password` in the `kibana.yml`
configuration file:
[source,yaml]
-----------------------------------------------
@@ -138,16 +145,15 @@ Once the password has been changed, you can enable the user via the following AP
PUT _xpack/security/user/logstash_system/_enable
---------------------------------------------------------------------
// CONSOLE
=============================================================================
[float]
[[disabling-default-password]]
==== Disable Default Password Functionality
==== Disabling Default Password Functionality
[IMPORTANT]
=============================================================================
This setting is deprecated. The elastic user no longer has a default password. The password must
be set before the user can be used.
This setting is deprecated. The elastic user no longer has a default password.
The password must be set before the user can be used.
See <<bootstrap-elastic-passwords>>.
=============================================================================
[float]

View File

@@ -16,23 +16,12 @@ To get started with {security}:
. <<installing-xpack, Install X-Pack>>.
. On at least one of the nodes in your cluster, set the "bootstrap.password" secure setting in the keystore.
+
--
[source,shell]
--------------------------------------------------
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add "bootstrap.password"
--------------------------------------------------
. Start {es} and {kib}.
--
. Start Elasticsearch and Kibana. The Elasticsearch node with the "bootstrap.password" setting will use that
setting to set the `elastic` user password on node startup.
. Set the passwords of the built-in `elastic`, `kibana`, and `logstash_system` users using the provided setup
passwords tool. In "auto" mode this tool will randomly generate passwords and print them to the console.
. Set the passwords of the built-in `elastic`, `kibana`, and `logstash_system` users.
In most cases, you can simply run the `bin/x-pack/setup-passwords` tool on one of the nodes in your cluster.
Run that command with the same user that is running your {es} process.
In "auto" mode this tool will randomly generate passwords and print them to the console.
+
--
[source,shell]
@@ -40,9 +29,10 @@ passwords tool. In "auto" mode this tool will randomly generate passwords and pr
bin/x-pack/setup-passwords auto
--------------------------------------------------
For more information, see <<set-built-in-user-passwords>>.
--
. Set up roles and users to control access to Elasticsearch and Kibana.
. Set up roles and users to control access to {es} and {kib}.
For example, to grant _John Doe_ full access to all indices that match
the pattern `events*` and enable him to create visualizations and dashboards
for those indices in Kibana, you could create an `events_admin` role and
@@ -76,7 +66,7 @@ curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/johndoe' -H "Content
[[enable-auditing]]
. Enable Auditing to keep track of attempted and successful interactions with
your Elasticsearch cluster:
your {es} cluster:
+
--
.. Add the following setting to `elasticsearch.yml` on all nodes in your cluster:
@@ -85,10 +75,10 @@ curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/johndoe' -H "Content
----------------------------
xpack.security.audit.enabled: true
----------------------------
.. Restart Elasticsearch.
.. Restart {es}.
By default, events are logged to a dedicated `elasticsearch-access.log` file in
`ES_HOME/logs`. You can also store the events in an Elasticsearch index for
`ES_HOME/logs`. You can also store the events in an {es} index for
easier analysis and control what events are logged. For more information, see
{xpack-ref}/auditing.html[Configuring Auditing].
--

View File

@@ -30,13 +30,9 @@ Set to `false` to disable {es} {monitoring} for Elasticsearch.
The `xpack.monitoring.collection` settings control how data is collected from
your Elasticsearch nodes.
`xpack.monitoring.collection.cluster.state.timeout`::
Sets the timeout for collecting the cluster state. Defaults to `10m`.
`xpack.monitoring.collection.cluster.stats.timeout`::
Sets the timeout for collecting the cluster statistics. Defaults to `10m`.
Sets the timeout for collecting the cluster statistics. Defaults to `10s`.
`xpack.monitoring.collection.indices`::
@@ -50,11 +46,11 @@ You can update this setting through the Cluster Update Settings API.
`xpack.monitoring.collection.index.stats.timeout`::
Sets the timeout for collecting index statistics. Defaults to `10m`.
Sets the timeout for collecting index statistics. Defaults to `10s`.
`xpack.monitoring.collection.indices.stats.timeout`::
Sets the timeout for collecting total indices statistics. Defaults to `10m`.
Sets the timeout for collecting total indices statistics. Defaults to `10s`.
`xpack.monitoring.collection.index.recovery.active_only`::
@@ -63,7 +59,7 @@ collect only active recoveries. Defaults to `false`.
`xpack.monitoring.collection.index.recovery.timeout`::
Sets the timeout for collecting the recovery information. Defaults to `10m`.
Sets the timeout for collecting the recovery information. Defaults to `10s`.
`xpack.monitoring.collection.interval`::

View File

@@ -160,8 +160,7 @@ class LicensesMetaData extends AbstractNamedDiffable<MetaData.Custom> implements
streamOutput.writeBoolean(true); // has a license
license.writeTo(streamOutput);
}
// TODO Eventually this should be 6.0. But it is 7.0 temporarily for bwc
if (streamOutput.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (streamOutput.getVersion().onOrAfter(Version.V_6_1_0)) {
if (trialVersion == null) {
streamOutput.writeBoolean(false);
} else {
@@ -177,8 +176,7 @@ class LicensesMetaData extends AbstractNamedDiffable<MetaData.Custom> implements
} else {
license = LICENSE_TOMBSTONE;
}
// TODO Eventually this should be 6.0. But it is 7.0 temporarily for bwc
if (streamInput.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (streamInput.getVersion().onOrAfter(Version.V_6_1_0)) {
boolean hasExercisedTrial = streamInput.readBoolean();
if (hasExercisedTrial) {
this.trialVersion = Version.readVersion(streamInput);

View File

@@ -10,7 +10,7 @@ import org.elasticsearch.common.logging.LoggerMessageFormat;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.License.OperationMode;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.Monitoring;
import java.util.Collections;
import java.util.LinkedHashMap;
@@ -159,7 +159,7 @@ public class XPackLicenseState {
newMode, newMode, newMode),
LoggerMessageFormat.format(
"Automatic index cleanup is locked to {} days for clusters with [{}] license.",
MonitoringSettings.HISTORY_DURATION.getDefault(Settings.EMPTY).days(), newMode)
Monitoring.HISTORY_DURATION.getDefault(Settings.EMPTY).days(), newMode)
};
}
break;

View File

@@ -82,7 +82,6 @@ import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.MachineLearningFeatureSet;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.monitoring.MonitoringFeatureSet;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.notification.email.Account;
import org.elasticsearch.xpack.notification.email.EmailService;
import org.elasticsearch.xpack.notification.email.attachment.DataAttachmentParser;
@@ -415,7 +414,7 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
public List<Setting<?>> getSettings() {
ArrayList<Setting<?>> settings = new ArrayList<>();
settings.addAll(Security.getSettings(transportClientMode, extensionsService));
settings.addAll(MonitoringSettings.getSettings());
settings.addAll(monitoring.getSettings());
settings.addAll(watcher.getSettings());
settings.addAll(machineLearning.getSettings());
settings.addAll(licensing.getSettings());
@@ -451,7 +450,7 @@ public class XPackPlugin extends Plugin implements ScriptPlugin, ActionPlugin, I
filters.add("xpack.notification.pagerduty.account.*." + PagerDutyAccount.SERVICE_KEY_SETTING);
filters.add("xpack.notification.hipchat.account.*.auth_token");
filters.addAll(security.getSettingsFilter(extensionsService));
filters.addAll(MonitoringSettings.getSettingsFilter());
filters.addAll(monitoring.getSettingsFilter());
if (transportClientMode == false) {
for (XPackExtension extension : extensionsService.getExtensions()) {
filters.addAll(extension.getSettingsFilter());

View File

@@ -283,7 +283,7 @@ public class MachineLearning implements ActionPlugin {
// This will only happen when path.home is not set, which is disallowed in production
throw new ElasticsearchException("Failed to create native process controller for Machine Learning");
}
autodetectProcessFactory = new NativeAutodetectProcessFactory(jobProvider, env, settings, nativeController, internalClient);
autodetectProcessFactory = new NativeAutodetectProcessFactory(env, settings, nativeController, internalClient);
normalizerProcessFactory = new NativeNormalizerProcessFactory(env, settings, nativeController);
} catch (IOException e) {
// This also should not happen in production, as the MachineLearningFeatureSet should have

View File

@@ -15,7 +15,6 @@ import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.xpack.ml.action.OpenJobAction.JobTask;
import org.elasticsearch.xpack.ml.job.config.DataDescription;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.config.JobUpdate;
@@ -56,7 +55,6 @@ public class AutodetectCommunicator implements Closeable {
private static final Duration FLUSH_PROCESS_CHECK_FREQUENCY = Duration.ofSeconds(1);
private final Job job;
private final JobTask jobTask;
private final AutodetectProcess autodetectProcess;
private final StateStreamer stateStreamer;
private final DataCountsReporter dataCountsReporter;
@@ -66,12 +64,11 @@ public class AutodetectCommunicator implements Closeable {
private final NamedXContentRegistry xContentRegistry;
private volatile boolean processKilled;
AutodetectCommunicator(Job job, JobTask jobTask, AutodetectProcess process, StateStreamer stateStreamer,
AutodetectCommunicator(Job job, AutodetectProcess process, StateStreamer stateStreamer,
DataCountsReporter dataCountsReporter, AutoDetectResultProcessor autoDetectResultProcessor,
Consumer<Exception> onFinishHandler, NamedXContentRegistry xContentRegistry,
ExecutorService autodetectWorkerExecutor) {
this.job = job;
this.jobTask = jobTask;
this.autodetectProcess = process;
this.stateStreamer = stateStreamer;
this.dataCountsReporter = dataCountsReporter;
@@ -261,10 +258,6 @@ public class AutodetectCommunicator implements Closeable {
}
}
public JobTask getJobTask() {
return jobTask;
}
public ZonedDateTime getProcessStartTime() {
return autodetectProcess.getProcessStartTime();
}

View File

@@ -17,7 +17,6 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
@@ -50,7 +49,6 @@ import org.elasticsearch.xpack.ml.job.process.normalizer.NormalizerFactory;
import org.elasticsearch.xpack.ml.job.process.normalizer.Renormalizer;
import org.elasticsearch.xpack.ml.job.process.normalizer.ScoresUpdater;
import org.elasticsearch.xpack.ml.job.process.normalizer.ShortCircuitingRenormalizer;
import org.elasticsearch.xpack.ml.job.results.Forecast;
import org.elasticsearch.xpack.ml.notifications.Auditor;
import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.persistent.PersistentTasksCustomMetaData.PersistentTask;
@@ -59,7 +57,6 @@ import java.io.IOException;
import java.io.InputStream;
import java.time.Duration;
import java.time.ZonedDateTime;
import java.util.Arrays;
import java.util.Date;
import java.util.Iterator;
import java.util.List;
@@ -105,8 +102,7 @@ public class AutodetectProcessManager extends AbstractComponent {
private final JobResultsPersister jobResultsPersister;
private final JobDataCountsPersister jobDataCountsPersister;
private final ConcurrentMap<Long, AutodetectCommunicator> autoDetectCommunicatorByOpenJob = new ConcurrentHashMap<>();
private final ConcurrentMap<Long, AutodetectCommunicator> autoDetectCommunicatorByClosingJob = new ConcurrentHashMap<>();
private final ConcurrentMap<Long, ProcessContext> processByAllocation = new ConcurrentHashMap<>();
private final int maxAllowedRunningJobs;
@@ -134,53 +130,37 @@ public class AutodetectProcessManager extends AbstractComponent {
}
public synchronized void closeAllJobsOnThisNode(String reason) throws IOException {
int numJobs = autoDetectCommunicatorByOpenJob.size();
int numJobs = processByAllocation.size();
if (numJobs != 0) {
logger.info("Closing [{}] jobs, because [{}]", numJobs, reason);
for (AutodetectCommunicator communicator : autoDetectCommunicatorByOpenJob.values()) {
closeJob(communicator.getJobTask(), false, reason);
for (ProcessContext process : processByAllocation.values()) {
closeJob(process.getJobTask(), false, reason);
}
}
}
public void killProcess(JobTask jobTask, boolean awaitCompletion, String reason) {
String extraInfo;
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.remove(jobTask.getAllocationId());
if (communicator == null) {
extraInfo = " while closing";
// if there isn't an open job, check for a closing job
communicator = autoDetectCommunicatorByClosingJob.remove(jobTask.getAllocationId());
} else {
extraInfo = "";
}
if (communicator != null) {
if (reason == null) {
logger.info("Killing job [{}]{}", jobTask.getJobId(), extraInfo);
} else {
logger.info("Killing job [{}]{}, because [{}]", jobTask.getJobId(), extraInfo, reason);
}
killProcess(communicator, jobTask.getJobId(), awaitCompletion, true);
ProcessContext processContext = processByAllocation.remove(jobTask.getAllocationId());
if (processContext != null) {
processContext.newKillBuilder()
.setAwaitCompletion(awaitCompletion)
.setFinish(true)
.setReason(reason)
.kill();
}
}
public void killAllProcessesOnThisNode() {
// first kill open jobs, then closing jobs
for (Iterator<AutodetectCommunicator> iter : Arrays.asList(autoDetectCommunicatorByOpenJob.values().iterator(),
autoDetectCommunicatorByClosingJob.values().iterator())) {
while (iter.hasNext()) {
AutodetectCommunicator communicator = iter.next();
iter.remove();
killProcess(communicator, communicator.getJobTask().getJobId(), false, false);
}
}
}
private void killProcess(AutodetectCommunicator communicator, String jobId, boolean awaitCompletion, boolean finish) {
try {
communicator.killProcess(awaitCompletion, finish);
} catch (IOException e) {
logger.error("[{}] Failed to kill autodetect process for job", jobId);
Iterator<ProcessContext> iterator = processByAllocation.values().iterator();
while (iterator.hasNext()) {
ProcessContext processContext = iterator.next();
processContext.newKillBuilder()
.setAwaitCompletion(false)
.setFinish(false)
.setSilent(true)
.kill();
iterator.remove();
}
}
@@ -205,7 +185,7 @@
*/
public void processData(JobTask jobTask, InputStream input, XContentType xContentType,
DataLoadParams params, BiConsumer<DataCounts, Exception> handler) {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getOpenAutodetectCommunicator(jobTask);
if (communicator == null) {
throw ExceptionsHelper.conflictStatusException("Cannot process data because job [" + jobTask.getJobId() + "] is not open");
}
@@ -223,7 +203,7 @@
*/
public void flushJob(JobTask jobTask, FlushJobParams params, ActionListener<FlushAcknowledgement> handler) {
logger.debug("Flushing job {}", jobTask.getJobId());
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getOpenAutodetectCommunicator(jobTask);
if (communicator == null) {
String message = String.format(Locale.ROOT, "Cannot flush because job [%s] is not open", jobTask.getJobId());
logger.debug(message);
@@ -250,7 +230,7 @@
*/
public void forecastJob(JobTask jobTask, ForecastParams params, Consumer<Exception> handler) {
logger.debug("Forecasting job {}", jobTask.getJobId());
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getOpenAutodetectCommunicator(jobTask);
if (communicator == null) {
String message = String.format(Locale.ROOT, "Cannot forecast because job [%s] is not open", jobTask.getJobId());
logger.debug(message);
@@ -271,7 +251,7 @@
public void writeUpdateProcessMessage(JobTask jobTask, List<JobUpdate.DetectorUpdate> updates, ModelPlotConfig config,
Consumer<Exception> handler) {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getOpenAutodetectCommunicator(jobTask);
if (communicator == null) {
String message = "Cannot process update model debug config because job [" + jobTask.getJobId() + "] is not open";
logger.debug(message);
@@ -298,6 +278,7 @@
}
logger.info("Opening job [{}]", jobId);
processByAllocation.putIfAbsent(jobTask.getAllocationId(), new ProcessContext(jobTask));
jobProvider.getAutodetectParams(job, params -> {
// We need to fork, otherwise we restore model state from a network thread (several GET api calls):
threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(new AbstractRunnable() {
@@ -308,19 +289,29 @@
@Override
protected void doRun() throws Exception {
ProcessContext processContext = processByAllocation.get(jobTask.getAllocationId());
if (processContext == null) {
logger.debug("Aborted opening job [{}] as it has been closed", jobId);
return;
}
if (processContext.getState() != ProcessContext.ProcessStateName.NOT_RUNNING) {
logger.debug("Cannot open job [{}] when its state is [{}]", jobId, processContext.getState().getClass().getName());
return;
}
try {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.computeIfAbsent(jobTask.getAllocationId(),
id -> create(jobTask, params, handler));
communicator.init(params.modelSnapshot());
createProcessAndSetRunning(processContext, params, handler);
processContext.getAutodetectCommunicator().init(params.modelSnapshot());
setJobState(jobTask, JobState.OPENED);
} catch (Exception e1) {
// No need to log here as the persistent task framework will log it
try {
// Don't leave a partially initialised process hanging around
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.remove(jobTask.getAllocationId());
if (communicator != null) {
communicator.killProcess(false, false);
}
processContext.newKillBuilder()
.setAwaitCompletion(false)
.setFinish(false)
.kill();
processByAllocation.remove(jobTask.getAllocationId());
} finally {
setJobState(jobTask, JobState.FAILED, e2 -> handler.accept(e1));
}
@@ -333,13 +324,28 @@
});
}
private void createProcessAndSetRunning(ProcessContext processContext, AutodetectParams params, Consumer<Exception> handler) {
try {
// At this point we lock the process context until the process has been started.
// The reason behind this is to ensure closing the job does not happen before
// the process is started as that can result to the job getting seemingly closed
// but the actual process is hanging alive.
processContext.tryLock();
AutodetectCommunicator communicator = create(processContext.getJobTask(), params, handler);
processContext.setRunning(communicator);
} finally {
// Now that the process is running and we have updated its state we can unlock.
// It is important to unlock before we initialize the communicator (ie. load the model state)
// as that may be a long-running method.
processContext.unlock();
}
}
AutodetectCommunicator create(JobTask jobTask, AutodetectParams autodetectParams, Consumer<Exception> handler) {
// Closing jobs can still be using some or all threads in MachineLearning.AUTODETECT_THREAD_POOL_NAME
// that an open job uses, so include them too when considering if enough threads are available.
// There's a slight possibility that the same key is in both sets, hence it's not sufficient to simply
// add the two map sizes.
int currentRunningJobs = Sets.union(autoDetectCommunicatorByOpenJob.keySet(), autoDetectCommunicatorByClosingJob.keySet()).size();
if (currentRunningJobs >= maxAllowedRunningJobs) {
int currentRunningJobs = processByAllocation.size();
if (currentRunningJobs > maxAllowedRunningJobs) {
throw new ElasticsearchStatusException("max running job capacity [" + maxAllowedRunningJobs + "] reached",
RestStatus.TOO_MANY_REQUESTS);
}
@@ -390,7 +396,7 @@ public class AutodetectProcessManager extends AbstractComponent {
}
throw e;
}
return new AutodetectCommunicator(job, jobTask, process, new StateStreamer(client), dataCountsReporter, processor, handler,
return new AutodetectCommunicator(job, process, new StateStreamer(client), dataCountsReporter, processor, handler,
xContentRegistry, autodetectWorkerExecutor);
}
@@ -429,31 +435,34 @@ public class AutodetectProcessManager extends AbstractComponent {
String jobId = jobTask.getJobId();
long allocationId = jobTask.getAllocationId();
logger.debug("Attempting to close job [{}], because [{}]", jobId, reason);
// don't remove the communicator immediately, because we need to ensure it's in the
// map of closing communicators before it's removed from the map of running ones
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(allocationId);
if (communicator == null) {
logger.debug("Cannot close: no active autodetect process for job [{}]", jobId);
return;
}
// keep a record of the job, so that it can still be killed while closing
autoDetectCommunicatorByClosingJob.putIfAbsent(allocationId, communicator);
communicator = autoDetectCommunicatorByOpenJob.remove(allocationId);
if (communicator == null) {
// if we get here a simultaneous close request beat us to the remove() call
logger.debug("Already closing autodetect process for job [{}]", jobId);
// don't remove the process context immediately, because we need to ensure
// it is reachable to enable killing a job while it is closing
ProcessContext processContext = processByAllocation.get(allocationId);
if (processContext == null) {
logger.debug("Cannot close job [{}] as it has already been closed", jobId);
return;
}
processContext.tryLock();
processContext.setDying();
processContext.unlock();
if (reason == null) {
logger.info("Closing job [{}]", jobId);
} else {
logger.info("Closing job [{}], because [{}]", jobId, reason);
}
AutodetectCommunicator communicator = processContext.getAutodetectCommunicator();
if (communicator == null) {
logger.debug("Job [{}] is being closed before its process is started", jobId);
jobTask.markAsCompleted();
return;
}
try {
communicator.close(restart, reason);
autoDetectCommunicatorByClosingJob.remove(allocationId);
processByAllocation.remove(allocationId);
} catch (Exception e) {
logger.warn("[" + jobId + "] Exception closing autodetect process", e);
setJobState(jobTask, JobState.FAILED);
@@ -462,15 +471,29 @@ public class AutodetectProcessManager extends AbstractComponent {
}
int numberOfOpenJobs() {
return autoDetectCommunicatorByOpenJob.size();
return (int) processByAllocation.values().stream()
.filter(p -> p.getState() != ProcessContext.ProcessStateName.DYING)
.count();
}
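The new `numberOfOpenJobs()` excludes DYING contexts before counting, so a job that is in the middle of closing no longer occupies an open-job slot. A condensed sketch of that stream-based count, with a hypothetical `OpenJobCounter` standing in for the manager:

```java
import java.util.List;

// Hypothetical stand-in for the manager: jobs whose process context is DYING
// are filtered out of the open-job count.
final class OpenJobCounter {
    enum State { NOT_RUNNING, RUNNING, DYING }

    static int numberOfOpenJobs(List<State> states) {
        return (int) states.stream()
                .filter(s -> s != State.DYING) // mirror the filter in the diff
                .count();
    }
}
```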
boolean jobHasActiveAutodetectProcess(JobTask jobTask) {
return autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId()) != null;
return getAutodetectCommunicator(jobTask) != null;
}
private AutodetectCommunicator getAutodetectCommunicator(JobTask jobTask) {
return processByAllocation.getOrDefault(jobTask.getAllocationId(), new ProcessContext(jobTask)).getAutodetectCommunicator();
}
private AutodetectCommunicator getOpenAutodetectCommunicator(JobTask jobTask) {
ProcessContext processContext = processByAllocation.get(jobTask.getAllocationId());
if (processContext.getState() == ProcessContext.ProcessStateName.RUNNING) {
return processContext.getAutodetectCommunicator();
}
return null;
}
public Optional<Duration> jobOpenTime(JobTask jobTask) {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getAutodetectCommunicator(jobTask);
if (communicator == null) {
return Optional.empty();
}
@@ -516,7 +539,7 @@ public class AutodetectProcessManager extends AbstractComponent {
}
public Optional<Tuple<DataCounts, ModelSizeStats>> getStatistics(JobTask jobTask) {
AutodetectCommunicator communicator = autoDetectCommunicatorByOpenJob.get(jobTask.getAllocationId());
AutodetectCommunicator communicator = getAutodetectCommunicator(jobTask);
if (communicator == null) {
return Optional.empty();
}
@@ -597,6 +620,5 @@ public class AutodetectProcessManager extends AbstractComponent {
awaitTermination.countDown();
}
}
}
}


@@ -14,7 +14,6 @@ import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.env.Environment;
import org.elasticsearch.xpack.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.config.MlFilter;
import org.elasticsearch.xpack.ml.job.persistence.JobProvider;
import org.elasticsearch.xpack.ml.job.process.NativeController;
import org.elasticsearch.xpack.ml.job.process.ProcessCtrl;
import org.elasticsearch.xpack.ml.job.process.ProcessPipes;
@@ -39,19 +38,16 @@ public class NativeAutodetectProcessFactory implements AutodetectProcessFactory
private static final Logger LOGGER = Loggers.getLogger(NativeAutodetectProcessFactory.class);
private static final NamedPipeHelper NAMED_PIPE_HELPER = new NamedPipeHelper();
private static final Duration PROCESS_STARTUP_TIMEOUT = Duration.ofSeconds(10);
public static final Duration PROCESS_STARTUP_TIMEOUT = Duration.ofSeconds(10);
private final Client client;
private final Environment env;
private final Settings settings;
private final JobProvider jobProvider;
private final NativeController nativeController;
public NativeAutodetectProcessFactory(JobProvider jobProvider, Environment env, Settings settings,
NativeController nativeController, Client client) {
public NativeAutodetectProcessFactory(Environment env, Settings settings, NativeController nativeController, Client client) {
this.env = Objects.requireNonNull(env);
this.settings = Objects.requireNonNull(settings);
this.jobProvider = Objects.requireNonNull(jobProvider);
this.nativeController = Objects.requireNonNull(nativeController);
this.client = client;
}


@@ -0,0 +1,195 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.ml.job.process.autodetect;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.xpack.ml.action.OpenJobAction.JobTask;
import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
/**
* The process context that encapsulates the job task, the process state and the autodetect communicator.
*/
final class ProcessContext {
private static final Logger LOGGER = Loggers.getLogger(ProcessContext.class);
private final ReentrantLock lock = new ReentrantLock();
private final JobTask jobTask;
private volatile AutodetectCommunicator autodetectCommunicator;
private volatile ProcessState state;
ProcessContext(JobTask jobTask) {
this.jobTask = jobTask;
this.state = new ProcessNotRunningState();
}
JobTask getJobTask() {
return jobTask;
}
AutodetectCommunicator getAutodetectCommunicator() {
return autodetectCommunicator;
}
private void setAutodetectCommunicator(AutodetectCommunicator autodetectCommunicator) {
this.autodetectCommunicator = autodetectCommunicator;
}
ProcessStateName getState() {
return state.getName();
}
private void setState(ProcessState state) {
this.state = state;
}
void tryLock() {
try {
if (lock.tryLock(NativeAutodetectProcessFactory.PROCESS_STARTUP_TIMEOUT.getSeconds(), TimeUnit.SECONDS) == false) {
LOGGER.error("Failed to acquire process lock for job [{}]", jobTask.getJobId());
throw ExceptionsHelper.serverError("Failed to acquire process lock for job [" + jobTask.getJobId() + "]");
}
} catch (InterruptedException e) {
throw new ElasticsearchException(e);
}
}
void unlock() {
lock.unlock();
}
void setRunning(AutodetectCommunicator autodetectCommunicator) {
assert lock.isHeldByCurrentThread();
state.setRunning(this, autodetectCommunicator);
}
void setDying() {
assert lock.isHeldByCurrentThread();
state.setDying(this);
}
KillBuilder newKillBuilder() {
return new ProcessContext.KillBuilder();
}
class KillBuilder {
private boolean awaitCompletion;
private boolean finish;
private boolean silent;
private String reason;
KillBuilder setAwaitCompletion(boolean awaitCompletion) {
this.awaitCompletion = awaitCompletion;
return this;
}
KillBuilder setFinish(boolean finish) {
this.finish = finish;
return this;
}
KillBuilder setSilent(boolean silent) {
this.silent = silent;
return this;
}
KillBuilder setReason(String reason) {
this.reason = reason;
return this;
}
void kill() {
if (autodetectCommunicator == null) {
return;
}
String jobId = jobTask.getJobId();
if (silent == false) {
String extraInfo = (state.getName() == ProcessStateName.DYING) ? " while closing" : "";
if (reason == null) {
LOGGER.info("Killing job [{}]{}", jobId, extraInfo);
} else {
LOGGER.info("Killing job [{}]{}, because [{}]", jobId, extraInfo, reason);
}
}
try {
autodetectCommunicator.killProcess(awaitCompletion, finish);
} catch (IOException e) {
LOGGER.error("[{}] Failed to kill autodetect process for job", jobId);
}
}
}
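`KillBuilder` is a small fluent builder: each setter returns `this`, so call sites read as one chained expression and `kill()` consumes the accumulated flags. An illustrative reduction of the pattern (the `describe()` method is a hypothetical stand-in for `kill()`, which in the real class forwards the flags to `AutodetectCommunicator.killProcess`):

```java
// Illustrative reduction of the KillBuilder fluent-builder pattern.
final class KillRequest {
    private boolean awaitCompletion;
    private boolean finish;
    private String reason;

    KillRequest setAwaitCompletion(boolean awaitCompletion) {
        this.awaitCompletion = awaitCompletion;
        return this; // returning `this` enables chaining
    }

    KillRequest setFinish(boolean finish) {
        this.finish = finish;
        return this;
    }

    KillRequest setReason(String reason) {
        this.reason = reason;
        return this;
    }

    // stands in for kill(): the terminal call that uses the collected flags
    String describe() {
        return "kill(await=" + awaitCompletion + ", finish=" + finish + ", reason=" + reason + ")";
    }
}
```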
enum ProcessStateName {
NOT_RUNNING, RUNNING, DYING
}
private interface ProcessState {
void setRunning(ProcessContext processContext, AutodetectCommunicator autodetectCommunicator);
void setDying(ProcessContext processContext);
ProcessStateName getName();
}
private static class ProcessNotRunningState implements ProcessState {
@Override
public void setRunning(ProcessContext processContext, AutodetectCommunicator autodetectCommunicator) {
processContext.setAutodetectCommunicator(autodetectCommunicator);
processContext.setState(new ProcessRunningState());
}
@Override
public void setDying(ProcessContext processContext) {
processContext.setState(new ProcessDyingState());
}
@Override
public ProcessStateName getName() {
return ProcessStateName.NOT_RUNNING;
}
}
private static class ProcessRunningState implements ProcessState {
@Override
public void setRunning(ProcessContext processContext, AutodetectCommunicator autodetectCommunicator) {
LOGGER.debug("Process set to [running] while it was already in that state");
}
@Override
public void setDying(ProcessContext processContext) {
processContext.setState(new ProcessDyingState());
}
@Override
public ProcessStateName getName() {
return ProcessStateName.RUNNING;
}
}
private static class ProcessDyingState implements ProcessState {
@Override
public void setRunning(ProcessContext processContext, AutodetectCommunicator autodetectCommunicator) {
LOGGER.debug("Process set to [running] while it was in [dying]");
}
@Override
public void setDying(ProcessContext processContext) {
LOGGER.debug("Process set to [dying] while it was already in that state");
}
@Override
public ProcessStateName getName() {
return ProcessStateName.DYING;
}
}
}
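The three `ProcessState` implementations above form a small state machine: DYING is absorbing, and an out-of-order `setRunning()` is ignored rather than treated as an error. A condensed sketch with the per-state objects collapsed into plain checks:

```java
// Condensed sketch of the ProcessContext state machine above.
final class ProcessStateMachine {
    enum State { NOT_RUNNING, RUNNING, DYING }

    private State state = State.NOT_RUNNING;

    void setRunning() {
        if (state == State.NOT_RUNNING) {
            state = State.RUNNING; // the only legal transition into RUNNING
        }
        // RUNNING and DYING ignore the call (logged at debug in the diff)
    }

    void setDying() {
        state = State.DYING; // allowed from every state; DYING is absorbing
    }

    State state() {
        return state;
    }
}
```

Keeping DYING absorbing is what lets a job be killed while it is closing: once a context is marked dying, a late `setRunning()` from a racing start cannot resurrect it.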


@@ -14,8 +14,10 @@ import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.inject.util.Providers;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.plugins.ActionPlugin;
@@ -55,6 +57,7 @@ import java.util.function.Supplier;
import static java.util.Collections.emptyList;
import static java.util.Collections.singletonList;
import static org.elasticsearch.common.settings.Setting.timeSetting;
/**
* This class activates/deactivates the monitoring modules depending on whether we're running a node client, a transport client or a tribe client:
@@ -66,6 +69,27 @@ public class Monitoring implements ActionPlugin {
public static final String NAME = "monitoring";
/**
* The minimum amount of time allowed for the history duration.
*/
public static final TimeValue HISTORY_DURATION_MINIMUM = TimeValue.timeValueHours(24);
/**
* The default retention duration of the monitoring history data.
* <p>
* Expected values:
* <ul>
* <li>Default: 7 days</li>
* <li>Minimum: 1 day</li>
* </ul>
*
* @see #HISTORY_DURATION_MINIMUM
*/
public static final Setting<TimeValue> HISTORY_DURATION = timeSetting("xpack.monitoring.history.duration",
TimeValue.timeValueHours(7 * 24), // default value (7 days)
HISTORY_DURATION_MINIMUM, // minimum value
Setting.Property.Dynamic, Setting.Property.NodeScope);
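The minimum bound is enforced by the setting itself at parse time (the min-value overload of `Setting.timeSetting`), so an out-of-range value is rejected before it ever reaches the cleaner. A plain-Java sketch of that validation, with illustrative names:

```java
import java.time.Duration;

// Plain-Java sketch of the minimum-bound validation done by timeSetting above.
final class HistoryDuration {
    static final Duration MINIMUM = Duration.ofHours(24);     // 1 day
    static final Duration DEFAULT = Duration.ofHours(7 * 24); // 7 days

    static Duration validate(Duration value) {
        if (value.compareTo(MINIMUM) < 0) {
            throw new IllegalArgumentException("history duration must be >= " + MINIMUM);
        }
        return value;
    }
}
```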
private final Settings settings;
private final XPackLicenseState licenseState;
private final boolean enabled;
@@ -106,7 +130,6 @@ public class Monitoring implements ActionPlugin {
}
final ClusterSettings clusterSettings = clusterService.getClusterSettings();
final MonitoringSettings monitoringSettings = new MonitoringSettings(settings, clusterSettings);
final CleanerService cleanerService = new CleanerService(settings, clusterSettings, threadPool, licenseState);
final SSLService dynamicSSLService = sslService.createDynamicSSLService();
@@ -116,16 +139,16 @@ public class Monitoring implements ActionPlugin {
final Exporters exporters = new Exporters(settings, exporterFactories, clusterService, licenseState, threadPool.getThreadContext());
Set<Collector> collectors = new HashSet<>();
collectors.add(new IndexStatsCollector(settings, clusterService, monitoringSettings, licenseState, client));
collectors.add(new ClusterStatsCollector(settings, clusterService, monitoringSettings, licenseState, client, licenseService));
collectors.add(new ShardsCollector(settings, clusterService, monitoringSettings, licenseState));
collectors.add(new NodeStatsCollector(settings, clusterService, monitoringSettings, licenseState, client));
collectors.add(new IndexRecoveryCollector(settings, clusterService, monitoringSettings, licenseState, client));
collectors.add(new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client));
collectors.add(new IndexStatsCollector(settings, clusterService, licenseState, client));
collectors.add(new ClusterStatsCollector(settings, clusterService, licenseState, client, licenseService));
collectors.add(new ShardsCollector(settings, clusterService, licenseState));
collectors.add(new NodeStatsCollector(settings, clusterService, licenseState, client));
collectors.add(new IndexRecoveryCollector(settings, clusterService, licenseState, client));
collectors.add(new JobStatsCollector(settings, clusterService, licenseState, client));
final MonitoringService monitoringService = new MonitoringService(settings, clusterSettings, threadPool, collectors, exporters);
return Arrays.asList(monitoringService, monitoringSettings, exporters, cleanerService);
return Arrays.asList(monitoringService, exporters, cleanerService);
}
@Override
@@ -145,4 +168,25 @@ public class Monitoring implements ActionPlugin {
}
return singletonList(new RestMonitoringBulkAction(settings, restController));
}
public List<Setting<?>> getSettings() {
return Collections.unmodifiableList(
Arrays.asList(HISTORY_DURATION,
MonitoringService.INTERVAL,
Exporters.EXPORTERS_SETTINGS,
Collector.INDICES,
ClusterStatsCollector.CLUSTER_STATS_TIMEOUT,
IndexRecoveryCollector.INDEX_RECOVERY_TIMEOUT,
IndexRecoveryCollector.INDEX_RECOVERY_ACTIVE_ONLY,
IndexStatsCollector.INDEX_STATS_TIMEOUT,
JobStatsCollector.JOB_STATS_TIMEOUT,
NodeStatsCollector.NODE_STATS_TIMEOUT)
);
}
public List<String> getSettingsFilter() {
final String exportersKey = Exporters.EXPORTERS_SETTINGS.getKey();
return Collections.unmodifiableList(Arrays.asList(exportersKey + "*.auth.*", exportersKey + "*.ssl.*"));
}
}


@@ -10,6 +10,7 @@ import org.apache.logging.log4j.util.Supplier;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
@@ -36,6 +37,24 @@ import java.util.concurrent.atomic.AtomicBoolean;
*/
public class MonitoringService extends AbstractLifecycleComponent {
/**
* Minimum value for sampling interval (1 second)
*/
static final TimeValue MIN_INTERVAL = TimeValue.timeValueSeconds(1L);
/**
* Sampling interval between two collections (default to 10s)
*/
public static final Setting<TimeValue> INTERVAL = new Setting<>("xpack.monitoring.collection.interval", "10s",
(s) -> {
TimeValue value = TimeValue.parseTimeValue(s, null, "xpack.monitoring.collection.interval");
if (TimeValue.MINUS_ONE.equals(value) || value.millis() >= MIN_INTERVAL.millis()) {
return value;
}
throw new IllegalArgumentException("Failed to parse monitoring interval [" + s + "], value must be >= " + MIN_INTERVAL);
},
Setting.Property.Dynamic, Setting.Property.NodeScope);
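The custom parser above accepts either the sentinel `-1` (collection disabled) or any value of at least `MIN_INTERVAL`, and rejects everything else when the setting is parsed rather than when collection runs. A plain-Java sketch of that rule:

```java
// Plain-Java sketch of the interval validation in the INTERVAL setting above.
final class IntervalValidator {
    static final long MIN_INTERVAL_MILLIS = 1_000L; // 1 second

    static long validate(long millis) {
        if (millis == -1L || millis >= MIN_INTERVAL_MILLIS) {
            return millis; // -1 means "disabled"; anything else must be >= 1s
        }
        throw new IllegalArgumentException(
                "monitoring interval must be -1 or >= " + MIN_INTERVAL_MILLIS + "ms, got " + millis);
    }
}
```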
/** State of the monitoring service, either started or stopped **/
private final AtomicBoolean started = new AtomicBoolean(false);
@@ -55,8 +74,8 @@ public class MonitoringService extends AbstractLifecycleComponent {
this.threadPool = Objects.requireNonNull(threadPool);
this.collectors = Objects.requireNonNull(collectors);
this.exporters = Objects.requireNonNull(exporters);
this.interval = MonitoringSettings.INTERVAL.get(settings);
clusterSettings.addSettingsUpdateConsumer(MonitoringSettings.INTERVAL, this::setInterval);
this.interval = INTERVAL.get(settings);
clusterSettings.addSettingsUpdateConsumer(INTERVAL, this::setInterval);
}
void setInterval(TimeValue interval) {
@@ -71,7 +90,7 @@ public class MonitoringService extends AbstractLifecycleComponent {
boolean isMonitoringActive() {
return isStarted()
&& interval != null
&& interval.millis() >= MonitoringSettings.MIN_INTERVAL.millis();
&& interval.millis() >= MIN_INTERVAL.millis();
}
private String threadPoolName() {


@@ -1,258 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.XPackPlugin;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;
import static org.elasticsearch.common.settings.Setting.Property;
import static org.elasticsearch.common.settings.Setting.boolSetting;
import static org.elasticsearch.common.settings.Setting.groupSetting;
import static org.elasticsearch.common.settings.Setting.listSetting;
import static org.elasticsearch.common.settings.Setting.timeSetting;
// TODO Remove this class and put the settings in Monitoring class
public class MonitoringSettings extends AbstractComponent {
public static final String HISTORY_DURATION_SETTING_NAME = "history.duration";
/**
* The minimum amount of time allowed for the history duration.
*/
public static final TimeValue HISTORY_DURATION_MINIMUM = TimeValue.timeValueHours(24);
/**
* Minimum value for sampling interval (1 second)
*/
static final TimeValue MIN_INTERVAL = TimeValue.timeValueSeconds(1L);
/**
* Sampling interval between two collections (default to 10s)
*/
public static final Setting<TimeValue> INTERVAL = new Setting<>(collectionKey("interval"), "10s",
(s) -> {
TimeValue value = TimeValue.parseTimeValue(s, null, collectionKey("interval"));
if (TimeValue.MINUS_ONE.equals(value) || value.millis() >= MIN_INTERVAL.millis()) {
return value;
}
throw new IllegalArgumentException("Failed to parse monitoring interval [" + s + "], value must be >= " + MIN_INTERVAL);
},
Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting index statistics (default to 10s)
*/
public static final Setting<TimeValue> INDEX_STATS_TIMEOUT =
timeSetting(collectionKey("index.stats.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* List of indices names whose stats will be exported (default to all indices)
*/
public static final Setting<List<String>> INDICES =
listSetting(collectionKey("indices"), Collections.emptyList(), Function.identity(), Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting the cluster state (default to 10s)
*/
public static final Setting<TimeValue> CLUSTER_STATE_TIMEOUT =
timeSetting(collectionKey("cluster.state.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting the recovery information (default to 10s)
*/
public static final Setting<TimeValue> CLUSTER_STATS_TIMEOUT =
timeSetting(collectionKey("cluster.stats.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting ML job statistics (default to 10s)
*/
public static final Setting<TimeValue> JOB_STATS_TIMEOUT =
timeSetting(collectionKey("ml.job.stats.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting the nodes statistics (default to 10s)
*/
public static final Setting<TimeValue> NODE_STATS_TIMEOUT =
timeSetting(collectionKey("node.stats.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* Timeout value when collecting the recovery information (default to 10s)
*/
public static final Setting<TimeValue> INDEX_RECOVERY_TIMEOUT =
timeSetting(collectionKey("index.recovery.timeout"), TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
/**
* Flag to indicate if only active recoveries should be collected (default to false: all recoveries are collected)
*/
public static final Setting<Boolean> INDEX_RECOVERY_ACTIVE_ONLY =
boolSetting(collectionKey("index.recovery.active_only"), false, Property.Dynamic, Property.NodeScope);
/**
* The default retention duration of the monitoring history data.
* <p>
* Expected values:
* <ul>
* <li>Default: 7 days</li>
* <li>Minimum: 1 day</li>
* </ul>
*
* @see #HISTORY_DURATION_MINIMUM
*/
public static final Setting<TimeValue> HISTORY_DURATION =
timeSetting(key(HISTORY_DURATION_SETTING_NAME),
TimeValue.timeValueHours(7 * 24), // default value (7 days)
HISTORY_DURATION_MINIMUM, // minimum value
Property.Dynamic, Property.NodeScope);
/**
* Settings/Options per configured exporter
*/
public static final Setting<Settings> EXPORTERS_SETTINGS =
groupSetting(key("exporters."), Property.Dynamic, Property.NodeScope);
public static List<Setting<?>> getSettings() {
return Arrays.asList(INDICES,
INTERVAL,
INDEX_RECOVERY_TIMEOUT,
INDEX_STATS_TIMEOUT,
INDEX_RECOVERY_ACTIVE_ONLY,
CLUSTER_STATE_TIMEOUT,
CLUSTER_STATS_TIMEOUT,
JOB_STATS_TIMEOUT,
NODE_STATS_TIMEOUT,
HISTORY_DURATION,
EXPORTERS_SETTINGS);
}
public static List<String> getSettingsFilter() {
return Arrays.asList(key("exporters.*.auth.*"), key("exporters.*.ssl.*"));
}
private volatile TimeValue indexStatsTimeout;
private volatile TimeValue clusterStateTimeout;
private volatile TimeValue clusterStatsTimeout;
private volatile TimeValue recoveryTimeout;
private volatile TimeValue jobStatsTimeout;
private volatile TimeValue nodeStatsTimeout;
private volatile boolean recoveryActiveOnly;
private volatile String[] indices;
public MonitoringSettings(Settings settings, ClusterSettings clusterSettings) {
super(settings);
setIndexStatsTimeout(INDEX_STATS_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(INDEX_STATS_TIMEOUT, this::setIndexStatsTimeout);
setIndices(INDICES.get(settings));
clusterSettings.addSettingsUpdateConsumer(INDICES, this::setIndices);
setClusterStateTimeout(CLUSTER_STATE_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(CLUSTER_STATE_TIMEOUT, this::setClusterStateTimeout);
setClusterStatsTimeout(CLUSTER_STATS_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(CLUSTER_STATS_TIMEOUT, this::setClusterStatsTimeout);
setJobStatsTimeout(JOB_STATS_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(JOB_STATS_TIMEOUT, this::setJobStatsTimeout);
setNodeStatsTimeout(NODE_STATS_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(NODE_STATS_TIMEOUT, this::setNodeStatsTimeout);
setRecoveryTimeout(INDEX_RECOVERY_TIMEOUT.get(settings));
clusterSettings.addSettingsUpdateConsumer(INDEX_RECOVERY_TIMEOUT, this::setRecoveryTimeout);
setRecoveryActiveOnly(INDEX_RECOVERY_ACTIVE_ONLY.get(settings));
clusterSettings.addSettingsUpdateConsumer(INDEX_RECOVERY_ACTIVE_ONLY, this::setRecoveryActiveOnly);
}
public TimeValue indexStatsTimeout() {
return indexStatsTimeout;
}
public String[] indices() {
return indices;
}
public TimeValue clusterStateTimeout() {
return clusterStateTimeout;
}
public TimeValue clusterStatsTimeout() {
return clusterStatsTimeout;
}
public TimeValue jobStatsTimeout() {
return jobStatsTimeout;
}
public TimeValue nodeStatsTimeout() {
return nodeStatsTimeout;
}
public TimeValue recoveryTimeout() {
return recoveryTimeout;
}
public boolean recoveryActiveOnly() {
return recoveryActiveOnly;
}
private void setIndexStatsTimeout(TimeValue indexStatsTimeout) {
this.indexStatsTimeout = indexStatsTimeout;
}
private void setClusterStateTimeout(TimeValue clusterStateTimeout) {
this.clusterStateTimeout = clusterStateTimeout;
}
private void setClusterStatsTimeout(TimeValue clusterStatsTimeout) {
this.clusterStatsTimeout = clusterStatsTimeout;
}
private void setJobStatsTimeout(TimeValue jobStatsTimeout) {
this.jobStatsTimeout = jobStatsTimeout;
}
public void setNodeStatsTimeout(TimeValue nodeStatsTimeout) {
this.nodeStatsTimeout = nodeStatsTimeout;
}
private void setRecoveryTimeout(TimeValue recoveryTimeout) {
this.recoveryTimeout = recoveryTimeout;
}
private void setRecoveryActiveOnly(boolean recoveryActiveOnly) {
this.recoveryActiveOnly = recoveryActiveOnly;
}
private void setIndices(List<String> indices) {
this.indices = indices.toArray(new String[0]);
}
/**
* Prefix the {@code key} with the Monitoring prefix and "collection." .
*
* @param key The key to prefix
* @return The key prefixed by the product prefixes + "collection." .
* @see #key(String)
*/
static String collectionKey(String key) {
return key("collection." + key);
}
/**
* Prefix the {@code key} with the Monitoring prefix.
*
* @param key The key to prefix
* @return The key prefixed by the product prefixes.
*/
static String key(String key) {
return XPackPlugin.featureSettingPrefix(Monitoring.NAME) + "." + key;
}
}


@@ -5,10 +5,6 @@
*/
package org.elasticsearch.xpack.monitoring.cleaner;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ScheduledFuture;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
@@ -18,10 +14,14 @@ import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.common.util.concurrent.FutureUtils;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.joda.time.DateTime;
import org.joda.time.chrono.ISOChronology;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ScheduledFuture;
/**
* {@code CleanerService} takes care of deleting old monitoring indices.
*/
@@ -41,11 +41,11 @@ public class CleanerService extends AbstractLifecycleComponent {
this.licenseState = licenseState;
this.threadPool = threadPool;
this.executionScheduler = executionScheduler;
this.globalRetention = MonitoringSettings.HISTORY_DURATION.get(settings);
this.globalRetention = Monitoring.HISTORY_DURATION.get(settings);
this.runnable = new IndicesCleaner();
// the validation is performed by the setting's object itself
clusterSettings.addSettingsUpdateConsumer(MonitoringSettings.HISTORY_DURATION, this::setGlobalRetention);
clusterSettings.addSettingsUpdateConsumer(Monitoring.HISTORY_DURATION, this::setGlobalRetention);
}
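The constructor above follows the standard dynamic-settings pattern: read the setting once for its initial value, then register a consumer so later cluster-settings updates are pushed to the component. A sketch with a tiny listener registry standing in for `ClusterSettings` (all names illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Tiny stand-in for ClusterSettings: a value plus registered update consumers.
final class DynamicSetting<T> {
    private T value;
    private final List<Consumer<T>> consumers = new ArrayList<>();

    DynamicSetting(T initial) {
        this.value = initial;
    }

    T get() {
        return value;
    }

    void addUpdateConsumer(Consumer<T> consumer) {
        consumers.add(consumer);
    }

    void update(T newValue) {
        value = newValue;
        consumers.forEach(c -> c.accept(newValue)); // push to registered components
    }
}
```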
public CleanerService(Settings settings, ClusterSettings clusterSettings, ThreadPool threadPool, XPackLicenseState licenseState) {
@@ -91,7 +91,7 @@ public class CleanerService extends AbstractLifecycleComponent {
return globalRetention;
}
else {
return MonitoringSettings.HISTORY_DURATION.getDefault(Settings.EMPTY);
return Monitoring.HISTORY_DURATION.getDefault(Settings.EMPTY);
}
}
@@ -106,8 +106,7 @@ public class CleanerService extends AbstractLifecycleComponent {
public void setGlobalRetention(TimeValue globalRetention) {
// notify the user that their setting will be ignored until they get the right license
if (licenseState.isUpdateRetentionAllowed() == false) {
logger.warn("[{}] setting will be ignored until an appropriate license is applied",
MonitoringSettings.HISTORY_DURATION.getKey());
logger.warn("[{}] setting will be ignored until an appropriate license is applied", Monitoring.HISTORY_DURATION.getKey());
}
this.globalRetention = globalRetention;


@@ -10,32 +10,50 @@ import org.apache.logging.log4j.util.Supplier;
import org.elasticsearch.ElasticsearchTimeoutException;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
import java.util.Collection;
import java.util.List;
import java.util.Objects;
import java.util.function.Function;
import static java.util.Collections.emptyList;
import static org.elasticsearch.common.settings.Setting.Property;
import static org.elasticsearch.common.settings.Setting.listSetting;
import static org.elasticsearch.common.settings.Setting.timeSetting;
/**
* {@link Collector} are used to collect monitoring data about the cluster, nodes and indices.
*/
public abstract class Collector extends AbstractComponent {
/**
* List of indices names whose stats will be exported (default to all indices)
*/
public static final Setting<List<String>> INDICES =
listSetting(collectionSetting("indices"), emptyList(), Function.identity(), Property.Dynamic, Property.NodeScope);
private final String name;
private final Setting<TimeValue> collectionTimeoutSetting;
protected final ClusterService clusterService;
protected final MonitoringSettings monitoringSettings;
protected final XPackLicenseState licenseState;
public Collector(Settings settings, String name, ClusterService clusterService,
MonitoringSettings monitoringSettings, XPackLicenseState licenseState) {
public Collector(final Settings settings, final String name, final ClusterService clusterService,
final Setting<TimeValue> timeoutSetting, final XPackLicenseState licenseState) {
super(settings);
this.name = name;
this.clusterService = clusterService;
this.monitoringSettings = monitoringSettings;
this.collectionTimeoutSetting = timeoutSetting;
this.licenseState = licenseState;
}
@@ -92,6 +110,33 @@ public abstract class Collector extends AbstractComponent {
return System.currentTimeMillis();
}
/**
* Returns the value of the collection timeout configured for the current {@link Collector}.
*
* @return the collection timeout, or {@code null} if the collector has no timeout defined.
*/
public TimeValue getCollectionTimeout() {
if (collectionTimeoutSetting == null) {
return null;
}
return clusterService.getClusterSettings().get(collectionTimeoutSetting);
}
/**
* Returns the names of indices Monitoring collects data from.
*
* @return an array of index names
*/
public String[] getCollectionIndices() {
final List<String> indices = clusterService.getClusterSettings().get(INDICES);
assert indices != null;
if (indices.isEmpty()) {
return Strings.EMPTY_ARRAY;
} else {
return indices.toArray(new String[indices.size()]);
}
}
/**
* Creates a {@link MonitoringDoc.Node} from a {@link DiscoveryNode} and a timestamp, copying over the
* required information.
@@ -112,4 +157,14 @@ public abstract class Collector extends AbstractComponent {
node.getName(),
timestamp);
}
protected static String collectionSetting(final String settingName) {
Objects.requireNonNull(settingName, "setting name must not be null");
return XPackPlugin.featureSettingPrefix(Monitoring.NAME) + ".collection." + settingName;
}
protected static Setting<TimeValue> collectionTimeoutSetting(final String settingName) {
String name = collectionSetting(settingName);
return timeSetting(name, TimeValue.timeValueSeconds(10), Property.Dynamic, Property.NodeScope);
}
}
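The refactor above replaces the shared `MonitoringSettings` object with per-collector `Setting` constants whose keys are derived by `collectionSetting`. A minimal standalone sketch of that key-derivation scheme, assuming `XPackPlugin.featureSettingPrefix(Monitoring.NAME)` resolves to `xpack.monitoring` (inferred from the setting keys used elsewhere in this diff, not confirmed here):

```java
// Standalone sketch of Collector's setting-key derivation; the prefix value
// is an assumption inferred from the keys used elsewhere in this change.
class CollectionSettingKeys {

    // stands in for XPackPlugin.featureSettingPrefix(Monitoring.NAME)
    static final String FEATURE_PREFIX = "xpack.monitoring";

    // mirrors Collector#collectionSetting
    static String collectionSetting(String settingName) {
        if (settingName == null) {
            throw new NullPointerException("setting name must not be null");
        }
        return FEATURE_PREFIX + ".collection." + settingName;
    }
}
```

Under that assumption, `collectionSetting("index.stats.timeout")` would yield `xpack.monitoring.collection.index.stats.timeout` as the key behind a constant such as `INDEX_STATS_TIMEOUT`.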


@@ -14,17 +14,17 @@ import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.License;
import org.elasticsearch.license.LicenseService;
import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.XPackFeatureSet;
import org.elasticsearch.xpack.action.XPackUsageRequestBuilder;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
import org.elasticsearch.xpack.security.InternalClient;
import java.util.Collection;
import java.util.Collections;
@@ -42,16 +42,20 @@ import java.util.List;
*/
public class ClusterStatsCollector extends Collector {
/**
* Timeout value when collecting the cluster stats information (defaults to 10s)
*/
public static final Setting<TimeValue> CLUSTER_STATS_TIMEOUT = collectionTimeoutSetting("cluster.stats.timeout");
private final LicenseService licenseService;
private final Client client;
public ClusterStatsCollector(final Settings settings,
final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState,
final Client client,
final LicenseService licenseService) {
super(settings, ClusterStatsMonitoringDoc.TYPE, clusterService, monitoringSettings, licenseState);
super(settings, ClusterStatsMonitoringDoc.TYPE, clusterService, CLUSTER_STATS_TIMEOUT, licenseState);
this.client = client;
this.licenseService = licenseService;
}
@@ -65,8 +69,7 @@ public class ClusterStatsCollector extends Collector {
@Override
protected Collection<MonitoringDoc> doCollect(final MonitoringDoc.Node node) throws Exception {
final Supplier<ClusterStatsResponse> clusterStatsSupplier =
() -> client.admin().cluster().prepareClusterStats()
.get(monitoringSettings.clusterStatsTimeout());
() -> client.admin().cluster().prepareClusterStats().get(getCollectionTimeout());
final Supplier<List<XPackFeatureSet.Usage>> usageSupplier =
() -> new XPackUsageRequestBuilder(client).get().getUsages();


@@ -9,9 +9,10 @@ import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -21,6 +22,8 @@ import java.util.Collections;
import java.util.List;
import java.util.Objects;
import static org.elasticsearch.common.settings.Setting.boolSetting;
/**
* Collector for the Recovery API.
* <p>
@@ -29,18 +32,32 @@ import java.util.Objects;
*/
public class IndexRecoveryCollector extends Collector {
/**
* Timeout value when collecting the recovery information (defaults to 10s)
*/
public static final Setting<TimeValue> INDEX_RECOVERY_TIMEOUT = collectionTimeoutSetting("index.recovery.timeout");
/**
* Flag to indicate whether only active recoveries should be collected (defaults to false: all recoveries are collected)
*/
public static final Setting<Boolean> INDEX_RECOVERY_ACTIVE_ONLY =
boolSetting(collectionSetting("index.recovery.active_only"), false, Setting.Property.Dynamic, Setting.Property.NodeScope);
private final Client client;
public IndexRecoveryCollector(final Settings settings,
final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState,
final Client client) {
super(settings, IndexRecoveryMonitoringDoc.TYPE, clusterService, monitoringSettings, licenseState);
super(settings, IndexRecoveryMonitoringDoc.TYPE, clusterService, INDEX_RECOVERY_TIMEOUT, licenseState);
this.client = Objects.requireNonNull(client);
}
boolean getActiveRecoveriesOnly() {
return clusterService.getClusterSettings().get(INDEX_RECOVERY_ACTIVE_ONLY);
}
@Override
protected boolean shouldCollect() {
return super.shouldCollect() && isLocalNodeMaster();
@@ -50,10 +67,10 @@ public class IndexRecoveryCollector extends Collector {
protected Collection<MonitoringDoc> doCollect(final MonitoringDoc.Node node) throws Exception {
List<MonitoringDoc> results = new ArrayList<>(1);
RecoveryResponse recoveryResponse = client.admin().indices().prepareRecoveries()
.setIndices(monitoringSettings.indices())
.setIndices(getCollectionIndices())
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
.setActiveOnly(monitoringSettings.recoveryActiveOnly())
.get(monitoringSettings.recoveryTimeout());
.setActiveOnly(getActiveRecoveriesOnly())
.get(getCollectionTimeout());
if (recoveryResponse.hasRecoveries()) {
results.add(new IndexRecoveryMonitoringDoc(clusterUUID(), timestamp(), node, recoveryResponse));


@@ -10,9 +10,10 @@ import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -29,14 +30,18 @@ import java.util.List;
*/
public class IndexStatsCollector extends Collector {
/**
* Timeout value when collecting index statistics (defaults to 10s)
*/
public static final Setting<TimeValue> INDEX_STATS_TIMEOUT = collectionTimeoutSetting("index.stats.timeout");
private final Client client;
public IndexStatsCollector(final Settings settings,
final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState,
final Client client) {
super(settings, "index-stats", clusterService, monitoringSettings, licenseState);
super(settings, "index-stats", clusterService, INDEX_STATS_TIMEOUT, licenseState);
this.client = client;
}
@@ -49,7 +54,7 @@
protected Collection<MonitoringDoc> doCollect(final MonitoringDoc.Node node) throws Exception {
final List<MonitoringDoc> results = new ArrayList<>();
final IndicesStatsResponse indicesStats = client.admin().indices().prepareStats()
.setIndices(monitoringSettings.indices())
.setIndices(getCollectionIndices())
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
.clear()
.setDocs(true)
@@ -62,7 +67,7 @@
.setRefresh(true)
.setQueryCache(true)
.setRequestCache(true)
.get(monitoringSettings.indexStatsTimeout());
.get(getCollectionTimeout());
final long timestamp = timestamp();
final String clusterUuid = clusterUUID();


@@ -6,15 +6,15 @@
package org.elasticsearch.xpack.monitoring.collector.ml;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.XPackClient;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.ml.client.MachineLearningClient;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
import org.elasticsearch.xpack.security.InternalClient;
@@ -32,18 +32,21 @@ import java.util.stream.Collectors;
*/
public class JobStatsCollector extends Collector {
/**
* Timeout value when collecting ML job statistics (defaults to 10s)
*/
public static final Setting<TimeValue> JOB_STATS_TIMEOUT = collectionTimeoutSetting("ml.job.stats.timeout");
private final MachineLearningClient client;
public JobStatsCollector(final Settings settings, final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState, final InternalClient client) {
this(settings, clusterService, monitoringSettings, licenseState, new XPackClient(client).machineLearning());
this(settings, clusterService, licenseState, new XPackClient(client).machineLearning());
}
JobStatsCollector(final Settings settings, final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState, final MachineLearningClient client) {
super(settings, JobStatsMonitoringDoc.TYPE, clusterService, monitoringSettings, licenseState);
super(settings, JobStatsMonitoringDoc.TYPE, clusterService, JOB_STATS_TIMEOUT, licenseState);
this.client = client;
}
@@ -61,7 +64,7 @@
// fetch details about all jobs
final GetJobsStatsAction.Response jobs =
client.getJobsStats(new GetJobsStatsAction.Request(MetaData.ALL))
.actionGet(monitoringSettings.jobStatsTimeout());
.actionGet(getCollectionTimeout());
final long timestamp = timestamp();
final String clusterUuid = clusterUUID();


@@ -12,9 +12,10 @@ import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;
import org.elasticsearch.bootstrap.BootstrapInfo;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -30,6 +31,11 @@ import java.util.Objects;
*/
public class NodeStatsCollector extends Collector {
/**
* Timeout value when collecting node statistics (defaults to 10s)
*/
public static final Setting<TimeValue> NODE_STATS_TIMEOUT = collectionTimeoutSetting("node.stats.timeout");
private static final CommonStatsFlags FLAGS =
new CommonStatsFlags(CommonStatsFlags.Flag.Docs,
CommonStatsFlags.Flag.FieldData,
@@ -44,10 +50,10 @@ public class NodeStatsCollector extends Collector {
public NodeStatsCollector(final Settings settings,
final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState,
final Client client) {
super(settings, NodeStatsMonitoringDoc.TYPE, clusterService, monitoringSettings, licenseState);
super(settings, NodeStatsMonitoringDoc.TYPE, clusterService, NODE_STATS_TIMEOUT, licenseState);
this.client = Objects.requireNonNull(client);
}
@@ -67,7 +73,7 @@
request.threadPool(true);
request.fs(true);
final NodesStatsResponse response = client.admin().cluster().nodesStats(request).actionGet(monitoringSettings.nodeStatsTimeout());
final NodesStatsResponse response = client.admin().cluster().nodesStats(request).actionGet(getCollectionTimeout());
// if there's a failure, then we failed to work with the
// _local node (guaranteed a single exception)


@@ -13,7 +13,6 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.Collector;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -33,10 +32,9 @@ public class ShardsCollector extends Collector {
public ShardsCollector(final Settings settings,
final ClusterService clusterService,
final MonitoringSettings monitoringSettings,
final XPackLicenseState licenseState) {
super(settings, ShardMonitoringDoc.TYPE, clusterService, monitoringSettings, licenseState);
super(settings, ShardMonitoringDoc.TYPE, clusterService, null, licenseState);
}
@Override
@@ -48,7 +46,7 @@
protected Collection<MonitoringDoc> doCollect(final MonitoringDoc.Node node) throws Exception {
final List<MonitoringDoc> results = new ArrayList<>(1);
ClusterState clusterState = clusterService.state();
final ClusterState clusterState = clusterService.state();
if (clusterState != null) {
RoutingTable routingTable = clusterState.routingTable();
if (routingTable != null) {
@@ -58,8 +56,11 @@
final String stateUUID = clusterState.stateUUID();
final long timestamp = timestamp();
final String[] indices = getCollectionIndices();
final boolean isAllIndices = IndexNameExpressionResolver.isAllIndices(Arrays.asList(indices));
for (ShardRouting shard : shards) {
if (match(shard.getIndexName())) {
if (isAllIndices || Regex.simpleMatch(indices, shard.getIndexName())) {
MonitoringDoc.Node shardNode = null;
if (shard.assignedToNode()) {
// If the shard is assigned to a node, the shard monitoring document refers to this node
@@ -73,9 +74,4 @@
}
return Collections.unmodifiableCollection(results);
}
private boolean match(final String indexName) {
final String[] indices = monitoringSettings.indices();
return IndexNameExpressionResolver.isAllIndices(Arrays.asList(indices)) || Regex.simpleMatch(indices, indexName);
}
}
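The change above inlines the old `match` helper: a shard's index name is kept when the configured indices list means "all indices" or when it matches one of the configured wildcard patterns via `Regex.simpleMatch`. A standalone sketch of that filtering logic (a minimal re-implementation of the `*`-only wildcard matching; it does not handle consecutive `*` characters, and the `_all` handling is a simplified stand-in for `IndexNameExpressionResolver.isAllIndices`):

```java
// Sketch of the shard filtering that replaced ShardsCollector#match.
class ShardIndexFilter {

    // minimal re-implementation of simple '*'-wildcard matching
    static boolean simpleMatch(String pattern, String str) {
        int firstIndex = pattern.indexOf('*');
        if (firstIndex == -1) {
            return pattern.equals(str);
        }
        if (firstIndex == 0) {
            if (pattern.length() == 1) {
                return true; // lone "*" matches anything
            }
            int nextIndex = pattern.indexOf('*', 1);
            if (nextIndex == -1) {
                return str.endsWith(pattern.substring(1));
            }
            String part = pattern.substring(1, nextIndex);
            // try every occurrence of the literal part between wildcards
            for (int i = str.indexOf(part); i != -1; i = str.indexOf(part, i + 1)) {
                if (simpleMatch(pattern.substring(nextIndex), str.substring(i + part.length()))) {
                    return true;
                }
            }
            return false;
        }
        // literal prefix before the first '*'
        return str.length() >= firstIndex
                && pattern.substring(0, firstIndex).equals(str.substring(0, firstIndex))
                && simpleMatch(pattern.substring(firstIndex), str.substring(firstIndex));
    }

    // empty list or a single "_all" entry means "collect every index"
    static boolean shouldCollect(String[] indices, String indexName) {
        if (indices.length == 0 || (indices.length == 1 && "_all".equals(indices[0]))) {
            return true;
        }
        for (String pattern : indices) {
            if (simpleMatch(pattern, indexName)) {
                return true;
            }
        }
        return false;
    }
}
```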


@@ -9,7 +9,6 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsException;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
@@ -73,11 +72,11 @@ public abstract class Exporter implements AutoCloseable {
protected abstract void doClose();
protected static String settingFQN(final Config config) {
return MonitoringSettings.EXPORTERS_SETTINGS.getKey() + config.name;
return Exporters.EXPORTERS_SETTINGS.getKey() + config.name;
}
protected static String settingFQN(final Config config, final String setting) {
return MonitoringSettings.EXPORTERS_SETTINGS.getKey() + config.name + "." + setting;
return Exporters.EXPORTERS_SETTINGS.getKey() + config.name + "." + setting;
}
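`settingFQN` now builds fully-qualified setting names from the `Exporters` group-setting prefix rather than from `MonitoringSettings`. A standalone sketch of the naming scheme, taking the prefix from the `groupSetting("xpack.monitoring.exporters.", ...)` declaration that this diff adds to `Exporters`:

```java
// Sketch of per-exporter setting-name composition under the
// "xpack.monitoring.exporters." group prefix.
class ExporterSettingNames {

    // stands in for Exporters.EXPORTERS_SETTINGS.getKey()
    static final String GROUP_PREFIX = "xpack.monitoring.exporters.";

    // mirrors Exporter#settingFQN(Config)
    static String settingFQN(String exporterName) {
        return GROUP_PREFIX + exporterName;
    }

    // mirrors Exporter#settingFQN(Config, String)
    static String settingFQN(String exporterName, String setting) {
        return GROUP_PREFIX + exporterName + "." + setting;
    }
}
```

For example, an exporter named `default_local` is configured via keys like `xpack.monitoring.exporters.default_local.type`, matching the node settings used in the test changes later in this diff.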
protected static DateTimeFormatter dateTimeFormatter(final Config config) {


@@ -13,11 +13,11 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.component.Lifecycle;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsException;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.exporter.local.LocalExporter;
import java.util.ArrayList;
@@ -32,9 +32,16 @@ import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import static java.util.Collections.emptyMap;
import static org.elasticsearch.common.settings.Setting.groupSetting;
public class Exporters extends AbstractLifecycleComponent implements Iterable<Exporter> {
/**
* Settings/Options per configured exporter
*/
public static final Setting<Settings> EXPORTERS_SETTINGS =
groupSetting("xpack.monitoring.exporters.", Setting.Property.Dynamic, Setting.Property.NodeScope);
private final Map<String, Exporter.Factory> factories;
private final AtomicReference<Map<String, Exporter>> exporters;
private final ClusterService clusterService;
@@ -52,7 +59,7 @@ public class Exporters extends AbstractLifecycleComponent implements Iterable<Ex
this.clusterService = Objects.requireNonNull(clusterService);
this.licenseState = Objects.requireNonNull(licenseState);
clusterService.getClusterSettings().addSettingsUpdateConsumer(MonitoringSettings.EXPORTERS_SETTINGS, this::setExportersSetting);
clusterService.getClusterSettings().addSettingsUpdateConsumer(EXPORTERS_SETTINGS, this::setExportersSetting);
}
private void setExportersSetting(Settings exportersSetting) {
@@ -67,7 +74,7 @@ public class Exporters extends AbstractLifecycleComponent implements Iterable<Ex
@Override
protected void doStart() {
exporters.set(initExporters(MonitoringSettings.EXPORTERS_SETTINGS.get(settings)));
exporters.set(initExporters(EXPORTERS_SETTINGS.get(settings)));
}
@Override


@@ -198,9 +198,7 @@ public class AutodetectCommunicatorTests extends ESTestCase {
((ActionListener<Boolean>) invocation.getArguments()[0]).onResponse(true);
return null;
}).when(dataCountsReporter).finishReporting(any());
JobTask jobTask = mock(JobTask.class);
when(jobTask.getJobId()).thenReturn("foo");
return new AutodetectCommunicator(createJobDetails(), jobTask, autodetectProcess, stateStreamer,
return new AutodetectCommunicator(createJobDetails(), autodetectProcess, stateStreamer,
dataCountsReporter, autoDetectResultProcessor, finishHandler,
new NamedXContentRegistry(Collections.emptyList()), executorService);
}


@@ -459,7 +459,6 @@ public class AutodetectProcessManagerTests extends ESTestCase {
JobTask jobTask = mock(JobTask.class);
when(jobTask.getJobId()).thenReturn("foo");
assertFalse(manager.jobHasActiveAutodetectProcess(jobTask));
when(communicator.getJobTask()).thenReturn(jobTask);
manager.openJob(jobTask, e -> {});
manager.processData(jobTask, createInputStream(""), randomFrom(XContentType.values()),


@@ -8,38 +8,29 @@ package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESTestCase;
import org.junit.Rule;
import org.junit.rules.ExpectedException;
/**
* Tests {@link MonitoringSettings}
*/
public class MonitoringSettingsTests extends ESTestCase {
@Rule
public ExpectedException expectedException = ExpectedException.none();
public class MonitoringHistoryDurationSettingsTests extends ESTestCase {
public void testHistoryDurationDefaults7Days() {
TimeValue sevenDays = TimeValue.timeValueHours(7 * 24);
// 7 days
assertEquals(sevenDays, MonitoringSettings.HISTORY_DURATION.get(Settings.EMPTY));
assertEquals(sevenDays, Monitoring.HISTORY_DURATION.get(Settings.EMPTY));
// Note: this verifies the semantics, because it is taken for granted that this never returns null!
assertEquals(sevenDays, MonitoringSettings.HISTORY_DURATION.get(buildSettings(MonitoringSettings.HISTORY_DURATION.getKey(), null)));
assertEquals(sevenDays, Monitoring.HISTORY_DURATION.get(buildSettings(Monitoring.HISTORY_DURATION.getKey(), null)));
}
public void testHistoryDurationMinimum24Hours() {
// hit the minimum
assertEquals(MonitoringSettings.HISTORY_DURATION_MINIMUM,
MonitoringSettings.HISTORY_DURATION.get(buildSettings(MonitoringSettings.HISTORY_DURATION.getKey(), "24h")));
assertEquals(Monitoring.HISTORY_DURATION_MINIMUM,
Monitoring.HISTORY_DURATION.get(buildSettings(Monitoring.HISTORY_DURATION.getKey(), "24h")));
}
public void testHistoryDurationMinimum24HoursBlocksLower() {
expectedException.expect(IllegalArgumentException.class);
// 1 ms early!
String oneSecondEarly = (MonitoringSettings.HISTORY_DURATION_MINIMUM.millis() - 1) + "ms";
MonitoringSettings.HISTORY_DURATION.get(buildSettings(MonitoringSettings.HISTORY_DURATION.getKey(), oneSecondEarly));
final String oneSecondEarly = (Monitoring.HISTORY_DURATION_MINIMUM.millis() - 1) + "ms";
expectThrows(IllegalArgumentException.class,
() -> Monitoring.HISTORY_DURATION.get(buildSettings(Monitoring.HISTORY_DURATION.getKey(), oneSecondEarly)));
}
private Settings buildSettings(String key, String value) {


@@ -86,7 +86,7 @@ public class MonitoringPluginTests extends MonitoringIntegTestCase {
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(MonitoringSettings.INTERVAL.getKey(), "-1")
.put(MonitoringService.INTERVAL.getKey(), "-1")
.build();
}


@@ -33,18 +33,20 @@ import static org.mockito.Mockito.when;
public class MonitoringServiceTests extends ESTestCase {
TestThreadPool threadPool;
MonitoringService monitoringService;
XPackLicenseState licenseState = mock(XPackLicenseState.class);
ClusterService clusterService;
ClusterSettings clusterSettings;
private TestThreadPool threadPool;
private MonitoringService monitoringService;
private XPackLicenseState licenseState = mock(XPackLicenseState.class);
private ClusterService clusterService;
private ClusterSettings clusterSettings;
@Before
public void setUp() throws Exception {
super.setUp();
threadPool = new TestThreadPool(getTestName());
clusterService = mock(ClusterService.class);
clusterSettings = new ClusterSettings(Settings.EMPTY, new HashSet<>(MonitoringSettings.getSettings()));
final Monitoring monitoring = new Monitoring(Settings.EMPTY, licenseState);
clusterSettings = new ClusterSettings(Settings.EMPTY, new HashSet<>(monitoring.getSettings()));
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
}
@@ -77,7 +79,7 @@
}
public void testInterval() throws Exception {
Settings settings = Settings.builder().put(MonitoringSettings.INTERVAL.getKey(), TimeValue.MINUS_ONE).build();
Settings settings = Settings.builder().put(MonitoringService.INTERVAL.getKey(), TimeValue.MINUS_ONE).build();
CountingExporter exporter = new CountingExporter();
monitoringService = new MonitoringService(settings, clusterSettings, threadPool, emptySet(), exporter);
@@ -102,7 +104,7 @@
final CountDownLatch latch = new CountDownLatch(1);
final BlockingExporter exporter = new BlockingExporter(latch);
Settings settings = Settings.builder().put(MonitoringSettings.INTERVAL.getKey(), MonitoringSettings.MIN_INTERVAL).build();
Settings settings = Settings.builder().put(MonitoringService.INTERVAL.getKey(), MonitoringService.MIN_INTERVAL).build();
monitoringService = new MonitoringService(settings, clusterSettings, threadPool, emptySet(), exporter);
monitoringService.start();


@@ -1,167 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.monitoring;
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequestBuilder;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.transport.Netty4Plugin;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.hamcrest.Matchers.equalTo;
@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, supportsDedicatedMasters = false, numDataNodes = 1, numClientNodes = 0)
public class MonitoringSettingsIntegTests extends MonitoringIntegTestCase {
private final TimeValue interval = newRandomTimeValue();
private final TimeValue indexStatsTimeout = newRandomTimeValue();
private final String[] indices = randomStringArray();
private final TimeValue clusterStateTimeout = newRandomTimeValue();
private final TimeValue clusterStatsTimeout = newRandomTimeValue();
private final TimeValue jobStatsTimeout = newRandomTimeValue();
private final TimeValue recoveryTimeout = newRandomTimeValue();
private final Boolean recoveryActiveOnly = randomBoolean();
@Override
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), false)
.put(monitoringSettings())
.build();
}
@Override
protected Collection<Class<? extends Plugin>> nodePlugins() {
ArrayList<Class<? extends Plugin>> plugins = new ArrayList<>(super.nodePlugins());
plugins.add(Netty4Plugin.class); // for http
return plugins;
}
private Settings monitoringSettings() {
return Settings.builder()
.put(MonitoringSettings.INTERVAL.getKey(), interval)
.put(MonitoringSettings.INDEX_STATS_TIMEOUT.getKey(), indexStatsTimeout)
.putArray(MonitoringSettings.INDICES.getKey(), indices)
.put(MonitoringSettings.CLUSTER_STATE_TIMEOUT.getKey(), clusterStateTimeout)
.put(MonitoringSettings.CLUSTER_STATS_TIMEOUT.getKey(), clusterStatsTimeout)
.put(MonitoringSettings.JOB_STATS_TIMEOUT.getKey(), jobStatsTimeout)
.put(MonitoringSettings.INDEX_RECOVERY_TIMEOUT.getKey(), recoveryTimeout)
.put(MonitoringSettings.INDEX_RECOVERY_ACTIVE_ONLY.getKey(), recoveryActiveOnly)
.build();
}
public void testMonitoringSettings() throws Exception {
for (final MonitoringSettings monitoringSettings : internalCluster().getInstances(MonitoringSettings.class)) {
assertThat(monitoringSettings.indexStatsTimeout().millis(), equalTo(indexStatsTimeout.millis()));
assertArrayEquals(monitoringSettings.indices(), indices);
assertThat(monitoringSettings.clusterStateTimeout().millis(), equalTo(clusterStateTimeout.millis()));
assertThat(monitoringSettings.clusterStatsTimeout().millis(), equalTo(clusterStatsTimeout.millis()));
assertThat(monitoringSettings.jobStatsTimeout().millis(), equalTo(jobStatsTimeout.millis()));
assertThat(monitoringSettings.recoveryTimeout().millis(), equalTo(recoveryTimeout.millis()));
assertThat(monitoringSettings.recoveryActiveOnly(), equalTo(recoveryActiveOnly));
}
for (final MonitoringService service : internalCluster().getInstances(MonitoringService.class)) {
assertThat(service.getInterval().millis(), equalTo(interval.millis()));
}
logger.info("--> testing monitoring dynamic settings update");
Settings.Builder transientSettings = Settings.builder();
final Setting[] monitoringSettings = new Setting[]{
MonitoringSettings.INDICES,
MonitoringSettings.INTERVAL,
MonitoringSettings.INDEX_RECOVERY_TIMEOUT,
MonitoringSettings.INDEX_STATS_TIMEOUT,
MonitoringSettings.INDEX_RECOVERY_ACTIVE_ONLY,
MonitoringSettings.CLUSTER_STATE_TIMEOUT,
MonitoringSettings.CLUSTER_STATS_TIMEOUT,
MonitoringSettings.JOB_STATS_TIMEOUT};
for (Setting<?> setting : monitoringSettings) {
if (setting.isDynamic()) {
if (setting.get(Settings.EMPTY) instanceof TimeValue) {
transientSettings.put(setting.getKey(), newRandomTimeValue().toString());
} else if (setting.get(Settings.EMPTY) instanceof Boolean) {
transientSettings.put(setting.getKey(), randomBoolean());
} else if (setting.get(Settings.EMPTY) instanceof List) {
transientSettings.putArray(setting.getKey(), randomStringArray());
} else {
fail("unknown dynamic setting [" + setting + "]");
}
}
}
logger.error("--> updating settings");
final Settings updatedSettings = transientSettings.build();
assertAcked(prepareRandomUpdateSettings(updatedSettings).get());
logger.error("--> checking that the value has been correctly updated on all monitoring settings services");
for (Setting<?> setting : monitoringSettings) {
if (setting.isDynamic() == false) {
continue;
}
if (setting == MonitoringSettings.INTERVAL) {
for (final MonitoringService service : internalCluster().getInstances(MonitoringService.class)) {
assertEquals(service.getInterval(), setting.get(updatedSettings));
}
} else {
for (final MonitoringSettings monitoringSettings1 : internalCluster().getInstances(MonitoringSettings.class)) {
if (setting == MonitoringSettings.INDEX_STATS_TIMEOUT) {
assertEquals(monitoringSettings1.indexStatsTimeout(), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.CLUSTER_STATS_TIMEOUT) {
assertEquals(monitoringSettings1.clusterStatsTimeout(), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.JOB_STATS_TIMEOUT) {
assertEquals(monitoringSettings1.jobStatsTimeout(), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.CLUSTER_STATE_TIMEOUT) {
assertEquals(monitoringSettings1.clusterStateTimeout(), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.INDEX_RECOVERY_TIMEOUT) {
assertEquals(monitoringSettings1.recoveryTimeout(), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.INDEX_RECOVERY_ACTIVE_ONLY) {
assertEquals(Boolean.valueOf(monitoringSettings1.recoveryActiveOnly()), setting.get(updatedSettings));
} else if (setting == MonitoringSettings.INDICES) {
assertEquals(Arrays.asList(monitoringSettings1.indices()), setting.get(updatedSettings));
} else {
fail("unable to check value for unknown dynamic setting [" + setting + "]");
}
}
}
}
}
private ClusterUpdateSettingsRequestBuilder prepareRandomUpdateSettings(Settings updateSettings) {
ClusterUpdateSettingsRequestBuilder requestBuilder = client().admin().cluster().prepareUpdateSettings();
if (randomBoolean()) {
requestBuilder.setTransientSettings(updateSettings);
} else {
requestBuilder.setPersistentSettings(updateSettings);
}
return requestBuilder;
}
private TimeValue newRandomTimeValue() {
return TimeValue.parseTimeValue(randomFrom("30m", "1h", "3h", "5h", "7h", "10h", "1d"), null, getClass().getSimpleName());
}
private String[] randomStringArray() {
final int size = scaledRandomIntBetween(1, 10);
String[] items = new String[size];
for (int i = 0; i < size; i++) {
items[i] = randomAlphaOfLength(5);
}
return items;
}
}


@@ -14,7 +14,6 @@ import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.test.ESIntegTestCase.Scope;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.collector.node.NodeStatsMonitoringDoc;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import org.junit.After;
@@ -33,7 +32,7 @@ public class MultiNodesStatsTests extends MonitoringIntegTestCase {
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(MonitoringSettings.INTERVAL.getKey(), "-1")
.put(MonitoringService.INTERVAL.getKey(), "-1")
.put("xpack.monitoring.exporters.default_local.type", "local")
.build();
}
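
The hunk above moves the collection-interval key from `MonitoringSettings.INTERVAL` to `MonitoringService.INTERVAL`; the tests set it to `-1` to disable periodic collection. A self-contained sketch of that "-1 means disabled" convention (plain Java with illustrative names, not the actual x-pack classes):

```java
import java.time.Duration;

// Hypothetical sketch: an interval setting where "-1" disables collection.
// Names are illustrative only, not the real MonitoringService implementation.
final class IntervalSetting {
    static final Duration DISABLED = Duration.ofMillis(-1);

    // Parse values such as "10s", "30m", "7h", or "-1" (disabled).
    static Duration parse(String value) {
        if ("-1".equals(value)) {
            return DISABLED;
        }
        char unit = value.charAt(value.length() - 1);
        long amount = Long.parseLong(value.substring(0, value.length() - 1));
        switch (unit) {
            case 's': return Duration.ofSeconds(amount);
            case 'm': return Duration.ofMinutes(amount);
            case 'h': return Duration.ofHours(amount);
            default: throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }

    static boolean isEnabled(Duration interval) {
        return !interval.isNegative();
    }
}
```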

View File

@@ -27,6 +27,7 @@ import org.elasticsearch.xpack.monitoring.collector.indices.IndexStatsMonitoring
import org.elasticsearch.xpack.monitoring.collector.indices.IndicesStatsMonitoringDoc;
import org.elasticsearch.xpack.monitoring.collector.node.NodeStatsMonitoringDoc;
import org.elasticsearch.xpack.monitoring.collector.shards.ShardMonitoringDoc;
import org.elasticsearch.xpack.monitoring.exporter.Exporters;
import org.hamcrest.Matcher;
import org.joda.time.format.DateTimeFormat;
@@ -63,9 +64,9 @@ public class OldMonitoringIndicesBackwardsCompatibilityTests extends AbstractOld
Settings.Builder settings = Settings.builder().put(super.nodeSettings(ord))
.put(XPackSettings.MONITORING_ENABLED.getKey(), true)
// Don't clean old monitoring indexes - we want to make sure we can load them
.put(MonitoringSettings.HISTORY_DURATION.getKey(), TimeValue.timeValueHours(1000 * 365 * 24).getStringRep())
.put(Monitoring.HISTORY_DURATION.getKey(), TimeValue.timeValueHours(1000 * 365 * 24).getStringRep())
// Do not start monitoring exporters at startup
.put(MonitoringSettings.INTERVAL.getKey(), "-1");
.put(MonitoringService.INTERVAL.getKey(), "-1");
if (httpExporter) {
/* If we want to test the http exporter we have to create it but disable it. We need to create it so we don't use the default
@@ -85,7 +86,7 @@ public class OldMonitoringIndicesBackwardsCompatibilityTests extends AbstractOld
httpExporter.put("auth.username", SecuritySettingsSource.TEST_USER_NAME);
httpExporter.put("auth.password", SecuritySettingsSource.TEST_PASSWORD);
settings.putProperties(httpExporter, k -> MonitoringSettings.EXPORTERS_SETTINGS.getKey() + "my_exporter." + k);
settings.putProperties(httpExporter, k -> Exporters.EXPORTERS_SETTINGS.getKey() + "my_exporter." + k);
}
@Override
@@ -105,7 +106,7 @@ public class OldMonitoringIndicesBackwardsCompatibilityTests extends AbstractOld
}
// Monitoring can now start to collect new data
Settings.Builder settings = Settings.builder().put(MonitoringSettings.INTERVAL.getKey(), timeValueSeconds(3).getStringRep());
Settings.Builder settings = Settings.builder().put(MonitoringService.INTERVAL.getKey(), timeValueSeconds(3).getStringRep());
assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(settings).get());
final String prefix = ".monitoring-" + MonitoredSystem.ES.getSystem() + "-" + TEMPLATE_VERSION + "-";
@@ -167,7 +168,7 @@ public class OldMonitoringIndicesBackwardsCompatibilityTests extends AbstractOld
if they have not been re-created by some in-flight monitoring bulk request */
internalCluster().getInstances(MonitoringService.class).forEach(MonitoringService::stop);
Settings.Builder settings = Settings.builder().put(MonitoringSettings.INTERVAL.getKey(), "-1");
Settings.Builder settings = Settings.builder().put(MonitoringService.INTERVAL.getKey(), "-1");
if (httpExporter) {
logger.info("--> Disabling http exporter after test");
setupHttpExporter(settings, null);

View File

@@ -8,7 +8,8 @@ package org.elasticsearch.xpack.monitoring.cleaner;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESIntegTestCase.ClusterScope;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.exporter.Exporter;
import org.elasticsearch.xpack.monitoring.exporter.Exporters;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils;
@@ -29,7 +30,7 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
protected Settings nodeSettings(int nodeOrdinal) {
Settings.Builder settings = Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(MonitoringSettings.INTERVAL.getKey(), "-1");
.put(MonitoringService.INTERVAL.getKey(), "-1");
return settings.build();
}
@@ -151,7 +152,7 @@ public abstract class AbstractIndicesCleanerTestCase extends MonitoringIntegTest
public void testRetentionAsGlobalSetting() throws Exception {
final int max = 10;
final int retention = randomIntBetween(1, max);
internalCluster().startNode(Settings.builder().put(MonitoringSettings.HISTORY_DURATION.getKey(),
internalCluster().startNode(Settings.builder().put(Monitoring.HISTORY_DURATION.getKey(),
String.format(Locale.ROOT, "%dd", retention)));
final DateTime now = now();

View File

@@ -12,7 +12,7 @@ import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.junit.After;
@@ -40,7 +40,7 @@ public class CleanerServiceTests extends ESTestCase {
@Before
public void start() {
clusterSettings = new ClusterSettings(Settings.EMPTY, Collections.singleton(MonitoringSettings.HISTORY_DURATION));
clusterSettings = new ClusterSettings(Settings.EMPTY, Collections.singleton(Monitoring.HISTORY_DURATION));
threadPool = new TestThreadPool("CleanerServiceTests");
}
@@ -54,14 +54,14 @@ public class CleanerServiceTests extends ESTestCase {
expectedException.expect(IllegalArgumentException.class);
TimeValue expected = TimeValue.timeValueHours(1);
Settings settings = Settings.builder().put(MonitoringSettings.HISTORY_DURATION.getKey(), expected.getStringRep()).build();
Settings settings = Settings.builder().put(Monitoring.HISTORY_DURATION.getKey(), expected.getStringRep()).build();
new CleanerService(settings, clusterSettings, threadPool, licenseState);
}
public void testGetRetentionWithSettingWithUpdatesAllowed() {
TimeValue expected = TimeValue.timeValueHours(25);
Settings settings = Settings.builder().put(MonitoringSettings.HISTORY_DURATION.getKey(), expected.getStringRep()).build();
Settings settings = Settings.builder().put(Monitoring.HISTORY_DURATION.getKey(), expected.getStringRep()).build();
when(licenseState.isUpdateRetentionAllowed()).thenReturn(true);
@@ -73,7 +73,7 @@ public class CleanerServiceTests extends ESTestCase {
public void testGetRetentionDefaultValueWithNoSettings() {
when(licenseState.isUpdateRetentionAllowed()).thenReturn(true);
assertEquals(MonitoringSettings.HISTORY_DURATION.get(Settings.EMPTY),
assertEquals(Monitoring.HISTORY_DURATION.get(Settings.EMPTY),
new CleanerService(Settings.EMPTY, clusterSettings, threadPool, licenseState).getRetention());
verify(licenseState).isUpdateRetentionAllowed();
@@ -81,11 +81,11 @@ public class CleanerServiceTests extends ESTestCase {
public void testGetRetentionDefaultValueWithSettingsButUpdatesNotAllowed() {
TimeValue notExpected = TimeValue.timeValueHours(25);
Settings settings = Settings.builder().put(MonitoringSettings.HISTORY_DURATION.getKey(), notExpected.getStringRep()).build();
Settings settings = Settings.builder().put(Monitoring.HISTORY_DURATION.getKey(), notExpected.getStringRep()).build();
when(licenseState.isUpdateRetentionAllowed()).thenReturn(false);
assertEquals(MonitoringSettings.HISTORY_DURATION.get(Settings.EMPTY),
assertEquals(Monitoring.HISTORY_DURATION.get(Settings.EMPTY),
new CleanerService(settings, clusterSettings, threadPool, licenseState).getRetention());
verify(licenseState).isUpdateRetentionAllowed();

View File

@@ -12,12 +12,19 @@ import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.Monitoring;
import org.elasticsearch.xpack.security.InternalClient;
import java.util.function.Function;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@@ -28,9 +35,9 @@ public abstract class BaseCollectorTestCase extends ESTestCase {
protected ClusterState clusterState;
protected DiscoveryNodes nodes;
protected MetaData metaData;
protected MonitoringSettings monitoringSettings;
protected XPackLicenseState licenseState;
protected InternalClient client;
protected Settings settings;
@Override
public void setUp() throws Exception {
@@ -40,9 +47,9 @@ public abstract class BaseCollectorTestCase extends ESTestCase {
clusterState = mock(ClusterState.class);
nodes = mock(DiscoveryNodes.class);
metaData = mock(MetaData.class);
monitoringSettings = mock(MonitoringSettings.class);
licenseState = mock(XPackLicenseState.class);
client = mock(InternalClient.class);
settings = Settings.EMPTY;
}
protected void whenLocalNodeElectedMaster(final boolean electedMaster) {
@@ -63,6 +70,28 @@ public abstract class BaseCollectorTestCase extends ESTestCase {
when(metaData.clusterUUID()).thenReturn(clusterUUID);
}
protected void withCollectionTimeout(final Setting<TimeValue> collectionTimeout, final TimeValue timeout) {
withCollectionSetting(builder -> builder.put(collectionTimeout.getKey(), timeout.getStringRep()));
}
protected void withCollectionIndices(final String[] collectionIndices) {
final String key = Collector.INDICES.getKey();
if (collectionIndices != null) {
withCollectionSetting(builder -> builder.putArray(key, collectionIndices));
} else {
withCollectionSetting(builder -> builder.putNull(key));
}
}
protected void withCollectionSetting(final Function<Settings.Builder, Settings.Builder> builder) {
settings = Settings.builder()
.put(settings)
.put(builder.apply(Settings.builder()).build())
.build();
when(clusterService.getClusterSettings())
.thenReturn(new ClusterSettings(settings, Sets.newHashSet(new Monitoring(settings, licenseState).getSettings())));
}
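
The `withCollectionSetting` helper added above layers each new setting over the snapshot accumulated so far and rebuilds the cluster-settings mock. Stripped of the Elasticsearch `Settings` API, the layering pattern is roughly this (plain-Java analogue with a hypothetical class name):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Plain-Java analogue of withCollectionSetting's layering: each call builds
// a fresh map, merges it over the current snapshot, and replaces the
// snapshot. Illustrative only, not the Elasticsearch Settings API.
final class LayeredSettings {
    private Map<String, String> current = new HashMap<>();

    void with(Function<Map<String, String>, Map<String, String>> builder) {
        Map<String, String> merged = new HashMap<>(current);
        merged.putAll(builder.apply(new HashMap<>()));
        current = merged;
    }

    String get(String key) {
        return current.get(key);
    }
}
```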
protected static DiscoveryNode localNode(final String uuid) {
return new DiscoveryNode(uuid, new TransportAddress(TransportAddress.META_ADDRESS, 9300), Version.CURRENT);
}

View File

@@ -25,7 +25,6 @@ import org.elasticsearch.xpack.action.XPackUsageResponse;
import org.elasticsearch.xpack.logstash.Logstash;
import org.elasticsearch.xpack.logstash.LogstashFeatureSet;
import org.elasticsearch.xpack.monitoring.MonitoredSystem;
import org.elasticsearch.xpack.monitoring.MonitoringTestUtils;
import org.elasticsearch.xpack.monitoring.collector.BaseCollectorTestCase;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -64,7 +63,7 @@ public class ClusterStatsCollectorTests extends BaseCollectorTestCase {
whenLocalNodeElectedMaster(false);
final ClusterStatsCollector collector =
new ClusterStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client, licenseService);
new ClusterStatsCollector(Settings.EMPTY, clusterService, licenseState, client, licenseService);
assertThat(collector.shouldCollect(), is(false));
verify(nodes).isLocalNodeElectedMaster();
@@ -74,13 +73,16 @@ public class ClusterStatsCollectorTests extends BaseCollectorTestCase {
whenLocalNodeElectedMaster(true);
final ClusterStatsCollector collector =
new ClusterStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client, licenseService);
new ClusterStatsCollector(Settings.EMPTY, clusterService, licenseState, client, licenseService);
assertThat(collector.shouldCollect(), is(true));
verify(nodes).isLocalNodeElectedMaster();
}
public void testDoCollect() throws Exception {
final TimeValue timeout = TimeValue.timeValueSeconds(randomIntBetween(1, 120));
withCollectionTimeout(ClusterStatsCollector.CLUSTER_STATS_TIMEOUT, timeout);
whenLocalNodeElectedMaster(true);
final String clusterName = randomAlphaOfLength(10);
@@ -102,9 +104,6 @@ public class ClusterStatsCollectorTests extends BaseCollectorTestCase {
.build();
when(licenseService.getLicense()).thenReturn(license);
final TimeValue timeout = mock(TimeValue.class);
when(monitoringSettings.clusterStatsTimeout()).thenReturn(timeout);
final ClusterStatsResponse mockClusterStatsResponse = mock(ClusterStatsResponse.class);
final ClusterHealthStatus clusterStatus = randomFrom(ClusterHealthStatus.values());
@@ -137,7 +136,8 @@ public class ClusterStatsCollectorTests extends BaseCollectorTestCase {
when(xPackUsageFuture.actionGet()).thenReturn(xPackUsageResponse);
final ClusterStatsCollector collector =
new ClusterStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client, licenseService);
new ClusterStatsCollector(Settings.EMPTY, clusterService, licenseState, client, licenseService);
assertEquals(timeout, collector.getCollectionTimeout());
final Collection<MonitoringDoc> results = collector.doCollect(node);
assertEquals(1, results.size());

View File

@@ -16,6 +16,7 @@ import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.RecoverySource;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.shard.ShardId;
@@ -53,8 +54,7 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(false);
whenLocalNodeElectedMaster(randomBoolean());
final IndexRecoveryCollector collector =
new IndexRecoveryCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexRecoveryCollector collector = new IndexRecoveryCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -65,8 +65,7 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
// this controls the blockage
whenLocalNodeElectedMaster(false);
final IndexRecoveryCollector collector =
new IndexRecoveryCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexRecoveryCollector collector = new IndexRecoveryCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -77,8 +76,7 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
whenLocalNodeElectedMaster(true);
final IndexRecoveryCollector collector =
new IndexRecoveryCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexRecoveryCollector collector = new IndexRecoveryCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(true));
verify(licenseState).isMonitoringAllowed();
@@ -86,6 +84,9 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
}
public void testDoCollect() throws Exception {
final TimeValue timeout = TimeValue.timeValueSeconds(randomIntBetween(1, 120));
withCollectionTimeout(IndexRecoveryCollector.INDEX_RECOVERY_TIMEOUT, timeout);
whenLocalNodeElectedMaster(true);
final String clusterName = randomAlphaOfLength(10);
@@ -100,19 +101,18 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
final MonitoringDoc.Node node = randomMonitoringNode(random());
final boolean recoveryOnly = randomBoolean();
when(monitoringSettings.recoveryActiveOnly()).thenReturn(recoveryOnly);
withCollectionSetting(builder -> builder.put(IndexRecoveryCollector.INDEX_RECOVERY_ACTIVE_ONLY.getKey(), recoveryOnly));
final String[] indices;
if (randomBoolean()) {
indices = null;
indices = randomBoolean() ? null : Strings.EMPTY_ARRAY;
} else {
indices = new String[randomIntBetween(1, 5)];
for (int i = 0; i < indices.length; i++) {
indices[i] = randomAlphaOfLengthBetween(5, 10);
}
}
when(monitoringSettings.indices()).thenReturn(indices);
when(monitoringSettings.recoveryTimeout()).thenReturn(TimeValue.timeValueSeconds(12));
withCollectionIndices(indices);
final int nbRecoveries = randomBoolean() ? 0 : randomIntBetween(1, 3);
final Map<String, List<RecoveryState>> recoveryStates = new HashMap<>();
@@ -130,9 +130,6 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
final RecoveryResponse recoveryResponse =
new RecoveryResponse(randomInt(), randomInt(), randomInt(), randomBoolean(), recoveryStates, emptyList());
final TimeValue timeout = mock(TimeValue.class);
when(monitoringSettings.recoveryTimeout()).thenReturn(timeout);
final RecoveryRequestBuilder recoveryRequestBuilder =
spy(new RecoveryRequestBuilder(mock(ElasticsearchClient.class), RecoveryAction.INSTANCE));
doReturn(recoveryResponse).when(recoveryRequestBuilder).get(eq(timeout));
@@ -146,8 +143,17 @@ public class IndexRecoveryCollectorTests extends BaseCollectorTestCase {
final Client client = mock(Client.class);
when(client.admin()).thenReturn(adminClient);
final IndexRecoveryCollector collector =
new IndexRecoveryCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexRecoveryCollector collector = new IndexRecoveryCollector(Settings.EMPTY, clusterService, licenseState, client);
assertEquals(timeout, collector.getCollectionTimeout());
assertEquals(recoveryOnly, collector.getActiveRecoveriesOnly());
if (indices != null) {
assertArrayEquals(indices, collector.getCollectionIndices());
} else {
// Collection indices default to an empty list,
// so the collector never returns a null indices array
assertArrayEquals(Strings.EMPTY_ARRAY, collector.getCollectionIndices());
}
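
The assertion above relies on the collector normalising a null indices configuration to an empty array. That null-safe default can be sketched as follows (hypothetical helper, not the actual collector code):

```java
// Hypothetical sketch of the default the test asserts: configured indices
// fall back to an empty array, so callers never receive null.
final class CollectionIndices {
    static final String[] EMPTY = new String[0];

    static String[] resolve(String[] configured) {
        // Defensive copy so callers cannot mutate shared state.
        return configured == null ? EMPTY : configured.clone();
    }
}
```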
final Collection<MonitoringDoc> results = collector.doCollect(node);
verify(indicesAdminClient).prepareRecoveries();

View File

@@ -43,8 +43,7 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(false);
whenLocalNodeElectedMaster(randomBoolean());
final IndexStatsCollector collector =
new IndexStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexStatsCollector collector = new IndexStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -55,8 +54,7 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
// this controls the blockage
whenLocalNodeElectedMaster(false);
final IndexStatsCollector collector =
new IndexStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexStatsCollector collector = new IndexStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -67,8 +65,7 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
whenLocalNodeElectedMaster(true);
final IndexStatsCollector collector =
new IndexStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexStatsCollector collector = new IndexStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(true));
verify(licenseState).isMonitoringAllowed();
@@ -76,6 +73,9 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
}
public void testDoCollect() throws Exception {
final TimeValue timeout = TimeValue.timeValueSeconds(randomIntBetween(1, 120));
withCollectionTimeout(IndexStatsCollector.INDEX_STATS_TIMEOUT, timeout);
whenLocalNodeElectedMaster(true);
final String clusterName = randomAlphaOfLength(10);
@@ -86,9 +86,6 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
final MonitoringDoc.Node node = randomMonitoringNode(random());
final TimeValue timeout = mock(TimeValue.class);
when(monitoringSettings.indexStatsTimeout()).thenReturn(timeout);
final Map<String, IndexStats> indicesStats = new HashMap<>();
final int indices = randomIntBetween(0, 10);
for (int i = 0; i < indices; i++) {
@@ -114,8 +111,8 @@ public class IndexStatsCollectorTests extends BaseCollectorTestCase {
final Client client = mock(Client.class);
when(client.admin()).thenReturn(adminClient);
final IndexStatsCollector collector =
new IndexStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final IndexStatsCollector collector = new IndexStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertEquals(timeout, collector.getCollectionTimeout());
final Collection<MonitoringDoc> results = collector.doCollect(node);
verify(indicesAdminClient).prepareStats();

View File

@@ -47,7 +47,7 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(false);
when(licenseState.isMachineLearningAllowed()).thenReturn(mlAllowed);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
@@ -63,7 +63,7 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
// this controls the blockage
whenLocalNodeElectedMaster(false);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
@@ -78,7 +78,7 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMachineLearningAllowed()).thenReturn(randomBoolean());
whenLocalNodeElectedMaster(randomBoolean());
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
@@ -93,7 +93,7 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMachineLearningAllowed()).thenReturn(false);
whenLocalNodeElectedMaster(randomBoolean());
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
@@ -107,7 +107,7 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMachineLearningAllowed()).thenReturn(true);
whenLocalNodeElectedMaster(true);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(settings, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(true));
@@ -115,20 +115,20 @@ public class JobStatsCollectorTests extends BaseCollectorTestCase {
}
public void testDoCollect() throws Exception {
final TimeValue timeout = mock(TimeValue.class);
final MetaData metaData = mock(MetaData.class);
final String clusterUuid = randomAlphaOfLength(5);
final MonitoringDoc.Node node = randomMonitoringNode(random());
final MachineLearningClient client = mock(MachineLearningClient.class);
when(monitoringSettings.jobStatsTimeout()).thenReturn(timeout);
final TimeValue timeout = TimeValue.timeValueSeconds(randomIntBetween(1, 120));
withCollectionTimeout(JobStatsCollector.JOB_STATS_TIMEOUT, timeout);
when(clusterService.state()).thenReturn(clusterState);
when(clusterState.metaData()).thenReturn(metaData);
when(metaData.clusterUUID()).thenReturn(clusterUuid);
final JobStatsCollector collector =
new JobStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final JobStatsCollector collector = new JobStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertEquals(timeout, collector.getCollectionTimeout());
final List<JobStats> jobStats = mockJobStats();

View File

@@ -15,6 +15,7 @@ import org.elasticsearch.client.AdminClient;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.monitoring.MonitoredSystem;
import org.elasticsearch.xpack.monitoring.collector.BaseCollectorTestCase;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringDoc;
@@ -41,8 +42,7 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(false);
whenLocalNodeElectedMaster(randomBoolean());
final NodeStatsCollector collector =
new NodeStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final NodeStatsCollector collector = new NodeStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -52,8 +52,7 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
whenLocalNodeElectedMaster(true);
final NodeStatsCollector collector =
new NodeStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final NodeStatsCollector collector = new NodeStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertThat(collector.shouldCollect(), is(true));
verify(licenseState).isMonitoringAllowed();
@@ -62,6 +61,9 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
public void testDoCollectWithFailures() throws Exception {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
final TimeValue timeout = TimeValue.parseTimeValue(randomPositiveTimeValue(), NodeStatsCollectorTests.class.getName());
withCollectionTimeout(NodeStatsCollector.NODE_STATS_TIMEOUT, timeout);
final NodesStatsResponse nodesStatsResponse = mock(NodesStatsResponse.class);
when(nodesStatsResponse.hasFailures()).thenReturn(true);
@@ -69,10 +71,10 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
when(nodesStatsResponse.failures()).thenReturn(Collections.singletonList(exception));
final Client client = mock(Client.class);
thenReturnNodeStats(client, nodesStatsResponse);
thenReturnNodeStats(client, timeout, nodesStatsResponse);
final NodeStatsCollector collector =
new NodeStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final NodeStatsCollector collector = new NodeStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertEquals(timeout, collector.getCollectionTimeout());
final FailedNodeException e = expectThrows(FailedNodeException.class, () -> collector.doCollect(randomMonitoringNode(random())));
assertEquals(exception, e);
@@ -81,6 +83,9 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
public void testDoCollect() throws Exception {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
final TimeValue timeout = TimeValue.timeValueSeconds(randomIntBetween(1, 120));
withCollectionTimeout(NodeStatsCollector.NODE_STATS_TIMEOUT, timeout);
final boolean isMaster = randomBoolean();
whenLocalNodeElectedMaster(isMaster);
@@ -99,10 +104,10 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
when(nodeStats.getTimestamp()).thenReturn(timestamp);
final Client client = mock(Client.class);
thenReturnNodeStats(client, nodesStatsResponse);
thenReturnNodeStats(client, timeout, nodesStatsResponse);
final NodeStatsCollector collector =
new NodeStatsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState, client);
final NodeStatsCollector collector = new NodeStatsCollector(Settings.EMPTY, clusterService, licenseState, client);
assertEquals(timeout, collector.getCollectionTimeout());
final Collection<MonitoringDoc> results = collector.doCollect(node);
assertEquals(1, results.size());
@@ -124,10 +129,10 @@ public class NodeStatsCollectorTests extends BaseCollectorTestCase {
assertThat(document.isMlockall(), equalTo(BootstrapInfo.isMemoryLocked()));
}
private void thenReturnNodeStats(final Client client, final NodesStatsResponse nodesStatsResponse) {
private void thenReturnNodeStats(final Client client, final TimeValue timeout, final NodesStatsResponse nodesStatsResponse) {
@SuppressWarnings("unchecked")
final ActionFuture<NodesStatsResponse> future = (ActionFuture<NodesStatsResponse>) mock(ActionFuture.class);
when(future.actionGet(eq(monitoringSettings.nodeStatsTimeout()))).thenReturn(nodesStatsResponse);
when(future.actionGet(eq(timeout))).thenReturn(nodesStatsResponse);
final ClusterAdminClient clusterAdminClient = mock(ClusterAdminClient.class);
when(clusterAdminClient.nodesStats(any(NodesStatsRequest.class))).thenReturn(future);
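
The `thenReturnNodeStats` helper above keys the stubbed response on the exact timeout passed to `actionGet`. Outside of Mockito, the same idea is a lookup keyed by the expected argument (plain-Java sketch with illustrative names, not the real `ActionFuture` API):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the stubbing idea: a fake future that only yields
// its response when asked with the expected timeout, mimicking
// when(future.actionGet(eq(timeout))).thenReturn(response).
final class FakeFuture<T> {
    private final Map<String, T> responses = new HashMap<>();

    void whenTimeout(String timeout, T response) {
        responses.put(timeout, response);
    }

    T actionGet(String timeout) {
        T response = responses.get(timeout);
        if (response == null) {
            throw new IllegalStateException("unstubbed timeout: " + timeout);
        }
        return response;
    }
}
```

Stubbing on the timeout (rather than any argument) is what lets the test also verify that the collector forwarded the configured `NODE_STATS_TIMEOUT`.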

View File

@@ -47,7 +47,7 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(false);
whenLocalNodeElectedMaster(randomBoolean());
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, licenseState);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -58,7 +58,7 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
// this controls the blockage
whenLocalNodeElectedMaster(false);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, licenseState);
assertThat(collector.shouldCollect(), is(false));
verify(licenseState).isMonitoringAllowed();
@@ -69,7 +69,7 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
when(licenseState.isMonitoringAllowed()).thenReturn(true);
whenLocalNodeElectedMaster(true);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, licenseState);
assertThat(collector.shouldCollect(), is(true));
verify(licenseState).isMonitoringAllowed();
@@ -79,7 +79,7 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
public void testDoCollectWhenNoClusterState() throws Exception {
when(clusterService.state()).thenReturn(null);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState);
final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, licenseState);
final Collection<MonitoringDoc> results = collector.doCollect(randomMonitoringNode(random()));
assertThat(results, notNullValue());
@@ -98,7 +98,7 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
when(clusterState.stateUUID()).thenReturn(stateUUID);
final String[] indices = randomFrom(NONE, Strings.EMPTY_ARRAY, new String[]{"_all"}, new String[]{"_index*"});
when(monitoringSettings.indices()).thenReturn(indices);
withCollectionIndices(indices);
final RoutingTable routingTable = mockRoutingTable();
when(clusterState.routingTable()).thenReturn(routingTable);
@@ -108,7 +108,9 @@ public class ShardsCollectorTests extends BaseCollectorTestCase {
when(nodes.get(eq("_current"))).thenReturn(localNode);
when(clusterState.getNodes()).thenReturn(nodes);
-final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, monitoringSettings, licenseState);
+final ShardsCollector collector = new ShardsCollector(Settings.EMPTY, clusterService, licenseState);
+assertNull(collector.getCollectionTimeout());
+assertArrayEquals(indices, collector.getCollectionIndices());
final Collection<MonitoringDoc> results = collector.doCollect(node);
assertThat(results, notNullValue());

View File

@@ -19,8 +19,7 @@ import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.monitoring.MonitoredSystem;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
-import org.elasticsearch.xpack.monitoring.action.MonitoringBulkDoc;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.cleaner.CleanerService;
import org.elasticsearch.xpack.monitoring.exporter.local.LocalExporter;
import org.elasticsearch.xpack.security.InternalClient;
@@ -40,6 +39,7 @@ import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
+import static java.util.Collections.singleton;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.equalTo;
@@ -75,7 +75,7 @@ public class ExportersTests extends ESTestCase {
// default state.version() will be 0, which is "valid"
state = mock(ClusterState.class);
clusterSettings = new ClusterSettings(Settings.EMPTY,
-new HashSet<>(Arrays.asList(MonitoringSettings.INTERVAL, MonitoringSettings.EXPORTERS_SETTINGS)));
+new HashSet<>(Arrays.asList(MonitoringService.INTERVAL, Exporters.EXPORTERS_SETTINGS)));
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
when(clusterService.state()).thenReturn(state);
@@ -180,7 +180,7 @@ public class ExportersTests extends ESTestCase {
.put("xpack.monitoring.exporters._name0.type", "_type")
.put("xpack.monitoring.exporters._name1.type", "_type")
.build();
-clusterSettings = new ClusterSettings(nodeSettings, new HashSet<>(Arrays.asList(MonitoringSettings.EXPORTERS_SETTINGS)));
+clusterSettings = new ClusterSettings(nodeSettings, singleton(Exporters.EXPORTERS_SETTINGS));
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
exporters = new Exporters(nodeSettings, factories, clusterService, licenseState, threadContext) {

View File

@@ -29,7 +29,7 @@ import org.elasticsearch.test.ESIntegTestCase.Scope;
import org.elasticsearch.test.http.MockRequest;
import org.elasticsearch.test.http.MockResponse;
import org.elasticsearch.test.http.MockWebServer;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.MonitoringTestUtils;
import org.elasticsearch.xpack.monitoring.collector.indices.IndexRecoveryMonitoringDoc;
import org.elasticsearch.xpack.monitoring.exporter.ClusterAlertsUtil;
@@ -107,7 +107,7 @@ public class HttpExporterIT extends MonitoringIntegTestCase {
// we make an exporter on demand per test
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
-.put(MonitoringSettings.INTERVAL.getKey(), "-1")
+.put(MonitoringService.INTERVAL.getKey(), "-1")
.put("xpack.monitoring.exporters._http.type", "http")
.put("xpack.monitoring.exporters._http.enabled", false)
.build();

View File

@@ -11,7 +11,7 @@ import org.elasticsearch.license.XPackLicenseState;
import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.XPackSettings;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.cleaner.CleanerService;
import org.elasticsearch.xpack.monitoring.exporter.Exporter;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
@@ -65,7 +65,7 @@ public abstract class LocalExporterIntegTestCase extends MonitoringIntegTestCase
protected Settings localExporterSettings() {
return Settings.builder()
-.put(MonitoringSettings.INTERVAL.getKey(), "-1")
+.put(MonitoringService.INTERVAL.getKey(), "-1")
.put("xpack.monitoring.exporters." + exporterName + ".type", LocalExporter.TYPE)
.put("xpack.monitoring.exporters." + exporterName + ".enabled", false)
.put("xpack.monitoring.exporters." + exporterName + "." + CLUSTER_ALERTS_MANAGEMENT_SETTING, useClusterAlerts())

View File

@@ -26,7 +26,7 @@ import org.elasticsearch.search.aggregations.metrics.max.Max;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.xpack.XPackClient;
import org.elasticsearch.xpack.monitoring.MonitoredSystem;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.MonitoringTestUtils;
import org.elasticsearch.xpack.monitoring.action.MonitoringBulkDoc;
import org.elasticsearch.xpack.monitoring.action.MonitoringBulkRequestBuilder;
@@ -71,7 +71,7 @@ public class LocalExporterIntegTests extends LocalExporterIntegTestCase {
private void stopMonitoring() throws Exception {
// Now disabling the monitoring service, so that no more collection are started
assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(
-Settings.builder().putNull(MonitoringSettings.INTERVAL.getKey())
+Settings.builder().putNull(MonitoringService.INTERVAL.getKey())
.putNull("xpack.monitoring.exporters._local.enabled")
.putNull("xpack.monitoring.exporters._local.index.name.time_format")));
}
@@ -126,7 +126,7 @@ public class LocalExporterIntegTests extends LocalExporterIntegTestCase {
// monitoring service is started
exporterSettings = Settings.builder()
-.put(MonitoringSettings.INTERVAL.getKey(), 3L, TimeUnit.SECONDS);
+.put(MonitoringService.INTERVAL.getKey(), 3L, TimeUnit.SECONDS);
assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(exporterSettings));
final int numNodes = internalCluster().getNodeNames().length;

View File

@@ -11,7 +11,6 @@ import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.nio.entity.NStringEntity;
import org.apache.lucene.util.Constants;
-import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.Version;
import org.elasticsearch.client.Response;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.notNullValue;
import static org.hamcrest.Matchers.nullValue;
-@LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elastic/x-pack-elasticsearch/issues/2609")
public class MonitoringIT extends ESRestTestCase {
private static final String BASIC_AUTH_VALUE = basicAuthHeaderValue("x_pack_rest_user", TEST_PASSWORD_SECURE_STRING);
@@ -411,9 +409,6 @@ public class MonitoringIT extends ESRestTestCase {
final Map<String, Object> source = (Map<String, Object>) document.get("_source");
assertEquals(5, source.size());
final Map<String, Object> nodeStats = (Map<String, Object>) source.get(NodeStatsMonitoringDoc.TYPE);
-assertEquals(Constants.WINDOWS ? 8 : 9, nodeStats.size());
NodeStatsMonitoringDoc.XCONTENT_FILTERS.forEach(filter -> {
if (Constants.WINDOWS && filter.startsWith("node_stats.os.cpu.load_average")) {
// load average is unavailable on Windows

View File

@@ -10,7 +10,7 @@ import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import org.elasticsearch.xpack.security.InternalClient;
@@ -24,7 +24,7 @@ public class MonitoringInternalClientTests extends MonitoringIntegTestCase {
protected Settings nodeSettings(int nodeOrdinal) {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
-.put(MonitoringSettings.INTERVAL.getKey(), "-1")
+.put(MonitoringService.INTERVAL.getKey(), "-1")
.build();
}

View File

@@ -14,7 +14,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.transport.Netty4Plugin;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
+import org.elasticsearch.xpack.monitoring.MonitoringService;
import org.elasticsearch.xpack.monitoring.test.MonitoringIntegTestCase;
import java.util.ArrayList;
@@ -35,7 +35,7 @@ public class MonitoringSettingsFilterTests extends MonitoringIntegTestCase {
return Settings.builder()
.put(super.nodeSettings(nodeOrdinal))
.put(NetworkModule.HTTP_ENABLED.getKey(), true)
-.put(MonitoringSettings.INTERVAL.getKey(), "-1")
+.put(MonitoringService.INTERVAL.getKey(), "-1")
.put("xpack.monitoring.exporters._http.type", "http")
.put("xpack.monitoring.exporters._http.enabled", false)
.put("xpack.monitoring.exporters._http.auth.username", "_user")

View File

@@ -34,7 +34,6 @@ import org.elasticsearch.xpack.XPackPlugin;
import org.elasticsearch.xpack.XPackSettings;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.monitoring.MonitoringService;
-import org.elasticsearch.xpack.monitoring.MonitoringSettings;
import org.elasticsearch.xpack.monitoring.client.MonitoringClient;
import org.elasticsearch.xpack.monitoring.exporter.ClusterAlertsUtil;
import org.elasticsearch.xpack.monitoring.exporter.MonitoringTemplateUtils;
@@ -404,7 +403,7 @@ public abstract class MonitoringIntegTestCase extends ESIntegTestCase {
protected void updateMonitoringInterval(long value, TimeUnit timeUnit) {
assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(
-Settings.builder().put(MonitoringSettings.INTERVAL.getKey(), value, timeUnit)));
+Settings.builder().put(MonitoringService.INTERVAL.getKey(), value, timeUnit)));
}
/** security related settings */

View File

@@ -830,6 +830,7 @@ public class AuthenticationServiceTests extends ESTestCase {
random().nextBytes(randomBytes);
final CountDownLatch latch = new CountDownLatch(1);
final Authentication expected = new Authentication(user, new RealmRef(firstRealm.name(), firstRealm.type(), "authc_test"), null);
+AtomicBoolean success = new AtomicBoolean(false);
try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
threadContext.putHeader("Authorization", "Bearer " + Base64.getEncoder().encodeToString(randomBytes));
service.authenticate("_action", message, null, ActionListener.wrap(result -> {
@@ -839,18 +840,22 @@ public class AuthenticationServiceTests extends ESTestCase {
assertThat(result.getAuthenticatedBy(), is(notNullValue()));
assertThreadContextContainsAuthentication(result);
assertEquals(expected, result);
+success.set(true);
latch.countDown();
}, this::logAndFail));
} catch (IllegalArgumentException ex) {
assertThat(ex.getMessage(), containsString("array length must be <= to " + ArrayUtil.MAX_ARRAY_LENGTH + " but was: "));
+latch.countDown();
} catch (NegativeArraySizeException ex) {
assertThat(ex.getMessage(), containsString("array size must be positive but was: "));
+latch.countDown();
}
// we need to use a latch here because the key computation goes async on another thread!
latch.await();
-verify(auditTrail).authenticationSuccess(firstRealm.name(), user, "_action", message);
+if (success.get()) {
+verify(auditTrail).authenticationSuccess(firstRealm.name(), user, "_action", message);
+}
verifyNoMoreInteractions(auditTrail);
}

View File

@@ -46,7 +46,9 @@ public class ESNativeRealmMigrateToolTests extends CommandTestCase {
@Override
protected Environment createEnv(Terminal terminal, Map<String, String> settings) throws UserException {
-return new Environment(Settings.builder().put(settings).build());
+Settings.Builder builder = Settings.builder();
+settings.forEach((k,v) -> builder.put(k, v));
+return new Environment(builder.build());
}
};

View File

@@ -100,7 +100,9 @@ public class SetupPasswordToolTests extends CommandTestCase {
return new AutoSetup() {
@Override
protected Environment createEnv(Terminal terminal, Map<String, String> settings) throws UserException {
-return new Environment(Settings.builder().put(settings).build());
+Settings.Builder builder = Settings.builder();
+settings.forEach((k,v) -> builder.put(k, v));
+return new Environment(builder.build());
}
};
}
@@ -110,7 +112,9 @@ public class SetupPasswordToolTests extends CommandTestCase {
return new InteractiveSetup() {
@Override
protected Environment createEnv(Terminal terminal, Map<String, String> settings) throws UserException {
-return new Environment(Settings.builder().put(settings).build());
+Settings.Builder builder = Settings.builder();
+settings.forEach((k,v) -> builder.put(k, v));
+return new Environment(builder.build());
}
};
}

View File

@@ -50,7 +50,9 @@ public class SystemKeyToolTests extends CommandTestCase {
@Override
protected Environment createEnv(Terminal terminal, Map<String, String> settings) throws UserException {
-return new Environment(Settings.builder().put(settings).build());
+Settings.Builder builder = Settings.builder();
+settings.forEach((k,v) -> builder.put(k, v));
+return new Environment(builder.build());
}
};

View File

@@ -145,6 +145,11 @@ public class LicensingTribeIT extends ESIntegTestCase {
});
}
+public void testDummy() throws Exception {
+// this test is here so that testLicensePropagateToTribeNode's assumption
+// doesn't leave this test suite with no tests to run, which would trigger a build failure
+}
private static final String PLATINUM_LICENSE = "{\"license\":{\"uid\":\"1\",\"type\":\"platinum\"," +
"\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1," +
"\"issued_to\":\"issuedTo\",\"issuer\":\"issuer\"," +