Merge branch 'master' into feature/sql

Original commit: elastic/x-pack-elasticsearch@75f438cd4a
This commit is contained in:
Nik Everett 2017-08-10 13:10:36 -04:00
commit 9d81805616
33 changed files with 277 additions and 137 deletions

View File

@ -135,7 +135,66 @@ the correct value for your environment, you may consider setting the value to
`*` which will allow automatic creation of all indices.
=============================================================================
. Start {es}.
. Change the passwords for the built-in users. For more information,
see {xpack-ref}/setting-up-authentication.html[Setting Up User Authentication].
.. If you have not already done so, bootstrap the password for the `elastic`
user by placing a password in the keystore of at least one node.
+
--
[source,shell]
--------------------------------------------------
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add "bootstrap.password"
--------------------------------------------------
After you run the "add" command, you will be prompted to enter a password. This
bootstrap password is only intended to be a transient password that is used to
help you set all the built-in user passwords.
--
.. If you have more than one node, you must configure SSL/TLS for inter-node
communication. For more information, see
{xpack-ref}/encrypting-communications.html[Encrypting Communications].
... Generate node certificates. For example, you can use the `certgen` command
line tool to generate a certificate authority and signed certificates for your
nodes.
+
--
[source,shell]
----------------------------------------------------------
bin/x-pack/certgen
----------------------------------------------------------
This command generates a zip file with the CA certificate, private key, and
signed certificates and keys in the PEM format for each node that you specify.
If you want to use a commercial or organization-specific CA, you can use the
`-csr` parameter to generate certificate signing requests (CSR) for the nodes
in your cluster.
TIP: For easier setup, use the node name as the instance name when you run
this tool.
--
... Copy the certificate data into a directory within the {es} configuration
directory. For example,
`/home/es/config/certs`.
... Add the following information to the `elasticsearch.yml` on all nodes:
+
--
[source,yaml]
-----------------------------------------------------------
xpack.ssl.key: certs/${node.name}/${node.name}.key <1>
xpack.ssl.certificate: certs/${node.name}/${node.name}.crt <2>
xpack.ssl.certificate_authorities: certs/ca/ca.crt <3>
xpack.security.authc.token.enabled: false <4>
-----------------------------------------------------------
<1> If this path does not exist on every node or the file name does not match
the `node.name` configuration setting, you must specify the full path to the
node key file.
<2> Alternatively, specify the full path to the node certificate.
<3> Alternatively, specify the full path to the CA certificate.
<4> Disables the built-in token service.
--
.. Start {es}.
+
--
[source,shell]
@ -144,22 +203,24 @@ bin/elasticsearch
----------------------------------------------------------
--
For more information, see
{kibana-ref}/installing-xpack-kb.html[Installing {xpack} on {kib}] and
{logstash-ref}/installing-xpack-log.html[Installing {xpack} on Logstash].
.. Set the passwords for all built-in users. You can update passwords from the
**Management > Users** UI in {kib}, use the `setup-passwords` tool, or use the
security user API. For example:
+
--
[source,shell]
--------------------------------------------------
bin/x-pack/setup-passwords interactive
--------------------------------------------------
If you prefer to have randomly generated passwords, specify `auto` instead of
`interactive`. If the node is not listening on `http://localhost:9200`, use the
`-u` parameter to specify the appropriate URL.
--
[IMPORTANT]
=============================================================================
SSL/TLS encryption is disabled by default, which means user credentials are
passed in the clear. **Do not deploy to production without enabling encryption!**
For more information, see {xpack-ref}/encrypting-communications.html[Encrypting
Communications].
. {kibana-ref}/installing-xpack-kb.html[Install {xpack} on {kib}].
. {logstash-ref}/installing-xpack-log.html[Install {xpack} on Logstash].
You must also **change the passwords for the built-in `elastic` user and the
`kibana` user that enables {kib} to communicate with {es} before
deploying to production**. For more information,
see {xpack-ref}/setting-up-authentication.html[Setting Up User Authentication].
=============================================================================
[float]
[[xpack-package-installation]]

View File

@ -56,7 +56,7 @@ The aggregations are defined in the {dfeed} as follows:
PUT _xpack/ml/datafeeds/datafeed-farequote
{
"job_id":"farequote",
"indexes": ["farequote"],
"indices": ["farequote"],
"types": ["response"],
"aggregations": {
"buckets": {

View File

@ -8,7 +8,7 @@
directly to configure and access {xpack} features.
* <<info-api, Info API>>
* Graph <<graph-explore-api, Explore API>>
* <<graph-explore-api, Graph Explore API>>
* <<ml-apis, Machine Learning APIs>>
* <<security-api,Security APIs>>
* <<watcher-api, Watcher APIs>>

View File

@ -2,7 +2,7 @@
[[ml-close-job]]
=== Close Jobs
The close job API enables you to close a job.
The close job API enables you to close one or more jobs.
A job can be opened and closed multiple times throughout its lifecycle.
A closed job cannot receive data or perform analysis
@ -11,12 +11,18 @@ operations, but you can still explore and navigate results.
==== Request
`POST _xpack/ml/anomaly_detectors/<job_id>/_close`
`POST _xpack/ml/anomaly_detectors/<job_id>/_close` +
`POST _xpack/ml/anomaly_detectors/<job_id>,<job_id>/_close` +
`POST _xpack/ml/anomaly_detectors/_all/_close` +
==== Description
//A job can be closed once all data has been analyzed.
You can close multiple jobs in a single API request by using a group name, a
comma-separated list of jobs, or a wildcard expression. You can close all jobs
by using `_all` or by specifying `*` as the `<job_id>`.
When you close a job, it runs housekeeping tasks such as pruning the model history,
flushing buffers, calculating final results, and persisting the model snapshots.
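As a sketch, the request forms above can be exercised with `curl` (the job names `job-one` and `job-two` and the group name `cloud-apps` are hypothetical, and an unsecured cluster with {xpack} installed is assumed to be running at `http://localhost:9200`):

```shell
# Close two jobs in one request with a comma-separated list
curl -X POST "http://localhost:9200/_xpack/ml/anomaly_detectors/job-one,job-two/_close"

# Close every job in a group (a group name is accepted wherever a job_id is)
curl -X POST "http://localhost:9200/_xpack/ml/anomaly_detectors/cloud-apps/_close"

# Close all jobs
curl -X POST "http://localhost:9200/_xpack/ml/anomaly_detectors/_all/_close"
```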
@ -40,8 +46,9 @@ results the job might have recently produced or might produce in the future.
==== Path Parameters
`job_id` (required)::
(string) Identifier for the job
`job_id`::
(string) Identifier for the job. It can be a job identifier, a group name, or
a wildcard expression.
==== Query Parameters
@ -59,7 +66,6 @@ results the job might have recently produced or might produce in the future.
You must have `manage_ml` or `manage` cluster privileges to use this API.
For more information, see {xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples

View File

@ -8,13 +8,24 @@ The get {dfeed} statistics API enables you to retrieve usage information for
==== Request
`GET _xpack/ml/datafeeds/_stats` +
`GET _xpack/ml/datafeeds/<feed_id>/_stats`
`GET _xpack/ml/datafeeds/<feed_id>/_stats` +
`GET _xpack/ml/datafeeds/<feed_id>,<feed_id>/_stats` +
`GET _xpack/ml/datafeeds/_stats` +
`GET _xpack/ml/datafeeds/_stats/_all` +
==== Description
You can get statistics for multiple {dfeeds} in a single API request by using a
comma-separated list of {dfeeds} or a wildcard expression. You can get
statistics for all {dfeeds} by using `_all`, by specifying `*` as the
`<feed_id>`, or by omitting the `<feed_id>`.
If the {dfeed} is stopped, the only information you receive is the
`datafeed_id` and the `state`.
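For illustration, the variants above as `curl` requests (the {dfeed} name `datafeed-it-ops` is hypothetical, and an unsecured local cluster is assumed):

```shell
# Statistics for a single datafeed
curl -X GET "http://localhost:9200/_xpack/ml/datafeeds/datafeed-it-ops/_stats"

# Statistics for all datafeeds: omit the feed_id entirely, or use _all
curl -X GET "http://localhost:9200/_xpack/ml/datafeeds/_stats"
curl -X GET "http://localhost:9200/_xpack/ml/datafeeds/_stats/_all"
```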
@ -22,9 +33,9 @@ If the {dfeed} is stopped, the only information you receive is the
==== Path Parameters
`feed_id`::
(string) Identifier for the {dfeed}.
This parameter does not support wildcards, but you can specify `_all` or
omit the `feed_id` to get information about all {dfeeds}.
(string) Identifier for the {dfeed}. It can be a {dfeed} identifier or a
wildcard expression. If you do not specify one of these options, the API
returns statistics for all {dfeeds}.
==== Results
@ -41,7 +52,6 @@ The API returns the following information:
You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
privileges to use this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples

View File

@ -7,18 +7,29 @@ The get {dfeeds} API enables you to retrieve configuration information for
==== Request
`GET _xpack/ml/datafeeds/<feed_id>` +
`GET _xpack/ml/datafeeds/<feed_id>,<feed_id>` +
`GET _xpack/ml/datafeeds/` +
`GET _xpack/ml/datafeeds/<feed_id>`
`GET _xpack/ml/datafeeds/_all` +
//===== Description
===== Description
You can get information for multiple {dfeeds} in a single API request by using a
comma-separated list of {dfeeds} or a wildcard expression. You can get
information for all {dfeeds} by using `_all`, by specifying `*` as the
`<feed_id>`, or by omitting the `<feed_id>`.
==== Path Parameters
`feed_id`::
(string) Identifier for the {dfeed}.
This parameter does not support wildcards, but you can specify `_all` or
omit the `feed_id` to get information about all {dfeeds}.
(string) Identifier for the {dfeed}. It can be a {dfeed} identifier or a
wildcard expression. If you do not specify one of these options, the API
returns information about all {dfeeds}.
==== Results
@ -35,7 +46,6 @@ The API returns the following information:
You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
privileges to use this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples
@ -61,7 +71,7 @@ The API returns the following results:
"job_id": "it-ops-kpi",
"query_delay": "60s",
"frequency": "150s",
"indexes": [
"indices": [
"it_ops_metrics"
],
"types": [
@ -74,36 +84,9 @@ The API returns the following results:
"boost": 1
}
},
"aggregations": {
"buckets": {
"date_histogram": {
"field": "@timestamp",
"interval": 30000,
"offset": 0,
"order": {
"_key": "asc"
},
"keyed": false,
"min_doc_count": 0
},
"aggregations": {
"events_per_min": {
"sum": {
"field": "events_per_min"
}
},
"@timestamp": {
"max": {
"field": "@timestamp"
}
}
}
}
},
"scroll_size": 1000,
"chunking_config": {
"mode": "manual",
"time_span": "30000000ms"
"mode": "auto"
}
}
]

View File

@ -7,18 +7,31 @@ The get jobs API enables you to retrieve usage information for jobs.
==== Request
`GET _xpack/ml/anomaly_detectors/_stats` +
`GET _xpack/ml/anomaly_detectors/<job_id>/_stats`
//===== Description
`GET _xpack/ml/anomaly_detectors/<job_id>,<job_id>/_stats` +
`GET _xpack/ml/anomaly_detectors/_stats` +
`GET _xpack/ml/anomaly_detectors/_stats/_all` +
===== Description
You can get statistics for multiple jobs in a single API request by using a
group name, a comma-separated list of jobs, or a wildcard expression. You can
get statistics for all jobs by using `_all`, by specifying `*` as the
`<job_id>`, or by omitting the `<job_id>`.
==== Path Parameters
`job_id`::
(string) A required identifier for the job.
This parameter does not support wildcards, but you can specify `_all` or omit
the `job_id` to get information about all jobs.
(string) An identifier for the job. It can be a job identifier, a group name,
or a wildcard expression. If you do not specify one of these options, the API
returns statistics for all jobs.
==== Results
@ -35,7 +48,6 @@ The API returns the following information:
You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
privileges to use this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples

View File

@ -7,19 +7,29 @@ The get jobs API enables you to retrieve configuration information for jobs.
==== Request
`GET _xpack/ml/anomaly_detectors/<job_id>` +
`GET _xpack/ml/anomaly_detectors/<job_id>,<job_id>` +
`GET _xpack/ml/anomaly_detectors/` +
`GET _xpack/ml/anomaly_detectors/<job_id>`
`GET _xpack/ml/anomaly_detectors/_all`
===== Description
You can get information for multiple jobs in a single API request by using a
group name, a comma-separated list of jobs, or a wildcard expression. You can
get information for all jobs by using `_all`, by specifying `*` as the
`<job_id>`, or by omitting the `<job_id>`.
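The options above can be sketched as `curl` requests (job names are hypothetical, and an unsecured local cluster is assumed):

```shell
# Configuration for two jobs by name
curl -X GET "http://localhost:9200/_xpack/ml/anomaly_detectors/job-one,job-two"

# Configuration for every job whose identifier matches a wildcard expression
curl -X GET "http://localhost:9200/_xpack/ml/anomaly_detectors/job-*"

# Configuration for all jobs
curl -X GET "http://localhost:9200/_xpack/ml/anomaly_detectors/_all"
```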
//===== Description
==== Path Parameters
`job_id`::
(string) Identifier for the job.
This parameter does not support wildcards, but you can specify `_all` or omit
the `job_id` to get information about all jobs.
(string) Identifier for the job. It can be a job identifier, a group name,
or a wildcard expression. If you do not specify one of these options, the API
returns information for all jobs.
==== Results
@ -35,7 +45,6 @@ The API returns the following information:
You must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster
privileges to use this API. For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples

View File

@ -2,19 +2,34 @@
[[ml-stop-datafeed]]
=== Stop {dfeeds-cap}
The stop {dfeeds} API enables you to stop one or more {dfeeds}.
A {dfeed} that is stopped ceases to retrieve data from {es}.
A {dfeed} can be started and stopped multiple times throughout its lifecycle.
==== Request
`POST _xpack/ml/datafeeds/<feed_id>/_stop`
`POST _xpack/ml/datafeeds/<feed_id>/_stop` +
`POST _xpack/ml/datafeeds/<feed_id>,<feed_id>/_stop` +
`POST _xpack/ml/datafeeds/_all/_stop`
//TBD: Can there be spaces between the items in the list?
===== Description
You can stop multiple {dfeeds} in a single API request by using a
comma-separated list of {dfeeds} or a wildcard expression. You can stop all
{dfeeds} by using `_all` or by specifying `*` as the `<feed_id>`.
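As a sketch (the {dfeed} names are hypothetical, and an unsecured local cluster is assumed):

```shell
# Stop two datafeeds in one request
curl -X POST "http://localhost:9200/_xpack/ml/datafeeds/datafeed-one,datafeed-two/_stop"

# Stop all datafeeds
curl -X POST "http://localhost:9200/_xpack/ml/datafeeds/_all/_stop"
```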
//===== Description
==== Path Parameters
`feed_id` (required)::
(string) Identifier for the {dfeed}
`feed_id`::
(string) Identifier for the {dfeed}. It can be a {dfeed} identifier or a
wildcard expression.
==== Request Body
@ -31,7 +46,7 @@ A {dfeed} can be started and stopped multiple times throughout its lifecycle.
You must have `manage_ml` or `manage` cluster privileges to use this API.
For more information, see
{xpack-ref}/security-privileges.html[Security Privileges].
//<<privileges-list-cluster>>.
==== Examples

View File

@ -164,9 +164,9 @@ A role is defined by the following JSON structure:
privileges effectively mean no index level permissions).
[[valid-role-name]]
NOTE: Role names must be at least 1 and no more than 1024 characters. They can
contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces,
punctuation, and printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
NOTE: Role names must be at least 1 and no more than 1024 characters. They can
contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces,
punctuation, and printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
Leading or trailing whitespace is not allowed.
The following describes the structure of an indices permissions entry:
@ -406,7 +406,7 @@ click_admins:
-----------------------------------
{security} continuously monitors the `roles.yml` file and automatically picks
up and apples any changes to it.
up and applies any changes to it.
include::authorization/alias-privileges.asciidoc[]

View File

@ -11,7 +11,6 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.XPackBuild;
@ -77,7 +76,7 @@ public class XPackInfoResponse extends ActionResponse {
this.featureSetsInfo = in.readOptionalWriteable(FeatureSetsInfo::new);
}
public static class LicenseInfo implements ToXContent, Writeable {
public static class LicenseInfo implements ToXContentObject, Writeable {
private final String uid;
private final String type;
@ -225,7 +224,7 @@ public class XPackInfoResponse extends ActionResponse {
}
}
public static class FeatureSet implements ToXContent, Writeable {
public static class FeatureSet implements ToXContentObject, Writeable {
private final String name;
@Nullable private final String description;

View File

@ -11,7 +11,7 @@ import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
@ -22,7 +22,7 @@ import java.net.Proxy;
import java.net.UnknownHostException;
import java.util.Objects;
public class HttpProxy implements ToXContent, Streamable {
public class HttpProxy implements ToXContentFragment, Streamable {
public static final HttpProxy NO_PROXY = new HttpProxy(null, null);

View File

@ -10,7 +10,7 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
@ -20,7 +20,7 @@ import java.util.Objects;
/**
* Information about deprecated items
*/
public class DeprecationIssue implements Writeable, ToXContent {
public class DeprecationIssue implements Writeable, ToXContentObject {
public enum Level implements Writeable {
NONE,

View File

@ -29,7 +29,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.threadpool.ThreadPool;
@ -148,7 +147,7 @@ public class GetDatafeedsStatsAction extends Action<GetDatafeedsStatsAction.Requ
public static class Response extends ActionResponse implements ToXContentObject {
public static class DatafeedStats implements ToXContent, Writeable {
public static class DatafeedStats implements ToXContentObject, Writeable {
private final String datafeedId;
private final DatafeedState datafeedState;

View File

@ -33,7 +33,6 @@ import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.AtomicArray;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.tasks.Task;
@ -175,7 +174,7 @@ public class GetJobsStatsAction extends Action<GetJobsStatsAction.Request, GetJo
public static class Response extends BaseTasksResponse implements ToXContentObject {
public static class JobStats implements ToXContent, Writeable {
public static class JobStats implements ToXContentObject, Writeable {
private final String jobId;
private DataCounts dataCounts;
@Nullable

View File

@ -14,7 +14,6 @@ import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.ml.utils.ExceptionsHelper;
@ -342,7 +341,7 @@ public class JobUpdate implements Writeable, ToXContentObject {
modelSnapshotId);
}
public static class DetectorUpdate implements Writeable, ToXContent {
public static class DetectorUpdate implements Writeable, ToXContentObject {
@SuppressWarnings("unchecked")
public static final ConstructingObjectParser<DetectorUpdate, Void> PARSER =
new ConstructingObjectParser<>("detector_update", a -> new DetectorUpdate((int) a[0], (String) a[1],

View File

@ -12,7 +12,7 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.rest.RestStatus;
@ -73,7 +73,7 @@ public class MonitoringBulkResponse extends ActionResponse {
out.writeOptionalWriteable(error);
}
public static class Error implements Writeable, ToXContent {
public static class Error implements Writeable, ToXContentObject {
private final Throwable cause;
private final RestStatus status;

View File

@ -15,10 +15,6 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import javax.mail.MessagingException;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
@ -30,6 +26,11 @@ import java.util.List;
import java.util.Locale;
import java.util.Map;
import javax.mail.MessagingException;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import static java.util.Collections.unmodifiableMap;
public class Email implements ToXContentObject {
@ -491,7 +492,7 @@ public class Email implements ToXContentObject {
}
}
public static class AddressList implements Iterable<Address>, ToXContent {
public static class AddressList implements Iterable<Address>, ToXContentObject {
public static final AddressList EMPTY = new AddressList(Collections.<Address>emptyList());

View File

@ -6,7 +6,7 @@
package org.elasticsearch.xpack.notification.email.attachment;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
@ -14,7 +14,7 @@ import java.util.Collection;
import java.util.Collections;
import java.util.Objects;
public class EmailAttachments implements ToXContent {
public class EmailAttachments implements ToXContentFragment {
public static final EmailAttachments EMPTY_ATTACHMENTS = new EmailAttachments(
Collections.<EmailAttachmentParser.EmailAttachment>emptyList());

View File

@ -10,6 +10,7 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.xpack.common.text.TextTemplate;
@ -20,7 +21,7 @@ import java.util.Locale;
import java.util.Map;
import java.util.Objects;
public class IncidentEventContext implements ToXContent {
public class IncidentEventContext implements ToXContentObject {
enum Type {
LINK, IMAGE

View File

@ -12,14 +12,14 @@ import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.List;
public class ClearRealmCacheResponse extends BaseNodesResponse<ClearRealmCacheResponse.Node> implements ToXContent {
public class ClearRealmCacheResponse extends BaseNodesResponse<ClearRealmCacheResponse.Node> implements ToXContentFragment {
public ClearRealmCacheResponse() {
}

View File

@ -12,7 +12,7 @@ import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
@ -22,7 +22,7 @@ import java.util.List;
/**
* The response object that will be returned when clearing the cache of native roles
*/
public class ClearRolesCacheResponse extends BaseNodesResponse<ClearRolesCacheResponse.Node> implements ToXContent {
public class ClearRolesCacheResponse extends BaseNodesResponse<ClearRolesCacheResponse.Node> implements ToXContentFragment {
public ClearRolesCacheResponse() {
}

View File

@ -8,7 +8,7 @@ package org.elasticsearch.xpack.security.action.user;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
@ -17,7 +17,7 @@ import java.io.IOException;
* Response when deleting a native user. Returns a single boolean field for whether the user was
* found (and deleted) or not found.
*/
public class DeleteUserResponse extends ActionResponse implements ToXContent {
public class DeleteUserResponse extends ActionResponse implements ToXContentObject {
private boolean found;

View File

@ -5,14 +5,6 @@
*/
package org.elasticsearch.xpack.security.authc.support.mapper.expressiondsl;
import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Predicate;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
@ -21,6 +13,14 @@ import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.security.support.Automatons;
import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Predicate;
/**
* An expression that evaluates to <code>true</code> if a field (map element) matches
* the provided values. A <em>field</em> expression may have more than one provided value, in which

View File

@ -17,7 +17,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
@ -473,7 +472,7 @@ public class RoleDescriptor implements ToXContentObject {
* A class representing permissions for a group of indices mapped to
* privileges, field permissions, and a query.
*/
public static class IndicesPrivileges implements ToXContent, Streamable {
public static class IndicesPrivileges implements ToXContentObject, Streamable {
private static final IndicesPrivileges[] NONE = new IndicesPrivileges[0];

View File

@ -187,7 +187,7 @@ public class ActionStatus implements ToXContentObject {
return new ActionStatus(ackStatus, lastExecution, lastSuccessfulExecution, lastThrottle);
}
public static class AckStatus implements ToXContent {
public static class AckStatus implements ToXContentObject {
public enum State {
AWAITS_SUCCESSFUL_EXECUTION((byte) 1),
@ -291,7 +291,7 @@ public class ActionStatus implements ToXContentObject {
}
}
public static class Execution implements ToXContent {
public static class Execution implements ToXContentObject {
public static Execution successful(DateTime timestamp) {
return new Execution(timestamp, true, null);

View File

@ -19,6 +19,7 @@ import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.routing.Preference;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.MapBuilder;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.metrics.MeanMetric;
@ -42,6 +43,7 @@ import org.elasticsearch.xpack.watcher.input.Input;
import org.elasticsearch.xpack.watcher.transform.Transform;
import org.elasticsearch.xpack.watcher.trigger.TriggerEvent;
import org.elasticsearch.xpack.watcher.watch.Watch;
import org.elasticsearch.xpack.watcher.watch.WatchStatus;
import org.joda.time.DateTime;
import java.io.IOException;
@ -333,7 +335,12 @@ public class ExecutionService extends AbstractComponent {
public void updateWatchStatus(Watch watch) throws IOException {
// at the moment we store the status together with the watch,
// so we just need to update the watch itself
ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(Watch.INCLUDE_STATUS_KEY, "true"));
// we do not want to update the status.state field, as it might have been deactivated in the meantime
Map<String, String> parameters = MapBuilder.<String, String>newMapBuilder()
.put(Watch.INCLUDE_STATUS_KEY, "true")
.put(WatchStatus.INCLUDE_STATE, "false")
.immutableMap();
ToXContent.MapParams params = new ToXContent.MapParams(parameters);
XContentBuilder source = JsonXContent.contentBuilder().
startObject()
.field(Watch.Field.STATUS.getPreferredName(), watch.status(), params)

View File

@ -6,14 +6,14 @@
package org.elasticsearch.xpack.watcher.transform;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.xpack.watcher.execution.WatchExecutionContext;
import org.elasticsearch.xpack.watcher.watch.Payload;
import java.io.IOException;
public abstract class ExecutableTransform<T extends Transform, R extends Transform.Result> implements ToXContent {
public abstract class ExecutableTransform<T extends Transform, R extends Transform.Result> implements ToXContentFragment {
protected final T transform;
protected final Logger logger;

View File

@ -38,6 +38,8 @@ import static org.joda.time.DateTimeZone.UTC;
public class WatchStatus implements ToXContentObject, Streamable {
public static final String INCLUDE_STATE = "include_state";
private State state;
@Nullable private DateTime lastChecked;
@ -209,7 +211,9 @@ public class WatchStatus implements ToXContentObject, Streamable {
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
-builder.field(Field.STATE.getPreferredName(), state, params);
+if (params.paramAsBoolean(INCLUDE_STATE, true)) {
+    builder.field(Field.STATE.getPreferredName(), state, params);
+}
if (lastChecked != null) {
builder.field(Field.LAST_CHECKED.getPreferredName(), lastChecked);
}

@@ -10,12 +10,19 @@ import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.test.ESTestCase;
@@ -33,10 +40,13 @@ import org.elasticsearch.xpack.watcher.history.HistoryStore;
import org.elasticsearch.xpack.watcher.history.WatchRecord;
import org.elasticsearch.xpack.watcher.input.ExecutableInput;
import org.elasticsearch.xpack.watcher.input.Input;
import org.elasticsearch.xpack.watcher.input.none.ExecutableNoneInput;
import org.elasticsearch.xpack.watcher.support.xcontent.ObjectPath;
import org.elasticsearch.xpack.watcher.support.xcontent.XContentSource;
import org.elasticsearch.xpack.watcher.transform.ExecutableTransform;
import org.elasticsearch.xpack.watcher.transform.Transform;
import org.elasticsearch.xpack.watcher.trigger.TriggerEvent;
import org.elasticsearch.xpack.watcher.trigger.manual.ManualTrigger;
import org.elasticsearch.xpack.watcher.trigger.manual.ManualTriggerEvent;
import org.elasticsearch.xpack.watcher.trigger.schedule.ScheduleTriggerEvent;
import org.elasticsearch.xpack.watcher.watch.Payload;
@@ -53,7 +63,9 @@ import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import static java.util.Arrays.asList;
import static java.util.Collections.emptyMap;
@@ -941,6 +953,32 @@ public class ExecutionServiceTests extends ESTestCase {
assertThat(queuedWatches.get(queuedWatches.size() - 1).watchId(), is("_id0"));
}
public void testUpdateWatchStatusDoesNotUpdateState() throws Exception {
WatchStatus status = new WatchStatus(DateTime.now(UTC), Collections.emptyMap());
Watch watch = new Watch("_id", new ManualTrigger(), new ExecutableNoneInput(logger), AlwaysCondition.INSTANCE, null, null,
Collections.emptyList(), null, status);
final AtomicBoolean assertionsTriggered = new AtomicBoolean(false);
doAnswer(invocation -> {
UpdateRequest request = (UpdateRequest) invocation.getArguments()[0];
try (XContentParser parser =
XContentFactory.xContent(XContentType.JSON).createParser(NamedXContentRegistry.EMPTY, request.doc().source())) {
Map<String, Object> map = parser.map();
Map<String, String> state = ObjectPath.eval("status.state", map);
assertThat(state, is(nullValue()));
assertionsTriggered.set(true);
}
PlainActionFuture<UpdateResponse> future = PlainActionFuture.newFuture();
future.onResponse(new UpdateResponse());
return future;
}).when(client).update(any());
executionService.updateWatchStatus(watch);
assertThat(assertionsTriggered.get(), is(true));
}
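The `ObjectPath.eval("status.state", map)` call in the test above resolves a dotted path against nested maps. A minimal stand-alone version of that lookup might look like the sketch below; `PathDemo` is a hypothetical name, and the real `ObjectPath` also handles lists and array indices:

```java
import java.util.HashMap;
import java.util.Map;

public class PathDemo {

    // Walk a dotted path ("status.state") through nested maps,
    // returning null when any segment is missing or not a map.
    @SuppressWarnings("unchecked")
    static Object eval(String path, Map<String, Object> map) {
        Object current = map;
        for (String key : path.split("\\.")) {
            if (!(current instanceof Map)) {
                return null;
            }
            current = ((Map<String, Object>) current).get(key);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> status = new HashMap<>();
        status.put("last_checked", "2017-08-10");
        // note: no "state" key, as the update request should not carry it
        Map<String, Object> doc = new HashMap<>();
        doc.put("status", status);
        System.out.println(eval("status.last_checked", doc)); // present
        System.out.println(eval("status.state", doc));        // absent -> null
    }
}
```

Returning `null` for a missing segment is exactly what lets the test assert `is(nullValue())` instead of catching an exception.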
private WatchExecutionContext createMockWatchExecutionContext(String watchId, DateTime executionTime) {
WatchExecutionContext ctx = mock(WatchExecutionContext.class);
when(ctx.id()).thenReturn(new Wid(watchId, executionTime));

@@ -15,7 +15,6 @@ import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.test.http.MockResponse;
import org.elasticsearch.test.http.MockWebServer;
-import org.elasticsearch.test.junit.annotations.TestLogging;
import org.elasticsearch.xpack.common.http.HttpMethod;
import org.elasticsearch.xpack.common.http.HttpRequestTemplate;
import org.elasticsearch.xpack.watcher.condition.AlwaysCondition;
@@ -75,7 +74,6 @@ public class HistoryTemplateHttpMappingsTests extends AbstractWatcherIntegration
return false; // remove security noise from this test
}
-@TestLogging("org.elasticsearch.test.http:TRACE")
public void testHttpFields() throws Exception {
PutWatchResponse putWatchResponse = watcherClient().preparePutWatch("_id").setSource(watchBuilder()
.trigger(schedule(interval("5s")))
@@ -128,6 +126,7 @@ public class HistoryTemplateHttpMappingsTests extends AbstractWatcherIntegration
assertThat(webServer.requests().get(1).getUri().getPath(), is("/webhook/path"));
}
+@AwaitsFix(bugUrl = "https://github.com/elastic/x-pack-elasticsearch/issues/2222")
public void testExceptionMapping() {
// delete all history indices to ensure that we start with a fresh mapping
assertAcked(client().admin().indices().prepareDelete(HistoryStore.INDEX_PREFIX + "*"));

@@ -26,10 +26,9 @@ import java.util.stream.Stream;
public class TimeWarpedWatcher extends Watcher {
-private static final Logger logger = Loggers.getLogger(TimeWarpedWatcher.class);
public TimeWarpedWatcher(Settings settings) {
super(settings);
+Logger logger = Loggers.getLogger(TimeWarpedWatcher.class, settings);
logger.info("using time warped watchers plugin");
}

@@ -6,7 +6,7 @@
package org.elasticsearch.xpack.watcher.trigger.schedule;
import org.elasticsearch.common.util.CollectionUtils;
-import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.watcher.trigger.schedule.support.DayOfWeek;
@@ -357,7 +357,7 @@ public abstract class ScheduleTestCase extends ESTestCase {
return randomBoolean() ? randomIntBetween(24, 40) : randomIntBetween(-60, -1);
}
-static class HourAndMinute implements ToXContent {
+static class HourAndMinute implements ToXContentObject {
int hour;
int minute;
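The switch from `ToXContent` to `ToXContentObject` matters because an object is responsible for its own enclosing braces, while a fragment only writes fields into a container its caller has already opened. A rough sketch of that contract, using stand-in interfaces and string-built JSON rather than the real Elasticsearch types:

```java
public class XContentDemo {

    // Fragment: writes bare fields into a builder someone else manages.
    interface Fragment {
        StringBuilder toX(StringBuilder b);
    }

    // Object: wraps its own fields in { ... } itself.
    interface ObjectContent extends Fragment {
        default String render() {
            StringBuilder b = new StringBuilder("{");
            toX(b);
            return b.append("}").toString();
        }
    }

    // HourAndMinute-style value that serializes as a complete JSON object.
    static class HourAndMinute implements ObjectContent {
        final int hour;
        final int minute;

        HourAndMinute(int hour, int minute) {
            this.hour = hour;
            this.minute = minute;
        }

        @Override
        public StringBuilder toX(StringBuilder b) {
            return b.append("\"hour\":").append(hour)
                    .append(",\"minute\":").append(minute);
        }
    }

    public static void main(String[] args) {
        System.out.println(new HourAndMinute(12, 30).render());
    }
}
```

Declaring the type as an object rather than a fragment lets callers serialize it on its own, without first opening a surrounding object the way the old `ToXContent` usage required.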