[DOCS] Change "data feed" to "datafeed" in Machine Learning documentation (elastic/x-pack-elasticsearch#1277)

* [DOCS] Add xpackml attribute to XPack Reference
* [DOCS] Use attribute for datafeed terms

Original commit: elastic/x-pack-elasticsearch@f37bf48ee4
This commit is contained in:
parent 3a0bc504a9
commit 9b2fb6ac16
@@ -35,15 +35,15 @@ The main {ml} resources can be accessed with a variety of endpoints:
 [[ml-api-datafeeds]]
 === /datafeeds/
 
-* <<ml-put-datafeed,PUT /datafeeds/<datafeed_id+++>+++>>: Create a data feed
-* <<ml-start-datafeed,POST /datafeeds/<datafeed_id>/_start>>: Start a data feed
-* <<ml-get-datafeed,GET /datafeeds>>: List data feeds
-* <<ml-get-datafeed,GET /datafeeds/<datafeed_id+++>+++>>: Get data feed details
-* <<ml-get-datafeed-stats,GET /datafeeds/<datafeed_id>/_stats>>: Get statistical information for data feeds
-* <<ml-preview-datafeed,GET /datafeeds/<datafeed_id>/_preview>>: Get a preview of a data feed
-* <<ml-update-datafeed,POST /datafeeds/<datafeed_id>/_update>>: Update certain settings for a data feed
-* <<ml-stop-datafeed,POST /datafeeds/<datafeed_id>/_stop>>: Stop a data feed
-* <<ml-delete-datafeed,DELETE /datafeeds/<datafeed_id+++>+++>>: Delete data feed
+* <<ml-put-datafeed,PUT /datafeeds/<datafeed_id+++>+++>>: Create a {dfeed}
+* <<ml-start-datafeed,POST /datafeeds/<datafeed_id>/_start>>: Start a {dfeed}
+* <<ml-get-datafeed,GET /datafeeds>>: List {dfeeds}
+* <<ml-get-datafeed,GET /datafeeds/<datafeed_id+++>+++>>: Get {dfeed} details
+* <<ml-get-datafeed-stats,GET /datafeeds/<datafeed_id>/_stats>>: Get statistical information for {dfeeds}
+* <<ml-preview-datafeed,GET /datafeeds/<datafeed_id>/_preview>>: Get a preview of a {dfeed}
+* <<ml-update-datafeed,POST /datafeeds/<datafeed_id>/_update>>: Update certain settings for a {dfeed}
+* <<ml-stop-datafeed,POST /datafeeds/<datafeed_id>/_stop>>: Stop a {dfeed}
+* <<ml-delete-datafeed,DELETE /datafeeds/<datafeed_id+++>+++>>: Delete {dfeed}
 
 [float]
 [[ml-api-results]]
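For context, these endpoints follow the Console conventions used in the examples later in this commit. A minimal sketch of one such call (the `datafeed-it-ops-kpi` ID is borrowed from those examples):

[source,js]
--------------------------------------------------
GET _xpack/ml/datafeeds/datafeed-it-ops-kpi/_stats
--------------------------------------------------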
@@ -2,14 +2,14 @@
 == Getting Started
 
 ////
-{xpack} {ml} features automatically detect:
+{xpackml} features automatically detect:
 * Anomalies in single or multiple time series
 * Outliers in a population (also known as _entity profiling_)
 * Rare events (also known as _log categorization_)
 
 This tutorial focuses on an anomaly detection scenario in a single time series.
 ////
-Ready to get some hands-on experience with the {xpack} {ml} features? This
+Ready to get some hands-on experience with the {xpackml} features? This
 tutorial shows you how to:
 
 * Load a sample data set into {es}
@@ -40,7 +40,7 @@ viewing jobs +
 //All {ml} features are available to use as an API, however this tutorial
 //will focus on using the {ml} tab in the {kib} UI.
 
-WARNING: The {xpack} {ml} features are in beta and subject to change.
+WARNING: The {xpackml} features are in beta and subject to change.
 Beta features are not subject to the same support SLA as GA features,
 and deployment in production is at your own risk.
@@ -66,9 +66,9 @@ activity related to jobs, see <<ml-settings>>.
 [[ml-gs-users]]
 ==== Users, Roles, and Privileges
 
-The {xpack} {ml} features implement cluster privileges and built-in roles to
+The {xpackml} features implement cluster privileges and built-in roles to
 make it easier to control which users have authority to view and manage the jobs,
-data feeds, and results.
+{dfeeds}, and results.
 
 By default, you can perform all of the steps in this tutorial by using the
 built-in `elastic` super user. The default password for the `elastic` user is
@@ -87,7 +87,7 @@ For more information, see <<built-in-roles>> and <<privileges-list-cluster>>.
 
 For the purposes of this tutorial, we provide sample data that you can play with
 and search in {es}. When you consider your own data, however, it's important to
-take a moment and think about where the {xpack} {ml} features will be most
+take a moment and think about where the {xpackml} features will be most
 impactful.
 
 The first consideration is that it must be time series data. The {ml} features
@@ -104,12 +104,12 @@ insights.
 
 The final consideration is where the data is located. This tutorial assumes that
 your data is stored in {es}. It guides you through the steps required to create
-a _data feed_ that passes data to a job. If your own data is outside of {es},
+a _{dfeed}_ that passes data to a job. If your own data is outside of {es},
 analysis is still possible by using a post data API.
 
-IMPORTANT: If you want to create {ml} jobs in {kib}, you must use data feeds.
+IMPORTANT: If you want to create {ml} jobs in {kib}, you must use {dfeeds}.
 That is to say, you must store your input data in {es}. When you create
-a job, you select an existing index pattern and {kib} configures the data feed
+a job, you select an existing index pattern and {kib} configures the {dfeed}
 for you under the covers.
 
@@ -168,7 +168,7 @@ particular time. If your data is stored in {es}, you can generate
 this type of sum or average by using aggregations. One of the benefits of
 summarizing data this way is that {es} automatically distributes
 these calculations across your cluster. You can then feed this summarized data
-into {xpack} {ml} instead of raw results, which reduces the volume
+into {xpackml} instead of raw results, which reduces the volume
 of data that must be considered while detecting anomalies. For the purposes of
 this tutorial, however, these summary values are stored in {es},
 rather than created using the {ref}/search-aggregations.html[_aggregations framework_].
@@ -319,7 +319,7 @@ This tutorial uses {kib} to create jobs and view results, but you can
 alternatively use APIs to accomplish most tasks.
 For API reference information, see <<ml-apis>>.
 
-The {xpack} {ml} features in {kib} use pop-ups. You must configure your
+The {xpackml} features in {kib} use pop-ups. You must configure your
 web browser so that it does not block pop-up windows or create an
 exception for your Kibana URL.
 --
@@ -406,7 +406,7 @@ NOTE: Some functions such as `count` and `rare` do not require fields.
 interval that the analysis is aggregated into.
 +
 --
-The {xpack} {ml} features use the concept of a bucket to divide up the time series
+The {xpackml} features use the concept of a bucket to divide up the time series
 into batches for processing. For example, if you are monitoring
 the total number of requests in the system,
 //and receive a data point every 10 minutes
@@ -432,7 +432,7 @@ typical anomalies and the frequency at which alerting is required.
 . Determine whether you want to process all of the data or only part of it. If
 you want to analyze all of the existing data, click
 **Use full server-metrics* data**. If you want to see what happens when you
-stop and start data feeds and process additional data over time, click the time
+stop and start {dfeeds} and process additional data over time, click the time
 picker in the {kib} toolbar. Since the sample data spans a period of time
 between March 23, 2017 and April 22, 2017, click **Absolute**. Set the start
 time to March 23, 2017 and the end time to April 1, 2017, for example. Once
@@ -440,7 +440,7 @@ you've got the time range set up, click the **Go** button. +
 +
 --
 [role="screenshot"]
-image::images/ml-gs-job1-time.jpg["Setting the time range for the data feed"]
+image::images/ml-gs-job1-time.jpg["Setting the time range for the {dfeed}"]
 --
 +
 --
@@ -462,7 +462,7 @@ As the job is created, the graph is updated to give a visual representation of
 the progress of {ml} as the data is processed. This view is only available whilst the
 job is running.
 
-TIP: The `create_single_metric.sh` script creates a similar job and data feed by
+TIP: The `create_single_metric.sh` script creates a similar job and {dfeed} by
 using the {ml} APIs. You can download that script by clicking
 here: https://download.elastic.co/demos/machine_learning/gettingstarted/create_single_metric.sh[create_single_metric.sh]
 For API reference information, see <<ml-apis>>.
@@ -511,12 +511,12 @@ A closing job cannot accept further data.
 `failed`::: The job did not finish successfully due to an error.
 This situation can occur due to invalid input data.
 If the job has irrevocably failed, it must be force closed and then deleted.
-If the data feed can be corrected, the job can be closed and then re-opened.
+If the {dfeed} can be corrected, the job can be closed and then re-opened.
 
-Datafeed state::
-The status of the data feed, which can be one of the following values: +
-started::: The data feed is actively receiving data.
-stopped::: The data feed is stopped and will not receive data until it is
+{dfeed-cap} state::
+The status of the {dfeed}, which can be one of the following values: +
+started::: The {dfeed} is actively receiving data.
+stopped::: The {dfeed} is stopped and will not receive data until it is
 re-started.
 
 Latest timestamp::
@@ -527,22 +527,22 @@ If you click the arrow beside the name of job, you can show or hide additional
 information, such as the settings, configuration information, or messages for
 the job.
 
-You can also click one of the **Actions** buttons to start the data feed, edit
-the job or data feed, and clone or delete the job, for example.
+You can also click one of the **Actions** buttons to start the {dfeed}, edit
+the job or {dfeed}, and clone or delete the job, for example.
 
 [float]
 [[ml-gs-job1-datafeed]]
-==== Managing Data Feeds
+==== Managing {dfeeds-cap}
 
-A data feed can be started and stopped multiple times throughout its lifecycle.
-If you want to retrieve more data from {es} and the data feed is
-stopped, you must restart it.
+A {dfeed} can be started and stopped multiple times throughout its lifecycle.
+If you want to retrieve more data from {es} and the {dfeed} is stopped, you must
+restart it.
 
 For example, if you did not use the full data when you created the job, you can
-now process the remaining data by restarting the data feed:
+now process the remaining data by restarting the {dfeed}:
 
 . In the **Machine Learning** / **Job Management** tab, click the following
-button to start the data feed: image:images/ml-start-feed.jpg["Start data feed"]
+button to start the {dfeed}: image:images/ml-start-feed.jpg["Start {dfeed}"]
 
 . Choose a start time and end time. For example,
@@ -553,20 +553,20 @@ otherwise you might miss anomalies. +
 +
 --
 [role="screenshot"]
-image::images/ml-gs-job1-datafeed.jpg["Restarting a data feed"]
+image::images/ml-gs-job1-datafeed.jpg["Restarting a {dfeed}"]
 --
 
-The data feed state changes to `started`, the job state changes to `opened`,
+The {dfeed} state changes to `started`, the job state changes to `opened`,
 and the number of processed records increases as the new data is analyzed. The
 latest timestamp information also increases. For example:
 [role="screenshot"]
-image::images/ml-gs-job1-manage2.jpg["Job opened and data feed started"]
+image::images/ml-gs-job1-manage2.jpg["Job opened and {dfeed} started"]
 
 TIP: If your data is being loaded continuously, you can continue running the job
-in real time. For this, start your data feed and select **No end time**.
+in real time. For this, start your {dfeed} and select **No end time**.
 
-If you want to stop the data feed at this point, you can click the following
-button: image:images/ml-stop-feed.jpg["Stop data feed"]
+If you want to stop the {dfeed} at this point, you can click the following
+button: image:images/ml-stop-feed.jpg["Stop {dfeed}"]
 
 Now that you have processed all the data, let's start exploring the job results.
@@ -574,7 +574,7 @@ Now that you have processed all the data, let's start exploring the job results.
 [[ml-gs-jobresults]]
 === Exploring Job Results
 
-The {xpack} {ml} features analyze the input stream of data, model its behavior,
+The {xpackml} features analyze the input stream of data, model its behavior,
 and perform analysis based on the detectors you defined in your job. When an
 event occurs outside of the model, that event is identified as an anomaly.
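The **Start**/**Stop** actions described in this tutorial map onto the {dfeed} APIs. A rough sketch of the equivalent Console calls (the `datafeed-total-requests` ID is hypothetical; the timestamps mirror the tutorial's sample-data range):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-total-requests/_start
{
  "start": "2017-03-23T00:00:00Z",
  "end": "2017-04-01T00:00:00Z"
}

POST _xpack/ml/datafeeds/datafeed-total-requests/_stop
--------------------------------------------------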
@@ -3,7 +3,7 @@
 
 [partintro]
 --
-The {xpack} {ml} features automate the analysis of time-series data by creating
+The {xpackml} features automate the analysis of time-series data by creating
 accurate baselines of normal behaviors in the data and identifying anomalous
 patterns in that data.
@@ -37,9 +37,9 @@ Jobs::
 necessary to perform an analytics task. For a list of the properties associated
 with a job, see <<ml-job-resource, Job Resources>>.
 
-Data feeds::
+{dfeeds-cap}::
 Jobs can analyze either a one-off batch of data or continuously in real time.
-Data feeds retrieve data from {es} for analysis. Alternatively you can
+{dfeeds-cap} retrieve data from {es} for analysis. Alternatively you can
 <<ml-post-data,POST data>> from any source directly to an API.
 
 Detectors::
@@ -51,7 +51,7 @@ Detectors::
 see <<ml-detectorconfig, Detector Configuration Objects>>.
 
 Buckets::
-The {xpack} {ml} features use the concept of a bucket to divide the time
+The {xpackml} features use the concept of a bucket to divide the time
 series into batches for processing. The _bucket span_ is part of the
 configuration information for a job. It defines the time interval that is used
 to summarize and model the data. This is typically between 5 minutes and 1 hour
@@ -63,7 +63,7 @@ Buckets::
 Machine learning nodes::
 A {ml} node is a node that has `xpack.ml.enabled` and `node.ml` set to `true`,
 which is the default behavior. If you set `node.ml` to `false`, the node can
-service API requests but it cannot run jobs. If you want to use {xpack} {ml}
+service API requests but it cannot run jobs. If you want to use {xpackml}
 features, there must be at least one {ml} node in your cluster. For more
 information about this setting, see <<ml-settings>>.

@@ -8,7 +8,7 @@ The following limitations and known problems apply to the {version} release of
 === Pop-ups must be enabled in browsers
 //See x-pack-elasticsearch/#844
 
-The {xpack} {ml} features in Kibana use pop-ups. You must configure your
+The {xpackml} features in Kibana use pop-ups. You must configure your
 web browser so that it does not block pop-up windows or create an
 exception for your Kibana URL.
@@ -17,8 +17,8 @@ exception for your Kibana URL.
 === Jobs must be re-created at GA
 //See x-pack-elasticsearch/#844
 
-The models that you create in the {xpack} {ml} Beta cannot be upgraded.
-After the {xpack} {ml} features become generally available, you must
+The models that you create in the {xpackml} Beta cannot be upgraded.
+After the {xpackml} features become generally available, you must
 re-create your jobs. If you have data sets and job configurations that
 you work with extensively in the beta, make note of all the details so
 that you can re-create them successfully.
@@ -39,15 +39,15 @@ represented as a single dot. If there are only two data points, they are joined
 by a line.
 
 [float]
-=== Jobs close on the data feed end date
+=== Jobs close on the {dfeed} end date
 //See x-pack-elasticsearch/#1037
 
-If you start a data feed and specify an end date, it will close the job when
-the data feed stops. This behavior avoids having numerous open one-time jobs.
+If you start a {dfeed} and specify an end date, it will close the job when
+the {dfeed} stops. This behavior avoids having numerous open one-time jobs.
 
-If you do not specify an end date when you start a data feed, the job
-remains open when you stop the data feed. This behavior avoids the overhead
-of closing and re-opening large jobs when there are pauses in the data feed.
+If you do not specify an end date when you start a {dfeed}, the job
+remains open when you stop the {dfeed}. This behavior avoids the overhead
+of closing and re-opening large jobs when there are pauses in the {dfeed}.
 
 [float]
 === Post data API requires JSON format
@@ -111,14 +111,13 @@ Likewise, when you create a single or multi-metric job in {kib}, in some cases
 it uses aggregations on the data that it retrieves from {es}. One of the
 benefits of summarizing data this way is that {es} automatically distributes
 these calculations across your cluster. This summarized data is then fed into
-{xpack} {ml} instead of raw results, which reduces the volume of data that must
+{xpackml} instead of raw results, which reduces the volume of data that must
 be considered while detecting anomalies. However, if you have two jobs, one of
 which uses pre-aggregated data and another that does not, their results might
 differ. This difference is due to the difference in precision of the input data.
 The {ml} analytics are designed to be aggregation-aware and the likely increase
 in performance that is gained by pre-aggregating the data makes the potentially
 poorer precision worthwhile. If you want to view or change the aggregations
-that are used in your job, refer to the `aggregations` property in your data
-feed.
+that are used in your job, refer to the `aggregations` property in your {dfeed}.
 
 For more information, see <<ml-datafeed-resource>>.
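The `aggregations` property referred to in the last hunk lives in the {dfeed} configuration. A sketch of what a pre-aggregating {dfeed} body might look like (the index, field, job, and {dfeed} names here are hypothetical):

[source,js]
--------------------------------------------------
PUT _xpack/ml/datafeeds/datafeed-agg-example
{
  "job_id": "agg-example",
  "indexes": ["server-metrics"],
  "aggregations": {
    "buckets": {
      "date_histogram": { "field": "timestamp", "interval": "5m" },
      "aggregations": {
        "timestamp": { "max": { "field": "timestamp" } },
        "events_per_bucket": { "sum": { "field": "total" } }
      }
    }
  }
}
--------------------------------------------------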
@@ -3,23 +3,23 @@
 
 Use machine learning to detect anomalies in time series data.
 
-* <<ml-api-datafeed-endpoint,Datafeeds>>
+* <<ml-api-datafeed-endpoint,{dfeeds-cap}>>
 * <<ml-api-job-endpoint,Jobs>>
 * <<ml-api-snapshot-endpoint, Model Snapshots>>
 * <<ml-api-result-endpoint,Results>>
 * <<ml-api-definitions, Definitions>>
 
 [[ml-api-datafeed-endpoint]]
-=== Data Feeds
+=== {dfeeds-cap}
 
-* <<ml-put-datafeed,Create data feed>>
-* <<ml-delete-datafeed,Delete data feed>>
-* <<ml-get-datafeed,Get data feed info>>
-* <<ml-get-datafeed-stats,Get data feed statistics>>
-* <<ml-preview-datafeed,Preview data feed>>
-* <<ml-start-datafeed,Start data feed>>
-* <<ml-stop-datafeed,Stop data feed>>
-* <<ml-update-datafeed,Update data feed>>
+* <<ml-put-datafeed,Create {dfeed}>>
+* <<ml-delete-datafeed,Delete {dfeed}>>
+* <<ml-get-datafeed,Get {dfeed} info>>
+* <<ml-get-datafeed-stats,Get {dfeed} statistics>>
+* <<ml-preview-datafeed,Preview {dfeed}>>
+* <<ml-start-datafeed,Start {dfeed}>>
+* <<ml-stop-datafeed,Stop {dfeed}>>
+* <<ml-update-datafeed,Update {dfeed}>>
 
 include::ml/put-datafeed.asciidoc[]
 include::ml/delete-datafeed.asciidoc[]
@@ -88,8 +88,8 @@ include::ml/get-record.asciidoc[]
 [[ml-api-definitions]]
 === Definitions
 
-* <<ml-datafeed-resource,Data feeds>>
-* <<ml-datafeed-counts,Data feed counts>>
+* <<ml-datafeed-resource,{dfeeds-cap}>>
+* <<ml-datafeed-counts,{dfeed-cap} counts>>
 * <<ml-job-resource,Jobs>>
 * <<ml-jobstats,Job statistics>>
 * <<ml-snapshot-resource,Model snapshots>>

@@ -27,7 +27,7 @@ After it is closed, the job has a minimal overhead on the cluster except for
 maintaining its meta data. Therefore it is a best practice to close jobs that
 are no longer required to process data.
 
-When a data feed that has a specified end date stops, it automatically closes
+When a {dfeed} that has a specified end date stops, it automatically closes
 the job.
 
 NOTE: If you use the `force` query parameter, the request returns before the

@@ -1,11 +1,11 @@
 //lcawley Verified example output 2017-04-11
 [[ml-datafeed-resource]]
-==== Data Feed Resources
+==== {dfeed-cap} Resources
 
-A data feed resource has the following properties:
+A {dfeed} resource has the following properties:
 
 `aggregations`::
-(object) If set, the data feed performs aggregation searches.
+(object) If set, the {dfeed} performs aggregation searches.
 For syntax information, see {ref}/search-aggregations.html[Aggregations].
 Support for aggregations is limited and should only be used with
 low cardinality data.
@@ -22,11 +22,11 @@ A data feed resource has the following properties:
 For example: {"mode": "manual", "time_span": "3h"}
 
 `datafeed_id`::
-(string) A numerical character string that uniquely identifies the data feed.
+(string) A numerical character string that uniquely identifies the {dfeed}.
 
 `frequency`::
-(time units) The interval at which scheduled queries are made while the data
-feed runs in real time. The default value is either the bucket span for short
+(time units) The interval at which scheduled queries are made while the
+{dfeed} runs in real time. The default value is either the bucket span for short
 bucket spans, or, for longer bucket spans, a sensible fraction of the bucket
 span. For example: "150s"
 
@@ -34,7 +34,7 @@ A data feed resource has the following properties:
 (array) An array of index names. For example: ["it_ops_metrics"]
 
 `job_id` (required)::
-(string) The unique identifier for the job to which the data feed sends data.
+(string) The unique identifier for the job to which the {dfeed} sends data.
 
 `query`::
 (object) The {es} query domain-specific language (DSL). This value
@@ -59,7 +59,7 @@ A data feed resource has the following properties:
 [[ml-datafeed-chunking-config]]
 ===== Chunking Configuration Objects
 
-Data feeds might be required to search over long time periods, for several months
+{dfeeds-cap} might be required to search over long time periods, for several months
 or years. This search is split into time chunks in order to ensure the load
 on {es} is managed. Chunking configuration controls how the size of these time
 chunks are calculated and is an advanced configuration option.
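As a concrete illustration, the `manual` mode example given earlier in this file ({"mode": "manual", "time_span": "3h"}) would appear inside a {dfeed} body as:

[source,js]
----
"chunking_config": {
  "mode": "manual",
  "time_span": "3h"
}
----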
@@ -80,21 +80,21 @@ A chunking configuration object has the following properties:
 
 [float]
 [[ml-datafeed-counts]]
-==== Data Feed Counts
+==== {dfeed-cap} Counts
 
-The get data feed statistics API provides information about the operational
-progress of a data feed. For example:
+The get {dfeed} statistics API provides information about the operational
+progress of a {dfeed}. For example:
 
 `assignment_explanation`::
-(string) For started data feeds only, contains messages relating to the
+(string) For started {dfeeds} only, contains messages relating to the
 selection of a node.
 
 `datafeed_id`::
-(string) A numerical character string that uniquely identifies the data feed.
+(string) A numerical character string that uniquely identifies the {dfeed}.
 
 `node`::
-(object) The node upon which the data feed is started. The data feed and
-job will be on the same node.
+(object) The node upon which the {dfeed} is started. The {dfeed} and job will
+be on the same node.
 `id`::: The unique identifier of the node. For example,
 "0-o0tOoRTwKFZifatTWKNw".
 `name`::: The node name. For example, "0-o0tOo".
@@ -104,7 +104,7 @@ progress of a data feed. For example:
 `attributes`::: For example, {"max_running_jobs": "10"}.
 
 `state`::
-(string) The status of the data feed, which can be one of the following values: +
-`started`::: The data feed is actively receiving data.
-`stopped`::: The data feed is stopped and will not receive data until it is
+(string) The status of the {dfeed}, which can be one of the following values: +
+`started`::: The {dfeed} is actively receiving data.
+`stopped`::: The {dfeed} is stopped and will not receive data until it is
 re-started.
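Assembled from the fields above, a single entry in a get {dfeed} statistics response might look roughly like this (a sketch; the node values reuse the examples given in this hunk):

[source,js]
----
{
  "datafeed_id": "datafeed-it-ops-kpi",
  "state": "started",
  "node": {
    "id": "0-o0tOoRTwKFZifatTWKNw",
    "name": "0-o0tOo",
    "attributes": { "max_running_jobs": "10" }
  },
  "assignment_explanation": ""
}
----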
@@ -1,8 +1,8 @@
 //lcawley Verified example output 2017-04-11
 [[ml-delete-datafeed]]
-==== Delete Data Feeds
+==== Delete {dfeeds-cap}
 
-The delete data feed API enables you to delete an existing data feed.
+The delete {dfeed} API enables you to delete an existing {dfeed}.
 
 
 ===== Request
@@ -12,23 +12,13 @@ The delete data feed API enables you to delete an existing data feed.
 
 ===== Description
 
-NOTE: You must stop the data feed before you can delete it.
+NOTE: You must stop the {dfeed} before you can delete it.
 
 
 ===== Path Parameters
 
 `feed_id` (required)::
-(string) Identifier for the data feed
-////
-===== Responses
-
-200
-(EmptyResponse) The cluster has been successfully deleted
-404
-(BasicFailedReply) The cluster specified by {cluster_id} cannot be found (code: clusters.cluster_not_found)
-412
-(BasicFailedReply) The Elasticsearch cluster has not been shutdown yet (code: clusters.cluster_plan_state_error)
-////
+(string) Identifier for the {dfeed}
 
 
 ===== Authorization
@@ -39,7 +29,7 @@ For more information, see <<privileges-list-cluster>>.
 
 ===== Examples
 
-The following example deletes the `datafeed-it-ops` data feed:
+The following example deletes the `datafeed-it-ops` {dfeed}:
 
 [source,js]
 --------------------------------------------------
@@ -48,7 +38,7 @@ DELETE _xpack/ml/datafeeds/datafeed-it-ops
 // CONSOLE
 // TEST[skip:todo]
 
-When the data feed is deleted, you receive the following results:
+When the {dfeed} is deleted, you receive the following results:
 [source,js]
 ----
 {
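Because a {dfeed} must be stopped before it can be deleted, the typical sequence is (using the `datafeed-it-ops` ID from the example above):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-it-ops/_stop

DELETE _xpack/ml/datafeeds/datafeed-it-ops
--------------------------------------------------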
@@ -19,8 +19,8 @@ IMPORTANT: Deleting a job must be done via this API only. Do not delete the
 DELETE Document API. When {security} is enabled, make sure no `write`
 privileges are granted to anyone over the `.ml-*` indices.
 
-Before you can delete a job, you must delete the data feeds that are associated
-with it. See <<ml-delete-datafeed,Delete Data Feeds>>.
+Before you can delete a job, you must delete the {dfeeds} that are associated
+with it. See <<ml-delete-datafeed,Delete {dfeeds-cap}>>.
 
 It is not currently possible to delete multiple jobs using wildcards or a comma
 separated list.
@@ -28,7 +28,7 @@ separated list.
 ===== Path Parameters
 
 `job_id` (required)::
-(string) Identifier for the job
+(string) Identifier for the job
 
 
 ===== Authorization

@@ -1,9 +1,9 @@
 //lcawley Verified example output 2017-04-11
 [[ml-get-datafeed-stats]]
-==== Get Data Feed Statistics
+==== Get {dfeed-cap} Statistics
 
-The get data feed statistics API enables you to retrieve usage information for
-data feeds.
+The get {dfeed} statistics API enables you to retrieve usage information for
+{dfeeds}.
 
 
 ===== Request
@@ -15,16 +15,16 @@ data feeds.
 
 ===== Description
 
-If the data feed is stopped, the only information you receive is the
+If the {dfeed} is stopped, the only information you receive is the
 `datafeed_id` and the `state`.
 
 
 ===== Path Parameters
 
 `feed_id`::
-(string) Identifier for the data feed.
+(string) Identifier for the {dfeed}.
 This parameter does not support wildcards, but you can specify `_all` or
-omit the `feed_id` to get information about all data feeds.
+omit the `feed_id` to get information about all {dfeeds}.
 
 
 ===== Results
@@ -32,8 +32,8 @@ If the data feed is stopped, the only information you receive is the
 The API returns the following information:
 
 `datafeeds`::
-(array) An array of data feed count objects.
-For more information, see <<ml-datafeed-counts,Data Feed Counts>>.
+(array) An array of {dfeed} count objects.
+For more information, see <<ml-datafeed-counts>>.
 
 
 ===== Authorization
@@ -45,7 +45,7 @@ privileges to use this API. For more information, see <<privileges-list-cluster>
 ===== Examples
 
 The following example gets usage information for the
-`datafeed-farequote` data feed:
+`datafeed-farequote` {dfeed}:
 
 [source,js]
 --------------------------------------------------
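Per the path-parameter description above, statistics for every {dfeed} can be fetched in one call by specifying `_all`:

[source,js]
--------------------------------------------------
GET _xpack/ml/datafeeds/_all/_stats
--------------------------------------------------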
@@ -1,9 +1,9 @@
 //lcawley Verified example output 2017-04-11
 [[ml-get-datafeed]]
-==== Get Data Feeds
+==== Get {dfeeds-cap}
 
-The get data feeds API enables you to retrieve configuration information for
-data feeds.
+The get {dfeeds} API enables you to retrieve configuration information for
+{dfeeds}.
 
 ===== Request
 
@@ -16,9 +16,9 @@ data feeds.
 ===== Path Parameters
 
 `feed_id`::
-(string) Identifier for the data feed.
+(string) Identifier for the {dfeed}.
 This parameter does not support wildcards, but you can specify `_all` or
-omit the `feed_id` to get information about all data feeds.
+omit the `feed_id` to get information about all {dfeeds}.
 
 
 ===== Results
@@ -26,8 +26,8 @@ data feeds.
 The API returns the following information:
 
 `datafeeds`::
-(array) An array of data feed objects.
-For more information, see <<ml-datafeed-resource,data feed resources>>.
+(array) An array of {dfeed} objects.
+For more information, see <<ml-datafeed-resource>>.
 
 
 ===== Authorization
@@ -39,7 +39,7 @@ privileges to use this API. For more information, see <<privileges-list-cluster>
 ===== Examples
 
 The following example gets configuration information for the
-`datafeed-it-ops-kpi` data feed:
+`datafeed-it-ops-kpi` {dfeed}:
 
 [source,js]
 --------------------------------------------------
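A {dfeed} object in the returned `datafeeds` array combines the properties documented in <<ml-datafeed-resource>>. A rough sketch of one such object (values are illustrative; `query_delay` and `scroll_size` are additional properties not shown in the hunks of this commit):

[source,js]
----
{
  "datafeed_id": "datafeed-it-ops-kpi",
  "job_id": "it-ops-kpi",
  "query_delay": "60s",
  "frequency": "150s",
  "indexes": ["it_ops_metrics"],
  "types": ["kpi"],
  "query": { "match_all": {} },
  "scroll_size": 1000,
  "chunking_config": { "mode": "auto" }
}
----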
@@ -39,7 +39,7 @@ progress of a job.
 `failed`::: The job did not finish successfully due to an error.
 This situation can occur due to invalid input data.
 If the job has irrevocably failed, it must be force closed and then deleted.
-If the data feed can be corrected, the job can be closed and then re-opened.
+If the {dfeed} can be corrected, the job can be closed and then re-opened.
 
 [float]
 [[ml-datacounts]]
@@ -97,7 +97,7 @@ or old results are deleted, the job counts are not reset.
 it is possible that not all fields are missing. The value of
 `processed_record_count` includes this count. +
 
-NOTE: If you are using data feeds or posting data to the job in JSON format, a
+NOTE: If you are using {dfeeds} or posting data to the job in JSON format, a
 high `missing_field_count` is often not an indication of data issues. It is not
 necessarily a cause for concern.
@@ -117,9 +117,9 @@ necessarily a cause for concern.
 (long) The number of records that have been processed by the job.
 This value includes records with missing fields, since they are nonetheless
 analyzed. +
-If you use data feeds and have aggregations in your search query,
+If you use {dfeeds} and have aggregations in your search query,
 the `processed_record_count` will be the number of aggregated records
-processed, not the number of {es} documents.
+processed, not the number of {es} documents.
 
 `sparse_bucket_count`::
 (long) The number of buckets that contained few data points compared to the
@@ -212,10 +212,10 @@ LEAVE UNDOCUMENTED
 
 The data description defines the format of the input data when you send data to
 the job by using the <<ml-post-data,post data>> API. Note that when you configure
-a data feed, these properties are automatically set.
+a {dfeed}, these properties are automatically set.
 
 When data is received via the <<ml-post-data,post data>> API, it is not stored
-in Elasticsearch. Only the results for anomaly detection are retained.
+in {es}. Only the results for anomaly detection are retained.
 
 A data description object has the following properties:
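For reference, a typical data description object as it might appear in a job configuration (a sketch; the full property list is outside this hunk and the field values shown are illustrative):

[source,js]
----
"data_description": {
  "format": "JSON",
  "time_field": "timestamp",
  "time_format": "epoch_ms"
}
----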
@@ -1,8 +1,8 @@
 //lcawley: Verified example output 2017-04-11
 [[ml-preview-datafeed]]
-==== Preview Data Feeds
+==== Preview {dfeeds-cap}
 
-The preview data feed API enables you to preview a data feed.
+The preview {dfeed} API enables you to preview a {dfeed}.
 
 
 ===== Request
@@ -13,14 +13,14 @@ The preview data feed API enables you to preview a data feed.
 ===== Description
 
 The API returns the first "page" of results from the `search` that is created
-by using the current data feed settings. This preview shows the structure of
+by using the current {dfeed} settings. This preview shows the structure of
 the data that will be passed to the anomaly detection engine.
 
 
 ===== Path Parameters
 
 `datafeed_id` (required)::
-(string) Identifier for the data feed
+(string) Identifier for the {dfeed}
 
 
 ===== Authorization
@@ -31,7 +31,7 @@ privileges to use this API. For more information, see <<privileges-list-cluster>
 
 ===== Examples
 
-The following example obtains a preview of the `datafeed-farequote` data feed:
+The following example obtains a preview of the `datafeed-farequote` {dfeed}:
 
 [source,js]
 --------------------------------------------------
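The preview request itself is a simple GET on the `_preview` endpoint, consistent with the endpoint listing at the top of this commit:

[source,js]
--------------------------------------------------
GET _xpack/ml/datafeeds/datafeed-farequote/_preview
--------------------------------------------------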
@@ -1,8 +1,8 @@
 //lcawley Verified example output 2017-04-11
 [[ml-put-datafeed]]
-==== Create Data Feeds
+==== Create {dfeeds-cap}
 
-The create data feed API enables you to instantiate a data feed.
+The create {dfeed} API enables you to instantiate a {dfeed}.
 
 
 ===== Request
@@ -12,20 +12,20 @@ The create data feed API enables you to instantiate a data feed.
 
 ===== Description
 
-You must create a job before you create a data feed. You can associate only one
-data feed to each job.
+You must create a job before you create a {dfeed}. You can associate only one
+{dfeed} to each job.
 
 
 ===== Path Parameters
 
 `feed_id` (required)::
-(string) A numerical character string that uniquely identifies the data feed.
+(string) A numerical character string that uniquely identifies the {dfeed}.
 
 
 ===== Request Body
 
 `aggregations`::
-(object) If set, the data feed performs aggregation searches.
+(object) If set, the {dfeed} performs aggregation searches.
 For more information, see <<ml-datafeed-resource>>.
 
 `chunking_config`::
@@ -33,8 +33,8 @@ data feed to each job.
 See <<ml-datafeed-chunking-config>>.
 
 `frequency`::
-(time units) The interval at which scheduled queries are made while the data
-feed runs in real time. The default value is either the bucket span for short
+(time units) The interval at which scheduled queries are made while the {dfeed}
+runs in real time. The default value is either the bucket span for short
 bucket spans, or, for longer bucket spans, a sensible fraction of the bucket
 span. For example: "150s".
 
@@ -65,7 +65,7 @@ data feed to each job.
 For example: ["network","sql","kpi"].
 
 For more information about these properties,
-see <<ml-datafeed-resource, Data Feed Resources>>.
+see <<ml-datafeed-resource>>.
 
 
 ===== Authorization
@@ -75,7 +75,7 @@ For more information, see <<privileges-list-cluster>>.
 
 ===== Examples
 
-The following example creates the `datafeed-it-ops-kpi` data feed:
+The following example creates the `datafeed-it-ops-kpi` {dfeed}:
 
 [source,js]
 --------------------------------------------------
@@ -94,7 +94,7 @@ PUT _xpack/ml/datafeeds/datafeed-it-ops-kpi
 // CONSOLE
 // TEST[skip:todo]
 
-When the data feed is created, you receive the following results:
+When the {dfeed} is created, you receive the following results:
 [source,js]
 ----
 {
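The request body elided in the hunk above pairs the {dfeed} with its job and source indices. A minimal sketch (illustrative, not the exact body from the docs):

[source,js]
--------------------------------------------------
PUT _xpack/ml/datafeeds/datafeed-it-ops-kpi
{
  "job_id": "it-ops-kpi",
  "indexes": ["it_ops_metrics"],
  "types": ["kpi"],
  "query": { "match_all": {} }
}
--------------------------------------------------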
@@ -271,7 +271,7 @@ probability of this occurrence.
 
 There can be many anomaly records depending on the characteristics and size of
 the input data. In practice, there are often too many to be able to manually
-process them. The {xpack} {ml} features therefore perform a sophisticated
+process them. The {xpackml} features therefore perform a sophisticated
 aggregation of the anomaly records into buckets.
 
 The number of record results depends on the number of anomalies found in each

@@ -1,9 +1,9 @@
 //lcawley Verified example output 2017-04
 [[ml-start-datafeed]]
-==== Start Data Feeds
+==== Start {dfeeds-cap}
 
-A data feed must be started in order to retrieve data from {es}.
-A data feed can be started and stopped multiple times throughout its lifecycle.
+A {dfeed} must be started in order to retrieve data from {es}.
+A {dfeed} can be started and stopped multiple times throughout its lifecycle.
 
 ===== Request
 
@@ -11,21 +11,21 @@ A data feed can be started and stopped multiple times throughout its lifecycle.
 
 ===== Description
 
-NOTE: Before you can start a data feed, the job must be open. Otherwise, an error
+NOTE: Before you can start a {dfeed}, the job must be open. Otherwise, an error
 occurs.
 
-When you start a data feed, you can specify a start time. This enables you to
+When you start a {dfeed}, you can specify a start time. This enables you to
 include a training period, providing you have this data available in {es}.
 If you want to analyze from the beginning of a dataset, you can specify any date
 earlier than that beginning date.
 
-If you do not specify a start time and the data feed is associated with a new
+If you do not specify a start time and the {dfeed} is associated with a new
 job, the analysis starts from the earliest time for which data is available.
 
-When you start a data feed, you can also specify an end time. If you do so, the
+When you start a {dfeed}, you can also specify an end time. If you do so, the
 job analyzes data from the start time until the end time, at which point the
 analysis stops. This scenario is useful for a one-off batch analysis. If you
-do not specify an end time, the data feed runs continuously.
+do not specify an end time, the {dfeed} runs continuously.
 
 The `start` and `end` times can be specified by using one of the
 following formats: +
@@ -40,12 +40,12 @@ designator, where Z is accepted as an abbreviation for UTC time.
 NOTE: When a URL is expected (for example, in browsers), the `+` used in time
 zone designators must be encoded as `%2B`.
 
-If the system restarts, any jobs that had data feeds running are also restarted.
+If the system restarts, any jobs that had {dfeeds} running are also restarted.
 
-When a stopped data feed is restarted, it continues processing input data from
+When a stopped {dfeed} is restarted, it continues processing input data from
 the next millisecond after it was stopped. If your data contains the same
 timestamp (for example, it is summarized by minute), then data loss is possible
-for the timestamp value when the data feed stopped. This situation can occur
+for the timestamp value when the {dfeed} stopped. This situation can occur
 because the job might not have completely processed all data for that millisecond.
 If you specify a `start` value that is earlier than the timestamp of the latest
 processed record, that value is ignored.
@@ -54,20 +54,20 @@ processed record, that value is ignored.
 ===== Path Parameters
 
 `feed_id` (required)::
-(string) Identifier for the data feed
+(string) Identifier for the {dfeed}
 
 ===== Request Body
 
 `end`::
-(string) The time that the data feed should end. This value is exclusive.
+(string) The time that the {dfeed} should end. This value is exclusive.
 The default value is an empty string.
 
 `start`::
-(string) The time that the data feed should begin. This value is inclusive.
+(string) The time that the {dfeed} should begin. This value is inclusive.
 The default value is an empty string.
 
 `timeout`::
-(time) Controls the amount of time to wait until a data feed starts.
+(time) Controls the amount of time to wait until a {dfeed} starts.
 The default value is 20 seconds.
 
@@ -79,7 +79,7 @@ For more information, see <<privileges-list-cluster>>.
 
 ===== Examples
 
-The following example opens the `datafeed-it-ops-kpi` data feed:
+The following example starts the `datafeed-it-ops-kpi` {dfeed}:
 
 [source,js]
 --------------------------------------------------
@@ -91,7 +91,7 @@ POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_start
 // CONSOLE
 // TEST[skip:todo]
 
-When the job opens, you receive the following results:
+When the {dfeed} starts, you receive the following results:
 [source,js]
 ----
 {
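Combining the request-body parameters documented above, a start call with an explicit window and timeout might look like this (timestamps and timeout are illustrative):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_start
{
  "start": "2017-03-23T00:00:00Z",
  "end": "2017-04-01T00:00:00Z",
  "timeout": "30s"
}
--------------------------------------------------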
@@ -1,9 +1,9 @@
 //lcawley Verified example output 2017-04-11
 [[ml-stop-datafeed]]
-==== Stop Data Feeds
+==== Stop {dfeeds-cap}
 
-A data feed that is stopped ceases to retrieve data from {es}.
-A data feed can be started and stopped multiple times throughout its lifecycle.
+A {dfeed} that is stopped ceases to retrieve data from {es}.
+A {dfeed} can be started and stopped multiple times throughout its lifecycle.
 
 ===== Request
 
@@ -14,15 +14,15 @@ A data feed can be started and stopped multiple times throughout its lifecycle.
 ===== Path Parameters
 
 `feed_id` (required)::
-(string) Identifier for the data feed
+(string) Identifier for the {dfeed}
 
 ===== Request Body
 
 `force`::
-(boolean) If true, the data feed is stopped forcefully.
+(boolean) If true, the {dfeed} is stopped forcefully.
 
 `timeout`::
-(time) Controls the amount of time to wait until a data feed stops.
+(time) Controls the amount of time to wait until a {dfeed} stops.
 The default value is 20 seconds.
 
@@ -33,7 +33,7 @@ For more information, see <<privileges-list-cluster>>.
 
 ===== Examples
 
-The following example stops the `datafeed-it-ops-kpi` data feed:
+The following example stops the `datafeed-it-ops-kpi` {dfeed}:
 
 [source,js]
 --------------------------------------------------
@@ -45,7 +45,7 @@ POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_stop
 // CONSOLE
 // TEST[skip:todo]
 
-When the data feed stops, you receive the following results:
+When the {dfeed} stops, you receive the following results:
 [source,js]
 ----
 {
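Likewise, the `force` and `timeout` parameters documented above go in the request body (values here are illustrative):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_stop
{
  "force": true,
  "timeout": "30s"
}
--------------------------------------------------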
@@ -1,8 +1,8 @@
 //lcawley Verified example output 2017-04
 [[ml-update-datafeed]]
-==== Update Data Feeds
+==== Update {dfeeds-cap}
 
-The update data feed API enables you to update certain properties of a data feed.
+The update {dfeed} API enables you to update certain properties of a {dfeed}.
 
 ===== Request
 
@@ -13,14 +13,14 @@ The update data feed API enables you to update certain properties of a data feed
 ===== Path Parameters
 
 `feed_id` (required)::
-(string) Identifier for the data feed
+(string) Identifier for the {dfeed}
 
 ===== Request Body
 
-The following properties can be updated after the data feed is created:
+The following properties can be updated after the {dfeed} is created:
 
 `aggregations`::
-(object) If set, the data feed performs aggregation searches.
+(object) If set, the {dfeed} performs aggregation searches.
 For more information, see <<ml-datafeed-resource>>.
 
 `chunking_config`::
@@ -28,8 +28,8 @@ The following properties can be updated after the data feed is created:
 See <<ml-datafeed-chunking-config>>.
 
 `frequency`::
-(time units) The interval at which scheduled queries are made while the data
-feed runs in real time. The default value is either the bucket span for short
+(time units) The interval at which scheduled queries are made while the
+{dfeed} runs in real time. The default value is either the bucket span for short
 bucket spans, or, for longer bucket spans, a sensible fraction of the bucket
 span. For example: "150s".
 
@@ -60,7 +60,7 @@ The following properties can be updated after the data feed is created:
 For example: ["network","sql","kpi"].
 
 For more information about these properties,
-see <<ml-datafeed-resource, Data Feed Resources>>.
+see <<ml-datafeed-resource>>.
 
 
 ===== Authorization
@@ -70,7 +70,7 @@ For more information, see <<privileges-list-cluster>>.
 
 ===== Examples
 
-The following example updates the `it-ops-kpi` job:
+The following example updates the `datafeed-it-ops-kpi` {dfeed}:
 
 [source,js]
 --------------------------------------------------
@@ -114,7 +114,7 @@ POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_update
 // CONSOLE
 // TEST[skip:todo]
 
-When the data feed is updated, you receive the following results:
+When the {dfeed} is updated, you receive the following results:
 [source,js]
 ----
 {
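Since `frequency` is among the updatable properties listed above, a minimal update call might be (the value is illustrative):

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/datafeed-it-ops-kpi/_update
{
  "frequency": "150s"
}
--------------------------------------------------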
@@ -92,7 +92,7 @@ Grants `manage_ml` cluster privileges and read access to the `.ml-*` indices.
 
 [[built-in-roles-ml-user]]
 `machine_learning_user`::
-Grants the minimum privileges required to view {xpack} {ml} configuration,
+Grants the minimum privileges required to view {xpackml} configuration,
 status, and results. This role grants `monitor_ml` cluster privileges and
 read access to the `.ml-notifications` and `.ml-anomalies*` indices,
 which store {ml} results.

@@ -16,7 +16,7 @@ All cluster read-only operations, like cluster health & state, hot threads, node
 info, node & cluster stats, snapshot/restore status, pending cluster tasks.
 
 `monitor_ml`::
-All read only {ml} operations, such as getting information about data feeds, jobs,
+All read only {ml} operations, such as getting information about {dfeeds}, jobs,
 model snapshots, or results.
 
 `monitor_watcher`::
@@ -24,14 +24,14 @@ All read only watcher operations, such as getting a watch and watcher stats.
 
 `manage`::
 Builds on `monitor` and adds cluster operations that change values in the cluster.
-This includes snapshotting,updating settings, and rerouting. This privilege does
+This includes snapshotting, updating settings, and rerouting. This privilege does
 not include the ability to manage security.
 
 `manage_index_templates`::
 All operations on index templates.
 
 `manage_ml`::
-All {ml} operations, such as creating and deleting data feeds, jobs, and model
+All {ml} operations, such as creating and deleting {dfeeds}, jobs, and model
 snapshots.
 
 `manage_pipeline`::

@@ -10,8 +10,8 @@ You do not need to configure any settings to use {ml}. It is enabled by default.
 Set to `true` (default) to enable {ml}. +
 +
 If set to `false` in `elasticsearch.yml`, the {ml} APIs are disabled.
-You also cannot open jobs or start data feeds.
-If set to `false` in `kibana.yml`, the {ml} icon is not visible in Kibana. +
+You also cannot open jobs or start {dfeeds}.
+If set to `false` in `kibana.yml`, the {ml} icon is not visible in {kib}. +
 +
 TIP: If you want to use {ml} features in your cluster, you must enable {ml} on
 all master-eligible nodes. This is the default behavior.
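For reference, the two node-level settings discussed in this commit as they would appear in `elasticsearch.yml` (both default to `true`; shown here explicitly):

[source,yaml]
--------------------------------------------------
xpack.ml.enabled: true
node.ml: true
--------------------------------------------------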