Original commit: elastic/x-pack-elasticsearch@81be1e49b6
This commit is contained in:
Lisa Cawley 2017-06-19 19:31:39 -07:00 committed by GitHub
parent b41ff62bcc
commit 0e133ada9e
14 changed files with 82 additions and 91 deletions

View File

@ -20,66 +20,55 @@ The main {ml} resources can be accessed with a variety of endpoints:
[[ml-api-jobs]]
=== /anomaly_detectors/
* <<ml-put-job,PUT /anomaly_detectors/<job_id+++>+++>>: Create a job
* <<ml-open-job,POST /anomaly_detectors/<job_id>/_open>>: Open a job
* <<ml-post-data,POST /anomaly_detectors/<job_id>/_data>>: Send data to a job
* <<ml-get-job,GET /anomaly_detectors>>: List jobs
* <<ml-get-job,GET /anomaly_detectors/<job_id+++>+++>>: Get job details
* <<ml-get-job-stats,GET /anomaly_detectors/<job_id>/_stats>>: Get job statistics
* <<ml-update-job,POST /anomaly_detectors/<job_id>/_update>>: Update certain properties of the job configuration
* <<ml-flush-job,POST anomaly_detectors/<job_id>/_flush>>: Force a job to analyze buffered data
* <<ml-close-job,POST /anomaly_detectors/<job_id>/_close>>: Close a job
* <<ml-delete-job,DELETE /anomaly_detectors/<job_id+++>+++>>: Delete a job
* {ref}/ml-put-job.html[PUT /anomaly_detectors/<job_id+++>+++]: Create a job
* {ref}/ml-open-job.html[POST /anomaly_detectors/<job_id>/_open]: Open a job
* {ref}/ml-post-data.html[POST /anomaly_detectors/<job_id>/_data]: Send data to a job
* {ref}/ml-get-job.html[GET /anomaly_detectors]: List jobs
* {ref}/ml-get-job.html[GET /anomaly_detectors/<job_id+++>+++]: Get job details
* {ref}/ml-get-job-stats.html[GET /anomaly_detectors/<job_id>/_stats]: Get job statistics
* {ref}/ml-update-job.html[POST /anomaly_detectors/<job_id>/_update]: Update certain properties of the job configuration
* {ref}/ml-flush-job.html[POST /anomaly_detectors/<job_id>/_flush]: Force a job to analyze buffered data
* {ref}/ml-close-job.html[POST /anomaly_detectors/<job_id>/_close]: Close a job
* {ref}/ml-delete-job.html[DELETE /anomaly_detectors/<job_id+++>+++]: Delete a job
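For example, a job can be created and opened with requests along the following lines. This is a minimal sketch: the `_xpack/ml` base path is assumed, and the `example-job` ID, bucket span, and field names are illustrative placeholders rather than values taken from this page.

[source,js]
--------------------------------------------------
PUT _xpack/ml/anomaly_detectors/example-job
{
  "description": "Hypothetical job that counts events in 5 minute buckets",
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [
      { "function": "count" }
    ]
  },
  "data_description": {
    "time_field": "timestamp",
    "time_format": "epoch_ms"
  }
}

POST _xpack/ml/anomaly_detectors/example-job/_open
--------------------------------------------------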
[float]
[[ml-api-datafeeds]]
=== /datafeeds/
* <<ml-put-datafeed,PUT /datafeeds/<datafeed_id+++>+++>>: Create a {dfeed}
* <<ml-start-datafeed,POST /datafeeds/<datafeed_id>/_start>>: Start a {dfeed}
* <<ml-get-datafeed,GET /datafeeds>>: List {dfeeds}
* <<ml-get-datafeed,GET /datafeeds/<datafeed_id+++>+++>>: Get {dfeed} details
* <<ml-get-datafeed-stats,GET /datafeeds/<datafeed_id>/_stats>>: Get statistical information for {dfeeds}
* <<ml-preview-datafeed,GET /datafeeds/<datafeed_id>/_preview>>: Get a preview of a {dfeed}
* <<ml-update-datafeed,POST /datafeeds/<datafeedid>/_update>>: Update certain settings for a {dfeed}
* <<ml-stop-datafeed,POST /datafeeds/<datafeed_id>/_stop>>: Stop a {dfeed}
* <<ml-delete-datafeed,DELETE /datafeeds/<datafeed_id+++>+++>>: Delete {dfeed}
* {ref}/ml-put-datafeed.html[PUT /datafeeds/<datafeed_id+++>+++]: Create a {dfeed}
* {ref}/ml-start-datafeed.html[POST /datafeeds/<datafeed_id>/_start]: Start a {dfeed}
* {ref}/ml-get-datafeed.html[GET /datafeeds]: List {dfeeds}
* {ref}/ml-get-datafeed.html[GET /datafeeds/<datafeed_id+++>+++]: Get {dfeed} details
* {ref}/ml-get-datafeed-stats.html[GET /datafeeds/<datafeed_id>/_stats]: Get statistical information for {dfeeds}
* {ref}/ml-preview-datafeed.html[GET /datafeeds/<datafeed_id>/_preview]: Get a preview of a {dfeed}
* {ref}/ml-update-datafeed.html[POST /datafeeds/<datafeed_id>/_update]: Update certain settings for a {dfeed}
* {ref}/ml-stop-datafeed.html[POST /datafeeds/<datafeed_id>/_stop]: Stop a {dfeed}
* {ref}/ml-delete-datafeed.html[DELETE /datafeeds/<datafeed_id+++>+++]: Delete a {dfeed}
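A {dfeed} for that hypothetical job might be created and started as follows. The index name, mapping type, query, and start time are placeholders, not values from this document.

[source,js]
--------------------------------------------------
PUT _xpack/ml/datafeeds/datafeed-example-job
{
  "job_id": "example-job",
  "indexes": ["server-metrics"],
  "types": ["metric"],
  "query": { "match_all": {} }
}

POST _xpack/ml/datafeeds/datafeed-example-job/_start
{
  "start": "2017-06-01T00:00:00Z"
}
--------------------------------------------------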
[float]
[[ml-api-results]]
=== /results/
* <<ml-get-bucket,GET /results/buckets>>: List the buckets in the results
* <<ml-get-bucket,GET /results/buckets/<bucket_id+++>+++>>: Get bucket details
* <<ml-get-category,GET /results/categories>>: List the categories in the results
* <<ml-get-category,GET /results/categories/<category_id+++>+++>>: Get category details
* <<ml-get-influencer,GET /results/influencers>>: Get influencer details
* <<ml-get-record,GET /results/records>>: Get records from the results
* {ref}/ml-get-bucket.html[GET /results/buckets]: List the buckets in the results
* {ref}/ml-get-bucket.html[GET /results/buckets/<bucket_id+++>+++]: Get bucket details
* {ref}/ml-get-category.html[GET /results/categories]: List the categories in the results
* {ref}/ml-get-category.html[GET /results/categories/<category_id+++>+++]: Get category details
* {ref}/ml-get-influencer.html[GET /results/influencers]: Get influencer details
* {ref}/ml-get-record.html[GET /results/records]: Get records from the results
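As a sketch, the buckets for the hypothetical `example-job` could be filtered by score and time range like this; the threshold and dates are arbitrary:

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/example-job/results/buckets
{
  "anomaly_score": 75,
  "start": "2017-06-01T00:00:00Z",
  "end": "2017-06-08T00:00:00Z"
}
--------------------------------------------------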
[float]
[[ml-api-snapshots]]
=== /model_snapshots/
* <<ml-get-snapshot,GET /model_snapshots>>: List model snapshots
* <<ml-get-snapshot,GET /model_snapshots/<snapshot_id+++>+++>>: Get model snapshot details
* <<ml-revert-snapshot,POST /model_snapshots/<snapshot_id>/_revert>>: Revert a model snapshot
* <<ml-update-snapshot,POST /model_snapshots/<snapshot_id>/_update>>: Update certain settings for a model snapshot
* <<ml-delete-snapshot,DELETE /model_snapshots/<snapshot_id+++>+++>>: Delete a model snapshot
* {ref}/ml-get-snapshot.html[GET /model_snapshots]: List model snapshots
* {ref}/ml-get-snapshot.html[GET /model_snapshots/<snapshot_id+++>+++]: Get model snapshot details
* {ref}/ml-revert-snapshot.html[POST /model_snapshots/<snapshot_id>/_revert]: Revert a model snapshot
* {ref}/ml-update-snapshot.html[POST /model_snapshots/<snapshot_id>/_update]: Update certain settings for a model snapshot
* {ref}/ml-delete-snapshot.html[DELETE /model_snapshots/<snapshot_id+++>+++]: Delete a model snapshot
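For illustration, reverting the hypothetical job to an earlier snapshot might look like the following; the snapshot ID is a placeholder:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/example-job/model_snapshots/1498020000/_revert
{
  "delete_intervening_results": true
}
--------------------------------------------------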
[float]
[[ml-api-validate]]
=== /validate/
* <<ml-valid-detector,POST /anomaly_detectors/_validate/detector>>: Validate a detector
* <<ml-valid-job, POST /anomaly_detectors/_validate>>: Validate a job
//[float]
//== Where to Go Next
//<<ml-getting-started, Getting Started>> :: Enable machine learning and start
//discovering anomalies in your data.
//[float]
//== Have Comments, Questions, or Feedback?
//Head over to our {forum}[Graph Discussion Forum] to share your experience, questions, and
//suggestions.
* {ref}/ml-valid-detector.html[POST /anomaly_detectors/_validate/detector]: Validate a detector
* {ref}/ml-valid-job.html[POST /anomaly_detectors/_validate]: Validate a job
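As an illustrative sketch, a single detector can be checked before it is used in a job; the function and field names below are hypothetical:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/_validate/detector
{
  "function": "mean",
  "field_name": "responsetime",
  "by_field_name": "airline"
}
--------------------------------------------------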

View File

@ -14,8 +14,8 @@ send your data to that job.
** You can create a {dfeed}, which retrieves data from {es} for analysis.
** You can use {kib} to expedite the creation of jobs and {dfeeds}.
* If your data is not stored in {es}, you can <<ml-post-data,POST data>> from any
source directly to an API.
* If your data is not stored in {es}, you can
{ref}/ml-post-data.html[POST data] from any source directly to an API.
The results of {ml} analysis are stored in {es} and you can use {kib} to help
you visualize and explore the results.

View File

@ -6,7 +6,8 @@ flexible ways to analyze data for anomalies.
When you create jobs, you specify one or more detectors, which define the type of
analysis that needs to be done. If you are creating your job by using {ml} APIs,
you specify the functions in <<ml-detectorconfig,Detector Configuration Objects>>.
you specify the functions in
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
If you are creating your job in {kib}, you specify the functions differently
depending on whether you are creating single metric, multi-metric, or advanced
jobs. For a demonstration of creating jobs in {kib}, see <<ml-getting-started>>.
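For reference, a detector is specified as a small JSON object inside the job's `analysis_config`. The following fragment is a hypothetical example rather than one taken from this guide:

[source,js]
--------------------------------------------------
{
  "detector_description": "mean(responsetime) by airline",
  "function": "mean",
  "field_name": "responsetime",
  "by_field_name": "airline"
}
--------------------------------------------------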

View File

@ -39,7 +39,7 @@ These functions support the following properties:
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
see {ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing events with the count function
[source,js]
@ -123,7 +123,7 @@ These functions support the following properties:
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
see {ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
For example, if you have the following number of events per bucket:
@ -182,7 +182,7 @@ These functions support the following properties:
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
see {ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 6: Analyzing users with the distinct_count function
[source,js]

View File

@ -21,7 +21,7 @@ This function supports the following properties:
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
see {ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing transactions with the lat_long function
[source,js]
@ -74,4 +74,5 @@ format. For example, the following Painless script transforms
}
--------------------------------------------------
For more information about `script_fields`, see <<ml-datafeed-resource>>.
For more information about `script_fields`, see
{ref}/ml-datafeed-resource.html[Datafeed Resources].

View File

@ -28,8 +28,8 @@ These functions support the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing subdomain strings with the info_content function
[source,js]

View File

@ -30,8 +30,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing minimum transactions with the min function
[source,js]
@ -64,8 +64,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 2: Analyzing maximum response times with the max function
[source,js]
@ -124,8 +124,8 @@ These functions support the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 4: Analyzing response times with the median function
[source,js]
@ -161,8 +161,8 @@ These functions support the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 5: Analyzing response times with the mean function
[source,js]
@ -225,8 +225,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 8: Analyzing response times with the metric function
[source,js]
@ -261,8 +261,8 @@ These functions support the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 9: Analyzing response times with the varp function
[source,js]

View File

@ -41,8 +41,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing status codes with the rare function
[source,js]
@ -97,8 +97,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 3: Analyzing URI values in a population with the freq_rare function
[source,js]

View File

@ -41,8 +41,8 @@ These functions support the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing total expenses with the sum function
[source,js]
@ -95,8 +95,8 @@ These functions support the following properties:
* `by_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
NOTE: Population analysis (that is to say, use of the `over_field_name` property)
is not applicable for this function.

View File

@ -48,8 +48,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 1: Analyzing events with the time_of_day function
[source,js]
@ -78,8 +78,8 @@ This function supports the following properties:
* `over_field_name` (optional)
* `partition_field_name` (optional)
For more information about those properties,
see <<ml-detectorconfig,Detector Configuration Objects>>.
For more information about those properties, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
.Example 2: Analyzing events with the time_of_week function
[source,js]

View File

@ -315,7 +315,7 @@ analytical task.
--
This tutorial uses {kib} to create jobs and view results, but you can
alternatively use APIs to accomplish most tasks.
For API reference information, see <<ml-apis>>.
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].
The {xpackml} features in {kib} use pop-ups. You must configure your
web browser so that it does not block pop-up windows or create an
@ -463,7 +463,7 @@ job is running.
TIP: The `create_single_metric.sh` script creates a similar job and {dfeed} by
using the {ml} APIs. You can download that script by clicking
here: https://download.elastic.co/demos/machine_learning/gettingstarted/create_single_metric.sh[create_single_metric.sh]
For API reference information, see <<ml-apis>>.
For API reference information, see {ref}/ml-apis.html[Machine Learning APIs].
[[ml-gs-job1-manage]]
=== Managing Jobs

View File

@ -66,7 +66,8 @@ of closing and re-opening large jobs when there are pauses in the {dfeed}.
The post data API enables you to send data to a job for analysis. The data that
you send to the job must use the JSON format.
For more information about this API, see <<ml-post-data>>.
For more information about this API, see
{ref}/ml-post-data.html[Post Data to Jobs].
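As a rough sketch, the request body contains one JSON document per line; the job ID and fields below are hypothetical:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/example-job/_data
{"timestamp": 1496275200000, "responsetime": 43.2}
{"timestamp": 1496275260000, "responsetime": 37.9}
--------------------------------------------------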
[float]
@ -83,7 +84,7 @@ Missing fields might be expected due to the structure of the data and therefore
do not generate poor results.
For more information about `missing_field_count`,
see <<ml-datacounts,Data Counts Objects>>.
see {ref}/ml-datacounts.html[Data Counts Objects].
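One way to check this counter is through the job statistics. The following request is a sketch that assumes the hypothetical `example-job`; `missing_field_count` appears in the `data_counts` object of the response.

[source,js]
--------------------------------------------------
GET _xpack/ml/anomaly_detectors/example-job/_stats
--------------------------------------------------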
[float]
@ -116,7 +117,8 @@ this additional model information for every bucket might be problematic. If you
are not certain that you need this option or if you experience performance
issues, edit your job configuration to disable this option.
For more information, see <<ml-apimodelplotconfig,Model Plot Config>>.
For more information, see
{ref}/ml-apimodelplotconfig.html[Model Plot Config].
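The option itself is a small fragment of the job configuration; the following sketch shows the assumed shape, with `enabled` set to `false` to switch off the extra model information:

[source,js]
--------------------------------------------------
"model_plot_config": {
  "enabled": false
}
--------------------------------------------------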
Likewise, when you create a single or multi-metric job in {kib}, in some cases
it uses aggregations on the data that it retrieves from {es}. One of the
@ -131,4 +133,4 @@ in performance that is gained by pre-aggregating the data makes the potentially
poorer precision worthwhile. If you want to view or change the aggregations
that are used in your job, refer to the `aggregations` property in your {dfeed}.
For more information, see <<ml-datafeed-resource>>.
For more information, see {ref}/ml-datafeed-resource.html[Datafeed Resources].
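A typical `aggregations` property in a {dfeed} pairs a `date_histogram` with a `max` aggregation on the time field, so that each aggregated bucket keeps a timestamp. The field names and interval in this sketch are hypothetical:

[source,js]
--------------------------------------------------
"aggregations": {
  "buckets": {
    "date_histogram": {
      "field": "timestamp",
      "interval": "5m"
    },
    "aggregations": {
      "timestamp": {
        "max": { "field": "timestamp" }
      },
      "responsetime": {
        "avg": { "field": "responsetime" }
      }
    }
  }
}
--------------------------------------------------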

View File

@ -10,7 +10,7 @@ concepts from the outset will tremendously help ease the learning process.
Machine learning jobs contain the configuration information and metadata
necessary to perform an analytics task. For a list of the properties associated
with a job, see <<ml-job-resource, Job Resources>>.
with a job, see {ref}/ml-job-resource.html[Job Resources].
[float]
[[ml-dfeeds]]
@ -18,7 +18,7 @@ with a job, see <<ml-job-resource, Job Resources>>.
Jobs can analyze either a one-off batch of data or continuously in real time.
{dfeeds-cap} retrieve data from {es} for analysis. Alternatively you can
<<ml-post-data,POST data>> from any source directly to an API.
{ref}/ml-post-data.html[POST data] from any source directly to an API.
[float]
[[ml-detectors]]
@ -28,8 +28,8 @@ As part of the configuration information that is associated with a job,
detectors define the type of analysis that needs to be done. They also specify
which fields to analyze. You can have more than one detector in a job, which
is more efficient than running multiple jobs against the same data. For a list
of the properties associated with detectors,
see <<ml-detectorconfig, Detector Configuration Objects>>.
of the properties associated with detectors, see
{ref}/ml-job-resource.html#ml-detectorconfig[Detector Configuration Objects].
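As a hypothetical sketch, two detectors that share one job and one stream of data can be declared together in the `analysis_config`; the field names are placeholders:

[source,js]
--------------------------------------------------
"analysis_config": {
  "bucket_span": "10m",
  "detectors": [
    { "function": "mean", "field_name": "responsetime", "by_field_name": "airline" },
    { "function": "count", "by_field_name": "status" }
  ]
}
--------------------------------------------------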
[float]
[[ml-buckets]]

View File

@ -9,8 +9,8 @@
* <<security-api, Security APIs>>
* <<watcher-api, Watcher APIs>>
* <<graph-api, Graph APIs>>
* Machine Learning APIs
* <<ml-api-definitions, Definitions>>
* {ref}/ml-apis.html[Machine Learning APIs]
* {ref}/ml-api-definitions.html[Definitions]
--
[[info-api]]
@ -118,5 +118,3 @@ include::security.asciidoc[]
include::watcher.asciidoc[]
include::graph.asciidoc[]
//include::ml-api.asciidoc[]
//include::defs.asciidoc[]