[role="xpack"]
[testenv="platinum"]
[[put-dfanalytics]]
= Create {dfanalytics-jobs} API
[subs="attributes"]
++++
<titleabbrev>Create {dfanalytics-jobs}</titleabbrev>
++++

Instantiates a {dfanalytics-job}.

experimental[]

[[ml-put-dfanalytics-request]]
== {api-request-title}

`PUT _ml/data_frame/analytics/<data_frame_analytics_id>`


[[ml-put-dfanalytics-prereq]]
== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following built-in roles and privileges:

* `machine_learning_admin`
* source indices: `read`, `view_index_metadata`
* destination index: `read`, `create_index`, `manage`, and `index`

For more information, see <<built-in-roles>>, <<security-privileges>>, and
{ml-docs-setup-privileges}.

NOTE: The {dfanalytics-job} remembers which roles the user who created it had at
the time of creation. When you start the job, it performs the analysis using
those same roles. If you provide
<<http-clients-secondary-authorization,secondary authorization headers>>,
those credentials are used instead.

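For example, the index privileges listed above could be granted through a
custom role and assigned to a user together with `machine_learning_admin`. The
following is only a sketch; the role, user, password, and index names are
placeholders:

[source,console]
--------------------------------------------------
PUT _security/role/dfa_index_access
{
  "indices": [
    {
      "names": [ "my-source-index" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "my-dest-index" ],
      "privileges": [ "read", "create_index", "manage", "index" ]
    }
  ]
}

PUT _security/user/dfa_user
{
  "password" : "l0ng-r4nd0m-p@ssw0rd",
  "roles" : [ "machine_learning_admin", "dfa_index_access" ]
}
--------------------------------------------------
// TEST[skip:TBD]
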
[[ml-put-dfanalytics-desc]]
== {api-description-title}

This API creates a {dfanalytics-job} that performs an analysis on the source
indices and stores the outcome in a destination index.

If the destination index does not exist, it is created automatically when you
start the job. See <<start-dfanalytics>>.

If you supply only a subset of the {regression} or {classification} parameters,
{ml-docs}/hyperparameters.html[hyperparameter optimization] occurs. It
determines a value for each of the undefined parameters.

[[ml-put-dfanalytics-path-params]]
== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define]

[role="child_attributes"]
[[ml-put-dfanalytics-request-body]]
== {api-request-body-title}

`allow_lazy_start`::
(Optional, Boolean)
Specifies whether this job can start when there is insufficient {ml} node
capacity for it to be immediately assigned to a node. The default is `false`; if
a {ml} node with capacity to run the job cannot immediately be found, the
<<start-dfanalytics>> API returns an error. However, this is also subject to the
cluster-wide `xpack.ml.max_lazy_ml_nodes` setting. See <<advanced-ml-settings>>.
If this option is set to `true`, the API does not return an error and the job
waits in the `starting` state until sufficient {ml} node capacity is available.
A minimal request that combines this option with `max_num_threads` and
`model_memory_limit` is sketched at the end of this section.

//Begin analysis
`analysis`::
(Required, object)
The analysis configuration, which contains the information necessary to perform
one of the following types of analysis: {classification}, {oldetection}, or
{regression}.
+
.Properties of `analysis`
[%collapsible%open]
====
//Begin classification
`classification`:::
(Required^*^, object)
The configuration information necessary to perform
{ml-docs}/dfa-classification.html[{classification}].
+
TIP: Advanced parameters are for fine-tuning {classanalysis}. They are set
automatically by hyperparameter optimization to give the minimum validation
error. It is highly recommended to use the default values unless you fully
understand the function of these parameters.
+
.Properties of `classification`
[%collapsible%open]
=====
`class_assignment_objective`::::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=class-assignment-objective]

`dependent_variable`::::
(Required, string)
+
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dependent-variable]
+
The data type of the field must be numeric (`integer`, `short`, `long`, `byte`),
categorical (`ip` or `keyword`), or boolean. There must be no more than 30
different values in this field.

`eta`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta]

`feature_bag_fraction`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction]

`feature_processors`::::
(Optional, list)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors]

`gamma`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma]

`lambda`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda]

`max_trees`::::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees]

`num_top_classes`::::
(Optional, integer)
Defines the number of categories for which the predicted probabilities are
reported. It must be non-negative or -1. If it is -1 or greater than the total
number of categories, probabilities are reported for all categories; if you have
a large number of categories, there could be a significant effect on the size of
your destination index. Defaults to 2.
+
--
NOTE: To use the
{ml-docs}/ml-dfanalytics-evaluate.html#ml-dfanalytics-class-aucroc[AUC ROC evaluation method],
`num_top_classes` must be set to `-1` or a value greater than or equal to the
total number of categories.

--

`num_top_feature_importance_values`::::
(Optional, integer)
Advanced configuration option. Specifies the maximum number of
{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document to return.
By default, it is zero and no {feat-imp} calculation occurs.

`prediction_field_name`::::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=prediction-field-name]

`randomize_seed`::::
(Optional, long)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=randomize-seed]

`training_percent`::::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=training-percent]
//End classification
=====
//Begin outlier_detection
`outlier_detection`:::
(Required^*^, object)
The configuration information necessary to perform
{ml-docs}/dfa-outlier-detection.html[{oldetection}]:
+
.Properties of `outlier_detection`
[%collapsible%open]
=====
`compute_feature_influence`::::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=compute-feature-influence]

`feature_influence_threshold`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-influence-threshold]

`method`::::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=method]

`n_neighbors`::::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=n-neighbors]

`outlier_fraction`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=outlier-fraction]

`standardization_enabled`::::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=standardization-enabled]
//End outlier_detection
=====
//Begin regression
`regression`:::
(Required^*^, object)
The configuration information necessary to perform
{ml-docs}/dfa-regression.html[{regression}].
+
TIP: Advanced parameters are for fine-tuning {reganalysis}. They are set
automatically by hyperparameter optimization to give minimum validation error.
It is highly recommended to use the default values unless you fully understand
the function of these parameters.
+
.Properties of `regression`
[%collapsible%open]
=====
`dependent_variable`::::
(Required, string)
+
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dependent-variable]
+
The data type of the field must be numeric.

`eta`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta]

`feature_bag_fraction`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction]

`feature_processors`::::
(Optional, list)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors]

`gamma`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma]

`lambda`::::
(Optional, double)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda]

`loss_function`::::
(Optional, string)
The loss function used during {regression}. Available options are `mse` (mean
squared error), `msle` (mean squared logarithmic error), and `huber`
(Pseudo-Huber loss). Defaults to `mse`. Refer to
{ml-docs}/dfa-regression.html#dfa-regression-lossfunction[Loss functions for {regression} analyses]
to learn more.

`loss_function_parameter`::::
(Optional, double)
A positive number that is used as a parameter to the `loss_function`.

`max_trees`::::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees]

`num_top_feature_importance_values`::::
(Optional, integer)
Advanced configuration option. Specifies the maximum number of
{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document to return.
By default, it is zero and no {feat-imp} calculation occurs.

`prediction_field_name`::::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=prediction-field-name]

`randomize_seed`::::
(Optional, long)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=randomize-seed]

`training_percent`::::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=training-percent]
=====
//End regression
====
//End analysis

//Begin analyzed_fields
`analyzed_fields`::
(Optional, object)
Specify `includes` and/or `excludes` patterns to select which fields will be
included in the analysis. The patterns specified in `excludes` are applied last;
therefore, `excludes` takes precedence. In other words, if the same field is
specified in both `includes` and `excludes`, the field is not included in the
analysis.
+
--
[[dfa-supported-fields]]
The supported fields for each type of analysis are as follows:

* {oldetection-cap} requires numeric or boolean data to analyze. The algorithms
don't support missing values; therefore, fields that have data types other than
numeric or boolean are ignored. Documents where included fields contain missing
values, null values, or an array are also ignored. Therefore, the `dest` index
may contain documents that don't have an {olscore}.
* {regression-cap} supports fields that are numeric, `boolean`, `text`,
`keyword`, and `ip`. It is also tolerant of missing values. Supported fields are
included in the analysis; other fields are ignored. Documents where included
fields contain an array with two or more values are also ignored. Documents in
the `dest` index that don't contain a results field are not included in the
{reganalysis}.
* {classification-cap} supports fields that are numeric, `boolean`, `text`,
`keyword`, and `ip`. It is also tolerant of missing values. Supported fields are
included in the analysis; other fields are ignored. Documents where included
fields contain an array with two or more values are also ignored. Documents in
the `dest` index that don't contain a results field are not included in the
{classanalysis}. {classanalysis-cap} can be improved by mapping ordinal variable
values to a single number. For example, in the case of age ranges, you can model
the values as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.

If `analyzed_fields` is not set, only the relevant fields will be included. For
example, all the numeric fields for {oldetection}. For more information about
field selection, see <<explain-dfanalytics>>.
--
+
.Properties of `analyzed_fields`
[%collapsible%open]
====
`excludes`:::
(Optional, array)
An array of strings that defines the fields that will be excluded from the
analysis. You do not need to add fields with unsupported data types to
`excludes`; these fields are excluded from the analysis automatically.

`includes`:::
(Optional, array)
An array of strings that defines the fields that will be included in the
analysis.
//End analyzed_fields
====

`description`::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=description-dfa]

`dest`::
(Required, object)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dest]

`max_num_threads`::
(Optional, integer)
The maximum number of threads to be used by the analysis.
The default value is `1`. Using more threads may decrease the time
necessary to complete the analysis at the cost of using more CPU.
Note that the process may use additional threads for operational
functionality other than the analysis itself.

`model_memory_limit`::
(Optional, string)
The approximate maximum amount of memory resources that are permitted for
analytical processing. The default value for {dfanalytics-jobs} is `1gb`. If
your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit`
setting, an error occurs when you try to create {dfanalytics-jobs} that have
`model_memory_limit` values greater than that setting. For more information, see
<<ml-settings>>.

`source`::
(Required, object)
The configuration of how to source the analysis data. It requires an `index`.
Optionally, `query` and `_source` may be specified.
+
.Properties of `source`
[%collapsible%open]
====
`index`:::
(Required, string or array) Index or indices on which to perform the analysis.
It can be a single index or index pattern as well as an array of indices or
patterns.
+
WARNING: If your source indices contain documents with the same IDs, only the
document that is indexed last appears in the destination index.

`query`:::
(Optional, object) The {es} query domain-specific language (<<query-dsl,DSL>>).
This value corresponds to the query object in an {es} search POST body. All the
options that are supported by {es} can be used, as this object is passed
verbatim to {es}. By default, this property has the following value:
`{"match_all": {}}`.

`_source`:::
(Optional, object) Specify `includes` and/or `excludes` patterns to select which
fields will be present in the destination. Fields that are excluded cannot be
included in the analysis.
+
.Properties of `_source`
[%collapsible%open]
=====
`includes`::::
(array) An array of strings that defines the fields that will be included in the
destination.

`excludes`::::
(array) An array of strings that defines the fields that will be excluded from
the destination.
=====
====

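As a quick illustration of the scheduling and resource options above, a minimal
request might combine `allow_lazy_start`, `max_num_threads`, and
`model_memory_limit` as in the following sketch; the index names are
placeholders and the analysis is an empty {oldetection} configuration:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/resource-limited-job
{
  "source": {
    "index": "my-source-index"
  },
  "dest": {
    "index": "my-dest-index"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "allow_lazy_start": true,
  "max_num_threads": 2,
  "model_memory_limit": "200mb"
}
--------------------------------------------------
// TEST[skip:TBD]
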
[[ml-put-dfanalytics-example]]
== {api-examples-title}

[[ml-put-dfanalytics-example-preprocess]]
=== Preprocessing actions example

The following example shows how to limit the scope of the analysis to certain
fields, specify excluded fields in the destination index, and use a query to
filter your data before analysis.

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights" <1>
    ],
    "query": { <2>
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": { <3>
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": { <4>
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": { <5>
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}
--------------------------------------------------
// TEST[skip:setup kibana sample data]

<1> Source index to analyze.
<2> This query filters out entire documents that will not be present in the
destination index.
<3> The `_source` object defines fields in the dataset that will be included or
excluded in the destination index.
<4> Defines the destination index that contains the results of the analysis and
the fields of the source index specified in the `_source` object. Also defines
the name of the `results_field`.
<5> Specifies fields to be included in or excluded from the analysis. This does
not affect whether the fields will be present in the destination index; it only
affects whether they are used in the analysis.

In this example, we can see that all the fields of the source index are included
in the destination index except `FlightDelay` and `FlightDelayType` because
these are defined as excluded fields by the `excludes` parameter of the
`_source` object. The `FlightNum` field is included in the destination index;
however, it is not included in the analysis because it is explicitly specified
as an excluded field by the `excludes` parameter of the `analyzed_fields` object.

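To preview which fields a configuration like this would select before the job
is created, the same configuration can be passed to the <<explain-dfanalytics>>
API. The following is a sketch that reuses the source, analysis, and analyzed
fields from the example above:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/_explain
{
  "source": {
    "index": "kibana_sample_data_flights"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin"
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  }
}
--------------------------------------------------
// TEST[skip:setup kibana sample data]
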
[[ml-put-dfanalytics-example-od]]
=== {oldetection-cap} example

The following example creates the `loganalytics` {dfanalytics-job}; the
analysis type is `outlier_detection`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}
--------------------------------------------------
// TEST[setup:setup_logdata]

The API returns the following result:

[source,console-result]
----
{
  "id": "loganalytics",
  "description": "Outlier detection on log data",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "create_time" : 1562265491319,
  "version" : "7.6.0",
  "allow_lazy_start" : false,
  "max_num_threads": 1
}
----
// TESTRESPONSE[s/1562265491319/$body.$_path/]
// TESTRESPONSE[s/"version" : "7.6.0"/"version" : $body.version/]

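Other {oldetection} parameters, such as `n_neighbors` and
`feature_influence_threshold`, are set in the same `outlier_detection` object.
The following is a sketch that reuses the `logdata` index from the example
above; the parameter values are illustrative only:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics-knn
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_knn_out"
  },
  "analysis": {
    "outlier_detection": {
      "n_neighbors": 5,
      "feature_influence_threshold": 0.1
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]
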
[[ml-put-dfanalytics-example-r]]
=== {regression-cap} examples

The following example creates the `house_price_regression_analysis`
{dfanalytics-job}; the analysis type is `regression`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following result:

[source,console-result]
----
{
  "id" : "house_price_regression_analysis",
  "source" : {
    "index" : [
      "houses_sold_last_10_yrs"
    ],
    "query" : {
      "match_all" : { }
    }
  },
  "dest" : {
    "index" : "house_price_predictions",
    "results_field" : "ml"
  },
  "analysis" : {
    "regression" : {
      "dependent_variable" : "price",
      "training_percent" : 100
    }
  },
  "model_memory_limit" : "1gb",
  "create_time" : 1567168659127,
  "version" : "8.0.0",
  "allow_lazy_start" : false
}
----
// TESTRESPONSE[s/1567168659127/$body.$_path/]
// TESTRESPONSE[s/"version" : "8.0.0"/"version" : $body.version/]

The following example creates a job and specifies a training percent:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70, <1>
      "randomize_seed": 19673948271 <2>
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

<1> The percentage of the data set that is used for training the model.
<2> The seed that is used to randomly pick which data is used for training.

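The {regression}-specific `loss_function` and `loss_function_parameter` options
described in the request body are set in the same `regression` object. The
following sketch switches the earlier house price example to the Pseudo-Huber
loss; the parameter value and the destination index name are illustrative only:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_huber
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions_huber"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "loss_function": "huber",
      "loss_function_parameter": 1.0
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]
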
The following example uses custom feature processors to transform the
categorical values for `DestWeather` into numerical values using one-hot,
target-mean, and frequency encoding techniques:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/flight_prices
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ]
  },
  "dest": {
    "index": "kibana_sample_flight_prices"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "AvgTicketPrice",
      "num_top_feature_importance_values": 2,
      "feature_processors": [
        {
          "frequency_encoding": {
            "field": "DestWeather",
            "feature_name": "DestWeather_frequency",
            "frequency_map": {
              "Rain": 0.14604811155570188,
              "Heavy Fog": 0.14604811155570188,
              "Thunder & Lightning": 0.14604811155570188,
              "Cloudy": 0.14604811155570188,
              "Damaging Wind": 0.14604811155570188,
              "Hail": 0.14604811155570188,
              "Sunny": 0.14604811155570188,
              "Clear": 0.14604811155570188
            }
          }
        },
        {
          "target_mean_encoding": {
            "field": "DestWeather",
            "feature_name": "DestWeather_targetmean",
            "target_map": {
              "Rain": 626.5588814585794,
              "Heavy Fog": 626.5588814585794,
              "Thunder & Lightning": 626.5588814585794,
              "Hail": 626.5588814585794,
              "Damaging Wind": 626.5588814585794,
              "Cloudy": 626.5588814585794,
              "Clear": 626.5588814585794,
              "Sunny": 626.5588814585794
            },
            "default_value": 624.0249512020454
          }
        },
        {
          "one_hot_encoding": {
            "field": "DestWeather",
            "hot_map": {
              "Rain": "DestWeather_Rain",
              "Heavy Fog": "DestWeather_Heavy Fog",
              "Thunder & Lightning": "DestWeather_Thunder & Lightning",
              "Cloudy": "DestWeather_Cloudy",
              "Damaging Wind": "DestWeather_Damaging Wind",
              "Hail": "DestWeather_Hail",
              "Clear": "DestWeather_Clear",
              "Sunny": "DestWeather_Sunny"
            }
          }
        }
      ]
    }
  },
  "analyzed_fields": {
    "includes": [
      "AvgTicketPrice",
      "Cancelled",
      "DestWeather",
      "FlightDelayMin",
      "DistanceMiles"
    ]
  },
  "model_memory_limit": "30mb"
}
--------------------------------------------------
// TEST[skip:TBD]

NOTE: These custom feature processors are optional; automatic
{ml-docs}/ml-feature-encoding.html[feature encoding] still occurs for all
categorical features.

[[ml-put-dfanalytics-example-c]]
=== {classification-cap} example

The following example creates the `loan_classification` {dfanalytics-job}; the
analysis type is `classification`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loan_classification
{
  "source" : {
    "index": "loan-applicants"
  },
  "dest" : {
    "index": "loan-applicants-classified"
  },
  "analysis" : {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]
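
As noted earlier, the destination index is created when the job starts rather
than when the job is created. A job defined as above can then be started with
the <<start-dfanalytics>> API, for example:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loan_classification/_start
--------------------------------------------------
// TEST[skip:TBD]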